Edge Computing with Akamai

Hello.

At Akamai, we've been operating our intelligent edge platform for over 20 years.

With edge servers in more than 4,100 locations across 135+ countries, it remains the largest distributed global network platform, reducing last-mile latency across the internet.

Over the past two decades, computing at the edge has been used for our website performance and security features.

These include application load balancing, automatic image optimization, API gateway, application security protection, bot mitigation, and much more.

All of these have been built on the Akamai platform using code written by Akamai.

But over the past few years, we've seen a large increase in interest from developers in running their own code at the edge.

To open our platform up to developers, we've deployed Akamai EdgeWorkers across our network.

EdgeWorkers is our functions-as-a-service platform, built on the lightweight V8 JavaScript engine, which minimizes execution and cold-start overhead.

With EdgeWorkers, code executes on the same servers that deliver content, eliminating latency that would otherwise slow the user experience.

With no per-server or per-region limits on code execution, EdgeWorkers code will scale nearly infinitely across our platform.

If your code fits within our per-request limits while you're developing and testing, you can be confident it will scale out to handle your production load and beyond.
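As a quick illustration of the programming model, here is a minimal sketch of an EdgeWorkers handler (not part of the A/B example discussed later) that answers a request directly at the edge:

// main.js: a minimal EdgeWorkers event handler (illustrative sketch)
export function onClientRequest(request) {
	// Respond directly from the edge, without contacting an origin server
	request.respondWith(200, { 'Content-Type': ['text/plain'] }, 'Hello from the edge');
}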

But code doesn't live in a vacuum.

Code typically needs to operate on data.

In some cases, this data comes from the HTTP request or response, but often we need a data store.

The data can be stored in the cloud and retrieved through a RESTful service call, allowing you to use your existing datastore.

Or you can store your data in Akamai EdgeKV, a distributed key-value database at the edge.

Optimized for data that is frequently accessed from EdgeWorkers code, EdgeKV replicates the data to the same edge servers where your code is executing and your content is delivered from.

This saves the latency of making round trips back to the datacenter.
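As an illustration of the first option, retrieving data from an existing cloud datastore over a RESTful call, a minimal sketch using the EdgeWorkers http-request module might look like this (the endpoint URL is hypothetical):

import { httpRequest } from 'http-request';

export async function onClientRequest(request) {
	// Hypothetical endpoint backed by an existing cloud datastore
	const configResponse = await httpRequest('https://api.example.com/ab-config');
	const abConfig = await configResponse.json();
	// ... use abConfig to make a decision at the edge ...
}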

Let's look at an example of what can be achieved with Akamai's Edge Computing.

Imagine you have a user in Europe and a Web server in the US.

The user's request must cross the ocean, adding significant latency.

By caching this response at a local edge server, we can improve performance for subsequent requests to the same page.

Caching content close to the user is one of the easiest ways to improve performance of static content.

However, with server-side A/B testing, one user may receive a different content variant from the server than another user receives at the exact same URL.

Because the response may be different for the same page, this breaks the ability to apply basic local caching.

Each user must request content from the server as the server makes the decision on which variant to display.

By moving the A/B decision away from the server and onto the edge, we can cache both variants close to the user and decide which variant to serve, all without having to make the long round trip to the datacenter.

To implement this, we can use EdgeWorkers to execute the A/B testing logic, retrieving data about the test configuration from EdgeKV.

Let's take a quick look at the code.

First, we import a few helper libraries.

Next, we initialize the EdgeKV library, providing the namespace and group where our data is stored.

When a client requests the page, we execute the onClientRequest function.

In this function, we retrieve the A/B test configuration data, which is stored in EdgeKV.

We read the request cookies to determine if a user has already been assigned to a variant.

If the user has not been assigned to a variant, we select a weighted random variant based on the configuration data that was retrieved from EdgeKV.

The small function at the right is just a helper function to implement the weighted random selection algorithm.

We then add a request header.

If content is not yet in cache, we have to go forward to the application server to request the content.

The application server can use this request header to determine which variant to respond with.

A request variable is then set to the specific variant, and this variable is added to the cache key.

These lines of code allow us to vary the response from the Web server while still caching each variant individually.

Finally, we log the selected variant to aid in debugging.

Then, just before the response is sent back to the client, the onClientResponse function is executed.

This function stores the A/B decision in a cookie, so the user receives the same experience throughout their session.

With only a few lines of JavaScript we've implemented a basic A/B testing engine, allowing us to execute A/B testing at the edge, delivering cacheable pages with very low latency, regardless of where the user is located.

If we wish to update the A/B configuration data, we can simply update a single EdgeKV object via the API or the command-line interface.

The updated configuration will be propagated globally within 10 seconds.

So now let's look at the results.

When performing the A/B testing logic at the datacenter, we see a time to first byte of over 800 milliseconds.

This is caused by the network latency from the user to the datacenter plus application processing time.

By moving the A/B testing logic to the edge and delivering content from cache, we were able to reduce the time to first byte to about 35 milliseconds.

Reducing the time to first byte of the HTML will generally shift the entire page waterfall to the left, improving other performance metrics as well.

For more information about EdgeWorkers, you can go to developer.akamai.com/edgeworkers.

There you can see additional examples, review our documentation, and join our Slack workspace.

Thank you.

Serverless Computing at the Edge

Josh Johnson

Sr. Enterprise Architect

Akamai Intelligent Edge

  • 4,100+ Locations
  • 135+ Countries

EdgeWorkers

V8 JavaScript engine deployed across the Edge

Scale nearly infinitely

No per-server or per-region limits!

Code needs data

HTTP Request/Response
RESTful API calls

EdgeKV brings data to the Edge

Distributed Key-Value store, accessible from EdgeWorkers

Example

Cacheable A/B testing

Long trip to data center

Short trip to edge cache

The code

Shows the code for the whole A/B test on screen; we focus on the specific pieces of code that Josh refers to below

The code

import { Cookies, SetCookie } from 'cookies';
import { logger } from 'log';
import { EdgeKV } from './edgekv.js';

The code

const edgeKv_abpath = new EdgeKV( {namespace: "default", group: "ab-data"});

The code

export async function onClientRequest(request) {
	// Retrieve the A/B test configuration stored in EdgeKV
	let abConfig = await edgeKv_abpath.getJson({ item: "ab-config" });

	// Check whether this user has already been assigned a variant
	let cookies = new Cookies(request.getHeader('Cookie'));
	let abVariant = cookies.get('ab-variant');

	if (!abVariant) {
		// No assignment yet: pick a weighted random variant from the configuration
		logger.log('choosing random variant');
		abVariant = getRandomVariant(abConfig);
	}

	// Tell the application server which variant to respond with
	request.addHeader(abConfig.forwardHeaderName, abConfig.variants[abVariant]);

	// Add the variant to the cache key so each variant is cached separately
	request.setVariable('PMUSER_AB_VARIANT', abVariant);
	request.cacheKey.includeVariable('PMUSER_AB_VARIANT');

	// Log the selected variant to aid debugging
	logger.log('Variant: %s', abVariant);
}

The code

let abConfig = await edgeKv_abpath.getJson({ item: "ab-config"});

The code

let cookies = new Cookies(request.getHeader('Cookie'));
let abVariant = cookies.get('ab-variant');

The code

if (!abVariant) {
	logger.log('choosing random variant');
	abVariant = getRandomVariant(abConfig);
}

Arrow points at function definition for getRandomVariant
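The getRandomVariant helper itself is not reproduced on these slides. A minimal sketch of a weighted random selection, assuming the EdgeKV configuration stores a numeric weight per variant in a hypothetical abConfig.weights map, could look like this:

// Illustrative sketch only: assumes abConfig.weights maps each variant id
// to a numeric weight, for example { "A": 90, "B": 10 }
function getRandomVariant(abConfig) {
	let totalWeight = 0;
	for (let variant in abConfig.weights) {
		totalWeight += abConfig.weights[variant];
	}
	// Pick a random point in [0, totalWeight) and find the variant whose band contains it
	let threshold = Math.random() * totalWeight;
	for (let variant in abConfig.weights) {
		threshold -= abConfig.weights[variant];
		if (threshold < 0) {
			return variant;
		}
	}
	// Fallback for floating point edge cases
	return Object.keys(abConfig.weights)[0];
}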

The code

request.addHeader(abConfig.forwardHeaderName, abConfig.variants[abVariant]);

The code

request.setVariable('PMUSER_AB_VARIANT', abVariant);
request.cacheKey.includeVariable('PMUSER_AB_VARIANT');

The code

logger.log('Variant: %s', abVariant);

The code

export function onClientResponse(request, response) {
	// Read back the variant chosen in onClientRequest
	let variantId = request.getVariable('PMUSER_AB_VARIANT');
	if (variantId) {
		// Store the A/B decision in a cookie that expires in 7 days,
		// so the user keeps the same experience throughout their session
		let expDate = new Date();
		expDate.setDate(expDate.getDate() + 7);
		let setBucketCookie = new SetCookie({ name: "ab-variant", value: variantId, expires: expDate });
		response.addHeader('Set-Cookie', setBucketCookie.toHeader());
	}
}

The code

Shows all the code again

The results

  • A/B logic at the Datacenter shows Waiting Time to First Byte (TTFB) as 868.69ms
  • A/B logic at the Edge shows Waiting Time to First Byte (TTFB) as 35.41ms

Thank you

https://developer.akamai.com/edgeworkers