A couple of weeks ago Cloudflare, one of our competitors, published a blog post in which they claimed that their edge compute platform is roughly three times as fast as Compute@Edge. This nonsensical conclusion provides a great example of how statistics can be used to mislead. Read on for an analysis of Cloudflare’s testing methodology, and the results of a more scientific and useful comparison.
It has often been said that there are three kinds of untruths: lies, damned lies, and statistics. This is perhaps unfair: Some statistics are pretty sound. But these are not:
Where to begin? Citing Catchpoint like this makes this claim sound like an independent study from a third party. It’s not. Catchpoint allows you to configure their tools for your needs, meaning you could use them to create a test based on a fair and rigorous benchmark standard, or you could use them to tell the story you want to tell.
OK, so what’s wrong with their tests?
The design and execution of Cloudflare’s tests were flawed in several ways:
- Cloudflare used a curated selection of Catchpoint nodes in the tests. There’s no explanation of why this specific set of nodes was chosen, but it’s worth noting that Cloudflare’s infrastructure is not in exactly the same places as ours, and a biased choice of test locations affects the result dramatically.
- Their tests compare JavaScript running on Cloudflare Workers, a mature, generally available product, with JavaScript running on Compute@Edge. Although the Compute@Edge platform is now available for all in production, support for JavaScript on Compute@Edge is a beta product. We clearly identify in our documentation that beta products are not ready for production use. A fairer test on this point would have compared Rust on Compute@Edge with JavaScript on Cloudflare Workers, which are at more comparable stages of the product lifecycle.
- Cloudflare used a free Fastly trial account to conduct their tests. Free trial accounts are designed for limited use compared to paid accounts, and performance under load is not comparable between the two.
- Cloudflare conducted their tests in a single hour, on a single day. This fails to normalize for daily traffic patterns or abnormal events and is susceptible to random distortion effects. If you ran several sets of tests at different times of day, it’s likely that at some point you’d achieve your desired outcome.
- The blog post states that the test code “executed a function that simply returns the current time” but then goes on to show a code sample that returns a copy of the headers from the inbound request. One of these must be wrong; the two variants exercise different code paths (see the sketch after this list). It’s impossible to objectively evaluate or reproduce a result when the test methodology is not clearly explained.
- Solely evaluating time-to-first-byte (TTFB) using tests that involve almost no computational load, no significantly sized payloads, and no platform APIs does not measure the performance of Compute@Edge in any meaningful way.
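To make the “current time” versus “echo the headers” discrepancy concrete, here is a minimal, hypothetical sketch of what each variant might look like as a Workers-style fetch handler. This is not Cloudflare’s published benchmark code; it simply illustrates that the two descriptions refer to different functions doing different work.

```javascript
// Hypothetical handlers illustrating the two tests described in
// Cloudflare's post; neither is their actual benchmark code.

// Variant A: a function that "simply returns the current time"
function returnCurrentTime() {
  return new Response(new Date().toISOString());
}

// Variant B: a function that returns a copy of the inbound request headers
function echoRequestHeaders(request) {
  const headers = Object.fromEntries(request.headers);
  return new Response(JSON.stringify(headers), {
    headers: { "content-type": "application/json" },
  });
}

addEventListener("fetch", (event) => {
  // Swap in returnCurrentTime() to run the other variant.
  event.respondWith(echoRequestHeaders(event.request));
});
```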
This is bad science. So why are we drawing attention to it, and since we are choosing to do so, what is the performance of Compute@Edge really like?
Surprise! Compute@Edge is faster than Cloudflare Workers
To be clear, we can’t