I’ve been trying to come up with a good explanation of what exactly “edge compute” is and for reasons I don’t need to justify to you, I’ve landed on the analogy: it’s like selling knitted hats for dogs.
Why knitted dog hats? Because they’re hilarious!

And they make an OK analogy, but before we get there, let’s define each part of “edge compute”.
We’ll start with the latter.
(Note that “edge compute” is also sometimes referred to as “edge functions” or “edge workers”)
What is “compute”?
Compute is what happens any time you ask a machine to do something for you. For example, when you ask a calculator for the product of 5 x 7 (and while you question what all those years in math class were good for), the calculator will do some beeps and boops and respond with 35.
That calculator is a computer and those beeps and boops are the time and processing energy it needs to calculate the result; also known as “compute”.
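If you want to see that same compute happen in code, here's about the smallest example I can think of (JavaScript here, but any language would do):

```js
// Ask the machine to do some work (the "beeps and boops")
const product = 5 * 7;
console.log(product); // 35
```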
In the context of web development, compute can be used to generate several different types of products: HTML, JSON, machine learning data models, selfies of you and your friends with filters making you look like cute anime characters, etc.
For the sake of simplicity, I’ll mostly focus on generating HTML.
And for the sake of our analogy, we can think of “compute” as the time and energy it takes to knit a hat for a dog.
Hence the dog hats.
Where does “compute” take place?
Here is where things get a little more complicated. Some folks may tell you there are two places where compute can occur: on the server or in the browser (on a user’s computer).
While that’s not wrong, it’s a bit oversimplified these days because both options can be broken into smaller categories with distinctly different characteristics.
To handle that nuance I want to cover this in 4 parts:
- Traditional Servers
- Clients (Browsers)
- Static-Site-Generators
- Cloud Functions
Feel free to skip these sections if you’re already familiar, but you’ll be missing out on my whole analogy thing.
Traditional Servers
With a traditional server, a computer runs server software you've selected, which executes code you wrote to return HTML whenever a request comes in. Using a server to generate HTML is commonly referred to as Server-Side-Rendering (SSR).
The computer may be a local (or “on-premise”) machine that you own and house in your building, but it’s also very common to run servers in the “cloud”, which basically means renting a computer that someone else owns and houses in their building.
These servers run 24×7 (ideally) and are ready to receive traffic at any time. You can also set up separate long-running tasks or scheduled tasks with a cron job.
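To make that concrete, here’s a minimal sketch of SSR using Node’s built-in http module. The greeting and port number are just placeholders for illustration:

```js
// A minimal server-side rendering sketch using Node's built-in http module
const http = require("http");

const server = http.createServer((request, response) => {
  // The "compute": build the HTML on demand, once per request
  const html = `<html><body>
    <h1>Hello from the server!</h1>
    <p>Generated at ${new Date().toISOString()}</p>
  </body></html>`;

  response.writeHead(200, { "Content-Type": "text/html" });
  response.end(html);
});

// The server sits here running 24x7, waiting for traffic
server.listen(3000);
```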
This is handy, but there are some downsides:
- You pay for the server even when it’s just sitting there.
- High traffic could exhaust the server’s resources (memory/CPU) and cause it to crash.
- Scaling up or down requires planning and weighing performance against cost.
- Users who are far away from your server experience higher latency (slower responses).
One last point I want to highlight in particular is that when you use traditional servers, you are responsible for the business logic code, the server software, and the state of the machine itself. This can be a good thing because you have all the flexibility and control to do whatever you want with it, but it comes at a cost: security, upgrades, and maintenance are all on you to take care of.

Servers are like a commercial workspace 🏭
For our analogy, we can think of servers kind of like the building where we make dog hats. We might rent the space, or purchase it flat-out, but we have a physical place where folks can come and request hats for their dogs.
It’s a beautiful office with exposed brick and lots of natural light. We can paint it how we want and modify it as needed. But there are some downsides.
Some people have to travel a long way to get to our building. We also have to pay the bills (rent, electricity, internet) regardless of how many dog hats we sell (I know we’re going to sell, like, a bajillion, but still). And when someone brings their dog by to get a new hat and the dog poops on the grass on the way out, guess who’s going to have to clean it up.
Clients
When we say the word “client” most folks think of a customer. For example, “I’m going to have a billion clients when this dog hat business takes off.” In the case of web development, a “client” is the user’s browser.
After a user requests our website, we can instruct the browser to download some JavaScript, and when this JavaScript executes it can inject some HTML onto the page. In fact, we can even use JavaScript to create the entire application.
This is commonly referred to as Client-Side-Rendering (CSR).
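As a stripped-down sketch, assuming the server sends a nearly empty page with a `<div id="app">` placeholder in it, the browser-side compute looks something like this:

```js
// Runs in the user's browser: JavaScript builds the page's HTML
// (assumes the page contains an empty <div id="app"> placeholder)
const html = "<h1>Dog Hats R Us</h1><p>Hand-knitted. Very fancy.</p>";
document.querySelector("#app").innerHTML = html;
```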
Generating HTML on the client side is great because it can create more dynamic interactions that feel faster because you don’t need to wait for pages to reload.
We can even utilize tools like Service Workers or WebAssembly to make that compute feel less impactful.
Moving compute to the client also means that we can do less work on our own servers, which could ultimately save us some money. But that compute still has to happen, and the cost falls on the user.
Here’s how I see the downsides:
- Users must download more data (JavaScript).
- We can’t include secrets like API keys because the source code is accessible.
- Performance is greatly impacted by the user’s device.
- What we can do depends on the capabilities of the user’s device and browser.
For these reasons, as well as Search Engine Optimization, Accessibility, and others, I think we’re seeing more of the industry move away from client-side rendering.

Client-side rendering is like DIY sewing kits 🧶💉
To drive the idea home, client-side rendering is a lot like giving customers a DIY sewing kit. We can provide them with all the instructions and materials to make their own dog hats, but the work needs to be done by them. And although this can save us some time and energy, it comes at a cost to the customer.
It can be a good fit for some folks, but is not right for everyone.
Static-Site-Generators
Static-Site-Generators (SSG) are interesting because instead of building a web page on demand as requests come in, they pre-build all the pages of a website ahead of time. The result is a collection of static folders and files (HTML, CSS, JavaScript) representing the website.
Once you have all the static files for the website, you can deploy them to any host you like.
This approach technically falls into the SSR bucket because you are not using a browser to do the compute. You are using some programming language to build the pages ahead of time on a computer you control (your laptop, a build pipeline, etc).
Technically, the end result isn’t much different than if you were to write all those HTML pages by hand, but using an SSG is probably faster and easier to work with in the end.
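As a toy sketch of the idea (not any particular SSG tool), a build script might loop over your content and write out one HTML file per page, all before the site is ever deployed:

```js
// build.js: a toy static-site generator, run once at build time
const fs = require("fs");

// In a real SSG this content might come from markdown files or a CMS
const pages = [
  { slug: "index", title: "Dog Hats R Us" },
  { slug: "about", title: "About Our Hats" },
];

fs.mkdirSync("dist", { recursive: true });

for (const page of pages) {
  // All the compute happens now, not while a user waits
  const html = `<html><body><h1>${page.title}</h1></body></html>`;
  fs.writeFileSync(`dist/${page.slug}.html`, html);
}
```

Deploying is then just a matter of uploading the `dist` folder to any host you like.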
There are a few advantages to using SSGs. By generating the HTML ahead of time, you are removing that compute time from the user’s request. This can speed up response times because they only need to wait for the server to respond with the static HTML file. No time is spent building it, and that can be significant.
Since you’re only dealing with static files that don’t change with every request, SSGs also make a great pairing with Content Delivery Networks (CDNs). I’ll cover those more in a moment, but the result is even faster responses because you can remove most of the latency.
Static websites are also very, very easy to host. Because you’re only serving static files and there’s no need for compute, even a server with very limited resources can handle tons of traffic without a problem. This also makes them very cheap to host. In fact, there are plenty of services available that will let you host a static site for free.
The last big benefit I’ll point out is that when dealing with static sites, there is no need to deal with runtime scripting languages or databases. This makes them incredibly secure. You can’t really hack a static web page, so unless you’re literally sharing private information publicly, you shouldn’t have much to worry about.
Now, this all might sound great, but it comes with some significant downsides. Primarily, static HTML cannot have dynamic content (unless you use client-side compute). For some sites where content doesn’t change often (blogs, brochure sites, documentation), this is fine. This lack of dynamic data also means that the experience cannot be personalized for each user.
While you can add dynamic content to a static site with JavaScript, it introduces added complexity and inherent downsides (see CSR above).
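For example, a pre-built page could fetch fresh data after it loads. This is just a sketch: the `/api/hat-of-the-day` endpoint and the `#featured-hat` element are made up for illustration:

```js
// Runs in the browser on an otherwise static page
// (the endpoint and element are hypothetical)
fetch("/api/hat-of-the-day")
  .then((response) => response.json())
  .then((hat) => {
    // Swap the dynamic bit into the pre-built HTML
    document.querySelector("#featured-hat").textContent = hat.name;
  });
```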
One other fault of SSGs is that it takes time to build each page. If you have tens or hundreds of thousands of pages to generate, this can take a long time. And when you publish new content or change existing content, you may need to rebuild everything. This could be a non-starter.
