A few weeks ago I was at Heathrow airport getting a bit of work done before a flight, and I noticed something odd about the performance of GitHub: It was quicker to open links in a new window than simply click them. Here’s a video I took at the time:
Here I click a link, then paste the same link into a fresh tab. The page in the fresh tab renders way sooner, even though it’s started later.
Show them what you got
When you load a page, the browser takes a network stream and pipes it to the HTML parser, and the HTML parser is piped to the document. This means the page can render progressively as it’s downloading. The page may be 100k, but it can render useful content after only 20k is received.
This is a great, ancient browser feature, but as developers we often engineer it away. Most load-time performance advice boils down to “show them what you got” – don’t hold back, don’t wait until you have everything before showing the user anything.
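You can watch this chunked arrival yourself. Here's a minimal sketch (mine, not from any demo mentioned here) that reads a response with fetch() chunk by chunk; the URL is a placeholder, and the point is that each chunk is usable the moment it arrives:

// A sketch showing that response bodies arrive in chunks.
// '/article.html' is a placeholder URL; any streamed HTML response will do.
async function logChunks() {
  const response = await fetch('/article.html');
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let received = 0;

  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    received += value.byteLength;
    // A streaming parser could render this chunk right now,
    // long before the download completes:
    console.log(`chunk: ${value.byteLength} bytes (${received} total)`);
    console.log(decoder.decode(value, { stream: true }).slice(0, 60));
  }
}
logChunks();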
GitHub cares about performance, so they server-render their pages. However, when navigating within the same tab, navigation is reimplemented entirely in JavaScript. Something like…
// Fetch the new content, and swap it in only once it has fully downloaded:
const response = await fetch('page-data.inc');
const html = await response.text();
document.querySelector('.content').innerHTML = html;
This breaks the rule, as all of page-data.inc is downloaded before anything is done with it. The server-rendered version doesn't hoard content this way: it streams, making it faster. For GitHub's client-side render, a lot of JavaScript was written to make this slow.
I’m just using GitHub as an example here – this anti-pattern is used by almost every single-page-app.
Switching content in the page can have some benefits, especially if you have some heavy scripts, as you can update content without re-evaluating all that JS. But can we do that without losing streaming? I’ve often said that JavaScript has no access to the streaming parser, but it kinda does…
Using iframes and document.write to improve performance
The worst hacks involve <iframe>s, and this one uses <iframe>s and document.write(), but it does allow you to stream content to the page. It goes like this:
// Create a hidden iframe & add it to the document:
const iframe = document.createElement('iframe');
iframe.style.display = 'none';
document.body.appendChild(iframe);

// Wait for the iframe to be ready:
iframe.onload = () => {
  // Ignore further load events:
  iframe.onload = null;
  // Write a dummy tag:
  iframe.contentDocument.write('<streaming-element>');
  // Get a reference to that element:
  const streamingElement = iframe.contentDocument.querySelector(
    'streaming-element',
  );
  // Pull it out of the iframe & into the parent document:
  document.body.appendChild(streamingElement);
  // Write some more content (this could be done async, as data arrives):
  iframe.contentDocument.write('<p>Hello!</p>');
  // Keep writing content like above, then when done:
  iframe.contentDocument.write('</streaming-element>');
  iframe.contentDocument.close();
};

// Kick off the iframe load:
iframe.src = '';
Although <p>Hello!</p> is written to the iframe, it appears in the parent document! This is because the parser maintains a stack of open elements, which newly created elements are inserted into. It doesn't matter that we moved <streaming-element>, it just works.
Also, this technique processes HTML much closer to the standard page-loading parser than innerHTML does. Notably, scripts will download and execute in the context of the parent document, except in Firefox, where the script doesn't execute at all, which I thought was a bug. Update: it turns out scripts shouldn't be executed (thanks to Simon Pieters for pointing this out), but Edge, Safari & Chrome all do.
Now we just have to stream HTML content from the server and call iframe.contentDocument.write() as each part arrives. Streaming is really efficient with fetch(), but for the sake of Safari support we'll hack it with XHR.
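Here's a rough sketch of that XHR approach (my own simplification; 'page-data.inc' is a stand-in for whatever partial-HTML endpoint you're fetching). As progress events fire, responseText grows, and we write only the newly arrived part into the iframe's document:

// Assumes the iframe has been set up as above, with '<streaming-element>'
// already written and moved into the parent document.
function streamInto(iframe, url) {
  const xhr = new XMLHttpRequest();
  let written = 0;

  // 'progress' fires as data arrives; responseText grows incrementally:
  xhr.onprogress = () => {
    const newData = xhr.responseText.slice(written);
    written = xhr.responseText.length;
    iframe.contentDocument.write(newData);
  };

  xhr.onload = () => {
    // Write anything that arrived after the last progress event, then close:
    iframe.contentDocument.write(xhr.responseText.slice(written));
    iframe.contentDocument.write('</streaming-element>');
    iframe.contentDocument.close();
  };

  xhr.open('GET', url);
  xhr.send();
}

streamInto(iframe, 'page-data.inc');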
I’ve built a little demo where you can see this in action.