Modern displays refresh at anywhere from 60 to 120 frames per second, which means an application has as little as 8.33ms per frame to push pixels to the screen. This includes updating the application state, laying out UI elements, and finally writing data into the frame buffer.
It’s a tight deadline, and if you’ve ever built an application with Electron, it’s a deadline that may feel impossible to consistently meet. Working on Atom, this is exactly how we felt: no matter how hard we tried, there was always something in the way of delivering frames on time. A random pause due to garbage collection and we missed a frame. An expensive DOM relayout and we missed another frame. The frame rate was never consistent, and many of the causes were beyond our control.
Yet while we struggled to micro-optimize Atom’s rendering pipeline, which consisted of simple boxes and glyphs, we stared in awe at computer games rendering beautiful, complex geometry at a constant rate of 120 frames per second. How could rendering a few <div>s be so much slower than drawing a three-dimensional, photorealistic character?
When we set out to build Zed, we were determined to create a code editor so responsive it almost disappeared. Inspired by the gaming world, we realized that the only way to achieve the performance we needed was to build our own UI framework: GPUI.
GPUI: Rendering
When we started building Zed, arbitrary 2D graphics rendering on the GPU was still very much a research project. We experimented with Patrick Walton’s Pathfinder crate, but it wasn’t fast enough to achieve our performance goals.
So we took a step back, and reconsidered the problem we were trying to solve. While a library capable of rendering arbitrary graphics may have been nice, the truth was that we didn’t really need it for Zed. In practice, most 2D graphical interfaces break down into a few basic elements: rectangles, shadows, text, icons, and images.
Instead of worrying about a general purpose graphics library, we decided to focus on writing a custom shader for each specific graphical primitive we knew we’d need to render Zed’s UI. By describing the properties of each primitive in a data-driven way on the CPU, we could delegate all of the heavy-lifting to the GPU where UI elements could be drawn in parallel.
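To make the data-driven idea concrete, here is a sketch of what a CPU-side primitive description might look like. The field names mirror those used in the shader code later in this post, but the exact struct name and layout are illustrative assumptions, not GPUI's actual definitions:

```rust
// Hypothetical CPU-side description of one rectangle primitive.
// `#[repr(C)]` keeps the memory layout predictable so an array of
// these instances can be copied into a GPU buffer verbatim.
#[repr(C)]
#[derive(Clone, Copy, Debug)]
struct RectInstance {
    origin: [f32; 2],           // top-left corner, in pixels
    size: [f32; 2],             // width and height, in pixels
    background_color: [f32; 4], // RGBA
    corner_radius: f32,
}

fn main() {
    // The UI layer pushes one instance per rectangle; the GPU then
    // draws all of them in a single instanced draw call.
    let instances = vec![RectInstance {
        origin: [10.0, 10.0],
        size: [200.0, 50.0],
        background_color: [0.2, 0.4, 0.9, 1.0],
        corner_radius: 4.0,
    }];
    println!(
        "{} instance(s), {} bytes each",
        instances.len(),
        std::mem::size_of::<RectInstance>()
    );
}
```

Because every instance is plain data, the entire UI for a frame reduces to a handful of buffers uploaded to the GPU, one per primitive type.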
In the following sections, I am going to illustrate the techniques used in GPUI to draw each primitive.
Drawing rectangles
The humble rectangle is a fundamental building block of graphical UIs.
To understand how drawing rectangles works in GPUI, we first need to take a detour into the concept of Signed Distance Functions (SDFs for short). As implied by the name, an SDF is a function that, given an input position, returns the distance to the edge of some mathematically-defined object. The distance approaches zero as the position gets closer to the object, and becomes negative when stepping inside its boundaries.
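The simplest illustration of the sign convention is the SDF of a circle centered at the origin, sketched here in Rust (the same expression works in any shading language):

```rust
// Signed distance from point (x, y) to a circle of radius `r`
// centered at the origin: positive outside the circle, exactly
// zero on its edge, negative inside.
fn circle_sdf(x: f32, y: f32, r: f32) -> f32 {
    (x * x + y * y).sqrt() - r
}

fn main() {
    assert_eq!(circle_sdf(3.0, 4.0, 5.0), 0.0); // on the edge
    assert!(circle_sdf(10.0, 0.0, 5.0) > 0.0);  // outside
    assert!(circle_sdf(1.0, 1.0, 5.0) < 0.0);   // inside
    println!("ok");
}
```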
The list of known SDFs is extensive, mostly thanks to Inigo Quilez’s seminal work on the subject. On his website, you can also find a never-ending series of techniques that allow distortion, composition and repetition of SDFs to generate the most complex and realistic 3D scenes. Seriously, check it out. It’s pretty amazing.
Back to rectangles: let’s derive an SDF for them. We can simplify the problem by centering the rectangle we want to draw at the origin. From here, it’s relatively straightforward to see the problem is symmetric. In other words, calculating the distance for a point lying in one of the four quadrants is equivalent to calculating the distance for the mirror image of that point in any of the other three quadrants.
This means we only need to worry about the top-right portion of the rectangle. Taking the corner as a reference, we can distinguish three cases:
- Case 1), the point is both above and to the left of the corner. In this case, the shortest distance between the point and the rectangle is given by the vertical distance from the point to the top edge.
- Case 2), the point is both below and to the right of the corner. In this case, the shortest distance between the point and the rectangle is given by the horizontal distance from the point to the right edge.
- Case 3), the point is above and to the right of the corner. In this case, we can use the Pythagorean theorem to determine the distance between the corner and the point.
Case 3 can be generalized to cover the other two if we forbid the distance vector from having negative components, clamping them to zero instead.
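The three cases collapse into a few lines of code. Here is a CPU-side sketch in Rust of the (unsigned, outside-only) distance; taking the absolute value of the point exploits the four-quadrant symmetry, and clamping at zero merges cases 1 and 2 into case 3:

```rust
// Distance from point `p` to a rectangle of half-extents `half_size`,
// centered at the origin. Returns 0 for points inside the rectangle.
fn rect_distance(p: [f32; 2], half_size: [f32; 2]) -> f32 {
    // Mirror `p` into the top-right quadrant (symmetry).
    // Clamp negative components to zero (cases 1 and 2).
    let dx = (p[0].abs() - half_size[0]).max(0.0);
    let dy = (p[1].abs() - half_size[1]).max(0.0);
    // Pythagorean theorem (case 3).
    (dx * dx + dy * dy).sqrt()
}

fn main() {
    let half_size = [2.0, 3.0];
    assert_eq!(rect_distance([0.0, 5.0], half_size), 2.0); // case 1
    assert_eq!(rect_distance([5.0, 0.0], half_size), 3.0); // case 2
    assert_eq!(rect_distance([1.0, 1.0], half_size), 0.0); // inside
    println!("ok");
}
```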
The rules we just sketched out are sufficient to draw a simple rectangle and, later in this post, we will describe how that translates to GPU code. Before we get to that though, we can make a simple observation that allows extending those rules to calculate the SDF of rounded rectangles too!
Notice how in case 3) above, there are infinitely many points located at the same distance from the corner. In fact, those aren’t just random points, they are the points that describe a circle originating at the corner and having a radius equal to the distance.
The boundary of equal distance grows rounder the farther we move from the rectangle. That’s the key insight to drawing rounded corners: given a desired corner radius, we can shrink the original rectangle by that radius, calculate the distance from the point to the shrunk rectangle, and subtract the corner radius from the computed distance.
Porting the rectangle SDF to the GPU is very intuitive. As a quick recap, the classic GPU pipeline consists of a vertex and a fragment shader.
The vertex shader is responsible for mapping arbitrary input data into points in 3-dimensional space, with each set of three points defining a triangle that we want to draw on screen. Then, for every pixel inside the triangles generated by the vertex shader, the GPU invokes the fragment shader, which is responsible for assigning a color to the given pixel.
In our case, we use the vertex shader to define the bounding box of the shape we want to draw on screen using two triangles. We won’t necessarily fill every pixel inside this box. That is left to the fragment shader, which we’ll discuss next.
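The "unit vertices" the vertex shader consumes can be as simple as the six corners of two triangles covering the unit square; each rectangle instance then scales and translates them into its own bounding box. A CPU-side sketch of that data and transform (names are illustrative):

```rust
// Two triangles covering the unit square [0,1] x [0,1]. The vertex
// shader scales these by each rectangle's size and offsets them by
// its origin to produce the bounding box for that instance.
const UNIT_VERTICES: [[f32; 2]; 6] = [
    [0.0, 0.0], [1.0, 0.0], [0.0, 1.0], // first triangle
    [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], // second triangle
];

// CPU-side equivalent of `unit_vertex * rect.size + rect.origin`.
fn to_position(unit_vertex: [f32; 2], size: [f32; 2], origin: [f32; 2]) -> [f32; 2] {
    [
        unit_vertex[0] * size[0] + origin[0],
        unit_vertex[1] * size[1] + origin[1],
    ]
}

fn main() {
    // The bottom-right unit vertex of a 200x50 rect at (10, 10)
    // lands at (210, 60).
    let p = to_position(UNIT_VERTICES[5], [200.0, 50.0], [10.0, 10.0]);
    assert_eq!(p, [210.0, 60.0]);
    println!("ok");
}
```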
The following code is in Metal Shading Language, and is designed to be used with instanced rendering to draw multiple rectangles to the screen in a single draw call:
struct RectangleFragmentInput {
    float4 position [[position]];
    float2 origin [[flat]];
    float2 size [[flat]];
    float4 background_color [[flat]];
    float corner_radius [[flat]];
};

vertex RectangleFragmentInput rect_vertex(
    uint unit_vertex_id [[vertex_id]],
    uint rect_id [[instance_id]],
    constant float2 *unit_vertices [[buffer(GPUIRectInputIndexVertices)]],
    constant GPUIRect *rects [[buffer(GPUIRectInputIndexRects)]],
    constant GPUIUniforms *uniforms [[buffer(GPUIRectInputIndexUniforms)]]
) {
    // Fetch this invocation's unit vertex and rectangle instance.
    float2 unit_vertex = unit_vertices[unit_vertex_id];
    GPUIRect rect = rects[rect_id];

    // Scale the unit vertex by the rectangle's size, translate it to
    // the rectangle's origin, and convert to device coordinates.
    float2 position = unit_vertex * rect.size + rect.origin;
    float4 device_position = to_device_position(position, uniforms->viewport_size);

    return RectangleFragmentInput {
        device_position,
        rect.origin,
        rect.size,
        rect.background_color,
        rect.corner_radius
    };
}
To determine the color to assign to each pixel within this bounding box, the fragment shader calculates the distance from the pixel’s position to the rectangle using the SDF described above.
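The per-pixel logic the fragment shader performs can be sketched on the CPU: evaluate the SDF at the pixel's position, then map the signed distance to an alpha value. A linear ramp of roughly one pixel across the zero crossing is a common way to get antialiased edges (the exact blending GPUI uses may differ):

```rust
// Given a pixel's signed distance to the shape, derive its alpha:
// fully opaque well inside (distance << 0), fully transparent well
// outside (distance >> 0), with a one-pixel linear ramp across the
// edge for antialiasing.
fn alpha_from_distance(distance: f32) -> f32 {
    (0.5 - distance).clamp(0.0, 1.0)
}

fn main() {
    assert_eq!(alpha_from_distance(-5.0), 1.0); // deep inside: opaque
    assert_eq!(alpha_from_distance(5.0), 0.0);  // far outside: transparent
    assert_eq!(alpha_from_distance(0.0), 0.5);  // on the edge: half covered
    println!("ok");
}
```

The fragment shader multiplies this alpha into the rectangle's background color, so pixels outside the (possibly rounded) rectangle inside the bounding box end up fully transparent.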