I’ve released a new version of Use.GPU, my experimental reactive/declarative WebGPU framework, now at version 0.8.
My goal is to make GPU rendering easier and more sane. I do this by applying the lessons and patterns learned from the React world, and basically turning them all up to 11, sometimes 12. This is done via my own Live runtime, which is like a martian React on steroids.
The previous 0.7 release was themed around compute, where I applied my shader linker to a few challenging use cases. It hopefully made it clear that Use.GPU is very good at things that traditional engines are kinda bad at.
In comparison, 0.8 will seem banal, because the theme was to fill the gaps and bring some traditional conveniences, like:
- Scenes and nodes with matrices
- Meshes with instancing
- Shadow maps for lighting
- Visibility culling for geometry
These were absent mostly because I didn’t really need them, and they didn’t seem like they’d push the architecture in novel directions. That’s changed, however, because there’s one major refactor underpinning it all: the previously standard forward renderer is now entirely swappable. There is a shiny deferred-style renderer to showcase this ability, where lights are rendered separately, using a g-buffer with stenciling.
This new rendering pipeline is entirely component-driven, and fully dogfooded. There is no core renderer per se: the way draws are realized depends purely on the components being used. It effectively realizes that most elusive of graphics grails, which established engines have had difficulty delivering on: a data-driven, scriptable render pipeline that mortals can hopefully use.
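To give a sense of what that means in practice, here is a hypothetical sketch; the component names are illustrative, not the exact Use.GPU API:

```tsx
// Hypothetical sketch, with illustrative component names: because draws
// are realized purely by the components used, swapping the forward
// renderer for a deferred one is just a matter of composing differently.
const App = () => (
  <DeferredRenderer>
    <Scene>
      {/* nodes, meshes, lights, shadow maps, … */}
    </Scene>
  </DeferredRenderer>
);
```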
[Figures: root of the app; deep inside the tree]
I’ve spent countless words on Use.GPU’s effect-based architecture in prior posts, which I won’t recap. Rather, I’ll just summarize the one big trick: the program is structured entirely as if it only ever needs to produce one frame. Then, in order to be interactive and animate, it selectively rewinds parts of the program and reactively re-runs them. If it sounds crazy, that’s because it is. And yet it works.
So the key point isn’t the feature list above, but rather, how it does so. It continues to prove that this way of coding can pay off big. It has all the benefits of immediate-mode UI, with none of the downsides, and tons of extensibility. And there are some surprises along the way.
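To make the rewind-and-re-run trick a bit more concrete, here is a minimal sketch using React-style hooks, which Live mirrors; the component and helpers are illustrative, not actual Use.GPU code:

```tsx
// The component reads as straight-line code that produces a single
// frame. When `time` changes, only this component is rewound and
// re-run; everything that doesn't depend on it is left untouched.
const SpinningCube = ({ time }: { time: number }) => {
  // Recomputed only when `time` changes, like a cache entry that is
  // refreshed the instant it is invalidated.
  const matrix = useMemo(() => rotationY(time), [time]);
  return <Node matrix={matrix} />;
};
```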
Real Reactivity
You might think: isn’t this a solved problem? There are plenty of JS 3D engines. Hasn’t React-Three-Fiber (R3F) shown how to make that declarative? And aren’t these just web versions of what native engines like Unreal and Unity already do, and do better?
My answer is no, but it might not be clear why. Let me give an example from my current job.
My client needs a specialized 3D editing tool. In gaming terms you might think of it as a level design tool, except the levels are real buildings. The details don’t really matter, only that they need a custom 3D editing UI. I’ve been using Three.js and R3F for it, because that’s what works today and what other people know.
Three.js might seem like a great choice for the job: it has a 3D scene, editing controls and so on. But my scene is not the source of truth; it’s the output of a process. The actual source of truth being live-edited is another tree that sits before it. So I need to solve a two-way synchronization problem between the two, which requires careful reasoning about state changes.
Change handlers in Three.js and R3F
Sadly, the way Three.js responds to changes is ill-defined. As is common, its objects have “dirty” flags. They are resolved and cleared when the scene is re-rendered. But this is not an iron rule: many methods do trigger a local refresh on the spot. Worse, certain properties have an invisible setter, which immediately triggers a “change” event when you assign a new value to it. This also causes derived state to update and cascade, and will be broadcast to any code that might be listening.
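A few of these coexisting change models, roughly, as of recent Three.js versions (the surrounding variables are illustrative):

```ts
mesh.position.x = 5;
// Plain mutation: resolved lazily, when updateMatrixWorld() runs as
// part of the next render.

mesh.rotation.y = Math.PI;
// The Euler angles have an internal onChange callback: assigning here
// immediately recomputes mesh.quaternion, on the spot.

material.needsUpdate = true;
// A setter in disguise: it bumps an internal version counter so the
// material gets recompiled.

controls.addEventListener('change', () => { /* respond immediately */ });
// Helpers like OrbitControls broadcast 'change' events to any listener.
```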
The coding principle applied here is “better safe than sorry”. Each of these triggers was only added to fix a particular stale-data bug, so their effects are incomplete. This creates two big problems. Problem 1 is that you end up with a mix of old and new state… and problem 2 is that you can only make it worse, by sprinkling even more pre-emptive partial updates everywhere.
These “change” events are oblivious to the reason for the change, and this is actually key: if a change was caused by a user interaction, the rest of the app needs to respond to it. But if the change was computed from something else, then you explicitly don’t want anything earlier to respond to it, because it would just create an endless cycle, which you need to detect and halt.
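Here is a generic sketch of that trap, not tied to any particular engine’s API; the guard flag at the end is the usual band-aid:

```ts
// Hypothetical two-way sync between a source tree and a derived scene.
sourceTree.on('change', () => syncSceneFromSource());  // source → scene
scene.on('change', () => syncSourceFromScene());       // scene → source

// Each sync mutates the other side, which fires its own 'change' event
// in turn: an endless cycle. Because the events don't carry the reason
// for the change, the usual fix is a flag that halts it by brute force:
let syncing = false;
const guarded = (sync: () => void) => {
  if (syncing) return;
  syncing = true;
  try { sync(); } finally { syncing = false; }
};
```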
R3F introduces a declarative model on top, but it can’t fundamentally fix this. In fact it adds a few new problems of its own in trying to bridge the two worlds. The details are boring and too specific to dig into, but let’s just say it took me a while to realize why my objects were moving around whenever I did a hot-reload: the second render is not at all the same as the first.
Yet this is exactly what one-way data flow in reactive frameworks is meant to address. It creates a fundamental distinction between the two directions: cascading down (derived state) vs cascading up (user interactions). Instead of routing both through the same mutable objects, it creates a separate one-way reverse path, triggered only in specific circumstances, so that cause and effect are always unambiguous, and cycles are impossible.
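In code, the distinction looks something like this minimal React-style sketch, where Gizmo and its props are hypothetical:

```tsx
// Down: `position` is derived state, recomputed from above.
// Up: `onMove` is the explicit reverse path, fired only by user input.
// The component cannot assign to its own inputs, so cycles cannot form.
const NodeEditor = ({ position, onMove }: {
  position: [number, number, number];
  onMove: (next: [number, number, number]) => void;
}) => (
  <Gizmo position={position} onDrag={(next) => onMove(next)} />
);
```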
Three.js is good for classic 3D. But if you’re trying to build applications with R3F, it feels fragile, like there’s something fundamentally wrong with it that they’ll never be able to fix. The big lesson is this: for code to be truly declarative, changes must not be allowed to travel backwards. They must also be resolved consistently, in one big pass. Otherwise it leads to endless bug whack-a-mole.
What reactivity really does is take cache invalidation, said to be the hardest problem, and turn the problem itself into the solution. You never invalidate a cache without immediately refreshing it, and you make that the sole way to cause anything to happen at all. Crazy, and yet it works.
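As a toy sketch of the principle (not Live’s actual implementation): invalidation and refresh are fused into a single operation, and that operation is the only way downstream code ever observes a change.

```ts
// A reactive cell: invalidating it immediately recomputes the value and
// notifies dependents, so observers can never see stale state.
const cell = <T>(compute: () => T) => {
  let value = compute();
  const subscribers = new Set<() => void>();
  return {
    get: () => value,
    invalidate: () => {
      value = compute();                           // refresh the moment we invalidate
      for (const notify of subscribers) notify();  // cascade forward only
    },
    subscribe: (notify: () => void) => { subscribers.add(notify); },
  };
};
```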
When I tell people this, they often say “well, it might work well for your domain, but it couldn’t possibly work for mine.” And then I show them how to do it.
Figuring out which way your cube map points: just gfx programmer things.
And… Scene
One of the cool consequences of this architecture is that even the most traditional of constructs can suddenly bring neat, Lispy surprises.
The new scene system is a great example. Unlike in most other engines, it’s actually entirely optional. But that’s not the surprising part.
Normally you just have a tree where nodes contain other nodes, which eventually contain meshes, like this:
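A hypothetical sketch, with illustrative component names:

```tsx
<Scene>
  <Node matrix={roomMatrix}>
    <Node matrix={tableMatrix}>
      <Mesh mesh={table} />
      <Mesh mesh={lamp} />
    </Node>
  </Node>
</Scene>
```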
It’s a way to compose matrices: they cascade and combine.