It matters when your game's script actually starts running
Hey all. This post is about a common issue you may run into when sandboxing your game's script; maybe this specific problem will be familiar to you. I want to talk about execution timeouts.
Execution timeouts
If you are going to sandbox something, having an execution timeout is a necessity. Even if security is not important, or not even a consideration, you still need a timeout so you don't end up with a thread in your engine just spinning in a loop forever. So, let's enumerate the most common ways to do timeouts:
- Signalling or otherwise interrupting a thread with a running simulation.
- Counting instructions or jumps.
- sigsetjmp and friends (but it’s a wasp nest).
Maybe there are others, but the first two are the paradigms that matter for this post. If you add KVM into the mix there is actually a fourth option, but I will not go into that here. In wasmtime the first is supported by the epoch API and the second by the fuel API.
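To make that concrete, here is a minimal sketch of how the two wasmtime mechanisms are wired up, using the wasmtime C API from C++. The function names are taken from recent wasmtime releases and may differ slightly in older ones; error handling and module instantiation are left out.

```cpp
// Sketch only: enabling epoch-based interruption in wasmtime and arming a
// watchdog that bumps the epoch. Assumes the wasmtime C API headers.
#include <wasmtime.h>
#include <chrono>
#include <thread>

int main() {
    wasm_config_t* config = wasm_config_new();
    // Option 1: epoch-based interruption (a watchdog bumps the epoch).
    wasmtime_config_epoch_interruption_set(config, true);
    // Option 2 would instead be fuel metering:
    //   wasmtime_config_consume_fuel_set(config, true);
    // and then giving the store a fuel budget (wasmtime_context_set_fuel
    // in recent versions) before each call into the guest.

    wasm_engine_t* engine = wasm_engine_new_with_config(config);
    wasmtime_store_t* store = wasmtime_store_new(engine, nullptr, nullptr);
    wasmtime_context_t* ctx = wasmtime_store_context(store);

    // Trap guest execution once the engine's epoch advances past this deadline.
    wasmtime_context_set_epoch_deadline(ctx, 1);

    // Watchdog thread: after 1 ms, bump the epoch so a running guest traps.
    std::thread watchdog([engine] {
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
        wasmtime_engine_increment_epoch(engine);
    });

    // ... instantiate a module and call into it here; a long-running guest
    // call returns a trap once the deadline is exceeded ...

    watchdog.join();
    wasmtime_store_delete(store);
    wasm_engine_delete(engine);
}
```

Note that the epoch variant needs something, somewhere, to advance the epoch, which is exactly where the secondary-thread question below comes in.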
Now that we know the two ways we can stop execution, let's look at the performance characteristics of each.
Execution in a secondary thread
threadpool: task median 5364ns lowest: 4701ns highest: 6060ns
A casual micro-benchmark of a gold-standard C++11 threadpool task that returns a future with a return value and supports forwarding exceptions. On a live system the overhead is going to be much higher than this, probably 2–5x. Let's round it up to 10 microseconds, as that covers the more minimalistic implementations people might have as well as relatively idle systems running undemanding games. Remember that the people playing your game are running with the ondemand frequency scaling governor, so clock frequencies and power states will change all the time. You might even find your task took 100 microseconds just to land on the thread.
threadpool: task median 11152ns lowest: 5556ns highest: 14649ns
The same benchmark with the ondemand CPU frequency scaling governor. That seems about right for a micro-benchmark.
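For reference, here is a minimal sketch of the pattern those numbers measure: hand a task to another thread, get a future back, and time it. It uses std::async rather than a dedicated threadpool, so treat it as an illustration of the ceremony involved, not the benchmarked implementation; run_script() and the interruption hook are hypothetical stand-ins.

```cpp
#include <chrono>
#include <future>
#include <iostream>
#include <thread>

// Hypothetical stand-in for executing the sandboxed script.
int run_script() {
    // Pretend the guest does a little work.
    std::this_thread::sleep_for(std::chrono::microseconds(50));
    return 42;
}

int main() {
    auto t0 = std::chrono::steady_clock::now();

    // Dispatch to a secondary thread and get a future for the result.
    std::future<int> result = std::async(std::launch::async, run_script);

    // Wait up to 1 ms; on timeout we would interrupt the guest (for wasmtime,
    // by incrementing the epoch) so that get() returns promptly with an error.
    if (result.wait_for(std::chrono::milliseconds(1)) == std::future_status::timeout) {
        // request_interrupt();  // hypothetical interruption hook
    }

    int value = result.get();  // rethrows any exception thrown by the task

    auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(
                  std::chrono::steady_clock::now() - t0).count();
    std::cout << "task took " << ns << "ns, returned " << value << "\n";
}
```

The point of the measurements above is that even this much machinery, a dispatch, a wake-up, and a future, costs microseconds before the script has executed a single instruction.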
Execution in the same thread
Lua and LuaJIT support