**How much boilerplate code do you need to write a ray-tracer?**
# Introduction
If you want to write a ray-tracer from scratch, you would usually start with something simple, like **Ray Tracing in One Weekend**.
There will be spheres, planes, rays. Eventually you would want to render a bunny, so you add simple 3D model loading, which requires
adding triangles and routines for triangle intersection, then area light sources, environment images, participating media…
The list could be continued forever. And then at some point you realize that you want to render caustics (either refractive from glass
or reflective from metal). Now you need to learn about bidirectional renderers and add even more code to make it work.
In this post I want to give you an idea of how much code is needed to write a bidirectional ray-tracer, which can produce images like this:

The purpose of this post is to serve as a checklist and starting point rather than to scare and frustrate people who are learning ray tracing.
It also describes my personal experience.
Let’s take a closer look!
# Code for unidirectional ray-tracer
Let’s pretend we are writing a unidirectional path tracer from scratch.
## Images
The first thing you would want is to see the result of your work. Therefore, you need the ability to save an image.
Right now, we are not talking about UI and visualizing images on the screen, which is a rather important feature,
together with debug visualization. Let’s just start with saving a result to a file. A lot of projects use something like the PPM file format, which
is very simple, but in the end you pay for this simplicity with extra code (tone mapping, converting from floating-point values to uint8, etc.).
That’s why I propose having the output in `float4` (RGBA32F) format and using the [tinyexr](https://github.com/syoyo/tinyexr) library for writing
HDR images in EXR format. This library will also be useful later for loading EXR images.
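With tinyexr the whole saving step boils down to a few lines. Here is a minimal sketch (assuming `rgba` is a tightly packed `width * height * 4` float buffer; `SaveEXR` is tinyexr’s one-call convenience helper, and `TINYEXR_IMPLEMENTATION` has to be defined in exactly one translation unit):

```cpp
#include <cstdio>

// In exactly one .cpp: #define TINYEXR_IMPLEMENTATION before this include.
#include "tinyexr.h"

// Saves an RGBA32F buffer (width * height * 4 floats) as an EXR file.
bool save_output(const float* rgba, int width, int height, const char* filename) {
  const char* err = nullptr;
  // components = 4 (RGBA), save_as_fp16 = 0 keeps full 32-bit floats.
  int ret = SaveEXR(rgba, width, height, 4, /*save_as_fp16=*/0, filename, &err);
  if (ret != TINYEXR_SUCCESS) {
    if (err) {
      std::fprintf(stderr, "Failed to save EXR: %s\n", err);
      FreeEXRErrorMessage(err);
    }
    return false;
  }
  return true;
}
```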
## Intersections
Now you want to actually trace something. There are at least three ways to do it:
– write all intersection and BVH code by yourself;
– build and use an open-source third-party library;
– use something like [Intel Embree](https://www.embree.org/).
I’d personally recommend going with the third option. A direct comparison between Embree and a couple of other libraries from GitHub shows that
with Embree a ray-tracer can easily be 2x faster, even when using it in the most naive way (tracing one ray at a time without batching).
And integrating and using Embree is quite simple.
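To give a feeling of how small that integration is, here is a minimal sketch (using the Embree 3 API; in Embree 4 the `rtcIntersect1` call takes slightly different arguments) which hardcodes a single triangle and traces one ray against it:

```cpp
#include <embree3/rtcore.h>
#include <cstdio>
#include <limits>

int main() {
  RTCDevice device = rtcNewDevice(nullptr);
  RTCScene scene = rtcNewScene(device);

  // One hardcoded triangle.
  RTCGeometry geom = rtcNewGeometry(device, RTC_GEOMETRY_TYPE_TRIANGLE);
  float* vertices = (float*)rtcSetNewGeometryBuffer(
      geom, RTC_BUFFER_TYPE_VERTEX, 0, RTC_FORMAT_FLOAT3, 3 * sizeof(float), 3);
  unsigned* indices = (unsigned*)rtcSetNewGeometryBuffer(
      geom, RTC_BUFFER_TYPE_INDEX, 0, RTC_FORMAT_UINT3, 3 * sizeof(unsigned), 1);
  vertices[0] = -1.0f; vertices[1] = 0.0f; vertices[2] = 0.0f;
  vertices[3] =  1.0f; vertices[4] = 0.0f; vertices[5] = 0.0f;
  vertices[6] =  0.0f; vertices[7] = 1.0f; vertices[8] = 0.0f;
  indices[0] = 0; indices[1] = 1; indices[2] = 2;
  rtcCommitGeometry(geom);
  rtcAttachGeometry(scene, geom);
  rtcReleaseGeometry(geom);
  rtcCommitScene(scene);

  // Trace a single ray - the "naive", one-ray-at-a-time path.
  RTCRayHit rayhit = {};
  rayhit.ray.org_x = 0.0f; rayhit.ray.org_y = 0.5f; rayhit.ray.org_z = -1.0f;
  rayhit.ray.dir_x = 0.0f; rayhit.ray.dir_y = 0.0f; rayhit.ray.dir_z = 1.0f;
  rayhit.ray.tnear = 0.0f;
  rayhit.ray.tfar = std::numeric_limits<float>::infinity();
  rayhit.ray.mask = 0xFFFFFFFFu;
  rayhit.hit.geomID = RTC_INVALID_GEOMETRY_ID;

  RTCIntersectContext context;
  rtcInitIntersectContext(&context);
  rtcIntersect1(scene, &context, &rayhit);

  std::printf("hit: %s, t = %f\n",
              rayhit.hit.geomID != RTC_INVALID_GEOMETRY_ID ? "yes" : "no",
              rayhit.ray.tfar);

  rtcReleaseScene(scene);
  rtcReleaseDevice(device);
  return 0;
}
```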
But of course, it highly depends on your personal motivation and purposes. If you want the experience of writing everything from scratch – just do it!
Having the intersection routines, you can now hardcode a couple of triangles and see if everything works. Let’s move on to the actual geometry!
## Scene
Now you want to load some geometry from files. The easiest way to do it would be to use something like [tinyobjloader](https://github.com/tinyobjloader/tinyobjloader).
I think OBJ is still one of the most popular formats around because of its simplicity. Of course, you might want to load glTF files, or even something more complicated.
But the point is that now you have a bunch of triangles and vertices which you upload to your intersection library, and you are desperate to trace them.
At this point you can hardcode some camera parameters, throw some rays, and actually see a result. But it’s a good time to add a camera to your ray-tracer.
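For reference, the loading step with tinyobjloader’s `ObjReader` API can look roughly like this (the `Vec3` type and the flat triangle-soup layout are just illustrative choices):

```cpp
#include <cstdio>
#include <vector>

// In exactly one .cpp: #define TINYOBJLOADER_IMPLEMENTATION before this include.
#include "tiny_obj_loader.h"

struct Vec3 { float x, y, z; };

// Loads an OBJ file and flattens it into a triangle soup:
// three positions per triangle, ready to be uploaded to the intersection library.
bool load_obj(const char* filename, std::vector<Vec3>& positions) {
  tinyobj::ObjReader reader;
  if (!reader.ParseFromFile(filename)) {
    std::fprintf(stderr, "tinyobjloader: %s\n", reader.Error().c_str());
    return false;
  }

  const tinyobj::attrib_t& attrib = reader.GetAttrib();
  for (const tinyobj::shape_t& shape : reader.GetShapes()) {
    size_t index_offset = 0;
    for (size_t f = 0; f < shape.mesh.num_face_vertices.size(); ++f) {
      size_t fv = shape.mesh.num_face_vertices[f];
      // ObjReader triangulates by default, so fv is normally 3 here.
      for (size_t v = 0; v < fv; ++v) {
        tinyobj::index_t idx = shape.mesh.indices[index_offset + v];
        positions.push_back({attrib.vertices[3 * idx.vertex_index + 0],
                             attrib.vertices[3 * idx.vertex_index + 1],
                             attrib.vertices[3 * idx.vertex_index + 2]});
      }
      index_offset += fv;
    }
  }
  return true;
}
```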
## Camera
The easiest way to set up a camera in the scene would be to define a position, a point to look at, and a field of view.
If you are loading a format which contains camera data (like glTF), it is even simpler.
But in any case, you will need code which builds the camera data (like matrices).
Now, having the camera data, you can easily generate rays for a specific camera configuration. Going through each pixel of the image, you need to
generate a ray for that specific pixel, so there should be two functions – one to get NDC coordinates from a pixel, and one to generate a ray out of these coordinates.
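Here is a sketch of those two functions for a simple pinhole camera, together with building the camera basis from a position and a look-at point (the `Vec3` helpers and the exact basis convention are only illustrative choices):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin; Vec3 direction; };

Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 cross(Vec3 a, Vec3 b) { return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x}; }
Vec3 normalize(Vec3 a) {
  float len = std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
  return {a.x / len, a.y / len, a.z / len};
}

struct Camera {
  Vec3 origin;
  Vec3 right, up, forward; // orthonormal basis
  float tan_half_fov;      // tan(vertical field of view / 2)
  float aspect;            // width / height
};

// Builds the camera data from a position, a point to look at, and a field of view.
Camera make_camera(Vec3 position, Vec3 target, Vec3 world_up, float vertical_fov, float aspect) {
  Camera cam;
  cam.origin = position;
  cam.forward = normalize(sub(target, position));
  cam.right = normalize(cross(world_up, cam.forward));
  cam.up = cross(cam.forward, cam.right);
  cam.tan_half_fov = std::tan(0.5f * vertical_fov);
  cam.aspect = aspect;
  return cam;
}

// Function 1: pixel (x, y) -> NDC in [-1, 1], sampling the pixel center.
void pixel_to_ndc(int x, int y, int width, int height, float& u, float& v) {
  u = 2.0f * ((x + 0.5f) / width) - 1.0f;
  v = 1.0f - 2.0f * ((y + 0.5f) / height); // flip so +v points up
}

// Function 2: NDC coordinates -> world-space ray through the image plane.
Ray generate_ray(const Camera& cam, float u, float v) {
  float px = u * cam.aspect * cam.tan_half_fov;
  float py = v * cam.tan_half_fov;
  Vec3 dir = normalize({
      px * cam.right.x + py * cam.up.x + cam.forward.x,
      px * cam.right.y + py * cam.up.y + cam.forward.y,
      px * cam.right.z + py * cam.up.z + cam.forward.z});
  return {cam.origin, dir};
}
```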
At this point you’d probably see something like this in the output:

And this is definitely a good start, but let’s move on.
## Multithreading
So far, we have been using a simple loop over all pixels of the image. But this is not the fastest way. We usually want to spread the workload over all threads