A while ago at $work, we had a performance issue with one of our core Python libraries.
This particular library forms the backbone of our 3D processing pipeline. It’s a rather big and complex library which uses NumPy and other scientific Python packages to do a wide range of mathematical and geometrical operations.
Our system also has to work on-prem with limited CPU resources, and while it performed well at first,
as the number of concurrent physical users grew, it started to struggle to keep up with the load.
We came to the conclusion that we had to make our system at least 50 times faster to handle the increased workload, and we figured that Rust could help us achieve that.
Because the performance problems we encountered are pretty common,
we can recreate & solve them right here, in a (not-so-short) article.
So grab a cup of tea (or coffee) and I’ll walk you through (a) the basic underlying problem and (b) a few iterations of optimizations we can apply to solve this problem.
If you want to jump straight to the final code, just go to the summary.
Our running example
Let’s create a small library that will exhibit our original performance issues (but does completely arbitrary work).
Imagine you have a list of polygons and a list of points, all in 2D.
For business reasons, we want to “match” each point to a single polygon.
Our imaginary library is going to:
- Start with an initial list of points and polygons (all in 2D).
- For each point, find a much smaller subset of polygons that are closest to it, based on distance from the center.
- Out of those polygons, select the “best” one (we are going to use “smallest area” as “best”).
In code, that’s going to look like this (the full code can be found here):
from typing import List, Tuple

import numpy as np
from dataclasses import dataclass
from functools import cached_property

Point = np.array


@dataclass
class Polygon:
    x: np.array
    y: np.array

    @cached_property
    def center(self) -> Point: ...

    def area(self) -> float: ...


def find_close_polygons(polygon_subset: List[Polygon], point: Point, max_dist: float) -> List[Polygon]:
    ...


def select_best_polygon(polygon_sets: List[Tuple[Point, List[Polygon]]]) -> List[Tuple[Point, Polygon]]:
    ...


def main(polygons: List[Polygon], points: np.ndarray) -> List[Tuple[Point, Polygon]]:
    ...
The key difficulty (performance-wise) is this mix of Python objects and numpy arrays.
We are going to analyze this in depth in a minute.
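To see why this mix is painful: the matching work runs in Python loops over Polygon objects, and each step calls numpy on tiny 2-element arrays, where per-call overhead dwarfs the actual math. A rough way to convince yourself (exact numbers vary by machine):

import numpy as np
import timeit

point = np.array([1.0, 2.0])
center = np.array([3.0, 4.0])

# Each call is dominated by dispatch overhead, not by the 2-element math.
per_call = timeit.timeit(lambda: np.linalg.norm(center - point), number=100_000) / 100_000
print(f"{per_call * 1e6:.2f}us per norm() call on a 2-element array")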
It’s worth noting that converting parts of (or all of) this toy library to vectorized numpy might be possible,
but it would be nearly impossible for the real library, it would make the code much less readable and modifiable, and the gains would be limited (here’s a partially vectorized version, which is faster but far from the results we are going to achieve).
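Just to make the idea concrete, a partially vectorized find_close_polygons could stack all the centers into one array up front and do a single norm call per point. This is a sketch of the approach (not the linked version), assuming a pre-stacked (N, 2) centers array:

def find_close_polygons_vectorized(
    polygons: List[Polygon], centers: np.ndarray, point: Point, max_dist: float
) -> List[Polygon]:
    # One vectorized norm over all centers instead of N tiny ones.
    dists = np.linalg.norm(centers - point, axis=1)
    return [poly for poly, dist in zip(polygons, dists) if dist < max_dist]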
Also, using any JIT-based tricks (PyPy / numba) results in very small gains (as we will measure, just to make sure).
Why not just Rewrite It (all) In Rust™?
As compelling as a complete rewrite was, it had a few problems:
- The library was already using numpy for a lot of its calculations, so why should we expect Rust to be better?
- It is big and complex and very business critical and highly algorithmic, so that would take ~months of work, and our poor on-prem server is dying today.
- A bunch of friendly researchers are actively working on said library, implementing better algorithms and doing a lot of experiments.
They aren’t going to be very happy if they have to learn a new programming language, wait for things to compile, and fight the borrow checker. They would appreciate us not moving their cheese too far.
Dipping our toes
It is time to introduce our friend the profiler.
Python has a built-in profiler (cProfile), but in this case it’s not really the right tool for the job:
- It’ll introduce a lot of overhead to all the Python code, and none to native code, so our results might be biased.
- We won’t be able to see into native frames, meaning we aren’t going to be able to see into our Rust code.
We are going to use py-spy (GitHub). py-spy is a sampling profiler which can see into native frames.
They also mercifully publish pre-built wheels to pypi, so we can just pip install py-spy and get to work.
We also need something to measure.
# measure.py
import os

# Reduce noise (and actually improve perf in our case).
# This must be set before numpy is first imported.
os.environ["OPENBLAS_NUM_THREADS"] = "1"

import time
import poly_match

polygons, points = poly_match.generate_example()

# We are going to increase this as the code gets faster and faster.
NUM_ITER = 10

t0 = time.perf_counter()
for _ in range(NUM_ITER):
    poly_match.main(polygons, points)
t1 = time.perf_counter()

took = (t1 - t0) / NUM_ITER
print(f"Took an avg of {took * 1000:.2f}ms per iteration")
It’s not very scientific, but it’s going to take us very far.
“Good benchmarking is hard. Having said that, do not stress too much about having a perfect benchmarking setup, particularly when you start optimizing a program.”
~ Nicholas Nethercote, in “The Rust Performance Book”
Running this script will give us our baseline:
$ python measure.py
Took an avg of 293.41ms per iteration
For the original library, we used 50 different examples to make sure all cases are covered.
This matched the overall system perf, meaning we can start working on crushing this number.
Side note: We can also measure using PyPy (we’ll also add a warmup to allow the JIT to do its magic).
$ conda create -n pypyenv -c conda-forge pypy numpy && conda activate pypyenv
$ pypy measure_with_warmup.py
Took an avg of 1495.81ms per iteration
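The warmup script isn’t shown here, but a minimal sketch would just run main a few times before starting the clock:

# measure_with_warmup.py (a sketch; the real script may differ)
import os

os.environ["OPENBLAS_NUM_THREADS"] = "1"

import time
import poly_match

polygons, points = poly_match.generate_example()

# Give the JIT a chance to warm up before measuring.
for _ in range(50):
    poly_match.main(polygons, points)

NUM_ITER = 10
t0 = time.perf_counter()
for _ in range(NUM_ITER):
    poly_match.main(polygons, points)
t1 = time.perf_counter()

print(f"Took an avg of {(t1 - t0) / NUM_ITER * 1000:.2f}ms per iteration")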
Measure first
So, let’s find out what is so slow here.
$ py-spy record --native -o profile.svg -- python measure.py
py-spy> Sampling process 100 times a second. Press Control-C to exit.
Took an avg of 365.43ms per iteration
py-spy> Stopped sampling because process exited
py-spy> Wrote flamegraph data to 'profile.svg'. Samples: 391 Errors: 0
Already, we can see that the overhead is pretty small.
Just for comparison, using cProfile we get this:
$ python -m cProfile measure.py
Took an avg of 546.47ms per iteration
7551778 function calls (7409483 primitive calls) in 7.806 seconds
...
We get this nice, reddish graph called a flamegraph:
Each box is a function, and we can see the relative time we spend in each function, including the functions it calls (going down the graph/stack).
Try clicking on the norm box to zoom into it.
Here, the main takeaways are:
- The vast majority of time is spent in find_close_polygons.
- Most of that time is spent doing norm, which is a numpy function.

So, let’s have a look at find_close_polygons:
def find_close_polygons(
    polygon_subset: List[Polygon], point: np.array, max_dist: float
) -> List[Polygon]:
    close_polygons = []
    for poly in polygon_subset:
        if np.linalg.norm(poly.center - point) < max_dist:
            close_polygons.append(poly)

    return close_polygons
We are going to rewrite this function in Rust.
Before diving into the details, it’s important to notice a few things here:
- This function accepts & returns complex objects (Polygon, np.array).
- The size of the objects is non-trivial (so copying stuff might cost us).
- This function is called “a lot” (so any overhead we introduce is probably going to matter).
My first Rust module
pyo3 is a crate for interacting between Python and Rust. It has exceptionally good documentation, and they explain the basic setup here.
We are going to call our crate poly_match_rs, and add a function called find_close_polygons.
$ mkdir poly_match_rs && cd "$_"
$ pip install maturin
$ maturin init --bindings pyo3
$ maturin develop
Starting out, our crate is going to look like this:
use pyo3::prelude::*;

#[pyfunction]
fn find_close_polygons() -> PyResult<()> {
    Ok(())
}

#[pymodule]
fn poly_match_rs(_py: Python, m: &PyModule) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(find_close_polygons, m)?)?;
    Ok(())
}
We also need to remember to execute maturin develop every time we change the Rust library.
And that’s it! Let’s call our new function and see what happens.
>>> poly_match_rs.find_close_polygons(polygons, point, max_dist)
E TypeError: poly_match_rs.poly_match_rs.find_close_polygons() takes no arguments (3 given)
v1 – A naive Rust translation
We’ll start with matching the expected API.
PyO3 is pretty smart about Python-to-Rust conversions, so that’s going to be pretty easy:
#[pyfunction]
fn find_close_polygons(polygons: Vec<PyObject>, point: PyObject, max_dist: f64) -> PyResult<Vec<PyObject>> {
    Ok(vec![])
}
PyObject is (as the name suggests) a generic “anything goes” Python object. We’ll try to interact with it in a bit.
This should make the program run (albeit incorrectly).
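And indeed, after another maturin develop, the call from before now goes through, returning an empty list:

>>> poly_match_rs.find_close_polygons(polygons, point, max_dist)
[]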
I’m going to just copy and paste the original Python function, and fix the syntax.
#[pyfunction]
fn find_close_polygons(polygons: Vec<PyObject>, point: PyObject, max_dist: f64) -> PyResult<Vec<PyObject>> {
    let mut close_polygons = vec![];

    for poly in polygons {
        if norm(poly.center - point) < max_dist {
            close_polygons.push(poly)
        }
    }

    Ok(close_polygons)
}
Cool, but this won’t compile:
% maturin develop
...
error[E0609]: no field `center` on type `Py<PyAny>`
 --> src/lib.rs:8:22
  |
8 |         if norm(poly.center - point) < max_dist {
  |                      ^^^^^^ unknown field

error[E0425]: cannot find function `norm` in this scope
 --> src/lib.rs:8:12
  |
8 |         if norm(poly.center - point) < max_dist {
  |            ^^^^ not found in this scope

error: aborting due to 2 previous errors
We need three crates to implement our function (these go under [dependencies] in Cargo.toml):
# For Rust-native array operations.
ndarray = "0.15"
# For a `norm` function for arrays.
ndarray-linalg = "0.16"
# For accessing numpy-created objects, based on `ndarray`.
numpy = "0.18"
First, let’s turn the opaque and generic point: PyObject into something we can work with.
Just like we asked PyO3 for a “Vec of PyObjects”, we can ask for a numpy array, and it’ll auto-convert the argument for us.
use numpy::PyReadonlyArray1;

#[pyfunction]
fn find_close_polygons(
    // An object which says "I have the GIL", so we can access Python-managed memory.
    py: Python<'_>,
    polygons: Vec<PyObject>,
    // A reference to a numpy array we will be able to access.
    point: PyReadonlyArray1<f64>,
    max_dist: f64,
) -> PyResult<Vec<PyObject>> {
    // Convert to `ndarray::ArrayView1`, a fully operational native array.
    let point = point.as_array();
    ...
}
Because point is now an ArrayView1, we can actually use it. For example:
// Make the `norm` function available.
use ndarray_linalg::Norm;
assert_eq!((point.to_owned() - point).norm(), 0.);
Now we just need to get the center of each polygon, and “cast” it to an ArrayView1. In PyO3, this looks like this:
let center = poly
    .getattr(py, "center")?                 // Python-style getattr, requires a GIL token (`py`).
    .extract::<PyReadonlyArray1<f64>>(py)?  // Tell PyO3 what to convert the result to.
    .as_array()                             // Like `point` before.
    .to_owned();                            // We need one of the sides of the `-` to be "owned".
It’s a bit of a mouthful, but overall the result is a pretty clear line-to-line translation of the original code:
 1  use pyo3::prelude::*;
 2
 3  use ndarray_linalg::Norm;
 4  use numpy::PyReadonlyArray1;
 5
 6  #[pyfunction]
 7  fn find_close_polygons(
 8      py: Python<'_>,
 9      polygons: Vec<PyObject>,
10      point: PyReadonlyArray1<f64>,
11      max_dist: f64,
12  ) -> PyResult<Vec<PyObject>> {
13      let mut close_polygons = vec![];
14      let point = point.as_array();
15      for poly in polygons {
16          let center = poly
17              .getattr(py, "center")?
18              .extract::<PyReadonlyArray1<f64>>(py)?
19              .as_array()
20              .to_owned();
21
22          if (center - point).norm() < max_dist {
23              close_polygons.push(poly)
24          }
25      }
26
27      Ok(close_polygons)
28  }
vs the original:
def find_close_polygons(
    polygon_subset: List[Polygon], point: np.array, max_dist: float
) -> List[Polygon]:
    close_polygons = []
    for poly in polygon_subset:
        if np.linalg.norm(poly.center - point) < max_dist:
            close_polygons.append(poly)

    return close_polygons
We expect this version to have some advantage over the original function,
but how much?
$ (cd ./poly_match_rs/ && maturin develop)
$ python measure.py
Took an avg of 609.46ms per iteration
So… is Rust just super slow?
No! We just forgot to ask for speed!
If we run maturin develop --release, we get much better results:
$ (cd ./poly_match_rs/ && maturin develop --release)
$ python measure.py
Took an avg of 23.44ms per iteration
Now that is a nice speedup!
We also want to see into our native code, so we are going to enable debug symbols in release.
While we are at it, we might as well ask for maximum speed.
# added to Cargo.toml
[profile.release]
debug = true # Debug symbols for our profiler.
lto = true # Link-time optimization.
codegen-units = 1 # Slower compilation but faster code.
v2 – Rewrite even more in Rust
Now, using the --native flag in py-spy is going to show us both Python and our new native code.
Running py-spy again
$ py-spy record --native -o profile.svg -- python measure.py
py-spy> Sampling process 100 times a second. Press Control-C to exit.
we get this flamegraph (non-red colors were added so we can refer to them):
Looking at the profiler output, we can see a few interesting things:
- The relative size of find_close_polygons::...::trampoline (the symbol Python directly calls) and __pyfunction_find_close_polygons (our actual implementation).
  - Hovering, they are 95% vs 88% of samples, so the overhead is pretty small.
- The actual logic (if (center - point).norm() < max_dist { ... }), which is lib_v1.rs:22 (a very small box on the right), is about 9% of the total runtime.
  - So a 10x improvement should still be possible!
- Most of the time is spent in lib_v1.rs:16, which is poly.getattr(...).extract(...); if we zoom in, we can see it’s really just getattr and getting the underlying array using as_array.
The conclusion here is that we need to focus on solving the third point, and the way to do that is to rewrite Polygon in Rust.
Let’s look at our target:
Let’s look at our target:
@dataclass
class Polygon:
    x: np.array
    y: np.array
    _area: float = None

    @cached_property
    def center(self) -> np.array:
        centroid = np.array([self.x, self.y]).mean(axis=1)
        return centroid

    def area(self) -> float:
        if self._area is None:
            # The "shoelace formula" for the area of a simple polygon.
            self._area = 0.5 * np.abs(
                np.dot(self.x, np.roll(self.y, 1)) - np.dot(self.y, np.roll(self.x, 1))
            )
        return self._area
We’ll want to keep the existing API as much as possible, but we don’t really need area to be that fast (for now).
The actual class might have additional complex stuff as well.
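As a teaser for where this is going, here is a minimal sketch of what a Rust-side Polygon might look like, assuming we compute center eagerly in the constructor instead of using a cached property (a sketch only; the details come next):

use numpy::ndarray::Array1;
use numpy::{PyArray1, PyReadonlyArray1, ToPyArray};
use pyo3::prelude::*;

#[pyclass]
struct Polygon {
    x: Array1<f64>,
    y: Array1<f64>,
    center: Array1<f64>,
}

#[pymethods]
impl Polygon {
    #[new]
    fn new(x: PyReadonlyArray1<f64>, y: PyReadonlyArray1<f64>) -> Polygon {
        let x = x.as_array().to_owned();
        let y = y.as_array().to_owned();

        // Compute the center once, up front (the Python version used @cached_property).
        let center = Array1::from_vec(vec![x.mean().unwrap(), y.mean().unwrap()]);

        Polygon { x, y, center }
    }

    // Expose `center` back to Python as a numpy array.
    #[getter]
    fn center(&self, py: Python<'_>) -> Py<PyArray1<f64>> {
        self.center.to_pyarray(py).to_owned()
    }
}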