I present a new low discrepancy quasirandom sequence that offers many substantial improvements over other popular sequences such as the Sobol and Halton sequences.

First published: 25 April 2018
Last updated: 23 April 2020
This blog post has been featured on the front page of Hacker News a couple of times, Aug 2018 and Oct 2020. See here for the awesome discussion.
Topics Covered
- Low discrepancy sequences in one-dimension
- Low discrepancy methods in two dimensions
- Packing distance
- Multiclass low discrepancy sets
- Quasirandom sequences on the surface of a sphere
- Quasiperiodic Tiling of the plane
- Dither masks in computer graphics
In figure 1b, it can be seen that simple uniform random sampling of points inside a unit square exhibits clustering, as well as regions that contain no points at all. A low discrepancy quasirandom sequence is a fully deterministic sequence of points that covers the entire space as uniformly as possible. Such sequences appear less regular than lattices but more regular than random sampling (see figure 1b).
Well known quasirandom sequences include the Halton and Sobol sequences. They play a large role in many numerical computing applications, including physics, finance and in more recent decades computer graphics.
Simple random sampling is often called ‘white noise’, and low discrepancy quasirandom sequences are often called ‘blue noise’ samples (although technically blue noise and low discrepancy are two similar but distinct concepts).

The methods for creating low discrepancy quasirandom sequences in one dimension are extremely well studied.
The fundamental advantage of open sequences (that is, extensible in $n$) is that if the resultant error based on a given (finite) number of terms is still too large, the sequence can be arbitrarily extended without discarding all the previously calculated points.
There are many methods to construct open sequences. One way to categorise the different types is by the method of constructing their basis (hyper)-parameters:
- irrational fractions: Kronecker, Richtmyer, Ramshaw, Weyl
- (co)prime numbers: Van der Corput, Halton, Faure
- irreducible polynomials: Niederreiter
- primitive polynomials: Sobol’
This post describes a new additive recurrence $R$-sequence that falls in the first category of those sequences that are based on specially chosen irrational numbers. We compare its properties with the Halton sequence and the Sobol sequence.
This special recurrence sequence is defined as: $$R_1(\alpha): \;\; t_n = \{s_0 + n \alpha\}, \quad n=1,2,3,\ldots$$ where $\alpha$ is any irrational number. Note that the notation $\{x\}$ indicates the fractional part of $x$. In computing, this function is more commonly expressed in the following way: $$R_1(\alpha): \;\; t_n = s_0 + n \alpha \; (\textrm{mod} \; 1), \quad n=1,2,3,\ldots$$ For $s_0 = 0$, the first few terms of the sequence $R(\phi)$ are: $$t_n = 0.618, \; 0.236, \; 0.854, \; 0.472, \; 0.090, \; 0.708, \; 0.327, \; 0.944, \; 0.562, \; 0.180, \; 0.798, \; 0.416, \; 0.034, \; 0.652, \; 0.271, \; 0.888, \ldots$$
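As a minimal sketch, the recurrence above takes only a few lines of Python (the helper name `r1_sequence` is mine, not from the post):

```python
import math

def r1_sequence(n_points, alpha=None, s0=0.0):
    """Return the first n_points terms of t_n = (s0 + n*alpha) mod 1."""
    if alpha is None:
        # 1/phi = 2/(sqrt(5)+1) = 0.6180339887...
        alpha = 2.0 / (math.sqrt(5.0) + 1.0)
    return [(s0 + n * alpha) % 1.0 for n in range(1, n_points + 1)]

print([round(t, 3) for t in r1_sequence(5)])
# → [0.618, 0.236, 0.854, 0.472, 0.09]
```

Note that `s0` is exposed as a parameter, anticipating the toroidal-shift option discussed next.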
It is important to note that the value of $s_0$ does not affect the overall characteristics of the sequence, and in nearly all cases it is set to zero. However, especially in computing, the option of $s_0 \neq 0$ offers an additional degree of freedom that is often useful. Applying a non-zero value of $s_0$ is often called a toroidal shift, or a Cranley-Patterson transformation, and the resulting sequence is usually called a ‘shifted lattice sequence’.
The value of $\alpha$ that gives the lowest possible discrepancy is $\alpha = 1/\phi$, where $\phi$ is the golden ratio. That is, $$\phi \equiv \frac{\sqrt{5}+1}{2} \simeq 1.61803398875\ldots$$ It is interesting to note that there are an infinite number of other values of $\alpha$ that also achieve optimal discrepancy, and they are all related via the Moebius transformation $$\alpha' = \frac{p\alpha+q}{r\alpha+s} \quad \textrm{for all integers} \; p,q,r,s \quad \textrm{such that} \; |ps-qr|=1$$

We now compare this recurrence method to the well known van der Corput reverse-radix sequences [van der Corput, 1935]. The van der Corput sequences are actually a family of sequences, each defined by a unique hyper-parameter $b$. The first few terms of the sequence for $b=2$ are: $$t_n^{[2]} = \frac{1}{2}, \frac{1}{4}, \frac{3}{4}, \frac{1}{8}, \frac{5}{8}, \frac{3}{8}, \frac{7}{8}, \frac{1}{16}, \frac{9}{16}, \frac{5}{16}, \frac{13}{16}, \frac{3}{16}, \frac{11}{16}, \frac{7}{16}, \frac{15}{16}, \ldots$$

The following section compares the general characteristics and effectiveness of each of these sequences. Consider the task of evaluating the definite integral $$A = \int_0^1 f(x) \, \textrm{d}x$$ We may approximate this by: $$A \simeq A_n = \frac{1}{n} \sum_{i=1}^{n} f(x_i), \quad x_i \in [0,1]$$
- If the $x_i$ are equal to $i/n$, this is the rectangle rule;
- If the $x_i$ are chosen randomly, this is the Monte Carlo method; and
- If the $x_i$ are elements of a low discrepancy sequence, this is the quasi-Monte Carlo method.
The following graph shows the typical error curves $s_n = |A-A_n|$ for approximating a definite integral associated with the function $f(x) = \exp(-x^2/2), \; x \in [0,1]$, with: (i) quasi-random points based on the additive recurrence, where $\alpha = 1/\phi$ (blue); (ii) quasi-random points based on the van der Corput sequence (orange); (iii) randomly selected points (green); (iv) the Sobol sequence (red). It shows that for $n=10^6$ points, the random sampling approach results in an error of $\simeq 10^{-4}$, the van der Corput sequence results in an error of $\simeq 10^{-6}$, whilst the $R(\phi)$-sequence results in an error of $\simeq 10^{-7}$, which is ${\sim}10\times$ better than the van der Corput error and ${\sim}1000\times$ better than (uniform) random sampling.
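A small sketch of this experiment is given below, comparing the $R_1$, van der Corput and plain Monte Carlo estimators at a few (smaller) values of $n$. The Sobol sequence is omitted since it requires a generator library; the exact value of the integral is $\sqrt{\pi/2}\,\mathrm{erf}(1/\sqrt{2})$, and the helper names are my own:

```python
import math, random

def f(x):
    return math.exp(-x * x / 2.0)

def van_der_corput(n, base=2):
    """Base-b radical inverse: reflect the digits of n about the radix point."""
    t, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        t += digit / denom
    return t

EXACT = math.sqrt(math.pi / 2.0) * math.erf(1.0 / math.sqrt(2.0))
ALPHA = 2.0 / (math.sqrt(5.0) + 1.0)  # 1/phi

random.seed(0)
for n in (10**2, 10**3, 10**4):
    err_r1  = abs(sum(f((i * ALPHA) % 1.0) for i in range(1, n + 1)) / n - EXACT)
    err_vdc = abs(sum(f(van_der_corput(i)) for i in range(1, n + 1)) / n - EXACT)
    err_mc  = abs(sum(f(random.random()) for _ in range(n)) / n - EXACT)
    print(f"n={n:6d}  R1={err_r1:.1e}  vdC={err_vdc:.1e}  MC={err_mc:.1e}")
```

Even at these modest sample sizes, the two quasirandom estimators should show markedly smaller errors than the random one, and errors shrinking roughly like $1/n$ rather than $1/\sqrt{n}$.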

Several things from this figure are worth noting:
- it is consistent with the knowledge that the errors based on uniform random sampling asymptotically decrease by $1/\sqrt{n}$, whereas error curves based on both quasi-random sequences tend to $1/n$.
- The results for the $R_1(phi)$-sequence (blue) and Sobol (red) are the best.
- It shows that the van der Corput sequence produces error rates that are far better than random, but consistently not quite as good as Sobol or the new $R_1$ sequence.
The new $R_1$ sequence, which is a Kronecker sequence using the golden ratio, is one of the best choices for one-dimensional quasi-Monte Carlo (QMC) integration methods.
It should also be noted that although $\alpha = \phi$ theoretically offers the provably optimal option, $\sqrt{2}$ is very close to optimal, and almost any other irrational value of $\alpha$ provides excellent error curves for one-dimensional integration. This is why $\alpha = \sqrt{p}$ for any prime $p$ is very commonly used. Furthermore, from a computing perspective, selecting a random value in the interval $\alpha \in [0,1]$ is almost certainly going to be (within machine precision) an irrational number, and therefore a solid choice for a low discrepancy sequence. For visual clarity, the above figure does not show the results of the Niederreiter sequence, as they are virtually indistinguishable from those of the Sobol and $R$ sequences. The Niederreiter and Sobol sequences (along with their optimized parameter selection) used in this post were calculated via Mathematica using what is documented as “closed, proprietary and fully optimized generators provided in Intel’s MKL library”.
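As a rough numerical check of the claim that many irrationals work well, the sketch below compares the one-dimensional integration error for a few choices of $\alpha$ on the same test integral (the helper name `qmc_error` is mine):

```python
import math, random

# Exact value of the test integral: sqrt(pi/2) * erf(1/sqrt(2))
EXACT = math.sqrt(math.pi / 2.0) * math.erf(1.0 / math.sqrt(2.0))

def qmc_error(alpha, n=100_000):
    """Error of the n-point Kronecker-sequence estimate of ∫₀¹ exp(-x²/2) dx."""
    f = lambda x: math.exp(-x * x / 2.0)
    s = sum(f((i * alpha) % 1.0) for i in range(1, n + 1))
    return abs(s / n - EXACT)

for name, a in [("1/phi",   2.0 / (math.sqrt(5.0) + 1.0)),
                ("sqrt(2)", math.sqrt(2.0)),
                ("random",  random.random())]:
    print(f"{name:8s} error = {qmc_error(a):.2e}")
```

The golden-ratio choice typically wins, but $\sqrt{2}$ (and, with very high probability, a random $\alpha$) lands within an order of magnitude or so.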
Most current methods to construct higher-dimensional low discrepancy sequences simply combine (in a component-wise manner) $d$ one-dimensional sequences together. For brevity, this post mainly focuses on describing the Halton sequence [Halton, 1960], the Sobol sequence and the $d$-dimensional Kronecker sequence.
The Halton sequence is constructed simply by using $d$ different one-dimensional van der Corput sequences, each with a base that is relatively prime to all the others; that is, pairwise coprime. By far the most frequent selection, due to its obvious simplicity and sensibility, is to select the first $d$ primes. The distribution of the first 625 points defined by the (2,3)-Halton sequence is shown in figure 1. Although many two-dimensional Halton sequences are excellent sources for low discrepancy sequences, it is also well known that many are highly problematic and do not exhibit low discrepancy. For example, figure 3 shows that the (11,13)-Halton sequence produces highly visible lines. Much effort has gone into methods of selecting which pairs of $(p_1, p_2)$ are exemplars and which ones are problematic. This issue is even more problematic in higher dimensions.
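A sketch of this construction: each coordinate is a van der Corput (radical-inverse) sequence in its own base. The helper names `radical_inverse` and `halton2d` are mine:

```python
def radical_inverse(n, base):
    """Base-b van der Corput term: reflect the digits of n about the radix point."""
    t, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        t += digit / denom
    return t

def halton2d(n_points, p1=2, p2=3):
    """First n_points of the (p1, p2)-Halton sequence; p1, p2 should be coprime."""
    return [(radical_inverse(n, p1), radical_inverse(n, p2))
            for n in range(1, n_points + 1)]

for x, y in halton2d(3):
    print(round(x, 4), round(y, 4))
```

Generalizing to $d$ dimensions just means pairing $d$ radical inverses, conventionally in the first $d$ prime bases.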
Kronecker recurrence methods generally suffer even greater challenges when generalizing to higher dimensions. That is, although using $\alpha = \sqrt{p}$ produces excellent one-dimensional sequences, it is very challenging to even find pairs of prime numbers to be used as the basis for the two-dimensional case that are not problematic! As a way around this, some have suggested using other well-known irrational numbers, such as $\phi, \pi, e, \ldots$. These produce moderately acceptable solutions, but they are generally not used as they are usually not as good as a well-chosen Halton sequence. A great deal of effort has focused on these issues of degeneracy.
Proposed solutions include skipping/burning and leaping/thinning. For finite sequences, scrambling is another technique that is frequently used to overcome this problem; however, scrambling cannot be used to create an open (infinite) low discrepancy sequence.

Similarly, although the Sobol sequence generally performs better, its complexity, and more importantly its requirement of very carefully chosen hyperparameters, make it less inviting.
Thus, reiterating, in $d$ dimensions:
- the typical Kronecker sequences require the selection of $d$ linearly independent irrational numbers;
- the Halton sequence requires $d$ pairwise coprime integers; and
- the Sobol sequence requires selecting $d$ direction numbers.
The new $R_d$ sequence is the only $d$-dimensional low discrepancy quasirandom sequence that does not require any selection of basis parameters.
Generalizing the Golden Ratio
tl;dr In this section, I show how to construct a new class of $d$-dimensional open (infinite) low discrepancy sequences that do not require choosing any basis parameters, and which have outstanding low discrepancy properties.
There are many possible ways to generalize the Fibonacci sequence and/or the golden ratio. The following proposed method of generalizing the golden ratio is not new [Krcadinac, 2005]. Also, the characteristic polynomial is related to many fields of algebra, including Perron numbers and Pisot-Vijayaraghavan numbers. However, what is new is the explicit connection between this generalized form and the construction of higher-dimensional low discrepancy sequences. We define the generalized version of the golden ratio, $\phi_d$, as the unique positive root of $x^{d+1} = x + 1$. That is,
For $d=1$, $\phi_1 = 1.6180339887498948482\ldots$, which is the canonical golden ratio.
For $d=2$, $\phi_2 = 1.324717957244746\ldots$, which is often called the plastic constant, and has some beautiful properties (see also here). This value was conjectured to most likely be the optimal value for a related two-dimensional problem [Hensley, 2002].
For $d=3$, $\phi_3 = 1.22074408460575947536\ldots$.
For $d>3$, although the roots of this equation do not have a closed algebraic form, we can easily obtain a numerical approximation, either through standard means such as Newton-Raphson iteration, or by noting that for the following sequence $R_d(\phi_d)$: $$t_0 = t_1 = \ldots = t_d = 1;$$ $$t_{n+d+1} = t_{n+1} + t_n, \quad \textrm{for} \; n=1,2,3,\ldots$$
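Both routes are easy to sketch in code: Newton-Raphson on $f(x) = x^{d+1} - x - 1$, and the ratio of successive terms of the generalized Fibonacci recurrence above, which converges to $\phi_d$ (function names are mine):

```python
def phi_newton(d, x=2.0, iters=60):
    """Positive root of x^(d+1) = x + 1 via Newton-Raphson, starting at x=2."""
    for _ in range(iters):
        x = x - (x**(d + 1) - x - 1.0) / ((d + 1) * x**d - 1.0)
    return x

def phi_ratio(d, n_terms=200):
    """phi_d as the limiting ratio of successive terms of the recurrence."""
    t = [1.0] * (d + 1)              # t_0 = t_1 = ... = t_d = 1
    for _ in range(n_terms):
        t.append(t[-d] + t[-d - 1])  # t_{n+d+1} = t_{n+1} + t_n
    return t[-1] / t[-2]

print(phi_newton(1))  # golden ratio, ~1.6180339887
print(phi_newton(2))  # plastic constant, ~1.3247179572
print(phi_ratio(3))   # ~1.2207440846
```

The ratio method works because $t_n$ grows like $C\,\phi_d^{\,n}$, with the contributions of the smaller-magnitude roots dying away geometrically.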
This special sequence