High precision calculations

Yade supports high and arbitrary precision Real types for performing calculations. All tests and checks pass, but the current support is still considered experimental. The backend is the Boost multiprecision library along with the corresponding Boost Math toolkit.

The supported types are the following:

type                 bits   decimal places [1]      notes
float                32     6                       hardware accelerated (not practically useful, only for testing purposes)
double               64     15                      hardware accelerated
long double          80     18                      hardware accelerated
boost float128       128    33                      wrapped by boost; may be hardware accelerated, depending on processor type
boost mpfr           Nbit   Nbit/(log(10)/log(2))   uses the external mpfr library, wrapped by boost
boost cpp_bin_float  Nbit   Nbit/(log(10)/log(2))   uses boost only, but is slower

The last two types are arbitrary precision; their number of bits Nbit (or decimal places) is specified as an argument during compilation.
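The relationship between mantissa bits and fully determined decimal places can be sketched in plain Python (the mantissa sizes below are the standard IEEE/x87 values for each type, stated here as an illustration, not taken from Yade itself):

```python
import math

def digits10(mantissa_bits):
    # decimal places fully determined by the binary representation:
    # floor((Nbit - 1) * log10(2)), i.e. Nbit / (log(10)/log(2)), rounded down
    return math.floor((mantissa_bits - 1) * math.log10(2))

print(digits10(24))   # float:       6
print(digits10(53))   # double:      15
print(digits10(64))   # long double: 18
print(digits10(113))  # float128:    33
```

This reproduces the "decimal places" column of the table above.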


See the file Real.hpp for details. All Real types pass the real type concept test from boost concepts. The support for Eigen and CGAL is done with numerical traits.

[1]The number of decimal places in this table is the number of places which are completely determined by the binary representation. One additional decimal digit (or two, if rounding down is used, which is not the case by default) is necessary to fully reconstruct the binary representation. A simple Python example demonstrates this fact: for a in range(16): print(1./pow(2.,a)) shows that every binary digit produces an “extra” …25 at the end of the decimal representation. These trailing decimal digits are not completely determined by the binary representation, because for example …37 is impossible to obtain there. More binary bits would be necessary to represent …37, whereas the …25 was produced by the last available bit.
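The footnote's observation can be checked directly; every further halving appends digits ending in 5, produced entirely by the newly used bit:

```python
# successive negative powers of two; each extra bit appends "...5"-style
# trailing digits that carry no independent decimal precision
vals = [1.0 / 2.0**a for a in range(6)]
print(vals)  # [1.0, 0.5, 0.25, 0.125, 0.0625, 0.03125]
```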


The precompiled Yade package uses the double type by default. To use a high precision type, Yade has to be compiled and installed from source code by following the regular installation instructions, with the following extra caveats:

  1. The following packages are required to be installed: python3-mpmath, libmpfr-dev, libmpfrc++-dev (the mpfr-related packages are necessary only to use the boost::multiprecision::mpfr type). These packages are already listed in the default requirements.

  2. A g++ compiler version 9.2.1 or higher is required. Note that upgrading only the compiler on an existing Linux installation is difficult and not recommended; a simpler solution is to upgrade the entire Linux installation.

  3. During cmake invocation specify:

    1. either the number of bits as REAL_PRECISION_BITS=……,
    2. or the number of requested decimal places as REAL_DECIMAL_PLACES=…… (but not both),
    3. to use MPFR, specify ENABLE_MPFR=ON.

    The arbitrary precision types (mpfr or cpp_bin_float) are used only when more than 128 bits or more than 39 decimal places are requested. In that case, if ENABLE_MPFR=OFF, the slower cpp_bin_float type is used. The difference between 39 and 33 decimal places stems from the fact that 15 bits are used for the exponent. Note: a fast quad-double implementation (Debian package libqd-dev) with 62 decimal places is in the works with the boost multiprecision team.
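As an illustration, a cmake invocation requesting 62 decimal places with the MPFR backend might look like the following (the build-directory layout and source path are assumptions, not prescribed by this document):

```shell
# run from a build directory; ../trunk is a hypothetical path to the Yade sources
cmake -DREAL_DECIMAL_PLACES=62 -DENABLE_MPFR=ON ../trunk
```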

Supported modules

During compilation several Yade modules can be enabled or disabled by passing an ENABLE_* command line argument to cmake. The following table lists which modules are currently working with high precision (those marked with “maybe” were not tested):

ENABLE_* module name      HP support   cmake default setting   notes
ENABLE_GUI                yes          ON                      native support [2]
ENABLE_CGAL               yes          ON                      native support [2]
ENABLE_VTK                yes          ON                      supported [3]
ENABLE_OPENMP             partial      ON                      partial support [4]
ENABLE_MPI                maybe        OFF                     not tested [5]
ENABLE_GTS                yes          ON                      supported [6]
ENABLE_GL2PS              yes          ON                      supported [6]
ENABLE_LINSOLV            no           OFF                     not supported [7]
ENABLE_PFVFLOW            no           OFF                     not supported [7]
ENABLE_TWOPHASEFLOW       no           OFF                     not supported [7]
ENABLE_THERMAL            no           OFF                     not supported [7]
ENABLE_LBMFLOW            yes          ON                      supported [6]
ENABLE_SPH                maybe        OFF                     not tested [8]
ENABLE_LIQMIGRATION       maybe        OFF                     not tested [8]
ENABLE_MASK_ARBITRARY     maybe        OFF                     not tested [8]
ENABLE_PROFILING          maybe        OFF                     not tested [8]
ENABLE_POTENTIAL_BLOCKS   no           OFF                     not supported [9]
ENABLE_DEFORM             maybe        OFF                     not tested [8]
ENABLE_OAR                maybe        OFF                     not tested [8]
ENABLE_FEMLIKE            yes          ON                      supported [6]
ENABLE_ASAN               yes          OFF                     supported [6]
ENABLE_MPFR               yes          OFF                     native support [2]

The unsupported modules are automatically disabled during the cmake stage.


[2](1, 2, 3) This feature is supported natively: specific numerical traits were written for Eigen and CGAL, and GUI and Python support was added.
[3]VTK is supported via a compatibility layer which converts all numbers down to the double type. See below.
[4]The OpenMPArrayAccumulator is experimentally supported for long double and float128. For the mpfr and cpp_bin_float types, the single-threaded version of the accumulator is used. File lib/base/openmp-accu.hpp needs further testing. If in doubt, compile Yade with ENABLE_OPENMP=OFF. In all other places OpenMP multithreading should work correctly.
[5]MPI support has not been tested; in particular, sending data over the network has not been tested yet.
[6](1, 2, 3, 4, 5) The module was tested: yade --test and yade --check pass and most of the examples work. However, it has not been tested extensively for all possible use cases.
[7](1, 2, 3, 4) Not supported: the code uses the external cholmod library, which supports only the double type. To make it work, a native Eigen solver for linear equations should be used.
[8](1, 2, 3, 4, 5, 6) This feature is OFF by default and its high precision support has not been tested.
[9]Potential blocks use the external library coinor for linear programming, which uses the double type only. To make this work, a linear programming routine would have to be implemented using Eigen, coinor would have to start using C++ templates, or a converter/wrapper similar to the one for the LAPACK library should be used.
[10]The module is enabled by default; yade --test and yade --check pass and most of the examples work. However, the calculations are performed at lower (double) precision: a wrapper/converter layer for the LAPACK library has been implemented. To achieve full precision, these routines should be reimplemented using Eigen.



Python has native support for high precision types through the mpmath package. Old Yade scripts that use supported modules can be immediately converted to high precision by switching to yade.minieigenHP. To do so, the following line:

from minieigen import *

has to be replaced with:

from yade.minieigenHP import *

Similarly, import minieigen has to be replaced with import yade.minieigenHP as minieigen; the old name minieigen is kept only for the sake of backward compatibility. The high precision (binary compatible) version of minieigen is then used whenever a non-double type is used as Real.
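A backward-compatible import can be sketched as below. The helper function is hypothetical (it is not part of Yade); it merely prefers the high precision bindings when they are available:

```python
def import_minieigen():
    """Prefer yade.minieigenHP, fall back to plain minieigen, else return None."""
    try:
        import yade.minieigenHP as m  # high precision build
        return m
    except ImportError:
        pass
    try:
        import minieigen as m  # plain double-precision bindings
        return m
    except ImportError:
        return None  # neither variant is installed

minieigen = import_minieigen()
```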


There may still be some parts of the Python code that were not migrated to high precision and may not work well with the mpmath module. See the debugging section for details.


Before high precision was introduced, it was assumed that Real is actually a POD double type, so memset(…), memcpy(…) and similar functions could be used on it. This was not a good approach, and some compiler #pragma commands were even used to silence the compilation warnings. To make Real work with other types, this assumption had to be removed. A single memcpy(…) still remains in file openmp-accu.hpp and will have to be removed. Such raw memory access functions are to be avoided in future development.

All remaining occurrences of double were replaced with Real, and any attempt to use the double type in the code will fail in the gitlab-CI pipeline.

Mathematical functions for all high precision types are wrapped in file MathFunctions.hpp; these are inline redirections to the respective functions of the type Yade is currently being compiled with. The code will not pass the pipeline checks if std:: is used. All functions that take a Real argument should now call the functions in the yade::math:: namespace. Functions which take only Real arguments may omit the math:: specifier and rely on ADL instead. Examples:

  1. A call to std::min(a,b) is replaced with math::min(a,b); because a or b may be non-Real, the math:: qualifier is necessary.
  2. A call to std::sqrt(a) is replaced with math::sqrt(a). Since a is always Real, sqrt(a) may also be written without math::, thanks to ADL.

If a new mathematical function is needed it has to be added in the following places:

  1. lib/high-precision/MathFunctions.hpp
  2. py/high-precision/_math.cpp, see math module for details.
  3. py/tests/testMath.py
  4. py/tests/testMathHelper.py

The tests for a new function are to be added in py/tests/testMath.py in one of these functions: oneArgMathCheck(…), twoArgMathCheck(…) or threeArgMathCheck(…). The table of expected error tolerances in self.defaultTolerances is to be supplemented as well. To determine tolerances with better confidence, it is recommended to temporarily increase the number of iterations in the test loop, while scaling the arguments a and b accordingly to avoid infinities cropping up. The tolerances for the currently implemented functions were determined using range(2000) in both loops.
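The idea of such a tolerance check, comparing a lower precision result against a higher precision reference, can be sketched in plain Python; here exact rational arithmetic stands in for the multiprecision backend (this is an illustration, not the actual testMath.py code):

```python
import math
from fractions import Fraction

def sqrt_reference(x, iters=8):
    # Newton iteration in exact rational arithmetic; the number of correct
    # digits roughly doubles per step, so 8 steps far exceed double precision
    g = Fraction(3, 2)
    for _ in range(iters):
        g = (g + Fraction(x) / g) / 2
    return g

ref = sqrt_reference(2)
err = abs(Fraction(math.sqrt(2)) - ref) / ref
assert err < Fraction(1, 10**15)  # double-precision sqrt is within tolerance
```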

String conversions

It is recommended to use the math::toString(…) and math::fromStringReal(…) conversion functions instead of boost::lexical_cast<std::string>(…). The toString function (in file RealIO.hpp) guarantees full precision during conversion; note that neither std::to_string nor boost::lexical_cast guarantees this.
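The same pitfall can be demonstrated with Python's double type: printing only the fully determined decimal places (15 for double) does not round-trip, while max_digits10 (17 for double) reconstructs the value exactly. This is a generic floating point illustration, not Yade code:

```python
x = 0.1 + 0.2                    # 0.30000000000000004, not exactly 0.3
print(float(f"{x:.15g}") == x)   # False: 15 digits lose the last bits
print(float(f"{x:.17g}") == x)   # True: 17 digits round-trip the double exactly
```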

Eigen and CGAL

Eigen and CGAL libraries have native high precision support.


Since VTK is only used to record results for later viewing in other software, such as ParaView, recording all decimal places does not seem necessary. Hence all recording commands in C++ convert the Real type down to double using static_cast<double>. This has been implemented via the classes vtkPointsReal, vtkTransformReal and vtkDoubleArrayFromReal in file VTKCompatibility.hpp. If VTK supports non-double types in the future, the interface can be updated there.


LAPACK is an external library which supports only the double type. Since it is not templatized, it cannot be used with the Real type. The current solution is to down-convert arguments to double when calling the linear equation solver (and other functions), then convert the results back to Real. This temporary solution forfeits all benefits of high precision, so in the future LAPACK is to be replaced with Eigen or other templatized libraries which support arbitrary floating point types.


High precision is still at an experimental stage of implementation, and some errors may occur during use. Not all of these errors are caught by the checks and tests. The following examples may be instructive:

  1. Trying to use const references to Vector3r members: a type of problem which results in a segmentation fault at runtime.
  2. A part of the Python code does not cooperate with mpmath: the checks and tests do not cover all lines of the Python code (yet), so more errors like this one are expected. The solution is to put the non-compliant Python functions into py/high-precision/math.py and then replace the original calls with the corresponding function in yade.math, e.g. numpy.linspace(…) is replaced with yade.math.linspace(…).
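A pure-Python linspace in the spirit of the yade.math.linspace replacement can be sketched as follows. Because it uses only generic arithmetic, it also works with mpmath numbers (the function body is an illustration, not the actual yade.math code):

```python
def linspace(a, b, num):
    # generic-arithmetic replacement for numpy.linspace(a, b, num);
    # works with any type supporting +, -, * and / (e.g. mpmath.mpf)
    if num == 1:
        return [a]
    step = (b - a) / (num - 1)
    return [a + i * step for i in range(num)]

print(linspace(0.0, 1.0, 5))  # [0.0, 0.25, 0.5, 0.75, 1.0]
```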

The most debugging flexibility is available with the long double type, because the special files ThinRealWrapper.hpp and ThinComplexWrapper.hpp were written for it. They are implemented with boost::operators, using a partially ordered field. Note that they do not provide operator++.

A couple of #defines were introduced in these two files to help debugging more difficult problems:

  1. YADE_IGNORE_IEEE_INFINITY_NAN can be used to detect all occurrences where NaN or Inf are used. It is also recommended when compiling Yade with the -Ofast flag, without -fno-associative-math -fno-finite-math-only -fsigned-zeros.
  2. YADE_WRAPPER_THROW_ON_NAN_INF_REAL and YADE_WRAPPER_THROW_ON_NAN_INF_COMPLEX can be useful for debugging when calculations go wrong for an unknown reason.

Also refer to the address sanitizer section, as it is the most useful debugging tool in many cases.


If a crash occurs inside a macro, for example YADE_CLASS_BASE_DOC_ATTRS_CTOR_PY, it is useful to know where inside the macro the problem happens. For this purpose the g++ preprocessor can be used to expand the macro, and the preprocessed code can then be compiled without it. Invoke the preprocessor with some variation of this command:

g++ -E -P core/Body.hpp -I ./ -I /usr/include/eigen3 -I /usr/include/python3.7m > /tmp/Body.hpp

Optionally run clang-format so that this file is more readable:

./scripts/clang-formatter.sh /tmp/Body.hpp

Be careful: such files tend to be large and clang-format is slow, so sometimes it is more useful to format only the last part of the file, where the macro was expanded. Then replace the macro in the original file with the expanded code and continue debugging; this time it will be revealed where inside the macro the problem occurs.