
The ProVal team was disbanded at the end of August 2012 and reborn as the new team Toccata.
These pages are no longer maintained; please follow the Toccata link above for up-to-date information about our team.

Proofs for Floating-Point Arithmetic and Numerical Analysis

Overview

Orientation of three points

The floating-point arithmetic theme aims at proving programs that perform computations on floating-point numbers, such as the single- or double-precision numbers defined by the IEEE-754 standard.

Floating-point arithmetic is primarily used as an efficient way of approximating arithmetic on real numbers. Due to its limited precision, floating-point computations may introduce inaccuracies in the numerical results. The picture on the right is an example of that. It shows the orientation of three points represented by the orange arrow. The extremal vertices are fixed on the diagonal, while the middle point can move freely. The color (blue/green/red) at the position of the middle point shows the orientation. For instance, when this point is above the diagonal, the three points are oriented counter-clockwise, hence the color blue.

The orientation of the three points can be obtained by computing the determinant of a 2×2 matrix and looking at the sign of the result (5 subtractions, 2 multiplications). Zero means the three points are aligned; if positive, they are oriented one way; if negative, they are oriented the other way. While the computed sign seems correct overall, the lower part of the picture shows that the inaccuracies introduced by floating-point computations will produce an incorrect orientation for some positions of the middle point, in particular when the three points are almost aligned.
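The sign test above can be sketched in a few lines of Python (whose floats are IEEE-754 double precision). This is only an illustration of the phenomenon, not the team's verified code; the function name orient2d and the sample points are ours.

```python
def orient2d(ax, ay, bx, by, cx, cy):
    """Sign of the 2x2 determinant built from the three points
    (5 subtractions, 2 multiplications).
    > 0: counter-clockwise, < 0: clockwise, 0: aligned."""
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)

# Exact arithmetic on these decimal values would report "aligned",
# but neither 0.1 + 0.2 nor 0.3 is exactly representable in binary,
# so the computed determinant is a tiny nonzero value.
a = (0.0, 0.0)
m = (0.1 + 0.2, 0.3)   # middle point, "on" the line y = x in decimal
b = (12.0, 12.0)
print(orient2d(a[0], a[1], m[0], m[1], b[0], b[1]))  # nonzero
```

For points that are far from aligned, the round-off error is negligible compared to the determinant itself and the sign is correct; the failures concentrate near the "aligned" region, exactly as the picture shows.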

See also: David Goldberg, “What Every Computer Scientist Should Know About Floating-Point Arithmetic.”


People

Sylvie Boldo, Guillaume Melquiond, Tuyen Nguyen, and partly Claude Marché and Jean-Christophe Filliâtre.

Thematics

Floating-Point Arithmetic

Emulation of FMA

We are interested in verifying algorithms that perform floating-point computations to obtain intricate results. They are usually short, but their purpose is not obvious from their code. Indeed, they are state-of-the-art devices that have been tuned beyond recognition by reordering operations, using function approximations, relying on exclusion principles, and so on, until a balance between accuracy and performance was reached. Testing is heavily used while developing such devices, but the size of floating-point domains makes exhaustive testing infeasible. That is why we are investigating methods and developing tools for formally verifying these algorithms and ensuring that all their behaviors have been accounted for.

We have developed a generic formalization of floating-point arithmetic that is suitable both for verifying numerical programs and for computing inside a proof assistant. We have also formally proved the correctness of several algorithms that are often encountered as basic blocks: accurate computation of polynomial discriminants, error-free transformations, emulation of the fused multiply-add (FMA) operator, and so on. The picture on the right presents the algorithm for emulating the FMA.
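A classic example of an error-free transformation is Knuth's 2Sum algorithm: six floating-point operations that compute both the rounded sum of two floats and the exact rounding error of that sum. The sketch below is the textbook version in Python (IEEE-754 binary64), not the team's formal development:

```python
def two_sum(a, b):
    """Return (s, t) with s = fl(a + b) and s + t = a + b exactly.
    Works for any two binary floating-point numbers, with no
    assumption on their relative magnitudes (Knuth's 2Sum)."""
    s = a + b
    b_virtual = s - a        # the part of b that made it into s
    a_virtual = s - b_virtual
    b_err = b - b_virtual    # what was lost from b
    a_err = a - a_virtual    # what was lost from a
    t = a_err + b_err        # exact rounding error of a + b
    return s, t

# 1e-16 is below half an ulp of 1.0, so it vanishes in the rounded
# sum; 2Sum recovers it exactly in the error term.
s, t = two_sum(1.0, 1e-16)
print(s, t)  # 1.0 1e-16
```

Such transformations are the basic blocks mentioned above: chaining them yields compensated summation, double-double arithmetic, and the FMA emulation algorithm.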

While we have been successful in manually writing formal proofs, it is an approach that does not scale well: verifying lengthy floating-point algorithms soon becomes tedious and time-consuming, even for experts in formal systems. So we are also developing automated tools for verifying arithmetic facts (domains, rounding errors, and so on) and generating their formal proof.

Due to the mathematical and numerical expertise that went into the numerical devices we are interested in, only part of them will ever be in reach of purely automated tools. Manual intervention is still needed to handle the most intricate parts of the algorithms. That is why we are focusing on combining interactive proving and automated tools. Starting from an annotated program, proof obligations ensuring its correctness are generated. Hopefully, most of these obligations are discharged automatically, possibly by tools dedicated to floating-point arithmetic. The remaining few are then handled manually inside a proof assistant; they do not have to be solved in their entirety, but only simplified enough for the automated tools to take over.

Our analyses also take into account semantic changes of floating-point algorithms due to peculiarities of the computing environment, e.g. architectures and compilers. For instance, an instruction set may provide extended-precision registers for storing floating-point values, or a fused multiply-add operator. A compiler may then take advantage of these features and optimize the code accordingly. As a consequence, the precision of floating-point operations at execution time no longer matches the one prescribed by the original algorithm, causing discrepancies and behavior changes.


Numerical Analysis Programs

We aim at developing and applying methods to formally prove the soundness of numerical-analysis programs. This is done by bounding two kinds of errors: the rounding error and the method error. The rounding error accounts for the accumulation of the round-off errors introduced by the floating-point computations; we had to develop new, more involved methods, as the usual techniques gave too large an error bound. The method error accounts for the discretization of the partial differential equation on a grid; the mathematical proofs from the literature turned out to be insufficient, and we had to fill in some gaps to complete the formal proof. The main difficulty was the definition of the big-O notation, which, compared to its imprecise mathematical usage, must be made fully precise and uniform on an interval.
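The accumulation of round-off errors mentioned above is easy to witness: even adding the same small constant repeatedly drifts away from the exact result, while a correctly rounded summation does not. A small Python illustration (not the verified programs studied by the team):

```python
import math

def naive_sum(values):
    """Left-to-right summation: each addition rounds,
    and the individual round-off errors accumulate."""
    s = 0.0
    for x in values:
        s += x
    return s

data = [0.1] * 10   # 0.1 is not exactly representable in binary64
print(naive_sum(data) == 1.0)    # False: accumulated round-off
print(math.fsum(data) == 1.0)    # True: correctly rounded sum
```

Bounding such drift over the thousands of operations of a numerical scheme is precisely what the rounding-error part of the proof establishes.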

Oscillating rope
Numerical scheme

We have worked on the 1D acoustic wave equation, which describes the oscillation of a rope attached at both ends, as shown in the left picture. This partial differential equation is approximated by a second-order centered finite difference scheme, also known as the “three-point scheme.” More precisely, as explained in the right picture, the value p_j^k (red dot) depends on p_{j-1}^{k-1}, p_j^{k-1}, p_{j+1}^{k-1}, and p_j^{k-2} (blue dots).
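A minimal Python sketch of one time step of such a three-point scheme, assuming the standard update p_j^k = 2 p_j^{k-1} - p_j^{k-2} + r² (p_{j+1}^{k-1} - 2 p_j^{k-1} + p_{j-1}^{k-1}), where r = c·Δt/Δx is the Courant number; the function and variable names are ours, and this illustrative code carries none of the formal guarantees discussed above:

```python
def step(prev2, prev1, r2):
    """Advance the three-point scheme one time step.
    prev2: rope positions at time k-2, prev1: positions at time k-1,
    r2: squared Courant number (c * dt / dx) ** 2.
    The two ends of the rope stay fixed at zero."""
    n = len(prev1)
    cur = [0.0] * n
    for j in range(1, n - 1):
        cur[j] = (2.0 * prev1[j] - prev2[j]
                  + r2 * (prev1[j + 1] - 2.0 * prev1[j] + prev1[j - 1]))
    return cur

# An initial bump in the middle of the rope splits and propagates
# toward both fixed ends.
p0 = [0.0, 0.0, 0.0, 0.0, 0.0]
p1 = [0.0, 0.0, 1.0, 0.0, 0.0]
print(step(p0, p1, 1.0))  # [0.0, 1.0, 0.0, 1.0, 0.0]
```

Each new value reads exactly the four neighbors of the picture: three points at time k-1 and one at time k-2, which is why both previous time levels must be kept in memory.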


Software and Proof Developments

Related Grants

Related Publications

[1] Florent de Dinechin, Christoph Lauter, and Guillaume Melquiond. Certifying the floating-point implementation of an elementary function using Gappa. IEEE Transactions on Computers, 60(2):242-253, 2011. [ bib | DOI | full paper on HAL ]
[2] Sylvie Boldo. Floats & Ropes: a case study for formal numerical program verification. In 36th International Colloquium on Automata, Languages and Programming, volume 5556 of Lecture Notes in Computer Science - ARCoSS, pages 91-102, Rhodes, Greece, July 2009. Springer. [ bib ]
[3] Sylvie Boldo and Guillaume Melquiond. Emulation of FMA and Correctly-Rounded Sums: Proved Algorithms Using Rounding to Odd. IEEE Transactions on Computers, 57(4):462-471, 2008. [ bib | full paper on HAL | PDF ]
[4] Sylvie Boldo and Thi Minh Tuyen Nguyen. Hardware-independent proofs of numerical programs. In César Muñoz, editor, Proceedings of the Second NASA Formal Methods Symposium, NASA Conference Publication, pages 14-23, Washington D.C., USA, April 2010. [ bib | full paper on HAL | PDF ]
[5] Sylvie Boldo, Jean-Christophe Filliâtre, and Guillaume Melquiond. Combining Coq and Gappa for Certifying Floating-Point Programs. In 16th Symposium on the Integration of Symbolic Computation and Mechanised Reasoning, volume 5625 of Lecture Notes in Artificial Intelligence, pages 59-74, Grand Bend, Canada, July 2009. Springer. [ bib ]