Searching Critical Values for Floating-Point Programs

Programs with floating-point computations are often derived from mathematical models or designed with the semantics of the real numbers in mind. However, for a given input, the computed path with floating-point numbers may significantly differ from the path corresponding to the same computation with real numbers. As a consequence, developers do not know whether the program can actually produce very unexpected outputs. We introduce here a new constraint-based approach that searches for test cases in the part of the over-approximation where errors due to floating-point arithmetic could lead to unexpected decisions.


Introduction
In numerous applications, programs with floating-point computations are derived from mathematical models over the real numbers. However, computations on floating-point numbers differ from calculations in an idealised semantics of the real numbers [8]. For some values of the input variables, the result of a sequence of operations over the floating-point numbers can be significantly different from the result of the corresponding mathematical operations over the real numbers. As a consequence, the computed path with floating-point numbers may differ from the path corresponding to the same computation with real numbers. This can entail wrong outputs and dangerous decisions in critical systems. Identifying these values is therefore a crucial issue for programs controlling critical systems.
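To make the discrepancy concrete, here is a minimal, self-contained C sketch (the function names are ours, purely for illustration) of two classical phenomena, absorption and cancellation, where the floating-point result differs from the real-number result:

```c
/* Absorption: over the reals, big + small > big whenever small > 0.
   Over the floats, a sufficiently small operand is absorbed by a
   large one, so the comparison below can take the "wrong" branch. */
int detects_increment(float big, float small) {
    return big + small > big;   /* 0 for big = 1e8f, small = 1.0f */
}

/* Cancellation: over the reals, (x + 1) - x is always 1.  Over the
   floats, the addition may absorb the 1, so the subtraction can
   return 0 for large x. */
float plus_one_minus_x(float x) {
    return (x + 1.0f) - x;      /* 0.0f for x = 1e8f */
}
```

For instance, detects_increment(1.0e8f, 1.0f) returns 0 although the real-valued sum strictly exceeds big, so a program branching on that comparison follows a different path over F than over R.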
Abstract interpretation based error analysis [3] of finite precision implementations computes an over-approximation of the errors due to floating-point operations. The point is that state-of-the-art tools [6] may generate numerous false alarms. In [16], we introduced a hybrid approach combining abstract interpretation and constraint programming techniques that reduces the number of false alarms. However, the remaining false alarms are problematic since we cannot know whether the predicted unstable behaviors will actually occur with real data.
More formally, consider a program P, a set of intervals I defining the expected input values of P, and an output variable x of P on which critical decisions depend, e.g., activating an anti-lock braking system. Let [x_R, x̄_R] be a sharp approximation over the set of real numbers R of the domain of variable x for any input of P. [x_F, x̄_F] stands for the domain of variable x in the over-approximation computed over the set of floating-point numbers F for input values of I. The range [x_R, x̄_R] can be determined by calculation or from physical limits. It includes a small tolerance to take into account approximation errors, e.g., measurement, statistical, or even floating-point arithmetic errors. This tolerance, specified by the user, defines an acceptable loss of accuracy between the value computed over the floating-point numbers and the value calculated over the real numbers. Values outside the interval [x_R, x̄_R] can lead the program to misbehave, e.g., take a wrong branch in the control flow.
The problem we address in this paper consists of verifying whether there exist critical values in I for which the program can actually produce a result value of x inside the suspicious intervals [x_F, x_R) and (x̄_R, x̄_F]. To handle this problem, we introduce a new constraint-based approach that searches for test cases hitting the suspicious intervals in programs with floating-point computations. In other words, our framework reduces this test case generation problem to a constraint-solving problem over the floating-point numbers where the domain of a critical decision variable has been shrunk to a suspicious interval. A constraint solver, based on filtering techniques designed to handle constraints over floating-point numbers, is used to search for values of the input data. Preliminary results of experiments on small programs with classical floating-point errors are encouraging.
CPBPV FP, the system we developed, outperforms generate-and-test methods for programs with more than one input variable. Moreover, its search strategies can prove in many cases that no critical value exists.

Motivating example
Before going into the details, we illustrate our approach on a small example. Assume we want to compute the area of a triangle from the lengths of its sides a, b, and c with Heron's formula: A = √(s(s − a)(s − b)(s − c)), where s = (a + b + c)/2. The C program in Fig. 1 implements this formula for the case where a is the longest side of the triangle.
The test on line 5 ensures that the given lengths form a valid triangle. Now, suppose that the input domains are a ∈ [5, 10] and b, c ∈ [0, 5]. Over the real numbers, s is greater than any of the sides of the triangle and squared_area cannot be negative. Moreover, squared_area cannot be greater than 156.25 over the real numbers, since the triangle area is maximized for a right triangle with legs b = c = 5, whose area is 12.5 and thus whose squared area is 156.25. Over the floating-point numbers, however, rounding errors can drive squared_area outside this range. CPBPV FP could also prove the absence of test cases for a tolerance ε = 10^−3 with the suspicious interval squared_area > 156.25 + ε.
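Since Fig. 1 is not reproduced here, the following C sketch is our reconstruction of such a heron program; the function name and the exact shape of the line-5 validity test are assumptions:

```c
/* Reconstruction (not the authors' exact code) of the heron program:
   squared area of a triangle via Heron's formula, assuming a is the
   longest side, with a in [5,10] and b, c in [0,5]. */
float heron_squared_area(float a, float b, float c) {
    float squared_area = 0.0f;
    if (a <= b + c) {                     /* validity test (line 5) */
        float s = (a + b + c) / 2.0f;
        squared_area = s * (s - a) * (s - b) * (s - c);
    }
    return squared_area;
}
```

Over the reals the returned value lies in [0, 156.25]; CPBPV FP searches for floating-point inputs that drive it into a suspicious interval outside this range.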

Framework for generating test cases
This section details the framework we designed to generate test cases reaching suspicious intervals for a variable x in a program P with floating-point computations.
The kernel of our framework is FPCS [14,13,1,12], a solver for constraints over the floating-point numbers; that is, a solver which combines interval propagation with an explicit search for satisfiable floating-point assignments. FPCS is used inside the CPBPV bounded model checking framework [5]. CPBPV FP is the adaptation of CPBPV for generating test cases that hit the suspicious intervals in programs with floating-point computations.
The inputs of CPBPV FP are P, an annotated program; ct, a critical test; and a suspicious interval for x. Annotations of P specify the ranges of the input variables of P as well as the suspicious interval for x. The latter is posted as an assertion just before the critical test ct.
To compute the suspicious interval for x, we approximate the domain of x over the real numbers.

CPBPV FP first performs some pre-processing: P is transformed into DSA-like form. If the program contains loops, CPBPV FP unfolds them k times, where k is a user-specified constant. Loops are handled in CPBPV and rAiCp with standard unfolding and abstraction techniques, so there are no more loops in the program when the constraint generation process starts. Standard slicing operations are also performed to reduce the size of the control flow graph.
In a second step, CPBPV FP searches for executable paths reaching ct. For each of these paths, the collected constraints are sent to FPCS, which solves the corresponding constraint systems over the floating-point numbers. FPCS returns either a satisfiable instantiation of the input variables of P, or ∅. As said before, FPCS [14,13,1,12] is a constraint solver designed to solve a set of constraints over floating-point numbers without losing any solution. It uses 2B-consistency along with projection functions adapted to floating-point arithmetic [13,1] to filter constraints over the floating-point numbers. FPCS also provides stronger consistencies, like kB-consistencies, which allow better filtering results.
The search for solutions in constraint systems over the floating-point numbers is trickier than the standard bisection-based search in constraint systems over intervals of real numbers.

CPBPV FP ends up with one of the following results:
- a test case proving that P can produce a suspicious value for x;
- a proof that no test case reaching the suspicious interval can be generated: this is the case if the loops in P cannot be unfolded beyond the bound k (see [5] for details on bounded unfolding);
- an inconclusive answer: no test case could be generated but the loops in P could be unfolded beyond the bound k. In other words, the process is incomplete and we cannot conclude whether P may produce a suspicious value.

Preliminary experiments
We experimented with CPBPV FP on six small programs with cancellation and absorption phenomena, two very common pitfalls of floating-point arithmetic.
The benchmarks are listed in the first two columns of Table 1. The first two benchmarks concern the heron program and the optimized heron program with the suspicious intervals described in the motivating example.
Program slope (see Fig. 2) approximates the derivative of the square function f(x) = x^2 at a given point x0. More precisely, it computes the slope of a nearby secant line with a finite difference quotient: (f(x0 + h) − f(x0))/h. Over the real numbers, the smaller h is, the more accurate the formula is. For this function, the derivative is given by f′(x) = 2x, which yields exactly 26 for x = 13. Over the floats, Fluctuat [6] approximates the return value of the slope program to the interval [0, 25943] when h ∈ [10^−6, 10^−3] and x0 = 13.
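Fig. 2 is not reproduced here; a plausible reconstruction of such a slope program (the names are ours) is:

```c
/* Reconstruction (not the authors' exact code) of the slope program:
   finite-difference quotient for f(x) = x*x at x0.  For small h the
   numerator suffers from cancellation: the two squares are nearly
   equal and their rounded difference loses most significant digits,
   which the division by the tiny h then amplifies. */
float slope(float x0, float h) {
    return ((x0 + h) * (x0 + h) - x0 * x0) / h;
}
```

With x0 = 13.0f and h = 0.5f every operation is exact and the quotient 26.5 is close to the derivative; with h = 1e-6f the rounded numerator drives the quotient far from the exact value 26.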
simple interpolator and simple square are two benchmarks extracted from [9]. The first computes an interpolator that is affine by sub-intervals, while the second is a rewrite of a square root function used in an industrial context. All experiments were done on an Intel Core 2 Duo at 2.8 GHz with 4 GB of memory running 64-bit Linux. We assume C programs with IEEE 754 compliant floating-point arithmetic, intended to be compiled with GCC without any optimization option and run on an x86_64 architecture managed by a 64-bit Linux operating system. The rounding mode was round-to-nearest, i.e., ties round to the nearest even digit in the required position.

Strategies and solvers
We ran CPBPV FP with different search strategies for the FPCS solver, including the fpc and fpc3s strategies reported in Table 1. For all these strategies, we first select the variables with the largest domain and perform a 3B-consistency filtering step before starting the splitting process.
We compared CPBPV FP with CBMC [4] and CDFL [7], two state-of-the-art software bounded model checkers based on SAT solvers that are able to deal with floating-point computations. We also ran a simple generate & test strategy: the program is run with randomly generated input values and we test whether the result is inside the suspicious interval. The process is stopped as soon as a test case hitting the suspicious interval is found.

Table 1 reports the results for the different strategies and solvers. Since strategy fpc3s is incomplete, we indicate whether a test case was found or not. Column s? specifies whether a test case actually exists. Note that the computation times of CBMC and CDFL include the pre-processing time for generating the constraint systems; the pre-processing time required by CPBPV is around 0.6 s, but CPBPV is a non-optimised system written in Java.

Strategy fpc is definitely the most efficient and most robust one on all these benchmarks. Note that CBMC and CDFL could handle neither the initial nor the optimized version of program heron within a timeout of 20 minutes, whereas FPCS found solutions in a reasonable time.
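For reference, the generate & test baseline can be sketched as follows; the placeholder program and the hard-coded input ranges are ours, and the real benchmark programs differ:

```c
#include <stdlib.h>

/* Placeholder standing for the program P under test; the actual
   benchmarks (heron, slope, ...) are more involved. */
static float program_under_test(float a, float b, float c) {
    return a + b + c;
}

/* Draw a pseudo-random float in [lo, hi]. */
static float rand_in(float lo, float hi) {
    return lo + (hi - lo) * ((float)rand() / (float)RAND_MAX);
}

/* Generate & test: run P on random inputs drawn from their declared
   ranges and stop as soon as the output hits the suspicious
   interval [lo, hi].  Returns 1 if a test case was found, 0 if the
   trial budget was exhausted. */
int generate_and_test(float lo, float hi, int max_trials) {
    for (int i = 0; i < max_trials; i++) {
        float a = rand_in(5.0f, 10.0f);  /* heron-style ranges */
        float b = rand_in(0.0f, 5.0f);
        float c = rand_in(0.0f, 5.0f);
        float out = program_under_test(a, b, c);
        if (out >= lo && out <= hi)
            return 1;                    /* test case found */
    }
    return 0;                            /* no hit within budget */
}
```

Note the intrinsic limitation of this baseline: when no test case exists, the loop can only exhaust its budget; unlike the constraint-based search, it can never prove absence.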

These preliminary results are very encouraging: they show that CPBPV FP is effective for generating test cases for suspicious values outside the range of acceptable values on small programs with classical floating-point errors. More importantly, a strong point of CPBPV FP is definitely its refutation capabilities.
Of course, experiments on more significant benchmarks and on real applications are still necessary to evaluate the full capabilities and limits of CPBPV FP.

Related and further work
The goals of software bounded model checkers based on SAT solvers are close to those of our approach. The point is that SAT solvers tend to be inefficient on these problems due to the size of the domains of floating-point variables and the cost of bit-vector operations [7]: SAT solvers often use bitwise representations of numerical operations, which may be very expensive (e.g., thousands of variables for one equation in CDFL). CDFL [7] tries to address this issue by embedding an abstract domain in the conflict-driven clause-learning algorithm of a SAT solver. Brain et al. [11,2] have recently introduced a bit-precise decision procedure for the theory of floating-point arithmetic. The core of their approach is a generalization of the conflict-driven clause-learning algorithm used in modern SAT solvers. Their technique is significantly faster than a bit-vector encoding approach. Note that the constraint programming techniques used in our approach are better suited to generating several test cases than these SAT-based approaches. The advantage of CP is that it provides a uniform framework for representing and handling integers, real numbers, and floats. A new abstract-interpretation-based robustness analysis of finite precision implementations has recently been proposed [9] for sound rounding error propagation along a given path in the presence of unstable tests.
A close connection between our floating-point solvers and the two above-mentioned approaches is certainly worth exploring.
A second direction for further work concerns the integration of our constraint-based approach with the new abstract-interpretation-based robustness analysis of finite precision implementations for sound rounding error propagation along a given path in the presence of unstable tests.