2 posts • joined 20 Sep 2006
"Error correction inadequate"
Fixing up bit flips may seem important, but is hardly sufficient for a "serious HPC center" to trust the alleged answers generated. As is well-known, there are many other sources of error: floating-point rounding, measurement error, approximate physical constants, use of discrete models in place of continuous models. I wouldn't trust any answer, regardless of the presence of logic detecting bit-flips, unless the solutions are accompanied by guaranteed bounds on the errors. Now if these new GPUs had hardware implementations of Interval Arithmetic instructions, that might be something to get excited about.
One of the common uses of floating-point arithmetic is in modelling real-world phenomena. I believe the techniques described in the article are inadequate to handle some of the issues that crop up in trying to solve these models. For example: we often have measurement error; know physical constants to varying degrees of accuracy; discretize data that is supposed to represent a continuum; use algorithms that may not be numerically stable over the entire domain; or deal with problems that are inherently stiff. The real world is nonlinear.
There is an alternative to computing with single point values (whether integers, scaled integers, rationals, or floating-point numbers as approximations to a continuum), which is to compute with sets of numbers. For reasons of efficiency, and to take advantage of hardware acceleration, we generally use intervals, each defined as the set of all numbers between a lower bound and an upper bound, written [a, b].
By using intervals we can represent measurement error, floating-point rounding error, and imprecise constants in a unified and consistent way. For some of us, one of the greatest strengths of this approach is that when you compute something, you also obtain an indication of the quality of the answer, i.e. the width of the interval.
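As a concrete illustration, here is a hypothetical minimal sketch in Python (my own toy example, not any particular library's API): an interval type needs only its endpoints plus outward rounding to guarantee containment, and its width is exactly the quality indicator described above.

```python
import math

class Interval:
    """A closed interval [lo, hi] of reals (minimal sketch).

    Endpoints are rounded outward with math.nextafter so that the
    true mathematical result is always contained, despite
    floating-point rounding in each operation."""

    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Round the lower bound down and the upper bound up.
        return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                        math.nextafter(self.hi + other.hi, math.inf))

    def __mul__(self, other):
        # The extremes of a product lie among the endpoint products.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(math.nextafter(min(p), -math.inf),
                        math.nextafter(max(p), math.inf))

    def width(self):
        return self.hi - self.lo

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# A measurement known only to within +/- 0.1:
x = Interval(2.9, 3.1)
y = x * x   # bounds remain valid; y.width() shows accumulated uncertainty
```

The same mechanism absorbs measurement error, rounding error, and imprecise constants: each is just an input interval rather than a point.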
Some problems have traditionally been considered intractable, which may no longer be the case when using interval arithmetic (IA). Consider a (large) solution space. By eliminating boxes (multidimensional intervals) where it can be proved the solution cannot lie, you can iterate towards more and more accurate approximations of the solution, subject to the precision of the arithmetic being used. As the boxes shrink, switch to higher precision if required. A classic example of this technique is an interval Newton method for finding all roots of a function.
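A full interval Newton method also contracts each box using an interval evaluation of the derivative; the simpler branch-and-prune sketch below (a hypothetical Python toy, using a naive interval extension without outward rounding) shows just the box-elimination idea: discard any box where interval evaluation of f proves no root can lie, and bisect the rest.

```python
def f_interval(lo, hi):
    """Naive interval extension of f(x) = x*x - 2 on [lo, hi].

    Treats x*x as the product [lo,hi]*[lo,hi], so the enclosure is
    valid but wider than necessary (the classic dependency effect).
    A serious implementation would also round the bounds outward."""
    cands = [lo * lo, lo * hi, hi * hi]
    return (min(cands) - 2.0, max(cands) - 2.0)

def find_roots(lo, hi, tol=1e-9):
    """Return tiny boxes that may contain roots of f in [lo, hi]."""
    roots, stack = [], [(lo, hi)]
    while stack:
        a, b = stack.pop()
        flo, fhi = f_interval(a, b)
        if flo > 0.0 or fhi < 0.0:
            continue                 # 0 not in F([a,b]): provably no root here
        if b - a < tol:
            roots.append((a, b))     # tight enclosure of a possible root
            continue
        m = 0.5 * (a + b)            # otherwise bisect and keep searching
        stack.extend([(a, m), (m, b)])
    return roots

# find_roots(-3.0, 3.0) brackets both roots of x*x - 2, near +/- sqrt(2).
```

Every discarded box comes with a proof (the interval evaluation) that it contains no root, which is what makes the final enclosures trustworthy.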
For all of this to work, you do need an implementation of interval arithmetic, one that guarantees containment of the true solution for all operator-operand combinations. In my own work, I use the implementation that is part of Sun's