T.R | Title | User | Personal Name | Date | Lines |
---|---|---|---|---|---|
957.1 | They don't | HIBOB::SIMMONS | | Wed Oct 26 1988 15:39 | 11 |
| Compilers don't break functions down into series, etc. What happens is that canned library
routines are used. By the way, the series you gave is a power series.
Power series are important theoretically but of little use for function
approximation.
The canned algorithms are often based on approximating polynomials
obtained in various ways, rational approximations and occasionally
continued fractions. Pocket calculators use even more arcane methods
but the ideas involved are part of the field of numerical analysis.
Chuck
|
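As a sketch of the approximating-polynomial idea in .1: library routines typically evaluate a fitted polynomial with Horner's rule. The degree-4 Taylor coefficients below are illustrative stand-ins only; a production routine would use minimax coefficients fitted to its reduced argument range.

```python
import math

def horner(coeffs, x):
    """Evaluate a polynomial by Horner's rule.
    coeffs are listed highest-degree first."""
    acc = 0.0
    for c in coeffs:
        acc = acc * x + c
    return acc

# Illustrative only: degree-4 Taylor coefficients for e^x about 0.
# A real library would use minimax (or rational) coefficients instead.
COEFFS = [1.0 / 24, 1.0 / 6, 0.5, 1.0, 1.0]

approx = horner(COEFFS, 0.1)   # close to math.exp(0.1) on a small interval
```

On a small interval the truncation error of even this crude polynomial is tiny, which is why argument reduction (discussed in the later replies) matters so much.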
957.2 | | CLT::GILBERT | Multiple inheritence happens | Wed Oct 26 1988 15:53 | 3 |
| The VAX RTL documentation of many of the MTH$ routines gives a capsule
description of some of the algorithms. They suffice for an understanding
of what's going on.
|
957.3 | Book recommendation. | ERLTC::COOPER | Topher Cooper | Wed Oct 26 1988 16:14 | 7 |
| I recommend "Numerical Recipes" by Press, Flannery, Teukolsky, and
Vetterling. It's a very practical book, which will tell you a lot
about what you want to know without overwhelming you with theory
or unnecessary details.
Topher
|
957.4 | something like this, anyway | AITG::DERAMO | Daniel V. {AITG,ZFC}:: D'Eramo | Wed Oct 26 1988 22:11 | 31 |
| For example, a way to compute e^x on a machine with single
and double precision binary floating point. You are
given the double precision value x. Now
e^x = 2^(x / ln 2) = 2^(cx),   where c = 1 / ln 2
The constant c is known in advance to arbitrary precision.
Now you compute cx as the sum of an integral part and
a fractional part, and you do it very carefully; perhaps
in quadruple precision, or by breaking up cx into
(c1 + c2)(x1 + x2) where each of x1 and x2 has "half
the bits" [but both are double precision]. Take the
four cross products c1 x1, c1 x2, c2 x1, c2 x2, and
carefully add them so as to get an integer plus a fraction
with as much precision preserved as possible.
Now you have
e^x = 2^(cx) = 2^(n+f),   where n is an integer and |f| <= 1/2.
The 2^n part is easy because you just plug the exponent
into the binary floating point number. Then you use
a polynomial approximation for 2^f that is optimized
[to minimize error or maximize speed, say] for the small
range that f takes; i.e., not a valid power series for
all x but close enough for the "reduced argument" f.
Dan
|
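The reduction in .4 can be sketched as follows. This is a toy version: it does the multiplication by c in ordinary double precision (skipping the careful split-precision cross products described above), and it borrows the library exponential as a stand-in for the tuned polynomial for 2^f.

```python
import math

def exp_via_pow2(x):
    """Toy e^x via e^x = 2^(cx) = 2^(n+f), |f| <= 1/2."""
    c = 1.0 / math.log(2.0)      # the constant c = 1 / ln 2
    cx = c * x                   # a real routine does this in extra precision
    n = math.floor(cx + 0.5)     # nearest integer part
    f = cx - n                   # reduced argument, |f| <= 1/2
    # stand-in for the optimized polynomial approximation of 2^f
    pow2_f = math.exp(f * math.log(2.0))
    return math.ldexp(pow2_f, n)  # the 2^n part: just adjust the exponent
```

`math.ldexp` is the "plug the exponent into the binary floating point number" step: multiplying by 2^n is exact.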
957.5 | | CLT::GILBERT | Multiple inheritence happens | Wed Oct 26 1988 23:39 | 21 |
| In the VAX/VMS RTL, the range of 'f' is further reduced, a la:
e^x = 2^(cx) = 2^(n+f),     n an integer, |f| <= 1/2,
    = 2^n * 2^(k/8) * 2^g,  where |k| <= 8, |g| <= 1/8, f = k/8 + g,
    = 2^n * ( 2^(k/8) + 2^(k/8) * P(g) ),  where P(g) is a polynomial
                                           approximation of 2^g - 1,
    = 2^n * ( 2^(k/8)_high + ( 2^(k/8)_low + 2^(k/8)_high * P(g) ))
The values of 2^(k/8) are precomputed, and are in a table indexed by k.
For precision, these are stored as high and low parts -- for F-floating,
|z_low| <= 2**(-25) * |z_high|. For accuracy, the operations are done
as parenthesized in the last equation above.
|
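A sketch of the table-based scheme in .5, in Python rather than the RTL's actual code. `math.expm1` stands in for the polynomial P(g) ~ 2^g - 1, and the table is stored in ordinary double precision rather than split high/low parts. Rounding f to the nearest eighth gives |k| <= 4 and |g| <= 1/16, within the bounds quoted above.

```python
import math

# Precomputed table of 2^(k/8), indexed by k.
TABLE = {k: 2.0 ** (k / 8.0) for k in range(-4, 5)}

def exp2_reduced(f):
    """2^f for |f| <= 1/2 via f = k/8 + g."""
    k = round(f * 8)          # nearest eighth, so |k| <= 4
    g = f - k / 8.0           # remaining argument, |g| <= 1/16
    # stand-in for the RTL's polynomial approximation of 2^g - 1
    P = math.expm1(g * math.log(2.0))
    t = TABLE[k]
    return t + t * P          # 2^(k/8) * (1 + P(g)) = 2^(k/8) * 2^g
```

The point of the further reduction is that P only has to be accurate on a sixteenth-width interval, so a much shorter polynomial suffices.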
957.6 | sounds right anyhow | AKQJ10::YARBROUGH | I prefer Pi | Thu Oct 27 1988 20:09 | 9 |
| > An associate mentioned the Quadric series representations. Am I even
>in the ball park?
Close - I think he or she was referring to the CORDIC approximation scheme,
which is used in pocket calculators. As I recall, there was an extensive
description of this scheme in the early literature describing the HP
calculators, several years ago.
Lynn
|
957.7 | Cordic - see also the Monthly | HIBOB::SIMMONS | | Thu Oct 27 1988 21:10 | 4 |
| Re. .6 For a discussion of CORDIC see also the Mathematical Monthly.
There was an article with examples several years ago.
Chuck
|
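A minimal CORDIC sketch in rotation mode, following the textbook shift-and-add formulation rather than any particular calculator's implementation (real hardware uses fixed-point arithmetic and genuine shifts; the floating-point multiplies by 2^-i below only mimic that).

```python
import math

def cordic_sincos(theta, n=32):
    """Rotation-mode CORDIC: returns (cos theta, sin theta)
    for |theta| within the convergence range (~1.74 rad)."""
    # Precomputed rotation angles atan(2^-i) and the CORDIC gain K.
    angles = [math.atan(2.0 ** -i) for i in range(n)]
    K = 1.0
    for i in range(n):
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    # Start at (K, 0) so the accumulated gain cancels.
    x, y, z = K, 0.0, theta
    for i in range(n):
        d = 1.0 if z >= 0 else -1.0        # rotate toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x, y
```

Each iteration uses only an add/subtract and a scaling by 2^-i, which is why the scheme suits calculator hardware with no fast multiplier.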