
Conference rusure::math

Title:Mathematics at DEC
Moderator:RUSURE::EDP
Created:Mon Feb 03 1986
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:2083
Total number of notes:14613

957.0. "?Function representation through Power series?" by MURDOC::HOUGHTON () Wed Oct 26 1988 13:46

    Being a non-expert in math, and in need of direction, I thought it might
be fruitful to drop a question I have about representing functions through
power series.  Specifically, I'm interested in how compilers break down high-
level language statements involving transcendental functions into a finite,
convergent series.  For example:

                          2   3
         x               x   x
        e    =   1 + x + - + - + ...        is the Power Series representation
                         2!  3!             of e^x.

    How are appropriate series representations chosen for computational
use?   How is this done while taking convergence, accuracy, and
computational limits into account?
    An associate mentioned the Quadric series representations.  Am I even
in the ball park?
    Doing a DIR/TITLE="series" turned up some interesting functions but no
algorithms.
    Any explanations, suggestions, or pointers to references would be useful.

Thanks in Advance,
Chris Houghton
957.1. "They don't" by HIBOB::SIMMONS () Wed Oct 26 1988 15:39 -- 11 lines
    Compilers don't break them down at all.  What happens is that canned
    library routines are used.  By the way, the series you gave is indeed a
    power series.  Power series are important theoretically but of little
    use for function approximation.
    
    The canned algorithms are often based on approximating polynomials
    obtained in various ways, rational approximations, and occasionally
    continued fractions.  Pocket calculators use even more arcane methods,
    but the ideas involved are all part of the field of numerical analysis.
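    
    To make the contrast concrete, here is a minimal C sketch (not any
    actual library source): the first routine sums the power series term
    by term until the terms become negligible, while the second evaluates
    a fixed low-degree polynomial by Horner's rule, which is the shape a
    canned routine takes.  The coefficients below are just truncated
    Taylor coefficients for illustration; a real routine would use minimax
    (Remez) coefficients fit to its reduced interval.
    
    #include <math.h>
    #include <stdio.h>
    
    /* Sum the power series of e^x until the next term is negligible. */
    double exp_series(double x)
    {
        double term = 1.0, sum = 1.0;
        for (int k = 1; fabs(term) > 1e-17 * fabs(sum); k++) {
            term *= x / k;
            sum += term;
        }
        return sum;
    }
    
    /* Fixed degree-6 polynomial for e^x, usable only for small |x|
       (say |x| <= 1/2), evaluated by Horner's rule: a handful of
       multiplies and adds, no loop and no convergence test. */
    double exp_poly_small(double x)
    {
        return 1.0 + x * (1.0 + x * (1.0 / 2 + x * (1.0 / 6 +
               x * (1.0 / 24 + x * (1.0 / 120 + x * (1.0 / 720))))));
    }
    
    int main(void)
    {
        double x = 0.3;
        printf("series: %.15f  polynomial: %.15f  libm: %.15f\n",
               exp_series(x), exp_poly_small(x), exp(x));
        return 0;
    }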
                            
    Chuck
957.2. by CLT::GILBERT (Multiple inheritence happens) Wed Oct 26 1988 15:53 -- 3 lines
    The VAX RTL documentation of many of the MTH$ routines gives capsule
    descriptions of some of the algorithms.  These suffice for an
    understanding of what's going on.
957.3. "Book recommendation." by ERLTC::COOPER (Topher Cooper) Wed Oct 26 1988 16:14 -- 7 lines
    I recommend "Numerical Recipes" by Press, Flannery, Teukolsky, and
    Vetterling.  It's a very practical book which will tell you a lot
    about what you want to know without overwhelming you with theory
    or unnecessary details.
    
    					Topher
    
957.4. "something like this, anyway" by AITG::DERAMO (Daniel V. {AITG,ZFC}:: D'Eramo) Wed Oct 26 1988 22:11 -- 31 lines
     For example, here is a way to compute e^x on a machine with
     single and double precision binary floating point.  You are
     given the double precision value x.  Now
     
           x    (x / ln 2)    cx
          e  = 2           = 2     where  c = 1 / ln 2
     
     The constant c is known in advance to arbitrary precision.
     Now you compute cx as the sum of an integral part and
     a fractional part, and you do it very carefully; perhaps
     in quadruple precision, or by breaking up cx into
     (c1 + c2)(x1 + x2) where each of x1 and x2 has "half
     the bits" [but both are double precision].  Take the
     four cross products c1 x1, c1 x2, c2 x1, c2 x2, and
     carefully add them so as to get an integer plus a fraction
     with as much precision preserved as possible.
     
     Now you have
     
           x    cx    n+f
          e  = 2   = 2      n is an integer, |f| <= 1/2.
     
          n
     The 2  part is easy because you just plug the exponent
     into the binary floating point number.  Then you use
     a polynomial approximation for 2^f that is optimized
     [to minimize error or maximize speed, say] for the small
     range that f takes; i.e., not a valid power series for
     all x but close enough for the "reduced argument" f.
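     
     A rough sketch of that reduction in C, assuming nothing about any
     particular RTL: the careful high/low split of cx is omitted, and the
     polynomial for the reduced argument is just a short Taylor polynomial
     rather than the optimized one a production routine would carry.
     
     #include <math.h>
     #include <stdio.h>
     
     static double exp_sketch(double x)
     {
         const double c   = 1.4426950408889634;  /* 1 / ln 2 */
         const double ln2 = 0.6931471805599453;  /* ln 2 */
     
         double cx = c * x;
         double n  = floor(cx + 0.5);     /* nearest integer, so |f| <= 1/2 */
         double f  = cx - n;              /* reduced argument */
     
         /* 2^f = e^(f ln 2); a short polynomial is enough for a sketch */
         double t = f * ln2;
         double p = 1.0 + t * (1.0 + t * (1.0 / 2 + t * (1.0 / 6 +
                    t * (1.0 / 24 + t * (1.0 / 120 + t * (1.0 / 720 +
                    t * (1.0 / 5040)))))));
     
         return ldexp(p, (int)n);         /* scale by 2^n via the exponent */
     }
     
     int main(void)
     {
         for (double x = -5.0; x <= 5.0; x += 2.5)
             printf("x = %5.2f   sketch = %.12g   libm = %.12g\n",
                    x, exp_sketch(x), exp(x));
         return 0;
     }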
     
     Dan
957.5. by CLT::GILBERT (Multiple inheritence happens) Wed Oct 26 1988 23:39 -- 21 lines
    In the VAX/VMS RTL, the range of 'f' is further reduced, a la:
    
           x    cx    n+f
          e  = 2   = 2      n is an integer, |f| <= 1/2,
    
    		n    k/8    g
    	     = 2  * 2    * 2  ,  where |k| <= 8, |g| <= 1/8, f = k/8 + g,
    
    		n      k/8     k/8
             = 2  * ( 2    +  2    * P(g) ),  where P(g) is a polynomial
                                              approximation of 2**g - 1.
    
    		n      k/8       k/8    k/8
    	     = 2  * ( 2     + ( 2    + 2     * P(g) ))
    		       high	 low	high
    
    		   k/8
    The values of 2    are precomputed, and are in a table indexed by k.
    For precision, these are stored as high and low parts -- for F-floating,
    |z_low| <= 2**(-25) * |z_high|.  For accuracy, the operations are done
    as parenthesized in the last equation above.
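    
    A sketch of that table step in C (this is not the RTL source: the
    table size below assumes |f| <= 1/2, so k runs over -4..4; the
    polynomial is a plain Taylor polynomial for 2**g - 1 rather than the
    tuned P(g); and the high/low bookkeeping is left out):
    
    #include <math.h>
    #include <stdio.h>
    
    /* 2^f for |f| <= 1/2, via 2^f = 2^(k/8) * 2^g = 2^(k/8) + 2^(k/8)*P(g) */
    static double pow2_frac(double f)
    {
        static double tab[9];             /* tab[j] holds 2^((j-4)/8)      */
        static int ready = 0;
        if (!ready) {
            for (int j = 0; j < 9; j++)
                tab[j] = pow(2.0, (j - 4) / 8.0);
            ready = 1;
        }
    
        int    k = (int)floor(8.0 * f + 0.5);   /* nearest eighth, -4..4   */
        double g = f - k / 8.0;                 /* |g| <= 1/16             */
    
        /* P(g) ~ 2^g - 1 = e^(g ln 2) - 1, short Taylor polynomial        */
        double t = g * 0.6931471805599453;
        double P = t * (1.0 + t * (1.0 / 2 + t * (1.0 / 6 +
                   t * (1.0 / 24 + t * (1.0 / 120)))));
    
        return tab[k + 4] + tab[k + 4] * P;     /* 2^(k/8) + 2^(k/8)*P(g)  */
    }
    
    int main(void)
    {
        for (double f = -0.5; f <= 0.5; f += 0.1)
            printf("f = %6.3f   sketch = %.12g   exact = %.12g\n",
                   f, pow2_frac(f), pow(2.0, f));
        return 0;
    }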
957.6. "sounds right anyhow" by AKQJ10::YARBROUGH (I prefer Pi) Thu Oct 27 1988 20:09 -- 9 lines
>    An associate mentioned the Quadric series representations.  Am I even
>in the ball park?

Close - I think he or she was referring to the CORDIC approximation scheme,
which is used in pocket calculators. As I recall, there was an extensive
description of this scheme in the early literature describing the HP
calculators, several years ago.
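
For flavor, here is a minimal CORDIC sketch in C (rotation mode, computing
sin and cos).  This shows only the basic idea: a calculator would work in
fixed point, with the atan(2^-i) table and the gain constant built in, and
with shifts and adds in place of the floating-point arithmetic below.

#include <math.h>
#include <stdio.h>

#define N 40   /* number of CORDIC micro-rotations */

static void cordic_sincos(double theta, double *s, double *c)
{
    /* table of rotation angles atan(2^-i) and the inverse of the
       accumulated gain prod sqrt(1 + 2^-2i) */
    static double atan_tab[N], invgain;
    static int ready = 0;
    if (!ready) {
        invgain = 1.0;
        for (int i = 0; i < N; i++) {
            atan_tab[i] = atan(ldexp(1.0, -i));
            invgain /= sqrt(1.0 + ldexp(1.0, -2 * i));
        }
        ready = 1;
    }

    /* rotate (invgain, 0) toward angle theta by fixed micro-rotations;
       converges for |theta| <= sum of the table angles (about 1.74 rad) */
    double x = invgain, y = 0.0, z = theta;
    for (int i = 0; i < N; i++) {
        double d  = (z >= 0.0) ? 1.0 : -1.0;
        double xn = x - d * ldexp(y, -i);   /* x - d*y*2^-i */
        double yn = y + d * ldexp(x, -i);   /* y + d*x*2^-i */
        x = xn;
        y = yn;
        z -= d * atan_tab[i];
    }
    *c = x;
    *s = y;
}

int main(void)
{
    double s, c, theta = 0.6;
    cordic_sincos(theta, &s, &c);
    printf("CORDIC: sin = %.12f  cos = %.12f\n", s, c);
    printf("libm:   sin = %.12f  cos = %.12f\n", sin(theta), cos(theta));
    return 0;
}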

Lynn 
957.7. "Cordic - see also the Monthly" by HIBOB::SIMMONS () Thu Oct 27 1988 21:10 -- 4 lines
    Re. .6 For a discussion of CORDIC see also the Mathematical Monthly.
    There was an article with examples several years ago.
    
    Chuck