
Conference rusure::math

Title: Mathematics at DEC
Moderator: RUSURE::EDP
Created: Mon Feb 03 1986
Last Modified: Fri Jun 06 1997
Last Successful Update: Fri Jun 06 1997
Number of topics: 2083
Total number of notes: 14613

1454.0. "Can you "Project" a convergent series?" by CHOVAX::YOUNG (Still billing, after all these years.) Wed Jun 12 1991 01:58

    This is a practical problem in applied math.
    
    	Given a series F, where  F(x+1) = f( F(x) )
    	and where f() is a function over the reals.
    
    	and Given F(x), F(x+1), and F(x+2)
    
    	Can we (efficiently) calculate 'G', a "Good projection" of F ?
    
    	Where a "Good projection" is defined by:
    
    		-  If F is convergent (and it usually is) then G is closer
    		to [Limit F] than F(x+2)  (and ideally closer than
    		F(x+1) and F(x) also).
    
    		-  If F is periodic then G is a value in the period
    
    		-  If F is divergent then G is closer to one of the
    		divergent "targets" than F(x+2) is.
    
    		-  If F is chaotic, then G is within the chaotic range 
    		of F.
    
    
    The first condition is by far the most important.  The others are just
    nice to have.
    
    --  Barry

1454.1. "Seq transformations?" by ALLVAX::JROTH (I know he moves along the piers) Wed Jun 12 1991 15:40

    What you seem to be asking about is sequence transformation.

    Euler's transformation for accelerating alternating series is one
    such example.  The alternating series is rearranged in terms of
    finite differences.  If you want to be impressed, try it on the
    Leibniz series for arctan(1) = pi/4.

    Another is the "epsilon algorithm".  Here we assume that the series
    is the sum of a bunch of exponentially decaying "transients".
    By looking at a finite section of the series, you can extrapolate
    to the limit value.  The epsilon algorithm is a very clever way of
    doing the calculation without inverting any matrices.
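
    In Python, a minimal sketch of that recurrence might look like the
    following (the function name is just illustrative, the code assumes
    real-valued terms, and a production version would also guard against
    near-zero differences, not just exact zeros):

        def wynn_epsilon(s):
            # s is a list of consecutive iterates or partial sums.
            # Build the epsilon table column by column; only the even
            # columns hold useful (accelerated) estimates.
            prev = [0.0] * (len(s) + 1)        # fictitious column epsilon_{-1}
            curr = [float(v) for v in s]       # column epsilon_0 = the sequence
            best = curr[-1]
            k = 0
            while len(curr) > 1:
                nxt = []
                for j in range(len(curr) - 1):
                    diff = curr[j + 1] - curr[j]
                    if diff == 0.0:            # sequence already settled here
                        return curr[j]
                    nxt.append(prev[j + 1] + 1.0 / diff)
                prev, curr = curr, nxt
                k += 1
                if k % 2 == 0:                 # even column: take its last entry
                    best = curr[-1]
            return best

    Fed the first nine partial sums of the Leibniz series for arctan(1),
    it returns pi/4 to far more figures than the raw partial sums, which
    are barely good to one digit.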

    Another way to see this is to think of the series (x1, x2, x3, ...)
    as converging to some limit value (x, x, x, x, ...).  Treat k consecutive
    elements of the sequence as a point in k-space.  Then we may hope that
    a hyperplane passing thru the points (k = 3 in this example, say)

	(x1, x2, x3), (x2, x3, x4), (x3, x4, x5)

    will intersect the diagonal line from the origin through (x, x, x) at a
    point that is a close approximation to the limit value x.
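
    The smallest case of this picture (k = 2, a line through two points)
    uses just three consecutive terms -- exactly what .0 has in hand -- and
    collapses to Aitken's delta-squared formula.  A sketch, assuming real
    terms (the name is illustrative):

        def aitken(f0, f1, f2):
            # Extrapolate from three consecutive terms F(x), F(x+1), F(x+2).
            d2 = f2 - 2.0 * f1 + f0            # second difference
            if d2 == 0.0:
                return f2                      # no curvature info; keep last term
            return f2 - (f2 - f1) ** 2 / d2

    Applied to overlapping triples of the sequence, this is the same thing
    as the first even column of the epsilon table above.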

    If you have some idea of how the error committed by truncating the
    series behaves, you can use Richardson's extrapolation to knock off
    the error terms.
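
    For example, if the truncation error after n terms is believed to fall
    off like C/n^p for some known p, one Richardson step combines the n-term
    and 2n-term estimates so the leading error term cancels (a sketch; it
    only helps when the assumed error model is roughly right):

        def richardson(a_n, a_2n, p):
            # a_n  : estimate from n terms,  error ~ C / n**p
            # a_2n : estimate from 2n terms, error ~ C / (2n)**p
            # Eliminating C between the two gives the combination below.
            return (2.0 ** p * a_2n - a_n) / (2.0 ** p - 1.0)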

    Related to the epsilon algorithm is a way of transforming a power series
    to a continued fraction by the quotient difference algorithm.  Often the
    continued fraction will have a vastly wider region of convergence since
    continued fractions can approximate meromorphic functions but power
    series are stopped by the poles.

    A good book that explains many techniques along these lines is
    Jet Wimp, "Sequence Transformations and Their Applications",
    Academic Press, 1981.

    This may be all wet - I could have totally misunderstood your question.

    - Jim

1454.2. "no, no yes and yes" by CSSE::NEILSEN (Wally Neilsen-Steinhardt) Wed Jun 12 1991 16:15

    Re:  <<< Note 1454.0 by CHOVAX::YOUNG "Still billing, after all these years." >>>
                  -< Can you "Project" a convergent series? >-

.0>    This is a practical problem in applied math.
>    	Given a series F, where  F(x+1) = f( F(x) )
>    	and where f() is a function over the reals.
>    	and Given F(x), F(x+1), and F(x+2)
        
>    	Can we (efficiently) calculate 'G', a "Good projection" of F ?
    
    A: Taking you at your word, I believe the answer is no.  If all you have
    is F(x), F(x+1), and F(x+2) and the knowledge that f is a function,
    then I don't believe you can say anything about G.  I know a lot of
    ways of generating G from the givens (although everything in .1 was new
    to me), but they all depend on more knowledge about f.  For most of the
    ways I know, I could design an f which would defeat them.
    
    From here on, I'll assume that you meant to state an easier problem, so
    I have something to talk about.
    
    B: Assume that I know f as a black box, so that I can always calculate
    f(x) from x, but I don't know anything about the behavior or properties
    of f.  Then I am really no better off than in A above.  There are a lot
    of methods I could apply, but f may be such as to defeat them.
    
    C: Assume I know f as an analytic function.  Then the first case is
    equivalent to 
    
    	G = f(G)	or	g(G) = G - f(G) = 0
    
    and all I have to do is find the roots of g.  If I can't find the roots
    analytically, I can apply any of the well-known numerical methods.  The
    structure of g will tell me, by methods I have forgotten, within what
    domains F will converge to the roots G.  If g has no roots in a
    suitable domain, then the series F will not converge.
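
    As one concrete instance of those "well-known numerical methods":
    Newton's iteration on g, sketched below in Python with an optional
    analytic derivative and a finite-difference fallback (the step size h
    and the tolerances are arbitrary choices, not anything taken from .0):

        def newton_fixed_point(f, x0, fprime=None, tol=1e-12, max_iter=50, h=1e-6):
            # Solve g(x) = x - f(x) = 0 starting from the guess x0.
            x = float(x0)
            for _ in range(max_iter):
                gx = x - f(x)
                if fprime is not None:
                    gp = 1.0 - fprime(x)                          # analytic g'
                else:
                    gp = 1.0 - (f(x + h) - f(x - h)) / (2.0 * h)  # central difference
                if gp == 0.0:
                    break                                # flat spot; no Newton step
                x_next = x - gx / gp
                if abs(x_next - x) < tol:
                    return x_next
                x = x_next
            return x

    For instance, newton_fixed_point(math.cos, 1.0) homes in on the fixed
    point of cos near 0.739 in a handful of iterations.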
    
    D: (the most common case in practice)  Assume that I know f as a black
    box, but that I can also prove or am willing to assume certain
    properties of it.  Many of the methods of C require the assumption that
    all the derivatives of f exist and are bounded within some domain
    including the root.  The methods in .1 seem to require similar kinds of
    assumptions.  Then I can apply the methods to the convergent case in .0
    with some confidence that I am not dealing with a pathological f.  I
    think there are methods for dealing with the divergent, etc. cases, but
    I have never studied them.

1454.3. "More info" by CHOVAX::YOUNG (Still billing, after all these years.) Thu Jun 13 1991 03:30

    The function 'f' is basically well-behaved, but it is not (usually)
    analytic.  'f' *is* algorithmic, but it is computationally
    intensive, so any reliable method to short-circuit the series
    convergence would be of great benefit.  Analytic methods exist for
    the simpler cases, but they are even more computationally intensive.
    
    You may assume the following about 'f':
    
    	1)  There exists 'c' such that  [  c = f(c)  ], this is the series
    	convergence.
    
    	2)  'c' is unique.
    
    	3)  For the series F(n) derived from 'f':
    
    	given	M = F(m) and N = F(n), where  Sign( M - c ) = Sign( N - c )
    
    	then	ABS( N - c )  <  ABS( M - c )  IF  N > M
    
    Which basically just says that the series will keep getting closer to
    the convergence point (subject to possibly different scaling on either
    side of the convergence point).
    
    --  Barry

1454.4. "zero trapping and false position" by CSSE::NEILSEN (Wally Neilsen-Steinhardt) Thu Jun 13 1991 16:29

>    You may assume the following about 'f':
>    
>    	1)  There exists 'c' such that  [  c = f(c)  ], this is the series
>    	convergence.
>    
>    	2)  'c' is unique.

This helps some.  It says we are looking for the unique root of g(x) = x - f(x).

I have been assuming that the domain of the function is x=real.  Life is more
complicated if x is complex or a vector.

Can I also assume that F is continuous?  If so, there is a method of zero 
trapping or binary search:

	choose xpos, xneg such that g(xpos)>0, g(xneg)<0
	do 
		x = (xpos+xneg)/2
		if g(x) > 0
		then
			xpos = x
		else
			xneg = x
		end if
	until g(x) sufficiently small

This is guaranteed to converge, although perhaps not rapidly.  It is a neat 
fallback for later methods which converge more rapidly but less certainly.

Can I assume anything about the derivatives of f?  Under the right conditions
(I'll have to look this stuff up again; I keep referring to it in this 
conference; I think it is f'' bounded in an interval around c)  you can 
use the method of false position:

		x = xpos - g(xpos)*(xpos-xneg)/(g(xpos)-g(xneg))

use this way of guessing the next x in the loop above, but fall back if this
x is outside the interval [xpos,xneg].  For x sufficiently close to c, this will
converge rapidly.  Note that it amounts to fitting a straight line to two 
points.  With the right bounds on derivatives, fitting higher order functions
will give faster convergence, but in the real world, these conditions are
seldom met.
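
Putting the two together, one way the combined loop might look in Python (a
sketch of the scheme above, not a tuned root finder; the tolerance and the
iteration cap are arbitrary):

        def solve_fixed_point(f, xpos, xneg, tol=1e-10, max_iter=200):
            # Find the root of g(x) = x - f(x), given a bracket with
            # g(xpos) > 0 and g(xneg) < 0 on entry.
            def g(x):
                return x - f(x)
            x = 0.5 * (xpos + xneg)
            for _ in range(max_iter):
                # False-position guess: straight line through the bracket points.
                x = xpos - g(xpos) * (xpos - xneg) / (g(xpos) - g(xneg))
                # Fall back to bisection if the guess escapes the bracket.
                lo, hi = min(xpos, xneg), max(xpos, xneg)
                if not (lo < x < hi):
                    x = 0.5 * (xpos + xneg)
                gx = g(x)
                if abs(gx) < tol:
                    return x
                if gx > 0.0:
                    xpos = x
                else:
                    xneg = x
            return x

For example, g(x) = x - cos(x) is negative at 0 and positive at 2, so
solve_fixed_point(math.cos, 2.0, 0.0) converges to the fixed point of cos
near 0.739.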

If evaluating f is really expensive, you can graph g(x) as you get values, and
make some guesses about its functional form to get the next guess at x.  "It
looks like g(x) is sort of logarithmic around c, so I'll graph exp(g(x)) and
guess the x which makes this new function = 1."

>    	3)  For the series F(n) derived from f
>    	given	M = F(m) and N = F(n), where  Sign( M - c ) = Sign( N - c )
>    	then	ABS( N - c )  <  ABS( M - c )  IF  N > M

I think this is misstated.  I can get any sequence out of f, by starting from 
a given x.  So each term you are talking about is a function of x and n, where
x is the starting value.  Then your inequality can be written as

	ABS( F(N,x) - c ) < ABS( F(M,x) - c ) IF N > M

This is nice because it gives you two more fallback guesses to use in the 
iteration above: x = f(xpos) or x = f(xneg).

It also seems to imply that abs(f') < 1 over the real line.  This is not the 
derivative we wanted to bound, but I think we just showed that f is continuous.
 

1454.5. by CHOVAX::YOUNG (Still billing, after all these years.) Thu Jun 13 1991 20:57

    Re .4:
    
>>    	3)  For the series F(n) derived from f
>>    	given	M = F(m) and N = F(n), where  Sign( M - c ) = Sign( N - c )
>>    	then	ABS( N - c )  <  ABS( M - c )  IF  N > M
>
>I think this is misstated.
    
    
    Yes it is.  But I think that you may have misguessed what was intended.
    This is the correct form:
    
    	3)  For the series F(n) derived from f
    	given	M = F(m) and N = F(n), where  Sign( M - c ) = Sign( N - c )
    	then	ABS( N - c )  <  ABS( M - c )  IF  n > m
    						  ^^^^^^^
    
    Remember that F is a series, and therefore 'n' and 'm' are integers.
    
    --  Barry

1454.6. "OK, but I still have a problem" by CSSE::NEILSEN (Wally Neilsen-Steinhardt) Fri Jun 14 1991 16:05

>   	3)  For the series F(n) derived from f
>    	given	M = F(m) and N = F(n), where  Sign( M - c ) = Sign( N - c )
>    	then	ABS( N - c )  <  ABS( M - c )  IF  n > m
    						  ^^^^^^^   
>    Remember that F is a series, and therefore 'n' and 'm' are integers.

Well, F is a sequence, but it is not uniquely determined by f.  If you define
the sequence by

	F(0,x) = x

	F(n,x) = f( F(n-1,x) ) for n>0, with n integral

It may be clearer that the terms in the sequence, given by F(n,x), depend on 
both n and x.

Except for a case convention, the rule I gave 

.4>	ABS( F(N,x) - c ) < ABS( F(M,x) - c ) IF N > M

still seems to match your statement.

1454.7. by CHOSRV::YOUNG (Still billing, after all these years.) Thu Jun 20 1991 01:16

    Using lowercase m and n, Wally, your final restatement of the
    inequality in .4 would be correct.
    
    Also, yes, f is continuous, except for some (possible) point 
    discontinuities.
    
    --  Barry