
Conference rusure::math

Title:Mathematics at DEC
Moderator:RUSURE::EDP
Created:Mon Feb 03 1986
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:2083
Total number of notes:14613

1334.0. "SQRT(2) : fast approximate" by SMAUG::ABBASI () Sun Nov 11 1990 01:38

this is an interesting way to approximate sqrt(2):

look at points on the hyperbola y^2 = 2x^2 + 1, which is asymptotic to the line
y = x*sqrt(2).

it can be shown that the solutions satisfy the relationship

 X(n) * SQRT(2) + Y(n) = ( 2*SQRT(2) + 3 )^n

where (X(n),Y(n)) is the n-th solution.

 so for each n we pick an X and find the corresponding Y, and come up with this table

n       X       Y      Y/N= approx to SQRT(2)
---   ----    -----    --------------------
1       2      3         1.5
2       12     17        1.416
3       70     99        1.41428
.
.
10  15,994,428   22,619,537   1.414213562373096

I think geometry-based methods like the above are easier to
understand than non-geometry-based ones (although they might not always be better),
as in the example of approximating pi by inscribing many-sided polygons
in a circle versus using a series approximation.

the above method is from Introduction to Number Theory, by Daniel E. Flath (Wiley)
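
a minimal sketch of the bookkeeping (my illustration, not from the book):
expand ( 2*SQRT(2) + 3 )^n as an integer pair (X,Y) standing for
X*SQRT(2) + Y, so SQRT(2) itself never has to be evaluated:

    # track (2*sqrt(2) + 3)^n as an integer pair (x, y) meaning x*sqrt(2) + y
    x, y = 0, 1                    # (2*sqrt(2) + 3)^0 = 0*sqrt(2) + 1
    for n in range(1, 11):
        # multiply (x*sqrt(2) + y) by (2*sqrt(2) + 3)
        x, y = 3*x + 2*y, 4*x + 3*y
        print(n, x, y, y / x)      # 1.5, 1.41666..., 1.4142857..., ...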

does anyone know of a faster/simpler method to approximate SQRT(2) ?

/naser
    
1334.1. by GUESS::DERAMO (Dan D'Eramo) Sun Nov 11 1990 03:36
        Those look like the continued fraction approximants to
        sqrt(2) to me.
        
        Dan
1334.2. "Alternative" by SHIRE::ALAIND (Alain Debecker @GEO DTN 821-4912) Mon Nov 12 1990 11:12
1334.3. "Confused" by CHOSRV::YOUNG (The OOL's are not what they seem.) Mon Nov 12 1990 13:25
    I am a little confused by the hand waving in .0 :
    
>look at points on the hyperbola y^2 = 2x^2 + 1, which is asymptotic to the line
>y = x*sqrt(2).
    
    OK, I understand this.

>it can be shown that the solutions satisfy the relationship
>
> X(n) * SQRT(2) + Y(n) = ( 2*SQRT(2) + 3 )^n
>
>where (X(n),Y(n)) is the n-th solution.
    
    How was this Formula derived?  I cannot see the connection at all.

> so for each n we pick an X and find the corresponding Y, and come up with this table
    
    Huh?  How can you use this formula to approximate SQRT(2) when it has 
    SQRT(2) in it?!?  Are you saying to use iterative substitution?

>n       X       Y      Y/N= approx to SQRT(2)
>---   ----    -----    --------------------
>1       2      3         1.5
>2       12     17        1.416
    
    I think that "Y/N=" is a typo and should be "Y/X=".
    
    But, even using iterative substitution, I do not get the same results:
    				  
    Y(n) = (2*(Y(n-1))/(X(n-1)) + 3)^n - X(n)*(Y(n-1))/(X(n-1))
    
    Assume X0,Y0=1
    
    For X1=2 :
    
    	Y1 = (2*1 + 3)^1 - 2*1		= 3	So this works out the same.
    
    For X2=12 :
    
    	Y1 = (2*1.5 + 3)^2 - 12*1.5
    	   =   (6)^2	-  18		= 18	Now I get different answers!?
    
    So what's going on?
    
    --  Barry
1334.4. "I wish I can hand wave that good :-)" by SMAUG::ABBASI () Mon Nov 12 1990 13:55
    ref .-1
    >How was this Formula derived?  I cannot see the connection at all.
    
    that was not hand waving, I just didn't show how the author came up with
    that formula; I'll write down his method tonight when I get home.
    
    yep, that was a typo, "Y/X"
    
    >Huh?  How can you use this formula to approximate SQRT(2) when it has 
    >SQRT(2) in it?!?  Are you saying to use iterative substitution?
      
      expanding the right side gives an integer part Y plus an integer
      multiple X of SQRT(2), so the SQRT(2) drops out of the computation,
      and Y/X => SQRT(2) approx
    
    /naser
    
    
1334.5. "exact book proof of .0" by SMAUG::ABBASI () Tue Nov 13 1990 01:36
    this is the handwaving from the textbook itself regarding .0
    
    "failing to find a rational number that eqluals sqrt(2), we might
    search for rationals that aprox sqrt(2), it seems natural to look
    for integral points on hyperbola y^2=2x^2+1 which is asymptotic to line
    y=sqrt(2) x.
    the situation is beter immediatly because there are integers points on
    the hyperbola, for instance(x,y) = (2,3) or (13860,19601).
    the ancients knew how to mutliply solutions of the equation y^2-2x^2=1
    : if (x,y)= (u,v),(U,V) are two such solutions, then their product
       (u,v).(U,V) = (uV + uU, 2uU+vV)
    is a third.
    this follows from the key identity
               2           2
     (2uU + uV)  - 2(uV+uU)  =  (v^2 - 2u^2) ( V^2 - 2U^2)
    
    starting with one integer point (x,y) on the hyperbola in the first
    quadrant we can find infinitely many others by taking powers.
    for example let (x1,y1) = (2,3) and let (Xn,Yn) = (2,3).(Xn-1,Yn-1) for
    n > 1 .
    we will write (Xn,Yn) = (2,3)^n .
    we have the equality Yn^2 - 2Xn^2 = 1 for all n >= 1 and, as easily seen,
    the inequalities x1<x2<x3... and y1<y2<... .
    
    two interesting formulas for (Xn,Yn) are easily proved by finite
    induction:
                              n
        / Xn \     / 3  2 \      / 0 \
        |    |  =  |      |      |   |      for n >= 1
        \ Yn /     \ 4  3 /      \ 1 /
    
        Xn sqrt(2) + Yn = (2 sqrt(2) + 3)^n
    
    table of solutions of Yn^2 - 2Xn^2 = 1
    
    n     Xn              Yn                    Yn/Xn
    1     2                3                    1.5
    2     12               17                   1.416
    .
    .
    
    "
    
    end of quote.
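    
    as a quick sanity check of the book's product rule (a sketch, not from
    the book):
    
        def pell_mul(p, q):
            # the book's rule: (u,v).(U,V) = (uV + vU, 2uU + vV)
            (u, v), (U, V) = p, q
            return (u*V + v*U, 2*u*U + v*V)
    
        x, y = 2, 3                          # (X1, Y1)
        for n in range(1, 11):
            assert y*y - 2*x*x == 1          # still solves y^2 - 2x^2 = 1
            print(n, x, y, y / x)            # Yn/Xn -> sqrt(2)
            x, y = pell_mul((2, 3), (x, y))  # (2,3)^(n+1)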
    
    I should get myself a scanner !
    /naser
1334.6. "ComPelling Proof" by IOSG::CARLIN (Dick Carlin IOSG, Reading, England) Tue Nov 13 1990 08:10
>               <<< Note 1334.1 by GUESS::DERAMO "Dan D'Eramo" >>>
>       Those look like the continued fraction approximants to
>       sqrt(2) to me...
    
    	... and of course they are. That is the traditional way to solve
    	Pell's equation. I was very miffed years back when I thought I had
    	discovered it, only to find it was standard textbook stuff.
    
    	dick
1334.7. "Continued fraction ?" by SHIRE::ALAIND (Alain Debecker @GEO DTN 821-4912) Tue Nov 13 1990 10:03
>>               <<< Note 1334.1 by GUESS::DERAMO "Dan D'Eramo" >>>
>>       Those look like the continued fraction approximants to
>>       sqrt(2) to me...
>    
>    <<< Note 1334.6 by IOSG::CARLIN "Dick Carlin IOSG, Reading, England" >>>
>    	... and of course they are. That is the traditional way to solve
>    	Pell's equation. I was very miffed years back when I thought I had
>    	discovered it, only to find it was standard textbook stuff.
>    

	Not in the strict sense.  In fact, THE continued fraction
	approximants would alternate around the limit.  In the case 
	of sqrt(2) they are   1, 3/2, 7/5, 17/12, 41/29, 99/70,...

	Any good approximation by fraction will be a subsequence, 
	because the best approximation of sqrt(2) by rational p/q 
	with q<5 is 3/2, with q<12 is 7/5, with q<29 is 17/12, with
	q<70 is 41/29, ...
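
	A small sketch generating exactly these approximants: since
	sqrt(2) = [1; 2, 2, 2, ...], each convergent p/q maps to
	(p + 2q)/(p + q).

	    from fractions import Fraction

	    c = Fraction(1)
	    for _ in range(8):
	        print(c, float(c))    # 1, 3/2, 7/5, 17/12, 41/29, 99/70, ...
	        c = Fraction(c.numerator + 2*c.denominator,
	                     c.numerator + c.denominator)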
1334.8. "Newton shows the way" by CIVAGE::LYNN (Lynn Yarbrough @WNP DTN 427-5663) Tue Nov 13 1990 15:53
It looks to me as though the standard iteration formula

	x[n+1] = (x[n]+2/x[n])/2

is even faster, since it skips over several of the approximants along the 
way: starting with x[0]=1, it produces 3/2, 17/12, 577/408, ....
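
Run in exact rational arithmetic (a minimal sketch), the iteration indeed
lands on every other approximant:

    from fractions import Fraction

    x = Fraction(1)
    for _ in range(5):
        x = (x + 2/x) / 2        # x[n+1] = (x[n] + 2/x[n])/2
        print(x, float(x))       # 3/2, 17/12, 577/408, ...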
1334.9. by GUESS::DERAMO (Dan D'Eramo) Tue Nov 13 1990 17:53
>>	x[n+1] = (x[n]+2/x[n])/2

	Letting n be the numerator and d the denominator, this
	is

	n/d ==> (n/d + 2d/n) / 2
	    ==> ((n^2 + 2d^2)/dn) / 2
	    ==> (n^2 + 2d^2) / 2dn

	which is the formula from .5 in a slightly different form:

		(u,v).(U,V) = (uV + vU, 2uU+vV)

		u,U <--> d	v,V <--> n

		(d,n).(d,n) -> (2dn,2d^2 + n^2)

	Which is why it skips terms ... it "multiplies" the latest
	term with itself, not with the first term.
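
	For example, a one-line check: "squaring" (d,n) = (12,17) gives
	(2dn, 2d^2 + n^2) = (408, 577), i.e. Newton jumps from 17/12
	straight to 577/408:

	    d, n = 12, 17
	    print(2*d*n, 2*d*d + n*n)    # 408 577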

	Dan
1334.10. "Clarification" by SHIRE::ALAIND (Alain Debecker @GEO DTN 821-4912) Tue Nov 20 1990 11:58
Thanks to off line communication with Eric Osman, I was able to rewrite
note .2

In order to read it: EXTRACT the note, EDIT the file to remove these cover 
instructions (up to the Line-Feed), and $ TYPE/PAGE the file.  Note that 
the file contains REGIS instructions and was tested on a VT340 only.

Alain









  --- Last line to be removed ----

Take a square ABCD.
          AC    
 Let x = ---- = sqrt(2).
          AB  
PpS(A[0,479][799,0])R(I0)S(C(I[+0,+0]"  "))
p[ 50, 50]v[300,300]
p[300, 50]v[300,300]v[ 50,300]v[ 50, 50]v[300, 50]
p[ 50, 50]t"A"
p[300, 50]t"B"
p[290,320]t"C"
p[ 50,320]t"D"
\










Take a square ABCD.  Take the point E on the diagonal AC, such that 
AE = AB.  We have AC/AE = sqrt(2).
          AC    
 Let x = ---- = sqrt(2).
          AB  
     AC         CE
x = ---- = 1 + ---- 
     AB         AB
Pp
p[ 50, 50]v[300,300]
p[300, 50]v[300,300]v[ 50,300]v[ 50, 50]v[300, 50]
p[ 50, 50]t"A"p[300, 50]t"B"p[290,320]t"C"p[ 50,320]t"D"
p[300, 50]C(A45C)[ 50, 50]p[217,247]T"E"
\








Take a square ABCD.  Take the point E on the diagonal AC, such that 
AE = AB.  We have AC/AE = sqrt(2).  Draw from E the perpendicular 
to AC which cuts BC in F.  We have CE = EF = FB, thus FC/FE = sqrt(2).
          AC    
 Let x = ---- = sqrt(2).
          AB  
     AC         CE
x = ---- = 1 + ---- 
     AB         AB
          1              1
x = 1 + ------ = 1 + ---------- 
          CB               CF
         ----         1 + ----
          CE               CE
                Whence, ...
Pp
p[ 50, 50]v[300,300]p[300, 50]v[300,300]v[ 50,300]v[ 50, 50]v[300, 50]p[ 50, 50]t"A"p[300, 50]t"B"p[290,320]t"C"p[ 50,320]t"D"p[300, 50]C(A45C)[ 50, 50]p[217,247]T"E"p[300, 50]C(A45C)[ 50, 50]v[300,154]p[310,164]t"F"
\

Take a square ABCD.  Take the point E on the diagonal AC, such that 
AE = AB.  We have AC/AE = sqrt(2).  Draw from E the perpendicular 
to AC which cuts BC in F.  We have CE = EF = FB, thus FC/FE = sqrt(2).
                       1       
We have (1) x = 1 + -------
                     1 + x      
Pp
p[ 50, 50]v[300,300]p[300, 50]v[300,300]v[ 50,300]v[ 50, 50]v[300, 50]p[ 50, 50]t"A"p[300, 50]t"B"p[290,320]t"C"p[ 50,320]t"D"p[300, 50]C(A45C)[ 50, 50]p[217,247]T"E"p[300, 50]C(A45C)[ 50, 50]v[300,154]p[310,164]t"F"
\









Take a square ABCD.  Take the point E on the diagonal AC, such that 
AE = AB.  We have AC/AE = sqrt(2).  Draw from E the perpendicular 
to AC which cuts BC in F.  We have CE = EF = FB, thus FC/FE = sqrt(2).
But FE and FC are the side and diagonal of a new square
Ppp[ 50, 50]v[300,300]p[300, 50]v[300,300]v[ 50,300]v[ 50, 50]v[300, 50]p[ 50, 50]t"A"p[300, 50]t"B"p[290,320]t"C"p[ 50,320]t"D"p[300, 50]C(A45C)[ 50, 50]p[217,247]T"E"p[300, 50]C(A45C)[ 50, 50]v[300,154]p[310,164]t"F"
p[373,227]v[300,300]v[227,227]v[300,154]v[373,227]C(A45C)[300,154]v[343,257]\
               1
(1) x = 1 + -------
             1 + x
Replacing x by (1) in (1), we get
                1
 (2) x = 1 + -------------
                     1
              2 + -------
                   1 + x 




Take a square ABCD.  Take the point E on the diagonal AC, such that 
AE = AB.  We have AC/AE = sqrt(2).  Draw from E the perpendicular 
to AC which cuts BC in F.  We have CE = EF = FB, thus FC/FE = sqrt(2).
But FE and FC are the side and diagonal of a new square and the process 
can be repeated.

Ppp[ 50, 50]v[300,300]p[300, 50]v[300,300]v[ 50,300]v[ 50, 50]v[300, 50]p[300, 50]
C(A45C)[ 50, 50]v[300,154]p[ 50, 50]t"A"p[300, 50]t"B"p[290,320]t"C"p[ 50,320]t"D"p[217,247]T"E"p[310,164]t"H"
p[373,227]v[300,300]v[227,227]v[300,154]v[373,227]C(A45C)[300,154]v[343,257]
p[343,300]v[300,300]v[300,257]v[343,257]v[343,300]C(A45C)[343,257]v[325,300]\
               1
(1) x = 1 + -------
             1 + x

Replacing x by (1) in (2), we get

                    1
(3) x = 1 + ------------------
                      1
             2 + -------------
                         1
                  2 + -------
                       1 + x






Take a square ABCD.  Take the point E on the diagonal AC, such that 
AE = AB.  We have AC/AE = sqrt(2).  Draw from E the perpendicular 
to AC which cuts BC in F.  We have CE = EF = FB, thus FC/FE = sqrt(2).
But FE and FC are the side and diagonal of a new square and the process 
can be iterated.
               1
(1) x = 1 + -------
             1 + x

Replacing x by (1) in (n), we get

                      1
(n+1) x = 1 + --------------------
                        1
               2 + ---------------
                          1
                    2 + ----------
                             1
                        2 + ------
                              ...
Ppp[ 50, 50]v[300,300]p[300, 50]v[300,300]v[ 50,300]v[ 50, 50]v[300, 50]p[300, 50]C(A45C)[ 50, 50]v[300,154]p[ 50, 50]t"A"p[300, 50]t"B"p[290,320]t"C"p[ 50,320]t"D"p[217,247]T"E"p[310,164]t"H"
p[373,227]v[300,300]v[227,227]v[300,154]v[373,227]C(A45C)[300,154]v[343,257]
p[343,300]v[300,300]v[300,257]v[343,257]v[343,300]C(A45C)[343,257]v[325,300]
p[313,313]v[300,300]v[313,287]v[325,300]v[313,313]C(A45C)[325,300]v[307,307]
p[300,307]v[300,300]v[307,300]v[307,307]v[300,307]C(A45C)[307,307]v[300,304]
p[298,302]v[300,300]v[302,302]v[300,304]v[298,302]C(A45C)[300,304]v[299,301]
p[299,300]v[300,300]v[300,301]v[299,301]v[299,300]C(A45C)[299,301]v[299,300]
p[300,300]v[300,300]v[300,300]v[299,300]v[300,300]C(A45C)[299,300]v[300,300]\
1334.11. "The Newton iteration formula" by DECWET::BISHOP (Avery: father, hacker, brewer, Japanese speaker) Tue Nov 20 1990 22:14
Re .8:

>It looks to me as though the standard iteration formula
>
>	x[n+1] = (x[n]+2/x[n])/2
>
>is even faster, since it skips over several of the approximants along the 
>way: starting with x[0]=1, it produces 3/2, 17/12, 577/408, ....

This is the Newton formula for iterating to an approximate solution of f(x)=0,
for differentiable f():

(#)	x[n+1] = x[n] - f(x[n])/f'(x[n]) ;

With f(x) = x^2 - A for a constant A, (#) becomes

	 x[n+1] = (x[n]+A/x[n])/2.

Given a sufficiently accurate first approximation x[0], Newton's formula
converges quadratically, i.e., the error term e[n] = x[n] - x[n-1] satisfies

	|e[n+1]| < K e[n]^2, for some constant K.

This means that the number of digits of accuracy in the approximation 
essentially doubles with each iteration.  Faster converging algorithms are 
hard to come by.  

There are a couple of ways to derive (#).  The most intuitive is to draw the 
tangent to the function f() at the point (x[n], f(x[n])).  Then x[n+1] is the
intersection of this tangent line with the x axis.  

However, another derivation of (#) as a solution to a certain fixed point 
problem g(x) = x, is also instructive in that it explains why (#) converges
so fast.  I can't remember what you use for g(x) to get (#), but it turns out
that g(x) is flat (g' ~ 0) near the fixed point a (i.e., a satisfies g(a) = a).
It is easy to convince yourself graphically that the traditional approach of
iterating x[n+1] = g(x[n]) converges very fast to a for a function with the
property g'(a) = 0.  As I recall, this approach also yields some conditions
under which (#) is guaranteed to converge.
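
A minimal sketch of (#) for f(x) = x^2 - 2, printing the error so the
roughly doubling digit count is visible (for Newton, the fixed point
function turns out to be g(x) = x - f(x)/f'(x)):

    import math

    f  = lambda t: t*t - 2
    fp = lambda t: 2*t

    x = 1.0
    for _ in range(6):
        x = x - f(x) / fp(x)             # (#)
        print(x, abs(x - math.sqrt(2)))  # error roughly squares each step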

Avery
1334.12. by GUESS::DERAMO (Dan D'Eramo) Tue Nov 20 1990 22:36
>>This is the Newton formula for iterating to an approximate solution of f(x)=0,
>>for differentiable f():
>>	.
>>	.
>>	.
>>Given a sufficiently accurate first approximation x[0], Newton's formula
>>converges quadratically, i.e., the error term e[n] = x[n] - x[n-1] satisfies

	Clarification ...

	The statement about quadratic convergence is for this particular f.

	Dan
1334.13. "Not so!" by DECWET::BISHOP (Yakudoshi!) Wed Nov 21 1990 00:21
	No, unless my memory has been zapped by jet lag, the theorem is of the
	form

	Given that f() satisfies blah blah blah, then there exists an open set
	S containing a such that (#) converges quadratically for every initial
	guess x[0] in S.

	It is only coincidence that this function contains x^2.  Quadratic
	convergence works for any suitable f.

	Of course, the conditions "blah, blah, blah", that I can't recall right
	now, are the crux of the problem.  In the worst case, if the 
	conditions were "blah, blah, blah" = "|f(x)| = x^2-A", then you would 
	be right!  In this case it turns out that S is the positive x axis.

	Seriously, I believe one of the conditions is that f'(a) <> 0.  Even
	if this condition is not met, then (#) will usually still converge, 
	albeit slowly, if f'(x) <> 0 for some region about a.
  
	If f'(a) = 0, then there is a higher order iteration of the form (#) 
	but with a correction term containing f''(x[n]) such that it 
	converges quadratically.  This can be generalized for any f() with

	f^m(a) = 0, m=1,2,...,p-1, f^p(a) <> 0, where f^k() means the kth 
	derivative of f().  

	In fact, these higher order methods work even if the lower order 
	derivatives do not disappear, but the gain in convergence speed is
	not worth the extra effort per iteration.

	-fab
1334.14. "???" by CHOVAX::YOUNG (Operation Imminent Blunder.) Wed Nov 21 1990 02:50
    Well, I for one would like to hear what the "blah, blah, blah"
    conditions are, since there are some surprisingly simple (and normal)
    functions for which Newtonian Iteration will not converge *at all*.
    
    --  Barry
1334.15. "Cayley problem for complex z" by ALLVAX::JROTH (from Saturday alley up to Sunday Street) Wed Nov 21 1990 03:52
    This is known as the "Cayley problem", and the basins of
    attraction to the two roots of a quadratic in the complex
    plane have boundaries that are a self-similar fractal set.

    - Jim
1334.16. by CHOSRV::YOUNG (Operation Imminent Blunder.) Wed Nov 21 1990 13:35
    No, that's not what I was talking about, Jim.  Quadratics with 2 roots
    have *some* starting places (i.e. "sufficiently close") that will
    converge on one or the other root. 
    
    But there are other, quite normal functions, that have *no* convergent
    starting points, other than the root itself.  Take for instance:
    
    		Y = X^(1/3)
    
    Other than the root itself (X[0] = 0) all starting points are
    *divergent*.
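    
    A quick numerical check (a sketch): for f(x) = x^(1/3) the Newton step
    works out to x - x^(1/3)/((1/3)x^(-2/3)) = -2x, so every nonzero start
    doubles in magnitude at each step:
    
        def cbrt(t):              # real cube root, valid for t < 0 too
            return 0.0 if t == 0 else (abs(t) ** (1/3)) * (1 if t > 0 else -1)
    
        x = 0.001
        for _ in range(6):
            x -= cbrt(x) / (cbrt(x) / (3*x))   # f/f', f'(x) = (1/3)x^(-2/3)
            print(x)              # -0.002, 0.004, -0.008, ...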
    
    --  Barry
1334.17. by GUESS::DERAMO (Dan D'Eramo) Wed Nov 21 1990 15:04
>>    Other than the root itself (X[0] = 0) all starting points are
>>    *divergent*.

	... and I vaguely recall one example where the
	iterations cycled (x[0] = x[2] = x[4] = ... and
	x[1] = x[3] = x[5] = ...).

	Dan
1334.18. "slightly more info on the blah, blah, blah" by CSSE::NEILSEN (I used to be PULSAR::WALLY) Wed Nov 21 1990 15:34
I once knew these conditions, although I don't think I ever fully understood
them.  You can find them in any good book on numerical analysis.  Unfortunately
I don't have one in my office.

Basically, the condition is that higher derivatives have to be bounded.  I don't
remember whether the bound applies to the next derivative or all higher derivs.

The precise statement is something like this:  If the second derivative of f 
is bounded by M, then there exists an interval of size e(M) containing the root 
in which Newton's method is bound to converge.  There is a way to compute
e(M) given M.

Note that in the .16 example y=x^(1/3) higher derivatives are unbounded.

The example in .17, if I am thinking of the same case, is one where starting
points outside the interval lead to cycling, but starting points within the
interval still converge.

There are other pathological cases which do not contradict the theorem above.
In one common nasty variety starting points near the interval lead to rapid 
divergence.  I have even seen cases where exact calculations lead to 
convergence but double precision approximations diverge.
1334.19. by CHOSRV::YOUNG (Operation Imminent Blunder.) Wed Nov 21 1990 16:35
    Re .18:
    
    Hmmm, that may be a sufficient condition, but is certainly is not a
    *necesary* one.
    
    For instance, all functions of the form:
    
    
    		Y = X^n
    
    Where 'n' is an integer > 3, fail this condition, but nonetheless will
    converge *very* rapidly with Newtonian Iteration.
    
    --  Barry
1334.20. "another detail or two" by CSSE::NEILSEN (I used to be PULSAR::WALLY) Mon Nov 26 1990 15:51
Since I wrote .18, I remembered that the derivative(s) must be bounded within
a circle on the complex plane centered on the root.  The 'complexity' can 
cause some surprises for the unwary.  A function which is well-behaved on the 
real line can fail to converge because it has a singularity off the real line
but near the root.  Numerical analysis texts usually give an example of this.

.19 is correct that this is a sufficient condition and not a necessary one.  
Sufficient conditions are much more interesting in numerical analysis, since 
they tell us when a procedure is reasonably safe.

To show that Newton's method is not safe when applied to a general cubic, 
apply it to 

	f(x) = x^3 - 3

and start with the guess x=1.

.19>    For instance, all functions of the form:
>   
>    
>    		Y = X^n
>    
>    Where 'n' is an integer > 3, fail this condition, but nonetheless will
>    converge *very* rapidly with Newtonian Iteration.

When I apply Newton's method to this, I get 

	x[i+1] = x[i] * (n-1)/n

which is very slow for large n.
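
A short check of that step (a sketch): for f(x) = x^n we have
f'(x) = n*x^(n-1), so x - f/f' = x*(n-1)/n.

    n, x = 8, 1.0
    for _ in range(5):
        x = x - x**n / (n * x**(n-1))   # = x*(n-1)/n: linear, not quadratic
        print(x)                        # 0.875, 0.765625, ... shrinks slowly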
1334.21. by DECWET::BISHOP (Yakudoshi!) Mon Nov 26 1990 17:07
Re .20:

>When I apply Newton's method to this, I get 
>
>	x[i+1] = x[i] * (n-1)/n
>
>which is very slow for large n.

This is exactly the situation I was talking about in .13, where f'(a) = 0.  It
will still sometimes converge, but to get quadratic convergence you have to 
go to a higher order method with terms in f''(x), f'''(x), etc.

Avery
1334.22. "don't count and don't test" by TOOK::CBRADLEY (Chuck Bradley) Mon Nov 26 1990 18:47
back about 64-66 a clever person at CDC found a good way to take advantage of
the rapid convergence of Newton's method for the math library.
He chose the starting point to be a floating point number with a constant
fraction and an exponent that is a one-bit shift of the argument's exponent.
Then two(?) single precision iterations and three(?) double 
precision iterations gave the answer.  No loop counts, and no convergence tests.
The trick is common now.  The VMS RTL separates the cases for even and odd
exponents and does 2, 3, or 5 iterations for single, double, and H formats.

it's good math and good programming.
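
a hedged sketch of the idea (the details here are guesses, not the actual
CDC or VMS RTL code): halve the float's exponent for the first guess, then
run a fixed number of Newton steps chosen in advance from the precision:

    import math

    def sqrt_fixed(a):
        m, e = math.frexp(a)           # a = m * 2**e, with 0.5 <= m < 1
        x = math.ldexp(1.0, e // 2)    # guess: exponent shifted right a bit
        for _ in range(6):             # count fixed in advance, no tests
            x = 0.5 * (x + a / x)      # Newton step for x^2 - a
        return x

    print(sqrt_fixed(2.0), math.sqrt(2.0))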
1334.23. "No need to test convergence in special cases" by DECWET::BISHOP (Yakudoshi!) Tue Nov 27 1990 11:14
If you know things like the precision of your data type and maybe 
characteristics of the function, you can figure pretty accurately how many
iterations are needed.  It's an obvious simplification.

The question is, what function(s) did/do they use this on?  SQRT? others?

Avery

1334.24. by VMSDEV::HALLYB (The Smart Money was on Goliath) Tue Nov 27 1990 19:16
1334.25. "only square root" by TOOK::CBRADLEY (Chuck Bradley) Wed Nov 28 1990 15:27
re .23    square root was all I recall hearing about.

re .24    at Lucasfilms 1/sqrt(x^2+y^2) would probably be called often
with x and y close to the previous value. For that case, a calculation
based on a series expansion around the previous point could be used n
times before going back to the stated formula.  n is not a constant,
but if enough is known about the step size and the domain of interest,
it could be treated as a constant.