
Conference rusure::math

Title:Mathematics at DEC
Moderator:RUSURE::EDP
Created:Mon Feb 03 1986
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:2083
Total number of notes:14613

1052.0. "A arithmatic method for finding (square) roots?" by RAIN::DELPHIA (Captain Slash!) Mon Apr 10 1989 03:04

    A friend of mine mentioned to me, while working on a program that
    solves roots, that a method exists to solve square roots using
    arithmetic. This came up when I joked about what if we didn't
    have built-in functions that did such things. Is this true? I
    can't find it anywhere in my math books.
1052.1. "Easy as plonk" by IOSG::CARLIN (Dick Carlin IOSG) Mon Apr 10 1989 10:44
    The method we were taught at school was the "plonk" method (explanation
    of name later). It works on a sort of pseudo-long-division process and
    it's best illustrated by an example.

    To find the square root of 24336:

    Group the digits in pairs from the right, marking them with bars, one
    for each digit of the result as it turns out:

                 - -- --
                 2 43 36

    Starting with the number under the first bar, find the highest digit
    which, when squared, is less than or equal to the number under the bar.
    In this case it's obviously 1:

                 1
                 - -- --
                 2 43 36

    Double the number you just entered and place it over on the left with a
    dot after it. Also square the number, put the result under the 2 and
    subtract the result from the 2:

                 1
                 - -- --
        2.       2 43 36
                 1
                 -
                 1

    I. Bring down the 43:

                 1
                 - -- --
        2.       2 43 36
                 1
                 -
                 1 43

    That dot after the 2, and the space above the next bar, have to be
    replaced by the digit "plonk", chosen such that plonk is the highest
    digit for which twenty-plonk times plonk is less than or equal to 143.

    In this case plonk = 5, so enter it in the two appropriate places,
    multiply out and subtract:

                 1  5
                 - -- --
        25       2 43 36
                 1
                 -
                 1 43
                 1 25
                 ----
                   18

    Double the number above the bars, and put the result on the left,
    followed by a dot for another plonk:

                 1  5
                 - -- --
        25       2 43 36
                 1
                 -
                 1 43
                 1 25
                 ----
        30.        18


    Now repeat from step I onwards. In other words, bring down the 36, and
    choose plonk so that three hundred and plonk times plonk is as close to
    1836 as possible. We are lucky (?!), 306*6 = 1836 so the result is
    exact:

                 1  5  6
                 - -- --
        25       2 43 36
                 1
                 -
                 1 43
                 1 25
                 ----
        306        18 36
                   18 36
                   =====

    Everything before step I is obviously no more than a special case of
    the operations in the main loop.

    To cater for numbers with decimal points, or numbers that are not
    perfect squares, simply group the digits in twos in both directions
    from the decimal point. For example, to find the square root of 432,
    start as follows:

                - --   -- -- --
                4 32 . 00 00 00

    if you want the result to three decimal places (actually that's not
    very rigorous, it'll really just tell you whether to round up or down
    in the second decimal place).
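
    For anyone who wants to try it, the whole procedure fits in a few
    lines of code. Here is a minimal sketch in Python (the name
    digit_sqrt and its shape are my own invention, not anything
    standard). It handles non-negative integers; appending pairs of
    zeros, as described above, would extend it to decimal places:

        # A sketch of the digit-by-digit ("plonk") square root.
        def digit_sqrt(n):
            # Group the digits in pairs from the right.
            s = str(n)
            if len(s) % 2:
                s = "0" + s
            pairs = [int(s[i:i+2]) for i in range(0, len(s), 2)]

            root = 0          # result digits found so far
            remainder = 0
            for p in pairs:
                remainder = remainder * 100 + p    # "bring down" a pair
                # plonk = highest digit with (20*root + plonk)*plonk <= remainder
                for plonk in range(9, -1, -1):
                    if (20 * root + plonk) * plonk <= remainder:
                        break
                remainder -= (20 * root + plonk) * plonk
                root = root * 10 + plonk
            return root        # floor of the square root

        print(digit_sqrt(24336))   # 156

    Note that the first digit needs no special handling: with root = 0
    the condition is just plonk*plonk <= (first pair), as observed above.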


    My apologies for explaining it in this longwinded fashion. I just took
    the opportunity to indulge in a nostalgic recollection of the way it
    was explained at school.

Dick


    
1052.2. "Anyone else remember 'guessing sticks'?" by CTCADM::ROTH (If you plant ice you'll harvest wind) Mon Apr 10 1989 11:20
    Aaah yes, the "pencil and paper method"; another piece of history lost
    in the age of calculators.

    The following example may make the method clear.


		  1  4  1  4  2  1  3  5  6 ...
		 ------------------------------
		| 2 00 00 00 00 00 00 00 00 ...
		  1
		 -----
	    2 4	| 1 00
	   -----    96
		 -------
	   28 1 |   4 00
	   -----    2 81
		 ----------
	  282 4 |   1 19 00
	  ------    1 12 96
		 -------------
	 2828 2 |      6 04 00		     
	 -------       5 65 64
		 ----------------
	28284 1 |        38 36 00
	--------	 28 28 41
		 -------------------
       282842 3 |        10 07 59 00
       ---------          8 48 52 96
		 ----------------------
      2828426 5	|         1 59 06 04 00
      ----------	  1 41 42 13 25
		 -------------------------
     28284270 6 |	    17 63 90 75 00
     -----------

    Note that you can estimate the remaining digits (with increasing
    accuracy) by simply dividing the radicator on the left into the
    remaining radicand.  For example, some more digits of the above
    root are approximately 1763907500/282842706 = 623...

    You can see why this works by examining the expansion of
    (1+eps)^2 = 1 + 2*eps + eps^2, so the error committed by ignoring the
    eps^2 is bounded enough for you to do two digits at a time.

    This method is very amenable to hardware implementation in binary.
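
    To illustrate that last point: in base 2 the trial digit can only
    be 0 or 1, so the digit search collapses to a single compare and
    subtract per bit. A sketch in Python (isqrt is my own name here;
    a hardware unit would do the same thing with shift registers):

        # Bit-at-a-time binary square root: only shifts, adds, compares.
        def isqrt(n):
            # Highest power of 4 not exceeding n (the binary "pair").
            bit = (1 << ((n.bit_length() - 1) & ~1)) if n > 0 else 0
            root = 0
            while bit:
                if n >= root + bit:
                    n -= root + bit
                    root = (root >> 1) + bit
                else:
                    root >>= 1
                bit >>= 2
            return root        # floor of the square root

        print(isqrt(24336))        # 156
        print(isqrt(2 * 10**18))   # 1414213562, cf. the digits above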

    - Jim
1052.3. by AITG::DERAMO (Daniel V. {AITG,ZFC}:: D'Eramo) Mon Apr 10 1989 16:27
     So why "plonk"?
     
     Dan
     
     P.S.  Another method sometimes used to find the square
     root of x is to take a guess "a" and iterate the process:
     
                     a(0) = a
     
                     a(n+1) = (1/2)(a(n) + x/a(n))
     
     Once you are close, each iteration about doubles the number
     of correct digits in the answer.
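     
     For example, iterating this for x = 2 with Python's Decimal
     module makes the doubling visible (a quick sketch; the precision
     and starting guess are arbitrary):
     
         # Heron/Newton iteration: a(n+1) = (1/2)(a(n) + x/a(n)).
         from decimal import Decimal, getcontext

         getcontext().prec = 50
         x = Decimal(2)
         a = Decimal(1)                 # initial guess
         for i in range(7):
             a = (a + x / a) / 2
             print(i + 1, a)            # correct digits ~double each pass
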
1052.4. by BAILEY::XIA Mon Apr 10 1989 16:39
    re .0 .3:
    
    When you break it down, everything is arithmetic (in binary).  So
    why not just do it the way the built-in function does?  I am sure
    it is also optimized :-).  Is .3 the way the built-in function does
    the job?
    
    Eugene
1052.5. by CTCADM::ROTH (If you plant ice you'll harvest wind) Tue Apr 11 1989 11:16
    Newton-Raphson is not as suitable for "hand calculation" as the pencil
    and paper method because it makes you do many divides.  The hand
    method costs only one divide cycle.  In fact, I wrote such a binary
    square root routine (long ago) for the Fortran runtime system on the
    PDP-11, and it was considerably faster than the iterative routine
    using hardware fixed-point multiply and divide (but slower than
    floating point hardware, of course...)

    The math library uses such an optimized square root; I believe it gets
    the starting approximation by table indexing on the 8 most significant
    bits of the fraction, or something similar.

    There is another class of algorithms that are nice for hardware
    implementation called "pseudo multiplication".

    Here's an example that calculates ln(Y):

	Let y[0] = Y, normalized to 1 <= Y < 2, say, and x[0] = 0.
	The normalization is no problem in binary floating point...

	We have the identity Y = y[k]*exp(x[k])

	Try to reduce y[k] closer to 1 by multiplying by (1-2^-k), which can
	be done by shift and subtract.  If y[k]*(1-2^-k) > 1, then set
	y[k+1] = y[k]*(1-2^-k) and x[k+1] = x[k] - ln(1-2^-k) by a table fetch.

	Repeat for all bits in Y and we get Y = y[n]*exp(x[n]) = 1*exp(x[n]),
	so x[n] = ln(Y).

    This works in complex arithmetic for calculating sines and cosines, and in fact
    works for a wide class of functions with suitable tables and functional
    equations.
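
    A sketch of that loop in Python (pseudo_ln is my own name; a real
    implementation would keep the -ln(1-2^-k) constants in a small ROM
    and do the multiply with an integer shift and subtract, but the
    floating-point version shows the control flow):

        import math

        # Pseudo-multiplication logarithm: drive y down toward 1 by
        # factors (1 - 2^-k), keeping the invariant Y == y * exp(x).
        def pseudo_ln(Y, bits=53):
            assert 1 <= Y < 2
            table = [-math.log(1 - 2.0**-k) for k in range(1, bits + 1)]
            y, x = Y, 0.0
            for k in range(1, bits + 1):
                t = y - y * 2.0**-k      # y*(1 - 2^-k): shift and subtract
                if t >= 1.0:
                    y = t
                    x += table[k - 1]    # x -= ln(1 - 2^-k), by table fetch
            return x                     # y is now ~1, so x ~ ln(Y)

        print(pseudo_ln(1.5), math.log(1.5))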

    - Jim
1052.6. "Newton-Raphson division" by TRIBES::CREAN (Per ardua ad anticlimax) Wed Apr 12 1989 12:25
    I suspect a lot of calculators, and compilers, rely on the
    identity
    	SQRT(x) = ANTILOG(0.5*LOG(x))
    Years ago I used the Newton-Raphson (successive approximation)
    method when I had to use a desk calculator (there were none
    that fitted in a pocket then!) that didn't have a square
    root function. With experience one could make a good first
    guess and get it to converge (9 digits) in three passes. I
    had a similar algorithm for cube roots, but I can't recall
    it now.
    The Newton-Raphson method is used a lot in hardware division
    algorithms, with a look-up table providing the first
    approximation. It is said to be the fastest known division
    technique, at least it was the last time I checked!
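    
    One common form of that division scheme refines a reciprocal with
    the multiply-only iteration r(n+1) = r(n)*(2 - d*r(n)), which
    roughly doubles the correct bits per pass. A toy sketch in Python
    (the 4-entry lookup table is invented purely for illustration):

        # Newton-Raphson reciprocal: divide by d via multiplies only,
        # starting from a table-driven first approximation.
        def reciprocal(d, passes=5):
            assert 1.0 <= d < 2.0          # normalized fraction
            table = [1 / 1.125, 1 / 1.375, 1 / 1.625, 1 / 1.875]
            r = table[int((d - 1.0) * 4)]  # index on leading bits of d
            for _ in range(passes):
                r = r * (2.0 - d * r)      # error is squared each pass
            return r

        print(7.0 * reciprocal(1.75))      # 7/1.75 ~ 4.0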
    
    
    
1052.7. "Newton-Raphson for x**a" by CSCOA5::BERGH_P (Peter Bergh, DTN 435-2658) Wed Apr 12 1989 15:40
    If I remember my old college text book correctly, the general formula
    for Newton-Raphson is
    	x(n+1) = x(n) - f'(x(n))/f(x(n))
    If we take f(x) == x**a, we see that f'(x) == a*x**(a-1) and that
    f'(x)/f(x) == a/x.  Thus, for any b'th root of x, the NR formula
    is
    	x(n+1) = x(n) - 1/(b*x(n))
    Again, with reservations for misremembering!
1052.8. "Yep, mine's going, too..." by NIZIAK::YARBROUGH (I PREFER PI) Wed Apr 12 1989 18:47
>    If I remember my old college text book correctly, the general formula
>    for Newton-Raphson is
>    	x(n+1) = x(n) - f'(x(n))/f(x(n))

Nope, it's 
    	x(n+1) = x(n) - f(x(n))/f'(x(n))
so when you get close to the root the correction tends to zero.
1052.9. "Newton's method" by RAIN::DELPHIA (Captain Slash!) Wed Apr 12 1989 20:11
    I don't know who Raphson is, but I learned that as "Newton's" method
    for approximating roots.
1052.10. by AITG::DERAMO (Daniel V. {AITG,ZFC}:: D'Eramo) Thu Apr 13 1989 02:40
	Using x(n+1) = x(n) - f(x(n))/f'(x(n)) with f(x) = x^m - a
	[so that the zero of f(x) is the m-th root of a] results in
	the recursion

		x(n+1) = ((m-1)/m) x(n) + (1/m) a/x(n)^(m-1)

	For m = 2 this is x(n+1) = (x(n) + a/x(n)) / 2.
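	
	Translating that recursion directly into Python (a sketch;
	mth_root is my own name for it):
	
	    # Newton's method for the m-th root of a:
	    # x(n+1) = ((m-1)/m) x(n) + (1/m) a / x(n)^(m-1)
	    def mth_root(a, m, x=1.0):
	        for _ in range(100):
	            nxt = (m - 1) / m * x + a / m / x ** (m - 1)
	            if nxt == x:          # converged to machine precision
	                break
	            x = nxt
	        return x
	
	    print(mth_root(27.0, 3))      # ~ 3.0
	    print(mth_root(2.0, 2))       # ~ 1.41421356...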

	Dan
1052.11. "Newton and Raphson" by PULSAR::WALLY (Wally Neilsen-Steinhardt) Wed Apr 19 1989 17:13
    Newton gave the method for solving 
    
    		x^n = 0
    
    and Raphson generalized it to solve 
    
    		f(x) = 0
1052.12. "Newton's original method" by WRKSYS::ROTH (Geometry is the real life!) Tue Oct 04 1994 11:41
   This is an old one... but somebody just provided a reference to it.

>    Newton gave the method for solving 
>    
>    		x^n = 0
    
>    and Raphson generalized it to solve 
>    
>    		f(x) = 0

    Actually, Newton gave a method for solving an arbitrary polynomial
    (though he may also have written something specifically for
    n'th roots...)

    His technique was to "shift" the polynomial by his current
    approximation to the root, so that the new polynomial would
    have an isolated root close to zero.  (Shifting is done
    by replacing x in P(x) by (x + c), and using Horner's algorithm
    to expand in the new variable.)

    Then he remarked that, since the new root was near to zero,
    it could be well approximated by solving only the linear part
    of the new polynomial, and another shift could be performed if
    more accuracy was needed.

    If you have a polynomial

	P(x) = a0 + a1 x + a2 x^2 + ...

    Then P(0) = a0, P'(0) = a1, x approx = 0 - a0/a1 = 0 - P(0)/P'(0)
    so it's "Newton's method" in terms of derivatives after all, starting
    from zero as the current approximation.

    For example,

	x^3 - 3 x - 1,   x ~= 2

	(x + 2)^3 - 3 ( x + 2) - 1 =  x^3 + 6 x^2 + 9 x + 1
	x ~= -1/9

	x^3 + 5.66666666666666 x^2 + 7.703703703703703 x + 0.072702331961591
	x ~= -0.072702331961591/7.703703703703703 = -0.0094373219373219	

	x^3 + 5.638354700854701 x^2 + 7.597014577550100 x + 0.000503850073677
	x ~= -0.000503850073677/7.597014577550100 = -0.0000663221148958

	x^3 + 5.638155734510013 x^2 + 7.596266695529382 x + 0.000000024800705
	x ~= -0.000000024800705/7.596266695529382 = -0.0000000032648544

    so the root is

	+2
	-0.1111111111111111
	-0.0094373219373219
	-0.0000663221148958
	-0.0000000032648544
	-------------------
	 1.879385241571817   = 2*cos(20 degrees)

    In hand calculation, this procedure detects when there is a multiple
    root, or other accuracy problems, and Newton would also apply it to
    power series, taking as many terms as needed.  In addition, Newton
    would only shift by limited amounts early on (such as -0.1 instead of
    -0.1111111111111 in the second step) to save on work in doing the
    shifts, keeping only as many decimals as needed.

    Modern eigenvalue and polynomial solvers use variations of this same
    shifting and iterating technique.
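
    The shift-and-solve loop is mechanical enough to sketch in a few
    lines of Python (shift_poly and newton_shift are my own names;
    coefficients are stored low-order first, and the shift is just
    Horner's algorithm applied repeatedly):

        # Shift P(x) -> P(x + c) by repeated synthetic division.
        def shift_poly(coeffs, c):
            a = list(coeffs)               # a[i] = coefficient of x^i
            for i in range(len(a)):
                for j in range(len(a) - 2, i - 1, -1):
                    a[j] += c * a[j + 1]
            return a

        # Newton's original method: shift so the root is near zero,
        # solve only the linear part a0 + a1*x = 0, and repeat.
        def newton_shift(coeffs, start, steps=6):
            total = start
            p = shift_poly(coeffs, start)
            for _ in range(steps):
                dx = -p[0] / p[1]
                total += dx
                p = shift_poly(p, dx)
            return total

        # x^3 - 3x - 1 with starting approximation 2:
        print(newton_shift([-1.0, -3.0, 0.0, 1.0], 2.0))
        # ~ 1.879385241571817 = 2*cos(20 degrees)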

    - Jim