
Conference rusure::math

Title:Mathematics at DEC
Moderator:RUSURE::EDP
Created:Mon Feb 03 1986
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:2083
Total number of notes:14613

1581.0. "Coefficients of the Interpolating Polynomial ???" by ROMCSA::VENTURINO () Thu Mar 12 1992 07:46

	Hi,

I need to know a method to evaluate the COEFFICIENTS of the interpolating 
polynomial that passes through N points in two dimensions.  My tabulated 
points satisfy D = f(x,y), where x and y are two arrays of N elements each.

Does anyone have any help or suggestions?

						Tina.

Posted also in  algorithms conference.
			

1581.1BEING::EDPAlways mount a scratch monkey.Thu Mar 12 1992 10:5931
    Re .0:
    
    If you want the polynomial to exactly pass through n points, then the
    polynomial is called the Lagrangian.  Suppose your points are (x1,y1),
    (x2,y2), . . . (xn,yn).  I will also write x[1] for x1 or x[i] for the
    i-th x.  Then the polynomial is:
    
                n
    	P(x) = sum l[i](x)*y[i],
    	       i=1
    
    where l[i](x) is another polynomial:
    
                     n
    	l[i](x) = product (x-x[j])/(x[i]-x[j]).
    	            j=1,
    	        but skip j=i
    
    For example, if your points are (3,4), (5,6), and (7,9), the Lagrangian
    is:
    
    	(x-5)(x-7)       (x-3)(x-7)       (x-3)(x-5)
    	---------- * 4 + ---------- * 6 + ---------- * 9.
    	(3-5)(3-7)       (5-3)(5-7)       (7-3)(7-5)
    
    To get the coefficients for x^2, x, and 1, you have to reduce that
    polynomial to simplest form.  Maybe somebody has a program handy for
    this.
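
    As a minimal sketch of that reduction (in Python with NumPy, purely
    illustrative; any polynomial-expansion routine would do), one can
    expand each l[i](x) and accumulate the power-basis coefficients:

```python
import numpy as np

def lagrange_coeffs(xs, ys):
    # Expand P(x) = sum over i of y[i] * l[i](x) into power-basis
    # coefficients, highest degree first.
    n = len(xs)
    coeffs = np.zeros(n)
    for i in range(n):
        # np.poly expands the product of (x - x[j]) over the given
        # roots; this is the numerator of l[i](x).
        num = np.poly([xs[j] for j in range(n) if j != i])
        den = np.prod([xs[i] - xs[j] for j in range(n) if j != i])
        coeffs += ys[i] * num / den
    return coeffs

# The example above: points (3,4), (5,6), (7,9)
print(lagrange_coeffs([3, 5, 7], [4, 6, 9]))  # P(x) = 0.125*x^2 + 2.875
```

    For the example points the Lagrangian reduces to P(x) = x^2/8 + 23/8,
    which indeed takes the values 4, 6, 9 at x = 3, 5, 7.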
    
    
    				-- edp
1581.2ROMCSA::VENTURINOThu Mar 12 1992 12:149
re .1:

Thanks for the reply,
...and what about multidimensional interpolation?
How can I do the same analysis but in three dimensions?
In other words how is the polynomial P(x,y)?

Thanks again,
			Tina.
1581.3undetermined coefficients3D::ROTHGeometry is the real life!Thu Mar 12 1992 14:1461
    You can solve such problems in general in terms of undetermined
    coefficients.

    Think of all the allowable powers of x and y as forming a linear
    vector space, with a basis consisting of those powers.  Then
    your general polynomial will be a linear combination of these
    powers.  For example, a general cubic will be of the form

	f(x,y) = c00 + c01*x + c10*y + c02*x^2 + c11*x*y + c20*y^2 +
		 c03*x^3 + c12*x^2*y + c21*x*y^2 + c30*y^3

    You are asking for an f that takes on specified values at a set of
    points - for a cubic you need 10 of them to uniquely determine all
    10 values of C above.

    One way to do this is by setting up a set of simultaneous equations and
    using a linear equation solving routine.

    For this cubic example, you'd populate a matrix with the powers
    of x and y at 10 data points.  The unknowns are the c's above and
    the right hand side would be the values of f at the data points.

    Like this:

    | 1 x0 y0 x0^2  x0*y0  y0^2  x0^3 x0^2*y0 x0*y0^2 y0^3 | | c00 |   | f0 |
    | 1 x1 y1 x1^2  x1*y1  y1^2  x1^3 x1^2*y1 x1*y1^2 y1^3 | | c01 |   | f1 |
    | 1 x2 y2 x2^2  x2*y2  y2^2  x2^3 x2^2*y2 x2*y2^2 y2^3 | | c10 |   | f2 |
    ................................................................ = ......
    | 1 x7 y7 x7^2  x7*y7  y7^2  x7^3 x7^2*y7 x7*y7^2 y7^3 | | c12 |   | f7 |
    | 1 x8 y8 x8^2  x8*y8  y8^2  x8^3 x8^2*y8 x8*y8^2 y8^3 | | c21 |   | f8 |
    | 1 x9 y9 x9^2  x9*y9  y9^2  x9^3 x9^2*y9 x9*y9^2 y9^3 | | c30 |   | f9 |
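
    A sketch of that setup (Python/NumPy here purely for illustration;
    in practice you would call a LINPACK-style solver):

```python
import numpy as np

def bivariate_cubic_coeffs(x, y, f):
    # Method of undetermined coefficients: populate the 10-column
    # matrix above from the data points and solve A c = f for the c's.
    x, y, f = (np.asarray(a, dtype=float) for a in (x, y, f))
    # column order: 1, x, y, x^2, x*y, y^2, x^3, x^2*y, x*y^2, y^3
    A = np.column_stack([np.ones_like(x), x, y,
                         x**2, x*y, y**2,
                         x**3, x**2*y, x*y**2, y**3])
    return np.linalg.solve(A, f)
```

    The 10 data points must be chosen so the matrix is nonsingular;
    a triangular lattice of the kind described below works.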

    Another way that doesn't give the coefficients directly, but also
    doesn't require solving simultaneous equations is to use a 2 dimensional
    generalization of a Newton interpolating polynomial.  I can give
    instructions on how to use that if you wish.  Another alternative is
    to structure your data points in such a way that solving the
    equations above is much simplified.  For example, if you are free to
    choose your x's and y's to lie on a triangular lattice of points
    of this kind of form

                    o
                   / \
                  o---o
                 / \ / \
                o---o---o
               / \ / \ / \
              o---o---o---o

    where the data lie on straight lines the interpolating polynomial
    can be written down by inspection.  (This is a 2 dimensional
    generalization of Lagrangian interpolation.)

    It's unclear why you want an implicit form, but it can be done
    in a variety of ways.

    Also, it's permissible to have more than the minimum number of data
    points.  In that case you could ask for a least squares solution,
    which people do all the time when fitting experimental data.
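
    For that over-determined case a least-squares solve drops straight in;
    sketched here with NumPy's lstsq for illustration (the sample data and
    model are made up):

```python
import numpy as np

# Hypothetical example: 25 noisy samples of a bivariate cubic,
# fit by least squares instead of exact interpolation.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 25)
y = rng.uniform(-1.0, 1.0, 25)
f = 1 + 2*x - y + 4*x*y + 0.01 * rng.normal(size=25)

# same 10 basis columns as the exact fit, just more rows than columns
A = np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2,
                     x**3, x**2*y, x*y**2, y**3])
coef, res, rank, sv = np.linalg.lstsq(A, f, rcond=None)
```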

    - Jim    
1581.4some questions..STAR::ABBASIThu Mar 12 1992 17:1014
    >Think of all the allowable powers of x and y as forming a linear
    >vector space, with a basis consisting of those powers.  Then
    
    I'm not sure I understand this.  Are the powers all over the field of
    integers, rationals, or what?
    The integers do not make a vector space over the field of integers,
    because there is no element such that a*(inverse(a)) = 1.
    
    I'm sure I'm missing something here. (So what's new..?)
    
    If the powers are the rational numbers, then what is the basis?
    
    Confused..
    /nasser
1581.5GAUSS::ROTHGeometry is the real life!Thu Mar 12 1992 19:4216
   Re .4

	Polynomials form a linear space.  One basis is the powers
   themselves, such as 1, x, x^2, x^3, ... these are clearly linearly
   independent (consider a Vandermonde determinant).

	Other bases would be the Bernstein polynomials, various sets of
   orthogonal polynomials, forward difference polynomials, Lagrangian
   functions, and so forth.

	This works over multivariate polynomials as well.

	Philip Davis' book on approximation theory is one good reference.
   (an inexpensive Dover paperback.)

   - Jim
1581.6ROMCSA::VENTURINOFri Mar 13 1992 07:5114
RE .3:

Thanks for your suggestions.  I'm investigating an implicit form because
a customer of mine asked for this kind of approach.  From my point of view, 
a statistical approach (least squares, confidence limits, and so on) is 
better, also because it is well known that the coefficients of the 
interpolating polynomial cannot be determined with a high degree of accuracy.
In your reply you mention the two-dimensional generalization of a Newton 
interpolating polynomial: I'd be very grateful if you would give me some
instructions on how to use it, or a pointer to some literature about it. 
I'm trying to gain enough knowledge about this kind of data analysis.

							Tina.
1581.7y3D::ROTHGeometry is the real life!Fri Mar 13 1992 12:16119
    It is not *necessarily* true that interpolating polynomial
    coefficients are ill conditioned, but they often are.
    A low order (say, quadratic or cubic) fit is usually no problem.

    The information I have on this is scattered about and not in one place,
    unfortunately.

    Many numerical analysis books discuss finite differences, usually
    only the one dimensional case.  Look in _Numerical Recipes_ for
    some discussion on that as well as least squares.  The Schaums
    outline on numerical analysis has an unusually good discussion of finite
    differences and interpolation in one dimension.

    Multidimensional generalizations are discussed in finite element
    texts and not numerical analysis per se.  The introductory volume of
    the books by J. T. Oden (I think it's called a course in finite elements)
    is one of the best I know of.

    A book on computer aided design by G. Farin, _Curves and Surfaces
    in CAGD_ also discusses multidimensional interpolation - it's one of
    the best on that subject.

    Anyway, to give a general idea of this, here first is a generalization
    of Lagrangian interpolation.

    In one dimension, you construct cardinal polynomials that are zero
    at all interpolation points but one, where the function is equal to one.
    Then a weighted sum of these guys will trivially interpolate an
    arbitrary set of function values.  This is what EDP showed in his
    note.

    To do this in two dimensions in a straightforward way, you should lay
    your interpolation points out to be the intersection of a mesh of
    lines.  Each line has the implicit equation

	l[i](x, y) = a[i]*x + b[i]*y + c[i]

    which is zero on the line.  You form your interpolant by products of
    the implicit line equations of all lines *not* intersecting a given
    point divided by that product with (x,y) equal to the coordinates of
    the given point.  That's why you have to constrain the data points
    to lie in a triangular mesh, so it's easy to solve for the a,b,c
    coefficients of each line passing through the data points.  Otherwise
    it's just as much of a mess to solve for the Lagrangian coefficients as
    to do the general linear equation approach.  In finite elements
    books, they normalize to a unit triangle first and do the interpolation
    there (via a change of coordinates) making it particularly easy
    to write things down.

    Newton interpolation is based on forward difference polynomials.

    These are of the form k[n](x) = x(x-1)(x-2)...(x-n+1)/n!

    The neat thing about these is the forward difference of one of these
    is just one lower order.

			    (x+1)x(x-1)...(x-n+2)-x(x-1)...(x-n+1)
	k[n](x+1)-k[n](x) = --------------------------------------
                                              n!

                            x(x-1)...(x-n)
			  = -------------- * ((x+1)-(x-n+1)) = k[n-1](x)
                                   n!

    A function in polynomial form

	a0 + a1*x + a2*x^2 + ...

    can be written in forward difference form in terms of the k's

	b0 + b1*k[1](x) + b2*k[2](x) + b3*k[3](x) + ...

    If you evaluate this at x = 0, you get b0.  If you forward difference
    this, everything shifts left

	b1 + b2*k[1](x) + ...

    and evaluating that at zero gets you b1 and so on.  So you can
    get the b's by forming a table of differences, differences of
    differences, and so on.
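
    A sketch of that difference table in Python (illustrative only; each
    b[k] falls out as the first entry of the k-th difference row):

```python
from math import comb

def forward_diff_coeffs(ys):
    # ys are samples at x = 0, 1, 2, ...; each b[k] is the first
    # entry of the k-th row of forward differences.
    bs = []
    d = list(ys)
    while d:
        bs.append(d[0])
        d = [d[i + 1] - d[i] for i in range(len(d) - 1)]
    return bs

def eval_forward_diff(bs, x):
    # k[n](x) = x(x-1)...(x-n+1)/n!, which at integer x is just C(x, n)
    return sum(b * comb(x, n) for n, b in enumerate(bs))
```

    For example, f(x) = x^2 sampled at x = 0..3 gives b's of [0, 1, 2, 0],
    i.e. x^2 = k[1](x) + 2*k[2](x) = x + x(x-1), which checks out.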

    The 2 dimensional generalization of the power basis is

	a00 + a01*x + a10*y + a02*x^2 + a11*x*y + a20*y^2 + ...

    and of the forward difference basis is

	b00 + b01*k[1](x) + b10*k[1](y) + b02*k[2](x) + b11*k[1](x)*k[1](y) +
	b20*k[2](y) + ...

    (I've never seen this in a book but it's an obvious generalization.)

    If the data is not laid on a regular unit spaced grid, then you have
    to form divided differences instead of simple differences, but the
    idea is the same.  Your successive divided differences are of the form

	y[x0] = y0, y[x1] = y1, ...

	y[x0, x1] = (y[x1] - y[x0])/(x1-x0)
	y[x1, x2] = (y[x2] - y[x1])/(x2-x1)

	y[x0, x1, x2] = (y[x1, x2] - y[x0, x1])/(x2-x0)

	y[x0, x1, x2, x3] = (y[x1, x2, x3] - y[x0, x1, x2])/(x3-x0)

    And the Newton polynomial is

	f(x) = y[x0] + (x-x0)*(y[x0, x1] + (x-x1)*(y[x0, x1, x2] + (x-x2)*(...
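
    The divided-difference table and the nested evaluation above can be
    sketched like so (plain Python, purely illustrative):

```python
def newton_coeffs(xs, ys):
    # Returns y[x0], y[x0,x1], y[x0,x1,x2], ... by building the
    # divided-difference table in place, one column at a time.
    n = len(xs)
    c = list(ys)
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - k])
    return c

def newton_eval(xs, c, x):
    # Horner-style evaluation of the nested form above.
    result = c[-1]
    for k in range(len(c) - 2, -1, -1):
        result = c[k] + (x - xs[k]) * result
    return result
```

    For the points (3,4), (5,6), (7,9) from the earlier Lagrangian example,
    the table gives c = [4, 1, 0.125], and newton_eval reproduces the same
    quadratic.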

    Again, you'd write a multidimensional generalization much the same
    way.  Note that this again requires the points to be on a regular
    lattice as in the Lagrangian case - that's the limitation of these
    closed form solutions.

    Check out the references, they should be helpful. (Least squares
    is probably your best bet though from the sound of things; if you
    want the LINPACK routines, let me know.)

    - Jim
1581.8re .7: many many thanksROMCSA::VENTURINOFri Mar 13 1992 12:450
1581.9caveat emptorPULPO::BELDIN_RPull us together, not apartMon Mar 16 1992 12:2724
   Re:                    <<< Note 1581.8 by ROMCSA::VENTURINO >>>

Just a few cautions I didn't see anyone mention.

   1) Regardless of whether you assume exact or statistical fit,
      the total number of terms to be estimated is a critical
      determinant of whether you can be practically successful.
      Numerical accuracy is a major concern in all of these
      problems and can be a stumbling point if ignored.  Note
      that even a cubic in two variables has 10 coefficients to be
      calculated due to interactions between the dimensions.  This
      problem is comparable in size to fitting a 9th order polynomial.
      
   2) Such problems really need careful attention from an expert
      because there are many special cases and opportunities that
      the generalists among us will never be able to identify.
      If the application is at all critical, I recommend that
      you not deliver anything to the customer until it has been
      worked over by a specialist.  If it's just a nice-to-have
      for the customer, then maybe that's overkill.
      
fwiw,

   Dick
1581.10ROMCSA::VENTURINOWed Mar 25 1992 07:1215

	Hi folks,

	thanks again for your answers and suggestions.
	I've been studying all the material about
	my problem carefully, and I conclude that regression
	analysis is the best way.  Now I would like to have
	the LINPACK routines; could someone give me a pointer?


						Tina.
	


1581.11some pointersSTAR::ABBASIi^(-i) = SQRT(exp(PI))Wed Mar 25 1992 08:0320
    Tina,
    Did you try Netlib?  I think LINPACK might be available there (not
    sure); you can get the index and check.  I don't have the index
    handy right now.
    
    See 1583.6 on how to send mail for the index.
    
    Also check the DXML library; they might already have some routines
    for what you want, and you can get the kit and install it on your
    system.  Check the DXML notes file; they have a Bookreader volume
    you can view for online documentation on DXML.
    
    There is also a book that comes with a DOS floppy diskette; the
    floppy contains LINPACK Fortran source code routines, QUADPACK,
    MINPACK, SLATEC, and other functions.  The book is _Numerical
    Methods and Software_, Prentice Hall, ISBN 0-13-627258-4.
    If you want it, call Prentice Hall and they'll ship you a copy, or
    check your local DEC library; they might have one.
    
    /nasser
1581.12ISSHIN::MATTHEWSOO -0 -/ @Tue Jun 09 1992 17:1313
       <<< Note 1581.1 by BEING::EDP "Always mount a scratch monkey." >>>

EDP,

	In using the Lagrangian interpolation, do the independent variables
have to be normalized?  What I mean is, do the independent variables have to 
fall between 0 and 1, for instance?



				Thanx,

				 Ron
1581.13BEING::EDPAlways mount a scratch monkey.Wed Jun 10 1992 11:596
    Re .12:
    
    No, they can be anything, as long as they are distinct from each other.
    
    
    				-- edp