
Conference rusure::math

Title:Mathematics at DEC
Moderator:RUSURE::EDP
Created:Mon Feb 03 1986
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:2083
Total number of notes:14613

1305.0. "Interval Arithmetic?" by CSC32::S_JOHNSON (Lifetime Member of Aye Phelta Thi) Thu Oct 04 1990 15:34

Hi,

I have an idea regarding something I'd like to do for a research
project.  I'd like to find out if anyone else has done anything similar
or if there is even an interest in my idea.

Real numbers cannot usually be represented cleanly and must be
approximated.  When they are approximated, they usually fall in an
interval which represents a range of values for real numbers.  An idea
that was given to me, is to create a library of functions that do
what I'll call interval mathematics.  That is, when given a real number,
it determines if the real number can be represented cleanly and if so it
does the mathematical operations on the exact number.  If it cannot be
represented cleanly, it does the mathematical operations on the interval
that the number falls in and the result is also an interval.

Has something like this been done?  If not would there be any interest
in having something like this to use?

Thanks.

Scott Johnson
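
For concreteness, a minimal sketch of the kind of library being described
(Python; the class name and operations are illustrative assumptions only,
using naive endpoint arithmetic, and the outward rounding a real package
would need for guaranteed enclosures is omitted):

import math

class Interval:
    """A closed interval [lo, hi] standing in for an inexact real."""
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __truediv__(self, other):
        if other.lo <= 0.0 <= other.hi:
            raise ZeroDivisionError("divisor interval contains zero")
        return self * Interval(1.0 / other.hi, 1.0 / other.lo)

    def __repr__(self):
        return "[%g, %g]" % (self.lo, self.hi)

# A value that is exactly representable becomes a degenerate (zero-width)
# interval; one that is not (e.g. 0.1) gets a one-ulp-wide machine interval
# known to enclose it.
exact = Interval(0.5, 0.5)
tenth = Interval(math.nextafter(0.1, 0.0), math.nextafter(0.1, 1.0))
print(exact + tenth, exact * tenth)
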
1305.1. "Like many of the best ideas..." by CADSYS::COOPER (Topher Cooper) Thu Oct 04 1990 17:44  (10 lines)
    Yes, it has, and you even got the name right.  Interval arithmetic is a
    major tool in evaluating the accuracy and precision of numeric
    calculations (although it is sometimes grossly conservative,
    overestimating by a large amount the size of the maximum error
    associated with a particular calculation).

    Such a package might be very useful, but you may be better off
    importing and making available an existing public domain one.

					    Topher
1305.2. "How hard is it to implement?" by CSC32::S_JOHNSON (Lifetime Member of Aye Phelta Thi) Thu Oct 04 1990 20:40  (8 lines)
    This would be for a graduate project to get a master's degree in
    computer science.
    
    I just wanted to find out if it has been done and to what extent. 
    Whether it was just for +-/* or for the trig functions and log
    functions also.
    
    scott
1305.3. "I think a lot of people have looked at this." by DECWET::BISHOP (Nihon ga natsukashii ...) Thu Oct 04 1990 21:10  (54 lines)
	Chapter 3(?) of "Structure and Interpretation of Computer
	Programs" discusses this, and has some homework problems on the
	topic.

	I think one way to make it more interesting is if each of the 
	intervals was the pdf of the random variable representing the
	actual data.  A data "point" would start out as the uniform 
	distribution (unless you had an a priori pdf for it.  However,
	we'll see below that doesn't matter much in the limit):

	_________________
	|		|
	|		| height 1/w.

	<-------w------->	(<--presumably a function of the floating
				point precision)

	Since summing rv's convolves their pdfs, the sum (or difference)
	of two such elementary points would be the triangle pdf:

			^

					<-- The top is too hard to draw

			etc.
	   /			      \
	  /			       \
	 /			        \	height 1/w 
	/                |		 \
	<-------w------->|<-------w------->	(obviously not the same scale
						as the above).


	The next step is a piecewise quadratic.  There are more complicated
	shapes such as trapezoids, etc. for summing a pdf of one "degee" (=
	number of sums from elementary data points).

	However, by the central limit theorem, sums of these generalized intervals 
	would tend toward Normal pdfs.  Also, since summing Normal rv's gives a
	Normal rv whose variance is the sum of the variances, the pdf's would become
	flatter over time (a consequence of repeated convolution of the
	pdf's), reflecting the fact that the uncertainty in the final answers
	grows with more processing.

	Note that in the limit, the influence of the SHAPE of the initial 
	distribution goes away, but the variance remains.
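
	As a quick numerical check of the above (a sketch in Python/numpy;
	the elementary width, the grid resolution, and the number of sums
	are arbitrary choices): repeated convolution of the uniform pdf
	gives the triangle, then the piecewise quadratic, and the variance
	grows by w^2/12 with every sum.

import numpy as np

w = 1.0                          # width of the elementary interval
n = 1000                         # grid points across one width w
dx = w / n
uniform = np.full(n, 1.0 / w)    # pdf of U(0, w), height 1/w

pdf = uniform.copy()
for k in range(2, 6):
    # convolving pdfs gives the pdf of the sum of independent rv's
    pdf = np.convolve(pdf, uniform) * dx
    x = np.arange(pdf.size) * dx
    mean = np.sum(x * pdf) * dx
    var = np.sum((x - mean) ** 2 * pdf) * dx
    print("sum of %d points: variance %.4f (expected %d * w^2/12 = %.4f)"
          % (k, var, k, k * w * w / 12))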

	The pdf for the product of two rv's is more complicated (I seem to
	recall that for Normals it is a Bessel function of some type), but
	I imagine results similar to the above will fall out.

	Avery
	
1305.4. "A correction" by DECWET::BISHOP (Nihon ga natsukashii ...) Thu Oct 04 1990 21:17  (14 lines)
	A typo:

	>The next step is a piecewise quadratic.  There are more complicated
	>shapes such as trapezoids, etc. for summing a pdf of one "degee" (=
	>number of sums from elementary data points)

	This should say "of different 'degrees'".  I got lazy on the proof
	reading.

	Incidentally, I think one of the numerical analysis texts I used, 
	maybe Ralston & Rabinowitz, touched on this summing of pdfs due to
	the limitations in the internal representation.

	-fab
1305.5. "Not too hard for the elementary ops." by CADSYS::COOPER (Topher Cooper) Fri Oct 05 1990 13:54  (67 lines)
RE: .2 (Scott)

    I just checked in the Digital Library Network VTX catalog and got a
    bunch of citations.  I used the subject keyword "interval", which hit
    on the subject of "Interval Analysis".  Most of these were technical
    reports which apparently reported work making use of interval analysis.
    The following books are of interest and can be obtained via internal
    DEC interlibrary loan:

	    Kulisch, Ulrich.  Computer Arithmetic in Theory and Practice
		1981

	    Moore, Ramon E.  Methods and Applications of Interval Analysis
		1979

	    Moore, Ramon E.  Reliability in Computing:  The Role of
		Interval Methods in Scientific Computing, 1988

	    Ratschek, H. New Computer Methods for Global Optimization, 1988
		[This would also appear to be an application, albeit a
		 broad one.  As a book, however, it may have a substantial
		 introduction, since it was important enough to list as a
		 subject topic.]

    One general technical report was apparent:

	    Kyburg, Henry E.  In defense of intervals (University of
		Rochester TR. 17p) 1988

    I would suggest starting with the TR and the 88 book by Moore.  That
    should give you enough to know whether you wish to pursue it and give
    you some other book and paper citations.

>    Whether it was just for +-/* or for the trig functions and log
>    functions also.

    Basically, the problem of transforming a function 'f' from the domain
    of single values to the domain of intervals is as follows:
    Let an interval be expressed as [a>w] with a being the starting point
    and w being the width (this is a bit more convenient here than
    describing it as a center point and a half width, and shows the
    relevant structure better than describing it in terms of its two
    end points); then

	    f([a>w]) = [f(a) > f(a+w) - f(a)]

    if f is monotonically increasing over the interval.  There is an
    obvious conversion if it is monotonically decreasing over the interval.

    If there is a clean expression for f(a+w), then you get clean results.
    If f(a+w) is messy, then you get messy results.  You can still express
    the interval in terms of the mappings of its end-points, but you
    generally don't get any simplification with each step and you get
    rather unwieldy expressions.

    For log, there is no closed expression for log(a+w) so you just have
    to keep calculating in terms of the log of the end points.

    The trig functions are worse since they are not monotonic unless you
    can avoid overlapping the right-angle multiples with your intervals.
    In general the end points of the result intervals do not correspond
    to the mapping of the end points of the input intervals.
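
    As a concrete sketch of both cases (Python; the function names and the
    use of the [a>w] start-plus-width form are illustrative choices only):
    a monotone function can be pushed through its end points, while sin has
    to be checked for interior extrema.

import math

def extend_increasing(f, a, w):
    # f monotonically increasing on [a, a+w]: just map the end points
    return (f(a), f(a + w) - f(a))                 # (new start, new width)

def extend_sin(a, w):
    # sin is not monotone: end points plus any interior extrema
    lo = min(math.sin(a), math.sin(a + w))
    hi = max(math.sin(a), math.sin(a + w))
    k = math.ceil((a - math.pi / 2) / math.pi)     # first critical point >= a
    while math.pi / 2 + k * math.pi <= a + w:
        s = math.sin(math.pi / 2 + k * math.pi)    # +1 or -1
        lo, hi = min(lo, s), max(hi, s)
        k += 1
    return (lo, hi - lo)

print(extend_increasing(math.log, 1.0, 1.0))   # log over [1, 2]
print(extend_sin(1.0, 2.0))                    # [1, 3] contains pi/2, max is 1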

    Can this be dealt with?  Of course, but I doubt if anyone has managed
    to find a way of dealing with it which is both general and pretty.

					Topher
1305.6. "Gets messy." by CADSYS::COOPER (Topher Cooper) Fri Oct 05 1990 14:07  (25 lines)
RE: .3 (Avery)

>	The pdf for the product of two rv's is more complicated (I seem to
>	recall that for Normals it is a Bessel function of some type), but
>	I imagine results similar to the above will fall out.

    Except that as far as I know there is no equivalent to the central
    limit theorem, so that things are likely to get messier rather than
    "smoothing out" as the computation proceeds.  Also we only have a
    normal curve if we got our interval from a good many additions (how
    many rather depends on the sensitivity of the multiplication rule
    for normals).  I think that there is a general description for the
    product of two distributions in terms of their moment generating
    functions, but this wouldn't necessarily produce particularly clean
    or interpretable results.

    And as I remember the quotient of two normals is rather ill-behaved --
    no mean.

    An interesting idea would be to combine this idea, symbolic execution
    (which interval analysis is sort of a form of), and Monte Carlo
    simulation to produce estimates of the error distribution under various
    assumptions about the input values to a calculation.
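
    A rough sketch of the Monte Carlo half of that combination (Python; the
    uniform input error model and the example calculation are made-up
    assumptions): sample the inputs from assumed error distributions, run
    the calculation many times, and look at the empirical spread of the
    result.

import random
import statistics

def noisy(value, half_width):
    # assumed error model: the true value is uniform over the rounding interval
    return random.uniform(value - half_width, value + half_width)

def calculation(x, y):
    # an example calculation; 1/(x - y) is sensitive when x and y are close
    return 1.0 / (x - y)

samples = [calculation(noisy(1.0001, 2e-5), noisy(1.0000, 2e-5))
           for _ in range(100_000)]
print("mean  %.1f" % statistics.mean(samples))
print("stdev %.1f" % statistics.pstdev(samples))
print("range %.1f .. %.1f" % (min(samples), max(samples)))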

					Topher
1305.7. by DECWET::BISHOP (Nihon ga natsukashii ...) Fri Oct 05 1990 20:54  (29 lines)
>    Except that as far as I know there is no equivalent to the central
>    limit theorem, so that things are likely to get messier rather than
>    "smoothing out" as the computation proceeds.  Also we only have a
>    normal curve if we got our interval from a good many additions (how
>    many rather depends on the sensitivity of the multiplication rule
>    for normals).  I think that there is a general description for the
>    product of two distributions in terms of their moment generating
>    functions, but this wouldn't necessarily produce particularly clean
>    or interpretable results.

	If you add a bunch (say N) of these product rv's you get a
	generalization of a noncentral chi-square distribution with N
	degrees of freedom, which also tends toward Normal for large N
	(around 100).  This still doesn't say anything about N
	multiplies, though.  Sigh.  Give up, use the computer.

>    And as I remember the quotient of two normals is rather ill-behaved --
>    no mean.

	In a former lifetime I wrote a dissertation with an appendix 
	that characterized both the product and quotient of rv's. Of
	course, there was nothing in closed form of any use.  I did it 
	all by hand, but later I found a book called, I think, "The Algebra
	of Random Variables" that had it all done.  I kicked myself for
	not just quoting that instead of going to all the trouble of 
	writing the appendix.

	Avery
1305.8. "from usenet sci.math" by GUESS::DERAMO (Dan D'Eramo) Fri Oct 05 1990 23:17  (59 lines)
Path: ryn.esg.dec.com!shlump.nac.dec.com!decuac!haven!aplcen!uakari.primate.wisc.edu!zaphod.mps.ohio-state.edu!swrinde!ucsd!orion.oac.uci.edu!cedman
From: cedman@lynx.ps.uci.edu (Carl Edman)
Newsgroups: sci.math
Subject: Re: Interval Arithmetic?
Message-ID: <CEDMAN.90Oct4143558@lynx.ps.uci.edu>
Date: 4 Oct 90 21:35:56 GMT
References: <1759@shodha.enet.dec.com>
Followup-To: sci.math
Organization: non serviam
Lines: 46
Nntp-Posting-Host: lynx.ps.uci.edu
In-reply-to: scott@talguy.enet.dec.com's message of 4 Oct 90 16:15:36 GMT
 
In article <1759@shodha.enet.dec.com> scott@talguy.enet.dec.com (Scott Johnson) writes:
   Real numbers cannot usually be represented cleanly and must be
   approximated.  When they are approximated, they usually fall in an
   interval which represents a range of values for real numbers.  An idea
   that was given to me, is to create a library of functions that do
   what I'll call interval mathematics.  That is, when given a real number,
   it determines if the real number can be represented cleanly and if so it
   does the mathematical operations on the exact number.  If it cannot be
   represented cleanly, it does the mathematical operations on the interval
   that the number falls in and the result is also an interval.
 
   Has something like this been done?  If not would there be any interest
   in having something like this to use?
 
Yes, something like this has been done before and in fact isn't
very difficult to do. One problem with this is that in doing
the computation you assume a) that the 'true' value of the variable
is normally distributed and b) that 2 variables which are combined
really are independent. This causes many standard arithmetic
transformations of terms to no longer 'work', and other
strange results.
e.g.
 
  V - V = 0 in normal arithmetic; now take V to be one of your
distribution variables, e.g. 10 +/- 1, and you get:
 
  10+/-1 - 10+/-1 = 0 +/- sqrt(2)
 
 Or try this: The usual formula for parallel resistors:
 
  R1 = ((R2)^-1 + (R3)^-1)^-1
 
 Now try to use an equivalent formula:
 
  R1 = R2 * R3 / (R2 + R3)
 
 And you will get different error bars for R1.
 
	Carl Edman
 
 
 
Theoretical Physicist, N.: A physicist whose | Send mail
existence is postulated, to make the numbers |  to
balance but who is never actually observed   | cedman@golem.ps.uci.edu
in the laboratory.                           | edmanc@uciph0.ps.uci.edu
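
The resistor example is easy to reproduce with plain interval arithmetic as
well (a Python sketch; the endpoints chosen for R2 and R3 are arbitrary):
the two algebraically equivalent formulas give intervals of different
widths, because every occurrence of R2 and R3 is treated as an independent
interval.

def i_add(a, b): return (a[0] + b[0], a[1] + b[1])

def i_mul(a, b):
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

def i_inv(a):                       # assumes the interval excludes zero
    return (1.0 / a[1], 1.0 / a[0])

def i_div(a, b): return i_mul(a, i_inv(b))

R2 = (99.0, 101.0)                  # 100 ohms, +/- 1
R3 = (199.0, 201.0)                 # 200 ohms, +/- 1

direct     = i_inv(i_add(i_inv(R2), i_inv(R3)))      # ((R2)^-1 + (R3)^-1)^-1
rearranged = i_div(i_mul(R2, R3), i_add(R2, R3))     # R2*R3/(R2+R3)

print(direct, rearranged)           # same true value, different widths
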
1305.9. "The easy problems have already been solved" by VMSDEV::HALLYB (The Smart Money was on Goliath) Mon Oct 08 1990 14:08  (12 lines)
.6>    An interesting idea would be to combine this idea, symbolic execution
.6>    (which interval analysis is sort of a form of), and Monte Carlo
.6>    simulation to produce estimates of the error distribution under various
.6>    assumptions about the input values to a calculation.
    
    Interesting.  And if you combine this with some numerically difficult
    calculations (e.g., dividing a huge value by the difference of two
    very similar small values -- the kinds of problems where accuracy
    really is a concern) then you might end up with an application of
    chaos theory.
    
      John
1305.10. "usenet sci.math" by GUESS::DERAMO (Dan D'Eramo) Mon Oct 08 1990 14:33  (82 lines)
Path: ryn.esg.dec.com!shlump.nac.dec.com!lemans.dec.com!decuac!haven!uflorida!rex!wuarchive!cs.utexas.edu!yale!cmcl2!kramden.acf.nyu.edu!brnstnd
From: brnstnd@kramden.acf.nyu.edu (Dan Bernstein)
Newsgroups: sci.math
Subject: Re: Interval Arithmetic?
Message-ID: <10250:Oct504:12:4990@kramden.acf.nyu.edu>
Date: 5 Oct 90 04:12:49 GMT
References: <1759@shodha.enet.dec.com>
Organization: IR
Lines: 6
 
R. E. Moore's book is probably still the best reference for interval
analysis. You should also check out the section in Knuth for a few items
I didn't see in Moore. I've heard rumors of an interval library floating
around, though I haven't seen it.
 
---Dan

Path: ryn.esg.dec.com!shlump.nac.dec.com!decuac!haven!uflorida!rex!samsung!uunet!snorkelwacker!apple!rutgers!cunixf.cc.columbia.edu!cunixb.cc.columbia.edu!jpl5
From: jpl5@cunixb.cc.columbia.edu (Jay P Lessler)
Newsgroups: sci.math
Subject: Re: Interval Arithmetic?
Message-ID: <1990Oct5.150801.30547@cunixf.cc.columbia.edu>
Date: 5 Oct 90 15:08:01 GMT
References: <1759@shodha.enet.dec.com>
Sender: news@cunixf.cc.columbia.edu (The Daily News)
Organization: Columbia University
Lines: 36
 
[...] 
Sorry if I misunderstood you, but how is this different from the standard
error analysis done in labs?
[...]
--Jay Lessler

Path: ryn.esg.dec.com!shlump.nac.dec.com!decuac!haven!udel!burdvax!ubbpc!wgh
From: wgh@ubbpc.UUCP (William G. Hutchison)
Newsgroups: sci.math
Subject: Re: Interval Arithmetic?
Summary: There is a huge amount of literature on interval arithmetic...
Message-ID: <751@ubbpc.UUCP>
Date: 5 Oct 90 14:16:49 GMT
References: <1759@shodha.enet.dec.com>
Organization: Unisys UNIX Portation Center, Blue Bell, PA
Lines: 37
 
In article <1759@shodha.enet.dec.com>, scott@talguy.enet.dec.com (Scott Johnson) writes:
> ------------------------------------------------------------------------
> I have an idea regarding something I'd like to do for a research
> project.  I'd like to find out if anyone else has done anything similar
> or if there is even an interest in my idea.
> 
> Real numbers cannot usually be represented cleanly and must be
> approximated.   [ ... ]
> [ ... ] interval mathematics.  That is, when given a real number,
> it determines if the real number can be represented cleanly and if so it
> does the mathematical operations on the exact number.  If it cannot be
> represented cleanly, it does the mathematical operations on the interval
> that the number falls in and the result is also an interval.
> Has something like this been done?  [ ... ]
> 
> Scott Johnson
 
 Yup, it's been done!
 
The main reference is Ramon Moore's Thesis at U Penn > 20 years ago.
 
There are lots of followup papers and symposia.
 
There is an old ACM Algorithm that does this (Algol).
 
A group at U Wisconsin (I think) defined Triplex numbers, which are the
two ends of the interval plus a "middle" number.
 
I have written a C++ class for interval arithmetic.
Interval arithmetic packages have been posted to comp.sources in the past.
 
 Aside from that, not much happening with interval arithmetic :-)
-- 
Bill Hutchison, DP Consultant	rutgers!cbmvax!burdvax!ubbpc!wgh (work)
Unisys UNIX Portation Center	uunet!eidolon!wgh (home)
P.O. Box 500, M.S. B121         "At the moment I feel more like arguing than
Blue Bell, PA 19424		being good" Raymond Smullyan _The Tao is Silent_
1305.11. "?" by EAGLE1::BEST (R D Best, sys arch, I/O) Tue Oct 09 1990 20:29  (5 lines)
>	Since summing rv's convolves their pdfs, the sum (or difference)
>	of two such elementary points would be the triangle pdf:

Don't you have to assume that the associated random variables are
independent to do this ?
1305.12. "No" by COOKIE::PBERGH (Peter Bergh, DTN 523-3007) Tue Oct 09 1990 20:38  (10 lines)
         <<< Note 1305.11 by EAGLE1::BEST "R D Best, sys arch, I/O" >>>
                                     -< ? >-

<< >	Since summing rv's convolves their pdfs, the sum (or difference)
<< >	of two such elementary points would be the triangle pdf:

<< Don't you have to assume that the associated random variables are
<< independent to do this ?

No.
1305.13. "Yes" by CADSYS::COOPER (Topher Cooper) Wed Oct 10 1990 13:03  (15 lines)
RE: .12 (Peter) (re: .11)

>                                     -< ? >-
>
><< >	Since summing rv's convolves their pdfs, the sum (or difference)
><< >	of two such elementary points would be the triangle pdf:
>
><< Don't you have to assume that the associated random variables are
><< independent to do this ?
>
>No.

    Yes.

					Topher
1305.14. "Expanding on the previous reply." by CADSYS::COOPER (Topher Cooper) Wed Oct 10 1990 13:14  (13 lines)
RE: .13 (me) (RE: .12 (Peter) (re: .11))

    Sorry, couldn't resist the pithiness of the previous reply.  Here,
    however, is the justification --

    Take the extreme example of the r.v's lacking independence -- that they
    are always constrained to have the same value.  Assume furthermore that
    they are distributed uniformly in the interval (a, b).  Then their
    sum will be uniformly distributed in the range (2a, 2b); and their
    difference distribution will be the Dirac delta distribution (i.e.,
    probability 1 of being 0).
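
    A quick Monte Carlo check of this (a Python sketch; the unit interval
    stands in for (a, b)):

import random

xs = [random.random() for _ in range(100_000)]
ys = [random.random() for _ in range(100_000)]   # independent copies

dep_diff = [x - x for x in xs]                   # perfectly dependent: always 0
ind_diff = [x - y for x, y in zip(xs, ys)]       # independent: spreads out

print("dependent difference:  ", min(dep_diff), "to", max(dep_diff))
print("independent difference:", min(ind_diff), "to", max(ind_diff))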

				    Topher
1305.15. "I goofed -- sorry" by COOKIE::PBERGH (Peter Bergh, DTN 523-3007) Wed Oct 10 1990 17:39  (0 lines)
1305.16. by JARETH::EDP (Always mount a scratch monkey.) Wed Nov 07 1990 12:26  (77 lines)
Article 13264 of sci.math:
Path: shlump.nac.dec.com!decuac!haven!uflorida!uakari.primate.wisc.edu!zaphod.mps.ohio-state.edu!uwm.edu!ogicse!oregon!oregon.oacis.org
From: jmeissen@oregon.oacis.org (John Meissen - Staff OACIS)
Newsgroups: pdx.general,or.general,sci.math
Subject: Symposium on Interval Arithmetic and Global Optimization
Message-ID: <58@oregon.oacis.org>
Date: 7 Nov 90 00:04:36 GMT
Sender: jmeissen@oregon.oacis.org
Followup-To: pdx.general
Organization: Oregon Advanced Computing Institute (OACIS)
Lines: 62
Xref: shlump.nac.dec.com or.general:29 sci.math:13264


              OREGON ADVANCED COMPUTING INSTITUTE (OACIS) 
                     Announces a symposium on
 "Interval Arithmetic, Global Optimization and Statistical Decision Theory"

                     By Dr. G. William Walster, 
       Director of Research for Computation & Algorithms, OACIS

           November 19th, 1990 - 2:00p-4:00p, Room 123 
      at the Oregon Center For Advanced Technology Education 
                            (OCATE) 
          In the Oregon Graduate Center Science Park

SYMPOSIUM ABSTRACT:  
The intersection between interval arithmetic, global optimization and 
statistical decision theory is not empty.  The common link is guaranteed-
accuracy computing.  The need for guaranteed accuracy in science and 
industry is pervasive.  Examples will be presented to illustrate this fact.
Interval arithmetic will be introduced and shown to provide guaranteed 
accuracy computing.  

The following additional topics will be discussed:

*  Definition of interval arithmetic;
*  Application of interval arithmetic;
*  Importance and pervasiveness of optimization problems;
*  How interval global optimization can solve a problem that many believe to 
   be inherently unsolvable;
*  How the determination of sample size can make the application of 
   inferential statistics meaningful;
*  How interval arithmetic is required to solve the sample size problem 
   in inferential statistics; 
*  How interval global optimization can be used to solve otherwise intractable
   maximum likelihood estimation problems;
*  Why parallel computers are an ideal platform on which to implement all of
   the above.

BIOGRAPHY:
G. William Walster, the director of research for computation and algorithms 
(part of the Technology Roadmap) at OACIS, received his B.A. and Ph.D. degrees 
from Stanford University and the University of Minnesota, respectively.  Much
of his research career has been focussed on the development of guaranteed 
accuracy software applications using interval arithmetic.  Dr. Walster served 
on the faculty at the University of Wisconsin, teaching and conducting 
research in applied statistics.  The requirement to perform guaranteed 
accuracy computing compelled him to give up his full professorship at 
Wisconsin in search of a research environment within which interval arithmetic
could be developed.  Dr. Walster believes OACIS is the ideal place in which 
to pursue new application software research.

RESEARCH INTEREST:   The development and implementation of algorithms to solve 
outstanding numerical and statistical problems that permeate business, 
industry and academia.

19500 NW Gibbs Drive, Suite 110
Beaverton, OR 97006-6907
Contact Benjamin R. Peek at:
503/690-1220 or
email: peekb@oacis.org

Please email your RSVP to peekb@oacis.org OR jhill@oacis.org; seating
is limited.