| From: TURTLE::RDVAX::BLAIS 17-JAN-1984 10:21
To: TURTLE::WIENER! SENT TO @LOZIER
Subj: Lozier's visit
From: PAYNE 9-JAN-1984 10:52
To: BLAIS
Subj: For distribution
+---+---+---+---+---+---+---+
| | | | | | | |
| d | i | g | i | t | a | l | I N T E R O F F I C E M E M O
| | | | | | | |
+---+---+---+---+---+---+---+
From: Mary Payne
To: List Date: 9 January 1984
Dept: Exploratory Research
DTN: 225-5845
Loc: HLO2-3/N04
Enet: RDVAX::PAYNE
Subject: Lozier and Olver on "Beyond Floating Point"
A seminar on the above topic will be presented in Hudson (Hall of
the White Mist) at 10:00 am on Wednesday, Feb. 1, 1984. The
topic to be discussed is a novel approach to floating point
computation, developed by Dan Lozier of the National Bureau of
Standards, Frank Olver, University of Maryland, and C. W.
Clenshaw, University of Lancaster. Their system is based on a
logarithmic approach, so that MUL and DIV are easy, with ADD and
SUB requiring rather more work. Some of you may recall that Bob
Reid was exploring the same idea almost ten years ago. The
present system is an extension in the sense that the logarithmic
operation is iterated, with a small field reserved to identify the
level of iteration.
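[For readers unfamiliar with the basic idea, here is a minimal sketch of a
plain (non-iterated) logarithmic number system. The function names, the
choice of base 2, and the use of the "Gaussian logarithm" trick for ADD are
illustrative assumptions, not the Lozier/Olver/Clenshaw scheme itself.]

```python
import math

# Sketch of a simple logarithmic number system (LNS): a positive
# value x is stored as its base-2 logarithm.  Base and names are
# assumptions for illustration only.

def lns_encode(x):
    return math.log2(x)

def lns_decode(L):
    return 2.0 ** L

def lns_mul(a, b):
    # MUL is easy: multiply = add the logs
    return a + b

def lns_div(a, b):
    # DIV is easy: divide = subtract the logs
    return a - b

def lns_add(a, b):
    # ADD is the hard case; one standard route is the
    # "Gaussian logarithm":
    #   log2(2^a + 2^b) = max(a,b) + log2(1 + 2^(min-max))
    # which needs a table or approximation in hardware.
    hi, lo = max(a, b), min(a, b)
    return hi + math.log2(1.0 + 2.0 ** (lo - hi))
```

Multiplying 6 by 7 is a single addition of logs, while adding 2 and 3
requires evaluating the nonlinear correction term -- which is exactly why
ADD and SUB are the expensive operations in such a system.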
I have a copy of a paper that has been accepted for publication,
which I can copy for you if you are interested. They have carried
out considerable error analysis, and claim to have made progress
in developing algorithms for the ADD/SUB problems -- this last is,
however, not included in the paper. They believe that their work
is developed to the point where it would make sense to investigate
how it might be cast in hardware, and they are much interested in
establishing some level of collaboration here. The seminar is the
first step in exploring the feasibility of such a collaboration.
They will be available for discussions from 2:00 pm to the end of
the day. Would some of you be interested in
participating in sort of a "workshop" during this time? If so,
please let me know, so that I can arrange for a room of the
appropriate size.
Finally, please route this to others who might be interested.
Two comments:
Yet a different number system has been proposed recently. It uses "redundant
digits". For example, you use base-10 numbers but allow "digits" in the
range -9...18 (this would be more redundant than necessary). There are now
many representations for the same number. The big win is that, if you do it
right, carries in an addition can only propagate at most one place. This
makes adders for very long numbers easy. Converting INTO this system is
easy; in fact, any decimal number IS already in this system! Of course, you
have to pay the price somewhere: Converting OUT is hard. However, if you
do a lot of arithmetic on big numbers, you may win big in the process.
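[A minimal sketch of such a signed-digit adder, assuming base-10 digits in
the range -9..9 (narrower than the -9..18 range mentioned above); the
function names and the exact carry thresholds are illustrative assumptions.]

```python
def sd_add(a, b):
    # Add two base-10 signed-digit numbers (digit lists, least
    # significant digit first, each digit in -9..9).  Carries
    # propagate at most one place, so both loops below could run
    # fully in parallel across all positions.
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    carries = [0] * (n + 1)
    interim = [0] * n
    for i in range(n):
        t = a[i] + b[i]              # t is in -18..18
        if t >= 9:
            carries[i + 1], interim[i] = 1, t - 10
        elif t <= -9:
            carries[i + 1], interim[i] = -1, t + 10
        else:
            carries[i + 1], interim[i] = 0, t
    # Each interim digit absorbs its single incoming carry without
    # generating a new one -- that is the whole trick.
    return [interim[i] + carries[i] for i in range(n)] + [carries[n]]

def to_int(digits):
    # Converting OUT of the redundant system is the hard direction;
    # here we just lean on Python integers to do it.
    return sum(d * 10 ** i for i, d in enumerate(digits))
```

Note that 999 + 1, which forces a carry across every position in ordinary
decimal, resolves here in two parallel steps with no carry chain.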
It is "intuitive" that adding is easier than multiplying. This is actually
provable in a variety of models of computational complexity; one interesting
one is the VLSI time/space tradeoff model, in which addition is indeed MUCH
easier than multiplication. Of course, this is only true for binary
representations of the numbers. If I represent numbers "logarithmically",
addition becomes much harder than multiplication. Is there a way to win for
both operations at once? The answer is no: there is a lower bound on the
calculation of ab+c for any bit-wise representation of a, b, and c, and the
bound is about the same as the bound on binary multiplication.
The "redundant" scheme beats the bounds for both addition and multiplication,
but doesn't meet the requirements of "bitwise input and output" as stated
in the theorem. If you include the conversions between representations, you
are back where you started.
-- Jerry