
Conference rusure::math

Title:Mathematics at DEC
Moderator:RUSURE::EDP
Created:Mon Feb 03 1986
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:2083
Total number of notes:14613

1597.0. "Breakthrough in proof verification / NP stuff?" by SSAG::LARY (Laughter & hope & a sock in the eye) Tue Apr 21 1992 06:58

There seem to be a few typos in this article, and of course it has been
pureed for easy digestion by nonmathematicians, but it appears to be a
significant discovery - the main discovery is fairly incomprehensible
to me (from this text, anyway) but the subsidiary ("computer science")
discovery seems to be that even finding an approximate answer to some NP
problems (NP-hard ones?) is also NP-hard...

--------------------------------------------------------------------------------

From:	HOOTUS::murali "Murali M. Krishna" 20-APR-1992 09:23:54.45
Subj:	Breakthrough in Math/CS

By GINA KOLATA
c.1992 N.Y. Times News Service
       In a discovery that overturns centuries of mathematical
tradition, a group of graduate students and young researchers has
discovered a way to check even the longest and most complicated
proof by scrutinizing it in just a few spots.
       The finding, which some mathematicians say seems almost magical,
depends upon transforming the set of logical statements that
constitute a proof into a special mathematical form in which any
error is so amplified as to be easily detectable.
       Using this new result, the researchers have already made a
landmark discovery in computer science.
       They showed that it is impossible to compute even approximate
solutions for a large group of practical problems that have long
foiled researchers.
       Even that negative finding is very significant, experts say,
because in mathematics, a negative result, showing something is
impossible, can be just as important and open just as many new
areas of research as a positive one.
       The discovery was made by Sanjeev Arora and Madhu Sudan (B.Tech. IITD),
graduate students at the University of California at Berkeley, Dr.
Rajeev Motwani, an assistant professor at Stanford University, and
Dr. Carsten Lund and Dr. Mario Szegedy, young computer scientists
at AT&T Bell Laboratories.
       Motwani, who is the senior member of the group, just turned 30
on March 26.
       ``With the conventional notion of a proof, you had to check it
line by line,'' said Dr. Michael Sipser, a theoretical computer
scientist at the Massachusetts Institute of Technology. ``An error
might be buried on page 475, line 6. A `less than or equal to'
should have been a `less than.' That would totally trash the whole
proof. But you'd have to dig through the whole thing to find it,''
Sipser said.
       Now, he added, ``the new idea is that there is a way to
transform any proof so that if there is an error, it appears almost
everywhere. I'd say, `You have a proof? Show me a page.' If there
is an error, it will be there.''
       The finding, which builds on two and a half years of work by
leading researchers, is expected to have a profound impact. But
because it is so new and unexpected, mathematicians and computer
scientists cannot yet predict its scope of application.
       ``This is philosophically important,'' said Dr. Mihalis
Yannakakis, a theoretical computer scientist at AT&T Bell
Laboratories.
       Dr. Manuel Blum, a mathematician at the University of California
at Berkeley, agreed that the result was ``really exciting,'' with
ramifications that were hard to predict.
       The new verification method can show up errors in calculations
as well as in proofs, experts said, and Blum suggested it could
have a role in checking long computer computations.
       Dr. Umesh Vazirani, a theoretical computer scientist at the
University of California at Berkeley, said that the discovery was
``one of the most outstanding in the past decade,'' because it
allowed investigators to decide, almost at a glance, whether it was
worthwhile to try to find an approximate solution to a problem they
were unable to solve exactly.
       ``As soon as you formulate a problem, you can know if it is
intractable,'' Vazirani said.
       The finding is so recent that it has not yet been published.
``We just wrote it in the last month or so,'' Motwani said.


        As is the custom in mathematics and theoretical computer
science, the result gains credibility not by having been sent to a
journal and reviewed by other scientists but by having been vouched
for by leaders in the field. In this case, at least a dozen experts
say they are convinced by the result, and amazed by it.
       ``It is absolutely stunning,'' said Dr. Laszlo Babai, a
theoretical computer scientist at the University of Chicago whose
work helped lay the groundwork for the new discovery.
       Dr. Richard Karp, a theoretical computer scientist at the
University of California at Berkeley, agreed. ``Personally, to me
it's very surprising,'' he said.
       Karp added that the method's application to show that certain
problems have no approximate solutions is the most important
discovery in his field of theoretical computer science in more than
two decades.
       The insight into the nature of proofs shows how a connection can
be created between each and every logical statement of a proof, no
matter how long or complicated the proof may be.
       It applies to the 15,000-page proof that is the longest ever
published as well as to the single sentence proof that can be
grasped in an instant. And it is no harder to check a long proof
than a short one.
       ``Even if your proof is the size of the universe, you still
would not have an increased number of places to check,'' Babai
said.
       The investigators used a variety of mathematical tricks to
transform a proof so that its errors, if any, will show up almost
everywhere. One method they used was a way of comparing lists of
data.
       Data comparisons are an integral part of long proofs. For
example, Babai pointed out, a typical mathematical proof might have
a line that says, ``We know by theorem 31 that a equals b and by
theorem 72 that b equals c. Therefore, we conclude that a equals
c.''
       To see if this statement is correct, Babai said, ``We have to go
back to theorem 31 and see that it does indeed say a equals b, and
similarly we have to look up theorem 72 and see that it has been
copied correctly.''
       One way to compare two lists of numbers is to arrange them in a
grid and then extend each row and column in an arithmetic
progression.
       To compare the data list 1,3,-1,4 with 1,5,-1,4, in which a
single digit has been miscopied, the lists would be formed into
arrays with 1,3 and 1,5 in the top rows.
       The rows would then be extended in arithmetic progression,
forming 1,3,5,7,9,11 and 1,5,9,13,17,21. Already the one-digit
difference has been amplified to five differences.
       By increasing the number of items listed in the arrays, the
number of differences in an extended array can be made so great
that sampling just a few squares will be bound to detect a
difference and thus an error in one of the original data sets.
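The row-extension step described above can be sketched in a few lines of Python. This is only a rough illustration of the amplification idea, not the authors' actual encoding (which uses multivariate polynomials); the helper name `extend_ap` is made up for this example.

```python
def extend_ap(first_two, length):
    """Extend a two-element prefix into an arithmetic progression."""
    a, b = first_two
    d = b - a
    return [a + i * d for i in range(length)]

row1 = extend_ap([1, 3], 6)   # 1, 3, 5, 7, 9, 11
row2 = extend_ap([1, 5], 6)   # 1, 5, 9, 13, 17, 21

# The single miscopied digit (3 -> 5) now shows up as a
# disagreement at five of the six positions.
num_diffs = sum(x != y for x, y in zip(row1, row2))
```

Sampling even one random position of the extended rows now catches the error with probability 5/6, where sampling the original two-element rows caught it only half the time.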
       Proofs are not data but they can be manipulated in analogous
ways by representing each step of a proof as a term in a
mathematical equation.
       For example, Arora said, a logical statement in a typical proof
might say ``either statement x and statement y are true, or
statement z is not true.'' This can be converted to the equation
xy + (1-z).
       Using these equations, the mathematicians formulated
relationships that would have to hold if the proof contained no
error.
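Arora's example can be checked mechanically over the values 0 and 1: the expression xy + (1-z) is nonzero exactly when the clause ``either x and y are true, or z is not true'' holds. A minimal sketch (function names are illustrative, not from the paper):

```python
from itertools import product

def clause(x, y, z):
    # "either statement x and statement y are true,
    #  or statement z is not true"
    return (x == 1 and y == 1) or (z == 0)

def poly(x, y, z):
    # The arithmetized form: xy + (1 - z)
    return x * y + (1 - z)

# Over all eight 0/1 assignments, the polynomial is nonzero
# exactly when the logical clause is true.
checks = [(poly(x, y, z) != 0) == clause(x, y, z)
          for x, y, z in product((0, 1), repeat=3)]
```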
       Blum said that the new finding does carry an intrinsic
uncertainty. An error will show up almost everywhere in a
suitably transformed proof, but not absolutely everywhere. But, he
said, it is easy to probe the proof in enough places to make the
chance of missing the error negligible.
       ``Realistically, you can make the probability of missing an
error as small as 1 divided by the number of atoms in the
universe,'' Blum said. That means you can make the chance of
mistaking an incorrect proof for a correct one, ``virtually zero,''
he said.
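Blum's figure can be made concrete. Suppose, purely for illustration, that each random probe of the transformed proof catches an existing error with probability 1/2; independent probes then shrink the miss probability geometrically, and a few hundred probes push it below one over the number of atoms in the universe (roughly 10^80):

```python
import math

p_catch = 0.5   # assumed per-probe detection probability (illustrative)
target = 1e-80  # roughly 1 / (number of atoms in the universe)

# Smallest number of probes k with (1 - p_catch)**k <= target.
k = math.ceil(math.log(target) / math.log(1 - p_catch))
miss = (1 - p_catch) ** k
```

With these assumptions, 266 probes suffice; each probe examines only a constant number of spots, so the total work stays small no matter how long the proof is.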
       The investigators also noticed that their proof was
mathematically equivalent to a profound statement of how difficult
it would be to solve certain problems that plague mathematicians.
       Their proof about proofs showed that it was just as hard to get
an approximate solution to a class of very hard problems as it was
to get an exact solution.
       Since computer scientists have pretty much given up on exact
solutions to these problems, the new result means they must abandon
even their remaining shreds of hope of obtaining rough solutions to
a wide group of practical problems.
       This group of thousands of problems involves finding the best
ways to do things like design telephone networks or airline routes
or to set up sequences of jobs in factories.
       Although no one has ever proved that exact solutions are
impossible, the consensus is that they are. Babai said
mathematicians had tried for centuries to find an exact solution,
to no avail.
       ``Literally thousands of people have considered these problems,
including the greatest minds in mathematics, including Gauss, and
have gotten nowhere,'' he said, referring to Carl Friedrich Gauss,
the 19th-century German mathematician and astronomer.
       In 1971, investigators showed that a wide range of these very
hard problems were mathematically equivalent: a solution to one
would solve all. Without hope of an exact solution that would solve
them all the time, they began a search for an approximate solution
that would always come very close, at least, to the best answer.
       Babai explained: ``There was a feeling that, `All right, we are
not interested in the best possible solution. If we can get within
1 percent of an exact solution, that's good enough.' ''
       Investigators found some good approximate solutions that worked
for some problems some of the time, but they could not find a good
approximation that would always work.
       Now they know there is a reason for that. The solutions they
were seeking were as hard to find as exact solutions.
       Karp said this finding alone was a monumental achievement,
equivalent in its impact to the finding in 1971 that the problems
were equivalent. Until now, he said, ``One could always cling to
the hope that one could get what one wanted with a fast algorithm
that gives a good approximation. Now we know that even that is
unachievable.''
1597.1. "What does it say about itself?" by PULPO::BELDIN_R (Pull us together, not apart) Tue Apr 21 1992 13:26
1597.2. "That's hard" by CIV009::LYNN (Lynn Yarbrough @WNP DTN 427-5663) Tue Apr 21 1992 15:04
"I am not decidable under your axiom system. You'll have to use mine."
1597.3. by HANNAH::OSMAN (see HANNAH::IGLOO$:[OSMAN]ERIC.VT240) Wed Apr 22 1992 18:06
Can someone please post an example of using this new method ?  I couldn't
figure it out from the article.

Thanks.

/Eric
1597.4. "on second thought, maybe it's no joke!" by PULPO::BELDIN_R (Pull us together, not apart) Wed Apr 22 1992 21:15
My (cynical) guess is that there is no practical way to apply
the results discussed in .0.  Any technique that depends on
iteratively applying transforms to all of the text of an argument
and looking for big discrepancies between things that should
agree smacks of considerable processing power.  Maybe someday it
can become a practical tool, but I am not going to hold my
breath.

fwiw,

   Dick
1597.5. "Comments" by CADSYS::COOPER (Topher Cooper) Thu Apr 23 1992 14:30
    We can't give you an example, because this material has not even been
    published yet.  Mathematics doesn't "work" like science as far as
    publication is concerned.  Proofs are circulated informally among
    people who have an interest and the ability to evaluate.  They try to
    find flaws in it.  Only after this has taken place is an attempt made
    to publish the results.  This is apparently still in that initial
    circulation phase.

    The idea seems to be, however, that a transformation is made on a
    formal representation of the proof.  After enough transformations are
    done, it is very probable that any small error will have messed up the
    whole transformed result in gross ways that can easily be spotted.  The
    process can be repeated to drive the probability of an unspotted error
    down as small as you want.  The implication from what is said here is
    that the transformation process is trivially applied -- perhaps linear
    or at least small-power-polynomial.  I suspect that it is probably
    practical to apply it to a complete formal proof.

    The thing that probably limits the practicality of this technique is
    that virtually no one produces complete formal proofs.  In real,
    published proofs, there is a high degree of relative informality and
    step-skipping.  I doubt if the technique could be applied to something
    in that form.  And given a complete, detailed proof in some particular
    deductive theory it should be pretty straightforward to write a
    conventional proof checker.  I once considered writing one for the
    output of the Boyer/Moore Theorem Prover as an exercise.

    The one place I know of where complete proofs are routinely generated
    (semi-mechanically) is in program proofs-of-correctness, proofs-of-
    security, proofs-of-non-race, etc.  It's conceivable that it might be
    applied there, but if you want to check a semi-mechanical proof for
    extra reliability there are, as I said, more conventional tools which
    should be usable.

				Topher
1597.6. "House of Cards" by AUSSIE::GARSON Thu Apr 23 1992 23:29
re .5
    
>    The thing that probably limits the practicality of this technique is
>    that virtually no one produces complete formal proofs.
    
    And since it has a non-zero probability of accepting an invalid proof
    as valid, the technique can presumably only be used as a filter. Any
    proof that passes this technique must (as *I* understand it) then be
    proved formally.
    
    Still if verified, the technique could be useful to filter out the X
    incorrect proofs of Fermat's Last Theorem that pop up each year. (-:
1597.7. "a mathematician's cure for paranoia" by SGOUTL::BELDIN_R (Pull us together, not apart) Fri Apr 24 1992 16:56
   Re:                      <<< Note 1597.6 by AUSSIE::GARSON >>>

That reminds me of the fella who had the office next to mine
before I left the UPR to come to Digital.  His proof of FLT had
been evaluated (and rejected) by so many mathematicians that it
cured his paranoia!  (You know, it isn't paranoia if they really
are after your hide!) :-)

He retired last year.  His colleagues wished him well (and
gone).

fwiw,

Dick
1597.8. "a clarification from the authors" by SCHOOL::ABIDI (It's a wild world) Mon Apr 27 1992 20:01
    <forwards deleted>
Subject: [NYT article: Clarifications]


From: madhu@kazoo.Berkeley.EDU (Madhu Sudan)
Subject: Clarification on recent NYT article
Date: 15 Apr 92 19:57:13 GMT


   CLARIFICATION ABOUT THE RECENT RESULT ON CHECKING PROOFS
   ------------- ----- --- ------ ------ -- -------- ------

The recent article in the New York Times concerning  our  results
contained  several  inaccuracies despite our best efforts to keep
their  staff  informed.   This  brief   note   is   intended   to
express   our   regret   at these inaccuracies and to dispel  any
confusion   arising   from them.

Our result builds directly upon the techniques  of  another   (as
yet  unpublished)  result  due  to Sanjeev Arora and Shmuel Safra
which showed how to check a  proof  of  length  n   by   randomly
choosing   only  O(log  log n) bits in it to look at. This result
preceded ours by a month, and the improvement  we  show  is  that
looking   at   only   a constant  number  of  bits suffices (this
enabled us to show that approximating MAX-SNP -hard  problems  is
NP-hard).

We feel that  the  people  quoted  in   the   NYT   article  were
referring  to  both these papers, which was not made clear in the
article.  In particular, we regret that it fails  to  mention Dr.
Safra's contribution at all.

The NYT article also did not mention that the concept of "easily"
checkable proofs/computations described there was introduced in a
paper called "Checking computations in polylogarithmic  time"  by
L.Babai,  L.Fortnow,  L.Levin,  M.Szegedy in STOC'91.  This paper
was based upon the amazing sequence  of  results  on  interactive
proof  systems  presented  at  FOCS  '90,  especially  the result
"MIP=EXPTIME" by L.Babai, L.Fortnow, C.Lund. The  idea  of  using
these  techniques  to show hardness results for approximating the
clique number was initiated in the paper "Approximating Clique  is
almost  NP-complete"  by U.Feige, S.Goldwasser, L.Lovasz, S.Safra,
M.Szegedy in FOCS'91.

[Sanjeev Arora, Carsten Lund, Rajeev Motwani, Madhu Sudan,  Mario
Szegedy.]

(The  widest  possible  circulation  of   this   note   will   be
appreciated.)



1597.9. by BEING::EDP (Always mount a scratch monkey.) Tue Sep 08 1992 18:41
    There's a summary of the proof-verification results in _Focus_.  It is
    short on details, and this will be even shorter, but it seems the
    proof-verification can be mapped to a graph.  Each node in the graph is
    a spot-check list that the proof-checker would accept.  (A typical item
    on such a list says that "the j-th bit of the proof is e".)  Two nodes
    are joined if they don't plainly contradict one another by asserting a
    different bit in the same position.
    
    If there is an acceptable proof of a given statement (of length n), the
    graph has lots of connections and thus has very large cliques.  If
    there is not, the graph has small cliques.
    
    If there were an efficient algorithm to determine even approximately
    what the size of a graph's largest clique was, then any NP-complete
    decision problem could be solved efficiently (because "yes" instances
    of such problems have small proofs).  Thus, even approximating a
    graph's largest clique is itself NP-complete.  This can be extended to
    approximating solutions to other problems.
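    As a toy illustration of this construction (not the actual reduction,
    and with entirely made-up data), one can build a tiny compatibility
    graph of accepted spot-check lists and brute-force its largest clique:

```python
from itertools import combinations

# Each node is a spot-check list the checker would accept:
# a dict mapping a bit position of the proof to the asserted bit.
nodes = [
    {0: 1, 1: 0},
    {1: 0, 2: 1},
    {0: 1, 2: 1},
    {0: 0, 1: 1},   # contradicts the others on bit 0 or bit 1
]

def compatible(a, b):
    # Two nodes are joined unless they assert different bits
    # at the same position.
    return all(b[k] == v for k, v in a.items() if k in b)

def max_clique_size(nodes):
    # Brute force: try subsets from largest to smallest.
    n = len(nodes)
    for size in range(n, 0, -1):
        for subset in combinations(range(n), size):
            if all(compatible(nodes[i], nodes[j])
                   for i, j in combinations(subset, 2)):
                return size
    return 0
```

    Here the first three check-lists are mutually consistent (clique of
    size 3) while the fourth is excluded; a large clique corresponds to
    many spot-checks that could all come from one consistent proof.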
    
    
    				-- edp