
Conference noted::hackers_v1

Title:-={ H A C K E R S }=-
Notice:Write locked - see NOTED::HACKERS
Moderator:DIEHRD::MORRIS
Created:Thu Feb 20 1986
Last Modified:Mon Aug 03 1992
Last Successful Update:Fri Jun 06 1997
Number of topics:680
Total number of notes:5456

154.0. "Risks of Computers" by REX::MINOW () Thu Aug 29 1985 14:11

You might be interested in a new mailing list that has recently
appeared on the ARPAnet. You can subscribe by sending mail to
RHEA::DECWRL::"Risks-Request@SRI-CSL.ARPA"

The second issue appears after the form-feed.

Martin.

From:	RHEA::DECWRL::"Neumann@SRI-CSLA.ARPA" "Peter G. Neumann" 28-AUG-1985 22:19
To:	RISKS:;@UNKNOWN.ARPA
Subj:	RISKS-1.2, 28 Aug 85

        FORUM ON RISKS TO THE PUBLIC IN COMPUTER SYSTEMS 
                Vol 1 No 2 -- 28 August 1985
                 Peter G. Neumann, moderator

      (Contributions to RISKS@SRI-CSL.ARPA)
      (Requests to RISKS-Request@SRI-CSL.ARPA)
      (This vol/no can be FTPed from SRI-CSL:<RISKS>RISKS-1.2)
      (The first issue, vol 1 no 1, is in SRI-CSL:<RISKS>RISKS-1.1)      

Contents: 
  Introduction; three more risk items (Peter Neumann)
  Mariner 1 Irony (Nicholas Spies)
  RISKS Forum ... [Reaction] (Bob Carter)
  RISKS Forum ... [An Air Traffic Control Problem] (Scott Rose)
  Risks in AI Diagnostic Aids (Art Smith)
  Warning! ... [A Trojan Horse Bites Man] (Don Malpass)
  SDI (Martin Moore, Jim Horning, John McCarthy, Peter Karp, Dave Parnas, 
       Gary Martins, Tom Parmenter; panel at 8th ICSE in London)
  The Madison Paper on Computer Unreliability and Nuclear War (Jeff Myers)
  Can a Computer Declare War? (Cliff Johnson)

Caveat: Sorry if you have already seen some of this stuff elsewhere
on the net.

------------------------------------------------------------

Date: 27 Aug 1985 23:32:01-PST
Subject: Introduction, and more recent risk items
To: RISKS@SRI-CSL
From: Peter G. Neumann <Neumann@SRI-CSL>

I was away during the previous three weeks, which made it difficult to put
out another issue.  However, the newspapers were full of excitement relevant
to this forum:

  * A Federal district judge awarded $1.25 million to the families of
    three lobstermen who were lost at sea in a storm that the National
    Weather Service failed to predict because its parent organization 
    (the National Oceanic and Atmospheric Administration) had not repaired 
    a weather buoy for three months.  [NY Times 13 Aug 85]

  * Another Union Carbide leak (causing 135 injuries) resulted from a 
    computer program that had not yet been programmed to recognize aldicarb
    oxime, compounded by human error when the operator misinterpreted
    the results of the program to imply the presence of methyl isocyanate
    (as in Bhopal).  A 20-minute delay in notifying county emergency
    authorities made things worse.  [NY Times 14 and 24 Aug 85 front pages]
    (There were two other serious Union Carbide incidents reported in
    August as well, although only this one had a computer link.)

  * The intended 25 August launch of the space shuttle Discovery was
    delayed when a malfunction in the backup computer was discovered just
    25 minutes before the scheduled liftoff.  The delay threatened to
    seriously compromise the mission's experiments.  [NY Times 26 August
    1985]  The Times reporter
    John Noble Wilford wrote, "What was puzzling to engineers was that the
    computer had worked perfectly in tests before today.  And in tests after
    the failure, it worked, though showing signs of trouble."  Arnold
    Aldrich, manager of the shuttle program at Johnson, was quoted as saying
    "We're about 99.5% sure it's a hardware failure."  (The computers are
    state of the art as of 1972 and are due for upgrading in 1987.)  A
    similar failure of just the backup computer caused a one-day delay in
    Discovery's maiden launch last summer.

  * More details are emerging on possible computer hanky-panky in elections,
    including the recent Philippine elections.  There has been a series of
    articles in the past weeks by Peter Carey in the San Jose Mercury News
    -- which I have not yet seen but hope to report on.

I expect that future issues of this RISKS forum will appear more frequently
-- especially if there is more interaction from our readership.  I will
certainly try to redistribute appropriate provocative material on a shorter
fuse.  I hope that we can do more than just recapture and abstract things
that appear elsewhere, but that depends on some of you contributing.  I
will be disappointed (but not surprised) to hear complaints that we present
only one side of a particular issue when no countering positions are
available or none are provoked in response; if you are bothered by only one
side being represented, you must help to restore the balance.  Remember,
however, that it is often easier to criticize others than to come up with
constructive alternatives, and constructive alternatives are at the heart
of reducing risks.  So, as I said in vol 1 no 1, let us be constructive.

------------------------------

Date: 16 Aug 1985 21:23-EST
From: Nicholas.Spies@CMU-CS-H.ARPA
Subject: Mariner 1 irony
To: risks@sri-csl

My late father (Otto R. Spies) was a research scientist at Burroughs when
the Mariner 1 launch failed. He brought home an internal memo that was
circulated to admonish all employees to be careful in their work to prevent
similar disasters in the future. (I don't recall whether Burroughs was
directly involved with Mariner 1 or not.)  After explaining that a critical
program bombed because a period was substituted for a comma, the memo ended
with the phrase

		"... no detail is to [sic] small to overlook."

My father would be deeply pleased that people who can fully appreciate this
small irony are now working on ways to prevent the misapplication of
computers as foible-amplifiers.

------------------------------

Date: 8 Aug 85  19:10 EDT (Thu)
From: _Bob <Carter@RUTGERS.ARPA>
Subject: Forum on Risks to the Public in Computer Systems    [Reaction]
To: RISKS@SRI-CSL

Thanks for the copy of Vol. I, No. 1.  Herewith a brief reaction.  This
is sent to you directly because I'm not sure whether discussion of the
digest is appropriate for inclusion in the digest.  

  1. Please mung RISKS so that it does not break standard undigestifying 
     software (in my case, BABYL).    

        [BABYL is an EMACS-TECO hack.  It seems to be a real bear to use,
         with lots of pitfalls still.  But I'll see what I can do.  
         Alternatively, shorter issues might help.  PGN]
        [A toy undigestifier sketch appears just after the end of this
         message.]
    
  2. I think RISKS is clearly an idea whose time has come, but I'm not 
     entirely sure it has been sufficiently thought through.  

        [I should hope not!  It is a cooperative venture.  I just
         happen to be trying to moderate it.  PGN]

   (a.) You cast your net altogether too widely, and include some topics
        that have been discussed extensively on widely-read mailing lists.
        Star Wars, the Lin paper, the Parnas resignation, and related topics
        have been constructively discussed on ARMS-D.  I have considerable
        doubt about the utility of replicating this discussion.  (The
        moderators of HUMAN-NETS and POLI-SCI have both adopted the policy
        of directing SDI debate to that forum.  Would it be a good idea to
        follow that example?)

           [To some extent, yes.  However, one cannot read ALL of the
            interesting BBOARDs -- there are currently hundreds on the
            ARPANET alone, many of which have some bearing on RISKS.  Also,
            browsers from other networks are at a huge disadvantage unless
            they have connections, hours of spare time, money, etc.  This is
            a FORUM ON RISKS, and should properly address that topic.  We
            certainly should not simply reproduce other BBOARDS, but some
            duplication seems tolerable.  (I'll try to keep it at the end
            of each issue, so you won't have to wade through it.)  By the
            way, I had originally intended to mention ARMS-D in RISKS vol 1
            no 1, but did not have time to check it out in detail.  For those
            of you who want to pursue it, next following is the essence of
            the blurb taken from the Network Information Center,
            SRI-NIC.ARPA:<NETINFO>INTEREST-GROUPS.TXT.  PGN]

          [  ARMS-D@MIT-MC:

             The Arms-Discussion Digest is intended to be a forum for
             discussion of arms control and weapon system issues.  Messages
             are collected, edited into digests and distributed as the
             volume of mail dictates (usually twice a week).

             Old digests may be FTP'ed from MIT-MC (no login required).  They
             are archived at BALL; ARMSD ARCn, where n is the issue number.

             All requests to be added to or deleted from this list, problems, 
             questions, etc., should be sent to Arms-D-REQUEST@MIT-MC.

             Moderator: Harold G. Ancell <HGA@MIT-MC>  ]

   (b.) You do not cover the topics which, in my opinion, are going
        to generate more law-making than anything you do touch on.  In
        particular, the health hazards (if any) of CRT use, and the working
        conditions (including automated performance testing) of "pink-collar" 
        CRT users are going to be among the most important labor-relations
        issues of the next few years.  Many people think these are more
        imminent risks than those mentioned in the RISKS prospectus.

                              [Fine topic!  PGN]

  3. I think a digest is an animal that differs considerably from print
     media, but is no less important.  I get the feeling that you consider
     yourself a country cousin of the ACM publications and of SEN.  Wrong!
     You're not inferior; you are just editing in a different medium, and as
     you put your mind to the task I hope you come to take them with a
     larger grain of salt.  In particular, 

      !  Chinese computer builder electrocuted by his smart computer after he 
         built a newer one. "Jealous Computer Zaps its Creator"!  (SEN 10 1)

     was a National Enquirer-style joke.  The editor of SEN should not have
     reprinted it, and you probably should not have included it in a
     serious list of computer-related failures.
        [The editor of SEN has sometimes been known to indulge in levity.  
         In this case it appears that a Chinese engineer was indeed
         electrocuted -- and that is an interesting case of computer-related
         disaster.  On the other hand, if someone can believe that an
         AI automatic programming routine can write many million lines of
         correct code, then he might as well believe that a smart computer
         system could express jealousy and cause the electrocution!  
         Actually, Bob used "PEN" throughout rather than "SEN", but
        "Software Engineering Notes" was the only sensible interpretation
        I could come up with, so I changed it.  Do I have a "PEN" pal?  PGN]

  4. It seems to me that it is precisely in the area of serious hardware
     and software failures that RISKS should make its mark.  Directing
     itself to that topic, it fills a spot no existing list touches on
     directly, and treats a matter that concerns every computer
     professional who is earning a decent living.  Litigation about
     defective software design and programming malpractice will be the
     inevitable consequence of risks, and RISKS is the only place to
     discuss avoiding them.  Please consider focussing the list more closely
     on that subject.

        [Bob, Thanks for your comments.  I heartily agree on the importance
         of the last item.  But, I do not intend to generate all of the
         material for this forum, and can only smile when someone suggests
         that this forum is not what it should be.  I look forward to your
         help! PGN]

[End of Bob Carter's message and my interspersions.]
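
[As promised above, a toy undigestifier sketch in Python -- an illustration
 only, assuming just the convention visible in this issue: items separated
 by lines consisting entirely of hyphens.  The function name split_digest
 is invented here.

    def split_digest(text):
        # Split a RISKS-style digest into individual messages on
        # separator lines made up entirely of hyphens (20 or more).
        messages, current = [], []
        for line in text.splitlines():
            stripped = line.strip()
            if len(stripped) >= 20 and set(stripped) == {"-"}:
                if current:
                    messages.append("\n".join(current).strip())
                    current = []
            else:
                current.append(line)
        if current:
            messages.append("\n".join(current).strip())
        return messages

 A real undigestifier must also survive hyphen runs appearing inside
 messages, which is one reason digest formats break tools like BABYL.]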

-----------------------------------

Subject: RISKS forum              [including An Air-Traffic Control Problem]
Date: 16 Aug 85 21:06:39 PDT (Fri)
From: Scott M. Rose <rose@uw-bluechip.arpa>

I had kind of hoped that somebody would submit something on the recent
problem in Aurora, Illinois, where a computer cable was cut that carried
information from radar sensors to the regional air traffic control center
there.  Supposedly, the system was designed to be sufficiently redundant to
handle such a failure gracefully, but this turned out not to be the case:
there were several close calls as the system went up and down repeatedly.
There was information about the problem in the New York Times and the
Chicago Tribune, at least... but not in very good detail.

I wonder if the forum is the right format for such a group.  The problem is
that one may find oneself reluctant to report on an incident that was
widely covered in the popular press, and was current, for fear that a dozen
others have done the same.  Yet in this case, the apparent result is that
NOBODY reported on it, and I think such an event ought not pass without
note on this group.  I might propose something more like the info-nets
group, where postings are automatically forwarded to group members.  If
problems arose, then the postings could be filtered by the moderator...
say, on a daily basis?  Just an idea...

	-S Rose

               [Please don't feel reluctant to ask whether someone has
                reported an interesting event before you go to any 
                potentially duplicate effort.  We'd rather not miss out
                entirely.]

------------------------------

Date:     Sun, 18 Aug 85 12:23:25 EDT
From:     Smith@UDel-Dewey.ARPA
To:       RISKS@sri-csl.ARPA
Subject:  Risks in AI Diagnostic Aids

   I would enjoy a discussion on the legal and ethical problems that have
come up with the creation of AI diagnostic aids for doctors.  Who takes the
blame if the advice of a program causes a wrong diagnosis?  The doctor (if
so, then who would use such a program!?!?), the program's author(s) (if so,
then who would write such a program!?!?), the publishers/distributors of the
program (if so, then who would market such a program!?!?), ....  These
nagging questions will have to be answered before anyone is going to make
general use of these programs.
    I would be very interested in hearing what other people think about this 
question.  It seems to me that it would be a suitable one for this bboard.

		art smith
		(smith@UDel-Dewey.ARPA)

        ****************************************************
        ** Following are several items on the Strategic   **
        ** Defense Initiative and related subjects.       **
        ****************************************************

------------------------------

Date: Thu, 15 Aug 85 11:05:48 edt
From: malpass@ll-sst (Don Malpass)
To: INFO-HZ100@RADC-TOPS20
Subject: WARNING !!                    [A Trojan Horse Bites Man]

Today's Wall St. Journal contained the following article.  I think
it is of enough potential significance that I'll enter the whole thing.
In addition to the conclusions it states, it implies something about
good backup procedure discipline.
	In the hope this may save someone,
		Don Malpass

		******************************************
			(8/15/85 Wall St. Journal)
				ARF! ARF!
	Richard Streeter's bytes got bitten by an "Arf Arf," which isn't
a dog but a horse.
	Mr. Streeter, director of development in the engineering department
of CBS Inc. and home-computer buff, was browsing recently through the
offerings of Family Ledger, a computer bulletin board that can be used by
anybody with a computer and a telephone to swap advice, games or programs -
or to make mischief.  Mr. Streeter loaded into his computer a program that
was billed as enhancing his IBM program's graphics; instead it instantly wiped
out the 900 accounting, word processing and game programs he had stored in
his computer over the years.  All that was left was a taunt glowing back
at him from the screen: "Arf! Arf! Got You!"
"HACKERS" STRIKE AGAIN
	This latest form of computer vandalism - dubbed for obvious reasons
a Trojan Horse - is the work of the same kind of anonymous "hackers" who
get their kicks stealing sensitive data from government computers or invading
school computers to change grades.  But instead of stealing, Trojan Horses
just destroy all the data files in the computer.
	Trojan Horse creators are nearly impossible to catch - they usually
provide phony names and addresses with their programs - and the malevolent
programs often slip by bulletin-board operators.  But they are becoming a
real nuisance.  Several variations of the "Arf! Arf!" program have made
the rounds, including one that poses as a "super-directory" that
conveniently places computer files in alphabetical order.
	Operators have begun to take names and addresses of electronic
bulletin-board users so they can check their authenticity.  When a
computer vandal is uncovered, the word is passed to other operators.
Special testing programs also allow them to study the wording of
submitted programs and detect suspicious commands.
INTERFACER BEWARE
	But while Al Stone, the computer consultant who runs the Long Island-
based Family Ledger, has such a testing program, he says he didn't have time
to screen the "Arf! Arf!" that bit Mr. Streeter.  "Don't attempt to run
something unless you know its pedigree," he says.
	That's good advice, because the computer pranksters are getting more
clever - and nastier.  They are now creating even-more-insidious programs
that gradually eat away existing files as they are used.  Appropriately
enough, these new programs are known as "worms".

			(8/15/85 Wall St. Journal)
		******************************************

------------------------------

Date: Mon, 19 Aug 85 13:56:21 CDT
From: mooremj@EGLIN-VAX
Subject: Software engineering and SDI

[FROM Soft-Eng Digest         Fri, 23 Aug 85       Volume 1 : Issue  31]

Dr. David Parnas has quite accurately pointed out some of the dangers inherent
in the software to be written for the Strategic Defense Initiative.  I must
take exception, however, to the following statement from the Boston Globe
story quoted in Volume 1, Issue 29, of this digest:

        "To imagine that Star Wars systems will work perfectly
         without testing is ridiculous.  A realistic test of the
         Strategic Defense Initiative would require a practice
         nuclear war.  Perfecting it would require a string of such wars."

There are currently many systems which cannot be fully tested.  One example
is the software used in our present defense early warning system.  Another
example, one with which I am personally familiar, is the Range Safety Command
Destruct system at Cape Canaveral Air Force Station.  This system provides
the commands necessary to destroy errant missiles which may threaten populated
areas; I wrote most of the software for the central computer in this system.
The system can never be fully tested in the sense implied above, for to do so
would involve the intentional destruction of a missile for testing purposes
only.  On the other hand, it must be reliable:  a false negative (failure to
destroy a missile which endangers a populated area) could cause the loss of
thousands of lives; a false positive (unintentional destruction of, say, a
Space Shuttle mission) is equally unthinkable.  There are many techniques
available to produce fault-tolerant, reliable software, just as there are for
hardware; the Range Safety system was designed by some of the best people at
NASA, the U. S. Air Force, and several contractors.  I do not claim that a 
failure of this system is "impossible", but the risk of a failure, in my
opinion, is acceptably low.
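
   [An aside on the arithmetic behind such reliability claims -- a generic
    textbook sketch, not a description of the actual Range Safety design:
    with three independent channels feeding a majority voter, each channel
    failing with probability p, the voted output fails only when at least
    two channels fail,

        \[ P_{\mathrm{fail}} = 3p^2(1-p) + p^3 \approx 3p^2 \quad (p \ll 1), \]

    so channels with p = 10^{-3} give a voted failure probability of about
    3 x 10^{-6}.  The caveat is independence: identical software running in
    every channel fails in every channel at once, so arithmetic of this
    kind bounds only independent (chiefly hardware) faults.]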

"But ANY risk is too great in Star Wars!"  

I knew someone would say that, and I can agree with this sentiment.  The only
alternative, then, is not to build it, because any system at all will involve
some risk (however small) of failure; and failure will, as Dr. Parnas has 
pointed out, lead to the Ultimate Disaster.  I believe that this is what Dr.
Parnas is hoping to accomplish:  persuading the authorities that the risk
is unacceptable.

It won't work.  Oh, perhaps it will in the short run; "Star Wars" may not 
be built now, or ever.  But sooner or later, some system will be given 
life-and-death authority over the entire planet, whether it is a space 
defense system, a launch-on-warning strategic defense system, or something 
else.  The readers of this digest are the present and future leaders in
the field of software engineering.  It is our responsibility to refine the
techniques now used and to develop new ones so that these systems WILL be
reliable.  I fear that some first-rate people may avoid working on such
systems because they are "impossible"; this will result in second-rate
people working on them, which is something we cannot afford.  This is NOT 
a slur at Dr. Parnas.  He has performed an invaluable service by bringing
the public's attention to the problem.  Now it is up to us to solve that
problem.

I apologize for the length of this message.  The above views are strictly
my own, and do not represent my employer or any government agency.

                                      Martin J. Moore
                                      Senior Software Analyst
                                      RCA Armament Test Project
                                      P. O. Box 1446
                                      Eglin AFB, Florida  32542
                             ARPAnet: MOOREMJ@EGLIN-VAX.ARPA

------------------------------

From: horning@decwrl.ARPA (Jim Horning)
Date: 21 Aug 1985 1243-PDT (Wednesday)
To: Neumann@SRI-CSLA
Subject: Trip Report: Computing in Support of Battle Management

[This is a relatively long report, because I haven't been able to come
up with a simple characterization of an interesting and informative day.]

Background:

On August 13 I travelled to Marina del Rey to spend a day with the
U.S. Department of Defense Strategic Defense Initiative Organization
Panel on Computing in Support of Battle Management (DoD SDIO PCSBM).

SDI is the "Star Wars" antiballistic missile system; PCSBM is the panel
Dave Parnas resigned from.

I wasn't really sure what to expect. As I told Richard Lau when he
invited me to spend a day with them, I'd read what Parnas wrote, but
hadn't seen the other side.  He replied that the other side hadn't been
written yet. "Come on down and talk to us. The one thing that's certain
is that what we do will have an impact, whether for good or for ill."

Summary:

The good news is that the panel members are not crazies; they aren't
charlatans; they aren't fools. If a solution to SDI's Battle Management
Software problem can be purchased for five billion dollars (or even
ten), they'll probably find it; if not, they'll eventually recognize
that it can't.

The bad news is that they realize they don't have the expertise to
solve the problem themselves, or even to direct its solution. They
accept Dave Parnas's assessment that the software contemplated in the
"Fletcher Report" cannot be produced by present techniques, and that
AI, Automatic Programming, and Program Verification put together won't
generate a solution. Thus their invitations to people such as myself,
Bob Balzer, and Vic Vyssotsky to come discuss our views of the state
and prospects of software technology.

I think a fair summary of the panel's current position is that they are
not yet convinced that the problem cannot be modified to make it
soluble. ("Suppose we let software concerns drive the system
architecture? After all, it is one of the two key technologies.") They
are trying to decide what must be done to provide the information that
would be needed in the early 1990s to make a decision about deploying a
system in the late 1990s.

Assumptions:

Throughout the day's discussions, there were repeated disconnects
between their going-in assumptions and mine. In fairness, they tried to
understand the sources of the differences, to identify their
assumptions, and to get me to identify and justify mine.

* Big budgets: I've never come so close to a trillion-dollar ($10**12)
project before, even in the planning stage. ("The satellite launches
alone will cost upwards of $500 billion, so there's not much point in
scrimping elsewhere.")

- I was unprepared for the intensity of their belief that any technical
problem could be steamrollered with a budget that size.

- They seemed surprised that I believed that progress in software
research is now largely limited by the supply of first-rate people, and
that the short-term effect of injecting vastly more dollars would be to
slow things down by diverting researchers to administer them.

* Big software: They were surprised by my observation that for every
order of magnitude in software size (measured by almost any interesting
metric) a new set of problems seems to dominate.

- This implies that no collection of experiments with million-line
"prototypes" can ensure success in building a ten-million-line system.
I argued that the only prototype from which they would learn much would
be a full-scale, fully-functional one. Such a prototype would also
reveal surprising consequences of the specification.
(The FIFTEENTH LAW OF SYSTEMANTICS: A complex system that works is
invariably found to have evolved from a simple system that works.)

- Only Chuck Seitz and Bijoy Chatterjee seemed to fully appreciate why
software doesn't just "scale up" (doubtless because of their hardware
design experience). It is not a "product" that can be produced at some
rate, but the design of a family of computations; it is the
computations that can be easily scaled.

* Reliability: I had assumed that one of the reasons Battle Management
software would be more difficult than commercial software was its
more stringent reliability requirement. They assume that this is one of
the parameters that can be varied to make the problem easier.

Discussion:

The Panel is still in the process of drafting its report on Battle
Management Systems. Although they take the need to produce such a
system as a given, almost anything else is negotiable. (In particular,
they do not accept the "Fletcher Report" as anything more than a
springboard for discussion, and criticize current work for following it
too slavishly. The work at Rome Air Development Center--which produced
estimates like 24.61 megalines of code, 18.28 gigaflops per weapons
platform--was mentioned contemptuously, while the Army work at Huntsville
was considered beneath contempt.)

The following comments are included merely to indicate the range and
diversity of opinions expressed. They are certainly not official
positions of the panel, and--after being filtered though my
understanding and memory--may not even be what the speaker intended.
Many of the inconsistencies are real; the panel is working to identify
and resolve them.

- The problem may be easier than a banking system, because: each
autonomous unit can be almost stateless; a simple kernel can monitor
the system and reboot whenever a problem is detected; there are fewer
people in the loop; more hardware overcapacity can be included.

- If you lose a state it will take only a few moments to build a new
state. (Tracks that are more than 30 minutes old are not interesting.)

- Certain kinds of reliability aren't needed, because: a real battle
would last only a few minutes; the system would be used at most once;
with enough redundancy it's OK for individual weapons to fail; the
system doesn't have to actually work, just be a credible deterrent; the
system wouldn't control nuclear weapons--unless the Teller "pop up"
scheme is adopted; the lasers won't penetrate the atmosphere, so even
if the system runs amok, the worst it could do would be to intercept some
innocent launch or satellite.

- We could debug the software by putting it in orbit five or ten years
before the weapons are deployed, and observing it. We wouldn't even
have to deploy them until the system was sufficiently reliable. Yes,
but this would not test the important modes of the system.

- Dependence on communication can be minimized by distributing
authority: each platform can act on its own, and treat all
communication as hints.

- With a multi-level fault-tolerance scheme, each platform can monitor
the state of its neighbors, and reboot or download any that seem to be
malfunctioning.  (A toy sketch of this idea appears after this list of
comments.)

- In fifteen years we can put 200 gigaflops in orbit in a teacup. Well,
make that a breadbox.

- Space qualification is difficult and slow. Don't count on
microprocessors of more than a few mips in orbit. Well, maybe we could
use fifty of them.

- How much can we speed up computations by adding processors? With
general-purpose processors, probably not much. How much should we rely
on special-purpose space-qualified processors?

- Processor cost is negligible. No, it isn't. Compared to software
costs or total system costs it is. No, it isn't, you are
underestimating the costs of space qualification.

- 14 MeV neutron flux cannot effectively be shielded against and
represents a fundamental limitation on the switching-speed/power
product.  Maybe we should put all the computationally intensive
components under a mountain. But that increases the dependence on
communication.

- Maybe we could reduce failure rates by putting the software in
read-only memory. No, that makes software maintenance incredibly
difficult.

- Flaccidware. It's software now, but it can become hardware when
necessary.

- Is hardware less prone to failure if switched off? Maybe we could
have large parts of the system on standby until the system goes on
alert. Unfortunately, the dominant hardware failure modes continue even
with power off.

- The software structure must accommodate changes in virtually all
component technologies (weapons, sensors, targets, communication,
computer hardware) during and following deployment. But we don't have
much technology for managing rapid massive changes in large systems.
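
[An illustrative aside on the neighbor-monitoring idea above: a toy sketch
 in Python.  Every name and parameter here -- the 5-second timeout, the
 reboot hook -- is invented for illustration and drawn from no actual
 design.

    import time

    HEARTBEAT_TIMEOUT = 5.0  # seconds of silence before a neighbor is suspect

    class PlatformMonitor:
        """Toy model: track heartbeats from neighboring platforms."""

        def __init__(self, neighbors):
            now = time.monotonic()
            self.last_heard = {n: now for n in neighbors}

        def record_heartbeat(self, neighbor):
            # Call this whenever any message arrives from that neighbor.
            self.last_heard[neighbor] = time.monotonic()

        def suspects(self):
            # Neighbors silent for longer than the timeout.
            now = time.monotonic()
            return [n for n, t in self.last_heard.items()
                    if now - t > HEARTBEAT_TIMEOUT]

        def sweep(self, reboot):
            # reboot() stands in for whatever command link would restart
            # or re-download software on a suspect platform.
            for n in self.suspects():
                reboot(n)
                self.last_heard[n] = time.monotonic()  # grace period

 Note that the monitor is itself a failure mode: a platform with a bad
 clock or a cut link would "reboot" healthy neighbors -- exactly the flavor
 of inconsistency the panel was wrestling with.]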

Relation to Critics:

Dave Parnas's criticisms have obviously been a matter of considerable
concern for the panel. Chuck Seitz and Dick Lau both said explicitly
that they wouldn't be satisfied making a recommendation that failed to
address the issues Dave and other critics have raised. Chuck also
distributed copies of "The Star Wars Computer System" by Greg Nelson
and David Redell, commending it to the attention of the panel as
"Finally, some well-written and intelligent criticism."

Richard Lipton had a somewhat different attitude: How can they say that
what we are going to propose is impossible, when even we don't know
yet what we're going to propose? And why don't software researchers
show more imagination? When a few billion dollars are dangled in front
of them, the physicists will promise to improve laser output by nine
decimal orders of magnitude; computer scientists won't even promise one
or two for software production.

The minutes of the August 12 meeting contain the following points:

- Critics represent an unpaid "red team" and serve a useful function in
identifying weak points in the program.

- Critiques should be acknowledged, and areas identified as to how we
can work to overcome these problem areas.

- Throughout our discussions, and in our report we should reflect the
fact that we have accepted a degree of uncertainty as an inherent part
of the strategic defense system.

- How to get the system that is desired? This basic problem goes back
to defining requirements--a difficult task when one is not quite sure
what one wants and what has to be done.

Prospects:

After all of this, what do I think of the prospects for SDI Battle
Management Software? I certainly would not be willing to take on
responsibility for producing it. On the other hand, I cannot say flatly
that no piece of software can be deployed in the 1990s to control a
ballistic missile defense system. It all depends on how much
functionality, coordination, and reliability are demanded of it.

Unfortunately, as with most other computer systems, the dimension in
which the major sacrifice will probably be made is reliability. The
reality of the situation is that reliability is less visible before
deployment than other system parameters and can be lost by default. It
is also probably the hardest to remedy post facto. Of course, with a
system intended to be used "at most once," there may be no one around
to care whether or not it functioned reliably.

Despite these misgivings, I am glad that this panel is taking seriously
its charter to develop the information on which a deployment decision
could responsibly be based.

Jim H.

------------------------------

[An earlier SU-bboard message that prompted the following sequence of 
replies seemed like total gibberish, so I have omitted it.  PGN]


Date: 13 Aug 85  1521 PDT
From: John McCarthy <JMC@SU-AI.ARPA>
Subject: Forum on Risks to the Public in Computer Systems 
To:   su-bboards@SU-AI.ARPA  
                              [but not To: RISKS...]

I was taking [as?] my model Petr Beckmann's book "The Health Hazards of not
Going Nuclear" in which he contrasts the slight risks of nuclear energy with
the very large number of deaths resulting from conventional energy sources
from, e.g., mining and air pollution.  It seemed to me that your announcement
was similarly one-sided in considering the risks of on-line systems while
ignoring the possibility of risks from their non-use.  I won't be specific
at present, but if you or anyone else wants to make the claim that there are
no such risks, I'm willing to place a substantial bet.

   [Clearly both inaction and non-use can be risky.  The first two items at
    the beginning of this issue (Vol 1 no 2) -- the lobstermen and the Union
    Carbide case -- involved inaction.  PGN]

------------------------------

Date: 14 Aug 85  1635 PDT
From: John McCarthy <JMC@SU-AI.ARPA>
Subject: IJCAI as a forum   
To:   su-bboards@SU-AI.ARPA 

	Like Chris Stuart, I have also contemplated using IJCAI as a forum.
My issue concerns the computer scientists who have claimed, in one case "for
fundamental computer science reasons", that the computer programs required
for the Strategic Defense Initiative (Star Wars) are impossible to write and
verify without having a series of nuclear wars for practice.  Much of the
press (both Science magazine and the New York Times) has assumed (in my
opinion correctly) that these people are speaking, not merely as
individuals, but in the name of computer science itself.  The phrase "for
fundamental computer science reasons" was used by one of the computer
scientist opponents.

	In my opinion these people are claiming an authority they do not
possess.  There is no accepted body of computer science principles that
permits concluding that some particular program that is mathematically
possible cannot be written and debugged.  To put it more strongly, I don't
believe that there is even one published paper purporting to establish
such principles.  However, I am not familiar with the literature on
software engineering.

	I think they have allowed themselves to be tempted into
exaggerating their authority in order to support the anti-SDI cause,
which they support for other reasons.

	I have two opportunities to counter them.  First, I'm giving
a speech in connection with an award I'm receiving.  Since I didn't
have to submit a paper, I was given carte blanche.  Second, I have
been asked by the local arrangements people to hold a press conference.
I ask for advice on whether I should use either of these opportunities.
I can probably even arrange for some journalist to ask my opinion on
the Star Wars debugging issue, so I wouldn't have to raise the issue
myself.  Indeed since my position is increasingly public, I might
be asked anyway.

	To make things clear, I have no position on the feasibility
of SDI, although I hope it can be made to work.  Since even the
physical principles that will be proposed for the SDI system haven't
been determined, it isn't possible to determine what kind of programs
will be required and to assess how hard they will be to write
and verify.  Moreover, it may be possible to develop new techniques
involving both simulation and theorem proving relevant to verifying
such a program.  My sole present point is that no-one can claim
the authority of computer science for asserting that the task
is impossible or impractical.

	There is even potential relevance to AI, since some of the
opponents of SDI, and very likely some of the proponents, have suggested
that AI techniques might be used.

	I look forward to the advice of BBOARD contributors.

------------------------------

Date: Thu 15 Aug 85 00:17:09-PDT
From: Peter Karp <KARP@SUMEX-AIM.ARPA>
Subject: Verifying SDI software
To: su-bboard@SUMEX-AIM.ARPA

John McCarthy: I argue CPSR's approach is reasonable as follows:

1) I assume you admit that bugs in the SDI software would be very
   bad, since they could quite conceivably leave our cities open to
   Soviet attack.

2) You concede software verification theory does not permit proof
   of correctness of such complex programs.  I concede this same
   theory does not show such proofs are impossible.

3) The question to responsible computer professionals then becomes:
   From your experience in developing and debugging complex computer
   systems, how likely do you believe it is that currently possible
   efforts could produce error-free software, or even software whose
   reliability is acceptable given the risks in (1) ?

Clearly answering (3) requires subjective judgements, but computer
professionals are among the best people to ask to make such 
judgements given their expertise.  

I think it would be rather amusing if you told the press what you
told bboard: that you "hope they can get it to work".


------------------------------

Date: 16 Aug 85  2200 PDT
To:   su-bboards@SU-AI.ARPA 
From: John McCarthy <JMC@SU-AI.ARPA>
Subject: sdi 

I thank those who advised me on whether to say something about the
SDI controversy in my lecture or at the press conference.  I don't
presently intend to say anything about it in my lecture.  Mainly
this is because thinking about what to say about a public issue
would interfere with thinking about AI.  I may say something or
distribute a statement at the press conference.

I am not sure I understand the views of those who claim the computer
part of SDI is infeasible.  Namely, do they hope it won't work?  If
so, why?  My reactionary mind thinks up hypotheses like the following.
It's really just partisanship: they have been against U.S. policy in
so many areas, including defense, that they automatically oppose any
initiative and then look for arguments.

------------------------------

Date: Thu, 15 Aug 85 13:01:46 pdt
From: vax-populi!dparnas@nrl-css (Dave Parnas)
To: Neumann@SRI-CSLA.ARPA
Subject: Re:  [John McCarthy <JMC@SU-AI.ARPA>: IJCAI as a forum   ]

McCarthy is making a classic error of criticizing something that 
he has not read.  I have not argued that any program cannot be written 
and debugged.  I argue a much weaker and safer position, that we cannot
know that the program has been debugged.  There are "fundamental computer
science reasons" for that; they have to do with the size of the smallest
representation of the mathematical functions that describe the behaviour
of computer software and our inability to know that the specifications
are correct.  

Dave
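
[A gloss on the scale behind such arguments -- an illustration, not
 Parnas's own formulation: a program's black-box behaviour is a function
 from input states to output states, and for a non-continuous function the
 behaviour between tested points cannot be interpolated.  Even a modest 64
 bits of input state gives

    \[ 2^{64} \approx 1.8 \times 10^{19} \]

 distinct cases; at 10^6 tests per second, exhaustive testing would take
 roughly 600,000 years, and any untested case may behave arbitrarily unlike
 its neighbours.]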


Date: Thu, 15 Aug 85 13:14:22 pdt
From: vax-populi!dparnas@nrl-css (Dave Parnas)
To: neumann@SRI-CSL.ARPA
Subject: Copy of cover letter to Prof. John McCarthy

Dear Dr. M

	A friend of mine, whose principal weakness is reading the junk mail
posted on bulletin boards, sent me a copy of your posting with regard to
SDI.

	It is in general a foolish error to criticize a paper that you have
not read on the basis of press reports of it.

	Nobody has, in fact, claimed that any given program cannot be
written and "debugged" (whatever that means).  The claim is much weaker,
that we cannot know with confidence that the program does meet its
specification and that the specification is the right one.  There is both
theoretical (in the form of arguments about the minimal representation of
non-continuous functions) and empirical evidence to support that claim.  The
fact that you do not read the literature on software engineering does not
give you the authority to say that there are no papers supporting such a
claim.

	As I would hate to see anyone, whether he be computer scientist or AI
specialist, argue on the basis of ignorance, I am enclosing ...


------------------------------

Date: Thu 15 Aug 85 18:50:46-PDT
From: Gary Martins <GARY@SRI-CSLA.ARPA>
Subject: Speaking Out On SDI
To: jmc@SU-AI.ARPA

Dear Dr. McC -

In response to your BB announcement:

1.  Given that IJCAI is by and large a forum for hucksters and crackpots of
various types, it is probably a poor choice of venue for the delivery of
thoughts which you'd like taken seriously by serious folks.

2. Ditto, for tying your pro-SDI arguments in with "AI"; it can only lower
the general credibility of what you have to say.

3.  You are certainly right that no-one can now prove that the creation of
effective SDI software is mathematically impossible, and that part of your
argument is beyond reproach, even if rather trivial.  However, you then
slip into the use of the word "impractical", which is a very different
thing, with entirely different epistemological status.  On this point,
you may well be entirely wrong -- it is an empirical matter, of course.


I take no personal stand on the desirability or otherwise of SDI, but
as a citizen I have a vested interest in seeing some discussions of
the subject that are not too heavily tainted by personal bias and
special pleading.


Gary R. Martins
Intelligent Software Inc.

------------------------------

       International Conference on Software Engineering
              28-30 August 1985, London UK
         Feasibility of Software for Strategic Defense
                    Panel Discussion
             30 August 1985, 1:30 - 3:00 PM

                       Panelists:       
        Frederick P. Brooks, Jr., University of North Carolina
        David Parnas, University of Victoria
           Moderator: Manny Lehman, Imperial College

This panel will discuss the feasibility of building the software for the
Strategic Defense System ('Star Wars') so that that software could be
adequately trusted to satisfy all of the critical performance goals.  The
panel will focus strictly on the software engineering problems in building
strategic defense systems, considering such issues as the reliability of the
software and the manageability of the development.

    [This should be a very exciting discussion.  Fred has extensive hardware,
     software, and management experience from his IBM OS years.  David's
     8 position papers have been widely discussed -- and will appear in the
     September American Scientist.  We hope to be able to report on this
     panel later (or read about it in ARMS-D???).   Perhaps some of you
     will be there and contribute your impressions.  PGN]

------------------------------

Date: Mon, 15 Jul 85 11:05 EDT
From: Tom Parmenter <parmenter@SCRC-STONY-BROOK.ARPA>

From an article in Technology Review by Herbert Lin on the difficulty
(impossibility) of developing software for the Star Wars (Strategic
Defense Initiative) system:

  Are there alternatives to conventional software development?  Some defense
  planners think so.  Major Simon Worden of the SDI office has said that

    "A human programmer can't do this.  We're going to be developing new
     artificial intelligence systems to write the software.  Of course, 
     you have to debug any program.  That would have to be AI too."

------------------------------

Date: Wed, 14 Aug 85 18:08:57 cdt
From: uwmacc!myers@wisc-rsch.arpa (Latitudinarian Lobster)
Message-Id: <8508142308.AA12046@maccunix.UUCP>
To: risks@sri-csl.arpa 
Subject: CPSR-Madison paper for an issue of risks?

The following may be reproduced in any form, as long as the text and credits
remain unmodified.  It is a paper especially suited to those who don't already
know a lot about computing.  Please mail comments or corrections to:

Jeff Myers
University of Wisconsin-Madison
Madison Academic Computing Center
1210 West Dayton Street
Madison, WI  53706
(These opinions do not necessarily reflect the views of any other
person or group at UW-Madison.)
ARPA: uwmacc!myers@wisc-rsch.ARPA
UUCP: ..!{harvard,ucbvax,allegra,heurikon,ihnp4,seismo}!uwvax!uwmacc!myers
BitNet: MYERS at MACCWISC

-------------------------------------------------------------------------------

                   COMPUTER UNRELIABILITY AND NUCLEAR WAR

     Larry Travis, Ph.D., Professor of Computer Sciences, UW-Madison 
	      Daniel Stock, M.S., Computer Sciences, UW-Madison
	     Michael Scott, Ph.D., Computer Sciences, UW-Madison
	    Jeffrey D. Myers, M.S., Computer Sciences, UW-Madison
	      James Greuel, M.S., Computer Sciences, UW-Madison
James Goodman, Ph.D., Assistant Professor of Computer Sciences, UW-Madison
     Robin Cooper, Ph.D., Associate Professor of Linguistics, UW-Madison
	     Greg Brewster, M.S., Computer Sciences, UW-Madison

                               Madison Chapter
               Computer Professionals for Social Responsibility
                                  June 1984

           Originally prepared for a workshop at a symposium on the
                     Medical Consequences of Nuclear War
                         Madison, WI, 15 October 1983

  [The paper is much too long to include in this forum, but can be 
  obtained from Jeff Myers at the above net addresses, or FTPed from
  SRI-CSL:<RISKS>MADISON.PAPER.  The section headings are as follows:

    1.  Computer Use in the Military Today, James Greuel, Greg Brewster
    2.  Causes of Unreliability, Daniel Stock, Michael Scott
    3.  Artificial Intelligence and the Military, Robin Cooper
    4.  Implications, Larry Travis, James Goodman

  ]

------------------------------

Date: Wed, 21 Aug 85 17:46:55 PDT
From: Clifford Johnson <GA.CJJ@Forsythe>
To: SU-BBOARDS@SCORE
Subject:  @=  Can a computer declare war?

****************** CAN A COMPUTER DECLARE WAR?

Below is the transcript of a court hearing in which it was argued by the
Plaintiff that nuclear launch on warning capability (LOWC, pronounced
lou-see) unconstitutionally delegates Congress's mandated power to declare
war.

The Plaintiff is a Londoner and computer professional motivated to act by
the deployment of Cruise missiles in his hometown.  With the advice and
endorsement of Computer Professionals for Social Responsibility, on February
29, 1984, he filed a complaint in propria persona against Secretary of
Defense Caspar Weinberger seeking a declaration that peacetime LOWC is
unconstitutional.  The first count is presented in full below; a second
count alleges a violation of Article 2, Part 3 of the United Nations Charter
which binds the United States to settle peacetime disputes "in such a manner
that international peace and security, and justice, are not endangered":

1.  JURISDICTION:  The first count arises under the Constitution of the
United States at Article I, Section 8, Clause 11, which provides that "The
Congress shall have Power ... To declare War"; and at Article II, Section 2,
Clause 1, which provides that "The President shall be Commander in Chief" of
the Armed Forces.

2.  Herein, "launch-on-warning-capability" is defined to be any set of
procedures whereby the retaliatory launching of non-recoverable nuclear
missiles may occur both in response to an electronically generated warning
of attacking missiles and prior to the conclusively confirmed commencement
of actual hostilities with any State presumed responsible for said attack.

3.  The peacetime implementation of launch-on-warning-capability is now
presumed constitutional, and its execution by Defendant and Defendant's
appointed successors is openly threatened and certainly possible.

4.  Launch-on-warning-capability is now subject to a response time so short
as to preclude the intercession of competent judgment by the President or by
his agents.

5.  The essentially autonomous character of launch-on-warning-capability
gives rise to a substantial probability of accidental nuclear war due to
computer-related error.

6.  Said probability substantially surrenders both the power of Congress to
declare war and the ability of the President to command the Armed Forces,
and launch-on-warning-capability is therefore doubly repugnant to the
Constitution.

7.  The life and property of Plaintiff are gravely jeopardized by the threat
of implementation of launch-on-warning-capability.

WHEREFORE, Plaintiff prays this court declare peacetime
launch-on-warning-capability unconstitutional.

****************** THE HEARING IN THE COURT OF APPEALS FOLLOWS 

[in the original message, and is too lengthy to include here.  I presume you
will find it in ARMS-D -- see my interpolation into the note from Bob Carter
above.  Otherwise, you can FTP it from SRI-CSL:<RISKS>JOHNSON.HEARING.  PGN]
-------

154.1. Reply by CADLAC::GOUN, Fri Aug 30 1985 18:30 (653 lines)
The first issue follows the formfeed.

      FORUM ON RISKS TO THE PUBLIC IN COMPUTER SYSTEMS 

                         Vol 1 No 1 


    (THIS vol/no CAN BE FTPed FROM SRI-CSL:<RISKS>RISKS-1.1)

Contents: 
  Welcome!
  ACM Council Resolution of 8 October 1984
  An Agenda for the Future
  Computer-Related Incidents Illustrating Risks to the Public 
  Strategic Computing Initiative
  Strategic Defense Initiative; David Parnas and SDI
  Herb Lin: Software for Ballistic Missile Defense, June 1985
  Weapons and Hope by Freeman Dyson (minireview by Peter Denning)
  Christiane Floyd et al.: The Responsible Use of Computers
  Human safety (software safety)
  Computers in critical environments, Rome, 23-25 October 1985

------------------------------

From: Neumann@SRI-CSL <Peter G. Neumann>
Subject: WELCOME TO RISKS@SRI-CSL 

This is the first issue of a new on-line forum.  Its intent is to address
issues involving risks to the public in the use of computers.  As such, it
is necessarily concerned with whether/how critical requirements for human
safety, reliability, fault tolerance, security, privacy, integrity, and
guaranteed service (among others) can be met (in some cases all at the same
time), and how the attempted fulfillment or ignorance of those requirements
may imply risks to the public.  We will presumably explore both deficiencies
in existing systems and techniques for developing better computer systems --
as well as the implications of using computer systems in highly critical
environments.

Introductory Comments

This forum is inspired by the letter from Adele Goldberg, President of the
Association for Computing Machinery (ACM), in the Communications of the ACM,
February 1985 (pp. 131-133).  In that message (part of which is reproduced
below), Adele outlined the ACM's intensified concern with our increasingly
critical dependence on the use of computers, a concern which culminated in
the ACM's support for a Forum on Risks to the Public in Computer Systems
(RPCS), to be developed by the ACM Committee on Computers and Public Policy
(of which I am currently the Chairman).  My involvement in this BBOARD
activity is thus motivated by my ACM roles, but also by strong feelings that
this topic is one of the most important confronting us.  In keeping with ACM
policy, and with due respect to the use of the ARPANET, we hope to attain a
representative balance among differing viewpoints, although this clearly
cannot be achieved locally within each instance of the forum.

For discussions on requirements, design, and evaluation techniques for
critical systems -- namely, how to do it right in the first place so that a
system can satisfy its requirements and can continue to maintain its desired
abilities through ongoing maintenance and evolution -- you will find a little
solace in the literature on computer science, computer systems, and software
engineering.  There is even some modest encouragement from the formal
verification community -- which readers of the ACM SIGSOFT Software
Engineering Notes will find in the forthcoming August 1985 special issue on
the verification workshop VERkshop III.  However, it is not encouraging to
find many developers of critical software ignoring what is known about how
to do it better.  In this RISKS forum, we hope that we will be able to
confront some of those problems, and specifically those where risks to the
public are present.

You should also be aware (if you are not already) of several related on-line
services:  HUMAN-NETS@RUTGERS for a variety of issues pertaining to people
(but originally oriented to the establishment of WorldNet), SOFT-ENG@MIT-XX
for software engineering, and perhaps SECURITY@RUTGERS for security -- it is
still young and rather narrow (car-theft prevention is big at the moment,
with a few messages on passwords and forged mail headers).  (You can get
more information from SRI-NIC.ARPA:<NETINFO>INTEREST-GROUPS.TXT.)  I look at
these regularly, so some cross-fertilization and overlap may be expected.
However, the perspective of RISKS seems sufficiently unique to justify the
existence of still another interest group!

RISKS Forum Procedures

I hope all this introductory detail does not deter you, but it seems to be
worthwhile to set things up cleanly from the beginning.  To submit items for
distribution, send mail to RISKS@SRI-CSL.ARPA.  For all other messages
(e.g., list additions or deletions, or administrative complaints), send to
RISKS-Request@SRI-CSL.ARPA.

Submissions should relate directly to risks to the public involving computer
systems, be reasonably coherent, and have a brief explicit descriptive
subject line.  Flames, ad hominem attacks, overtly political statements, and
other inappropriate material will be rejected.  Carefulness, reasonably
clear writing, and technical accuracy will be greatly appreciated.  Much
unnecessary flailing can be avoided with just a little forethought.

Contributions will generally be collected and distributed in digest form
rather than singly, as often as appropriate.  Subject lines may be edited in
order to group messages with similar content.  Long messages may have
portions of lesser interest deleted (and so marked), and/or may be split
across several issues.

Initially we will provide distributions to individuals, but as soon as there
are more than a few individuals at any given host, we will expect the
establishment of a file <BBOARD>RISKS.TXT or equivalent on that host -- with
local option whether or not to forward individual copies.  Back issues may
be FTPed from SRI-CSL.ARPA:<RISKS>RISKS-"vol"."no", where "vol" and "no" are
volume and number -- i.e., RISKS-1.1 for this issue.  But please try to rely
on local repositories rather than swamping the gateways, nets, and SRI-CSL.

------------------------------

From: Neumann@SRI-CSL <Peter G. Neumann>
Subject: ACM Council Resolution of 8 October 1984

Following are excerpts from ACM President Adele Goldberg's letter in the
Communications of the ACM, February 1985 (pp. 131-133).

  On this day [8 October 1984], the ACM Council passed an important
  resolution.  It begins:

    Contrary to the myth that computer systems are infallible, in fact
    computer systems can and do fail.  Consequently, the reliability of
    computer-based systems cannot be taken for granted.  This reality
    applies to all computer-based systems, but it is especially critical
    for systems whose failure would result in extreme risk to the public.
    Increasingly, human lives depend upon the reliable operation of 
    systems such as air traffic and high-speed ground transportation 
    control systems, military weapons delivery and defense systems, and
    health care delivery and diagnostic systems.

  The second part of the resolution includes a list of technical questions
  that should be answered about each computer system.  This part states that:

    While it is not possible to eliminate computer-based systems failure
    entirely, we believe that it is possible to reduce risks to the public
    to reasonable levels.  To do so, system developers must better
    recognize and address the issues of reliability.  The public has the
    right to require that systems are installed only after proper steps
    have been taken to assure reasonable levels of reliability.

    The issues and questions concerning reliability that must be addressed
    include:

    1. What risks and questions concerning reliability are involved when
       the computer system fails?

    2. What is the reasonable and practical level of reliability to
       require of the system, and does the system meet this level?
  
    3. What techniques were used to estimate and verify the level of
       reliability?

    4. Were the estimators and verifiers independent of each other
       and of those with vested interests in the system?

Adele's letter goes on to motivate the ACM's authorization of a forum on
risks to the public in the use of computer systems, of which this is an
on-line manifestation.  As you can see, I believe that the charter must be
broader than just reliability, including as appropriate other critical
requirements such as fault tolerance, security, privacy, integrity,
guaranteed service, human safety, and even world survival (in sort of
increasing order of globality).  There is also an important but probably
sharply delimited role for design verification (such as has been carried out
in the multilevel-security community in demonstrating that formal
specifications are consistent with formal requirements) and even code
verification (proving consistency of code and specifications).  Formal
verification technologies, however, seem suited not to mammoth systems but
only to selected critical components -- assuming that those components can be
isolated (which is the operative assumption in the case of security kernels
and trusted computing bases).  For example, see the VERkshop III proceedings
noted above.  PGN
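
As a toy illustration of what "proving consistency of code and
specifications" means in practice, consider the following fragment in
the modern proof assistant Lean (version 4; the function and theorem
names are invented, and the omega decision procedure is assumed
available).  The def is the "code", the theorem is its "specification",
and the machine-checked proof is the consistency demonstration:

  def double (n : Nat) : Nat := n + n    -- the "code"

  -- the "specification": double agrees with multiplication by two
  theorem double_spec (n : Nat) : double n = 2 * n := by
    unfold double
    omega                               -- arithmetic decision procedure

Scaling such proofs from a two-line function to a security kernel is
precisely the selected-critical-components problem noted above.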

------------------------------

From: Neumann@SRI-CSL <Peter G. Neumann>
Subject: AN AGENDA FOR THE FUTURE 

One of the activities of the ACM Committee on Computers and Public Policy
will be the review of a problem list presented by Dan McCracken and his
committee in the September 1974 issue of the Communications of the ACM, and
an update of it in the light of dramatic changes in the use of computers
since then.  Three items from that problem list are particularly relevant to
our RISK forum.

   * Computers and money
   * Computers and privacy
   * Computers and elections

Indeed, in the latest issue of the ACM SIGSOFT Software Engineering Notes
(July 1985), I reported on a variety of recent money problems, security
problems, and a whole string of potential election-fraud problems -- in the
last case suggesting opportunities for Trojan Horses and local fraud.  On
the third subject, there was an article by David Burnham (NY Times) in
newspapers of 29 July 1985 (NY Times, SF Chron, etc.), on vulnerabilities in
various computerized voting systems.  About 60% of the votes are counted by
computer programs, with over a third of those votes -- i.e., more than a
fifth of all votes cast -- being counted by one program (or variants?)
written by Computer Election Systems of Berkeley CA.
Burnham writes, "The allegations that vote tallies calculated with [that
program] may have been secretly altered have raised concern among election
officials and computer experts... In Indiana and West Virginia, the company
has been accused of helping to rig elections."  This topic is just warming
up.

Items that also give us opportunities for discussions on risks to the
public include these:

   * Computers and defense
   * Computers and human safety 
   * Computer-user consumer protection
   * Computers and health
   * Informal and formal models of critical properties
     (e.g., not just of security or reliability, not 
      so high-level as Asimov's 3 Laws of Robotics)

Several items on computers and defense are included below.  There are also
some comments on software that is safe for humans.

I would think that some sort of Ralph-Nader-like consumer protection
organization might be appropriate for computing.  We have already had two
very serious automobile recalls due to program bugs (the El Dorado brake
computer and the Mark VII computerized air suspension), and at least two
heart pacemaker problems (one of which resulted in a death), as noted in the
disaster list below -- to go along with this summer's watermelon recall
(pesticides) and Austrian wine recalls (with the antifreeze-component
diethylene glycol being used as a sweetener).

------------------------------

From: Neumann@SRI-CSL <Peter G. Neumann>
Subject: COMPUTER-RELATED INCIDENTS ILLUSTRATING RISKS TO THE PUBLIC

Readers of the ACM SIGSOFT Software Engineering Notes have been alerted in
many past issues to numerous disasters and computer curiosities implying
potential or actual risks to the public.  A summary of events is catalogued
below, and updates earlier versions that I circulated in a few selected
BBOARDS.  Further details can be found in the references cited.  Awareness
of these cases is vital to those involved in the design, implementation, and
operation of computer systems in critical environments, but is of course
not sufficient to prevent new disasters from occurring.  Significantly
better systems, and more aware operation and use, are also required.

       SOME COMPUTER-RELATED DISASTERS AND OTHER EGREGIOUS HORRORS
              Compiled by Peter G. Neumann (21 July 1985)

The following list is drawn largely from back issues of ACM SIGSOFT Software
Engineering Notes [SEN], references to which are cited as (SEN vol no), where 
vol 10 = 1985.  Some incidents are well documented, others need further study.
Please send corrections/additions+refs to PGNeumann, SRI International, EL301, 
Menlo Park CA 94025, phone 415-859-2375, Neumann@SRI-CSL.ARPA.

Legend: ! = Loss of Life; * = Potentially Life-Critical; 
        $ = Loss of Money/Equipment; S = Security/Privacy/Integrity Flaw

-------------------------- SYSTEM + ENVIRONMENT ------------------------------
!S Arthritis-therapy microwaves set pacemaker to 214 bpm, killed patient (SEN 5 1)
*S Failed heart-shocking devices due to faulty battery packs (SEN 10 3)
*S Anti-theft device reset pacemaker; FDA investigating the problem (SEN 10 2)
*$ Three Mile Island PA, now recognized as very close to meltdown (SEN 4 2)
*$ Crystal River FL reactor (Feb 1980) (Science 207 3/28/80 1445-48, SEN 10 3)
** SAC/NORAD: 50 false alerts in 1979 (SEN 5 3), incl. a simulated attack whose
    outputs accidentally triggered a live scramble [9 Nov 1979] (SEN 5 3);
** BMEWS at Thule detected rising moon as incoming missiles [5 Oct 1960] 
    (SEN 8 3).  See E.C. Berkeley, The Computer Revolution, pp. 175-177, 1962.
** Returning space junk detected as missiles.  Daniel Ford, The Button, p. 85
** WWMCCS false alarms triggered scrams [3-6 Jun 1980] (SEN 5 3, Ford pp 78-84)
** DSP East satellite sensors overloaded by Siberian gas-field fire (Ford p 62)
** 747SP (China Air.) autopilot tried to hold at 41,000 ft after engine failed,
    other engines died in stall, plane lost 32,000 feet [19 Feb 85] (SEN 10 2)
** 767 (UA 310 to Denver) four minutes without engines [August 1983] (SEN 8 5)
*  F18 missile thrust while clamped, plane lost 20,000 feet (SEN 8 5)	
*  Mercury astronauts forced into manual reentry (SEN 8 3)
*  Cosmic rays halve shuttle Challenger comm for 14 hours [8 Oct 84] (SEN 10 1)
*  Frigate George Philip fired missile in opposite direction (SEN 8 5)
$S Debit card copying easy despite encryption (DC Metro, SF BART, etc.)
$S Microwave phone calls easily interceptable; portable phones spoofable

------------------------------- SOFTWARE ------------------------------------	
*$ Mariner 1: Atlas booster launch failure DO 100 I=1.10 (not 1,10) (SEN 8 5)
*$ Mariner 18: aborted due to missing NOT in program (SEN 5 2)
*$ F18: plane crashed due to missing exception condition, pilot OK (SEN 6 2)
*$ F14 off aircraft carrier into North Sea; due to software? (SEN 8 3) 
*$ F14 lost to uncontrollable spin, traced to tactical software (SEN 9 5)
*$ El Dorado brake computer bug caused recall of all El Dorados (SEN 4 4)
$$ Viking had a misaligned antenna due to a faulty code patch (SEN 9 5)
$$ First Space Shuttle backup launch-computer synch problem (SEN 6 5 [Garman])
*  Second Space Shuttle operational simulation: tight loop upon cancellation of
    an attempted abort; required manual override (SEN 7 1)
*  Second Shuttle simulation: bug found in jettisoning an SRB (SEN 8 3)
*  Gemini V 100mi landing err, prog ignored orbital motion around sun (SEN 9 1)
*  F16 simulation: plane flipped over whenever it crossed equator (SEN 5 2)
*  F16 simulation: upside-down F16 deadlock over left vs. right roll (SEN 9 5)
*  Nuclear reactor design: bug in Shock II model/program (SEN 4 2)
*  Reactor overheating, low-oil indicator; two-fault coincidence (SEN 8 5)
*  SF BART train doors sometimes open on long legs between stations (SEN 8 5)
*  IRS reprogramming cost USA interest on at least 1,150,000 refunds (SEN 10 3)
*S Numerous system intrusions and penetrations; implanted Trojan horses; 414s;
    intrusions to TRW Credit Information Service, British Telecom's Prestel,
    Santa Clara prison data system (inmate altered release date) (SEN 10 1).
    Computerized time-bomb inserted by programmer (for extortion?) (SEN 10 3)
*$ Colorado River flooding in 1983, due to faulty weather data and/or faulty
    model; too much water was kept dammed prior to spring thaws.
 S Chernenko at MOSKVAX: network mail hoax [1 April 1984] (SEN 9 4)
 S VMS tape backup SW trashed disc directories dumped in image mode (SEN 8 5)
$  1979 AT&T program bug downed phone service to Greece for months (SEN 10 3)
$  Demo NatComm thank-you mailing mistitled supporters [NY Times, 16 Dec 1984]
$  Program bug permitted auto-teller overdrafts in Washington State (SEN 10 3)
 - Quebec election prediction gave loser big win [1981] (SEN 10 2, p. 25-26)
 - Other election problems including mid-stream corrections (HW/SW) (SEN 10 3)
 - SW vendor rigs elections? (David Burnham, NY Times front page, 29 July 1985)
 - Alaskan DMV program bug jails driver [Computerworld 15 Apr 85] (SEN 10 3)
 - Vancouver Stock Index lost 574 points over 22 months -- roundoff (SEN 9 1)
    [see the truncation sketch following this list]
 - Gobbling of legitimate automatic teller cards (SEN 9 2)
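
The Vancouver entry deserves a closer look: the index was reportedly
recomputed after every trade and truncated -- not rounded -- to three
decimal places, so each update silently discarded a small positive
amount.  A minimal sketch of the mechanism (modern Python; the update
counts and trade sizes are invented for illustration):

  import random

  def truncate3(x):
      # Keep three decimal places by discarding the rest (no rounding).
      return int(x * 1000) / 1000.0

  random.seed(1)
  exact = truncated = 1000.0
  for day in range(22 * 21):           # roughly 22 months of trading days
      for update in range(1000):       # assumed recomputations per day
          move = random.uniform(-0.005, 0.005)   # tiny, zero-mean moves
          exact += move
          truncated = truncate3(truncated + move)
  print("exact     %8.3f" % exact)     # stays near 1000
  print("truncated %8.3f" % truncated) # drifts hundreds of points lower

Each truncation discards up to a thousandth of a point, which at a
thousand updates a day compounds to roughly half a point per trading
day -- the same order as the losses reported for the real index.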

-------------------------- HARDWARE/SOFTWARE ---------------------------------
!  Michigan man killed by robotic die-casting machinery (SEN 10 2)
!  Japanese mechanic killed by malfunctioning Kawasaki robot (SEN 10 1, 10 3)
    [Electronic Engineering Times, 21 December 1981]
!  Chinese computer builder electrocuted by his smart computer after he built 
    a newer one. "Jealous Computer Zaps its Creator"!  (SEN 10 1)
*  FAA Air Traffic Control: many computer system outages (e.g., SEN 5 3)
*  ARPANET ground to a complete halt [27 Oct 1980] (SEN 6 1 [Rosen])
*$ Ford Mark VII wiring fires: flaw in computerized air suspension (SEN 10 3)
$S Harrah's $1.7 Million payoff scam -- Trojan horse chip (SEN 8 5) 
$  Great Northeast power blackout after a relay threshold set too low was exceeded
$  Power blackout of 10 Western states, propagated error [2 Oct 1984] (SEN 9 5)
 - SF Muni Metro: Ghost Train reappeared, forcing manual operation (SEN 8 3)
*$ Computer-controlled turntable for huge set ground "Grind" to halt (SEN 10 2)
*$ 8080 control system dropped bits and boulders from 80 ft conveyor (SEN 10 2)
 S 1984 Rose Bowl hoax, scoreboard takeover ("Cal Tech vs. MIT") (SEN 9 2)

-------- COMPUTER AS CATALYST, HUMAN FRAILTIES, OR UNKNOWN CAUSES -------------
!!$ Korean Airlines 007 shot down [1 Sept 1983], killing 269; autopilot left on
    HDG 246 rather than INERTIAL NAV? (NYReview 25 Apr 85, SEN 9 1, SEN 10 3)
!!$ Air New Zealand crashed into mountain [28 Nov 1979]; computer course data
    error had been detected and fixed, but pilots not informed (SEN 6 3 & 6 5)
!  Woman killed daughter, tried to kill son and self after computer error led 
    to a false report of their all having an incurable disease (SEN 10 3)
*  Unarmed Soviet missile crashed in Finland.  Wrong flight path? (SEN 10 2)
*$ South Pacific Airlines, 200 aboard, 500 mi off course near USSR [6 Oct 1984]
*S San Francisco Public Defender's database accessible to police (SEN 10 2)
*  Various cases of false arrest due to computer database use (SEN 10 3)
$  A: $500,000 transaction became $500,000,000; B: $200,000,000 lost (SEN 10 3)
*  FAA Air Traffic Control: many near-misses not reported (SEN 10 3)

---------------- ILLUSTRATIVE OF POTENTIAL FUTURE PROBLEMS -------------------
*S Many known/past security flaws in computer operating systems and application
    programs.  Discovery of new flaws running way ahead of their elimination.  
*  Expert systems in critical environments: unpredictable if (unknowingly)
    outside their range of competence, e.g., incomplete rule base (cf. Star Wars)
$S Embezzlements, e.g., Muhammed Ali swindle [$23.2 Million], Security Pacific 
    [$10.2 Million], City National Beverly Hills CA [$1.1 Million, 23 Mar 1979]
    [These were only marginally computer-related, but suggestive.  Others
    are known, but not publicly acknowledged.]

---------------------- REFUTATION OF EARLIER REPORT --------------------------
* "Exocet missile not on expected-missile list, detected as friend" (SEN 8 3)
   [see Sheffield sinking, reported in New Scientist 97, p. 353, 2/10/83]; 
   Officially denied by British Minister of Defence Peter Blaker
   [New Scientist, vol 97, page 502, 24 Feb 83].  Rather, sinking abetted by
   defensive equipment being turned off to reduce communication interference?

[See also anecdotes from ACM Symposium on Operating Systems Principles,
SOSP 7 (SEN 5 1) and follow-on (SEN 7 1).]

------------------------------

                   ** DEFENSE COMPUTING SYSTEMS **

Subject: STRATEGIC COMPUTING INITIATIVE (SCI)

The Strategic Computing Initiative has received considerable discussion in
the Communications of the ACM lately, including a letter by Severo Ornstein,
Brian Smith and Lucy Suchman (ACM Forum February 1985), the response to them
by Robert S. Cooper (ACM Forum, March 1985), and the three responses to
Cooper in the August 1985 issue, as well as an article by Mark Stefik in the
July 1985 Communications.  A considerable variety of opinion is represented,
and the exchange is well worth reading.  PGN

------------------------------

Subject: STRATEGIC DEFENSE INITIATIVE (SDI)

The Strategic Defense Initiative (popularly known as Star Wars) is
considering the feasibility of developing what is probably the most complex
and most critical system ever contemplated.  It is highly appropriate to
consider the computer system aspects of that effort here.  Some of the
potential controversy is illustrated by the recent statements of David
Parnas, who presents a strongly skeptical view.  (See below.)  I hope that
we will be able to have a constructive dialogue here among representatives
of the different viewpoints, and firmly believe that it is vital to the
survival of our world that the computer-technical issues be thoroughly
discussed.  As in many other cases (e.g., space technology) there are many
potential research advances that can spin off approaches to other problems.
As indicated by my disaster list above, the problems of developing software
for critical environments are very pervasive -- and not just limited to
strategic defense.  But what we learn in discussing the feasibility of the
strategic defense initiative could have great impact on the uses that
computers find in other critical environments.  In general, we may find that
the risks are far too high in many of the critical computing environments on
which we depend.  We may also be led to techniques for developing better
systems that can adequately satisfy all of their critical requirements --
and continue to do so.  But perhaps most important of all is the increased
awareness that can come from intelligent discussion.  Thus, an open forum on
this subject is very important.  PGN

------------------------------

Plucked from SOFT-ENG@MIT-XX:
Date: 12 Jul 85 13:56:29 EDT (Fri)
From: Ed Frankenberry <ezf@bbnccv>
Subject: News Item: David Parnas Resigns from SDI Panel

New York Times, 7/12/85, page A6:

SCIENTIST QUITS ANTIMISSILE PANEL, SAYING TASK IS IMPOSSIBLE

By Charles Mohr
special to the New York Times

Washington, July 11 - A computer scientist has resigned from an advisory panel
on antimissile defense, asserting that it will never be possible to program a
vast complex of battle management computers reliably or to assume they will
work when confronted with a salvo of nuclear missiles.

  The scientist, David L. Parnas, a professor at the University of Victoria
in Victoria, British Columbia, who is a consultant to the Office of Naval
Research in Washington, was one of nine scientists asked by the Strategic
Defense Initiative Office to serve at $1,000 a day on the "panel on computing
in support of battle management".

  Professor Parnas, an American citizen with secret military clearances, said
in a letter of resignation and 17 accompanying memorandums that it would never
be possible to test realistically the large array of computers that would link
and control a system of sensors, antimissile weapons, guidance and aiming
devices, and battle management stations.

  Nor, he protested, would it be possible to follow orthodox computer
program-writing practices in which errors and "bugs" are detected and
eliminated in prolonged everyday use. ...

  "I believe," Professor Parnas said, "that it is our duty, as scientists and
engineers, to reply that we have no technological magic that will accomplish
that.  The President and the public should know that."  ...

  In his memorandums, the professor put forth detailed explanations of his
doubts.  He argued that large-scale programs like the one envisioned for the
program only become reliable through modifications based on realistic use.

  He dismissed as unrealistic the idea that program-writing computers,
artificial intelligence or mathematical simulation could solve the problem.

  Some other scientists have recently expressed public doubts that large-scale
programs free of fatal flaws can be written.  Herbert Lin, a research fellow
at the Massachusetts Institute of Technology, said this month that the basic
lesson was that "no program works right the first time."

  Professor Parnas wrote that he was sure other experts would disagree with
him.  But he said many regard the program as a "pot of gold" for research
funds or an interesting challenge.

[The above article is not altogether accurate, but gives a flavor of the
Parnas position.  The arguments for and against feasibility of success need
detailed and patient discussion, and thus I do not try to expand upon either
pro or con here at this time.  However, it is hoped that a serious
discussion can unfold on this subject.  (SOFT-ENG@MIT-XX vol 1 no 29
provides some further material on-line.)  See the following message as well.
PGN]

---------------

Date: Mon 22 Jul 1985 16:09:48 EST
From: David Weiss <weiss%wang-inst.csnet@csnet-relay.arpa>
Subject: Obtaining Parnas' SDI critique

Those who are interested in obtaining copies of David Parnas' technical
critique of SDI may do so by writing to the following address:

  Dr. David L. Parnas, Department of Computer Science, University of
  Victoria, P.O. Box 1700, Victoria B.C.  V8W 2Y2  CANADA

------------------------------

Date: Mon, 29 Jul 85 16:53:26 EDT
From: Herb Lin <LIN@MIT-MC.ARPA>
Subject:  BMD paper

The final version of my BMD paper is available now.  

            "Software for Ballistic Missile Defense"

Herb Lin, Center for International Studies, 292 Main Street, E38-616,
MIT, Cambridge, MA 02142, phone 617-253-8076.  Cost including postage =
$4.50

         Software for Ballistic Missile Defense, June 1985

                               Abstract

  A battle management system for comprehensive ballistic missile defense
  must perform with near perfection and extraordinary reliability.  It
  will be complex to an unprecedented degree, untestable in a realistic
  environment, and provide minimal time for human intervention.  The
  feasibility of designing and developing such a system (requiring upwards
  of ten million lines of code) is examined in light of the scale of the
  project, the difficulty of testing the system in order to remove errors,
  the management effort required, and the interaction of hardware and
  software difficulties.  The conclusion is that software considerations
  alone would make the feasibility of a "fully reliable" comprehensive
  defense against ballistic missiles questionable.

IMPORTANT NOTE: this version supersedes a widely circulated but earlier
draft entitled "Military Software and BMD: An Insoluble Problem?" dated
February 1985.

------------------------------

From: Peter Denning <pjd@RIACS.ARPA>
Date: Tuesday 30 Jul 85 17:17:56 pdt
Subject: Minireview of Freeman Dyson's Weapons and Hope

I've just finished reading WEAPONS AND HOPE, by Freeman Dyson, published
recently.  It is a remarkable book analyzing the tools, people, and concepts
used for national defense.  The goal is to set forth an agenda for
discussing the nuclear weapons problem.  The most extraordinary aspect of
the book is that Dyson fairly and accurately represents the many points of
view with sympathy and empathy for each.  He thus transmits the impression
that it is possible for everyone to enter into and participate intelligently
in this important debate, and he tells us what fundamental questions we
must address.  This book significantly altered the way in which I
personally look at the problem.  I recommend that everyone read it and that
we all use it as a point of departure for our own discussions.

Although Dyson leaves no doubt on his personal views, he presents his very
careful arguments and lays out all the reasoning and assumptions where they
can be scrutinized by others.  With respect to the SDI, Dyson argues
(convincingly, I might add) that the greatest risk comes from the
interaction of the SDI system with the existing policies of the US and
Soviet Union -- it may well destabilize that interaction.  His argument is
based on policy considerations and is largely insensitive to the question
whether an SDI system could meet its technical goals.  For other reasons he
analyzes at length, he considers the idea of a space defense system to be a
technical folly.

Most of the arguments I've seen computer scientists make in criticism of the
"star wars" system are technically correct and support the technical-folly
view but may miss the point at a policy level.  (I am thinking of arguments
like

  "Computer scientists ought to oppose this because technically it
  cannot meet its goals at reasonable cost.")  

The point is that to the extent that policy planners perceive the technical
arguments as being largely inessential at the policy level, they will not
take seriously arguments labelled

  "You must take this argument seriously because it is made by computer
   scientists."  

Politicians often argue that it is their job to evaluate the spectrum of
options, make the moral judgments, and put risks in their proper place --
technologists ought to assess the risks, but when it comes to judging
whether those risks are acceptable (a moral or ethical judgment),
technologists have no more expertise than, say, politicians.  So in a
certain important sense, computer scientists have little special expertise
to bring to the debate.  For this reason, I think ACM has taken the right
approach by giving members [and nonmembers! <PGN>] a forum in which to
discuss these matters as individual human beings but without obligating ACM
to take official positions that are easily dismissed by policy planners as
outside ACM's official expertise.

Peter Denning


[In addition, you might want to look at "The Button: The Pentagon Strategic
Command and Control System", Simon and Schuster, 1985, which is also a
remarkable book.  It apparently began as an attempt to examine the
survivability of our existing communications and rapidly broadened into a
consideration of the computer systems as well.  I cite several examples in
the catalog above.  Some of you probably saw excerpts in the New Yorker.  PGN]

------------------------------

Subject:  Responsible Use of Computers
From: horning@decwrl.ARPA (Jim Horning)
Date: 30 Jul 1985 1149-PDT (Tuesday)

  You might want to mention the evening session on "Our Responsibility
  as Computer Professionals" held in conjunction with TAPSOFT in Berlin,
  March 27, 1985.  This was attended by about 400 people.  Organized by R.
  Burstall, C. Floyd, C.B. Jones, H.-J. Kreowski, B. Mahr, J. Thatcher.
  Christiane Floyd wrote a nice position paper for the TAPSOFT session,
  well worth abstracting and providing a reference to.  

Jim Horning

[This position paper is apparently similar to an article by Christiane
Floyd, "The Responsible Use of Computers -- Where Do We Draw the Line?",
that appears in two parts in the Spring and Summer 1985 issues of the CPSR
Newsletter (Computer Professionals for Social Responsibility, P.O. Box 717,
Palo Alto CA 94301).  Perhaps someone can send us the TAPSOFT abstract.  PGN]

------------------------------

From: Neumann@SRI-CSL <Peter G. Neumann>
Subject: HUMAN SAFETY (SOFTWARE SAFETY)

An important area of concern to the RISKS forum is what is often called
Software Safety, or more properly the necessity for software that is safe
for humans.  ("Software Safety" could easily be misconstrued to imply making
the software safe FROM humans, an ability that is called "integrity" in the
security community.)  Nancy Leveson has been doing some excellent work on
that subject, and I hope she will be a contributor to RISKS.  There are also
short letters in the ACM Forum from Peter Fenwick (CACM December 1984) and
David Nelson (CACM March 1985) on this topic, although they touch only the
tip of the iceberg.  I would expect human safety to be a vital topic for
this RISKS forum, and hope that we can also help to stimulate research on
that topic.

------------------------------

Subject: INTERNATIONAL SEMINAR ON COMPUTERS IN CRITICAL ENVIRONMENTS

In keeping with my conviction that we must address a variety of critical
computing requirements in a highly systematic, unified, and rigorous way, I
am involved in the following effort:

  23-25 October 1985, Rome, Italy (organized by Sipe Optimation and T&TSUD,
  sponsored by Banca Nazionale del Lavoro).  Italian and English.  Organizers
  Roberto Liscia (Sipe Optimation, Roma, via Silvio d'Amico 40, ITALIA, phone
  039-6-5476), Eugenio Corti (T&TSUD, 80127 Napoli, via Tasso 428, ITALIA),
  Peter G. Neumann (SRI International, Menlo Park CA 94025).  Speakers include
  Neumann, Bill Riddle, Severo Ornstein (CPSR), Alan Borning (U. Washington),
  Andres Zellweger (FAA), Sandro Bologna (ENEA), Eric Guldentops (SWIFT).  The
  program addresses a broad range of topics (including technical, management,
  social, and economic issues) on the use of computer systems in critical
  environments, where the computer systems must be (e.g.) very reliable,
  fault-tolerant, highly available, secure, and safe for humans.  This 
  symposium represents an effort to provide a unified basis for the 
  development of critical systems.  Software engineering and the role of
  the man-machine interface are addressed in detail.  There will also be
  case studies of air-traffic control systems, defense systems, funds 
  transfer, and nuclear power.  Contact Roberto Liscia (or Mrs. De Vito) at 
  SIPE, or Peter Neumann at SRI for further information.

------------------------------

AFTERWORD

Congratulations to you if you made it through this rather lengthy inaugural
issue.  I hope you find this and subsequent issues provocative, challenging,
enlightening, interesting, and entertaining.  But that depends in part upon
your contributions having those attributes.  Now it is in YOUR hands: your
contributions and suggestions will be welcomed.  PGNeumann <Neumann@SRI-CSL>
-------