
Conference 7.286::digital

Title: The Digital way of working
Moderator: QUARK::LIONEL
Created: Fri Feb 14 1986
Last Modified: Fri Jun 06 1997
Last Successful Update: Fri Jun 06 1997
Number of topics: 5321
Total number of notes: 139771

363.0. "The adverse effects of AI (?)" by EAGLE1::BEST (R D Best, Systems architecture, I/O) Tue Aug 11 1987 22:01

There has been a good deal of discussion surrounding the benefits and 
wonderful things that we hope to accomplish with AI.  But every technology
carries with it some downside effects.  Frequently we are affected in
ways that were largely unanticipated when the technology was introduced.

I'd like to devote this note to speculations on what some of the adverse
effects of the evolution of AI may be.  You're welcome to speculate on
what may happen and what we might do to mitigate or prevent certain
problems from arising.

I'll start the discussion by presenting two sketches of scenarios for bad
situations we could get ourselves into (or maybe we can't ?) and then
presenting plausibility arguments for why they may be real concerns.

I'm very interested in hearing people's arguments on these subjects.
------------------------------------------------------------------------
Situation 1:

  The abandonment of control

In order to retain competitive advantage, businesses and governments are
subjected to continuous pressures to make faster decisions and implement them
ever more quickly.  Since computers are capable of digesting large amounts
of information, the natural tendency will be to assign more and more routine
decision making to machines.

Already our national defense relies critically on computers to evaluate
whether we are under attack and to initiate appropriate reactive measures.
Almost surely, businesses will come to rely on machines to decide upon
and execute business transactions faster than their competitors.

But such transactions and decisions will not have been subject to
extensive human review.  The reason for this is that the 'decision support'
calculations may well prove to be too complex for a human to understand
in toto in a time scale necessary to execute the decision and gain the
advantage.

Thus, by degrees we might begin to abdicate responsibility for business and
political decisions to algorithms.  By hiding policy decisions in
algorithms, we make implicit many decision processes that used to be subject to
explicit political or business debate.  Since such a debate process is arguably
central to a democracy, are we then not subtly abandoning control ?

How can we avoid such implicit abdication ?  Maybe we don't want to or
shouldn't ?
------------------------------------------------------------------------

Situation 2:

  The devaluation of professionals (perhaps humans in general ?)

As technology (computer aided engineering | decision support systems |
expert systems) advances, will we see a steady devaluation of
professional skills and a resultant rise in layoffs and unemployment
as expert systems supplant human professionals ?

There has been much talk about how automation is replacing high paying
manufacturing jobs with low paying service and retail jobs, but there has
been little discussion of the effect of AI on the professional
sector.  Ironically, it seems likely (to me) that the professional sector will
be affected earlier and perhaps more drastically.

It seems to be a clear trend that when a machine can do a job more efficiently
(or as efficiently at the same or lower cost) than a human, the human
will be replaced.

What's to prevent the 'post-industrial era' from becoming the 
'post-professional era' ?
------------------------------------------------------------------------

I (of course) have my own opinions and some supporting arguments, but first
I'd like to hear some reactions from the floor.
363.1. by WHYVAX::HETRICK (Brian Hetrick) Wed Aug 12 1987 14:32, 61 lines
          I believe that the most noticeable effect of the widespread
     adoption of AI techniques in general programming will be the loss of
     what little software reliability there is, and that this loss will
     come in species-critical situations.

          In general, a mere human being cannot predict what an AI system
     will do with any particular input:  the AI system, particularly one
     using expert system or neural network technology, is simply too
     complex to understand or to emulate by hand.  [This property of AI
     systems is so well known that some AI authors propose it as an
     operative definition of AI systems.]  In an expert system, one adds
     rules as the need for them becomes known;  the primary mechanism for
     discovering that a rule is needed is the failure of the system to perform
     as one desires.  In a neural network system, one trains the network
     with a set of test vectors, and thereafter it does whatever it does --
     and one hopes it does the 'right' thing with data not represented by a
     test vector.
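
          To make the first point concrete, here is a minimal sketch (the
     rules and fact names are invented purely for illustration):  a rule
     base covers only the cases its author has already seen fail, and an
     input outside those cases simply falls through with no conclusion.

# Minimal sketch: rules exist only for the cases the author has already
# seen; an unforeseen combination of facts yields no conclusion at all.

RULES = [
    # (required facts, conclusion) -- contents invented for illustration
    ({"temperature_high", "coolant_low"}, "shut_down"),
    ({"temperature_high", "coolant_ok"},  "increase_coolant_flow"),
]

def diagnose(facts):
    """Return the first conclusion whose conditions are met, else None."""
    for conditions, conclusion in RULES:
        if conditions <= facts:
            return conclusion
    return None   # the unanticipated case: the system has no opinion

print(diagnose({"temperature_high", "coolant_low"}))     # shut_down
print(diagnose({"temperature_high", "sensor_failed"}))   # None -- no rule yet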

          In a very real sense, all expert systems are prototypes:  one
     simply has collected rules until it seems to work.  In a very
     frightening sense, all neural networks are magic:  it may have worked
     in all cases before this, but the technology gives no assurance that
     it will work in the *next* case.

          The main advantage of procedural algorithms is that programs in
     them can be proven correct.  A proof is fundamentally a series of
     statements that convinces someone else that a claim is correct.  The
     advantage of provability is that one can then convince oneself that a
     claim ("this program works") is correct;  this is not a proof, but is
     made possible only because the subject matter is subject to proof.
     Given that the major advances in effective AI have relied upon non-
     procedural methods that nobody really understands but which look as
     if they work, it is currently impossible to prove large AI programs
     correct.  It is therefore impossible to convince oneself that the
     program is correct.
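
          For contrast, a small procedural routine can at least carry the
     shape of its own correctness argument.  A sketch (the assertions below
     are run-time checks standing in for a formal proof, not a proof itself):

def divide(a, b):
    """Integer division by repeated subtraction, for a >= 0 and b > 0.
    Loop invariant: a == b * q + r and r >= 0."""
    assert a >= 0 and b > 0
    q, r = 0, a
    while r >= b:
        assert a == b * q + r and r >= 0   # invariant holds on every pass
        q, r = q + 1, r - b
    assert a == b * q + r and 0 <= r < b   # on exit: exactly the specification
    return q, r

print(divide(17, 5))   # (3, 2)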

	  I would not be so frightened were there some effort underway to
     allow mere humans to understand the mechanisms and predict the actions
     of AI programs.  But the AI community does not see this understanding
     and prediction as a goal worth the effort.

          Major sources of physical danger to humans are being put into the
     care of computers:  humans in underground silos may have the
     responsibility for turning the keys, but the physical launch sequence
     is controlled by computers;  humans in control rooms may have the
     responsibility to watch the dials and push the buttons, but the
     physical reactor mechanisms are controlled by computers;  soon, humans
     in airborne control rooms may have overall responsibility to give
     strategy, but detailed tactics and physical actions of the orbiting
     weapons will be controlled by computers.

          And the operative definition of an AI program is that, by design,
     a human cannot predict what the program will do.

          I see a large risk that the 'post-industrial' era will become,
     not the 'post-professional' era, but the 'post-human' era.  I
     sincerely hope that before humanity manages to kill itself off by
     trusting AI too much, it stops;  and that it correctly identifies AI,
     rather than computers per se, as the technology to avoid.

				  Brian Hetrick
363.2. "Depends" by VIDEO::TEBAY (Natural phenomena invented to order) Wed Aug 12 1987 15:19, 14 lines
    The key to the use of AI was expressed in .1: rules are only
    added as the need becomes known, through a failure of the system.
    When a system failure can be fatal, that is a problem.  I
    for one would not trust business/political decisions to AI.
    I barely trust them to humans.  Complex ordering tasks that
    can be structured, yes -- AI can do those well.  But if the
    outcome of getting it wrong is fatal, no.
    
    The post-professional idea is one that has already been explored
    some in science fiction.  One of the most interesting explorations
    of this was Le Guin's "Always Coming Home".  I think that some of
    this post-professional shift is arising now, but whether it is due to
    computers or to a cycle in the evolution of a society I am not sure.
    
363.3. "gut reaction: a story to illustrate" by MUGSY::GLANTZ (Mike) Wed Aug 12 1987 16:05, 82 lines
  I have a hard time crystallizing thoughts into anything coherent on
  topics like this. Instead, I'll offer the following sketch for a
  science fiction novel. It may convey some emotion, if not much
  rational opinion.
  
  Somewhere in the heart of the Soviet Union's Asian territory, or maybe
  it's in Australia, there is a small society. It probably consists of
  several "states" or villages. They have had, up until now, no contact
  with our more complex, technologically advanced civilization.
  Nevertheless, they know that they lead a complicated existence - there
  are traditions and laws to be observed, personal tragedies, major
  scandals, and, in general, all of the things which make life
  interesting. It seems as though every day life gets more complicated.
  They hardly suspect that, at this very moment, an enormous, almost
  infinitely more complicated world is seething around them.
  
  Eventually, the Authorities - representatives of the government that
  *really* rules this land - arrive to bring order and income tax to
  this tiny, overlooked society. Of course, they also introduce high
  technology and a completely new set of social values. The people of
  the small society, who previously couldn't imagine that life could get
  more complicated (and who initially rebel furiously, imagining that
  someday they would be able to return to their familiar ways),
  gradually come to be a part of the larger civilization. They
  reluctantly join the modern world, of which they had for so long been
  ignorant - blissful ignorance mainly due to the modern world not
  having gotten around to imposing itself on this tiny outback until
  things got so big that even tiny, far away places were worth looking
  into.
  
  Now the reader of course knows what is coming. The tiny society is
  actually the planet Earth, and the big complicated modern world is
  actually an enormous civilization which has existed for billions of
  years in the Milky Way Galaxy. Being that Earth is pretty far out on
  one of the spiral arms, and orbiting a boring star of no obviously
  special value, it just took a long time before the galactic industry
  expanded to the point where it became economically practical to do
  business so far out here. But here they are, finally, bringing a level
  of complexity to life which is unimaginable (the words hardly convey
  the scale of things, here). Life will never be the same (to continue
  the understatement).
  
  But, being adaptable little blobs of stuff, mankind adapts. Not too
  badly, actually. We find, though, that mankind is at a significant
  disadvantage among the various creatures which inhabit the galaxy.
  This is mainly because mankind's basic design is randomly evolved by
  the process of natural selection. Just imagine! A functioning life
  form which has not been deliberately engineered, but, rather, has been
  allowed to bounce around from one random mutation to the next! It
  certainly is fragile, isn't it?! Oh, what these poor beings have
  missed - billions of years of well-planned and well-executed
  technology, where solar-system-sized machines (at this point, there's
  no difference between a machine and a life form) are specified to the
  level of individual subatomic particles and photons. They have quite a
  bit of catching up to do. The story continues ...
  
  I leave the story line, now, to ask some meaningless questions on some
  specific issues. First, there's the subject of genetic engineering and
  chromosome manipulation. The technology is just arriving. What will we
  do with it? We also, recently, have heard of this book, by some guy at
  MIT, entitled "Engines of Creation", or something like that, which
  talks about designing and building things from the level of the atom
  or molecule. This technology is coming relatively soon. What will we
  do with it? And then, of course, there's AI, coming soon to an
  application near you. What will we do with it? What will all of these
  developments do to us? I have absolutely no idea, but I bet that none
  of it will be planned very well. It will mostly just happen, for better
  or for worse. The reason is that there are fairly few people who would
  honestly want to do real planning - most people tend to worry about
  things a bit closer to home. And worse, even among those who believe
  they want to plan, very few actually have the skills and information
  which would make this possible. So the adaptability of mankind will be
  put to much tougher tests than nature has dished out so far. Of
  course, this has always been true, as we evolved from "animals" to
  tool-users, to tool-makers, etc. It should be interesting.
  
  Finally, we've seen films like Star Wars, Close Encounters of the
  Third Kind, and ET. When will the galactic economy find it
  economically practical to start doing business this far out, way out
  in this backwater, which is currently inhabited by a small society
  which considers its modern life to be incredibly complicated and
  sophisticated?
363.4. "the ultimate technology trap" by REGENT::MERRILL (Glyph, and the world glyphs with u,...) Wed Aug 12 1987 17:18, 25 lines
    Technology allows humans to overreach themselves resulting in what
    are known as "technology traps."  The elevator, for example, allows
    us to build buildings that are taller than the firemen's ladder;
    the electric subway allows long underground tunnels from which
    rescue by alternate means is nearly impossible. But this is what
    the adaptable man is good at: playing leapfrog with technology -
    creating rescues from successive technology - applying failsafe
     to nuclear weapons - finding treatments for penicillin shock ...
    
     The ultimate technology trap was summed up in a TWO PAGE sci-fi
    story: to solve mankind's problems all the computers in the world
    were wired together. After a century of work all the factories and
    services were DECnetted together and now any question could be answered
    by The Computer. At the final interconnect there was a celebration
    and all the smartest people in the world were gathered together
     to come up with the first previously unanswerable question that would
    be posed to the computer.  The question was
    
    
    "Is there a God?"
    The computer paused - then a bolt of lightning fused shut the main
    switch, and the answer came,
    
    	"There is NOW!"
    
363.5. "re .1" by EAGLE1::BEST (R D Best, Systems architecture, I/O) Wed Aug 12 1987 19:49, 102 lines
re .1:

>          In general, a mere human being cannot predict what an AI system
>     will do with any particular input:  the AI system, particularly one
>     using expert system or neural network technology, is simply too
>     complex to understand or to emulate by hand.

This is true if the expert system dynamically 'learns' (i.e. has a
dynamically modified rule base).  It's probably less so if the rule
base is static.  Of course, I would suspect that one reason that we
might build an expert system is to solve problems so complicated that
we h. sapiens would become hopelessly confused.

My opinion is that we should not produce (at least for sale) any
systems with dynamic rule bases.  This is analogous to forbidding the
practice (in rule based systems) of writing self-modifying code.
There is of course the problem of 'layered rule bases'.
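
To make the static/dynamic distinction concrete, here is a small sketch
(the rules and names are invented for illustration).  A static rule base is
fixed when the system is built and can be reviewed as a whole; a dynamic
one rewrites itself while running, so the behaviour that was reviewed and
the behaviour that is deployed can quietly diverge:

# Static rule base: a fixed tuple, nothing can append to it at run time.
STATIC_RULES = (
    ({"order_large", "credit_ok"},  "approve"),
    ({"order_large", "credit_bad"}, "refer_to_human"),
)

class DynamicRuleBase:
    """Same matcher, but it can 'learn' new rules while running."""
    def __init__(self, rules):
        self.rules = list(rules)

    def conclude(self, facts):
        for conditions, conclusion in self.rules:
            if conditions <= facts:
                return conclusion
        return None

    def learn(self, conditions, conclusion):
        # Self-modification: behaviour after this call was never reviewed.
        self.rules.insert(0, (set(conditions), conclusion))

kb = DynamicRuleBase(STATIC_RULES)
kb.learn({"order_large"}, "approve")                 # learned rule shadows both originals
print(kb.conclude({"order_large", "credit_bad"}))    # approve -- not what was audited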

>          The main advantage of procedural algorithms is that programs in
>     them can be proven correct.  A proof is fundamentally a series of
>     statements that convinces someone else that a claim is correct.  The
>     advantage of provability is that one can then convince oneself that a
>     claim ("this program works") is correct;  this is not a proof, but is
>     made possible only because the subject matter is subject to proof.
>     Given that the major advances in effective AI have relied upon non-
>     procedural methods that nobody really understands but which look as
>     if they work, it is currently impossible to prove large AI programs
>     correct.  It is therefore impossible to convince oneself that the
>     program is correct.

I have some trouble making a distinction between 'procedural algorithms'
and 'expert systems'.  I think expert systems are procedural algorithms.
I don't know how else you would build one.
The difference that I do see is that in what you call 'procedural algorithms'
the flow of control is made explicit; the programmer must specify in
detail each step that the procedure must perform to solve the problem.

In an 'expert system', the flow of control is implicit and determined by
the compiler (shell?) writer.  (I'm thinking of a Prolog style shell).

I think that the concern that you are expressing is that the expert system
application writer (not the shell writer) does not necessarily understand the
mechanics of how his 'program' (i.e. a set of facts and rules) will be executed.
This is a concern for vanilla compilers too, but is more so, I think, for
expert systems where the target structures that are being manipulated and
the algorithms used to manipulate them are almost completely invisible to the
application writer.
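
A tiny illustration of that explicit/implicit split (the facts and names are
invented):  the procedural version spells out the loop and the order of the
tests, while in the second version the application writer supplies only
facts and one rule, and the order of evaluation belongs entirely to the
generic interpreter at the bottom.

# 1. Procedural: the programmer writes out the control flow step by step.
def grandparent_procedural(parent_of, a, c):
    for b in parent_of.get(a, ()):           # explicit loop, explicit order
        if c in parent_of.get(b, ()):
            return True
    return False

# 2. Rule-plus-shell style: one declarative rule, applied by a generic
#    interpreter whose strategy the application writer never sees.
FACTS = {("parent", "ann", "bob"), ("parent", "bob", "cal")}

def rule_grandparent(facts):
    # parent(X, Y) and parent(Y, Z)  =>  grandparent(X, Z)
    return {("grandparent", x, z)
            for (p1, x, y) in facts if p1 == "parent"
            for (p2, y2, z) in facts if p2 == "parent" and y2 == y}

def shell(facts, rules):
    # Keep applying every rule until nothing new can be derived.
    derived, changed = set(facts), True
    while changed:
        new = set().union(*(r(derived) for r in rules)) - derived
        derived |= new
        changed = bool(new)
    return derived

parent_of = {"ann": {"bob"}, "bob": {"cal"}}
print(grandparent_procedural(parent_of, "ann", "cal"))                      # True
print(("grandparent", "ann", "cal") in shell(FACTS, [rule_grandparent]))    # True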

Regarding proofs, I think that (practically speaking) even conventional
programs are difficult to prove (you probably need proving programs, but
then you have to prove the proving programs, etc. :-).  I suspect that
the proper way to prove expert system shells is to try to prove the
embedded theorem prover.  Given that the theorem prover works, true
facts and rules should produce true results.  Now proving the facts and
rules is another story entirely !  Aha, you've just uncovered another
interesting issue ! I'll make it a separate posting.

>	  I would not be so frightened were there some effort underway to
>     allow mere humans to understand the mechanisms and predict the actions
>     of AI programs.  But the AI community does not see this understanding
>     and prediction as a goal worth the effort.

There has been some work in this area.  The area is called 'explanation
facilities' I believe.  They are supposed to allow the operator to
have the embedded theorem prover elaborate a proof so that a human can
understand it.  They are similar to debuggers, but have the additional
capability of showing how the interpreter (i.e. the theorem prover)
operates on the rule base to produce the conclusion.  So you can debug
both the rule base (the program) and the theorem prover (the compiler)
at the same time.
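
A sketch of the idea (the rule contents are invented):  the chainer below
records, for every fact it derives, which rule fired and from which
supporting facts, so a person can ask it to walk back through its 'proof'.

RULES = {
    "R1": ({"fever", "rash"}, "measles_suspected"),
    "R2": ({"measles_suspected", "unvaccinated"}, "isolate_patient"),
}

def chain_with_trace(facts):
    """Forward-chain to a fixed point, remembering how each fact was derived."""
    known, trace = set(facts), {}
    changed = True
    while changed:
        changed = False
        for name, (conditions, conclusion) in RULES.items():
            if conditions <= known and conclusion not in known:
                known.add(conclusion)
                trace[conclusion] = (name, sorted(conditions))
                changed = True
    return known, trace

def explain(fact, trace, depth=0):
    """Print the chain of rules and given facts that led to 'fact'."""
    if fact not in trace:
        print("  " * depth + fact + "  (given)")
        return
    rule, supports = trace[fact]
    print("  " * depth + fact + "  (by " + rule + " from " + ", ".join(supports) + ")")
    for s in supports:
        explain(s, trace, depth + 1)

known, trace = chain_with_trace({"fever", "rash", "unvaccinated"})
explain("isolate_patient", trace)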

I agree with you WHOLEHEARTEDLY that these should be an integral part
of every bona fide expert system.  Of course I'm agreeing with words I
put in your mouth :-).

>          Major sources of physical danger to humans are being put into the
>     care of computers:  humans in underground silos may have the
>     responsibility for turning the keys, but the physical launch sequence
>     is controlled by computers;  humans in control rooms may have the
>     responsibility to watch the dials and push the buttons, but the
>     physical reactor mechanisms are controlled by computers;  soon, humans
>     in airborne control rooms may have overall responsibility to give
>     strategy, but detailed tactics and physical actions of the orbiting
>     weapons will be controlled by computers.

I would gather that you agree with me that this is a concern.
Is there anything that can be done to resist the pressure to let the
computer do it all, or do you feel that we are better off that way ?

>          I see a large risk that the 'post-industrial' era will become,
>    not the 'post-professional' era, but the 'post-human' era.  I
>    sincerely hope that before humanity manages to kill itself off by
>    trusting AI too much, it stops;  and that it correctly identifies AI,
>    rather than computers per se, as the technology to avoid.

My personal concern (ignoring for a moment the possibility that we will
be destroyed by scenario 1) is that there is historical precedent for
a major 'dislocation' (as sociologists like to refer to it) in our
political and social structures as AI comes on line in a big way, and that
this risk appears not to have been recognised.

I'll elaborate this some more in a subsequent reply.
363.6. "SIVA::RISKS_DIGEST" by DELNI::JONG (Steve Jong/NaC Pubs) Thu Aug 13 1987 16:21, 6 lines
    I gave out an incorrect conference pointer in the original .6. 
    My apologies!
    
    Press KP7 to add the conference SIVA::RISKS_DIGEST to your notebook.
    It is a read-only transcript of the RISKS Digest, devoted to discussion
    of risks to the public posed by computers.
363.7. "Future dependence on AI?" by SLDA::OPP Wed Aug 19 1987 17:29, 17 lines
      One possible future problem with AI systems is as follows.  
    What if their application became so widespread that life as
    then known was dependent upon intelligent systems.  Assume for
    a moment that world markets, communications, governments, etc.
    would become chaotic if their AI systems failed.  We may not
    want to arrive at such a point.  
    
      A current example I feel exists in France.  To break their 
    dependence on foreign oil, France established a program of 
    electricity generation based upon nuclear fission.  Now, some-
    thing like 60-70% of their electricity is generated by fission.
    Granted, they have the best run nuclear power plants in the
    world.  However, their dependency upon nuclear fission is so
    high that I doubt France could exist today without it.  
    
    GLO
    
363.8. "Devil's advocate" by STUBBI::D_MONTGOMERY (This line ain't got no words...) Wed Aug 19 1987 19:13, 35 lines
re:    < Note 363.7 by SLDA::OPP >
>      A current example I feel exists in France.  To break their 
>    dependence on foreign oil, France established a program of 
>    electricity generation based upon nuclear fission.  Now, some-
>    thing like 60-70% of their electricity is generated by fission.
>    Granted, they have the best run nuclear power plants in the
>    world.  However, their dependency upon nuclear fission is so
>    high that I doubt France could exist today without it.  
>    
>    GLO
    
    What's so bad about that?  The USA can't exist without oil (right
    now - until we follow France's lead).   Humans can't exist without
    oxygen.
    
    Is there something that makes you think France will encounter a
    need to exist without nuclear fission?
    
    To bring it back to the point:  I agree that dependence on AI for
    world markets, communications, etc... *could* cause problems.  Isn't
    the solution to build an infrastructure that would allow for
    contingencies?  What would really be different from today's situation?
    Think about it:  There are certain things that could, if they `failed',
    cause world markets, communications, governments, etc. to become
    chaotic.  
    
    AI systems should be applied where they can help.  Not as an end-all
    solution to all the world's problems.
    
    Like any technology, AI must be `implemented' carefully.
    
    -dm-
    


363.9. "Ever see the movie 'Colossus: The FORBIN Project'?" by YUPPIE::COLE (I survived B$ST, I think.....) Wed Aug 19 1987 20:33, 0 lines
363.10. "AI today is not AI; AI may not be for I" by SLDA::OPP Thu Aug 20 1987 17:38, 24 lines
      Since I haven't seen the movie mentioned in .9, I don't know
    how to respond to that.
    
      With regard to dependency and the points made in .8, I feel
    that nuclear fission, being an unstable process by nature, is
    a poor choice for electricity generation.  In addition, it is
    currently a very dirty process producing highly toxic elements
    some of which have long half-lives.
    
      My definition of AI is not today's accepted usage which is
    really a "prosituting" of the terms.  A reliance on a knowledge
    based system or expert system is little different than a reli-
    ance on an algorithmic program.  My concern is for the social,
    religious, political, economic, and psychological implications
    of developing an artificial intelligence equivalent to that of
    an average human.  I believe such a system is probably 50 years
    down the road, but that people should begin thinking about 
    where knowledge based systems are heading.  MAN may not want or
    need an intelligent ally of a different species.  Such a decision
    must be made very carefully.  I suggest reading the "Two Faces
    of Tomorrow" by James P. Hogan, who at one time worked for DEC.
    
    GLO
    
363.11. by TELCOM::MCVAY (Pete McVay, VRO Telecom) Thu Aug 27 1987 19:44, 22 lines
    "The FORBIN Project" was an excellent movie about how AI got out
    of hand and began running everything.  It was for the good of
    mankind--but machines had a strange idea about what was good...
    (Incidentally, we deliberately named two early Ed Services VAXen
    "CLOSUS" and "GRDIAN" after this movie.)

    A similar problem was expressed in a "Dr. Who" episode, where a
    robot got out of hand and began killing its masters.  As Dr. Who
    expressed it, "of course they don't suspect a robot.  Would you
    accuse your refrigerator of being a homicidal maniac?"

    In the first case above, everyone knew that the AI machine COLOSSUS was
    out to rule the earth, but no one did much about it.  In the second
    case, things had progressed so far that no one could even recognize a
    malfunctioning appliance.  Both cases, in my opinion, were extreme
    (but well done) examples of behavior that does exist currently.
    There are some people that blindly apply technology and to hell
    with the consequences (the Audi acceleration problem?) and those
    that don't even recognize that a problem exists.  The THRESHER class
    submarines depended entirely on computer-controlled ballasting valves.
    When the THRESHER lost all power in a dive, you know what happened.
    (Subs now have backup manual controls.)
363.12. "P1" by JAWS::DAVIS (Gil Davis) Fri Aug 28 1987 11:01, 9 lines
    Ron Davis had a great book called "The Adolescence of P1".
    P1 was a computer program that could replicate itself across a network
    connection, suck up all the resources in a machine, and then
    let operators see what P1 wanted them to see....a GREAT BOOK!
    
    Wish I had my own copy. I'm not sure if it's in print anymore..
    
    Gil
    
363.13. "PBS strikes again" by KYOA::KOCH (Any relation?...) Fri Aug 28 1987 14:50, 5 lines
>    Ron Davis had a great book called "The Adolescence of P1".

	They made a "WONDERWORKS" story on PBS from this book. It was
very entertaining.

363.14. "Nanotechnology, Molecular computers, and Space" by RDVAX::FORREST Fri Aug 28 1987 22:27, 150 lines
    Hi.  I'm not normally a Notes reader, but a friend pointed this
    file out to me, so here's my two cents worth on the points raised
    to date.
    
    Regarding situation 2 in 363.0, I think you are quite correct. 
    Ultimately, if we do things right, we'll all be "unemployed."  
    Traditionally, improvements in technology have replaced many people
    performing a task with far fewer people plus machines.  There is
    no evidence that information tasks should be excluded from this
    trend.  
    
    Along with improved technology comes greater productivity
    and improved standards of living.  It is possible to think about
    a time when machines will replace most, or all, of the tasks currently
    performed by humans:  food-growing and distribution, manufacture
    of textiles, of household products, and of houses, and the generation,
    dissemination and processing of information (in effect, research
    and development).  Replacing the tasks performed by people with
    tasks performed by machines is NOT the same thing as replacing people.
    Some people will move on to new interests and others will be able
    to concentrate on old ones (games, hobbies, sports).  Undoubtedly,
    some people will just sit around watching re-runs of "Leave it to
    Beaver."  If this sounds like thousands of years into the
    future, read on. . .
    
    In response to 363.3, the technology you described is known as 
    nanotechnology (NT).  NT ". . .is a technology based on assemblers able
    to build systems to complex atomic specifications.  Individual parts
    serving distinct functions may thus range down to sub-nanometer
    size.  Expected products include molecular circuits and nanomachines."
    (Eric Drexler)
    
    Indeed the best reference on this topic is Eric Drexler's "Engines
    of Creation."  Increasing abilities to manipulate molecule are leading
    us toward nanotechnology.  A growing interest in building
    molecular computing devices, making advanced materials, and engineering
    biological structures may lead to significant research efforts in
    molecular technology within the next five years.  Improving abilities
    to model molecular systems, the application of expert systems
    to computer-aided design and manufacture, and continuing reductions
    in computation costs are likely to significantly shorten the time
    between the concept of a product and its manufacture.  
    
    The power of NT will lie with the ability of machines (assemblers)
    to replicate.  A significant research effort will precede these
    devices, but ultimately they will have the ability to build systems
    to complex atomic specifications under programmable control.  Molecular
    computers (25 Mbytes per cubic micron, with computation speeds 100
    times that of a Cray) are already being designed
    to control assemblers.  The assemblers' first program will consist of
    instructions to make copies of themselves, so they will grow
    exponentially in number until they run out of material to make copies
    of themselves.  Then they can be reprogrammed to make other useful
    devices and structures.  The point in time at which this kind of 
    assembler is manufactured is known as the "assembler breakthrough."  
    After the breakthrough, the cost
    of manufacturing an item will be limited only by the cost of materials
    (cheap), the cost of energy, the cost of the software necessary
    to make whatever it is you're making, and the cost of the physical
    space to store it.  Initial studies by Drexler indicate that energy
    requirements will be modest (household current) for many applications.
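    
    A back-of-envelope sketch of the 'exponential until the feedstock runs
    out' claim (every number below is invented purely to show the shape of
    the curve, not a prediction):

# Replicator growth: double each generation until the feedstock is consumed.
FEEDSTOCK_KG = 1000.0          # assumed raw material available (one tonne)
ASSEMBLER_MASS_KG = 1e-15      # assumed mass of a single nanoscale assembler

count, generations = 1, 0
while count * ASSEMBLER_MASS_KG < FEEDSTOCK_KG:
    count *= 2                 # each assembler builds one copy per generation
    generations += 1

print(generations, count)      # about 60 doublings to use up the tonne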
    
    The time frame for this scenario is not known, of course.  But a
    plausible estimate would be 20 years, plus or minus ten, with some
    allowance for advances in related fields.  If you're planning for
    the benefits of this technology (cell repair machines for long
    lifespans, or credit-card-sized equivalents of a million Crays)
    assume 30 years.  If you're planning for the dangers (abuse of the
    technology), assume 10 years.
    
    Now, all of this brings me back to some of the original issues.
    The problem of knowing how a particular program (expert system,
    "AI," or otherwise) will operate under a variety of conditions
    can be addressed with a brute force approach.  Since computers 
    will be small, cheap and plentiful, and reasonably fast, it
    won't be a problem to set several trillion aside to do large numbers
    of simulations to test out how the program will react under many
    conditions.  Although you still will not be able to predict how
    the program will react under EVERY possible circumstance, you will
    have sufficient ability to reduce the failure probability to very
    low levels in a short amount of time, if you've been thorough in
    describing the range of conditions for your simulations.
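    
    A sketch of that brute-force idea (the 'program under test', the safety
    check, and the operating ranges are all invented for illustration):
    sample many inputs from the described range of conditions and measure
    how often the program violates its own safety criterion.

import random

def program_under_test(temperature, pressure):
    # Stand-in for the system being evaluated (deliberately a little lax).
    return "vent" if temperature > 930 or pressure > 5.2 else "hold"

def is_safe(temperature, pressure, action):
    # Stand-in safety criterion: above these limits the system must vent.
    if temperature > 900 or pressure > 5.0:
        return action == "vent"
    return True

TRIALS = 100_000
failures = 0
for _ in range(TRIALS):
    t = random.uniform(200.0, 1000.0)      # assumed range of conditions
    p = random.uniform(0.5, 6.0)
    if not is_safe(t, p, program_under_test(t, p)):
        failures += 1

print("observed failure rate:", failures / TRIALS)
# This bounds the failure rate only over the sampled range of conditions --
# which is exactly the caveat about describing that range thoroughly.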
    
    Now, if I've whetted your appetites to know more about Nanotechnology,
    here's how:
    
    I am a summer intern in CR&A, a doctoral student in materials
    engineering at MIT, and Vice President 
    of the MIT Nanotechnology Study Group.  We have been
    discussing these issues for the past 2-1/2 years at semi-monthly
    meetings at MIT.  The meetings are open, and we already have some
    DEC employees attending regularly.  We meet on the first Tuesday
    and third Thursday of every month at the MIT AI Lab, 545 Technology
    Square, Room 773 (7th floor), at 7:30 pm.  [If you get there and
    the door is locked, just knock.]  The next meeting is Tuesday, 
    September 1st--we don't have a speaker for that one so you can come
    and just kick some of these ideas around.  On Sept 17th Prof. Mark
    Wrighton will speak about chemical transistors, and on Oct. 6th
    Claude Cruz will talk about neural nets.
    
    To learn more, (1) read "Engines of Creation" (I can sell you a
    copy for $14--contact me by net mail at rdvax::forrest), (2) send
    to my mail address (rdvax::forrest) for a copy of the Nanotechnology
    Press Kit--be sure to give me a hardcopy mailing address, (3) come 
    to the semi-monthly meetings.  [Note to Mike Glantz--Here's where
    at least some of the people are who ARE doing the planning.]
    
    Some random, concluding points:
    
    Re 363.0, Situation 1, paragraph 4:  There is some danger of abdicating
    responsibility, but surely we can develop the abilities needed to
    make sure that the programs are continuously adjusted to meet the
    changing desires of a society.  There's a much better chance of
    better response from well-programmed computers than from lawmakers
    under our current system.
    
    Read "The Tomorrow Makers" by Grant Fjermedal!!  An excellent book
    which covers these plus related issues.  Highly readable and fun.
    
    Re:  363.3:  Once we are past the assembler breakthrough point,
    our technology will rapidly take us to the point where our abilities
    are limited only by physical law.  Thus, technology will not continue
    to improve indefinitely, but plateau.  An alien culture, which would
    undoubtedly develop nanotechnology prior to doing any serious space
    travel (outside of their solar system), would have significant
    abilities to manipulate matter:  restructuring solar systems and
    surrounding stars with structures able to trap their energy (Dyson
    spheres).  If such a culture had formed (even as far away as the
    other side of this galaxy) within the past 100,000 years, it would
    be here by now, since we already know that space travel at speeds
    nearing 0.99 times the speed of light is possible.  Since 100,000
    years is small by evolutionary time scales, it is reasonable to assume
    that no such culture presently exists within this galaxy, since
    we haven't seen any evidence of massive restructuring of matter.
    {There is a very small, but finite, probability that they are developing at
    nearly the same rate as us.}  Thus, we are probably the first to
    emerge in the Milky Way.  But I don't think we're the only ones in 
    the universe.
    
    Under the same assumptions, this contradicts Carl Sagan's assertions
    that when two alien civilizations meet for the first time, there
    will be no contest--that one will be vastly superior to the other.
    His argument ignores the fact that there ARE limits to
    what technology can do and that it is possible to approach those
    limits quite rapidly with the machines of nanotechnology:  molecular
    computers able to do the equivalent of millions of years of research
    (by today's standards) in one year.
                                                      
    David Forrest
    
363.15. "Look to the history of 'Labor-saving'" by MINAR::BISHOP Mon Aug 31 1987 22:28, 28 lines
    Technological unemployment is an old fear, but it never turns out
    as badly as the more excitable expected.
    
    At one time 90% (or more) of the US population was in the
    agriculture industry.  Now 3% produce food for the rest (and
    enough more to export).  Are 87% of us out of work?  No.
    
    Now, it's true that if your skill is replaced by a machine,
    you might be unable to work at that job (thus spinners and
    weavers lost out to machines).  But if your skills are more
    general, and if there are other jobs, then there isn't much
    more than inconvenience.
    
    Business' reaction to computers was first to replace skilled
    labor--then the displaced labor is used to upgrade services
    (thus computers mean you can have a daily Profit-and-Loss
    statement rather than one a year, you can have inventory
    overviews correct to seconds rather than weeks).
    
    People's reaction to labor-displacing machines was to upgrade
    service: when water is pumped by a machine rather than by hand,
    people use more; when washing is done by machine, they wash their
    clothes more often (remember celluloid collars?  They were designed
    to allow you to wear the same shirt for a week or more).
    
    So don't worry so much.
    
    					-John Bishop
363.16. "a (much delayed) reply" by EAGLE1::BEST (R D Best, sys arch, I/O) Mon Jan 11 1988 16:52, 129 lines
re .15 and .0's situation 2:

>    Technological unemployment is an old fear, but it never turns out
>    as badly as the more excitable expected.
    
>    At one time 90% (or more) of the US population was in the
>    agriculture industry.  Now 3% produce food for the rest (and
>    enough more to export).  Are 87% of us out of work?  No.

See references (read forward).

>    Now, it's true that if your skill is replaced by a machine,
>    you might be unable to work at that job (thus spinners and
>    weavers lost out to machines).  But if your skills are more
>    general, and if there are other jobs, then there isn't much
>    more than inconvenience.

What are these other jobs ?

.15>    Business' reaction to computers was first to replace skilled
>    labor--then the displaced labor is used to upgrade services
>    (thus computers mean you can have a daily Profit-and-Loss
>    statement rather than one a year, you can have inventory
>    overviews correct to seconds rather than weeks).

>    People's reaction to labor-displacing machines was to upgrade
>    service: when water is pumped by a machine rather than by hand,
>    people use more; when washing is done by machine, they wash their
>    clothes more often (remember celluloid collars?  They were designed
>    to allow you to wear the same shirt for a week or more).

I hope you're right.  But technological unemployment is unlikely to
be a completely painless process.  This country has no coherent retraining
policy for displaced workers.  Robert Reich in his book 'The Next
American Frontier' notes that several European countries do have national
policies for retraining that seem to work.

Also, I think it's not clear that the job a retrained worker gets will
be as 'good' (choose a criterion) as the one he/she lost.  For example,
it would be interesting to examine the profile of the jobs that have been
created over the last five years of the U.S. economic expansion to answer
questions like which categories had the largest gains, and what their long
term advancement and earnings potential is relative to the jobs that showed
the greatest losses.

I'm very uneasy about the validity of statements like 'when computers
get that advanced, we can all retire on full salary and grow gardens
all day'.  Such statements (I think) fly in the face of economic history.
Businesses don't retire unneeded employees, they furlough or fire them.
What makes you think that the situation will change in the future ?

Some references:

Authors of an article in the April '87 issue of M.I.T.'s Technology
Review argue that direct comparisons between the agricultural transition and
the predicted post-industrial revolution are somewhat misleading.  The
crux of the argument involves a concept called 'linkage'.  Linkage is
the dependency of allied services and business on primary industry.

The authors point out that, while we did indeed undergo a transition
from a primarily agrarian economy to an industrial one, the
agricultural infrastructure did not disappear; it simply became more
efficient.  They claim we did not experience unemployment because many
agricultural jobs shifted into linked sectors.

They then argue that conversion to a service economy is not similar
to industrialisation and would result in the (possibly irreversible)
destruction of our manufacturing capability.

They also claim that the much heralded conversion of an industry
centred economy to a service centred economy is NOT the result of
increases in our efficiency, but rather the result of increases in
our competitors' manufacturing efficiency.  This drives our primary
industries offshore.  Ultimately, they predict that the high
valued linked sectors (e.g. high technology) will FOLLOW manufacturing
and along with them the prosperity that Americans are accustomed to.

It's a very interesting article and defends a position different from what's in
vogue in the popular press now.

For some arguments in SUPPORT of the conversion to a service economy, a
recent Scientific American (circa Oct-Dec of '87) has an article on
why we should wholeheartedly convert to a service economy.  Some of the
arguments in this article also seemed reasonable, so my jury is still out.

------------------------------------------------------------------------
I believe that there are some trends that will tend to accelerate the move to
professional (engineering specifically) displacement.  Paradoxically, one of
these is a claimed shortage of engineering manpower (I don't want to argue about
whether such a predicted shortfall is real or not).  If employers foresee
shortfalls in employees, they will replace people with machines wherever
possible.  The complexity of the tasks assigned to people should continue to
rise for some time because these will be the tasks that resist automation.  On
the other hand, there will be a very high payback for automating such tasks
when the more routine ones are already automated, so there will be many people
working on cracking them.  Therefore, in the long term, I would expect
the professional population needs of individual firms to drop.

One could mount several counterarguments against the preceding; here are
just a few:
(1) The number of competing firms will increase, absorbing the outplaced
(or never initially hired) employees.
(2) Companies will attack problems of ever-increasing complexity,
necessitating more professionally trained people, not fewer.
(3) More and more professionals will move into the 'flexible' employment
pool (i.e. temporary engineering services, consulting, etc.)
(4) Engineering design effort will be dragged away from automation and
towards solving immediate threats to human survival (e.g. energy needs,
toxic waste disposal, etc. )

Anybody care to pursue these (or other) counterarguments ?

I think we are agreed that as AI advances, it will gradually encroach on
some professional jobs (do we agree on this? maybe we don't).  My argument
for this is that companies can't stand still (and survive).  They will
always move ahead with incorporating new cost-effective technologies that
increase their productivity; if they don't, their competitors will and they
will lose business.

As computing technology advances, more and more of the routine functions of
manufacturing, design, and support are being automated (in an attempt to
reduce overhead).  Gradually, more of the cost of running the business will
therefore shift into personnel related costs.  Eventually, (this is
speculation) fewer people will be needed to run the business.

I'd like to hear opinions on which types of jobs are more or less susceptible
to automation and what types of replacement jobs are likely to arise.

Comments ?
363.17. "I don't think it's a real concern" by STAR::HEERMANCE (Martin, Bugs 5 - Martin 0) Mon Jan 11 1988 19:23, 21 lines
    Re. .16
    >I'm very uneasy about the validity of statements like 'when computers
    >get that advanced, we can all retire on full salary and grow gardens
    >all day'.  Such statements (I think) fly in the face of economic history.
    >Businesses don't retire unneeded employees, they furlough or fire them.
    >What makes you think that the situation will change in the future ?
    
    Although it is unlikely that corporations would simply allow people
    to retire with full pay, somebody will have to buy the product that
    businesses produce.  If everybody is fired or given menial low-paying
    jobs, then the economy would suffer greatly.  I think that most heads
    of corporations are not stupid and would realize this.
    
    Also I'm skeptical that AI can replace people in the production
    side of the economy.  The current cutting edge of AI is the expert
    system, which is more of an advisor than an employee that could
    design and build anything.  Also, expert systems are simple rule-
    based deduction systems, which makes them only slightly more advanced
    than the average troubleshooting chart.  No real knowledge is
    generated by the program.
363.18. "yet more" by EAGLE1::BEST (R D Best, sys arch, I/O) Tue Jan 12 1988 20:57, 75 lines
re .17:
    
>    Although it is unlikely that corporations would simply allow people
>    to retire with full pay, somebody will have to buy the product that
>    businesses produce.

>    If everybody is fired or given menial low-paying
>    jobs, then the economy would suffer greatly.  I think that most heads
>    of corporations are not stupid and would realize this.

This argument has merit.  But there is a problem (I think) with assuming
that business managers will act on this argument.

I think the 'principle of the commons' in a slightly modified form
applies here.  Although an individual manager may realise that, if everybody
acted the way he did, serious economic 'dislocations' (I love that
euphemism :-) might result, that manager also might think he will gain an
individual advantage by acting preemptively.

I make an analogy to the October 19 stock market crash.  Clearly, portfolio
managers must have recognised that massive selloffs would be tremendously
bad for the market as a whole.  But prices had been driven up very high.
A vague uneasiness must have arisen that, sooner or later, the market must
take a tumble.  A few large players (allegedly Fidelity, et al.) realised that
it would be to their advantage to rabbit out of the market.  They knew that
those out first would be OK.

I'm not very familiar with economic theory, so I don't know whether or
not a shift of moderate to large numbers of people from medium range
professional salaries to a lower salary range would affect the economy
overall (heck, I don't even know that the new 'service' or whatever
jobs might not pay MORE).  Economic situations are always complicated and
I don't claim to have the answers.

The reason I raised the issue was that I haven't seen any projections of
the social effects of A.I. in either the trade or popular press that I
considered at all realistic.

>    Also I'm skeptical that AI can replace people in the production
>    side of the economy.  The current cutting edge of AI is the expert
>    system, which is more of an advisor than an employee that could
>    design and build anything.  Also, expert systems are simple rule-
>    based deduction systems, which makes them only slightly more advanced
>    than the average troubleshooting chart.  No real knowledge is
>    generated by the program.

Well, this is true today.  Following this track would lead us into speculation
on whether an  expert system with a sufficiently large knowledge base (and
LIPs to match) could replace a seasoned designer.  I'd probably support the
view that we can get to that point eventually.  I'll make an analogy to
compilers.

As computer languages have advanced, we have gradually been moving away from
having to write the implementation of a program towards simply describing
what we want the program to do (specification).  Logic programming languages
and some constructs in ADA are examples of this.  Now these are still far
from being programmable in 'natural language', but I can imagine a day in
the not-too-distant future (say 1998) when designers will describe the
behavior in a near-English of hardware logic or programs and the design
system will be able to generate (compile) a complete implementation
(down to gates, masks, or assembly code).  The system would also be
required to prove that its implementation is correct (meets all the
requirements).
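
A small sketch of the 'describe what, not how' gap (the example and names
are mine, invented for illustration):  the specification below says only
what a correct result looks like, the implementation says how to get one,
and the imagined design system would generate the second from the first and
prove they agree.  Here we can only spot-check the agreement.

import random
from collections import Counter

def spec_sorted(xs, ys):
    """Specification: ys is nondecreasing and uses exactly the elements of xs."""
    nondecreasing = all(a <= b for a, b in zip(ys, ys[1:]))
    same_elements = Counter(xs) == Counter(ys)
    return nondecreasing and same_elements

def implementation(xs):
    """One concrete 'how': insertion sort, written out step by step."""
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

for _ in range(1000):                      # spot checks, not a proof
    data = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
    assert spec_sorted(data, implementation(data))
print("implementation met the specification on 1000 random inputs")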

When such a system is available, then perhaps the need for legions of
designers will be reduced.

You can of course argue that design focus might simply shift into the
application space; i.e. we will all wind up designing (compiling)
sophisticated application-specific logic and programs for myriads of
end-users (the individually tailored product for narrow niche markets).  This
would be a good outcome (I guess).  It is, however, somewhat in dissonance
with the ideas of cost control through architecture.  Managing boatloads of
specific applications hardware and software doesn't sound like my idea of
architectural heaven :-).
363.19. "I don't agree." by STAR::HEERMANCE (Martin, Bugs 5 - Martin 0) Wed Jan 13 1988 15:07, 25 lines
    RE .18
        Ok, I'm willing to admit that there will be short-sighted
    individuals who might endanger long-term gain for short-term gain.
    However, I don't think that your programming language analogy is
    accurate.  I feel that there is a fundamental difference between
    analyzing a problem and creating one. ;^)
        When I program in assembler and try to create a loop structure,
    I must analyze the problem more than if I use Pascal's WHILE
    structure.  However, the compiler does not help me do any real
    design work.  Even if a compiler is created which can understand
    English and I can simply type a design spec right into it,
    I'll still be creating the program.  There will still be a need
    for many people to work on a project, simply for sanity's sake.  A
    design created by one person is far more likely to be flawed than
    a design which got bounced around for many people to analyze.
    Also, coding is actually a very small part of software engineering.
    The majority of the time is spent researching a problem, designing a
    solution, and modifying the design to meet new requirements.
    Even if coding and verification are done by an advanced compiler,
    I feel that a reduction in the number of engineers won't happen.  Why
    will companies go to the trouble of producing advanced compilers
    if they won't be able to fire anybody?  Simple: they'll get the
    product out the door much faster and with fewer bugs.
    
    Martin H.
363.20. "Adding machines" by ISTG::ENGHOLM (Larry Engholm) Mon Jan 18 1988 01:24, 11 lines
    Re: < Note 363.17 by STAR::HEERMANCE "Martin, Bugs 5 - Martin 0" >

    >Also, expert systems are simple rule-
    >based deduction systems, which makes them only slightly more advanced
    >than the average troubleshooting chart.  No real knowledge is
    >generated by the program.
    
    Also, computers are simply fast adding machines which makes them
    only slightly more advanced than the average 2nd grader.  No real
    problems are solved by the computer.
    							Larry
363.21. "Reasoning vs Analysis." by STAR::HEERMANCE (Martin, Bugs 5 - Martin 0) Mon Jan 18 1988 18:37, 24 lines
    Re: -.1
        I'm assuming you're being sarcastic.  However, my point was that
    the machine never gains any insight into the problem.  The program
    performs forward and backward chaining, coming up with conclusions
    which have already been drawn by someone else.  The programmer who
    wrote the rules initially generated all the knowledge the machine
    has.  I view expert systems similarly to the Chinese room problem.
    (The Chinese room problem is a classic example/argument against
    common sense reasoning in formal systems.)
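        To illustrate the backward-chaining half of that in miniature (the
    rules are invented):  the goal-driven search below starts from the
    conclusion and works back toward given facts, and everything it can
    'conclude' was already latent in the rules its author wrote.

RULES = {
    # goal: alternative sets of conditions that establish it (invented)
    "wont_start": [{"no_spark"}, {"no_fuel"}],
    "no_spark":   [{"battery_dead"}, {"plugs_fouled"}],
    "no_fuel":    [{"tank_empty"}],
}

def prove(goal, facts):
    """Backward chaining: True if 'goal' follows from 'facts' via RULES."""
    if goal in facts:
        return True
    for conditions in RULES.get(goal, []):
        if all(prove(c, facts) for c in conditions):
            return True
    return False

print(prove("wont_start", {"plugs_fouled"}))   # True: no_spark via plugs_fouled
print(prove("wont_start", {"flat_tire"}))      # False: no rule mentions this fact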
        As far as computers "only being fast adding machines", I did
    not say or imply this.  Indeed I feel that computers are far more
    that just adding machines and can make decisions.  Indeed using
    a trouble shooting chart requires the ability to make a decision.
    My point (which I reiterated in .19) is that there is a difference
    between problem solving and analysis.  Common sence reasoning is
    great for problem solving.  Rule based expert systems are good at
    problem analysis.
        Computer common sence reasoning is still an immature technology.
    I do not think it's impossible.  However, most of the programs I've
    seen do not use methods which are easily generalized to the real
    world.       
        Now that I've made my point, I welcome constructive criticism.
    
    Martin H.
363.22. by ULTRA::HERBISON (Less functionality, more features) Tue Jan 19 1988 15:23, 30 lines
        Re: .17
        
>    Also I'm skeptical that AI can replace people in the production
>    side of the economy.  The current cutting edge of AI is the expert
>    system, which is more of an advisor than an employee that could
>    design and build anything.  Also, expert systems are simple rule-
>    based deduction systems, which makes them only slightly more advanced
>    than the average troubleshooting chart.  No real knowledge is
>    generated by the program.
        
        Consider these similar arguments:
        
        [From the middle ages:]
            The current cutting edge of book manufacture is hand-copying
            by scribes.  Therefore books will always be rare and used by
            scholars and will never be useful for the masses. 
        
        [From the 1940s:]
            The current cutting edge of computing machines is vacuum
            tubes.  Therefore they will never be generally useful, and
            only used by governments and a few large corporations and
            universities. 
        
        AI has learned (and taught us) quite a bit, but we still do
        not know how the human learning mechanism operates or how to
        simulate it.  This, however, does not mean that we will never
        learn this, or that we know enough to place limits on what can
        be done with AI technology. 
        
        					B.J.