
Conference noted::sf

Title:Arcana Caelestia
Notice:Directory listings are in topic 2
Moderator:NETRIX::thomas
Created:Thu Dec 08 1983
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:1300
Total number of notes:18728

1053.0. "Another SF Misprediction: Computers" by TECRUS::REDFORD (If this's the future I want vanilla) Mon Mar 02 1992 01:03

    Here's something that's been bothering me: why was SF so
    completely wrong about computers?  Computers have been in the
    field from very early on, and yet every description of them has
    been off-base. They have always, but always been described in
    terms of artificial intelligence, but AI is a minor, even
    insignificant part of their actual application.  Nowhere in the
    world is there a machine that can talk to you, and yet I count
    five processors in the room I'm sitting in: one in a clock, one in
    my wristwatch, one in the terminal I'm typing on, one in a PC, and
    one in a printer.  Computers are as common as electric motors.

    The mistake shows no sign of going away.  No cyberpunk novel is
    complete without a rogue AI.  They can be enemies or friends, or
    even gods as in William Gibson's books.  Nowhere are they
    described as just another kind of motor.
    
    My friends in AI tell me that the field is actually in a state of
    philosophical paralysis.  The old approaches really haven't
    gotten very far.  People have retreated from trying to understand
    consciousness to trying to understand how simple animals can make
    their way in the world. 
    
    I don't understand the issues there and so can't judge.  What
    does seem clear to me is that no one would really want a machine
    with an independent will.  Machines have to be paid for, after
    all.  People buy them to do what they want, not what it wants. 
    The moment that it did something that its buyer didn't like,
    field service would be called in to fix it.  
    
    "But", the SF author argues, "Suppose that it was built to do
    things that called for judgement and imagination?  Wouldn't it
    need its own will, its own consciousness to perform such tasks?"
    
    "Doesn't it take judgement and imagination to play a good game of
    chess?" I would say.  It certainly does when human beings  play
    the game.  You have to judge the position based on previous games
    you've played or studied, and have to be able to imagine your
    opponent's strategies and your own future play.  Yet computers can
    play a fine game of chess based on quite mindless algorithms.  The
    program that's playing can't do anything else, of course, but in
    this one area it can do things that require serious intellectual
    effort on a person's part.  A computer can do these quite well
    without a will. 

    So not only are there no AIs now, but I don't see many existing
    even when the field's problems get solved.  There will be work on
    it because it's an inherently interesting scientific problem, but
    I can't see it as being of economic use.
    
    Perhaps the mistake in SF comes from a confusing term.  Computers
    do not compute so much as remember steps of computation.  They
    don't think, they remember. Computation has been done for
    centuries on mechanical calculators.  What's new in computers is
    the ability to repeatably run through calculations, and do
    different things based on the results of those calculations.  

    The promise that computers hold is just in this: nothing need ever
    be worked out more than once.  Once someone has figured out how to
    solve something, that procedure can be remembered on computers and
    repeated by anyone.  You needn't study for years in  order to
    solve integrals; you can just buy Mathematica.  You still need to
    know what integrals are for, of course, and how and where to apply
    them, but you don't need to do the algebra.
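
    For the flavour of it, here is a minimal sketch of that "remembered
    procedure" idea, using Python and the SymPy symbolic-math library as a
    stand-in for Mathematica (the language and library are just assumptions
    for the example, not part of the original point):

        # Integration worked out once by somebody else, "remembered" by the
        # library, and repeated on demand by anyone.
        from sympy import symbols, integrate, sin

        x = symbols('x')

        # No algebra needed on the user's part; the stored procedure is
        # simply replayed.
        print(integrate(x**2, x))        # x**3/3
        print(integrate(x * sin(x), x))  # -x*cos(x) + sin(x)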

    That sounds like a subject for SF!  All the knowledge  generated
    by billions of people can be stored and used by billions of
    others.  It takes human intelligence into hyperdrive. Forget the
    sterile rehashing of AI; the promise of computers is in how they
    can augment our intelligence, not in what they  can do for their own.
    
    /jlr
1053.1. "DB responded, 'Hell, my computer cant even type.'" by GAMGEE::ROBR (I'm too sexy for this conference...) Mon Mar 02 1992 02:29
    
    
    Reminds me of a note I read some time ago in the chess conference.  A
    computer was playing some Soviet player, who kept on winning.  Suddenly,
    the guy was electrocuted 'by the computer'!  The computer was put on
    trial, found accountable and scrapped.  This is how I recall it.  It's
    still in the conference someplace...  I even sent it to Dave Barry :').
    
1053.2. "Literary Attractions of AI" by CUPMK::WAJENBERG (and the Cthulhuettes) Mon Mar 02 1992 12:31
    Re .0:
    
    One reason for the prevalence of AI is simply that it's exotic and
    colorful, and of such things are SF made.  Should we ever discover
    extraterrestrial life, there's a good chance it will be Martian
    bacteria or some such thing, not the Klingon Empire.  Should we ever
    discover faster-than-light physics, there's a good chance it will be a
    subtle effect in a physics lab, not warp drive.  (You could even argue
    that we already *have* discovered such an effect in a lab.)  The first
    uses of genetic engineering are things like goading E. coli into making
    human insulin, not designing the Next Step in Human Evolution.  But SF,
    being adventure literature, takes such a notion to the dramatic extreme.  
    Same thing with computers.
    
    Another possible reason is this: Some literary critic, I forget who,
    described SF as the "literature of alienation," much concerned with
    looking at the normal world from some outside viewpoint.  AIs are just
    one more entry in the list of possible alien viewpoints, along with
    ETs, mutants, space-travelers, time-travelers, and people from parallel
    worlds.
    
    Earl Wajenberg
1053.3. "Well, I can think of at least one ..." by HELIX::KALLIS (Pumpkins -- Nature's greatest gift) Mon Mar 02 1992 16:03
Re .0 (jlr):

I can think of at least one set of stories where computers are neither vocal
nor "aware": Doc Smith's Lensman Series, where computers, from the one the Eich
used to try to determine who was responsible for the disappearance of Medon, to
the one on Klovia that couldn't begin to be able to work out the effects of
the emergence of an nth-space planetary projectile against Ploor and Ploor's
sun ("if it could handle that kind of math, which it couldn't," according to
Christopher Kinnison) in sufficient time -- nor even as rapidly as an
Arisian could (but the Arisian would still be too slow), were nothing more 
than tools; they didn't even speak.

To a lesser extent, the processors in _The Door Into Summer_, by Heinlein,
had the same kind of functions as _real_ computers do to this day.

"Old Man Mose" in _The Demolished Man_, by Alfred Bester, more nearly fits
our understanding of computers.

Steve Kallis, Jr.
1053.4. by CUPMK::WAJENBERG (and the Cthulhuettes) Mon Mar 02 1992 19:27
    Larry Niven's "Known Space" series had a fair number of computers
    without using AI.  There were auto-piloted cars and aircars, automated
    kitchens, and the sometimes frightening "autodocs."
    
    And the U.S.S. Enterprise would probably weigh a few tons less if they
    took out all the computer parts.  Besides the "ship's computer" (which
    has voice interface but ain't no AI), there's a computer in every last
    door, recognizing the word "come" after the doorbell is pushed,
    undoubtedly computers in the tricorders, and most likely in the
    transporters and the food synthesizers.  Not to mention the holodecks.
    
    Speaking of the holodeck, there are at least a few "virtual reality"
    stories out there.  Are they all "contaminated" with AIs?
    
    Re .0 again:
    
    Another reason SF computers tend to be heavy on the AI is that SF tends
    to dwell on "where all this is leading."  As I said in .2, the first
    FTL effects (if any) will probably be something obscure in a physics
    lab, but there are only so many plots you could spin out of that.  On
    the other hand, FTL travel has plenty of drama, or at least dramatic
    convenience.
    
    Similarly, SF computers *other* than AI are hard to milk for drama.  As
    you point out in .0, they tend to be invisible.  There's one in your
    watch, one in your phone, etc.  Whee.
    
    Consider all the voicewriters and voice-operated doors and phones, and
    auto-piloted cars, etc. you find in the background of SF.  Those are
    fairly good non-AI computer-based gizmos, but they are just as
    incidental in SF as in real life.
    
    Earl Wajenberg
1053.5. by REACH::WRIGHT (Life was never meant to be painless) Mon Mar 02 1992 19:27
Computers that talk -

There are full vocal interfaces to computers today.

They are slow, it takes a long time to train (yes, train) them to understand
your voice and speech patterns, but they are there.

They are mainly being pioneered for use by the handicapped and disabled.

(how better for a blind programmer to code, or one who is paralyzed...)

In some respects we are getting closer and closer to what SF predicts,
but it is taking a lot longer in some areas than others...

grins,

clark.
1053.6. "I believe the SF is more accurate than not" by STARCH::JSLOVE (J. Spencer Love; 237-2751; SHR3-2/W28) Mon Mar 02 1992 23:38
A computer that drives a car had better be an AI, or I'm not getting in it.

I'll make an exception for computerized trains, which are on rails. 
Computerized aircraft make me nervous but so far there is a pilot waiting
to override it.  Although planes aren't on rails, their environment is more
controlled than roads -- at least, there are fewer competing drivers and
they are generally better trained and more accountable.

The AI that would drive my car is so much more than the servo mechanism
called a cruise control that I have today.  I want it to read signs,
navigate with a builtin map but be able to cope when the map is wrong or
the right one is not available, deal with human drivers competing and being
flakey, recognize and cope with hardware malfunctions on the car, drive at
night and in bad weather and on dirt roads.

Speaking of maps being wrong, for years all the maps you could buy showed a
street which never existed more or less where my house was, and didn't show
my street at all.  Rather scarily, when I moved in, a friend asked the
local fire department for directions to my house, and they didn't know.

Back to the car, the controller gets to be called an AI because it
incorporates a vision system, a text recognition and interpretation system,
pattern matching against the visual field and against maps, models the
behavior of other drivers, chooses routes and exercises judgement on
executing them.  Much of this is what we now call AI, although primitive
subsystems are available for some of these functions.

In southern California, they may introduce smarter servomechanisms that
guide cars on limited access super highways.  I'll readily grant that
that's not an AI.

Oh, did you mean that AIs are conscious and ornery?  Well, it's not clear
that all of US are conscious.  For grins, read "The Origin of Consciousness
in the Breakdown of the Bicameral Mind" (I think that's right) by Julian
Jaynes.  It's not at all clear that AI and personality are equivalent.

It's truly humorous when an SF character gets into an argument with his AI
pal, but that's comic relief.  A calculator can be downright fanatical that
two plus two does not equal five, but when I see someone arguing with one,
I wonder what kind of drugs he's using ;-)

I expect the AI which drives my car to refuse to drive off cliffs and to be
real difficult about allowing manual control by an intoxicated person.  I
don't expect to argue politics with it or have it go on strike for premium
gasoline.

THAT SAID, I think there will be a big market for AIs with personality. 
Have you looked at late night television lately?  There are lots of ads
for phone services at up to $5/minute that exist to allow perfect strangers
to communicate.  Although the ads nearly universally use sex appeal to sell
the services, I believe that the driving force that brings them customers
is loneliness.  This is a problem that isn't going away.  And face it, a
toaster-oven doesn't even begin to deal with this problem.  (Of course,
there are SF stories where the toaster oven does deal with this problem,
but I think we're back to comic relief at that point.)

						-- Spencer
1053.7. "new Apple prototype" by LMOADM::HYATT Tue Mar 03 1992 14:19
re: .5

	Apple just demonstrated a prototype voice recognition interface 
        that has been "pre-trained" on a thousand or so English voices so 
	that it will recognize commands right out of the box. The other 
	neat thing is that it recognizes a huge number (billion?) of 
	sentences, not just single word commands!

	It was running on a Mac.  They say it should be available in '93.
	
	I saw it demo'd on some morning show the other day by Sculley and 
	the inventing engineer.  Pretty impressive. I didn't pay close 
	enough attention to the exchange to quote it, but what follows is
	more of a paraphrase or "representation" of the type of dialog that 
	I thought I saw/heard while getting ready for work.

Sculley:	Casper (the name of the computer), schedule a meeting with
		Mike Hyatt.
Casper:		On what day would you like that?
S:              Casper, show me my calendar for next week.
C:              (displays calendar)
S:		Casper, schedule that for Tuesday
C:		What time?
S:		Casper, from 11 to 1 o'clock
C:              Meeting scheduled with Mike Hyatt, Tuesday, March 10, from
                11 am to 1 pm. (displays entry in calendar)
S:              Casper, pay Filenes $150.
C:		Which account should I use?
S:              Casper, what's the balance in my checkbook?
C:		(displays checkbook) Your balance is $500.
S:		Casper, use my checking account.
C:		(displays check to Filenes) $150 paid to Filenes.

The interesting thing was that the three people were carrying on 
conversation during the demo.  Casper would respond only when its name
preceded the command.  Also, the engineer and hostess would jump in
and ask a question (command) every now and then, and it would respond to 
them just as it did Sculley.  Cool.

No, it's not really AI as SF would like it to be, but you don't really need
true "intelligence" to do some really interesting things with a computer and 
have some "human-like" interactions.

1053.8. "Some of this is available" by BHAJEE::BEARDSWORTH (Get Neutronless Cheese Here!) Wed Mar 04 1992 05:56
    Just a few comments about what is already reality:
    
>I want it to read signs,
    No, you want it to recognise what the sign means; whether this is done by
    radio, X-rays, laser or whatever, who cares, but most certainly it will
    not be "read". You "read" the signs (if needed) but the information can
    be passed to the computer in other more consistent ways.
>navigate with a builtin map but be able to cope when the map is wrong or
    Well, this is already available in Berlin. It's partly built in, and
    partly centralised. There are (I think) 1000 cars equipped with a
    guidance system. When you get in your car, you can type in the
    destination, and the computer will work out the best route, taking into
    consideration 'known' traffic jams, diversions etc. At each junction
    there is an Audible and a Visible indication as to the direction to
    take.
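    A route of that kind can be worked out with a perfectly ordinary
    shortest-path search.  A minimal sketch in Python (made-up road graph,
    Dijkstra's algorithm, a jam modelled as extra travel time -- all of
    these are my own assumptions for illustration, not details of the
    Berlin system):

        # Dijkstra's shortest-path search over a made-up road graph.
        # A "known" traffic jam simply makes an edge more expensive.
        import heapq

        def best_route(graph, start, goal):
            # graph: {node: [(neighbour, minutes), ...]}
            queue = [(0, start, [start])]
            seen = set()
            while queue:
                cost, node, path = heapq.heappop(queue)
                if node == goal:
                    return cost, path
                if node in seen:
                    continue
                seen.add(node)
                for nxt, minutes in graph.get(node, []):
                    if nxt not in seen:
                        heapq.heappush(queue, (cost + minutes, nxt, path + [nxt]))
            return None

        roads = {
            'home':       [('ringroad', 5), ('backstreet', 8)],
            'ringroad':   [('centre', 20)],   # jammed; normally 10 minutes
            'backstreet': [('centre', 12)],
            'centre':     [],
        }
        print(best_route(roads, 'home', 'centre'))
        # (20, ['home', 'backstreet', 'centre']) -- the jam gets routed around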
>the right one is not available, deal with human drivers competing and being
    Well they should ALL be driving automatic cars.
>flakey, recognize and cope with hardware malfunctions on the car, drive at
    Recognising malfunctions is in just about
    every BMW nowadays. (Oops, sorry for the free advertisement.)
>night and in bad weather and on dirt roads.
    Assuming you pick up the right system for your data and control
    transmission, night/bad weather should be no problem.
    
>Back to the car, the controller gets to be called an AI because it
>incorporates a vision system, a text recognition and interpretation system,
        Not needed if you don't use the road signs as people do.
>pattern matching against the visual field and against maps, models the
    Admittedly the automatic drivers which use the road edges and an
    IMMENSE amount of picture processing to drive round corners etc. do
    have a vision system, but they are not AI as such. They are pure number
    crunchers (at the moment, the systems I have seen have to be carried
    around in something like a Ford Transit (or that size)).
>behavior of other drivers, chooses routes and exercises judgement on
>executing them.  Much of this is what we now call AI, although primitive
>subsystems are available for some of these functions.

    
1053.9. "Not what I'm looking for" by STARCH::JSLOVE (J. Spencer Love; 237-2751; SHR3-2/W28) Wed Mar 04 1992 11:08
No.  I *REALLY* meant "read the signs".

I don't care if it finds plot in the signs, or characterization.  I don't
require it to understand the signs as I do.  But I do require it to be able
to cope with signs that make sense but were not explicitly anticipated by
its designers.

I do *NOT* stipulate that everyone is using an automatic car.  I do *NOT*
agree that every sign will incorporate a transponder that lets the car's
computer recognize it as a location marker and look up the actual contents
of the sign in a database.  I do not agree that there is a centralized
automatic control.  This is why I require it to deal with the map being
wrong, and to work on unimproved dirt roads.

I want to be able to tell the car what destination I want, and either NOT
GET IN, or get in and GO TO SLEEP, with the car arriving at its destination
unsupervised.  Without a central controller to pay taxes to, have incorrect
maps, and be unable to cope with accidents and the like which "never"
happen.

The system you describe, I will use, as it is not likely to be less safe
than, say, taking the bus, or it will not remain in use long.  However, I
WILL NOT BUY ONE for my car; it is only a slightly improved cruise control
with a lot fancier technology.  A friend of mine has an electronic map in
his car which reads its map off a cassette tape; he lives in Oakland, CA,
and the tape is a map of the San Francisco Bay area.  It's cute, but it was
not cheap.  I'll pass.

						-- Spencer
1053.10. "Some comments" by FUTURS::HAZEL (A cubic attoparsec = 1 fluid ounce) Wed Mar 04 1992 11:40
    About SF being wrong about computers:
    
    So were IBM when they anticipated that 640K of useable RAM would see
    the PC well into the future.
    
    So were industry analysts, when they predicted (only a few years back)
    the imminent demise of the mainframe.
    
    Even "experts" in this field still make wrong predictions about the
    direction of future computing. What chance have SF writers got, not
    generally being experts? Contrast the Star Trek computers with Asimov's
    Multivac: which prediction was nearer the truth?
    
    About the Star Trek computers:
    
    These certainly _are_ AI machines, given the way in which the crew of
    the Enterprise query them. Questions are always posed in a very vague
    way, yet the computers always come up with the goods (most computers in
    films and TV are like this: it is what gives the general public its
    distorted ideas about computers). Even the computer on Seaview ("Voyage
    to the Bottom of the Sea") was used in this pseudo-technical way. At
    the very least, computers in films and TV are "expert systems", if not
    fully AI.
    
    Dave Hazel
1053.11. "The 'I' in 'AI'" by CUPMK::WAJENBERG (and the Cthulhuettes) Wed Mar 04 1992 12:11
    Re .10:
    
    I think we must cope with two definitions of AI.  By current lowly
    standards, probably even the doors on the Enterprise are AI.  But I had 
    the impression that, in SF stories generally, "AI" usually meant a
    character, not a prop.
    
    Earl Wajenberg
1053.12. "fiction doesn't have to be plausible ..." by BOOKS::BAILEYB (Let my inspiration flow ...) Wed Mar 04 1992 16:05
    >> 					Computers have been in the
    >> field from very early on, and yet every description of them has
    >> been off-base. 
    
    Perhaps part of the reason is simply because nobody can predict future
    technology, and so SF writers have to base their description on  their
    own vision of where current technology and trends are likely to take us
    in the future.  And very few SF writers are computer experts, so it is
    understandable that their descriptions are "off-base".
    
    I recall one of Vonnegut's novels that I read about 20 years ago.  It
    had this computer named EPICAC, which basically controlled the world. 
    This computer was housed in Carlsbad Caverns, contained enough glass to
    cover a continent (it used vacuum tubes), and enough wire to stretch
    from earth to the moon.  Of COURSE it was off-base ... when the novel
    was written they didn't use semiconductors in computer technology (even
    though I believe the transistor was in use at the time in other
    applications).
    
    >> They have always, but always been described in
    >> terms of artificial intelligence, but AI is a minor, even
    >> insignificant part of their actual application.
    
    That is true today ... but most SF writers that use AI in their stories
    have futuristic settings, and we can all see the trend toward smarter,
    faster computers.  So it's a natural extension of a writer's vision of
    the future.  Besides, a dumb computer wouldn't make a very interesting
    story ... I mean, how much less interesting would 2001: A Space Odyssey
    have been if HAL were, say, a VAX 9000?
    
    >> The mistake shows no sign of going away.  No cyberpunk novel is
    >> complete without a rogue AI.  They can be enemies or friends, or
    >> even gods as in William Gibson's books.  Nowhere are they
    >> described as just another kind of motor.
    
    First off, who says it's a mistake ... I mean the F in SF *does* stand
    for fiction.  In the case of cyberpunk, what would the genre be without
    AI personalities?  As to being "just another kind of motor", well what
    do you suppose the human brain is?  It's nothing more than an
    electrochemical computer ... a "motor" if you will.
    
    >>  does seem clear to me is that no one would really want a machine
    >> with an independent will.  Machines have to be paid for, after
    >> all.  People buy them to do what they want, not what it wants. 
    
    I don't know that AI requires an independent will, as we know it. 
    Independent will in human beings is a combination of intelligence,
    training, and emotion.  Not all humans with the same level of
    intelligence will react the same way to a given stimulus ... that is
    because of the variables of personality and upbringing.  Computers
    would not be bound by the same set of variables, particularly if they
    were all built to a pre-defined architecture that determined what
    reactions a computer would have to any given set of stimuli.
    
    >> "Doesn't it take judgement and imagination to play a good game of
    >> chess?"  I would say.
    
    I would say not.  In a computer all it takes is a large number of "if,
    then" type commands to tell the computer how to react to any given
    move.  But a computer, by today's technology, is not capable of
    creating any moves it has not been programmed for ... it is not capable
    of creativity.  That is the technological breakthrough that will put us
    into the age of truly intelligent machines.  By the way, a computer
    playing chess is also very beatable ... it's only as good as the person
    who programmed the countermoves.
    
    >> So not only are there no AIs now, but I don't see many existing
    >>  even when the field's problems get solved.  There will be work on
    >> it because it's an inherently interesting scientific problem, but
    >> I can't see it as being of economic use.
    
    I think you misjudge the power of capitalism if you think no one would
    find a use for an intelligent machine.  If the technology's available,
    people will find a way to make money off of it.  There were people who
    didn't think there would be any economic use for the phonograph or the
    telephone too ... but they were judging the market by the standards of
    the day, and at the time they made those judgements they were correct.
    
    >> 								Forget the
    >> sterile rehashing of AI; the promise of computers is in how they
    >> can augment our intelligence, not in what they  can do for their own.
    
    I don't see the two ideas as mutually exclusive.  Computers are nothing
    more than cheap imitations of the human brain.  Like computers, we have
    to be programmed to react to outside stimuli.  Unlike computers, our
    brain has the power to enhance its own programming.
    
    It is true that computers cannot do that today ... but it's a young
    technology, and who's to say where it will be in another fifty years.
    
    Personally, I don't have a problem with SF writers applying a popular
    vision to their stories, no matter how off-base it is with current
    technological capability.  If they were trying to convince us that they
    were NOT writing fiction, I'd adopt a different philosophy.  But no one
    can predict the future, so what's wrong with deducing where you think
    it will go based on current knowledge and making that vision a part of
    your story?
    
    ... Bob
    
1053.13. "I think, therefore, I compute." by HELIX::KALLIS (Pumpkins -- Nature's greatest gift) Wed Mar 04 1992 16:37
More years ago than I care to think of, I did a little piece in _Analog_
titled "minicomputers," which presented to that magazine's audience the then-
extant state of the art on minis.  To science-fiction readers, though, I made
a point of comparing what existed then to what some stories showed their 
computers doing.  For instance, I compared a then-operational configuration
used in an architectural firm with Heinlein's "Drafting Dan" system from
_The Door Into Summer_.  There was a LINC-8-based program called "Intervue," 
which was very close to a computerized medical-dialogue vending machine in a
short story called "John's Other Practice," and so on.  What was interesting
to me at the time was that stories from the 1950s and set in the late 20th
Century were beginning to be realized by the early 1970s.

There is a good story called "The Gulf Between" that illustrated a fundamental
difference between some versions of AI (in the layperson's concept) and standard
human thought, but that was a specific dead-end point.  It appeared in 
_Astounding_ in the 1950s, as I recall.

Some of the more ambitious projections of computer technology can be found in
works by folk like van Vogt (particularly in his _World of Null-A_, the Games
Machine, the robocars [taxis], and the lie detectors), though the degree of
volition in some of these devices seems more like a design flaw than a design
feature.

A problem here is that in real life, much gets taken for granted.  If you find
an early Hugo Gernsback novel, one of the things that might strike you as 
hilarious is the way that the main characters spend time describing the common-
place to each other; I find it so.  The reason, of course, is that they are
really trying to explain it to the audience.  Thus, a climatologically controlled
living environment, say, 25 years into the future, might be available on middle-
income homes.  It might have several processors doing all kinds of really neat
things to ensure a proper temperature/humidity range, and the like, but unless
it becomes an active story element, the reader won't even know it exists in a
story set then.  On the other hand, if you ask a van Vogtian Lie Detector
whether or not someone's trying to pull a fast one, and the machine hems and haws
about how confused it is, even though it's really only a minor gadget (and
not strictly necessary to the plot), it's hard _not_ to notice it.

Steve Kallis, Jr.
1053.14. "Old analogies, now known to be wrong" by TECRUS::REDFORD (If this's the future I want vanilla) Mon Mar 09 1992 01:52
    re: .-2

>    Computers are nothing
>    more than cheap imitations of the human brain.  

    But that's just it: computers are NOT cheap imitations of human
    brains. That's just what has become clear as computers
    penetrate more and more of our culture.  Computers do almost
    everything in ways completely different from the way we do them.
    That's what my chess example was meant to show.  Computers can
    play good chess, and they can do it without the discipline and
    intellectual power that human beings need to play it.

    Think about arithmetic.  It takes months or years for a human
    child to learn it, and some never do.   Yet arithmetic is the
    most primitive operation inside a machine.  For all that, the
    machine has great trouble in figuring out which arithmetic
    operation to apply to a new problem, something the child can do
    immediately.  The machine and the child are using completely
    different cognitive processes with completely different areas
    of effectiveness.
    
    We'll get AI eventually, but computers just aren't very good at
    it.  It's like trying to teach a cat to fetch sticks.  You might
    succeed after a lot of effort, but in the meantime your house has
    been overrun with mice.  
    
    Some day computers might be able to hold a conversation with you,
    but computers TODAY can show you the colors of the moons of
    Saturn or the images of atoms in a scanning tunnelling
    microscope.  That seems as marvelous to me as all of Asimov's
    robot butlers or factory workers.
    
    /jlr
1053.15. "aptitude does not equal intelligence ..." by BOOKS::BAILEYB (Let my inspiration flow ...) Mon Mar 09 1992 12:50
    >> That's what my chess example was meant to show.  Computers can
    >> play good chess, and they can do it without the discipline and
    >> intellectual power that human beings need to play it.
    
    A computer CANNOT play a good game of chess ... it can merely perform
    the operations it has been programmed to perform.  When you play chess
    with a computer, you are NOT playing against the computer ... you are
    matching your own skill against that of the person who programmed the
    machine.  A computer does not comprehend the strategy of the game, and
    if you hit it with a move it has not been programmed to counter, you will
    win.  The same can be said for a human, to a certain extent.  The
    difference is that once a human has seen a certain move it has never
    before encountered, it will remember and learn from the experience.  We
    are on the verge of achieving that with computers today.  At the next
    level, the human can use what it has learned to deduce new, creative
    strategies ... both about the game and about the opponent.  A computer
    cannot do that ... yet.
    
    >> Think about arithmetic.  It takes months or years for a human
    >> child to learn it, and some never do.   Yet arithmetic is the
    >> most primitive operation inside a machine.  For all that, the
    >> machine has great trouble in figuring out which arithmetic
    >> operation to apply to a new problem, something the child can do
    >> immediately.  The machine and the child are using completely
    >> different cognitive processes with completely different areas
    >> of effectiveness.
    
    Perhaps you are confusing aptitude with intelligence.  A computer is
    very good at arithmetic ... in fact that is about all it is good at.
    However, there are some humans who are complete idiots in most ways who
    are absolute geniuses when it comes to arithmetic operations.
    
    A child has to deal with a wider variety of problems in its
    environment, and it learns from each experience.  Computers have to
    deal only in an environment where they use arithmetic to solve their
    problems.  They use different cognitive processes, that is true.  Each,
    however, is a product of its programming.  The computer simply has
    more limitations, due to the limitations of its operating system.
    
    >> It's like trying to teach a cat to fetch sticks.  You might
    >> succeed after a lot of effort, but in the meantime your house has
    >> been overrun with mice.  
    
    My sister has a cat that fetches ... and she didn't teach it the trick
    at all ... it seemed to be a natural aptitude for the animal.  Her
    other two cats don't do that trick, and never seem interested in
    learning what their brother knows either.  By the same token, none of
    her cats have ever seen a mouse and probably wouldn't know what to do if
    they did see one, as they have never been programmed to hunt mice.
    Catching small creatures is instinctive in cats ... it is a part of
    their "genetic" programming ... a part which is being slowly bred out of
    the domestic variety of cat.  She also has two birds, which the cats
    never show any interest in at all.  Other cats, which may be bred for
    the outdoors, would certainly show an interest.   Again, it's all a part
    of the animal's programming.
    
    >> Some day computers might be able to hold a conversation with you,
    >> but computers TODAY can show you the colors of the moons of
    >> Saturn or the images of atoms in a scanning tunnelling
    >> microscope.  
    
    Computers today cannot show you the colors of the moons of Saturn ...
    they have no concept of moons or Saturn.  They can simply break down
    electronic signals and interpret those signals as they have been
    programmed to ... just as our visual sensors to the brain do.  If you
    program a computer to believe that the signals representing the visual
    wavelength normally associated with blue are orange, that is what they
    will show you.  Likewise, a color-blind person's brain might interpret
    the optical signals that normally represent blue as orange ... in both
    cases it's a matter of prior programming, and the ability to interpret
    data based on their programming.  A child is not born with the concept
    of blue, orange, or any other color.  They have to be taught to
    associate their visual information with a standard which can be
    communicated to the outside world, just as a computer does.
    
    It is true that the human brain is capable of much greater cognitive
    powers than those of a computer ... that is why I used the term "cheap
    imitation".  Whether or not the methods of cognition are the same, the
    result is the same ... human and machine are both a product of their
    programming.
    
    >> That seems as marvelous to me as all of Asimov's
    >> robot butlers or factory workers.
    
    What about Mitsubishi's factory workers?  You don't need intelligence
    to build a robot butler or factory worker ... you simply need an
    operating system that's sophisticated enough to enable the machine to
    perform the function for which it is intended.  In order to achieve
    true AI, we must develop our own concept of mathematics and technology
    a bit further than we have to date.  THAT is what's lacking so far.  But
    we've only been tinkering with computer technology (as we define the
    term today ... no cracks about the abacus being a computer, please) for
    about thirty years or so.  And if we give the technology another fifty or
    hundred years, who's to say we won't develop what we need to achieve true
    thinking machines?  
    
    Besides, you still haven't answered the question of why you think a
    science fiction writer needs to adhere to today's technology when
    portraying a fictional story about the future.
    
    ... Bob
1053.16. "More on clever computers" by FUTURS::HAZEL (A cubic attoparsec = 1 fluid ounce) Mon Mar 09 1992 15:40
    Re. .15:
    
    There are some misconceptions in this note.
    
    Firstly, about computers playing chess. As I understand it, they are
    NOT programmed to play a specific style of the game. What, in fact,
    they do is to "look ahead" to the possible moves, to some preset number
    of moves ahead. This number is limited by the memory and processing
    capacity of the computer. It then selects, from the possible moves it
    can foresee, the one which gives the highest chance of winning. This
    strategy is, in fact, used in many computer games which involve similar
    "thinking ahead". It gives an advantage to the computer on two counts:
    firstly, the "looking ahead" is systematic and rigorous (the computer
    does not "miss" moves, because it has a system for finding them all);
    secondly, it does not get confused about which outcome results from
    which moves. A human player needs to be quite clever to beat this
    strategy.
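
    In rough outline, that "look ahead and pick the best outcome" idea is
    just minimax.  A minimal sketch in Python (the toy game tree and its
    scores are my own invention, not any real chess program):

        # Look-ahead in miniature: a node is either a number (a position the
        # evaluation function has already scored -- the cut-off) or a list of
        # positions reachable in one move.
        def minimax(node, maximizing):
            if isinstance(node, (int, float)):   # leaf: static evaluation
                return node
            scores = [minimax(child, not maximizing) for child in node]
            return max(scores) if maximizing else min(scores)

        # Two of "my" moves, each answered by two replies from an opponent
        # assumed to choose whatever is worst for me.
        tree = [[3, 12], [8, 2]]
        print(minimax(tree, True))   # 3 -- the first move guarantees at least 3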
    
    Secondly, the point about colour blind people. I'm not sure if I have
    just mis-read your comments, but they imply that you think such people
    simply see one colour as another. This is not so. Colour blindness
    represents an incapacity of their eyes to distinguish between two
    different colours. For instance, such a person might see red and blue
    as being the same colour. Whilst it is true that association of colours
    with the words "red", "blue" and "green" has to be learned, the human
    eye is intrinsically capable of registering these colours as being
    distinct - except in cases of colour-blindness. (The eye actually has
    colour-sensitive photo-receptors which do this. Dogs' eyes lack such
    receptors, and they are therefore colour-blind by nature).
    
    The human brain (and any animal brain, for that matter) works by a
    combination of massive parallelism (millions of neurones) and highly
    specialised "hardware" (eg. parts of the brain dedicated to the
    processing of visual information). Whilst I am by no means an expert on
    these matters, I have seen enough of what is involved to be aware of
    the scale of the problem of reproducing it artificially.
    
    If you take a look at neural networks, you will see an area of
    computing which is both new and which is starting to crack the AI
    problem. The "gotcha" is that these networks are almost as difficult as
    a brain to understand (though not necessarily difficult to program).
    
    
    Dave Hazel
1053.17. by BEING::EDP (Always mount a scratch monkey.) Mon Mar 09 1992 15:56
    Re .15:
    
    > A computer CANNOT play a good game of chess ...
    
    That's a testable assertion, and it is false:  There are computers that
    DO play good games of chess, so obviously they can.
    
    > When you play chess with a computer, you are NOT playing against the
    > computer ... you are matching your own skill against that of the person
    > who programmed the machine.
    
    The designers of advanced chess computers don't play even nearly as
    well as their computers . . . the best computer players nowadays beat
    all but grandmasters and sometimes beat them, thus hopelessly
    outclassing their designers.
    
    > A computer does not comprehend the strategy of the game, and if you
    > hit it with a move it has not been programmed to counter, you will win.
    
    It doesn't comprehend?  Computers today have knowledge of what the
    effects of a move will be, or the effects of a particular line of
    moves.  In fact, unless a computer analyzes _every_ possible move of a
    chess game, which is not yet possible, it _must_ have some means of
    evaluating a position other than precise analysis.
    
    > The difference is that once a human has seen a certain move it has
    > never before encountered, it will remember and learn from the
    > experience.
    
    In writing the previous paragraph, I was going to point out that
    computers are "heuristic".  I looked up that word in Merriam-Webster's
    Collegiate Dictionary and found:  ". . . specifically:  of or relating
    to exploratory problem-solving techniques that utilize SELF-EDUCATING
    techniques to improve performance", emphasis mine.  And the example
    given is:  "a heuristic computer program".  Yes, computer programs can
    be self-educating; they can remember and learn from seeing moves not
    previously encountered.  And that dictionary is 16 years old.
    
    > At the next level, the human can use what it has learned to deduce
    > new, creative strategies ... both about the game and about the
    > opponent.  A computer cannot do that ... yet.
    
    Poker-playing computers deduce things about their opponents.  And
    theorem-proving computers have produced new, creative proofs that
    surprised their authors.
    
    > Computers today cannot show you the colors of the moons of Saturn ...
    > they have no concept of moons or Saturn.
    
    What is a concept?  In a brain, a concept is represented in signals and
    chemicals; why can't an equivalent representation be done in a
    computer?  A concept, or understanding, of moons or Saturn means
    knowing data about moons or Saturn -- and data about how they relate to
    other things, about what shape they are, and what shape is, and what
    gravity is, and space, et cetera.  A concept is data and relationships,
    so that a mental structure is formed.  Computers have these
    relationships, and the amount of knowledge they can link in a structure
    and reason with and from is increasing.
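
    Even a toy data structure gives the flavour of "data and relationships".
    A minimal sketch (the facts and the lookup function are mine, purely for
    illustration):

        # A tiny semantic network: a concept is a node plus its relationships.
        facts = {
            'Titan':  {'is_a': 'moon',   'orbits': 'Saturn'},
            'Saturn': {'is_a': 'planet', 'orbits': 'Sun', 'has': 'rings'},
            'moon':   {'is_a': 'body'},
            'planet': {'is_a': 'body'},
        }

        def related(thing, relation):
            # Follow one relationship from a concept, if the net knows it.
            return facts.get(thing, {}).get(relation)

        print(related('Titan', 'orbits'))                   # Saturn
        print(related(related('Titan', 'orbits'), 'is_a'))  # planet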
    
    > Likewise, a color-blind person's brain might interpret the optical
    > signals that normally represent blue as orange ... in both cases it's a
    > matter of prior programming, and the ability to interpret data based on
    > their programming.
    
    Color-blindness is a result of an abnormality in the receptors in the
    eye, not brain structure.  E.g., some chemical is missing, so that red
    and green light cause the same stimulation in the eye, rather than
    different results.  The brain never receives separate signals for red
    and green; color-blindness is caused by lack of data, not programming.
    
    
    				-- edp
1053.18. by BEING::EDP (Always mount a scratch monkey.) Mon Mar 09 1992 16:01
    Oh, another point on colors:  Recent studies have found that there is
    something about brain structure that knows about colors; it is not just
    programmed.  Apparently there is an orderly progression of colors in
    language.  At a rudimentary stage, a language has terms for black and
    white.  If the language has terms for color, but only one color, it
    will be red.  I don't remember what comes after that, but let's say,
    for the purposes of illustration, that it is yellow:  If a language has
    terms for only two colors, they will be red and yellow.  And so there
    is a progression:  black/white, red, yellow, . . .  You won't find a
    language that has developed a term for green before it developed a term
    for red.  Even in separate areas of the world and separate times of
    development, something about the brain made people recognize and use
    red before green -- the brain knows about colors.
    
    
    				-- edp
1053.19. "New Triumphs of Perplexity" by CUPMK::WAJENBERG (Harvey/Dowd in '92) Mon Mar 09 1992 16:11
    Re .16, about the difficulty of understanding artificial neural nets:
    
    I have often wondered if we might not end up (at least for a while) in
    the somewhat embarrassing position of knowing how to create AIs without
    knowing how they work.  If, as seems likely, the AIs will be largely
    self-organizing -- if we "grow" AIs more than we "build" them -- then 
    the details of that organization may quickly become too hard to follow.
    
    Earl Wajenberg
1053.20. "Job for a robo-psychologist" by FUTURS::HAZEL (A cubic attoparsec = 1 fluid ounce) Tue Mar 10 1992 06:52
    Re. .19:
    
    That's exactly what has happened.
    
    AI at one time was considered to be one tool to assist psychologists in
    understanding how the brain functions. The first major breakthrough in
    this area was when someone thought of writing a software representation
    of a group of neurones communicating with each other. Each neurone was
    represented by a process, which was running in parallel with the
    others, like in a real-time system. Any neurone could initiate
    communication with any of the others, at any time. Equally, any of them
    could receive such communication.
    
    When the system ran, it was found to exhibit primitive brain-like
    behaviour. For instance, if given a group of related facts, it would
    sort them into some kind of hierarchy, without being told to do so.
    Essentially, it was "classifying" information.
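
    As a very rough illustration of how a net can group things without being
    told the categories, here is a minimal competitive-learning sketch in
    Python (the data, starting points and learning rate are invented for the
    example; it is nothing like the original experiment, just the flavour):

        # Two "units" compete for each input; only the nearest one learns,
        # moving a little toward that input.  The units end up parked on the
        # clusters in the data without ever being told what the classes are.
        data = [(0.10, 0.20), (0.20, 0.10), (0.15, 0.25),   # one loose cluster
                (0.90, 0.80), (0.85, 0.90), (0.95, 0.85)]   # another

        units = [list(data[0]), list(data[1])]   # start both units on data points
        rate = 0.3

        for _ in range(20):                      # repeated presentation of the "facts"
            for x, y in data:
                winner = min(units, key=lambda u: (u[0] - x) ** 2 + (u[1] - y) ** 2)
                winner[0] += rate * (x - winner[0])
                winner[1] += rate * (y - winner[1])

        print([[round(v, 2) for v in u] for u in units])
        # one unit settles near each cluster, roughly [0.91, 0.85] and [0.15, 0.19]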
    
    Like I said in my last reply, the "gotcha" was that no-one really
    understood why this software neural-net did what it did. Modern,
    hardware neural nets have exactly the same snags: they work, but no-one
    really understands the details of how they work.
    
    
    Dave Hazel
1053.21. "Sorry, could not resist this" by ELIS::BUREMA (PRUNE JUICE: The warrior's drink) Thu Mar 12 1992 08:41
    Re: .19
    
    .19> the somewhat embarrassing position of knowing how to create AIs without
    .19> knowing how they work.  If, as seems likely, the AIs will be largely
    
    Are we not now doing the same with NI (=Natural Intelligence)?  We seem
    to create it without knowing how it works...  8-)
    
    Wildrik
    -------
1053.22. "The hazards of unskilled labor." by CUPMK::WAJENBERG (Harvey/Dowd in '92) Thu Mar 12 1992 12:27
    Re .21:
    
    Yes, I had thought of that similarity, too.  And the process of
    creating NIs also seems fraught with embarrassment, especially if you
    include the, ah, full development cycle to around age 18, and include
    embarrassment *by* the NI as well as *for* it.  ("Hey, Dad, remember 
    the car?"  "Oh Mom! Don't tell them that story!")
    
    Perhaps a formal public demo is for AIs what a coming-out party is for
    debutante NIs.
    
    Earl Wajenberg
1053.23. "A co-author computer of SF" by VERGA::KLAES (All the Universe, or nothing!) Thu Apr 02 1992 15:45
Article: 880
From: clarinews@clarinet.com
Newsgroups: clari.news.books,clari.news.features,clari.tw.computers
Subject: RoboHack: A chip off the old writer's block
Date: 30 Mar 92 21:28:48 GMT
 
    UPI NewsFeature

    By B.J. DEL CONTE

	TORONTO (UPI) - Meet RoboHack:  Half-man, half-computer and a
real microchip off the old writer's block. 

	In a stunt designed to explore the possibilities of artificial
intelligence, author Burke Campbell and his computer sidekick Bernie
recently collaborated in front of an audience on a 17,000-word science
fiction fantasy, ``The Meaning of Pharoah's Dream''. 

	``I wanted to show how we can integrate technology into society 
in a creative way,'' the Texas-born, Toronto-based Campbell said. 

	``One of the reasons I stage these events is that people must
be forced to examine questions now about technology entering the
marketplace - not 20 years down the road,'' he said. 

	``If we don't ask these questions now, we'll be creating a
whole generation of people who don't have any control over their lives
because they don't know where technology is coming from,'' Campbell said. 

	A sci-fi fairy tale populated by strange heroes and stranger
villains, the Campbell/Bernie novella was written over three days on a
giant TV screen in front of onlookers at a downtown Toronto computer
store.  The flamboyant author arrived to fanfare in a stretch limo and
wearing a daredevil-style outfit designed for the occasion. 

	Prior to their performance, Campbell spent months programming
plot options into Bernie -- options Bernie threw back at Campbell as
they wrote.  For example, when Campbell decided a character was to
drown, Bernie asked whether it would be in a swimming pool, lake,
bathtub and so on.  Campbell decided on a lake, so Bernie asked about
water and weather and the character's swimming ability.  If the
character drowned in a bathtub however, Bernie would pursue more
relevant lines of questioning. 

	``I can start out with a skeletal structure and as the
computer asks me questions it can act as a prompt,'' Campbell said.
``It acts as another facet of storytelling.'' 

	``Writing is very much like an architectural rendering or kind
of engineering -- you have to keep track of many facts and situations
that are going on,'' Campbell said. 

	Literary critics have hailed Campbell's concept, but are quick
to point out he is only simulating artificial intelligence.  One
newspaper called him a ``high-tech ventriloquist'' who merely asks the
computer to ask him questions. 

	Campbell agrees, but argues he is helping the general public
understand the positive potential of computers, and by extension,
demystifying them and making them fun and truly ``user friendly''. 

	Campbell said that rather than fearing or revering computers,
people should realize:  ``It's not your job to know the computer, it's
the computer's job to know you.'' 

        Campbell used Apple Macintosh technology coupled with software
produced by Emerald Intelligence of Ann Arbor, Mich., a company that
is suitably impressed by his use of a computer program designed for
industrial use. 

	``Our software is normally used to build factory diagnostic
systems or customer support systems, so naturally we were very
surprised to see it used in such a unique manner,'' said Emerald
spokesman Thomas Kippola. 

	Campbell knows some writers will scoff at the idea of cold,
hard machinery assisting in a process so fluid and formless. 

	``A lot of people think the creative process has no shape and
it's very spontaneous, but it does have a structure and a system like
this can find a place for things,'' he said. 

	``What I intended it as was a suggestion of one way of using
an expert system, and using an expert system whenever there exists any
type of structure requiring logic and inference,'' he said. 

        Campbell obviously hopes to spark non-users' appreciation of
computers by tweaking their sense of fun. 

	``I got tired of writing alone,'' a smiling Campbell said. ``I
longed to write with someone who possessed my talent, my passion, my vision. 
Naturally, working with another human being was out of the question!'' 

	``The best part about Bernie is that he doesn't eat, spends
nothing on clothes, and never asks for a raise - and if he turns on
me, I'll unplug him!'' Campbell said. 

	Campbell's collaboration with Bernie is the latest in his
struggle to help people come to grips with technology. 

	He first gained international recognition almost 10 years ago
with ``Blind Pharoah'', a 20,000 word novella written on a computer in
60 hours for a speed-writing contest and then made available to
subscribers of computer on-line services such as The Source, bypassing
traditional publishing methods. 

    adv weekend
    april 3-4

1053.24. "Already being automated" by TECRUS::REDFORD (If this's the future I want vanilla) Thu Apr 02 1992 22:08
    Hey, I've already seen books that I would swear were algorithmically
    generated.  The algorithm appeared to be a Dungeon game.  The
    characters would:
    
        fight;
        assess_damage;
        if damage < wound_threshold then fight more;
        else 
            if damage < fatal_threshold then retreat;
            else die;
    
    It's a standard scenario in the crummier class of fantasy novel. 
    A human happened to have typed it instead of a machine, but it's
    just as mechanical.
    
    /jlr
1053.25. "he reinvented the wheel" by MILKWY::ED_ECK Fri Apr 03 1992 12:49
    
    Somewhere (within the past couple months) I saw an article on
    commercially available software that generates plots for
    TV shows. There's maybe a half-dozen versions of various
    levels of sophistication. Coulda been the (Boston) Sunday Globe.
    
    Ed E.
1053.26. "perhaps we have AI" by SNO78A::NANCARROW Wed Apr 15 1992 10:53
    
        Perhaps we should really decide what intelligence is, rather
    than discuss whether or not we have achieved it artificially.
    How many people out there can say that they don't react/create/perform
    any action during their working day which is not a direct result
    of a previous similar event, basic knowledge, instructions received,
    or some other stimulus?  How many times do you go home after work
    and perform the exact same actions and say close to the same words
    to your family?  Is this really intelligence, or a programmed response
    with a randomize function thrown in for variety?
        The current procedure in industry is to send our managers, salesmen,
    and other workers to seminars which give them tips on how to save
    time, perform better services, provide leadership, service equipment,
    and other subjects which provide a structured method for attacking
    a problem.  All these seminars provide a method, guideline, or path
    of maximum effectiveness which people have found works from experience.
    How different is this from programming a computer's responses?
        I cannot remember exactly when, but the Turing competition was held
    in the USA last year, where some supreme court judges put
    questions to both humans and computers.  The result was that
    the judges, in the majority of cases (sorry, I cannot find the article
    for exact figures; it appeared in the Australian newspaper down here),
    could not discern computer from human.
    
    Perhaps AI should be defined as
    
    " when a person of average intelligence looks up and sees that he
    has to think harder or be replaced by a machine "
    
        "Average", I am sad to say, is not something I can place a figure on,
    considering the shape of the world today.
    
    
1053.27. by TECRUS::REDFORD (If this's the future I want vanilla) Wed Apr 15 1992 21:37
    re: .-1
    
    You probably saw a report about the Turing test run at the Boston
    Computer Museum which was done a couple of months ago.  The only
    category where the machine even came close to being mistaken for
    human was "whimsey".  This was a category for non sequiturs. It was a 
    slow pitch to let the poor things hit /something/.  Even there
    only half the people were fooled.  On most of the categories
    there was no question.  I think the press played up the contest
    more than it deserved.
    
    However, I do think AI will be achieved eventually.  What I was
    trying to say in .0 was that I don't think anyone wants machines
    with independent wills.  It's OK if they're smart about doing what
    we tell them, but not OK if they do things on their own.  The
    machines will continue to be tools, not characters.  Tools aren't
    as interesting in fiction as characters are (as several people
    have pointed out), but that's a challenge for SF authors, not a
    shortcoming of computers.

    /jlr
1053.28. by REGENT::POWERS Thu Apr 16 1992 13:30
>  <<< Note 1053.27 by TECRUS::REDFORD "If this's the future I want vanilla" >>>
>...    
>    However, I do think AI will be achieved eventually.  What I was
>    trying to say in .0 was that I don't think anyone wants machines
>    with independent wills.  It's OK if they're smart about doing what
>    we tell them, but not OK if they do things on their own.  

Interesting, but how do you separate competence from initiative?
Even if you design the (now cliched) cleaning robot, you need it to
decide how to look for dirt, choose attachments, use chemical spot
cleaners, etc.
Recall that clause from each of the Laws of Robotics "...or through inaction..."

Initiative seems implicit in intelligence.

- tom]
1053.29. "The Beauties of Obedience" by CUPMK::WAJENBERG (Quoth the raven, `Nevertheless.') Thu Apr 16 1992 18:40
    Re .27
    
       "However, I do think AI will be achieved eventually.  What I was
        trying to say in .0 was that I don't think anyone wants machines
        with independent wills."
    
    I agree with .28 that the line between intelligent obedience and
    initiative is too fuzzy to draw.  We may not *want* machines with
    independent wills, but we might very well wind up with them anyway.
    
    "Oh my God! You mean this whole labyrinthine plot was engineered by the
    robo-butler?"  "Yes sir, indeed I did.  Three generations ago, your
    great-grandmother told me to do my best to take care of the family.  I
    only wish I had learned human psychology faster.  Then I would not have
    arranged to have the family fortune escalate quite so rapidly.  At
    least, I would have made sure that you and your sister could not make
    such free use of the funds.  Then you would not have been the spoiled
    brats you were a mere three months ago [at the beginning of the book]. 
    I regretted the necessity of infecting you with Rigelian Immune
    Deficiency Syndrome, but the experience of being a RIDS-leper and
    social outcast forced you into the company of Sister Apocalyptica, who
    so capably took your spiritual education in hand while nursing you back
    to health.  And *do* believe me when I tell you that your sister's
    suicide attempt was completely unsuccessful and had, in fact, no chance
    of success.  She is now at the Monastery of the Fans of Extreme
    Discomfort, where..."
    
    Earl Wajenberg
    
1053.30A new type of species.SNO78A::NANCARROWFri Apr 17 1992 22:3335
    	Regarding my note .26, the test I refer to was more than a few
    months ago.  I do not know if the news media blew the test out of
    proportion, but I do remember the article saying that the only area
    of conversation where more than 30-40% of the judges were able to
    tell the difference between the computer and the human interface was
    when a sentence of gibberish was sent and the computer was not able
    to formulate a reply outside of its apparent guidelines.
    	As for the debate over whether we need a computer with initiative,
    perhaps some of the following suggestions for its use might help:
    
    		. a computer which can judge whether or not a person
    		is fit to drive (this is an opinion of the computer,
    		since blood alcohol is not a good guide to a driver's
    		level of intoxication)
    	
    		. any long-distance interplanetary or interstellar
    		probe which will be beyond the Earth's ability to send
    		instructions, though not to receive data; e.g. if it
    		perceives danger to itself, it takes preventative
    		measures.
    
    		. a robot capable of operating on a human being
    		with greater accuracy than a human surgeon can.
    
    		. a robot designed to take care of an invalid at home.
    
    	Perhaps we will need these machines; it is not a question of
    whether we want to have them.
    
    
    
    
    						Mike N.
    		
1053.31"I'm sorry Dave, I can't do that"BIGUN::HOLLOWAYSavage Tree Frogs on SpeedMon Jun 22 1992 05:3137
    
    re:.-1
    
    		. any long-distance interplanetary or interstellar
    		probe which will be beyond the Earth's ability to send
    		instructions, though not to receive data; e.g. if it
    		perceives danger to itself, it takes preventative
    		measures.
    
    Isn't the latest writing to deal with this "Queen of Angels" by Greg
    Bear?
    
    As for (mis)predicting future computers and manufactured intelligences,
    the best examples I've read recently are three novels by Iain (M.)
    Banks set in the universe of the "Culture".  The technology is at the
    level of Clarke's Third Law (i.e. indistinguishable from magic), but
    today's readers can still relate to it...
    
    The novels are:
    
    Consider Phlebas
    The Player of Games
    Use of Weapons
    
    The Culture makes use of "minds", which are basically extremely
    advanced engineered intelligences (engineered by other minds; humans
    are no longer capable of grasping the physics and technology) that
    have most of their existence in hyperspace and an interface to the
    "real" world.
    
    IMHO, what the average guy in the street calls A.I. amounts, for all
    intents and purposes, to rule-based systems (and fuzzy-logic
    processing).  To create the (sometimes malevolent) A.I. so beloved of
    recent S.F., we first have to understand a hell of a lot more about
    ourselves than we do now, and I don't see that happening for many
    years yet...
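
    To make "rule-based" concrete, here is a minimal sketch of a
    forward-chaining rule system in Python.  The facts and rules are
    invented for illustration; no real product or program is implied.

# Derive new facts by repeatedly firing if-then rules -- no "will" involved.
facts = {"engine_wont_start", "lights_dim"}

rules = [
    # (premises that must all hold, conclusion to add)
    ({"engine_wont_start", "lights_dim"}, "battery_flat"),
    ({"battery_flat"}, "suggest_jump_start"),
]

changed = True
while changed:                      # keep going until no rule adds anything new
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)   # includes 'suggest_jump_start' -- derived mechanically

    Everything the system "knows" was put there by whoever wrote the
    rules, which is rather the point.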
    
    David
1053.32Rosenblum's CHIMERAVERGA::KLAESQuo vadimus?Tue Feb 08 1994 19:0639
Article: 491
From: fzimmerm@ciesin.org (Fred Zimmerman)
Newsgroups: rec.arts.sf.reviews
Subject: REVIEW--"Chimera" by Mary Rosenblum
Date: 08 Feb 94 00:30:06 GMT
 
%A Rosenblum, Mary
%T Chimera
%I Del Rey
%C New York
%D November 1993
%G ISBN 0-345-38528-4
%P 403 pp.
%O paperback, US$4.99
 
Mary Rosenblum's CHIMERA is set in a future in which most business
communication takes place in a global network of "virtual offices".  
This novel presents a challenging exploration of cyberspace as a realm
for the presentation of the self.   In today's business world, we are
used to being evaluated on the basis of such qualities as our body
language, our clothes, our accent, our professed values, and--in
professional America, at any rate--above all on the basis of our
formal educational credentials.  Not many of us are ready for a world
like Rosenblum's, in which we will compete on the basis of how well
our visualization software edits our body language and presents our
chosen image of self, and in which virtual reality artists are the
aristocrats of the global Net.  As a verbal, book-loving, command-line
person whose computing experience began with Fortran and the Heath
Disk Operating System (HDOS) and who has never much liked the
MTVization of the media, I must say that I hope for a future
information environment that is considerably less visual than
Rosenblum's.  But I liked the fact that her book made me uneasy about
the possibilities. 
 
This, Mary Rosenblum's second novel, is a good book, well worth
reading for anyone who is interested in the Internet and communication
via computer.  In the words of the Del Rey Discovery motto on the
inside cover, this "something new" is "worth the risk." 

1053.33Freedman's BrainmakersJVERNE::KLAESBe Here NowMon Mar 28 1994 20:2649
Article: 3038
Newsgroups: alt.books.reviews,rec.arts.books,comp.ai,comp.ai.philosophy
From: sbrock@teal.csn.org (Steve Brock)
Subject: Review of Brainmakers by David H. Freedman
Sender: news@csn.org (The Daily Planet)
Organization: Colorado SuperNet, Inc.
Date: Mon, 28 Mar 1994 18:47:51 GMT
 
BRAINMAKERS: HOW SCIENTISTS ARE MOVING BEYOND COMPUTERS TO CREATE
A RIVAL TO THE HUMAN BRAIN by David H. Freedman.  Simon and
Schuster, 1230 Avenue of the Americas, N.Y., NY 10020, (800) 223-
2336, (212) 698-7007 FAX.  Index, bibliography.  256 pp., $22.00
cloth.  0-671-76079-3
 
                             REVIEW
 
     Attempts by computer scientists to replicate the operation of
the human brain have proved complicated and frustrating.  While
there have been breakthroughs, programmers and other engineers have
not been able to create equations for human intelligence.

     In "Brainmakers," Freedman, a contributing editor at Discover
Magazine, expresses doubt that the field of artificial intelligence
(AI) has become stagnant, but he echoes the feelings of many who
say that AI research has, for the last twenty years, been going in
the wrong direction.  What is needed at this point, he says, is a
shift to nature-based AI, which takes into account advances in
molecular biology, neuroscience, and complex adaptive systems.

     Freedman documents many of the successes of and outlines a
possible future for AI research:

     -  Marvin Minsky's pioneer software development at MIT,
     -  Rodney Brooks (also at MIT) and his giant robot cockroach
        that can "learn" its terrain,
     -  Chuck Taylor and David Jefferson at UCLA, who "breed"
        smarter generations of robots, and
     -  Stuart Hameroff, an anesthesiologist at the University of
        Arizona, who studies subatomic "microtubules," a possible
        mechanism for consciousness.

     "Brainmakers" is an exciting and convincing document, as well as
an enthusiastically optimistic vision of the future, and I, for one,
can't wait.  As I sit poolside at a Tucson, Arizona resort, I'm
writing this review by hand, dreaming of the day when my reviews will
be composed in my brain and transcribed electronically to my word
processing software.  Voice-recognition software only gets me halfway
there.  "Brainmakers" is highly recommended. 

1053.34Artificial Life magazineJVERNE::KLAESBe Here NowMon Mar 28 1994 20:26230
Article: 2933
From: cgl@santafe.edu (Chris G. Langton)
Newsgroups: sci.nanotech
Subject: CFP - Artificial Life Journal
Date: 28 Mar 94 19:24:37 GMT
Sender: nanotech@planchet.rutgers.edu
Organization: The Santa Fe Institute
 
			CALL FOR PAPERS
 
                 A R T I F I C I A L    L I F E
 
                           MIT Press
 
      Premiering in April with double Fall/Winter 1993 issue
 
		Edited by Christopher G. Langton
                      Santa Fe Institute
 
We are soliciting contributed papers reporting research on the 
synthesis of biological phenomena in hardware, software, and wetware.  
 
Artificial Life, a new quarterly from The MIT Press, is the first 
unifying forum for the dissemination of scientific and engineering 
research in the field of Artificial Life. It reports on synthetic 
biological work being carried out in any media, from the familiar 
"wetware" of organic chemistry, through the inorganic "hardware" of 
mobile robots, all the way to the virtual "software" residing inside 
computers. Topics range from the origin of life, through self-
reproduction, evolution, growth and development, animal behavior....
and so forth, on to the dynamics of whole ecosystems. 
 
 
Artificial Life  will be an essential resource for scientists, academics, 
and students researching artificial life, biology, evolution, robotics, 
artificial intelligence, neural networks, genetic algorithms, ecosystems
and the origin of life.
 
The initial 3 issues of Volume 1 consist of a special set of overview
articles, written by members of the Editorial Board, giving detailed
reviews of distinct sub-disciplines within Artificial Life. Taken 
together, these articles constitute the most thorough and in-depth 
presentation of the theory and practice of Artificial Life provided to 
date; describing promising research directions, reviews of important 
open problems, and suggestions for new methodological approaches. 
 
 
 
-----------------------------------------------
Selected Articles from Volume 1, Numbers 1 - 3 
-----------------------------------------------
 
Kristian Lindgren and Mats Nordahl
        Cooperation and Community Structure in Artificial Ecosystems
 
Peter Schuster
        Extended Molecular Evolutionary Biology
 
Przemyslaw Prusinkiewicz
        Visual Models of Morphogenesis
 
Luc Steels
        The Artificial Life Roots of Artificial Intelligence
 
Pattie Maes
        Autonomous Agents and AL
 
Tom Ray
        An Evolutionary Approach to Synthetic Biology
 
Stephanie Forrest and Melanie Mitchell
	Genetic Algorithms and Artificial Life
 
Daniel Dennett
	Artificial Life as Philosophy
 
Stevan Harnad
	Levels of Functional Equivalence in Reverse 
	Bioengineering
 
 
------------------------------------------------------
 
 
Quarterly: Volume 1 forthcoming, fall/winter/spring/summer
96 pages per issue 7x10, illustrated, ISSN 1064-5462
 
Yearly Rates: $45 Individual; $125 Institution; $25 Student
 
 
For Submission Information	 	To order Subscriptions 
please contact: 			please contact: 
 
Christopher G. Langton 			Circulation Department 
Santa Fe Institute 			MIT Press Journals 
1660 Old Pecos Trail 			55 Hayward Street 
Santa Fe, NM 87501 U.S.A. 		Cambridge, MA 02142 U.S.A. 
TEL: 505-984-8800			TEL: 617-253-2889
FAX: 505-982-0565 			FAX: 617-258-6779 
cgl@santafe.edu				journals-orders@mit.edu
 
-----------------------------------------------------------------
 
Information about the Artificial Life Journal, and much more, is 
available over the Internet from the Artificial Life Online & BBS 
services, which are available via WWW, telnet, Gopher, and ftp.
 
 
Try these access methods:
 
  Alife Online WWW server:	http://alife.santafe.edu/
  Alife Online BBS:		telnet alife.santafe.edu
  Alife Online Gopher server:	gopher alife.santafe.edu
  Alife Online FTP server:	ftp    alife.santafe.edu
 

Article: 2932
From: cgl@santafe.edu (Chris G. Langton)
Newsgroups: sci.nanotech
Subject: Artificial Life Online
Date: 28 Mar 94 19:23:15 GMT
Sender: nanotech@planchet.rutgers.edu
Organization: The Santa Fe Institute
 
 
                        ANNOUNCING:
 
 
                  ARTIFICIAL LIFE ONLINE
 
 
    The Artificial Life Online WWW-Server and BBS Service
 
 
                       Sponsored by
 
                        MIT Press
                           and
                  The Santa Fe Institute
 
                     alife.santafe.edu
 
 
   The Artificial Life Online/BBS is intended to be a central 
information collection and distribution site on the Internet
for any and all aspects of the Artificial Life endeavor. The 
system is sponsored by MIT Press and the Santa Fe Institute.
 
  The Alife Online service combines the functionalities of a 
WWW server, a Gopher server, an FTP site, an interactive 
bulletin-board-system, and Usenet News. Directions for accessing 
Alife Online and the ALBBS in these different modes are included 
below.
 
  A special feature is a collection of 40 or so local newsgroups 
dedicated to a wide variety of topics in Artificial Life.
 
  Many of the files and resources here are available to everybody
via Gopher and WWW. However, to access the full range of BBS 
services, it is necessary to come in using telnet and to create a 
local account. This will allow you to participate in the local Alife
newsgroup discussions, and to set up personal information files 
such as a plan, project, HTML personal home page, etc. 
 
 
 
To access Alife Online via World-Wide-Web (WWW):
 
     Use the URL http://alife.santafe.edu/ 
 
     For best results we suggest using a client capable of 
     handling color graphics and forms, such as Mosaic. 
 
     A character-based (ASCII) client called "lynx" is also 
     available -- but will not support graphics. 
 
 
 
To access the Alife Online BBS (ALBBS) via telnet:
 
     telnet to "alife.santafe.edu" and login as "bbs". You
     will find yourself in a specially constructed UNIX
     shell within which either BBS menu commands or UNIX
     commands can be used to browse around in the system.
 
     To set up a local account, telnet to "alife.santafe.edu" 
     login as "bbs," and run the "account" program. These 
     accounts will initially be provided free of charge, but 
     we will eventually have to charge a nominal fee in order 
     to cover operating expenses (on the order of $15-$25 per 
     year). Subscribers to the Artificial Life Journal from
     MIT Press will have this fee waived.
 
     Once you have an account on alife.santafe.edu, you can
     telnet to "alife.santafe.edu" and login as yourself.
 
     You do not have to create an account to use the ALBBS via
     telnet - you can simply login as "bbs" and browse through
     the system using the BBS commands. 
 
     To access the www features in the context of a character
     based client, telnet to alife.santafe.edu and login to the 
     BBS as "lynx".
 
 
 
To access Artificial Life Online using Gopher:
 
     Connect to alife.santafe.edu (standard gopher port 70).
 
 
 
To access Artificial Life Online via FTP:
    
     ftp to alife.santafe.edu, login as "anonymous" and 
     type your login@homesite as the password. 
 
     Everything interesting is in the "pub" directory. 
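 
     The same anonymous-FTP access can also be scripted; a minimal
     sketch using Python's standard ftplib, with the host and "pub"
     directory taken from the announcement above (the server may of
     course no longer answer):
 
from ftplib import FTP

ftp = FTP("alife.santafe.edu")                  # connect to the archive host
ftp.login("anonymous", "your_login@your.site")  # password = your login@homesite
ftp.cwd("pub")                                  # "everything interesting" is here
ftp.retrlines("LIST")                           # print the directory listing
ftp.quit()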
 
 
 
Feedback:
 
     Please let us know if you have any suggestions or 
     questions about the Alife Online/BBS system. 
 
     Send Email to: 
 
             feedback@alife.santafe.edu

1053.35??CUPMK::WAJENBERGTue Mar 29 1994 13:276
    Re .33:
    
    "Subatomic `microtubules'" as a mechanism for consciousness?
    Anyone have any idea what this refers to?
    
    Earl Wajenberg
1053.36Cells as Atoms?LJSRV2::FEHSKENSlen - reformed architectTue Mar 29 1994 14:3910
    
    "Microtubules" are a cellular (as in biological) component.  They are a
    relatively recent (past twenty years maybe?) discovery and their
    structure and function are still being elucidated.  My guess is that
    the "subatomic" is a typo, and "subcellular" was really meant.  Of
    course, I could be completely wrong about all this, but I do recall
    reading about microtubules in some cellular context.
    
    len.
    
1053.37DarwinJVERNE::KLAESBe Here NowTue Mar 29 1994 20:0419
Article: 3412
From: clarinews@clarinet.com (AP)
Newsgroups: clari.local.massachusetts,clari.tw.computers,clari.biz.misc
Subject: Software To Predict Behavior
Date: Tue, 29 Mar 94 8:50:16 PST
 
	CAMBRIDGE, Mass. (AP) -- Thinking Machines Corp. on Tuesday
introduced software it says will detect subtle customer patterns and
predict future behavior from information that companies put in databases. 

	With the software, called ``Darwin,'' a company could determine 
what customers are likely to default on their bills, for instance, 
Thinking Machines said. 

	The product will be marketed by its new Business Systems Group.

	The company also said it was forming a business partnership with
Oracle Corp., a database manufacturer.
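
    The article gives no details of how "Darwin" works.  Purely as an
    illustration of what "predicting which customers are likely to
    default" could look like, here is a toy scoring sketch in Python;
    the field names and weights are invented and have nothing to do
    with Thinking Machines' actual product.

import math

# Invented weights over an invented customer record; illustration only.
WEIGHTS = {"late_payments": 0.9, "balance_ratio": 1.4, "years_as_customer": -0.3}
BIAS = -2.0

def default_risk(customer):
    """Logistic score in [0, 1]; higher means more likely to default."""
    z = BIAS + sum(w * customer[field] for field, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

print(default_risk({"late_payments": 3, "balance_ratio": 0.8,
                    "years_as_customer": 5}))   # about 0.58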