
Conference napalm::commusic_v1

Title:* * Computer Music, MIDI, and Related Topics * *
Notice:Conference has been write-locked. Use new version.
Moderator:DYPSS1::SCHAFER
Created:Thu Feb 20 1986
Last Modified:Mon Aug 29 1994
Last Successful Update:Fri Jun 06 1997
Number of topics:2852
Total number of notes:33157

19.0. "12-APR MIT Seminar Transcript" by NOVA::RAVAN () Wed May 30 1984 02:27


This is a transcript of the MIT seminar entitled "Making  Synthesizers
Sound  More  Musical"  held 12-Apr-1984.  My parenthetical remarks are
indicated by square brackets.  In particular, [?] indicates a  missing
word  or phrase since the tape was a little hard to decipher at times.
"?<phrase>[?<phrase>...]?" is the syntax I used  to  indicate  one  or
more  possible  words or phrases when I could hazard a guess.  I tried
to be as literal as possible in transcription.

In general I thought  the  lecture  was  pretty  good  and  the  ideas
presented interesting.

This document is about 15 pages long.  I  suggest  that  you  use  the
PRINT command to get a listing instead of reading it online.

Have fun,
Jim

                Making Synthesizers Sound More Musical

                           Miller Puckette
                    MIT Experimental Music Studio
                             12-Apr-1984


[applause]

Well that was a [?] introduction, and I hope I live through  this  and
see what I say.

I think I guessed right that I should just start off  by  just  saying
what  computer  music  is,  and start that by showing you what kind of
diagram I started yesterday.  I surprised  myself,  which  is  to  say
these  diagrams really represent the way I think about this.  I really
just sort of transferred my brain onto the page which is  a  different
approach from what I would normally take.

This is what the computer musician creates in order to make a computer
music  sound.  He wants to get sound out of a machine.  (I'll show you
more later but...) He writes a score and the score  contains  a  whole
bunch of notes, or what we call NOTE statements.  The shape of the box
I'm putting the score in is deliberately reminiscent of earlier times
when these things actually were on punched cards.  They still have the
same format in almost every system, which is  to  say  that  one  line
corresponds to a note.

This first line, for instance, is an instruction that the computer  is
to  go  over  to the orchestra, which is the computer's idea of how to
play the notes, and it's going to put the number 440  into  this  slot
and  it's going to put .3 in this slot and 6.8 into that slot and it's
going to start the thing going.  And it's going  to  start  the  thing
going  for long enough to make a second's worth of sound and then it's
going to stop it.  So in  principle  it's  actually  not  so  hard  to
imagine writing say a thousand or ten thousand of these lines and then
telling a computer to go and when the computer has done its  job  then
you'll have 2 minutes or 5 minutes or ten minutes of music.

So step two in this process ...  [pause] ...  Now I've  added  a  time
indicator  ...   [laughter]  ...   Step  two is you feed this into the
computer (computers look like tape recorders in these diagrams because
that  is  the  only  thing  really physical looking you can see on the
computer).  Now, the human  has  gone  away.   The  human  really  has
edited these things into the machine and then he's left, and the
computer is sitting there spinning its wheels and blinking its lights.
And  then  you  come  back  the  next day and you're ready to play the
results.  And the results ...  [pause, drawing, laughter]  ...   (this
is  really  experimental  so  ...   what I'm going to try to do is use
these tokens and shapes over and over again as sort of a mnemonic
device to try and transfer the energy and the information a little bit
more effectively.  This is called COUPLING.)

Now, and that's it.  That's computer music.  This was  the  idea  that
Max Mathews had in 1960 and every non-real-time so-called software
synthesis system ever since has followed this  basic  organization  of
EDIT, WAIT, LISTEN.

Now that I've shown you that, I want to  ...   I  want  to  play  some
examples  ...  [a tape recorder is being set up in the background] ...
and I have credits.  The examples of non-real-time synthesis, none  of
them  are  mine because I don't do non-real-time synthesis anymore ...
to speak of.  I try to do real time synthesis but I spend most  of  my
time  developing  systems  so  ...   with that a few notes from [title
covered by early tape recorder cue].

[taped example and speaker's overvoice]
..
and you'll discover there are a wide range  of  things  that  you  can
actually do
..
[end of example]

That last word is 'convert'.  'Tracking Convert' refers to a bug  that
we had two years ago [laughter].  It's a very good title for [obscured
by laughter].  Next, (I'm glad Allen showed up) ...

[taped example]
..
..
[end of example]

And finally, my favorite example of all of non-real-time synthesis  is
Peter  Child.   You'll find it really a very simple score and ?playing
with?  very simple sounds.

[taped example]
..
..
[end of example]

That last sound was what we have for cellos.  It's not as good as  [?]
cellos  obviously;  it  predates them by a couple of years.  I want to
say a little something about why they sound like cellos at  all  in  a
few   minutes   because   they  do,  a  little  bit,  but  they  sound
'metaphorically like cellos'.  In a way you can sort of feel the bow
hitting  the  string,  but  the  result acoustically doesn't sound the
slightest bit like a cello.  It's really an abstraction  of  a  cello.
(but  I won't try to play it again because I'm scared of trying to cue
the tape recorder in real time...) [laughter]

Now contrasting with that is the rather newer advent of what  we  call
'real  time synthesis'.  In real time synthesis ...  [drawing] ...  it
looks like you have a performer.  You can now have a performer in  the
system and he has an input device and he talks to the computer and the
computer talks to the speaker ...  and when he hits a  note  he  hears
it.   This  is  clearly  a completely different way to operate and the
immediate reaction to the idea is ' Well, this clearly has to be  much
much  better  because,  not  only does the dog hear the sound [one can
only assume the speaker has drawn a dog on the board] but he hears the
sound,  and  that  means  that he can get the sound he wants in a much
much shorter amount of time  because  he  has  feedback.'  That's  the
Norbert Wiener concept that 'the more information that is flowing from
here back into his head, the faster he can control  what  it  is  that
he's  trying  to  control in the face of his uncertainty as to how his
inputs relate to the outputs of the machine.'

Ok.  I want you not to listen to this very carefully  but  this  is  a
piece  in  progress.   Pieces  in  computer music that are in progress
sound different from other pieces that are in progress,  which  is  to
say that you have all this down but it doesn't sound ...  as complete.
[laughter] So here we go -

[taped example]
..
..
[end of example]

Sorry, ungraceful pickup.  You'll notice that that immediately  points
out  that  you pay for the advantage you get in real time synthesis in
the following way:  There's a limit to how much this  person  can  do,
this  keyboard,  in  real time and I fought with that in the following
way in that example  -  I  went  back  and  separately  added  several
different  kinds  of  details of control over several runs through the
piece ...  which meant that I was able  to  be  about  five  different
people  controlling  the  synthesizer.  But nonetheless, there is much
less information even in that sum total than there was in the thousand
score  cards that I would have written to get the same result out in a
non-real-time system.  Which is not to say that I couldn't  have  gone
back  and  done  it  that  way anyway.  Then I could have just fed the
score straight into the computer and played it back in what is  called
'tape  recorder  mode'.   But  that defeats the advantage of real time
which is 'you can perform the thing'.  You can't  perform  with  score
cards but you can perform with real input devices, such as keyboards.
So there's really a get-what-you-pay-for situation that  this  lecture
will  study ways to circumvent ...  [pausing, searching for words] ...
I think I've shown you  both  the  promises  and  the  problems.   The
promise  is  that  you  get [pointing to something on the board] these
wonderful sounds but the composer goes crazy because it  takes  him  a
month  to  write  all  these  score  cards.   Or  you get [pointing to
something on the board] these wonderful sounds but the  listener  goes
crazy  after  about  five  minutes of it because it sounds the same as
itself.  [laughter]

[pause, dealing with board]

So, using my mathematical training, I decided to try to  describe  the
problem  in  very  abstract  terms,  which is to say, ...  [pause] ...
There's not much that you can do to a  keyboard.   You  can  maybe  do
thirty things to a keyboard in a second if you're good, and they might
be the thirty right things to have done in the second or they might be
thirty wrong things.  But the computer can do fifty thousand things to
a speaker per second.  [pointing  to  something  on  the  board]  This
computer is telling this speaker cone where to move.  And it will tell
it where it should be every twenty microseconds from here until the
end  of  the piece.  And that is an awful lot of [?].  ..  [pause] ...
or, removing the element of  time,  synthesis  you  can  see  as  this
function  which  takes  the  set of all possible things you could have
done and puts it into the set of all possible sounds.  And the problem
now  is the fact that the set of all possible sounds is just much much
much bigger than the set of all possible real time inputs.

Similarly in the non-real-time case ...  Although in the non-real-time
case  you  have  more flexibility.  You can, if you want to, put a lot
more information into the score.  But it's still expensive to  do  so.
Expensive  in terms of the composer's time which is the most expensive
commodity of all, as anyone who has tried  to  make  a  sound  with  a
computer knows.

So that's the fundamental trouble, I think, with  the  whole  business
and  the  fundamental thing we have to overcome.  I just want to state
that whole thing from a different angle,  that  only  just  struck  me
today as possibly being a better way to picture this whole thing.

Here's the kind of thing you might want  to  do  to  play  a  ?bar  of
Mozart?  ?barra oat zark?:  These are the notes and these are supposed
to represent the notes  as  the  composer  understands  them,  as  the
composer   thinks  of  them,  and  of  course  it's  written  in  some
approximation of the composer's language.  The job is  to  take  these
notes  and transform them into these events which I've drawn in a very
very simple way as just being this bunch of envelopes that have to
appear out of nowhere, turn on, and do something, and stop.  And they're
sitting in this two-dimensional space.

Now the problem is ...  if you go home and turn on your Commodore, and
tell  it  to  play  these things through its Soundchaser, well it sure
will.  [I think here the speaker is referring to the Commodore 64 home
computer  and its SID sound chip] What you'll hear is EXACTLY what you
see here with EXACTLY the same volumes, the same rise times, the  same
emphasis, the same timbres, everything.  Unless you do something.

The set of things that you could have done with one of these things is
huge.  There is no reason, for instance, that, well ...  just
supposing this is how hard a cellist is hitting a string in your
algorithm, your synthesis algorithm.  That is a valid parameter and a
musically interesting parameter to think about controlling.  But,
there are 2**20 different ways that a composer can make a shape like
this happen to a string on his cello ...  or his violin.  He could,
just to choose the obvious one [the rise time in a picture of an ADSR
envelope, I think], bear down on it quickly, which is to say, he could
jump immediately up to the position of highest pressure, or else he
could slew up gradually.  He could hit it harder or softer, he doesn't
know how high this [?] has to be.  And then what could it do over
time?  Well, it could do anything over time.  It doesn't even have to
look  like  this.   Why doesn't it grow over time and then turn off at
the end, or ...  who knows what?

And so you see that in order to really control any real algorithm that
we  have  now  that  really  makes sound, you still have an impossible
amount of bandwidth ...  to put into the algorithm.  There's still
just  more  information  than you can supply.  That has to be supplied
and if you don't  supply  it  all,  or  you  don't  randomize  it,  or
something, these guys are going to be all the same.

Now if you come up with some scheme of just randomizing the shapes  of
the  envelopes of all the notes in your piece, then it will just sound
random.  I've tried it; it just sort of sounds silly.  There really is
musical  information which gets mapped in some ghastly way into shapes
of functions that vary in time.  And it is very hard to codify this
transformation, either in terms of text-edited score lines, or in
terms of real time inputs, that  are  different  from  the  real  time
inputs that the performer actually uses to play his instrument.
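
[Editor's note: a sketch, in C, of the naive randomizing scheme the
speaker says he tried and found wanting.  The envelope fields and the
jitter ranges are invented; the point is only that the variation this
produces carries no musical information.]

    #include <stdlib.h>

    struct envelope {
        double rise;    /* attack time, seconds   */
        double peak;    /* peak amplitude, 0 to 1 */
        double decay;   /* release time, seconds  */
    };

    /* uniform random value in [base - spread, base + spread] */
    static double jitter(double base, double spread)
    {
        return base + spread * (2.0 * rand() / RAND_MAX - 1.0);
    }

    /* every note gets a slightly different shape -- and it just
       sounds silly, because the differences mean nothing */
    void randomize_envelope(struct envelope *e)
    {
        e->rise  = jitter(0.05, 0.04);
        e->peak  = jitter(0.80, 0.15);
        e->decay = jitter(0.30, 0.20);
    }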

Which is not to say that I am now going to advocate that we  stick  to
regular kinds of input devices, because then we'll have regular kinds
of possible input, and that's a restriction too.

So, how do you make  it  possible,  in  not  too  much  time,  to  get
interesting and musically useful, musically worth-listening-to results
out of this computer?

Well, the first thing you hit it with is graphics.  It's  an  approach
to  computer  science:  'the first thing you always hit a problem with
is graphics'.  I've never known why this is, but I think it's true ...
or  at  least I think it's a good first hack.  The obvious thing to do
is to give the composer the notes, for instance, as opposed to the
lines  of  text on a VT52 terminal.  Well, he would rather just have a
score format editor.  Now the problems in actually  designing  one  of
these  editors  are huge.  The studio actually tried to do so under an
NSF grant years ago.  The results were good, but the  results  really,
...   the main discovery was that the notation that composers actually
use is SO complicated and SO infinitely extensible that no finite size
program  could possibly be an adequate editor for every composer to be
able to use it in every way.  However, if you're willing to force  the
composer  into  a small subset of the possible things that you can do,
you're still going to have a very useful editor and you're still going
to have something that the composer can really use.  That was the idea
and in my opinion it worked.  And it would still work except the
hardware  is  gone now and we have to go back and do it again.  But we
know how to do it now and it looks like we won't do it  again  in  the
next couple of years because it was a success the first time.

There were two things that we were talking about in terms of giving the
computer  to play, and one of them was the score and the other was the
orchestra.  Actually the way the musician specifies what the orchestra
is  is  he  writes  it out in a programming language.  The programming
language is, in any descendant of MUSIC V, which is the original
software  synthesis language that I've ever heard of ...  the language
is a threaded list of subroutine calls and it's really ...  it doesn't
look  like that at all.  And so the obvious thing to do is to actually
give  the  composer  a  graphical  signal  processing  network  design
program.   And  that means you give him something that he can take his
pointing device and pick up a couple oscillators and put them together
and  then  draw directed lines from these unit generators down to this
unit generator.  'Unit generator' is just something that comes in a
box  like an oscillator, a filter, or an envelope generator.  And then
one thing he has, among many others, is a picture of a little speaker.
And when he puts something to that, then he hears the sound.
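
[Editor's note: a sketch of the 'threaded list of subroutine calls'
idea in C.  Each unit generator is a box with a compute routine; the
directed lines the composer draws become pointers from one box to the
next.  The names are invented, and the network is reduced to a single
line from an oscillator into the little speaker.]

    #include <math.h>

    typedef struct ugen ugen;
    struct ugen {
        double (*tick)(ugen *self);  /* compute one output sample       */
        ugen   *input;               /* the directed line drawn into us */
        double  state;               /* phase, filter memory, etc.      */
    };

    /* an oscillator box: no input line, just an advancing phase */
    double osc_tick(ugen *self)
    {
        /* 50,000 samples per second, as in the talk */
        self->state += 2.0 * 3.141592653589793 * 440.0 / 50000.0;
        return sin(self->state);
    }

    /* the speaker box: pulls one sample down its input line */
    double speaker_tick(ugen *self)
    {
        return self->input->tick(self->input);
    }

Patching, in this representation, is just an assignment like
speaker.input = &osc; drawing the line and writing the pointer are the
same act.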

Great idea, right?  Well, that also was done.  I forget when, but also
under NSF auspices.  And it was smashing!  It's a lot of fun to use.

We're almost up to the present  now.   Really  the  talk  is  sort  of
future-centric.

The other thing you want to think about are input devices that  really
let  a  performer  inflict his notion of how to perform a piece on the
computer.  In the simplest variation of this, you simply don't allow
the  computer to ask any questions.  For instance, you design his drum
head.  (This is one thing that Max Mathews has done more recently.)
You  design  his drumhead that is rectangular in shape and this person
can hit it.  And he can hit it anywhere on the surface and he can  hit
it  soft  or  hard.   And  that's three variables, the three variables
being the position of the impulse, and the strength, and  those  along
with  a  time tag are sent straight off to the computer.  And then you
use that for something.  I'm not sure what to use  it  for  yet.   [at
this  point I gather the speaker has been building a composite drawing
of the entire music system] The obvious thing to use it for is ...  to
instantiate notes ...  on a synthesizer ..  make these things map in
some real way into  some  presumably  much  larger  set  of  synthesis
parameters  ...   and  have  this thing control the playback ...  of a
score which is not stored in the computer but which is stored  in  the
performer's head.  And the less obvious thing to do is to start making
the computer itself know more about the score and hence to be able  to
dynamically  allow  the  performer to do more with it.  That is to say
...  well, I have to save that for later because I have pictures.
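
[Editor's note: a sketch of the drum-head message in C -- the three
variables plus a time tag, sent straight off to the computer, and one
arbitrary way of mapping them into synthesis parameters.  The field
names and the mapping are invented for illustration.]

    struct drum_hit {
        double x, y;        /* where on the rectangular head it was hit */
        double strength;    /* how soft or hard                         */
        double time;        /* time tag, in seconds                     */
    };

    /* map the small set of input variables into a (presumably much
       larger) set of synthesis parameters -- here just two */
    void map_hit(const struct drum_hit *h, double *pitch, double *amp)
    {
        *pitch = 100.0 + 900.0 * h->x;   /* position picks the pitch  */
        *amp   = h->strength;            /* strength picks the volume */
    }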

This is the one that Barry Vercoe is working with at IRCAM  in  Paris,
France.  He's commissioned to do a piece which is being performed next
October there.  The input device ...  I'll tell  you  more  about  the
rest  of the piece later but ..  he has a flautist and the flautist is
playing a REAL flute ...  and the flute is making real sound that  the
viewers  of  the  piece  will  hear.   But  at the same time there are
optical sensors on the keys of the flute.  And these  optical  sensors
find  out  what  the  flute  player  is playing.  And that goes into a
computer.  And now the computer knows what the flautist  is  doing  as
well  as  the  flautist does.  And now the computer presumably has his
own part and is kicking alongside staying with the  flautist,  perhaps
playing the rest of an orchestra for a concerto.  And now the flautist
is in complete control.  He doesn't have to follow the computer at all
if  he doesn't care to.  In fact if he does follow the computer, he is
in trouble ...  because the computer's following him.  [laughter]

But now the computer is being cast  into  the  role  of  the  flexible
background  orchestra  or  the  flexible  concerto orchestra ...  that
backs up a soloist.  This  is  different  from  the  standard  way  of
playing  computer  music  with real instruments but I'll tell you that
too later.

[another slide] This is an idea, actually I think this is an idea
which was first due to Barry Vercoe and has never been realized in any
way usable to a person doing computer music.  The  man  is  conducting
now;  he  is  holding  a conducting baton.  And he's conducting in the
front of this stage which might contain  an  assortment  of  real  and
mechanical  players.   Now  the  real players see the baton; they play
along.  The mechanical players, well, they can't really see this  guy,
they  can't  see  the  expression  on  his face, but they can at least
detect the position and attitude of the baton.  (uh, this is  a  spec.
I'm  not  talking  about  a real device at the moment although I could
be.) The computer is sensing, or the computers are sensing, the
position  and  the  attitude  of  this  baton.   That's six degrees of
freedom; that's an awful lot of information.   There's  an  incredible
amount  of  bandwidth  there.   Who knows how much of it is really ...
Who knows how much of it can really  be  meaningfully  filled  by  the
conductor at the same time as he's trying to conduct the real players,
but we'll find out.

Now the computer's job is to extract a beat and to play along with the
beat  and  to  do  so in such a way as really never to make a mistake,
because if he ever does make a mistake, it's going to be tough to ever
get  the  orchestra and the computer back together.  That's because of
the lack of another channel ...  But the problem at least  is  now  to
have   the   computer  sense  a  beat  and  some  kind  of  expressive
information, as much as it really can be sensed ...  from  this  baton
...  that qualifies as an input device.  None of these are really what
we would consider the last word in input devices.

The following thing exists only as a spec.  If you know how to build
it, please tell me.  This is a rubber and it's a foot and a half big
and it's full of something, I don't know what, but it senses when you
touch it and how hard you squeeze it.  And so either as a performance
piece, you'd leave it in the middle of an empty room ...  [laughter] I
don't know, just think of something.  Part of the power of computer
music is that you just think of something and then you do it and then
you make it musical.  And believe me, anything that makes sound can be made
musical; I'm convinced of that.

So we've talked about at least partially lowering the barrier between
man and machine, which is really there because the two think
differently and not because of the lack of any high bandwidth link
between the two.  I mean, there are plenty of ways to get a lot of
bandwidth out of a person.  The problem is he won't necessarily really
be communicating anything.

And so  having  faced  the  problem  of  getting  as  much  meaningful
bandwidth out of the performer as possible which is a problem you just
try to solve I guess, the next thing that you do is you  start  adding
flexibility.   [pause,  shuffling of papers] No, I guess I have to ...
I'm not going to talk about this down here yet ...

OK.  The second thing is you start teaching the computer and the  real
person  how  to play together better.  And that means how to speak the
same language so as to make the most  effective  use  of  the  limited
channel  there  exists  between  the  two.  Going back to the flute in
Paris, suppose now that you've got a computer and the computer can
tell  what the flute is playing.  Really as well as if it were hearing
it and following it.  And that's because not only  do  you  sense  the
keys  on  the flute but you also throw in a microphone so that you can
actually hear what the flute is playing.  And combining the microphone
with the keys, you can actually have the computer following the flute.
Very effectively.  I've been convinced of that since Barry  came  back
from  playing  me  his  latest sound examples.  He can make a computer
figure out what a flautist is playing.  Very effectively.

Now the computer can listen to this flute.  And now what  can  he  do?
Well now he can use the fact that presumably he has some reasonable
representation of what the piece is that he is trying  to  play.   Now
...   I realized after I drew this that really these things are inside
the computer, but ...  you can at least think of them as inputs.   The
flute  score,  the  composer's  score and the computer's score, are of
interest to both the computer and the flautist.

The flautist wants them for the same reason that he  always  wants  to
know  what  the  piece  sounds  like.   He  doesn't  need to know this
literally note for note perhaps [hopefully pointing to the  computer's
score]  but  he needs to know what the computer part is so that he and
the computer can play together in  some  sort  of  reasonable  harmony
[sic].  And that means play expressively as well as just play on pitch
and in time.

The computer sees three  things.   The  orchestra  is  something  that
really  I'm just trying to wash over for the moment.  So, he knows how
to play things but he also sees his score and he sees the flute score.
Now  the simplest thing he can do is he can wait until the flute plays
the next note in his score and he can say 'All right, that  was  a  D 
and  that  happens  at beat five so I can go ahead and race forward to
beat five.  And then I'm going to wait for him to do  something  else.
And  if  that's  at beat seven then I'll race to beat seven, and then,
we'll be playing together.'
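
[Editor's note: the simplest following scheme just described, sketched
in C.  The names are invented, and the real problems -- wrong notes,
prediction -- are deliberately left out, as the next paragraph says.]

    struct scored_note {
        int    pitch;   /* what the flute should play next */
        double beat;    /* where that falls in the score   */
    };

    /* called whenever the flute plays a note; returns the beat to
       race forward to, or -1 if the note didn't match the score */
    double follow(const struct scored_note *flute_score, int n,
                  int heard_pitch, int *pos)
    {
        if (*pos < n && flute_score[*pos].pitch == heard_pitch) {
            /* "that was a D, and that happens at beat five,
               so I can go ahead and race forward to beat five" */
            double beat = flute_score[*pos].beat;
            (*pos)++;
            return beat;
        }
        return -1.0;    /* no match: hold our current notion of time */
    }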

Well, then you have to write software, the first generation  of  which
has  to  tell the computer how to predict in advance what the flautist
is going to play when.  So, the next obvious thing to at least try  to
do is to have the computer actually play alongside the flute and play
confidently enough so that even when the flute has a rest the computer
goes  ahead  and  plays,  not  even  just  exactly  to  the last tempo
perceived but playing along in some way that's commensurate  not  only
with  that  but  with  the  computer's notion of how to play the score
because he has to know how to play the score too.  And then, of course,
the computer and the flautist are playing together because you know the
flautist plays along and sometimes the flautist has  a  rest  and  the
computer  goes  along  and  sometimes  the computer has a rest and the
flautist goes along and they know where each other are and each one of
them knows when to do something.

Now this emphasis on timing is  actually  as  unnatural  as  it  seems
because  that's  the hardest problem really in real time performance -
getting things to happen at the right time.  It's easier, and in fact
there are ways, to come up with different kinds of decision making
about other aspects of playback.  The timing of playback is the really
tricky   thing  to  accomplish,  and  it's  also  the  thing  you  can
immediately tell as being off.  When you're listening to something the
timing cues in the music really seem to be very important to the way a
person hears a rhythm, to the way a person hears  the  music.   And  I
think my later sound examples will prove that quite conclusively.

Now, having done that, you immediately want to do much much more.  Now
look, you want the computer to play expressively, and that doesn't
mean just toolin' along and followin' the flute.  It means
'play  along with the flute, and not only that but play expressively'.
Playing expressively comes in two parts.  First off the  computer  has
to  know how to play the piece expressively.  So you can actually give
the computer a static or unchanging notion  of  what  the  appropriate
expression with which to play the piece is.  And you [?] all that into
the machine and then you weight what you can.  Later on I'll try to at
least suggest effective ways of [?]ing the [?] in computers.

And second, not only do you want the computer to know how to play  the
piece  expressively,  but  harder yet you want it to communicate on an
expressive, almost ESP, level with this other performer.  And whatever
way  real performers communicate is probably the only way that you can
possibly do it.  As far as I've been able to tell,  the  only  way,  I
mean   ...    there  are  obvious  ways  that  real  human  performers
communicate.  But there's also the fact that they just have the
same thought at the same time and that has to be simulated too.

Another input device I showed you was a conducting baton and the thing
that corresponds to that is the idea of conducting.  So here's what it
looks like without the enlightened attitude toward playback.

Well, you can have a conductor or not but you do have a small or large
ensemble.   And  they're  playing along and there's this tape recorder
and it's playing along too.  They're trying to play a  piece  together
but there are problems doing that because the computer part that is on
the tape recorder is completely unyielding.  The  tape  recorder  just
has  no  way  of doing anything except just going straight forward and
playing the thing out according to the timing and the expression  that
was  forced on it by the composer who did that last month or, actually
in our case, who just finished doing that an hour before  the  concert
perhaps.   (In  fact in one concert we did, our ?decks?  ?VAX?  didn't
work; we couldn't get sound out  of  the  machine.   We  couldn't  get
four-channel  sound out of the machine until about two days before the
concert and we were all  scared  to  death.   It  was  amazing.   But,
digression ...)

So the enlightened way to do it is to invent conducting, or rather, to
invent  conducting  a  computer.   This  is very hard and no one has a
stage-worthy conducting system.  But assuming  you  could,  you  would
have  the  following  picture,  and  this  is  the  simplest  possible
variation of the picture:  The man is conducting and not only does the
cellist follow him but the computer follows him.  The computer follows
him, if this is the baton, by locating the minimum  in  the  point  of
this  baton and not only the minimum, but if he's conducting a complex
beat you also have to compute the furthest it gets over this  way  and
even that's not right because that's not really the definition of
where the beat is, but whatever it is you have to do something and  it
probably  varies from conductor to conductor.  There's also a question
as to whether you can ask the conductor to modify his behavior to help
the  computer  follow  the beat.  It looks like in the first stages of
this you would certainly have to.  My position on that is that I think
it's  philosophically  OK  because  I think of the conductor really as
playing the computer as an instrument and I think it's OK to want  the
conductor  to practice because, after all, he has to practice in order
to conduct his real performers.

You can probably even put this on stage.   I'd  love  to.   I've  been
waiting.   There's  one  trick  which is that ...  there's two tricks.
Trick number one is that the computer has the same problem as when  he
followed  the  flute.   And  the  problem  is 'not only is it tough to
follow the actual beats which are often pretty clear, or which you can
at  least tell the conductor to make clear, he also has to figure out,
possibly as much as a second in advance, when he thinks  the  beat  is
going to arrive.  Now that's tough.  That's very tough.  And he has to
do it in such a way that he doesn't look like a fool  if  he  makes  a
mistake.  But he's always going to make a mistake.  I mean, he's never
going to be exactly on.  So he needs a way of what you could best call
'phase  locking'  onto  the  baton  in such a way as never to commit a
musical faux pas, such as walking three-quarters of the way through  a
bar  and  then  taking  twice  as  long as that to go through the last
quarter, for instance.  He never does anything terrible and  he  never
gets off by a whole beat.
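
[Editor's note: one plausible shape for the 'phase locking' described,
sketched in C.  Keep a running tempo estimate, predict the next beat,
and correct gently rather than jumping, so an error never turns into a
musical faux pas.  The gain constant is invented.]

    struct tempo_lock {
        double period;     /* current beat-period estimate, seconds */
        double next_beat;  /* predicted time of the next beat       */
    };

    /* called each time a beat is actually detected from the baton */
    void on_beat(struct tempo_lock *t, double beat_time)
    {
        double err = beat_time - t->next_beat;  /* how wrong were we?   */
        t->period   += 0.3 * err;               /* adjust tempo gently  */
        t->next_beat = beat_time + t->period;   /* predict the next one */
    }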

And yet things like trills work.  Now a trill is something  that  goes
along at the same speed and, depending on how long that portion of the
piece lasts, you  have  a  different  number  of  notes.   The  actual
topology  of  the  score is changing as a result of the fact that this
person is doing something in real time that the computer just couldn't
control  ahead  of  time.   He  had to add more notes in order to stay
around.

I realized that I have to go back one.  I have sound examples of this.
This actually works.  Barry Vercoe did this, or is doing this, and I
have to explain that he's doing it for a piece in October.  He's doing
a  piece for flute and harpsichord.  The flautist is playing the flute
part and the computer is playing the harpsichord  part.   The  obvious
apology applies in that this does not sound like a harpsichord.  But
there is no apology whatsoever for the roughness of the performance
because it only just now has started to work and the computer does not
really have all the information he needs.  The  computer  is  watching
the  key  clicks now but is not, in this example, hearing how hard the
flute is being played so the computer doesn't really have any accurate way to
detect  the  onset  of a note.  Now it's a real fool of a flautist who
presses his [?] key to form a note just when he starts blowing or just
when  he wants the psychological onset of a note to be.  For one thing
he's actually even making sound before the psychological onset of  the
note.  And for another, why rush when you can finger your flute a
little bit ahead of time?  So things are rough, but  they're  working.
And it's actually possible to hear in this next example, I think, what
is sort of like a caricature, at least, of the interplay between two
real performers.  And what is now a caricature, I think, will in the
next few years be a reality.  And I hope in the  next  few  months  it
will  be  a reality that at least in this small case we really have an
example of a man and a machine playing  together.   So  with  all  the
necessary  disclaimers I will play you what that sounds like so far on
a test piece.  The real piece will be one written for the occasion and
I  assure  you that pieces written for computers sound a lot better on
the computer than pieces written for  other  instruments.   So  here's
what this sounds like ...  Oh, you can expect to hear some problems on
startup obviously because the last thing you do in these systems when
you're  designing  them,  at  least in my experience, is you deal with
problems starting up.  That's really a hard and niggling  detail.   So
forget  about  those  but  definitely  listen  to  the  pattern.   For
instance, following the flute who is not playing  strict  [?]  or  the
beat.  OK?  This is a miracle, I think.

[taped example, baroque or early classical period]
..
..
[end of example]

Now in that example the computer essentially knew what  the  notes  in
his  score  and  what  the  notes in the flute score were.  And, to my
knowledge, it didn't have the benefit of  any  additional  information
besides  that.   I could certainly think of other kinds of information
you would want to have.  Such as the history of previous
performances.  That's a real obvious one.  If you know what the
flautist is going to do, if you know that he tends to race something,
then it's much easier to stay with him.

..  [after some rambling, the tape is played again] ...

So, this is really where we are right now.  That is ongoing research so
we  have  reached  the present.  And ...  I'll show you this ...  I'll
play you as well as we've been able to do on  this  one  so  far.   [I
assume   the   speaker   is   referring   to   the  'computer-follows-
live-conductor' slide] I don't have a real example of this but I  have
a  fake  one  which is at least the computer hearing the conductor and
the conductor trying to tell the computer which only knows how to play back
strict meters ...  The conductor is trying to tell the computer how to
play it expressively at least as far as tempo is concerned.  It  would
have been fun to play the non-conducted example right at the beginning
of the talk because this is an example  of  the  worst  thing  that  a
computer  can  do,  which is to take a piece and play it back beat for
beat, click for click.  And that sounds like this ...  Oh ...  'Mozart
G  major  duo  for  violin and viola' ...  And the computer is playing
both the violin and viola part which is  going  to  sound  like  [many
words garbled].

[taped example, baroque or early classical period]
..
..
[end of example]

[at this point the attendee ran out of tape on side A  and  turned  to
side B]

..  computer trained to watch our  current  excuse  for  a  conducting
baton  which is essentially a stick with a photocell on it.  The stick
goes up and down and the photocell gets more and  less  sunlight.   So
this  is one musical input device.  Pretty low bandwidth and you can't
beat anything more than just a straight downbeat  or  else  the  thing
just  has  no  way  of knowing what's going on.  All it sees is up and
down or depending on the room light it might [?].

Here at least is as far as we can get toward the real time control  of
the playback of that same sound.

[taped example, baroque or early classical period]
..
..
[end of example]

The tough thing to do there, actually, was to play  it  smoothly.   It
turns  out that it's very easy just to detect the minima of a function
of time and hence to extract some kind of beat and to never be off  by
more  than  one  beat of the tempo that the person's playing.  But you
are faced with the fact that you have to predict when  the  person  is
next going to lower the baton and hence the light level will decrease.
And you have to do it in such a way that something like a trill  comes
off without accelerating or decelerating or something else worse.
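
[Editor's note: the easy half -- detecting the minima of the sampled
photocell signal -- might look like this in C.  Everything here is
invented for illustration; the hard half, predicting the next minimum,
is exactly what the phase-locking sketch earlier was about.]

    /* scan a buffer of light-level samples; each local minimum is the
       stick bottoming out, i.e. one candidate beat */
    int find_beats(const double *light, int n,
                   double *beat_times, double sample_period)
    {
        int i, count = 0;
        for (i = 1; i + 1 < n; i++)
            if (light[i] < light[i-1] && light[i] <= light[i+1])
                beat_times[count++] = i * sample_period;
        return count;
    }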

So that was today.  That's brought us right up to  the  present.   Now
I've  exhausted my sound examples because, of course, I can't play the
sounds of the future.  [laughter]

But to my thinking, I have five or ten more minutes because we  didn't
start  until  fifteen after so I want to quickly tell you what I think
the future holds.

Well, the future ...  I think it's time to really take the concept  of
performance apart.  And the reason you take a problem apart is so that
you can solve the problems of it separately.  Well, what does it  mean
for the orchestra to just play a score?  Well, it means the following:
[the speaker begins to write on the  board]  Someone  wrote  something
whose inputs are a whole bunch of score cards and a clock.  That thing
waits for the clock to reach the time of the next note.  Then when the
next  note  has  come  by, he sends the orchestra a command (we hope a
message which is fully symbolic and contains no pointers and on and on
and on ...).  So this person is a live process.  This is data and this
is sort of a system function.  [I assume the speaker  is  pointing  to
the picture he drew on the board] And so the orchestra sees the stream
of messages which correspond to the notes but  it  sees  them  at  the
right time and the orchestra can then be pretty dumb.  It can just see
the message being passed to some object inside which  is  instantiated
instrument and on the basis of that message it can turn the instrument
on and tell it to  play  a  note  given  its  frequency  and  whatever
appropriate  parameters.   At  the same time, there's no reason not to
have real input too.  That allows me to put two control  processes  in
the  first  and  simplest  example,  the second of which does a rather
different thing.  It waits for a note to be depressed on the  keyboard
and when the note is depressed it says 'OK, ?turn on?  the other
note'.  And then I pointed this guy to the orchestra too so  that  the
orchestra could hear the note.  Or the orchestra plays the note on the
basis of a keyboard.  Now what we have is a Synclavier-style playback,
which  is  to  say,  the score plays and you play.  And you hear both.
But that's simple.
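
[Editor's note: a sketch in C of the play control process just
described: its inputs are the score and a clock, it sleeps until the
next note's time, then sends the orchestra a self-contained message.
The interfaces are invented for illustration.]

    /* the clock interface: "wake me up when it's five thirty" */
    struct clock_if {
        void (*wake_me_at)(struct clock_if *self, double when);
    };

    /* a fully symbolic message -- no pointers, and on and on */
    struct note_msg { double freq, amp, dur; };

    void play_process(struct clock_if *clk,
                      const struct note_msg *score,
                      const double *when, int n,
                      void (*send_to_orchestra)(const struct note_msg *))
    {
        int i;
        for (i = 0; i < n; i++) {
            clk->wake_me_at(clk, when[i]);  /* blocks until that time */
            send_to_orchestra(&score[i]);   /* orchestra can be dumb  */
        }
    }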

Now I'll show you, assuming you've done the partition correctly  which
I  think  you're  close  to  doing, what you now get to be able to do.
Here's the conducting example as  something  a  user  might  just  put
together.   Now assuming someone smart has gone in and written what we
call a conductor control process, now what does the conductor  control
process  do?   It  watches this baton and on the basis of the baton it
acts like a clock.  Now what a clock does, if you're a computer,  what
a  clock does is ...  The person the clock's talking to actually tells
it 'Well, wake me up when it's five thirty'.   Then  the  clock  waits
until  its  notion  of time reaches five thirty and then it wakes this
guy up.

So instead of a clock, we have a conductor  and  a  conductor  control
process and you tell the play control process that instead of watching
the clock you watch this guy.  He'll pretend  he's  a  clock.   Follow
him.   And now you've just reconfigured at the user level this process
and changed it from one that plays strictly metrically to one that plays
back under time control.
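
[Editor's note: the reconfiguration just described, sketched in C with
the clock interface from the previous sketch.  The conductor control
process presents the same interface, so the play process is pointed at
it unchanged.  The names and the tempo handling are invented.]

    #include <stdio.h>

    struct clock_if { void (*wake_me_at)(struct clock_if *, double); };

    /* a strict metronomic clock: tape-recorder mode */
    void metronome_wake(struct clock_if *c, double when)
    {
        (void)c;
        printf("sleep until fixed time %.2f\n", when);
    }

    /* a conductor pretending to be a clock: same interface, but score
       time is stretched by the tempo extracted from the baton */
    struct conductor { struct clock_if base; double tempo_scale; };

    void conductor_wake(struct clock_if *c, double when)
    {
        struct conductor *k = (struct conductor *)c;
        printf("sleep until conducted time %.2f\n", when * k->tempo_scale);
    }

Switching the play process from strict meter to conducted time is then
one pointer assignment at the user level, not a rewrite.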

Or we can do the flute example ?under the same guise?  ?with the  same
guys?.  Oh, I had a 'read keyboard' control process back here but you
can see that the keyboard's output is to detect the onsets of notes.
And it detects, possibly later, some dynamic information about
them.  And there's no reason not to be able to just read any of your
input  devices  and,  on the basis of them, produce a stream of notes.
And once you have separated that part of the function  out,  then  you
can use any input device that someone has written a program to control
to generate a stream of notes in real time.

So anyway the flute plays the notes in real time  and  now  I  imagine
that  there  is  another  control  process  which we call the 'follow'
control process.  And it sees two things:  It's got a score.  It knows
what  the  flute score is.  It sees the notes that the flute's playing
and ...  well, of course you have to do error correction, because  the
flute might make a mistake, and there's all kinds of work this guy has
to do to really figure out what the flute's notion of time is ...  and
then  if  you  want,  you  can  use  that  to  control the time of the
playback.  So this 'follow' thing I'm imagining now again presents the
interface  of the clock.  It pretends it's a clock.  You tell the play
guy to watch the follower.   You  tell  the  follower  to  watch  this
control  process.   He  knows  where the flute is.  This guy knows the
computer score.  This guy knows the flute score.  In computer  science
terms we've 'partitioned' the problem.  And everybody has his own data
and data is well and explicitly scoped.  And  this  is  great  because
when  you  do that then you can change things around just as I've been
doing.
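
[Editor's note: a sketch of how the follower owns its own, explicitly
scoped data and pretends to be a clock, continuing the invented
interfaces above.  Error correction is reduced to a comment, since the
speaker says there is real work hiding there.]

    struct follower {
        /* presents the clock interface to the play process */
        void (*wake_me_at)(struct follower *self, double when);

        const int *flute_score;   /* this guy's own data            */
        int        pos;           /* the flute's place in its score */
    };

    /* the read-input process hands us each note the flute plays */
    void follower_hear(struct follower *f, int pitch)
    {
        if (f->flute_score[f->pos] == pitch)
            f->pos++;   /* advance our notion of the flute's time */
        /* else: error correction -- the flute might make a mistake */
    }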

So we jump forward a couple of years.  Now we teach the  score  to  be
intelligent.  Now this is really the most important thing of all.  And
it's also possibly the hardest thing of all.  The words here [pointing
to  a  slide]  are 'knowledge base' and that's a really loose usage of
the term.  And 'knowledge base' is  a  real  loose  usage  of  reality
anyway, I think.  [laughter]

Gee, these control processes that are supposed to be dealing with this
flute  thing  ...  The follow person needs to know the flute score and
the play control process needs to know the computer score.   Well  you
just  tell  them  to  go look their scores up.  That's all.  And those
scores are part of this knowledge base.  And  so  the  follow  control
process  just  walks  over  here and says 'Ok, I'm Ernie.  What do you
have for Ernie?  The user just named me Ernie.' And this  score  knows
exactly  what  information  he  has  for  him,  assuming  the thing is
designed right.  That's obviously going to be vague.  I mean  ...   we
have to be vague when we're talking about the future.

So you turn the computer on  and  you  wait  five  minutes  while  the
control  processes  kick  around and ask all kinds of great questions.
But the control processes are running in real time so they're  not  in
LISP,  they're  probably  in  C.  And when they're in performance, you
just put them together ...  I mean, in the command  stream  if  you're
talking Unix ...  You gave each one of them a name and you pointed them
at each other and on and on and on ...  Then they were ready to perform.
They're ready to do something.

Then the greatest thing of all is called 'rehearsal'.  Rehearsal is an
extension of performance.  Rehearsal is a performance of one in which
you learn something more about the score as a result  of  having  gone
through it.  And what you do is, you gather everybody around after the
performance is over and you have them tell  things  to  the  knowledge
base.   This guy who was following the flute probably has something to
say to the knowledge base about what the flautist tends to  do.   Does
he  race  this  section?   Expect  an  accelerando  there.   Expect  a
decelerando there.  Expect a sudden dynamic change there and on and on
and  on ...  The next time follower wakes up and does something, he'll
do a better job if  he  knows  something  about  the  history  of  the
rehearsal.  It's obvious.  It's true of real players.  What's true of
real players is true.  [laughter] And so what we invent is a way for
control processes to rehearse.  But what we say is that the composer,
and now the distinction between composer and performer is going  down,
but  the  composer  is  rehearsing  his  knowledge  base.   All right?
Because you look at the knowledge base and you go  through  the  piece
and then ?on the basis [?]?
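
[Editor's note: a sketch in C of what 'telling things to the knowledge
base' after a run-through might amount to: a running average, per
section, of how much the flautist raced or dragged.  The structure and
numbers are invented; 'knowledge base' is, as the speaker says, a loose
term.]

    #define NSECTIONS 32

    struct knowledge_base {
        double tempo_bias[NSECTIONS];  /* does he race this section?      */
        int    runs[NSECTIONS];        /* how many rehearsals contributed */
    };

    /* after the performance is over, each control process reports in */
    void rehearse(struct knowledge_base *kb, int section,
                  double observed_bias)
    {
        kb->runs[section]++;
        kb->tempo_bias[section] +=
            (observed_bias - kb->tempo_bias[section]) / kb->runs[section];
    }

    /* next time the follower wakes up, it looks its section up first:
       expect an accelerando there, a decelerando there, and so on */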

And you do that as often as you want.  And  now  if  you  want  to  do
something  else  to  it,  you  might  do it with a completly different
configuration.  Now suppose you want to back and do something unsimple
like  ..   you just want to have a virtual sliding pot.  And with that
you just wnat to control how some instrument plays.  Well then you  go
back  and you do that.  But you're able to do that because you're able
to tell these guys not to bother this time.

In conclusion, ..  [long aside] ...  what we hope to be able to do  in
the not too long term is to actually take this sucker and actually put
it on a stage and actually put real input devices on it and give a real
computer music performance.  And so far, it's there and we're trying.