
Conference rusure::math

Title:Mathematics at DEC
Moderator:RUSURE::EDP
Created:Mon Feb 03 1986
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:2083
Total number of notes:14613

1078.0. "Probability" by BEING::POSTPISCHIL (Always mount a scratch monkey.) Wed May 10 1989 13:27

    In a room with two one-way windows, a dealer shuffles three cards.  Two
    cards are black, and one is red.  After shuffling, the dealer places
    them on a glass table, face down.  A fair die is rolled to select one
    of the cards, with equal probability for each. 

    As the controller of the game, you have access to video cameras which
    are focused on the cards.  The table also has the numbers 0, 1, and 2
    etched into it adjacent to the cards so that each of the three camera
    views shows a card and a digit corresponding to the card's position. 

    Behind one of the windows is an observer who has been told that each
    time this performance is repeated, they will be shown one of the two
    unselected cards, as determined by a fair coin flip.  This observer has
    in fact seen several trials; the cards are shuffled and dealt, one is
    selected, a fair coin is flipped, and the card indicated by the coin is
    shown on a monitor.  This observer believes that if the card shown on
    the monitor is black, the probability that the remaining unselected
    card is red is 1/2. 

    Behind the other window is an observer who has been told that each time
    this performance is repeated, they will be shown an unselected black
    card -- either the only unselected black card or the card chosen by a
    fair coin flip if both unselected cards are black.  This observer has
    also seen several trials; they sometimes see a different card on their
    monitor than the first observer.  This observer believes that if the
    card shown on their monitor is black, the probability that the
    remaining unselected card is red is 2/3. 

    Now we begin the performance again.  The cards are shuffled.  The die
    selects card 0.  The first observer's monitor shows card 1.  The second
    observer's monitor shows card 1.  You, the controller, have not seen
    anything but card 1 (your computer has handled the showing of card 1). 

    What do you, the controller, think the probability of card 2 being red
    is? 

    The two observers get together and exchange all their information.
    What do they conclude about the probability of card 2 being red? 

    The probability of event B given event A is P(B and A)/P(A).  What is
    an event?
    
    Next, you go out onto the street and get two observers.  The first is
    told they will be shown the lowest-numbered unselected card.  The
    second is told they will be shown an unselected black card, just as the
    previous second observer was.  The cards are dealt, card 0 is selected,
    card 1 is red, and you kick both observers back onto the street.
    
    You get two more observers and tell them the same thing.  This time,
    card 0 is selected, card 1 is black, and both observers see card 1.
    
    The first observer believes the probability card 2 is red is 1/2.  The
    second observer believes it is 2/3.  Why? 


				-- edp
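
    A quick Monte Carlo check of the questions in .0 -- a minimal sketch in
    Python, assuming the two observers' coin flips are independent, and
    tallying each party's view of the performance described above (card 0
    selected, card 1 shown and black):

import random

def trial(rng):
    # one red and two black cards laid out at positions 0, 1, 2
    cards = ['B', 'B', 'R']
    rng.shuffle(cards)
    selected = rng.randrange(3)                 # fair die picks the selected card
    unselected = [i for i in range(3) if i != selected]

    # first observer: a fair coin picks one of the two unselected cards
    shown1 = rng.choice(unselected)

    # second observer: an unselected black card, a coin breaking the tie
    blacks = [i for i in unselected if cards[i] == 'B']
    shown2 = rng.choice(blacks)

    return cards, selected, shown1, shown2

def estimate(n=200000, seed=1):
    rng = random.Random(seed)
    seen1 = red1 = seen2 = red2 = seenc = redc = 0
    for _ in range(n):
        cards, selected, shown1, shown2 = trial(rng)
        if selected != 0:                       # condition on the die picking card 0
            continue
        card2_red = cards[2] == 'R'
        if shown1 == 1 and cards[1] == 'B':     # what the first observer saw
            seen1 += 1
            red1 += card2_red
        if shown2 == 1:                         # what the second observer saw
            seen2 += 1
            red2 += card2_red
        if shown1 == 1 and shown2 == 1:         # what the controller saw
            seenc += 1
            redc += card2_red
    print("first observer :", red1 / seen1)     # about 1/2
    print("second observer:", red2 / seen2)     # about 2/3
    print("controller     :", redc / seenc)     # about 2/3

if __name__ == "__main__":
    estimate()

    With enough trials the three estimates settle near 1/2, 2/3 and 2/3.
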
1078.1. by AITG::DERAMO (Daniel V. {AITG,ZFC}:: D'Eramo) Wed May 10 1989 15:50
	You are on the show "Let's Make a Deal."  Monty Hall shows
	you three doors, and you are told that there is a major prize
	behind one of them, and something worthless behind the other
	two.  You select one of the doors.  Monty then opens one of the
	unselected doors and shows you that there is something worthless
	behind it.  He then offers you a choice between what's behind the
	door you previously selected or what's behind the unselected,
	unopened door.  What is your choice?  Why?

	Dan
1078.2. by AITG::DERAMO (Daniel V. {AITG,ZFC}:: D'Eramo) Wed May 10 1989 15:50
	You are on the show "Let's Make a Deal."  Monty Hall shows
	you one hundred doors, and you are told that there is a major prize
	behind one of them, and something worthless behind the other
	ninety-nine.  You select one of the doors.  Monty then opens one of the
	unselected doors and shows you that there is something worthless
	behind it.  He opens another unselected door and shows you that
	there is something worthless behind it, too.  He continues showing
	you worthless junk behind unselected doors until there is only the
	door you selected and one other door unopened.  You are then
	offered your choice of what's behind either door.  What is your
	choice?  Why?

	Dan
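
    The games in .1 and .2 differ only in the number of doors, so one
    simulation covers both -- a sketch in Python, assuming Monty knows where
    the prize is and never opens the prize door or the door you selected:

import random

def monty(n_doors, switch, rng):
    prize = rng.randrange(n_doors)
    pick = rng.randrange(n_doors)
    # Monty opens every door except your pick and one other, never revealing
    # the prize: the door he leaves closed is the prize door unless you
    # picked it, in which case it is a randomly chosen dud.
    if pick == prize:
        left = rng.choice([d for d in range(n_doors) if d != pick])
    else:
        left = prize
    return (left if switch else pick) == prize

def win_rate(n_doors, switch, trials=100000, seed=2):
    rng = random.Random(seed)
    return sum(monty(n_doors, switch, rng) for _ in range(trials)) / trials

if __name__ == "__main__":
    for n in (3, 100):
        print(n, "doors: stay", win_rate(n, False), "switch", win_rate(n, True))
        # staying wins about 1/n of the time, switching about (n-1)/n
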
1078.3. by KOBAL::GILBERT (Ownership Obligates) Wed May 10 1989 16:46
>    Now we begin the performance again.  The cards are shuffled.  The die
>    selects card 0.  The first observer's monitor shows card 1.  The second
>    observer's monitor shows card 1.  You, the controller, have not seen
>    anything but card 1 (your computer has handled the showing of card 1). 

	(NB: you don't need to see card 1 -- it's black of course).

>    What do you, the controller, think the probability of card 2 being red
>    is? 

	Prob of being red = 2/3.

>    The two observers get together and exchange all their information.
>    What do they conclude about the probability of card 2 being red? 

	They now know everything the controller knows.  Since they have
	no reason to mistrust the controller, they reach the same
	conclusion as the controller:
	Prob of being red = 2/3.

	I was confused about the rest of the problem.  Were the cards still
	being selected by the same random procedure as before?  Were the
	observers told the truth?

>    You get two more observers and tell them the same thing.  This time,
>    card 0 is selected, card 1 is black, and both observers see card 1.
>    
>    The first observer believes the probability card 2 is red is 1/2.  The
>    second observer believes it is 2/3.  Why? 

	This new second observer's knowledge is the same as the original
	second observer.  And the second paragraph of the problem says that
	the second observer (correctly) believes the probability card 2
	is red is 2/3.

	For the first observer, ... suppose the die roll were done before
	the cards were shuffled, so it was decided that card 1 would be shown.
	Now the cards are shuffled, one of them is chosen to be card 1
	(which is displayed as black), the shuffling of the two remaining
	cards continues, and they are laid down.  Card 2 has a 1/2 chance of
	being red.
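
    The re-ordering argument above is easy to check numerically -- a small
    sketch in Python, assuming the selected position (0) and the shown
    position (1) are fixed before the shuffle:

import random

# Decide up front that card 0 is selected and card 1 will be shown, then
# shuffle.  Given that card 1 turns up black, how often is card 2 red?
rng = random.Random(4)
shown_black = red_behind = 0
for _ in range(100000):
    cards = ['B', 'B', 'R']
    rng.shuffle(cards)
    if cards[1] == 'B':
        shown_black += 1
        red_behind += cards[2] == 'R'
print(red_behind / shown_black)     # about 0.5
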
1078.4. "The more, the merrier" by NIZIAK::YARBROUGH (I PREFER PI) Wed May 10 1989 18:55
>	You are on the show "Let's Make a Deal."  Monty Hall shows
>	you one hundred doors, ...

This one (and, by analogy, the previous) is easy. At the outset the odds
are 99-1 that you picked a worthless door, i.e. that the valuable door is 
among those that Monty has to pick from. Since his choices are not
arbitrary - he must not show you the good one - at the end it is 99-1 that
the remaining door is the good one, not by chance but by the process of eliminating 98
worthless doors; so you should always switch. 

Lynn 
1078.5. "definitions, questions and quibbles" by PULSAR::WALLY (Wally Neilsen-Steinhardt) Thu May 11 1989 16:45
re: < Note 1078.0 by BEING::POSTPISCHIL "Always mount a scratch monkey." >

>    Behind the other window is an observer who has been told that each time
>    this performance is repeated, they will be shown an unselected black
>    card -- either the only unselected black card or the card chosen by a
>    fair coin flip if both unselected cards are black.  
    
    Presumably this other observer does not see the result of the coin
    flip at the time, although he/she has the opportunity to verify
    afterwards that a coin flip was done and the result used as appropriate.
    
>    The probability of event B given event A is P(B and A)/P(A).  What is
>    an event?
    
    Some definitions from _Statistics_, Winkler and Hays:
    
    A simple experiment is some well-defined act or process that leads
    to a single well-defined outcome.
    
    The set of all possible distinct outcomes for a simple experiment
    will be called the sample space for that experiment.  Any member
    of the sample space is called a sample point, or an elementary event.

    Any set of elementary events is known simply as an event.
    
    
    In the experiment described in .0, I believe that an event is the
    final outcome of the four operations:
    
    	shuffle and deal out the three cards
    	roll the die and select one card
    	flip the coin and show the card to first observer 
    	optionally flip the coin and show the card to second observer 
    
    I am getting a different answer from that given in .3, but I am still
    checking it.
    
    I agree with .3 that the latter part of .0 is ambiguous.  Are the
    observers being told the truth?
    
>    told they will be shown the lowest-numbered unselected card.  
    
    Does this refer to the numbers etched into the table, or to numbers
    on the cards themselves?
    
    One final quibble:  since the mechanism of this game is not described,
    either observer or the controller could assign a non-zero probability
    to the hypothesis that the mechanism was a malicious being who will
    behave one way during preliminary trials and another way in the
    final trial.  If the mechanism was a simple, verifiable piece of
    hardware or software, this hypothesis could be eliminated.  So I
    will ignore it in the future.
1078.6. "I got same answer as .3" by PULSAR::WALLY (Wally Neilsen-Steinhardt) Fri May 12 1989 17:30
re: < Note 1078.0 by BEING::POSTPISCHIL "Always mount a scratch monkey." >

>    What do you, the controller, think the probability of card 2 being red
>    is? 
    
    2/3.  After some checking, I get the same answer as .3.  Interesting
    that the controller in this case gets exactly the same probability
    as second observer.  The controller does get to eliminate some
    elementary events, but it turns out that the controller's smaller
    set of events still gives probability 2/3.
    
    You can solve this problem using Bayes' Law or by enumerating
    elementary events.  It is interesting that because of the optional
    coin flip for second observer, the elementary events are not equally
    probable.

>    The two observers get together and exchange all their information.
>    What do they conclude about the probability of card 2 being red? 
    
    They are now working from the same data as the controller, so they
    get the same answer.

>    The first observer believes the probability card 2 is red is 1/2.  The
>    second observer believes it is 2/3.  Why? 
    
    This is really the same problem.  The fact that card 1 was chosen
    to show to first observer by fiat or coin flip does not change the
    relative probabilities of the elementary events.  
    
    If you solve the problem by Bayes' Law, then the coin flip introduces
    an extra factor of 0.5 into both numerator and denominator of the
    fraction which determines the probability for the first observer.  It
    cancels, so the answer is unchanged.
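
    For anyone who wants the arithmetic spelled out, here is an exact
    enumeration of the elementary events listed in .5 -- a sketch in Python
    using exact fractions, assuming the die has already selected card 0 and
    that the two coin flips are independent.  As noted above, the elementary
    events are not equally probable (some have weight 1/6, some 1/12):

from fractions import Fraction
from itertools import permutations

def events():
    # deal, observer 1's coin, observer 2's (optional) coin, with weights
    for deal in set(permutations(['R', 'B', 'B'])):       # 3 distinct deals
        p_deal = Fraction(1, 3)
        unselected = [1, 2]                               # card 0 is selected
        for shown1 in unselected:                         # observer 1's coin
            p1 = Fraction(1, 2)
            blacks = [i for i in unselected if deal[i] == 'B']
            for shown2 in blacks:                         # observer 2's choice
                p2 = Fraction(1, len(blacks))
                yield deal, shown1, shown2, p_deal * p1 * p2

def conditional(seen):
    # P(card 2 is red | the event "seen" occurred)
    num = den = Fraction(0)
    for deal, shown1, shown2, p in events():
        if seen(deal, shown1, shown2):
            den += p
            if deal[2] == 'R':
                num += p
    return num / den

print(conditional(lambda d, s1, s2: s1 == 1 and d[1] == 'B'))  # first observer: 1/2
print(conditional(lambda d, s1, s2: s2 == 1))                  # second observer: 2/3
print(conditional(lambda d, s1, s2: s1 == 1 and s2 == 1))      # controller: 2/3
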
1078.7. by ALIEN::POSTPISCHIL (Always mount a scratch monkey.) Mon May 15 1989 12:04
    Re .*:
    
    Yes, the observers are told the truth in the second experiment.  Also,
    the "lowest-numbered card" refers to the numbers etched in the table;
    the cards have no numbers.  I failed to explain the purpose of the
    second experiment.  It eliminates the video monitors.  By kicking
    observers onto the street, you can repeat the experiment until the
    different processes for the two observers coincide about which card is
    to be revealed, and your dealer can then reveal the chosen card to both
    observers -- now both observers see the same physical acts without any
    intervention. 
    
    The purpose of this topic is to bring out something about what "events"
    are in probability.  In the first case, the first observer and the
    second observer have seen identical physical acts:  They saw the
    shuffle, the deal, the rolling of the die to select a card, and the
    revelation of one of the cards.  Yet they arrive at different
    conclusions about the probability.  Clearly this is a result of the
    information they are given.
    
    So probability is not just a statement about physical events.  It is
    also related to information.  I'm looking for a clearer definition of
    this.  I've never seen probability defined on a formal basis, where one
    could work from axioms to determine the probability of an event.
    
    
    				-- edp
1078.8. "interpretations of probability" by PULSAR::WALLY (Wally Neilsen-Steinhardt) Tue May 16 1989 19:49
    re:    <<< Note 1078.7 by ALIEN::POSTPISCHIL "Always mount a scratch monkey." >>>

>    The purpose of this topic is to bring out something about what "events"
>    are in probability.  In the first case, the first observer and the
>    second observer have seen identical physical acts:  They saw the
>    shuffle, the deal, the rolling of the die to select a card, and the
>    revelation of one of the cards.  Yet they arrive at different
>    conclusions about the probability.  Clearly this is a result of the
>    information they are given.
    
>    So probability is not just a statement about physical events.  It is
>    also related to information.  
    
    I've read several books on this subject, and I think you are asking an
    interesting question.  It is usually phrased in terms of the
    interpretation of probability values, not the definition of events. 
    All the interpretations I know about agree on the definition of an
    event (see previous reply) and on the mathematics of probability.
    
    The difference comes in the meaning of probability values, when applied
    in the real world.  The three interpretations I know about give these
    meanings:
    
    classical: probability values are the chances of events, known a priori
    or calculated by the laws of probability.
    
    frequentist: probability values are the limits of fractions of success
    in observations repeated indefinitely.  [ Note that this interpretation
    is often called classical, as in 1071.11 ]  
    
    Bayesian: probability values are representations of states of knowledge 
    about propositions.
    
    > I'm looking for a clearer definition of
>    this.  I've never seen probability defined on a formal basis, where one
>    could work from axioms to determine the probability of an event.
    
    Unless you believe in the classical interpretation, as defined above,
    it is not possible to determine any probability values from axioms. 
    Some knowledge of the world is required to put in the numbers.
    
    There are many books on formal mathematical probability, usually
    starting with axioms like
    
    	P( A and B ) = P( A given B ) * P( B )
    
    These books usually do not define probability, but treat it as an
    undefined term, for the same reason that geometry books no longer
    define a point or a line.
    
    A different axiomatic approach is taken by _Rational Descriptions,
    Decisions and Designs_, Myron Tribus, Pergamon, NY, 1969.  You may find
    this book interesting.  I got it from the Digital Library in the Mill,
    long ago.  I also got several research reports by E. T. Jaynes by
    interlibrary loan, which he was going to make into a book.  So what
    follows is probably from Tribus, but may be partly from Jaynes.
    
    Tribus takes as his axioms a dozen or so rules of rational thought, of
    which I remember only the most complex: "If X and Y are evidence for
    proposition A, then their combined effect on the probability of A must
    be independent of the order in which they are used."  From these, and a
    bit of the theory of functions, he deduces all the rules for
    manipulating probabilities which are usually taken as axioms.  This
    amounts to a demonstration that the Bayesian interpretation is
    consistent with all of mathematical probability.  He also shows that
    the frequentist and classical interpretations are special cases of the
    Bayesian interpretation.  He applied the Bayesian approach to a number
    of common problems.
    
    
    Note that a determined frequentist would not be impressed by your
    experiment, but might reply something like:
    
    "You begin by asking for the probability that a card is red, but this
    is not a sensible question.  The card is either red or it isn't.  The
    valid equivalent is to ask what fraction of cards in a long series of
    repetitions will be red.  The first observer gives a different answer
    from the second observer and the controller, but that is not
    surprising, because the first observer is seeing a different problem.
    If you ran the experiment using exactly the description given to
    the first observer, then the fraction would be 1/2 (this is faith, I
    have not worked it out.)  When you run it as it looks to second
    observer or controller, then the fraction is 2/3.  If you tell the
    first observer the truth, then all three will get the same answer.  The
    final part of .0 illustrates the problem, since you are explicitly
    ignoring some samples, exactly the ones which would allow the first
    observer to see the expected result."
    
    Any correction from a true frequentist is welcome, since I don't
    personally believe the previous paragraph.
1078.9. by ALIEN::POSTPISCHIL (Always mount a scratch monkey.) Wed May 17 1989 12:31
    Re .8:
    
    > You begin by asking for the probability that a card is red, but this
    > is not a sensible question.  The card is either red or it isn't.
    
    I wonder what a frequentist would make of quantum mechanics?  I imagine
    we could set up a situation where two observers of a two-slit electron
    diffraction experiment believe in different distributions because one
    has access to a detector at one of the slits and one does not.
    
    I'll give this more consideration.  Thanks for the information.
    
    
    				-- edp
1078.10. "I must be a Bayesian" by RELYON::HOWE Wed May 17 1989 13:47
Probably a fascinating subject.
    
RE: .7
>    could work from axioms to determine the probability of an event.
                                         ---
A Bayesian would say the probability is not deterministic, i.e. it is
subject to change based on new information.  It reflects the observer's
belief about the true state of nature (which may never be known).

RE: .8  (Nice summary, Wally)
>    [frequentist's statement]
>    "You begin by asking for the probability that a card is red, but this
>    is not a sensible question.  The card is either red or it isn't.  The
>    valid equivalent is to ask what fraction of cards in a long series of
>    repetitions will be red.

A Bayesian would counter that the "valid equivalent" is not a sensible
question in many cases, e.g., what is the probability that MY ride on
the space shuttle will end in disaster?

RE: .9    
>    I wonder what a frequentist would make of quantum mechanics?

Hmmm...I recall that Einstein's problem with quantum mechanics was
something like "God does not roll dice to determine physical laws."    

RE: .4
>arbitrary - he must not show you the good one - at the end it is 99-1 that
>the remaining door is the good one, not by chance but by the process of eliminating 98
>worthless doors; so you should always switch. 

I'm having a tough time swallowing this.  After Monty reveals the first
door, the odds are 98-1 that the selected door is the good one, and so on.
When he gets down to the last two, the odds are 1-1 for either door.
This is Bayesian revision in light of new information.  This is also
what one would expect if one walked out from an isolation booth
after the process of elimination was done, and were given the choice
of doors.

Again, the selected door either hides the major prize or it doesn't.
So what is your definition of probability, if not to reflect your
belief about the selected door hiding the prize?
    
    Rick        
1078.11. "Let's Make A Deal" by AITG::DERAMO (Daniel V. {AITG,ZFC}:: D'Eramo) Wed May 17 1989 14:30
     re .-1,
     
     Monty Hall offers you a choice of (the contents of) any of
     100 small boxes.  You know that one of the boxes contains a
     valuable prize and the other ninety-nine are empty.  You
     select one of the boxes.  Monty Hall has the contents of the
     other boxes all dumped into one large box.  You are now
     offered the choice between the contents of the small box
     that you had chosen earlier, or the contents of the large
     box.  Which do you choose, and why?
     
     Dan
1078.12. "maybe I am a frequentist" by PULSAR::WALLY (Wally Neilsen-Steinhardt) Wed May 17 1989 15:45
    re:                       <<< Note 1078.10 by RELYON::HOWE >>>
    
                           -< I must be a Bayesian >-
    
    Yes, it sounds like you are.

> A Bayesian would counter that the "valid equivalent" is not a sensible
> question in many cases, e.g., what is the probability that MY ride on
> the space shuttle will end in disaster?
    
    And the frequentist would repeat that probability statements about
    individual events are meaningless.  The discussion would degenerate
    from there, as discussions about foundations so often do.

>>    I wonder what a frequentist would make of quantum mechanics?
    
    I believe that any quantum mechanical experiment involving many
    repetitions will be consistent with the frequentist interpretation.  In
    fact, I think that the frequentist approach is the simplest and
    clearest way to view the probabilities in quantum mechanics.
    
    Contradictions, anyone?


>>arbitrary - he must not show you the good one - at the end it is 99-1 that
>>the remaining door is the good one, not by chance but by the process of eliminating 98
>>worthless doors; so you should always switch. 
>
>I'm having a tough time swallowing this.  After Monty reveals the first
>door, the odds are 98-1 that the selected door is the good one, and so on.
>When he gets down to the last two, the odds are 1-1 for either door.
>This is Bayesian revision in light of new information.  This is also
>what one would expect if one walked out from an isolation booth
>after the process of elimination was done, and were given the choice
>of doors.
    
    That has been bothering me too.  The key question, from a Bayesian
    point of view, is whether the contestant's personal probability
    of the statement "My selected door conceals the prize" changes when
    Monty opens the door.  I have decided it does not, because I knew from
    the beginning that Monty could open an unselected door and show a
    worthless prize.  You cannot use the usual argument about the
    equivalence of all the doors, because after the contestant selects, one
    door is not equivalent:  Monty will never select that door.
    
    Interestingly enough, I convinced myself that the above argument was
    correct by imagining a frequentist experiment: I select a door, Monty
    opens 99, and we tabulate how often the selected door conceals the
    prize.  My intuition says that in repeated trials the relative
    frequency will approach 0.01, so I searched for and found the Bayesian
    argument above.
1078.13. "Monty Bayes Deal" by RELYON::HOWE Wed May 17 1989 16:00
    re .-1,
    
    Not the same proposition.  How would your decision change if Monty
    showed you 98 of the 99 to be merged, and they were empty?
    
    Rick
1078.14. by AITG::DERAMO (Daniel V. {AITG,ZFC}:: D'Eramo) Wed May 17 1989 16:20
	The hundred doors and the hundred boxes are the same problem.
	In either case, you select one which has a probability of 0.01
	of being the one with the prize.  Then you are offered a choice
	between staying with that selection or switching.

	Dan
1078.15"I've got a little list..." - MikadoAKQJ10::YARBROUGHI prefer PiWed May 17 1989 18:3028
>RE: .4
>>arbitrary - he must not show you the good one - at the end it is 99-1 that
>>the remaining door is the good one, not by chance but by the process of eliminating 98
>>worthless doors; so you should always switch. 
>
>I'm having a tough time swallowing this.  After Monty reveals the first
>door, the odds are 98-1 that the selected door is the good one, and so on.

No, because while our first choice of a door was random, Monty's choices 
are not. Here's another way of looking at the problem. Monty puts the prize
behind door N; now he builds a random permutation of {1..N-1,N+1..100}
followed by N. Now that the table is built, all of Monty's actions are
*predetermined*. This is where the show starts. We will play the game 100 
times.

We select 1<= D <=100; Monty now reads 98 numbers (and opens each
associated door) from the permutation from top to bottom, skipping D when
it occurs. Now we know the prize is behind the door with one of the last
two numbers remaining. We switch; 99 times the switch will be to N, the
other time we had the bad luck to pick D=N and we get whatever was last in
the permutation as the door to our prize. 

Now if Monty's choices were also random, of course, he would build the 
table as a permutation of {1..100}, so 98% of the time N would appear among 
the first 98 numbers and he would inadvertently reveal the real prize. No
one would watch the show; they would all be in the contestant's line. 

Lynn 
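
    The table-driven description above translates almost directly into code
    -- a sketch in Python, assuming a 100-door game and a contestant who
    always switches:

import random

def lynns_game(rng, n=100):
    # Monty fixes the prize door and precomputes his opening order: a random
    # permutation of the other doors, with the prize door forced to the end.
    prize = rng.randrange(n)
    order = [d for d in range(n) if d != prize]
    rng.shuffle(order)
    order.append(prize)

    pick = rng.randrange(n)                            # our random choice D
    opened = [d for d in order if d != pick][:n - 2]   # read the table, skipping D
    last = (set(range(n)) - set(opened) - {pick}).pop()
    return last == prize                               # does switching win?

rng = random.Random(3)
trials = 20000
print(sum(lynns_game(rng) for _ in range(trials)) / trials)   # about 0.99
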
1078.16. by HPSTEK::XIA Wed May 17 1989 19:34
    re -1
    
    I completely agree with your reasoning.  When you first pick
    a door the odds are 1/100, and that does not change.  So at the end,
    if you don't switch, your odds are still 1/100.  That means the odds
    that the other one contains the prize are 99/100.
    
    It is very easy to check this problem by experiment.  If 100 boxes
    are too many, try 10 boxes (the odds then are 9-1).  Well, it is also
    a very simple programming problem....
    
    Eugene
                                         
1078.17. "Let's make a bet" by POOL::HALLYB (The Smart Money was on Goliath) Wed May 17 1989 20:03
    I agree with Eugene and Lynn, you should always switch.  If anyone
    feels strongly that staying with their choice results in 1-1 odds,
    well let's just say we should be able to set up a friendly wager.
    
    Thanks for the summary Wally.  And the question:  "what are the chances
    that the shuttle will blow up on MY flight?" really points out the
    conflicts involved.
    
      John
1078.18. "two more books" by PULSAR::WALLY (Wally Neilsen-Steinhardt) Wed May 17 1989 20:31
    Here are two relevant and interesting books, both available from Dover
    Publications:
    
    _Probability, Statistics and Truth_, Richard von Mises, 1928 and 1951.
    This is a forceful statement of the superiority of the frequentist 
    interpretation over the classical interpretation, as defined in 1078.8.  
    The author seems to believe that he is arguing against the Bayesian
    interpretation as well, but even in 1951 that interpretation was not 
    well formulated enough to be a good target.
    
    _The Foundations of Statistics_, Leonard J. Savage, 1954 and 1972.
    This book marked the return of Bayesian ideas to a century which had
    been dominated by frequentist thinking.  It leaves out the developments
    in the field since 1954.
1078.19. by AITG::DERAMO (Daniel V. {AITG,ZFC}:: D'Eramo) Wed May 17 1989 23:15
	Two points.  First, the Monty Hall problem with three doors
	in .1 is essentially the same as one of the problems in .0
	(the one where the observer is shown an unselected black card
	and believes the probability asked for is 2/3).  The case with
	100 doors is to make it more obvious that the answer there is
	a probability of 99/100 that the prize is behind the door neither
	you nor Monty selected; so that you would be more willing to
	believe that the three door answer is 2/3.

	Second, I think that when I think of probability I get confused
	until I think of modelling the specific problem with numerical
	valued random variables with appropriate distributions.  Then I
	compute some expected value which is a lot like a probability.
	For example, in the three doors case, let X be a random variable
	representing not switching, with value one if you win the prize
	and zero if you don't.  Then E[X] = 1/3.  Likewise if Y is a
	random variable representing always switching, with value one if
	switching wins the prize and value zero otherwise, then E[Y] = 2/3.
	In fact, X is Binomially distributed with parameter p = 1/3.  Etc.
	It seems a lot like the frequentist approach but that is unclear.

	Dan
1078.20. by RELYON::HOWE Thu May 18 1989 13:07
    Yes, I see that the probability of winning by switching is greater
    given the initial probabilities.  It is interesting that we tend to
    assign probabilities on the basis of implicit assumptions and by
    imagining repeated experiments.  In Monty's case, it all rides on
    the initial probabilities.  How do you justify your assessment of
    .01 probability for any given box?  Based on 100 equally likely
    possible outcomes, and one possible choice?  But how do you know
    they're equally likely, especially if it's a one shot deal?
    Better yet, how do you know the probabilities are stationary?  Maybe
    there's someone behind the stage moving things around...  This is
    analogous to assuming that random variables come from a distribution,
    i.e. are Independent and Identically Distributed.
    
    Sometimes probability statements are made because there is insufficient
    reason to make any other statements.  We tend to want to look at the
    long run to confirm or deny this...but what if there is no long run?

    
1078.21. "Is this where I came in?" by AKQJ10::YARBROUGH (I prefer Pi) Thu May 18 1989 19:05
>    ... how do you know the probabilities are stationary?  Maybe
>    there's someone behind the stage moving things around...  

Right, that's known as the old shell game.
    
>    Sometimes probability statements are made because there is insufficient
>    reason to make any other statements.  

True ... and that can lead you down yet another rathole. Using the
Principle of Insufficient Reason, you can show that the probability of
absurd events on remote planets is as high as you like. 
1078.22"In the long run, we are all dead." KeynesPULSAR::WALLYWally Neilsen-SteinhardtFri May 19 1989 20:5364
    re:                       <<< Note 1078.20 by RELYON::HOWE >>>

>    the initial probabilities.  How do you justify your assessment of
>    .01 probability for any given box?  Based on 100 equally likely
>    possible outcomes, and one possible choice?  But how do you know
>    they're equally likely, especially if it's a one shot deal?
    
    In the classical interpretation, you must assume that equal
    probabilities are known a priori.
    
    In the frequentist interpretation, you must have access to a history
    which demonstrates equal likelihood.  This is one reason why frequentist
    analyses do not deal with unique cases.
    
    In Bayesian interpretation, you base your probabilities on all you know 
    about the situation.  The key to Bayesian analysis is to use *all* the 
    relevant knowledge you have to assign the probabilities.  As .21 points
    out, you can get absurd results by ignoring knowledge you have.  Depending 
    on what you know the probabilities may be different.
    
    If all you know is that there are 100 doors and the valuable prize has
    been placed behind one of them, then you may properly apply the
    Principle of Insufficient Reason to assign a probability of 0.01 to
    each door.  If you know that the prize is placed behind a door chosen
    by a standard gambling device (like the lottery number generators),
    then you can also assign 0.01.  If you have been watching the show and
    have noticed that certain doors are consistently favored or avoided,
    then you can use that history to assign a better distribution.  This
    may not be relevant to Monty Hall, but it was once used to crack the 
    Enigma cipher, based on operator bias in choosing 'random' keys.
    
>    Better yet, how do you know the probabilities are stationary?  Maybe
>    there's someone behind the stage moving things around...  This is
>    analagous to assuming that random variables come from a distribution,
>    i.e. are Independent and Identically Distributed.
    
    Right.  This is the point of the quibble in 1078.5. 
    
>    Sometimes probability statements are made because there is insufficient
>    reason to make any other statements.  We tend to want to look at the
>    long run to confirm or deny this...but what if there is no long run?
    
    Yes, we can assume many possibilities, and use experience to eliminate
    some of them.  Interestingly enough, there are some possibilities that
    cannot be eliminated by anything less than an infinite history:
    
    	The system is controlled by a malicious intelligence who will mimic
    	a simple system until the victim begins to play.
    
    	The system gradually changes from one mechanism to another, over
    	periods long compared to any reasonable sampling time.
    
    	The system changes suddenly and unpredictably, at intervals long
    	compared to any reasonable sampling time.
    
    Ultimately, we must invoke some knowledge beyond frequency results to
    assign probabilities.  This usually means looking into or reasoning
    about the mechanisms which drive the system.  The fact that this
    additional knowledge is required is one of the philosophic problems
    with the frequentist interpretation.
    
    In the Bayesian interpretation, long run information is great when we
    can get it, but when we cannot, we must get along without it somehow.
    We assign the probabilities based on what we do know, and take action.