T.R | Title | User | Personal Name | Date | Lines |
---|---|---|---|---|---|
992.1 | Fuzzy problem, fuzzy solution | AKQJ10::YARBROUGH | I prefer Pi | Wed Dec 14 1988 18:20 | 11 |
| This problem is discussed in texts on fuzzy math {set theory, logic, and
arithmetic} and, if I read my available texts correctly, it requires
additional information about the nature of the random variables to discuss
intelligently. I am just getting into this area and am not yet able to do
that, but it appears that there are several different solutions depending
on the choice of metrics available.
One such text is Negoita and Ralescu, "Simulation, Knowledge-based Computing,
and Fuzzy Statistics" [van Nostrand Reinhold].
Lynn
|
992.2 | | AITG::DERAMO | Daniel V. {AITG,ZFC}:: D'Eramo | Wed Dec 14 1988 19:43 | 32 |
| >>.0 Let U1 and U2 be independent and identically distributed random variables.
>> Both have a Uniform distribution over [a,b].
>>.1 requires additional information about the nature of the random variables
What more can be required than what was given?
>>.0 So what is the distribution of MAX(U1,U2)?
>> In particular, what is E(MAX(U1,U2))?
>>.0 Does the problem easily generalize to N random variables instead of 2?
The probability that a sample value of U1 is less than or
equal to x is zero for x < a, one for b < x, and
(x - a) / (b - a) for a <= x <= b.
The same is true for U2, U3, ..., Un. That's what it means
for a random variable to be uniformly distributed.
The probability that MAX(U1, ..., Un) is less than or equal
to x is the product of the probabilities of the independent
(by assumption) events U1 <= x, ..., Un <= x.
                                       { 0                  x < a
    So Prob(MAX(U1, ..., Un) <= x)  =  { ((x-a)/(b-a))^n    a <= x <= b
                                       { 1                  b < x
From this you can compute everything about the distribution;
for instance, the density function is the derivative of the
cumulative distribution function given above.
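The product-of-events argument above is easy to sanity-check numerically. A minimal Python sketch (the parameter values a, b, n, x below are arbitrary choices for illustration, not from the thread):

```python
import random

def cdf_max(x, a, b, n):
    # CDF of MAX(U1, ..., Un) for n independent Uniform[a, b] variables:
    # the product of the n individual event probabilities Prob(Ui <= x).
    if x < a:
        return 0.0
    if x > b:
        return 1.0
    return ((x - a) / (b - a)) ** n

# Monte Carlo check of the formula at one sample point.
random.seed(1)
a, b, n, x = 0.0, 1.0, 2, 0.7
trials = 200_000
hits = sum(max(random.uniform(a, b) for _ in range(n)) <= x
           for _ in range(trials))
print(hits / trials, cdf_max(x, a, b, n))  # the two should agree closely
```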
Dan
|
992.3 | Thanks, Dan | POOL::HALLYB | The smart money was on Goliath | Thu Dec 15 1988 13:53 | 14 |
992.4 | yes, a simple problem | PULSAR::WALLY | Wally Neilsen-Steinhardt | Thu Dec 15 1988 14:52 | 6 |
| I will agree with .2 and .3, and disagree with .1. This is a simple
problem in traditional probability theory for anyone who is better
at rigor and algebra than I am.
I got roughly the results of .2 and .3, but with so much messy
hand-waving that I was embarrassed to put it in here.
|
992.5 | Quick simplification attempt. | 5540::COOPER | Topher Cooper | Fri Dec 16 1988 16:32 | 46 |
| Let the two random variables (each independently uniformly distributed
from a to b) be called X1 and X2. Let X be max(X1, X2), then the
basic question was: what is E[X]? The solution presented was:
                2b^3     ab^2     a^3
                ----  -  ----  +  ---
                 3        2        6
       E[X] =  -----------------------
                       (b-a)^2
with a plea to simplify if it were possible. I don't have access
to MAPLE at the moment but I think that I can apply some problem
domain information to simplify this considerably (the same answer
should be obtainable by algebra but I was unable to do this with
some quick tries, so maybe I've made some conceptual or algebraic
error here). I haven't checked the above formula but am simply
assuming it is correct.
Define Y1 = (X1 - a)/(b-a) and Y2 = (X2 - a)/(b-a) (essentially
Yn is Xn in a different coordinate system). And let Y = max(Y1, Y2).
Y thus equals max((X1-a)/(b-a), (X2-a)/(b-a)), which equals,
(max(X1,X2) - a)/(b-a), since max is a pseudo-linear operator (as long
as (b-a) is positive). E[Y] is therefore, E[(max(X1,X2) - a)/(b-a)].
Expectation is a linear operator so:
E[Y] = (E[X] - a)/(b-a)
and so:
E[X] = E[Y]*(b-a) + a
But Y1 and Y2 are also uniformly distributed variables (a=0, b=1),
so by the above formula:
E[Y] = 4/6 = 2/3,
substituting we get:
                                  2b + a
       E[X] = 2b/3 - 2a/3 + a  =  ------
                                    3
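The rescaling argument can be checked by simulation. A short Python sketch (a and b below are arbitrary test values):

```python
import random

def expected_max2(a, b):
    # Closed form from the rescaling argument: E[max(X1, X2)] = (2b + a)/3.
    return (2 * b + a) / 3

random.seed(7)
a, b, trials = 1.0, 2.0, 200_000
sim = sum(max(random.uniform(a, b), random.uniform(a, b))
          for _ in range(trials)) / trials
print(sim, expected_max2(a, b))  # both should be close to 5/3
```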
Did I make some stupid error?
Topher
|
992.6 | Error. | 5540::COOPER | Topher Cooper | Fri Dec 16 1988 17:53 | 18 |
| RE:.5
I did what I should have done before and tried some sample a's and
b's in the "messy" formula and in my "simplified" one, and got
mismatched values, so there was a mistake somewhere.
I looked at the original equation and noted that if a=0, and b=1
then E[X] = 2/3, and if a=1 and b=2 then the equation gives E[X]
= 7/2. But the latter problem is the first problem simply shifted
up by one, and the latter expectation should be simply one plus
the former expectation.
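That shift test is easy to mechanize with exact rational arithmetic. A sketch that simply re-evaluates the "messy" formula from .5 at the two parameter pairs:

```python
from fractions import Fraction as F

def messy(a, b):
    # The "messy" formula from .5: (2b^3/3 - ab^2/2 + a^3/6) / (b-a)^2.
    a, b = F(a), F(b)
    return (2*b**3/3 - a*b**2/2 + a**3/6) / (b - a)**2

print(messy(0, 1))  # 2/3, as noted
print(messy(1, 2))  # 7/2 -- but shift invariance demands 2/3 + 1 = 5/3
```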
I have still not checked the derivation, but clearly there is an
error in the "messy" formula. (There may well be errors in my
"simplification" as well, of course).
Topher
|
992.7 | | AITG::DERAMO | Daniel V. {AITG,ZFC}:: D'Eramo | Fri Dec 16 1988 18:06 | 15 |
| The correct form for E[X] is

                2b^3              a^3
                ----  -  ab^2  +  ---
                 3                 3
       E[X] =  -----------------------
                      (b - a)^2

which does reduce to (2b + a)/3.
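The reduction can be confirmed with exact rational arithmetic; a quick sketch (the test points are arbitrary):

```python
from fractions import Fraction as F

def corrected(a, b):
    # Dan's corrected formula: (2b^3/3 - ab^2 + a^3/3) / (b - a)^2.
    a, b = F(a), F(b)
    return (2*b**3/3 - a*b**2 + a**3/3) / (b - a)**2

# The corrected form should agree with (2b + a)/3 wherever b != a.
for a, b in [(0, 1), (1, 2), (-3, 5), (2, 7)]:
    assert corrected(a, b) == F(2*b + a, 3)
print("matches (2b + a)/3 at all test points")
```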
That was a good idea to approach it that way.
Dan
|
992.8 | An exercise for the casual reader | POOL::HALLYB | The smart money was on Goliath | Fri Dec 16 1988 18:37 | 9 |
| (The original problem should have stated b>a as a condition, hence b-a>0.)
Very clever, Topher, to shift the problem into a domain where the error
doesn't exist (a=0), then solve and translate back. Perhaps this is a
new technique, hereby named the "Cooper Process".
Does anybody want to generalize this to N variables? Given that
MAX(U1,U2,U3) is MAX(MAX(U1,U2),U3), it shouldn't be difficult.
John
|
992.9 | re .11 -- make that (nb+a)/(n+1) | KOBAL::GILBERT | Ownership Obligates | Fri Dec 16 1988 18:40 | 15 |
| It's easier to work with variables that are uniformly distributed
over the range [0,1). Then
E[MAX(U1,...,Un)] = 1 - 1/(n+1).
Taking *this* result, we see that for variables Vi uniformly distributed
over the range [a,b), we have:
E[MAX(V1,...,Vn)] = a + (b-a) E[MAX(U1,...,Un)]
                  = b - (b-a)/(n+1)
                  = (nb-a)/(n+1)
I think this is right.
|
992.10 | Blush | 5540::COOPER | Topher Cooper | Fri Dec 16 1988 19:52 | 17 |
| RE: .8 (John)
Thanks, but I'm afraid I can't claim such prescience -- I didn't
know there was an error much less that it vanished when a equaled
0. It just occurred to me that the position/scale invariance of
the problem implied that the formula should be a linear function
of the constant value for the canonical uniform distribution; I then
used the formula given to get that constant. It was pure luck that
the formula, though incorrect, was correct when a=0, and so my
result was right anyway.
If anyone can think of some way to make this process useful without
prescience I'll be glad to share joint eponymy with them, i.e.,
the "Cooper-X Process" (I refuse to consider the possibility of
the "X-Cooper Process"). :-) :-) ;-)
Topher
|
992.11 | | 38863::DERAMO | Daniel V. {AITG,ZFC}:: D'Eramo | Fri Dec 16 1988 20:57 | 18 |
| re .9
>> E[MAX(V1,...,Vn)] = a + (b-a) E[MAX(U1,...,Un)]
>>
>> = b - (b-a)/(n+1)
>>
>> = (nb-a)/(n+1)
>>
>> I think this is right.
Close, but the last equality should be
= (nb+a)/(n+1)
(Minus a minus is plus, or consider the case n=2, or
however.)
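The corrected n-variable formula can also be checked by simulation; a short sketch with a, b, and n chosen arbitrarily:

```python
import random

def expected_max(n, a, b):
    # Corrected closed form: E[MAX(V1, ..., Vn)] = (n*b + a) / (n + 1).
    return (n * b + a) / (n + 1)

random.seed(42)
a, b, n, trials = 2.0, 5.0, 3, 200_000
sim = sum(max(random.uniform(a, b) for _ in range(n))
          for _ in range(trials)) / trials
print(sim, expected_max(n, a, b))  # both should be close to 17/4 = 4.25
```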
Dan
|