Probability Tutorials is a Public Group with 360 members.
Help !!!!!!
 Hi Noel,
Please help me with this one:

Suppose E|X| < infinity, and let mu = E(X). Fix c < d.
Let ~X = c, X, d according as [X < c], [c <= X <= d], [X > d], and set ~mu = E(~X).

Prove: E(|~X - ~mu|^r) <= E(|X - mu|^r) for all r >= 1.
Regards,
Shuva
PS: I have been able to solve the problem for the case mu = ~mu (and did not need the condition r >= 1 there, i.e. it is true for any r), but not for mu != ~mu. I am not very sure whether the problem is correct; if it is wrong, can you find a counterexample? You may have to use the c_r inequality, Minkowski's, Jensen's, or whatever.
Tony Wong <tw813@...> wrote:
hello all,
I have the following question which I hope someone can
help with:
Given that X_t is a Brownian motion with drift rate u and volatility sigma, and X_0 = a > 0, define T(b), with b > a, to be the first time X_t hits the level b. Now define Y_t = X_{min(t, T(b))} (i.e. Y_t = X_t if t < T(b), and Y_t = b for all t >= T(b)). For each fixed t, Y_t has a mixed distribution with a point mass at b and a continuous density on (-infinity, b). My question is: how can one derive the density part? (For the case where X_t is a standard Brownian motion, this question is not too hard.) Also, can someone tell me in which book I could find an answer to this question?
Many thanks,
Tony
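[Editor's sketch, not part of the thread.] One standard route is the reflection principle combined with a Girsanov drift change. The closed form used below is my own derivation and should be treated as an assumption to be checked; the script verifies numerically that the proposed continuous part plus the point mass at b integrates to 1.

```python
# Sanity check for the absorbed Brownian motion Y_t = X_{min(t, T(b))}.
# ASSUMPTION (mine, not from the thread): the continuous part of the law of
# Y_t is the killed-BM density obtained from reflection plus a Girsanov
# drift correction. Check: continuous mass + point mass at b should be 1.
import math

def phi(z):
    # standard normal pdf
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def Phi(z):
    # standard normal cdf
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def density(x, t, a, b, mu, sigma):
    # conjectured sub-density of Y_t on (-infinity, b)
    s = sigma * math.sqrt(t)
    refl = math.exp(2.0 * mu * (b - a) / sigma ** 2)
    return (phi((x - a - mu * t) / s)
            - refl * phi((x - 2.0 * b + a - mu * t) / s)) / s

def point_mass(t, a, b, mu, sigma):
    # P(Y_t = b) = P(T(b) <= t), the standard hitting-time probability
    s = sigma * math.sqrt(t)
    refl = math.exp(2.0 * mu * (b - a) / sigma ** 2)
    return Phi((a - b + mu * t) / s) + refl * Phi((a - b - mu * t) / s)

a, b, mu, sigma, t = 1.0, 2.0, 0.3, 1.0, 1.0
n = 50000
lo = b - 10.0 * sigma * math.sqrt(t)          # practically -infinity
dx = (b - lo) / n
cont = sum(density(lo + (i + 0.5) * dx, t, a, b, mu, sigma)
           for i in range(n)) * dx            # midpoint rule
total = cont + point_mass(t, a, b, mu, sigma)
print(total)  # should be very close to 1.0
```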
Yahoo! Groups Links
To visit your group on the web, go to:
http://groups.yahoo.com/group/probability/
To unsubscribe from this group, send an email to:
probabilityunsubscribe@yahoogroups.com
Your use of Yahoo! Groups is subject to the Yahoo! Terms of Service.

Here's a heuristic sketch of a very special case. Let's suppose that c = -\infty, that X has a "nice" density ("nice" here means whatever it needs to mean so that all my dubious differentiations are justified), and that r = 2k for some positive integer k.
Then ~X = min(X,d). Define g(t) = E[min(X,t)] and observe that

g'(t) = E[1_{X > t}] = P(X > t).

Also note that ~mu = g(d).

Now define h(t) = E[(min(X,t) - g(t))^{2k}], so that E|~X - ~mu|^r = h(d). Note that

h'(t) = 2k E[(min(X,t) - g(t))^{2k-1} (1_{X > t} - g'(t))].

If we let Y be an independent copy of X, then

1_{X > t} - g'(t)
= 1_{X > t} P(Y < t) - 1_{X < t} P(Y > t)
= E[1_{X > t > Y} | X] - E[1_{X < t < Y} | X].

Hence,

h'(t) = 2k E[(min(X,t) - g(t))^{2k-1} 1_{X > t > Y}]
      - 2k E[(min(X,t) - g(t))^{2k-1} 1_{X < t < Y}]
     = 2k E[(t - g(t))^{2k-1} 1_{X > t > Y}]
      - 2k E[(X - g(t))^{2k-1} 1_{X < t < Y}].

Since 2k-1 is odd,

(X - g(t))^{2k-1} 1_{X < t < Y} <= (t - g(t))^{2k-1} 1_{X < t < Y}.

Therefore,

h'(t) >= 2k (t - g(t))^{2k-1} E[1_{X > t > Y} - 1_{X < t < Y}].

But since X and Y are iid, this expectation is 0, so h is a nondecreasing function of t. Thus,

E|~X - ~mu|^r = h(d) <= lim_{t -> \infty} h(t) = E|X - mu|^r. QED
This argument is so convoluted, I don't have much confidence in it.
(Where's my mistake?) Even if it is correct, hopefully someone else
(Noel?) can provide something more transparent.
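[Editor's sketch, not part of the thread.] The claimed monotonicity of h is easy to sanity-check numerically: take k = 1, so h(t) is just the variance of min(X,t), and let the law of X be the empirical distribution of a fixed sample (for which all expectations are finite sums).

```python
# Empirical check that h(t) = E[(min(X,t) - E[min(X,t)])^{2k}] is
# nondecreasing in t, for k = 1 and the empirical law of a fixed sample.
import random

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(10000)]  # stand-in for X

def h(t, sample):
    clipped = [min(x, t) for x in sample]            # min(X, t)
    m = sum(clipped) / len(clipped)                  # g(t) = E[min(X,t)]
    return sum((c - m) ** 2 for c in clipped) / len(clipped)

grid = [-2.0 + 0.25 * i for i in range(17)]          # t from -2 to 2
values = [h(t, xs) for t in grid]
print(all(values[i] <= values[i + 1] + 1e-12
          for i in range(len(values) - 1)))          # expect True
```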
Well, here's one mistake:
I wrote:

> Since 2k-1 is odd,
>
> (X - g(t))^{2k-1} 1_{X < t < Y}
> <= (t - g(t))^{2k-1} 1_{X < t < Y}.

What I wanted to say was:

> Since 2k-1 is odd,
>
> (X - g(t))^{2k-1} 1_{X < t < Y}
> <= (t - g(t))^{2k-1} 1_{X > t > Y}.

But that's clearly false. I then used this false "fact" to show that h'(t) >= 0, which finished the "proof".
But I think it's salvageable. Here's another way to show h'(t) >= 0. Start with what we know:

h'(t) = 2k E[(t - g(t))^{2k-1} 1_{X > t > Y}]
      - 2k E[(X - g(t))^{2k-1} 1_{X < t < Y}].

Since X and Y are iid, I can interchange them in the first expectation, giving

h'(t) = 2k E[(t - g(t))^{2k-1} 1_{X < t < Y}]
      - 2k E[(X - g(t))^{2k-1} 1_{X < t < Y}].

Now we use the fact that (X - g(t))^{2k-1} <= (t - g(t))^{2k-1} whenever X < t to conclude that h'(t) >= 0.
Any other glaring mistakes? (Besides the lack of rigor?) :)
> This argument is so convoluted, I don't have much confidence in it.
> (Where's my mistake?) Even if it is correct, hopefully someone else
> (Noel?) can provide something more transparent.

Jason,

I find this argument very impressive (there are a lot of ideas in it). I am a bit uneasy about differentiation under E[...], but I am confident that this is not a flaw in your proof (i.e. I am confident every equality could be rigorously justified). I know you have mentioned a mistake (cf. your next post), but I haven't seen any on first reading :) So I am gonna go through your next post now.
Noel.

> > Since 2k-1 is odd,
> >
> > (X - g(t))^{2k-1} 1_{X < t < Y}
> > <= (t - g(t))^{2k-1} 1_{X < t < Y}.

Since (2k-1) is odd, x -> x^(2k-1) is nondecreasing on R, and since X - g(t) <= t - g(t) on {X < t}, the inequality you wrote seems fine to me.

I also think this inequality allows you to conclude that h'(t) >= 0:

h'(t)
= 2k E[(t - g(t))^{2k-1} 1_{X > t > Y}]
- 2k E[(X - g(t))^{2k-1} 1_{X < t < Y}]
>= 2k E[(t - g(t))^{2k-1} 1_{X > t > Y}]
- 2k E[(t - g(t))^{2k-1} 1_{X < t < Y}]
= 2k (t - g(t))^{2k-1} E[1_{X > t > Y} - 1_{X < t < Y}]
= 0,

which is pretty much what you wrote in your first post. What am I missing?

> What I wanted to say was:
> > Since 2k-1 is odd,
> >
> > (X - g(t))^{2k-1} 1_{X < t < Y}
> > <= (t - g(t))^{2k-1} 1_{X > t > Y}.
> But that's clearly false.

I wonder why you wanted to write this in the first place.

> But I think it's salvageable. Here's another way to show h'(t) >= 0.
> Start with what we know: [...]
> Now we use the fact that (X - g(t))^{2k-1} <= (t - g(t))^{2k-1}
> whenever X < t to conclude that h'(t) >= 0.
>
> Any other glaring mistakes? (Besides the lack of rigor?) :)

This seems to me like the same as what you wrote in the first place, except you use the iid property before taking inequalities.

Anyway, it seems to me your proof is good. I certainly haven't seen a flaw.

Noel.

Yes, you're right. I thought about this in the morning on the bus
and again this evening before going to a movie. I had two lines of
reasoning in my head at once and when I reread my post, I confused
myself.
About differentiating under the expectation, something occurred to me at the theater. For any nonnegative random variable Y and any r > 0,

E[Y^r] = \int_0^\infty r z^{r-1} P(Y > z) dz.

So suppose we're given a real t and we want to compute E[min(X,t)]. Let Y = t - min(X,t). Then Y is nonnegative and we can apply the above (with r = 1) to get

E[min(X,t)] = t - \int_{-\infty}^t P(X < z) dz.

We can differentiate this with no problem. Something similar should be possible with the other expectation. I don't think this is necessary to justify the differentiation, but it makes me wonder whether the whole differentiation approach is unnecessary.

Another thing that is curious: the only property of the map x -> x^{2k} that was used is the fact that it has a nondecreasing derivative. In other words, it is convex. So perhaps the original poster's claim is true not only for functions of the form x -> |x|^r where r >= 1, but for all convex functions. If so, then maybe Jensen's inequality would be useful in creating a simpler proof.
I just feel that it shouldn't be this hard.
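[Editor's sketch, not part of the thread.] The tail-integral identity above is easy to check numerically; here it is for X ~ Exp(1), where both sides have closed forms.

```python
# Check E[min(X,t)] = t - \int_{-inf}^t P(X < z) dz for X ~ Exp(1),
# where E[min(X,t)] = 1 - exp(-t) in closed form, and P(X < z) equals
# 1 - exp(-z) for z >= 0 (and 0 for z < 0, so the integral starts at 0).
import math

def lhs(t):
    return 1.0 - math.exp(-t)              # E[min(X,t)] directly

def rhs(t, n=20000):
    dz = t / n                             # midpoint rule on [0, t]
    integral = sum(1.0 - math.exp(-(i + 0.5) * dz) for i in range(n)) * dz
    return t - integral

for t in (0.5, 1.0, 3.0):
    print(t, lhs(t), rhs(t))               # the two columns should agree
```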
> About differentiating under the expectation, something occurred to
> me at the theater. For any nonnegative random variable Y and any
> r > 0, E[Y^r] = \int_0^\infty r z^{r-1} P(Y > z) dz. [...]

Very good. I agree.

> We can differentiate this with no problem.

Yes.

> Something similar should be possible with the other expectation.

Yes.

> Another thing that is curious: the only property of the map
> x -> x^{2k} that was used is the fact that it has a nondecreasing
> derivative. In other words, it is convex. So perhaps the original
> poster's claim is true not only for functions of the form
> x -> |x|^r where r >= 1, but for all convex functions.

Well, I'd be happy already to crack this for r in [0,+oo[.

> If so, then maybe Jensen's inequality would be useful in creating
> a simpler proof.

Yes, I am guessing there should be a simpler proof, but I can't find it. I have been looking for a while now. Even looked for a counterexample for r = 2. I am going to give up soon :)

Noel.

I am very sorry Shuva,
I have been stuck for 2 hours on this.
I need to move on, otherwise I'll go crazy.
I think Jason has a good chance to find a complete proof.
Noel.

Thanks Jason, Noel and Myriam,
Actually the problem has a very simple solution if we exploit the fact that g(x) = |x|^r, r >= 1, is a convex function, i.e. we use the property

g(x) - g(y) >= (x - y)*g'(y) when g(.) is convex.

Put x = X - E(X) and y = ~X - E(~X), then take expectations on both sides.
Regards,
Shuva
PS: A friend of mine found this solution in the book Probability by Chow and Teicher, 1st edition (pp. 102-103).
Is it obvious that E[(x - y) g'(y)] >= 0 for this choice of x and y?
I am not sure I understand.
Noel.
> g(x) = |x|^r, r >= 1, is a convex function,
> i.e. we use the property g(x) - g(y) >= (x - y)*g'(y) when g(.) is convex.
> Put x = X - E(X),
> y = ~X - E(~X). Then take expectation on both sides.

For what it's worth, it's not obvious at all to me. But maybe we're
both missing something. The book is apparently "Probability Theory: Independence, Interchangeability, Martingales" by Yuan Shih Chow and Henry Teicher. I have put in a request for that book, just to see what's going on. If someone can enlighten me while I wait, that would be great.
Sorry folks, for the confusion.

Sorry Jason, you are right: the book is "Probability Theory: Independence, Interchangeability, Martingales" by Yuan Shih Chow and Henry Teicher.

If G(.) is a convex function then this property holds:

G(x) - G(y) >= (x - y) G'r(y),

where G'r(y) denotes the right-hand derivative of G at the point y; moreover G'r(.) is a nondecreasing function.

Now take x = X - E(X) and y = ~X - E(~X); therefore

G(X - E(X)) - G(~X - E(~X)) >= (X - E(X) - (~X - E(~X))) G'r(~X - E(~X)).

Now it can be shown that

(X - E(X) - (~X - E(~X))) G'r(~X - E(~X)) >= (X - E(X) - (~X - E(~X))) * K,

where K is some constant (the detailed argument is in the book, using the fact that G'r(.) is nondecreasing and also the fact that the function a(x) = x - ~x is monotonically increasing, where ~x = a if x <= a, ~x = x if a <= x <= b, and ~x = b if x >= b, with a < b). Thus we have

G(X - E(X)) - G(~X - E(~X)) >= (X - E(X) - (~X - E(~X))) * K.

Now taking expectations on both sides we have

E(G(X - E(X))) - E(G(~X - E(~X))) >= E((X - E(X) - (~X - E(~X))) * K).

Since E((X - E(X) - (~X - E(~X))) * K) = 0, we have

E(G(X - E(X))) - E(G(~X - E(~X))) >= 0.

Take G(x) = |x|^r (r >= 1) and we get the desired result, i.e.

E(|X - E(X)|^r) >= E(|~X - E(~X)|^r).

If it is still not clear please let me know and I will be glad to put in more details (or maybe I can also scan a couple of relevant pages from the book and send them across, but only if you ask; I don't want to unnecessarily overload the inboxes :)
Best Regards,
Shuva
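[Editor's sketch, not from Chow and Teicher.] The inequality itself can be sanity-checked by applying it to the empirical distribution of a sample, where every expectation is a finite sum; the theorem should hold exactly for any truncation window (c, d) and any r >= 1.

```python
# Empirical check of E|~X - E(~X)|^r <= E|X - E(X)|^r over several
# truncation windows (c, d) and exponents r >= 1, for the empirical
# law of a fixed Gaussian sample.
import random

random.seed(1)
xs = [random.gauss(0.0, 2.0) for _ in range(5000)]

def truncate(x, c, d):
    # ~X = c, X, d according as X < c, c <= X <= d, X > d
    return min(max(x, c), d)

def moment(sample, r):
    m = sum(sample) / len(sample)
    return sum(abs(v - m) ** r for v in sample) / len(sample)

ok = True
for c, d in [(-1.0, 1.0), (-3.0, 0.5), (0.0, 2.0)]:
    for r in (1.0, 1.5, 2.0, 4.0):
        ts = [truncate(x, c, d) for x in xs]
        ok = ok and moment(ts, r) <= moment(xs, r) + 1e-12
print(ok)  # expect True
```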
> Now it can be shown that:
>
> (X - E(X) - (~X - E(~X))) G'r(~X - E(~X)) >= (X - E(X) - (~X - E(~X))) * K
>
> where K is some constant, [...] using the fact that G'r(.) is
> increasing and [...] a(x) = x - ~x is monotonically increasing

I feel very dumb here. Tried but failed. Will someone take me out of my misery? I know (~X - E(~X)) is bounded, and since G'r is nondecreasing, G'r(~X - E(~X)) is also bounded. But somehow I can't manage to conclude (and use the hint about a(x)).

I have been spending so much time on this. I may as well go to the end, so it won't have been for nothing :)

Noel.

I think I see it now. Define
f(t) = t - ~t - E(X) + E(~X)  and
g(t) = G'r(~t - E(~X)).

Both functions are nondecreasing, and we can find t_0 such that

f(t) <= 0 for t <= t_0  and
f(t) >= 0 for t >= t_0.

Hence, if X >= t_0, then since f(X) >= 0 and g(X) >= g(t_0), we have

f(X) g(X) >= f(X) g(t_0).

Also, if X <= t_0, then f(X) <= 0 and g(X) <= g(t_0), so again

f(X) g(X) >= f(X) g(t_0).

So I guess we take K = g(t_0). I think this works. I wouldn't call it obvious, though.
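[Editor's sketch, not part of the thread.] Jason's construction can be checked numerically: take G(x) = x^2 (so G'r(y) = 2y, which is nondecreasing), locate the sign change t_0 of f by bisection, and verify the pointwise inequality f(X) g(X) >= f(X) g(t_0) on a sample.

```python
# Check the final step: with f(t) = t - ~t - E(X) + E(~X) and
# g(t) = G'r(~t - E(~X)) for G(x) = x^2, verify pointwise
# f(x) g(x) >= f(x) g(t0), where t0 is the sign change of f.
import random

random.seed(2)
c, d = -1.0, 1.0
xs = [random.gauss(0.0, 1.5) for _ in range(5000)]

def trunc(t):
    # ~t = c, t, d according as t < c, c <= t <= d, t > d
    return min(max(t, c), d)

mu = sum(xs) / len(xs)                        # E(X)
tmu = sum(trunc(x) for x in xs) / len(xs)     # E(~X)

def f(t):
    return t - trunc(t) - mu + tmu            # nondecreasing in t

def g(t):
    return 2.0 * (trunc(t) - tmu)             # G'r(~t - E(~X)), G(x) = x^2

# f is continuous and nondecreasing; locate its sign change by bisection.
lo, hi = -100.0, 100.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) <= 0 else (lo, mid)
t0 = 0.5 * (lo + hi)

worst = min(f(x) * (g(x) - g(t0)) for x in xs)
print(worst >= -1e-9)  # expect True: f(X)g(X) >= f(X)g(t0) pointwise
```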
Thank you very much Jason. This looks very good to me,
and is a huge relief :)
Noel.