## Help!!!!!!

Message 1 of 18, Nov 4, 2004
Hi Noel,

Suppose E(|X|) < infinity, mu = E(X). Fix c < d.
Let ~X = c, X, d according as [X<c], [c<=X<=d], [X>d], and set ~mu = E(~X).
Prove: E(|~X - ~mu|^r) <= E(|X - mu|^r) for all r >= 1.

Regards,
Shuva

PS: I solved the problem for mu = ~mu, and there I didn't need the condition r >= 1, i.e. it is true for any r.
I am not very sure whether the problem is correct. If it is wrong, can you find a counterexample?
You may have to use the Cr inequality, Minkowski's, Jensen's, or whatever.
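The claim can be sanity-checked by simulation before attempting a proof. Here is a minimal Monte Carlo sketch; the normal distribution and the values of c, d, and r are arbitrary choices:

```python
# Monte Carlo sanity check of E|~X - ~mu|^r <= E|X - mu|^r.
# The distribution of X and the values of c, d, r are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(loc=1.0, scale=2.0, size=1_000_000)

c, d = -0.5, 1.5
Xt = np.clip(X, c, d)             # ~X: X truncated to [c, d]

mu, mut = X.mean(), Xt.mean()     # mu = E(X), ~mu = E(~X)

for r in (1.0, 1.5, 2.0, 3.0):
    lhs = np.mean(np.abs(Xt - mut) ** r)   # E|~X - ~mu|^r
    rhs = np.mean(np.abs(X - mu) ** r)     # E|X - mu|^r
    print(f"r={r}: {lhs:.4f} <= {rhs:.4f} : {lhs <= rhs}")
```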

Tony Wong <tw813@...> wrote:

hello all,

I have the following question which I hope someone can
help with:

Given X_t is a Brownian motion with drift rate u and
volatility sigma, and X_0 = a > 0, define T(b), with b > a,
to be the first time X_t hits the level b. Now define
Y_t = X_{min(t, T(b))} (i.e. Y_t = X_t if T(b) is
greater than t, and Y_t = b for all t >= T(b)). For each
fixed t, Y_t has a mixed distribution with a point
mass at b and a continuous density on (-infinity, b).

My question is: how can one derive the density part?
(For the case where X_t is the standard Brownian
motion, this question is not too hard.) Also, can
someone tell me from which book I could get an answer
for this question?

Many thanks,

Tony
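For what it's worth, the standard answer for this setup comes from the reflection principle combined with a Girsanov change of measure (formulas of this type are tabulated, e.g., in Borodin and Salminen's Handbook of Brownian Motion): for x < b, the defective density is p(t,x) = phi(x - a - ut) - exp(2u(b-a)/sigma^2) phi(x - (2b-a) - ut), where phi is the N(0, sigma^2 t) density. The sketch below checks this against a crude simulation; all parameter values are arbitrary choices, and the discrete barrier monitoring makes the match only approximate:

```python
# Simulation check of the absorbed-at-b density for X_t = a + u*t + sigma*W_t
# (reflection principle + Girsanov).  Parameter values are arbitrary choices.
import numpy as np

u, sigma, a, b, t = 0.3, 1.0, 0.0, 1.0, 1.0

def sub_density(x):
    # Defective density p(t,x) = P(X_t in dx, T(b) > t)/dx for x < b:
    # phi(x - a - u t) - exp(2u(b-a)/sigma^2) phi(x - (2b-a) - u t),
    # where phi is the N(0, sigma^2 t) density.
    s2t = sigma ** 2 * t
    phi = lambda y: np.exp(-y ** 2 / (2 * s2t)) / np.sqrt(2 * np.pi * s2t)
    return (phi(x - a - u * t)
            - np.exp(2 * u * (b - a) / sigma ** 2) * phi(x - (2 * b - a) - u * t))

# Euler simulation with discrete barrier checks (this slightly under-detects
# hitting, so agreement is only approximate).
rng = np.random.default_rng(1)
n_paths, n_steps = 100_000, 1_000
dt = t / n_steps
x = np.full(n_paths, a)
alive = np.ones(n_paths, dtype=bool)
for _ in range(n_steps):
    x[alive] += u * dt + sigma * np.sqrt(dt) * rng.standard_normal(alive.sum())
    alive &= (x < b)

xs = np.linspace(b - 8.0, b, 4001)
print("P(T(b) > t): simulated", alive.mean(),
      " formula", sub_density(xs).sum() * (xs[1] - xs[0]))
x0, h = 0.5, 0.05
print("density near x0=0.5: simulated",
      np.mean(alive & (np.abs(x - x0) < h)) / (2 * h),
      " formula", sub_density(x0))
```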

Message 2 of 18, Nov 4, 2004
Hi Noel,

Suppose E|X| < infinity, mu = E(X). Fix c < d.

Let ~X = c,X,d according as [X<c], [c<=X<=d], [X>d]

Set ~mu=E(~X)

Show that E(|~X - ~mu|^r) <= E(|X - mu|^r) for all r >= 1

Regards,

Shuva

PS: I have been able to do it for mu = ~mu (did not need the condition r >= 1 here) but couldn't for the case mu not equal to ~mu. I am not sure whether the problem is correct, so can you please provide me with a counterexample if you think it is not true.

You may need to use the Cr inequality, Jensen's, or whatever...

Message 3 of 18, Nov 5, 2004
Here's a heuristic sketch of a very special case. Let's suppose that
c=-\infty, that X has a "nice" density ("nice" here means whatever
it needs to mean so that all my dubious differentiations are
justified), and that r=2k for some positive integer k.

Then ~X=min(X,d). Define g(t)=E[min(X,t)] and observe that

g'(t) = E[1_{X > t}] = P(X > t).

Also note that ~mu=g(d).

Now define h(t) = E[(min(X,t) - g(t))^{2k}], so that
E|~X - ~mu|^r = h(d). Note that

h'(t) = 2k E[(min(X,t) - g(t))^{2k-1} (1_{X > t} - g'(t))].

If we let Y be an independent copy of X, then

1_{X > t} - g'(t)
= 1_{X > t} P(Y < t) - 1_{X < t} P(Y > t)
= E[1_{X > t > Y}|X] - E[1_{X < t < Y}|X]

(the first equality uses g'(t) = P(Y > t) together with 1_{X < t} = 1 - 1_{X > t} a.s.).

Hence,

h'(t) = 2k E[(min(X,t) - g(t))^{2k-1} 1_{X > t > Y}]
- 2k E[(min(X,t) - g(t))^{2k-1} 1_{X < t < Y}]
= 2k E[(t - g(t))^{2k-1} 1_{X > t > Y}]
- 2k E[(X - g(t))^{2k-1} 1_{X < t < Y}].

Since 2k-1 is odd,

(X - g(t))^{2k-1} 1_{X < t < Y}
<= (t - g(t))^{2k-1} 1_{X < t < Y}.

Therefore,

h'(t) >= 2k(t - g(t))^{2k-1} E[1_{X > t > Y} - 1_{X < t < Y}].

But since X and Y are iid, this expectation is 0, so h is a
nondecreasing function of t. Thus,

E|~X - ~mu|^r = h(d)
<= lim_{t->\infty} h(t)
= E|X - mu|^r. QED

This argument is so convoluted, I don't have much confidence in it.
(Where's my mistake?) Even if it is correct, hopefully someone else
(Noel?) can provide something more transparent.
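The monotonicity of h at the heart of this sketch is at least easy to probe numerically; below, X ~ N(0,1) and k = 2 are arbitrary choices:

```python
# Numerical check that h(t) = E[(min(X,t) - g(t))^{2k}] is nondecreasing,
# where g(t) = E[min(X,t)].  X ~ N(0,1) and k = 2 are arbitrary choices.
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal(1_000_000)
k = 2

ts = np.linspace(-2.0, 3.0, 26)
h = []
for t in ts:
    m = np.minimum(X, t)
    h.append(np.mean((m - m.mean()) ** (2 * k)))   # g(t) estimated by m.mean()
print("h nondecreasing on the grid:", bool(np.all(np.diff(h) >= -1e-4)))
```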

--- In probability@yahoogroups.com, shuva gupta <shuvagupta@y...>
wrote:
>
> Suppose E|X| < infinity, mu = E(X). Fix c < d.
>
> Let ~X = c, X, d according as [X<c], [c<=X<=d], [X>d]
>
> Set ~mu = E(~X)
>
> Show that E(|~X - ~mu|^r) <= E(|X - mu|^r) for all r >= 1
> [...]
Message 4 of 18, Nov 5, 2004
Well, here's one mistake:

I wrote

> Since 2k-1 is odd,
>
> (X - g(t))^{2k-1} 1_{X < t < Y}
> <= (t - g(t))^{2k-1} 1_{X < t < Y}.

What I wanted to say was

> Since 2k-1 is odd,
>
> (X - g(t))^{2k-1} 1_{X < t < Y}
> <= (t - g(t))^{2k-1} 1_{X > t > Y}.

But that's clearly false. I then used this false "fact" to show that
h'(t)>=0, which finished the "proof".

But I think it's salvageable. Here's another way to show h'(t)>=0.

h'(t) = 2k E[(t - g(t))^{2k-1} 1_{X > t > Y}]
- 2k E[(X - g(t))^{2k-1} 1_{X < t < Y}].

Since X and Y are iid, I can interchange them in the first
expectation, giving

h'(t) = 2k E[(t - g(t))^{2k-1} 1_{X < t < Y}]
- 2k E[(X - g(t))^{2k-1} 1_{X < t < Y}].

Now we use the fact that (X - g(t))^{2k-1} <= (t - g(t))^{2k-1}
whenever X < t to conclude that h'(t)>=0.

Any other glaring mistakes? (Besides the lack of rigor?) :)
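A numerical look at this salvaged step; X, Y iid N(0,1), k = 2, and the grid of t values are arbitrary choices, and the printed quantity is the h'(t)/2k of the argument:

```python
# Check that E[(t-g(t))^{2k-1} 1_{X>t>Y}] - E[(X-g(t))^{2k-1} 1_{X<t<Y}] >= 0,
# i.e. h'(t)/2k >= 0, with Y an independent copy of X.
# X ~ N(0,1), k = 2, and the grid of t values are arbitrary choices.
import numpy as np

rng = np.random.default_rng(5)
n = 1_000_000
X = rng.standard_normal(n)
Y = rng.standard_normal(n)
k = 2

for t in np.linspace(-1.5, 1.5, 7):
    g = np.minimum(X, t).mean()                  # g(t) = E[min(X,t)]
    term1 = np.mean((t - g) ** (2 * k - 1) * ((X > t) & (t > Y)))
    term2 = np.mean((X - g) ** (2 * k - 1) * ((X < t) & (t < Y)))
    print(f"t={t:+.2f}: {term1 - term2:+.5f}")   # expect >= 0, up to MC noise
```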

--- In probability@yahoogroups.com, "jason1990" <jason1990@y...>
wrote:
>
> Here's a heuristic sketch of a very special case. [...]
> This argument is so convoluted, I don't have much confidence in
> it. (Where's my mistake?) Even if it is correct, hopefully someone
> else (Noel?) can provide something more transparent.
Message 5 of 18, Nov 5, 2004
> This argument is so convoluted, I don't have much confidence in it.
> (Where's my mistake?) Even if it is correct, hopefully someone else
> (Noel?) can provide something more transparent.

Jason,

I find this argument very impressive (there are a lot of ideas in it).
I am a bit uneasy about the differentiation under E[...], but I am
confident that this is not a flaw in your proof (i.e. I am confident
every equality could be rigorously justified). I know you have
mentioned a mistake (cf. your next post), but I haven't seen any on
first reading :-) So I am gonna go through your next post now.

Noel.
Message 6 of 18, Nov 5, 2004
> > Since 2k-1 is odd,
> >
> > (X - g(t))^{2k-1} 1_{X < t < Y}
> > <= (t - g(t))^{2k-1} 1_{X < t < Y}.

Since (2k-1) is odd, x -> x^(2k-1) is non-decreasing on R,
and since X - g(t) <= t - g(t) on {X < t}, the inequality
you wrote seems fine to me.

I also think this inequality allows you to conclude
that h'(t) >= 0.

h'(t)
= 2k E[(t - g(t))^{2k-1} 1_{X > t > Y}]
- 2k E[(X - g(t))^{2k-1} 1_{X < t < Y}]
>= 2k E[(t - g(t))^{2k-1} 1_{X > t > Y}]
- 2k E[(t - g(t))^{2k-1} 1_{X < t < Y}]
= 2k(t-g(t))^{2k-1}E[1_{X > t > Y} - 1_{X < t < Y}]
=0

which is pretty much what you wrote in your first post.
What am I missing?

> What I wanted to say was
>
> > Since 2k-1 is odd,
> >
> > (X - g(t))^{2k-1} 1_{X < t < Y}
> > <= (t - g(t))^{2k-1} 1_{X > t > Y}.
>
> But that's clearly false.

I wonder why you wanted to write this in the first place.

> But I think it's salvageable. Here's another way to show h'(t)>=0.
>
> h'(t) = 2k E[(t - g(t))^{2k-1} 1_{X > t > Y}]
> - 2k E[(X - g(t))^{2k-1} 1_{X < t < Y}].
>
> Since X and Y are iid, I can interchange them in the first
> expectation, giving
>
> h'(t) = 2k E[(t - g(t))^{2k-1} 1_{X < t < Y}]
> - 2k E[(X - g(t))^{2k-1} 1_{X < t < Y}].
>
> Now we use the fact that (X - g(t))^{2k-1} <= (t - g(t))^{2k-1}
> whenever X < t to conclude that h'(t)>=0.
>
> Any other glaring mistakes? (Besides the lack of rigor?) :)

This seems to me the same as what you wrote in the first place,
except that you use the iid property before taking the inequalities.

Anyway, it seems to me your proof is good. I certainly haven't
seen a flaw.

Noel.
Message 7 of 18, Nov 5, 2004
Yes, you're right. I thought about this in the morning on the bus
and again this evening before going to a movie. I had two lines of
reasoning in my head at once and when I reread my post, I confused
myself.

About differentiating under the expectation, something occurred to
me at the theater. For any nonnegative random variable Y and any
r>0, E[Y^r]=\int_0^\infty{rz^{r-1}P(Y>z)dz}. So suppose we're given
a real t and we want to compute E[min(X,t)]. Let Y=t-min(X,t). Then
Y is nonnegative and we can apply the above to get

E[min(X,t)] = t - \int_{-\infty}^t{P(X < z)dz}.

We can differentiate this with no problem. Something similar should
be possible with the other expectation. I don't think this is
necessary to justify the differentiation, but it makes me wonder
whether the whole differentiation approach is unnecessary.
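This identity is easy to confirm numerically; a sketch with X ~ N(0,1) and t = 0.7 (arbitrary choices), writing E[min(X,t)] = \int_{-\infty}^t{x f(x)dx} + t P(X > t) for the left side:

```python
# Check E[min(X,t)] = t - \int_{-\infty}^t P(X < z) dz for X ~ N(0,1).
# The distribution and the value of t are arbitrary choices.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

t = 0.7
lhs = quad(lambda x: x * norm.pdf(x), -np.inf, t)[0] + t * norm.sf(t)
rhs = t - quad(norm.cdf, -np.inf, t)[0]
print(lhs, rhs)   # the two sides should agree to quadrature accuracy
```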

Another thing that is curious: the only property of the map
x -> x^{2k} that was used is the fact that it has a nondecreasing
derivative. In other words, it is convex. So perhaps the original
poster's claim is true not only for functions of the form x -> |x|^r
where r >= 1, but for all convex functions. If so, then maybe Jensen's
inequality would be useful in creating a simpler proof.

I just feel that it shouldn't be this hard.

--- In probability@yahoogroups.com, "Noel Vaillant" <vaillant@p...>
wrote:
>
> [...] which is pretty much what you wrote in your first post.
> What am I missing?
>
> Anyway, it seems to me your proof is good. I certainly haven't
> seen a flaw.
Message 8 of 18, Nov 5, 2004
> About differentiating under the expectation, something occurred to
> me at the theater. For any nonnegative random variable Y and any
> r>0, E[Y^r]=\int_0^\infty{rz^{r-1}P(Y>z)dz}. So suppose we're given
> a real t and we want to compute E[min(X,t)]. Let Y=t-min(X,t). Then
> Y is nonnegative and we can apply the above to get
>
> E[min(X,t)] = t - \int_{-\infty}^t{P(X < z)dz}.
>

Very good. I agree.

> We can differentiate this with no problem.

Yes.

> Something similar should be possible with the other
> expectation.

Yes.

> Another thing that is curious: the only property of the map x->x^
> {2k} that was used is the fact that it has a nondecreasing
> derivative. In other words, it is convex. So perhaps the original
> poster's claim is true not only for functions of the form x->|x|^r
> where r>=1, but for all convex functions.

Well, I'd already be happy to crack this for r in [0,+oo[.

> If so, then maybe Jensen's inequality would be useful in
>creating a simpler proof.

Yes, I am guessing there should be a simpler proof, but I
can't find it. I have been looking for a while now.
Even looked for a counter-example for r=2.
I am going to give up soon :-)

Noel.
Message 9 of 18, Nov 5, 2004
I am very sorry, Shuva,
I have been stuck for 2 hours on this.
I need to move on, otherwise I'll go crazy.
I think Jason has a good chance of finding a complete proof.

Noel.
Message 10 of 18, Nov 8, 2004
Thanks Jason, Noel, and Myriam,
Actually the problem has a very simple solution if we exploit the fact that
g(x) = |x|^r, r >= 1, is a convex function; i.e.,
we use the property g(x) - g(y) >= (x-y)*g'(y) when g(.) is convex.
Put x = X - E(X),
y = ~X - E(~X), then take expectations on both sides.
Regards,
Shuva
PS: A friend of mine found this solution in the book Probability by Chow and Teicher, 1st edition (pp. 102-103).
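Whether taking expectations really finishes it is exactly what gets questioned below, but numerically the cross term does come out nonnegative. A sketch, where the distribution of X and the values of c, d, r are arbitrary choices and g' is the right-hand derivative of g(x) = |x|^r:

```python
# Numerical look at E[(x - y) g'(y)] with x = X - E(X), y = ~X - E(~X),
# g(x) = |x|^r.  The distribution and c, d, r are arbitrary choices.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(0.5, 1.5, size=1_000_000)
c, d, r = -1.0, 1.0, 3.0
Xt = np.clip(X, c, d)

x = X - X.mean()
y = Xt - Xt.mean()
gprime = r * np.abs(y) ** (r - 1) * np.sign(y)   # right-hand derivative of |.|^r
print("E[(x - y) g'(y)] =", np.mean((x - y) * gprime))   # expect >= 0
```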

jason1990 <jason1990@...> wrote:

> Another thing that is curious: the only property of the map
> x -> x^{2k} that was used is the fact that it has a nondecreasing
> derivative. In other words, it is convex. So perhaps the original
> poster's claim is true not only for functions of the form x -> |x|^r
> where r >= 1, but for all convex functions. If so, then maybe
> Jensen's inequality would be useful in creating a simpler proof.
>
> I just feel that it shouldn't be this hard. [...]
Message 11 of 18, Nov 8, 2004
Is it obvious that E[(x-y)g'(y)]>=0 for this choice of x and y?
I am not sure I understand.

Noel.

> g(x)=|x|^r, r>=1 is a convex function.
> ie
> we use the property g(x)-g(y)>=(x-y)*g'(y) when g(.)is convex.
> Put x=X-E(X)
> y= ~X - E(~X) Then take expectation on both sides .
Message 12 of 18, Nov 10, 2004
For what it's worth, it's not obvious at all to me. But maybe we're
both missing something. The book is apparently "Probability Theory:
Independence, Interchangeability, Martingales" by Yuan Shih Chow and
Henry Teicher. I have put in a request for that book just to see
what's going on. If someone can enlighten me while I wait, that would
be great.

--- In probability@yahoogroups.com, "Noel Vaillant" <vaillant@p...>
wrote:
>
>
>
> Is it obvious that E[(x-y)g'(y)]>=0 for this choice of x and y?
> I am not sure I understand.
>
> Noel.
>
>
> > g(x)=|x|^r, r>=1 is a convex function.
> > ie
> > we use the property g(x)-g(y)>=(x-y)*g'(y) when g(.)is convex.
> > Put x=X-E(X)
> > y= ~X - E(~X) Then take expectation on both sides .
Message 13 of 18, Nov 10, 2004
Sorry, folks, for the confusion.
Sorry Jason, you are right, the book is
"Probability Theory: Independence, Interchangeability, Martingales" by Yuan Shih Chow
and Henry Teicher.

If G(.) is a convex function then this property holds:

G(x) - G(y) >= (x-y)G'r(y), where G'r(y) is the right-hand derivative of G at the point y; moreover, G'r(.) is a nondecreasing function.

Now take x = X - E(X), y = ~X - E(~X).

Therefore

G(X-E(X)) - G(~X-E(~X)) >= ((X-E(X)) - (~X-E(~X))) G'r(~X-E(~X)).

Now it can be shown that ((X-E(X)) - (~X-E(~X))) G'r(~X-E(~X)) >=
((X-E(X)) - (~X-E(~X))) * K, where K is some constant. (The detailed argument is in the book; it uses the fact that G'r(.) is nondecreasing and also the fact that the function a(x) = x - ~x is monotonically nondecreasing, where ~x = a if x <= a, = x if a <= x <= b, and = b if x >= b, with a < b.)

Thus we have

G(X-E(X)) - G(~X-E(~X)) >= ((X-E(X)) - (~X-E(~X))) * K.

Now taking expectations on both sides we have

E(G(X-E(X))) - E(G(~X-E(~X))) >= E(((X-E(X)) - (~X-E(~X))) * K).

Since both X - E(X) and ~X - E(~X) have mean zero, E(((X-E(X)) - (~X-E(~X))) * K) = 0, and thus we have

E(G(X-E(X))) - E(G(~X-E(~X))) >= 0.

Take G(x) = |x|^r (r >= 1) and we get the desired result, i.e.

E(|X-E(X)|^r) >= E(|~X-E(~X)|^r).

If it is still not clear please let me know and I will be glad to put in more details (or I can also scan a couple of relevant pages from the book and send them across, but only if you ask... I don't want to unnecessarily overload the inboxes :-))

Best Regards,
Shuva

jason1990 <jason1990@...> wrote:

> For what it's worth, it's not obvious at all to me. But maybe we're
> both missing something. [...] If someone can enlighten me while I
> wait, that would be great.
Message 14 of 18, Nov 11, 2004
> Now it can be shown that:
>
> ((X-E(X)) - (~X-E(~X))) G'r(~X-E(~X)) >= ((X-E(X)) - (~X-E(~X))) * K
>
> where K is some constant, [...] using the fact that G'r(.) is
> nondecreasing and [...] a(x) = x - ~x is monotonically nondecreasing

I feel very dumb here. Tried but failed. Will someone take me
out of my misery? I know (~X-E(~X)) is bounded, and since G'r
is non-decreasing, |G'r(~X-E(~X))| is also bounded. But somehow
I can't manage to conclude (or to use the hint about a(x)).

I have been spending so much time on this. I may as well
go to the end, so it won't have been for nothing :-)

Noel.
Message 15 of 18, Nov 11, 2004
I think I see it now. Define

f(t) = t - ~t - E(X) + E(~X) and
g(t) = G'r(~t - E(~X)).

Both functions are nondecreasing, and we can find t_0 such that

f(t) <= 0 for t <= t_0 and
f(t) >= 0 for t >= t_0.

Hence, if X >= t_0, then since f(X) >= 0 and g(X) >= g(t_0), we have

f(X)g(X) >= f(X)g(t_0).

Also, if X <= t_0, then f(X) <= 0 and g(X) <= g(t_0), so

f(X)g(X) >= f(X)g(t_0).

So I guess we take K = g(t_0). I think this works. I wouldn't call
it obvious, though.
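A concrete check of this argument; G(x) = |x|^3, the distribution of X, and the truncation interval [c, d] are arbitrary choices, and t_0 is computed from the explicit form of f:

```python
# Pointwise check that f(X)(g(X) - K) >= 0 with K = g(t_0), and E[f(X)] = 0.
# G(x) = |x|^3, X ~ N(0.5, 1.5^2), and c, d are arbitrary choices.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(0.5, 1.5, size=1_000_000)
c, d = -1.0, 1.0
Xt = np.clip(X, c, d)                       # ~X
mu, mut = X.mean(), Xt.mean()

Gp = lambda y: 3.0 * y * np.abs(y)          # G'r(y) for G(y) = |y|^3

f = X - Xt - mu + mut                       # f(X) = X - ~X - E(X) + E(~X)
g = Gp(Xt - mut)                            # g(X) = G'r(~X - E(~X))

# On [c, d] we have t - ~t = 0, so f = mut - mu there; f crosses zero at
# t_0 = d + (mu - mut) when mu >= mut, and at t_0 = c + (mu - mut) otherwise.
t0 = d + mu - mut if mu >= mut else c + mu - mut
K = Gp(np.clip(t0, c, d) - mut)             # K = g(t_0)

print("min f(X)(g(X) - K):", (f * (g - K)).min())   # expect >= 0
print("E[f(X)]:", f.mean())                         # expect ~ 0
```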

--- In probability@yahoogroups.com, "Noel Vaillant" <vaillant@p...>
wrote:
>
> I feel very dumb here. Tried but failed. Will someone take me
> out of my misery? [...]
Message 16 of 18, Nov 11, 2004
Thank you very much Jason. This looks very good to me,
and is a huge relief :-)

Noel.

> I think I see it now. [...]
>
> So I guess we take K = g(t_0). I think this works. I wouldn't call
> it obvious, though.