Intuition behind how the Cauchy-Schwarz inequality's proof was obtained


I'm studying multivariable calculus. Usually, when I study, I go through a book until I find a theorem, and then try to prove it. I was unable to, so I read the proof, which is the following:



Let $x, y \in \mathbb{R}^m$ and $\alpha \in \mathbb{R}$. Then $(x+\alpha y)\bullet(x+\alpha y) = \Vert x+\alpha y\Vert^2 \geq 0$.
Using the properties of the inner product we get:

$(x+\alpha y)\bullet(x+\alpha y) = x\bullet x + \alpha\, x\bullet y + \alpha\, y\bullet x + \alpha^2\, y\bullet y = \Vert x\Vert^2 + 2(x\bullet y)\alpha + \alpha^2\Vert y\Vert^2 \geq 0$.

That last inequality holds iff the discriminant of the polynomial in $\alpha$ is less than or equal to $0$. Therefore $\vert x\bullet y\vert^2 - \Vert x\Vert^2\Vert y\Vert^2 \leq 0$, from which the Cauchy-Schwarz inequality follows. Q.E.D.



I can follow every step of the proof. I also get the intuition for why the inequality should be true. However, the proof seems "empty" to me. I don't understand what someone who wanted to prove this would do to find it. What's the intuition behind using $x+\alpha y$?



The reason I ask is that, after reading the proof, the approach used was so far beyond everything I had tried that I am almost sure I would never have been able to prove this on my own. How should I deal with this kind of situation?
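As a sanity check (mine, not part of the quoted proof), the discriminant step can be verified numerically for random vectors in a few lines of Python:

```python
import random

random.seed(2)
x = [random.gauss(0, 1) for _ in range(3)]
y = [random.gauss(0, 1) for _ in range(3)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# p(alpha) = |y|^2 * alpha^2 + 2(x.y) * alpha + |x|^2 is >= 0 for every alpha,
# so its discriminant 4(x.y)^2 - 4|x|^2|y|^2 must be <= 0: Cauchy-Schwarz.
disc = 4 * dot(x, y) ** 2 - 4 * dot(x, x) * dot(y, y)
assert disc <= 0

# Spot-check that p(alpha) is indeed non-negative at a few points.
for alpha in (-2.0, -0.5, 0.0, 1.0, 3.0):
    p = dot(y, y) * alpha ** 2 + 2 * dot(x, y) * alpha + dot(x, x)
    assert p >= 0
```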

























multivariable-calculus soft-question intuition






edited Mar 23 at 14:04









YuiTo Cheng

asked Mar 23 at 11:34









RUBEN GONÇALO MOROUÇO

Cauchy-Schwarz boils down to the fact that, if you project one vector onto the other, the square of the length of the perpendicular component is greater than or equal to $0$; see my answer here.
– Theo Bendit
Mar 23 at 14:12




2 Answers

Proving theorems about general vector spaces, or general inner product spaces, can begin by considering a familiar $2$- or $3$-dimensional space. But then you need to abstract the intuition so it's pure algebra, no diagrams required. So your question comes down to what sort of preamble might have helped here.



If you think about vectors in a space you can visualise, all the theorem says is that the angle $\theta$ between two vectors satisfies $-1\le\cos\theta\le 1$, which by the cosine rule is equivalent to the triangle inequality. Since the cosine rule can be stated in terms of dot products, it makes sense to see what you can learn from one more equivalent result, $\Vert x-y\Vert^2\ge 0$.



But $\Vert x-\alpha y\Vert^2\ge 0$ is a natural generalisation, and it connects the issue to extremising quadratics, with the extremum giving the strongest inequality available. And we don't need to think about a specific vector space to use $\Vert v\Vert^2\ge 0$, so it's a general starting point.
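The quadratic picture here is easy to test numerically; the following sketch (my own illustration, not from the answer) minimises $\Vert x-\alpha y\Vert^2$ over $\alpha$ and confirms that the minimum value is exactly the Cauchy-Schwarz gap:

```python
import random

random.seed(0)
x = [random.gauss(0, 1) for _ in range(5)]
y = [random.gauss(0, 1) for _ in range(5)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# q(a) = ||x - a*y||^2 = ||x||^2 - 2a(x.y) + a^2 ||y||^2 is minimised at
# a* = (x.y)/||y||^2, with minimum value ||x||^2 - (x.y)^2/||y||^2 >= 0,
# which rearranges to the Cauchy-Schwarz inequality.
a_star = dot(x, y) / dot(y, y)
q_min = dot(x, x) - dot(x, y) ** 2 / dot(y, y)

residual = [xi - a_star * yi for xi, yi in zip(x, y)]
assert abs(dot(residual, residual) - q_min) < 1e-12  # extremum value matches
assert q_min >= 0                                    # hence (x.y)^2 <= ||x||^2 ||y||^2
assert dot(x, y) ** 2 <= dot(x, x) * dot(y, y)
```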






  • Wouldn't any real angle $\theta$ satisfy $-1 \leq \cos \theta \leq 1$? I'm not very familiar with the cosine rule or geometric proofs, though.
    – RUBEN GONÇALO MOROUÇO
    Mar 23 at 11:58






  • @RUBENGONÇALOMOROUÇO It would, yes. One could argue that knowing that is equivalent to knowing every other theorem mentioned herein.
    – J.G.
    Mar 23 at 12:01



















I don't know about anybody else, but I share your dissatisfaction with the standard slick proof, and I personally find it helpful to think instead of expressing $x$ as a sum of a multiple of $y$ and a vector orthogonal to $y$. This kind of resolution of a vector into two mutually orthogonal components is a common and natural operation.



If $\lambda$ is real, then $x - \lambda y$ is orthogonal to $y$ if and only if (in your notation) $(x - \lambda y) \bullet y = 0$, i.e.,
$$
\lambda |y|^2 = x \bullet y.
$$

For any value of $\lambda$ satisfying that condition ($\lambda$ may be chosen arbitrarily if $y = 0$, and there is a unique solution for $\lambda$ if $y \ne 0$), write $u = x - \lambda y$ and $v = \lambda y$, so that $x = u + v$ and $u \bullet v = 0$. Then:
\begin{align*}
|x|^2 & = (u + v) \bullet (u + v) \\
& = u \bullet u + 2u \bullet v + v \bullet v \\
& = |u|^2 + |v|^2 \\
& \geqslant |v|^2.
\end{align*}

Therefore, using the definitions of $v$ and $\lambda$:
$$
|x|^2|y|^2 \geqslant |v|^2|y|^2 = \lambda^2|y|^4 = (x \bullet y)^2 = |x \bullet y|^2,
$$
and the result follows. So the selection of the value $-\lambda$ for $\alpha$ does make some intuitive sense (to me, at least).

You could arrive at this value of $\alpha$ less intuitively by "completing the square" in the expression you derived for $|x + \alpha y|^2$, multiplying first by $|y|^2$ to avoid a possible division by zero:
\begin{align*}
|x + \alpha y|^2|y|^2 & = |x|^2|y|^2 + 2(x \bullet y)\alpha|y|^2 + \alpha^2|y|^4 \\
& = (\alpha|y|^2 + x \bullet y)^2 + |x|^2|y|^2 - (x \bullet y)^2 \\
& = |x|^2|y|^2 - (x \bullet y)^2,
\end{align*}
if
$$
\alpha|y|^2 + x \bullet y = 0.
$$

So the proof you quoted can be seen as the proof by resolution into orthogonal components in heavy disguise.
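The orthogonal decomposition described in this answer can be checked numerically; this short sketch (my own, reusing the answer's names $u$, $v$, $\lambda$) verifies orthogonality, Pythagoras, and the resulting inequality:

```python
import random

random.seed(1)
x = [random.uniform(-1, 1) for _ in range(4)]
y = [random.uniform(-1, 1) for _ in range(4)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Choose lambda so that u = x - lambda*y is orthogonal to y:
# (x - lambda*y).y = 0  <=>  lambda*|y|^2 = x.y  (here y != 0).
lam = dot(x, y) / dot(y, y)
u = [xi - lam * yi for xi, yi in zip(x, y)]
v = [lam * yi for yi in y]

assert abs(dot(u, y)) < 1e-12                             # u orthogonal to y (and to v)
assert abs(dot(x, x) - (dot(u, u) + dot(v, v))) < 1e-12   # Pythagoras: |x|^2 = |u|^2 + |v|^2
# |x|^2 >= |v|^2 then yields |x|^2 |y|^2 >= (x.y)^2.
assert dot(x, x) * dot(y, y) >= dot(x, y) ** 2
```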






  • (+1) for this approach
    – Mark Viola
    Mar 23 at 14:25










  • One can argue more directly that $|x-\lambda y|^2=(x-\lambda y)\bullet x=|x|^2-\lambda(x\bullet y)$, therefore $|x|^2\geqslant\lambda(x\bullet y)$, therefore $|x|^2|y|^2\geqslant\lambda|y|^2(x\bullet y)=(x\bullet y)^2$; but this loses the nice intuition of Pythagoras's theorem, and its direct corollary that the projection of $x$ on $y$ is shorter than $x$ (although this can be recovered by writing $\lambda(x\bullet y)$ as $\lambda^2|y|^2=|\lambda y|^2$); and the proof veers towards being, once again, "slick" and unmemorable - which is why I forgot having once done it this way!
    – Calum Gilhooley
    Mar 23 at 16:44













answered Mar 23 at 11:50 by J.G.











answered Mar 23 at 13:03

Calum Gilhooley







  • (+1) for this approach – Mark Viola, Mar 23 at 14:25










  • One can argue more directly that $|x-\lambda y|^2=(x-\lambda y)\bullet x=|x|^2-\lambda(x\bullet y)$, therefore $|x|^2\geqslant\lambda(x\bullet y)$, therefore $|x|^2|y|^2\geqslant\lambda|y|^2(x\bullet y)=(x\bullet y)^2$; but this loses the nice intuition of Pythagoras's theorem, and its direct corollary that the projection of $x$ on $y$ is shorter than $x$ (although this can be recovered by writing $\lambda(x\bullet y)$ as $\lambda^2|y|^2=|\lambda y|^2$); and the proof veers towards being, once again, "slick" and unmemorable - which is why I forgot having once done it this way! – Calum Gilhooley, Mar 23 at 16:44


























































































































