Monday, January 29, 2018

physical chemistry - Derivation of the Heisenberg uncertainty principle


The Heisenberg uncertainty principle states that


$$\Delta x \Delta p \geq \frac{\hbar}{2}$$


where $\Delta x$ is the uncertainty in the position, $\Delta p$ is the uncertainty in linear momentum, and $\hbar = 1.054571800(13) \times 10^{-34}\ \mathrm{J\ s}$ is the reduced Planck constant.


This means that, regardless of what quantum mechanical state the particle is in, we cannot simultaneously measure its position and momentum with perfect precision. I read that this is intrinsically linked to the fact that the position and momentum operators do not commute: $[\hat{x},\hat{p}] = \mathrm{i}\hbar$.


How can I derive the uncertainty principle, as given above?



Answer



The proof I will use is taken from Griffiths, Introduction to Quantum Mechanics, 2nd ed., pp 110-111.


Defining "uncertainty"


Let's assume that the normalised state $|\psi\rangle$ of a particle can be expanded as a linear combination of energy eigenstates $|n\rangle$, with $\hat{H}|n\rangle = E_n |n\rangle$.


$$| \psi \rangle = \sum_n c_n |n\rangle \tag{1}$$


The expectation value (the "mean") of a quantity, such as energy, is given by



$$\begin{align} \langle E\rangle &= \langle \psi | H | \psi \rangle \tag{2} \end{align}$$


and the variance of the energy can be defined analogously to that used in statistics, which for a continuous variable $x$ is simply the expectation value of $(x - \bar{x})^2$:


$$\sigma_E^2 = \left\langle (E - \langle E\rangle)^2 \right\rangle \tag{3}$$
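As a concrete illustration (not required for the proof that follows), if the eigenstates in equation $(1)$ are orthonormal, $\langle n | m \rangle = \delta_{nm}$, then these definitions become

$$\langle E \rangle = \sum_n |c_n|^2 E_n \qquad \text{and} \qquad \sigma_E^2 = \langle E^2 \rangle - \langle E \rangle^2 = \sum_n |c_n|^2 E_n^2 - \left(\sum_n |c_n|^2 E_n\right)^2$$

i.e. exactly the mean and variance of the possible measurement outcomes $E_n$, weighted by the probabilities $|c_n|^2$.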


The standard deviation is the square root of the variance, and the "uncertainty" in the uncertainty principle refers to this standard deviation. It is more correct to use $\sigma$ as the symbol, rather than $\Delta$, and this is what you will see in most rigorous texts.


$$\sigma_E = \sqrt{\left\langle (E - \langle E\rangle)^2 \right\rangle} \tag{4}$$


However, it's much easier to stick to the variance in the proof. Let's generalise this now to any generic observable, $A$, which is necessarily represented by a hermitian operator, $\hat{A}$. The expectation value of $A$ is merely a number, so let's use the small letter $a$ to refer to it. With that, we have


$$\begin{align} \sigma_A^2 &= \left\langle (A - a)^2 \right\rangle \tag{5} \\ &= \left\langle \psi \middle| (\hat{A} - a)^2 \middle| \psi \right\rangle \tag{6} \\ &= \left\langle \psi \middle| (\hat{A} - a) \middle| (\hat{A} - a)\psi \right\rangle \tag{7} \\ &= \left\langle (\hat{A} - a)\psi \middle| (\hat{A} - a) \middle| \psi \right\rangle \tag{8} \\ &= \left\langle (\hat{A} - a)\psi \middle| (\hat{A} - a)\psi \right\rangle \tag{9} \end{align}$$


where, in going from $(7)$ to $(8)$, I have invoked the hermiticity of $(\hat{A} - a)$ (since $\hat{A}$ is hermitian and $a$ is only a constant). Likewise, for a second observable $B$ with $\langle B \rangle = b$,


$$\sigma_B^2 = \left\langle (\hat{B} - b)\psi \middle| (\hat{B} - b)\psi \right\rangle \tag{10}$$
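For completeness, the step from $(7)$ to $(8)$ (and the corresponding step for $B$) uses nothing more than the defining property of a hermitian operator: for any two states $|\psi\rangle$ and $|\phi\rangle$,

$$\left\langle \psi \middle| \hat{A} \phi \right\rangle = \left\langle \hat{A}\psi \middle| \phi \right\rangle$$

and since $a$ is a real constant, $(\hat{A} - a)$ inherits this property.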


The Cauchy-Schwarz inequality



...states that, for all vectors $f$ and $g$ belonging to an inner product space (suffice it to say that functions in quantum mechanics satisfy this condition),


$$\langle f | f \rangle \langle g | g \rangle \geq |\langle f | g \rangle|^2 \tag{11}$$
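For ordinary real vectors, with $\theta$ the angle between $\mathbf{f}$ and $\mathbf{g}$, equation $(11)$ reduces to the familiar statement

$$|\mathbf{f}|^2 |\mathbf{g}|^2 \geq (\mathbf{f} \cdot \mathbf{g})^2 = |\mathbf{f}|^2 |\mathbf{g}|^2 \cos^2\theta$$

which holds simply because $\cos^2\theta \leq 1$.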


In general, $\langle f | g \rangle$ is a complex number, which is why we need to take the modulus. By the definition of the inner product,


$$\langle f | g \rangle = \langle g | f \rangle^* \tag{12}$$


For a generic complex number $z = x + \mathrm{i}y$, we have


$$|z|^2 = x^2 + y^2 \geq y^2 \qquad \qquad \text{(since }x^2 \geq 0\text{)} \tag{13}$$


But $z^* = x - \mathrm{i}y$ means that


$$\begin{align} y &= \frac{z - z^*}{2\mathrm{i}} \tag{14} \\ |z|^2 &\geq \left(\frac{z - z^*}{2\mathrm{i}}\right)^2 \tag{15} \end{align}$$


and plugging $z = \langle f | g \rangle$ into equation $(15)$, we get


$$|\langle f | g \rangle|^2 \geq \left[\frac{1}{2\mathrm{i}}(\langle f | g \rangle - \langle g | f \rangle) \right]^2 \tag{16}$$
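If you want to convince yourself of $(11)$ and $(16)$ numerically, here is a minimal Python sketch (assuming NumPy is available; the vectors are just random stand-ins for $|f\rangle$ and $|g\rangle$):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random complex vectors standing in for |f> and |g>
f = rng.normal(size=5) + 1j * rng.normal(size=5)
g = rng.normal(size=5) + 1j * rng.normal(size=5)

ff = np.vdot(f, f).real                      # <f|f>
gg = np.vdot(g, g).real                      # <g|g>
fg = np.vdot(f, g)                           # <f|g>  (np.vdot conjugates its first argument)

lhs = ff * gg                                # <f|f><g|g>
mid = abs(fg) ** 2                           # |<f|g>|^2
rhs = ((fg - np.conj(fg)) / 2j).real ** 2    # [(<f|g> - <g|f>)/2i]^2, i.e. (Im<f|g>)^2

print(lhs >= mid >= rhs)                     # True
```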



Now, if we let $| f \rangle = | (\hat{A} - a)\psi \rangle$ and $| g \rangle = | (\hat{B} - b)\psi \rangle$, we can combine equations $(9)$, $(10)$, $(11)$, and $(16)$ to get:


$$\begin{align} \sigma_A^2 \sigma_B^2 &= \langle f | f \rangle \langle g | g \rangle \tag{17} \\ &\geq |\langle f | g \rangle|^2 \tag{18} \\ &\geq \left[\frac{1}{2\mathrm{i}}(\langle f | g \rangle - \langle g | f \rangle) \right]^2 \tag{19} \end{align}$$


Expanding the brackets


If you've made it this far, great job! Take a breather before you continue, because there's more maths coming.


We have¹


$$\begin{align} \langle f | g \rangle &= \left\langle (\hat{A} - a)\psi \middle| (\hat{B} - b)\psi \right\rangle \tag{20} \\ &= \langle \hat{A}\psi |\hat{B}\psi \rangle - \langle a\psi |\hat{B}\psi \rangle - \langle \hat{A}\psi | b\psi \rangle + \langle a\psi |b\psi \rangle \tag{21} \\ &= \langle \psi |\hat{A}\hat{B}|\psi \rangle - a\langle \psi |\hat{B}\psi \rangle - b\langle \hat{A}\psi | \psi \rangle + ab\langle \psi |\psi \rangle \tag{22} \\ &= \langle \psi |\hat{A}\hat{B}|\psi \rangle - ab - ab + ab \tag{23} \\ &= \langle \psi |\hat{A}\hat{B}|\psi \rangle - ab \tag{24} \end{align}$$


Likewise,


$$\langle g | f \rangle = \langle \psi |\hat{B}\hat{A}|\psi \rangle - ab \tag{25}$$


So, substituting $(24)$ and $(25)$ into $(19)$,


$$\begin{align} \sigma_A^2 \sigma_B^2 &\geq \left[\frac{1}{2\mathrm{i}}(\langle\psi |\hat{A}\hat{B}|\psi \rangle - \langle \psi |\hat{B}\hat{A}|\psi\rangle) \right]^2 \tag{26} \\ &= \left[\frac{1}{2\mathrm{i}}(\langle\psi |\hat{A}\hat{B} - \hat{B}\hat{A}|\psi \rangle ) \right]^2 \tag{27} \end{align}$$



The commutator of two operators is defined as


$$[\hat{A},\hat{B}] = \hat{A}\hat{B} - \hat{B}\hat{A} \tag{28}$$


So, the term in parentheses in equation $(27)$ is simply the expectation value of the commutator, and we have reached the Robertson uncertainty relation:


$$\sigma_A^2 \sigma_B^2 \geq \left(\frac{1}{2\mathrm{i}}\langle[\hat{A},\hat{B} ]\rangle \right)^2 \tag{29}$$
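It is worth pausing to check that the right-hand side of $(29)$ is a sensible (i.e. real and non-negative) quantity. For hermitian $\hat{A}$ and $\hat{B}$,

$$\langle [\hat{A},\hat{B}] \rangle^* = \left\langle (\hat{A}\hat{B} - \hat{B}\hat{A})^\dagger \right\rangle = \langle \hat{B}\hat{A} - \hat{A}\hat{B} \rangle = -\langle [\hat{A},\hat{B}] \rangle$$

so the expectation value of the commutator is purely imaginary, and dividing it by $2\mathrm{i}$ gives a real number whose square is non-negative.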


This inequality can be applied to any pair of observables $A$ and $B$.²
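As a quick numerical illustration of this generality (a minimal sketch, assuming NumPy is available; the matrices and state are arbitrary random stand-ins), one can check $(29)$ directly for a random pair of hermitian matrices and a random normalised state:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

def random_hermitian(n):
    """A random n x n hermitian matrix, standing in for an observable."""
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (m + m.conj().T) / 2

A = random_hermitian(n)
B = random_hermitian(n)

# Random normalised state |psi>
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)

def expval(op):
    """<psi| op |psi>."""
    return np.vdot(psi, op @ psi)

# Variances, using sigma^2 = <A^2> - <A>^2
var_A = (expval(A @ A) - expval(A) ** 2).real
var_B = (expval(B @ B) - expval(B) ** 2).real

# Right-hand side of the Robertson relation (29)
rhs = (expval(A @ B - B @ A) / 2j).real ** 2

print(var_A * var_B >= rhs)   # True, whatever the state and observables
```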


The Heisenberg uncertainty principle


Simply substituting in $A = x$ and $B = p$ gives us


$$\sigma_x^2 \sigma_p^2 \geq \left(\frac{1}{2\mathrm{i}}\langle[\hat{x},\hat{p} ]\rangle \right)^2 \tag{30}$$


The commutator of $\hat{x}$ and $\hat{p}$ is famously $\mathrm{i}\hbar$,³ and the expectation value of $\mathrm{i}\hbar$ is of course none other than $\mathrm{i}\hbar$. This completes the proof:


$$\begin{align} \sigma_x^2 \sigma_p^2 &\geq \left(\frac{1}{2\mathrm{i}}\cdot\mathrm{i}\hbar \right)^2 \tag{31} \\ &= \left(\frac{\hbar}{2}\right)^2 \tag{32} \\ \sigma_x \sigma_p &\geq \frac{\hbar}{2} \tag{33} \end{align}$$



where we have simply "removed the square" on both sides, because, as standard deviations, $\sigma_x$ and $\sigma_p$ are non-negative.
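As a final sanity check (not part of Griffiths' proof), here is a minimal Python sketch, assuming NumPy and taking $\hbar = 1$, which evaluates $\sigma_x \sigma_p$ for a discretised Gaussian wave packet; the Gaussian is the state that saturates the bound, so the product should come out at (very nearly) $\hbar/2$:

```python
import numpy as np

hbar = 1.0
width = 0.7                               # width parameter of the Gaussian (arbitrary choice)

x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]

# Normalised Gaussian wave function psi(x) ~ exp(-x^2 / (4 width^2))
psi = np.exp(-x**2 / (4 * width**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# Position uncertainty: sigma_x^2 = <x^2> - <x>^2
x_mean = np.sum(np.abs(psi)**2 * x) * dx
x2_mean = np.sum(np.abs(psi)**2 * x**2) * dx
sigma_x = np.sqrt(x2_mean - x_mean**2)

# Momentum uncertainty: <p> = int psi* (-i hbar psi') dx,  <p^2> = hbar^2 int |psi'|^2 dx
dpsi = np.gradient(psi, dx)
p_mean = (np.sum(np.conj(psi) * (-1j * hbar) * dpsi) * dx).real
p2_mean = hbar**2 * np.sum(np.abs(dpsi)**2) * dx
sigma_p = np.sqrt(p2_mean - p_mean**2)

print(sigma_x * sigma_p, hbar / 2)        # ~0.5 vs 0.5
```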




Notes


¹ I have skipped a few steps. Namely, $\langle \hat{A}\psi |\hat{B}\psi \rangle = \langle \psi |\hat{A}\hat{B}|\psi \rangle$, which is straightforward to prove using the hermiticity of $\hat{A}$; $\langle \psi |\hat{A}|\psi \rangle = a$; $\langle \psi |\hat{B}|\psi \rangle = b$; and $a = a^*$, since it is the expectation value of a physical observable, which must be real.
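Spelled out, the first of these reads

$$\langle \hat{A}\psi | \hat{B}\psi \rangle = \left\langle \psi \middle| \hat{A}(\hat{B}\psi) \right\rangle = \langle \psi | \hat{A}\hat{B} | \psi \rangle$$

where the hermiticity of $\hat{A}$ is used to move it from the bra to the ket.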


² This does not apply to, and cannot be used to derive, the energy-time uncertainty principle: there is no time operator in quantum mechanics, and time is not a measurable observable; it is only a parameter.


³ Technically, it is a consequence of the postulates of quantum mechanics: it follows from the form of the momentum operator in the position representation, $\hat{p} = -\mathrm{i}\hbar\,\mathrm{d}/\mathrm{d}x$, which is itself postulated (the Schrödinger equation is not needed for this).
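For the record, the quickest way to verify the commutator is to let it act on an arbitrary test function $\psi(x)$ in the position representation:

$$[\hat{x},\hat{p}]\psi = x\left(-\mathrm{i}\hbar\frac{\mathrm{d}\psi}{\mathrm{d}x}\right) + \mathrm{i}\hbar\frac{\mathrm{d}}{\mathrm{d}x}(x\psi) = -\mathrm{i}\hbar x\frac{\mathrm{d}\psi}{\mathrm{d}x} + \mathrm{i}\hbar\psi + \mathrm{i}\hbar x\frac{\mathrm{d}\psi}{\mathrm{d}x} = \mathrm{i}\hbar\psi$$

so $[\hat{x},\hat{p}] = \mathrm{i}\hbar$ as an operator identity.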

