Problem statement:
I am designing an NP detector for the following detection problem:
$\mathcal H_{0}: x[n] = A_0\cos(2\pi f_0n) + w[n]$
$\mathcal H_{1}: x[n] = A_1\cos(2\pi f_0n) + w[n]$
where:
$f_{0}: \text{known parameter}$
$w[n] \sim \mathcal N\left(0, \sigma^2\right), \text {WGN}$
$A_{0}\ \ \sim \mathcal N\left(a_0, \sigma_0^2\right)$
$A_{1}\ \ \sim \mathcal N\left(a_{1},\sigma_1^2\right)$
What didn't work for me:
- I have tried to directly apply the Neyman-Pearson theorem in the way it is done in "Fundamentals of Statistical Signal Processing, Volume II: Detection Theory" by Steven M. Kay, but I failed to derive the test statistic for the optimal detector.
- I have tried to use the amplitude of the spectral component at $f_{0}$ as a test statistic and to estimate the decision threshold from a given $P_{FA}$ (probability of false alarm). This worked fine for me, but I'm still concerned that this detector is not optimal. (A rough sketch of this approach is given right after this list.)
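For concreteness, a minimal sketch of that second approach (not part of the original post, with arbitrary placeholder parameters) might look like this: the statistic is the magnitude of the data's frequency component at the known $f_0$, and the threshold is set empirically from Monte Carlo records generated under $\mathcal H_0$.

```python
import numpy as np

# Placeholder parameters (illustrative only, not from the original post)
N, f0, sigma2 = 256, 0.1, 1.0
a0, var0 = 0.5, 0.2                  # A0 ~ N(a0, var0) under H0
n = np.arange(N)

def spectral_amplitude(x):
    """Magnitude of the spectral component of x at the known frequency f0."""
    return np.abs(np.sum(x * np.exp(-2j * np.pi * f0 * n)))

# Empirical threshold for a target P_FA, simulated under H0 (signal + noise)
rng = np.random.default_rng(1)
trials, Pfa = 20000, 1e-2
A0 = rng.normal(a0, np.sqrt(var0), size=(trials, 1))
X0 = A0 * np.cos(2 * np.pi * f0 * n) + rng.normal(0, np.sqrt(sigma2), (trials, N))
stats0 = np.apply_along_axis(spectral_amplitude, 1, X0)
threshold = np.quantile(stats0, 1 - Pfa)
```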
Question:
Is there some way to design optimal NP detector for such problem?
Any ideas, advice, or links will be much appreciated.
Answer
Since both the null hypothesis $\mathcal H_0$ and the alternative hypothesis $\mathcal H_1$ are signal + noise, this is not only the detection of a random signal in WGN but also a discrimination (i.e. "classification") problem: telling one signal from the other, with both embedded in WGN. The Neyman-Pearson detector decides $\mathcal H_1$ if:
$$\frac{p(\mathbf x; \mathcal H_1)}{p(\mathbf x; \mathcal H_0)}> \gamma$$
which, for this problem, is:
$$ \frac{\displaystyle\frac{1}{\left[2\pi\left(\sigma_1^2 + \sigma^2\right)\right]^\frac N2}\exp\left[-\frac{1}{2\left(\sigma_1^2 + \sigma^2\right)}\sum_{n=0}^{N-1}\left(x[n]-a_1\cos\left(2\pi f_0 n\right)\right)^2\right]}{\displaystyle\frac{1}{\left[2\pi\left(\sigma_0^2 + \sigma^2\right)\right]^\frac N2}\exp\left[-\frac{1}{2\left(\sigma_0^2 + \sigma^2\right)}\sum_{n=0}^{N-1}\left(x[n]-a_0\cos\left(2\pi f_0 n\right)\right)^2\right]}>\gamma $$
Taking the logarithm of the LHS, you get:
\begin{align} T(\mathbf x)&=\frac N2 \ln\left(\frac{\sigma_0^2 + \sigma^2}{\sigma_1^2 + \sigma^2}\right) +\frac{\sum_{n=0}^{N-1}\left(x[n]-a_0\cos\left(2\pi f_0 n\right)\right)^2}{2\left(\sigma_0^2 + \sigma^2\right)}-\frac{\sum_{n=0}^{N-1}\left(x[n]-a_1\cos\left(2\pi f_0 n\right)\right)^2}{2\left(\sigma_1^2 + \sigma^2\right)}\\ &=\frac N2 \ln\left(\frac{\sigma_0^2 + \sigma^2}{\sigma_1^2 + \sigma^2}\right) +\left[\frac{\left(\sigma_1^2-\sigma_0^2\right)}{2\left(\sigma_0^2+\sigma^2\right)\left(\sigma_1^2+\sigma^2\right)}\right]\cdot\sum_{n=0}^{N-1}x^2[n] \\&\quad+ \left[\frac{\left(a_1-a_0\right)\sigma^2-a_0\sigma_1^2+a_1\sigma_0^2}{\left(\sigma_0^2+\sigma^2\right)\left(\sigma_1^2+\sigma^2\right)}\right]\cdot\sum_{n=0}^{N-1}x[n]\cos\left(2\pi f_0 n\right)\\&\quad+\left[\frac{a_0^2}{2\left(\sigma_0^2 + \sigma^2\right)}-\frac{a_1^2}{2\left(\sigma_1^2 +\sigma^2\right)}\right]\cdot\sum_{n=0}^{N-1} \cos^2\left(2\pi f_0 n\right) \end{align}
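In case the regrouping step in the second equality is not obvious: each quadratic sum expands, for $i \in \{0,1\}$, as

$$ \sum_{n=0}^{N-1}\left(x[n]-a_i\cos\left(2\pi f_0 n\right)\right)^2=\sum_{n=0}^{N-1}x^2[n]-2a_i\sum_{n=0}^{N-1}x[n]\cos\left(2\pi f_0 n\right)+a_i^2\sum_{n=0}^{N-1}\cos^2\left(2\pi f_0 n\right) $$

and collecting the terms in $\sum x^2[n]$, $\sum x[n]\cos\left(2\pi f_0 n\right)$ and $\sum\cos^2\left(2\pi f_0 n\right)$ gives the coefficients shown above.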
The inequality becomes $T(\mathbf x) > \ln\gamma$. Keeping data-dependent terms on the LHS and defining $\gamma'$ as:
$$ \gamma'=\ln\gamma-\frac N2 \ln\left(\frac{\sigma_0^2 + \sigma^2}{\sigma_1^2 + \sigma^2}\right)-\left[\frac{a_0^2}{2\left(\sigma_0^2 + \sigma^2\right)}-\frac{a_1^2}{2\left(\sigma_1^2 +\sigma^2\right)}\right]\cdot\sum_{n=0}^{N-1}{\cos^2\left(2\pi f_0 n\right)} $$
We get:
$$ \underbrace{\overbrace{\frac{\left(\sigma_1^2-\sigma_0^2\right)}{2\left(\sigma_0^2+\sigma^2\right)\left(\sigma_1^2+\sigma^2\right)}\sum_{n=0}^{N-1}x^2[n]}^{\rm Quadratic} + \overbrace{\frac{\left(a_1-a_0\right)\sigma^2-a_0\sigma_1^2+a_1\sigma_0^2}{\left(\sigma_0^2+\sigma^2\right)\left(\sigma_1^2+\sigma^2\right)}\sum_{n=0}^{N-1}{x[n]\cos\left(2\pi f_0 n\right)}}^{\rm Linear}}_{T'(\mathbf x)}> \gamma' $$
The test statistic $T'(\mathbf x)$ contains both a linear term and a quadratic term, so the decision is based both on the variance, characterized by the quadratic term, and on the "mean", characterized by the linear term; it behaves more like a composite energy detector. That's my two cents for now.
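As a quick numerical illustration (not from the original post, with arbitrary placeholder parameters), here is a minimal Python sketch that computes $T'(\mathbf x)$ as derived above and sets the threshold $\gamma'$ empirically by Monte Carlo under $\mathcal H_0$ for a target $P_{FA}$:

```python
import numpy as np

# Example parameters (arbitrary placeholders, not from the original post)
N, f0, sigma2 = 256, 0.1, 1.0       # record length, known frequency, WGN variance
a0, var0 = 0.5, 0.2                 # mean and variance of A0 under H0
a1, var1 = 1.5, 0.8                 # mean and variance of A1 under H1
c = np.cos(2 * np.pi * f0 * np.arange(N))

def T_prime(x):
    """Data-dependent part T'(x) of the log-likelihood ratio derived above."""
    d0, d1 = var0 + sigma2, var1 + sigma2
    quad = (var1 - var0) / (2 * d0 * d1) * np.sum(x ** 2)
    lin = ((a1 - a0) * sigma2 - a0 * var1 + a1 * var0) / (d0 * d1) * np.sum(x * c)
    return quad + lin

def simulate(a, var, rng, trials=20000):
    """Monte Carlo draws of T'(x) for x[n] = A*cos(2*pi*f0*n) + w[n], A ~ N(a, var)."""
    A = rng.normal(a, np.sqrt(var), size=(trials, 1))   # one amplitude per record
    W = rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))
    return np.apply_along_axis(T_prime, 1, A * c + W)

rng = np.random.default_rng(0)
Pfa = 1e-2
t0 = simulate(a0, var0, rng)              # statistic under H0
gamma_p = np.quantile(t0, 1.0 - Pfa)      # empirical threshold gamma' for the target P_FA
t1 = simulate(a1, var1, rng)              # statistic under H1
print(f"gamma' = {gamma_p:.3f},  P_D ~ {np.mean(t1 > gamma_p):.3f}")
```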
By the way, this problem looks like a special case of the general Gaussian detection problem in which the signal has a deterministic part and a random part. Under $\mathcal H_1$, this is modeled as the Bayesian general linear model:
$$ \mathbf{ x = H\theta + w}\tag{1} $$
where $\mathbf H$ is an $N\times p$ deterministic matrix, and $\boldsymbol\theta$ is a $p\times 1$ random vector with PDF $\mathcal N\left(\boldsymbol\mu_\theta, \mathbf C_\theta\right)$, independent of $\mathbf w$.
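For the problem at hand, each hypothesis already has exactly this form with $p = 1$: the observation matrix reduces to a column vector built from the known sinusoid and the random "vector" is the scalar amplitude, i.e.

$$ \mathbf H=\begin{bmatrix}1\\\cos\left(2\pi f_0\right)\\\vdots\\\cos\left(2\pi f_0 (N-1)\right)\end{bmatrix},\qquad \boldsymbol\theta = A_i \sim \mathcal N\left(a_i,\sigma_i^2\right),\quad i=0,1. $$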
If the problem were instead to detect the presence of any signal versus noise only, one could transform the initial formulation into, for instance, $\mathcal H_0$: noise only versus $\mathcal H_1$: signal 1 + signal 2, and that case could also be brought into the form of $(1)$.
UPDATE:
Note two special cases:
- When the random amplitudes have equal variances but different means, that is, if $\sigma_0^2 = \sigma_1^2$ but $a_0 \neq a_1$, the quadratic term (i.e. the energy part) cancels out and the detection problem reduces to one of simply tracking the "mean": the test statistic becomes a scaled correlation of the data with the known sinusoid,
$$ T'(\mathbf x)=\left(\frac{a_1-a_0}{\sigma_0^2+\sigma^2}\right)\sum_{n=0}^{N-1}{x[n]\cos\left(2\pi f_0 n\right)} $$
- And for the case where $a_0 = a_1$ but $\sigma_0^2 \neq \sigma_1^2$, both the quadratic and the linear term remain, with the linear coefficient equal to the quadratic coefficient multiplied by $-2a_0$:
$$ T'(\mathbf x)=\frac{\left(\sigma_1^2-\sigma_0^2\right)}{2\left(\sigma_0^2+\sigma^2\right)\left(\sigma_1^2+\sigma^2\right)}\sum_{n=0}^{N-1}x^2[n] + \frac{\left(\sigma_0^2-\sigma_1^2\right)a_0}{\left(\sigma_0^2+\sigma^2\right)\left(\sigma_1^2+\sigma^2\right)}\sum_{n=0}^{N-1}{x[n]\cos\left(2\pi f_0 n\right)} $$
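To see where the factor $-2a_0$ comes from, note that with $a_0 = a_1$ the numerator of the linear coefficient collapses:

$$ \left(a_1-a_0\right)\sigma^2-a_0\sigma_1^2+a_1\sigma_0^2 = a_0\left(\sigma_0^2-\sigma_1^2\right) = -2a_0\cdot\frac{\sigma_1^2-\sigma_0^2}{2}. $$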
So the random amplitudes $A_0$ and $A_1$ play a part in the detection not only through their respective means $a_0$ and $a_1$ but also through their variances $\sigma_0^2$ and $\sigma_1^2$.