Tuesday, August 13, 2019

estimation - What Is Uncorrelated Noise


In many applications, such as estimation theory, when we need to estimate a parameter we usually assume it is observed in the presence of white Gaussian noise with zero mean and some standard deviation. In maximum likelihood estimation, we also use this assumption. So, my questions are:




  1. Do we consider noise to be uncorrelated or correlated in estimation?




  2. What is the difference between correlated and uncorrelated noise, and what is its significance?





  3. Why do we consider the property of correlated or uncorrelated noise in estimation, and when do we say that measurement noise is Gaussian?





Answer



1) It depends on the application, but in general, both. In some cases we can whiten the noise, as in the following situation:


$$Y = x+ \eta\text{ where }\eta \sim N(0, \Sigma)$$


where we want to estimate $x \in \mathcal{A}$ for some set $\mathcal{A}$ from $Y$. In this case, we multiply both sides by the inverse of the square root of $\Sigma$, which leaves us with an estimation problem in white noise where $\Sigma^{-1/2} x \in \Sigma^{-1/2} \mathcal{A}$.
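The whitening step above can be sketched numerically. This is a minimal illustration, not part of the original answer: the covariance matrix and parameter values below are made up for the demo, and $\Sigma^{-1/2}$ is computed via an eigendecomposition of $\Sigma$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (made-up) correlated noise covariance.
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])

x = np.array([1.0, -2.0])                 # parameter to estimate (illustrative)
n = 10000
eta = rng.multivariate_normal(np.zeros(2), Sigma, size=n)
Y = x + eta                               # Y = x + eta, eta ~ N(0, Sigma)

# Sigma^{-1/2} via eigendecomposition (Sigma is symmetric positive definite).
w, V = np.linalg.eigh(Sigma)
Sigma_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T

# Whitened observations: Z = Sigma^{-1/2} Y = Sigma^{-1/2} x + white noise.
Z = Y @ Sigma_inv_sqrt.T

# The transformed noise has identity covariance (up to sampling error).
emp_cov = np.cov((Z - x @ Sigma_inv_sqrt.T).T)
print(emp_cov)  # ≈ 2x2 identity matrix
```

After whitening, the estimation problem is for $\Sigma^{-1/2} x$ in white Gaussian noise, and the estimate of $x$ is recovered by multiplying back by $\Sigma^{1/2}$.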


2) Two random variables are said to be correlated if their covariance is non-zero. In the case of jointly Gaussian random variables, uncorrelated is equivalent to independent.
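The distinction can be checked empirically from sample covariances. A small sketch (the mixing matrix below is made up for illustration): correlated noise is produced by linearly mixing independent components, which makes the off-diagonal covariance non-zero.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Uncorrelated noise: two independent standard normal components.
u = rng.standard_normal((n, 2))

# Correlated noise: mix the components; covariance becomes A @ A.T,
# whose off-diagonal entry is 0.7 here.
A = np.array([[1.0, 0.0],
              [0.7, 0.5]])
c = u @ A.T

print(np.cov(u.T))  # off-diagonal ≈ 0   (uncorrelated)
print(np.cov(c.T))  # off-diagonal ≈ 0.7 (correlated)
```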



3) Uncorrelated quantities are often easier to deal with because things decouple. For example, when estimating in the MMSE sense, uncorrelatedness is a form of orthogonality (perpendicularity, geometrically). It's the next best thing to independence, and in the most important case, the jointly Gaussian case, the two coincide. As for when things are Gaussian, one may justify the assumption via the central limit theorem, or simply because the answer is usually easy to find: several estimators coincide in the jointly Gaussian case (MAP, MMSE, MMAE). In the jointly Gaussian case, the MMSE estimator is the linear MMSE estimator, which is easy to calculate and relies only on first- and second-order moments, which makes life a lot easier than relying on higher-order moments.


MLE is just a paradigm for non-random parameter estimation (whereas in Bayesian settings such as MAP, MMSE, and MMAE, the quantity you are trying to estimate is modeled as random). None of these methods imposes a signal-plus-additive-noise constraint on the parameter estimation problem.
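As a concrete instance of the MLE paradigm under the Gaussian assumption discussed above (a standard textbook case, sketched here with made-up numbers): for i.i.d. Gaussian measurements of an unknown non-random mean, maximizing the likelihood reduces to minimizing the sum of squared residuals, whose solution is the sample mean.

```python
import numpy as np

rng = np.random.default_rng(3)

theta = 2.5                       # unknown non-random parameter (illustrative)
sigma = 1.0                       # known noise standard deviation
n = 5000
y = theta + rng.normal(0.0, sigma, n)  # y_i = theta + white Gaussian noise

# Log-likelihood is -sum((y_i - theta)^2) / (2 sigma^2) + const,
# so the MLE is the least-squares solution: the sample mean.
theta_mle = y.mean()
print(theta_mle)                  # ≈ 2.5
```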


You should try reading a book or two on the topic.

