Tuesday, August 6, 2019

image processing - Poisson noise and curve fitting - denoise first?


If I have an image that is severely corrupted by Poisson noise, and I want to fit a function to the image, is it "better" to attempt to denoise the signal first before fitting, or should I move straight to the fitting stage?


In the example below, a 2D Gaussian function has been corrupted by Poisson noise. Should I fit a 2D Gaussian to the noisy data, or to a denoised version?


Denoising images is often good for qualitative reasons, but I'm curious about the quantitative case, for example when the volume under the Gaussian is the quantity of interest.


By "denoising", I'm thinking along the lines of the techniques such as Non-local PCA rather than median filtering etc.


Noisy Gaussian image



Answer



If the example images you've given are at all representative of your application, you may want to consider thinking about the problem a little differently. Instead of thinking of the image as "corrupted by Poisson noise", think of the observed data as a limited number of photons sampled at random from the latent image intensity map. The photon counts you get are providing incomplete information about the latent image, not corrupt or noisy information.


From this perspective, if you are truly fitting a curve or formula of few parameters to your data, this is a classical parametric statistical estimation problem (lots of samples, few parameters). Such problems have a good estimation theory which guarantees you will accurately learn the fit parameters without needing too many sample points. If you apply a denoising step before parametric fitting, the theoretical guarantees will no longer apply because the error in the denoised image is some complicated composite of residual fit variance and bias from the denoiser's implicit signal model.


So there's a theoretical justification for fitting the function straight to the data. Just make sure that you use the correct Poissonian likelihood function when you do so, or use the EM algorithm if you don't want to worry about such things.
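As a concrete illustration of fitting straight to the counts, here is a minimal sketch (not from the answer itself; all parameter names and the simulated data are illustrative) that minimizes the negative Poisson log-likelihood of a 2D Gaussian model against a binned photon-count image using SciPy:

```python
# Sketch: fit a 2D Gaussian intensity map to a photon-count image by
# minimizing the negative Poisson log-likelihood directly, with no
# denoising step. Names and values here are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

def gaussian2d(params, xx, yy):
    """Axis-aligned 2D Gaussian intensity model."""
    amp, mx, my, sx, sy = params
    return amp * np.exp(-0.5 * (((xx - mx) / sx) ** 2 + ((yy - my) / sy) ** 2))

def neg_poisson_loglik(params, xx, yy, counts):
    lam = gaussian2d(params, xx, yy) + 1e-12  # guard against log(0)
    # Poisson log-likelihood up to an additive constant: sum(k*log(lam) - lam)
    return -np.sum(counts * np.log(lam) - lam)

# Simulate a Poisson-corrupted Gaussian image like the one in the question.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:64, 0:64]
true_params = (20.0, 32.0, 30.0, 5.0, 7.0)
counts = rng.poisson(gaussian2d(true_params, xx, yy))

res = minimize(neg_poisson_loglik, x0=(10.0, 28.0, 28.0, 4.0, 4.0),
               args=(xx, yy, counts), method="Nelder-Mead",
               options={"maxiter": 5000})
amp, mx, my, sx, sy = res.x
volume = 2 * np.pi * amp * sx * sy  # volume under the fitted Gaussian
```

Note that the objective is the Poisson likelihood of the counts given the model intensity, not a least-squares residual; with severe Poisson noise the two can give noticeably different parameter estimates.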



EDIT: actually you don't even need Poissonian likelihood here (unless your data is quantized/binned). Just take the approach described in http://en.wikipedia.org/wiki/Maximum_likelihood. Here the $x_1, x_2, ... x_n$ are a collection of $n$ vectors giving the 2-D coordinates of your photon observations, and $\theta$ is the parameter vector for the function you want to fit. In the case of a Gaussian you have $\theta = (\mu,\Sigma)$ where $\mu$ is the mean (centroid in 2D) and $\Sigma$ is the covariance matrix. Then in the notation of the wiki article, your likelihood is


$$ \prod_{i=1}^n f(x_i | \theta) = C \prod_{i=1}^n \exp( -\tfrac{1}{2} (x_i - \mu)^T \Sigma^{-1} (x_i - \mu)) $$


where $C$ is a normalization constant involving $2\pi$ and $\det\Sigma$ :) If your data is quantized/binned, then you treat the count value in each bin as a Poisson random variable and go from there. Interestingly, the binned Poisson approach and the unbinned $n$-observation approach are nearly equivalent from an optimization standpoint.
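For the unbinned case the maximization above has a closed form: the Gaussian MLE of $\theta = (\mu, \Sigma)$ is just the sample mean and sample covariance of the photon coordinates, so no iterative optimizer is needed. A small sketch with simulated coordinates (the data here is illustrative):

```python
# Sketch of the unbinned maximum-likelihood fit: given raw 2-D photon
# coordinates x_1..x_n, the MLE of (mu, Sigma) for a Gaussian is the
# sample mean and the (1/n-normalized) sample covariance.
import numpy as np

rng = np.random.default_rng(1)
mu_true = np.array([3.0, -1.0])
Sigma_true = np.array([[2.0, 0.5],
                       [0.5, 1.0]])
photons = rng.multivariate_normal(mu_true, Sigma_true, size=20000)  # n x 2

mu_hat = photons.mean(axis=0)                      # MLE of the centroid
Sigma_hat = np.cov(photons, rowvar=False, ddof=0)  # MLE of the covariance
```

The `ddof=0` normalization (divide by $n$ rather than $n-1$) is what the likelihood maximization actually yields; for large photon counts the difference is negligible.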


I don't have a great reference for you offhand because I learned this stuff in lab classes or wherever years ago. But the wiki article and refs therein seem good. This paper seems ok too: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.74.671&rep=rep1&type=pdf

