Sunday, November 25, 2012

Sunday Morning Insight: So what is missing in Compressive Imaging and Uncertainty Quantification ?

Paul Graham, in one of his recent essays, mentioned the following when it comes to finding start-up ideas.

"Once you're living in the future in some respect, the way to notice startup ideas is to look for things that seem to be missing. If you're really at the leading edge of a rapidly changing field, there will be things that are obviously missing. What won't be obvious is that they're startup ideas. So if you want to find startup ideas, don't merely turn on the filter "What's missing?" Also turn off every other filter, particularly "Could this be a big company?" There's plenty of time to apply that test later. But if you're thinking about that initially, it may not only filter out lots of good ideas, but also cause you to focus on bad ones."

If you read this blog, you are already on the edge of knowledge, in a rapidly moving field and to a certain extent you are already living in the future.

So what is missing ?

Compressive sensing is a way of gathering data efficiently while knowing full well that what you generally acquire follows a power law of some sort. With that in mind, people have pushed concentration of measure results further, to more complex objects like matrices, and will eventually cross over to tensors (especially if we can leverage the matrix approach featured in the QTT format).
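
For readers who have never played with it, here is a minimal numpy sketch (my own toy, in its basic vector flavor rather than the matrix or tensor ones) of what concentration of measure buys you: the norm of a random projection of a fixed vector concentrates tightly around the norm of that vector.

```python
# Minimal sketch (illustrative): concentration of measure for random
# projections -- ||A x||_2 concentrates around ||x||_2 when the entries
# of A are i.i.d. Gaussian scaled by 1/sqrt(m).
import numpy as np

rng = np.random.default_rng(0)
n, m, trials = 1000, 200, 500

x = rng.standard_normal(n)
x /= np.linalg.norm(x)                            # a fixed unit-norm vector

norms = []
for _ in range(trials):
    A = rng.standard_normal((m, n)) / np.sqrt(m)  # random sensing matrix
    norms.append(np.linalg.norm(A @ x))

norms = np.array(norms)
print(f"mean of ||Ax|| over {trials} draws: {norms.mean():.3f}")
print(f"std  of ||Ax|| over {trials} draws: {norms.std():.3f}")
# The mean sits near 1 and the spread is small (and shrinks as m grows):
# the measurements concentrate around the quantity they are meant to capture.
```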

In a way, compressive sensing is successful because the framework has a wide appeal beyond signal processing. Indeed, if you read again what the analysis operator approach does, it is nothing more than solving a differential equation subject to some measurements. The co-sparse part represents the sources and/or the inhomogeneous part of these partial differential equations, which are themselves constrained in the loss function.
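
To make that remark a little more concrete, here is a small sketch with an analysis operator of my own choosing (a first-order finite difference, a crude discretized differential operator): the field itself is dense, but applying the operator leaves only a few nonzeros, and those nonzeros play the role of the sources.

```python
# Minimal sketch (my own illustration) of the analysis / co-sparse view.
# A piecewise-constant signal f is not sparse itself, but Omega f is,
# where Omega is a first-order finite-difference operator.
import numpy as np

n = 200
f = np.concatenate([np.full(70, 1.0), np.full(80, -0.5), np.full(50, 2.0)])

# First-order finite differences: (Omega f)_i = f_{i+1} - f_i
Omega = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
cosparse = Omega @ f

print("signal length       :", n)
print("nonzeros in f       :", np.count_nonzero(f))
print("nonzeros in Omega f :", np.count_nonzero(np.abs(cosparse) > 1e-12))
# Omega f vanishes wherever the homogeneous equation f' = 0 holds; the few
# nonzeros correspond to the sources / inhomogeneous part mentioned above.
```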

All is well but there are dark clouds.



The last time I mentioned Uncertainty Quantification on the blog, it was to say that while a Polynomial Chaos approach could follow the traditional compressive sensing framework, in all likelihood, given the Donoho-Tanner phase transition, you probably had to go through the extra complexity of wavelet chaos in order to find a winner. If you have ever dealt with polynomial series expansions, you know that all kinds of problems come from the coefficient thresholding we expect in compressive sensing. Even if some expansion coefficients are small, they still matter ... at least empirically. This is known as the Gibbs phenomenon.
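
For those who have not run into it, here is a tiny sketch of the Gibbs phenomenon (the standard textbook square-wave example, not tied to any particular UQ code): truncating the series expansion of a discontinuous function leaves an overshoot near the jump that never goes away, however many terms you keep.

```python
# Minimal sketch (illustrative): truncating the Fourier series of a square
# wave leaves an overshoot near the jump that does not vanish as N grows.
import numpy as np

x = np.linspace(-np.pi, np.pi, 4001)
square = np.sign(np.sin(x))          # target: a +/-1 square wave

for N in (10, 50, 200):
    # Partial sum of the odd-harmonic sine series of the square wave
    partial = sum(4.0 / (np.pi * k) * np.sin(k * x) for k in range(1, N + 1, 2))
    overshoot = partial.max() - 1.0
    print(f"N = {N:4d}  max overshoot above 1: {overshoot:.3f}")
# The overshoot settles near ~0.18 (about 9% of the jump of size 2):
# the small high-order coefficients still matter, which is why naively
# thresholding an expansion is risky.
```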



Similarly, if you are part of the compressive sensing group on LinkedIn, you have seen that seemingly small questions lead to not-so-favorable answers for compressive imaging. In this instance, you realize that the only way to reconstruct a very nice image with some of the most advanced compressive sensing reconstruction algorithms is to ... cheat. You first have to acquire an image, threshold its series expansion, and then acquire the compressive measurements from that thresholded version. When you do that, you indeed get near-perfect reconstructions, but it is doubtful there is a CS camera that can do that.
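
Here is a toy 1D version of that cheat (my own stand-in, not the actual LinkedIn experiment): a power-law decaying coefficient vector plays the part of an image's transform coefficients, and the reconstruction is a plain iterative soft thresholding rather than one of the advanced solvers.

```python
# Minimal sketch (my own toy): measuring a pre-thresholded (exactly sparse)
# version of a compressible vector versus measuring the honest vector itself.
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 400, 120, 15

# "Compressible" coefficients: magnitudes decay like a power law.
z_true = rng.permutation(1.0 / np.arange(1, n + 1)) * rng.choice([-1.0, 1.0], n)

# The cheat: hard-threshold to the k largest entries *before* measuring.
z_cheat = np.where(np.abs(z_true) >= np.sort(np.abs(z_true))[-k], z_true, 0.0)

A = rng.standard_normal((m, n)) / np.sqrt(m)     # random measurement matrix

def ista(A, y, lam=1e-3, iters=10000):
    """Iterative soft thresholding for min_z 0.5*||Az - y||^2 + lam*||z||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    z = np.zeros(A.shape[1])
    for _ in range(iters):
        g = z - step * (A.T @ (A @ z - y))
        z = np.sign(g) * np.maximum(np.abs(g) - lam * step, 0.0)
    return z

for name, z in [("honest (compressible)", z_true), ("cheat (pre-thresholded)", z_cheat)]:
    rec = ista(A, A @ z)
    err = np.linalg.norm(rec - z) / np.linalg.norm(z)
    print(f"{name:25s} relative reconstruction error: {err:.3f}")
# The pre-thresholded version comes back far more accurately; the honest,
# merely compressible one does not.
```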

The underlying reason for this disillusion in both CS imaging and UQ is that while CS works fine for sparse signals, it is not all that great for unknowns that are merely compressible, i.e. not exactly sparse. In short, if you hope to do well because the object of interest is only compressible in terms of some eigenfunction expansion, you might be making a mistake. In both UQ and imaging, what is missing is a result for compressible signals, and it looks like the one we have is just not good enough.

As Paul asked earlier, "What's missing ?"

I think two ideas could take us out of this slump.



One: Maybe our sampling scheme for the eigenfunction expansion is chosen too early and we ought to rethink it in light of the work on infinite-dimensional compressive sensing (see [1] [2]). 


Two: The second idea revolves around learning the right analysis operator in both imaging applications and UQ.

Unlike the traditional approach in compressive imaging, which relied on an eigenfunction expansion (and eventually led us to this slump), the analysis approach goes for what is obvious: the zeros sit on one side of the homogeneous equation fulfilled by the field of interest. They are not epsilons, just zeros, the stuff that makes something sparse.
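
A small sketch of that "not epsilons, just zeros" point, with a 1D operator of my own choosing: a field driven by a couple of point sources has eigenfunction-expansion coefficients that merely decay, while applying the discretized operator itself gives exact zeros away from the sources.

```python
# Minimal sketch (illustrative): synthesis coefficients are epsilons,
# the analysis residual has exact zeros.
import numpy as np

n = 300
# Discrete 1D Laplacian with Dirichlet boundary conditions
L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))

c = np.zeros(n)
c[[n // 4, 2 * n // 3]] = [1.0, -0.5]        # two point sources
f = np.linalg.solve(L, c)                    # the resulting field

# Eigenfunction (discrete sine) expansion of f, using the eigenvectors of L
evals, evecs = np.linalg.eigh(L)
coeffs = evecs.T @ f

tol = 1e-9
print("eigen-coefficients above tol :", np.count_nonzero(np.abs(coeffs) > tol))
print("entries of L f above tol     :", np.count_nonzero(np.abs(L @ f) > tol))
# Essentially every expansion coefficient is nonzero (it only decays),
# whereas L f vanishes exactly everywhere except at the two sources.
```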

In imaging applications, you traditionally acquire the eigenfunction expansion of a 2D projection of the plenoptic function. The TV-norm, a special case of analysis operator, is successful because the PSF of most cameras is degenerate. If the PSF were not degenerate, what sort of analysis operator should be used ?
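
A quick sketch of why the degeneracy of the PSF matters (again a 1D toy of my own): the finite differences of a piecewise-constant scene are sparse, but once the scene goes through a broad PSF they no longer are, and the TV-norm loses its bite.

```python
# Minimal sketch (my own illustration): delta-like PSF -> sparse gradient,
# broad PSF -> gradient is no longer sparse.
import numpy as np

rng = np.random.default_rng(2)
n = 500
scene = np.repeat(rng.standard_normal(10), n // 10)   # piecewise-constant scene

psf = np.exp(-0.5 * (np.arange(-15, 16) / 5.0) ** 2)
psf /= psf.sum()                                      # broad Gaussian PSF
blurred = np.convolve(scene, psf, mode='same')

tol = 1e-8
print("gradient nonzeros, delta PSF :", np.count_nonzero(np.abs(np.diff(scene)) > tol))
print("gradient nonzeros, broad PSF :", np.count_nonzero(np.abs(np.diff(blurred)) > tol))
# With a broad PSF, a different analysis operator than plain finite
# differences would be needed to get back to exact zeros.
```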

In UQ, you try to estimate how changes in the coefficients of the transfer equation affect the solution of that equation. The question being answered is: which region of coefficient phase space is consistent with the measurements ?

Both problems eventually converge on the need to develop an analysis approach of the following sort:

min_x || L x - c ||   subject to   A x = b

where L represents the discretization of the field transport operator (with unknown coefficients in the UQ case), c the boundary conditions or sources, A the hardware sensing mechanism, and b the measurements.
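
Here is a minimal toy instance of that formulation, with my own choices for L, A and c (a second-difference operator with no boundary rows, and a handful of point samples for the hardware), solved through its KKT system:

```python
# Minimal sketch (a toy instance of the formulation above): recover a 1D
# field x from a few point measurements A x = b while fitting the
# discretized field equation L x = c in the least-squares sense.
import numpy as np

rng = np.random.default_rng(3)
n, m = 100, 8
t = np.linspace(0.0, 1.0, n)

# L: interior second-difference operator, shape (n-2, n) -- no boundary rows,
# so the field equation alone leaves an affine family of solutions.
L = np.zeros((n - 2, n))
idx = np.arange(n - 2)
L[idx, idx], L[idx, idx + 1], L[idx, idx + 2] = 1.0, -2.0, 1.0

x_true = np.sin(3 * np.pi * t) + t ** 2      # the field we pretend not to know
c = L @ x_true                               # sources / inhomogeneous part

A = np.zeros((m, n))                         # hardware: m point samples
A[np.arange(m), rng.choice(n, m, replace=False)] = 1.0
b = A @ x_true                               # the measurements

# Solve  min_x ||L x - c||_2  subject to  A x = b  via its KKT system
KKT = np.block([[L.T @ L, A.T], [A, np.zeros((m, m))]])
rhs = np.concatenate([L.T @ c, b])
x_hat = np.linalg.solve(KKT, rhs)[:n]

print("relative recovery error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
# The point samples pin down what the field equation leaves undetermined,
# and the field comes back essentially exactly.
```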

Thank you Leslie Smith for the fruitful discussion.

Reference:

