Sunday, May 02, 2010

CS: MATLAB Programming Contest on Compressive Sensing (part deux)

Thanks to R's advice, the CoSaMP score has gone down to 289051, while the best score is currently 28491. What's an order of magnitude among friends? I am not quite sure how many more instances I am going to run, but if you do compete, there is a "clone this code" feature that lets you take the entry and improve upon it. If you do so, please let me know your version's score in the comment section.

14 comments:

Unknown said...

How come the results are poorer than simple interpolation?

Igor said...

What interpolation?

Igor

R said...

Hi Igor,

I am having some issues with this contest :)

1- It seems like this is not really compressed sensing in the sense that (in the contest) the measurement process is allowed to be adaptive. In other words, you can make calculations "at the sensor".

2- In a sense, compressed sensing is a non-symmetric process: you should be allowed to take time to reconstruct the image as long as you keep the sensing cheap, i.e., non-adaptive. The timing constraint seems to be a major issue here.

R said...

More issues :)

It appears that the major part of the cost (in terms of time) in the CS-based codes is the generation and storage of the measurement matrix. Again, in reality you would do this once, store it, and use it for as many images as you want.

R said...

Hi Igor,

Some relatively good news:
I modified one of your codes (repeatedly) to get this thing.
Basically, the sampling scheme (measurement matrix) is just a restriction operator. In other words, every row polls one pixel value at random.
Thus, we are now sampling in the spatial domain, which happens to be incoherent with the Fourier domain (and the discrete cosine transform domain). Images are sparse(ish) in Fourier, so we are now in business.
I also added the iterative hard thresholding algorithm into the mix.
The results still could use improvement but at least we are making progress.
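
A minimal MATLAB sketch of that combination, i.e., random pixel sampling plus iterative hard thresholding with a DCT sparsity basis (the test image, sampling rate, sparsity level, and iteration count below are illustrative assumptions, not the contest code):

n = 64; x = phantom(n);               % test image (needs the Image Processing Toolbox)
p = randperm(n^2); m = round(0.3*n^2);
idx = p(1:m);                         % restriction operator: poll m pixels at random
y = x(idx);                           % the measurements

k = 300;                              % DCT coefficients kept per iteration (assumed)
z = zeros(n);
for it = 1:200
    r = zeros(n); r(idx) = y - z(idx);    % residual on the sampled pixels
    z = z + r;                            % gradient step (the operator is a mask)
    c = dct2(z);                          % go to the (approximately) sparse domain
    [~, ord] = sort(abs(c(:)), 'descend');
    c(ord(k+1:end)) = 0;                  % hard threshold: keep the k largest coeffs
    z = idct2(c);                         % back to the image domain
end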

The Vlad said...

Hi R,

I ran your algorithm, and while the results are certainly an improvement over the previous version, they are still underwhelming. I entered an algorithm that simply solves Laplace's equation based on M randomly-sampled pixel values ('vladirator10'), and this did much better than the compressive sensing algorithms. The winning entries are, of course, highly optimized hacks on hacks, so it's obviously difficult to compete with those. But still, any thoughts on why the performance of the compressive sensing algorithms is so poor?
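
For reference, the Laplace-equation approach amounts to harmonic inpainting: keep the sampled pixels fixed and relax every unknown pixel toward the average of its neighbors. A rough MATLAB sketch of the idea (a plain Jacobi iteration with periodic boundaries for brevity; this is my guess at the scheme, not the actual vladirator10 entry):

n = 64; x = phantom(n);
p = randperm(n^2); idx = p(1:round(0.3*n^2));
known = false(n); known(idx) = true;       % mask of sampled pixels
z = zeros(n); z(known) = x(known);
for it = 1:2000
    avg = (circshift(z,[1 0]) + circshift(z,[-1 0]) + ...
           circshift(z,[0 1]) + circshift(z,[0 -1])) / 4;
    z(~known) = avg(~known);               % relax only the unknown pixels
end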

Igor said...

I have not looked at any figure of merit other than the score, but are those images sparse?

R said...

I completely agree, the results are depressing. On the other hand, if I run spgl1 as the "solver" (on my local machine), the results improve dramatically: you start to see things that resemble images.
Like I said before, this is not really "canonical" compressed sensing, for several reasons. Here are three important ones:
1- You should not be allowed to sample adaptively.
2- You should not be penalized for the sampling cost. I found it impossible to generate a full random {0,1} measurement matrix (easily convertible to {-1,1}); the only sampling scheme I could get away with was this silly sparse random one. My experiments timed out otherwise.
3- Given the time constraints (and rules) of the contest, you can't run powerful algorithms like l1 minimization or total variation minimization. Nor can you really use good sparsity bases without a lot of programming effort...

Despite all of the above, I am still disappointed with the results.

Igor said...

To get the conversation going, I have removed the comment authorization process, so you can now comment live!


R said...

To follow up on that: just by running 100 iterations of spgl1 (a fast l1-minimization algorithm), the reconstruction error on the provided test cases is reduced by a factor of 3-5 compared to the best I can currently do with iterative soft and hard thresholding in the contest.
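
For the curious, a call along those lines might look like the sketch below, using spgl1's function-handle interface (mode 1 computes A*v, mode 2 computes A'*v). The DCT sparsity basis, the sigma value, and the helper name partialDCT are my assumptions, not R's actual code:

% --- partialDCT.m: forward/adjoint operator for spgl1 ---
function out = partialDCT(v, mode, idx, n)
if mode == 1
    z = idct2(reshape(v, n, n));        % DCT coefficients -> image
    out = z(idx(:));                    % restrict to the polled pixels
else
    z = zeros(n); z(idx(:)) = v;        % adjoint: scatter measurements back
    out = reshape(dct2(z), [], 1);      % image -> DCT coefficients, as a column
end
end

% --- driver script ---
n = 64; x = phantom(n);
p = randperm(n^2); idx = p(1:round(0.3*n^2));
b = x(idx(:));                          % measurements, as a column vector
opts = spgSetParms('iterations', 100, 'verbosity', 0);
c = spg_bpdn(@(v,mode) partialDCT(v,mode,idx,n), b, 1e-3, opts);
xrec = idct2(reshape(c, n, n));         % reconstructed image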

LONG NGUYEN said...

First of all, the images are natural, so they are not sparse. To test, you can run

>> runcontest(1)

in MATLAB.

As I have mentioned before, CS requires a sparsity basis, but in this contest I can't find one in some of the implementations. I intended to use SSP/SSMP to run it, but that requires at least a wavelet basis. And when comparing with the parameters in the solver function, I got no answer :(

R said...

After much tinkering, I got the score down to around 75000.
On the sampling side, I used jittered sampling. On the reconstruction side, I basically used iterative soft thresholding with variable thresholds (one for low frequencies and one for high frequencies) and with cooling, i.e., the thresholds decrease with the iteration count. Note that this is a poor man's substitute for a good l1-minimization algorithm: spgl1 now does much better (2-3 times lower error) on the contest's training set, in about half the time, but it is too hard to convert that code so that it complies with the contest rules.

In short, despite the non-canonical CS scenario, CS techniques (even when not used at full force) can still be made to work, albeit with much effort.
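
A minimal sketch of such a cooled soft-thresholding loop (plain random sampling stands in for the jittered sampling, and the low/high-frequency split, the schedule, and every parameter value here are assumptions on my part):

n = 64; x = phantom(n);
p = randperm(n^2); idx = p(1:round(0.3*n^2));
y = x(idx);

[I, J] = ndgrid(1:n, 1:n);
low = (I <= 8) & (J <= 8);                 % assumed low-frequency DCT band
z = zeros(n); T = 200;
for it = 1:T
    z(idx) = y;                            % enforce the measured pixels
    c = dct2(z);
    cool = 1 - it/T;                       % thresholds decrease with iteration count
    t = 0.1*cool*ones(n); t(low) = 0.01*cool;  % two thresholds, both cooled
    c = sign(c) .* max(abs(c) - t, 0);     % soft thresholding
    z = idct2(c);
end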

R said...

here's the link

Anonymous said...

I have run some Compressed Sensing code on the MATLAB contest.

Basically, it is soft shrinkage with continuation (a decreasing threshold) in a simply implemented DWT, plus application of the pixel-range constraint.

I use image blocks (e.g., 32x32 or 16x16) so that the SVD for the measurement matrix inversion can be handled in reasonable time.

Each block uses one DC measurement plus random measurements of 50% of the pixels.

The code is pretty straightforward.
There is a lot of "fail-safe" stuff in it to handle non-power-of-2 cases, so there must be a lot of room for improvement.
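
As a rough illustration of the per-block measurement just described (one DC row plus rows polling half the pixels; the block size and variable names are my own, and pinv stands in for the SVD-based inversion):

B = 16; N = B^2;                    % one B-by-B block, vectorized
m = round(0.5 * N);                 % pixel measurements: 50% of the block
p = randperm(N); rows = p(1:m);
A = zeros(m + 1, N);
A(1, :) = 1 / N;                    % DC measurement: the block average
A(2:end, :) = full(sparse(1:m, rows, 1, m, N));  % random pixel polls
blk = rand(N, 1);                   % a stand-in block
yblk = A * blk;                     % measure the block
xls = pinv(A) * yblk;               % SVD-based minimum-norm least-squares estimate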


My current best score of 54168.4 is here:
http://www.mathworks.com/matlabcentral/contest/contests/2/submissions/4338

I made it pretty easy to vary parameters. I know time is short, but if you are curious feel free to mess around with the code.

I can only agree with "R" on the scoring and measurement process... I would have loved to see a real compressed sensing challenge. Also, the penalty on computation time is huge; in the CS case, this encourages trading reconstruction quality for computation time.

Robert
