Ozan just sent me the following e-mail. It has the right mix of elements of The Great Convergence: applying learning-to-learn methods to inverse problems we once thought compressive sensing could solve well (CT tomography), papers supporting those results, an implementation, a blog entry, and two postdoc jobs. Awesome!
I have for some time followed your excellent blog Nuit Blanche. I'm not familiar with how you select entries for Nuit Blanche, but let me take the opportunity to provide potential input related to Nuit Blanche on the exciting research we pursue at the Department of Mathematics, KTH Royal Institute of Technology. If you find any of this interesting, please feel free to post it on Nuit Blanche.
1. Deep learning and tomographic image reconstruction
The main objective of the research is to develop theory and algorithms for 3D tomographic reconstruction. An important recent development has been the use of techniques from deep learning to solve inverse problems. We have developed a rather generic, yet adaptable, framework that combines elements of variational regularization with machine learning for solving large-scale inverse problems. More precisely, the idea is to learn a reconstruction scheme by making use of the forward operator, the noise model and other a priori information. This goes beyond learning a denoiser, where one first performs an initial (non-machine-learning) reconstruction and then applies machine learning to the resulting image-to-image (denoising) problem. Several groups have taken that learned-denoiser route and the results are in fact quite remarkable, outperforming previous state-of-the-art methods. Our approach, however, combines the reconstruction and denoising steps, which further improves the results. The arXiv reports http://arxiv.org/abs/1707.06474 and http://arxiv.org/abs/1704.04058 provide more details. There is also a blog post at http://adler-j.github.io/2017/07/21/Learning-to-reconstruct.html by one of our PhD students that explains this idea of "learning to reconstruct".
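To make the contrast concrete, here is a small illustrative sketch (not the actual method) of the two strategies on a toy linear problem: in learned post-processing, the network only ever sees an initial reconstruction, whereas in learned reconstruction the forward operator and the raw data enter every iteration. All names (`fbp`, `denoise_net`) and the simple updates standing in for trained networks are placeholders.

```python
import numpy as np

# Toy setup: a random matrix stands in for the tomographic forward operator.
rng = np.random.default_rng(0)
A = rng.standard_normal((25, 10)) / 5.0    # toy forward operator
x_true = rng.standard_normal(10)
y = A @ x_true                             # toy measured data

def fbp(y):
    # Stand-in for an analytic (non-learned) reconstruction such as FBP.
    return A.T @ y

def denoise_net(x):
    # Stand-in for a trained image-to-image denoiser.
    return x

# Strategy 1: learned post-processing -- the learned component never sees
# A or the raw data y, only the initial reconstruction.
x_post = denoise_net(fbp(y))

# Strategy 2: learned reconstruction -- the forward operator and the raw
# data enter every iteration, so data inconsistency can be corrected.
x = np.zeros(10)
for _ in range(500):
    grad = A.T @ (A @ x - y)    # data-discrepancy gradient, fed to the net
    x = x - 0.3 * grad          # stand-in for the learned update
```

Even with these trivial stand-ins, the second pipeline drives the data residual down because it keeps consulting the forward model, which is the structural point of the approach.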
2. Post doctoral fellowships
I'm looking to fill two 2-year post-doctoral fellowships: one dealing with regularization of spatiotemporal and/or multichannel images, and the other with methods for combining elements of variational regularization with deep learning for solving inverse problems. The announcements are given below. I would be glad if you could also post these on your blog.
Postdoctoral fellow in PET/SPECT Image Reconstruction (S-2017-1166)
Deadline: December 1, 2017
The position includes research & development of algorithms for PET and SPECT image reconstruction. The work is closely related to ongoing research on (a) multi-channel regularization for PET/CT and SPECT/CT imaging, (b) joint reconstruction and image matching for spatio-temporal pulmonary PET/CT and cardiac SPECT/CT imaging, and (c) task-based reconstruction by iterative deep neural networks. An important part is to integrate routines for forward and backprojection from reconstruction packages like STIR and EMrecon for PET and NiftyRec for SPECT with ODL (http://github.com/odlgroup/odl), our Python-based framework for reconstruction. Part of the research may include industrial (Elekta and Philips Healthcare) and clinical (Karolinska University Hospital) collaboration.
Announcement & instructions:
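As an aside, the integration task mentioned above amounts to exposing external forward/backprojection routines behind a common operator interface, in the spirit of what ODL's `Operator` class provides. The sketch below is purely illustrative (a real integration would subclass `odl.Operator`; here a matrix product stands in for the external package), including the standard adjoint consistency check one runs before plugging such an operator into an iterative scheme.

```python
import numpy as np

class WrappedProjector:
    """Illustrative wrapper pairing an external forward projector with
    its backprojector, exposed as operator and adjoint."""

    def __init__(self, forward, back):
        self._forward = forward   # external forward projection routine
        self._back = back         # external backprojection routine

    def __call__(self, x):
        return self._forward(x)

    @property
    def adjoint(self):
        # Backprojection plays the role of the adjoint operator.
        return WrappedProjector(self._back, self._forward)

# Toy "external package": matrix multiplication stands in for projection.
rng = np.random.default_rng(0)
M = rng.standard_normal((8, 5))
proj = WrappedProjector(lambda x: M @ x, lambda y: M.T @ y)

# Adjoint consistency check <A x, y> == <x, A^T y>, a standard sanity
# test before using the wrapped operator in reconstruction.
x, y = rng.standard_normal(5), rng.standard_normal(8)
lhs = np.dot(proj(x), y)
rhs = np.dot(x, proj.adjoint(y))
```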
Postdoctoral fellow in Image Reconstruction/Deep Dictionary Learning (S-2017-1165)
Deadline: December 1, 2017
The position includes research & development of theory and algorithms that combine methods from machine learning with sparse signal processing for joint dictionary design and image reconstruction in tomography. A key element is to design dictionaries that not only yield sparse representations, but also contain discriminative information. Methods will be implemented in ODL (http://github.com/odlgroup/odl), our Python-based framework for reconstruction, which enables one to utilize the existing integration between ODL and TensorFlow. The research is part of a larger effort that aims to combine elements of variational regularization with machine learning for solving large-scale inverse problems; see the arXiv reports http://arxiv.org/abs/1707.06474 and http://arxiv.org/abs/1704.04058 and the blog post at http://adler-j.github.io/2017/07/21/Learning-to-reconstruct.html for further details. Part of the research may include industrial (Elekta and Philips Healthcare) and clinical (Karolinska University Hospital) collaboration.
Announcement & instructions:
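For readers less familiar with dictionary learning, here is a minimal, generic sketch of the alternating structure such methods share: sparse-code the training signals against the current dictionary, then update the dictionary by least squares. The coding step (correlation screening plus a least-squares refit, a crude stand-in for OMP) and the MOD-style update are classical textbook choices, not the project's method, which additionally couples the dictionary to the tomographic forward model and discriminative terms.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_atoms, n_signals, k = 16, 24, 200, 3

# Synthesize k-sparse training signals from a hidden dictionary.
D_true = rng.standard_normal((n, n_atoms))
D_true /= np.linalg.norm(D_true, axis=0)
Y = np.zeros((n, n_signals))
for j in range(n_signals):
    idx = rng.choice(n_atoms, size=k, replace=False)
    Y[:, j] = D_true[:, idx] @ rng.standard_normal(k)

def sparse_code(D, Y, k):
    # Pick the k most correlated atoms per signal, refit by least squares.
    X = np.zeros((D.shape[1], Y.shape[1]))
    for j in range(Y.shape[1]):
        idx = np.argsort(-np.abs(D.T @ Y[:, j]))[:k]
        X[idx, j], *_ = np.linalg.lstsq(D[:, idx], Y[:, j], rcond=None)
    return X

def rel_err(D, X):
    return np.linalg.norm(Y - D @ X) / np.linalg.norm(Y)

# Random initial dictionary with unit-norm atoms.
D = rng.standard_normal((n, n_atoms))
D /= np.linalg.norm(D, axis=0)
err0 = rel_err(D, sparse_code(D, Y, k))   # error before training

for _ in range(20):
    X = sparse_code(D, Y, k)
    D = Y @ np.linalg.pinv(X)                # MOD dictionary update
    D /= np.linalg.norm(D, axis=0) + 1e-12   # renormalize atoms

err = rel_err(D, sparse_code(D, Y, k))       # error after training
```

The alternation reduces the representation error relative to the random initialization; the project's joint design would replace the synthetic `Y` with data tied to the reconstruction problem.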
Assoc. Prof. Ozan Öktem
Director, KTH Life Science Technology Platform
Department of Mathematics
KTH Royal Institute of Technology
SE-100 44 Stockholm, Sweden
Learned Primal-dual Reconstruction by Jonas Adler, Ozan Öktem
We propose a Learned Primal-Dual algorithm for tomographic reconstruction. The algorithm includes the (possibly non-linear) forward operator in a deep neural network inspired by unrolled proximal primal-dual optimization methods, but where the proximal operators have been replaced with convolutional neural networks. The algorithm is trained end-to-end, working directly from raw measured data and does not depend on any initial reconstruction such as FBP.
We evaluate the algorithm on low-dose CT reconstruction, using both analytic and human phantoms, against classical FBP reconstruction, TV-regularized reconstruction, and deep-learning-based post-processing of an FBP reconstruction.
For the analytic data we demonstrate PSNR improvements of >10 dB when compared to both TV reconstruction and learned post-processing. For the human phantom we demonstrate a 6.6 dB improvement compared to TV and a 2.2 dB improvement compared to learned post-processing. The proposed algorithm also improves on the compared methods with respect to SSIM, and the evaluation time is approximately 600 ms for a 512 x 512 pixel dataset.
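The structure of the unrolled primal-dual iteration described in the abstract can be sketched on a toy problem. This is only a skeleton: a random matrix stands in for the ray transform, and the learned operators Γ_θ and Λ_θ (CNNs trained end-to-end in the paper) are replaced here by classical proximal/gradient steps so the loop runs without any training.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 15)) / np.sqrt(30)   # toy forward operator
x_true = rng.standard_normal(15)
y = A @ x_true                                    # noiseless toy data

sigma, tau = 0.5, 0.5
x = np.zeros(15)      # primal iterate (the image)
h = np.zeros(30)      # dual iterate (lives in data space)
for _ in range(400):  # fixed unrolled depth; trained end-to-end in the paper
    # Dual update: in the learned scheme a CNN taking (h, A x, y); here the
    # proximal step for the conjugate of the least-squares data term.
    h = (h + sigma * (A @ x - y)) / (1.0 + sigma)
    # Primal update: in the learned scheme a CNN taking (x, A^T h); here a
    # plain gradient step using the adjoint of the forward operator.
    x = x - tau * (A.T @ h)

res = np.linalg.norm(A @ x - y)   # data residual after the unrolled loop
```

Note the two hallmarks from the abstract: the (possibly non-linear) forward operator appears inside the network, and no initial FBP reconstruction is needed since the iteration starts from zero and works directly from the measured data.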
Solving ill-posed inverse problems using iterative deep neural networks by Jonas Adler, Ozan Öktem
We propose a partially learned approach for the solution of ill-posed inverse problems with not necessarily linear forward operators. The method builds on ideas from classical regularization theory and recent advances in deep learning to perform learning while making use of prior information about the inverse problem encoded in the forward operator, noise model and a regularizing functional. The method results in a gradient-like iterative scheme, where the "gradient" component is learned using a convolutional network that includes the gradients of the data discrepancy and regularizer as input in each iteration. We present results of such a partially learned gradient scheme on a non-linear tomographic inversion problem with simulated data from both the Shepp-Logan phantom as well as a head CT. The outcome is compared against FBP and TV reconstruction, and the proposed method provides a 5.4 dB PSNR improvement over the TV reconstruction while being significantly faster, giving reconstructions of 512 x 512 volumes in about 0.4 seconds using a single GPU.
An implementation is here: https://github.com/adler-j/learned_gradient_tomography
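The shape of this partially learned gradient scheme can be sketched as follows. Everything here is a stand-in: a random matrix replaces the (non-linear) tomographic forward operator, and the convolutional network is replaced by a fixed linear combination of its two inputs, the data-discrepancy gradient and a simple Tikhonov regularizer gradient, since no training happens in this sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 20)) / np.sqrt(40)       # toy forward operator
x_true = rng.standard_normal(20)
y = A @ x_true + 0.001 * rng.standard_normal(40)      # slightly noisy data

def learned_update(grad_data, grad_reg):
    # Stand-in for the trained CNN: the paper learns this mapping
    # end-to-end from both gradient inputs.
    return -0.4 * grad_data - 0.0004 * grad_reg

x = np.zeros(20)
for _ in range(600):
    grad_data = A.T @ (A @ x - y)   # gradient of the data discrepancy
    grad_reg = x                    # gradient of the (1/2)||x||^2 regularizer
    x = x + learned_update(grad_data, grad_reg)
```

Even with this untrained stand-in the iteration recovers the toy signal well; the point of the paper is that replacing the hand-picked combination with a learned network gives substantially better reconstructions on real tomographic problems.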
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.