Imaging through glass diffusers using densely connected convolutional networks

I just found a new paper by the group of G. Barbastathis at MIT.

Imaging through glass diffusers using densely connected convolutional networks,

S. Li et al., submitted 18 Nov 2017, https://arxiv.org/abs/1711.06810

(featured image from Fig. 3 of the manuscript)

Abstract:

Computational imaging through scatter generally is accomplished by first characterizing the scattering medium so that its forward operator is obtained; and then imposing additional priors in the form of regularizers on the reconstruction functional so as to improve the condition of the originally ill-posed inverse problem. In the functional, the forward operator and regularizer must be entered explicitly or parametrically (e.g. scattering matrices and dictionaries, respectively.) However, the process of determining these representations is often incomplete, prone to errors, or infeasible. Recently, deep learning architectures have been proposed to instead learn both the forward operator and regularizer through examples. Here, we propose for the first time, to our knowledge, a convolutional neural network architecture called “IDiffNet” for the problem of imaging through diffuse media and demonstrate that IDiffNet has superior generalization capability through extensive tests with well-calibrated diffusers. We found that the Negative Pearson Correlation Coefficient loss function for training is more appropriate for spatially sparse objects and strong scattering conditions. Our results show that the convolutional architecture is robust to the choice of prior, as demonstrated by the use of multiple training and testing object databases, and capable of achieving higher space-bandwidth product reconstructions than previously reported.

Basically, they have trained a neural network to ‘solve’ the path of light traveling through a scattering medium, making it possible to recover images hidden by glass diffusers. It may sound simple, but thousands of scientists are trying to see objects hidden by scattering media. We are seeing the first steps of the combination of neural networks, machine learning, and optics to go beyond physical constraints imposed by nature (seeing inside our bodies with visible light, seeing through fog, etc.).
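The NPCC loss mentioned in the abstract is easy to state: maximize the Pearson correlation between the network output and the ground truth. Here is a minimal sketch of such a loss in PyTorch; the function name and the per-image implementation details are my own assumptions, not code from the paper.

```python
import torch

def npcc_loss(pred, target, eps=1e-8):
    # Negative Pearson correlation coefficient between a reconstructed
    # image and its ground truth: -1 means a perfect (affine) match,
    # +1 perfect anti-correlation, so minimizing it rewards structural
    # similarity even under global intensity offsets and scaling.
    pred = pred - pred.mean()
    target = target - target.mean()
    num = (pred * target).sum()
    den = torch.sqrt((pred ** 2).sum() * (target ** 2).sum())
    return -num / (den + eps)
```

A loss like this cares about relative structure rather than absolute pixel values, which is plausibly why the authors found it better suited than pixel-wise losses for sparse objects under strong scattering, where most pixels are dark.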

Fig. 7 of the paper, with some nice results.

 

Experimental comparison of single-pixel imaging algorithms

I just read on ArXiv.org that L. Bian and his colleagues made a cool comparison between several ways of performing single-pixel imaging. They have tested the performance of several recovery procedures, some quite familiar and others not so well established. I find both Table 1 and Fig. 7 extremely interesting. The former sums up really well the different reconstruction approaches that can be used in single-pixel imaging (with or without using Compressive Sensing). The figure points out one thing that experience has taught me: every problem you try to solve usually needs a specific solver if you want to get good and fast results (which is extremely important when you start to work with BIG objects, as I plan to write about here soon).
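To make the setting concrete for readers new to single-pixel imaging: a modulator projects a sequence of patterns onto the scene, and a photodiode records one inner product per pattern. The simplest recovery in the comparison is linear correlation. Below is a toy NumPy sketch of that pipeline; the pattern count, test object, and normalization are illustrative choices of mine, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 32x32 scene and M random binary illumination patterns.
n, m = 32, 4000
x = np.zeros((n, n))
x[8:24, 12:20] = 1.0
patterns = rng.integers(0, 2, size=(m, n, n)).astype(float)

# The single pixel measures one inner product <P_i, x> per pattern.
y = np.tensordot(patterns, x, axes=([1, 2], [0, 1]))

# Differential (linear) correlation reconstruction: correlate the
# mean-subtracted measurements with the mean-subtracted patterns.
x_hat = np.tensordot(y - y.mean(), patterns - patterns.mean(axis=0),
                     axes=(0, 0)) / m
```

Correlation needs many measurements but almost no computation; the compressive sensing solvers in the paper invert the same linear model y = A x from far fewer patterns at the cost of an iterative optimization, which is exactly the trade-off that Table 1 and Fig. 7 chart.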

Experimental comparison of single-pixel imaging algorithms,

L. Bian et al., last revised 24 Oct 2017, https://arxiv.org/abs/1707.03164

(featured image extracted from Fig. 7 of the manuscript)

Abstract:

Single-pixel imaging (SPI) is a novel technique capturing 2D images using a photodiode, instead of conventional 2D array sensors. SPI owns high signal-to-noise ratio, wide spectrum range, low cost, and robustness to light scattering. Various algorithms have been proposed for SPI reconstruction, including the linear correlation methods, the alternating projection method (AP), and the compressive sensing based methods. However, there has been no comprehensive review discussing respective advantages, which is important for SPI’s further applications and development. In this paper, we reviewed and compared these algorithms in a unified reconstruction framework. Besides, we proposed two other SPI algorithms including a conjugate gradient descent based method (CGD) and a Poisson maximum likelihood based method. Both simulations and experiments validate the following conclusions: to obtain comparable reconstruction accuracy, the compressive sensing based total variation regularization method (TV) requires the least measurements and consumes the least running time for small-scale reconstruction; the CGD and AP methods run fastest in large-scale cases; the TV and AP methods are the most robust to measurement noise. In a word, there are trade-offs between capture efficiency, computational complexity and robustness to noise among different SPI algorithms. We have released our source code for non-commercial use.

 


Toward Depth Estimation Using Mask-Based Lensless Cameras

I just discovered on ArXiv.org a new paper by M. Asif, one of the guys behind the FlatCam.

Toward Depth Estimation Using Mask-Based Lensless Cameras,

M. Asif, submitted 9 Nov 2017, http://arxiv.org/abs/1711.03527

(featured image extracted from Fig. 1 of the manuscript)

Abstract:

Recently, coded masks have been used to demonstrate a thin form-factor lensless camera, FlatCam, in which a mask is placed immediately on top of a bare image sensor. In this paper, we present an imaging model and algorithm to jointly estimate depth and intensity information in the scene from a single or multiple FlatCams. We use a light field representation to model the mapping of 3D scene onto the sensor in which light rays from different depths yield different modulation patterns. We present a greedy depth pursuit algorithm to search the 3D volume and estimate the depth and intensity of each pixel within the camera field-of-view. We present simulation results to analyze the performance of our proposed model and algorithm with different FlatCam settings.

For those of you who do not know about it, the idea behind FlatCam is to extend the camera obscura principle by using a transmission mask and computational algorithms to obtain images without lenses. This makes for very compact devices. For me, it is a very good example of trading some physical elements of your optical system for post-processing algorithms. The concept can be extended to multiple regions of the electromagnetic spectrum, and also to extract extra information (wavelength, depth), by adequately tuning the algorithms used.
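To give a feel for the model: in the original FlatCam work the mask is separable, so the sensor image is Y = Φ_L X Φ_R^T and the intensity can be recovered by regularized least squares; the new paper extends this mapping so that rays from different depths produce different modulation patterns. Here is a minimal sketch of the separable model with a Tikhonov inversion; the matrix sizes, random ‘masks’, and regularization weight are stand-in assumptions, not the calibrated quantities from the papers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Separable lensless model: sensor image Y = Phi_L @ X @ Phi_R.T.
n, m = 64, 128                      # scene and sensor sizes (toy values)
phi_l = rng.standard_normal((m, n))
phi_r = rng.standard_normal((m, n))

x = np.zeros((n, n))
x[20:44, 28:36] = 1.0               # toy scene at a single depth
y = phi_l @ x @ phi_r.T + 0.01 * rng.standard_normal((m, m))

def tikhonov_separable(y, a, b, lam=1e-2):
    # Solve min_X ||a X b^T - y||^2 + lam ||X||^2 via the SVDs of the
    # two mask matrices; the problem decouples into one scalar update
    # per coefficient in the singular-vector bases.
    ua, sa, vat = np.linalg.svd(a, full_matrices=False)
    ub, sb, vbt = np.linalg.svd(b, full_matrices=False)
    core = ua.T @ y @ ub
    s = np.outer(sa, sb)
    return vat.T @ (core * s / (s ** 2 + lam)) @ vbt

x_hat = tikhonov_separable(y, phi_l, phi_r)
```

Depth estimation then amounts to having one such forward operator per candidate depth and greedily assigning each pixel to the depth whose operator best explains the measurements, which is roughly what the paper's depth pursuit algorithm does.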