Realization of hybrid compressive imaging strategies

Recently I have been reading a lot about Compressive Sensing strategies. One of the things we always want when working with a single-pixel architecture is to project the lowest possible number of masks, because the projection process is the longest part of the whole acquisition procedure (and it gets longer and longer as you increase the spatial resolution of your images).

In the past, several strategies have been implemented to reduce that number of projections. From going fully random to partially scanning a basis, either at random or in its low-frequency region, each approach offers certain benefits and a greater or lesser speed gain.

In this work by the group of K. F. Kelly, they explore a different approach. Instead of choosing one measurement basis and designing a sensing strategy on top of it (picking random elements, centering on the low-frequency part of the basis, or a mix of both), they build a measurement set by merging different kinds of functions. They call these hybrid patterns. The basic idea is to choose a small number of patterns that work well for recovering the low-frequency content of natural images, plus some other patterns that are good at recovering high-frequency content. The novel thing here is that they do not require the patterns to belong to the same orthogonal basis, so they can carefully design their measurement set. This provides very good quality results with a low number of projections.
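Just to make the idea more concrete, here is a minimal numpy sketch of what such a hybrid pattern set could look like: a handful of low-frequency DCT functions plus a batch of random ±1 masks. The specific pattern families and the split between them are my own choices here, not the ones used in the paper.

```python
import numpy as np
from scipy.fftpack import idct

# Toy hybrid measurement set for a single-pixel camera: a few deterministic
# low-frequency patterns plus random patterns for the high-frequency content.
N = 32                      # image of N x N pixels
n_low, n_rand = 64, 192     # 256 projections in total (25% of N*N)

def dct2_pattern(u, v, N):
    """2D DCT basis function with frequency indices (u, v)."""
    a = np.zeros((N, N))
    a[u, v] = 1.0
    return idct(idct(a, axis=0, norm='ortho'), axis=1, norm='ortho')

# Low-frequency patterns: the (u, v) pairs with the smallest combined frequency.
low_idx = sorted(((u, v) for u in range(N) for v in range(N)),
                 key=lambda t: t[0] + t[1])[:n_low]
low_patterns = np.stack([dct2_pattern(u, v, N) for u, v in low_idx])

# Random +/-1 masks to capture the rest of the spectrum.
rng = np.random.default_rng(0)
rand_patterns = rng.choice([-1.0, 1.0], size=(n_rand, N, N))

# Hybrid measurement matrix: each row is one mask to project onto the scene,
# so the single-pixel measurements are simply y = A @ x for a vectorized scene x.
A = np.concatenate([low_patterns, rand_patterns]).reshape(n_low + n_rand, -1)
```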

Another thing I liked a lot was the Principal Component Analysis (PCA) part of the paper. Basically, they gathered a collection of natural images and generated an orthogonal basis from it using PCA. This leads me to think of PCA as a way of obtaining orthogonal bases in which objects have their sparsest representation (maybe I am wrong about that).

Realization of hybrid compressive imaging strategies,

Y. Li et al., in the Journal of the Optical Society of America A

(featured image extracted from Fig. 2 of the manuscript)

Abstract:

The tendency of natural scenes to cluster around low frequencies is not only useful in image compression, it also can prove advantageous in novel infrared and hyperspectral image acquisition. In this paper, we exploit this signal model with two approaches to enhance the quality of compressive imaging as implemented in a single-pixel compressive camera and compare these results against purely random acquisition. We combine projection patterns that can efficiently extract the model-based information with subsequent random projections to form the hybrid pattern sets. With the first approach, we generate low-frequency patterns via a direct transform. As an alternative, we also used principal component analysis of an image library to identify the low-frequency components. We present the first (to the best of our knowledge) experimental validation of this hybrid signal model on real data. For both methods, we acquire comparable quality of reconstructions while acquiring only half the number of measurements needed by traditional random sequences. The optimal combination of hybrid patterns and the effects of noise on image reconstruction are also discussed.

(Fig. 3 of the manuscript)
Really nice to see that PCA gives something very similar to the DCT functions. It suggests that compressing images with the DCT is indeed a good choice.
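To get a feeling for this, here is a rough numpy sketch of how one could build such a PCA basis from a stack of natural images. The `images` array and its shape are assumptions of mine, not taken from the paper.

```python
import numpy as np

def pca_basis(images):
    """Orthonormal basis from a stack of images of shape (n_images, N, N)."""
    X = images.reshape(len(images), -1).astype(float)   # one image per row
    X -= X.mean(axis=0, keepdims=True)                   # remove the mean image
    # The principal components are the right singular vectors of the data matrix;
    # they form an orthonormal basis ordered by explained variance.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt                                            # each row is one basis "pattern"

# basis = pca_basis(images)
# basis[0].reshape(N, N) is the component carrying the most variance; for natural
# images it tends to look like a smooth, low-frequency, DCT-like function.
```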

Deep learning microscopy

This week a new paper by the group led by A. Ozcan appeared in Optica.

Deep learning microscopy,

Y. Rivenson et al., in Optica

(featured image extracted from Fig. 6 of the supplement)

Abstract:

We demonstrate that a deep neural network can significantly improve optical microscopy, enhancing its spatial resolution over a large field of view and depth of field. After its training, the only input to this network is an image acquired using a regular optical microscope, without any changes to its design. We blindly tested this deep learning approach using various tissue samples that are imaged with low-resolution and wide-field systems, where the network rapidly outputs an image with better resolution, matching the performance of higher numerical aperture lenses and also significantly surpassing their limited field of view and depth of field. These results are significant for various fields that use microscopy tools, including, e.g., life sciences, where optical microscopy is considered as one of the most widely used and deployed techniques. Beyond such applications, the presented approach might be applicable to other imaging modalities, also spanning different parts of the electromagnetic spectrum, and can be used to design computational imagers that get better as they continue to image specimens and establish new transformations among different modes of imaging.

By using pairs of images obtained with high and low numerical aperture microscope objectives, they have trained a deep neural network to create high-resolution images from low-resolution ones. Moreover, the final result keeps the field of view of the input image, thus achieving one of the major goals of optical microscopy: high resolution and a wide field of view at the same time (while using a low numerical aperture objective).
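To show the flavour of the image-to-image mapping being learned, here is a tiny SRCNN-style toy model in PyTorch. It is not the architecture of the paper (that one is described in the supplement), just a minimal sketch of a network that takes a low-resolution input and predicts a sharper image of the same size.

```python
import torch
import torch.nn as nn

class ToySuperRes(nn.Module):
    """Minimal convolutional network mapping a low-NA image to a high-NA estimate."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),
        )

    def forward(self, x):          # x: (batch, 1, H, W) low-resolution input
        return self.net(x)         # same-size output, hopefully sharper

model = ToySuperRes()
loss_fn = nn.MSELoss()             # placeholder loss; the paper defines its own
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step on a registered (low_res, high_res) pair of image patches:
# optimizer.zero_grad()
# loss = loss_fn(model(low_res), high_res)
# loss.backward(); optimizer.step()
```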

I really liked the supplement, where they give information about the neural network (which is really useful for a newbie like me).

Fig.1 of the supplement. Details on how to train the neural network

 

Imaging through glass diffusers using densely connected convolutional networks

I just found a new paper by the group of G. Barbastathis at MIT.

Imaging through glass diffusers using densely connected convolutional networks,

S. Li et al., submitted 18 Nov 2017, https://arxiv.org/abs/1711.06810

(featured image from Fig. 3 of the manuscript)

Abstract:

Computational imaging through scatter generally is accomplished by first characterizing the scattering medium so that its forward operator is obtained; and then imposing additional priors in the form of regularizers on the reconstruction functional so as to improve the condition of the originally ill-posed inverse problem. In the functional, the forward operator and regularizer must be entered explicitly or parametrically (e.g. scattering matrices and dictionaries, respectively.) However, the process of determining these representations is often incomplete, prone to errors, or infeasible. Recently, deep learning architectures have been proposed to instead learn both the forward operator and regularizer through examples. Here, we propose for the first time, to our knowledge, a convolutional neural network architecture called “IDiffNet” for the problem of imaging through diffuse media and demonstrate that IDiffNet has superior generalization capability through extensive tests with well-calibrated diffusers. We found that the Negative Pearson Correlation Coefficient loss function for training is more appropriate for spatially sparse objects and strong scattering conditions. Our results show that the convolutional architecture is robust to the choice of prior, as demonstrated by the use of multiple training and testing object databases, and capable of achieving higher space-bandwidth product reconstructions than previously reported.

Basically, they have trained a neural network to ‘solve’ the path of light travelling through a scattering medium, and are thus able to recover images hidden behind glass diffusers. It may sound simple, but thousands of scientists are trying to see objects hidden by scattering media. We are witnessing the first steps of the combination of neural networks, machine learning, and optics to go beyond the physical constraints imposed by nature (seeing inside our bodies with visible light, seeing through fog, etc.).
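One detail worth noting is the Negative Pearson Correlation Coefficient (NPCC) loss they use for training. A minimal PyTorch sketch of such a loss could look like the following; the exact normalization and batching used in the paper are not taken from the manuscript.

```python
import torch

def npcc_loss(pred, target):
    """Negative Pearson correlation between prediction and ground truth.
    Values lie in [-1, 1]; minimizing it pushes the correlation toward +1."""
    pred = pred - pred.mean()
    target = target - target.mean()
    corr = (pred * target).sum() / (pred.norm() * target.norm() + 1e-8)
    return -corr
```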

Fig. 7 of the paper, with some nice results.

 

Experimental comparison of single-pixel imaging algorithms

I just read on ArXiv.org that L. Bian and his colleagues made a cool comparison between several ways of performing single-pixel imaging. They have tested the performance of several recovery procedures, some quite familiar and others not so well established. I find both Table 1 and Fig. 7 extremely interesting. The former sums up really well the different reconstruction approaches that can be used in single-pixel imaging (with or without Compressive Sensing). The figure points out something that experience has taught me: every problem you try to solve usually needs a specific solver if you want good and fast results (which becomes extremely important when you start to work with BIG objects, as I plan to write about here soon).

Experimental comparison of single-pixel imaging algorithms,

L. Bian et al., last revised 24 Oct 2017, https://arxiv.org/abs/1707.03164

(featured image extracted from Fig.7 of the manuscript)

Abstract:

Single-pixel imaging (SPI) is a novel technique capturing 2D images using a photodiode, instead of conventional 2D array sensors. SPI owns high signal-to-noise ratio, wide spectrum range, low cost, and robustness to light scattering. Various algorithms have been proposed for SPI reconstruction, including the linear correlation methods, the alternating projection method (AP), and the compressive sensing based methods. However, there has been no comprehensive review discussing respective advantages, which is important for SPI’s further applications and development. In this paper, we reviewed and compared these algorithms in a unified reconstruction framework. Besides, we proposed two other SPI algorithms including a conjugate gradient descent based method (CGD) and a Poisson maximum likelihood based method. Both simulations and experiments validate the following conclusions: to obtain comparable reconstruction accuracy, the compressive sensing based total variation regularization method (TV) requires the least measurements and consumes the least running time for small-scale reconstruction; the CGD and AP methods run fastest in large-scale cases; the TV and AP methods are the most robust to measurement noise. In a word, there are trade-offs between capture efficiency, computational complexity and robustness to noise among different SPI algorithms. We have released our source code for non-commercial use.
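As a reference point, the simplest family in the comparison is the linear correlation one. A minimal numpy sketch of a differential linear-correlation reconstruction (ghost-imaging style) could look like this; the variable shapes are my own convention, not the authors' released code.

```python
import numpy as np

def linear_correlation_reconstruction(patterns, measurements):
    """patterns: (M, N, N) projected masks; measurements: (M,) photodiode readings."""
    dy = measurements - measurements.mean()        # remove the DC component of the signal
    # Weighted sum of the patterns, i.e. <(y - <y>) * P> over the M projections.
    return np.tensordot(dy, patterns, axes=1) / len(measurements)
```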

 


Toward Depth Estimation Using Mask-Based Lensless Cameras

I just discovered on ArXiv.org a new paper by M. Asif, one of the guys behind the FlatCam.

Toward Depth Estimation Using Mask-Based Lensless Cameras,

M. Asif, submitted 9 Nov 2017, http://arxiv.org/abs/1711.03527

(featured image extracted from Fig.1 of the manuscript)

Abstract:

Recently, coded masks have been used to demonstrate a thin form-factor lensless camera, FlatCam, in which a mask is placed immediately on top of a bare image sensor. In this paper, we present an imaging model and algorithm to jointly estimate depth and intensity information in the scene from a single or multiple FlatCams. We use a light field representation to model the mapping of 3D scene onto the sensor in which light rays from different depths yield different modulation patterns. We present a greedy depth pursuit algorithm to search the 3D volume and estimate the depth and intensity of each pixel within the camera field-of-view. We present simulation results to analyze the performance of our proposed model and algorithm with different FlatCam settings.

For those of you who do not know about it, the idea behind FlatCam is to extend the camera obscura principle by using a transmission mask and computational algorithms to obtain images without lenses. This results in very compact devices. For me, it is a very good example of trading some physical elements of your optical system for post-processing algorithms. The concept can be extended to multiple regions of the electromagnetic spectrum, and also used to obtain extra information (wavelength, depth) by adequately tuning the reconstruction algorithms.
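To illustrate that trade-off between optics and computation, here is a toy numpy sketch in which the sensor reading of a mask-based lensless camera is modeled as a generic linear mapping of the scene, y = A x + noise, and the image is recovered by regularized least squares. The random transfer matrix is purely illustrative; the real FlatCam relies on a carefully designed (separable) coded mask, which is what makes the reconstruction practical at sensor scale.

```python
import numpy as np

rng = np.random.default_rng(1)
n_scene, n_sensor = 32 * 32, 2048        # vectorized scene and sensor sizes (toy values)

A = rng.random((n_sensor, n_scene))       # stand-in for the mask's transfer matrix
x_true = rng.random(n_scene)              # unknown scene, vectorized
y = A @ x_true + 0.01 * rng.standard_normal(n_sensor)   # noisy lensless measurements

# Tikhonov-regularized least-squares reconstruction of the scene.
lam = 1e-2
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_scene), A.T @ y)
```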