Single-pixel imaging with sampling distributed over simplex vertices

Last week I posted about a recently uploaded paper on using positive-only patterns in a single-pixel imaging system.

Today I found another approach pursuing the same objective. This time the authors (from the University of Warsaw, led by Rafał Kotyński) introduce the idea of simplexes: any point in an N-dimensional space can be located using only positive coordinates if you choose the right coordinate system. Cool concept!
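
To get a feel for the geometry (a toy numpy sketch of my own, not the construction used in the paper), here is the basic fact the method builds on: any point inside a simplex can be written as a combination of the simplex vertices with purely nonnegative weights.

```python
import numpy as np

# Vertices of a regular 2-simplex (an equilateral triangle) in the plane.
vertices = np.array([[0.0, 0.0],
                     [1.0, 0.0],
                     [0.5, np.sqrt(3) / 2]])

# A point lying inside the triangle.
p = np.array([0.55, 0.30])

# Barycentric coordinates w: sum_i w_i * v_i = p with sum_i w_i = 1.
A = np.vstack([vertices.T, np.ones(3)])   # 3x3 linear system
b = np.append(p, 1.0)
w = np.linalg.solve(A, b)

print(w)                 # weights summing to 1
print(np.all(w >= 0))    # True: only nonnegative coordinates are needed
```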

Fig. 1 extracted from “Single-pixel imaging with sampling distributed over simplex vertices,” Krzysztof M. Czajkowski, Anna Pastuszczak, and Rafał Kotyński, Opt. Lett. 44, 1241-1244 (2019)

Single-pixel imaging with sampling distributed over simplex vertices

by Krzysztof M. Czajkowski et al., in Optics Letters

Abstract:

We propose a method of reduction of experimental noise in single-pixel imaging by expressing the subsets of sampling patterns as linear combinations of vertices of a multidimensional regular simplex. This method also may be directly extended to complementary sampling. The modified measurement matrix contains nonnegative elements with patterns that may be directly displayed on intensity spatial light modulators. The measurement becomes theoretically independent of the ambient illumination, and in practice becomes more robust to the varying conditions of the experiment. We show how the optimal dimension of the simplex depends on the level of measurement noise. We present experimental results of single-pixel imaging using binarized sampling and real-time reconstruction with the Fourier domain regularized inversion method.

Handling negative patterns for fast single-pixel lifetime imaging

A group of researchers working in France and the USA, led by N. Ducros, uploaded an interesting paper this week.

When doing single-pixel imaging, one of the most important aspects you need to take into account is the kind of structured patterns (functions) you are going to use. This choice largely determines the speed you can achieve, because the total number of measurements needed to obtain good images strongly depends on the set of functions you choose. Usually, the go-to solution for single-pixel cameras is either random functions or a family of orthogonal functions (Fourier, DCT, Hadamard, etc.).
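
As a quick illustration of the orthogonal-basis option (a toy snippet of my own, not tied to any of the papers discussed here), a full Hadamard basis for an 8x8 image can be generated and checked in a few lines:

```python
import numpy as np
from scipy.linalg import hadamard

# 64 Hadamard patterns for a 64-pixel (8x8) image; entries are +1 and -1.
H = hadamard(64)
patterns = H.reshape(64, 8, 8)    # each row becomes one 8x8 pattern

# Orthogonality: H @ H.T = 64 * I, so projecting the full basis recovers
# the image exactly from 64 measurements.
print(np.allclose(H @ H.T, 64 * np.eye(64)))   # True
```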

The problem with random functions is that they are not orthogonal (two different random functions are hard to tell apart, since they all look alike), so you usually need to project a large number of them, which is time consuming. Orthogonal functions that belong to a basis are a better choice, because you can send the full basis to get “perfect” quality (i.e., without losing information due to undersampling). However, these functions usually take both positive and negative values, which cannot be displayed directly on many spatial light modulators (for example, digital micromirror devices). There are several workarounds. The most common one is to display two closely related nonnegative patterns sequentially on the SLM to implement one signed function, which solves the positive-negative problem but doubles the time it takes to acquire an image.
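
The splitting workaround is easy to sketch. Below is a toy numpy example (assuming an ideal, noiseless detector): each ±1 pattern is split into its positive and negative parts, both of which can be displayed on a DMD, and the signed measurement is recovered as the difference of the two readings, at the cost of two exposures per pattern.

```python
import numpy as np
from scipy.linalg import hadamard

x = np.random.rand(64)           # unknown 64-pixel scene (8x8, flattened)
P = hadamard(64)[5]              # one +1/-1 Hadamard pattern

# Split into two nonnegative patterns that a DMD can display.
P_pos = np.clip(P, 0, None)      # keeps the +1 entries
P_neg = np.clip(-P, 0, None)     # keeps the -1 entries (flipped to +1)

m_pos = P_pos @ x                # first exposure
m_neg = P_neg @ x                # second exposure

# The signed measurement is the difference of the two readings.
print(np.isclose(m_pos - m_neg, P @ x))   # True
```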

What Lorente-Mur et al. show in this paper is a method to generate a new family of positive-only patterns, derived from the original positive-negative family. This makes it possible to obtain images with fewer measurements than the dual or splitting approach mentioned above, while keeping high image quality. A nice way to tackle one of the most limiting factors of single-pixel architectures.
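
To make the idea concrete, here is a heavily simplified sketch (my own illustration, not the actual construction of Lorente-Mur et al.): shift each signed pattern so it becomes nonnegative, and add a single extra measurement with a flat pattern to undo the shift, instead of doubling the whole sequence.

```python
import numpy as np
from scipy.linalg import hadamard

x = np.random.rand(64)              # unknown scene
T = hadamard(64)                    # target +1/-1 patterns

# Positive-only patterns obtained by shifting the targets.
P = (T + 1.0) / 2.0                 # entries in {0, 1}
m_pos = P @ x                       # measurements with nonnegative patterns
m_flat = np.ones(64) @ x            # one extra measurement with a flat pattern

# A linear combination recovers all the signed measurements with a single
# extra exposure instead of twice as many.
m_signed = 2.0 * m_pos - m_flat
print(np.allclose(m_signed, T @ x))   # True
```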

Working principle of the generalization method for measuring with positive-only patterns in single-pixel imaging setups. Figure extracted from Lorente-Mur et al., “Handling negative patterns for fast single-pixel lifetime imaging,” https://hal.archives-ouvertes.fr/hal-02017598

Handling negative patterns for fast single-pixel lifetime imaging

by Antonio Lorente Mur et al., at https://hal.archives-ouvertes.fr/hal-02017598

Abstract:

Pattern generalization was proposed recently as an avenue to increase the acquisition speed of single-pixel imaging setups. This approach consists of designing some positive patterns that reproduce the target patterns with negative values through linear combinations. This avoids the typical burden of acquiring the positive and negative parts of each of the target patterns, which doubles the acquisition time. In this study, we consider the generalization of the Daubechies wavelet patterns and compare images reconstructed using our approach and using the regular splitting approach. Overall, the reduction in the number of illumination patterns should facilitate the implementation of compressive hyperspectral lifetime imaging for fluorescence-guided surgery.

Realization of hybrid compressive imaging strategies

Recently I have been reading a lot about Compressive Sensing strategies. One of the things we always want when working with a single-pixel architecture is to project the lowest possible number of masks, because pattern projection is the slowest part of the whole acquisition procedure (and it gets longer as you increase the spatial resolution of your images).

In the past, several strategies have been implemented to reduce that number of projections, from fully random sampling to partially scanning a basis (at random, or concentrating on the low-frequency region); each approach offers its own benefits and a different speed gain.

In this work by the group of K. F. Kelly, a different approach is explored. Instead of choosing one measurement basis and designing a sensing strategy on top of it (picking random elements, centering around the low-frequency part of the basis, or a mix), they build a measurement set by merging different functions; they call these hybrid patterns. The basic idea is to choose a small number of patterns that work well for recovering the low-frequency content of natural images, together with other patterns that are good at recovering high-frequency content. The novelty is that the patterns are not required to belong to the same orthogonal basis, which lets them carefully design their measurement set. This provides very good quality results with a low number of projections.
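
Here is a rough numpy sketch of what such a hybrid measurement set could look like (my own illustration, not the exact pattern design from the paper): a handful of low-frequency DCT patterns stacked together with random binary patterns.

```python
import numpy as np
from scipy.fft import dct

n = 16                                      # 16x16 image, 256 pixels
D = dct(np.eye(n), axis=0, norm='ortho')    # rows of D are 1D DCT basis vectors

# 2D separable DCT patterns for the lowest spatial frequencies.
dct_patterns = np.array([np.outer(D[i], D[j])
                         for i in range(4) for j in range(4)])   # 16 patterns

rng = np.random.default_rng(0)
rand_patterns = rng.integers(0, 2, size=(48, n, n)).astype(float)  # 48 patterns

# Hybrid measurement matrix: 64 rows for a 256-pixel image (25% sampling),
# to be fed to a compressive-sensing reconstruction afterwards.
A = np.vstack([dct_patterns.reshape(16, -1),
               rand_patterns.reshape(48, -1)])
print(A.shape)    # (64, 256)
```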

Another thing I liked a lot was the Principal Component Analysis (PCA) part of the paper. Basically, they gathered a collection of natural images and generated an orthogonal basis from them using PCA. This leads me to think of PCA as a way of obtaining orthogonal bases in which objects have close to their sparsest representation (though I may be wrong about that).
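
Building such a PCA basis is straightforward; below is a toy sketch of my own where random data stands in for the natural-image library (with real images the leading components come out as smooth, low-frequency patterns, which connects to the DCT comparison further down).

```python
import numpy as np

# Stand-in for a library of 16x16 natural-image patches (random here;
# in practice these would be cropped from real photographs).
rng = np.random.default_rng(1)
patches = rng.random((5000, 16 * 16))

# PCA: the principal directions of the centered data form an orthogonal basis.
centered = patches - patches.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)

pca_patterns = Vt.reshape(-1, 16, 16)     # each row of Vt is one 16x16 pattern
print(np.allclose(Vt @ Vt.T, np.eye(Vt.shape[0])))   # rows are orthonormal
```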

Realization of hybrid compressive imaging strategies,

Y. Li et al., at Journal of the Optical Society of America A

(featured image extracted from Fig. 2 of the manuscript)

Abstract:

The tendency of natural scenes to cluster around low frequencies is not only useful in image compression, it also can prove advantageous in novel infrared and hyperspectral image acquisition. In this paper, we exploit this signal model with two approaches to enhance the quality of compressive imaging as implemented in a single-pixel compressive camera and compare these results against purely random acquisition. We combine projection patterns that can efficiently extract the model-based information with subsequent random projections to form the hybrid pattern sets. With the first approach, we generate low-frequency patterns via a direct transform. As an alternative, we also used principal component analysis of an image library to identify the low-frequency components. We present the first (to the best of our knowledge) experimental validation of this hybrid signal model on real data. For both methods, we acquire comparable quality of reconstructions while acquiring only half the number of measurements needed by traditional random sequences. The optimal combination of hybrid patterns and the effects of noise on image reconstruction are also discussed.

Fig. 3 of the manuscript. Really nice to see that PCA gives something very similar to the DCT functions; this means that compressing images with the DCT is really a good choice.

Deep learning microscopy

This week a new paper by the group led by A. Ozcan appeared in Optica.

Deep learning microscopy,

Y. Rivenson et al., at Optica

(featured image extracted from Fig. 6 of the supplement)

Abstract:

We demonstrate that a deep neural network can significantly improve optical microscopy, enhancing its spatial resolution over a large field of view and depth of field. After its training, the only input to this network is an image acquired using a regular optical microscope, without any changes to its design. We blindly tested this deep learning approach using various tissue samples that are imaged with low-resolution and wide-field systems, where the network rapidly outputs an image with better resolution, matching the performance of higher numerical aperture lenses and also significantly surpassing their limited field of view and depth of field. These results are significant for various fields that use microscopy tools, including, e.g., life sciences, where optical microscopy is considered as one of the most widely used and deployed techniques. Beyond such applications, the presented approach might be applicable to other imaging modalities, also spanning different parts of the electromagnetic spectrum, and can be used to design computational imagers that get better as they continue to image specimens and establish new transformations among different modes of imaging.

By using pairs of images obtained with high- and low-numerical-aperture microscope objectives, they trained a deep neural network to create high-resolution images from low-resolution ones. Moreover, the final result keeps the field of view of the input image, thus achieving one of the major goals of optical microscopy: high resolution and a large field of view at the same time (while using a low numerical aperture objective).
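
To give an idea of the kind of supervised training involved, here is a minimal PyTorch sketch of my own (far simpler than the network actually described in the paper and its supplement): a small convolutional network fitted on registered pairs of low-NA inputs and high-NA targets.

```python
import torch
import torch.nn as nn

# Toy stand-ins for registered training pairs; in the real setup these are
# patches acquired with low- and high-numerical-aperture objectives.
low_na = torch.rand(32, 1, 64, 64)
high_na = torch.rand(32, 1, 64, 64)

# A small convolutional network mapping low-resolution patches to improved ones.
net = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)

optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):                      # a few toy training steps
    optimizer.zero_grad()
    loss = loss_fn(net(low_na), high_na)    # compare prediction to high-NA target
    loss.backward()
    optimizer.step()
    print(epoch, loss.item())
```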

I really liked the supplement, where they give information about the neural network (which is really useful for a newbie like me).

Fig. 1 of the supplement, with details on how the neural network is trained.

 

Imaging through glass diffusers using densely connected convolutional networks

I just found a new paper by the group of G. Barbastathis at MIT.

Imaging through glass diffusers using densely connected convolutional networks,

S. Li et al., submitted on 18 Nov 2017, https://arxiv.org/abs/1711.06810

(featured image from Fig. 3 of the manuscript)

Abstract:

Computational imaging through scatter generally is accomplished by first characterizing the scattering medium so that its forward operator is obtained; and then imposing additional priors in the form of regularizers on the reconstruction functional so as to improve the condition of the originally ill-posed inverse problem. In the functional, the forward operator and regularizer must be entered explicitly or parametrically (e.g. scattering matrices and dictionaries, respectively.) However, the process of determining these representations is often incomplete, prone to errors, or infeasible. Recently, deep learning architectures have been proposed to instead learn both the forward operator and regularizer through examples. Here, we propose for the first time, to our knowledge, a convolutional neural network architecture called “IDiffNet” for the problem of imaging through diffuse media and demonstrate that IDiffNet has superior generalization capability through extensive tests with well-calibrated diffusers. We found that the Negative Pearson Correlation Coefficient loss function for training is more appropriate for spatially sparse objects and strong scattering conditions. Our results show that the convolutional architecture is robust to the choice of prior, as demonstrated by the use of multiple training and testing object databases, and capable of achieving higher space-bandwidth product reconstructions than previously reported.

Basically, they have trained a neural network to ‘undo’ the path of light traveling through a scattering medium, and are thus able to recover images hidden by glass diffusers. It may sound simple, but thousands of scientists are trying to see objects hidden by scattering media. We are witnessing the first steps in combining neural networks, machine learning, and optics to go beyond the physical constraints imposed by nature (seeing inside our bodies with visible light, seeing through fog, etc.).

Fig. 7 of the paper, with some nice results.

 

Experimental comparison of single-pixel imaging algorithms

I just read on arXiv.org that L. Bian and his colleagues made a nice comparison between several ways of performing single-pixel imaging. They tested the performance of several recovery procedures, some quite familiar and others not so well established. I find both Table 1 and Fig. 7 extremely interesting. The table sums up really well the different reconstruction approaches that can be used in single-pixel imaging (with or without Compressive Sensing). The figure points out something experience has taught me: every problem you try to solve usually needs a specific solver if you want good and fast results (which is extremely important when you start working with BIG objects, as I plan to write about here soon).
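
As a taste of the simplest family in that comparison, a linear correlation (ghost-imaging style) reconstruction fits in a few lines of numpy (a toy example of my own, not the authors' released code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16 * 16                                     # 16x16 scene, 256 pixels
x_true = rng.random(n)

m = 10000                                       # number of random binary patterns
P = rng.integers(0, 2, size=(m, n)).astype(float)
y = P @ x_true                                  # single-pixel measurements

# Linear correlation reconstruction: covariance between the detector signal
# and the value of each pixel across the pattern sequence.
x_rec = (y - y.mean()) @ (P - P.mean(axis=0)) / m

# The estimate is only defined up to scale and offset; its correlation with
# the true scene improves as more patterns are measured.
print(np.corrcoef(x_rec, x_true)[0, 1])
```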

Experimental comparison of single-pixel imaging algorithms,

L. Bian et al., last revised 24 Oct 2017, https://arxiv.org/abs/1707.03164

(featured image extracted from Fig. 7 of the manuscript)

Abstract:

Single-pixel imaging (SPI) is a novel technique capturing 2D images using a photodiode, instead of conventional 2D array sensors. SPI owns high signal-to-noise ratio, wide spectrum range, low cost, and robustness to light scattering. Various algorithms have been proposed for SPI reconstruction, including the linear correlation methods, the alternating projection method (AP), and the compressive sensing based methods. However, there has been no comprehensive review discussing respective advantages, which is important for SPI’s further applications and development. In this paper, we reviewed and compared these algorithms in a unified reconstruction framework. Besides, we proposed two other SPI algorithms including a conjugate gradient descent based method (CGD) and a Poisson maximum likelihood based method. Both simulations and experiments validate the following conclusions: to obtain comparable reconstruction accuracy, the compressive sensing based total variation regularization method (TV) requires the least measurements and consumes the least running time for small-scale reconstruction; the CGD and AP methods run fastest in large-scale cases; the TV and AP methods are the most robust to measurement noise. In a word, there are trade-offs between capture efficiency, computational complexity and robustness to noise among different SPI algorithms. We have released our source code for non-commercial use.

 

Table 1 of the manuscript.