Interesting paper by people at Rice and Northwestern universities about different phase retrieval algorithms for measuring transmission matrices without using interferometric techniques. The thing with interferometers is that they give you lots of cool stuff (high sensitivity, phase information, etc.), but they also involve quite a lot of technical problems that you do not want to face every day in the lab: they are so sensitive that it is a pain to calibrate and measure without vibrations messing everything up.
Using only intensity measurements (provided by a common sensor such as a CCD) together with algorithmic approaches can recover the phase information, but at a computational cost that sometimes makes the approach impractical. There is more info about all of this (for the coherent illumination case) on the Rice webpage (including a dataset and an implementation of some of the codes).
Inverse Scattering via Transmission Matrices: Broadband Illumination and Fast Phase Retrieval Algorithms
When a narrowband coherent wavefront passes through or reflects off of a scattering medium, the input and output relationship of the incident field is linear and so can be described by a transmission matrix (TM). If the TM for a given scattering medium is known, one can computationally “invert” the scattering process and image through the medium. In this work, we investigate the effect of broadband illumination, i.e., what happens when the wavefront is only partially coherent? Can one still measure a TM and “invert” the scattering? To accomplish this task, we measure TMs using the double phase retrieval technique, a method which uses phase retrieval algorithms to avoid difficult-to-capture interferometric measurements. Generally, using the double phase retrieval method requires performing massive amounts of computation. We alleviate this burden by developing a fast, GPU-accelerated algorithm, prVAMP, which lets us reconstruct 256^2×64^2 TMs in under five hours.
After reconstructing several TMs using this method, we find that, as expected, reducing the coherence of the illumination significantly restricts our ability to invert the scattering process. Moreover, we find that past a certain bandwidth an incoherent, intensity-based scattering model better describes the scattering process and is easier to invert.
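To get an intuition for what "phase retrieval instead of interferometry" means, here is a toy sketch in NumPy. It is not the authors' prVAMP algorithm, just the classic Gerchberg–Saxton-style error-reduction iteration applied to the same kind of problem: recovering one (simulated) TM row from magnitude-only measurements, assuming the probe patterns are known. All sizes and variable names are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 16, 128   # length of one TM row, number of probe patterns (8x oversampling)
t_true = rng.normal(size=n) + 1j * rng.normal(size=n)        # unknown TM row
A = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))   # known illumination patterns
y = np.abs(A @ t_true)            # a camera records magnitudes only, no phase

A_pinv = np.linalg.pinv(A)        # precomputed least-squares inverse
t = rng.normal(size=n) + 1j * rng.normal(size=n)             # random initial guess
res_start = np.linalg.norm(np.abs(A @ t) - y)
for _ in range(200):
    z = A @ t
    z *= y / np.abs(z)            # keep the estimated phases, impose measured magnitudes
    t = A_pinv @ z                # least-squares projection back onto the linear model
res_end = np.linalg.norm(np.abs(A @ t) - y)
```

The measurement residual is non-increasing under this iteration, which is why it works at all; the computational pain the paper addresses comes from doing this at scale (256^2×64^2 entries), where naive iterations like the one above become prohibitively slow.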
Last week I posted a recently uploaded paper on using positive-only patterns in a single-pixel imaging system.
Today I just found another implementation pursuing the same objective. This time the authors (from the University of Warsaw, led by Rafał Kotyński) introduce the idea of simplexes, or how any point in some N-dimensional space can be located using only positive coordinates if you choose the correct coordinate system. Cool concept!
Single-pixel imaging with sampling distributed over simplex vertices
We propose a method of reduction of experimental noise in single-pixel imaging by expressing the subsets of sampling patterns as linear combinations of vertices of a multidimensional regular simplex. This method also may be directly extended to complementary sampling. The modified measurement matrix contains nonnegative elements with patterns that may be directly displayed on intensity spatial light modulators. The measurement becomes theoretically independent of the ambient illumination, and in practice becomes more robust to the varying conditions of the experiment. We show how the optimal dimension of the simplex depends on the level of measurement noise. We present experimental results of single-pixel imaging using binarized sampling and real-time reconstruction with the Fourier domain regularized inversion method.
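The geometric idea is easy to demo with NumPy (this is my own minimal illustration, not the paper's measurement scheme): any point inside a simplex can be written as a combination of its vertices with nonnegative weights (barycentric coordinates). In 1-D the two "vertices" ±1 give the familiar positive/negative pattern splitting; here is the 2-D case with a triangle.

```python
import numpy as np

rng = np.random.default_rng(1)

# Vertices of a regular 2-simplex (equilateral triangle), one per column.
V = np.array([[1.0, 0.0],
              [-0.5, np.sqrt(3) / 2],
              [-0.5, -np.sqrt(3) / 2]]).T   # shape (2, 3)

# Any point inside the triangle is a nonnegative (convex) combination of vertices.
c_true = rng.dirichlet(np.ones(3))          # random nonnegative weights summing to 1
p = V @ c_true                              # the point, possibly with negative coords

# Recover the nonnegative weights by solving [V; 1 1 1] c = [p; 1].
M = np.vstack([V, np.ones(3)])
c = np.linalg.solve(M, np.append(p, 1.0))
```

The weights `c` come out nonnegative even though `p` itself can have negative Cartesian coordinates — which is exactly the property that lets signed sampling patterns be displayed on an intensity-only modulator.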
A group of researchers working in France and the USA, led by N. Ducros, has uploaded an interesting paper this week.
When doing single-pixel imaging, one of the most important aspects you need to take into account is the kind of structured patterns (functions) you are going to use. This is quite relevant because it is strongly tied to the acquisition speed you can achieve (the total number of measurements needed to obtain good images depends heavily on the set of functions you choose). Usually, the go-to solution for single-pixel cameras is to either choose random functions, or a set (family) of orthogonal functions (Fourier, DCT, Hadamard, etc.).
The problem with random functions is that they are not orthogonal (it is very hard to distinguish between two different random functions; they all look alike), so you usually need to project a high number of them (which is time consuming). Orthogonal functions that belong to a basis are a better choice, because you can send the full basis to get “perfect” quality (i.e., without losing information due to undersampling). However, these functions usually have positive and negative values, which is something you cannot directly implement in many Spatial Light Modulators (for example, in Digital Micromirror Devices). If you want to implement these patterns, there are multiple workarounds. The most common one is to sequentially implement two closely-related patterns on the SLM to generate one function. This solves the negative-positive problem, but increases the time it takes to obtain an image by a factor of two.
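The splitting workaround is simple enough to show in a few lines of NumPy (my own illustration): split each signed pattern into its positive and negative parts, display both, and subtract the two detector readings.

```python
import numpy as np

rng = np.random.default_rng(3)

# 8x8 Hadamard matrix via the Sylvester construction: entries are +1/-1.
H = np.array([[1]])
for _ in range(3):
    H = np.kron(H, np.array([[1, 1], [1, -1]]))

s = rng.random(8)                 # the (vectorized) scene: nonnegative intensities

# Split each signed pattern into two nonnegative patterns a DMD can display.
H_pos = (H + 1) / 2               # keeps the +1 entries
H_neg = (1 - H) / 2               # keeps the -1 entries (flipped to +1)

m_direct = H @ s                  # the signed measurements we actually want
m_split = H_pos @ s - H_neg @ s   # same values, but from twice as many projections
```

The two measurement vectors are identical — the cost is that every signed pattern now takes two exposures, which is the factor-of-two penalty mentioned above.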
What Lorente-Mur et al. show in this paper is a method to generate a new family of positive-only patterns, derived from the original positive-negative family. This makes it possible to obtain images with a reduced number of measurements when compared to the dual or splitting approach I mentioned earlier, but still with high quality. Nice way to tackle one of the most limiting factors of single-pixel architectures.
Handling negative patterns for fast single-pixel lifetime imaging
Pattern generalization was proposed recently as an avenue to increase the acquisition speed of single-pixel imaging setups. This approach consists of designing some positive patterns that reproduce the target patterns with negative values through linear combinations. This avoids the typical burden of acquiring the positive and negative parts of each of the target patterns, which doubles the acquisition time. In this study, we consider the generalization of the Daubechies wavelet patterns and compare images reconstructed using our approach and using the regular splitting approach. Overall, the reduction in the number of illumination patterns should facilitate the implementation of compressive hyperspectral lifetime imaging for fluorescence-guided surgery.
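To see how positive-only patterns can reproduce signed ones with fewer than twice the measurements, here is the simplest instance of the idea in NumPy — not the paper's Daubechies-wavelet generalization, just a toy offset construction I am using for illustration: shift the signed patterns to be nonnegative and measure a single all-ones pattern once.

```python
import numpy as np

rng = np.random.default_rng(4)

# Signed target patterns: rows of an 8x8 Hadamard matrix (entries +1/-1).
H = np.array([[1]])
for _ in range(3):
    H = np.kron(H, np.array([[1, 1], [1, -1]]))
s = rng.random(8)                          # the scene

# Positive patterns a DMD can display: shifted targets, plus one all-ones pattern.
P = (H + 1) / 2                            # 8 nonnegative patterns
m_pos = P @ s                              # 8 measurements
ones_meas = np.ones(8) @ s                 # 1 extra measurement, reused for every row

# A linear combination reproduces the signed measurements: H s = 2 P s - 1 s.
m_target = 2 * m_pos - ones_meas
```

Here 8 signed measurements cost 9 exposures instead of the 16 that plain splitting would need; the paper's generalization pursues the same kind of saving for more structured pattern families.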
The guys at LKB keep going deeper into turbid media. This time, they have done it really fast. By using a phase spatial light modulator and with the help of an FPGA card, they were able to focus light through a scattering medium at a rate of ~4 kHz.
This tackles a common problem when you use the Transmission Matrix approach on biological systems: live samples evolve, and thus the matrix you measure stops being valid after a really short time.
For me, this is a really nice technical implementation (and not an easy one to do), merging electronics, computer science, and optics to tackle a well-defined biological problem.
Focusing light through dynamical samples using fast continuous wavefront optimization
(featured image extracted from Fig. 1 of the manuscript)
We describe a fast continuous optimization wavefront shaping system able to focus light through dynamic scattering media. A micro-electro-mechanical system-based spatial light modulator, a fast photodetector, and field programmable gate array electronics are combined to implement a continuous optimization of a wavefront with a single-mode optimization rate of 4.1 kHz. The system performances are demonstrated by focusing light through colloidal solutions of TiO2 particles in glycerol with tunable temporal stability.
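The optimization loop itself is conceptually simple; here is a simulated sketch in NumPy of sequential (coordinate-ascent) phase optimization — my own toy model with made-up transmission coefficients, not the authors' MEMS/FPGA implementation. Each SLM mode's phase is scanned in turn and set to whatever maximizes the intensity at the target focus.

```python
import numpy as np

rng = np.random.default_rng(2)

N = 32                                              # number of SLM modes
t = rng.normal(size=N) + 1j * rng.normal(size=N)    # medium's transmission to the focus
phases = np.zeros(N)                                # current SLM phase per mode

def focus_intensity(ph):
    """Intensity at the focus for a given phase pattern."""
    return np.abs(np.sum(t * np.exp(1j * ph))) ** 2

I0 = focus_intensity(phases)
grid = np.linspace(0, 2 * np.pi, 64, endpoint=False)
for _ in range(3):                                  # a few sequential passes
    for k in range(N):
        trials = []
        for g in grid:
            ph = phases.copy()
            ph[k] = g
            trials.append(focus_intensity(ph))
        phases[k] = grid[int(np.argmax(trials))]    # keep the best phase for mode k

I_final = focus_intensity(phases)
ideal = np.sum(np.abs(t)) ** 2                      # all modes perfectly in phase
```

In the simulation this drives the focus intensity close to the all-in-phase optimum. The hard part the paper solves is doing each single-mode update in ~0.25 ms on real hardware, fast enough to outrun the decorrelation of a living (or tunably unstable) sample.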