De-scattering with Excitation Patterning (DEEP) Enables Rapid Wide-field Imaging Through Scattering Media

Very interesting stuff from the people at MIT regarding imaging through scattering media. Recently, multiple approaches that take advantage of temporal focusing (TF) to increase excitation efficiency inside scattering media in two-photon microscopy have been published, and this one goes a step further.

Here, the authors use wide-field structured illumination, in combination with TF, to obtain images with a large field-of-view using a low number of camera acquisitions. To do so, they sequentially project a set of random structured patterns using a digital micromirror device (DMD). Combining the pictures acquired for each illumination pattern with the point-spread-function (PSF) of the imaging system allows them to recover images of different biological samples without the typical scattering blur.
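
To make the working principle concrete, here is a toy numerical sketch of such a reconstruction (my own simplified forward model with made-up sizes, not the authors' actual algorithm): each camera frame is the object multiplied by a known binary DMD pattern and then blurred by the PSF, and a projected gradient descent inverts the whole stack of frames at once.

```python
import numpy as np
from scipy.signal import fftconvolve

# Toy forward model (an assumption, not the authors' code): each camera
# frame is the object times a known binary DMD pattern, blurred by the PSF.
rng = np.random.default_rng(0)
n, T = 64, 32                                   # image size, number of acquisitions
x = np.zeros((n, n)); x[24:40, 24:40] = 1.0     # toy fluorescent object
r2 = (np.arange(n) - n // 2) ** 2
psf = np.exp(-(r2[:, None] + r2[None, :]) / (2 * 3 ** 2))
psf /= psf.sum()

patterns = rng.integers(0, 2, size=(T, n, n)).astype(float)
frames = np.stack([fftconvolve(p * x, psf, mode="same") for p in patterns])

# Least-squares recovery by projected gradient descent on
# sum_t || PSF * (P_t . x) - y_t ||^2 ; the adjoint of convolution is
# correlation, i.e. convolution with the flipped kernel.
psf_flip = psf[::-1, ::-1]
x_hat = np.zeros_like(x)
for _ in range(100):
    grad = np.zeros_like(x)
    for p, y in zip(patterns, frames):
        residual = fftconvolve(p * x_hat, psf, mode="same") - y
        grad += p * fftconvolve(residual, psf_flip, mode="same")
    x_hat = np.clip(x_hat - 0.5 * grad / T, 0, None)   # fluorescence is nonnegative
```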

Optical design and working principle of the system. Figure extracted from “De-scattering with Excitation Patterning (DEEP) Enables Rapid Wide-field Imaging Through Scattering Media,” Dushan N. Wadduwage et al., at https://arxiv.org/abs/1902.10737

De-scattering with Excitation Patterning (DEEP) Enables Rapid Wide-field Imaging Through Scattering Media

by Dushan N. Wadduwage et al., at arXiv.

Abstract:

From multi-photon imaging penetrating millimeters deep through scattering biological tissue, to super-resolution imaging conquering the diffraction limit, optical imaging techniques have greatly advanced in recent years. Notwithstanding, a key unmet challenge in all these imaging techniques is to perform rapid wide-field imaging through a turbid medium. Strategies such as active wave-front correction and multi-photon excitation, both used for deep tissue imaging; or wide-field total-internal-reflection illumination, used for super-resolution imaging; can generate arbitrary excitation patterns over a large field-of-view through or under turbid media. In these cases, throughput advantage gained by wide-field excitation is lost due to the use of point detection. To address this challenge, here we introduce a novel technique called De-scattering with Excitation Patterning, or ‘DEEP’, which uses patterned excitation followed by wide-field detection with computational imaging. We use two-photon temporal focusing microscopy (TFM) to demonstrate our approach at multiple scattering lengths deep in tissue. Our results suggest that millions of point-scanning measurements could be substituted with tens to hundreds of DEEP measurements with no compromise in image quality.

Rapid broadband characterization of scattering medium using hyperspectral imaging

People at LKB (and St. Andrews) keep shining light into scattering media. This time, they have developed a cool approach for measuring the multispectral Transmission Matrix (MSTM) of a medium. This knowledge allows one to control each spectral component of a light beam travelling through the medium, which makes it possible to shape, for example, the spectral and temporal profiles of light pulses. This is quite nice, as it can be used to generate tight foci inside biological tissues, improving the performance of nonlinear microscopy techniques.

Usually, the measurement of the MSTM entails a long iterative process (basically, you measure the TM for each spectral channel you want to characterize). This is not always possible (you rarely have a laser with all the wavelengths you need to measure), and it also tends to be slow (a problem if you want to measure the MSTM of a changing medium). Here the authors tackle this problem by performing a wavelength-to-space mapping, thus measuring the spatio-spectral information in just one shot of a CCD camera. To do so, they use a clever design with a lenslet array and a diffraction grating. In this way, the total time it takes to acquire the MSTM is reduced by ~2 orders of magnitude. Elegant, simple, and fast.
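
As a toy illustration of why having the MSTM is so useful, here is a minimal numpy sketch (random complex matrices stand in for a real measured MSTM, and all sizes are arbitrary assumptions): once T(λ) is known, phase conjugation focuses each spectral channel of the pulse independently through the medium.

```python
import numpy as np

# Random matrices stand in for a measured MSTM (sizes are arbitrary).
# With T(lambda) known, phase conjugation focuses each spectral channel
# of the pulse independently through the scattering medium.
rng = np.random.default_rng(1)
n_in, n_out, n_lambda = 256, 512, 16       # input modes, output speckles, channels
T = (rng.standard_normal((n_lambda, n_out, n_in))
     + 1j * rng.standard_normal((n_lambda, n_out, n_in))) / np.sqrt(2 * n_in)

target = 42                                # output speckle grain to focus on
for T_wl in T:                             # one TM per spectral channel
    e_in = np.exp(-1j * np.angle(T_wl[target]))   # phase-conjugate input field
    out = T_wl @ e_in
    eta = np.abs(out[target]) ** 2 / np.mean(np.abs(out) ** 2)
    # eta ~ (pi / 4) * n_in for phase-only conjugation of a random TM
```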

Design concept for the spectral measurements using a lenslet array and a single CCD sensor. Extracted from “Rapid broadband characterization of scattering medium using hyperspectral imaging,” A. Boniface et al., https://www.osapublishing.org/optica/abstract.cfm?uri=optica-6-3-274

Rapid broadband characterization of scattering medium using hyperspectral imaging

by Antoine Boniface et al., at Optica

Abstract:

Scattering of a coherent ultrashort pulse of light by a disordered medium results in a complex spatiotemporal speckle pattern. The form of the pattern can be described by knowledge of a spectrally dependent transmission matrix, which can in turn be used to shape the propagation of the pulse through the medium. We introduce a method for rapid measurement of this matrix for the entire spectrum of the pulse based on a hyperspectral imaging system that is close to 2 orders of magnitude faster than any approach previously reported. We demonstrate narrowband as well as spatiotemporal refocusing of a femtosecond pulse temporally stretched to several picoseconds after propagation through a multiply scattering medium. This enables new routes for multiphoton imaging and manipulation through complex media.

Compressive optical imaging with a photonic lantern

New single-pixel camera design, but this time using multicore fibers (MCF) and a photonic lantern instead of a spatial light modulator. Cool!

The fundamental idea is to excite one of the cores of a MCF. Light then propagates through the fiber, and a photonic lantern at the distal tip generates a random-like light pattern at the output. Exciting different cores of the MCF generates different light patterns at the end of the fiber, which can be used to obtain images using the single-pixel imaging formalism.

There is more cool stuff in the paper, for example the Compressive Sensing algorithm the authors use, which enforces positivity constraints. This is quite relevant if you want to get high-quality images, because of the reduced number of cores present in the MCF (remember, 1 core = 1 pattern, and the number of patterns determines the spatial resolution of the image in a single-pixel camera). It is also nice that the authors have made their code available.
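
To see the single-pixel formalism with a positivity constraint in action, here is a minimal sketch (this is not the authors' SARA-COIL solver, just a plain projected-gradient stand-in, with random nonnegative patterns playing the role of the pre-measured lantern outputs):

```python
import numpy as np

# Not SARA-COIL -- a bare-bones projected-gradient solver with the same
# positivity constraint. Random patterns stand in for the lantern outputs.
rng = np.random.default_rng(2)
n_px, n_cores = 32 * 32, 121            # image pixels, MCF cores (= patterns)

A = rng.random((n_cores, n_px))         # one pre-measured pattern per core
x_true = np.zeros(n_px); x_true[300:340] = 1.0
y = A @ x_true                          # bucket-detector signals, one per pattern

x = np.zeros(n_px)
step = 1.0 / np.linalg.norm(A, 2) ** 2  # safe step from the spectral norm
for _ in range(500):
    x = x - step * A.T @ (A @ x - y)    # gradient step on the data fidelity
    x = np.clip(x, 0, None)             # project onto the nonnegative orthant
# At 121 patterns for 1024 pixels the recovery is only approximate; the
# paper's regularized algorithm does much better at such compression.
```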

Some experimental/simulation results (nice Smash logo there!). Extracted from Debaditya Choudhury et al., “Compressive optical imaging with a photonic lantern,” at https://arxiv.org/abs/1903.01288

Compressive optical imaging with a photonic lantern

by Debaditya Choudhury et al., at arXiv

Abstract:

The thin and flexible nature of optical fibres often makes them the ideal technology to view biological processes in-vivo, but current microendoscopic approaches are limited in spatial resolution. Here, we demonstrate a new route to high resolution microendoscopy using a multicore fibre (MCF) with an adiabatic multimode-to-singlemode photonic lantern transition formed at the distal end by tapering. We show that distinct multimode patterns of light can be projected from the output of the lantern by individually exciting the single-mode MCF cores, and that these patterns are highly stable to fibre movement. This capability is then exploited to demonstrate a form of single-pixel imaging, where a single pixel detector is used to detect the fraction of light transmitted through the object for each multimode pattern. A custom compressive imaging algorithm we call SARA-COIL is used to reconstruct the object using only the pre-measured multimode patterns themselves and the detector signals.

Single-pixel imaging with sampling distributed over simplex vertices

Last week I posted a recently uploaded paper on using positive-only patterns in a single-pixel imaging system.

Today I just found another implementation with the same objective. This time the authors (from the University of Warsaw, led by Rafał Kotyński) introduce the idea of simplexes, or how any point in an N-dimensional space can be located using only positive coordinates if you choose the correct coordinate system. Cool concept!
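
Here is a toy version of the trick, just to see it at work (using a simple positively-spanning set of N+1 vertices rather than the regular simplex of the paper): any pattern with negative values becomes a nonnegative combination of fixed "vertex" patterns.

```python
import numpy as np

# Toy version of the simplex idea (not the paper's regular-simplex
# construction): the N standard basis vectors plus v0 = -(1,...,1)
# positively span R^N, so any vector with negative entries is a
# nonnegative combination of these N+1 vertices.
def simplex_coords(x):
    a0 = max(0.0, -x.min())             # weight of the all-negative vertex
    return a0, x + a0                   # remaining coefficients are all >= 0

x = np.array([0.5, -1.2, 0.3, -0.4])    # a pattern value per pixel, some negative
a0, a = simplex_coords(x)
v0 = -np.ones_like(x)
assert np.allclose(a0 * v0 + a, x) and a0 >= 0 and (a >= 0).all()
```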

Fig. 1 extracted from “Single-pixel imaging with sampling distributed over simplex vertices,” Krzysztof M. Czajkowski, Anna Pastuszczak, and Rafał Kotyński, Opt. Lett. 44, 1241-1244 (2019)

Single-pixel imaging with sampling distributed over simplex vertices

by Krzysztof M. Czajkowski et al., at Optics Letters

Abstract:

We propose a method of reduction of experimental noise in single-pixel imaging by expressing the subsets of sampling patterns as linear combinations of vertices of a multidimensional regular simplex. This method also may be directly extended to complementary sampling. The modified measurement matrix contains nonnegative elements with patterns that may be directly displayed on intensity spatial light modulators. The measurement becomes theoretically independent of the ambient illumination, and in practice becomes more robust to the varying conditions of the experiment. We show how the optimal dimension of the simplex depends on the level of measurement noise. We present experimental results of single-pixel imaging using binarized sampling and real-time reconstruction with the Fourier domain regularized inversion method.

Handling negative patterns for fast single-pixel lifetime imaging

A group of researchers working in France and the USA, led by N. Ducros, has uploaded an interesting paper this week.

When doing single-pixel imaging, one of the most important aspects you need to take into account is the kind of structured patterns (functions) you are going to use. This is quite relevant because it is closely connected to the speed you can achieve (the number of measurements needed to obtain good images strongly depends on the set of functions you choose). Usually, the go-to solution for single-pixel cameras is to choose either random functions or a set (family) of orthogonal functions (Fourier, DCT, Hadamard, etc.).

The problem with random functions is that they are not orthogonal (two different random functions are very hard to distinguish, as they all look alike), so you usually need to project a high number of them, which is time consuming. Orthogonal functions that belong to a basis are a better choice, because you can send the full basis to get “perfect” quality (i.e., without losing information due to undersampling). However, these functions usually have positive and negative values, which is something you cannot directly implement in many Spatial Light Modulators (for example, in Digital Micromirror Devices). If you want to implement these patterns, there are multiple workarounds. The most common one is to sequentially display two closely-related patterns on the SLM to implement one function. This solves the negative-positive problem, but increases the time it takes to obtain an image by a factor of two, as the sketch below makes explicit.
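
Here is the classic splitting workaround in a few lines of numpy: each ±1 pattern is displayed as two nonnegative patterns and the two bucket signals are subtracted, costing two DMD frames per coefficient.

```python
import numpy as np
from scipy.linalg import hadamard

# Splitting workaround: display the positive and negative parts of each
# +/-1 Hadamard pattern separately, then subtract the two bucket signals.
H = hadamard(64).astype(float)             # rows are +/-1 sampling patterns
x = np.random.default_rng(3).random(64)    # toy scene

P_pos, P_neg = np.clip(H, 0, None), np.clip(-H, 0, None)
m = P_pos @ x - P_neg @ x                  # equals H @ x, at twice the frame count
assert np.allclose(m, H @ x)
```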

What Lorente-Mur et al. show in this paper is a method to generate a new family of positive-only patterns, derived from the original positive-negative family. This makes it possible to obtain images with fewer measurements than the dual or splitting approach I mentioned earlier, while keeping high image quality. A nice way to tackle one of the most limiting factors of single-pixel architectures.
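
For flavor, here is one simple instance of the general idea in numpy (a textbook construction built on the all-ones Hadamard row; I am not claiming this is the exact family of Lorente-Mur et al., who generalize Daubechies wavelet patterns): positive-only linear combinations of the target patterns are displayed, and the ±1 measurements are recovered afterwards by inverting the combination.

```python
import numpy as np
from scipy.linalg import hadamard

# One simple instance of pattern generalization (not necessarily the
# authors' construction): with the all-ones row H[0], the combinations
# (H[k] + H[0]) / 2 are 0/1 patterns, so each +/-1 coefficient costs a
# single frame instead of two.
H = hadamard(64).astype(float)
x = np.random.default_rng(4).random(64)

Q = (H + H[0]) / 2                    # nonnegative, directly DMD-displayable
q = Q @ x                             # one measurement per pattern
m = 2 * q - q[0]                      # recover the +/-1 Hadamard measurements
assert np.allclose(m, H @ x)
```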

Working principle visualization of the generalization method to measure with positive-only patterns in single-pixel imaging setups. Figure extracted from Lorente-Mur et al., “Handling negative patterns for fast single-pixel lifetime imaging,” at https://hal.archives-ouvertes.fr/hal-02017598

Handling negative patterns for fast single-pixel lifetime imaging

by Antonio Lorente Mur et al., at https://hal.archives-ouvertes.fr/hal-02017598

Abstract:

Pattern generalization was proposed recently as an avenue to increase the acquisition speed of single-pixel imaging setups. This approach consists of designing some positive patterns that reproduce the target patterns with negative values through linear combinations. This avoids the typical burden of acquiring the positive and negative parts of each of the target patterns, which doubles the acquisition time. In this study, we consider the generalization of the Daubechies wavelet patterns and compare images reconstructed using our approach and using the regular splitting approach. Overall, the reduction in the number of illumination patterns should facilitate the implementation of compressive hyperspectral lifetime imaging for fluorescence-guided surgery.

Wavefront correction in two-photon microscopy with a multi-actuator adaptive lens

The group led by P. Artal at Murcia University has recently published an interesting paper on adaptive optics using an adaptive lens. In any real scenario, imperfections in the optical elements you use, or in the very objects you want to image, introduce optical aberrations in the pictures you obtain. Usually these aberrations reduce the quality of your images just a bit (introducing a bit of defocus or some astigmatism), but in the worst-case scenario they may render your results completely useless.

To overcome this problem, liquid crystal spatial light modulators or deformable mirrors are usually placed in the optical system to introduce phase corrections to the light going through it, countering the phase of these aberrations and thus restoring image quality. However, these devices present several drawbacks. Even though both spatial light modulators and deformable mirrors can correct the aberrations I mentioned earlier, they work in a reflection configuration, which introduces additional complexity to the optical system. Also, liquid crystal spatial light modulators are sensitive to polarization, usually have low reflectance values, and tend to be slow.

As a way to tackle those obstacles, the authors have used an adaptive lens in a two-photon microscope to perform the adaptive optics correction. Adaptive lenses are increasingly being used for aberration correction. In contrast to both spatial light modulators and deformable mirrors, they work in transmission and present very low losses. Moreover, they can introduce low- and mid-order aberrations at refresh rates of almost 1 kHz. The working principle can be seen in this figure:

Schematics of the working principle of an adaptive lens. The lens is formed by two thin glass layers, and a liquid in between. Each actuator is triggered by an electrical signal, which deforms the glass windows, generating different shapes and changing the phase of the wavefront passing through the lens. Figure extracted from Stefano Bonora et al., “Wavefront correction and high-resolution in vivo OCT imaging with an objective integrated multi-actuator adaptive lens,” Opt. Express 23, 21931-21941 (2015)

In the paper, they show how this device can obtain results comparable to the traditional spatial light modulator approach, with the benefits mentioned before, in a multi-photon microscope.
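
The abstract mentions a hill-climbing procedure driving the correction; a minimal sketch of what such a loop could look like follows (`apply_zernike` and `acquire_mp_image` are hypothetical stand-ins for the lens driver and the microscope acquisition, and the metric is just one simple choice):

```python
import numpy as np

# Hedged sketch of a hill-climbing correction loop: perturb one Zernike
# coefficient at a time and keep the change only if the image metric
# improves. `apply_zernike` and `acquire_mp_image` are hypothetical
# placeholders for the lens driver and the camera acquisition.
def hill_climb(apply_zernike, acquire_mp_image, n_modes=8, step=0.05, n_sweeps=5):
    coeffs = np.zeros(n_modes)
    metric = lambda img: float(img.mean())   # e.g. total two-photon signal
    apply_zernike(coeffs)
    best = metric(acquire_mp_image())
    for _ in range(n_sweeps):
        for j in range(n_modes):
            for delta in (+step, -step):
                coeffs[j] += delta
                apply_zernike(coeffs)
                score = metric(acquire_mp_image())
                if score > best:
                    best = score             # keep the improvement
                    break
                coeffs[j] -= delta           # revert and try the other sign
    apply_zernike(coeffs)
    return coeffs
```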

Wavefront correction in two-photon microscopy with a multi-actuator adaptive lens

by Juan M. Bueno et al., at Optics Express

Abstract:

A multi-actuator adaptive lens (AL) was incorporated into a multi-photon (MP) microscope to improve the quality of images of thick samples. Through a hill-climbing procedure the AL corrected for the specimen-induced aberrations enhancing MP images. The final images hardly differed when two different metrics were used, although the sets of Zernike coefficients were not identical. The optimized MP images acquired with the AL were also compared with those obtained with a liquid-crystal-on-silicon spatial light modulator. Results have shown that both devices lead to similar images, which corroborates the usefulness of this AL for MP imaging.

Experimental results showing the improvement in the images obtained with the adaptive lens system. Figure 3 from the paper: Juan M. Bueno et al., “Wavefront correction in two-photon microscopy with a multi-actuator adaptive lens,” Opt. Express 26, 14278-14287 (2018)