Rapid broadband characterization of scattering medium using hyperspectral imaging

People at LKB (and St. Andrews) keep shining light into scattering media. This time, they have developed a cool approach for measuring the multispectral Transmission Matrix (MSTM) of a medium. This knowledge allows one to control each spectral component of a light beam travelling through the medium, which makes it possible to shape, for example, the spectral and temporal profiles of light pulses. This is quite nice, as it can be used to generate tight foci inside biological tissues, improving the performance of nonlinear microscopy techniques.

Usually, the measurement of the MSTM entails a long iterative process (basically, you measure the TM for each spectral channel you want to characterize). This is not always possible (you rarely have a laser with all the wavelengths you need), and it also tends to be slow (a problem if you want to measure the MSTM of a changing medium). Here the authors tackle this problem by performing a wavelength-to-spatial mapping, measuring the spatio-spectral information in just one shot of a CCD camera. To do so, they use a clever design with a lenslet array and a dispersion grating. In this way, the total time it takes to acquire the MSTM is reduced by ~2 orders of magnitude. Elegant, simple, and fast.
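Once the MSTM is known, each spectral channel can be refocused independently by phase conjugation. A minimal toy sketch of that idea, with random complex matrices standing in for a real measured MSTM (all sizes and variable names are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, n_lambda = 64, 32, 5  # SLM pixels, camera pixels, spectral channels

# Toy multispectral TM: one random complex matrix per spectral channel,
# standing in for the measured MSTM of a real scattering medium.
tms = rng.normal(size=(n_lambda, n_out, n_in)) + 1j * rng.normal(size=(n_lambda, n_out, n_in))

target = 7  # camera pixel where we want each spectral channel to focus
focus, background = [], []
for tm in tms:
    # Phase conjugation: a phase-only conjugate of the target row makes all
    # input contributions add in phase at `target` for that wavelength.
    e_in = np.conj(tm[target]) / np.abs(tm[target])
    i_out = np.abs(tm @ e_in) ** 2
    focus.append(i_out[target])
    background.append(np.mean(np.delete(i_out, target)))

enhancement = np.mean(focus) / np.mean(background)
print(f"focus-to-background enhancement: {enhancement:.0f}x")
```

For phase-only conjugation the expected enhancement scales with the number of controlled input modes (roughly π/4 · n_in), which is why more SLM pixels give a tighter, brighter focus.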

Design concept for the spectral measurements using a lenslet array and a single CCD sensor. Extracted from “Rapid broadband characterization of scattering medium using hyperspectral imaging,” A. Boniface et al., https://www.osapublishing.org/optica/abstract.cfm?uri=optica-6-3-274

Rapid broadband characterization of scattering medium using hyperspectral imaging

by Antoine Boniface et al., at Optica

Abstract:

Scattering of a coherent ultrashort pulse of light by a disordered medium results in a complex spatiotemporal speckle pattern. The form of the pattern can be described by knowledge of a spectrally dependent transmission matrix, which can in turn be used to shape the propagation of the pulse through the medium. We introduce a method for rapid measurement of this matrix for the entire spectrum of the pulse based on a hyperspectral imaging system that is close to 2 orders of magnitude faster than any approach previously reported. We demonstrate narrowband as well as spatiotemporal refocusing of a femtosecond pulse temporally stretched to several picoseconds after propagation through a multiply scattering medium. This enables new routes for multiphoton imaging and manipulation through complex media.

Compressive optical imaging with a photonic lantern

New single-pixel camera design, but this time using multicore fibers (MCF) and a photonic lantern instead of a spatial light modulator. Cool!

The fundamental idea is to excite one of the cores of a MCF. Light then propagates through the fiber, which has a photonic lantern at its distal end that generates a random-like light pattern at the output. Exciting different cores of the MCF generates different light patterns at the end of the fiber, which can be used to obtain images using the single-pixel imaging formalism.

There is more cool stuff in the paper, for example the Compressive Sensing algorithm the authors use, which exploits positivity constraints. This is quite relevant if you want to get high-quality images, because of the reduced number of cores present in the MCF (remember, 1 core = 1 pattern, and the number of patterns determines the spatial resolution of the image in a single-pixel camera). It is also nice that the authors have made their code available here.
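The measurement model is the usual single-pixel one: each detector value is the inner product of the scene with one projected pattern. A minimal sketch of a positivity-constrained reconstruction, using plain non-negative least squares rather than the authors' SARA-COIL algorithm (which adds sparsity regularization); all sizes and patterns below are made up:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_pixels, n_patterns = 64, 56  # undersampled: fewer patterns than pixels

# Hypothetical stand-in for the lantern's speckle patterns: one random
# non-negative intensity pattern per excited core.
patterns = rng.uniform(size=(n_patterns, n_pixels))

# Non-negative test object (a scene is an intensity image, so x >= 0).
x_true = np.zeros(n_pixels)
x_true[10:18] = 1.0

# Single-pixel measurement: one bucket (detector) value per pattern.
y = patterns @ x_true

# Positivity-constrained least squares: a much simpler analogue of the
# positivity prior used in the paper's reconstruction.
x_rec, _ = nnls(patterns, y)

err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.3f}")
```

Even this bare-bones solver recovers a sparse non-negative scene from fewer measurements than pixels, which is the point of adding the positivity constraint when the number of cores (patterns) is limited.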

Some experimental/simulation results (nice Smash logo there!). Extracted from
Debaditya Choudhury et al., “Compressive optical imaging with a photonic lantern,” at https://arxiv.org/abs/1903.01288

Compressive optical imaging with a photonic lantern

by Debaditya Choudhury et al., at arXiv

Abstract:

The thin and flexible nature of optical fibres often makes them the ideal technology to view biological processes in-vivo, but current microendoscopic approaches are limited in spatial resolution. Here, we demonstrate a new route to high resolution microendoscopy using a multicore fibre (MCF) with an adiabatic multimode-to-singlemode photonic lantern transition formed at the distal end by tapering. We show that distinct multimode patterns of light can be projected from the output of the lantern by individually exciting the single-mode MCF cores, and that these patterns are highly stable to fibre movement. This capability is then exploited to demonstrate a form of single-pixel imaging, where a single pixel detector is used to detect the fraction of light transmitted through the object for each multimode pattern. A custom compressive imaging algorithm we call SARA-COIL is used to reconstruct the object using only the pre-measured multimode patterns themselves and the detector signals.

Single-pixel imaging with sampling distributed over simplex vertices

Last week I posted a recently uploaded paper on using positive-only patterns in a single-pixel imaging system.

Today I just found another implementation with the same objective. This time the authors (from the University of Warsaw, in a group led by Rafał Kotyński) introduce the idea of simplexes: any point in an N-dimensional space can be located using only positive coordinates if you choose the right coordinate system. Cool concept!
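A tiny sketch of that idea in 2D: the three vertices of a regular simplex (an equilateral triangle centered at the origin) positively span the plane, so even a point with negative Cartesian coordinates has an exact non-negative representation. This is a toy illustration of the geometric idea, not the paper's actual sampling construction:

```python
import numpy as np
from scipy.optimize import nnls

# Three unit vectors at 120-degree spacing: vertices of a regular 2D
# simplex centered at the origin. Together they positively span the plane.
angles = 2 * np.pi * np.arange(3) / 3
vertices = np.stack([np.cos(angles), np.sin(angles)])  # shape (2, 3)

p = np.array([-0.7, 0.4])  # a point with a negative Cartesian coordinate
coeffs, residual = nnls(vertices, p)  # solve for non-negative coordinates

print("non-negative coordinates:", np.round(coeffs, 3))
print("exact representation:", np.allclose(vertices @ coeffs, p))
```

In the paper the same trick is applied to sampling patterns: combinations of simplex vertices replace patterns with negative values, so everything displayed on the modulator stays non-negative.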

Fig.1 extracted from “Single-pixel imaging with sampling distributed over simplex vertices,”
Krzysztof M. Czajkowski, Anna Pastuszczak, and Rafał Kotyński, Opt. Lett. 44, 1241-1244 (2019)

Single-pixel imaging with sampling distributed over simplex vertices

by Krzysztof M. Czajkowski et al., at Optics Letters

Abstract:

We propose a method of reduction of experimental noise in single-pixel imaging by expressing the subsets of sampling patterns as linear combinations of vertices of a multidimensional regular simplex. This method also may be directly extended to complementary sampling. The modified measurement matrix contains nonnegative elements with patterns that may be directly displayed on intensity spatial light modulators. The measurement becomes theoretically independent of the ambient illumination, and in practice becomes more robust to the varying conditions of the experiment. We show how the optimal dimension of the simplex depends on the level of measurement noise. We present experimental results of single-pixel imaging using binarized sampling and real-time reconstruction with the Fourier domain regularized inversion method.

Handling negative patterns for fast single-pixel lifetime imaging

A group of researchers working in France and the USA, led by N. Ducros, has uploaded an interesting paper this week.

When doing single-pixel imaging, one of the most important choices you need to make is the kind of structured patterns (functions) you are going to use. This is quite relevant because it is closely connected to the speed you can achieve (the total number of measurements needed to obtain good images strongly depends on the set of functions you choose). Usually, the go-to solution for single-pixel cameras is to choose either random functions or a set (family) of orthogonal functions (Fourier, DCT, Hadamard, etc.).

The problem with random functions is that they are not orthogonal (two different random functions are hard to tell apart; they all look alike), so you usually need to project a large number of them (which is time-consuming). Orthogonal functions that belong to a basis are a better choice, because you can send the full basis to get “perfect” quality (i.e., without losing information due to undersampling). However, these functions usually take positive and negative values, which is something you cannot directly implement in many Spatial Light Modulators (for example, in Digital Micromirror Devices). If you want to implement these patterns, there are several workarounds. The most common one is to display two closely related patterns sequentially on the SLM to generate one function. This solves the negative-positive problem, but increases the time it takes to obtain an image by a factor of two.
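The splitting workaround is easy to sketch: display the positive and negative parts of each pattern as two separate exposures and subtract the two bucket signals. A toy 1D example with Hadamard patterns (sizes and the test scene are illustrative):

```python
import numpy as np
from scipy.linalg import hadamard

n = 16
H = hadamard(n)       # +/-1 Hadamard patterns: not directly displayable on a DMD
x = np.arange(n) / n  # toy test scene

# Splitting workaround: show the positive and negative parts separately
# and subtract the two bucket measurements -> two exposures per pattern.
H_pos = np.clip(H, 0, None)
H_neg = np.clip(-H, 0, None)
y = H_pos @ x - H_neg @ x  # equivalent to H @ x, at twice the acquisition time

# Hadamard reconstruction: H is orthogonal up to a factor n (H^T H = n I).
x_rec = H.T @ y / n
print(np.allclose(x_rec, x))  # True
```

Lorente-Mur et al.'s point is precisely that this doubling of exposures can be avoided by designing a new positive-only family instead of splitting each pattern in two.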

What Lorente-Mur et al. show in this paper is a method to generate a new family of positive-only patterns, derived from the original positive-negative family. This makes it possible to obtain images with a reduced number of measurements when compared to the dual or splitting approach I mentioned earlier, but still with high quality. Nice way to tackle one of the most limiting factors of single-pixel architectures.

Working principle of the generalization method to measure with positive-only patterns in single-pixel imaging setups. Figure extracted from Lorente-Mur et al., “Handling negative patterns for fast single-pixel lifetime imaging,” at https://hal.archives-ouvertes.fr/hal-02017598

Handling negative patterns for fast single-pixel lifetime imaging

by Antonio Lorente Mur et al., at https://hal.archives-ouvertes.fr/hal-02017598

Abstract:

Pattern generalization was proposed recently as an avenue to increase the acquisition speed of single-pixel imaging setups. This approach consists of designing some positive patterns that reproduce the target patterns with negative values through linear combinations. This avoids the typical burden of acquiring the positive and negative parts of each of the target patterns, which doubles the acquisition time. In this study, we consider the generalization of the Daubechies wavelet patterns and compare images reconstructed using our approach and using the regular splitting approach. Overall, the reduction in the number of illumination patterns should facilitate the implementation of compressive hyperspectral lifetime imaging for fluorescence-guided surgery.

Wavefront correction in two-photon microscopy with a multi-actuator adaptive lens

The group led by P. Artal at Murcia University has recently published an interesting paper on adaptive optics using an adaptive lens. When working in a real scenario, imperfections in the optical elements you use, or the very objects you want to image, introduce optical aberrations into the pictures you obtain. Usually these aberrations only reduce the quality of your images a bit (introducing some defocus or astigmatism), but in the worst-case scenario they can render the results completely useless.

To overcome this problem, liquid crystal spatial light modulators or deformable mirrors are usually inserted in optical systems to apply phase corrections to the light going through them, countering the phase of these aberrations and thus restoring image quality. However, these devices present several problems. Even though both spatial light modulators and deformable mirrors can correct the aberrations I mentioned earlier, they work in a reflection configuration, which adds complexity to the optical system. Also, liquid crystal spatial light modulators are sensitive to polarization, usually have low reflectance values, and tend to be slow.

As a way to tackle those obstacles, the authors have used an adaptive lens in a two-photon microscope to perform the adaptive optics procedure. Adaptive lenses are increasingly being used for aberration correction. In contrast to both spatial light modulators and deformable mirrors, they work in transmission and present very low losses. Moreover, they can introduce low- and mid-order aberrations at refresh rates of almost 1 kHz. The working principle can be seen in this figure:

Schematics of the working principle of an adaptive lens. The lens is formed by two thin glass layers with a liquid in between. Each actuator is triggered by an electrical signal, which deforms the glass windows, generating different shapes and changing the phase of the wavefront passing through the lens. Figure extracted from Stefano Bonora et al., “Wavefront correction and high-resolution in vivo OCT imaging with an objective integrated multi-actuator adaptive lens,” Opt. Express 23, 21931-21941 (2015)

In the paper, they show how this device can obtain results comparable to the traditional spatial light modulator approach in a multi-photon microscope, with the benefits mentioned above.
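The correction loop itself is conceptually simple: a hill-climbing search over Zernike coefficients that maximizes an image-quality metric. A toy sketch of such a loop; the metric below is an invented stand-in for a real two-photon image metric, and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
true_aberration = rng.normal(scale=0.5, size=5)  # unknown Zernike coefficients

def image_metric(correction):
    # Hypothetical stand-in for a two-photon image quality metric:
    # it peaks when the lens correction cancels the aberration.
    residual = true_aberration + correction
    return np.exp(-np.sum(residual ** 2))

# Hill climbing: scan one Zernike mode at a time, keep the best value,
# and repeat a few passes over all modes.
correction = np.zeros(5)
scan = np.linspace(-2, 2, 81)
for _ in range(3):
    for mode in range(5):
        trials = []
        for v in scan:
            c = correction.copy()
            c[mode] = v
            trials.append(image_metric(c))
        correction[mode] = scan[np.argmax(trials)]

print(f"final metric: {image_metric(correction):.3f}")  # close to 1.0
```

Mode-by-mode scanning like this needs no model of the sample, which is why it works with any corrector (adaptive lens, SLM, or deformable mirror), at the cost of one image acquisition per trial value.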

Wavefront correction in two-photon microscopy with a multi-actuator adaptive lens

by Juan M. Bueno et al., at Optics Express

Abstract:

A multi-actuator adaptive lens (AL) was incorporated into a multi-photon (MP) microscope to improve the quality of images of thick samples. Through a hill-climbing procedure the AL corrected for the specimen-induced aberrations enhancing MP images. The final images hardly differed when two different metrics were used, although the sets of Zernike coefficients were not identical. The optimized MP images acquired with the AL were also compared with those obtained with a liquid-crystal-on-silicon spatial light modulator. Results have shown that both devices lead to similar images, which corroborates the usefulness of this AL for MP imaging.

Experimental results showing the improvement in the image obtained with the adaptive lens system. Figure 3 from the paper: Juan M. Bueno et al., “Wavefront correction in two-photon microscopy with a multi-actuator adaptive lens,” Opt. Express 26, 14278-14287 (2018)


Evading scientific stalemates

This week I have been thinking about a strange thing that happened in my research group. One day, while I was doing my MSc, my colleagues and I were discussing some lab results. A small change in our experimental setup provided much better images than the ones we had been getting up to that point. This change, even though small, was puzzling at first. It was counter-intuitive. We quickly realized why it was improving our measurements. However, that was not the important thing. By making that small change, our system, which at the time was simply an imaging system, seemed to be able to tackle much more difficult experimental scenarios. We thought we had discovered a new property of the systems we were developing. We were right.

After that initial idea, we quickly designed some experiments to verify our guesses. Everything seemed to work, but we were not 100% sure why. We had some general ideas, some intuitions. Our plan was to keep doing experiments while we figured out all the details. We published some papers and started thinking big. This approach could be applied to real scenarios. We started collaborating with other groups, and in the end we developed a real-life system with them. That was published in a very good journal.

However, even though we eventually figured out the details that had been bugging us at the beginning, we were never able to build a model that allowed us to predict, or at least conjecture, the limits of our technique.

Fast forward ~3 years to today. We have a meeting planned for next week to discuss why our latest experiments are not providing the results we expected. After months of work by PhD (and MSc) students, we are at a stalemate. Some days it seems we are close to changing something in the lab that will yield the expected improvement. Other days, after hundreds of trials, everything remains the same. Without a physical model to hold on to, the group is searching blindfolded, and I don’t think that is working at all.

If I had to make a prediction right now, I would say the research line is dead (long live the research line!). It shouldn’t be dramatic; it is just science (sometimes it works, sometimes it doesn’t). However, during all this process, several students joined the group and started their MSc’s and PhD’s on the topic. For them, it could be dramatic. During all this time, I have been working on quite a lot of different things. I missed some publications, which hurt my CV. However, when something did not work, I always had other things to try. I think I have a wider view of my field because of that. In the end, I have published more than enough to write my thesis.

I guess that’s good practice: never put all your eggs in the same basket. You need hundreds of ideas to get a good one. Take your time to explore them, and build strong foundations that new people can build upon without fear of falling down.