Giga-voxel multidimensional fluorescence imaging combining single-pixel detection and data fusion

Data fusion concept, from Fig. 1 in the manuscript. Do you want a 4D reconstruction? Just take several 2D/3D objects and merge them in a clever way.

Some time ago I wrote a short post about using Data Fusion (DF) to perform a kind of Compressive Sensing (CS). We came up with that idea while tackling a common problem in multidimensional imaging systems: the more you want to measure, the harder it gets. It is not only that you need a system sensitive to many different physical parameters (wavelength, time, polarization, etc.), but also that you end up with huge datasets to record and store. If you try to measure a scene with high spatial resolution, in tens or hundreds of spectral channels, and at video frame rates (say 30 or 60 frames per second), you generate gigabytes of data every second. That will burn through your hard drives in no time, and if you want to send your data to a different lab/computer for analysis, you will wait ages for the transfer to finish.

While many techniques have tried to solve these problems, there is no perfect solution (and, in my honest opinion, there cannot be a single solution to all the problems that different systems face) that lets you obtain super high quality pictures in many different dimensions. You always have to live with some tradeoffs (for example, low spatial resolution but high frame rate, or a small number of spectral bands with good image quality).

Data fusion results, from Fig. 3 in the manuscript. Here you can see that the initial single-pixel datasets have low spatial resolution, but the DF results have high spatial resolution AND both spectral and temporal resolution.

However, there are cool ideas that can help a lot. In our latest paper, we show how, by borrowing ideas from remote sensing and autonomous driving, you can obtain high-resolution, multispectral, time-resolved images of fluorescent objects in a simple and effective manner. We use a single-pixel imaging system to build two single-pixel cameras: one that measures multispectral images, and another that obtains time-resolved measurements (in the ps range). In addition, we use a conventional pixelated detector to obtain a high spatial resolution image (with no temporal or spectral resolution). The key point is that we have multiple systems working in parallel, each one doing its best at one specific dimension. For example, the single-pixel spectral camera obtains a 3D image (x, y, λ) with very good spectral resolution but very low spatial resolution. The pixelated detector, on the other hand, acquires a high spatial resolution image that is neither spectrally nor temporally resolved. After obtaining the different datasets, DF merges all the information into a final multidimensional image where every dimension has been sampled at high resolution (so our final 4D object has high spatial, temporal, and spectral resolution).
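If you want to play with the idea, here is a minimal sketch of a fusion step in Python/NumPy. To keep it short it assumes each pixel's 4D signal factorizes into a spatial weight, a spectral signature, and a temporal signature, which is a naive stand-in for the regularized inversion described in the paper; all array sizes are made up:

```python
import numpy as np
from scipy.ndimage import zoom

def fuse(hr_img, spec_cube, time_cube):
    """Naive separable fusion (illustrative, NOT the paper's algorithm).

    hr_img:    (X, Y)    high-res intensity image
    spec_cube: (x, y, L) low-res multispectral cube
    time_cube: (x, y, T) low-res time-resolved cube
    """
    X, Y = hr_img.shape
    # Upsample the low-res cubes to the high-res spatial grid.
    s = zoom(spec_cube, (X / spec_cube.shape[0], Y / spec_cube.shape[1], 1), order=1)
    t = zoom(time_cube, (X / time_cube.shape[0], Y / time_cube.shape[1], 1), order=1)
    # Normalize so each pixel carries a spectral/temporal signature.
    s /= s.sum(axis=2, keepdims=True) + 1e-12
    t /= t.sum(axis=2, keepdims=True) + 1e-12
    # Per-pixel outer product -> (X, Y, L, T) hypercube.
    return hr_img[:, :, None, None] * s[:, :, :, None] * t[:, :, None, :]

cube = fuse(np.random.rand(256, 256),      # camera image
            np.random.rand(32, 32, 16),    # single-pixel spectral data
            np.random.rand(32, 32, 64))    # single-pixel temporal data
print(cube.shape)  # (256, 256, 16, 64)
```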

So, what about the compression? The cool thing here is that we only acquire three different datasets: the high-resolution picture from the camera, and the multispectral and time-resolved images from the two single-pixel cameras. However, after the reconstruction we obtain a full 4D dataset that amounts to about one gigavoxel. If you compare the number of voxels we measure against the number of voxels we retrieve, the compression ratio is higher than 99.9% (which is quite big if you ask me).
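For a back-of-the-envelope check of that number (the sizes below are purely illustrative, not the exact acquisition parameters of the paper):

```python
# Voxels we actually measure: one high-res image plus two low-res
# single-pixel cubes (illustrative sizes).
measured = 512 * 512 + 32 * 32 * 64 + 32 * 32 * 64    # ~3.9e5

# Voxels we retrieve: the full 4D hypercube (~1 gigavoxel).
retrieved = 512 * 512 * 64 * 64                       # ~1.1e9

print(f"compression: {1 - measured / retrieved:.4%}")  # > 99.9%
```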

As a demonstration of the technique, we show the time-resolved fluorescence decay of a simple scene with three different fluorophores (one for each of the letters you see in the following figures), where the species are excited and the whole fluorescence process takes place in less than 25 ns (woah!). You can see the live reconstruction here, and a short talk I gave a while ago is linked after the info of the paper, where you can find all the details about the system, the reconstruction algorithm, and so on.

Giga-voxel multidimensional fluorescence imaging combining single-pixel detection and data fusion

F. Soldevila, A. J. M. Lenz, A. Ghezzi, A. Farina, C. D’Andrea, and E. Tajahuerce, in Optics Letters (and the arXiv version)

Abstract: Time-resolved fluorescence imaging is a key tool in biomedical applications, as it allows to non-invasively obtain functional and structural information. However, the big amount of collected data introduces challenges in both acquisition speed and processing needs. Here, we introduce a novel technique that allows to acquire a giga-voxel 4D hypercube in a fast manner while measuring only 0.03% of the dataset. The system combines two single-pixel cameras and a conventional 2D array detector working in parallel. Data fusion techniques are introduced to combine the individual 2D and 3D projections acquired by each sensor in the final high-resolution 4D hypercube, which can be used to identify different fluorophore species by their spectral and temporal signatures.

Handling negative patterns for fast single-pixel lifetime imaging

A group of researchers working in France and the USA, led by N. Ducros, has uploaded an interesting paper this week.

When doing single-pixel imaging, one of the most important choices you face is the kind of structured patterns (functions) you are going to use. This choice is quite relevant because it is strongly connected to the acquisition speed you can achieve (the total number of measurements needed to obtain good images strongly depends on the set of functions you choose). Usually, the go-to solution for single-pixel cameras is to choose either random functions or a family of orthogonal functions (Fourier, DCT, Hadamard, etc.).

The problem with random functions is that they are not orthogonal (any two random patterns look pretty much alike, so their measurements carry redundant information), which means you usually need to project a large number of them (and that is time consuming). Orthogonal functions belonging to a basis are a better choice, because you can send the full basis and get “perfect” quality (i.e., without losing information to undersampling). However, these functions usually take both positive and negative values, which is something you cannot directly implement in many Spatial Light Modulators (for example, in Digital Micromirror Devices). There are multiple workarounds. The most common one is to project two closely related non-negative patterns sequentially on the SLM to synthesize each signed function. This solves the negative-positive problem, but doubles the time it takes to obtain an image.
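Here is a minimal sketch of that splitting approach for a ±1 Hadamard basis, simulated in NumPy (a noiseless toy where the photodetector is replaced by a dot product):

```python
import numpy as np
from scipy.linalg import hadamard

# The usual "splitting" workaround for a DMD: each signed pattern is
# projected as two non-negative patterns, and the two photodetector
# readings are subtracted. Twice the measurements: N -> 2N.
N = 64                        # patterns are 8x8 images, flattened
H = hadamard(N)               # rows are +1/-1 Hadamard patterns
scene = np.random.rand(N)     # flattened object (hypothetical)

H_pos = np.clip(H, 0, None)   # keep the +1 entries
H_neg = np.clip(-H, 0, None)  # keep the -1 entries (made positive)
m = H_pos @ scene - H_neg @ scene   # 2N projections -> N signed coefficients

recovered = H.T @ m / N       # inverse Hadamard transform
print(np.allclose(recovered, scene))  # True
```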

What Lorente-Mur et al. show in this paper is a method to generate a new family of positive-only patterns, derived from the original positive-negative family. This makes it possible to obtain images with fewer measurements than the dual/splitting approach I mentioned earlier, but still with high quality. A nice way to tackle one of the most limiting factors of single-pixel architectures.
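As a point of comparison, one *simple* instance of the positive-only idea (not necessarily the construction of Lorente-Mur et al., who derive a whole new pattern family) is to shift the ±1 patterns into {0, 1} and undo the offset with a single extra flat measurement, so you need N + 1 patterns instead of 2N:

```python
import numpy as np
from scipy.linalg import hadamard

N = 64
H = hadamard(N)                   # +1/-1 patterns
scene = np.random.rand(N)

H01 = (H + 1) / 2                 # positive-only patterns in {0, 1}
m01 = H01 @ scene                 # N measurements
m_ones = np.ones(N) @ scene       # 1 extra measurement with a flat pattern
m = 2 * m01 - m_ones              # recover the signed Hadamard projections

recovered = H.T @ m / N
print(np.allclose(recovered, scene))  # True -> N+1 patterns instead of 2N
```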

Working principle visualization of the generalization method to measure with positive-only patterns in single-pixel imaging setups. Figure extracted from Lorente-Mur et al., “Handling negative patterns for fast single-pixel lifetime imaging,” at https://hal.archives-ouvertes.fr/hal-02017598

Handling negative patterns for fast single-pixel lifetime imaging

by Antonio Lorente Mur et al., at https://hal.archives-ouvertes.fr/hal-02017598

Abstract:

Pattern generalization was proposed recently as an avenue to increase the acquisition speed of single-pixel imaging setups. This approach consists of designing some positive patterns that reproduce the target patterns with negative values through linear combinations. This avoids the typical burden of acquiring the positive and negative parts of each of the target patterns, which doubles the acquisition time. In this study, we consider the generalization of the Daubechies wavelet patterns and compare images reconstructed using our approach and using the regular splitting approach. Overall, the reduction in the number of illumination patterns should facilitate the implementation of compressive hyperspectral lifetime imaging for fluorescence-guided surgery.

Wavefront correction in two-photon microscopy with a multi-actuator adaptive lens

The group led by P. Artal at Murcia University has recently published an interesting paper on adaptive optics using an adaptive lens. In a real scenario, imperfections in the optical elements of your system, or in the very objects you want to image, introduce optical aberrations in the pictures you obtain. Usually these aberrations only reduce the quality of your images a bit (introducing some defocus or astigmatism), but in the worst-case scenario they can render the results completely useless.

To overcome this problem, liquid crystal spatial light modulators or deformable mirrors are usually inserted in optical systems to apply phase corrections to the light going through them, countering the phase of these aberrations and thus restoring image quality. However, these devices present several problems. Even though both spatial light modulators and deformable mirrors can correct the aberrations I mentioned earlier, they work in a reflection configuration, which adds complexity to the optical system. Also, liquid crystal spatial light modulators are sensitive to polarization, usually have low reflectance, and tend to be slow.

To tackle those obstacles, the authors used an adaptive lens in a two-photon microscope to perform the adaptive optics procedure. Adaptive lenses are increasingly being used for aberration correction. In contrast to both spatial light modulators and deformable mirrors, they work in transmission and present very low losses. Moreover, they can introduce low- and mid-order aberrations at refresh rates of almost 1 kHz. The working principle can be seen in this figure:


Schematics of the working principle of an adaptive lens. The lens is formed by two thin glass layers with a liquid in between. Each actuator is driven by an electrical signal that deforms the glass windows, generating different shapes and changing the phase of the wavefront passing through the lens. Figure extracted from Stefano Bonora et al., “Wavefront correction and high-resolution in vivo OCT imaging with an objective integrated multi-actuator adaptive lens,” Opt. Express 23, 21931-21941 (2015)

In the paper, they show how this device can achieve results comparable to the traditional spatial light modulator approach in a multi-photon microscope, with the benefits mentioned above.
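The abstract below mentions a hill-climbing procedure. As a rough, hypothetical illustration of what such a loop looks like (`apply_correction` and `grab_mp_image` stand in for the lens driver and the microscope frame grab; they are not from the paper):

```python
import numpy as np

def sharpness(img):
    # A common two-photon image-quality metric: total squared intensity.
    return np.sum(img.astype(float) ** 2)

def hill_climb(apply_correction, grab_mp_image, n_modes=8, step=0.05, n_iters=20):
    """Greedy coordinate search over Zernike coefficients (illustrative)."""
    coeffs = np.zeros(n_modes)          # correction sent to the adaptive lens
    apply_correction(coeffs)
    best = sharpness(grab_mp_image())
    for _ in range(n_iters):
        for k in range(n_modes):
            for delta in (+step, -step):
                trial = coeffs.copy()
                trial[k] += delta
                apply_correction(trial)
                score = sharpness(grab_mp_image())
                if score > best:        # keep the perturbation only if it helps
                    coeffs, best = trial, score
                else:
                    apply_correction(coeffs)   # revert the lens
    return coeffs
```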

Wavefront correction in two-photon microscopy with a multi-actuator adaptive lens

by Juan M. Bueno et al., at Optics Express

Abstract:

A multi-actuator adaptive lens (AL) was incorporated into a multi-photon (MP) microscope to improve the quality of images of thick samples. Through a hill-climbing procedure the AL corrected for the specimen-induced aberrations enhancing MP images. The final images hardly differed when two different metrics were used, although the sets of Zernike coefficients were not identical. The optimized MP images acquired with the AL were also compared with those obtained with a liquid-crystal-on-silicon spatial light modulator. Results have shown that both devices lead to similar images, which corroborates the usefulness of this AL for MP imaging.


Experimental results showing the improvement in the images obtained with the adaptive lens system. Figure 3 from the paper: Juan M. Bueno et al., “Wavefront correction in two-photon microscopy with a multi-actuator adaptive lens,” Opt. Express 26, 14278-14287 (2018)


Weekly recap (29/04/2018)

This week we have a lot of interesting stuff:

Observing the cell in its native state: Imaging subcellular dynamics in multicellular organisms

Adaptive Optics + Light Sheet Microscopy to see living cells inside the body of a zebrafish (the favorite fish of biologists!). Really impressive images, overcoming the scattering caused by tissue. You can read more about the paper at Nature and/or the Howard Hughes Medical Institute.

 


The Feynman Lectures on Physics online

I just read on OpenCulture that The Feynman Lectures on Physics have been made available online. Until now, only the first volume was available, but now you can also find volumes 2 and 3. Time to reread the classics…


Imaging Without Lenses

An interesting piece appeared this week in American Scientist covering some aspects of the coming symbiosis between optics, computation, and electronics. We are already able to surpass the optical resolution limit, obtain phase information, and even image without traditional optical elements such as lenses. What’s coming next?


All-Optical Machine Learning Using Diffractive Deep Neural Networks

A very nice paper appeared on arXiv this week.

Xing Lin, Yair Rivenson, Nezih T. Yardimci, Muhammed Veli, Mona Jarrahi, Aydogan Ozcan

We introduce an all-optical Diffractive Deep Neural Network (D2NN) architecture that can learn to implement various functions after deep learning-based design of passive diffractive layers that work collectively. We experimentally demonstrated the success of this framework by creating 3D-printed D2NNs that learned to implement handwritten digit classification and the function of an imaging lens at terahertz spectrum. With the existing plethora of 3D-printing and other lithographic fabrication methods as well as spatial-light-modulators, this all-optical deep learning framework can perform, at the speed of light, various complex functions that computer-based neural networks can implement, and will find applications in all-optical image analysis, feature detection and object classification, also enabling new camera designs and optical components that can learn to perform unique tasks using D2NNs.

Imagine if Fourier transforms had been discovered before lenses, and then one day someone came up with just a piece of glass and said, “this can compute FTs at the speed of light”. A very cool read.


OPEN SPIN MICROSCOPY

I just stumbled upon this project while reading Lab on the Cheap. It seems like a very good resource if you plan to build a light-sheet microscope and do not want to spend $$$$ on Thorlabs.


Artificial Intelligence kits from Google, updated edition

Last year, AIY Projects launched to give makers the power to build AI into their projects with two do-it-yourself kits. We’re seeing continued demand for the kits, especially from the STEM audience where parents and teachers alike have found the products to be great tools for the classroom. The changing nature of work in the future means students may have jobs that haven’t yet been imagined, and we know that computer science skills, like analytical thinking and creative problem solving, will be crucial.

We’re taking the first of many steps to help educators integrate AIY into STEM lesson plans and help prepare students for the challenges of the future by launching a new version of our AIY kits. The Voice Kit lets you build a voice controlled speaker, while the Vision Kit lets you build a camera that learns to recognize people and objects (check it out here). The new kits make getting started a little easier with clearer instructions, a new app and all the parts in one box.

To make setup easier, both kits have been redesigned to work with the new Raspberry Pi Zero WH, which comes included in the box, along with the USB connector cable and pre-provisioned SD card. Now users no longer need to download the software image and can get running faster. The updated AIY Vision Kit v1.1 also includes the Raspberry Pi Camera v2.

Looking forward to seeing the price tag and the date they become available.

The week in papers (22/04/18)

As a way to keep posts going, I am starting a short recap of interesting papers being published (or discovered) every now and then. I will probably write longer posts about some of them in the future.

Let’s get this thing going:

Two papers using ‘centroid estimation’ to retrieve interesting information:

Extract voice information using high-speed camera

Mariko Akutsu, Yasuhiro Oikawa, and Yoshio Yamasaki, at The Journal of the Acoustical Society of America

Kilohertz binary phase modulator for pulsed laser sources using a digital micromirror device

People at Judkewitz lab tend to do really cool stuff. This time they have implemented a binary phase modulator using a DMD.

Kilohertz binary phase modulator for pulsed laser sources using a digital micromirror device

M. Hoffmann et al., at Optics Letters

Abstract:

The controlled modulation of an optical wavefront is required for aberration correction, digital phase conjugation, or patterned photostimulation. For most of these applications, it is desirable to control the wavefront modulation at the highest rates possible. The digital micromirror device (DMD) presents a cost-effective solution to achieve high-speed modulation and often exceeds the speed of the more conventional liquid crystal spatial light modulator but is inherently an amplitude modulator. Furthermore, spatial dispersion caused by DMD diffraction complicates its use with pulsed laser sources, such as those used in nonlinear microscopy. Here we introduce a DMD-based optical design that overcomes these limitations and achieves dispersion-free high-speed binary phase modulation. We show that this phase modulation can be used to switch through binary phase patterns at the rate of 20 kHz in two-photon excitation fluorescence applications.

Controlling phase is of paramount interest in multiple optical scenarios. Doing it fast is very difficult, given that the spatial light modulators that are really good at modulating phase precisely tend to be slow (~hundreds of Hz). On the other hand, intensity modulators such as DMDs are very fast (~20 kHz), but they cannot directly modulate phase. There have been several workarounds built on the general idea of using DMDs to modulate phase. I remember a very nice paper from A. Mosk’s group, which used groups of mirrors to encode the phase of a superpixel.

Here, they use the fact that a DMD reflects light in two different directions, and introduce a phase shift with a moving mirror in one of the reflection arms, achieving binary phase distributions at kHz refresh rates.
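As a toy illustration of why binary (0/π) phase control is already useful, here is a NumPy simulation of the far field of a pupil with and without a sign flip over half the aperture (all numbers illustrative, unrelated to the paper's setup):

```python
import numpy as np

# The far field of a pupil is (up to scaling) its Fourier transform, so
# flipping the sign of half the pupil reshapes the focal spot.
N = 256
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
pupil = (X**2 + Y**2 < 1).astype(complex)    # circular aperture

binary_mask = np.where(X > 0, 1.0, -1.0)     # 0 or pi phase -> field of +1 or -1
focus_flat   = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
focus_binary = np.abs(np.fft.fftshift(np.fft.fft2(pupil * binary_mask)))**2
print(focus_flat.max() / focus_binary.max())  # the sign flip splits/reshapes the focus
```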


It seems like we are getting closer and closer to a high-efficiency method to modulate phase with DMDs.