Giga-voxel multidimensional fluorescence imaging combining single-pixel detection and data fusion

Data fusion concept. From Fig.1 in the manuscript. Do you want a 4D reconstruction? Just take several 2D/3D objects and merge them in a clever way.

Some time ago I wrote a short post about using Data Fusion (DF) to perform a kind of Compressive Sensing (CS). We came up with that idea while tackling a common problem in multidimensional imaging systems: the more you want to measure, the harder it gets. It is not only that you need a system sensitive to many different physical parameters (wavelength, time, polarization, etc.), but also that you end up with huge datasets to record and store. If you try to measure a scene with high spatial resolution, in tens or hundreds of spectral channels, and at video frame rates (say 30 or 60 frames per second), you generate gigabytes of data every second. This will burn through your hard drives in no time, and if you want to send your data to a different lab/computer for analysis, you will wait ages for the transmission to finish.
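To get a feeling for those numbers, here is a back-of-the-envelope data-rate calculation. All the figures (resolution, channel count, bit depth) are illustrative assumptions of mine, not values from the paper:

```python
# Back-of-the-envelope data rate for a hypothetical multidimensional camera.
# Every number here is an illustrative assumption, not from the paper.
width, height = 1024, 1024        # spatial pixels
spectral_channels = 100           # wavelength bands
bytes_per_sample = 2              # 16-bit depth
fps = 30                          # video frame rate

bytes_per_second = width * height * spectral_channels * bytes_per_sample * fps
print(f"{bytes_per_second / 1e9:.1f} GB/s")  # ~6.3 GB/s
```

Even with these modest assumptions, a single second of acquisition already exceeds the size of a typical movie file.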

While many techniques have tried to solve these problems, there is no truly perfect solution (and, in my honest opinion, there cannot be a single solution covering all the problems that different systems face) that lets you obtain super high quality pictures in many different dimensions. You always need to live with some tradeoffs (for example, low spatial resolution but high frame rate, or a low number of spectral bands with good image quality).

Data fusion results, from Fig.3 in the manuscript. Here you can see that the initial single-pixel datasets have low spatial resolution, but the DF results have high spatial resolution AND both spectral and temporal resolution.

However, there are cool ideas that can help a lot. In our latest paper, we show how, by borrowing ideas from remote sensing and/or autonomous driving, you can obtain high-resolution, multispectral, time-resolved images of fluorescent objects in a simple and effective manner. We use a single-pixel imaging system to build two single-pixel cameras: one that measures multispectral images, and another that obtains time-resolved measurements (in the ps range). We also use a conventional pixelated detector to obtain a high spatial resolution image (with no temporal or spectral resolution). The key point here is that we have multiple systems working in parallel, each one doing its best to capture one specific dimension. For example, the single-pixel spectral camera obtains a 3D image (x, y, lambda) with very good spectral resolution, but very low spatial resolution. On the other hand, the pixelated detector acquires a high spatial resolution image that is neither spectrally nor temporally resolved. After obtaining the different datasets, DF allows you to merge all the information into a final multidimensional image where every dimension has been sampled at high resolution (so our final 4D object has high spatial, temporal, and spectral resolution).
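To make the merging idea concrete, here is a toy pansharpening-style fusion of a low-resolution spectral cube with a high-resolution 2D image. This is only a heuristic sketch of the concept, not the regularized reconstruction algorithm used in the paper, and all array sizes are made up:

```python
import numpy as np

# Toy data-fusion sketch: merge a low-resolution spectral cube with a
# high-resolution panchromatic image (a pansharpening-style heuristic,
# NOT the reconstruction algorithm used in the paper).
rng = np.random.default_rng(0)

hi = rng.random((64, 64))          # high-res 2D image (x, y)
lo = rng.random((8, 8, 16))        # low-res spectral cube (x, y, lambda)

# Upsample the spectral cube to the high spatial resolution (nearest neighbour).
up = lo.repeat(8, axis=0).repeat(8, axis=1)      # (64, 64, 16)

# Redistribute the spectral energy according to the high-res intensity:
# scale each spectrum so its panchromatic sum matches the camera image.
pan = up.sum(axis=2)                              # synthetic panchromatic image
fused = up * (hi / (pan + 1e-12))[..., None]      # fused cube (64, 64, 16)

# The fused cube sums (over lambda) to the high-res camera image.
assert fused.shape == (64, 64, 16)
assert np.allclose(fused.sum(axis=2), hi)
```

The real method solves an inverse problem that enforces consistency with all three measured datasets at once, but the spirit is the same: each dataset constrains the dimensions it sampled well.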

So, what about the compression? The cool thing here is that we only acquire three different datasets: the high-resolution picture from the camera, and the two multispectral/time-resolved images from the single-pixel cameras. However, after the reconstruction we obtain a full 4D dataset that amounts to about 1 gigavoxel. In the end, if you compare the number of voxels we measure versus the number of voxels we retrieve, we achieve a compression ratio higher than 99.9% (which is quite big if you ask me).
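The arithmetic behind that ratio is easy to reproduce. The dataset sizes below are illustrative stand-ins chosen by me (the exact dimensions in the paper differ), but they show how a few small acquisitions compare against a gigavoxel reconstruction:

```python
# Compression-ratio arithmetic with illustrative dataset sizes
# (the exact dimensions used in the paper differ).
nx = ny = 1024           # final spatial resolution
n_lambda = 64            # spectral channels
n_t = 16                 # time bins
retrieved = nx * ny * n_lambda * n_t          # ~1 gigavoxel reconstructed

camera = nx * ny                              # high-res 2D camera image
sp_spectral = 32 * 32 * n_lambda              # low-res (x, y, lambda) cube
sp_temporal = 32 * 32 * n_t                   # low-res (x, y, t) cube
measured = camera + sp_spectral + sp_temporal # voxels actually acquired

ratio = 100 * (1 - measured / retrieved)
print(f"retrieved: {retrieved/1e9:.2f} Gvoxel, compression: {ratio:.2f}%")
```

With these numbers the measured data is roughly a thousandth of the reconstructed hypercube, in line with the ~0.03% sampling reported in the abstract once the paper's actual dimensions are plugged in.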

As a demonstration of the technique, we show the time-resolved fluorescence decay of a simple scene with three different fluorophores (each one of the letters you see in the following figures), where the species are excited and the fluorescence process takes place in less than 25 ns (woah!). You can see the live reconstruction here, and also a short talk I gave a while ago about the paper, where you can find all the details about the system, the reconstruction algorithm, and so on.

Giga-voxel multidimensional fluorescence imaging combining single-pixel detection and data fusion

F. Soldevila, A. J. M. Lenz, A. Ghezzi, A. Farina, C. D’Andrea, and E. Tajahuerce, in Optics Letters (and the arXiv version)

Abstract: Time-resolved fluorescence imaging is a key tool in biomedical applications, as it allows to non-invasively obtain functional and structural information. However, the big amount of collected data introduces challenges in both acquisition speed and processing needs. Here, we introduce a novel technique that allows to acquire a giga-voxel 4D hypercube in a fast manner while measuring only 0.03% of the dataset. The system combines two single-pixel cameras and a conventional 2D array detector working in parallel. Data fusion techniques are introduced to combine the individual 2D and 3D projections acquired by each sensor in the final high-resolution 4D hypercube, which can be used to identify different fluorophore species by their spectral and temporal signatures.

Single pixel hyperspectral bioluminescence tomography based on compressive sensing

Really cool implementation of Single-pixel Imaging + Compressive Sensing from the people at University of Birmingham.

Using hyperspectral data measured with a single-pixel spectrometer + tomographic reconstruction, they show that it is possible to perform Bioluminescence Imaging.

Nice to see that the topics I used to work on keep producing super cool results.

Single pixel hyperspectral bioluminescence tomography based on compressive sensing

By Alexander Bentley, Jonathan E. Rowe, and Hamid Dehghani, at Biomedical Optics Express


Photonics based imaging is a widely utilised technique for the study of biological functions within pre-clinical studies. Specifically, bioluminescence imaging is a sensitive non-invasive and non-contact optical imaging technique that is able to detect distributed (biologically informative) visible and near-infrared activated light sources within tissue, providing information about tissue function. Compressive sensing (CS) is a method of signal processing that works on the basis that a signal or image can be compressed without important information being lost. This work describes the development of a CS based hyperspectral Bioluminescence imaging system that is used to collect compressed fluence data from the external surface of an animal model, due to an internal source, providing lower acquisition times, higher spectral content and potentially better tomographic source localisation. The work demonstrates that hyperspectral surface fluence images of both block and mouse shaped phantom due to internal light sources could be obtained at 30% of the time and measurements it would take to collect the data using conventional raster scanning methods. Using hyperspectral data, tomographic reconstruction of internal light sources can be carried out using any desired number of wavelengths and spectral bandwidth. Reconstructed images of internal light sources using four wavelengths as obtained through CS are presented showing a localisation error of ∼3 mm. Additionally, tomographic images of dual-colored sources demonstrating multi-wavelength light sources being recovered are presented further highlighting the benefits of the hyperspectral system for utilising multi-colored biomarker applications.

Compressive optical imaging with a photonic lantern

New single-pixel camera design, but this time using multicore fibers (MCF) and a photonic lantern instead of a spatial light modulator. Cool!

The fundamental idea is to excite one of the cores of a MCF. Light then propagates through the fiber, and a photonic lantern at the tip generates a random-like light pattern at the output. Exciting different cores of the MCF generates different light patterns at the end of the fiber, which can be used to obtain images using the single-pixel imaging formalism.

There is more cool stuff in the paper, for example the Compressive Sensing algorithm the authors use, which includes positivity constraints. This is quite relevant if you want to get high quality images, because of the reduced number of cores present in the MCF (remember, 1 core = 1 pattern, and the number of patterns determines the spatial resolution of the image in a single-pixel camera). It is also nice that the authors have made their code available here.
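The single-pixel formalism with a positivity constraint can be sketched in a few lines. This toy uses plain nonnegative least squares instead of the authors' SARA-COIL algorithm, and all sizes (8x8 image, 48 "cores") are made-up illustration values:

```python
import numpy as np
from scipy.optimize import nnls

# Single-pixel imaging sketch: each fibre core yields one fixed random-like
# pattern; the detector records one inner product per pattern. With fewer
# patterns than pixels, a nonnegativity constraint helps the reconstruction.
# (This NNLS demo stands in for the paper's SARA-COIL algorithm.)
rng = np.random.default_rng(1)
n_pix, n_patterns = 64, 48               # 8x8 image, undersampled

A = rng.random((n_patterns, n_pix))      # one row per speckle pattern
x_true = np.zeros(n_pix)
x_true[[10, 11, 20]] = 1.0               # sparse, nonnegative object
y = A @ x_true                           # single-pixel measurements

x_rec, _ = nnls(A, y)                    # nonnegative least squares
print(np.linalg.norm(A @ x_rec - y))     # residual of the reconstruction
```

Even though the system is underdetermined, the nonnegativity constraint strongly restricts the solution set, which is exactly why positivity priors matter when the pattern count (core count) is small.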

Some experimental/simulation results (nice Smash logo there!). Extracted from Debaditya Choudhury et al., “Compressive optical imaging with a photonic lantern.”

Compressive optical imaging with a photonic lantern

by Debaditya Choudhury et al., on arXiv


The thin and flexible nature of optical fibres often makes them the ideal technology to view biological processes in-vivo, but current microendoscopic approaches are limited in spatial resolution. Here, we demonstrate a new route to high resolution microendoscopy using a multicore fibre (MCF) with an adiabatic multimode-to-singlemode photonic lantern transition formed at the distal end by tapering. We show that distinct multimode patterns of light can be projected from the output of the lantern by individually exciting the single-mode MCF cores, and that these patterns are highly stable to fibre movement. This capability is then exploited to demonstrate a form of single-pixel imaging, where a single pixel detector is used to detect the fraction of light transmitted through the object for each multimode pattern. A custom compressive imaging algorithm we call SARA-COIL is used to reconstruct the object using only the pre-measured multimode patterns themselves and the detector signals.

Single-pixel imaging with sampling distributed over simplex vertices

Last week I posted a recently uploaded paper on using positive-only patterns in a single-pixel imaging system.

Today I just found another implementation pursuing the same objective. This time the authors (from the University of Warsaw, led by Rafał Kotyński) introduce the idea of simplexes, or how any point in an N-dimensional space can be located using only positive coordinates if you choose the correct coordinate system. Cool concept!
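The geometric idea is easy to demo with barycentric coordinates: any point inside a regular simplex is a combination of its vertices with nonnegative weights only. This little example is pure geometry, not the paper's sampling scheme:

```python
import numpy as np

# Barycentric-coordinate demo of the simplex idea: any point inside a
# regular simplex is a nonnegative combination of its vertices.
# (The paper applies this to sampling patterns; here it is just geometry.)
V = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.5, np.sqrt(3) / 2]])     # triangle = regular 2D simplex

p = np.array([0.5, 0.3])                  # a point inside the triangle

# Solve for weights w with sum(w) = 1 such that V.T @ w = p.
M = np.vstack([V.T, np.ones(3)])          # 3x3 system: 2 coords + 1 constraint
w = np.linalg.solve(M, np.append(p, 1.0))

print(w)                      # all entries nonnegative for interior points
```

Replace the triangle's vertices with sampling patterns and the weights with detector measurements, and you get the intuition behind displaying signed patterns using only positive (intensity) coordinates.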

Fig.1 extracted from “Single-pixel imaging with sampling distributed over simplex vertices,”
Krzysztof M. Czajkowski, Anna Pastuszczak, and Rafał Kotyński, Opt. Lett. 44, 1241-1244 (2019)

Single-pixel imaging with sampling distributed over simplex vertices

by Krzysztof M. Czajkowski et al., in Optics Letters


We propose a method of reduction of experimental noise in single-pixel imaging by expressing the subsets of sampling patterns as linear combinations of vertices of a multidimensional regular simplex. This method also may be directly extended to complementary sampling. The modified measurement matrix contains nonnegative elements with patterns that may be directly displayed on intensity spatial light modulators. The measurement becomes theoretically independent of the ambient illumination, and in practice becomes more robust to the varying conditions of the experiment. We show how the optimal dimension of the simplex depends on the level of measurement noise. We present experimental results of single-pixel imaging using binarized sampling and real-time reconstruction with the Fourier domain regularized inversion method.

Handling negative patterns for fast single-pixel lifetime imaging

A group of researchers working in France and the USA, led by N. Ducros, has uploaded an interesting paper this week.

When doing single-pixel imaging, one of the most important aspects you need to take into account is the kind of structured patterns (functions) you are going to use. This is quite relevant because it is strongly connected with the acquisition speed you can achieve (the total number of measurements needed to obtain good images strongly depends on the set of functions you choose). Usually, the go-to solution for single-pixel cameras is to choose either random functions or a set (family) of orthogonal functions (Fourier, DCT, Hadamard, etc.).

The problem with random functions is that they are not orthogonal (it is very hard to distinguish between two different random functions; they all look similar), so you usually need to project a high number of them (which is time consuming). Orthogonal functions that belong to a basis are a better choice, because you can send the full basis to get “perfect” quality (i.e., without losing information due to undersampling). However, these functions usually have positive and negative values, which is something you cannot directly implement in many Spatial Light Modulators (for example, in Digital Micromirror Devices). If you want to implement these patterns, there are multiple workarounds. The most common one is to sequentially implement two closely related positive patterns on the SLM to generate one signed function. This solves the negative-positive problem, but doubles the time it takes to obtain an image.
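That splitting workaround is simple enough to show in code. The sketch below uses a random ±1 pattern and a made-up 16-pixel scene, purely for illustration:

```python
import numpy as np

# The usual "splitting" workaround: a +/-1 pattern cannot be displayed
# on a DMD, so it is split into two binary patterns and the two detector
# readings are subtracted. Scene and pattern sizes are illustrative.
rng = np.random.default_rng(2)

x = rng.random(16)                         # scene (flattened to a vector)
p = np.where(rng.random(16) > 0.5, 1, -1)  # pattern with negative values

p_pos = np.clip(p, 0, None)                # keeps the +1 entries
p_neg = np.clip(-p, 0, None)               # keeps the -1 entries (as +1)

y_pos = p_pos @ x                          # first DMD measurement
y_neg = p_neg @ x                          # second DMD measurement

# The subtraction reproduces the ideal signed measurement,
# at the cost of two exposures per pattern.
assert np.isclose(y_pos - y_neg, p @ x)
```

It is exactly this factor-of-two penalty that the pattern-generalization approach below avoids, by designing positive-only patterns whose linear combinations reproduce the signed ones.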

What Lorente-Mur et al. show in this paper is a method to generate a new family of positive-only patterns, derived from the original positive-negative family. This makes it possible to obtain images with a reduced number of measurements compared to the dual or splitting approach I mentioned earlier, but still with high quality. A nice way to tackle one of the most limiting factors of single-pixel architectures.

Working principle of the generalization method to measure with positive-only patterns in single-pixel imaging setups. Figure extracted from Lorente-Mur et al., “Handling negative patterns for fast single-pixel lifetime imaging.”

Handling negative patterns for fast single-pixel lifetime imaging

by Antonio Lorente Mur et al.


Pattern generalization was proposed recently as an avenue to increase the acquisition speed of single-pixel imaging setups. This approach consists of designing some positive patterns that reproduce the target patterns with negative values through linear combinations. This avoids the typical burden of acquiring the positive and negative parts of each of the target patterns, which doubles the acquisition time. In this study, we consider the generalization of the Daubechies wavelet patterns and compare images reconstructed using our approach and using the regular splitting approach. Overall, the reduction in the number of illumination patterns should facilitate the implementation of compressive hyperspectral lifetime imaging for fluorescence-guided surgery.

The week in papers (22/04/18)

As a way to keep posts coming, I am starting a short recap of interesting papers being published (or discovered) every now and then. I will probably write longer posts about some of them in the future.

Let’s get this thing going:

Two papers using ‘centroid estimation’ to retrieve interesting information:

Extract voice information using high-speed camera

Mariko Akutsu, Yasuhiro Oikawa, and Yoshio Yamasaki, at The Journal of the Acoustical Society of America