Giga-voxel multidimensional fluorescence imaging combining single-pixel detection and data fusion

Data fusion concept. From Fig.1 in the manuscript. Do you want a 4D reconstruction? Just take several 2D/3D objects and merge them in a clever way.

Some time ago I wrote a short post about using Data Fusion (DF) to perform some kind of Compressive Sensing (CS). We came up with that idea when tackling a common problem in multidimensional imaging systems: the more you want to measure, the harder it gets. It is not only that you need a system sensitive to many different physical parameters (wavelength, time, polarization, etc.), but also that you end up with huge datasets that you need to record and store. If you try to measure a scene with high spatial resolution, in tens or hundreds of spectral channels, and at video frame rates (let’s say 30 or 60 frames per second), you generate gigabytes of data every second. This will burn through your hard drives in no time, and if you want to send your data to a different lab/computer for analysis, you will wait ages for the transmission to end.
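To put a rough number on that, here is a back-of-the-envelope estimate in Python (the resolution, channel count, frame rate, and bit depth are illustrative assumptions, not the parameters of any particular system):

```python
# Rough data-rate estimate for a multidimensional imager.
# All values below are illustrative assumptions.
pixels = 1024 * 1024        # spatial resolution
spectral_channels = 100     # wavelength bands
frame_rate = 30             # frames per second
bytes_per_sample = 2        # 16-bit depth

bytes_per_second = pixels * spectral_channels * frame_rate * bytes_per_sample
print(f"{bytes_per_second / 1e9:.1f} GB/s")  # ~6.3 GB/s
```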

While many techniques have tried to solve these problems, there is no perfect solution (and, in my honest opinion, there cannot be a single solution that fits every system) that lets you obtain very high quality pictures across many different dimensions. You always need to live with some tradeoffs (for example, low spatial resolution but high frame rate, or a small number of spectral bands with good image quality).

Data fusion results, from Fig.3 in the manuscript. Here you can see that the initial single-pixel datasets have low spatial resolution, but the DF results have high spatial resolution AND both spectral and temporal resolution.

However, there are cool ideas that can help a lot. In our latest paper, we show how, by borrowing ideas from remote sensing and/or autonomous driving, you can obtain high-resolution, multispectral, time-resolved images of fluorescent objects in a simple and effective manner. We use a single-pixel imaging system to build two single-pixel cameras: one that measures multispectral images, and another that obtains time-resolved measurements (in the ps range). We also use a conventional pixelated detector to obtain a high spatial resolution image (with no temporal or spectral resolution). The key point here is that we have multiple systems working in parallel, each one doing its best to capture one specific dimension. For example, the single-pixel spectral camera obtains a 3D image (x, y, lambda) with very good spectral resolution, but with very low spatial resolution. On the other hand, the pixelated detector acquires a high spatial resolution image, but with neither spectral nor temporal resolution. After obtaining the different datasets, DF allows you to merge all the information into a final multidimensional image where every dimension has been sampled at high resolution (so our final 4D object has high spatial, temporal, and spectral resolution).
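To make the geometry concrete, here is a toy sketch in NumPy of what each arm records from the same 4D object. The sizes and the simple block-averaging model are made-up stand-ins for the real single-pixel acquisition (which actually uses structured detection patterns), not the setup described in the paper:

```python
import numpy as np

# Ground-truth 4D object (x, y, lambda, t). Sizes are illustrative only.
NX, NY, NL, NT = 128, 128, 16, 16
cube = np.random.rand(NX, NY, NL, NT)

def downsample_xy(obj, factor):
    """Block-average the spatial axes (a crude stand-in for low-res single-pixel sampling)."""
    nx, ny = obj.shape[0] // factor, obj.shape[1] // factor
    return obj.reshape(nx, factor, ny, factor, *obj.shape[2:]).mean(axis=(1, 3))

# Single-pixel spectral camera: low spatial res, full spectral res, time integrated.
spectral_arm = downsample_xy(cube, 8).sum(axis=3)    # (16, 16, 16) -> (x, y, lambda)

# Single-pixel time-resolved camera: low spatial res, full temporal res, spectrally integrated.
temporal_arm = downsample_xy(cube, 8).sum(axis=2)    # (16, 16, 16) -> (x, y, t)

# Conventional pixelated camera: full spatial res, integrated over lambda and t.
camera_arm = cube.sum(axis=(2, 3))                   # (128, 128)  -> (x, y)

print(spectral_arm.shape, temporal_arm.shape, camera_arm.shape)
```

Data fusion then amounts to jointly inverting these three projections to recover the full (x, y, lambda, t) cube.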

So, what about the compression? The cool thing here is that we only acquire three different datasets: the high-resolution picture from the camera, and the two multispectral/time-resolved images from the single-pixel cameras. However, after the reconstruction we obtain a full 4D dataset that amounts to about 1 gigavoxel. In the end, if you compare the number of voxels we measure with the number of voxels we retrieve, we get a compression ratio higher than 99.9% (which is quite big if you ask me).
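The bookkeeping behind that number looks like this (the sizes below are assumptions chosen to land near the ~1 gigavoxel / 0.03% figures quoted in the abstract, not the exact experimental parameters):

```python
# Illustrative compression-ratio bookkeeping. All sizes are assumptions.
NX, NY, NL, NT = 512, 512, 64, 64             # full-resolution hypercube (x, y, lambda, t)
retrieved = NX * NY * NL * NT                 # ~1.07e9 voxels (~1 gigavoxel)

camera_2d   = NX * NY                         # high-res image, no spectral/temporal info
spectral_3d = 32 * 32 * NL                    # low-spatial-res single-pixel spectral cube
temporal_3d = 32 * 32 * NT                    # low-spatial-res single-pixel time-resolved cube
measured = camera_2d + spectral_3d + temporal_3d

print(f"measured fraction:  {100 * measured / retrieved:.3f}%")        # ~0.037%
print(f"compression ratio:  {100 * (1 - measured / retrieved):.2f}%")  # >99.9%
```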

As a sample of the technique, we show the time-resolved fluorescence decay of a simple scene with three different fluorophores (one for each of the letters you see in the following figures), where the species are excited and the whole fluorescence process takes place in less than 25 ns (woah!). You can see the live reconstruction here, and, after the paper info below, a short talk I gave a while ago with all the details about the system, the reconstruction algorithm, and so on.

Giga-voxel multidimensional fluorescence imaging combining single-pixel detection and data fusion

F. Soldevila, A. J. M. Lenz, A. Ghezzi, A. Farina, C. D’Andrea, and E. Tajahuerce, in Optics Letters (and the arXiv version)

Abstract: Time-resolved fluorescence imaging is a key tool in biomedical applications, as it allows to non-invasively obtain functional and structural information. However, the big amount of collected data introduces challenges in both acquisition speed and processing needs. Here, we introduce a novel technique that allows to acquire a giga-voxel 4D hypercube in a fast manner while measuring only 0.03% of the dataset. The system combines two single-pixel cameras and a conventional 2D array detector working in parallel. Data fusion techniques are introduced to combine the individual 2D and 3D projections acquired by each sensor in the final high-resolution 4D hypercube, which can be used to identify different fluorophore species by their spectral and temporal signatures.

Single pixel hyperspectral bioluminescence tomography based on compressive sensing

Really cool implementation of Single-Pixel Imaging + Compressive Sensing from the people at the University of Birmingham.

Using hyperspectral data measured with a single-pixel spectrometer + tomographic reconstruction, they show that it is possible to perform Bioluminescence Imaging.

Nice to see that the topics I used to work on keep producing super cool results.

Single pixel hyperspectral bioluminescence tomography based on compressive sensing

By Alexander Bentley, Jonathan E. Rowe, and Hamid Dehghani, in Biomedical Optics Express

Abstract:

Photonics based imaging is a widely utilised technique for the study of biological functions within pre-clinical studies. Specifically, bioluminescence imaging is a sensitive non-invasive and non-contact optical imaging technique that is able to detect distributed (biologically informative) visible and near-infrared activated light sources within tissue, providing information about tissue function. Compressive sensing (CS) is a method of signal processing that works on the basis that a signal or image can be compressed without important information being lost. This work describes the development of a CS based hyperspectral Bioluminescence imaging system that is used to collect compressed fluence data from the external surface of an animal model, due to an internal source, providing lower acquisition times, higher spectral content and potentially better tomographic source localisation. The work demonstrates that hyperspectral surface fluence images of both block and mouse shaped phantom due to internal light sources could be obtained at 30% of the time and measurements it would take to collect the data using conventional raster scanning methods. Using hyperspectral data, tomographic reconstruction of internal light sources can be carried out using any desired number of wavelengths and spectral bandwidth. Reconstructed images of internal light sources using four wavelengths as obtained through CS are presented showing a localisation error of ∼3 mm. Additionally, tomographic images of dual-colored sources demonstrating multi-wavelength light sources being recovered are presented further highlighting the benefits of the hyperspectral system for utilising multi-colored biomarker applications.

Data fusion as a way to perform compressive sensing

Some time ago I started working on a data fusion problem where we have access to several imaging systems working in parallel, each one gathering a different multidimensional dataset with mixed spectral, temporal, and/or spatial resolutions. The idea is to perform 4D imaging at high spectral, temporal, and spatial resolution using a set of single-pixel/multi-pixel detectors, where each detector is specialized in measuring one dimension in high detail while subsampling the others. Using ideas from regularization/compressive sensing, the goal is to merge all the individually acquired information in a way that makes sense and, while doing so, achieve very high compression ratios.

Looking for similar approaches, I stumbled upon a series of papers from people doing remote sensing that do basically the same thing. While the idea is fundamentally the same, their fusion model relies on a Bayesian approach, which is something I had never seen before and seems quite interesting. They try to estimate the 4D object that maximizes the coherence between the data they acquire (the low-resolution 2D/3D projections) and their estimate. This is quite close to what we usually do in compressive sensing imaging experiments, where the minimization is instead based on a sparsity prior.
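To make the flavour of that estimation concrete, here is a minimal sketch in NumPy of the least-squares version of the idea: find the cube whose projections agree with every measured arm, plus a regularizer. The operators and sizes are toy stand-ins, and a simple Tikhonov penalty replaces the Bayesian priors and sparsity terms used in the actual papers:

```python
import numpy as np

# Toy fusion by regularized least squares: estimate a 3D cube (x, y, lambda)
# from (a) a low-spatial-resolution spectral arm and (b) a high-resolution
# panchromatic arm. Sizes and operators are illustrative only.
NX, NY, NL, F = 64, 64, 8, 8
truth = np.random.rand(NX, NY, NL)

def A_spectral(x):
    """Spatial block-average: what the low-res spectral arm sees."""
    return x.reshape(NX // F, F, NY // F, F, NL).mean(axis=(1, 3))

def A_pan(x):
    """Spectral integration: what the high-res panchromatic arm sees."""
    return x.mean(axis=2)

y_spec, y_pan = A_spectral(truth), A_pan(truth)

# Minimize ||A_spectral(x) - y_spec||^2 + ||A_pan(x) - y_pan||^2 + tau * ||x||^2
# by plain gradient descent.
est, step, tau = np.zeros_like(truth), 4.0, 1e-3
for _ in range(500):
    r_spec = A_spectral(est) - y_spec
    r_pan = A_pan(est) - y_pan
    # Adjoints of the two operators: nearest-neighbour upsampling and broadcasting.
    grad = (np.repeat(np.repeat(r_spec, F, axis=0), F, axis=1) / F**2
            + r_pan[:, :, None] / NL
            + tau * est)
    est -= step * grad

print("spectral-arm misfit:", np.linalg.norm(A_spectral(est) - y_spec))
print("panchromatic misfit:", np.linalg.norm(A_pan(est) - y_pan))
```

With random toy data the estimate simply converges to something consistent with both arms; on real scenes, it is the prior (smoothness, sparsity, or a full Bayesian model) that fills in the detail the measurements do not constrain.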

An Integrated Framework for the Spatio–Temporal–Spectral Fusion of Remote Sensing Images

By Huanfeng Shen, Xiangchao Meng, and Liangpei Zhang, in IEEE Transactions on Geoscience and Remote Sensing

Abstract:

Remote sensing satellite sensors feature a tradeoff between the spatial, temporal, and spectral resolutions. In this paper, we propose an integrated framework for the spatio-temporal-spectral fusion of remote sensing images. There are two main advantages of the proposed integrated fusion framework: it can accomplish different kinds of fusion tasks, such as multiview spatial fusion, spatio-spectral fusion, and spatio-temporal fusion, based on a single unified model, and it can achieve the integrated fusion of multisource observations to obtain high spatio-temporal-spectral resolution images, without limitations on the number of remote sensing sensors. The proposed integrated fusion framework was comprehensively tested and verified in a variety of image fusion experiments. In the experiments, a number of different remote sensing satellites were utilized, including IKONOS, the Enhanced Thematic Mapper Plus (ETM+), the Moderate Resolution Imaging Spectroradiometer (MODIS), the Hyperspectral Digital Imagery Collection Experiment (HYDICE), and Systeme Pour l’ Observation de la Terre-5 (SPOT-5). The experimental results confirm the effectiveness of the proposed method.

Operation principle of the technique. Multiple images with different resolutions are combined to obtain a high-resolution multidimensional reconstruction of the data. Extracted from Fig.1 of the manuscript.

As sensors evolve, the amount of information we can gather grows at an alarming rate: we have gone from just hundreds or thousands of pixels to millions in a few decades. On top of that, we now gather hundreds of spectral channels at hundreds or thousands of frames per second. This means we usually hit bottlenecks in the acquisition and storage of multidimensional datasets. Approaches like this one make it possible to obtain very good object estimates while measuring very little data, and to do so quickly, which is always a must.

Bonus: of course, they have also been doing the same thing, but using Machine Learning ideas:

Spatial–Spectral Fusion by Combining Deep Learning and Variational Model

By Huanfeng Shen, Menghui Jiang, Jie Li, Qiangqiang Yuan, Yanchong Wei, and Liangpei Zhang, in IEEE Transactions on Geoscience and Remote Sensing