Data fusion as a way to perform compressive sensing

Some time ago I started working on a data fusion problem where we have access to several imaging systems working in parallel, each one gathering a different multidimensional dataset with mixed spectral, temporal, and/or spatial resolutions. The idea is to perform 4D imaging at high spectral, temporal, and spatial resolutions using single-pixel/multi-pixel detectors, where each detector specializes in measuring one dimension in high detail while subsampling the others. Using ideas from regularization/compressive sensing, the goal is to merge all the individually acquired information in a sensible way and, in doing so, achieve very high compression ratios.

Looking for similar approaches, I stumbled upon a series of papers from people doing remote sensing that do essentially the same thing. While the idea is fundamentally the same, their fusion model relies on a Bayesian approach, which is something I had never seen before and seems quite interesting. They try to estimate a 4D object that maximizes the coherence between the data they acquire (the low-resolution 2D/3D projections) and their estimate. This is quite close to what we usually do in compressive sensing imaging experiments, but there the minimization is based on a sparsity prior.

An Integrated Framework for the Spatio–Temporal–Spectral Fusion of Remote Sensing Images

By Huanfeng Shen, Xiangchao Meng, and Liangpei Zhang, at IEEE Transactions on Geoscience and Remote Sensing


Remote sensing satellite sensors feature a tradeoff between the spatial, temporal, and spectral resolutions. In this paper, we propose an integrated framework for the spatio-temporal-spectral fusion of remote sensing images. There are two main advantages of the proposed integrated fusion framework: it can accomplish different kinds of fusion tasks, such as multiview spatial fusion, spatio-spectral fusion, and spatio-temporal fusion, based on a single unified model, and it can achieve the integrated fusion of multisource observations to obtain high spatio-temporal-spectral resolution images, without limitations on the number of remote sensing sensors. The proposed integrated fusion framework was comprehensively tested and verified in a variety of image fusion experiments. In the experiments, a number of different remote sensing satellites were utilized, including IKONOS, the Enhanced Thematic Mapper Plus (ETM+), the Moderate Resolution Imaging Spectroradiometer (MODIS), the Hyperspectral Digital Imagery Collection Experiment (HYDICE), and Système Pour l'Observation de la Terre-5 (SPOT-5). The experimental results confirm the effectiveness of the proposed method.

Operation principle of the technique. Multiple images with different resolutions are combined to obtain a high-resolution multidimensional reconstruction of the data. Extracted from Fig. 1 of the manuscript.

As sensors evolve, the amount of information we can gather grows at an alarming rate: we have gone from hundreds or thousands of pixels to millions in just a few decades, and we now gather hundreds of spectral channels at hundreds or thousands of frames per second. This means we usually suffer from bottlenecks in the acquisition and storage of multidimensional datasets. Approaches like this make it possible to obtain very good object estimates while measuring very little data in a fast way, which is always a must.

Bonus: Of course, they have also been doing the same but using Machine Learning ideas:

Spatial–Spectral Fusion by Combining Deep Learning and Variational Model

By Huanfeng Shen, Menghui Jiang, Jie Li, Qiangqiang Yuan, Yanchong Wei, and Liangpei Zhang, at IEEE Transactions on Geoscience and Remote Sensing

Instant ghost imaging: algorithm and on-chip implementation

Nice ghost imaging implementation on a chip. Even though the optical part has been well known for a while, I really like the fact that more groups are starting to incorporate FPGA cards into their optical systems (if only they were easier to use!). It seems like a very interesting way of speeding up the post-processing of the signal in order to obtain the final image. How long until we see compressive sensing and/or machine learning on a chip?

Experimental setup and operation principle. Extracted from Fig. 1 of the paper.

Instant ghost imaging: algorithm and on-chip implementation

Ghost imaging (GI) is an imaging technique that uses the correlation between two light beams to reconstruct the image of an object. Conventional GI algorithms require large memory space to store the measured data and perform complicated offline calculations, limiting practical applications of GI. Here we develop an instant ghost imaging (IGI) technique with a differential algorithm and an implemented high-speed on-chip IGI hardware system. This algorithm uses the signal difference between consecutive temporal measurements to reduce the memory requirements without degradation of image quality compared with conventional GI algorithms. The on-chip IGI system can immediately reconstruct the image once the measurement finishes; there is no need to rely on post-processing or offline reconstruction. This system can be developed into a real-time imaging system. These features make IGI a faster, cheaper, and more compact alternative to a conventional GI system and make it viable for practical applications of GI.

By Zhe Yang, Wei-Xing Zhang, Yi-Pu Liu, Dong Ruan, and Jun-Lin Li, at Optics Express
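The memory-saving idea is easy to appreciate even without the paper's exact differential update: a conventional GI correlation image can be accumulated on the fly, keeping only running sums whose size is independent of the number of patterns, which is exactly the kind of fixed-memory computation that maps naturally onto an FPGA. Here is a toy streaming sketch (object, patterns, and sizes are made up, and this is plain correlation GI, not the authors' IGI differential algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)
side = 8
obj = np.zeros((side, side))
obj[2:6, 3:5] = 1.0                       # toy binary object

n_patterns = 4000
# Streaming accumulators: memory is O(pixels), independent of n_patterns
sum_s = 0.0
sum_pat = np.zeros((side, side))
sum_s_pat = np.zeros((side, side))

for _ in range(n_patterns):
    pattern = rng.random((side, side))    # random illumination pattern
    bucket = float(np.sum(pattern * obj)) # single-pixel (bucket) signal
    sum_s += bucket
    sum_pat += pattern
    sum_s_pat += bucket * pattern

# Correlation image: <S*I> - <S><I>, available the instant acquisition ends
g = sum_s_pat / n_patterns - (sum_s / n_patterns) * (sum_pat / n_patterns)
g = (g - g.min()) / (g.max() - g.min())   # normalize for display
```

Each new measurement only updates three accumulators, so the image is ready the moment the last pattern arrives, with no stored pattern stack to post-process.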

Single frame wide-field Nanoscopy based on Ghost Imaging via Sparsity Constraints (GISC Nanoscopy)

This just got posted on the arXiv, and it has some interesting ideas inside. Using a ground glass diffuser before a pixelated detector, and after a calibration procedure where you measure the associated speckle patterns while scanning the sample plane, a single shot of the fluorescence speckle pattern can be used to retrieve high-spatial-resolution images of a sample. The authors also claim that the approach should work in STORM setups, achieving really fast and sharp fluorescence images. A nice single-shot example of Compressive Sensing and Ghost Imaging!
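The recovery step can be sketched in a few lines: the calibration gives one speckle pattern per scanned position (random stand-ins below), a single camera frame is then a sparse superposition of those patterns, and an ℓ₁ solver pulls out the emitter positions. All sizes, the sparsity level, and the λ value are made up, and plain ISTA is used here as a generic sparse solver, not the paper's actual GISC reconstruction:

```python
import numpy as np

rng = np.random.default_rng(2)
n_points, n_cam = 64, 48                  # scan positions vs camera pixels (made up)
# Calibration: one speckle pattern per scanned point (random stand-ins here)
A = rng.standard_normal((n_cam, n_points))
A /= np.linalg.norm(A, axis=0)

x_true = np.zeros(n_points)
x_true[[10, 30, 55]] = [1.0, 0.7, 1.2]    # sparse fluorophore distribution
y = A @ x_true                            # the single camera frame

# ISTA: iterative soft thresholding for min ||Ax - y||^2 + lam*||x||_1
lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n_points)
for _ in range(500):
    x = x - step * (A.T @ (A @ x - y))                       # gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # soft threshold
```

Because the emitters are sparse, far fewer camera pixels than scan positions suffice, which is what lets a single frame replace a long localization sequence.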

Single frame wide-field Nanoscopy based on Ghost Imaging via Sparsity Constraints (GISC Nanoscopy)

by Wenwen Li, Zhishen Tong, Kang Xiao, Zhentao Liu, Qi Gao, Jing Sun, Shupeng Liu, Shensheng Han, and Zhongyang Wang, at arXiv


The applications of present nanoscopy techniques for live cell imaging are limited by the long sampling time and low emitter density. Here we developed a new single frame wide-field nanoscopy based on ghost imaging via sparsity constraints (GISC Nanoscopy), in which a spatial random phase modulator is applied in a wide-field microscopy to achieve random measurement for fluorescence signals. This new method can effectively utilize the sparsity of fluorescence emitters to dramatically enhance the imaging resolution to 80 nm by compressive sensing (CS) reconstruction for one raw image. The ultra-high emitter density of 143 μm⁻² has been achieved while the precision of single-molecule localization below 25 nm has been maintained. Thereby, working with a high density of photo-switchable fluorophores, GISC Nanoscopy can reduce the number of sampling frames by orders of magnitude compared with previous single-molecule-localization-based super-resolution imaging methods.

Experimental setup and fundamentals of the calibration and recovery process. Extracted from Fig. 1 of the manuscript.

Simultaneous multiplane imaging with reverberation multiphoton microscopy

Really nice pre-print by the people at Boston University, led by J. Mertz.

Love the idea of generating ~infinite focal spots (until you run out of photons) inside a sample, and using an extremely fast single-pixel detector to recover the signal. A very original way to tackle volumetric imaging in bio-imaging!

Fundamental workflow of the technique. Extracted from Fig. 1 of the manuscript.

Simultaneous multiplane imaging with reverberation multiphoton microscopy

by Devin R. Beaulieu, Ian G. Davison, Thomas G. Bifano, and Jerome Mertz


Multiphoton microscopy (MPM) has gained enormous popularity over the years for its capacity to provide high resolution images from deep within scattering samples. However, MPM is generally based on single-point laser-focus scanning, which is intrinsically slow. While imaging speeds as fast as video rate have become routine for 2D planar imaging, such speeds have so far been unattainable for 3D volumetric imaging without severely compromising microscope performance. We demonstrate here 3D volumetric (multiplane) imaging at the same speed as 2D planar (single plane) imaging, with minimal compromise in performance. Specifically, multiple planes are acquired by near-instantaneous axial scanning while maintaining 3D micron-scale resolution. Our technique, called reverberation MPM, is well adapted for large-scale imaging in scattering media with low repetition-rate lasers, and can be implemented with conventional MPM as a simple add-on.

Inverse Scattering via Transmission Matrices: Broadband Illumination and Fast Phase Retrieval Algorithms

Interesting paper by people at Rice and Northwestern universities about different phase retrieval algorithms for measuring transmission matrices without using interferometric techniques. The thing with interferometers is that they provide lots of cool stuff (high sensitivity, phase information, etc.), but they also involve quite a lot of technical problems that you do not want to face every day in the lab: they are so sensitive that it is a pain in the ass to calibrate and measure without vibrations messing everything up.

Using only intensity measurements (provided by a common sensor such as a CCD) together with algorithmic approaches can recover the phase information, but at a computational cost that sometimes makes things impractical. There is more info about all of this (for the coherent illumination case) on the Rice webpage (including a dataset and an implementation of some of the codes).
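As a toy version of the idea (not the prVAMP algorithm from the paper): one complex TM row can be estimated from intensity-only probe measurements by alternating projections, repeatedly imposing the measured magnitudes and projecting back onto the range of the probe matrix. All sizes and the Gaussian probe model below are made up, and note that a global phase of the row is never recoverable from intensities alone:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 16, 128                      # TM row length, number of probe patterns (made up)
t_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # one TM row
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
amp = np.abs(A @ t_true)            # camera only sees magnitudes, no phase

# Alternating projections (Gerchberg-Saxton-style) from a random start
A_pinv = np.linalg.pinv(A)
t = rng.standard_normal(n) + 1j * rng.standard_normal(n)
res0 = np.linalg.norm(np.abs(A @ t) - amp)   # initial magnitude mismatch
for _ in range(300):
    z = A @ t
    z = amp * np.exp(1j * np.angle(z))  # keep estimated phase, impose measured magnitude
    t = A_pinv @ z                      # least-squares projection back onto range(A)
res = np.linalg.norm(np.abs(A @ t) - amp)    # final magnitude mismatch
```

The heavy oversampling (m ≫ n) is what makes the intensity-only problem well posed; the GPU-accelerated prVAMP solver in the paper exists precisely because real TMs are many orders of magnitude larger than this toy.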

Inverse Scattering via Transmission Matrices: Broadband Illumination and Fast Phase Retrieval Algorithms

by M. Sharma et al., at IEEE Transactions on Computational Imaging


When a narrowband coherent wavefront passes through or reflects off of a scattering medium, the input and output relationship of the incident field is linear and so can be described by a transmission matrix (TM). If the TM for a given scattering medium is known, one can computationally “invert” the scattering process and image through the medium. In this work, we investigate the effect of broadband illumination, i.e., what happens when the wavefront is only partially coherent? Can one still measure a TM and “invert” the scattering? To accomplish this task, we measure TMs using the double phase retrieval technique, a method which uses phase retrieval algorithms to avoid difficult-to-capture interferometric measurements. Generally, using the double phase retrieval method requires performing massive amounts of computation. We alleviate this burden by developing a fast, GPU-accelerated algorithm, prVAMP, which lets us reconstruct 256^2×64^2 TMs in under five hours.

After reconstructing several TMs using this method, we find that, as expected, reducing the coherence of the illumination significantly restricts our ability to invert the scattering process. Moreover, we find that past a certain bandwidth an incoherent, intensity-based scattering model better describes the scattering process and is easier to invert.

De-scattering with Excitation Patterning (DEEP) Enables Rapid Wide-field Imaging Through Scattering Media

Very interesting stuff from the people at MIT regarding imaging through scattering media. Recently, multiple approaches have been published that take advantage of temporal focusing (TF) to increase efficiency inside scattering media in two-photon microscopy, and this one goes a step further.

Here, the authors use wide-field structured illumination, in combination with TF, to obtain images with a large field-of-view from a small number of camera acquisitions. To do so, they sequentially project a set of random structured patterns using a digital micromirror device (DMD). Using the pictures acquired for each illumination pattern, in combination with the point-spread function (PSF) of the imaging system, they can recover images of different biological samples without the typical scattering blur.
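In spirit, each camera frame is a blurred copy of the object masked by a known pattern, so the set of frames defines one big linear system that can be inverted. A 1D toy illustrating just that structure (the circulant Gaussian blur, binary patterns, and sizes are all made up, and the authors' actual DEEP reconstruction is different):

```python
import numpy as np

rng = np.random.default_rng(4)
n, K = 24, 16                        # image size, number of DMD patterns (made up)
x_true = np.zeros(n)
x_true[[4, 9, 17]] = [1.0, 0.6, 0.9]

# Toy scattering PSF: circulant Gaussian blur matrix
offsets = np.arange(-3, 4)
kernel = np.exp(-0.5 * (offsets / 1.5) ** 2)
kernel /= kernel.sum()
H = np.zeros((n, n))
for i in range(n):
    for k, w in zip(offsets, kernel):
        H[i, (i + k) % n] += w

# Each frame: blur applied to the pattern-masked object
rows, frames = [], []
for _ in range(K):
    p = rng.integers(0, 2, n).astype(float)   # binary DMD pattern
    rows.append(H * p)                        # equals H @ diag(p), via broadcasting
    frames.append(H @ (p * x_true))
M, y = np.vstack(rows), np.concatenate(frames)

# De-scattering as one least-squares inversion (noiseless toy)
x_hat, *_ = np.linalg.lstsq(M, y, rcond=None)
```

Because the patterns modulate the object before the blur, a modest number of frames already makes the stacked system invertible, which is the sense in which tens to hundreds of DEEP measurements can stand in for millions of point-scanning ones.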

Optical design and working principle of the system. Figure extracted from “De-scattering with Excitation Patterning (DEEP) Enables Rapid Wide-field Imaging Through Scattering Media,” Dushan N. Wadduwage et al., at arXiv.

De-scattering with Excitation Patterning (DEEP) Enables Rapid Wide-field Imaging Through Scattering Media

by Dushan N. Wadduwage et al., at arXiv.


From multi-photon imaging penetrating millimeters deep through scattering biological tissue, to super-resolution imaging conquering the diffraction limit, optical imaging techniques have greatly advanced in recent years. Notwithstanding, a key unmet challenge in all these imaging techniques is to perform rapid wide-field imaging through a turbid medium. Strategies such as active wave-front correction and multi-photon excitation, both used for deep tissue imaging; or wide-field total-internal-reflection illumination, used for super-resolution imaging; can generate arbitrary excitation patterns over a large field-of-view through or under turbid media. In these cases, the throughput advantage gained by wide-field excitation is lost due to the use of point detection. To address this challenge, here we introduce a novel technique called De-scattering with Excitation Patterning, or ‘DEEP’, which uses patterned excitation followed by wide-field detection with computational imaging. We use two-photon temporal focusing microscopy (TFM) to demonstrate our approach at multiple scattering lengths deep in tissue. Our results suggest that millions of point-scanning measurements could be substituted with tens to hundreds of DEEP measurements with no compromise in image quality.