Data fusion as a way to perform compressive sensing

Some time ago I started working on a data fusion problem where we have access to several imaging systems working in parallel, each one gathering a different multidimensional dataset with mixed spectral, temporal, and/or spatial resolutions. The idea is to perform 4D imaging at high spectral, temporal, and spatial resolutions using single-pixel/multi-pixel detectors, where each detector specializes in measuring one dimension in high detail while subsampling the others. Using ideas from regularization/compressive sensing, the goal is to merge all the information we acquire individually in a way that makes sense and, while doing so, achieve very high compression ratios.

Looking for similar approaches, I stumbled upon a series of papers from people doing remote sensing that do essentially the same thing. While the idea is fundamentally the same, their fusion model relies on a Bayesian approach, which is something I had not seen before and seems quite interesting. They try to estimate a 4D object that maximizes the coherence between the data they acquire (the low-resolution 2D/3D projections) and their estimate. This is quite close to what we usually do in compressive sensing experiments in imaging, but there the minimization is based on a sparsity prior.
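For reference, that sparsity-prior minimization can be sketched in a few lines: recover a sparse signal from far fewer random measurements than unknowns by solving min_x ||Ax - y||^2 + lam*||x||_1 with plain ISTA. Everything here (sizes, sensing matrix, parameters) is an illustrative assumption, not taken from any of the papers:

```python
import numpy as np

# Minimal sketch of sparsity-prior recovery as used in compressive
# sensing: solve  min_x ||A x - y||^2 + lam * ||x||_1  with ISTA.
# The sensing matrix, sizes, and parameters are illustrative.

rng = np.random.default_rng(1)
n, m, k = 200, 80, 5                 # signal length, measurements, nonzeros

# k-sparse ground truth with entries well above the soft-threshold level
x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = rng.choice([-1.0, 1.0], k) * (1.0 + rng.random(k))

A = rng.normal(size=(m, n)) / np.sqrt(m)    # random Gaussian sensing matrix
y = A @ x_true                              # compressed measurements (m << n)

lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L, L = Lipschitz constant

x = np.zeros(n)
for _ in range(2000):
    z = x - step * (A.T @ (A @ x - y))      # gradient step on the data fit
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Despite measuring only 80 numbers for a 200-sample signal, the sparsity prior pins down the few active coefficients; the 4D imaging setting is the same idea with much larger measurement operators.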

An Integrated Framework for the Spatio–Temporal–Spectral Fusion of Remote Sensing Images

By Huanfeng Shen, Xiangchao Meng, and Liangpei Zhang, at IEEE Transactions on Geoscience and Remote Sensing


Remote sensing satellite sensors feature a tradeoff between the spatial, temporal, and spectral resolutions. In this paper, we propose an integrated framework for the spatio-temporal-spectral fusion of remote sensing images. There are two main advantages of the proposed integrated fusion framework: it can accomplish different kinds of fusion tasks, such as multiview spatial fusion, spatio-spectral fusion, and spatio-temporal fusion, based on a single unified model, and it can achieve the integrated fusion of multisource observations to obtain high spatio-temporal-spectral resolution images, without limitations on the number of remote sensing sensors. The proposed integrated fusion framework was comprehensively tested and verified in a variety of image fusion experiments. In the experiments, a number of different remote sensing satellites were utilized, including IKONOS, the Enhanced Thematic Mapper Plus (ETM+), the Moderate Resolution Imaging Spectroradiometer (MODIS), the Hyperspectral Digital Imagery Collection Experiment (HYDICE), and Système Pour l'Observation de la Terre-5 (SPOT-5). The experimental results confirm the effectiveness of the proposed method.

Operation principle of the technique. Multiple images with different resolutions are combined to obtain a high-resolution multidimensional reconstruction of the data. Extracted from Fig. 1 of the manuscript.

As sensors evolve, the amount of information we can gather grows at an alarming rate: we have gone from hundreds or thousands of pixels to millions in just a few decades, and we now gather hundreds of spectral channels at hundreds or thousands of frames per second. This means we usually run into bottlenecks in the acquisition and storage of multidimensional datasets. Approaches like this one make it possible to obtain very good object estimates while measuring very little data in a short time, which is always a must.
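As a deliberately tiny illustration of the fusion idea (a quadratic stand-in, not the authors' Bayesian model), the following sketch estimates a (space x band) datacube from two complementary sensors: one panchromatic sensor at full spatial resolution, and one full-spectral sensor at reduced spatial resolution. All operators, sizes, and weights are assumptions made up for this toy:

```python
import numpy as np

# Toy fusion sketch: recover a (space x band) datacube from two
# complementary low-resolution observations by regularized least squares.
# All operators, sizes, and weights are illustrative assumptions.

ns, nb = 64, 8                               # spatial samples, spectral bands

# Ground truth: a smooth spatial bump with a band-dependent scaling
s = np.exp(-((np.arange(ns) - 32.0) / 10.0) ** 2)
w = np.linspace(0.5, 1.5, nb)
X_true = np.outer(s, w)

# Sensor 1: full spatial resolution, spectrally integrated (panchromatic)
y1 = X_true.mean(axis=1)                           # shape (ns,)
# Sensor 2: full spectral resolution, 4x spatially downsampled
y2 = X_true.reshape(ns // 4, 4, nb).mean(axis=1)   # shape (ns//4, nb)

# Stack both forward operators as one matrix acting on vec(X) (row-major)
n = ns * nb
A1 = np.zeros((ns, n))                       # per-sample band average
for i in range(ns):
    A1[i, i * nb:(i + 1) * nb] = 1.0 / nb
A2 = np.zeros((ns // 4 * nb, n))             # per-band 4x spatial pooling
for p in range(ns // 4):
    for j in range(nb):
        for q in range(4):
            A2[p * nb + j, (4 * p + q) * nb + j] = 1.0 / 4
A = np.vstack([A1, A2])
y = np.concatenate([y1, y2.ravel()])

# Spatial first-difference regularizer per band (a simple smoothness prior)
L = np.zeros(((ns - 1) * nb, n))
for i in range(ns - 1):
    for j in range(nb):
        L[i * nb + j, i * nb + j] = -1.0
        L[i * nb + j, (i + 1) * nb + j] = 1.0

# Solve  min_X ||A vec(X) - y||^2 + mu ||L vec(X)||^2  in closed form
mu = 1e-3
X = np.linalg.solve(A.T @ A + mu * (L.T @ L), A.T @ y).reshape(ns, nb)

err = np.linalg.norm(X - X_true) / np.linalg.norm(X_true)
```

The two sensors together deliver only 192 numbers for 512 unknowns; the smoothness prior fills in the rest. The paper replaces this quadratic prior with a proper Bayesian model and handles an arbitrary number of sensors, but the "merge low-resolution projections into one high-resolution estimate" mechanics are the same.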

Bonus: Of course, they have also been doing the same but using Machine Learning ideas:

Spatial–Spectral Fusion by Combining Deep Learning and Variational Model

By Huanfeng Shen, Menghui Jiang, Jie Li, Qiangqiang Yuan, Yanchong Wei, and Liangpei Zhang, at IEEE Transactions on Geoscience and Remote Sensing

Instant ghost imaging: algorithm and on-chip implementation

A nice ghost imaging implementation on a chip. Even though the optical part has been well known for a while, I really like the fact that more groups are starting to incorporate FPGA cards into their optical systems (if only they were easier to use!). It seems like a very interesting way of speeding up the post-processing of the signal in order to obtain the final image. How long until we see compressive sensing and/or machine learning on a chip?

Experimental setup and operation principle. Extracted from Fig. 1 of the paper.

Instant ghost imaging: algorithm and on-chip implementation

Ghost imaging (GI) is an imaging technique that uses the correlation between two light beams to reconstruct the image of an object. Conventional GI algorithms require large memory space to store the measured data and perform complicated offline calculations, limiting practical applications of GI. Here we develop an instant ghost imaging (IGI) technique with a differential algorithm and implement a high-speed on-chip IGI hardware system. This algorithm uses the differential signal between consecutive temporal measurements to reduce the memory requirements without degradation of image quality compared with conventional GI algorithms. The on-chip IGI system can reconstruct the image immediately once the measurement finishes; there is no need to rely on post-processing or offline reconstruction. This system can be developed into a real-time imaging system. These features make IGI a faster, cheaper, and more compact alternative to a conventional GI system and make it viable for practical applications of GI.

By Zhe Yang, Wei-Xing Zhang, Yi-Pu Liu, Dong Ruan, and Jun-Lin Li, at Optics Express
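The differential trick lends itself to a compact simulation. Below is a sketch (not the authors' implementation; the random-pattern model, sizes, and weights are my own assumptions) of a streaming reconstruction that weights each illumination pattern by the difference between consecutive bucket signals, so only a running sum is kept in memory:

```python
import numpy as np

# Streaming ghost-imaging sketch in the spirit of the differential
# algorithm: no pattern or bucket history is stored, only a running sum.
# The random-pattern model and all sizes are illustrative assumptions.

rng = np.random.default_rng(0)
N, M = 16, 30000                  # image size, number of measurements

# Ground-truth object: a bright square on a dark background
obj = np.zeros((N, N))
obj[4:12, 4:12] = 1.0

G = np.zeros((N, N))              # running reconstruction, O(N^2) memory
S_prev = 0.0
for m in range(M):
    I = rng.random((N, N))        # random illumination pattern
    S = float((I * obj).sum())    # single-pixel (bucket) signal
    G += (S - S_prev) * I         # differential weight, no history kept
    S_prev = S
G /= M

# G converges to Cov(S, I), which is proportional to the object,
# so the accumulator should correlate strongly with obj
corr = np.corrcoef(G.ravel(), obj.ravel())[0, 1]
```

Because each pattern is consumed as it arrives, the memory footprint is a single N x N accumulator regardless of the number of measurements, and the image is ready the moment the last measurement lands, which is exactly what makes an FPGA implementation attractive.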