Data fusion as a way to perform compressive sensing

Some time ago I started working on a data fusion problem where we have access to several imaging systems working in parallel, each one gathering a different multidimensional dataset with mixed spectral, temporal, and/or spatial resolutions. The idea is to perform 4D imaging at high spectral, temporal, and spatial resolution using single-pixel/multi-pixel detectors, where each detector specializes in measuring one dimension in high detail while subsampling the others. Using ideas from regularization/compressive sensing, the goal is to merge all the individually acquired information in a sensible way and, in doing so, achieve very high compression ratios.

Looking for similar approaches, I stumbled upon a series of papers from people doing remote sensing that do essentially the same thing. While the idea is fundamentally the same, their fusion model relies on a Bayesian approach, which is something I had never seen before and seems quite interesting. They estimate the 4D object that maximizes the consistency between their estimation and the data they acquire (the low-resolution 2D/3D projections). This is quite close to what we usually do in compressive sensing imaging experiments, where the minimization is instead based on a sparsity prior.
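As a toy illustration of the idea, here is a minimal 1D sketch of this kind of fusion: two hypothetical sensors observe the same object through different degradation operators (one loses spatial resolution by block averaging, the other subsamples), and a single regularized inverse problem merges both measurements. The operators, sizes, and the simple smoothness prior are all my own stand-ins, not the model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy high-resolution 1-D "object" (stand-in for the 4D datacube)
n = 64
x_true = np.zeros(n)
x_true[10:20] = 1.0
x_true[40:45] = 0.5

# Two hypothetical sensors, each degrading a different way:
# sensor A averages blocks of 4 samples (low spatial resolution),
# sensor B keeps only every 3rd sample (subsampling).
A = np.kron(np.eye(n // 4), np.full((1, 4), 0.25))   # shape (16, 64)
B = np.eye(n)[::3]                                    # shape (22, 64)

y_a = A @ x_true + 0.01 * rng.standard_normal(A.shape[0])
y_b = B @ x_true + 0.01 * rng.standard_normal(B.shape[0])

# Fuse by minimizing ||A x - y_a||^2 + ||B x - y_b||^2 + lam ||D x||^2,
# where D is a finite-difference operator (a generic smoothness prior
# standing in for the Bayesian/sparsity priors used in the papers).
D = np.diff(np.eye(n), axis=0)
lam = 0.1
H = A.T @ A + B.T @ B + lam * D.T @ D
b = A.T @ y_a + B.T @ y_b
x_hat = np.linalg.solve(H, b)

print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Neither sensor alone determines the object, but the joint quadratic problem has a unique solution that recovers it reasonably well.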

An Integrated Framework for the Spatio–Temporal–Spectral Fusion of Remote Sensing Images

By Huanfeng Shen, Xiangchao Meng, and Liangpei Zhang, at IEEE Transactions on Geoscience and Remote Sensing

Abstract:

Remote sensing satellite sensors feature a tradeoff between the spatial, temporal, and spectral resolutions. In this paper, we propose an integrated framework for the spatio-temporal-spectral fusion of remote sensing images. There are two main advantages of the proposed integrated fusion framework: it can accomplish different kinds of fusion tasks, such as multiview spatial fusion, spatio-spectral fusion, and spatio-temporal fusion, based on a single unified model, and it can achieve the integrated fusion of multisource observations to obtain high spatio-temporal-spectral resolution images, without limitations on the number of remote sensing sensors. The proposed integrated fusion framework was comprehensively tested and verified in a variety of image fusion experiments. In the experiments, a number of different remote sensing satellites were utilized, including IKONOS, the Enhanced Thematic Mapper Plus (ETM+), the Moderate Resolution Imaging Spectroradiometer (MODIS), the Hyperspectral Digital Imagery Collection Experiment (HYDICE), and Système Pour l’Observation de la Terre-5 (SPOT-5). The experimental results confirm the effectiveness of the proposed method.

Operation principle of the technique. Multiple images with different resolutions are combined to obtain a high-resolution multidimensional reconstruction of the data. Extracted from Fig.1 of the manuscript.

As sensors evolve, the amount of information we can gather grows at an alarming rate: in just a few decades we have gone from hundreds or thousands of pixels to millions, and we now gather hundreds of spectral channels at hundreds or thousands of frames per second. This means we usually hit bottlenecks in the acquisition and storage of multidimensional datasets. Approaches like this one make it possible to obtain very good object estimations while measuring very little data, very fast, which is always a must.

Bonus: Of course, they have also been doing the same but using Machine Learning ideas:

Spatial–Spectral Fusion by Combining Deep Learning and Variational Model

By Huanfeng Shen, Menghui Jiang, Jie Li, Qiangqiang Yuan, Yanchong Wei, and Liangpei Zhang, at IEEE Transactions on Geoscience and Remote Sensing

Single frame wide-field Nanoscopy based on Ghost Imaging via Sparsity Constraints (GISC Nanoscopy)

This just got posted on the arXiv, and has some interesting ideas inside. Using a ground glass diffuser before a pixelated detector, and after a calibration procedure where you measure the associated speckle patterns while scanning the sample plane, a single shot of the fluorescence speckle pattern can be used to retrieve high-spatial-resolution images of a sample. The authors also claim that the approach should work in STORM setups, achieving really fast and sharp fluorescence images. A nice single-shot example of Compressive Sensing and Ghost Imaging!
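The recovery step maps nicely onto standard compressive sensing: the calibrated speckle patterns form the columns of a sensing matrix, and the sparse emitter distribution is recovered from a single measurement vector by l1 minimization. Below is a minimal sketch using ISTA, with random numbers standing in for real calibrated speckle patterns; all sizes (400 detector pixels, 1024 sample-plane positions, 10 emitters) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 400, 1024, 10  # detector pixels, sample positions, emitters

# Calibration: column j holds the speckle pattern produced by a point
# source at sample-plane position j (random stand-in for measured data)
S = rng.standard_normal((m, n)) / np.sqrt(m)

# Sparse emitter distribution and the single-shot measurement
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(1, 2, k)
y = S @ x_true

def ista(S, y, lam=0.02, n_iter=2000):
    """Iterative soft thresholding for min 0.5||Sx - y||^2 + lam||x||_1."""
    L = np.linalg.norm(S, 2) ** 2  # Lipschitz constant of the gradient
    x = np.zeros(S.shape[1])
    for _ in range(n_iter):
        x = x - S.T @ (S @ x - y) / L                          # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # shrinkage
    return x

x_hat = ista(S, y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

With many fewer measurements than unknowns, the sparsity prior is what makes the single-shot inversion possible.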

Single frame wide-field Nanoscopy based on Ghost Imaging via Sparsity Constraints (GISC Nanoscopy)

by Wenwen Li, Zhishen Tong, Kang Xiao, Zhentao Liu, Qi Gao, Jing Sun, Shupeng Liu, Shensheng Han, and Zhongyang Wang, at arXiv.org

Abstract:

The applications of present nanoscopy techniques for live cell imaging are limited by the long sampling time and low emitter density. Here we developed a new single frame wide-field nanoscopy based on ghost imaging via sparsity constraints (GISC Nanoscopy), in which a spatial random phase modulator is applied in a wide-field microscopy to achieve random measurement for fluorescence signals. This new method can effectively utilize the sparsity of fluorescence emitters to dramatically enhance the imaging resolution to 80 nm by compressive sensing (CS) reconstruction for one raw image. The ultra-high emitter density of 143 μm⁻² has been achieved while the precision of single-molecule localization below 25 nm has been maintained. Thereby working with high-density of photo-switchable fluorophores GISC nanoscopy can reduce orders of magnitude sampling frames compared with previous single-molecule localization based super-resolution imaging methods.

Experimental setup and fundamentals of the calibration and recovery process. Extracted from Fig.1 of the manuscript.

Compressive optical imaging with a photonic lantern

New single-pixel camera design, but this time using a multicore fiber (MCF) and a photonic lantern instead of a spatial light modulator. Cool!

The fundamental idea is to excite one of the cores of an MCF. Light then propagates through the fiber, and the photonic lantern at the distal tip converts it into a random-like light pattern. Exciting different cores of the MCF generates different light patterns at the output of the fiber, which can be used to obtain images using the single-pixel imaging formalism.

There is more cool stuff in the paper, for example the Compressive Sensing algorithm the authors use, which incorporates positivity constraints. This is quite relevant if you want to get high-quality images, because of the reduced number of cores present in the MCF (remember, 1 core = 1 pattern, and the number of patterns determines the spatial resolution of the image in a single-pixel camera). It is also nice that the authors have made their code available here.
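To see why positivity helps in the underdetermined regime, here is a minimal single-pixel sketch: far fewer patterns than pixels, with reconstruction by projected gradient descent onto the nonnegative orthant. This is a generic stand-in, not the authors' SARA-COIL algorithm, and the pattern count and image size are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_cores, n_pix = 121, 16 * 16  # one pattern per core; fewer than pixels

# Random stand-ins for the pre-measured multimode patterns (one per row)
P = rng.uniform(0.0, 1.0, (n_cores, n_pix))

# Nonnegative test object and the single-pixel measurements y_i = <P_i, x>
x_true = np.zeros(n_pix)
x_true[60:80] = 1.0
y = P @ x_true

# Projected Landweber iteration: gradient step on the data fit, then
# projection onto the nonnegative orthant (the positivity constraint)
x = np.zeros(n_pix)
step = 1.0 / np.linalg.norm(P, 2) ** 2
for _ in range(2000):
    x = np.maximum(x - step * (P.T @ (P @ x - y)), 0.0)

print(np.linalg.norm(P @ x - y) / np.linalg.norm(y))  # data-fit residual
```

With only 121 measurements for 256 unknowns the plain least-squares problem has infinitely many solutions; the positivity constraint is what pins down a physically meaningful one.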

Some experimental/simulation results (nice Smash logo there!). Extracted from
Debaditya Choudhury et al., “Compressive optical imaging with a photonic lantern,” at https://arxiv.org/abs/1903.01288

Compressive optical imaging with a photonic lantern

by Debaditya Choudhury et al., at arXiv

Abstract:

The thin and flexible nature of optical fibres often makes them the ideal technology to view biological processes in-vivo, but current microendoscopic approaches are limited in spatial resolution. Here, we demonstrate a new route to high resolution microendoscopy using a multicore fibre (MCF) with an adiabatic multimode-to-singlemode photonic lantern transition formed at the distal end by tapering. We show that distinct multimode patterns of light can be projected from the output of the lantern by individually exciting the single-mode MCF cores, and that these patterns are highly stable to fibre movement. This capability is then exploited to demonstrate a form of single-pixel imaging, where a single pixel detector is used to detect the fraction of light transmitted through the object for each multimode pattern. A custom compressive imaging algorithm we call SARA-COIL is used to reconstruct the object using only the pre-measured multimode patterns themselves and the detector signals.

Weekly recap (29/04/2018)

This week we have a lot of interesting stuff:

Observing the cell in its native state: Imaging subcellular dynamics in multicellular organisms

Adaptive Optics + Light-Sheet Microscopy to see living cells inside the body of a zebrafish (the favorite fish of biologists!). Really impressive images overcoming the scattering caused by tissue. You can read more about the paper at Nature and/or the Howard Hughes Medical Institute.

 


The Feynman Lectures on Physics online

I just read on OpenCulture that The Feynman Lectures on Physics have been made available online. Until now, only the first volume was available, but now you can also find volumes 2 and 3. Time to reread the classics…


Imaging Without Lenses

An interesting piece appeared this week in American Scientist covering some aspects of the coming symbiosis between optics, computation, and electronics. We are already able to overcome the optical resolution limit, obtain phase information, or even image without using traditional optical elements such as lenses. What’s coming next?


All-Optical Machine Learning Using Diffractive Deep Neural Networks

A very nice paper appeared on arXiv this week.

Xing Lin, Yair Rivenson, Nezih T. Yardimci, Muhammed Veli, Mona Jarrahi, Aydogan Ozcan

We introduce an all-optical Diffractive Deep Neural Network (D2NN) architecture that can learn to implement various functions after deep learning-based design of passive diffractive layers that work collectively. We experimentally demonstrated the success of this framework by creating 3D-printed D2NNs that learned to implement handwritten digit classification and the function of an imaging lens at terahertz spectrum. With the existing plethora of 3D-printing and other lithographic fabrication methods as well as spatial-light-modulators, this all-optical deep learning framework can perform, at the speed of light, various complex functions that computer-based neural networks can implement, and will find applications in all-optical image analysis, feature detection and object classification, also enabling new camera designs and optical components that can learn to perform unique tasks using D2NNs.

Imagine if Fourier Transforms had been discovered before lenses, and then one day someone came up with just a piece of glass and said “this can compute the FT at the speed of light”. A very cool read.
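That lens-as-Fourier-transform picture is easy to play with numerically: the intensity in the back focal plane of a lens is (up to scaling) the squared magnitude of the 2D Fourier transform of the field in the front focal plane. A quick sketch with a double-slit aperture:

```python
import numpy as np

# A thin lens optically maps the field in its front focal plane to its
# (scaled) Fourier transform in the back focal plane. Numerically, that
# operation is a 2-D FFT -- here applied to a double-slit aperture.
n = 256
aperture = np.zeros((n, n))
aperture[:, 100:104] = 1.0   # slit 1
aperture[:, 152:156] = 1.0   # slit 2

far_field = np.fft.fftshift(np.fft.fft2(aperture))  # DC moved to center
intensity = np.abs(far_field) ** 2

# The far-field intensity shows the classic two-slit interference
# fringes, brightest at the central (DC) order
print(intensity.shape, np.unravel_index(np.argmax(intensity), intensity.shape))
```

The glass does this computation passively, in a single pass of light; the D2NN idea stacks learned diffractive surfaces to implement more general functions the same way.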


OPEN SPIN MICROSCOPY

I just stumbled upon this project while reading Lab on the Cheap. It seems like a very good resource if you plan to build a light-sheet microscope and do not want to spend $$$$ on Thorlabs.


Artificial Intelligence kits from Google, updated edition

Last year, AIY Projects launched to give makers the power to build AI into their projects with two do-it-yourself kits. We’re seeing continued demand for the kits, especially from the STEM audience where parents and teachers alike have found the products to be great tools for the classroom. The changing nature of work in the future means students may have jobs that haven’t yet been imagined, and we know that computer science skills, like analytical thinking and creative problem solving, will be crucial.

We’re taking the first of many steps to help educators integrate AIY into STEM lesson plans and help prepare students for the challenges of the future by launching a new version of our AIY kits. The Voice Kit lets you build a voice controlled speaker, while the Vision Kit lets you build a camera that learns to recognize people and objects (check it out here). The new kits make getting started a little easier with clearer instructions, a new app and all the parts in one box.

To make setup easier, both kits have been redesigned to work with the new Raspberry Pi Zero WH, which comes included in the box, along with the USB connector cable and pre-provisioned SD card. Now users no longer need to download the software image and can get running faster. The updated AIY Vision Kit v1.1 also includes the Raspberry Pi Camera v2.

Looking forward to seeing the price tag and the date they become available.

The week in papers (22/04/18)

As a way to keep posts going, I am starting a short recap of interesting papers being published (or discovered) every now and then. I will probably write longer posts about some of them in the future.

Let’s get this thing going:

Two papers using ‘centroid estimation’ to retrieve interesting information:

Extract voice information using high-speed camera

Mariko Akutsu, Yasuhiro Oikawa, and Yoshio Yamasaki, at The Journal of the Acoustical Society of America
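The core trick in these works is that the intensity-weighted centroid of a spot can be localized to a small fraction of a pixel, so tiny vibrations (e.g. those produced by sound) become measurable displacements over a video sequence. A minimal sketch, with a made-up Gaussian spot:

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centroid (row, col) of a 2-D image."""
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (ys * img).sum() / total, (xs * img).sum() / total

# Hypothetical Gaussian spot centered at a sub-pixel position
y0, x0 = 30.3, 45.7
ys, xs = np.indices((64, 96))
spot = np.exp(-((ys - y0) ** 2 + (xs - x0) ** 2) / (2 * 3.0 ** 2))

cy, cx = centroid(spot)
print(cy, cx)  # close to (30.3, 45.7), i.e. sub-pixel localization
```

Tracking this pair of numbers frame by frame in a high-speed video turns minute mechanical vibrations into a time signal.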

Light transport and imaging through complex media & Photonics West 2018

The last ~20 days have been completely crazy. First, I went to a meeting organized by the Royal Society: Light transport and imaging through complex media. It was amazing: a beautiful place, incredible researchers, and a nice combination of signal processing and optical imaging. I will certainly be looking out for future editions.

After that, I attended Photonics West. Both BiOS and OPTO were full of interesting talks. Scattering media, adaptive optics, DMDs, some compressive sensing… a fantastic week. There I talked about two recent works we did in Spain: balanced-photodetection single-pixel imaging, and phase imaging using a DMD and a lateral position detector. Both contributions were very well received, and I am happy with the feedback I got. So many new ideas… now I need some time to implement them! I plan on writing a bit here on the blog about the latter work, which has been published in the latest issue of Optica.

 

Some of the cool stuff I heard about:

Valentina Emiliani – Optical manipulation of neuronal circuits by optical wave front shaping. Very cool implementations combining multiple SLMs and temporal focusing to see how neurons work.

Richard Baraniuk – Phase retrieval: tradeoffs and a new algorithm. How to recover phase information from intensity measurements. Compressive sensing and inverse problems. Very interesting, and a really good speaker. It is difficult to find someone capable of explaining these concepts as easily as Richard.
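As a flavor of what phase retrieval looks like in practice, here is the classical Gerchberg–Saxton baseline (not the new algorithm from the talk): alternate between the object and Fourier planes, imposing the measured magnitude in each plane while keeping the current phase estimate.

```python
import numpy as np

rng = np.random.default_rng(3)

# Unknown phase-only object; we only get to measure intensities
n = 64
phase_true = rng.uniform(-np.pi, np.pi, (n, n))
obj = np.exp(1j * phase_true)
amp_obj = np.abs(obj)                   # object-plane magnitude (all ones)
amp_fourier = np.abs(np.fft.fft2(obj))  # measured Fourier-plane magnitude

# Gerchberg-Saxton: start from a random phase guess and alternate
# magnitude projections between the two planes
g = np.exp(1j * rng.uniform(-np.pi, np.pi, (n, n)))
for _ in range(200):
    G = np.fft.fft2(g)
    G = amp_fourier * np.exp(1j * np.angle(G))  # impose Fourier magnitude
    g = np.fft.ifft2(G)
    g = amp_obj * np.exp(1j * np.angle(g))      # impose object magnitude

err = (np.linalg.norm(np.abs(np.fft.fft2(g)) - amp_fourier)
       / np.linalg.norm(amp_fourier))
print(err)  # Fourier-magnitude mismatch, decreasing with iterations
```

The residual is guaranteed not to increase from one iteration to the next, which is what makes this simple scheme a useful baseline before moving to the modern compressive-sensing formulations discussed in the talk.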

Michael Unser – GlobalBioIm

When being confronted with a new imaging problem, the common experience is that one has to reimplement (if not reinvent) the wheel (=forward model + optimization algorithm), which is very time consuming and also acts as a deterrent for engaging in new developments. This Matlab library aims at simplifying this process by decomposing the workflow onto smaller modules, including many reusable ones since several aspects such as regularization and the injection of prior knowledge are rather generic. It also capitalizes on the strong commonalities between the various image formation models that can be exploited to obtain fast, streamlined implementations.

Oliver Pust – High spatial resolution hyperspectral camera based on a continuously variable filter. Really cool concept of merging a continuously variable filter and multiple exposures to obtain hyperspectral information and even 3D images.

Seungwoo Shin – Exploiting a digital micromirror device for a multimodal approach combining optical diffraction tomography and 3D structured illumination microscopy. I am always happy to see cool implementations with DMDs. This is one of them. KAIST delivers.

We propose a multimodal system combining ODT and 3-D SIM to measure both 3-D RI and fluorescence distributions of samples with advantages including high spatiotemporal resolution as well as molecular specificity. By exploiting active illumination control of a digital micromirror device and two different illumination wavelengths, our setup allows to individually operate either ODT or 3-D SIM. To demonstrate the feasibility of our method, 3-D RI and fluorescence distributions of a planar cluster of fluorescent beads were reconstructed. To further demonstrate the applicability, a 3-D fluorescence and time-lapse 3-D RI distributions of fluorescent beads inside a HeLa cell were measured.

Post featured image extracted from here.