Weekly recap (29/04/2018)

This week we have a lot of interesting stuff:

Observing the cell in its native state: Imaging subcellular dynamics in multicellular organisms

Adaptive optics + light-sheet microscopy to see living cells inside the body of a zebrafish (the favorite fish of biologists!). Really impressive images that overcome the scattering caused by tissue. You can read more about the paper at Science and/or the Howard Hughes Medical Institute.


The Feynman Lectures on Physics online

I just read on OpenCulture that The Feynman Lectures on Physics have been made available online. Until now, only the first volume was available, but now you can also find volumes 2 and 3. Time to reread the classics…

Imaging Without Lenses

An interesting text appeared this week in American Scientist covering some aspects of the coming symbiosis between optics, computation, and electronics. We are already able to surpass the optical resolution limit, obtain phase information, or even image without traditional optical elements such as lenses. What’s coming next?

All-Optical Machine Learning Using Diffractive Deep Neural Networks

A very nice paper appeared on arXiv this week.

Xing Lin, Yair Rivenson, Nezih T. Yardimci, Muhammed Veli, Mona Jarrahi, Aydogan Ozcan

We introduce an all-optical Diffractive Deep Neural Network (D2NN) architecture that can learn to implement various functions after deep learning-based design of passive diffractive layers that work collectively. We experimentally demonstrated the success of this framework by creating 3D-printed D2NNs that learned to implement handwritten digit classification and the function of an imaging lens at terahertz spectrum. With the existing plethora of 3D-printing and other lithographic fabrication methods as well as spatial-light-modulators, this all-optical deep learning framework can perform, at the speed of light, various complex functions that computer-based neural networks can implement, and will find applications in all-optical image analysis, feature detection and object classification, also enabling new camera designs and optical components that can learn to perform unique tasks using D2NNs.

Imagine if Fourier transforms had been discovered before lenses, and then one day someone showed up with just a piece of glass and said “this can compute Fourier transforms at the speed of light”. Very cool read.
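Out of curiosity, here is a minimal numerical sketch of the building block (my own toy NumPy code with arbitrary grid numbers, not the authors’ design): a diffractive layer is just a passive phase mask followed by free-space propagation, which can be simulated with the angular spectrum method, and training a D2NN amounts to optimizing the phases of a stack of such masks.

```python
import numpy as np

# grid and wavelength loosely inspired by the THz regime of the paper;
# all numbers here are arbitrary choices for the sketch
N, dx, wl, z = 128, 400e-6, 750e-6, 0.03  # pixels, pitch [m], wavelength [m], layer spacing [m]

fx = np.fft.fftfreq(N, dx)
FX, FY = np.meshgrid(fx, fx)
arg = 1 - (wl * FX) ** 2 - (wl * FY) ** 2
H = np.where(arg > 0, np.exp(2j * np.pi / wl * z * np.sqrt(np.abs(arg))), 0)  # transfer function

def propagate(u):
    """Free-space propagation over distance z (angular spectrum method)."""
    return np.fft.ifft2(np.fft.fft2(u) * H)

def diffractive_layer(u, phase):
    """One D2NN layer: a passive phase mask followed by free-space propagation."""
    return propagate(u * np.exp(1j * phase))

u = np.ones((N, N), dtype=complex)  # plane-wave illumination
for phase in [np.random.uniform(0, 2 * np.pi, (N, N)) for _ in range(5)]:
    u = diffractive_layer(u, phase)  # training would optimize these masks by backprop
detector = np.abs(u) ** 2            # intensity read out at the detector plane
```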


I just stumbled upon this project while reading Lab on the Cheap. Seems like a very good resource if you plan to build a light-sheet microscope and do not want to spend $$$$ on Thorlabs.

Artificial Intelligence kits from Google, updated edition

Last year, AIY Projects launched to give makers the power to build AI into their projects with two do-it-yourself kits. We’re seeing continued demand for the kits, especially from the STEM audience where parents and teachers alike have found the products to be great tools for the classroom. The changing nature of work in the future means students may have jobs that haven’t yet been imagined, and we know that computer science skills, like analytical thinking and creative problem solving, will be crucial.

We’re taking the first of many steps to help educators integrate AIY into STEM lesson plans and help prepare students for the challenges of the future by launching a new version of our AIY kits. The Voice Kit lets you build a voice controlled speaker, while the Vision Kit lets you build a camera that learns to recognize people and objects (check it out here). The new kits make getting started a little easier with clearer instructions, a new app and all the parts in one box.

To make setup easier, both kits have been redesigned to work with the new Raspberry Pi Zero WH, which comes included in the box, along with the USB connector cable and pre-provisioned SD card. Now users no longer need to download the software image and can get running faster. The updated AIY Vision Kit v1.1 also includes the Raspberry Pi Camera v2.

Looking forward to seeing the price tag and the date they become available.

The week in papers (22/04/18)

As a way to keep posts going, I am starting a short recap of interesting papers being published (or discovered) every now and then. I will probably write longer posts about some of them in the future.

Let’s get this thing going:

Two papers using ‘centroid estimation’ to retrieve interesting information:

Extracting voice information using a high-speed camera

Mariko Akutsu, Yasuhiro Oikawa, and Yoshio Yamasaki, in The Journal of the Acoustical Society of America
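If I understand the trick correctly, it boils down to sub-pixel centroid tracking: the intensity-weighted center of a bright spot can be localized far below the pixel size, so the tiny vibrations of a surface show up directly in the centroid trace. A quick synthetic sketch (my own toy code, all numbers made up):

```python
import numpy as np

def centroid(frame):
    """Intensity-weighted centroid of a 2D frame (sub-pixel precision)."""
    ys, xs = np.indices(frame.shape)
    total = frame.sum()
    return (ys * frame).sum() / total, (xs * frame).sum() / total

# synthetic high-speed video: a Gaussian spot vibrating vertically at 440 Hz
fps, n_frames, size = 10000, 2000, 32
t = np.arange(n_frames) / fps
ys, xs = np.indices((size, size))
trace = []
for y0 in size / 2 + 0.3 * np.sin(2 * np.pi * 440 * t):  # sub-pixel motion of the spot
    frame = np.exp(-((ys - y0) ** 2 + (xs - size / 2) ** 2) / 8)
    trace.append(centroid(frame)[0])
# 'trace' now oscillates at 440 Hz: the recovered "voice" of the surface
```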

Kilohertz binary phase modulator for pulsed laser sources using a digital micromirror device

People at the Judkewitz lab tend to do really cool stuff. This time they have implemented a binary phase modulator using a DMD.

Kilohertz binary phase modulator for pulsed laser sources using a digital micromirror device,

M. Hoffmann et al., in Optics Letters


The controlled modulation of an optical wavefront is required for aberration correction, digital phase conjugation, or patterned photostimulation. For most of these applications, it is desirable to control the wavefront modulation at the highest rates possible. The digital micromirror device (DMD) presents a cost-effective solution to achieve high-speed modulation and often exceeds the speed of the more conventional liquid crystal spatial light modulator but is inherently an amplitude modulator. Furthermore, spatial dispersion caused by DMD diffraction complicates its use with pulsed laser sources, such as those used in nonlinear microscopy. Here we introduce a DMD-based optical design that overcomes these limitations and achieves dispersion-free high-speed binary phase modulation. We show that this phase modulation can be used to switch through binary phase patterns at the rate of 20 kHz in two-photon excitation fluorescence applications.

Controlling phase is of paramount interest in multiple optical scenarios. Doing it fast is very difficult, given that the spatial light modulators that are really good at modulating phase precisely tend to be slow (~hundreds of Hz). On the other hand, intensity modulators such as DMDs are very fast (~20 kHz), but they cannot directly modulate phase. There have been several workarounds based on the general idea of using DMDs to modulate phase; I remember a very nice paper by A. Mosk using groups of mirrors to encode the phase of a superpixel.

Here, they exploit the fact that a DMD reflects light in two different directions: a moving mirror introduces a phase shift in one of the reflection arms, achieving binary phase distributions at kHz refresh rates.
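As a tiny toy model of my reading of the idea (the actual optical layout in the paper is more subtle): if the light sent into the ‘off’ direction is delayed by half a wavelength and recombined with the ‘on’ arm, the binary amplitude pattern of the DMD becomes a binary phase pattern.

```python
import numpy as np

b = np.random.randint(0, 2, (64, 64))     # DMD mirror states: 1 = 'on' arm, 0 = 'off' arm
delta = np.pi                              # phase delay added to the 'off' arm by the mirror
field = b + (1 - b) * np.exp(1j * delta)   # recombined field, pixel by pixel
# 'field' takes the values +1 and -1: binary phase modulation at the DMD frame rate
```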



Seems like we are getting closer and closer to a high-efficiency method to modulate phase with DMDs.

Light transport and imaging through complex media & Photonics West 2018

The last ~20 days have been completely crazy. First, I went to a meeting organized by the Royal Society: Light transport and imaging through complex media. It was amazing: a beautiful place, incredible researchers, and a nice combination of signal processing and optical imaging. I am sure I will be looking out for future editions.

After that, I attended Photonics West. Both BiOS and OPTO were full of interesting talks. Scattering media, adaptive optics, DMDs, some compressive sensing… a fantastic week. There I talked about two recent works we did in Spain: balanced photodetection single-pixel imaging and phase imaging using a DMD and a lateral position detector. Both contributions were very well received, and I am happy with the feedback I got. So many new ideas… now I need some time to implement them! I plan on writing a bit here on the blog about the latter work, which has been published in the latest issue of Optica.


Some of the cool stuff I heard about:

Valentina Emiliani – Optical manipulation of neuronal circuits by optical wave front shaping. Very cool implementations combining multiple SLMs and temporal focusing to see how neurons work.

Richard Baraniuk – Phase retrieval: tradeoffs and a new algorithm. How to recover phase information from intensity measurements. Compressive sensing and inverse problems. Very interesting, and a really good speaker. It is difficult to find someone capable of explaining these concepts as easily as Richard.

Michael Unser – GlobalBioIm

When being confronted with a new imaging problem, the common experience is that one has to reimplement (if not reinvent) the wheel (=forward model + optimization algorithm), which is very time consuming and also acts as a deterrent for engaging in new developments. This Matlab library aims at simplifying this process by decomposing the workflow onto smaller modules, including many reusable ones since several aspects such as regularization and the injection of prior knowledge are rather generic. It also capitalizes on the strong commonalities between the various image formation models that can be exploited to obtain fast, streamlined implementations.

Oliver Pust – High spatial resolution hyperspectral camera based on a continuously variable filter. Really cool concept of merging a continuously variable filter and multiple exposures to obtain hyperspectral information and even 3D images.

Seungwoo Shin – Exploiting a digital micromirror device for a multimodal approach combining optical diffraction tomography and 3D structured illumination microscopy. I am always happy to see cool implementations with DMDs. This is one of them. KAIST delivers.

We propose a multimodal system combining ODT and 3-D SIM to measure both 3-D RI and fluorescence distributions of samples with advantages including high spatiotemporal resolution as well as molecular specificity. By exploiting active illumination control of a digital micromirror device and two different illumination wavelengths, our setup allows to individually operate either ODT or 3-D SIM. To demonstrate the feasibility of our method, 3-D RI and fluorescence distributions of a planar cluster of fluorescent beads were reconstructed. To further demonstrate the applicability, a 3-D fluorescence and time-lapse 3-D RI distributions of fluorescent beads inside a HeLa cell were measured.

Post featured image extracted from here.

Optical companding

Christmas has come and gone, and I am still trying to catch up with some papers I’ve read in the last few months.

The guys at UCLA keep doing impressive stuff. The first time I saw something from them was their work in Nature on ultrafast optical imaging (woah!).

This time they have proposed a way to improve the digitization of electrical signals. Living in the time of the ‘great convergence’, we are increasingly aware that optics, electronics, and computer science are closely related. Nowadays, in order to acquire optical information, one almost always has to deal with electrical signals in the analog domain, which need to be digitized before a computer can work with them. The usual tool for this is the analog-to-digital converter (ADC), an instrument that receives an analog electrical signal and converts it into a digital one (a number representing the voltage or current you are working with). This quantization can be problematic, given that the full dynamic range of the signal (from its maximum to its minimum value) has to be divided into a finite number of steps (bins). If the signal presents very small variations, the bins might not be fine enough to resolve the details. One can try to recover those details by amplifying the signal, but then its larger values might exceed the maximum value measurable by the ADC, causing saturation.

Jalali’s group proposes optical companding to overcome this issue. The fundamental idea is to use nonlinear optical processes to compress the high-amplitude parts of the signal while amplifying the small-amplitude values at the same time. After that, a traditional ADC digitizes the signal, and knowledge of the optical compressor makes it possible to restore the original signal with great accuracy.
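To build some intuition, here is a purely numerical analogy (an electronic-domain toy with a tanh compressor, not the nonlinear optical process used in the paper). Companding trades precision at large amplitudes for precision at small ones, so the weak features survive quantization much better:

```python
import numpy as np

def adc(x, bits=4):
    """Uniform quantizer modeling an ADC with a full scale of +/-1."""
    levels = 2 ** (bits - 1)
    return np.round(np.clip(x, -1, 1) * levels) / levels

t = np.linspace(0, 1, 2000)
signal = 0.9 * np.sin(2 * np.pi * 3 * t) + 0.02 * np.sin(2 * np.pi * 40 * t)  # strong + weak tone

direct = adc(signal)                               # plain digitization
gain = 1.5
compressed = np.tanh(gain * signal)                # compression stage (optical in the paper)
digitized = np.clip(adc(compressed), -0.99, 0.99)  # clip guards against arctanh(+/-1)
restored = np.arctanh(digitized) / gain            # invert the known compressor digitally

weak = np.abs(signal) < 0.2                        # region where the signal is small
print("RMS error on the weak parts, direct:   ", np.sqrt(np.mean((signal - direct)[weak] ** 2)))
print("RMS error on the weak parts, companded:", np.sqrt(np.mean((signal - restored)[weak] ** 2)))
```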

Optical Companding,

Yunshan Jiang, Bahram Jalali, submitted on 29 Dec 2017, https://arxiv.org/abs/1801.00007

(featured image extracted from Fig. 1 of the manuscript)

We introduce a new nonlinear analog optical computing concept that compresses the signal’s dynamic range and realizes non-uniform quantization that reshapes and improves the signal-to-noise ratio in the digital domain.

Focusing light through dynamical samples using fast continuous wavefront optimization

The guys at LKB keep going deeper into turbid media. This time, they have done it really fast. By using a phase spatial light modulator and with the help of an FPGA card, they were able to focus light through a scattering medium at an optimization rate of ~4 kHz.

This addresses a common problem when you use the transmission matrix approach with biological systems: living systems evolve, so the matrix you measure stops being valid after a really short time.

For me, this is a really nice technical implementation (and not an easy one to pull off), merging electronics, computer science, and optics to tackle a well-defined biological problem.

Focusing light through dynamical samples using fast continuous wavefront optimization,

B. Blochet et al., in Optics Letters

(featured image extracted from Fig. 1 of the manuscript)


We describe a fast continuous optimization wavefront shaping system able to focus light through dynamic scattering media. A micro-electro-mechanical system-based spatial light modulator, a fast photodetector, and field programmable gate array electronics are combined to implement a continuous optimization of a wavefront with a single-mode optimization rate of 4.1 kHz. The system performances are demonstrated by focusing light through colloidal solutions of TiO2 particles in glycerol with tunable temporal stability.
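As a toy model of what such a system does, here is a generic continuous sequential optimization loop in NumPy (my own sketch, not the authors’ FPGA implementation): cycle through the modulator modes over and over, test a few phase values per mode, and keep the best one.

```python
import numpy as np

rng = np.random.default_rng(0)
n_modes = 64
# one output speckle: a single row of the medium's transmission matrix
tm = (rng.normal(size=n_modes) + 1j * rng.normal(size=n_modes)) / np.sqrt(2)

def focus_intensity(phases):
    """Intensity at the target after applying 'phases' on the modulator."""
    return np.abs(np.sum(np.exp(1j * phases) * tm)) ** 2

phases = np.zeros(n_modes)
test_phases = np.array([0, np.pi / 2, np.pi, 3 * np.pi / 2])
for sweep in range(3):          # the loop never needs to stop: it can track slow decorrelation
    for m in range(n_modes):
        scores = []
        for p in test_phases:
            phases[m] = p
            scores.append(focus_intensity(phases))
        phases[m] = test_phases[int(np.argmax(scores))]

# mean intensity with random phases is ~n_modes, so this ratio is the enhancement factor
print("enhancement:", focus_intensity(phases) / n_modes)
```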


Realization of hybrid compressive imaging strategies

Recently I have been reading a lot about compressive sensing strategies. One of the things we always want when working with a single-pixel architecture is to project as few masks as possible, because projection is the longest part of the whole acquisition procedure (and it gets longer and longer as you increase the spatial resolution of your images).

In the past, several strategies have been implemented to reduce that number of projections. From going fully random to partially scanning a basis (at random, or around the low-frequency region), each approach presents its own benefits and a greater or lesser speed gain.

In this work by the group of K. F. Kelly, they explore a different approach. Instead of choosing one measurement basis and designing a sensing strategy (picking random elements, centering on the low-frequency part of the basis, or a mix), they create a measurement set by merging different functions, which they call hybrid patterns. The basic idea is to choose a small number of patterns that work well for recovering the low-frequency content of natural images, plus some other patterns that are good at recovering high-frequency content. The novel thing here is that the patterns are not required to belong to the same orthogonal basis, so the measurement set can be designed very carefully. This provides very good quality results with a low number of projections.

Another thing I liked a lot was the principal component analysis (PCA) part of the paper. Basically, they gathered a collection of natural images and generated an orthogonal basis using PCA. This leads me to think of PCA as a way of obtaining orthogonal bases in which objects have their sparsest representation (maybe I am wrong about that).

Realization of hybrid compressive imaging strategies,

Y. Li et al., in Journal of the Optical Society of America A

(featured image extracted from Fig. 2 of the manuscript)


The tendency of natural scenes to cluster around low frequencies is not only useful in image compression, it also can prove advantageous in novel infrared and hyperspectral image acquisition. In this paper, we exploit this signal model with two approaches to enhance the quality of compressive imaging as implemented in a single-pixel compressive camera and compare these results against purely random acquisition. We combine projection patterns that can efficiently extract the model-based information with subsequent random projections to form the hybrid pattern sets. With the first approach, we generate low-frequency patterns via a direct transform. As an alternative, we also used principal component analysis of an image library to identify the low-frequency components. We present the first (to the best of our knowledge) experimental validation of this hybrid signal model on real data. For both methods, we acquire comparable quality of reconstructions while acquiring only half the number of measurements needed by traditional random sequences. The optimal combination of hybrid patterns and the effects of noise on image reconstruction are also discussed.

Really nice to see that PCA gives functions very similar to the DCT basis. This suggests that compressing images with the DCT is indeed a good choice.
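This is easy to check numerically. Here is a quick sketch of that experiment (my own toy version; it assumes scikit-image is installed, just to grab a sample photograph):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view
from skimage import data  # used only to grab a sample natural image

# gather 8x8 patches from a natural image
img = data.camera().astype(float)
patches = sliding_window_view(img, (8, 8))[::4, ::4].reshape(-1, 64)
patches -= patches.mean(axis=0)

# PCA basis = eigenvectors of the patch covariance matrix, sorted by variance
cov = patches.T @ patches / len(patches)
eigvals, eigvecs = np.linalg.eigh(cov)
pca_basis = eigvecs[:, ::-1].T.reshape(-1, 8, 8)

# 2D DCT-II basis functions for comparison
k = np.arange(8)
C = np.cos(np.pi * np.outer(k, 2 * k + 1) / 16)  # C[u, n] = 1D DCT-II basis function u
dct_basis = np.stack([np.outer(C[u], C[v]) for u in range(8) for v in range(8)])
# plotting pca_basis next to dct_basis shows strikingly similar patterns
```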

Deep learning microscopy

This week a new paper by the group led by A. Ozcan appeared in Optica.

Deep learning microscopy,

Y. Rivenson et al., in Optica

(featured image extracted from Fig. 6 of the supplement)


We demonstrate that a deep neural network can significantly improve optical microscopy, enhancing its spatial resolution over a large field of view and depth of field. After its training, the only input to this network is an image acquired using a regular optical microscope, without any changes to its design. We blindly tested this deep learning approach using various tissue samples that are imaged with low-resolution and wide-field systems, where the network rapidly outputs an image with better resolution, matching the performance of higher numerical aperture lenses and also significantly surpassing their limited field of view and depth of field. These results are significant for various fields that use microscopy tools, including, e.g., life sciences, where optical microscopy is considered as one of the most widely used and deployed techniques. Beyond such applications, the presented approach might be applicable to other imaging modalities, also spanning different parts of the electromagnetic spectrum, and can be used to design computational imagers that get better as they continue to image specimens and establish new transformations among different modes of imaging.

Using pairs of images obtained with high and low numerical aperture microscope objectives, they have trained a deep neural network to create high-resolution images from low-resolution ones. Moreover, the final result keeps the field of view of the input image, thus achieving one of the major goals of optical microscopy: high resolution and a wide field of view at the same time (while using a low numerical aperture objective).
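The training setup boils down to supervised regression on registered image pairs. A minimal PyTorch-flavored sketch (my own toy stand-in with a made-up tiny network and random tensors, nothing like the actual architecture of the paper):

```python
import torch
import torch.nn as nn

# toy stand-in for the paper's CNN: any small image-to-image network will do here
model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

low_na = torch.rand(8, 1, 64, 64)   # stand-in for registered low-NA inputs
high_na = torch.rand(8, 1, 64, 64)  # stand-in for the matching high-NA targets

opt.zero_grad()
loss = nn.functional.mse_loss(model(low_na), high_na)  # penalize the gap to the high-NA image
loss.backward()
opt.step()
```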

I really liked the supplement, where they give information about the neural network (which is really useful for a newbie like me).

Fig. 1 of the supplement: details on how to train the neural network.


Imaging through glass diffusers using densely connected convolutional networks

I just found a new paper by the group of G. Barbastathis at MIT.

Imaging through glass diffusers using densely connected convolutional networks,

S. Li et al., submitted on 18 Nov 2017, https://arxiv.org/abs/1711.06810

(featured image from Fig. 3 of the manuscript)


Computational imaging through scatter generally is accomplished by first characterizing the scattering medium so that its forward operator is obtained; and then imposing additional priors in the form of regularizers on the reconstruction functional so as to improve the condition of the originally ill-posed inverse problem. In the functional, the forward operator and regularizer must be entered explicitly or parametrically (e.g. scattering matrices and dictionaries, respectively.) However, the process of determining these representations is often incomplete, prone to errors, or infeasible. Recently, deep learning architectures have been proposed to instead learn both the forward operator and regularizer through examples. Here, we propose for the first time, to our knowledge, a convolutional neural network architecture called “IDiffNet” for the problem of imaging through diffuse media and demonstrate that IDiffNet has superior generalization capability through extensive tests with well-calibrated diffusers. We found that the Negative Pearson Correlation Coefficient loss function for training is more appropriate for spatially sparse objects and strong scattering conditions. Our results show that the convolutional architecture is robust to the choice of prior, as demonstrated by the use of multiple training and testing object databases, and capable of achieving higher space-bandwidth product reconstructions than previously reported.

Basically, they have trained a neural network to ‘invert’ the path that light travels through a scattering medium, and are thus able to recover images hidden behind glass diffusers. It may sound simple, but thousands of scientists are trying to see objects hidden by scattering media. We are witnessing the first steps of the combination of neural networks, machine learning, and optics to go beyond the physical constraints imposed by nature (seeing inside our bodies with visible light, seeing through fog, etc.).
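The Negative Pearson Correlation Coefficient loss mentioned in the abstract is easy to write down; here is a generic NumPy version (my own implementation, not the authors’ code):

```python
import numpy as np

def npcc(pred, target):
    """Negative Pearson correlation between two images; -1 means a perfect match."""
    p = pred - pred.mean()
    t = target - target.mean()
    return -(p * t).sum() / np.sqrt((p ** 2).sum() * (t ** 2).sum())
```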

Fig. 7 of the paper, with some nice results.


Experimental comparison of single-pixel imaging algorithms

I just read on arXiv.org that L. Bian and his colleagues have made a cool comparison between several ways of performing single-pixel imaging. They tested the performance of several recovery procedures, some quite familiar and others not so well established. I find both Table 1 and Fig. 7 extremely interesting. The table sums up really well the different reconstruction approaches that can be used in single-pixel imaging (with or without compressive sensing). The figure points out something that experience has taught me: every problem you try to solve usually needs a specific solver if you want good and fast results (which is extremely important when you start to work with BIG objects, as I plan to write about here soon).
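As a minimal illustration of the measurement model all these algorithms share, here is a toy single-pixel acquisition plus a plain conjugate-gradient least-squares reconstruction (a generic CGD on the normal equations, not the code released with the paper; without a sparsity prior it will not match the CS solvers):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 32 * 32                  # number of image pixels
m = n // 2                   # number of projected patterns (subsampled)
A = rng.integers(0, 2, (m, n)).astype(float)    # binary patterns, e.g. displayed on a DMD
x_true = np.zeros(n)
x_true[rng.choice(n, 40, replace=False)] = 1.0  # a sparse scene
y = A @ x_true               # one photodiode value per pattern

# conjugate gradients on the normal equations A^T A x = A^T y
x = np.zeros(n)
r = A.T @ y                  # initial residual (since x = 0)
p = r.copy()
for _ in range(100):
    Ap = A.T @ (A @ p)
    alpha = (r @ r) / (p @ Ap)
    x += alpha * p
    r_new = r - alpha * Ap
    p = r_new + (r_new @ r_new) / (r @ r) * p
    r = r_new
# without a sparsity prior, x is only a least-squares estimate of x_true
```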

Experimental comparison of single-pixel imaging algorithms,

L. Bian et al., last revised 24 Oct 2017, https://arxiv.org/abs/1707.03164

(featured image extracted from Fig.7 of the manuscript)


Single-pixel imaging (SPI) is a novel technique capturing 2D images using a photodiode, instead of conventional 2D array sensors. SPI owns high signal-to-noise ratio, wide spectrum range, low cost, and robustness to light scattering. Various algorithms have been proposed for SPI reconstruction, including the linear correlation methods, the alternating projection method (AP), and the compressive sensing based methods. However, there has been no comprehensive review discussing respective advantages, which is important for SPI’s further applications and development. In this paper, we reviewed and compared these algorithms in a unified reconstruction framework. Besides, we proposed two other SPI algorithms including a conjugate gradient descent based method (CGD) and a Poisson maximum likelihood based method. Both simulations and experiments validate the following conclusions: to obtain comparable reconstruction accuracy, the compressive sensing based total variation regularization method (TV) requires the least measurements and consumes the least running time for small-scale reconstruction; the CGD and AP methods run fastest in large-scale cases; the TV and AP methods are the most robust to measurement noise. In a word, there are trade-offs between capture efficiency, computational complexity and robustness to noise among different SPI algorithms. We have released our source code for non-commercial use.