Wavefront correction in two-photon microscopy with a multi-actuator adaptive lens

The group led by P. Artal at the University of Murcia recently published an interesting paper on adaptive optics using an adaptive lens. In any real scenario, imperfections in the optical elements of your system, or in the very objects you want to image, introduce optical aberrations into the pictures you obtain. Usually these aberrations only reduce image quality slightly (introducing a bit of defocus or some astigmatism), but in the worst case they can render the results completely useless.

To overcome this problem, liquid crystal spatial light modulators or deformable mirrors are usually inserted into optical systems to apply phase corrections to the light passing through, cancelling the phase of these aberrations and thus restoring image quality. However, these devices present several problems. Even though both spatial light modulators and deformable mirrors can correct the aberrations mentioned above, they work in a reflection configuration, which adds complexity to the optical system. Also, liquid crystal spatial light modulators are sensitive to polarization, usually have low reflectance, and tend to be slow.

To tackle those obstacles, the authors used an adaptive lens in a two-photon microscope to perform the adaptive optics procedure. Adaptive lenses are increasingly being used for aberration correction. In contrast to both spatial light modulators and deformable mirrors, they work in transmission and present very low losses. Moreover, they can introduce low- and mid-order aberrations at refresh rates of almost 1 kHz. The working principle can be seen in this figure:

Schematics of the working principle of an adaptive lens. The lens is formed by two thin glass windows with a liquid in between. Each actuator is driven by an electrical signal, which deforms the glass windows, generating different shapes and changing the phase of the wavefront passing through the lens. Figure extracted from Stefano Bonora et al., “Wavefront correction and high-resolution in vivo OCT imaging with an objective integrated multi-actuator adaptive lens,” Opt. Express 23, 21931-21941 (2015)

In the paper, they show that this device, used in a multi-photon microscope, can deliver results comparable to the traditional spatial light modulator approach, with all the benefits mentioned above.

Wavefront correction in two-photon microscopy with a multi-actuator adaptive lens

by Juan M. Bueno et al., at Optics Express

Abstract:

A multi-actuator adaptive lens (AL) was incorporated into a multi-photon (MP) microscope to improve the quality of images of thick samples. Through a hill-climbing procedure the AL corrected for the specimen-induced aberrations enhancing MP images. The final images hardly differed when two different metrics were used, although the sets of Zernike coefficients were not identical. The optimized MP images acquired with the AL were also compared with those obtained with a liquid-crystal-on-silicon spatial light modulator. Results have shown that both devices lead to similar images, which corroborates the usefulness of this AL for MP imaging.
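The hill-climbing procedure mentioned in the abstract is easy to picture in code. Below is a minimal sketch of the idea (my own illustration, not the authors’ implementation): perturb one Zernike coefficient at a time, keep the change if the image-quality metric improves, and discard it otherwise. The `apply_zernike_to_lens` and `acquire_mp_image` callables are hypothetical placeholders for the real hardware interfaces, and the metric is simply the total two-photon signal, which drops when aberrations blur the focus.

```python
import numpy as np

def sharpness(img):
    # A common MP-microscopy metric: total two-photon signal.
    # Aberrations lower the focal intensity and hence this value.
    return img.sum()

def hill_climb(apply_zernike_to_lens, acquire_mp_image,
               n_modes=8, step=0.05, n_iters=200):
    """Sequential hill climbing over Zernike coefficients.

    apply_zernike_to_lens(coeffs): hypothetical driver call that sends
        the coefficient vector to the adaptive lens.
    acquire_mp_image(): hypothetical call that grabs one MP frame.
    """
    coeffs = np.zeros(n_modes)
    apply_zernike_to_lens(coeffs)
    best = sharpness(acquire_mp_image())

    for it in range(n_iters):
        mode = it % n_modes               # cycle through the modes
        for delta in (+step, -step):      # try a step in each direction
            trial = coeffs.copy()
            trial[mode] += delta
            apply_zernike_to_lens(trial)
            metric = sharpness(acquire_mp_image())
            if metric > best:             # keep the change only if it helps
                coeffs, best = trial, metric
                break
        apply_zernike_to_lens(coeffs)     # restore the best-known correction
    return coeffs
```

Note that this loop never needs a wavefront sensor: the image itself is the feedback signal, which is what makes the approach attractive for thick samples.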

Experimental results showing the improvement in the images obtained with the adaptive lens system. Figure 3 from the paper: Juan M. Bueno et al., “Wavefront correction in two-photon microscopy with a multi-actuator adaptive lens,” Opt. Express 26, 14278-14287 (2018)


Evading scientific stalemates

This week I have been thinking about a strange thing that happened in my research group. One day during my MSc, my colleagues and I were discussing some lab results. A small change to our experimental setup had provided much better images than the ones we were getting up to that point. This change, even though small, was puzzling at first; it was counter-intuitive. We quickly realized why it was improving our measurements. However, that was not the important thing. By making that small change, our system, which at the time was simply an imaging system, seemed able to tackle much more difficult experimental scenarios. We thought that we had discovered a new property of the systems we were developing. We were right.

After that initial idea, we quickly designed some experiments to verify our guesses. Everything seemed to work, but we were not 100% sure why. We had some general ideas, some intuitions. Our plan was to keep running experiments while we figured out all the details. We published some papers and started thinking big: this approach could be applied to real scenarios. We started collaborating with other groups, and in the end we developed a real-life system with them. That was published in a very good journal.

However, even though we eventually worked out the details that bugged us at the beginning, we were never able to build a model that allowed us to predict, or at least conjecture about, the limits of our technique.

Fast forward ~3 years to today. We have a meeting planned for next week to discuss why our latest experiments are not providing the results we expected. After months of work by PhD (and MSc) students, we are at a stalemate. Some days it seems that we are close to changing something in the lab that will yield the expected improvement. Some days, after hundreds of trials, everything remains the same. Given the lack of a physical model to hold on to, the group is searching blindfolded, and I don’t think this is working at all.

If I had to make a prediction right now, I would say that the research line is dead (long live the research line!). It shouldn’t be dramatic; it is just science (sometimes it works, sometimes it doesn’t). However, during all this process, several students joined the group and started their MSc’s and PhD’s on the topic. For them, this could be dramatic. During all this time, I have been working on quite a lot of different things. I missed some publications, which hurt my CV, but when something did not work, I always had something else to try. I think I have a broader view of my field because of that. In the end, I have published more than enough to write my thesis.

I guess that’s good practice: never put all your eggs in the same basket. You need hundreds of ideas to get a good one. Take your time to explore them, and build strong foundations that new people can build upon without fear of everything falling down.

Weekly recap (29/04/2018)

This week we have a lot of interesting stuff:

Observing the cell in its native state: Imaging subcellular dynamics in multicellular organisms

Adaptive Optics + Light Sheet Microscopy to see living cells inside the body of a zebrafish (the favorite fish of biologists!). Really impressive images overcoming the scattering caused by tissue. You can read more about the paper at Nature and/or the Howard Hughes Medical Institute.



The Feynman Lectures on Physics online

I just read on OpenCulture that The Feynman Lectures on Physics have been made available online. Until now, only the first volume was published, but now you can also find volumes 2 and 3. Time to reread the classics…


Imaging Without Lenses

An interesting piece appeared this week in American Scientist covering some aspects of the coming symbiosis between optics, computation, and electronics. We are already able to overcome optical resolution limits, obtain phase information, or even image without traditional optical elements such as lenses. What’s coming next?


All-Optical Machine Learning Using Diffractive Deep Neural Networks

A very nice paper appeared on arXiv this week.

Xing Lin, Yair Rivenson, Nezih T. Yardimci, Muhammed Veli, Mona Jarrahi, Aydogan Ozcan

We introduce an all-optical Diffractive Deep Neural Network (D2NN) architecture that can learn to implement various functions after deep learning-based design of passive diffractive layers that work collectively. We experimentally demonstrated the success of this framework by creating 3D-printed D2NNs that learned to implement handwritten digit classification and the function of an imaging lens at terahertz spectrum. With the existing plethora of 3D-printing and other lithographic fabrication methods as well as spatial-light-modulators, this all-optical deep learning framework can perform, at the speed of light, various complex functions that computer-based neural networks can implement, and will find applications in all-optical image analysis, feature detection and object classification, also enabling new camera designs and optical components that can learn to perform unique tasks using D2NNs.

Imagine if Fourier transforms had been discovered before lenses, and then one day someone came up with just a piece of glass and said, “this can compute FTs at the speed of light”. A very cool read.
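Numerically, that thought experiment is just the Fraunhofer relation: the field at the focal plane of a thin lens is, up to scaling factors, the 2D Fourier transform of the field at its pupil. A minimal numpy sketch, assuming a plane wave hitting a circular aperture:

```python
import numpy as np

N = 512
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)

# Field right after a circular aperture illuminated by a plane wave
pupil = (X**2 + Y**2 <= 0.25**2).astype(float)

# What the lens computes "at the speed of light": a 2D Fourier transform.
# The focal-plane intensity is the familiar Airy pattern.
focal_field = np.fft.fftshift(np.fft.fft2(pupil))
focal_intensity = np.abs(focal_field) ** 2
focal_intensity /= focal_intensity.max()
```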


OPEN SPIN MICROSCOPY

I just stumbled upon this project while reading Lab on the Cheap. It seems like a very good resource if you plan to build a light-sheet microscope and do not want to spend $$$$ on Thorlabs.


Artificial Intelligence kits from Google, updated edition

Last year, AIY Projects launched to give makers the power to build AI into their projects with two do-it-yourself kits. We’re seeing continued demand for the kits, especially from the STEM audience where parents and teachers alike have found the products to be great tools for the classroom. The changing nature of work in the future means students may have jobs that haven’t yet been imagined, and we know that computer science skills, like analytical thinking and creative problem solving, will be crucial.

We’re taking the first of many steps to help educators integrate AIY into STEM lesson plans and help prepare students for the challenges of the future by launching a new version of our AIY kits. The Voice Kit lets you build a voice controlled speaker, while the Vision Kit lets you build a camera that learns to recognize people and objects (check it out here). The new kits make getting started a little easier with clearer instructions, a new app and all the parts in one box.

To make setup easier, both kits have been redesigned to work with the new Raspberry Pi Zero WH, which comes included in the box, along with the USB connector cable and pre-provisioned SD card. Now users no longer need to download the software image and can get running faster. The updated AIY Vision Kit v1.1 also includes the Raspberry Pi Camera v2.

Looking forward to seeing the price tag and the date they become available.

The week in papers (22/04/18)

As a way to keep posts coming, I am starting a short recap of interesting papers being published (or being discovered) every now and then. I will probably write longer posts about some of them in the future.

Let’s get this thing going:

Two papers using ‘centroid estimation’ to retrieve interesting information:

Extract voice information using high-speed camera

Mariko Akutsu, Yasuhiro Oikawa, and Yoshio Yamasaki, at The Journal of the Acoustical Society of America
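The shared trick is sub-pixel centroid estimation: the intensity-weighted center of mass of a spot can be localized to a small fraction of a pixel, so tiny displacements (say, of a surface vibrating with sound) become measurable at high frame rates. A minimal sketch:

```python
import numpy as np

def centroid(img):
    """Intensity-weighted center of mass, in (row, col) pixel units."""
    img = img.astype(float)
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (ys * img).sum() / total, (xs * img).sum() / total

# Toy example: a Gaussian spot centered slightly off the pixel grid
ys, xs = np.indices((64, 64))
spot = np.exp(-((xs - 32.3)**2 + (ys - 31.7)**2) / (2 * 3.0**2))
print(centroid(spot))  # ~ (31.7, 32.3), well below one pixel of error
```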

Kilohertz binary phase modulator for pulsed laser sources using a digital micromirror device

The people at the Judkewitz lab tend to do really cool stuff. This time they have implemented a binary phase modulator using a DMD.

Kilohertz binary phase modulator for pulsed laser sources using a digital micromirror device,

M. Hoffmann et al, at Optics Letters

Abstract:

The controlled modulation of an optical wavefront is required for aberration correction, digital phase conjugation, or patterned photostimulation. For most of these applications, it is desirable to control the wavefront modulation at the highest rates possible. The digital micromirror device (DMD) presents a cost-effective solution to achieve high-speed modulation and often exceeds the speed of the more conventional liquid crystal spatial light modulator but is inherently an amplitude modulator. Furthermore, spatial dispersion caused by DMD diffraction complicates its use with pulsed laser sources, such as those used in nonlinear microscopy. Here we introduce a DMD-based optical design that overcomes these limitations and achieves dispersion-free high-speed binary phase modulation. We show that this phase modulation can be used to switch through binary phase patterns at the rate of 20 kHz in two-photon excitation fluorescence applications.

Controlling phase is of paramount interest in multiple optical scenarios. Doing it fast is very difficult, given that the spatial light modulators that are really good at modulating phase precisely tend to be slow (~hundreds of Hz). On the other hand, intensity modulators such as DMDs are very fast (~20 kHz), but they cannot directly modulate phase. There have been several workarounds based on the general idea of using DMDs to modulate phase. I remember a very nice paper from A. Mosk’s group, using groups of mirrors to encode the phase of a superpixel.

Here, they use the fact that DMDs reflect light in two different directions and introduce a phase shift with a moving mirror in one of the reflection directions, achieving binary phase distributions at kHz refresh rates.
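Even with only two phase levels (0 and π, i.e., ±1 field contributions) you can do useful wavefront shaping, such as focusing through a scattering medium: just flip the sign of each segment so that its contribution adds constructively at the target. A toy transmission-matrix illustration of why binary phase is enough (my own sketch, not the paper’s experiment):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1024                                 # number of modulator segments

# One row of a random transmission matrix: how each input segment
# couples to the chosen output (target) mode of the scattering medium.
t = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)

# Binary phase mask (+1 or -1, i.e., phase 0 or pi): flip the sign of
# each segment so its contribution has a positive real part and
# interferes constructively at the target.
mask = np.sign(t.real)

I_focus = np.abs(t @ mask) ** 2          # optimized binary mask
I_ref = np.sum(np.abs(t) ** 2)           # mean focus intensity, random mask
print("enhancement:", I_focus / I_ref)   # ~ N/pi for binary phase
```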


It seems like we are getting closer and closer to a high-efficiency method to modulate phase with DMDs.

Light transport and imaging through complex media & Photonics West 2018

The last ~20 days have been completely crazy. First, I went to a meeting organized by the Royal Society: Light transport and imaging through complex media. It was amazing: a beautiful place, incredible researchers, and a nice combination of signal processing and optical imaging. I am sure I will be looking out for future editions.

After that, I attended Photonics West. Both BiOS and OPTO were full of interesting talks: scattering media, adaptive optics, DMDs, some compressive sensing… a fantastic week. There I talked about two recent works we did in Spain: balanced photodetection single-pixel imaging, and phase imaging using a DMD and a lateral position detector. Both contributions were very well received, and I am happy with the feedback I got. So many new ideas… now I need some time to implement them! I plan on writing a bit here on the blog about the latter work, which has been published in the latest issue of Optica.


Some of the cool stuff I heard about:

Valentina Emiliani – Optical manipulation of neuronal circuits by optical wavefront shaping. Very cool implementations combining multiple SLMs and temporal focusing to see how neurons work.

Richard Baraniuk – Phase retrieval: tradeoffs and a new algorithm. How to recover phase information from intensity measurements, with compressive sensing and inverse problems. Very interesting, and a really good speaker. It is difficult to find someone capable of explaining these concepts as clearly as Richard does.

Michael Unser – GlobalBioIm

When being confronted with a new imaging problem, the common experience is that one has to reimplement (if not reinvent) the wheel (=forward model + optimization algorithm), which is very time consuming and also acts as a deterrent for engaging in new developments. This Matlab library aims at simplifying this process by decomposing the workflow onto smaller modules, including many reusable ones since several aspects such as regularization and the injection of prior knowledge are rather generic. It also capitalizes on the strong commonalities between the various image formation models that can be exploited to obtain fast, streamlined implementations.

Oliver Pust – High spatial resolution hyperspectral camera based on a continuously variable filter. A really cool concept: merging a continuously variable filter and multiple exposures to obtain hyperspectral information and even 3D images.

Seungwoo Shin – Exploiting a digital micromirror device for a multimodal approach combining optical diffraction tomography and 3D structured illumination microscopy. I am always happy to see cool implementations with DMDs, and this is one of them. KAIST delivers.

We propose a multimodal system combining ODT and 3-D SIM to measure both 3-D RI and fluorescence distributions of samples with advantages including high spatiotemporal resolution as well as molecular specificity. By exploiting active illumination control of a digital micromirror device and two different illumination wavelengths, our setup allows to individually operate either ODT or 3-D SIM. To demonstrate the feasibility of our method, 3-D RI and fluorescence distributions of a planar cluster of fluorescent beads were reconstructed. To further demonstrate the applicability, a 3-D fluorescence and time-lapse 3-D RI distributions of fluorescent beads inside a HeLa cell were measured.

Post featured image extracted from here.

Optical companding

Christmas has come and gone, and I am still trying to catch up with some papers I’ve read over the last months.

The guys at UCLA keep doing impressive stuff. The first time I saw something from them was their work in Nature on ultrafast optical imaging (woah!).

This time they have proposed a way to improve the digitization of electrical signals. Living in the time of the ‘great convergence’, we are ever more aware that Optics, Electronics, and Computer Science are closely related. Nowadays, in order to acquire optical information, one almost always has to deal with electrical signals in the analog domain, which need to be digitized before a computer can work with them. The most common tools for this are analog-to-digital converters (ADCs). These instruments receive an analog electrical signal and convert it into a digital one (a number representing the voltage or current you are working with). This quantization sometimes proves problematic, given that the full dynamic range of the signal (from its maximum to its minimum value) has to be divided into a finite number of steps (bins). If the signal presents very low variations, the bins might not be small enough to resolve the details. One can try to recover those details by amplifying the signal, but then its larger values might exceed the maximum value measurable by the ADC, causing saturation.

Jalali’s group proposes Optical Companding to overcome this issue. The fundamental idea is to use nonlinear optical processes to compress the high-amplitude parts of the signal while amplifying the small-amplitude values at the same time. After that, a traditional ADC digitizes the signal, and knowledge of the optical compressor makes it possible to restore the original signal with great accuracy.
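The same trick has existed for decades in telephony as μ-law companding, which makes for an easy numerical illustration. The sketch below is the electronic analogy (a logarithmic compressor in software), not the optical nonlinearity used in the paper: a quiet signal that falls below one LSB of a plain 8-bit ADC survives digitization once it is companded.

```python
import numpy as np

def quantize(x, bits=8):
    """Uniform quantizer over [-1, 1], like an ideal ADC."""
    step = 2.0 / 2**bits
    return np.clip(np.round(x / step) * step, -1, 1)

def mu_compress(x, mu=255.0):
    # Logarithmic compressor: expands small values, squeezes large ones
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_expand(y, mu=255.0):
    # Exact inverse of mu_compress, applied after digitization
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

t = np.linspace(0, 1, 2000)
loud = 0.9 * np.sin(2 * np.pi * 50 * t)    # fills the ADC range
quiet = 3e-3 * np.sin(2 * np.pi * 50 * t)  # below one 8-bit LSB (~8e-3)

for name, x in [("loud", loud), ("quiet", quiet)]:
    direct = quantize(x)                        # plain ADC
    comp = mu_expand(quantize(mu_compress(x)))  # compress -> ADC -> expand
    print(name, "direct rms error:", np.std(x - direct),
          "| companded:", np.std(x - comp))
```

The non-uniform quantization trades accuracy on the large values for accuracy on the small ones, which is exactly the reshaping of the signal-to-noise ratio that the abstract below refers to.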

Optical Companding,

Yunshan Jiang, Bahram Jalali, submitted on 29 Dec 2017, https://arxiv.org/abs/1801.00007

(featured image extracted from Fig. 1 of the manuscript)

Abstract:
We introduce a new nonlinear analog optical computing concept that compresses the signal’s dynamic range and realizes non-uniform quantization that reshapes and improves the signal-to-noise ratio in the digital domain.

Focusing light through dynamical samples using fast continuous wavefront optimization

The guys at LKB keep going deeper into turbid media. This time, they have done it really fast. Using a phase spatial light modulator and an FPGA card, they were able to focus light through a scattering medium at a rate of ~4 kHz.

This tackles a common problem of the Transmission Matrix approach in biological systems: living samples evolve, and thus the matrix you measure stops being valid after a really short time.

For me, this is a really nice technical implementation (and not an easy one to do), merging electronics, computer science, and optics to tackle a well-defined biological problem.

Focusing light through dynamical samples using fast continuous wavefront optimization,

B. Blochet et al, at Optics Letters

(featured image extracted from Fig. 1 of the manuscript)

Abstract:

We describe a fast continuous optimization wavefront shaping system able to focus light through dynamic scattering media. A micro-electro-mechanical system-based spatial light modulator, a fast photodetector, and field programmable gate array electronics are combined to implement a continuous optimization of a wavefront with a single-mode optimization rate of 4.1 kHz. The system performances are demonstrated by focusing light through colloidal solutions of TiO2 particles in glycerol with tunable temporal stability.
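The optimization itself is the classic continuous sequential scheme: for each SLM mode, step its phase through a few test values, keep the one that maximizes the detected focus intensity, and keep sweeping over the modes so the correction can track the decorrelating medium. A toy version with a static random medium (in the real system the loop runs on the FPGA at 4.1 kHz per mode; the phase-stepping details here are my own simplification):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256                                   # number of SLM modes

# One row of a random transmission matrix: medium + detector pixel
t = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)

def focus_intensity(phases):
    """Detector signal: intensity at the target speckle grain."""
    return np.abs(t @ np.exp(1j * phases)) ** 2

phases = np.zeros(N)
test = np.linspace(0, 2 * np.pi, 8, endpoint=False)

# "Continuous" optimization: sweep the modes over and over (3 passes here;
# in the real system the sweeping never stops, tracking the medium).
for _ in range(3):
    for m in range(N):
        vals = []
        for p in test:                    # step this mode's phase
            phases[m] = p
            vals.append(focus_intensity(phases))
        phases[m] = test[int(np.argmax(vals))]  # keep the best phase

print("enhancement:", focus_intensity(phases) / np.sum(np.abs(t) ** 2))
```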


Realization of hybrid compressive imaging strategies

Recently I have been reading a lot about compressive sensing strategies. When working with a single-pixel architecture, we always want to project the lowest possible number of masks, because projecting is the longest part of the whole acquisition procedure (and it gets longer and longer as you increase the spatial resolution of your images).

In the past, several strategies have been implemented to reduce that number of projections. From going fully random to partially scanning a basis, at random or in its low-frequency region, each approach presents its own benefits and a greater or smaller speed gain.

In this work by the group of K. F. Kelly, they explore a different approach. Instead of choosing one measurement basis and designing a sensing strategy (picking random elements, or centering on the low-frequency part of the basis, or a mix), they create a measurement set by merging different functions. They call these hybrid patterns. The basic idea is to choose a low number of patterns which work well for recovering the low-frequency content of natural images, plus some other patterns which are good for recovering high-frequency content. The novel thing here is that they do not require the patterns to belong to the same orthogonal basis, and are thus able to carefully design their measurement set. This provides very good quality results with a low number of projections.
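A toy version of such a hybrid measurement set (my own construction to illustrate the idea, not the exact patterns of the paper): a handful of low-frequency 2D DCT functions to capture the bulk of the image energy, stacked with random ±1 patterns for the residual high-frequency content. Feeding the measurements to any compressive sensing solver would complete the pipeline.

```python
import numpy as np

n = 16                                    # image is n x n pixels
rng = np.random.default_rng(2)

def dct_atom(u, v, n):
    """2D DCT-II basis function with frequencies (u, v), flattened."""
    k = np.arange(n)
    cu = np.cos(np.pi * u * (2 * k + 1) / (2 * n))
    cv = np.cos(np.pi * v * (2 * k + 1) / (2 * n))
    atom = np.outer(cu, cv)
    return (atom / np.linalg.norm(atom)).ravel()

# Low-frequency block: all DCT atoms below a frequency cutoff
low_freq = [dct_atom(u, v, n) for u in range(6) for v in range(6) if u + v < 6]

# High-frequency block: random +/-1 patterns (the compressive part)
randoms = rng.choice([-1.0, 1.0], size=(64, n * n))

A = np.vstack([np.array(low_freq), randoms])  # hybrid measurement set
print(A.shape)                                # 85 patterns, 256 unknowns

x = rng.random(n * n)                         # stand-in for the scene
y = A @ x                                     # single-pixel measurements
```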

Another thing I liked a lot was the Principal Component Analysis (PCA) part of the paper. Basically, they gathered a collection of natural images and generated an orthogonal basis using PCA. This leads me to think of PCA as a way of obtaining orthogonal bases in which objects have their sparsest representation (though maybe I am wrong about that).
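That intuition is easy to test. A sketch, using low-pass filtered noise as a stand-in for a natural image library (the paper of course uses real images): stack the vectorized, mean-subtracted images, take an SVD, and the right singular vectors form the orthogonal PCA basis. For natural-image-like statistics, the leading components come out remarkably DCT-like, which is exactly what their figure shows.

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_imgs = 16, 2000

def smooth_image(n):
    # 1/f^2-filtered noise: a crude stand-in for natural image statistics
    f = np.fft.fft2(rng.normal(size=(n, n)))
    fy = np.fft.fftfreq(n)[:, None]
    fx = np.fft.fftfreq(n)[None, :]
    return np.real(np.fft.ifft2(f / (1e-2 + fx**2 + fy**2)))

X = np.stack([smooth_image(n).ravel() for _ in range(n_imgs)])
X -= X.mean(axis=0)                       # center the data

# Right singular vectors = principal components = an orthogonal basis,
# sorted by how much image-ensemble variance each one captures
_, _, Vt = np.linalg.svd(X, full_matrices=False)
pca_patterns = Vt.reshape(-1, n, n)       # each slice is one basis pattern
print(pca_patterns.shape)                 # (256, 16, 16)
```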

Realization of hybrid compressive imaging strategies,

Y. Li et al, at Journal of the Optical Society of America A

(featured image extracted from Fig. 2 of the manuscript)

Abstract:

The tendency of natural scenes to cluster around low frequencies is not only useful in image compression, it also can prove advantageous in novel infrared and hyperspectral image acquisition. In this paper, we exploit this signal model with two approaches to enhance the quality of compressive imaging as implemented in a single-pixel compressive camera and compare these results against purely random acquisition. We combine projection patterns that can efficiently extract the model-based information with subsequent random projections to form the hybrid pattern sets. With the first approach, we generate low-frequency patterns via a direct transform. As an alternative, we also used principal component analysis of an image library to identify the low-frequency components. We present the first (to the best of our knowledge) experimental validation of this hybrid signal model on real data. For both methods, we acquire comparable quality of reconstructions while acquiring only half the number of measurements needed by traditional random sequences. The optimal combination of hybrid patterns and the effects of noise on image reconstruction are also discussed.

Figure 3 from the paper. Really nice to see that PCA gives something very similar to the DCT functions; this means that compressing images with the DCT is really a good choice.

Deep learning microscopy

This week a new paper by the group led by A. Ozcan appeared in Optica.

Deep learning microscopy,

Y. Rivenson et al, at Optica

(featured image extracted from Fig. 6 of the supplement)

Abstract:

We demonstrate that a deep neural network can significantly improve optical microscopy, enhancing its spatial resolution over a large field of view and depth of field. After its training, the only input to this network is an image acquired using a regular optical microscope, without any changes to its design. We blindly tested this deep learning approach using various tissue samples that are imaged with low-resolution and wide-field systems, where the network rapidly outputs an image with better resolution, matching the performance of higher numerical aperture lenses and also significantly surpassing their limited field of view and depth of field. These results are significant for various fields that use microscopy tools, including, e.g., life sciences, where optical microscopy is considered as one of the most widely used and deployed techniques. Beyond such applications, the presented approach might be applicable to other imaging modalities, also spanning different parts of the electromagnetic spectrum, and can be used to design computational imagers that get better as they continue to image specimens and establish new transformations among different modes of imaging.

Using pairs of images obtained with high and low numerical aperture microscope objectives, they have trained a deep neural network to create high spatial resolution images from low spatial resolution ones. Moreover, the final result keeps the field of view of the input image, thus achieving one of the major goals of optical microscopy: high resolution and a large field of view at the same time (while using a low numerical aperture objective).
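At its core, the ingredient is an image-to-image convolutional network trained on registered low/high resolution pairs. A bare-bones PyTorch sketch of that idea (far simpler than the actual architecture described in the supplement, and with random tensors standing in for the registered image patches):

```python
import torch
import torch.nn as nn

# Minimal image-to-image CNN: maps a low-resolution image (upsampled to
# the target pixel grid) to its high-resolution counterpart.
model = nn.Sequential(
    nn.Conv2d(1, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 1, kernel_size=3, padding=1),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Stand-ins for registered (low-NA, high-NA) image patch pairs
low_res = torch.rand(8, 1, 64, 64)
high_res = torch.rand(8, 1, 64, 64)

for step in range(100):                   # toy training loop
    optimizer.zero_grad()
    loss = loss_fn(model(low_res), high_res)
    loss.backward()
    optimizer.step()
```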

I really liked the supplement, where they give details about the neural network (really useful for a newbie like me).

Fig. 1 of the supplement: details on how to train the neural network.