Love the idea of generating ~infinite focal spots (until you run out of photons) inside a sample, and using an extremely fast single-pixel detector to recover the signal. A very original way to tackle volumetric bio-imaging!
Simultaneous multiplane imaging with reverberation multiphoton microscopy
Multiphoton microscopy (MPM) has gained enormous popularity over the years for its capacity to provide high resolution images from deep within scattering samples. However, MPM is generally based on single-point laser-focus scanning, which is intrinsically slow. While imaging speeds as fast as video rate have become routine for 2D planar imaging, such speeds have so far been unattainable for 3D volumetric imaging without severely compromising microscope performance. We demonstrate here 3D volumetric (multiplane) imaging at the same speed as 2D planar (single plane) imaging, with minimal compromise in performance. Specifically, multiple planes are acquired by near-instantaneous axial scanning while maintaining 3D micron-scale resolution. Our technique, called reverberation MPM, is well adapted for large-scale imaging in scattering media with low repetition-rate lasers, and can be implemented with conventional MPM as a simple add-on.
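I could not resist sketching how the detection side might work. Big disclaimer: this is my own toy interpretation, not the authors' actual scheme. Assume each laser pulse spawns a short train of sub-pulses, each focused at a different depth, and the fast detector samples quickly enough to separate their fluorescence bursts; then recovering the planes is just time-gated binning:

```python
import numpy as np

# Toy time-gated demultiplexing: each laser pulse spawns `n_planes`
# sub-pulses separated by a fixed delay, each focused at a different
# depth; the fast detector trace is split into per-plane signals.
# (Illustrative only -- not the authors' exact scheme.)

def demultiplex(detector_trace, n_planes, samples_per_plane):
    signals = np.empty(n_planes)
    for p in range(n_planes):
        start = p * samples_per_plane
        # Integrate the fluorescence burst belonging to plane p.
        signals[p] = detector_trace[start:start + samples_per_plane].sum()
    return signals

# Fake one laser period: 4 planes, 8 detector samples per plane slot.
rng = np.random.default_rng(0)
trace = rng.poisson(lam=np.repeat([40.0, 30.0, 20.0, 10.0], 8))
print(demultiplex(trace, n_planes=4, samples_per_plane=8))  # ~[320 240 160 80]
```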
New single-pixel camera design, but this time using a multicore fiber (MCF) and a photonic lantern instead of a spatial light modulator. Cool!
The fundamental idea is to excite one of the cores of an MCF. The light then propagates through the fiber and into a photonic lantern at the distal tip, which generates a random-like light pattern at the output. Exciting different cores of the MCF generates different light patterns at the end of the fiber, which can be used to obtain images using the single-pixel imaging formalism.
There is more cool stuff in the paper, for example the Compressive Sensing algorithm the authors use, which exploits positivity constraints. This is quite relevant if you want to get high-quality images, given the reduced number of cores present in the MCF (remember, 1 core = 1 pattern, and the number of patterns determines the spatial resolution of the image in a single-pixel camera). It is also nice that the authors have made their code available here.
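Just to make the single-pixel formalism concrete, here is a bare-bones reconstruction with a positivity constraint. This is nothing like SARA-COIL (which is far more sophisticated); random matrices stand in for the measured lantern patterns and all numbers are made up:

```python
import numpy as np
from scipy.optimize import nnls

# Bare-bones single-pixel reconstruction with positivity. Each row of A
# is one (hypothetical) illumination pattern flattened into a vector;
# y holds the corresponding single-pixel detector readings.
rng = np.random.default_rng(1)
side = 16                    # image is side x side pixels
n_patterns = 120             # fewer patterns than pixels -> compressive
A = rng.random((n_patterns, side * side))      # stand-in for lantern patterns
x_true = np.zeros(side * side)                 # sparse, non-negative object
x_true[rng.choice(side * side, 20, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(n_patterns)   # bucket signals

x_hat, _ = nnls(A, y)        # non-negative least squares
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Note how the non-negativity constraint alone already compensates for having fewer patterns than pixels, which is exactly the regime a core-limited MCF puts you in.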
Compressive optical imaging with a photonic lantern
The thin and flexible nature of optical fibres often makes them the ideal technology to view biological processes in-vivo, but current microendoscopic approaches are limited in spatial resolution. Here, we demonstrate a new route to high resolution microendoscopy using a multicore fibre (MCF) with an adiabatic multimode-to-singlemode photonic lantern transition formed at the distal end by tapering. We show that distinct multimode patterns of light can be projected from the output of the lantern by individually exciting the single-mode MCF cores, and that these patterns are highly stable to fibre movement. This capability is then exploited to demonstrate a form of single-pixel imaging, where a single pixel detector is used to detect the fraction of light transmitted through the object for each multimode pattern. A custom compressive imaging algorithm we call SARA-COIL is used to reconstruct the object using only the pre-measured multimode patterns themselves and the detector signals.
The group led by P. Artal at Murcia University has recently published an interesting paper on adaptive optics using an adaptive lens. When working in a real scenario, imperfections in the optical elements you use, or the very samples you want to image, introduce optical aberrations into the pictures you obtain. Usually these aberrations degrade your images only a little (introducing a bit of defocus or some astigmatism), but in the worst-case scenario they can render the results completely useless.
To overcome this problem, liquid crystal spatial light modulators or deformable mirrors are usually placed in the optical system to apply phase corrections to the light going through it, countering the phase of the aberrations and thus restoring image quality. However, these devices present several problems. Even though both spatial light modulators and deformable mirrors can correct the aberrations I mentioned earlier, they work in a reflection configuration, which adds complexity to the optical system. Also, liquid crystal spatial light modulators are sensitive to polarization, usually have low reflectance values, and tend to be slow.
As a way to tackle these obstacles, the authors have used an adaptive lens in a two-photon microscope to perform the adaptive optics procedure. Adaptive lenses are increasingly being used for aberration correction. In contrast to both spatial light modulators and deformable mirrors, they work in transmission and present very low losses. Moreover, they can introduce low- and mid-order aberrations at refresh rates of almost 1 kHz. The working principle is illustrated in a figure in the paper.
In the paper, they show that this device can obtain results comparable to the traditional spatial light modulator approach in a multi-photon microscope, with the benefits mentioned above.
Wavefront correction in two-photon microscopy with a multi-actuator adaptive lens
A multi-actuator adaptive lens (AL) was incorporated into a multi-photon (MP) microscope to improve the quality of images of thick samples. Through a hill-climbing procedure the AL corrected for the specimen-induced aberrations enhancing MP images. The final images hardly differed when two different metrics were used, although the sets of Zernike coefficients were not identical. The optimized MP images acquired with the AL were also compared with those obtained with a liquid-crystal-on-silicon spatial light modulator. Results have shown that both devices lead to similar images, which corroborates the usefulness of this AL for MP imaging.
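To make the hill-climbing idea concrete, here is a toy version of such a loop. The metric and the "lens" are fake stand-ins (a real system would drive the adaptive lens actuators and score the acquired multi-photon image, e.g. by its total intensity or sharpness):

```python
import numpy as np

# Toy hill-climbing over Zernike coefficients. `image_metric` is a fake
# stand-in: a real loop would set the adaptive lens and score the
# acquired multi-photon image.

TARGET = np.array([0.3, -0.1, 0.2, 0.0, -0.25])  # hidden "ideal" correction

def image_metric(coeffs):
    return -np.sum((coeffs - TARGET) ** 2)       # peaks at the hidden optimum

def hill_climb(n_modes=5, step=0.05, n_iters=300, seed=2):
    rng = np.random.default_rng(seed)
    coeffs = np.zeros(n_modes)                   # start from a flat lens
    best = image_metric(coeffs)
    for _ in range(n_iters):
        mode = rng.integers(n_modes)             # perturb one Zernike mode
        for sign in (+1.0, -1.0):
            trial = coeffs.copy()
            trial[mode] += sign * step
            score = image_metric(trial)
            if score > best:                     # keep only improvements
                coeffs, best = trial, score
                break
    return coeffs

print(hill_climb())   # should approach TARGET
```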
Adaptive Optics + Light Sheet Microscopy to see living cells inside the body of a zebrafish (the favorite fish of biologists!). Really impressive images overcoming the scattering caused by tissue. You can read more about the paper at Nature and/or the Howard Hughes Medical Institute.
The Feynman Lectures on Physics online
I just read on OpenCulture that The Feynman Lectures on Physics have been made available online. Until now, only the first volume was available, but you can now also find volumes 2 and 3. Time to reread the classics…
An interesting text appeared this week in American Scientist covering some aspects of the growing symbiosis between optics, computation, and electronics. We are already able to image beyond classical resolution limits, obtain phase information, and even form images without traditional optical elements such as lenses. What's coming next?
Xing Lin, Yair Rivenson, Nezih T. Yardimci, Muhammed Veli, Mona Jarrahi, Aydogan Ozcan
(Submitted on 14 Apr 2018)
We introduce an all-optical Diffractive Deep Neural Network (D2NN) architecture that can learn to implement various functions after deep learning-based design of passive diffractive layers that work collectively. We experimentally demonstrated the success of this framework by creating 3D-printed D2NNs that learned to implement handwritten digit classification and the function of an imaging lens at terahertz spectrum. With the existing plethora of 3D-printing and other lithographic fabrication methods as well as spatial-light-modulators, this all-optical deep learning framework can perform, at the speed of light, various complex functions that computer-based neural networks can implement, and will find applications in all-optical image analysis, feature detection and object classification, also enabling new camera designs and optical components that can learn to perform unique tasks using D2NNs.
Imagine if Fourier transforms had been discovered before lenses, and then one day someone came up with just a piece of glass and said, "this can compute FTs at the speed of light". Very cool read.
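If you want to convince yourself of the lens/FT equivalence numerically: the far-field (Fraunhofer) pattern of an aperture, which is exactly what a thin lens delivers at its focal plane, is the Fourier transform of the aperture's transmission. A quick toy check with a double slit:

```python
import numpy as np

# The Fraunhofer far field (what a thin lens delivers at its focal
# plane) is the Fourier transform of the aperture. Check with a double
# slit: the FFT gives the familiar fringed diffraction pattern.
N = 2048
x = np.linspace(-1.0, 1.0, N)
slit_width, separation = 0.02, 0.2
aperture = ((np.abs(x - separation / 2) < slit_width / 2) |
            (np.abs(x + separation / 2) < slit_width / 2)).astype(float)

far_field = np.fft.fftshift(np.fft.fft(aperture))
intensity = np.abs(far_field) ** 2      # two-slit fringes under a sinc envelope
print(intensity[N // 2 - 5 : N // 2 + 6].round(1))
```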
Last year, AIY Projects launched to give makers the power to build AI into their projects with two do-it-yourself kits. We’re seeing continued demand for the kits, especially from the STEM audience where parents and teachers alike have found the products to be great tools for the classroom. The changing nature of work in the future means students may have jobs that haven’t yet been imagined, and we know that computer science skills, like analytical thinking and creative problem solving, will be crucial.
We’re taking the first of many steps to help educators integrate AIY into STEM lesson plans and help prepare students for the challenges of the future by launching a new version of our AIY kits. The Voice Kit lets you build a voice controlled speaker, while the Vision Kit lets you build a camera that learns to recognize people and objects (check it out here). The new kits make getting started a little easier with clearer instructions, a new app and all the parts in one box.
To make setup easier, both kits have been redesigned to work with the new Raspberry Pi Zero WH, which comes included in the box, along with the USB connector cable and pre-provisioned SD card. Now users no longer need to download the software image and can get running faster. The updated AIY Vision Kit v1.1 also includes the Raspberry Pi Camera v2.
Looking forward to seeing the price tag and the date they become available.
As a way to keep posts coming, I am starting a short recap of interesting papers being published (or discovered by me) every now and then. I will probably write longer posts about some of them in the future.
Let’s get this thing going:
Two papers using 'centroid estimation' to retrieve interesting information (I sketch the centroid trick in code after the two abstracts):
Mariko Akutsu, Yasuhiro Oikawa, and Yoshio Yamasaki, at The Journal of the Acoustical Society of America
Conversation is one of the most important channels for human beings. To help communications, speech recognition technologies have been developed. Above all, in a conversation, not only contents of utterances but also intonations and tones include important information regarding a speaker’s intention. To study the sphere of human speech, microphones are typically used to record voices. However, since microphones have to be set around a space, their existences affect a physical behavior of the sound field. To challenge this problem, we have suggested a recording method using a high-speed camera. By using a high-speed camera for recording sound vibrations, it can record two or more points within the range of the camera at the same time and can record from a distance, without interfering with the sound fields. In this study, we extract voice information using high-speed videos, which capture both a face and a cervical part of the subject. This method allows recording skin vibrations, which contain voices with individuality and extrapolating sound waves by using an image processing method. The result of the experiment shows that a high-speed camera is capable of recording voice information.
Dekel Raanan, Liqing Ren, Dan Oron, and Yaron Silberberg, at Optics Letters
Stimulated Raman scattering (SRS) has recently become useful for chemically selective bioimaging. It is usually measured via modulation transfer from the pump beam to the Stokes beam. Impulsive stimulated Raman spectroscopy, on the other hand, relies on the spectral shift of ultrashort pulses as they propagate in a Raman active sample. This method was considered impractical with low energy pulses since the observed shifts are very small compared to the excitation pulse bandwidth, spanning many terahertz. Here we present a new apparatus, using tools borrowed from the field of precision measurement, for the detection of low-frequency Raman lines via stimulated-Raman-scattering-induced spectral shifts. This method does not require any spectral filtration and is therefore an excellent candidate to resolve low-lying Raman lines (<200 cm−1), which are commonly masked by the strong Rayleigh scattering peak. Having the advantage of the high repetition rate of the ultrafast oscillator, we reduce the noise level by implementing a lock-in detection scheme with a wavelength shift sensitivity well below 100 fm. This is demonstrated by the measurement of low-frequency Raman lines of various liquid samples.
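The common thread here is that the centroid (first moment) of a broad, noisy distribution can be pinned down far below its width, with precision improving as you average. A toy version with made-up numbers (the Raman paper pushes this below 100 fm using lock-in detection):

```python
import numpy as np

# Centroid estimation: the mean of a broad spectrum can be localized
# far below its width. Here a ~5 nm wide spectrum shifts by only 1 pm,
# and the averaged centroid still resolves it. Numbers are illustrative.

rng = np.random.default_rng(3)
wl = np.linspace(780.0, 820.0, 2000)   # wavelength axis [nm]
WIDTH = 5.0                            # spectral width [nm]

def spectral_centroid(spectrum):
    return np.sum(wl * spectrum) / np.sum(spectrum)

def noisy_spectrum(center):
    s = np.exp(-0.5 * ((wl - center) / WIDTH) ** 2)
    return s + 1e-3 * rng.standard_normal(wl.size)   # detector noise

def averaged_centroid(center, n_avg=500):            # averaging ~ lock-in
    return np.mean([spectral_centroid(noisy_spectrum(center))
                    for _ in range(n_avg)])

shift_true_nm = 1e-3                                 # a 1 pm shift
c0 = averaged_centroid(800.0)
c1 = averaged_centroid(800.0 + shift_true_nm)
print(f"estimated shift: {(c1 - c0) * 1e3:.3f} pm (true: 1.000 pm)")
```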
Machine learning keeps leaking into photonics. This time with a Compressive Sensing flavor and some holography:
Compressed sensing (CS) has been successfully applied to image compression in the past few years as most image signals are sparse in a certain domain. Several CS reconstruction models have been proposed and obtained superior performance. However, these methods suffer from blocking artifacts or ringing effects at low sampling ratios in most cases. To address this problem, we propose a deep convolutional Laplacian Pyramid Compressed Sensing Network (LapCSNet) for CS, which consists of a sampling sub-network and a reconstruction sub-network. In the sampling sub-network, we utilize a convolutional layer to mimic the sampling operator. In contrast to the fixed sampling matrices used in traditional CS methods, the filters used in our convolutional layer are jointly optimized with the reconstruction sub-network. In the reconstruction sub-network, two branches are designed to reconstruct multi-scale residual images and multi-scale target images progressively using a Laplacian pyramid architecture. The proposed LapCSNet not only integrates multi-scale information to achieve better performance but also reduces computational cost dramatically. Experimental results on benchmark datasets demonstrate that the proposed method is capable of reconstructing more details and sharper edges compared with state-of-the-art methods.
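The key trick is that the sampling operator itself is a strided convolution trained jointly with the reconstruction network, rather than a fixed random matrix. A minimal sketch of that idea (not the authors' code; block size, sampling ratio, and the trivial reconstruction head are arbitrary choices):

```python
import torch
import torch.nn as nn

# The CS sampling operator as a learnable strided convolution, trained
# jointly with a (deliberately trivial) reconstruction head.
B = 16                                  # block size in pixels
n_meas = int(0.1 * B * B)               # 10% sampling ratio -> 25 per block

sampler = nn.Conv2d(1, n_meas, kernel_size=B, stride=B, bias=False)
recon = nn.ConvTranspose2d(n_meas, 1, kernel_size=B, stride=B, bias=False)

x = torch.randn(8, 1, 64, 64)           # batch of images
y = sampler(x)                          # measurements: (8, 25, 4, 4)
x_hat = recon(y)                        # reconstruction: (8, 1, 64, 64)

# One joint training step: gradients flow into the sampling filters too.
opt = torch.optim.Adam(list(sampler.parameters()) + list(recon.parameters()),
                       lr=1e-3)
loss = nn.functional.mse_loss(x_hat, x)
loss.backward()
opt.step()
print(float(loss))
```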
Yair Rivenson, Yibo Zhang, Harun Günaydın, Da Teng & Aydogan Ozcan, at Light: Science & Applications
Phase recovery from intensity-only measurements forms the heart of coherent imaging techniques and holography. In this study, we demonstrate that a neural network can learn to perform phase recovery and holographic image reconstruction after appropriate training. This deep learning-based approach provides an entirely new framework to conduct holographic imaging by rapidly eliminating twin-image and self-interference-related spatial artifacts. This neural network-based method is fast to compute and reconstructs phase and amplitude images of the objects using only one hologram, requiring fewer measurements in addition to being computationally faster. We validated this method by reconstructing the phase and amplitude images of various samples, including blood and Pap smears and tissue sections. These results highlight that challenging problems in imaging science can be overcome through machine learning, providing new avenues to design powerful computational imaging systems.
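For context, the classical single-step baseline that such networks improve upon is simple free-space backpropagation of the hologram, which leaves exactly the twin-image artifact the network learns to remove. A toy version using the angular spectrum method (all parameters illustrative):

```python
import numpy as np

# Classical baseline: backpropagate an in-line hologram with the
# angular spectrum method. The result carries the twin-image artifact
# that the trained network removes. Parameters are illustrative.

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex field a distance z through free space."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                 # spatial freqs [cycles/m]
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0))  # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

wavelength, dx, z = 0.5e-6, 1e-6, 200e-6    # 500 nm light, 1 um pixels
n = 256
obj = np.ones((n, n), complex)
obj[96:160, 96:160] *= np.exp(1j * 0.5)     # weak phase object

holo = np.abs(angular_spectrum(obj, wavelength, dx, z)) ** 2   # intensity only
backprop = angular_spectrum(np.sqrt(holo), wavelength, dx, -z) # one-step recon
# `backprop` shows the object plus its out-of-focus twin image.
print(np.angle(backprop[128, 128]))
```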
Last, single-pixel camera / ghost imaging techniques being applied to X-ray tomography:
Andrew M. Kingston, Glenn R. Myers, Daniele Pelliccia, Imants D. Svalbe, David M. Paganin, at arXiv.org
Ghost imaging has recently been successfully achieved in the X-ray regime; due to the penetrating power of X-rays this immediately opens up the possibility of X-ray ghost tomography. No research into this topic currently exists in the literature. Here we present adaptations of conventional tomography techniques to this new ghost imaging scheme. Several numerical implementations for tomography through X-ray ghost imaging are considered. Specific attention is paid to schemes for denoising of the resulting tomographic reconstruction, issues related to dose fractionation, and considerations regarding the ensemble of illuminating masks used for ghost imaging. Each theme is explored through a series of numerical simulations, and several suggestions offered for practical realisations of X-ray ghost tomography.
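The underlying ghost-imaging reconstruction is a simple correlation between the bucket signals and the known illumination masks, G(x) = ⟨B·I(x)⟩ − ⟨B⟩⟨I(x)⟩. A 2D toy version (the paper, of course, extends this to tomographic projections):

```python
import numpy as np

# Ghost image by correlation: G(x) = <B*I(x)> - <B><I(x)>, with bucket
# signals B_i and known random masks I_i. (2D toy; the paper extends
# this to tomography.)
rng = np.random.default_rng(4)
side, n_masks = 32, 5000

obj = np.zeros((side, side))
obj[8:24, 12:20] = 1.0                            # simple test object

masks = rng.random((n_masks, side, side))         # known illumination patterns
buckets = np.tensordot(masks, obj, axes=([1, 2], [0, 1]))  # transmitted light

ghost = ((buckets[:, None, None] * masks).mean(axis=0)
         - buckets.mean() * masks.mean(axis=0))
print("correlation with object:", np.corrcoef(ghost.ravel(), obj.ravel())[0, 1])
```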