EDITORIAL In 2016, the news that Google's artificial intelligence (AI) program AlphaGo, based on the principles of deep learning, defeated Lee Sedol, the former world Go champion and famed Korean 9-dan professional, caused a sensation in both the AI and Go communities and marked an epoch-making moment in the development of deep learning. Deep learning is a class of machine learning algorithms that uses multiple layers of artificial neural networks to automatically analyze signals or data. At present, deep learning has penetrated into our daily
life, with applications such as face recognition and speech recognition. Scientists have also made many remarkable achievements based on deep learning. Professor Aydogan Ozcan from the University of California, Los Angeles (UCLA) has led his team in research on deep learning algorithms, providing new ideas for exploring optical computational imaging and sensing technologies and introducing image generation and reconstruction methods that have brought major technological innovations to related fields. Optical designs and devices are
moving from being physically driven to being data-driven. We are honored to have Aydogan Ozcan, Fellow of the National Academy of Inventors and Chancellor's Professor at UCLA, interpret his latest scientific research results and his foresight for the future development of related fields, and to share his journey in pursuing optics, his enduring relationship with
Light: Science & Applications (LSA), and his experience in talent cultivation. BIOGRAPHY: Prof. Aydogan Ozcan is the Chancellor’s Professor and the Volgenau Chair for Engineering
Innovation at UCLA and an HHMI Professor with the Howard Hughes Medical Institute. He leads the Bio- and Nano-Photonics Laboratory at the UCLA School of Engineering and is also the Associate Director of the California NanoSystems Institute. Dr. Ozcan is an elected Fellow of the National Academy of Inventors (NAI) and holds >45 issued/granted patents and >20 pending patent
applications and is also the author of one book and the co-author of >700 peer-reviewed publications in major scientific journals and conferences. Dr. Ozcan is the founder and a member of
the Board of Directors of Lucendi Inc., Hana Diagnostics, Pictor Labs, as well as Holomic/Cellmic LLC, which was named a Technology Pioneer by The World Economic Forum in 2015. Dr. Ozcan is
also a Fellow of the American Association for the Advancement of Science (AAAS), the International Photonics Society (SPIE), the Optical Society of America (OSA), the American Institute for
Medical and Biological Engineering (AIMBE), the Institute of Electrical and Electronics Engineers (IEEE), the Royal Society of Chemistry (RSC), the American Physical Society (APS) and the
Guggenheim Foundation, and has received major awards including the Presidential Early Career Award for Scientists and Engineers, International Commission for Optics Prize, Biophotonics
Technology Innovator Award, Rahmi M. Koc Science Medal, International Photonics Society Early Career Achievement Award, Army Young Investigator Award, NSF CAREER Award, NIH Director’s New
Innovator Award, Navy Young Investigator Award, IEEE Photonics Society Young Investigator Award and Distinguished Lecturer Award, National Geographic Emerging Explorer Award, National
Academy of Engineering The Grainger Foundation Frontiers of Engineering Award and MIT’s TR35 Award for his seminal contributions to computational imaging, sensing and diagnostics. Dr. Ozcan
is also listed as a Highly Cited Researcher by Web of Science, Clarivate, and serves as the co-Editor-in-Chief of eLight. Q1: CAN YOU PLEASE BRIEFLY DEFINE WITH YOUR OWN WORDS THE FOCUS OF
YOUR RESEARCH LAB? A1: My lab is composed of three sub-areas: * Computational microscopy which includes e.g., deep learning-enabled microscopy, holography, lensless imaging, on-chip
microscopy, 3D microscopy, imaging flow-cytometry, among others. * Sensing: point-of-care sensors, mobile-phone enabled sensors, field-based sensing and measurement systems with applications
in mobile health and telemedicine, environmental monitoring (for example, air/water quality sensing). * Optical computing and inverse design: diffractive networks, diffractive optical
processors, and deep learning-designed free-space optics. While conducting exciting, cutting-edge applied research on photonics and optics, we are also training the next generation of engineers, scientists and entrepreneurs through our research programs. Some of our trainees have started their own labs in the United States, China and other parts of the world, some went to industry, leading their own teams, and some started companies. Q2: OPTICAL COHERENCE TOMOGRAPHY (OCT) IS A WIDELY USED IMAGING METHOD THAT PROVIDES THREE-DIMENSIONAL INFORMATION ABOUT
THE OPTICAL SCATTERING PROPERTIES OF BIOLOGICAL SAMPLES. ONE OF YOUR RECENT PUBLICATIONS IN LSA FOCUSED ON IMPROVING IMAGE RECONSTRUCTION IN OCT1. COULD YOU PLEASE INTRODUCE THE RESEARCH
IDEA AND MAIN ADVANTAGES OF YOUR METHOD AS WELL AS ITS IMPACT FOR OCT FIELD? A2: Indeed, OCT is widely used in diagnostic medicine, for example in ophthalmology, to noninvasively obtain
detailed 3D images of the retina and underlying tissue structure. In our latest paper1 published in LSA, we have developed a deep learning-based OCT image reconstruction method that can
successfully generate 3D images of tissue specimens using significantly less spectral data than normally required. Using standard image reconstruction methods employed in OCT, undersampled
spectral data, where some of the spectral measurements are omitted, result in severe spatial artifacts in the reconstructed images, obscuring 3D information and structural details of the
sample to be visualized. In our new approach, we trained a neural network using deep learning to rapidly reconstruct 3D images of tissue samples with much less spectral data than normally
acquired in a typical OCT system, successfully removing the spatial artifacts observed in standard image reconstruction methods. The efficacy and robustness of this new method were
demonstrated by imaging various human and mouse tissue samples using threefold less spectral data captured by a state-of-the-art swept-source OCT system. Running on graphics processing units
(GPUs), the neural network successfully eliminated severe spatial artifacts due to undersampling and omission of most spectral data points in less than 1 ms for an OCT image that is
composed of 512 depth scans (A-lines). These results that we have recently published in LSA highlight the transformative potential of this neural network-based OCT image reconstruction
framework, which can be easily integrated with various spectral-domain OCT systems to improve their 3D imaging speed without sacrificing the resolution or signal-to-noise ratio of the reconstructed images.
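To make the role of the network more concrete, the following minimal Python sketch (illustrative only; the parameters are toy assumptions and "artifact_removal_net" is a hypothetical placeholder, not the model or code from the paper) shows how threefold spectral undersampling corrupts a standard FFT-based A-line reconstruction, and where the trained network would be applied:

```python
# Minimal sketch: effect of 3x spectral undersampling on a standard OCT A-line
# reconstruction, and where a trained artifact-removal network would plug in.
import numpy as np

n_k = 1536                                   # spectral samples per A-line (assumed)
k = np.linspace(-0.5, 0.5, n_k)              # normalized wavenumber axis
depths = [200, 350, 610]                     # toy reflector depths (in pixels)
spectrum = sum(np.cos(2 * np.pi * d * k) for d in depths)   # spectral interferogram

# Standard reconstruction: Fourier transform of the fully sampled spectrum
full_aline = np.abs(np.fft.ifft(spectrum))

# Threefold undersampling: two out of every three spectral points are omitted
undersampled = np.zeros_like(spectrum)
undersampled[::3] = spectrum[::3]
aliased_aline = np.abs(np.fft.ifft(undersampled))  # replica/aliasing artifacts appear

# A network trained on pairs of undersampled and fully sampled reconstructions
# would map the artifact-laden result back to a clean image in one forward pass:
# clean_aline = artifact_removal_net(aliased_aline)   # hypothetical trained model
```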
Q3: YOUR PAPER "PHASE RECOVERY AND HOLOGRAPHIC IMAGE RECONSTRUCTION USING DEEP LEARNING IN NEURAL NETWORKS"2, PUBLISHED IN LSA IN 2017, HAS RECEIVED 359 CITATIONS ON THE WEB OF SCIENCE AND WON LSA'S ANNUAL OUTSTANDING PAPER, HIGHLY CITED PAPER, AND POPULAR DOWNLOADED PAPER RECOGNITIONS, BECOMING A MAJOR BREAKTHROUGH IN THE FIELD OF OPTICAL COMPUTATIONAL IMAGING. CAN YOU TALK
ABOUT THE DEVELOPMENT STATUS AND FUTURE TRENDS OF COMPUTATIONAL IMAGING? WHAT ARE THE POTENTIAL APPLICATIONS? A3: This seminal LSA paper of our team was published online in October 2017 and was placed in arXiv in May 2017. These results demonstrated the first use of deep neural networks for holographic image reconstruction and phase recovery. In this paper, we
demonstrated that a convolutional neural network (CNN) can learn to perform phase recovery and holographic image reconstruction after appropriate training. This deep learning-based approach
provides a fundamentally new framework to conduct holographic imaging by rapidly eliminating twin-image and self-interference related spatial artifacts, which have been part of holography
since its invention by Dennis Gabor (leading to the Nobel Prize in Physics, 1971). Compared to existing holographic phase recovery approaches, this neural network framework is significantly
faster to compute and reconstructs much improved phase and amplitude images of the objects using a single hologram, i.e., it requires fewer measurements in addition to being computationally faster. Remarkably, this deep learning-based twin-image elimination and phase recovery were achieved without any modeling of light-matter interaction or a solution of
the wave equation. After this 2017 LSA publication, our team has further expanded these results in many unique ways. In an Optica paper3 published in 2018, we demonstrated an innovative
application of deep learning to significantly extend the imaging depth of a hologram. We demonstrated a CNN-based approach that simultaneously performs auto-focusing and phase-recovery to
significantly extend the depth-of-field (DOF) in holographic image reconstruction. For this, a CNN is trained by using pairs of randomly de-focused back-propagated holograms and their
corresponding in-focus phase-recovered images. After this training phase, the CNN takes a single back-propagated hologram of a 3D sample as input to rapidly achieve phase-recovery and
reconstruct an in-focus image of the sample over a significantly extended DOF. Furthermore, this deep learning-based auto-focusing and phase-recovery method is non-iterative, and
significantly improves the algorithm time-complexity of holographic image reconstruction from O(_nm_) to O(1), where _n_ refers to the number of individual object points or particles within
the sample volume, and _m_ represents the discrete focusing search space, within which each object point or particle needs to be individually focused.
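As a rough illustration of this non-iterative pipeline, the sketch below (with assumed, hypothetical parameter values; "hidef_net" stands in for the trained network and is not the published implementation) back-propagates a hologram to a single coarse distance and then hands the complex field to one network inference that performs auto-focusing and phase recovery together:

```python
# Minimal sketch: angular spectrum back-propagation of an in-line hologram,
# followed by a single forward pass of a trained auto-focusing/phase-recovery
# network (hypothetical "hidef_net").
import numpy as np

def angular_spectrum(field, dz, wavelength, pixel_size):
    """Free-space propagation of a complex field by a distance dz (meters)."""
    n, m = field.shape
    fx = np.fft.fftfreq(m, d=pixel_size)
    fy = np.fft.fftfreq(n, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    kz = 2 * np.pi * np.sqrt(np.maximum(1 / wavelength**2 - FX**2 - FY**2, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

hologram = np.random.rand(512, 512)          # stand-in for a measured in-line hologram
coarse_field = angular_spectrum(np.sqrt(hologram), dz=-300e-6,
                                wavelength=530e-9, pixel_size=1.12e-6)

# A single, non-iterative inference replaces object-by-object focusing (O(nm) -> O(1)):
# amplitude, phase = hidef_net(np.stack([coarse_field.real, coarse_field.imag]))
```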
Another breakthrough result, in my opinion, that is at the intersection of deep learning and holography was published in LSA in 2019, and this work was placed in arXiv in 2018 by our team. In this publication4, we introduced
the use of a deep neural network to perform cross-modality image transformation from a digitally back-propagated hologram corresponding to a given depth within the sample volume into an
image that is equivalent to a bright-field microscope image acquired at the same depth. Because a single hologram is used to digitally propagate to different sections of the sample to
virtually generate bright-field equivalent images of each section, this approach bridges the volumetric imaging capability of digital holography with speckle- and artifact-free image
contrast of bright-field microscopy. Through its training, the deep neural network learns the statistical image transformation between a holographic imaging system and an incoherent
bright-field microscope, and intuitively it brings together “the best of both worlds” by fusing the advantages of holographic and incoherent bright-field imaging modalities. For this
holographic to bright-field image transformation, we used a generative adversarial network (GAN), which was trained using pollen samples imaged by an in-line holographic microscope along
with a bright-field incoherent microscope (used as the ground truth). After the training phase, which needs to be performed only once, the generator network blindly takes as input a new
hologram (never seen by the network before) to infer its bright-field equivalent image at any arbitrary depth within the sample volume. We experimentally demonstrated the success of this
powerful cross-modality image transformation between holography and bright-field microscopy, where the network output images were free from speckle and various other interferometric
artifacts observed in holography, matching the contrast and DOF of bright-field microscopy images that were mechanically focused onto the same planes within the 3D sample. We also
demonstrated that the deep network correctly colorizes the output image, using an input hologram acquired with a monochrome sensor and narrow-band illumination, matching the color
distribution of the bright-field image. This deep learning-enabled image transformation between holography and bright-field microscopy replaces the need to mechanically scan a volumetric
sample, as it benefits from the digital wave-propagation framework of holography to virtually scan through the sample, where each of these digitally propagated fields is transformed
into bright-field microscopy equivalent images, exhibiting the spatial and color contrast as well as the DOF expected from an incoherent microscope.
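The training idea behind this cross-modality transformation can be summarized with a compact GAN sketch; the tiny networks and random tensors below are placeholders chosen only to keep the example self-contained and runnable, and are not the architecture or data used in the paper:

```python
# Minimal GAN training sketch for hologram -> bright-field cross-modality transformation.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny stand-ins for the (much deeper) generator and discriminator used in practice
gen = nn.Sequential(nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 1, 3, padding=1))
disc = nn.Sequential(nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                     nn.Conv2d(32, 1, 3, stride=2, padding=1))
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-4)

# Synthetic co-registered pairs: (back-propagated hologram: real/imag, bright-field image)
pairs = [(torch.randn(1, 2, 64, 64), torch.rand(1, 1, 64, 64)) for _ in range(8)]

for holo, brightfield in pairs:
    fake = gen(holo)                                   # hologram -> bright-field-like image

    # Discriminator learns to separate real bright-field images from generated ones
    d_real, d_fake = disc(brightfield), disc(fake.detach())
    d_loss = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator stays close to the ground truth while trying to fool the discriminator
    d_out = disc(fake)
    g_loss = F.l1_loss(fake, brightfield) + \
             0.01 * F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```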
Q4: YOU LED YOUR TEAM IN RESEARCH ON DIFFRACTIVE DEEP NEURAL NETWORKS AND MOVED THE NEURAL NETWORK FROM THE CHIP INTO THE REAL WORLD, REALIZING DEEP LEARNING WITH ALMOST ZERO ENERGY CONSUMPTION AND ZERO DELAY BY RELYING ON LIGHT PROPAGATION. THE RESULTS WERE PUBLISHED IN SCIENCE5. CAN YOU TALK ABOUT THE INGENUITY OF THIS RESEARCH WORK? FOR THE FURTHER DEVELOPMENT OF THIS TECHNOLOGY, WHAT
ARE THE KEY PROBLEMS TO BE SOLVED CURRENTLY? A4: Diffractive Optical Networks have been introduced by my lab as a machine learning framework that unifies deep learning-based training of
matter with the physical models governing the light propagation and diffraction to enable optical inference through a set of diffractive layers (published in Science, 2018)5. The forward
model of a diffractive optical network can be mathematically formulated as a complex-valued matrix operator that multiplies an input field to create an output field at the detector
plane/aperture. This operator is designed/trained using deep learning to transform a set of complex fields (forming, e.g., the object data classes) at the input aperture of the optical
network into another set of corresponding fields at the output aperture (forming, e.g., the data classification signals) and is physically created through the interaction of the input light
with the designed diffractive surfaces as well as free-space propagation within the network. The training stage of a diffractive network is performed using a computer, and relies on deep
learning and iterative error backpropagation methods to tailor the light-matter interaction across a set of diffractive layers that collectively perform a given machine learning task, such
as object classification. After this numerical training phase implemented in a computer, the diffractive network design is fixed and the transmission or reflection coefficients of the
neurons of all the layers are determined. This diffractive design, once physically fabricated using, e.g., 3D-printing, lithography, etc., can then perform at the speed of light, the
specific task that it was trained for, using only optical diffraction and passive optical layers, creating an efficient and fast way of implementing optical inference.
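A minimal sketch of such a forward model is shown below, with toy parameter values (layer count, neuron grid, wavelength and spacing are assumptions for illustration, not the values of the published designs); trainable phase-only layers are interleaved with angular spectrum free-space propagation, and the class scores are read out as the optical power falling on ten detector regions:

```python
# Minimal diffractive optical network forward model (illustrative toy parameters).
import torch
import numpy as np

N, wavelength, dx, dz = 200, 0.75e-3, 0.4e-3, 40e-3   # THz-scale toy parameters

def free_space(field, dz):
    """Angular spectrum propagation of a complex field over a distance dz."""
    fx = torch.fft.fftfreq(N, d=dx)
    FX, FY = torch.meshgrid(fx, fx, indexing="ij")
    arg = torch.clamp(1 / wavelength**2 - FX**2 - FY**2, min=0.0)
    H = torch.exp(1j * 2 * np.pi * torch.sqrt(arg) * dz)
    return torch.fft.ifft2(torch.fft.fft2(field) * H)

phases = [torch.nn.Parameter(torch.zeros(N, N)) for _ in range(5)]  # 5 trainable layers

def forward(input_amplitude):
    field = input_amplitude.to(torch.complex64)
    for phi in phases:                               # modulate, then propagate
        field = free_space(field * torch.exp(1j * phi), dz)
    intensity = field.abs() ** 2
    # Ten non-overlapping detector regions at the output plane encode the class scores
    return intensity.reshape(10, N // 10, N).sum(dim=(1, 2))

# Training (sketch): maximize the power on the correct detector region, e.g.,
# loss = torch.nn.functional.cross_entropy(forward(img).unsqueeze(0), label),
# and update the phase values of all layers via standard error backpropagation.
```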
Our previous studies on diffractive optical networks have demonstrated the generalization capability of these multi-layer diffractive designs to new, unseen image data. For example, using a 5-layer diffractive
network architecture, all-optical blind testing accuracies of >98% and >90% have been reported for the classification of the images of handwritten digits (MNIST dataset) and fashion
objects (Fashion-MNIST dataset) that were encoded in the amplitude and phase channels of the input object plane, respectively. For more complex and much harder to classify image datasets
such as CIFAR-10, a substantial improvement in the inference performance of diffractive networks was demonstrated using ensemble learning6 published in LSA in 2021. After independently
training >1250 individual diffractive networks that were diversely designed with a variety of passive input filters, a pruning algorithm was applied to select an optimized ensemble of
diffractive networks that collectively improved the image classification accuracy. Through this pruning strategy, an ensemble of _N_ = 14 diffractive networks collectively achieved a blind
testing accuracy of 61.14% on the classification of CIFAR-10 test images, providing an inference improvement of 16.6% compared to the average performance of the individual diffractive
networks within each ensemble, demonstrating the “wisdom of the crowd”.
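The pruning idea can be illustrated with a simple greedy selection over the validation-set predictions of many independently trained models; the sketch below uses random toy data and a plain greedy criterion, which may differ from the exact pruning algorithm of the LSA paper:

```python
# Minimal sketch of ensemble pruning by greedy selection on validation accuracy.
import numpy as np

rng = np.random.default_rng(0)
n_models, n_samples, n_classes = 60, 500, 10          # toy sizes
val_scores = rng.random((n_models, n_samples, n_classes))   # per-model class scores
labels = rng.integers(0, n_classes, n_samples)

def accuracy(member_ids):
    avg = val_scores[member_ids].mean(axis=0)         # average the members' class scores
    return (avg.argmax(axis=1) == labels).mean()

ensemble = []
for _ in range(14):                                   # grow the ensemble to N = 14 members
    best = max((m for m in range(n_models) if m not in ensemble),
               key=lambda m: accuracy(ensemble + [m]))
    ensemble.append(best)

print("selected models:", ensemble, "validation accuracy:", accuracy(ensemble))
```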
Successful experimental demonstrations of these all-optical image classification systems have been reported using 3D-printed diffractive layers that conduct inference by modulating the incoming object wave at terahertz (THz) wavelengths. In addition to statistical inference, the same optical
information processing framework of diffractive networks has also been utilized to design deterministic optical components, e.g., for ultra-short pulse shaping7, spectral filtering and
wavelength division multiplexing8. In these latter examples, the input field was known a priori and was fixed, i.e., unchanged, where the task of the diffractive network was to perform a
deterministic, desired transformation for a given/known optical input. Despite the lack of nonlinear optical elements in these previous implementations, diffractive optical networks have
been shown to offer significant advantages in terms of (1) inference/classification accuracy, (2) diffraction efficiency, and (3) output signal contrast, when the number of successive
diffractive layers in the network design is increased. We published an important work in LSA detailing some of these characteristics of diffractive optical networks, which is titled:
“All-Optical Information Processing Capacity of Diffractive Surfaces9”. Diffractive optical networks have also been extended to harness broadband input radiation to design spectrally encoded
single-pixel machine vision systems, where unknown input objects were classified and reconstructed through a single-pixel detector10. This single-pixel machine vision framework achieved
>96% blind testing accuracy for optical classification of handwritten digits (MNIST dataset) based on the spectral power detected at ten distinct wavelengths, each assigned to one
digit/class. In addition to the optical classification of objects through spectral encoding of data classes, a shallow electronic neural network with two hidden layers was trained (after the
diffractive network training) to rapidly reconstruct the images of the classified objects based on their diffracted power spectra detected by a single-pixel spectroscopic detector. Using
only 10 inputs, one for each wavelength, this shallow network successfully reconstructed the images of the input, unknown objects, even if they were (rarely) incorrectly classified by the
trained diffractive network. Considering the fact that each image of a handwritten digit is composed of ~800 pixels, this shallow image reconstruction network, with an input vector size of
10, performs a form of image decompression to successfully decode the task-specific spectral encoding of the diffractive network.
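A minimal sketch of this readout and decompression step is given below; the sizes and the (untrained) two-hidden-layer decoder are illustrative assumptions rather than the exact configuration reported in the paper:

```python
# Minimal sketch: class readout from 10 spectral power values, plus a shallow
# decoder that reconstructs a 28x28 image from those same 10 inputs.
import torch
import torch.nn as nn

spectral_power = torch.rand(10)                 # power detected at the 10 encoded wavelengths
predicted_digit = int(spectral_power.argmax())  # all-optical classification readout

decoder = nn.Sequential(                        # shallow electronic reconstruction network
    nn.Linear(10, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 28 * 28)
)
reconstructed = decoder(spectral_power).reshape(28, 28)
# Untrained here; a trained decoder would recover a recognizable image of the
# input object from only these 10 numbers (a form of image decompression).
```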
In one of our latest papers published in LSA11, we also report the design of diffractive surfaces to all-optically perform arbitrary complex-valued linear transformations between an input and output field-of-view. Stated differently, we
demonstrated in this recent work the universal approximation capability of diffractive networks for all-optically synthesizing any arbitrarily selected linear transformation with independent
phase and amplitude channels. Our methods and conclusions in this recent LSA work can be broadly applied to any part of the electromagnetic spectrum to design all-optical processors using
spatially engineered diffractive surfaces to universally perform an arbitrary complex-valued linear transformation. Therefore, these results have wide implications and can be used to design
and investigate various coherent optical processors formed by diffractive surfaces such as metamaterials, plasmonic or dielectric-based metasurfaces, as well as flat optics-based designer
surfaces that can form information processing networks to execute a desired computational task between an input and output aperture.
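To illustrate the idea of synthesizing a target transformation with trainable surfaces, the following toy sketch replaces free-space diffraction with a fixed unitary matrix and each surface with a trainable complex-valued diagonal modulation, then fits the cascade to an arbitrary target matrix by gradient descent; it is a conceptual stand-in, not the formulation or the sufficiency conditions derived in the LSA paper:

```python
# Toy sketch: fitting a cascade of complex-valued "diffractive" layers to a target
# complex linear transformation.
import torch

n = 64                                              # size of the input/output fields (toy)
P = torch.fft.fft(torch.eye(n, dtype=torch.complex64), norm="ortho")  # propagation stand-in
target = torch.randn(n, n, dtype=torch.complex64) / n ** 0.5          # arbitrary target transform

amps = [torch.nn.Parameter(torch.ones(n)) for _ in range(8)]   # per-layer amplitude
phis = [torch.nn.Parameter(torch.zeros(n)) for _ in range(8)]  # per-layer phase
opt = torch.optim.Adam(amps + phis, lr=0.02)

def end_to_end():
    A = torch.eye(n, dtype=torch.complex64)
    for a, p in zip(amps, phis):
        A = P @ (torch.diag(a * torch.exp(1j * p)) @ A)         # modulate, then "propagate"
    return A

for _ in range(2000):
    loss = (end_to_end() - target).abs().pow(2).mean()          # fit the full complex operator
    opt.zero_grad(); loss.backward(); opt.step()
# With sufficiently many layers/neurons (as analyzed in the paper) the synthesis can
# become exact; this small toy cascade only approximates the target transformation.
```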
Q5: DEEP LEARNING IS A CLASS OF MACHINE LEARNING TECHNIQUES THAT USE MULTIPLE LAYERS OF ARTIFICIAL NEURAL NETWORKS TO AUTOMATICALLY ANALYZE SIGNALS OR DATA. THE INTRODUCTION OF DEEP LEARNING BRINGS NEW OPPORTUNITIES FOR THE DEVELOPMENT OF
OPTICAL MICROSCOPY. CAN YOU TALK ABOUT THE CURRENT STATUS OF OPTICAL MICROSCOPY? WHAT REVOLUTIONS HAS DEEP LEARNING BROUGHT TO OPTICAL MICROSCOPY TECHNIQUE? WHAT ARE THE APPLICATION
PROSPECTS? A5: In addition to holography, phase recovery and holographic image reconstruction that I described earlier, another area that our lab has made seminal contributions to is deep
learning microscopy. In fact, the first use of deep neural networks for advancing the resolution, field-of-view and DOF of optical microscopy was demonstrated by our team in 2017. This
work of our lab titled “Deep Learning Microscopy”12 was published in Optica in November 2017 and was put in arXiv by our team in May 2017, receiving >400 citations so far. This seminal work was followed by many research labs, bringing new computational tools, powered by deep learning, into the microscopy field. In a follow-up work by our team, we published another very
important result13 in Nature Methods demonstrating super-resolution imaging in fluorescence microscopy and beating the diffraction limit of light through cross-modality image transformations
from, e.g., confocal microscopy to STED, or from TIRF to SIM. Perhaps one of the most surprising results that we have obtained along this line of research was on 3D virtual refocusing of fluorescence
microscopy images using deep learning14, also published in Nature Methods. In this work, we introduced a new framework (termed Deep-Z) that is enabled by deep learning to statistically learn
and harness hidden spatial features in a fluorescence image to virtually propagate a single fluorescence image onto user-defined 3D surfaces within a fluorescent sample volume. This is
achieved without any mechanical scanning, additional optical hardware/components or a trade-off of imaging resolution, field-of-view or speed. Stated differently, we introduced, for the
first time, a digital propagation framework that learns (through only image data and without any assumptions or theoretical models) the spatial features in fluorescence images that uniquely
encode the 3D fluorescence wave propagation information in an intensity-only 2D recording without additional hardware. There are various powerful features of Deep-Z that make it
transformative for fluorescence imaging at all scales and across various disciplines that use fluorescence. A unique capability of Deep-Z framework is that it enables digital propagation of
a fluorescence image of a 3D surface onto another 3D surface, user-defined by the pixel mapping of the corresponding digital propagation matrix (DPM). An analog of the same capability exists
in holography under the paraxial approximation. In this sense, Deep-Z framework brings the computational 3D image propagation advantage of holography or other coherent imaging modalities
into fluorescence microscopy. Such a unique capability can be useful, among many applications, for e.g., simultaneous auto-focusing of different parts of a fluorescence image after the image
capture, measurement or assessment of aberrations introduced by the optical system as well as for correction of such aberrations by applying a desired nonuniform DPM.
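The digital propagation matrix itself is simply an image-sized map of target axial distances; the short sketch below builds a uniform DPM and a tilted-surface DPM and shows where a trained Deep-Z model (a hypothetical "deep_z" placeholder here) would take them as input:

```python
# Minimal sketch of building digital propagation matrices (DPMs) for Deep-Z-style
# virtual refocusing of a single 2D fluorescence image.
import numpy as np

h, w = 512, 512
fluorescence = np.random.rand(h, w)                  # stand-in for a single 2D fluorescence image

uniform_dpm = np.full((h, w), 4.0)                   # refocus the entire image onto a +4 um plane
tilt = np.linspace(-5.0, 5.0, h)[:, None]            # target surface tilted from -5 to +5 um
tilted_dpm = np.broadcast_to(tilt, (h, w)).copy()    # e.g., to correct a tilted sample plane

network_input = np.stack([fluorescence, tilted_dpm])  # (2, H, W): input image + its DPM
# refocused = deep_z(network_input)                  # hypothetical trained Deep-Z model;
#                                                    # one forward pass, no mechanical z-scan
```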
To exemplify the power of this additional degree of freedom enabled by Deep-Z, we experimentally demonstrated the correction of the planar tilting and curvature of different samples after the acquisition of a
single 2D fluorescence image per object. Yet another unique feature of this Deep-Z framework is that it permits cross-modality digital propagation of fluorescence images, where the neural
network is trained with gold standard label images obtained by a different fluorescence microscopy modality to teach the generator network to digitally propagate an input image onto another
plane within the sample volume, but this time to match the image of the same plane that is acquired by a different fluorescence imaging modality compared to the input image. We term this
related framework Deep-Z+. To demonstrate the proof of concept of this unique capability, we trained Deep-Z+ with input and label images that were acquired with a wide-field fluorescence
microscope and a confocal microscope, respectively, to blindly generate at the output of this cross-modality network, digitally propagated images of an input fluorescence image that match
confocal microscopy images of the same sample sections, i.e., performing axial sectioning (similar to the contrast that a confocal microscope has) and digital propagation of a fluorescence
image, both at the same time. After its training, Deep-Z remains fixed, and its non-iterative inference requires no parameter estimation or search. The inference is performed in a rapid
fashion, as it outputs the desired digitally-propagated fluorescence image without any changes to the optical microscopy set-up, or a trade-off of its spatial resolution or field-of-view. As
such, it allows rapid volumetric imaging (limited only by the detector speed) without any axial scanning or hardware modifications. In addition to fluorescence microscopy, Deep-Z framework
might be applied to other incoherent imaging modalities, and in fact it provides a bridge to close the gap between coherent and incoherent microscopes by enabling computational 3D imaging of
a volume using a single 2D incoherent image. Q6: PRIOR TO JOINING UCLA IN 2007, YOU RECEIVED YOUR PH.D. AND COMPLETED POSTDOCTORAL FELLOWSHIP AT STANFORD UNIVERSITY AND WERE APPOINTED AS A
RESEARCH FACULTY AT HARVARD MEDICAL SCHOOL, WELLMAN CENTER FOR PHOTOMEDICINE. HOW DID THIS EXPERIENCE INFLUENCE YOUR SUBSEQUENT SCIENTIFIC CAREER? WHY DID YOU LATER CONSIDER TO JOIN UCLA?
A6: After an engineering PhD at Stanford, moving to a medical school to conduct research was an eye opening experience for me. At Harvard Medical School, I learned how an MD looks at
innovation from the perspective of impact—not novelty. An engineer, sometimes, can get obsessed with novelty and “coolness” of an invention or idea—but biomedical research first and foremost
cares about impact, especially from the perspective of improving human health. My experience at Stanford and Harvard Medical School (Wellman Center for Photomedicine) taught me the
importance of the intersection between Impact and Novelty. That shaped how I look at science and engineering from the perspective of “innovations that matter”. Gary Tearney and Brett Bouma, pioneers in the OCT field, were my mentors at Harvard, and I would like to acknowledge them for their contributions to my training. Q7: YOU’VE BEEN WORKING ON OPTICS FOR MANY YEARS AND
HAVE BECOME A WORLD-LEADING SCIENTIST IN RELATED FIELDS, SO WE ARE WONDERING: WHAT ARE YOUR PLANS FOR THE FUTURE DEVELOPMENT OF YOUR SCIENTIFIC RESEARCH? A7: A very exciting development in my
career right now is a new spin-off company from my lab that aims to transform a century-old field and redefine histopathology and how it is practiced: Pictor Labs:
https://www.pictor-labs.com/ Approximately 2 years ago, my lab published a paper in Nature Biomedical Engineering which introduced a deep learning-based method to “virtually stain”
autofluorescence images of unlabeled histological tissue sections, eliminating the need for chemical staining15. This technology was developed to leverage the speed and computational power
of deep learning to improve upon century-old histochemical staining techniques which can be slow, laborious and expensive. In this seminal paper we showed that this virtual staining
technology using deep neural networks is capable of generating highly accurate stains across a wide variety of tissue and stain types. It has the potential to revolutionize the field of
histopathology by reducing the cost of tissue staining, while making it much faster, less destructive to the tissue and more consistent/repeatable. Since the publication of our paper, we
have had a number of exciting developments moving the technology forward. We have continued to find new applications for this unique technology, using the computational nature of the
technique to generate stains that would be impossible to create using traditional histochemical staining. For example, we have used what we refer to as a “digital staining matrix”, which
allows us to generate and digitally blend multiple stains using a single deep neural network, by specifying which stain should be performed on the pixel level. Not only can this unique
framework be used to perform multiple different stains on a single tissue section, it can also be used to create micro-structured stains, digitally staining different areas of label-free tissue with different stains. Furthermore, this digital staining matrix enables these stains to be blended together, by setting the encoding matrix to be a mixture of the possible stains.
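The following short sketch illustrates what a digital staining matrix can look like in practice; the channel counts, stain indices and the "virtual_stainer" network are hypothetical placeholders, not the configuration of the published model:

```python
# Minimal sketch of a digital staining matrix: a per-pixel encoding of which stain
# (or blend of stains) the virtual-staining network should render.
import numpy as np

h, w = 1024, 1024
autofluorescence = np.random.rand(2, h, w)       # label-free input (e.g., 2 channels, assumed)

n_stains = 3                                     # e.g., 0 = H&E, 1 = Masson's, 2 = PAS (assumed)
staining_matrix = np.zeros((n_stains, h, w))
staining_matrix[0, :, : w // 2] = 1.0            # left half rendered with stain 0
staining_matrix[1, :, w // 2:] = 1.0             # right half rendered with stain 1 (micro-structured)

# Blending: a 50/50 mixture of two stains within a region of interest
staining_matrix[:2, 100:200, 100:200] = 0.5

network_input = np.concatenate([autofluorescence, staining_matrix])  # (5, H, W)
# virtual_image = virtual_stainer(network_input)  # hypothetical network: one model, many stains
```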
This technology can be used to ensure that pathologists are able to receive the most relevant information possible from the various virtual stains being performed. This work16 was published in
LSA in 2020 and has opened up the path for a very exciting new opportunity: stain-to-stain transformations, which enable transforming existing images of tissue biopsy stained with one type
of stain into many other types of stains17, almost instantaneously. Published in Nature Communications, this stain-to-stain transformation process takes less than one minute per tissue
sample, as opposed to several hours or even more than a day when performed by human experts. This speed differential enables faster preliminary diagnoses that require special stains,
while also providing significant cost savings. Motivated by the transformative potential of our virtual staining technology, we have also begun the process of its commercialization and
founded Pictor Labs, a new Los Angeles-based startup. Pictor in Latin means “painter” and at Pictor Labs we virtually “paint” the microstructure of tissue samples using deep learning. In
2020, we were successful in raising seed funding from venture capital firms including M Ventures (a subsidiary of Merck KGaA) and Motus Ventures, as well as from private investors. Through Pictor Labs,
we aim to revolutionize the histopathology staining workflow using this virtual staining technology, and by building a cloud computing-based platform which facilitates histopathology
through AI, we will enable tissue diagnoses and help clinicians manage patient care. I am very excited to have this unique opportunity to bring our cutting-edge academic research into the
commercialization phase and look forward to more directly impacting human health over the coming years using this transformative virtual staining technology. Q8: SCIENTIFIC RESEARCH SHOULD ULTIMATELY SERVE TO IMPROVE PEOPLE’S LIVELIHOODS AND ECONOMIC DEVELOPMENT, AND THE TRANSLATION FROM ACADEMIA TO INDUSTRY IS AN IMPORTANT PROCESS. AT PRESENT, WHICH OF YOUR RESEARCH WORKS HAVE BEEN TRANSLATED INTO PRACTICAL APPLICATIONS? WHAT ARE THE SOCIAL BENEFITS? A8: My research on computational imaging, mobile sensing and diagnostics has created widely scalable mobile
technologies, e.g., blood analysis, sensing of pathogens/toxins in bodily fluids, food and water samples, diagnosis of infectious diseases, screening of antimicrobial resistance,
pathology analysis, as well as particulate matter and bio-aerosol detection for air quality measurements, which altogether have the potential to dramatically increase the reach of advanced
biomedical technologies to developing countries and resource limited settings. My work has been broadly helping to democratize biomedical measurement science by enabling advanced
measurements to be cost-effectively performed even in field-settings using mobile instruments powered by computational optics and machine learning. These innovative measurement technologies
led to >45 issued patents, several licensed by different companies including Honeywell, GE, Northrop Grumman (Litton), Arcelik, NOW Diagnostics and my own start-ups, targeting
multi-billion $ markets, with products used in >10 countries, also earning one of my own companies a “Technology Pioneer” Award given by The World Economic Forum in 2015. My work on
mobile microscopy entered introductory level biology textbooks by, e.g., National Geographic, Cengage, and is being used by numerous academic groups world-wide, including in developing
countries, also through my lab’s massive collaborations with >25 labs. My lab was one of the first teams that utilized the cellphone as a platform for advanced measurements, microscopy
and sensing, covering various applications. For example, we were the first group to image and count individual viruses and individual DNA molecules using mobile phone-based microscopes. We were
one of the first groups to utilize the smart phone as a platform for quantitative sensing, for example, quantification of lateral flow tests. Our mobile diagnostic test readers are still
being used in industry through a licensee of one of our patents. Lucendi, a start-up that I cofounded, has commercialized a mobile imaging flow cytometer for water quality analysis,
including, for example, toxic algal blooms. In fact, this seminal work18 was also published in LSA. As another example, our team has introduced the first point-of-care sensor that is designed by
machine learning and that runs and makes decisions based on a neural network19,20. This is a vertical flow assay that can look at >80 immunoreactions in parallel. We have shown its
efficacy for the detection of early-stage Lyme disease patients based on IgG and IgM panels (profiling the immunity of the patient), published in ACS Nano. We are also considering a similar
approach for COVID-19—especially important to understand, e.g., the efficacy of vaccines and when a booster shot is needed. Q9: WE OFFICIALLY LAUNCHED ELIGHT JOURNAL IN JUNE THIS YEAR, WHICH
AIMS TO ATTRACT THE FINEST MANUSCRIPTS, BROADLY COVERING ALL SUB-FIELDS OF OPTICS, PHOTONICS AND ELECTROMAGNETICS, IN PARTICULAR FOCUSING ON THOSE EMERGING TOPICS AND CROSS-DISCIPLINARY
RESEARCHES RELATED TO OPTICS. AS THE EDITOR-IN-CHIEF, CAN YOU TALK ABOUT THE DEVELOPMENT ORIENTATION AND DIRECTION OF ELIGHT? HOW CAN THIS NEW JOURNAL STAND OUT FROM ITS PEERS? WHAT IS YOUR
VISION AND EXPECTATION FOR ELIGHT? A9: eLight is the new kid on the block. Together with Cheng-Wei, I am serving as the co-EIC of eLight, and I am also the father of its name. LSA is an
amazing success story in the Optics/Photonics field, with an impact factor of 17.782—it is a globally leading journal in optics, and CIOMP should be proud of it. eLight will be powered by the same team that has made LSA an amazing success. Approximately 50 publications will be targeted per year. So my message to all the researchers who are focusing on the Optics/Photonics field, broadly
defined: Think of your best work in a given year—that would provide an excellent fit to eLight. Please consider submitting your best work to eLight. Q10: IN OUR IMPRESSION, YOU ARE WORKING
ALMOST ALL THE TIME, AND IT SEEMS THAT WHENEVER WE SEND YOU A MESSAGE, WE GET A RESPONSE AT ONCE. WE REALLY ADMIRE YOUR PROFESSIONALISM. HOW DO YOU MAINTAIN YOUR ROBUST ENERGY AND POSITIVE ATTITUDE AT WORK? IN YOUR OPINION, WHAT QUALITIES SHOULD AN EXCELLENT SCIENTIFIC RESEARCHER POSSESS? A10: You have got to constantly learn new things. My lab is not focused on a
narrow area; effectively, it is three different labs coming together. This gives us the big picture and the ability to connect things, because we can see how different areas can come
together by looking at the big picture. This also makes it much bigger than the sum of its parts. Running a large engineering lab with a broad focus is a lot of effort, but to succeed in all
of these areas you need to build interdisciplinary teams with a culture of sharing and learning from each other. My team is very diverse, covering many areas of engineering, and everyone
can learn from everyone else. At the same time, we have many interdisciplinary collaborators. This makes our publications high impact and relevant—meaning, we solve problems that really
matter—at the intersection of impact and novelty. In summary: Never stop learning. Be open minded to new directions. Learn from your students, team members and colleagues. And understand
that everyone can be wrong, including yourself. No one is perfect and understanding your weaknesses is important. Q11: RISING STARS OF LIGHT IS AN ANNUAL INTERNATIONAL COMPETITION INITIATED
BY LSA TO SELECT THE BRIGHTEST YOUNG SCIENTISTS IN OPTICS-RELATED FIELDS. DUE TO THE PANDEMIC, THE 2020 RISING STARS COMPETITION WAS MOVED ONLINE. AFTER TWO FIERCE AND FRIENDLY CONTESTS TOTALING 8 HOURS, IT WAS SUCCESSFULLY CONCLUDED IN THE PRESENCE OF MORE THAN 500,000 VIEWERS WORLDWIDE. THE 2021 RISING STARS COMPETITION HAS SET SAIL, AND WE ARE HONORED TO INVITE YOU AGAIN TO BE THE CHAIRMAN OF THIS COMPETITION. WHAT POSITIVE EFFECTS DO YOU THINK SUCH EVENTS ACHIEVE? WILL THEY PROMOTE THE INTERNATIONAL INFLUENCE OF LSA? A11: I think the Rising Stars awards have already become a very prestigious
recognition among junior researchers in optics and photonics related fields. This year’s cohort of applications was amazingly well qualified and it was very difficult for the award committee
to make a selection for the next round. Researchers do not work for awards—we are motivated by our curiosity and our passion for “change”—making the world better and advancing our
understanding of the universe through the impact of our science and engineering. Having said this, these awards can sometimes motivate researchers to continue in their careers, and help them
accelerate their progress, reminding them of the importance of their research and broader impact. In this sense, I find the Rising Stars awards very important and timely. Q12: IN 2017, YOU
WERE INVITED BY THE OSA/SPIE STUDENT BRANCH OF THE CHANGCHUN INSTITUTE OF OPTICS, FINE MECHANICS AND PHYSICS (CIOMP) TO GIVE A WONDERFUL OSA TRAVELLING LECTURE, AND THE ON-SITE ATMOSPHERE WAS QUITE WARM. WHAT IS YOUR IMPRESSION OF CIOMP? WHAT ASPECTS DO YOU THINK SHOULD BE EMPHASIZED IN TALENT TRAINING? A12: That was my first time visiting CIOMP. I am very impressed with the Optics and
Photonics research coming out of China, and over the last 2 decades I have seen Chinese universities in general, together with their students and scholars, making amazing progress. In my lab
I have had several generations of Chinese students, from Tsinghua University, Peking University, Zhejiang University and others, and some of my best students are from China. I always
eagerly look forward to seeing new applications from Chinese universities. These students come with an amazing package: an excellent research background, strong publications, and an eagerness to work hard and learn deeply. I also personally find that Chinese students are an excellent fit for my lab and its dynamic style—they are very collegial, respectful, and hardworking.
Q13: HOW DO YOU RELIEVE PRESSURE AT WORK? DO YOU HAVE ANY INTERESTS APART FROM YOUR JOB? A13: I like cooking and inventing new dishes, which mostly turn out very tasty (according to my wife), but sometimes perhaps end up as tasteless inventions. I am not afraid of failure in this sense. I also like basketball very much—I played at a competitive level until I seriously injured
my knees—both of them, one ACL torn after the other, one year apart, playing basketball in each case. This also says something more about me: perseverance and a tendency to learn (slowly) through experiments. I stopped playing basketball after losing the ACLs in both knees. LIGHT CORRESPONDENT _Tingting Sun is an Assistant Professor with the Light Publishing Group at the Changchun
Institute of Optics, Fine Mechanics and Physics (CIOMP), Chinese Academy of Sciences (CAS). She received her Doctor of Engineering degree from the University of Chinese Academy of Sciences in 2016. She
currently serves as an Academic Editor for Light: Science & Applications, a leading journal of the Excellence Program, and is the Assistant to the Editor-in-Chief of LSA’s sister journal eLight. She is also a senior talent jointly trained by the International Cooperation Bureau of CAS. She came from the frontline of scientific research and has a strong research background. She has led two scientific research projects as the project leader and participated in many major scientific projects. She has published many SCI- and EI-indexed academic
papers and applied for two national invention patents._ REFERENCES
* Zhang, Y. J. et al. Neural network-based image reconstruction in swept-source optical coherence tomography using undersampled spectral data. _Light Sci. Appl._ 10, 155, https://doi.org/10.1038/s41377-021-00594-7 (2021).
* Rivenson, Y. et al. Phase recovery and holographic image reconstruction using deep learning in neural networks. _Light Sci. Appl._ 7, 17141, https://doi.org/10.1038/lsa.2017.141 (2018).
* Wu, Y. C. et al. Extended depth-of-field in holographic imaging using deep-learning-based autofocusing and phase recovery. _Optica_ 5, 704–710, https://doi.org/10.1364/OPTICA.5.000704 (2018).
* Wu, Y. C. et al. Bright-field holography: cross-modality deep learning enables snapshot 3D imaging with bright-field contrast using a single hologram. _Light Sci. Appl._ 8, 25, https://doi.org/10.1038/s41377-019-0139-9 (2019).
* Lin, X. et al. All-optical machine learning using diffractive deep neural networks. _Science_ 361, 1004–1008, https://doi.org/10.1126/science.aat8084 (2018).
* Rahman, S. S. et al. Ensemble learning of diffractive optical networks. _Light Sci. Appl._ 10, 14, https://doi.org/10.1038/s41377-020-00446-w (2021).
* Veli, M. et al. Terahertz pulse shaping using diffractive surfaces. _Nat. Commun._ 12, 37, https://doi.org/10.1038/s41467-020-20268-z (2021).
* Luo, Y. et al. Design of task-specific optical systems using broadband diffractive neural networks. _Light Sci. Appl._ 8, 112, https://doi.org/10.1038/s41377-019-0223-1 (2019).
* Kulce, O. et al. All-optical information-processing capacity of diffractive surfaces. _Light Sci. Appl._ 10, 25, https://doi.org/10.1038/s41377-020-00439-9 (2021).
* Li, J. X. et al. Spectrally encoded single-pixel machine vision using diffractive networks. _Sci. Adv._ 7, 13, https://doi.org/10.1126/sciadv.abd7690 (2021).
* Kulce, O. et al. All-optical synthesis of an arbitrary linear transformation using diffractive surfaces. _Light Sci. Appl._ 10, 196, https://doi.org/10.1038/s41377-021-00623-5 (2021).
* Rivenson, Y. et al. Deep learning microscopy. _Optica_ 4, 1437–1443, https://doi.org/10.1364/OPTICA.4.001437 (2017).
* Wang, H. D. et al. Deep learning enables cross-modality super-resolution in fluorescence microscopy. _Nat. Methods_ 16, 103–110, https://doi.org/10.1038/s41592-018-0239-0 (2019).
* Wu, Y. C. et al. Three-dimensional virtual refocusing of fluorescence microscopy images using deep learning. _Nat. Methods_ 16, 1323–1331, https://doi.org/10.1038/s41592-019-0622-5 (2019).
* Rivenson, Y. et al. Virtual histological staining of unlabelled tissue-autofluorescence images via deep learning. _Nat. Biomed. Eng._ 3, 466–477, https://doi.org/10.1038/s41551-019-0362-y (2019).
* Zhang, Y. J. et al. Digital synthesis of histological stains using micro-structured and multiplexed virtual staining of label-free tissue. _Light Sci. Appl._ 9, 78, https://doi.org/10.1038/s41377-020-0315-y (2020).
* de Haan, K. et al. Deep learning-based transformation of H&E stained tissues into special stains. _Nat. Commun._ 12, 4884, https://doi.org/10.1038/s41467-021-25221-2 (2021).
* Göröcs, Z. et al. A deep learning-enabled portable imaging flow cytometer for cost-effective, high-throughput, and label-free analysis of natural water samples. _Light Sci. Appl._ 7, 66, https://doi.org/10.1038/s41377-018-0067-0 (2018).
* Joung, H. A. et al. Point-of-care serodiagnostic test for early-stage Lyme disease using a multiplexed paper-based immunoassay and machine learning. _ACS Nano_ 14, 229–240, https://doi.org/10.1021/acsnano.9b08151 (2020).
* Ballard, Z. S. et al. Deep learning-enabled point-of-care sensing using multiplexed paper-based sensors. _npj Digital Med._ 3, 66, https://doi.org/10.1038/s41746-020-0274-y (2020).
AUTHOR INFORMATION AUTHORS AND AFFILIATIONS * Light Publishing Group,
Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, 3888 Dong Nan Hu Road, Changchun, 130033, China Tingting Sun CORRESPONDING AUTHOR Correspondence to Tingting Sun. RIGHTS AND PERMISSIONS OPEN ACCESS This article is licensed
under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article
are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and
your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this
license, visit http://creativecommons.org/licenses/by/4.0/. ABOUT THIS ARTICLE CITE THIS ARTICLE Sun, T. Light People: Professor Aydogan Ozcan. _Light Sci Appl_ 10, 208 (2021). https://doi.org/10.1038/s41377-021-00643-1 * Published: 05 October 2021 * DOI: https://doi.org/10.1038/s41377-021-00643-1