Published May 12, 2022 | Published + Supplemental Material
Journal Article | Open Access

Deep learning acceleration of multiscale superresolution localization photoacoustic imaging

Abstract

A superresolution imaging approach that localizes very small targets, such as red blood cells or droplets of an injected photoacoustic (PA) dye, has significantly improved spatial resolution in various biological and medical imaging modalities. However, this superior spatial resolution comes at the cost of temporal resolution, because many raw image frames, each containing the localization targets, must be superimposed to form a sufficiently sampled high-density superresolution image. Here, we demonstrate a computational strategy based on deep neural networks (DNNs) that reconstructs high-density superresolution images from far fewer raw image frames. The localization strategy applies to both 3D label-free localization optical-resolution photoacoustic microscopy (OR-PAM) and 2D labeled localization photoacoustic computed tomography (PACT). For the former, the required number of raw volumetric frames is reduced from tens to fewer than ten; for the latter, the required number of raw 2D frames is reduced 12-fold. Our proposed method therefore simultaneously improves temporal resolution (via the DNN) and spatial resolution (via the localization method) in both label-free microscopy and labeled tomography. Deep-learning-powered localization PA imaging can potentially provide a practical tool for preclinical and clinical studies requiring both fast temporal and fine spatial resolution.
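To make the pipeline described above concrete, below is a minimal sketch in Python/PyTorch. It is illustrative only, not the authors' released code: a simple thresholding step stands in for their localization of targets in each raw frame, and a small convolutional network stands in for the paper's DNN that predicts a high-density superresolution image from the low-density map built from only a few frames. All function and class names are hypothetical.

# A minimal sketch, assuming PyTorch; illustrative, not the authors' method.
import torch
import torch.nn as nn

def accumulate_localizations(frames: torch.Tensor, threshold: float, scale: int) -> torch.Tensor:
    """Accumulate localized targets from raw frames onto a finer grid.

    frames: (N, H, W) stack of raw PA frames, each with sparse targets.
    Pixels above `threshold` stand in for localized targets (real pipelines
    use peak detection and sub-pixel fitting); hits are accumulated onto a
    grid upsampled by `scale` to form a localization density map.
    """
    _, h, w = frames.shape
    density = torch.zeros(h * scale, w * scale)
    for frame in frames:
        ys, xs = torch.nonzero(frame > threshold, as_tuple=True)
        density.index_put_((ys * scale, xs * scale),
                           torch.ones(ys.numel()), accumulate=True)
    return density

class DensityNet(nn.Module):
    """Toy CNN standing in for the paper's DNN: maps a low-density
    localization map (few frames) to a predicted high-density map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Usage sketch: build a sparse map from only 8 raw frames (instead of tens)
# and let the (untrained, illustrative) network predict the dense map.
frames = torch.rand(8, 64, 64)
sparse_map = accumulate_localizations(frames, threshold=0.999, scale=4)
model = DensityNet()
dense_pred = model(sparse_map[None, None])   # shape (1, 1, 256, 256)

In practice such a network would be trained on pairs of low-density inputs (built from a few frames) and high-density targets (built from the full frame stack), which is what enables the reduction from tens of raw frames to fewer than ten described in the abstract.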

Additional Information

© The Author(s) 2022. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

Received 21 November 2021; Revised 24 April 2022; Accepted 26 April 2022; Published 12 May 2022.

J.K. would like to thank Joongho Ahn for fruitful discussions about the operating software of the OR-PAM system. This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (2020R1A6A1A03047902); by the National R&D Program through the NRF, funded by the Ministry of Science and ICT (MSIT) (2020M3H2A1078045); and by NRF grants funded by the Korean government (MSIT) (No. NRF-2019R1A2C2006269 and No. 2020R1C1C1013549). This work was partly supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korean government (MSIT) (No. 2019-0-01906, Artificial Intelligence Graduate School Program (POSTECH)) and by a Korea Evaluation Institute of Industrial Technology (KEIT) grant funded by the Ministry of Trade, Industry and Energy (MOTIE). This work was also supported by the Korea Medical Device Development Fund grant funded by the MOTIE (9991007019, KMDF_PR_20200901_0008) and by the BK21 Four project.

Data availability: All data are available within the Article and Supplementary Files, or from the authors upon request.

Contributions: C.K. and J.K. conceived and designed the study. J.K., J.Y.K., Y.K., and L.L. constructed the imaging systems. J.K., L.L., and P.Z. contributed to managing the imaging systems for collecting the raw data. J.K., G.K., and L.L. developed the image processing algorithms and DL networks. J.K. and G.K. contributed to training the DNNs and analyzing the results. C.K. supervised the entire project. J.K., G.K., and L.L. prepared the figures and wrote the manuscript under the guidance of C.K., L.V.W., and S.L. All authors contributed to the critical reading and writing of the manuscript.

Conflict of interest: C. Kim and J.Y. Kim have financial interests in Opticho, and the OR-PAM system (i.e., OptichoM) was supported by Opticho. L.V. Wang has financial interests in Microphotoacoustics, Inc., CalPACT, LLC, and Union Photoacoustic Technologies, Ltd., none of which supported this work.

Attached Files

Published - s41377-022-00820-w.pdf

Supplemental Material - 41377_2022_820_MOESM1_ESM.pdf

Supplemental Material - 41377_2022_820_MOESM2_ESM.mp4

Supplemental Material - 41377_2022_820_MOESM3_ESM.mp4

Files (34.4 MB)

md5:609886f18512690743c1ac9f5b3e1bdc (19.2 MB)
md5:8b0dcd74cf7f0ab526e341c6160fda39 (1.6 MB)
md5:e96c3c04fce094be999cd9d53ab7b663 (9.9 MB)
md5:80190fe652ec9e9f01ad83556d8409ff (3.8 MB)

Additional details

Created: August 22, 2023
Modified: October 24, 2023