Journal Article (Open Access) | Published February 2023
Available files: Accepted Version + Supplemental Material

Label-free intraoperative histology of bone tissue via deep-learning-assisted ultraviolet photoacoustic microscopy

Abstract

Obtaining frozen sections of bone tissue for intraoperative examination is challenging. To identify the bony edge of resection, orthopaedic oncologists therefore rely on pre-operative X-ray computed tomography or magnetic resonance imaging. However, these techniques do not allow for accurate diagnosis or for intraoperative confirmation of the tumour margins, and in bony sarcomas, they can lead to bone margins up to 10-fold wider (1,000-fold volumetrically) than necessary. Here, we show that real-time three-dimensional contour-scanning of tissue via ultraviolet photoacoustic microscopy in reflection mode can be used to intraoperatively evaluate undecalcified and decalcified thick bone specimens, without the need for tissue sectioning. We validate the technique with gold-standard haematoxylin-and-eosin histology images acquired via a traditional optical microscope, and also show that an unsupervised generative adversarial network can virtually stain the ultraviolet-photoacoustic-microscopy images, allowing pathologists to readily identify cancerous features. Label-free and slide-free histology via ultraviolet photoacoustic microscopy may allow for rapid diagnoses of bone-tissue pathologies and aid the intraoperative determination of tumour margins.

Additional Information

© The Author(s), under exclusive licence to Springer Nature Limited 2022. We thank M. D'Apuzzo for helpful discussions and valuable pathological feedback. This work was sponsored by the United States National Institutes of Health (NIH) grants R01 CA186567 (NIH Director's Transformative Research Award), R35 CA220436 (Outstanding Investigator Award) and R01 EB028277A.

Contributions. R.C., B.C. and L.V.W. designed the experiment. R.C. and Y.Z. built the system and wrote the control programme. R.C. performed the experiment. S.D.N., B.C. and Y. Liang provided bone specimens and H&E slices. R.C., S.D. and Y. Luo performed image processing. L.V.W. and B.C. supervised the project. All authors were involved in discussions during the work and in the preparation of the manuscript.

Data availability. The main data supporting the findings of this study are available within the paper and its Supplementary Information. The training dataset and the fake output images for the CycleGAN network are available at https://doi.org/10.5281/zenodo.6345772. The raw data generated during the study are too large to be publicly shared, yet they are available for research purposes from the corresponding authors on reasonable request.

Code availability. The original code for CycleGAN is available at https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix. We applied this code to our dataset with the customized settings described in Methods. MATLAB was used for creating image tiles for the network and for restitching the output image tiles. The quantitative analysis of photoacoustic virtual histology was done via QuPath (https://qupath.github.io). The system control software and the data collection software are proprietary and used in licensed technologies.

Competing interests. L.V.W. has a financial interest in MicroPhotoAcoustics, CalPACT and Union Photoacoustic Technologies. (However, these companies did not financially support this work.) The other authors declare no competing interests.
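The authors note that MATLAB was used to create image tiles for the CycleGAN network and to restitch the network's output tiles; the MATLAB code itself is not published here. As a rough illustration of the general tile-and-restitch workflow only (not the authors' actual implementation, whose tile size, overlap, and padding scheme are described in their Methods), a minimal Python/NumPy sketch using non-overlapping square tiles and zero-padding might look like this:

```python
import numpy as np


def make_tiles(image, tile):
    """Split a 2-D image into non-overlapping square tiles.

    Zero-pads the image so both dimensions are multiples of `tile`,
    then returns the tiles in row-major order together with the
    padded shape (needed later for restitching).
    """
    h, w = image.shape
    pad_h = (tile - h % tile) % tile
    pad_w = (tile - w % tile) % tile
    padded = np.pad(image, ((0, pad_h), (0, pad_w)))
    tiles = [
        padded[r:r + tile, c:c + tile]
        for r in range(0, padded.shape[0], tile)
        for c in range(0, padded.shape[1], tile)
    ]
    return tiles, padded.shape


def stitch_tiles(tiles, padded_shape, tile, orig_shape):
    """Reassemble row-major tiles and crop back to the original size."""
    out = np.zeros(padded_shape, dtype=tiles[0].dtype)
    i = 0
    for r in range(0, padded_shape[0], tile):
        for c in range(0, padded_shape[1], tile):
            out[r:r + tile, c:c + tile] = tiles[i]
            i += 1
    return out[:orig_shape[0], :orig_shape[1]]


# Round-trip example on a small synthetic image.
img = np.arange(5 * 7).reshape(5, 7)
tiles, padded_shape = make_tiles(img, tile=4)
restitched = stitch_tiles(tiles, padded_shape, 4, img.shape)
assert np.array_equal(restitched, img)
```

In practice the tiles would be written out as images for CycleGAN's dataloader, and the network's translated outputs would be restitched in the same row-major order; published pipelines often add tile overlap and blending to suppress seam artifacts, a detail omitted from this sketch.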

Attached Files

Accepted Version - nihms-1902875.pdf

Supplemental Material - 41551_2022_940_MOESM1_ESM.pdf

Supplemental Material - 41551_2022_940_MOESM3_ESM.mp4

Files (39.3 MB total)

nihms-1902875.pdf
md5:957effcd33b209bfff9dda0ecd963fac (2.4 MB)
md5:3058f0015b796f506a6e402db1962799 (33.4 MB)
md5:3d463610dacf22541f75fc7ea1d09235 (3.5 MB)

Additional details

Created: August 22, 2023
Modified: October 24, 2023