Published April 2022 | Supplemental Material + Submitted
Journal Article | Open Access

Whole-cell segmentation of tissue images with human-level performance using large-scale data annotation and deep learning

Abstract

A principal challenge in the analysis of tissue imaging data is cell segmentation—the task of identifying the precise boundary of every cell in an image. To address this problem, we constructed TissueNet, a dataset for training segmentation models that contains more than 1 million manually labeled cells, an order of magnitude more than all previously published segmentation training datasets. We used TissueNet to train Mesmer, a deep-learning-enabled segmentation algorithm. We demonstrated that Mesmer is more accurate than previous methods, generalizes to the full diversity of tissue types and imaging platforms in TissueNet, and achieves human-level performance. Mesmer enabled the automated extraction of key cellular features, such as subcellular localization of protein signal, which was challenging with previous approaches. We then adapted Mesmer to harness cell lineage information in highly multiplexed datasets and used this enhanced version to quantify cell morphology changes during human gestation. All code, data and models are released as a community resource.
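For readers who want to try the released model, below is a minimal sketch of running Mesmer through the open-source DeepCell library. It assumes the deepcell Python package is installed and that its pretrained Mesmer weights are fetched on first use; the placeholder arrays stand in for real nuclear and membrane channel images, and the exact API may differ between package versions (consult the vanvalenlab repositories listed under Code availability):

    # Minimal sketch: whole-cell segmentation with the pretrained Mesmer
    # model via the deepcell package (pip install deepcell). The random
    # arrays below are placeholders for real two-channel tissue images.
    import numpy as np
    from deepcell.applications import Mesmer

    # Mesmer expects a 4D batch of shape (batch, rows, cols, 2):
    # channel 0 = nuclear stain, channel 1 = membrane/cytoplasm stain.
    nuclear = np.random.rand(512, 512)
    membrane = np.random.rand(512, 512)
    image = np.stack([nuclear, membrane], axis=-1)[np.newaxis, ...]

    app = Mesmer()  # loads the pretrained Mesmer weights

    # image_mpp is the image resolution in microns per pixel;
    # compartment selects 'whole-cell' or 'nuclear' segmentation.
    labels = app.predict(image, image_mpp=0.5, compartment='whole-cell')
    print(labels.shape)  # (1, 512, 512, 1): integer mask, one ID per cell

Because the output is an integer label mask rather than a probability map, downstream per-cell feature extraction (for example, the subcellular localization analysis described in the abstract) follows directly from indexing pixels by cell ID.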

Additional Information

© 2021 Nature Publishing Group. Received 01 March 2021; Accepted 14 September 2021; Published 18 November 2021.

We thank K. Börner, L. Cai, M. Covert, A. Karpathy, S. Quake and M. Thomson for interesting discussions; D. Glass and E. McCaffrey for feedback on the manuscript; T. Vora for copy editing; R. Angoshtari, G. Barlow, B. Bodenmiller, C. Carey, R. Coffey, A. Delmastro, C. Egelston, M. Hoppe, H. Jackson, A. Jeyasekharan, S. Jiang, Y. Kim, E. McCaffrey, E. McKinley, M. Nelson, S.-B. Ng, G. Nolan, S. Patel, Y. Peng, D. Philips, R. Rashid, S. Rodig, S. Santagata, C. Schuerch, D. Schulz, Di. Simons, P. Sorger, J. Weirather and Y. Yuan for providing imaging data for TissueNet; the crowd annotators who powered our human-in-the-loop pipeline; and all patients who donated samples for this study.

This work was supported by grants from the Shurl and Kay Curci Foundation, the Rita Allen Foundation, the Susan E. Riley Foundation, the Pew Heritage Trust, the Alexander and Margaret Stewart Trust, the Heritage Medical Research Institute, the Paul Allen Family Foundation through the Allen Discovery Centers at Stanford and Caltech, the Rosen Center for Bioengineering at Caltech and the Center for Environmental and Microbial Interactions at Caltech (D.V.V.). This work was also supported by 5U54CA20997105, 5DP5OD01982205, 1R01CA24063801A1, 5R01AG06827902, 5UH3CA24663303, 5R01CA22952904, 1U24CA22430901, 5R01AG05791504 and 5R01AG05628705 from the NIH, W81XWH2110143 from the DOD, and other funding from the Bill and Melinda Gates Foundation, the Cancer Research Institute, the Parker Center for Cancer Immunotherapy and the Breast Cancer Research Foundation (M.A.). N.F.G. was supported by NCI CA246880-01 and the Stanford Graduate Fellowship. B.J.M. was supported by the Stanford Graduate Fellowship and the Stanford Interdisciplinary Graduate Fellowship. T.D. was supported by the Schmidt Academy for Software Engineering at Caltech.

Data availability: The TissueNet dataset is available at https://datasets.deepcell.org/ for noncommercial use.

Code availability: All software for dataset construction, model training, deployment and analysis is available on our GitHub page, https://github.com/vanvalenlab/intro-to-deepcell. All code to generate the figures in this paper is available at https://github.com/vanvalenlab/publication-figures/tree/master/2021-Greenwald_Miller_et_al-Mesmer.

These authors contributed equally: Noah F. Greenwald, Geneva Miller.

Author Contributions: N.F.G., L.K., M.A. and D.V.V. conceived the project. E.M. and D.V.V. conceived the human-in-the-loop approach. L.K. and M.A. conceived the whole-cell segmentation approach. G.M., T.D., E.M., W.G. and D.V.V. developed DeepCell Label. G.M., N.F.G., E.M., I.C., W.G. and D.V.V. developed the human-in-the-loop pipeline. M.S.S., C.P., W.G. and D.V.V. developed Mesmer's deep-learning architecture. W.G., N.F.G. and D.V.V. developed model training software. C.P. and W.G. developed cloud deployment. M.S.S., S.C., W.G. and D.V.V. developed metrics software. W.G. developed plugins. N.F.G., A. Kong, A. Kagel, J.S. and O.B.-T. developed the multiplex image analysis pipeline. A. Kagel and G.M. developed the pathologist evaluation software. N.F.G., G.M. and T.H. supervised training data creation. N.F.G., C.C.F., B.J.M., K.X.L., M.F., G.C., Z.A., J.M. and S.W. performed quality control on the training data. E.S., S.G. and T.R. generated MIBI-TOF data for morphological analyses. S.C.B. helped with experimental design. N.F.G., W.G. and D.V.V. trained the models. N.F.G., W.G., G.M. and D.V.V. performed data analysis. N.F.G., G.M., M.A. and D.V.V. wrote the manuscript. M.A. and D.V.V. supervised the project. All authors provided feedback on the manuscript.

Peer review information: Nature Biotechnology thanks the anonymous reviewers for their contribution to the peer review of this work.

Attached Files

Submitted - 2021.03.01.431313v1.full.pdf

Supplemental Material - 41587_2021_1094_Fig10_ESM.webp

Supplemental Material - 41587_2021_1094_Fig7_ESM.webp

Supplemental Material - 41587_2021_1094_Fig8_ESM.webp

Supplemental Material - 41587_2021_1094_Fig9_ESM.webp

Supplemental Material - 41587_2021_1094_MOESM1_ESM.pdf

Files (18.4 MB)

md5:db367232bebb95432c67b39ba3233126 (462.6 kB)
md5:d007ffed61f7d81d18e92b49c92ed027 (15.7 MB)
md5:1014987d749d342f296fd9ba70c9720b (415.2 kB)
md5:24cad94bdd75ca573bce895c6b8007ef (338.0 kB)
md5:ddcce8af2061981155f838ccea28ff39 (1.4 MB)
md5:280f10d077bab9e0a4c80493510b4b01 (135.1 kB)

Additional details

Created: August 22, 2023
Modified: December 22, 2023