Published July 5, 2017 | Accepted Version
Journal Article | Open Access

Automated Computational Processing of 3-D MR Images of Mouse Brain for Phenotyping of Living Animals

Abstract

Magnetic resonance (MR) imaging provides a method to obtain anatomical information from the brain in vivo that is not typically available by optical imaging because of this organ's opacity. MR is nondestructive and obtains deep tissue contrast with 100-µm³ voxel resolution or better. Manganese-enhanced MRI (MEMRI) may be used to observe axonal transport and localized neural activity in the living rodent and avian brain. Such enhancement enables researchers to investigate differences in functional circuitry or neuronal activity in images of brains of different animals. Moreover, once MR images of a number of animals are aligned into a single matrix, statistical analysis can be done comparing MR intensities between different multi-animal cohorts comprising individuals from different mouse strains or different transgenic animals, or at different time points after an experimental manipulation. Although preprocessing steps for such comparisons (including skull stripping and alignment) are automated for human imaging, no such automated processing has previously been readily available for mouse or other widely used experimental animals, and most investigators use in-house custom processing. This protocol describes a stepwise method to perform such preprocessing for mouse.
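
The protocol itself walks through the preprocessing steps (skull stripping, alignment, and intensity scaling). Purely as a rough illustration of the kind of voxelwise cohort comparison the abstract alludes to, the short Python sketch below runs a two-sample t-test across volumes assumed to be already skull-stripped and aligned into a common space; the file names, cohort composition, and the use of nibabel and SciPy are hypothetical choices for this example, not part of the published protocol.

# Illustrative sketch only: voxelwise comparison of two cohorts of mouse-brain
# MR volumes assumed to be already skull-stripped and aligned to a common space
# (the preprocessing this protocol automates). File names and the use of
# nibabel/SciPy are hypothetical choices for this example.
import nibabel as nib
import numpy as np
from scipy import stats

def load_cohort(paths):
    # Stack aligned 3-D volumes into one 4-D array: (subject, x, y, z).
    return np.stack([nib.load(p).get_fdata() for p in paths])

cohort_a = load_cohort(["strainA_mouse1.nii.gz", "strainA_mouse2.nii.gz"])
cohort_b = load_cohort(["strainB_mouse1.nii.gz", "strainB_mouse2.nii.gz"])

# Voxelwise two-sample t-test comparing MR intensities between the cohorts.
t_map, p_map = stats.ttest_ind(cohort_a, cohort_b, axis=0)

# Flag voxels at an uncorrected p < 0.01; a real analysis would correct for
# the multiple comparisons made across the many voxels in each volume.
significant = p_map < 0.01
print(f"{int(significant.sum())} voxels below uncorrected p = 0.01")

A plain t-test is used here only to make the idea concrete; the statistical comparison actually recommended by the protocol may differ.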

Additional Information

© 2017 John Wiley & Sons, Inc. Published online July 5, 2017. This work was funded by The Harvey Family Endowment (E.L.B.), MH096093 (E.L.B.), NIGMS (P50GM085273) (E.L.B.), NIDA (DA018184) (Russell E. Jacobs with sub-award to E.L.B.), and NINDS (NS046810 and NS062184) (E.L.B.). We are indebted to Russell E. Jacobs and the Beckman Institute Imaging Center at Caltech for access to the 11.7 T MR scanner, the images, the template image, and the modal scaling code, as well as advice and coaching for over a dozen years on MR technology. We also gratefully acknowledge UNM's BA-MD program for support of Christopher Medina. We thank Kevin P. Reagan, Sharon Wu Lin, and XiaoWei Zhang for technical assistance, Krish Subramaniam for the modal scaling program, and current and former student helpers Brianna S. Mulligan, Amber Jean Zimmerman, Abdul Faheem Mohed, Adam Mitchell, Adam Delora, and Frances Chaves. The authors report no conflicts of interest.

Files

Accepted Version: nihms-990106.pdf (1.7 MB)
md5:f2e21c759a57e9dfb6485853019568ea

Additional details

Created: August 21, 2023
Modified: October 26, 2023