Published June 1, 2022 | Published + Submitted + Supplemental Material
Journal Article (Open Access)

Decoding grasp and speech signals from the cortical grasp circuit in a tetraplegic human

Abstract

The cortical grasp network encodes planning and execution of grasps and processes spoken and written aspects of language. High-level cortical areas within this network are attractive implant sites for brain-machine interfaces (BMIs). While a tetraplegic patient performed grasp motor imagery and vocalized speech, neural activity was recorded from the supramarginal gyrus (SMG), ventral premotor cortex (PMv), and somatosensory cortex (S1). In SMG and PMv, five imagined grasps were well represented by firing rates of neuronal populations during visual cue presentation. During motor imagery, these grasps were significantly decodable from all brain areas. During speech production, SMG encoded both spoken grasp types and the names of five colors. Whereas PMv neurons significantly modulated their activity during grasping, SMG's neural population broadly encoded features of both motor imagery and speech. Together, these results indicate that brain signals from high-level areas of the human cortex could be used for grasping and speech BMI applications.

Additional Information

© 2022 The Authors. Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Received 3 November 2021; revised 1 February 2022; accepted 8 March 2022; available online 31 March 2022.

Acknowledgments: We wish to thank L. Bashford, H. Jo, and I. Rosenthal for helpful discussions and data collection. We also thank our study participant, F.G., for his dedication to the study, which made this work possible. This research was supported by the NIH National Institute of Neurological Disorders and Stroke grant U01NS098975 (S.K.W., S.K., D.A.B., K.P., C.L., and R.A.A.) and by the T&C Chen Brain-Machine Interface Center (S.K.W., D.A.B., and R.A.A.).

Author contributions: S.K., S.K.W., and R.A.A. designed the study. S.K.W. and S.K. developed the experimental tasks. S.K.W., S.K., and D.A.B. analyzed the results. S.K.W., S.K., D.A.B., and R.A.A. interpreted the results and wrote the paper. K.P. coordinated regulatory requirements of clinical trials. C.L. and B.L. performed the surgery to implant the recording arrays. The authors declare no competing interests.

Data and code availability: All analyses were conducted in MATLAB using previously published methods and packages. MATLAB analysis scripts and preprocessed data are available on GitHub (Grasp and speech decoding: https://doi.org/10.5281/zenodo.6330179).

Attached Files

Published - 1-s2.0-S0896627322002458-main.pdf

Submitted - 2021.10.29.466528v1.full.pdf

Supplemental Material - 1-s2.0-S0896627322002458-mmc1.pdf

Files (6.8 MB)

md5:4de5a377bf42491483629e250c124ff4 (1.3 MB)
md5:20abf59ef955b46d71215798772ce094 (2.2 MB)
md5:b77d53539c3e1feba788e106e5b9498f (3.3 MB)

Additional details

Created: August 22, 2023
Modified: December 22, 2023