Individuals with speech impairments can now use a computer cursor to spell out messages, thanks to brain–computer interface technology. However, letter-by-letter selection driven by recorded brain signals can be slow, and decoding whole words directly from speech-related cortical areas may be both faster and more natural. The authors now have a better understanding of how the speech-controlling region of the sensorimotor cortex orchestrates the rapid articulatory movements of the vocal tract. These advances in neurobiology and machine learning have shown that speech can be decoded from brain activity in individuals without speech difficulties.
As shown in this work, previously reported brain–computer interface systems require daily calibration of their decoding models before use, which can increase the variability of decoder performance across days and hinder long-term adoption for real-world use. Because electrocorticographic recordings are highly stable and allow large amounts of data to accumulate, the authors trained their decoding algorithms on cortical activity captured by implanted electrodes over months of recording. As a result, high-density electrocorticography may be an ideal option for long-term neuroprosthetic applications such as a direct speech prosthesis (Moses et al., 2021).
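To make the contrast between daily calibration and long-term data accumulation concrete, the following is a minimal, purely illustrative Python sketch, not the decoding pipeline of Moses et al. It simulates hypothetical trial-level neural feature vectors with word labels across many recording sessions and compares a decoder calibrated only on a small same-day block with one trained on data pooled from all earlier sessions. The feature dimensions, session drift model, and nearest-template classifier are all assumptions made for illustration.

```python
# Illustrative sketch only: contrasts per-day calibration with training on
# data pooled across many sessions. Features, labels, and the classifier
# are hypothetical stand-ins, not the decoder reported by Moses et al.
import numpy as np

rng = np.random.default_rng(0)
n_sessions, trials_per_session, n_features, n_words = 30, 40, 128, 50

# Simulated neural feature vectors: a stable word-specific component plus
# session-to-session drift and trial noise (assumed for illustration).
word_templates = rng.normal(size=(n_words, n_features))
sessions = []
for s in range(n_sessions):
    drift = rng.normal(scale=0.5, size=n_features)
    labels = rng.integers(0, n_words, size=trials_per_session)
    feats = (word_templates[labels] + drift
             + rng.normal(scale=1.0, size=(trials_per_session, n_features)))
    sessions.append((feats, labels))

def train_nearest_template(feats, labels):
    # A minimal "decoder": the mean feature vector per word.
    templates = np.zeros((n_words, n_features))
    for w in range(n_words):
        mask = labels == w
        if mask.any():
            templates[w] = feats[mask].mean(axis=0)
    return templates

def accuracy(templates, feats, labels):
    # Classify each trial by its nearest template (Euclidean distance).
    d = ((feats[:, None, :] - templates[None, :, :]) ** 2).sum(axis=2)
    return (d.argmin(axis=1) == labels).mean()

test_feats, test_labels = sessions[-1]

# (a) Daily calibration: train only on a small block from the test day.
daily = train_nearest_template(test_feats[:10], test_labels[:10])

# (b) Accumulated training: pool all earlier sessions.
pool_feats = np.vstack([f for f, _ in sessions[:-1]])
pool_labels = np.concatenate([l for _, l in sessions[:-1]])
pooled = train_nearest_template(pool_feats, pool_labels)

print("daily-calibration accuracy:", accuracy(daily, test_feats[10:], test_labels[10:]))
print("pooled-training accuracy: ", accuracy(pooled, test_feats[10:], test_labels[10:]))
```

In this toy setting the pooled decoder averages over session-specific drift and therefore generalizes better than one calibrated on a handful of same-day trials, which mirrors, in a highly simplified way, the motivation for accumulating data over months rather than recalibrating daily.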
References
Moses, D. A., Metzger, S. L., Liu, J. R., Anumanchipalli, G. K., Makin, J. G., Sun, P. F., Chartier, J., Dougherty, M. E., Liu, P. M., Abrams, G. M., et al. 2021. Neuroprosthesis for decoding speech in a paralyzed person with anarthria. New England Journal of Medicine, 385, 217-227.