Study Shows AI Can Predict Speech Success After Cochlear Implants




According to a major international study published in JAMA Otolaryngology–Head & Neck Surgery, an AI model using deep transfer learning, an advanced form of machine learning, predicted spoken language outcomes one to three years after cochlear implantation (an implanted electronic hearing device) with 92% accuracy.

Although cochlear implantation is the only effective treatment for improving hearing and enabling spoken language in children with severe to profound hearing loss, spoken language development after early implantation is more variable than in children with normal hearing. If children who are likely to have greater difficulty with spoken language can be identified before implantation, intensified therapy can be offered earlier to improve their language outcomes.

Researchers trained AI models to predict outcomes based on pre-implantation brain MRI scans of 278 children in Hong Kong, Australia and the US who spoke three different languages (English, Spanish and Cantonese). The three centers in the study also used different brain scanning protocols and different outcome measures.

Such complex, heterogeneous data sets are problematic for traditional machine learning, but the deep learning model handled them well, outperforming traditional machine learning models on all outcome measures.
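For readers unfamiliar with the term, deep transfer learning generally means starting from a neural network pretrained on a large, unrelated dataset and fine-tuning it on a smaller task, which helps when the target data are limited and heterogeneous, as here. The sketch below illustrates that general idea in PyTorch; the backbone, input format (2D slices), labels, and training loop are placeholders and do not reflect the study's actual pipeline.

```python
# Illustrative transfer-learning sketch only; NOT the study's code.
# Hypothetical setup: preprocessed 2D MRI slices as 3-channel tensors and a
# binary "better vs. poorer language outcome" label.
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained backbone and reuse its learned features.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained layers so only the new prediction head is trained.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the classification head with a 2-class output (hypothetical labels).
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Fine-tune on a small data set (random placeholder tensors stand in for
# real MRI slices and outcome labels).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

backbone.train()
for epoch in range(3):
    optimizer.zero_grad()
    loss = criterion(backbone(images), labels)
    loss.backward()
    optimizer.step()
```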

"Our results demonstrate the feasibility of a single AI model as a robust prognostic tool for the language outcomes of children served by cochlear implant programs worldwide. This is an exciting advance for the field," said lead author Nancy M. Young, medical director of audiology and cochlear implant programs at Ann & Robert H. Lurie Children's Hospital in Chicago - the U.S. center of the study.

“This AI-powered tool enables a ‘predict-to-prescribe’ approach to optimizing language development by identifying which child might benefit from more intensive therapy.”

Nancy M. Young, Ann & Robert H. Lurie Children's Hospital of Chicago

This work was supported by the Research Grants Council of Hong Kong (grant GRF14605119) and the National Institutes of Health (grants R21DC016069 and R01DC019387).

Dr. Young holds the Lillian S. Wells Professorship in Pediatric Otolaryngology at Lurie Children’s. She is also a professor of otolaryngology at the Feinberg School of Medicine at Northwestern University and a professor and fellow at the Knowles Hearing Center, Department of Communication Sciences and Disorders at the Northwestern University School of Communication.

Lurie Children's pediatric cochlear implant program is one of the largest and most experienced in the world. Since the program began in 1991, it has performed more than 2,000 cochlear implant procedures.



Journal reference:

Wang, Y., et al. (2025). Forecasting Spoken Language Development in Children With Cochlear Implants Using Preimplant Magnetic Resonance Imaging. JAMA Otolaryngology–Head & Neck Surgery. doi:10.1001/jamaoto.2025.4694. https://jamanetwork.com/journals/jamaotolaryngology/fullarticle/2842669