Development of an app to diagnose ALS
Thought leader: Dr. Jordan Green, Director of the Speech and Feeding Disorders Laboratory, MGH Institute of Health Professions


In this interview we talk to the MGH Institute's Dr. Jordan Green about his recent research partnership with Modality.AI, which examined whether an app could be used to effectively diagnose speech loss due to ALS.
Could you please introduce yourself and tell us what inspired your research on Amyotrophic Lateral Sclerosis (ALS)?
I am Chief Scientific Advisor, Professor of Rehabilitation Sciences, and Director of the Speech and Feeding Disorders Laboratory at the MGH Institute of Health Professions in Boston, Massachusetts. I am a certified speech-language pathologist and an enthusiastic researcher who studies speech and swallowing disorders across the lifespan.
As I studied the development of motor control for speech in children and developed computer-based technologies to quantify speech, I began interacting with physicians who run ALS clinics. They expressed a need for similar technology to better measure speech and swallowing in adults with ALS. They had the right technologies and techniques to measure limb movements and walking, but had difficulty measuring and assessing the speech system because the muscles are so small and relatively inaccessible and the speech movements are so rapid and minute. This type of measurement traditionally required significant expertise, and clinicians lacked objective measures. From then on, I began developing computer-based assessment tools specifically for ALS.
Photo credit: Kateryna Kon/Shutterstock.com
Currently, it can take up to 18 months to be diagnosed with ALS, and when this occurs, drug therapies are no longer as effective due to the loss of motor neurons. Why is it so important to be able to detect ALS earlier in patients?
When it comes to a disease like ALS, early diagnosis is essential. Only 15 percent of people who develop ALS have a genetic marker that we can identify. Therefore, it is critical that clinicians have objective ways to assess the condition as early and accurately as possible. Since a quarter of ALS patients have a speech disorder as their first symptom, monitoring for subtle changes could serve as an early warning system.
As ALS progresses, motor neurons responsible for speech, swallowing, breathing and walking can deteriorate rapidly, but if the disease can be detected in its early stages while the motor neurons are still intact, the benefits of interventions are likely to be maximized. Technologies like these can also detect changes in patients more precisely, ultimately enabling better monitoring of disease progression.
You are currently involved in a study to test the effectiveness of a digital health app for ALS. Can you tell us more about this study and its goals?
The National Institutes of Health (NIH) awarded my team a grant in collaboration with app developer Modality.AI to determine whether speech data collected by an app is as effective or more effective than the observations of clinical experts who assess and treat speech and swallowing problems due to ALS.
The data collected by the app is compared with results obtained from state-of-the-art laboratory speech measurement techniques, which are expensive and complicated to use. When the app's results match those of clinicians and their state-of-the-art equipment, we know we have a valid approach.
Image credit: Modality.AI
The app itself features a virtual agent, Tina. How is this virtual agent able to obtain voice data?
Using the application is as easy as clicking a link. The patient receives an email or text message indicating that it is time to create a recording. Clicking on the link activates the camera and microphone, and Tina, the virtual AI agent, begins giving instructions. The patient is then asked to do things such as count numbers, repeat sentences, and read a paragraph. Meanwhile, the app collects data to measure variables from the video and audio signals, such as the speed of lip and jaw movements, speech rate, pitch variation, and pause patterns.
Tina decodes information from speech acoustics and speech movements that is automatically extracted from full-face video recordings obtained during the assessment. Computer vision technologies, such as facial tracking, provide a non-invasive way to accurately record and calculate features from large data sets of facial movements during speech.
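Modality.AI has not published the details of its measures, but one of the variables mentioned above, pause patterns, can be illustrated with a simple sketch: scan a per-frame energy envelope of the audio and flag stretches where energy stays below a threshold for long enough to count as a pause. The function name, frame rate, threshold, and minimum pause duration below are illustrative assumptions, not the app's actual implementation.

```python
# Illustrative sketch only: pause detection from a per-frame energy envelope.
# NOT Modality.AI's implementation; all parameters here are assumptions.

def detect_pauses(energy, frame_rate=100, threshold=0.05, min_pause_s=0.25):
    """Return (start_s, end_s) spans where energy stays below `threshold`
    for at least `min_pause_s` seconds. `energy` is one value per frame."""
    min_frames = int(min_pause_s * frame_rate)
    pauses, run_start = [], None
    for i, e in enumerate(energy):
        if e < threshold:
            if run_start is None:
                run_start = i          # a quiet run begins
        else:
            if run_start is not None and i - run_start >= min_frames:
                pauses.append((run_start / frame_rate, i / frame_rate))
            run_start = None           # speech resumed
    if run_start is not None and len(energy) - run_start >= min_frames:
        pauses.append((run_start / frame_rate, len(energy) / frame_rate))
    return pauses

# Toy envelope at 100 frames/s: 1 s of speech, 0.5 s pause, 1 s of speech.
envelope = [0.4] * 100 + [0.01] * 50 + [0.4] * 100
print(detect_pauses(envelope))  # one pause from 1.0 s to 1.5 s
```

In a real system the envelope would come from short-time energy of the recorded audio, and summary statistics over the detected pauses (count, total duration, pause-to-speech ratio) would serve as the tracked features.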
What information can this health app provide patients? What are the benefits for patients of having all this information at their fingertips?
Speech changes are common in ALS, but the rate of progression varies from person to person. Patients report that the loss of the ability to speak is one of the worst effects of the disease. The app allows patients to document their speech history remotely. Providers can use this information to help patients and their families make informed decisions throughout the course of the disease.
As speech therapists, we want to optimize communication for as long as possible, and teaching patients early to use alternative means of communication is more effective than waiting until they have lost the ability to speak. Additionally, early confirmation of a diagnosis gives patients sufficient time to begin message and voice banking so that their own voice can be used in a text-to-speech (TTS) or speech-generating device (SGD). There are further benefits for patients, including reduced costs and eliminating the need to travel to clinics for a speech assessment.
Finally, the app generally requires only a few minutes of patient engagement per week, saving the time, cost, and energy of a clinical exam, as well as the delays associated with appointment coordination and travel to a healthcare facility. Lack of early detection and a lack of objective measures are two problems that have hindered treatment progress, and early diagnosis is crucial for a rapidly progressing disease.
In addition to benefits for patients, what benefits could it provide for healthcare providers?
The app will allow physicians to remotely access their patients' data and will itself track speech progression, allowing the provider to manage and monitor speech without the need for frequent in-person visits. This level of accessibility allows physicians to monitor patients more regularly, draw more accurate conclusions about treatment, and determine the best possible treatment plan. It simplifies the entire process and reduces the burden on patients and providers while reducing resource consumption for clinical services. The app's increased precision and efficiency will also be particularly attractive to clinical scientists and companies that use speech patterns as outcome measures in ALS drug trials.
In this study, you partnered with the technology company Modality.AI. How important are these types of collaborations in bringing new scientific ideas and technologies into the world?
I jumped at the opportunity to work with Modality.AI. The team members have a unique and extensive history in developing AI voice applications and a commercial interest in implementing this technology into mainstream healthcare and clinical trials. New technologies are particularly at risk of failure if they are not supported by a commercial company. Therefore, this relationship was critical to our overall objectives for the study.
I expect these types of health technology collaborations to become increasingly popular and have an ever-increasing impact on studies like this.
Photo credit: elenabsl/Shutterstock.com
Artificial intelligence (AI) has experienced enormous growth in popularity in recent years. Why is this and do you think AI will continue to become an integral part of healthcare?
AI plays a very important role in identifying conditions that are difficult for our human minds to grasp, as most health problems are multi-dimensional and very complicated, often affecting multiple body systems and producing a variety of symptoms that change over time.
Machine learning is a perfect fit for diagnosing and monitoring certain health conditions because there is so much data to absorb. These systems can process the data and detect patterns that human eyes and ears cannot discern with the same degree of accuracy.
Using AI and machine learning in this way will also be challenging. For these models to be accurate and work as intended, they must be trained, and acquiring the necessary training data will be a major task. For example, training a machine to make accurate assessments may require hundreds or thousands of examples of a particular condition for the algorithm to train on and "learn" from. This data must be collected and then curated very carefully, and the lack of such data is proving to be a bottleneck.
Although AI has proven invaluable in the medical field, it will not replace clinicians. Human practitioners provide unparalleled personalized care, decision making and comprehensive patient support and cannot be replaced.
What's next for you and your studies?
Some patient representatives are currently testing the app and sharing it with patients. Under the structure of the grant we received from the NIH, we will continue to work on the app to meet established benchmarks over the next three years to continue the grant cycle: Phase I lasts one year and Phase II lasts two years.
About Dr. Jordan Green
Dr. Green, who has worked at the MGH Institute since 2013, is a speech pathologist who studies biological aspects of speech production. He teaches graduate courses on speech physiology and the neural basis of speech, language, and hearing. As Chief Scientific Advisor in the IHP's Research Department, he works with the Associate Provost for Research on recruiting, strategic planning, and a variety of special projects. He is also director of the Speech and Feeding Disorders Laboratory (SFDL) at the institute. He was named the first Matina Souretis Horner Professor of Rehabilitation Sciences. His research focuses on speech production disorders, the development of oromotor skills for early speech and feeding, and the quantification of speech motor performance. His research has been published in national and international journals including Child Development, Journal of Neurophysiology, Journal of Speech and Hearing Research, and Journal of the Acoustical Society of America. He has served on several grant review panels at the National Institutes of Health. In 2012 he was named a Fellow of the American Speech-Language-Hearing Association, and in 2015 he received the Willard R. Zemlin Award in Speech Science.
His work has been funded by the National Institutes of Health (NIH) since 2000. A productive contributor to major journals, he has over 100 peer-reviewed publications and has presented his work nationally and internationally. He advises several IHP doctoral students, has supervised ten Ph.D. dissertations and eleven postdoctoral researchers, and serves as an editorial consultant for numerous journals.


