Alzheimer’s disease may afflict more than 6 million people in the United States, but according to the Alzheimer’s Association, up to half of those living with the disease have not been diagnosed. Early diagnosis can lead to better health care options and improved quality of life for those who have the disease, which makes quick detection of Alzheimer’s critical.

Now, Rhoda Au ’82 has created a promising method for determining whether a person with mild cognitive impairment is likely to progress to more severe dementia from Alzheimer’s, using just the sound of their voice. The discovery could help patients and families deal with the devastating effects of Alzheimer’s, and could also assist clinicians in identifying the best candidates for new drug therapies being developed to curb the effects of the disease.
Au is a professor of anatomy and neurobiology at the Boston University Schools of Medicine & Public Health, and a principal investigator on the Framingham Heart Study team that conducted the research. The findings were published in June in the medical journal Alzheimer’s & Dementia.
Au and her colleagues at Boston University, including Ioannis Paschalidis, a professor of engineering who led the data science side of the study, built an artificial intelligence algorithm that examined speech recordings of study participants who had exhibited some cognitive issues. The algorithm predicted, with 78.5 percent accuracy, whether a particular person would progress from milder cognitive problems to severe dementia within the following six years.
The research team trained the algorithm to examine the content and syntax of speech using a portion of the recordings of study participants. They then used the AI tool to analyze the speech of a separate group of 166 participants. “Speaking is a very cognitively complex task: when we speak, we are always emitting our cognitive capabilities,” Au says. “We actually do this in a common sense way all the time, interacting with friends or family members.”
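The study’s actual pipeline is not described in detail here, but the train-then-test design it uses is a standard one. As a purely illustrative sketch (the feature, data, and threshold classifier below are all hypothetical stand-ins, not the researchers’ method), one could extract a simple lexical measure from transcripts, fit a cutoff on a training split, and then measure accuracy on held-out participants:

```python
# Hypothetical sketch of a train/test design like the one described above.
# The "richness" feature and the tiny datasets are invented for illustration.

def features(transcript: str) -> float:
    """A toy lexical-richness score: unique words / total words."""
    words = transcript.lower().split()
    return len(set(words)) / len(words) if words else 0.0

def fit_threshold(train):
    """Pick the cutoff that best separates the two labels on training data.
    Prediction rule: richness below the cutoff -> predict progression (1)."""
    best_cut, best_acc = 0.0, 0.0
    for cut, _ in sorted((features(t), y) for t, y in train):
        acc = sum((features(t) < cut) == y for t, y in train) / len(train)
        if acc > best_acc:
            best_cut, best_acc = cut, acc
    return best_cut

def accuracy(test, cut):
    """Fraction of held-out transcripts classified correctly."""
    return sum((features(t) < cut) == y for t, y in test) / len(test)

# Invented data: label 1 = later progressed to dementia, 0 = did not.
train = [
    ("the the the cat cat sat sat on mat", 1),
    ("yesterday I walked along the river and watched the herons fishing", 0),
    ("I I went went to to the the store store", 1),
    ("the committee debated the proposal before reaching a clear consensus", 0),
]
test = [
    ("dog dog dog ran ran ran fast fast", 1),
    ("she described the painting with remarkable precision and warmth", 0),
]

cut = fit_threshold(train)
print(accuracy(test, cut))
```

The key design point this mirrors is that the threshold is chosen only from the training split, so the accuracy reported on the held-out split is an honest estimate, analogous to evaluating the AI tool on the separate group of 166 participants.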
What makes the results of the study particularly powerful is the gold-standard nature of the data used. After analyzing early recordings of patients with the algorithmic tool, the researchers checked the algorithm’s predictions against the later cognitive conditions of the participants, and were thus able to verify whether the algorithm had assessed an individual correctly.
The study was possible in large part due to Au’s early intuition. She had joined the Framingham Heart Study faculty in 1990, and in 2005 persuaded those managing the study to begin to record audio of interviews with the participants.
“One of the things that I’ve always been very concerned about is that the tools that we have for cognitive assessments are not sufficiently sensitive,” Au explains. For instance, Au noticed that during cognitive tests of study participants—a regular part of the study’s regimen—verbal responses to questions varied widely, but if a response was incorrect it was simply noted as such. This binary data entry, correct or not, left out a lot of information and nuance that Au was noticing in the interviews. “I was an early adopter of big data,” Au says. “I was fortunate enough to be collecting these audio recordings while I waited for the digital voice processing and AI capabilities to develop.”
As a result of the interview recordings, by the time Au and her colleagues began their study, they had a trove of patient audio going back almost two decades.
Au’s ultimate goal is to use new AI combined with the ease and ubiquity of smartphones to create monitors and tools that can improve brain health over the course of a lifetime, what she calls the precision brain health initiative. “We can change the trajectory of brain health altogether,” says Au. “You want people to die with the healthiest brain possible. That’s our goal.”