Lively and in-depth discussions of city news, politics, science, entertainment, the arts, and more.
Hosted by Larry Mantle
Airs Weekdays 10 a.m.-12 p.m.

How scientists are using facial and verbal expressions to teach AI to identify mental illness

Advances in technology have allowed scientists to teach AI to identify mental illness using facial and verbal expressions.
RobinOlimb/Getty Images


As scientists and researchers around the world continue to explore the boundaries of artificial intelligence and how humans can harness its power, one of the most promising applications of the technology appears to be in health care.

Already, we know of its potential to be used as a tool for diagnosis, treatment and other kinds of personalized patient care. But new research is uncovering potential uses for AI in helping mental health professionals better diagnose conditions like depression and PTSD.

Computer scientist Louis-Philippe Morency of Pittsburgh’s Carnegie Mellon University has been working with research teams at his school as well as USC to develop a dictionary of biomarkers that AI can use to identify certain conditions. These include facial expressions, the way a person enunciates vowels, or even how someone’s eyes or eyebrows move while they’re speaking. The idea is to help mental health professionals better identify certain conditions, since clinicians often rely on patients’ own self-assessments and have few tools for an objective diagnosis. Morency says this kind of AI is not designed to replace clinicians or eliminate the need for patients to tell doctors how they feel and why, but rather to create data points that can be measured, similar to vital signs.

What is the potential for this kind of technology to impact the field of mental health? Are there privacy or other concerns that might arise from implementing it? What about possible unforeseen consequences, such as the AI misinterpreting a particular behavioral biomarker, leading to a misdiagnosis?

Guests:

Louis-Philippe Morency, associate professor of computer science at Carnegie Mellon University in Pittsburgh; he has conducted research on behavioral biomarkers and is now collaborating with researchers to implement the technology

Josh Magee, assistant psychology professor at Miami University; published a study last year examining mental-health apps, which aren’t regulated

Justin T. Baker, assistant professor of psychiatry and scientific director of the Institute for Technology in Psychiatry at McLean Hospital; he’s working with Dr. Morency to implement biomarkers into mental health care