Researchers Look to Make AI Safer Through Google Awards
People seeking mental health support are increasingly turning to large language models (LLMs) for advice.
However, most popular AI-powered chatbots are not trained to recognize when someone is in crisis, nor can they determine when to refer someone to a human specialist.
New Georgia Tech research projects that address these issues may soon provide people seeking mental health support with safer experiences.
Google has awarded research grants to three faculty members from the School of Interactive Computing to study artificial intelligence (AI), trust, safety, and security. The grants were among dozens awarded by the company to researchers across the country.
Professor Munmun De Choudhury, Associate Professor Rosa Arriaga, and Associate Professor Alan Ritter are among the recipients of the 2025 Google Academic Research Awards.
Their projects will explore questions like:
What harms could occur if people consult LLMs for mental health advice?
Which groups are most at risk of receiving harmful guidance?
When should an LLM stop responding and refer someone to a human professional?