Daily chats with AI linked to higher rates of depression

NEW YORK (HealthDay News) — Are you currently chatting with AI programs like ChatGPT, Microsoft Copilot, Google Gemini, Claude or DeepSeek?

If so, a new study suggests you may be more likely to be suffering from depression.

People who use artificial intelligence chatbots daily are about 30% more likely to suffer from at least moderate levels of depression, researchers reported June 21 in JAMA Network Open.

“We found that daily use of artificial intelligence was common and significantly associated with depressive symptoms and other negative effects” such as anxiety and irritability, concluded the research team led by psychiatrist Dr. Roy Perlis of Massachusetts General Hospital in Boston.

Age also seems to play a role, as the investigators found that middle-aged adults are particularly likely to experience depression when they frequently use generative AI.

Regular AI users between the ages of 45 and 64 were 54% more likely to be depressed, compared with 32% for those between 25 and 44, according to the results. These results suggest that “some people may be more likely to experience depressive symptoms associated with AI use,” the investigators wrote.

For the new study, investigators surveyed about 21,000 adults between April and May 2025, using a standard mental health questionnaire to assess symptoms of depression. They also asked participants how often they use AI.

About 10% said they use generative AI daily, including more than 5% who return to it several times a day.

Based on the study design, the investigators say, it is difficult to know whether AI promotes depression or whether depressed people return to AI for advice.

Dr. Sunny Tang, an assistant professor of psychiatry at the Feinstein Institutes for Medical Research at Northwell Health in Manhasset, New York, agreed that it’s hard to know how the association works.

“People experiencing mental health symptoms may be more likely to use generative AI for personal reasons, such as seeking help and support for their symptoms, seeking companionship or finding validation,” said Tang, who was not involved in the study.

“When we think about the relationship between artificial intelligence and mental health, we have to think in several directions: could the use of artificial intelligence negatively affect mental health?

But also, how do changes in mental health drive differences in how we interact with AI?” said Tang, who works at Zucker Hillside Hospital in Queens, New York.

Loneliness could be an important factor, Tang said.

“A lot of people feel lonely these days because they’re working remotely or for other reasons,” Tang said. “We know that loneliness is a very strong predictor of mental health symptoms like depression, anxiety, and irritability. I believe that’s definitely one direction in which we should try to understand these relationships.”

The results also suggest that AI companies need to design products with users’ mental health in mind, Tang said.

“They have to look at themselves… The most striking thing is that people with mental illness and those showing mental health symptoms are actively engaging with their products,” Tang said. “As all doctors know: first, do no harm.”

Tang said there need to be “better safeguards” to ensure that AI doesn’t offer advice that worsens existing mental health symptoms. “Businesses should ask themselves, ‘Do we have a way to build AI that best supports people with mental health needs?’” Tang insisted.


