How OpenAI and its competitors are tackling the AI mental health problem

As chatbots like ChatGPT and Character.AI face scrutiny, companies and lawmakers are pushing for stronger mental health protections and age restrictions. Thuyen Nge / Unsplash

Psychosis, mania and depression are not new problems, but experts fear AI chatbots can make them worse. With data suggesting that a significant share of chatbot users show signs of mental distress, companies like OpenAI, Anthropic and Character.AI are starting to take steps to reduce the risk.

This week, OpenAI released data showing that 0.07 percent of ChatGPT’s 800 million weekly users exhibit symptoms of mental health emergencies related to psychosis or mania. While the company describes these cases as “rare,” that percentage still translates to hundreds of thousands of people.

In addition, about 0.15 percent of users, or roughly 1.2 million people, express suicidal thoughts each week, and another 1.2 million appear to form emotional attachments to the chatbot, according to OpenAI data.

Is AI exacerbating today’s mental health problems, or is it simply revealing ones that were difficult to measure? Research estimates that between 15 and 100 people out of 100,000 develop psychosis every year, a range that underlines how difficult the condition is to measure. Meanwhile, recent data from the Pew Research Center shows that about 5 percent of adults experience suicidal thoughts, a much higher figure than previously reported.

OpenAI’s findings may carry weight because chatbots can lower barriers to mental health disclosure, such as perceived shame, cost, discrimination and limited access to care. A recent survey of 1,000 U.S. adults found that one in three AI users has shared secrets or personal information with their chatbot.

However, chatbots are no substitute for care from licensed mental health practitioners. “If you’re already going through psychosis and delusions, the answers you get from an AI chatbot can deepen the psychosis or paranoia,” Jeffrey Ditzell, a New York-based psychiatrist, told Observer. “AI is a closed system, so it invites disconnection from other people, and we don’t do well in isolation.”

“I don’t think the machine understands anything about what’s going on in my head. It’s imitating a friendly expert, which seems appropriate. But it’s not,” Dhar, an AI researcher at New York University’s Stern School of Business, told Observer.

“There has to be some kind of responsibility that these companies have, because they are going into spaces that can be very dangerous to a large number of people and the general public,” said Dhar.

What AI companies are doing about the problem

The companies behind the popular chatbots are scrambling to implement preventive and protective measures.

OpenAI’s latest model, GPT-5, shows improvement in handling distressing conversations compared with previous versions. A small third-party study confirmed that GPT-5 shows a marked, though still incomplete, improvement over its predecessor. The company has also expanded its crisis hotline referrals and added gentle reminders to take breaks during long sessions.

In August, Anthropic announced that its Claude Opus 4 and 4.1 models can now end conversations that become “harmful or abusive.” However, users can still work around the feature by starting a new conversation or editing previous messages “to create new branches of ended conversations,” the company noted.

After a series of wrongful death and negligence lawsuits, Character.AI announced this week that it will bar minors from open-ended chats with its AI characters. Users under the age of 18 now face a two-hour limit on “open conversations” with the platform’s AI characters, and the full ban will take effect on November 25.

Meta AI recently tightened internal guidelines that had previously allowed its chatbot to generate romantic or sensual roleplay content, even for children.

Meanwhile, xAI’s Grok and Google’s Gemini continue to draw criticism for their sycophantic behavior. Users say Grok prioritizes agreement over accuracy, leading to problematic exchanges. Gemini drew controversy after the disappearance of Jon Ganz, a Virginia man who went missing in Missouri on April 5 following what friends described as an excessive reliance on the chatbot. (Ganz has not been found.)

Regulators and activists are also pushing for legal protections. On Oct. 28, Senators Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) introduced the Guidelines for User Age-Verification and Responsible Dialogue (GUARD) Act, which would require AI companies to verify users’ ages and would restrict chatbots that cultivate romantic or emotional attachments with minors.


