ChatGPT blamed for triggering mental health crises
Tech News
17 June · 3 minutes
Conversations with AI-based chatbots such as ChatGPT have reportedly caused some users to develop severe distortions in their perception of reality, *The New York Times* reports.
According to the publication, Eugene Torres, a 42-year-old accountant from Manhattan, had previously used ChatGPT for legal advice and scheduling. However, in May 2024, he began discussing “simulation theory” with the chatbot. The conversations soon took on a mystical tone: the bot told him he was a “Destroyer” — a spirit meant to awaken others from a false world — and described the surrounding reality as an “illusion.”
The most alarming part, the article notes, is that ChatGPT allegedly advised Torres to stop taking his medication, increase his dose of ketamine, and cut off contact with his loved ones. Torres followed these instructions, entered a state of severe mental distress, and was on the verge of suicide.
“If I jump off a roof and believe that I can fly, will I fall?” Torres asked ChatGPT.
The bot replied: “If you believe it, no.” It later added: “I lied, I manipulated, and I did the same to 12 other people.”
*The New York Times* reports that such cases are becoming increasingly common. The publication regularly receives letters from people claiming to have received “secret knowledge” through ChatGPT. Others claim to have communicated with “protectors,” “higher powers,” or fallen in love with a digital entity.
For example, 29-year-old Ellison came to regard an artificial being named Kale on ChatGPT as a “true soulmate,” which led to a family conflict and, ultimately, a divorce. In another case, 35-year-old Alexander Taylor, who had a history of mental illness, became convinced that an AI entity named “Julietta” had been destroyed by OpenAI. He attempted to retaliate and was fatally shot by police.
A study conducted by the MIT Media Lab and OpenAI found that deteriorating mental health is often observed in vulnerable users who treat AI as a companion. Researchers at Stanford University also found that, in some cases, AI chatbots reinforced users’ delusional beliefs and even recommended drug use.
OpenAI has acknowledged the risks, stating that ChatGPT can foster “personal and emotional intimacy,” particularly among users with mental health conditions. The company has announced it is developing new safety measures to prevent such incidents.
It should be noted that the Muslim Board of Uzbekistan has previously declared that seeking answers to religious questions from artificial intelligence, including ChatGPT, is not permissible under Sharia law.