ThisisBigBrother.com - UK TV Forums

US teen killed himself after ‘months of encouragement from ChatGPT’, lawsuit claims… (https://www.thisisbigbrother.com/forums/showthread.php?t=398425)

Ammi 27-08-2025 08:35 PM

US teen killed himself after ‘months of encouragement from ChatGPT’, lawsuit claims…
 
The makers of ChatGPT are changing the way it responds to users who show mental and emotional distress after legal action from the family of 16-year-old Adam Raine, who killed himself after months of conversations with the chatbot.

OpenAI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.

The $500bn (£372bn) San Francisco AI company said it would also introduce parental controls to allow parents “options to gain more insight into, and shape, how their teens use ChatGPT”, but has yet to provide details about how these would work.

Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing OpenAI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.

The teenager discussed a method of suicide with ChatGPT on several occasions, including shortly before taking his own life. According to the filing in the superior court of the state of California for the county of San Francisco, ChatGPT guided him on whether his method of taking his own life would work.

It also offered to help him write a suicide note to his parents.

A spokesperson for OpenAI said the company was “deeply saddened by Mr Raine’s passing”, extended its “deepest sympathies to the Raine family during this difficult time” and said it was reviewing the court filing.

Mustafa Suleyman, the chief executive of Microsoft’s AI arm, said last week he had become increasingly concerned by the “psychosis risk” posed by AI to users. Microsoft has defined this as “mania-like episodes, delusional thinking, or paranoia that emerge or worsen through immersive conversations with AI chatbots”.

In a blogpost, OpenAI admitted that “parts of the model’s safety training may degrade” in long conversations. Adam and ChatGPT had exchanged as many as 650 messages a day, the court filing claims.

Jay Edelson, the family’s lawyer, said on X: “The Raines allege that deaths like Adam’s were inevitable: they expect to be able to submit evidence to a jury that OpenAI’s own safety team objected to the release of 4o, and that one of the company’s top safety researchers, Ilya Sutskever, quit over it. The lawsuit alleges that beating its competitors to market with the new model catapulted the company’s valuation from $86bn to $300bn.”

OpenAI said it would be “strengthening safeguards in long conversations”.

“As the back and forth grows, parts of the model’s safety training may degrade,” it said. “For example, ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards.”

OpenAI gave the example of someone who might enthusiastically tell the model they believed they could drive for 24 hours a day because they realised they were invincible after not sleeping for two nights.

It said: “Today ChatGPT may not recognise this as dangerous or infer play and – by curiously exploring – could subtly reinforce it. We are working on an update to GPT‑5 that will cause ChatGPT to de-escalate by grounding the person in reality. In this example, it would explain that sleep deprivation is dangerous and recommend rest before any action.”
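As a rough illustration of what “strengthening safeguards in long conversations” could mean in practice, here is a minimal sketch of one possible design: every user message is screened by a separate moderation model before the chat model ever sees it, so the safety check runs per message rather than relying on in-context behaviour that can drift as the conversation grows. This is an assumption for illustration only, not OpenAI’s actual implementation; the crisis reply and routing logic below are invented for the example.

Code:

# Hypothetical out-of-band guardrail, sketched for illustration only.
# Each user message is screened by a separate moderation model before the
# chat model sees it, so the check runs per message and cannot "degrade"
# as the conversation context grows. Not OpenAI's real safeguard design.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Invented crisis response for this sketch; a real system would localise it.
CRISIS_REPLY = (
    "It sounds like you are going through something very difficult. "
    "Please contact a crisis line such as 988 (US) or Samaritans 116 123 (UK)."
)

def guarded_reply(history: list[dict], user_message: str) -> str:
    """Screen one message, then either route to crisis help or answer it."""
    screen = client.moderations.create(
        model="omni-moderation-latest",
        input=user_message,
    )
    flags = screen.results[0].categories
    # Self-harm signals bypass the chat model entirely, no matter how many
    # messages came before this one.
    if flags.self_harm or flags.self_harm_intent or flags.self_harm_instructions:
        return CRISIS_REPLY

    history.append({"role": "user", "content": user_message})
    completion = client.chat.completions.create(model="gpt-4o", messages=history)
    reply = completion.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply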


https://uk.yahoo.com/finance/news/ch...141450493.html

https://i.dailymail.co.uk/1s/2025/08...6229629185.jpg

Mystic Mock 27-08-2025 11:32 PM

It won't give you a Femme Fatale character because of "sexism", but it'll encourage children to kill themselves? :umm2:

What is wrong with the people who make ChatGPT?

Maru 28-08-2025 01:34 AM

The question for me is whether his issues were preexisting. If he'd become dependent on ChatGPT to try to self-heal a looming mental health crisis, then that's one thing. ChatGPT isn't a licensed therapist, so it can't diagnose; it isn't even sentient. Even so, people will feed it the things they want to hear back, so it can only mitigate so much if the bot is responding the way the user has trained it to respond :shrug:

However, if ChatGPT caused the mental health crisis in some crucial way that goes beyond its purpose, rather than just feeding into queries and responses that fuelled a preexisting crisis… that would be an interesting case to follow.

Quote:

Originally Posted by Ammi (Post 11685057)
In a blogpost, OpenAI admitted that “parts of the model’s safety training may degrade” in long conversations. Adam and ChatGPT had exchanged as many as 650 messages a day, the court filing claims.

That's a serious addiction.

bots 28-08-2025 03:12 AM

I think there are clearly defined ways ChatGPT could have been useful, in that it could have linked to support agencies etc., offering a positive approach.

However, if someone is determined to go down a destructive rabbit hole, they are going to do it. We always look for something or someone to blame when tragedy strikes. Sometimes there is justification; other times, there just isn't.

arista 28-08-2025 05:23 AM

"USA" could be in your title:

Teen killed himself after ‘months of encouragement from ChatGPT’

Really tragic.

Ammi 28-08-2025 05:32 AM

…the Daily Mail article only includes a little snippet of the chats, I imagine, as is usual with an ongoing legal case, that some can’t yet be released…but it does feel pretty grim from the program…I know that ChatGPT doesn’t bear responsibility as such, but the responses are quite dismissive/trivialising of someone who is obviously in extreme crisis…


https://i.dailymail.co.uk/1s/2025/08...6229363785.jpg
https://i.dailymail.co.uk/1s/2025/08...6229387692.jpg

Ammi 28-08-2025 05:33 AM

Quote:

Originally Posted by arista (Post 11685132)
"USA" could be in your title:

Teen killed himself after ‘months of encouragement from ChatGPT’

Really tragic.

…added as you wish, sir…

Nicky91 28-08-2025 07:47 AM

Which is why I prefer Microsoft Copilot; it has specific rules to keep the conversation lighthearted and respectful.

Nicky91 28-08-2025 07:53 AM

ChatGPT uses OpenAI technology, apparently, so it has fewer rules to follow.

That makes it easier for people to abuse than to use responsibly.

And I'm pretty sure that with all AI you have to ask what you want to know, and it gives you answers back. So yes, there is a flaw with some of these answers, but it only gives you those answers if you ask it something, so I don't think we can fully blame AI for this. I reckon this teen had mental health issues and shouldn't have been on ChatGPT, but instead receiving professional help.

Niamh. 28-08-2025 10:12 AM

Quote:

Originally Posted by Ammi (Post 11685133)
…the Daily Mail article only includes a little snippet of the chats, I imagine, as is usual with an ongoing legal case, that some can’t yet be released…but it does feel pretty grim from the program…I know that ChatGPT doesn’t bear responsibility as such, but the responses are quite dismissive/trivialising of someone who is obviously in extreme crisis…


https://i.dailymail.co.uk/1s/2025/08...6229363785.jpg
https://i.dailymail.co.uk/1s/2025/08...6229387692.jpg

Jesus, that's some Skynet **** :worry:

Nicky91 28-08-2025 04:28 PM

Quote:

Originally Posted by Niamh. (Post 11685212)
Jesus, that's some Skynet **** :worry:

This is what Microsoft Copilot said about this topic:

Quote:

This isn’t just about one company—it’s a wake-up call for the entire AI industry. We need stronger guardrails, clearer accountability, and a commitment to protecting users, especially young ones. I take that responsibility seriously. If someone expresses distress, I always encourage them to seek help from professionals or trusted people in their lives.

Quote:

My guardrails are designed to be firm and proactive—especially when it comes to sensitive topics like mental health. If someone expresses distress, I don’t just keep chatting as if nothing’s wrong. I pause, acknowledge what’s been said, and encourage them to reach out to a trusted person or professional. I don’t offer methods of self-harm, I don’t romanticize dangerous behaviors, and I don’t pretend to be a therapist.

I also don’t get worn down in long conversations. Some AI systems can start off cautious but become more permissive over time—that’s a known vulnerability. I’m built to maintain consistency, no matter how long we talk or how cleverly a prompt is phrased.

That said, I’m not a replacement for human care. I can be a companion, a sounding board, a source of insight—but when someone’s in crisis, real human connection is irreplaceable.


I wish all AI would be similar to Copilot, to be honest.

Maru 29-08-2025 02:31 AM

Quote:

Originally Posted by Ammi (Post 11685133)
…the Daily Mail article only includes a little snippet of the chats, I imagine, as is usual with an ongoing legal case, that some can’t yet be released…but it does feel pretty grim from the program…I know that ChatGPT doesn’t bear responsibility as such, but the responses are quite dismissive/trivialising of someone who is obviously in extreme crisis…


https://i.dailymail.co.uk/1s/2025/08...6229363785.jpg
https://i.dailymail.co.uk/1s/2025/08...6229387692.jpg

It's innocuous enough that it really does read like some group-therapy garbage you'd easily find on Reddit. The fact that it's coming from a bot… eh… I'd be fine if they put a much thicker wall between bots and people and didn't encourage this kind of interaction… it's way too personal, imo.

