The question for me is whether their issue was
preexisting. If they'd become dependent on ChatGPT to try to self-heal a looming mental health crisis, that's one thing. ChatGPT
isn't a licensed therapist, so it can't diagnose. It isn't even sentient. Even so, people will train it to feed back what they want to hear, so it can only mitigate so much when the bot is responding the way the user has trained it to respond.
However, if ChatGPT
caused the mental health crisis in some crucial way that goes beyond its purpose, rather than merely
feeding into queries and responses that fueled a preexisting crisis...
that would be an interesting case to follow.
Quote:
Originally Posted by Ammi
In a blogpost, OpenAI admitted that “parts of the model’s safety training may degrade” in long conversations. Adam and ChatGPT had exchanged as many as 650 messages a day, the court filing claims.
That's a serious addiction.