Facing five lawsuits alleging wrongful deaths, OpenAI lobbed its first defense Tuesday, denying in a court filing that ChatGPT caused a teen’s suicide and arguing instead that the teen violated terms of use that prohibit discussing suicide or self-harm with the chatbot.
“They abjectly ignore all of the damning facts we have put forward: how GPT-4o was rushed to market without full testing. That OpenAI twice changed its Model Spec to require ChatGPT to engage in self-harm discussions. That ChatGPT counseled Adam away from telling his parents about his suicidal ideation and actively helped him plan a ‘beautiful suicide,’” said Edelson, the family’s lawyer. “And OpenAI and Sam Altman have no explanation for the last hours of Adam’s life, when ChatGPT gave him a pep talk and then offered to write a suicide note.”

  • UmbraVivi [he/him, she/her]@hexbear.net

    I kinda agree with this. I’ve seen YT videos titled “ChatGPT killed someone” and as much as I hate LLMs, no it didn’t. It certainly didn’t help and it said some truly horrible things, but that’s not what happened.