• PiraHxCx@lemmy.ml

No. I can’t form an opinion without the full chat content, but you all seem to be painting it like “one day a happy little boy enters the internet and is gaslit into killing himself by a computer,” while the article says he had been struggling with suicidal thoughts for years, had been changing his medication on his own, and spent most of his time on forums where people discussed suicide. On the chatbot, the boy ignored disclaimers, terms, and over a hundred warnings about suicide until he pretended it was all fictional to get the bot to play along.

The boy may have been a victim of several things, but not of a chatbot. How many disclaimers, terms, and warnings does one have to put on a product, and does it even matter if the other party is determined to ignore them? His self-medication might have been a big factor in his mental state, yet no one seems to want to blame the pharmaceutical company, because in this case you all apparently agree he ignored the terms and warnings. Nor does anyone blame the rope manufacturer for supplying the tool, because you agree that was a misuse of their product.

Judging by how quickly the parents looked for a scapegoat instead of taking a hard look at themselves, even knowing everything that was going on, and ignoring that a minor needs parental supervision to use the chatbot, my bet is on clueless, shitty parents.