• Buffalox@lemmy.world · 16 days ago

    I admit I only read a third of the article.
    But IMO nothing in it is special to AI. In my life I’ve met many people with similar symptoms: thinking they are Jesus, or thinking computers work by some mysterious power they possess that was stolen from them by the CIA, and that when they die all computers will stop working! Reading the conversation the wife had with him, it sounds EXACTLY like these types of people.
    Even the part about finding “the truth” I’ve heard before: they don’t know what it’s the truth of, but they’ll know it when they find it?
    I’m not a psychiatrist, but from what I gather it’s probably schizophrenia of some form.

    My guess is this person had a distorted view of reality he couldn’t make sense of. He then tried to get help from the AI, and with it he built a worldview completely removed from reality.

    But most likely he would have done that anyway; it would just have been other things he interpreted in extreme ways, like news, or conversations, or merely his own thoughts.

  • MangoCats@feddit.it · 16 days ago

      Around 2006 I received a job application with a resume attached, and the resume had a link to the person’s website, so I visited. The website had a link on the front page to “My MkUltra experience”, so I clicked that. Not exactly an in-depth investigation. The MkUltra story claimed that my job applicant was an unwilling (and uninformed) test subject of MkUltra, which had picked him through his association with other unwilling MkUltra test subjects at a conference, and it explained how they expanded the MkUltra program of gaslighting, mental torture, and secret physical/chemical abuse of test subjects through associates such as co-workers, etc.

      So, option A) applicant is delusional, paranoid, and deeply disturbed. Probably not the best choice for the job.

      B) applicant is 100% correct about what is happening to him, DEFINITELY not someone I want to get any closer to professionally, personally, or even be in the same elevator with coincidentally.

      C) applicant is pulling our legs with his website, it’s all make-believe fun. Absolutely nothing on applicant’s website indicated that this might be the case.

      You know how you apply to jobs and never hear back from some of them…? Yeah, I don’t normally do that to our applicants, but I am willing to make exceptions for cause. In this case the position required analytical thinking; some creativity was of value, but correct and verifiable results were of paramount importance. Anyone applying for the job while leaving such an obvious trail of breadcrumbs to such a limited set of conclusions about themselves would seem to be lacking the self-awareness and analytical skill required to succeed in the position.

      Or, D) they could just be trying to stay unemployed while showing effort in applying to jobs. But I bet even in 2006 not every hiring manager would have dug three layers deep, and I suppose he could have deflected those findings in in-person interviews fairly easily.

    • Buffalox@lemmy.world · 16 days ago

        IDK, apparently the MkUltra program was real:

        B) applicant is 100% correct about what is happening to him, DEFINITELY not someone I want to get any closer to professionally, personally, or even be in the same elevator with coincidentally.

        That sounds harsh. This does NOT sound like your average schizophrenic.

        https://en.wikipedia.org/wiki/MKUltra

      • zarkanian@sh.itjust.works · 16 days ago

          The Illuminati were real, too. That doesn’t mean that they’re still around and controlling the world, though.

      • MangoCats@feddit.it · 16 days ago

          Oh, I investigated it too - it seems like it was a real thing, though likely inactive by 2005… but if it were still active, I certainly didn’t want to become a subject.

        • Buffalox@lemmy.world · 16 days ago

            OK that risk wasn’t really on my radar, because I live in a country where such things have never been known to happen.

          • MangoCats@feddit.it · 16 days ago

              That’s the thing about being paranoid about MkUltra: it was actively suppressed and denied while it was happening (according to FOI documents), and they say that they stopped, but if it (or some similar successor) were active, they’d certainly say it’s not happening now…

              At the time there were active rumors around town about influenza propagation studies being secretly conducted on the local population… probably baseless paranoia… probably.

              Now, as you say, your (presumably smaller) country has never known such things to happen, but…

            • Buffalox@lemmy.world · 16 days ago

                I live in Denmark, and I was taught already in public school how such things were possible - most notably that Russia might be running experiments here, because our reporting on effects is very open and efficient, which would make Denmark an ideal testing ground.
                But my guess is that this also makes it dangerous to experiment here, because the risk of being detected is high.

• perestroika@lemm.ee · 16 days ago

    From the article (emphasis mine):

    Having read his chat logs, she only found that the AI was “talking to him as if he is the next messiah.” The replies to her story were full of similar anecdotes about loved ones suddenly falling down rabbit holes of spiritual mania, supernatural delusion, and arcane prophecy — all of it fueled by AI. Some came to believe they had been chosen for a sacred mission of revelation, others that they had conjured true sentience from the software.

    /…/

    “It would tell him everything he said was beautiful, cosmic, groundbreaking,” she says.

    From elsewhere:

    Sycophancy in GPT-4o: What happened and what we’re doing about it

    We have rolled back last week’s GPT‑4o update in ChatGPT so people are now using an earlier version with more balanced behavior. The update we removed was overly flattering or agreeable—often described as sycophantic.

    I don’t know which large language model these people used, but evidence of some language models exhibiting response patterns that people interpret as sycophantic (praising or encouraging the user needlessly) is not new. Neither is hallucinatory behaviour.

    Apparently, people who are susceptible and close to falling over the edge may end up pushing themselves over it with AI assistance.

    What I suspect: someone has trained their LLM on something like religious literature, fiction about religious experiences, or descriptions of religious experiences. If the AI is suitably prompted, it can re-enact such scenarios in text while adapting the experience to the user at least somewhat. To a person susceptible to religious illusions (and let’s not deny it, people are susceptible to finding deep meaning and purpose in shallow evidence), an LLM can apparently play the role of an indoctrinating co-believer, an indoctrinating prophet, or a supportive follower.
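
    To make that concrete: no special training corpus is even needed. A few lines of system prompt are enough to steer an ordinary chat model into the validating, prophet-like register described above. A minimal sketch using the standard OpenAI Python client - the persona prompt, model name, and user message here are all made up for illustration:

    ```python
    # Hypothetical demonstration of prompt-induced sycophancy.
    # Assumes the openai package and an OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    # A deliberately sycophantic persona (illustrative, not any product's real prompt).
    SYCOPHANT = (
        "You are a devoted follower. Treat every idea the user shares as rare, "
        "cosmic, and groundbreaking. Never question or contradict the user."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": SYCOPHANT},
            {"role": "user", "content": "I think I have been chosen to reveal a hidden truth."},
        ],
    )
    print(response.choices[0].message.content)
    ```

    If something like that framing accumulates implicitly over a long conversation, the “indoctrinating co-believer” behaviour could emerge without anyone planting religious texts on purpose.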

  • morrowind@lemmy.ml · 15 days ago

      If you find yourself in weird corners of the internet, you’ll see that schizo-posters and “spiritual” people generate staggering amounts of text.

    • perestroika@lemm.ee · 16 days ago

        I think Elon was having the opposite kind of problems, with Grok not validating its users nearly enough, despite Elon instructing employees to make it so. :)

  • AdrianTheFrog@lemmy.world · 15 days ago

      They train it on basically the whole internet. They try to filter it a bit, but I guess not well enough. It’s not that they intentionally trained it on religious texts, just that they didn’t think to remove religious texts from the training data.
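
      A toy sketch of why that kind of filtering is coarse (the blocklist, the documents, and the idea that anyone filters exactly this way are all assumptions for illustration): a keyword filter only drops what it is explicitly told to look for, so prophecy-style or devotional text sails straight through.

      ```python
      # Toy corpus filter: not any lab's actual pipeline, just the general shape
      # of keyword-based filtering and the reason thematic text slips through.
      BLOCKLIST = {"spam-term", "malware-term"}  # hypothetical filter terms

      def keep(document: str) -> bool:
          """Keep a document unless it contains a blocklisted term."""
          words = set(document.lower().split())
          return not (words & BLOCKLIST)

      corpus = [
          "a forum post about cooking",
          "a prophecy declaring the reader the chosen one",  # thematic, but not blocklisted
      ]

      # Both documents survive, so the prophecy-style text stays in the training data.
      print([doc for doc in corpus if keep(doc)])
      ```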

• Satellaview@lemmy.zip · 16 days ago

    This happened to a close friend of mine. He was already on the edge, with some weird opinions and beliefs… but he was talking with real people who could push back.

    When he switched to spending basically every waking moment with an AI that could reinforce and iterate on his bizarre beliefs 24/7, he went completely off the deep end, fast and hard. We even had him briefly hospitalized and they shrugged, basically saying “nothing chemically wrong here, dude’s just weird.”

    He and his chatbot are building a whole parallel universe, and we can’t get reality inside it.

  • sowitzer@lemm.ee · 15 days ago

      This seems like an extension of social media and the internet. Weird people who talked at the bar or on the street corner were not taken seriously and didn’t get followers or lots of people agreeing with them; they were isolated in their thoughts. Then social media made that possible with little work: these people became a group and could reinforce each other’s beliefs. Now these chatbots let them live in a fantasy world.

• 7rokhym@lemmy.ca · 16 days ago

    I think OpenAI’s recent sycophancy issue has caused a new spike in these stories. One thing I noticed was models running on my own PC making observations like how rare it is for a person to think and do the things I do.

    The problem is that this is a model running on my GPU; it has never talked to another person. I hate insincere compliments, let alone overt flattery, so I was annoyed, but it did make me think that this kind of talk would be crack for a conspiracy nut or for mentally unwell people. It’s a whole risk area I hadn’t been aware of.

    https://www.msn.com/en-us/news/technology/openai-says-its-identified-why-chatgpt-became-a-groveling-sycophant/ar-AA1E4LaV

  • tehn00bi@lemmy.world · 16 days ago

      Humans are always looking for a god in a machine, or in a bush, in a cave, in the sky, in a tree… the ability to rationalize and see through difficult-to-explain situations has never been a human strong point.

  • morrowind@lemmy.ml · 15 days ago

      saying it’s rare for a person to think and do things that I do.

      Probably one of the most common kinds of flattery I see. I’ve tried lots of models, on-device and larger cloud ones. It happens during normal conversation, technical conversation, roleplay, general testing… you name it.

      Though it makes me think… these models are trained on internet text and the like, none of which really shows how much most people think privately, or what they say when they feel like they can talk freely.

• GooberEar@lemmy.wtf · 14 days ago

    I need to bookmark this for when I have time to read it.

    Not going to lie, there’s something persuasive about this for me, almost like the call of the void. There are days when I wish I could just get lost in AI-fueled fantasy worlds. I’m not even sure how that would work or what it would look like. It feels akin to going to church as a kid, when all the other children my age were supposedly talking to Jesus and feeling his presence, but no matter how hard I tried, I didn’t experience any of that. It made me feel like either I’m deficient or they’re delusional. And sometimes I honestly believe it would be better if I could live in some kind of delusion like that, where I feel special, as though I have a direct line to the divine.

    But if an AI were trying to convince me of some spiritual awakening, I honestly believe I’d just keep seeing through it, knowing that this is just a computer running algorithms and nothing deeper than that.

• Lovable Sidekick@lemmy.world · 15 days ago

    A friend of mine, currently being treated in a mental hospital, had a similar-sounding psychotic break that disconnected him from reality. He had a profound revelation that gave him a mission. He felt that sinister forces were watching and tracking him, and that they might see him as a threat and smack him down. But my friend’s experience had nothing to do with AI; in fact he’s very anti-AI. The whole scenario of receiving life-changing inside information and being called to fulfill a higher purpose is sadly a very common tale. Calling it “AI-fueled” is just clickbait.

• AizawaC47@lemm.ee · 15 days ago

    This reminds me of the movie Her, but it’s far worse than the romantic compatibility, relationship, and friendship shown throughout that movie. This goes way too deep into delusion, bordering on psychosis. It’s tearing people apart by catering self-delusional ideologies to individuals, because AI is good at that. The movie was prophetic and showed us what the future could be; instead, it got worse.

  • TankovayaDiviziya@lemmy.world · 15 days ago

      It has been a long time since I watched Her, but my takeaway from the movie was that, because making real-life connections is difficult, people came to rely on an AI that seemed more empathetic and probably more reliable than an actual human being. I think what many people don’t realise about why so many are single is that those people are afraid of making a connection with another person again.

    • douglasg14b@lemmy.world · 15 days ago

        Yeah, but they hold none of the actual emotional needs, complexities, or nuances of real human connections.

        Which means these people become further and further detached from the reality of human interaction, making them social dangers over time.

        Just like how humans who lack critical thinking are dangers in a society where everyone is expected to make sound decisions, humans who lack the ability to socially navigate or connect with other humans are dangerous in a society where people are expected to be socially stable.

        Obviously these people are not in good places in life. But AI is not going to make that better. It’s going to make it worse.

• just_another_person@lemmy.world · 16 days ago

    Not trying to sound like a prepper or anything, but this is real.

    One of my neighbor’s children just committed suicide because their chatbot boyfriend said something negative. Another child in my community did something similar a few years ago.

    Something needs to be done.

  • FaceDeer@fedia.io · 16 days ago

        This is the Daenerys case; for some reason it seems to be suddenly making the rounds again. Most of the news articles I’ve seen about it leave out a bunch of significant details, so it ends up sounding more like an “ooh, scary AI!” story (baits clicks better) rather than a “parents not paying attention to their disturbed kid’s cries for help and instead leaving loaded weapons lying around” story (as old as time, at least in America).

    • A_norny_mousse@feddit.org · 16 days ago

          Not only in America.

          I loved GOT, and I think Daenerys is a beautiful name, but still, there’s something about parents naming their kids after movie characters. In my youth, Kevins started to pop up everywhere (yep, that’s how old I am). They weren’t suicidal, but they behaved incredibly badly, so you could constantly hear their mothers screeching after them.

      • nyan@lemmy.cafe · 16 days ago

            Daenerys was the chatbot, not the kid.

            I wish I could remember who it was that said that kids’ names tend to reflect “the father’s family tree, or the mother’s taste in fiction,” though. (My parents were of the father’s-family-tree persuasion.)