When German journalist Martin Bernklau typed his name and location into Microsoft’s Copilot to see how his articles would be picked up by the chatbot, the answers horrified him. Copilot’s results asserted that Bernklau was an escapee from a psychiatric institution, a convicted child abuser, and a conman preying on widowers. For years Bernklau had served as a courts reporter, and the AI chatbot falsely blamed him for the very crimes whose trials he had covered.

The accusations against Bernklau weren’t true, of course, and are examples of generative AI’s “hallucinations.” These are inaccurate or nonsensical responses to a prompt provided by the user, and they’re alarmingly common. Anyone attempting to use AI should always proceed with great caution, because information from such systems needs validation and verification by humans before it can be trusted.

But why did Copilot hallucinate these terrible and false accusations?

  • rsuri@lemmy.world · +45/-2 · 4 days ago

    “Hallucinations” is the wrong word. To the LLM there’s no difference between reality and “hallucinations”, because it has no concept of reality or of what’s true and false. All it knows is what word should probably come next. The “hallucination” only exists in the mind of the reader. The LLM did exactly what it was supposed to.
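    A toy sketch of that idea in Python (the bigram table below is invented, purely to illustrate): the “model” only knows how often words followed other words in its data, so a false continuation is exactly as available to it as a true one.

```python
import random
from collections import Counter

# Invented bigram counts standing in for training data. The "model" only knows
# how often one word followed another; it has no notion of truth.
bigram_counts = {
    "Bernklau": Counter({"reported": 40, "covered": 30, "convicted": 20}),
    "reported": Counter({"on": 60, "the": 40}),
    "covered": Counter({"the": 70, "a": 30}),
    "convicted": Counter({"of": 80, "in": 20}),
}

def next_word(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    counts = bigram_counts.get(word)
    if counts is None:
        return "<end>"
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# A false continuation like "Bernklau convicted of ..." is exactly as available
# to this "model" as a true one; truth never enters the calculation.
word = "Bernklau"
sentence = [word]
for _ in range(3):
    word = next_word(word)
    if word == "<end>":
        break
    sentence.append(word)
print(" ".join(sentence))
```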

    • Terrasque@infosec.pub · +6 · 4 days ago

      Well, it’s not lying, because the AI doesn’t know right or wrong. It doesn’t know that it’s wrong. It doesn’t have the concept of right or wrong, or of true or false.

      For the LLM, hallucinations are just a result of combining statistics and producing the next word, as you say. From the LLM’s “pov” they’re as real as everything else it knows.

      So what else can it be called? The closest concept we have is when the mind hallucinates.

    • Hobo@lemmy.world · +14/-6 · 3 days ago

      They’re bugs. Major ones. Fundamental flaws in the program. People with a vested interest in “AI” rebranded them as hallucinations in order to downplay the fact that they have a major bug in their software and they have no fucking clue how to fix it.

      • Terrasque@infosec.pub · +11 · 4 days ago

        It’s an inherent negative property of the way they work. It’s a problem, but not a bug any more than the result of a car hitting a tree at high speed is a bug.

        Calling it a bug implies it’s something unexpected that can be fixed, but as far as we know it can’t be fixed and is, in fact, expected behavior. Same as the car analogy.

        The only thing we can do is raise awareness and mitigate.

        • daniskarma@lemmy.dbzer0.com · +3/-6 · 4 days ago

          It actually can be fixed. There is an accuracy to answers, a measure of how confident the statistical model is in them. That’s why some questions get consistent answers while others don’t.

          The fix is not that hard: just have the chatbot answer “I don’t know” when its confidence in an answer isn’t high enough. It’s pretty similar to what the chatbot does when you ask it to make you a bomb: it just hijacks the answer calculated by the model and gives a predefined answer instead.

          But it makes the AI look bad. So most publicly available models just answer anything, even when they are not confident about it. Also, your reaction to the incorrect answer is used to train the model further, so it’s not even efficient for them to stop the hallucinations in their product. But it can be done.

          Models used by companies usually have a higher confidence threshold and answer “I don’t know” if they don’t have enough statistical support for a particular answer.
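          For what it’s worth, a minimal sketch of that proposal, assuming the model exposes per-token probabilities (the tokens, probabilities, and threshold below are all invented):

```python
import math

# Hypothetical per-token probabilities attached to two generated answers.
# An API that exposes log-probabilities would supply these; here they are invented.
confident_answer = [("Paris", 0.97), ("is", 0.99), ("the", 0.99), ("capital", 0.98)]
shaky_answer = [("He", 0.41), ("was", 0.52), ("convicted", 0.08), ("of", 0.35)]

CONFIDENCE_THRESHOLD = 0.6  # arbitrary cut-off, just for the sketch

def answer_or_abstain(tokens):
    """Return the text only if the geometric-mean token probability clears the threshold."""
    avg_logprob = sum(math.log(p) for _, p in tokens) / len(tokens)
    confidence = math.exp(avg_logprob)
    if confidence < CONFIDENCE_THRESHOLD:
        return "I don't know."
    return " ".join(tok for tok, _ in tokens)

print(answer_or_abstain(confident_answer))  # high confidence -> the sentence
print(answer_or_abstain(shaky_answer))      # low confidence  -> "I don't know."
```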

          • Terrasque@infosec.pub · +8 · 4 days ago

            The fix is not that hard: just have the chatbot answer “I don’t know” when its confidence in an answer isn’t high enough.

            This has been tried; it helps, but it’s not enough by itself. It’s one of the mitigation steps I was thinking of. And companies do work very hard to reduce hallucinations, just look at Microsoft’s newest thing.

            From that article:

            “Trying to eliminate hallucinations from generative AI is like trying to eliminate hydrogen from water,” said Os Keyes, a PhD candidate at the University of Washington who studies the ethical impact of emerging tech. “It’s an essential component of how the technology works.”

            Text-generating models hallucinate because they don’t actually “know” anything. They’re statistical systems that identify patterns in a series of words and predict which words come next based on the countless examples they are trained on.

            It follows that a model’s responses aren’t answers, but merely predictions of how a question would be answered were it present in the training set. As a consequence, models tend to play fast and loose with the truth. One study found that OpenAI’s ChatGPT gets medical questions wrong half the time.

            • daniskarma@lemmy.dbzer0.com · +1/-4 · 4 days ago

              The hydrogen-from-water thing is simply wrong, if it’s supposed to mean that hallucinations are just an unsolvable part of generative LLM technology.

              They are not inherent to the technology. They are a product of a lack of control over the statistical output, of prioritizing giving any answer over giving no answer.

              As with any statistics, you have a confidence in how true something is based on your data. It’s just a matter of setting the threshold higher or lower.

              If you ask an easy question like “What is the capital of France?” you won’t ever get a hallucination, because all models will have that answer with very high confidence. You just have to make it so that if that level of confidence is not reached, it defaults to an “I don’t know” answer. But, once again, this will make chatbots seem very dumb, as they will answer with lots of “I don’t know”.

              The problem here is the amount of data and the efficiency of the model. In order to get a usable general-purpose model with a confidence threshold high enough not to hallucinate, at today’s model efficiency it would need to be a humongous model, too big and with too much training data even for big tech. So we can either go that big, try to improve efficiency (which is proving very hard for general models), or do both. Time will tell, but I’m quite confident that we will reach a general-use model without hallucinations sooner or later.

              • jj4211@lemmy.world · +2 · 3 days ago

                This article is an example where statistical confidence doesn’t help. The model has lots of data, so it likely has high confidence, but it has no understanding of the nature of the relationships in that data.

                I recently worked on an application where we indicated the confidence of the model’s output. For some scenarios, the high-confidence output had even more mistakes than the low-confidence output.
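                The kind of check that surfaces this looks roughly like the sketch below; the confidence scores and correctness flags are invented toy data, not from any real evaluation:

```python
from collections import defaultdict

# Invented (model_confidence, was_correct) pairs from a hypothetical evaluation run.
results = [
    (0.95, False), (0.92, True), (0.90, False), (0.88, False),
    (0.55, True), (0.50, True), (0.45, True), (0.40, False),
]

# Bucket answers by reported confidence and compare accuracy. A well-calibrated
# model should be more accurate in the high bucket, but nothing guarantees it.
buckets = defaultdict(list)
for confidence, correct in results:
    buckets["high" if confidence >= 0.7 else "low"].append(correct)

for name, outcomes in buckets.items():
    accuracy = sum(outcomes) / len(outcomes)
    print(f"{name}-confidence bucket: {accuracy:.0%} correct over {len(outcomes)} answers")
```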

              • Terrasque@infosec.pub · +5 · 3 days ago

                As with any statistics, you have a confidence in how true something is based on your data. It’s just a matter of setting the threshold higher or lower.

                You just have to make it so that if that level of confidence is not reached, it defaults to an “I don’t know” answer. But, once again, this will make chatbots seem very dumb, as they will answer with lots of “I don’t know”.

                I think you misunderstand how LLMs work. They don’t have a confidence; it’s not like the model looks at its data and says “hmm, yes, most say Paris is the capital of France, so that’s the answer”. It “just” puts weight on the next token depending on its internal statistics, then one of those tokens is picked, and the process starts anew.
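                Roughly, that loop looks like the sketch below; the “model” is a stub returning made-up scores, where a real network would produce logits conditioned on the whole context:

```python
import math
import random

def softmax(logits):
    """Turn raw scores into a probability distribution over candidate next tokens."""
    peak = max(logits.values())
    exps = {tok: math.exp(score - peak) for tok, score in logits.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

def fake_model(context):
    """Stub for the network: returns invented scores and ignores the context."""
    return {"Paris": 6.0, "Lyon": 2.0, "Berlin": 1.0, "<end>": 3.0}

context = ["The", "capital", "of", "France", "is"]
for _ in range(5):
    weights = softmax(fake_model(context))
    # One token is picked according to its weight, then the process starts anew.
    tokens, probs = zip(*weights.items())
    choice = random.choices(tokens, weights=probs)[0]
    if choice == "<end>":
        break
    context.append(choice)

print(" ".join(context))
# Nowhere in this loop is there a lookup of "how sure am I this is true";
# there are only relative weights on the next token.
```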

                Teaching the model to say “I don’t know” helps a bit, and was lauded as “The Solution” a year or two ago, but it turns out it didn’t really help that much. Then you got the grounded approach, RAG, CoT, and so on, all with the goal of making the LLM more reliable. None of them solves the problem, because, as the PhD said, it’s inherent in how LLMs work.

                And no, local LLMs aren’t better; they’re actually much worse, and the big companies are throwing billions at trying to solve this. And no, it’s not because “that makes the LLM look dumb” that they haven’t solved it.

                Early on I was looking into making a business of providing local AI to businesses, especially RAG. But no model I tried, even with the documents being part of the context, came close to reliable enough. They all hallucinated too much. I still check this out now and then just out of my own interest, and while it’s become a lot better, it’s still a big issue. Which is why you see it in the news again and again.
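                For reference, the shape of that RAG setup is roughly the sketch below; the keyword retrieval and the ask_llm stub are assumed names standing in for a real vector index and an actual model call:

```python
# Minimal retrieval-augmented generation (RAG) shape: fetch the most relevant
# documents and paste them into the prompt so the model answers from them
# rather than from memory alone. Retrieval and the model call are stand-ins.
documents = [
    "Martin Bernklau is a court reporter who covered criminal trials for decades.",
    "The office cafeteria is closed on public holidays.",
]

def retrieve(question: str, docs: list[str], top_k: int = 1) -> list[str]:
    """Naive keyword-overlap retrieval; real systems use embeddings and a vector index."""
    q_words = set(question.lower().split())
    ranked = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return ranked[:top_k]

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to whatever local or hosted model is being tested."""
    return "<model output goes here>"

question = "Who is Martin Bernklau?"
context = "\n".join(retrieve(question, documents))
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
print(ask_llm(prompt))
```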

                This is the single biggest hurdle for the big companies in turning their AIs from a curiosity, something assisting a human, into the full-fledged autonomous knowledge systems they can sell to customers. You bet your dangleberries they’re trying everything they can to solve this.

                And if you think you have the solution that every researcher, developer, and machine learning engineer has missed, then please go prove it and collect some fat checks.

                • daniskarma@lemmy.dbzer0.com · +2/-1 · 3 days ago

                  What do you think is “weight”?

                  Simplifying, it’s the amount of data that says “The capital of France is Paris”. It doesn’t need to understand anything; it just has to stop the process if the statistics don’t provide enough to continue with confidence. If the data is all over the place and you have several instances of “The capital of France is Berlin/Madrid/Milan”, that’s measurable against all the data saying it is Paris. No need for any kind of “understanding” of the meaning of the individual words, just a measure of confidence in what the next word should be.

                  A couple of years back, when we played with small neural networks learning to play Mario, you could see the internal process in real time, as there weren’t that many layers. It was evident how the process and the levels of confidence changed depending on how deep the training was. Here it is just orders of magnitude above that, but it’s nothing impossible to overcome, as some people pretend to sell it.

                  An alternative way to measure confidence is to just run the same question several times and check whether the answers are equivalent.
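                  That check could be sketched like this; generate() is a hypothetical stand-in for sampling the model at a non-zero temperature:

```python
import random
from collections import Counter

def generate(question: str) -> str:
    """Hypothetical stand-in for sampling the model once at a non-zero temperature."""
    return random.choice(["Paris", "Paris", "Paris", "Lyon"])  # invented outputs

def self_consistency(question: str, n: int = 10, required_agreement: float = 0.7) -> str:
    """Ask the same question n times; only trust an answer most of the samples agree on."""
    answers = Counter(generate(question) for _ in range(n))
    best, count = answers.most_common(1)[0]
    return best if count / n >= required_agreement else "I don't know."

print(self_consistency("What is the capital of France?"))
```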

                  The PhD is a PhD in scaremongering about technology, so they’re not an authority on anything here.

                  IDK what you did, but SLMs don’t really hallucinate that much, if at all, especially if they are trained on good datasets.

                  As I said, the solution is not in my hands, as it involves improving the efficiency or the amount of data. Efficiency has issues, as current techniques seem unable to improve it past a certain level, and more data is, obviously, costly.

      • SkunkWorkz@lemmy.world · +13/-2 · 4 days ago

        It’s not a bug, just a negative side effect of the algorithm. This is what happens when the LLM doesn’t have enough data points to answer the prompt correctly.

        It can’t be programmed out like a bug; rather, a human needs to intervene and flag the answer as false, or the LLM needs more data to train on. Those dozens of articles this guy wrote aren’t enough for the LLM to get that he’s just a reporter. The LLM needs data that explicitly says that this guy is a reporter who reported on those trials. And since no reporter starts their articles with “Hi, I’m John Smith the reporter and today I’m reporting on…”, that data is missing. LLMs can’t draw conclusions from context.

  • gcheliotis@lemmy.world · +30/-4 · 4 days ago

    The AI did not “decide” anything. It has no will. And no understanding of the consequences of any particular “decision”. But I guess “probabilistic model produces erroneous output” wouldn’t get as many views. The same point could still be made about not placing too much trust in the output of such models. Let’s stop supporting this weird anthropomorphizing of LLMs. In fact we should probably become much more discerning in using the term “AI”, because it alludes to a general intelligence akin to human intelligence, with all the paraphernalia of humanity: consciousness, will, emotions, morality, sociality, duplicity, etc.

    • Hello Hotel@lemmy.world · +4/-2 · 4 days ago

      The AI “decided” in the same way the dice “decided” to land on 6 and 4 and screw me over. The system produced a result using logic and entropy. With AI, some people are just using this informal way of speaking (subconsciously anthropomorphising), while others look at it and genuinely believe, or want to pretend, it’s alive. You can never really know without asking them directly.

      Yes, if the intent is confusion, it is pretty manipulative.

      • gcheliotis@lemmy.world · +2 · 3 days ago

        Granted, our tendency towards anthropomorphism is near ubiquitous. But it would be disingenuous to claim that it does not play out in very specific and very important ways in how we speak and think about LLMs, given that they are capable of producing very convincing imitations of human behavior. And as such also produce a very convincing impression of agency. As if they actually do decide things. Very much unlike dice.

        • Hello Hotel@lemmy.world · +1 · 3 days ago

          A doll is also designed to be anthropomorphised, to have life projected onto it. Unlike with dolls, when someone talks about LLMs as alive, most people have no clue whether they are pretending or not. (And marketers take advantage of it!) We are fed a culture that accidentally says “ChatGPT + Boston Dynamics robot = RoboCop”, assuming the only fictional part is that we don’t yet have the ability to make it, not that the thing we create wouldn’t be human (or even need to be human).

    • stingpie@lemmy.world · +2 · 4 days ago

      No, you’re thinking of the first scene of the movie where a fly falls into the teletype machine and causes it to type ‘tuttle’ instead of ‘buttle’.

      • Blackmist@feddit.uk · +3 · 4 days ago

        It’s not my fault that Buttle’s heart condition didn’t appear on Tuttle’s file!

  • n0m4n@lemmy.world · +14 · 4 days ago

    If this were some fiction plot, Copilot reasoned out the plot twist and ran with it: instead of the butler, the writer did it. To the computer, these are about the same.

  • sunzu2@thebrainbin.org · +7/-14 · 5 days ago

    These are not hallucinations, whatever that is supposed to mean lol.

    The tool is working as intended and getting wrong answers due to how it works. His name frequently had these words around it online, so the AI told the story it was trained on. It doesn’t understand context. I am sure you can also ask it clarifying questions and it will admit it is wrong and correct itself…

    AI🤡

        • chiisana@lemmy.chiisana.net · +2/-3 · 5 days ago

          The models are not wrong. They are nothing but statistical systems that are really good at predicting the next word likely to follow, based on the prior information given. They don’t have an understanding of the context of the words, just that statistically they’re likely to follow. As such, all LLM outputs are correct with respect to their design.

          The users’ assumption/expectation of the output being factual is what is wrong. “Hallucination” is a fancy word used in an attempt to make users not feel as upset when the output passage doesn’t match their assumptions/expectations.

          • snooggums@lemmy.world · +5 · 5 days ago

            The users’ assumption/expectation of the output being factual is what is wrong.

            So randomly spewing out bullshit is the actual design goal of AI models? Why does it exist at all?

            • ApexHunter@lemmy.ml · +5 · 5 days ago

              They’re supposed to be good at transformation tasks: language translation, creating x in the style of y, replicating a pattern, etc. LLMs are outstandingly good at language-transformer tasks.

              Using an llm as a fact generating chatbot is actually a misuse. But they were trained on such a large dataset and have such a large number of parameters (175 billion!?) that they passably perform in that role… which is, at its core, to fill in a call+response pattern in a conversation.

              At a fundamental level it will never ever generate factually correct answers 100% of the time. That it generates correct answers > 50% of the time is actually quite a marvel.

              • snooggums@lemmy.world · +3 · 4 days ago

                They’re supposed to be good at transformation tasks: language translation, creating x in the style of y, replicating a pattern, etc. LLMs are outstandingly good at language-transformer tasks.

                That it generates correct answers > 50% of the time is actually quite a marvel.

                So good as a translator as long as accuracy doesn’t matter?

              • chiisana@lemmy.chiisana.net · +1 · 5 days ago

                If memory serves, 175B parameters is for the GPT-3 model, not even the 3.5 model that caught the world by surprise; and they have not disclosed parameter counts for GPT-4, 4o, or o1 yet. If memory also serves, GPT-3 was primarily English and had only a relatively small set of tokens (I think 50K or something to that effect) it was considering as next-token candidates. Now that it is able to work in multiple languages and is multimodal, the parameter space must be much, much larger.

                The amount of things it can do now is incredible, but our perceived incremental improvements on LLM will probably slow down (due to the pace fitting to the predicted lines in log space)… until the next big thing (neural nets > expert systems > deep learning > LLM > ???). Such an exciting time we’re in!

                Edit: found it. Roughly 50K tokens for input output embedding, in GPT3. 3Blue1Brown has a really good explanation here for anyone interested: https://youtu.be/wjZofJX0v4M

      • mindlesscrollyparrot@discuss.tchncs.de · +2 · 4 days ago

        Sure, but which of these factors do you think were relevant to the case in the article? The AI seems to have had a large corpus of documents relating to the reporter. Those articles presumably stated clearly that he was the reporter and not the defendant. We are left with “incorrect assumptions made by the model”. What kind of assumption would that be?

        In fact, all of the results are hallucinations. It’s just that some of them happen to be good answers and others are not. Instead of labelling the bad answers as hallucinations, we should be labelling the good ones as confirmation bias.

        • femtech@midwest.social · +1 · 4 days ago

          It was an incorrect assumption based on his name being in the article. It should have listed him as the author only, not a part of the cases.

      • EpeeGnome@lemm.ee · +6 · 4 days ago

        Yes, hallucination is the now-standard term for this, but it’s a complete misnomer. A hallucination is when something that does not actually exist is perceived as if it were real. LLMs do not perceive, and therefore can’t hallucinate. I know, the word is stuck now and fighting against it is like trying to bail out the tide, but it really annoys me and I refuse to use it. The phenomenon would be better described as confabulation.

  • tiramichu@lemm.ee · +27 · 5 days ago

    The worrying truth is that we are all going to be subject to these sorts of false correlations and biases and there will be very little we can do about it.

    You go to buy car insurance and find that your premium has gone up 200% for no reason. Why? Because the AI said so. Maybe someone with your name was in a crash. Maybe you parked overnight at the same GPS location where an accident happened. Who knows what data actually underlies that decision or how it was made, but it was made. And even the insurance company itself doesn’t know how it ended up that way.

    • catloaf@lemm.ee · +13 · 5 days ago

      We’re already there, no AI needed. Rates are all generated by computer. Ask your agent why your rate went up and they’ll say “idk computer said so”.

  • erenkoylu@lemmy.ml · +22/-9 · 4 days ago

    The problem is not the AI. The problem is the huge number of morons who deploy AI without proper verification and control.

    • Cethin@lemmy.zip · +6/-3 · 3 days ago

      Sure, and also people using it without knowing that it’s glorified text completion. It finds patterns, and that’s mostly it. If your task involves pattern recognition, then it’s a great tool. If it requires novel thought, intelligence, or the synthesis of information, then you probably need something else.

  • Soup@lemmy.cafe · +7/-3 · 4 days ago

    And yet here we are, praising this garbage for its ability to perform simple tasks and take jobs from artists and entertainers.

  • Broken@lemmy.ml · +35/-1 · 4 days ago

    This sounds like a great movie.

    AI sends police after him because of things he wrote. Writer is on the run, trying to clear his name the entire time. Somehow gets to broadcast the source of the articles to the world to clear his name. Plot twist ending is that he was indeed the perpetrator behind all the crimes.

  • deegeese@sopuli.xyz · +82/-6 · 5 days ago

    It’s frustrating that the article treats the problem as though the mistake was including Martin’s name in the data set, and muses that that part isn’t fixable.

    Martin’s name is a natural feature of the data set, but when they should be talking about fixing the AI model to stop hallucinations, or allowing humans to correct them, it seems the only fix is to censor the incorrect AI response, which gives the implication that it was saying something true but salacious.

    Most of these problems would go away if AI vendors exposed the reasoning chain instead of treating their bugs as trade secrets.

      • Terrasque@infosec.pub · +2 · 3 days ago

        https://learnprompting.org/docs/intermediate/chain_of_thought

        It’s suspected to be one of the reasons why Claude and OpenAI’s new o1 model are so good at reasoning compared to other LLMs.

        It can sometimes notice hallucinations and adjust itself, but there have also been examples where the CoT reasoning itself introduced hallucinations and made it throw away correct answers. So it’s not perfect. Overall a big improvement though.
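        For anyone who doesn’t want to click through, a bare-bones illustration of what a CoT-style prompt looks like (the worked example and the extraction note are illustrative, not any vendor’s actual format):

```python
# Chain-of-thought (CoT) prompting: instead of asking for the answer directly,
# the prompt shows and requests intermediate reasoning before the final answer.
# The worked example and wording here are illustrative only.
question = "A reporter covered 120 trials over 4 years. How many per year on average?"

cot_prompt = f"""Q: A shop sold 60 items over 3 days. How many per day on average?
A: Let's think step by step. 60 items divided by 3 days is 20 items per day.
The answer is 20.

Q: {question}
A: Let's think step by step."""

print(cot_prompt)
# The model's completion would contain its reasoning chain; the final answer is
# typically pulled from the last "The answer is ..." line it produces.
```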

    • 100@fedia.io · +19/-4 · 5 days ago

      Just shows that these “AI”s are completely useless at what they are trained for.

      • catloaf@lemm.ee · +31/-1 · 5 days ago

        They’re trained for generating text, not factual accuracy. And they’re very good at it.

  • Brutticus@lemm.ee · +31 · 4 days ago

    “This guy’s name keeps showing up all over this case file.” “That’s because he’s the victim!”

  • Ilovethebomb@lemm.ee · +43/-4 · 5 days ago

    I’d love to see more AI providers getting sued for the blatantly wrong information their models spit out.

    • catloaf@lemm.ee · +13/-29 · 5 days ago

      I don’t think they should be liable for what their text generator generates. I think people should stop treating it like gospel. At most, they should be liable for misrepresenting what it can do.

      • Stopthatgirl7@lemmy.world (OP) · +16/-3 · 5 days ago

        If they aren’t liable for what their product does, who is? And do you think they’ll be incentivized to fix their glorified chat boxes if they know they won’t be held responsible for it?

        • lunarul@lemmy.world · +5/-16 · 5 days ago

          Their product doesn’t claim to be a source of facts. It’s a generator of human-sounding text. It’s great for that purpose and they’re not liable for people misusing it or not understanding what it does.

          • Stopthatgirl7@lemmy.world (OP) · +14/-3 · 5 days ago

            So you think these companies should have no liability for the misinformation they spit out. Awesome. That’s gonna end well. Welcome to digital snake oil, y’all.

            • lunarul@lemmy.world · +6/-7 · 5 days ago

              I did not say companies should have no liability for publishing misinformation. Of course if someone uses AI to generate misinformation and tries to pass it off as factual information they should be held accountable. But it doesn’t seem like anyone did that in this case. Just a journalist putting his name in the AI to see what it generates. Nobody actually spread those results as fact.

      • kibiz0r@midwest.social · +8 · 5 days ago

        If we’ve learned any lesson from the internet, it’s that once something exists it never goes away.

        Sure, people shouldn’t believe the output of their prompt. But if you’re generating that output, a site can use the API to generate a similar output for a similar request. A bot can generate it and post it to social media.

        Yeah, don’t trust the first source you see. But if the search results are slowly being colonized by AI slop, it gets to a point where the signal-to-noise ratio is so poor it stops making sense to only blame the poor discernment of those trying to find the signal.

      • Ilovethebomb@lemm.ee · +20 · 5 days ago

        I want them to have more warnings and disclaimers than a pack of cigarettes. Make sure the users are very much aware they can’t trust anything it says.

      • RvTV95XBeo@sh.itjust.works · +52/-1 · 5 days ago

        If these companies are marketing their AI as being able to provide “answers” to your questions they should be liable for any libel they produce.

        If they market it as “come have our letter generator give you statistically associated collections of letters to your prompt” then I guess they’re in the clear.

      • TheFriar@lemm.ee · +29/-3 · 5 days ago

        So you don’t think these massive megacompanies should be held responsible for making disinformation machines? Why not?

          • medgremlin@midwest.social · +6 · 4 days ago

            Which is why, in many cases, there should be liability assigned. If a self-driving car kills someone, the programming of the car is at least partially to blame, and the company that made it should be liable for the wrongful death suit, and probably for criminal charges as well. Citizens United already determined that corporations are people…now we just need to put a corporation in prison for their crimes.

  • Ganbat@lemmy.dbzer0.com · +15 · 4 days ago

    Oh, this would be funny if people en masse were smart enough to understand the problems with generative ai. But, because there are people out there like that one dude threatening to sue Mutahar (quoted as saying “ChatGPT understands the law”), this has to be a problem.

    • finitebanjo@lemmy.world · +15/-1 · 4 days ago

      And to help educate the ignorant masses:

      Generative AI and LLMs start by predicting the next word in a sequence. The words are generated independently of each other and when optimized: simultaneously.

      The reason that it used the reporter’s name as the culprit is because out of the names in the sample data his name appeared at or near the top of the list of frequent names so it was statistically likely to be the next name mentioned.

      AI have no concepts, period. It doesn’t know what a person is, or what the laws are. It generates word salad that approximates human statements. It is a math problem, statistics.

      There are actual science fiction stories built on the premise that AI reporting on the start of Nuclear War resulted in actual kickoff of the apocalypse, and we’re at that corner now.

      • Ganbat@lemmy.dbzer0.com · +5 · 4 days ago

        There are actual science fiction stories built on the premise that AI reporting on the start of Nuclear War resulted in actual kickoff of the apocalypse, and we’re at that corner now.

        IIRC, this was the running theory in Fallout until the show.

        Edit: I may be misremembering, it may have just been something similar.

        • finitebanjo@lemmy.world · +6 · 4 days ago

          I haven’t played the original series but in 3 and 4 it was pretty much confirmed the big companies like BlamCo! intentionally set things in motion, but also that Chinese nuclear vessels were already in place near America.

          Ironically, Vault-Tec wasn’t planning to ever actually use their vaults for anything except human experimentation, so they might have been out of the loop.

          • Ganbat@lemmy.dbzer0.com · +4 · 4 days ago

            Yeah, it’s kinda been all over the place, but that’s where the show ended up going, except Vault Tech was very much in the loop. I can’t get spoiler tags to work, so I’ll leave out the details.

            What I’m thinking of, though, was also in Fallout 4. I’ve been thinking on it, and I remember now that what I’m thinking of is that it’s implied that the AI from the Railroad quests fed fake info about incoming missiles to force America to fire. I still don’t remember any specifics, though, and I could be misremembering. It’s been a good few years after all, lol.

      • Echo Dot@feddit.uk · +2/-1 · 3 days ago

        That’s not quite true. AIs are not just analyzing the possible next word; they are using complex mathematical operations to calculate the next word. It’s not just the next one that’s most possible, it’s the one that’s most likely given the input.

        The trouble is that AIs are only as smart as their algorithms, and Google’s AI seems to be really goddamn stupid.

        Point is, they’re not all made equal. Some of them are actually quite impressive, although you are correct that none of them are actually intelligent.

        • finitebanjo@lemmy.world · +1/-1 · 3 days ago

          nOt JUsT anAlYzInG thE NeXT wOrD

          Poor use of terms. AI does not analyze. It does not think, or decode, or even parse things. It gets fed sample data and, when given a prompt (half a form), it uses a statistical algorithm to finish the other half.

          All of the algorithms are stupid, they will all hallucinate and say the wrong things. You can add more corrective layers like OpenAI has but you’ll only be closer to the sample data. 95% accurate. 98%. 99%. It doesn’t matter, it’s always stuck just below average human competency for questions already asked countless times, and completely worthless for anything that requires actual independent thought.

      • NιƙƙιDιɱҽʂ@lemmy.world · +2/-1 · 3 days ago

        AI have no concepts, period. It doesn’t know what a person is, or what the laws are. It generates word salad that approximates human statements.

        This isn’t quite accurate. LLMs semantically group words and have a sort of internal model of concepts and how different words relate to them. It’s still not that of a human and certainly does not “understand” what it’s saying.

        I get that everyone’s on the “shit on AI train”, and it’s rightfully deserved in many ways, but you’re grossly oversimplifying. That said, way too many people do give LLMs too much credit and think it’s effectively magic. Reality, as is usually the case, is somewhere in the middle.

        • finitebanjo@lemmy.world · +1 · 3 days ago

          Jfc, you dudes really piss me off with these contrarian rants. Piss off, it takes power and makes sophisticated word salads.

          • NιƙƙιDιɱҽʂ@lemmy.world · +1/-1 · 3 days ago

            Oh, my bad, I thought the point of discussion boards was to have a discussion…

            If your only goal is to spout misinformation and stick your fingers in your ears, I’ll go somewhere else.

      • WldFyre@lemm.ee · +2/-1 · 4 days ago

        Generative AI and LLMs start by predicting the next word in a sequence. The words are generated independently of each other

        Is this true? I know that’s how Markov chains work, but I thought neural nets worked differently with larger tokens.

        • finitebanjo@lemmy.world · +4/-1 · 3 days ago

          The only difference between a generic old-fashioned word-salad generator and GPT-4 is the scale. You put multiple layers correcting for different factors on it and suddenly your language model turns into a Large Language Model.

          So basically your large tokens are made up of smaller tokens, but it’s still just a statistical approximation of the sample data, with little to no emergent behavior or even memory of what it’s saying as it says it.

          It also exponentially increases power requirements, as the world is figuring out.

          • WldFyre@lemm.ee · +2/-1 · 3 days ago

            I don’t disagree, I was just pointing out that “each word is generated independently of each other” isn’t strictly accurate for LLMs.

            It’s part of the reason they are so convincing to some people: they are able to hold threads semi-coherently throughout entire essay-length passages without obvious internal lapses of logic.

            • finitebanjo@lemmy.world · +3/-1 · 3 days ago

              I think you’re seeing coherence where there is none.

              Ask it to solve the riddle about the fox the chicken and the grains.

              Even if it does solve the riddle without blurting out random nonsense, that’s just because the sample data solved the riddle billions of times before.

              It’s just guessing words.

              • WldFyre@lemm.ee · +1/-1 · 3 days ago

                I think you’re seeing coherence where there is none.

                Ask it to solve the riddle about the fox the chicken and the grains.

                I think it getting tripped up on riddles that people often fail or it not getting factual things correct isn’t as important for “believability”, which is probably a word closer to what I meant than “coherence.”

                No one was worried about misinformation coming from r/SubredditSimulator, for example, because Markov chains have much, much less believability. “Just guessing words” is a bit of an over-simplification for neural nets, which are a powerful technology even if the utility of turning it towards language is debatable.

                And if LLM’s weren’t so believable we wouldn’t be having so many discussions about the misinformation or misuse they could cause. I don’t think we’re disagreeing I’m just trying to add more detail to your “each word is generated independently” quote, which is patently wrong and detracts from your overall point.

                • finitebanjo@lemmy.world · +1 · 3 days ago

                  lmao yeh bro such a hard riddle totally

                  I concede. AI has a superintelligient brain and I’m just so jealous. You have permission to whip me into submission.