• whotookkarl@lemmy.world · 1 month ago

    I’ve already had more than one conversation where people quote AI as if it were a source, like quoting Google as a source. When I show them how it can sometimes lie and explain that it’s not a primary source for anything, I just get that blank stare like I have two heads.

  • Zess@lemmy.world · 1 month ago

    You asked a stupid question and got a stupid response; seems fine to me.

    • TeamAssimilation@infosec.pub · 1 month ago

      Still, it’s kinda insane how two years ago we didn’t imagine we’d be instructing programs with things like “be helpful but avoid sensitive topics”.

      That was definitely a big step in AI.

  • AA5B@lemmy.world · 1 month ago

    I’ve been avoiding this question up until now, but here goes:

    Hey Siri …

    • how many r’s in strawberry? → 0
    • how many letter r’s in the word strawberry? → 10
    • count the letters in strawberry. How many are r’s? → ChatGPT: … 2
  • Fubarberry@sopuli.xyz · 1 month ago

    I asked mistral/brave AI and got this response:

    How Many Rs in Strawberry

    The word “strawberry” contains three "r"s. This simple question has highlighted a limitation in large language models (LLMs), such as GPT-4 and Claude, which often incorrectly count the number of "r"s as two. The error stems from the way these models process text through a process called tokenization, where text is broken down into smaller units called tokens. These tokens do not always correspond directly to individual letters, leading to errors in counting specific letters within words.
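
    You can poke at the tokenization yourself. Here’s a minimal sketch in Python, assuming the tiktoken package is installed (the exact split varies by tokenizer):

    ```python
    # pip install tiktoken
    import tiktoken

    # cl100k_base is the encoding used by GPT-4-era models
    enc = tiktoken.get_encoding("cl100k_base")

    # Break "strawberry" into tokens and show the text each one covers;
    # the pieces don't line up with individual letters, which is why
    # counting a specific letter is awkward for the model.
    for token_id in enc.encode("strawberry"):
        print(token_id, repr(enc.decode([token_id])))
    ```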

    • jj4211@lemmy.world · 1 month ago

      Yes, at some point the meme becomes the training data, and the LLM doesn’t need to count at all because it sees the answer all over the damn place.

  • rumba@lemmy.zip · 1 month ago

    Yeah, and you know, I always hated this: screwdrivers make really bad hammers.

  • winkly@lemmy.world · 1 month ago

    How many strawberries could a strawberry bury if a strawberry could bury strawberries 🍓

  • dan1101@lemm.ee · 1 month ago

    It’s like someone who has no formal education but has a high level of confidence and eavesdrops on a lot of random conversations.

  • LovableSidekick@lemmy.world · 1 month ago

    What would have been different about this if it had impressed you? It answered the literal question and also the question the user was actually trying to ask.

  • Grabthar@lemmy.world · 1 month ago

    Doc: That’s an interesting name, Mr…

    Fletch: Babar.

    Doc: Is that with one B or two?

    Fletch: One. B-A-B-A-R.

    Doc: That’s two.

    Fletch: Yeah, but not right next to each other, that’s what I thought you meant.

    Doc: Isn’t there a children’s book about an elephant named Babar?

    Fletch: Ha, ha, ha. I wouldn’t know. I don’t have any.

    Doc: No children?

    Fletch: No elephant books.

  • whynot_1@lemmy.world · 1 month ago

    I think I have seen this exact post word for word fifty times in the last year.

  • HoofHearted@lemmy.world · 1 month ago

    The terrifying thing is everyone criticising the LLM as being poor when it actually excelled at the task.

    The question asked was how many r’s in “strawbery”, and it answered: 2.

    It also detected the typo and offered the correct spelling.

    What’s the issue I’m missing?

    • Fubarberry@sopuli.xyz · 1 month ago

      There’s also an “r” in the first half of the word, “straw”, so it was completely skipping over that r and just focusing on the r’s in “berry”.

      • jj4211@lemmy.world · 1 month ago

        It doesn’t see “strawberry” or “straw” or “berry”. It’s closer to the truth to think of it as seeing 🍓, an abstract token representing the same concept that the training data associated with the word.

      • catloaf@lemm.ee · 1 month ago

        It wasn’t focusing on anything. It was generating text per its training data. There’s no logical thought process whatsoever.

    • Tywèle [she|her]@lemmy.dbzer0.com · 1 month ago

      The issue you are missing is that the AI said there is 1 ‘r’ in ‘strawbery’ even though the misspelled word contains 2. It then corrected the user with the proper spelling, ‘strawberry’, only to claim that word has 2 ‘r’s when it actually has 3.
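
      The counts themselves are trivial to verify outside the model; a quick Python sanity check:

      ```python
      # Character-level counting is exact, unlike token-level guessing
      for word in ("strawbery", "strawberry"):
          print(word, "->", word.count("r"))
      # strawbery -> 2
      # strawberry -> 3
      ```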

      • TomAwsm@lemmy.world · 1 month ago

        Sure, but for what purpose would you ever ask about the total number of a specific letter in a word? This isn’t the gotcha that so many think it is. The LLM answers like it does because it makes perfect sense for someone to ask if a word is spelled with a single or double “r”.

        • snooggums@lemmy.world · 1 month ago

          It makes perfect sense if you do mental acrobatics to explain why a wrong answer is actually correct.

        • jj4211@lemmy.world · 1 month ago

          Except many, many experts have said this is not why it happens. It cannot count letters in the incoming words. It doesn’t even know what “words” are; the text has already been abstracted into tokens by the time it runs through the model.

          It’s more like you don’t know the word strawberry, and instead you see: How many 'r’s in 🍓?

          And you respond with nonsense, because the relation between ‘r’ and 🍓 is nonsensical.
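
          A toy illustration of the point, with a made-up vocabulary rather than a real tokenizer:

          ```python
          # Toy stand-in for tokenization: each word maps to an opaque id.
          vocab = {"strawberry": 42}  # hypothetical token id

          model_input = vocab["strawberry"]  # the model sees only 42
          # Nothing about the integer 42 says how many r's the original
          # word contained; any "count" the model produces comes from
          # training statistics, not from inspecting letters.
          print(model_input)
          ```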