• daniskarma@lemmy.dbzer0.com
      1 day ago

      That guy is a moron.

      But AI assistance with taxes is also being introduced where I live (Spain, which is currently governed by a coalition of socialist parties).

      It’s still not deployed, so I can’t say how it will work, but the preliminary info seems promising. They are going to use a publicly trained AI project that has already been released.

      The thing is, I don’t think this is precisely a Musk idea. It’s something that various tax agencies around the world have probably been talking about for the last few years. He’s probably just parroting the idea and handing the project to one of his billionaire friends.

  • TheGoldenGod@lemmy.world
    1 day ago

    Training AI with internet content was always going to fail, as at least 60% of users online are trolls. It’s even dumber than expecting you can have a child from anal sex.

    • Rhaedas@fedia.io
      1 day ago

      While I do think it’s simply bad at generating answers, because that really is all that’s going on, generating the most likely next word, which works a lot of the time but then can fail spectacularly…

      What if we’ve created AI, but by training it on internet content we’re simply being trolled by the ultimate troll combination ever?
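
      For anyone curious what “generating the most likely next word” means at its crudest, here is a deliberately toy sketch in Python: a bigram frequency table with greedy picking. A real LLM uses a neural network over far more context, but the spirit of the failure mode is similar.

      ```python
      from collections import Counter, defaultdict

      # Toy "most likely next word" generator: count which word follows which
      # in a tiny corpus, then always pick the most frequent follower.
      corpus = "the cat sat on the mat and the cat ate the fish".split()

      followers = defaultdict(Counter)
      for current, nxt in zip(corpus, corpus[1:]):
          followers[current][nxt] += 1

      def generate(start, steps=6):
          out = [start]
          for _ in range(steps):
              options = followers.get(out[-1])
              if not options:
                  break  # nothing ever followed this word in the corpus
              out.append(options.most_common(1)[0][0])
          return " ".join(out)

      print(generate("the"))  # fluent-looking, meaning-free output
      ```

      Every step is just frequency statistics; nothing in there knows what a cat is, and nothing in there knows when it has gone off the rails.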

      • seaQueue@lemmy.world
        1 day ago

        This is what happens when you train your magical AI on a decade+ of internet shitposting

        • T156@lemmy.world
          1 day ago

          They didn’t learn from all the previous times someone tried to train a bot on the internet.

          • pogmommy@lemmy.ml
            17 hours ago

            It’s almost poetic how Tay.ai, Microsoft’s earlier shitty AI, was also poisoned by internet trolling and became a Nazi on Twitter nearly a decade ago.

  • Repple (she/her)@lemmy.world
    17 hours ago

    I searched for pictures of Uranus recently. Google gave me pictures of Jupiter, and the AI description on top chided me, telling me that what was shown were pictures of Jupiter, not Uranus. Twenty years ago it would have just worked.

  • lemmylommy@lemmy.world
    1 day ago

    Well, that’s less bad than 100% SEO-optimized garbage with LLM-generated spam stories around a few Amazon links.

  • regrub@lemmy.world
    1 day ago

    Who could have seen this coming? Definitely not the critics of LLM hyperscalers.

  • brsrklf@jlai.lu
    18 hours ago

    Only yesterday, I searched for a very simple figure: the number of public service agents in a specific administrative region. This is, obviously, public information. There is a government site where you can get it. However, I didn’t know the exact site, so I searched for it on Google.

    Of course, the AI summary shows up first and gives me a confident answer, accurately mirroring my exact request. However, the number seems way too low to me, so I go check the first actual search result, the aforementioned official site. Google’s shitty assistant took a sentence about a subgroup of agents and presented it as the total. The real number was clearly given just before it, and was about four times that.

    This is just a tidbit of information any human with the source would have identified in a second. How the hell are we supposed to trust AI for complex stuff after that?

    • Patch@feddit.uk
      13 hours ago

      The AI models can be hilariously bad even on their own terms.

      Yesterday I asked Gemini for a population figure (because I was too lazy to look it up myself). First I asked it:

      what is the combined population of place 1, place 2 and place 3

      It answered:

      Unfortunately, providing an absolutely precise, up-to-the-minute combined population figure is difficult. Population numbers are constantly changing.

      To find these figures, I would recommend checking the Office for National Statistics (ONS) website. They provide detailed population statistics for local authorities in England.

      On a whim, I asked it again as:

      combined population of place 1, place 2 and place 3

      And then it gave me the answer sweet as a nut.

      Apparently I was being too polite with it, I guess?

      • LinyosT@sopuli.xyz
        7 hours ago

        I slapped a picture of a chart into Gemini because I didn’t know what that type of chart was called, but I wanted to mention it in a uni report. I was too lazy to go looking through chart types and thought this would be quicker.

        I just asked it “What kind of chart is this?” and it ignored the question and started analysing the chart instead, stating what the chart was about and offering insights into it. It didn’t tell me what kind of chart it was, even though that was the only thing I asked.

        Bear in mind that I deliberately cropped out any context to avoid it trying to do that, just in case, so all I got from it was pure hallucinations. It was just making pure shit up that I didn’t ask for.

        I switched to the reasoning model and asked again, then it gave me the info I wanted.

  • JustEnoughDucks@feddit.nl
    16 hours ago

    And then I get downvoted for laughing when people say that they use AI for “general research” 🙄🙄🙄

    • Mike_The_TV@lemmy.world
      13 hours ago

      I’ve had people legitimately post the answer they got from ChatGPT to answer someone’s question and then get annoyed when people tell them it’s wrong.

  • lemmyingly@lemm.ee
    1 day ago

    To me it seems the title is misleading, as the research is very narrowly scoped. They provided news excerpts to the LLMs and asked for the title, the author, the publication date, and the URL. Is this something people actually do? I would be interested if they had used some real-world examples.
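
    If I understand the setup correctly, the test was roughly like the sketch below. The prompt wording and the names here are my guesses for illustration, not the researchers’ actual code.

    ```python
    # Rough sketch of the kind of query the study appears to describe: hand the
    # model a news excerpt and ask it to identify the source. The wording is an
    # assumption, not taken from the paper.
    def build_citation_prompt(excerpt: str) -> str:
        return (
            "Here is an excerpt from a news article:\n\n"
            f'"{excerpt}"\n\n'
            "Identify the article's title, author, publisher, "
            "publication date, and URL."
        )

    excerpt = "…"  # one of the study's news excerpts would be pasted here
    print(build_citation_prompt(excerpt))
    ```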