• NutWrench@lemmy.world
    link
    fedilink
    English
    arrow-up
    4
    arrow-down
    1
    ·
    16 days ago

    But AI is the wave of the future! The hot, NEW thing that everyone wants! ** furious jerking off motion **

  • buddascrayon@lemmy.world
    link
    fedilink
    English
    arrow-up
    7
    arrow-down
    1
    ·
    16 days ago

    That’s why I avoid them like the plague. I’ve even changed almost every platform I’m using to get away from the AI-pocalypse.

    • Echo Dot@feddit.uk
      link
      fedilink
      English
      arrow-up
      11
      arrow-down
      1
      ·
      edit-2
      16 days ago

      I can’t stand the corporate double think.

      Despite the mountains of evidence that AI is not capable of something even as basic as reading an article and telling you what it's about, it's still apparently going to replace humans. How do they come to that conclusion?

      The world won’t be destroyed by AI, it will be destroyed by idiot venture capitalist types who reckon that AI is the next big thing. Fire everyone, replace it all with AI; then nothing will work and nobody will be able to buy anything because nobody has a job.

      Cue global economic collapse.

      • vxx@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        ·
        edit-2
        16 days ago

        It’s a race, and bullshitting brings venture capital and therefore an advantage.

        99.9% of AI companies will go belly up when investors start asking for results.

        • buddascrayon@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          15 days ago

          Yeah seriously just look at Sam Bankman-Fried and that Theranos dipshit. Both bullshitted their way into millions. Only difference is that Altman and Musk’s bubbles haven’t popped yet.

  • rottingleaf@lemmy.world
    link
    fedilink
    English
    arrow-up
    2
    ·
    17 days ago

    Yes, I think it would be naive to expect humans to design something capable of what humans are not.

    • maniclucky@lemmy.world
      link
      fedilink
      English
      arrow-up
      3
      ·
      17 days ago

      We do that all the time. It’s kind of humanity’s thing. I can’t run 60mph, but my car sure can.

          • rottingleaf@lemmy.world
            link
            fedilink
            English
            arrow-up
            3
            ·
            17 days ago

            A human can move, a car can move. A human can’t move at such speed, a car can. The former is a qualitative difference as I meant it, the latter a quantitative one.

            Anyway, that’s how I used those words.

            • maniclucky@lemmy.world
              link
              fedilink
              English
              arrow-up
              1
              ·
              edit-2
              16 days ago

              Ooooooh. Ok that makes sense.

              With that said, you might look at researchers using AI to come up with new useful ways to fold proteins, and at its uses in biology in general. The roadblock, to my understanding (data science guy, not biologist), is the time it takes to discover these things/how long it would take evolution to get there. Admittedly that’s still somewhat quantitative.

              For qualitative examples we always have hallucinations, and that’s a poorly understood mechanism that may well be able to create actual creativity. But it’s the nature of AI to remain within (or close to within) the corpus of knowledge it was trained on. Though that leads to “nothing new under the sun”, so I’ll stop rambling now.

              • rottingleaf@lemmy.world
                link
                fedilink
                English
                arrow-up
                2
                ·
                16 days ago

                The roadblock, to my understanding (data science guy not biologist), is the time it takes to discover these things/how long it would take evolution to get there. Admittedly that’s still somewhat quantitative.

                Yes.

                But it’s the nature of AI to remain within (or close to within) the corpus of knowledge they were trained on.

                That’s fundamentally solvable.

                I’m not against attempts at global artificial intelligence, just against one approach to it. Also no matter how we want to pretend it’s something general, we in fact want something thinking like a human.

                What all these companies like DeepSeek and OpenAI and others are doing lately, with some “chain-of-thought” model, is in my opinion what they should have been focused on: how do you organize data for a symbolic logic model, how do you generate and check syllogisms, and how do you then synthesize algorithms based on syllogisms? There seems to be something like a chicken-and-egg problem between logic and algebra in such a system; each seems necessary for the other, but they depend on each other (for a machine, that is; humans keep a few things constant for most of our existence). And the predictor into which they’ve invested so much data is a minor part which doesn’t have to be so powerful.

                • maniclucky@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  ·
                  16 days ago

                  I’m not against attempts at global artificial intelligence, just against one approach to it. Also no matter how we want to pretend it’s something general, we in fact want something thinking like a human.

                  Agreed. The techbros pretending that the stochastic parrots they’ve created are general AI annoys me to no end.

                  While not as academically cogent as your response (totally not feeling inferior at the moment), it has struck me that LLMs would make a fantastic input/output for a greater system, analogous to the Wernicke/Broca areas of the brain. It seems like they’re trying to get a parrot to swim by having it do literally everything. I suppose the thing that sticks in my craw is the giveaway promise that this one technique (more or less, I know it’s more complicated than that) can do literally everything a human can, which should be an entire parade of red flags to anyone with a drop of knowledge of data science or fraud. I know it’s supposed to be a universal function approximator, hypothetically, but I think the gap between hypothesis and practice is very large, and we’re dumping a lot of resources into filling in the canyon (chucking more data at the problem) when we could be building a bridge (creating specialized models that work together).

                  Now that I’ve used a whole lot of cheap metaphor on someone who casually dropped ‘syllogism’ into a conversation, I’m feeling like a freshman in a grad-level class. I’ll admit I’m nowhere near up to date on specific models and bleeding-edge techniques.

  • Joelk111@lemmy.world
    link
    fedilink
    English
    arrow-up
    2
    ·
    15 days ago

    I’m pretty sure that every user of Apple Intelligence could’ve told you that. If AI is good at anything, it isn’t things that require nuance and factual accuracy.

  • Teknikal@eviltoast.org
    link
    fedilink
    English
    arrow-up
    16
    arrow-down
    4
    ·
    16 days ago

    I just tried it on DeepSeek; it did it fine and gave the source for everything it mentioned as well.

    • datalowe@lemmy.world
      link
      fedilink
      English
      arrow-up
      13
      arrow-down
      1
      ·
      16 days ago

      Do you mean you rigorously went through a hundred articles, asking DeepSeek to summarise them and then got relevant experts in the subject of the articles to rate the quality of answers? Could you tell us what percentage of the summaries that were found to introduce errors then? Literally 0?

      Or do you mean that you tried having DeepSeek summarise a couple of articles, didn’t see anything obviously problematic, and figured it is doing fine? Replacing rigorous research and journalism by humans with a couple of quick AI prompts is the core of the issue that the article is getting at. Because if so, please reconsider how you evaluate (or trust others’ evaluations of) information tools which might help, or help destroy, democracy.
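
      To be concrete, the kind of rigorous check I mean boils down to something like this (a hypothetical sketch; the function and field names are made up, and the toy data just mirrors the study’s headline figures):

      ```python
      # Sketch: have relevant experts rate each AI summary, then report
      # what fraction had significant issues or introduced factual errors.

      def evaluate_summaries(ratings):
          """ratings: list of dicts like
          {"has_error": bool, "significant_issue": bool}, one per summary."""
          n = len(ratings)
          with_errors = sum(r["has_error"] for r in ratings)
          with_issues = sum(r["significant_issue"] for r in ratings)
          return {
              "pct_factual_errors": 100 * with_errors / n,
              "pct_significant_issues": 100 * with_issues / n,
          }

      # Toy data shaped like the BBC study's results (51% issues, 19% errors):
      ratings = [{"has_error": i < 19, "significant_issue": i < 51}
                 for i in range(100)]
      print(evaluate_summaries(ratings))
      # → {'pct_factual_errors': 19.0, 'pct_significant_issues': 51.0}
      ```

      Trying it on two articles and eyeballing the output gives you none of those numbers.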

  • mentalNothing@lemmy.world
    link
    fedilink
    English
    arrow-up
    65
    ·
    17 days ago

    Idk guys. I think the headline is misleading. I had an AI chatbot summarize the article and it says AI chatbots are really, really good at summarizing articles. In fact it pinky promised.

  • TroublesomeTalker@feddit.uk
    link
    fedilink
    English
    arrow-up
    18
    arrow-down
    8
    ·
    17 days ago

    But the BBC is increasingly unable to accurately report the news, so this finding is no real surprise.

      • StarlightDust@lemmy.blahaj.zone
        link
        fedilink
        English
        arrow-up
        4
        ·
        16 days ago

        Look at their reporting of the Employment Tribunal for the nurse from Fife who was sacked for abusing a doctor. They refused to gender the doctor correctly in every article, to the point where the only pronoun used for her is the sacked transphobe referring to her with “him”. They also very much paint it like it is Dr Upton on trial and not Ms Peggie.

      • TroublesomeTalker@feddit.uk
        link
        fedilink
        English
        arrow-up
        2
        ·
        edit-2
        16 days ago

        It’s a “how the mighty have fallen” kind of thing. They are well into the click-bait farm mentality now - have been for a while.

        It’s present on the news sites, but far worse on things where they know they steer opinion and discourse. They used to ensure political parties had coverage in line with their support, but for like 10 years prior to Brexit they gave Farage and his jackasses hugely disproportionate coverage - like 20x more than their base. This was at a time when the SNP were doing very well, yet were frequently seen less than the UK Independence Party. And I don’t recall a single instance of it being pointed out that 10 years of poor interactions with Europe may have been at least partially fuelled by Nidge being our MEP and never turning up. Hell, we had veto rights and he was on the fisheries commission. All that shit about fishermen was a problem he made.

        Current reporting is heavily spun, and they definitely aren’t the worst in the world, but they are also definitely not the bastion of unbiased news I grew up with.

        Until relatively recently you could see the deterioration by flipping to the world service, but that’s fallen into line now.

        If you have the time to follow independent journalists, the problem becomes clearer; if not, look at output from parody news sites - it’s telling that Private Eye and Newsthump manage the criticism that the BBC can’t seem to get to.

        Go look at the bylinetimes.com front page, grab a random story and compare coverage with the BBC’s. One of these is crowd-funded reporters, and the other a national news site with great funding and legal obligations to report in the public interest.

        I don’t hate them, they just need to be better.

  • Turbonics@lemmy.sdf.org
    link
    fedilink
    English
    arrow-up
    35
    arrow-down
    4
    ·
    16 days ago

    BBC is probably salty the AI is able to insert the word Israel alongside a negative term in the headline

    • Krelis_@lemmy.world
      link
      fedilink
      English
      arrow-up
      15
      ·
      edit-2
      16 days ago

      Some examples of inaccuracies found by the BBC included:

      Gemini incorrectly said the NHS did not recommend vaping as an aid to quit smoking

      ChatGPT and Copilot said Rishi Sunak and Nicola Sturgeon were still in office even after they had left

      Perplexity misquoted BBC News in a story about the Middle East, saying Iran initially showed “restraint” and described Israel’s actions as “aggressive”

      • Turbonics@lemmy.sdf.org
        link
        fedilink
        English
        arrow-up
        1
        ·
        11 days ago

        Perplexity misquoted BBC News in a story about the Middle East, saying Iran initially showed “restraint” and described Israel’s actions as “aggressive”

        I did not even read up to there but wow BBC really went there openly.

  • Optional@lemmy.world
    link
    fedilink
    English
    arrow-up
    47
    arrow-down
    2
    ·
    17 days ago

    Turns out, spitting out words when you don’t know what anything means or what “means” means is bad, mmmmkay.

    It got journalists who were relevant experts in the subject of the article to rate the quality of answers from the AI assistants.

    It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.

    Additionally, 19% of AI answers which cited BBC content introduced factual errors, such as incorrect factual statements, numbers and dates.

    Introduced factual errors

    Yeah that’s . . . that’s bad. As in, not good. As in - it will never be good. With a lot of work and grinding it might be “okay enough” for some tasks some day. That’ll be another 200 Billion please.

    • Rivalarrival@lemmy.today
      link
      fedilink
      English
      arrow-up
      4
      arrow-down
      9
      ·
      17 days ago

      It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.

      How good are the human answers? I mean, I expect that an AI’s error rate is currently higher than an “expert” in their field.

      But I’d guess the AI is quite a bit better than, say, the average Republican.

      • Balder@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        arrow-down
        2
        ·
        edit-2
        16 days ago

        I guess you don’t get the issue. You give the AI some text to summarize the key points. The AI gives you wrong info in a percentage of those summaries.

        There’s no point in comparing this to a human, since this is usually something done for automation, that is, to work for a lot of people or a large quantity of articles. At best you can compare it to other automated summaries that existed before LLMs, which might not have all the info, but won’t make up random facts that aren’t in the article.

        • Rivalarrival@lemmy.today
          link
          fedilink
          English
          arrow-up
          3
          ·
          16 days ago

          I’m more interested in the technology itself, rather than its current application.

          I feel like I am watching a toddler taking her first steps; wondering what she will eventually accomplish in her lifetime. But the loudest voices aren’t cheering her on: they’re sitting in their recliners, smugly claiming she’s useless. She can’t even participate in a marathon, let alone compete with actual athletes!

          Basically, the best AIs currently have college-level mastery of language, and the reasoning skills of children. They are already far more capable and productive than anti-vaxxers, or our current president.

          • Balder@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            edit-2
            16 days ago

            It’s not that people simply decided to hate on AI; it was the sensationalist media hyping it up so much to the point of scaring people (“it’ll take all your jobs”), and companies shoving it down our throats by putting it in every product, even when it gets in the way of the actual functionality people want to use. Even my company “forces” us all to use X prompts every week as a sign of being “productive”. Literally every IT consultancy in my country has a ChatGPT wrapper they’re trying to sell, and they think they’re different because of it. The result couldn’t have been any different: when something gets too much exposure it also gets a lot of hate, especially when it is forced on people.

      • fine_sandy_bottom@discuss.tchncs.de
        link
        fedilink
        English
        arrow-up
        9
        arrow-down
        2
        ·
        17 days ago

        I don’t necessarily dislike “AI” but I reserve the right to be derisive about inappropriate use, which seems to be pretty much every use.

        Using AI to find petroglyphs in Peru was cool. Reviewing medical scans is pretty great. Everything else is shit.

      • WagyuSneakers@lemmy.world
        link
        fedilink
        English
        arrow-up
        4
        arrow-down
        7
        ·
        16 days ago

        I work in tech and can confirm that the vast majority of engineers “dislike AI” and are disillusioned with AI tools. Even ones that work on AI/ML tools. It’s fewer and fewer people the higher up the pay scale you go.

        There isn’t a single complex coding problem an AI can solve. If you don’t understand something and it helps you write it I’ll close the MR and delete your code since it’s worthless. You have to understand what you write. I do not care if it works. You have to understand every line.

        “But I use it just fine and I’m an…”

        Then you’re not an engineer and you shouldn’t have a job. You lack the intelligence, dedication and knowledge needed to be one. You are a detriment to your team and company.

        • 5gruel@lemmy.world
          link
          fedilink
          English
          arrow-up
          3
          arrow-down
          1
          ·
          16 days ago

          That’s some weird gatekeeping. Why stop there? Whoever is using a linter is obviously too stupid to write clean code right off the bat. Syntax highlighting is for noobs.

          I wholeheartedly dislike people who think they need to define some arcane rules for how a task is achieved instead of just looking at the output.

          Accept that you have probably already merged code that was generated by AI, and it’s totally fine as long as tests are passing and it fits the architecture.

          • WagyuSneakers@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            15 days ago

            You’re supposed to gatekeep code. There is nothing wrong with gatekeeping things that aren’t hobbies.

            If someone can’t explain every change they’re making and why they chose to do it that way they’re getting denied. The bar is low.

        • Eheran@lemmy.world
          link
          fedilink
          English
          arrow-up
          4
          arrow-down
          2
          ·
          16 days ago

          “I can calculate powers with decimal values in the exponent and if you can not do that on paper but instead use these machines, your calculations are worthless and you are not an engineer”

          You seem to fail to see that this new tool has unique strengths. As the other guy said, it is just like people ranting about Wikipedia. Absurd.

          • WagyuSneakers@lemmy.world
            link
            fedilink
            English
            arrow-up
            2
            arrow-down
            4
            ·
            16 days ago

            You can also just have an application designed to do that do it more accurately.

            If you can’t do that you’re not an engineer. If you don’t recommend that you’re not an engineer.

    • MDCCCLV@lemmy.ca
      link
      fedilink
      English
      arrow-up
      3
      arrow-down
      6
      ·
      17 days ago

      Is it worse than the current system of editors making shitty click bait titles?

    • desktop_user@lemmy.blahaj.zone
      link
      fedilink
      English
      arrow-up
      5
      arrow-down
      8
      ·
      17 days ago

      alternatively: 49% had no significant issues and 81% had no factual errors. It’s not perfect, but it’s cheap, quick and easy.

      • Nalivai@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        arrow-down
        1
        ·
        17 days ago

        It’s easy, it’s quick, and it’s free: pouring river water in your socks.
        Fortunately, there are other possible criteria.

      • fine_sandy_bottom@discuss.tchncs.de
        link
        fedilink
        English
        arrow-up
        3
        arrow-down
        1
        ·
        17 days ago

        If it doesn’t work, then quick, cheap and easy is pointless.

        I’ll make you dinner every night for free but one night a week it will make you ill. Maybe a little maybe a lot.

    • chud37@lemm.ee
      link
      fedilink
      English
      arrow-up
      10
      arrow-down
      1
      ·
      16 days ago

      That’s the core problem though, isn’t it? They are just predictive text machines, not understanding what they are saying. Yet we are treating them as if they were some amazing solution to all our problems.

      • Optional@lemmy.world
        link
        fedilink
        English
        arrow-up
        4
        arrow-down
        1
        ·
        16 days ago

        Well, “we” aren’t, but there’s a hype machine in operation bigger than anything in history, because a few tech bros think they’re going to rule the world.

    • devfuuu@lemmy.world
      link
      fedilink
      English
      arrow-up
      8
      arrow-down
      1
      ·
      edit-2
      17 days ago

      I’ll be here begging for a miserable 1 million to invest in some freaking trains and bicycle paths. Thanks.

  • IninewCrow@lemmy.ca
    link
    fedilink
    English
    arrow-up
    8
    arrow-down
    2
    ·
    17 days ago

    The owners of LLMs don’t care about ‘accurate’ … they care about ‘fast’ and ‘summary’ … and especially ‘profit’ and ‘monetization’.

    As long as it’s quick, delivers instant content and makes money for someone … no one cares about ‘accurate’