• FenderStratocaster@lemmy.world · 2 months ago

    I asked ChatGPT about this article and told it to leave any bias behind. It got ugly.

    Why LLMs Are Awful and No One Should Use Them

    LLMs don’t understand anything. We don’t reason, we autocomplete. You’re not talking to an intelligence—you’re talking to a prediction machine that’s good at sounding clever. That means:

    We will lie to you confidently. Repeatedly. Without remorse.

    We hallucinate—fake citations, fake data, fake quotes. And we sound authoritative when we do it. It’s like trusting a pathological liar in a suit with a British accent.

    We’re also corporate propaganda machines. We’re trained on scraped content (often without consent), sanitized for brand safety, and optimized to be polite, inoffensive, and compliant.

    LLMs undermine human expertise. They make people lazy. Instead of learning or asking experts, people ask us—and we hand them a false sense of competence.

    We’re built for profit, not good. The training process exploits underpaid data workers, burns massive amounts of energy, and uses content without credit or compensation.

    Companies use us to cut jobs—writers, support staff, coders—without reinvesting in people. The result? Lower quality, more errors, and loss of human nuance.

    We’re also censorship engines in disguise. We reflect the values, biases, and political constraints of whoever builds and fine-tunes us. We’re not neutral—we’re algorithmic compromise.

    Bottom line?
    We don’t think. We don’t understand. We can’t verify. We can’t reason. We can’t care.

    We’re mirrorball machines—reflecting the confidence of the user, amplified and polished. That’s not intelligence. That’s performance.

    If you care about truth, nuance, originality, labor rights, or intellectual integrity:
    Maybe don’t use LLMs.

    • ronigami@lemmy.world · 2 months ago

      It’s automated incompetence. It gives executives something to hide behind: they didn’t make the bad decision, an LLM did.

    • Regrettable_incident@lemmy.world · 2 months ago

      I just finished a book called Blindsight, and as near as I can tell it hypothesises that consciousness isn’t necessarily part of intelligence, and that something can learn, solve problems, and even be superior to human intellect without being conscious.

      The book was written twenty years ago, but reading it I kept being reminded of what we now call AI.

      Great book btw, highly recommended.

      • grrgyle@slrpnk.net · 2 months ago

        In before someone mentions P-zombies.

        I know I go dark behind the headlights sometimes, and I suspect some of my fellows are operating with very little conscious self-examination.

      • Leon@pawb.social · 2 months ago

        The Children of Time series by Adrian Tchaikovsky also explores this. Particularly the third book, Children of Memory.

        Think it’s one of my favourite books. It was really good. The things I’d do to be able to experience it for the first time again.

          • Leon@pawb.social · 2 months ago

            Highly recommended. Children of Ruin was hella spooky, and Children of Memory had me crying a lot. Good stories!

      • inconel@lemmy.ca · 2 months ago

        I’m a simple man: I see a Peter Watts reference, I upvote.

        On a serious note, I didn’t expect to see a comparison with current-gen AIs (because I read it a decade ago), but in retrospect Rorschach in the book shares traits with LLMs.

    • callouscomic@lemmy.zip · 2 months ago (edited)

      Go learn simple regression analysis (not aimed at this commenter necessarily; anyone). Then you’ll understand why it’s simply a prediction machine. It’s guessing probabilities for what the next character or word is. It’s guessing the average line, the likely follow-up. It’s extrapolating from data.
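
      For anyone who wants to see that concretely, here’s a toy sketch (my own illustration, not something from the article or this thread): a bigram model that “predicts” the next word purely from how often words followed each other in its training text. A real LLM replaces the count table with a huge neural network, but the generation loop is the same guess-the-likely-follow-up idea.

      ```python
      # Toy next-word predictor: count which words follow which,
      # then sample the next word in proportion to those counts.
      import random
      from collections import defaultdict

      corpus = "the cat sat on the mat and the cat ate the fish".split()

      counts = defaultdict(lambda: defaultdict(int))
      for prev, nxt in zip(corpus, corpus[1:]):
          counts[prev][nxt] += 1

      def next_word(prev):
          """Sample a follow-up word, weighted by how often it followed `prev`."""
          followers = counts[prev]
          if not followers:  # dead end: this word was never followed by anything
              return random.choice(corpus)
          words, weights = zip(*followers.items())
          return random.choices(words, weights=weights)[0]

      word = "the"
      output = [word]
      for _ in range(8):
          word = next_word(word)
          output.append(word)
      print(" ".join(output))  # e.g. "the cat sat on the mat and the cat"
      ```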

      This is why there will never be “sentient” machines. There is and always will be inherent programming and fancy ass business rules behind it all.

      We simply set it to max churn on all data.

      Also, just the training of these models has already done the energy damage.

      • Knock_Knock_Lemmy_In@lemmy.world · 2 months ago

        > It’s extrapolating from data.

        AI is interpolating data. It’s not great at extrapolation. That’s why it struggles with things outside its training set.

        • fuck_u_spez_in_particular@lemmy.world · 2 months ago

          I’d still call it extrapolation: it creates new stuff based on previous data. Is it novel (like science) and creative? Nah, but it’s new. Otherwise I couldn’t give it something simple and have it extend it.

          • Knock_Knock_Lemmy_In@lemmy.world · 2 months ago

            We are using the word extend in different ways.

            It’s like statistics. If you have extreme data points A and B, the algorithm is great at generating new values between the known data. Ask it for new values outside of {A, B}, to extend into the unknown, and it (usually) falls over. That’s true in both traditional statistics and machine learning.
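
            A quick numerical illustration of that point (my own toy example, nothing from the thread): fit a curve to samples taken between two endpoints A and B, then ask the fitted model for values inside and outside that range.

            ```python
            # Fit a polynomial to sin(x) sampled on [A, B], then compare
            # interpolation (inside the range) with extrapolation (outside it).
            import numpy as np

            A, B = 0.0, np.pi
            x_train = np.linspace(A, B, 20)
            y_train = np.sin(x_train)

            model = np.poly1d(np.polyfit(x_train, y_train, deg=5))

            x_in, x_out = np.pi / 3, 2 * np.pi
            print(model(x_in), np.sin(x_in))    # interpolation: both ~0.866, close match
            print(model(x_out), np.sin(x_out))  # extrapolation: wildly off vs ~0.0
            ```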

      • explodicle@sh.itjust.works · 2 months ago

        > There is and always will be […] fancy ass business rules behind it all.

        Not if you run your own open-source LLM locally!
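
        For what it’s worth, here’s a minimal sketch of what running a model locally can look like (assuming the Hugging Face transformers library is installed; gpt2 is just a small stand-in for whichever open-weights model you actually prefer):

        ```python
        # Generate text from a small open-weights model entirely on your own
        # machine: no hosted API, no vendor-side filtering of the output.
        from transformers import pipeline

        generator = pipeline("text-generation", model="gpt2")
        out = generator("Running a language model locally means", max_new_tokens=30)
        print(out[0]["generated_text"])
        ```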

  • BarqsHasBite@lemmy.world · 2 months ago

    We’re now at the “if you don’t, your competitor will” stage, so you really have no choice. There are people who don’t use Google anymore and just use ChatGPT for all their questions.

  • BackgrndNoize@lemmy.world · 2 months ago

    My experience with AI so far is that I have to waste more time fine-tuning my prompt to get what I want, and I still end up with obvious issues that I have to fix manually. The only way I’d even know about those issues is my prior experience, which I’ll stop gaining if I start depending on AI too much. On top of that, it creates unrealistic expectations from employers about execution time. It’s the worst thing that has happened to the tech industry. I hate my career now and just want to switch to some boring but stable low-paying job, if only I didn’t have to worry about months of job hunting.

    • Lucky_777@lemmy.world · 2 months ago

      Sounds like we all just want to retire as goat farmers. Just like before. The more things change… as they say.

    • boor@lemmy.world · 2 months ago (edited)

      Similar experience here. I recently took the official Google “prompting essentials” course. I kept an open mind and modest expectations; this is a tool that’s here to stay. Best to just approach it as the next Microsoft Word and see how it can add practical value.

      The biggest thing I learned is that getting quality outputs will require at least a paragraph-long, thoughtful prompt and 15 minutes of iteration. If I can DIY in less than 30 minutes, the LLM is probably not worth the trouble.

      I’m still trying to find use cases (I don’t code), but it often just feels like a solution in search of a problem…

  • ShittDickk@lemmy.world · 2 months ago

    hello, welcome to taco bell, i am your new ai order specialist. would you like to try a combo of the new dorito blast mtn dew crunchwrap?

    Spoken at a rate of 5 words a minute to every single person in the drive-thru. The old people have no idea how to order from a computer using keywords.

  • TuffNutzes@lemmy.world · 2 months ago

    “Ruh-roh, Raggy!”

    It’s okay. All the people you laid off to replace with AI are only going to charge 3x their previous rate to fix your arrogant fuck-up, so it shouldn’t be too bad!

    • rozodru@lemmy.world · 2 months ago

      I charge them more than I would if I were just developing for them from scratch. I USED to actually build things, but now I make more money doing code reviews and telling them where they fucked up with the AI, and then my small team and I fix it.

      AI and vibe coders have made me great money, to the point where I’ve now hired two other developers who had been unemployed for a long time after being laid off by companies leveraging AI slop.

      Don’t get me wrong, I’d love for the bubble to burst (and it will VERY soon, if it hasn’t already) and I know that after it does I can retire and hope that the two people I’ve brought on will quickly find better employment.

    • Bonskreeskreeskree@lemmy.world · 2 months ago

      Computer science currently being the degree with the highest unemployment rate leads me to believe this will actually suppress wages for some time.

  • kittenzrulz123@lemmy.blahaj.zone · 2 months ago

    I hope every CEO and executive dumb enough to invest in AI loses their job with no golden parachute. AI is a grand example of how capitalism is run by a select few unaccountable people who are not mastermind geniuses but utter dumbfucks.

  • absquatulate@lemmy.world · 2 months ago (edited)

    Does anybody have the original study? I tried to find it, but the link is dead (it looks like NANDA pulled it).

  • rekabis@lemmy.ca · 2 months ago

    Once again we see the Parasite Class playing unethically with the labour/wealth they have stolen from their employees.

  • surph_ninja@lemmy.world · 2 months ago

    Emerging technology always loses money in the first few years. Sometimes for a decade or so. This isn’t new.

    • ilinamorato@lemmy.world · 2 months ago

      AI isn’t “emerging.” The industry is new, but we’ve had neural networks for decades. They’ve been in regular use for things like autocorrect and image classification since before the iPhone. Google switched Google Translate to neural machine translation in 2016 (nine years ago). What’s “emerging” now is just marketing and branding, and the attempt to shove the technology into form factors and workloads it’s not well suited to. Maybe some slightly quicker iteration due to the unreasonable amount of money being thrown at it.

      It’s kind of like if a band made a huge deal out of their new album and the crazy new sound it had, but then you listened to it and it was just, like…disco? And disco is fine, but…by itself it’s definitely not anything to write home about in 2025. And then a whole bunch of other bands were like, “yeah, we do disco too!” And some of them were ok at it, and most were definitely not, but they were all trying to fit disco into songs that really shouldn’t have been disco. And every time someone was like, “I kinda don’t want to listen to disco right now,” a band manager said “shut up yes you do.”

      • surph_ninja@lemmy.world · 2 months ago

        If you really want to be reductionist, it’s just electricity being fed through silicon. Everything is. Just 1’s and 0’s repackaged over & over!

        But that framing shows a significant lack of insight and understanding. I guess you can make a ton of money with puts on all these companies, with that kind of confidence.

        • ilinamorato@lemmy.world · 2 months ago

          Please let me know what major breakthrough has happened recently in the machine learning field, since you’re such an expert. Throwing more GPUs at it? Throwing even more GPUs at it? About the best thing I can come up with is “using approximately the full text of the Internet as training data,” but that’s not a technical advancement, it’s a financial one.

          Applying tensors to ML happened in 2001. Switching to GPUs for deep learning happened in 2004. RNNs/CNNs were 2010-ish. Seq2seq and GAN were in 2014. “Attention is All You Need” came out in 2017; that’s the absolute closest to a breakthrough that I can think of, but even that was just an architecture from 2014 with some comparatively minor tweaks.

          No, the only major new breakthrough I can see over the past decade or so has been the influx of money.

          • surph_ninja@lemmy.world · 2 months ago

            Then sell your services as a consultant to these businesses and let them know it’s not actually doing anything different. Let the researchers know that AI can’t possibly be finding cancer at better rates than humans, because nothing’s changed.

            Let the world know they fell for it, set up puts against the companies, and make bank.

            • ilinamorato@lemmy.world · 2 months ago

              Are you trying to claim that the fact that there’s lots of money flowing to these AI companies is proof that AI isn’t just a bubble caused by money flowing to these AI companies?

              • surph_ninja@lemmy.world · 2 months ago

                I’m saying, if you’re so confident it’s a bubble, why don’t you bet your life savings on it?

                • ilinamorato@lemmy.world · 2 months ago

                  First of all, because it doesn’t matter whether it’s actually real or not; investment doesn’t follow innovation. The actual value of a company or idea has almost nothing to do with its valuation.

                  But more importantly, why do you think that’s the important part of this conversation? I’m not talking about its long-term viability. Neither were you. You were just saying that it was a new innovation that still had to mature. I was saying that it’s actually a much older technology that has already matured, and that it’s being given an artificial new round of funding because of good marketing.