• oakey66@lemmy.world

    AGI is not in reach. We need to stop this incessant parroting from tech companies. LLMs are stochastic parrots. They guess the next word. There’s no thought or reasoning. They don’t understand inputs. They mimic human speech. They’re not presenting anything meaningful.

    • raspberriesareyummy@lemmy.world

      I feel like I have found a lone voice of sanity in a jungle of brainless fanpeople sucking up the snake oil and pretending LLMs are AI. A simple control loop is closer to AI than a stochastic parrot, as you correctly put it.

      • Opinionhaver@feddit.uk

        pretending LLMs are AI

        LLMs are AI. There’s a common misconception about what ‘AI’ actually means. Many people equate AI with the advanced, human-like intelligence depicted in sci-fi - like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, and GERTY. These systems represent a type of AI called AGI (Artificial General Intelligence), designed to perform a wide range of tasks and demonstrate a form of general intelligence similar to humans.

        However, AI itself doesn’t imply general intelligence. Even something as simple as a chess-playing robot qualifies as AI. Although it’s a narrow AI, excelling in just one task, it still fits within the AI category. So, AI is a very broad term that covers everything from highly specialized systems to the type of advanced, adaptable intelligence that we often imagine. Think of it like the term ‘plants,’ which includes everything from grass to towering redwoods - each different, but all fitting within the same category.

        • raspberriesareyummy@lemmy.world

          Here we go… Fanperson explaining the world to the dumb lost sheep. Thank you so much for stepping down from your high horse to try and educate a simple person. /s

          • Opinionhaver@feddit.uk

            How’s insulting the people respectfully disagreeing with you working out so far? That ad hominem was completely uncalled for.

            • raspberriesareyummy@lemmy.world

              “Fanperson” is an insult now? Cry me a river, snowflake. Also, you weren’t disagreeing; you were explaining something to someone you perceived as less knowledgeable than you, while demonstrating that you have no grasp of the core difference between stochastics and AI.

          • Opinionhaver@feddit.uk

            It’s not. Bubble sort is a purely deterministic algorithm with no learning or intelligence involved.

              • Opinionhaver@feddit.uk

                Bubble sort is just a basic set of steps for sorting numbers - it doesn’t make choices or adapt. A chess engine, on the other hand, looks at different possible moves, evaluates which one is best, and adjusts based on the opponent’s play. It actively searches through options and makes decisions, while bubble sort just follows the same repetitive process no matter what. That’s a huge difference.
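
                To make that concrete, here’s a toy sketch of the difference (illustrative Python only, not a real chess engine; all the names here are made up):

                ```python
                # Bubble sort: the same fixed steps on any input - no options weighed.
                def bubble_sort(xs):
                    xs = list(xs)
                    for i in range(len(xs)):
                        for j in range(len(xs) - 1 - i):
                            if xs[j] > xs[j + 1]:
                                xs[j], xs[j + 1] = xs[j + 1], xs[j]
                    return xs

                # Minimax: enumerate options, evaluate outcomes, pick the best one.
                def minimax(state, depth, maximizing, moves, apply_move, evaluate):
                    options = moves(state)
                    if depth == 0 or not options:
                        return evaluate(state), None
                    best = (float("-inf"), None) if maximizing else (float("inf"), None)
                    for move in options:
                        score, _ = minimax(apply_move(state, move), depth - 1,
                                           not maximizing, moves, apply_move, evaluate)
                        if (maximizing and score > best[0]) or (not maximizing and score < best[0]):
                            best = (score, move)
                    return best

                # Toy "game": each move adds a number to a running total.
                print(minimax(0, 2, True,
                              moves=lambda s: [1, 2, 3],
                              apply_move=lambda s, m: s + m,
                              evaluate=lambda s: s))
                ```

                Bubble sort has nothing corresponding to that evaluate-and-choose loop.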

                • jenesaisquoi@feddit.org

                  Your argument can be reduced to saying that if the algorithm is composed of many steps, it is AI, and if not, it isn’t.

                  A chess engine decides nothing. It understands nothing. It’s just an algorithm.

      • SinningStromgald@lemmy.world

        There are at least three of us.

        I am worried what happens when the bubble finally pops because shit always rolls downhill and most of us are at the bottom of the hill.

        • raspberriesareyummy@lemmy.world

          Not sure if we need that particular bubble to pop for us to be drowned in a sea of shit, looking at the state of the world right now :( But Silicon Valley seems to be at the core of this clusterfuck, as if all the villains come from there or flock there…

    • Jesus_666@lemmy.world

      That undersells them slightly.

      LLMs are powerful tools for generating text that looks like something. Need something rephrased in a different style? They’re good at that. Need something summarized? They can do that, too. Need a question answered? No can do.

      LLMs can’t generate answers to questions. They can only generate text that looks like answers to questions. Often enough that answer is even correct, though usually suboptimal. But they’ll also happily generate complete bullshit answers, and to them there’s no difference between those and a real answer.
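
      The mechanism makes this clearer. Here’s a toy sketch of the generation loop (a real LLM uses a neural network over tokens rather than a lookup table, but the loop has the same shape):

      ```python
      from collections import Counter, defaultdict

      # "Train" a toy next-word table from a tiny corpus.
      corpus = "the cat sat on the mat . the dog sat on the rug .".split()
      following = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          following[prev][nxt] += 1

      def generate(prompt, n_words=8):
          words = prompt.split()
          for _ in range(n_words):
              candidates = following[words[-1]]
              if not candidates:
                  break
              # Greedily pick the likeliest continuation - fluency, not truth.
              words.append(candidates.most_common(1)[0][0])
          return " ".join(words)

      print(generate("the"))
      ```

      Nothing in that loop checks whether the output is true; it only models what text tends to follow what.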

      They’re text transformers marketed as general problem solvers because a) the market for text transformers isn’t that big and b) general problem solvers are what AI researchers have always been trying to create. They have their use cases, but certainly not ones worth the kind of spending they get.

    • Opinionhaver@feddit.uk

      Why is AGI not in reach? What insight do you have on the matter that you can so confidently make an absolute statement like that?

    • biggerbogboy@sh.itjust.works

      My favourite analogy for LLMs is autocorrect: it just guesses, it gets stuff wrong, and it’s constantly being retrained to recognise your preferences - such as learning to stop correcting fuck to duck, for instance.

      And it’s funny and sad how some people think these LLMs are their friends. Like, no: it’s a colossally sized autocorrect system that you cannot comprehend. It has no consciousness, it lacks any thought, it just predicts from a prompt using numerical weights and a neural network.
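
      As a toy illustration of that comparison (made-up weights, nothing like a real keyboard’s model, but the same principle):

      ```python
      # Corrections are just weighted guesses; user feedback shifts the weights.
      corrections = {"fuck": {"duck": 5, "fuck": 1}}

      def autocorrect(word):
          options = corrections.get(word)
          return max(options, key=options.get) if options else word

      def user_rejected_correction(typed):
          # The user kept what they typed: bump its weight for next time.
          corrections.setdefault(typed, {}).setdefault(typed, 0)
          corrections[typed][typed] += 3

      print(autocorrect("fuck"))   # "duck" at first
      user_rejected_correction("fuck")
      user_rejected_correction("fuck")
      print(autocorrect("fuck"))   # "fuck" once your preference outweighs the default
      ```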

      • zbyte64@awful.systems

        Billionaires are often referred to as dragons because they hoard wealth. A guillotine that could know the difference and decide to only harm billionaires would be a technological marvel.

  • ilinamorato@lemmy.world

    I’m pretty sure the science says it’s more like 20-30. I know personally that if I try to work more than about 40-ish hours in a week, the time comes out of the following week without me even trying. A task that took two hours in a 45-hour “crunch” week will end up taking three when I don’t have to crunch. And if I keep up the crunch for too long, I start making a lot of mistakes.

  • billwashere@lemmy.world

    I’m really getting sick and tired of these rich fuckers saying shit like this.

    1. we are nowhere close to AGI given this current technology.

    2. working 50% longer is not going to make a bit of difference for AGI

    3. and even if it would matter, hire 50% more people

    The only thing this is going to accomplish is likely make him wealthier. So fuck him.

      • billwashere@lemmy.world

        They are very impressive compared to where we were 20 years ago - hell, even 5 years ago. The first time I played with ChatGPT I was absolutely floored. But after playing with a lot of them, and even building a few RAG (Retrieval-Augmented Generation) systems, I don’t think we’re really that close, and in my opinion this is not a useful path towards a true AGI. Don’t get me wrong, the tool is extremely useful, and to most people it would likely pass a basic Turing Test. But LLMs are sophisticated pattern-recognition systems trained on vast amounts of text data that predict the most likely next word or token in a sequence. That’s really all they do. They are really good at predicting the next word. While they demonstrate impressive language capabilities, they lack several fundamental components necessary for an AGI:

        - no true understanding
        - no real ability to engage with the real world
        - no real ability to learn in real time
        - no ability to take in more than one type of input at a time

        The simplest way I can explain the difference is that you will never have an LLM just come up with something on its own. It’s always just a response to a prompt.
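
        For anyone wondering about the RAG setups mentioned above, they’re roughly this shape (a toy sketch: `call_llm` is a hypothetical stand-in for the model call, and real systems use vector embeddings rather than word overlap):

        ```python
        # Retrieve the most relevant snippets, then hand them to the
        # model inside the prompt. The model still just predicts tokens.
        def similarity(query, doc):
            q, d = set(query.lower().split()), set(doc.lower().split())
            return len(q & d) / (len(q) or 1)

        def retrieve(query, docs, k=2):
            return sorted(docs, key=lambda d: similarity(query, d), reverse=True)[:k]

        def answer(query, docs, call_llm):
            context = "\n".join(retrieve(query, docs))
            prompt = f"Using only this context:\n{context}\n\nQuestion: {query}"
            return call_llm(prompt)
        ```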

        • helopigs@lemmy.world

          Sorry for the late reply - work is consuming everything :)

          I suspect that we are (like LLMs) mostly “sophisticated pattern recognition systems trained on vast amounts of data.”

          Considering the claim that LLMs have “no true understanding”, I think there isn’t a definition of “true understanding” that would cleanly separate humans and LLMs. It seems clear that LLMs are able to extract the information contained within language, and use that information to answer questions and inform decisions (with adequately tooled agents). I think that acquiring and using information is what’s relevant, and that’s solved.

          Engaging with the real world is mostly a matter of tooling. Real-time learning and more comprehensive multi-modal architectures are just iterations on current systems.

          I think it’s quite relevant that the Turing Test has essentially been passed by machines. It’s our instinct to gatekeep intellect, moving the goalposts as they’re passed in order to affirm our relevance and worth, but LLMs have our intellectual essence, and will continue to improve rapidly while we stagnate.

          There is still progress to be made before we’re obsolete, but I think it will be just a few years, and then it’s just a question of cost efficiency.

          Anyways, we’ll see! Thanks for the thoughtful reply

    • JackFrostNCola@lemmy.world

      Or option 4) stay as you are and you will just achieve it in due time rather than in a 50% shorter timeframe?
      Edit: 25% shorter? I don’t know, maths isn’t my strong suit and I’m drunk.

    • graphene@lemm.ee

      Increasing working hours decreases actual labor done per hour. A person working 40 hours per week will more often than not achieve more than someone working 70.


      “in Britain during the First World War, there had been a munitions factory that made people work seven days a week. When they cut back to six days, they found, the factory produced more overall.”

      “In 1920s Britain, W. G. Kellogg—the manufacturer of cereals—cut his staff from an eight-hour day to a six-hour day, and workplace accidents (a good measure of attention) fell by 41 percent. In 2019 in Japan, Microsoft moved to a four-day week, and they reported a 40 percent improvement in productivity. In Gothenberg in Sweden around the same time, a care home for elderly people went from an eight-hour day to a six-hour day with no loss of pay, and as a result, their workers slept more, experienced less stress, and took less time off sick. In the same city, Toyota cut two hours per day off the workweek, and it turned out their mechanics produced 114 percent of what they had before, and profits went up by 25 percent. All this suggests that when people work less, their focus significantly improves. Andrew told me we have to take on the logic that more work is always better work. “There’s a time for work, and there’s a time for not having work,” he said, but today, for most people, “the problem is that we don’t have time. Time, and reflection, and a bit of rest to help us make better decisions. So, just by creating that opportunity, the quality of what I do, of what the staff does, improves.””

      • Hari, J. (2022). Stolen Focus: Why You Can’t Pay Attention–and How to Think Deeply Again. Crown.

      In 1920s Britain, W. G. Kellogg: A. Coote et al., The Case for a Four Day Week (London: Polity, 2021), 6.

      In 2019 in Japan, Microsoft moved to a four-day week: K. Paul, “Microsoft Japan Tested a Four-Day Work Week and Productivity Jumped by 40%,” Guardian, November 4, 2019; and Coote et al., Case for a Four Day Week, 89.

      In Gothenberg in Sweden around the same time: Coote et al., Case for a Four Day Week, 68–71.

      In the same city, Toyota cut two hours per day: Ibid., 17–18.
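
      Running the Toyota numbers quoted above makes the per-hour effect concrete:

      ```python
      # 2 fewer hours per day, 114% of the old output:
      old_hours, new_hours = 8, 6
      output_ratio = 1.14
      per_hour_gain = output_ratio / (new_hours / old_hours)
      print(f"{per_hour_gain:.2f}x output per hour")  # ~1.52x, i.e. ~52% more per hour
      ```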


      The real point of increasing working hours is to make your job consume your life.

  • CosmoNova@lemmy.world

    Is Google in the cloning business? Because I could swear that’s Zack Freedman from the YouTube 3D printing channel. He even wears the heads-up display (YouTube link). Sorry for being off-topic, but who cares about what tech CEOs say about AGI anyway?

  • axh@lemmy.world

    Yup… Work your ass off, guys, so we can fire you sooner! Great deal.