• GoldenQuetzal@lemmy.world · 28 days ago

    I’ve been predicting this for a while now and people kept telling me I was wrong. Prepare for dot-com bust two: electric boogaloo.

    • bthest@lemmy.world · edited 27 days ago

      I hope it crashes but what if the market completely embraces feels-based economics and just says that incomprehensible AI slop noise is what customers crave? Maybe CEOs will interpret AI gibberish output in much the same way as ancient high priests made calls by sifting through the entrails of sacrificed animals. Tesla meme stock is evidence that you can defy all known laws of economic theory and still just coast by.

  • Pnut@lemm.ee · 28 days ago

    How much money was invested in reminding us that if the snake starts eating its tail it’s eating itself?

  • Shardikprime@lemmy.world · 28 days ago

    AI, the one currently used for actual productive work by scientific researchers, healthcare specialists, energy development, manufacturing, agriculture and such, is poised to be able to handle about 20% of all human-related work by 2040.

    By 2043, it will be able to handle 100% of any human-related work in those fields. The takeoff is merely 3 years.

    It’s fine if you guys want to live in a little mental bubble where this doesn’t happen

    But I’d suggest you start getting ready for what comes next.

      • Echo Dot@feddit.uk · 27 days ago

        The source is a research paper that the AI community has been going on about for a few days now. I can’t link to it right now because I’m at work, but I’ll update when I can.

        But if you Google for it you will find it as it’s been a fairly hot topic the last few days.

    • Stern@lemmy.world · 28 days ago

      Oh boy, I can’t wait for our currently robust social safety net and already existent universal basic income to allow us to live a life pursuing the things that make us happy, rather than multi-billionaires firing everyone and the world becoming a plutocracy where the average person struggles to get even the bare minimum.

    • dan00@lemm.ee · edited 27 days ago

      I think you should post sources for your claims. This sounds stupidly wrong. Are you American?

  • avattar@lemmy.sdf.org · 28 days ago

    There is a solution to this: make a **perfect** AI-detecting tool. The only way I can think of is adding a tag to every bit of AI-generated data, though it could easily be removed from text, I guess.

    And no, training AI to recognize AI will never work. Also, every model would have to join this, or it won’t work.

    Related XKCD
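The comment above is right that a text-level tag is trivially strippable. A toy sketch (the zero-width marker scheme here is purely hypothetical, not any real standard) shows why: embedding an invisible marker takes one line, and removing it takes one regex.

```python
import re

# Hypothetical invisible marker made of zero-width characters.
AI_TAG = "\u200b\u200d\u200b"

def tag_text(text: str) -> str:
    """Append the invisible marker to AI-generated text."""
    return text + AI_TAG

def strip_tag(text: str) -> str:
    """One regex removes every zero-width character, defeating the tag."""
    return re.sub(r"[\u200b\u200c\u200d\ufeff]", "", text)

tagged = tag_text("This paragraph came from a model.")
print(AI_TAG in tagged)             # marker survives copy-paste
print(AI_TAG in strip_tag(tagged))  # marker trivially removed
```

This is why watermarking proposals tend to focus on statistical token-distribution watermarks rather than literal tags, and even those degrade under paraphrasing.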

    • Etterra@discuss.online · 28 days ago

      LOL you’re suggesting people already doing something unbelievably stupid should do something smart to compensate.

    • bthest@lemmy.world · edited 27 days ago

      Also people won’t be able to pass AI work off as their own if it is labeled as such. Cheating and selling slop is the chief use for AI so any tag or watermark will be removed on the vast majority of stuff.

      There’s also liability. If your AI generates code that’s used to program something important and a lot of people are injured or die, do you really want a tag on the evidence that can be traced back to the company? Or slapped all over the child sex abuse images that their wonderful invention is churning out?

  • Rooty@lemmy.world · 27 days ago

    Ffs, neural networks and LLMs have their place and can be useful, but setting up datacentres that snort up the entire internet indiscriminately to create a glorified chatbot that spews data that may or may not be correct is insane.

    • AbnormalHumanBeing@lemmy.abnormalbeings.space · edited 28 days ago

      That is so much better than their attempt (the “Lord of the Flies for AI” byline). Captures the essence of the problem better than the capitalism cannibalism metaphor does, as well.

      EDIT: That has to have been one of my favourite Freudian ADHD word-confusion typos I accidentally made there

  • GreenKnight23@lemmy.world · 28 days ago

    you realize what this means, right?

    who is causing all the backwashed data? the peasants.

    who is training the models? the peasants.

    who benefits the most from AI? the oligarchy.

    I bet in a year or two, access to AI will be cost prohibitive and will be illegal to host without an expensive license.

    how does this benefit the oligarchy you ask?

    because the oligarchy is the government now, and AI needed the support of the peasants to get infrastructure up and running well enough to run on its own.

    they’re just going to use AI to oppress the peasants and ensure they know their place as slave labor.

    congrats everyone who supported AI by praising and promoting it as a solution, you fucked yourself.

    • ℍ𝕂-𝟞𝟝@sopuli.xyz · 28 days ago

      who is causing all the backwashed data? the peasants.

      No, actually, it’s the shitty slop sites. I mean, they are usually not made by Big Tech, but it’s not your rando Twitter posts either.

      I bet in a year or two, access to AI will be cost prohibitive and will be illegal to host without an expensive license.

      I can run a Chinese model on my sub-1000 EUR GPU right now and generate all the word salad I want. I know, I know, they will make better models. But that’s the point, if they lock away better models, all the slop will be made with the worse models.

      The point is, all this means is that you can’t infinitely train AI on random internet content, and the value of social media as an AI training data source is going down since they are also getting infected with slop. This is actually a good thing, because one way SaaS models could have gotten better than freely hostable ones is by having access to data that is not openly accessible.

      This news means that the data they could have used as a differentiator is a pile of hot shit.

      • GreenKnight23@lemmy.world · 28 days ago

        you’re a peasant and don’t even realize it because you’re not a part of the “club”. same as all those slop sites. they aren’t part of the club and so they’re lowly peasants.

        there were talks of making those Chinese models illegal. not much harder to just say anyone that’s not in the club can’t have one either, and if you’re caught you go to jail.

        • ℍ𝕂-𝟞𝟝@sopuli.xyz · 28 days ago

          Yeah, but how do you even make them illegal? Most of them are fly-by-night places, you can use a 600 EUR GPU to generate slop with a 4 gig model, the worse it is the more it hurts data collection.

          They couldn’t even get rid of phone farms. Cat’s out of the bag.

          • GreenKnight23@lemmy.world · 28 days ago

            did you know that most Texas Instruments software and hardware is illegal to use if you’re not using it for the further advancement of American interests?

            and if you’re caught you can face prison time and possibly even visit a black site if you’re charged under the espionage act.

            does it happen? sure. will you get caught? maybe not… but they don’t go looking for people unless they’re bad people.

            this current administration will target every average citizen that isn’t affiliated with one of the oligarchs before they target the actual bad guys.

    • Michael@slrpnk.net · edited 28 days ago

      We have accessible, open-source AI models - your predictions won’t come to pass.

        • Michael@slrpnk.net · edited 28 days ago

          Fortunately, they can’t arrest everybody using open-source AI models. There are clear efforts to slow the momentum, like geo-tracking of high-end GPUs, and indirect ones like the EU plan to backdoor everything.

          Personally, I see it all as ineffective.

          • GreenKnight23@lemmy.world · 28 days ago

            what about this current administration is effective?

            I think you’re under the misconception that standard legal rules apply with the current government.

            • Michael@slrpnk.net · edited 28 days ago

              There’s a whole world out there - if anybody can effectively run these models, how will they be able to stop everyone?

              The current US administration and its sphere of influence may be tyrannical, but they aren’t omnipresent or omniscient - even if they try to be.

              For example, I highly doubt China can be stopped before it bursts the AI dam. Honestly, it already has - these AI companies are just in denial because they need more capital for their proprietary, inefficient, and centralized models.

  • zephorah@lemm.ee · 28 days ago

    Tragic and funny at the same time. As if consuming all of Reddit hadn’t already irreparably skewed things - and that was still real people doing Reddit things. Now, released into the wild, it’s eating itself. This self-poisoning seemed inevitable.

  • BigMacHole@lemm.ee · 28 days ago

    Oh no! I HOPE us Taxpayers can Bail Out these AI Companies when they go Under! AFTER ALL we CUT my Child’s LIFESAVING MEDICATION so I KNOW we have the Funds to Help these Poor Billionaire CEOS!

    • utopiah@lemmy.world · 28 days ago

      Help these Poor Billionaire CEOS!

      Right, self-made billionaires for whom the way to success was already paved by subsidies. Yes, those surely need help to “build” absolutely pointless non-working projects that are supposed to “save humanity”. That’s great. /s

    • Etterra@discuss.online · 28 days ago

      I can’t afford groceries now! I’m sure all those billionaires will help us out now that they’ve got a little bit more, though.