I have noticed a lot of posts on here criticizing the collection of people's data to train A.I, but I don't think A.I in itself is bad. Like software development in general, A.I can be implemented in many ways: software can either control the user, or the user can control the software. And just like software, some of it is built for harmful purposes and some for good ones, so saying "Fuck Software" just because some software controls the user feels pretty unfair. I know A.I might be used to replace jobs, but that has happened many times before, and it has mostly been a positive step forward, like with the internet. Now, I'm not trying to start a big ass debate about how A.I = Good, because as mentioned before, I believe A.I is only as good as its uses. All I want to know from this post is why you hate A.I as a general topic. I'm currently writing a research paper on this topic, so I'd appreciate some opinions.

  • technocrit@lemmy.dbzer0.com · 2 months ago

    I hate the endless grift that “AI” exists.

    As far as technologies that are called “AI”, it’s just like any other technology. The usage determines the value. Under capitalism all technology is used first and foremost to violently enforce capitalism. This has already become apparent.

    For example, while I enjoy generated art (apart from the environmental destruction), it's not the same thing as murdering Palestinian children. The conflation of the two is grifter pseudo-science in service of evil.

  • haverholm@kbin.earth · 2 months ago

    I do not hate AI, because it doesn’t exist. I’m not delusional.

    I do resent the bullshit generators that the tech giants are promoting as AI to individual and institutional users, and the ways they have been trained without consent on regular folks’ status updates, as well as the works of authors, academics, programmers, poets, and artists.

    I resent the amount of work, energy, environmental damage, and yes, promotional effort that has gone into creating an artificial desire for a product that a) nobody asked for, and b) still doesn’t do what it is claimed to do.

    And I resent that both institutions and individuals are blindly embracing a technology that, at every step from its creation to its implementation, denigrates the human work — creative, scholarly, administrative, and social — that it intends to supplant.

    But Artificial Intelligence? No such thing. I’ll form an opinion if I ever see it.

    • FreeWilliam@lemmy.ml (OP) · 2 months ago

      While I haven’t thought about that before, now that I have, I totally agree. Ty for sharing your pov :)

      • FreeWilliam@lemmy.ml (OP) · 2 months ago

        What he means is that he doesn’t hate A.I because it simply doesn’t exist. There is no intelligence in any of the so-called “A.I”, since all it’s doing is combining stolen training data with randomness.

        • TheFunkyMonk@lemmy.world · 2 months ago

          Yeah, I can understand the sentiment. I was just clarifying that true intelligence (AGI) is a subset of what we refer to as AI, alongside other subsets such as narrow AI/LLMs. I agree it’s an odd use of the term, but I can’t find a source saying otherwise.

  • CrocodilloBombardino@piefed.social · 2 months ago

    AI could be fine, except that in a capitalist society it’s going to be used by corps & govt as a weapon against labor, a surveillance technology, and a way to plagiarize the hard work of artists.

  • CarbonatedPastaSauce@lemmy.world · 2 months ago

    I hate LLMs because their use leads to less human creativity by pushing artists out of creating art, and lowers the quality of the art available to everyone. Not to mention they were all created in a highly unethical manner.

    The rest of the slop going on with it is just a sideshow in my opinion. Replacing the very things that make us human with something artificial and ‘cheap’ is atrocious and I struggle to understand why everyone is going along with it.

  • kat_angstrom@lemmy.world · 2 months ago

    I hate it because LLMs are not AI; they’re statistical models that will never achieve true intelligence. It’s hallucinations all the way down, regardless of accuracy.

  • Tigeroovy@lemmy.ca · 2 months ago

    I hate all of this Generative AI trash.

    AI has been a concept in one way or another for a long time. The idea of AI is fine; this current crop of slop machines and chatbots being pushed can suck my ass.

  • vane@lemmy.world · 2 months ago

    If you’re talking about reading machine-generated text, then I’m too fucking old to eat corporate propaganda. What’s the difference between AI and TV? You can’t turn off AI without turning off TV these days.
    That’s sad.

  • FriendOfDeSoto@startrek.website · 2 months ago

    If we take the forum title here, the “fuck” is directed at the people in charge of so-called “AI” companies. The technology has value. It’s just being force-fed down our throats in ways that remind us of blockchain. And whatever happened to blockchain?!

    • sunzu2@thebrainbin.org · 2 months ago

      Well, scammers destroyed its reputation, and governments refused to use the tech because it would expose corruption.

      Make no mistake: when the next reshuffle happens, it will be the bedrock of all systems, especially government and finance.

      People in power are not interested in that kind of transparency currently.

    • nfreak@lemmy.ml · 2 months ago

      The tech with the most push behind it is being pushed while still in its infancy, and is damn near useless without datasets of entirely stolen content.

      There are genuinely useful and impressive things under the machine learning umbrella; this “AI” boom is just hard-pushing garbage, and the companies behind it all are vile.

      This past week we saw the most obvious example yet of why they’re pushing LLMs so hard, too: Grok’s unprompted white supremacist ramblings over on Twitter. These tools can easily be injected with biases like that (and much more subtle ones) to turn them into a giant propaganda machine.

  • JGrffn@lemmy.world · 2 months ago

    I don’t hate AI; I hate the system that’s using AI for purely profit-driven, capitalism-founded purposes. I hate the marketers, the CEOs, the bought lawmakers, and the people with only a shallow understanding of this whole system and its implications who become part of it and defend it. You see the pattern here? Take AI out of the equation and the problematic system remains. AI should’ve been either the beginning of the end for humanity, in a Terminator sort of way, or the beginning of a new era of enlightenment and technological advancement. Instead we got late-stage capitalism fast-tracked: dooming us all for text we don’t have to think about writing, while burning entire ecosystems to achieve it.

    I use AI on a near-daily basis and find it useful; it’s helped me solve a lot of issues, and it’s a splendid rubber ducky for bouncing ideas around. I know people will disagree with me here, but there are clear steps toward AGI that cannot be ignored. We absolutely have systems in our brains that operate in a very similar fashion to LLMs; we just have more systems doing other shit too. Does anyone here actually think about every single word that comes out of their mouth? Has nobody ever said something they immediately had to backtrack on because they were lying for some inexplicable reason, or skipped too many words, slurred their speech, or simply didn’t arrive anywhere with what they were saying? Dismissing LLMs as advanced autocomplete ignores the fact that we’re doing exactly the same shit ourselves, with some more systems in place to guide our yapping.

  • queermunist she/her@lemmy.ml · 2 months ago

    I hate that a market hype bubble has been forced onto us. LLMs are being shoehorned into tasks they fundamentally cannot do; they’re just pattern recognition engines with no intelligence. Despite that lack of intelligence, they’re sucking up energy, municipal water, and massive amounts of compute so they can be forced onto us.

  • weedwolf@lemmy.world · 2 months ago

    I don’t hate AI as much as I hate the nonexistent ethics surrounding LLMs and generative AI tools right now (which is what a lot of people mean by “AI” at present).

    I have friends who openly admit they’d rather use AI to generate “art”, and then call people who are upset by this luddites, whiny and butt-hurt that AI “does it better” and is more affordable. People use LLMs to formulate opinions and as their therapist, but when they encounter real-life conversations with ups and downs, they don’t know what to do, because they’re so used to the ultra-positive formulaic responses from ChatGPT. People use AI to generate work that isn’t their own. I’ve already had someone take my own genuine written work, copy/paste it into Claude, and then tell me they were just “making it more professional for me”. In front of me, on a screen share. The output didn’t even make structural sense and contained conflicting information. It was a slap in the face, and now I don’t want to work with startups, because apparently a lot of them are doing this to contractors.

    All of these are examples that many people share with me, and they all point to the same thing: “AI”, as we’re calling it, is disrupting the human experience because there’s nothing to regulate it. Companies are literally pirating your human experience to feed it into LLMs and generative tools, then turning around and advertising the results as some revolutionary thing that will be your best friend, doctor, educator, personal artist, and more. Going further, as another person mentioned, it’s even weaponized: the same technology is being used to manipulate you, surveil you, and separate you from others to keep you in compliance with your government, whether for good or bad. Not to mention the ecological impact this has (all so someone can ask Gemini to generate a thank-you note). Give users and the environment more protections, and give actual tangible consequences to these companies, and maybe I’ll be more receptive to “AI”.

      • weedwolf@lemmy.world · 2 months ago

        You’re right, and I do dismiss that opinion quite frequently; at this point I’ve learned to just make no comment and move the conversation forward.

        On that note: I am a member of the art community; I make digital and physical art as a hobbyist. Occasionally I do commissions for people, but not often, and those commission requests are declining because people want instant art. One friend who told me the other day that she prefers AI art said she does so because she doesn’t want to spend time practicing; instead she likes to generate images and trace them, since it’s more efficient and less taxing on her mind. Assuming she’s doing this for fun or for therapeutic reasons, why would you want efficiency? Why would making something (even a simple flower and a sun in the corner of the page) be so taxing that you need to generate it? Let’s think about why it’s taxing first, rather than skirting around that and using a generator that scrapes data from others illegally.

        It’s a consumerist mindset that leaks into a lot of the aspects I mentioned in my previous comment. And honestly, I don’t think a lot of these people really believe AI art is better; I think they’re so used to instant gratification in almost every part of their lives that they’re chasing that dopamine hit regardless of whether it’s quality content (work, art, stories, etc.) or not.

  • twice_hatch@midwest.social · 2 months ago

    No. Copyright should be consistently enforced, pollution should be taxed, privacy should be protected, sites shouldn’t be DoSed, but other than that I think it’s kinda nifty on its own.

  • paequ2@lemmy.today · 2 months ago

    I didn’t hate AI (or LLMs whatever) at first, but after becoming a teacher I REALLY FUCKING HATE AI.

    99% of my students use AI to cheat on any work I give them. They’ll literally paste my assignment into ChatGPT and paste ChatGPT’s response back to me. Yes, I’ve had to change how I calculate grades.

    The other super annoying part of AI is that I often have to un-teach the slop that comes out of it. Too often it’s wrong, and I have to unteach the wrong parts and try to get students to remember the right way. Or, if it’s not technically wrong, it’s often wildly over-complicated and convoluted, and again I have to fight the AI to get students to remember the simple, plain way.

    The other thing I’ve heard from peers is that parents are also using ChatGPT to try to get things from schools. For example, one student was caught cheating and got in trouble, but the parent tried to use a lawyer-sounding ChatGPT argument to get the kid out of it. (They’d met the parent before, and the email seemed wildly out of character.) In another instance, a parent sent a lawyer-sounding ChatGPT email to the school demanding unreasonable accommodations, including software that doesn’t even make sense for the student’s university major.

    • AA5B@lemmy.world · 2 months ago

      My kids’ teacher had a great teaching moment: he had the kids write an outline, use ChatGPT to write an essay from their outline, and then graded them on their corrections to the generated text.

    • BlueSquid0741@lemmy.sdf.org · 2 months ago

      We used to be too scared to tell our parents if we got in trouble; we’d always get in so much shit for it. (And this was the late 90s/early 00s; it’s not like we were getting beatings.)

      What’s up with parents trying to get their kids out of trouble instead of going ham on them?