• theunknownmuncher@lemmy.world · +39/−22 · 28 days ago

    the fact that it is theft

    There are LLMs trained using fully open datasets that do not contain proprietary material… (CommonCorpus dataset, OLMo)

    the fact that it is environmentally harmful

    There are LLMs trained with minimal power (typically the same ones as above, since these projects cannot afford as many resources), and local LLMs use significantly less power than a toaster or microwave…

    the fact that it cuts back on critical, active thought

    This is a use-case problem. LLMs aren’t suitable for critical thinking or decision-making tasks, so if it’s cutting back on your “critical, active thought” you’re just using it wrong anyway…

    The OOP genuinely doesn’t know what they’re talking about and is just reacting to sensationalized rage bait on the internet lmao

    • csh83669@programming.dev · +20/−4 · 28 days ago

      Saying it uses less power than a toaster is not saying much. Yes, it uses less power than a thing that literally turns electricity into pure heat… but that’s sort of a requirement for toast. That’s still a LOT of electricity. And it’s not required. People don’t need to burn down a rainforest to summarize a meeting. Just use your earballs.

      • masterspace@lemmy.ca · +3 · 27 days ago

        Yeah man, guess how much energy it would take to draw the 4K graphics on your phone screen in 1995?

      • theunknownmuncher@lemmy.world · +9/−8 · 27 days ago

        Saying it uses less power than a toaster is not saying much

        Yeah, but we’re talking a fraction of 1%. A toaster uses 800–1500 watts for minutes; a local LLM uses <300 watts for seconds. I toast something almost every day. I’d need to prompt a local LLM literally hundreds of times per day for AI to have a higher impact on the environment than my breakfast, considering the toasting alone. I make probably around a dozen prompts per week on average.
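The comparison above is easy to sanity-check with back-of-the-envelope arithmetic. The wattages and durations below are the comment's rough estimates, not measurements:

```python
# Rough energy comparison: one toasting session vs. one local LLM prompt.
# All figures are the comment's estimates, not measurements.

toaster_watts = 1000      # mid-range of the 800-1500 W estimate
toaster_minutes = 3       # one toasting session
llm_watts = 300           # stated upper bound for a local LLM
llm_seconds = 10          # one prompt's worth of inference

toaster_wh = toaster_watts * (toaster_minutes * 60) / 3600  # watt-hours
llm_wh = llm_watts * llm_seconds / 3600

prompts_per_toast = toaster_wh / llm_wh
print(f"one toast  ≈ {toaster_wh:.1f} Wh")
print(f"one prompt ≈ {llm_wh:.2f} Wh")
print(f"prompts per toast ≈ {prompts_per_toast:.0f}")
```

With these assumed numbers one toasting session costs about as much energy as 60 local prompts, which is where the "hundreds of prompts per day" framing comes from.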

        That’s still a LOT of electricity.

        That’s exactly my point, thanks. All kinds of appliances use loads more power than AI. We run them without thinking twice, and there’s no anti-toaster movement on the internet claiming there is no ethical toast and you’re an asshole for making toast without exception. If a toaster uses a ton of electricity and is acceptable, while a local LLM uses less than 1% of that, then there is no argument to be made against local LLMs on the basis of electricity use.

        Your argument just doesn’t hold up and could be applied to literally anything that isn’t “required”. Toast isn’t required, you just want it. People could just stop playing video games to save more electricity, video games aren’t required. People could stop using social media to save more electricity, TikTok and YouTube’s servers aren’t required.

        People don’t need to burn down a rainforest to summarize a meeting.

        Strawman

        • NoiseColor @lemmy.world · +9/−3 · 27 days ago

          That’s nothing. People aren’t required to eat so much meat, or even eat so much food.

          I also don’t like this energy argument from the anti-AI side, when everything else in our lives already consumes so much.

        • wizardbeard@lemmy.dbzer0.com · +3/−1 · 27 days ago

          I won’t call your point a strawman, but you’re ignoring the actual parts of LLMs that have high resource costs in order to push a narrative that doesn’t reflect the full picture. These discussions need to include the initial costs to gather the dataset and most importantly for training the model.

          Sure, post-training energy costs aren’t worth worrying about, but I don’t think people who are aware of how LLMs work were worried about that part.

          It’s also ignoring the absurd fucking AI datacenters that are being built with more methane turbines than they were approved for, and without any of the legally required pollution capture technology on the stacks. At least one of these datacenters is already measurably causing illness in the surrounding area.

          These aren’t abstract environmental damages from energy use that could potentially come from green power sources, and these aren’t “fraction of a toast” energy costs caused only by people running queries, either.

          • theunknownmuncher@lemmy.world · +3 · 27 days ago

            Nope, I’m not ignoring them, but the post is specifically about exceptions. The OOP claims there are no exceptions and there is no ethical generative AI, which is false. Your comment only applies to the majority of massive LLMs hosted by massive corporations.

            The CommonCorpus dataset is less than 8 TB, so it fits on a single hard drive, not a data center, and contains 2 trillion tokens, which is roughly the number of tokens that small local LLMs are typically trained on (OLMo 2 7B and 13B were trained on 5 trillion tokens).
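Those two figures are mutually consistent: at a rough average of ~4 bytes of raw text per token (an assumption; real tokenizer ratios vary by language and tokenizer), 2 trillion tokens works out to about 8 TB:

```python
# Sanity check: does 2 trillion tokens plausibly fit in ~8 TB?
# Assumes ~4 bytes of raw text per token, a common rough average
# for English with BPE-style tokenizers (actual ratios vary).

tokens = 2e12
bytes_per_token = 4  # assumption, not a property of CommonCorpus
dataset_bytes = tokens * bytes_per_token
print(f"~{dataset_bytes / 1e12:.0f} TB")
```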

            These local LLMs don’t require a massive data center for training, and their electricity use and environmental impact are nowhere near the big corporate models’. The energy cost of training is real, but it’s nothing like GPT-4’s, and it’s only a one-time cost anyway.

            So, the OOP is wrong, there is ethical generative AI, trained only on data available in the public domain, and without a high environmental impact.

    • hpx9140@fedia.io · +15/−6 · 28 days ago

      Are you implying the edge cases you presented represent the majority of actual use?

      • theunknownmuncher@lemmy.world · +21/−9 · 27 days ago

        No, and that’s irrelevant. Their post is explicitly not about the majority, but about exceptions/edge cases.

        I am responding to what they posted (I even quoted them), showing that the position that “there is no ethical use for generative AI” and that there are no exceptions is provably false.

        I didn’t think it needed to be said because it’s not relevant to this discussion, but: the majority of AI sucks on all fronts. It’s bad for intellectual property, it’s bad for the environment, it’s bad for privacy, it’s bad for people’s brains, and it’s bad at what it’s used for.

        All of these problems are not inherent to AI itself, and instead are problems with the massive short-term-profit-seeking corporations flush with unimaginable amounts of investor cash (read: unimaginable expectations and promises that they can’t meet) that control the majority of AI. Once again capitalism is the real culprit, and fools like the OOP will do these strawman mental gymnastics and spread misinformation to defend capitalism at all costs.

        • hpx9140@fedia.io · +6/−1 · 27 days ago

          I can get behind this clarification, so thanks for that.

          I’m a realist. To that end, relevance is assigned less on the basis of pedantic deconstruction of a single post and more on the practical reality of what is unfolding around us. Are there ethical applications for generative AI? Possibly. Will they become the standard? Unlikely, given incumbent power structures that are defining and dictating long-term use.

          As with most things stitched into the human experience, gaming human psychology/behavioral mechanics is key to trendsetting. What the majority accepts is what reality re-acclimates to. At the moment, that appears to be mass adoption of unethical AI systems.

          I don’t disagree on these problems not being inherent to AI. But that sentiment has the same flavour as ‘guns don’t kill people’ ammosexuals like to bust out when confronted.

          Either way, it’s clear you have a good read on what needs to happen to get all this to a better place. Hope you keep fighting to make that happen.

          • theunknownmuncher@lemmy.world · +6/−1 · 27 days ago

            Yeah, agreed. But that’s not what the OOP is saying in their post, and their attitude and language make me believe they’re purposefully being wrong and outrageous for attention/trolling.

            • hpx9140@fedia.io · +5 · 27 days ago

              Yeah, don’t blame you for cracking the whip on hyperbole. It’s good to have someone doing that to keep us sane.

              What OOP is reacting to is the majority sentiment that’s saturating the feed they’re swimming through. It’s a messy response, but the direction they’re pointed in is generally correct, and a lot more aligned with your position than you might expect, despite fumbling the details.

  • ZMoney@lemmy.world · +9/−4 · 26 days ago

    So I’ll be honest. I use GPT to write Python scripts for my research. I’m not a coder and I don’t want to be one, but I do need to model data sometimes and I find it incredibly useful that I can tell it something in English and it can write modeling scripts in Python. It’s also a great way to learn some coding basics. So please tell me why this is bad and what I should do instead.
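For context, the kind of "modeling script" being described is usually very short. A hand-written sketch of a typical request ("fit a line to my x/y data and report the slope") might look like the following; the data here is made up for illustration, and a real research script would more likely use NumPy or SciPy:

```python
# Least-squares fit of y = m*x + b, stdlib only.
# Toy data standing in for real measurements.
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [1.1, 2.9, 5.2, 7.1, 8.8]

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# Slope = covariance(x, y) / variance(x).
sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
sxx = sum((xi - mean_x) ** 2 for xi in x)
m = sxy / sxx
b = mean_y - m * mean_x

print(f"slope={m:.2f}, intercept={b:.2f}")
```

Whether an LLM writes this or a person does, the result is small, inspectable, and easy to verify against the data, which is part of why this use case is less contentious than most.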

    • Tartas1995@discuss.tchncs.de · +2 · 26 days ago

      I think sometimes it is good to replace words to reevaluate a situation.

      Would “I don’t want to be one” be a good argument for using ai image generation?

    • DegenerateSupreme@lemmy.zip · +5/−2 · 26 days ago

      I’d say the main ethical concern at this time, regardless of harmless use cases, is the abysmal environmental impact necessary to power centralized, commercial AI models. Refer to situations like the one in Texas. A person’s use of models like ChatGPT, however small, contributes to the demand for this architecture that requires incomprehensible amounts of water, while much of the world does not have enough. In classic fashion, the U.S. government is years behind on accepting what’s wrong, allowing these companies to ruin communities behind a veil of hyped-up marketing about “innovation” and beating China at another dick-measuring contest.

      The other concern is that ChatGPT’s ability to write your Python code for data modeling is built on the hard work of programmers who will not see a cent for their contribution to the model’s training. As the adage goes, “AI allows wealth to access talent, while preventing talent from accessing wealth.” But since a ridiculous amount of data goes into these models, it’s an amorphous ethical issue that’s understandably difficult for us to contend with, because our brains struggle to comprehend so many levels of abstraction. How harmed is each individual programmer or artist? That approach ends up being meaningless, so you have to regard it more as a class-action lawsuit, where tens of thousands have been deprived as a whole.

      By my measure, this AI bubble will collapse like a dying star in the next year, because the companies have no path to profitability. I hope that shifts AI development away from these environmentally-destructive practices, and eventually we’ll see legislation requiring model training to be ethically sourced (Adobe is already getting ahead of the curve on this).

      As for what you can do instead, people have been running local Deepseek R1 models since earlier this year, so you could follow a guide to set one up.

  • HalfSalesman@lemmy.world · +6/−5 · 26 days ago

    I use LLMs in a way that reduces social anxiety from my autism: I give it details of a strange social interaction that I could not parse on my own and ask if I should worry about it, if I should make any kind of amends or inquiries, or if I’m overthinking something and should leave it alone.

    I use LLMs to bounce my own ideas off of that I’m not comfortable bouncing off someone I know IRL.

    I use LLMs to role-play (all kinds).

    I use LLMs to find things that I can’t find via conventional research methods.

    And you know what, my perspective on using it for “productive/generative” purposes is nuanced. I get why artists and writers are upset; however, there is nothing magical about humans and their artistic abilities, and in terms of material economic impact, automation of various kinds has screwed working people in the past with generally a lot less pushback.

    I do think that generated images and writing are pretty bland and near worthless without a ton of human-done work at the moment, anyway. Like, sure, I could generate a video of a cat dancing on a moving bus while a nuclear bomb goes off in the background, or whatever wacky shit, with a simple prompt, but what exactly am I even going to do with that?

    Highly directed AI content that includes a lot of human work tends to actually be pretty amazing IMO.

    And even though all the outrage pertains to intellectual work, this technology is likely going to result in a lot of blue-collar work being automated via “embodied” neural network AIs. In fact, it may be that it was needed for this kind of automation to really take off at all. It’s not just white-collar work. We aren’t just automating slop content and corporate-purposed art. The day is coming when stuff like laundry, factory/warehouse work, kitchen work, etc. is also all being done by robots.

  • BradleyUffner@lemmy.world · +7/−1 · 26 days ago

    The only real exception I can think of would be to train an AI ENTIRELY on your own personally created material, with no sources from other people AT ALL, used purely for personal use and not available to the public.

    • pjwestin@lemmy.world · +2 · 26 days ago

      I think the public domain would be fair game as well, and the fact that AI companies don’t limit themselves to those works really gives away the game. An LLM that can write in the style of Shakespeare or Dickens is impressive, but people will pay for an LLM that will write their White Lotus fan fiction for them.

    • jsomae@lemmy.ml · +4/−2 · 26 days ago

      This is a very IP-brained take. This is not the reason that AI is harmful.

      • BradleyUffner@lemmy.world · +4/−1 · 26 days ago

        Possibly, but the intention behind it is more about not exploiting other people. If it’s only trained on my work, and only used by me, I’m the only one harmed by it, and that’s my choice to make.

        • jsomae@lemmy.ml · +2/−3 · 26 days ago

          That’s very deontological. Suppose you train a model that is equally good as other models, but only using your own work. (If you were a billionaire, you could commission many works to achieve this, perhaps.) Either way, you end up with an AI that allows you to produce content without hiring artists. If the end result is just as bad for artists, why is using one of those ethical?

          • BradleyUffner@lemmy.world · +3 · 26 days ago

            True, but that’s why I specified that it could only be used for my own personal use. Once you start publishing the output you’ve entered unethical territory.

            • jsomae@lemmy.ml · +1/−2 · 26 days ago

              I don’t see the relevance of its personal use here. If it is ethical to use your own AI for personal use, why is it unethical to use an AI trained on stolen data for personal use?

  • Limonene@lemmy.world · +32/−3 · 27 days ago

    Generative AI and their outputs are derived products of their training data. I mean this ethically, not legally; I’m not a copyright lawyer.

    Using the output for personal viewing (advice, science questions, or jacking off to AI porn you requested) is weird but ethical. It’s equivalent to pirating a movie to watch at home.

    But as soon as you show someone else the output, I consider it theft without attribution. If you generate a meme image, you’re failing to attribute the artists whose work trained the AI without permission. If you generate code, that code infringes the numerous open source licenses of the training data, by failing to attribute it.

    Even a simple lemmy text post generated by AI is derived from thousands of unattributed novels.

    • shoo@lemmy.world · +4/−3 · 27 days ago

      What a weird distinction. So if I write a prompt to make a particular scene in a particular artist’s distinct style: not stealing. But if I share that prompt (and maybe even some seed info) with a friend, is that stealing? If I take a picture of the generated content, is that stealing? If someone takes it off my laptop without my knowledge, are they stealing from me or the artist?

      My viewpoint is that information wants to be free, and trying to restrict it is a losing battle (as shown by AI training). The concept of IP is tenuous at best, but I do recognize that artists need to eat in our capitalist reality. Still, once you make something and set it free to the world, you inherently lose some ownership of it. Getting mad at the tech itself for the economic injustice is silly; there are plenty of more important things to worry about in our hellscape.

      • backgroundcow@lemmy.world · +5 · 27 days ago

        Copyright law is more or less always formulated as limits on the rights to redistribute content, not how it is used. Hence, it isn’t a particularly strange position to take that one should be allowed to do whatever one wants with gen AI in the private confines of ones home, and it is only at the moment you start to redistribute content we have to start asking the difficult questions: what is, and what is not, a derivative work of the training data? What ethical limitations, if any, should apply when we use an algorithm to effortlessly copy “a style” that another human has spent lots of effort to develop?

        • shoo@lemmy.world · +2 · 26 days ago

          That makes sense wrt redistribution, but the original comment limited itself to the ethical problem and not the legal problem. I just don’t see how it makes sense in that context because it’s entirely unclear who owns the work, that’s the nature of the technology.

          If I train a model on the work of 1000 artists each of them contributes some fractional amount to each weight. When that model generates an image, it’s combining a pseudorandom human token input with the weights and some random seed info.

          If I provide a prompt of my own making, am I stealing 1/1000 of the content from each artist? Is the result 1/3 mine from my token input? Is the result 100% the property of whoever trained the model? Do we need to trace the traversal of the weights and sum the ownership of each artist based on their contribution to that weight? Is it nobody’s due to the sheer number of random steps that convert the input intent to the final result?

    • gmtom@lemmy.world · +4/−9 · 27 days ago

      No, gen AI pictures are not derived works of their training data. They are separate processes. The algorithm that actually generates the image has no knowledge of the training data.

        • gmtom@lemmy.world · +3/−1 · 27 days ago

          The algorithms involved in the actual creation of the images are not the ones trained on the data, so it’s not at all accurate to claim they are derived.

            • gmtom@lemmy.world · +4 · 27 days ago

              Not directly, no.

              The training data trains an algorithm that effectively just describes an image it sees (which BTW is super useful for blind people) and gives a score for each keyword.

              Then the actual generative part takes a random background, tries to denoise it into something recognisable, then shows it to the first algorithm, which gives it a score on how closely it resembles the prompt. Then it does some fancy maths and performs another denoising cycle, gets another score from the first algorithm, more maths, another cycle, etc., until it spits out an image that matches the prompt.

              So the algorithm that generates the image has no data from the training process whatsoever.
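The loop being described (denoise, score against the prompt, adjust, repeat) can be sketched as a toy. Everything here is a stand-in: the "image" is a short list of numbers, the "scorer" just measures closeness to a target vector, and the "fancy maths" is collapsed into a direct gradient step; a real system would use a trained image/text model as the scorer and a diffusion model as the denoiser:

```python
import random

random.seed(0)

# Toy stand-ins: the "image" is a list of numbers, the "prompt"
# is a target vector, and the scorer measures closeness to it.
target = [0.8, 0.2, 0.5, 0.9]

def score(image):
    """Higher when the image better 'matches the prompt'."""
    return -sum((a - t) ** 2 for a, t in zip(image, target))

# Start from pure noise.
image = [random.random() for _ in target]
start_score = score(image)

# Repeated "denoising" cycles: each step nudges the image in the
# direction that improves the score (here computed analytically
# for the toy scorer, standing in for the guidance maths).
for _ in range(100):
    image = [a + 0.1 * 2 * (t - a) for a, t in zip(image, target)]

end_score = score(image)
print(f"score went from {start_score:.3f} to {end_score:.6f}")
```

The point the toy illustrates is structural: the generator only ever sees scores from the trained model, not the training data itself, which is the distinction the comment is drawing.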

              • petrol_sniff_king@lemmy.blahaj.zone · +1/−1 · 26 days ago

                So the algorithm that generates the image has no data from the training process whatsoever.

                It gets a, uh, score. You wrote that yourself, I don’t know how you could forget.

                • gmtom@lemmy.world · +2 · 26 days ago

                  But that’s not the same as a derivative. That’s like saying a chart of which art styles were most popular in each decade is a derivative of every work in that survey, because those works were used to create the data being presented.

  • Atlas_@lemmy.world · +37/−17 · 27 days ago

    Do y’all hate chess engines?

    If yes, cool.

    If no, I think you hate tech companies more than you hate AI specifically.

    • Norah (pup/it/she)@lemmy.blahaj.zone · +30/−11 · 27 days ago

      The post is pretty clearly about genAI, I think you’re just choosing to ignore that part. There’s plenty of really awesome machine learning technology that helps with disabilities, doesn’t rip off artists, and isn’t environmentally deleterious.

      • brucethemoose@lemmy.world · +15/−19 · 27 days ago

        The distinction between AI and GenAI is meaningless; they are buzzwords for the same underlying tech.

        So is trying to bucket them based on copyright violation: there are very powerful, open dataset, more or less reproducible LLMs trained and runnable on a trivial amount of electricity you can run on your own PC right now.

        Same with use cases. One can use embedding models or tiny ResNets to kill. People do, in fact, as with Palantir’s facial recognition models. At the other extreme, LLMs can be totally task-focused and useless at anything else.

        The distinction is corporate/enshittified vs not. Like Reddit vs Lemmy.

        • absentbird@lemmy.world · +7/−5 · 27 days ago

          The distinction between AI and GenAI is like the difference between eating and cannibalism; one contains the other, but there’s still a meaningful distinction.

          Generative AI produces text or images by leveraging huge neural networks weighted by tons and tons of training data. It’s fundamentally a system of guesses and vibes.

          Machine learning in general is often much more precise. The model finding early cancer in scans isn’t just guessing the next word, it’s running the image through a series of precisely tuned layers.

          The industry term for the distinction is generative vs. discriminative models.
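The contrast can be made concrete with a toy: a discriminative model maps an input to a label or score, while a generative model produces new samples from a learned distribution. The "models" below are hand-rolled 1-D stand-ins, not real trained networks:

```python
import random

random.seed(42)

# Pretend we've "learned" that class-A values cluster near 1.0
# and class-B values near 5.0 (toy one-dimensional data).
mean_a, mean_b = 1.0, 5.0

def discriminate(x):
    """Discriminative: given a sample, output a label."""
    return "A" if abs(x - mean_a) < abs(x - mean_b) else "B"

def generate(label):
    """Generative: given a label, output a new plausible sample."""
    mean = mean_a if label == "A" else mean_b
    return random.gauss(mean, 0.3)

print(discriminate(1.2))        # classifies an existing input
sample = generate("B")          # produces a brand-new sample
print(round(sample, 2))
```

The cancer-screening model in the comment is the first kind (input in, decision out); ChatGPT-style systems are the second (label/prompt in, new data out).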

        • Annoyed_🦀 @lemmy.zip · +3/−1 · 27 days ago

          The distinction between AI and GenAI is meaningless; they are buzzwords for the same underlying tech.

          Genuinely doubt the tech used to control Zerg is the same tech used to generate an essay about elephants that contains numerous pieces of misinformation. “AI” is being used so liberally lately that the term has lost its meaning.

        • Probius@sopuli.xyz · +10/−4 · 27 days ago

          That first claim makes no sense and you make no argument to back it up. The distinction is actually quite meaningful; generative AI generates new samples from an existing distribution, be it text, audio, images, or anything else. Other forms of AI solve numerous problems in different ways, such as identifying patterns we can’t or inventing novel and more optimal solutions.

        • starman2112@sh.itjust.works · +18/−8 · 27 days ago

          The distinction between AI and GenAI is meaningless; they are buzzwords for the same underlying tech.

          You know this is a stupid take, right? You know that chatgpt and Stockfish, while both being forms of “artificial intelligence,” are wildly incomparable, yeah? This is like saying “the distinction between an ICBM and the Saturn-V is meaningless, because they both use the same underlying tech”

          • SinAdjetivos@lemmy.world · +2/−1 · 26 days ago

            You know that transformer and diffusion models, while both being forms of “GenAI,” are wildly incomparable, yeah?

  • kartoffelsaft@programming.dev · +123/−10 · 27 days ago

    I believe AI is going to be a net negative to society for the foreseeable future. AI art is a blight on artistry as a concept, and LLMs are shunting us further into a search-engine-overfit, post-truth world.

    But also:

    Reading the OOP has made me a little angry. You can see the echo chamber forming right before your eyes. Either you see things the way OOP does with no nuance, or you stop following them and are left following AI hype-bros who’ll accept you instead. It’s disgustingly twitter-brained. It’s a bullshit purity test that only serves your comfort over actually trying to convince anyone of anything.

    Consider someone who has had some small but valued usage of AI (as a reverse dictionary, for example), but generally considers things like energy usage and intellectual property rights to be serious issues we have to face for AI to truly be a net good. What does that person hear when they read this post? “That time you used ChatGPT to recall the word ‘verisimilar’ makes you an evil person.” is what they hear. And at that moment you’ve cut that person off from ever actually considering your opinion ever again. Even if you’re right that’s not healthy.

    • BigDiction@lemmy.world · +25/−3 · 27 days ago

      I’m what most people would consider an AI Luddite/hater, and I think OOP communicates like a dogmatic asshole.

    • ysjet@lemmy.world · +7/−8 · 27 days ago

      Using chatGPT to recall the word ‘verisimilar’ is an absurd waste of time, energy, and in no way justifies the use of AI.

      90% of LLM/GPT use is a waste or could be done better with another tool, including non-LLM AIs. The remaining 10% is just outright evil.

        • ysjet@lemmy.world · +1/−2 · 26 days ago

          Source is the commercial and academic uses I’ve personally seen as an academic-adjacent professional that’s had to deal with this sort of stuff at my job.

          • KeenFlame@feddit.nu · +1/−1 · 25 days ago

            What was the data you saw on the volume of requests to non-LLM models and how useful they were? I can’t figure out what profession has access to this kind of statistic. It would be very useful to know, thx.

            • ysjet@lemmy.world · +1 · 25 days ago

              I think you’ve misunderstood what I was saying- I don’t have spreadsheets of statistics on requests for LLM AIs vs non-LLM AIs. What I have is exposure to a significant amount of various AI users, each running different kinds of AIs, and me seeing what kind of AI they’re using, and for what purposes, and how well it works or doesn’t.

              Generally, LLM-based stuff is really only returning ‘useful’ results for language-based statistical analysis, which NLP handles better, faster, and vastly cheaper. For the rest, they really don’t even seem to be returning useful results- I typically see a LOT of frustration.

              I’m not about to give any information that could doxx myself, but the reason I see so much of this is because I’m professionally adjacent to some supercomputers. As you can imagine, those tend to be useful for AI research :P

              • KeenFlame@feddit.nu · +1/−1 · 24 days ago

                Ah, ok, that’s too bad. Supercomputers typically don’t have tensor cores though, and most LLM use is presumably client use of already-trained models, which desktop or mobile CPUs can manage now, so it will be impossible to know then

                • ysjet@lemmy.world · +1 · 23 days ago

                  yyyyes they do have tensor cores? Where did you get such an absurd idea from?

    • WoodScientist@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      5
      arrow-down
      4
      ·
      27 days ago

      (as a reverse dictionary, for example)

Thanks for putting a name on that! That’s actually one of the few useful purposes I’ve found for LLMs. Sometimes you know or deduce that some thing, device, or technique must exist. The knowledge of it is out there, but you simply don’t know the term to search for. IMO, this is actually one of the killer features of LLMs. It works well because whatever the LLM outputs is simply and instantly verifiable. You can describe the characteristics of something to the LLM and ask it what thing has those characteristics. Then, once you have a possible name, you look that name up in a reliable source and confirm it. Sometimes the biggest hurdle to figuring something out is just learning the name of a thing.
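For anyone who wants to try the reverse-dictionary trick, here’s a minimal sketch of the prompt-then-verify workflow. The helper name and prompt wording are my own invention, and the actual model call is left out; any chat LLM works:

```python
def reverse_dictionary_prompt(description: str) -> str:
    """Build a reverse-dictionary prompt for any chat LLM."""
    return (
        "I'm looking for the name of a thing. It matches this description: "
        f"{description}\n"
        "Reply with the most likely term, plus one sentence on why it fits."
    )

# Example: you know the concept but not the word.
prompt = reverse_dictionary_prompt(
    "a mild or indirect word substituted for one considered too harsh"
)
# Send `prompt` to whatever model you use, then look the suggested term
# up in a real dictionary to confirm it before trusting it.
```

The verification step is the whole point: the LLM only has to produce a candidate name, and a reliable source does the actual fact-checking.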

    • azertyfun@sh.itjust.works
      link
      fedilink
      arrow-up
      15
      arrow-down
      5
      ·
      27 days ago

      You can also be right for the wrong reasons. You see that a lot in the anti-AI echo chambers, people who never gave a shit about IP law suddenly pretending that they care about copyright, the whole water use thing which is closer to myth than fact, or discussions on energy usage in general.

      Everyone can pick up on the vibes being off with the mainstream discourse around AI, but many can’t properly articulate why and they solve that cognitive dissonance with made-up or comforting bullshit.

      This makes me quite uncomfortable because that’s the exact same pattern of behavior we see from reactionaries, except that what weirds them out for reasons they can’t or won’t say explicitly isn’t tech bros but immigrants and queer people.

        • azertyfun@sh.itjust.works
          link
          fedilink
          English
          arrow-up
          3
          ·
          edit-2
          26 days ago

It’s not that the datacenters don’t “use” water (you’ll find plenty of sources confirming that), but rather that the argument stretches the concept of “water usage” to the point of meaninglessness. Water is not electricity: it can’t usually be transported very far, and the impact of a pumping operation is fundamentally location-dependent. Saying “X million litres of water used for Y” is usually not useful unless you’re defining the local geographic context.

          Pumping aquifers in a dry area and discharging the water in a field: very bad.

          Pumping from and subsequently releasing water to a lake/river: mostly harmless, though sometimes in summer the additional heat pumped into the water can be harmful depending on the size of the body of water.

          The real problem is that lots of areas (especially in the US) haven’t updated their water rights laws since the discovery of water tables. This is hardly a new problem, and big ag remains by far the worst offender here.

          Then there’s the raw materials in the supply chain… and like not to downplay it but water use is not exactly at the top of the list of environmental impacts there. Concrete is hella bad on CO2 emissions, electronics use tons of precious metals that often get strip mined and processed with little to no environmental regulation, etc.

          Frankly putting “datacenter pumped water out of the river then back in” in the same aggregate figure as “local lake polluted for 300 years in China by industrial byproducts” rubs me the wrong way. These are entirely different problems that do not benefit anyone from being bastardized like this. It feels the same way to me as saying “but there are children starving in Africa!” when someone throws away some food – sure throwing away food isn’t great, and it’s technically on-topic, but we can see how bundling these things together isn’t useful, right?

      • petrol_sniff_king@lemmy.blahaj.zone
        link
        fedilink
        arrow-up
        6
        arrow-down
        5
        ·
        27 days ago

        The people who hate immigrants and queer people are AI’s biggest defenders. It’s really no wonder that people who hate life also love the machine that replaces it.

        • KeenFlame@feddit.nu
          link
          fedilink
          arrow-up
          5
          arrow-down
          3
          ·
          27 days ago

          A perfect example of the just completely delusional factoids and statistics that will spontaneously form in the hater’s mind. Thank you for the demonstration.

  • Etterra@discuss.online
    link
    fedilink
    English
    arrow-up
    6
    arrow-down
    3
    ·
    27 days ago

It’s basically “I have no creative talent or skill, so I’ll use this because it’ll teach those artists a lesson for acting so superior.” It’s a completely delusional disconnect from the reality of being an actually creative person in any way, especially when it comes to creatives trying to earn a living while most people just want their shit for free. Go harass billionaires for their shit and leave the starving artists alone.

  • masterspace@lemmy.ca
    link
    fedilink
    English
    arrow-up
    7
    arrow-down
    14
    ·
    27 days ago

    Lmfao, this is the most childish take I could possibly imagine.

    You cannot avoid the problems with something by sticking your head in the sand and pretending like it doesn’t exist and will go away.

      • masterspace@lemmy.ca
        link
        fedilink
        English
        arrow-up
        2
        ·
        edit-2
        27 days ago

        For people who proclaim to care about critical active thought to think through the issue.

Lmao, how does no one see the irony of claiming to care about active thinking while boiling the issue down to an oversimplified, black-and-white “all AI is bad and all uses of it are bad”?

  • NoiseColor @lemmy.world
    link
    fedilink
    arrow-up
    17
    arrow-down
    10
    ·
    27 days ago

I like to read the anti-AI stuff, because ultimately a lot of the criticism is valid. But by god is there a lot of adolescent whining and hyperbole.

    • AquaTofana@lemmy.world
      link
      fedilink
      arrow-up
      7
      arrow-down
      1
      ·
      27 days ago

      I’ve said it before and I’ll say it again, one of my favorite things is the AI rp chatbots. They’re stories written by me and an AI, for me, however the fuck I want to write them.

      I used to do it with other people over the web - including my bestie who Ive been writing with for 20+ years now - but I don’t write with other humans anymore.

AI solves the ghosting issue, the “life got in the way” issues, the “I’m just not into it anymore” issues, and the “Oh you wanna make this smutty, please for the love of god I hope you’re not lying about being 26” issue, and finally, the biggest issue for me: “Please, I told you I’m happily married, please stop asking for my socials or email. I just wanna write fun angsty romance stories with you.”

So I’m with you. I’m also the problem, it’s me. But you know what? When I discovered these AI chatbots in February of this year, my doomscrolling was cut down to a third of what it was, and all of a sudden I was sleeping better and less angry.

      I’m not gonna stop.

    • grrgyle@slrpnk.net
      link
      fedilink
      arrow-up
      5
      ·
      27 days ago

      Yeah I do plenty of shit I know is a problem. Most of it just passively from living in a consumerist society.

      • dandelion (she/her)@lemmy.blahaj.zone
        link
        fedilink
        arrow-up
        5
        ·
        edit-2
        26 days ago

        yes, a lot of my immoral actions are because it’s hard or against the grain to be more moral (e.g. being a strict vegan even when traveling or not easily accommodated, or using cars when technically I could bicycle, but on dangerous roads and long distances).

        I have definitely spent most of my adult life going against the grain in extreme ways to be a “better” person, but I have been left victimized and disabled for it, so I’m trying to learn to be more moderate and not take big social problems as entirely my personal responsibility. Obviously it’s not one extreme or the other, it’s an interplay between personal and social / structural.

    • chunkystyles@sopuli.xyz
      link
      fedilink
      English
      arrow-up
      7
      arrow-down
      1
      ·
      27 days ago

      I use it to help me solve tech and code issues, but only because searching the web for help has become so bad. LLM answers are almost always better, and I hate it.

      Everything is bullshit. Everything sucks. Capitalism has ruined everything.

    • bramkaandorp@lemmy.world
      link
      fedilink
      arrow-up
      7
      arrow-down
      1
      ·
      26 days ago

      Neither is climate change, but we should still combat it where possible.

      Funny, that. Fighting against AI could be seen as fighting against climate change, considering the large carbon footprint it has.

    • OneClappedCheek@lemmy.world
      link
      fedilink
      arrow-up
      3
      arrow-down
      1
      ·
      26 days ago

      Texans are facing a water shortage due to over 900 MILLION gallons of water being used to cool AI datacenters. Do you think that’s sustainable?

    • ZDL@lazysoci.al
      link
      fedilink
      arrow-up
      2
      arrow-down
      1
      ·
      26 days ago

      Do you want me to list the techbrodude technologies that were “not going anywhere” in past decades that have effectively died outside of tiny die-hard communities still living a delusion?

      Remember when the Metaverse was the next great thing that wasn’t going anywhere? Remember when cryptocurrency was going to wipe out banking forevermore? Remember when NFTs were going to revolutionize artists getting paid for their work? Segway or its somehow-lamer cousin “hoverboards”? Augmented Reality? 3D TVs? Theranos? Google Wave?

      Hell, just go visit the Google Graveyard for a list of “hot” technologies that withered and died on the vine. (And quite a few lame technologies that shouldn’t have ever even been on the vine.)

      Remember all that?

      But this time the techbrodudes have it right, despite there not being a viable business model; despite every AI vendor in the world burning through money faster than dumping that same cash into a forest fire. It’s not going anywhere!

      Every grift has two parties: the grifter and the sucker. You’re not the former.

    • korazail@lemmy.myserv.one
      link
      fedilink
      English
      arrow-up
      2
      arrow-down
      1
      ·
      26 days ago

      sadly. I don’t have enough money to turn this shit-hose off.

      Gen AI is neat, and I use it for personal processes including code, image gen, llm/chat; but it is sooooo faaaar awaaaay from being a real game changer - while all the people poised to profit off it claim it is - that it’s just insane to claim it’s the next wave. evidence: all the creative (photo/art/code/etc) people who are adamantly against it and have espoused reasoning.

      There’s another story on my feed about a 10-year-old refactoring a code base with a LLM. Go look at the comments from actual experts that take into account things like unit tests, readability, manageability, security. Humans have more context than any AI will.

      LLMs are not intelligent. They are patently not. They make shit up constantly, since that is exactly what they do. Sometimes, maybe even most of the time, the shit they make up is mostly accurate… but do you want to rely on them?

      When a doctor prescribes you the wrong drug, you can sue them as a recourse. When a software company has a data breach, there is often a class-action (better than nothing) as a recourse. When an AI tells you to put glue on your pizza to hold the toppings, there is no recourse, since the AI is not a legal thing and the company disclaims all liability for its output. When an AI denies your health insurance claim because of inscrutable reasons, there is no recourse.

      In the first two, there is a penalty for being wrong, which is in effect an incentive to be correct – to be accurate, to be responsible.

      In the last, as an AI llm/agent/fuckingbuzzword, there is no penalty and no incentive. The AI just is as good as its input, and half the world is fucking stupid, so if we average out all the world’s input, we get “barely getting by” as a result. A coding AI is at least partially trained on random stackoverflow posts asking for help. The original code there is wrong!

      Sadly, it’s not going anywhere. But people who rely on it will find short-term success for long-term failure. And a society relying on it is doomed. AI relies on the creative works that already exist. If we don’t make any new things, AI will stagnate and die. Where will we be then?

There are places AI/LLM/machine-learning can be used successfully and helpfully, but they are niche. The AI bros need to be figuring out how to quickly meet a specific need instead of trying to meet all needs at the same time. Think early-2000s Folding@home, how to convince Republicans to wear a fucking mask during COVID, why we shouldn’t just eat the billionaires*.

      *Hermes-3 says cannibalism is “barbaric” in most cultures, but otherwise doesn’t give convincing arguments.

  • gmtom@lemmy.world
    link
    fedilink
    arrow-up
    69
    arrow-down
    24
    ·
    27 days ago

I work at a company that uses AI to detect respiratory illnesses in X-rays and MRI scans weeks or months before a human doctor could.

This work has already saved thousands of people’s lives.

But good to know you anti-AI people have your one-dimensional, zero-nuance take on the subject and are now doing moral purity tests on it and dick measuring to see who has the loudest, most extreme hatred for AI.

    • brucethemoose@lemmy.world
      link
      fedilink
      arrow-up
      20
      arrow-down
      4
      ·
      edit-2
      27 days ago

      All this is being stoked by OpenAI, Anthropic and such.

They want the issue to be polarized and remove any nuance, so it’s simple: use their corporate APIs, or not. Anything else is “dangerous.”

Because what they’re really scared of is awareness of locally runnable, ethical, and independent task-specific tools like yours. That doesn’t make them any money. Stirring up “fuck AI” does, because that’s a battle they know they can win.

    • ysjet@lemmy.world
      link
      fedilink
      English
      arrow-up
      10
      arrow-down
      18
      ·
      27 days ago

      Those are not GPTs or LLMs. Fuck off with your bullshit trying to conflate the two.

      • gmtom@lemmy.world
        link
        fedilink
        arrow-up
        18
        arrow-down
        4
        ·
        27 days ago

        We actually do use Generative Pre-trained Transformers as the base for a lot of our tech. So yes they are GPTs.

And even if they weren’t GPTs, this is a post saying all AI is bad and that there are literally no exceptions to that.

        • ysjet@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          5
          ·
          edit-2
          26 days ago

Again with the conflation. They clearly mean GPTs and LLMs from the context they provide; they just don’t have another name for it, mostly because people like you like to pretend that AI is shit like ChatGPT when it benefits you, and that regular machine learning is AI when it benefits you.

          And no, GPTs are not needed, nor used, as a base for most of the useful tech, because anyone with any sense in this industry knows that good models and carefully curated training data gets you more accurate, reliable results than large amounts of shit data.

          • gmtom@lemmy.world
            link
            fedilink
            arrow-up
            4
            arrow-down
            1
            ·
            26 days ago

Our whole tech stack is built off of GPTs. They are just a tool; use it badly and you get AI slop, use it well and you can save people’s lives.

    • Corelli_III@midwest.social
      link
      fedilink
      arrow-up
      5
      arrow-down
      7
      ·
      26 days ago

      nobody is trashing Visual Machine Learning to assist in medical diagnostics

      cool strawman though, i like his little hat

      • gmtom@lemmy.world
        link
        fedilink
        arrow-up
        5
        arrow-down
        1
        ·
        26 days ago

No, when you literally say “Fuck AI, no exceptions” you are very, very explicitly covering all AI in that statement.

        • Corelli_III@midwest.social
          link
          fedilink
          arrow-up
          3
          arrow-down
          2
          ·
          26 days ago

          what do you think visual machine learning applied to medical diagnostics is exactly

          does it count as “ai” if i could teach an 11th grader how to build it, because it’s essentially statistically filtering legos

          don’t lose the thread sportschampion

          • gmtom@lemmy.world
            link
            fedilink
            arrow-up
            2
            ·
            25 days ago

Well, most of my colleagues have PhDs or MDs, so good luck teaching an 11th grader to do it.

    • redwattlebird @lemmings.world
      link
      fedilink
      arrow-up
      1
      arrow-down
      6
      ·
      26 days ago

      And that AI has been trained on data that has been stolen, taking away the livelihood of thousands more. Further, the environmental destruction will have the capacity to destroy millions more.

      I’m not lost on the benefits; it can be used to better society. However, the lack of policy around it, especially the pandering to corporations by the American judicial system, is the crux here. For me, at least.

      • gmtom@lemmy.world
        link
        fedilink
        arrow-up
        4
        ·
        26 days ago

No. I’m also part of the ethics committee at my work, and since we work with people’s medical data as our training sets, 9/10ths of our time is about making sure that data is collected ethically and with very specific consent.

        • redwattlebird @lemmings.world
          link
          fedilink
          arrow-up
          1
          arrow-down
          1
          ·
          26 days ago

          I’m fine with that. My issue is primarily theft and permissions and the way your committee is running it should be the absolute baseline of how models gather data. Keep up the great work. I hope that this practice becomes mainstream.

      • gmtom@lemmy.world
        link
        fedilink
        arrow-up
        9
        arrow-down
        6
        ·
        27 days ago
1. Except clearly some people do. This post is very specifically saying ALL AI is bad and there are no exceptions.

        2. Generative AI isn’t a well-defined concept, and a lot of the tech we use is indistinguishable on a technical level from “generative AI”

        • starman2112@sh.itjust.works
          link
          fedilink
          arrow-up
          10
          arrow-down
          7
          ·
          edit-2
          27 days ago
          1. sephirAmy explicitly said generative AI

          2. Give me an example, and watch me distinguish it from the kind of generative AI sephirAmy is talking about

        • starman2112@sh.itjust.works
          link
          fedilink
          arrow-up
          2
          arrow-down
          6
          ·
          26 days ago

          It’s almost like it isn’t the “training on a large data set” part that people hate about generative AI

ICBMs and rocket ships both burn fuel to send a payload to a destination. Why does NASA get to send tons of satellites to space, but I’m the asshole when I nuke Europe??? They both utilize the same technology!

            • starman2112@sh.itjust.works
              link
              fedilink
              arrow-up
              1
              arrow-down
              4
              ·
              26 days ago

              Nope, all generative AI is bad, no exceptions. Something that uses the same kind of technology but doesn’t try to imitate a human with artistic or linguistic output isn’t the kind of AI we’re talking about.

      • brucethemoose@lemmy.world
        link
        fedilink
        arrow-up
        33
        arrow-down
        15
        ·
        edit-2
        27 days ago

        Generative AI is a meaningless buzzword for the same underlying technology, as I kinda ranted on below.

        Corporate enshittification is what’s demonic. When you say fuck AI, you should really mean “fuck Sam Altman”

        • monotremata@lemmy.ca
          link
          fedilink
          English
          arrow-up
          29
          arrow-down
          6
          ·
          27 days ago

          I mean, not really? Maybe they’re both deep learning neural architectures, but one has been trained on an entire internetful of stolen creative content and the other has been trained on ethically sourced medical data. That’s a pretty significant difference.

          • AdrianTheFrog@lemmy.world
            link
            fedilink
            English
            arrow-up
            7
            arrow-down
            1
            ·
            27 days ago

            I think DLSS/FSR/XeSS is a good example of something that is clearly ethical and also clearly generative AI. Can’t really think of many others lol

          • KeenFlame@feddit.nu
            link
            fedilink
            arrow-up
            13
            ·
            27 days ago

No, really. Deep learning and transformers etc. were discoveries that allowed for all of the above. Just because corporate VC shitheads drag their musty balls in the latest boom, abusing the piss out of it and making it uncool, does not mean the technology is a useless scam

            • ILikeTraaaains@lemmy.world
              link
              fedilink
              arrow-up
              8
              ·
              27 days ago

              This.

              I recently attended a congress about technology applied on healthcare.

              There were works that improved diagnosis and interventions with AI, generative mainly used for synthetic data for training.

              However there were also other works that left a bad aftertaste in my mouth, like replacing human interaction between the patient and a specialist with a chatbot in charge of explaining the procedure and answering questions to the patient. Some saw privacy laws as a hindrance and wanted to use any kind of private data.

              Both GenAI, one that improves lives and other that improves profits.

            • monotremata@lemmy.ca
              link
              fedilink
              English
              arrow-up
              3
              arrow-down
              1
              ·
              26 days ago

              Yeah, that’s not what I was disagreeing with. You’re right about that; I’m on record saying that capitalism is our first superintelligence and it’s already misaligned. I’m just saying that it isn’t really meaningless to object to generative AI. Sure the edges of the category are blurry, but all the LLMs and diffusion-based image generators and video generators were unethically trained on massive bodies of stolen data. Seriously, talking about AI as though the architecture is the only significant element when getting good training data is like 90% of the challenge is kind of a pet peeve of mine. And seen in that light there’s a pretty significant distinction between the AI people are objecting to and the AI people aren’t objecting to, and I don’t think it’s a matter of “a meaningless buzzword.”

              • KeenFlame@feddit.nu
                link
                fedilink
                arrow-up
                1
                ·
                25 days ago

I totally understand that. I just disagree with that pet peeve of yours on a fundamental level. The data is the content, and speaking about it as if the data is the technology itself is like talking about clothes in general as being useful or not. It’s meaningless, especially if you don’t know about or acknowledge the different types of apparel and their uses. It’s obviously not general knowledge, but it would be like bickering about whether underwear is a great idea or not; it’s totally up to the individual if they want to wear them, even if being butt naked in public is illegal. If the framework is irrelevant, then the immediate problem isn’t generative AI, especially not the perfectly ethical open source models

        • AeonFelis@lemmy.world
          link
          fedilink
          arrow-up
          6
          arrow-down
          8
          ·
          edit-2
          26 days ago

          Generative AI is a meaningless buzzword for the same underlying technology

What? An AI that can “detect respiratory illnesses in X-rays and MRI scans” is not generative. It does not generate anything. It’s a discriminative AI. Sure, the theories behind these technologies have many things in common - but I wouldn’t call them “the same underlying technology”.

          • gmtom@lemmy.world
            link
            fedilink
            arrow-up
            3
            ·
            26 days ago

It is literally the exact same technology. If I wanted to, I could turn our X-ray product into an image generator in less than a day.
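A toy numpy sketch of what “same technology” means here: one shared feature extractor (a stand-in for a trained encoder) can feed either a classification head or an image-decoding head. All names, shapes, and weights below are made up for illustration; this is not anyone’s actual product, and real systems would use trained convolutional networks, not random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared "backbone": a toy feature extractor mapping a flattened
# 8x8 "scan" to a 16-dim feature vector. (Random weights, untrained.)
W_backbone = rng.normal(size=(64, 16))

def features(image):
    return np.tanh(image.reshape(-1) @ W_backbone)

# Head 1: discriminative -- classify the scan (e.g. "finding" vs "clear").
W_cls = rng.normal(size=(16, 2))

def classify(image):
    logits = features(image) @ W_cls
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()  # class probabilities, sums to 1

# Head 2: generative -- decode the same features back into an 8x8 image.
W_dec = rng.normal(size=(16, 64))

def generate(image):
    return (features(image) @ W_dec).reshape(8, 8)

scan = rng.normal(size=(8, 8))
probs = classify(scan)       # diagnostic-style output
synthetic = generate(scan)   # image-generation-style output
```

Swapping the final head is trivial compared to building the backbone, which is the sense in which the diagnostic and generative products share one technology.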

            • AeonFelis@lemmy.world
              link
              fedilink
              arrow-up
              1
              arrow-down
              4
              ·
              26 days ago

              Because they are both computers and you can install different (GPU-bound) software on them?

It’s true that generative AI uses discriminative models behind the scenes, but the layer needed on top of that is enough to classify it as a different technology.