Lots of people on Lemmy really dislike AI’s current implementations and use cases.

I’m trying to understand what people would want to be happening right now.

Destroy gen AI? Implement laws? Hope that all companies use it for altruistic purposes to help all of mankind?

Thanks for the discourse. Please keep it civil, but happy to be your punching bag.

  • Taleya@aussie.zone

    What do I really want?

    Stop fucking jamming it up the arse of everything imaginable. If I got a genie wish, I’d make it illegal for AI to be anything but opt-in.

    • blackn1ght@feddit.uk

      I think it’s just a matter of time before it starts being removed from places where it just isn’t useful. For now, companies are throwing it at everything to see what sticks. WhatsApp and JustEat added AI features, and I have no idea why, how it could be useful for those services, or who would actually use it there.

  • Brave Little Hitachi Wand@lemmy.world

    Part of what makes me so annoyed is that there’s no realistic scenario I can think of that would feel like a good outcome.

    Emphasis on realistic, before anyone describes some insane turn of events.

  • justOnePersistentKbinPlease@fedia.io

    They have to pay for every piece of copyrighted material used in the model whenever the AI is queried.

    They are only allowed to use data that people opt into providing.

    • venusaur@lemmy.worldOP

      This definitely relates to moral concerns. Are there other examples of a company being allowed to profit off of other people’s content without paying or citing them?

      • BlameTheAntifa@lemmy.world

        Careful, that might require a nuanced discussion that reveals the inherent evil of capitalism and neoliberalism. Better off just ensuring that wealthy corporations can monopolize the technology and abuse artists by paying them next-to-nothing for their stolen work rather than nothing at all.

    • Bob Robertson IX @discuss.tchncs.de

      There’s no way that’s even feasible. Instead, AI models trained on publicly available data should be considered part of the public domain. So any images that anyone can go and look at without a barrier in the way would be fair game, but the model itself would be owned by the public.

      • turtle [he/him]@lemm.ee

        Public Domain does not mean being able to see something without a barrier in the way. The vast majority of text and media you can consume for free on the Internet is not in the Public Domain.

        Instead, “Public Domain” means that 1) the creator has explicitly released the work into the Public Domain, or 2) the work’s copyright has expired, either of which means that from that point on anyone is entitled to use the work for any purpose.

        All the major AI models scarfed up works without concern for copyrights, licenses, permissions, etc., for great profit. In some cases, Meta’s at least, they knowingly used known collections of pirated works to do so.

        • Bob Robertson IX @discuss.tchncs.de

          I am aware, and I don’t claim that everything on the internet is public domain… I think the models built off of works displayed to the public should automatically be part of the public domain.

          The models are not creating copies of the works they are trained on, any more than I am creating a copy of a sculpture I see in a park when I study it. You can’t open a model up and pull out images of everything it was trained on. The models aren’t ‘stealing’ the works they use as training data, and you are correct that the works were used without concern for copyright (because the works aren’t copied through training), licenses (because a provision such as ‘you can’t use this work to influence your ability to create something with similar elements’ isn’t really an enforceable provision in a license), or permission (because when you put something out for the public to view, it’s hard to argue that people need permission to view it).

          Using illegal sources is illegal, and I’m sure that if it can be proven in court, Meta will gladly accept a few-hundred-thousand-dollar fine… before they appeal it.

          Putting massive restrictions on AI model creation is only going to ensure that the wealthiest and most powerful corporations are the only ones with AI models. The best we can do is fight to keep AI models in the public domain by default. The salt has already been spilled, and wishing it hadn’t been isn’t going to change things.

        • Bob Robertson IX @discuss.tchncs.de

          No, it’s not feasible because the models are already out there. The data has already been ingested and at this point it can’t be undone.

          And you can’t exactly steal something that is infinitely reproducible, where copying doesn’t destroy the original. I have a hard time condemning model creators for training their models on images of Mickey Mouse while I have a Plex server with the latest episodes of Andor on it. Once something is put on public display, its creator should just accept that they have given up total control of it.

      • Knock_Knock_Lemmy_In@lemmy.world

        “There’s no way that’s even feasible.”

        It’s totally feasible, just very expensive.

        Either copyright doesn’t exist in its current form or AI companies don’t.

    • A Wild Mimic appears!@lemmy.dbzer0.com

      I would make a case for the creation of datasets by an international institution like UNESCO. The data used would be representative of world culture, and the creation of the datasets would have to be sponsored by whoever wants to build models from them, so that licensing fees can be paid to creators. If you wanted to make your mark on global culture, you would have an incentive to offer training data to UNESCO.

      I know, that would be idealistic and fair to everyone. No way this would fly in our age.

  • BananaTrifleViolin@lemmy.world

    I’m not against AI itself; it’s the hype and misinformation that frustrate me. LLMs aren’t true AI (or at least not AGI, as the meaning of “AI” has drifted), but they’ve been branded that way to fuel tech and stock market bubbles. While LLMs can be useful, they’re still early-stage software, causing harm through misinformation and widespread copyright issues. They’re being misapplied to tasks like search, leading to poor results and damaging the reputation of AI.

    Real AI lies in more advanced neural networks, which are still a long way off. I wish tech companies would stop misleading the public. The bubble will burst eventually, though not before doing considerable harm.

  • naught101@lemmy.world

    TBH, it’s mostly the corporate control and misinformation/hype that’s the problem. And the fact that these models can require substantial amounts of energy yet get used for such trivial shit. And that that use is actively degrading people’s capacity for critical thinking.

    ML in general can be super useful, and is an excellent tool for complex data analysis that can lead to really useful insights…

    So yeah, uh… eat the rich? And the marketing departments. And incorporate emissions into pricing, or regulate them to the point where AI is only viable for non-trivial use cases.

  • sweemoof@lemmy.world

    The most popular models used online need to include citations for everything. They can be used to automate some white-collar/knowledge work, but they need to be scrutinized heavily by independent thinkers when used to try to predict trends and future events.

    As always, schools need to be better at teaching critical thinking, epistemology, and emotional intelligence way earlier than we currently do, and AI shows that rote subject matter is a dated way to learn.

    When artists create art, there should be some standardized seal, signature, or verification that the artist did not use AI or used it only supplementally on the side. This would work on the honor system and just constitute a scandal if the artist is eventually outed as having faked their craft. (Think finding out the handmade furniture you bought was actually made in a Vietnamese factory. The seller should merely have their reputation tarnished.)

    Overall I see AI as the next step in search engine synthesis; the info just needs to be properly credited to the original researchers and verified against other sources by the user. No different than Google or Wikipedia.
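
    To illustrate the crediting idea, here’s a minimal sketch of how a search-synthesis setup can force citations: number each retrieved snippet and tell the model to answer only from them. The snippets, URLs, and wording below are all invented.

    ```python
    # Hypothetical retrieved snippets; a real system would pull these from a search index.
    sources = [
        {"url": "https://example.org/a", "text": "Water boils at 100 C at sea level."},
        {"url": "https://example.org/b", "text": "The boiling point drops as altitude increases."},
    ]

    def compose_prompt(question: str) -> str:
        # Number each snippet so the model's answer can cite them as [1], [2], ...
        numbered = "\n".join(
            f"[{i}] {s['text']} (source: {s['url']})" for i, s in enumerate(sources, 1)
        )
        return (
            "Answer using ONLY the sources below, citing each claim as [n].\n"
            f"{numbered}\n\nQuestion: {question}"
        )

    print(compose_prompt("Why does pasta take longer to cook in the mountains?"))
    ```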

  • subignition@fedia.io

    Training data needs to be 100% traceable and licensed appropriately.
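
    For a sense of what “traceable” could mean in practice, here’s a minimal sketch; the file names and licences in it are invented. A training run could be required to publish a manifest like this before it starts:

    ```python
    import hashlib
    from pathlib import Path

    def build_manifest(data_dir: str, licences: dict[str, str]) -> list[dict]:
        """Record a content hash and licence for every file used in training."""
        manifest = []
        for path in sorted(Path(data_dir).glob("*")):
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest.append({
                "file": path.name,
                "sha256": digest,  # pins down exactly which bytes were used
                "licence": licences.get(path.name, "UNKNOWN"),  # UNKNOWN should block training
            })
        return manifest

    # Hypothetical usage:
    # build_manifest("training_data/", {"poem.txt": "CC-BY-4.0"})
    ```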

    Energy usage involved in training and running the model needs to be 100% traceable and some minimum % of renewable (if not 100%).

    Any model whose training includes data in the public domain should itself become public domain.

    And while we’re at it, we should look into deliberately running at lower clock speeds and taking more time, to try to reduce or eliminate the water used to cool these facilities.
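
    As a rough sketch of that idea at the single-GPU level, assuming an NVIDIA card and the nvidia-ml-py bindings (and admin rights), you can cap the card’s power draw so it runs slower but cooler. This is an illustration, not a datacenter-scale solution:

    ```python
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)

    # Query the card's allowed power-limit range (milliwatts), then cap it at
    # the low end: training takes longer, but peak heat and cooling load drop.
    min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)
    pynvml.nvmlDeviceSetPowerManagementLimit(handle, min_mw)
    print(f"Power limit set to {min_mw / 1000:.0f} W "
          f"(allowed range {min_mw / 1000:.0f}-{max_mw / 1000:.0f} W)")

    pynvml.nvmlShutdown()
    ```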

  • mesa@piefed.social

    I think it’s important to figure out what you mean by “AI”.

    I’m thinking a majority of people here are talking about LLMs, BUT there are other AIs that have been quietly worked on and are finally making huge strides.

    AI that can produce songs (Suno) and replicate voices. AI that can reproduce a face from one picture (there are a couple of GitHub repos out there). When it comes to the above, we are dealing with copyright-infringement AI, specifically designed and trained on other people’s work. If we really do end up with laws that deregulate AI, then I say we go all in: open source everything (or as much as possible), make it so it’s trained on all company-specific info, and let anyone run it. I have a feeling we can’t put the genie back in the bottle.

    If we’re allowed pie-in-the-sky solutions, I would like a new iteration of the web. One that specifically makes it difficult or outright impossible to pull into AI. Something like onion routing, where only real nodes/people are accepted when ingesting the data.

  • MisterCurtis@lemmy.world

    Regulate its energy consumption and emissions, across the entire AI industry as a whole. Any energy used or emissions produced in the effort to develop, train, or operate AI should be limited.

    If AI is here to stay, we must regulate what slice of the planet we’re willing to give it. I mean, AI is cool and all, and it’s been really fascinating watching how quickly these algorithms have progressed. Not to oversimplify it, but a complex Markov chain isn’t really worth the energy consumption that it currently requires.
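
    To make the Markov chain comparison concrete, here’s roughly what a bare-bones word-level one looks like; the corpus is invented for illustration. An LLM’s next-word machinery is incomparably more elaborate, and that gap is exactly where the energy goes:

    ```python
    import random
    from collections import defaultdict

    # Toy word-level Markov chain: the next word depends only on the current one.
    corpus = "the cat sat on the mat and the cat ran off".split()

    # Count every observed word-to-word transition.
    transitions = defaultdict(list)
    for current, nxt in zip(corpus, corpus[1:]):
        transitions[current].append(nxt)

    # Generate text one word at a time -- the same loop shape as LLM decoding.
    word = "the"
    output = [word]
    for _ in range(8):
        if word not in transitions:
            break
        word = random.choice(transitions[word])
        output.append(word)
    print(" ".join(output))
    ```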

    Strict regulation now would be a leg up in preventing any rogue AI, or runaway algorithms that would just consume energy to the detriment of life. We need a hand on the plug. Capitalism can’t be trusted to self-regulate. Just look at the energy grabs all the big AI companies have been doing already (xAI’s datacenter, Amazon’s and Google’s investments into nuclear). It’s going to get worse. They’ll just keep feeding it more and more energy, gutting the planet to feed the machine, so people can generate sexy cat girlfriends and cheat on their essays.

    We should be funding efforts to utilize AI more for medical research: protein folding, developing new medicines, predicting weather, communicating with nature, exploring space. We’re thinking too small. AI needs to make us better. With how much energy we throw at it, we should be seeing something positive out of that investment.

    • medgremlin@midwest.social

      These companies investing in nuclear is the only good thing about this. Nuclear power is our best, cleanest option to supplement renewables like solar and wind, and it can pick up the slack when variable generation doesn’t meet variable demand. If we can trick those mega-companies into lobbying the government to allow nuclear fuel recycling, we’ll be all set to ditch fossil fuels fairly quickly (provided they also lobby to streamline the permitting process and reverse the DOGE gutting of the government agency that provides all of the startup loans used for nuclear power plants).

  • FuryMaker@lemmy.world

    Lately, I just wish it didn’t lie or make stuff up. And when you draw attention to false information, it often doubles down, or apologises and then repeats the same BS.

    If it doesn’t know something, it should just admit it.

    • Croquette@sh.itjust.works

      LLMs don’t know that they are wrong. They just mimic how we talk, and there is no conscious choice behind the words used.

      They just try to predict which word to use next, trained on an ungodly amount of data.
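
      As a toy illustration, “predicting the next word” looks something like the sketch below. Every number and word in it is invented; a real LLM computes its scores with billions of learned parameters, but the key point is the same: nothing in the loop checks whether the output is true.

      ```python
      import math
      import random

      # Hypothetical raw scores a model might assign to candidate next words
      # for the prompt "The capital of France is ___". Invented for illustration.
      vocab = ["Paris", "London", "banana", "7"]
      logits = [5.1, 3.2, 0.4, 1.0]

      # Softmax turns the raw scores into a probability distribution.
      exps = [math.exp(x) for x in logits]
      probs = [e / sum(exps) for e in exps]

      # The next word is sampled by probability. "banana" isn't ruled out as
      # false -- it's merely unlikely. There is no truth check anywhere.
      next_word = random.choices(vocab, weights=probs)[0]
      print(dict(zip(vocab, [round(p, 3) for p in probs])), "->", next_word)
      ```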