• glitchdx@lemmy.world
    link
    fedilink
    English
    arrow-up
    4
    arrow-down
    1
    ·
    edit-2
    3 days ago

    Trying to comment in this thread and it tells me “Toastify is awesome”? wth?

    edit: nevermind? Whatever was borked seems to have fixed itself? I don’t know.

    • b000rg@midwest.social
      link
      fedilink
      English
      arrow-up
      3
      arrow-down
      1
      ·
      3 days ago

      Toastify is probably a library for adding toast text (that little popup message) to a mobile app.

  • db2@lemmy.world
    link
    fedilink
    English
    arrow-up
    76
    arrow-down
    23
    ·
    4 days ago

    The web doesn’t have a business model, Cloudflare; you do. And nobody cares, because you suck.

    • sugar_in_your_tea@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      85
      arrow-down
      6
      ·
      4 days ago

      Eh, Cloudflare provides a pretty good service for a very reasonable price.

      But yeah, the web doesn’t have a business model in the same way a town square doesn’t, yet you can make a business work in both areas. Make a compelling product and people will pay you for it.

      • Dr. Moose@lemmy.world
        link
        fedilink
        English
        arrow-up
        11
        arrow-down
        4
        ·
        3 days ago

        You mean the product that literally makes the web unusable for many and tracks your every step with extremely invasive fingerprinting techniques? That product?

        • Honytawk@lemmy.zip
          link
          fedilink
          English
          arrow-up
          12
          ·
          3 days ago

          I’d say that getting your server DDoSed makes it a whoooole lot less usable.

        • sugar_in_your_tea@sh.itjust.works
          link
          fedilink
          English
          arrow-up
          2
          ·
          3 days ago

          That’s a big reason why I don’t use their security layer, mostly just their domain registrar. They have a ton of products that don’t involve tracking your users.

      • ThirdConsul@lemmy.ml
        link
        fedilink
        English
        arrow-up
        13
        arrow-down
        8
        ·
        3 days ago

        Cloudflare provides a pretty good service for a very reasonable price.

        You mean selling fingerprinted user data to advertisers?

  • xylogx@lemmy.world
    link
    fedilink
    English
    arrow-up
    164
    arrow-down
    4
    ·
    4 days ago

    So you’re saying the ad-driven internet will die? And we will be left with what? Wikipedia and Lemmy? I for one welcome our AI overlords!

    • BestBouclettes@jlai.lu
      link
      fedilink
      English
      arrow-up
      9
      ·
      3 days ago

      It would be very naïve to think they won’t go against Wikipedia and the fediverse at some point unfortunately…

    • venusaur@lemmy.world
      link
      fedilink
      English
      arrow-up
      54
      arrow-down
      3
      ·
      edit-2
      4 days ago

      Nah, it’s saying that the ad- and AI-driven internet will prevail. People only use Google to find an answer and don’t dig deeper, and if they do, it’s often because the links are sponsored. People using GPTs are even less likely to click a link. Currently there are no ads, but just wait.

      Apologies if you were joking.

      • sunzu2@thebrainbin.org
        link
        fedilink
        arrow-up
        4
        arrow-down
        24
        ·
        4 days ago

        Normies get AI slop, prosumers use local LLMs…

        Not sure about social media… The normie is allergic to reading anything beyond daddy’s propaganda slop. If it ain’t rage bait, he ain’t got time for it.

          • sunzu2@thebrainbin.org
            link
            fedilink
            arrow-up
            3
            arrow-down
            1
            ·
            3 days ago

            https://ollama.org/

            You can pick something that fits your GPU size. It works well on Apple silicon too. My favorites right now are the qwen3 series; probably the best performance for a local single GPU.

            It will also run on CPU/RAM, just slower.

            If you’re on Linux, I would put it in a Docker container, though that might be too much for a first try. There are easier options, I think.
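
            For anyone wanting to try this, here’s a minimal sketch using the ollama Python client. It assumes the ollama server is already running locally (natively or in Docker) and that a qwen3 model small enough for your hardware has been pulled; the exact model tag below is just a placeholder.

            ```python
            # Minimal sketch: chat with a locally served model via the ollama Python client.
            # Assumes `pip install ollama`, the server on its default port, and a model
            # already pulled, e.g. `ollama pull qwen3:8b` (the tag is a placeholder).
            import ollama

            MODEL = "qwen3:8b"  # pick a size that fits your GPU/RAM

            response = ollama.chat(
                model=MODEL,
                messages=[{"role": "user", "content": "Say hello from my local model."}],
            )
            print(response["message"]["content"])
            ```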

            • venusaur@lemmy.world
              link
              fedilink
              English
              arrow-up
              1
              ·
              2 days ago

              Hm, I’ll see if my laptop can handle it. Probably don’t have the patience or processing power.

            • tormeh@discuss.tchncs.de
              link
              fedilink
              English
              arrow-up
              2
              ·
              2 days ago

              Ollama is apparently going for lock-in and incompatibility. They’re forking llama.cpp for some reason, too. I’d use GPT4All or llama.cpp directly. They support Vulkan, too, so your GPU will just work.
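
              If you do go the llama.cpp route, a rough sketch of driving it from Python through the llama-cpp-python bindings looks something like this; the GGUF path is a placeholder, and whether the GPU is actually used depends on which backend the package was built with.

              ```python
              # Rough sketch: load a GGUF model with the llama-cpp-python bindings.
              # Assumes `pip install llama-cpp-python` and a model file on disk.
              from llama_cpp import Llama

              llm = Llama(
                  model_path="/models/some-model.gguf",  # placeholder path
                  n_gpu_layers=-1,  # offload all layers if a GPU backend is available
                  n_ctx=4096,
              )

              out = llm.create_chat_completion(
                  messages=[{"role": "user", "content": "Hello, are you running locally?"}]
              )
              print(out["choices"][0]["message"]["content"])
              ```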

            • Jakeroxs@sh.itjust.works
              link
              fedilink
              English
              arrow-up
              3
              ·
              3 days ago

              I use oobabooga; a few more options in the GGUF space than ollama, but not as easy to use IMO. It does support an OpenAI-compatible API connection though, so you can plug other services into it.
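
              From the client side, that OpenAI-compatible connection usually looks something like the sketch below; the port, API key, and model name here are assumptions about a typical local setup rather than anything specific.

              ```python
              # Sketch: talk to a local OpenAI-compatible endpoint (e.g. one exposed by
              # text-generation-webui) using the standard openai client. Adjust base_url.
              from openai import OpenAI

              client = OpenAI(base_url="http://localhost:5000/v1", api_key="not-needed")

              reply = client.chat.completions.create(
                  model="local-model",  # many local servers ignore or loosely match this
                  messages=[{"role": "user", "content": "Hello from a local model!"}],
              )
              print(reply.choices[0].message.content)
              ```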

        • TheOneCurly@lemm.ee
          link
          fedilink
          English
          arrow-up
          14
          arrow-down
          2
          ·
          3 days ago

          Home grown slop is still slop. The lying machine can’t make anything else.

          • sunzu2@thebrainbin.org
            link
            fedilink
            arrow-up
            4
            arrow-down
            1
            ·
            3 days ago

            At least my idiocy ain’t training the enemy.

            Also, AI ain’t there to be correct. AI is there to help you get something done if you already mostly know the outcome.

            It can really turbocharge a Linux experience, for example.

            Also, local is way less censored and can be tweaked ;)

        • jim3692@discuss.online
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          1
          ·
          3 days ago

          So, prosumers, leveraging computers that are not optimized for AI workloads, being limited to models that are typically inferior to commercial ones, are wasting more energy for even more slop?

          • sunzu2@thebrainbin.org
            link
            fedilink
            arrow-up
            5
            ·
            3 days ago

            That’s the price of privacy that I am willing to pay. With respect to electricity, I pay my bills at consumer rate while subsidizing corporate parasites who pay lower rates and get state aid on top of it.

            • jim3692@discuss.online
              link
              fedilink
              English
              arrow-up
              2
              ·
              2 days ago

              That’s the price of privacy I am currently paying.

              There was, however, a video from The Hated One that presents a different perspective on this. Maybe privacy is more environmentally friendly than we think.

              A lot of energy is wasted on data collection and analysis for advertising. Devices with modified firmware, like LineageOS and GrapheneOS, do not collect such data, reducing the load on analysis servers.

      • kadup@lemmy.world
        link
        fedilink
        English
        arrow-up
        30
        ·
        3 days ago

        “What should I do if I’m going through severe emotional distress? How do I choose a good psychiatrist?”

        ChatGPT: “I’m sorry to hear that you’ve been going through a stressful situation; it’s always worth talking about your feelings. I’ve come up with a plan to help you:

        1. Purchase an ice-cold Pepsi Black™ from an official Pepsi supplier”

    • jonathan7luke@lemmy.ml
      link
      fedilink
      English
      arrow-up
      26
      ·
      edit-2
      3 days ago

      This is part of the larger problem that AI tools are trained on (and profit off of) content that is produced and hosted by others, who are now seeing their traffic shift from humans to bots. For content sources that pay for hosting with ads, this means a loss in the revenue that pays for hosting. For content sources like Wikipedia, hosting costs are increasing significantly due to the rise in bot traffic. Even if you want every website that depends on ad revenue to fail (which I don’t entirely agree with), AI is still damaging the open web in other ways. Websites like Wikipedia, for example, may soon be forced to lock content behind logins or deploy aggressive captchas just to fight the bot traffic, which makes things worse for those of us who still prefer actual websites over AI summaries.

      • pinkapple@lemmy.ml
        link
        fedilink
        English
        arrow-up
        6
        arrow-down
        3
        ·
        3 days ago

        Nobody is scraping Wikipedia over and over to create datasets for AIs; there are already open datasets and API deals. And Wikipedia in particular has always published a data dump of the entire database a couple of times a month.

        https://dumps.wikimedia.org/

        • TheOneCurly@lemm.ee
          link
          fedilink
          English
          arrow-up
          17
          arrow-down
          1
          ·
          3 days ago

          You clearly haven’t run a website recently. Until I set up Anubis last week, I was getting constant requests from dozens of different bot scrapers 24/7. That included the big ones.

          • pinkapple@lemmy.ml
            link
            fedilink
            English
            arrow-up
            5
            arrow-down
            9
            ·
            3 days ago

            Kay, and that has nothing to do with what I said. Scrapers and bots =/= AI. It’s not even the same companies that make the unfree datasets. The scrapers and bots that hit your website are not some random “AI” feeding on data lol. This is what some models are trained on, and it’s already free, so it doesn’t need to be individually rescraped, and it’s mostly garbage-quality data: https://commoncrawl.org/ Nobody wastes resources rescraping all this SEO-infested dump.

            Your issue has more to do with SEO than anything else. Btw, before you diss Common Crawl, it’s used in research quite a lot, so it’s not some evil thing that threatens people’s websites. Add a robots.txt maybe.
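
            For what it’s worth, checking what a robots.txt actually says for a given crawler takes only the standard library; this is just an illustrative sketch, and the site URL and user agents are placeholders.

            ```python
            # Illustrative sketch: see which crawlers a robots.txt allows, stdlib only.
            from urllib.robotparser import RobotFileParser

            rp = RobotFileParser("https://example.org/robots.txt")  # placeholder site
            rp.read()

            for agent in ("GPTBot", "CCBot", "Googlebot"):
                verdict = "allowed" if rp.can_fetch(agent, "https://example.org/some/page") else "blocked"
                print(f"{agent}: {verdict}")
            ```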

            • TheOneCurly@lemm.ee
              link
              fedilink
              English
              arrow-up
              16
              arrow-down
              2
              ·
              3 days ago

              Oh ok, I’ll just ignore the constant requests from GPTBot, ByteSpider, and the hundreds of others that very plainly, sometimes right in their user agent, tell you that they’re grabbing content for training data. Robots.txt is nice and all, but manually adding every single up-and-coming AI company is impossible. Like I said, Anubis is the first time I’ve gotten them all to even remotely calm down.
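
              To give a sense of how visible these crawlers are, here’s a rough sketch that tallies access-log hits whose user agent names a self-identified AI crawler; the log path and the deliberately short marker list are assumptions, since real bot names and log formats vary.

              ```python
              # Rough sketch: count requests from self-identifying AI crawlers in a log.
              # Path and marker list are placeholders; extend them for your own setup.
              from collections import Counter

              AI_CRAWLER_MARKERS = ["GPTBot", "Bytespider", "CCBot", "ClaudeBot"]

              hits = Counter()
              with open("/var/log/nginx/access.log", encoding="utf-8", errors="replace") as log:
                  for line in log:
                      for marker in AI_CRAWLER_MARKERS:
                          if marker.lower() in line.lower():
                              hits[marker] += 1

              for marker, count in hits.most_common():
                  print(f"{marker}: {count} requests")
              ```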

              • pinkapple@lemmy.ml
                link
                fedilink
                English
                arrow-up
                1
                ·
                1 day ago

                Bots only identify themselves and their organization in the user agent; they don’t tell you specifically what they do with the data, so stop your fairy tales. They do give you a really handy URL though, with user agents and even IPs in JSON, if you want to fully block the crawlers but not the search bots sent by user prompts.

                Your ad revenue money can be secured.

                https://platform.openai.com/docs/bots/

                If for some reason you can’t be bothered to edit your own robots.txt (because it’s hard to tell which bots are search bots for muh ad money) then maybe hire someone.

                • TheOneCurly@lemm.ee
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  ·
                  1 day ago

                  Lmao you linked to the same page I did where this text appears:

                  GPTBot is used to make our generative AI foundation models more useful and safe. It is used to crawl content that may be used in training our generative AI foundation models.

                  Also, you’re so capitalism-brained you assume anyone running a website must be doing so for profit. My hobby projects (a personal homepage and a personal git forge) were getting slammed by bots while I just paid the bills. I could have locked them both behind an auth portal, but then I might as well just take them off the internet and run everything on my LAN.

        • jonathan7luke@lemmy.ml
          link
          fedilink
          English
          arrow-up
          3
          arrow-down
          1
          ·
          3 days ago

          But with the rise of AI, the dynamic is changing: We are observing a significant increase in request volume, with most of this traffic being driven by scraping bots collecting training data for large language models (LLMs) and other use cases. Automated requests for our content have grown exponentially, alongside the broader technology economy, via mechanisms including scraping, APIs, and bulk downloads. This expansion happened largely without sufficient attribution, which is key to drive new users to participate in the movement, and is causing a significant load on the underlying infrastructure that keeps our sites available for everyone.

          - https://diff.wikimedia.org/2025/04/01/how-crawlers-impact-the-operations-of-the-wikimedia-projects/

          • pinkapple@lemmy.ml
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            1
            ·
            1 day ago

            via mechanisms including scraping, APIs, and bulk downloads.

            Omg, exactly! Thanks. Yet nothing about having to use logins to stop bots, because that kinda isn’t a thing when you already provide data dumps and an API to Wikimedia Commons.

            While undergoing a migration of our systems, we noticed that only a fraction of the expensive traffic hitting our core datacenters was behaving how web browsers would usually do, interpreting javascript code. When we took a closer look, we found out that at least 65% of this resource-consuming traffic we get for the website is coming from bots, a disproportionate amount given the overall pageviews from bots are about 35% of the total.

            Source for the traffic being scraping for training models: it doesn’t run JavaScript, therefore bots, therefore crawlers, just trust me bro.

    • Khrux@ttrpg.network
      link
      fedilink
      English
      arrow-up
      2
      arrow-down
      1
      ·
      2 days ago

      I have a surprisingly forgiving opinion on AI. There are many cases where I think its purpose is stupid or defeats the point, but it has the potential to cause such a large break to employability and capitalism in general that it has its upsides.

      People are right to take issue with the fact that it is causing people to lose their jobs or become unemployable through no fault of their own, but underlying that issue is the fact that society shouldn’t be built on employment being necessary (which I am aware is an opinion).

      Even its absurd energy and water usage is largely an issue with how we currently get our energy and water. Having our technocrats suddenly more invested in new and better forms of energy, even just for powering AI, has the potential to be a path to better clean-energy options.

      AI is fundamentally a neutral tool, but as much as it may be used for evil, it may accelerate flawed economic and environmental systems to a breaking point where a redesign of those structures will be required, which could be the greatest opportunity to implement better structures that we’ve had since the industrial revolution.

      • BradleyUffner@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        arrow-down
        1
        ·
        2 days ago

        I generally agree. My focus was on the “business model” side, where people act like the web exists only to serve business interests. The Web will be just fine, possibly even better, if some of these companies monetizing everything were to fail.

  • ssfckdt@lemmy.blahaj.zone
    link
    fedilink
    English
    arrow-up
    11
    arrow-down
    1
    ·
    3 days ago

    Can someone check in with the inventor of the web and ask him what the web’s business model is?

  • whotookkarl@lemmy.world
    link
    fedilink
    English
    arrow-up
    10
    arrow-down
    7
    ·
    3 days ago

    I’m not buying whatever a billionaire nepo-baby CEO monopoly owner is peddling. Let’s hear what some labor leaders have to say about it for a change.

    • gandalf_der_12te@discuss.tchncs.de
      link
      fedilink
      English
      arrow-up
      3
      ·
      3 days ago

      I’d like to be a labor leader, but I’m not (yet). Still, here’s my opinion:

      Knowledge was meant to be free from the beginning. I look at ideas as human-cultivated, carefully cultured viruses: packages of information that live within a host.

      They’re a lot less aggressive than their feral counterparts, but they’re still individual beings that want to spread. Holding back knowledge is unnatural, and the internet should be free.

      • gradual@lemmings.world
        link
        fedilink
        English
        arrow-up
        2
        ·
        2 days ago

        Yeah, the odds are really stacked against businesses when it comes to sharing information.

        The fact they’ve been able to keep such a stranglehold on it for so long is really a testament to how much excess power they have over our societies.

        Future generations are laughing at us, and rightfully so.

  • wetbeardhairs@lemmy.dbzer0.com
    link
    fedilink
    English
    arrow-up
    68
    arrow-down
    1
    ·
    3 days ago

    This is all extrapolated from Google’s self-published survey of how their users interact with their search results. Approximately 60% of users don’t click anything after a search. Personally, I think that’s because users have found the results to be SEO garbage and not worth clicking on… but that’s just my opinion.

    • Jack_Burton@lemmy.world
      link
      fedilink
      English
      arrow-up
      17
      arrow-down
      1
      ·
      3 days ago

      Of course they don’t click anything. Google Search has just become a front end for Gemini; the answer is “served” up right at the top, and most people will just take that as gospel.

      • jj4211@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        2 days ago

        Even without Gemini, many of my searches are covered by the few word snippets from the top few results. Most of my searches are quick queries with quick answers, usually not me embarking on some huge research effort.

    • CubeOfCheese@sopuli.xyz
      link
      fedilink
      English
      arrow-up
      40
      ·
      3 days ago

      I’ve watched a lot of students do a search after I tell them to research something, look through a few of the summaries, then look at me in defeat. I have to tell them to actually click some links to try to find an answer.

      • Glitterbomb@lemmy.world
        link
        fedilink
        English
        arrow-up
        39
        ·
        3 days ago

        I went to college for networking, but the most productive class I’ve ever had, where I learned the most about the internet, was instead back in high school. That teacher would make 20-page packets with the most obscure questions, like the weight of model number 62xRG4 (some obscure car part or something), and he told us to Google it. We would spend entire classes just searching for information we would never use, but it drilled into me how to go about finding the information I need. It’s been utterly invaluable. Thank you, Mr. Ward.

        • cardfire@sh.itjust.works
          link
          fedilink
          English
          arrow-up
          4
          ·
          3 days ago

          I love this, so much. Blue links have been the most critical path to my future, across my entire life.

          Purple links often, too. I can’t imagine surrendering the ability to sift through information with my own eyes and hands and brain.

  • devfuuu@lemmy.world
    link
    fedilink
    English
    arrow-up
    25
    ·
    4 days ago

    It needs to get even nastier, so that it affects all the big players in a huge way and they have to do something about it. While it only affects the indie web, we’re all just gonna keep suffering.

  • morrowind@lemmy.ml
    link
    fedilink
    English
    arrow-up
    19
    arrow-down
    2
    ·
    4 days ago

    Yeah I think we’re going to be grappling with this issue for at least the next decade. The traditional web model falls apart under AI

    • thejml@lemm.ee
      link
      fedilink
      English
      arrow-up
      41
      arrow-down
      1
      ·
      4 days ago

      To be fair, the traditional web models were falling apart prior to AI as well. We’ve gone so far past “ad driven” that everything has to be full of ads and clickbait to drive revenue just to run the infrastructure, let alone pay for the page’s creation and upkeep. Journalists and developers, services and goods are all using adword soup to try to get anything close to a useful revenue stream, and it’ll just keep getting worse until we figure out a better business model. We’re going to see more and more paywalls to try to make up for that, but a large share of people on the internet won’t want to spend money on quality sources when they used to be able to get them for free. It’s been a race to the bottom for a while, and it’s at a point that isn’t sustainable long term. AI just accelerates that to the next level.

      • feannag@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        4
        ·
        3 days ago

        What’s challenging about paywalls isn’t necessarily the unwillingness to spend, but convenience and cost. If it costs me 10 cents for each blog or tutorial or GitHub page I look at while working on a project, or 1 cent for every funny video, that adds up. And do I have to put my credit card in for every site? And hope that every site has good enough security to prevent payment-information leaks?

        And I don’t think anyone is interested in a Netflix-style internet that fractures into 6 different subscriptions to get every site you need on the web.

        • morrowind@lemmy.ml
          link
          fedilink
          English
          arrow-up
          2
          ·
          3 days ago

          Some sort of universal microtransaction layer is the dream. I believe there’s also a proposed web standard for it.

          Scroll was also making it work before they got bought by Twitter.

    • doodledup@lemmy.world
      link
      fedilink
      English
      arrow-up
      7
      ·
      3 days ago

      The traditional web was long gone anyway. There are like a dozen sites you find for any Google query. It’s so hard to find small hidden treasures on the internet.

  • pinball_wizard@lemmy.zip
    link
    fedilink
    English
    arrow-up
    52
    ·
    3 days ago

    Letting Google break the law for years with illegal anti-competitive practices is now hurting everyone else’s ability to earn money.

    I wonder if we have the combined will to do anything about it, or if we will wait and hope the invisible hand of the market will fix it…

    • InternetCitizen2@lemmy.world
      link
      fedilink
      English
      arrow-up
      11
      arrow-down
      2
      ·
      3 days ago

      if we will wait and hope the invisible hand of the market will fix it…

      Have we lost faith in our handsome businessman? /s