• pezhore@infosec.pub · +132 · 16 hours ago

    I was just commenting on how shit the Internet has become as a direct result of LLMs. Case in point - I wanted to look at how to set up a router table so I could do some woodworking. The first result started out halfway decent, but the second section switched abruptly to something about routers having wifi and Ethernet ports - confusing network routers with the power tool. Any human/editor would catch that mistake, but here it is.

    I can only see this get worse.

      • pezhore@infosec.pub · +1 · 15 minutes ago

        I’d say it was weird, not shit. It was hard to find niche sites, but once you did they tended to be super deep into the hobby, sport, movies, or games.

        SEO (search engine optimization) was probably the first step down this path, where people would put white text on a white background with hundreds of words that they hoped a search engine would index.

    • null_dot@lemmy.dbzer0.com · +79 · 15 hours ago

      It’s not just the internet.

      Professionals (using the term loosely) are using LLMs to draft emails and reports, and then other professionals (?) are using LLMs to summarise those emails and reports.

      I genuinely believe that the general effectiveness of written communication has regressed.

      • pezhore@infosec.pub · +37 · 15 hours ago

        I’ve tried using an LLM for coding - specifically Copilot for VS Code. About 4 out of 10 times it will accurately generate code, which means I spend more time troubleshooting, correcting, and validating what it generates than actually writing code.

        • piccolo@sh.itjust.works · +5/-2 · 11 hours ago

          I like using GPT to generate PowerShell scripts; surprisingly, it’s pretty good at that. It’s a small task, so it’s unlikely to go off into the deep end.

          • FauxLiving@lemmy.world · +2 · 56 minutes ago

            Like all tools, it is good for some things and not others.

            “Make me an OS to replace Windows” is going to fail; “Tell me the terminal command to rename a file” will succeed.

            It’s up to the user to apply the tool in a way that is useful. A person simply saying ‘My hammer is terrible at making screw holes’ doesn’t mean that the hammer is a bad tool; it tells you the user is an idiot.

        • TheBrideWoreCrimson@sopuli.xyz · +4/-1 · 9 hours ago

          I use it to construct regexes which, for my use cases, can get quite complicated. It’s pretty good at doing that.
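
          For illustration, the kind of pattern meant here might look something like the sketch below (a hypothetical Python example with a made-up log format, not the commenter’s actual use case):

              import re

              # Hypothetical use case: pull the timestamp, log level, and message
              # out of a log line - the kind of pattern people often ask an LLM to
              # draft and then verify by hand.
              LOG_LINE = re.compile(
                  r"^(?P<ts>\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}(?:\.\d+)?)\s+"
                  r"\[(?P<level>DEBUG|INFO|WARNING|ERROR|CRITICAL)\]\s+"
                  r"(?P<msg>.*)$"
              )

              m = LOG_LINE.match("2024-02-14 09:31:07.512 [ERROR] disk quota exceeded")
              if m:
                  print(m.group("ts"), m.group("level"), m.group("msg"))

          As elsewhere in the thread, the generated pattern still has to be validated against real inputs; that part the tool can’t do for you.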

        • kurwa@lemmy.world · +16/-1 · 13 hours ago

          I feel like it’s not that bad if you use it for small things, like single lines instead of blocks of code - like a glorified autocomplete.

          Sometimes it’s nice to not use it though because it can feel distracting.

          • Swedneck@discuss.tchncs.de · +18 · 10 hours ago

            Truly, who could have predicted that a glorified autocomplete program is best at performing autocompletion?

            Seriously, the world needs to stop calling it “AI”; it IS just autocomplete!

          • Phen@lemmy.eco.br · +13/-1 · 13 hours ago

            I find it most useful as a means of getting answers for stuff that has poor documentation. A couple of weeks ago ChatGPT gave me an answer whose keyword had no matches on Google at all. No idea where it got that from (probably some private codebase), but it worked.

  • T156@lemmy.world · +33 · 13 hours ago

    How did they estimate whether an LLM was used to write the text or not? Did they do it by hand, or using a detector?

    I ask since detectors are notorious for flagging ESL writers, or professionally written text, as AI-generated.

    • hypna@lemmy.world · +11/-1 · 13 hours ago

      I don’t know of any reason that the proportion of ESL writers would have started trending up in 2022.

    • Bob Robertson IX@lemmy.world · +18/-2 · 13 hours ago

      They just asked a few people if they thought it was written by an LLM. /s

      I mean, you can tell when something is written by ChatGPT, especially if the person isn’t using it for editing but is just asking it to write a complaint or request. They are likely only counting the most obvious cases, so the actual count is probably higher.

    • sober_monk@lemmy.world · +20 · edited · 6 hours ago

      They developed their own detector, described in another paper. Basically, it reverse-engineers texts based on their vocabulary to estimate how much of them was written by ChatGPT.
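
      In spirit, a vocabulary-based estimate like that can be sketched as a two-part mixture whose LLM fraction is fit by maximum likelihood (a much-simplified Python illustration, not the paper’s actual method or code; all names and inputs here are made up):

          import numpy as np
          from scipy.optimize import minimize_scalar

          def estimate_llm_fraction(word_counts, p_human, p_llm):
              """Fit the fraction alpha of LLM-written text in a corpus by treating
              its word frequencies as a mixture of a pre-LLM ("human") word
              distribution and an LLM word distribution. Illustrative only."""
              counts = np.asarray(word_counts, dtype=float)

              def neg_log_likelihood(alpha):
                  mix = alpha * np.asarray(p_llm) + (1.0 - alpha) * np.asarray(p_human)
                  return -np.sum(counts * np.log(mix + 1e-12))

              res = minimize_scalar(neg_log_likelihood, bounds=(0.0, 1.0), method="bounded")
              return res.x

      Here p_human and p_llm would be word-probability estimates taken from pre-LLM text and from LLM output respectively; the real detector described in the paper is considerably more involved.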

    • Lucky_777@lemmy.world · +5/-15 · 13 hours ago

      This. It’s a tool; embrace it and learn its limitations… or get left behind and become obsolete. You won’t be able to keep up with people who do use it.

      • Swedneck@discuss.tchncs.de · +8 · 10 hours ago

        dude you figuring out how to make the AI shit out something half-passable isn’t making you clever and superior, it’s just sad

      • ggtdbz@lemmy.dbzer0.com · +13/-1 · edited · 7 hours ago

        The invention of the torque wrench didn’t severely impede my ability to retrieve stored information, or everyone else’s, which would affect me by proxy.

        The tech four years ago was impressive, but for me it’s only done two things since becoming widely available: thinned the soup of Internet fun things, and made some people, disproportionately executives at my work, abandon a solid third of their critical thinking skills.

        I use AI models locally to turn around little jokes for friends; you could say I’ve put more effort into machine learning tools than many daily AI users. And I’ll be the first to call the article described by OP a true, shameful indictment of us as a species.

    • :3 3: :3 3: :3 3: :3@lemmy.blahaj.zone · +3/-1 · 7 hours ago

      What a dumb comparison. Calculators are just tools to do the same mechanical action as abaci, which were also just tools to speed up human mechanical actions of calculation.

      Writing, drawing, and research are creative, not mechanical, and offloading them to a tool is very different from offloading calculations to integrated circuits.

      • surph_ninja@lemmy.world · +2/-2 · 3 hours ago

        Most of what’s being offloaded to AI is boilerplate work. People underestimate how much of what we do every day is boilerplate, and it’s the perfect workload to offload so humans can focus more on the creative stuff.

        • :3 3: :3 3: :3 3: :3@lemmy.blahaj.zone · +2/-1 · 2 hours ago

          First off, no: even boilerplate gets done incorrectly sometimes. Software that ingests words and outputs words can’t check, say, official forms for correctness. Or test reports. You need a different type of reasoning for that.

          And then, even if we assume that AI can do these tasks correctly, boilerplate isn’t being just offloaded, it’s being created. Sure, we’ve had bullshit generators before. But now our bullshit machines are faster, and spew out more believable bullshit. Google has been ruined by generated slop. That’s work that wasn’t performed before, doesn’t improve our lives and yet is being done.

          • surph_ninja@lemmy.world · +3 · edited · 1 hour ago

            Generating boilerplate to get past the blank-page phase is not the same as trying to make it check forms for correctness. I didn’t suggest it should be used for that, so I don’t know what the strawman accomplishes other than making an irrelevant point.

            Many of you are very, very anti-AI. We get it. But that also leads to you having next to no experience with it, because you don’t practice enough to understand how to use it correctly, and it leads to y’all pulling nonsense criticisms out of your ass.

    • taiyang@lemmy.world · +21/-1 · 12 hours ago

      Not a good analogy, except there is one interesting parallel. My students who overuse a calculator in stats tend to do fine on basic arithmetic, but it does them a disservice when they try to do anything more elaborate. Granted, it should be able to follow PEMDAS, but for whatever weird reason it sometimes doesn’t. And when there’s a function that requires a sum and maybe multiple steps? Forget about it.

      Similarly, GPT can produce cliché copywriting, but good luck getting it to spit out anything complex. Trust me, I’m grading that drivel. So in that case, the analogy works.

        • taiyang@lemmy.world · +9/-1 · 8 hours ago

          LLMs by their very nature drive towards clichés and the most common answers, since they’re synthesizing data. Prompts can attempt to steer them away from that, but an LLM is ultimately a regurgitation machine.

          Actual AI might be able to eventually, but it would require a lot more human-like experience (and honestly, the chaos that gives us creativity). At that point it’ll probably be sentient, and we’d have bigger things to worry about, lol.

  • taiyang@lemmy.world · +73 · 12 hours ago

    I’m the type to be in favor of new tech, but this really is a downgrade after seeing it available for a few years. Midterms hit my classes this week and I’ll be grading them next week. I’m already seeing people try to pass off GPT output as their own work, but the quality of the answers has really dropped in the past year.

    Just this last week I was grading a quiz on persuasion where, for fun, I have students pick an advertisement to analyze. You know, to personalize the experience. This was after the Super Bowl, so we’re swimming in examples. It can even be audio, like a podcast ad, or a fucking bus bench, or literally anything else.

    60% of them used the Nike Just Do It campaign, not even a specific commercial. I knew something was amiss, so I asked GPT what example it would probably use if asked. Sure enough: Nike Just Do It.

    Why even cheat on that? The universe has a billion ad examples. You could even feed GPT one and have it analyze it for you. It’d still be wrong, because you have to reference the book, but at least it wouldn’t be as blatant.

    I didn’t unilaterally give them 0s, but they usually got it wrong anyway, so I didn’t really have to. I did warn them that using it this way on the midterm will likely get them in trouble, though, as it is against the rules. I don’t even care that much, because again, it’s usually worse quality anyway, but I have to grade this stuff, and I don’t want to suffer like a sci-fi magazine getting thousands of LLM submissions trying to win prizes.

    • Shou@lemmy.world · +19 · 11 hours ago

      As someone who has been a teenager: cheating is easy, and class wasn’t as fun as video games. Plus, what teenager understands the importance of an assignment, or of the skill it is supposed to make them practice?

      That said, I unlearned copying summaries when I heard I’d have to talk about the books I “read” as part of the final exams in high school. The examiner would ask very specific plot questions that often weren’t included in the online summaries people posted… unless those summaries were too long to read. We had no other option but to take it seriously.

      As long as there’s nothing in the assignment that GPT can’t do for them, they won’t learn how to write or do the work themselves.

      Perhaps use GPT to fail assignments? If GPT comes up with the same subject and writing style/quality, subtract points or give 0s.

      • taiyang@lemmy.world · +8 · 8 hours ago

        I have a similar background, and no surprise, it’s mostly a problem in my asynchronous class. The ones who attend my in-person lectures are much more engaged, since it’s a fun topic and I don’t enjoy teaching unless I’m also making them laugh. No dice with asynchronous.

        And yeah, I’m also kinda doing that with my essay questions, requiring stuff you sorta can’t just summarize. Critical thinking is important, even if you’re not just trying to detect GPT.

        I remember reading that detecting GPT usage isn’t really foolproof, and I’m not willing to fail anyone over it unless I have to. False positives and all that. Hell, I just used GPT as a sounding board for a few new questions I’m writing, and its advice wasn’t bad. There are good ways to use it, just… you know, not so stupidly.

      • TropicalDingdong@lemmy.world · +2 · 1 hour ago

        Yeah, I’ve got around 40 trees I’ve planted in our yard. Thing was, when I bought this one, it was labeled as a lychee. Then it started making soursop flowers.

        • Paradachshund@lemmy.today · +1 · 46 minutes ago

          Wow, 40 trees! You must have a big property.

          I live in the Pacific Northwest, so no tropical fruit for me 😭 We’ve got good berries and stone fruit here, though.

  • msage@programming.dev · +47 · 9 hours ago

    I just want to point out that there were text generators before ChatGPT, and they were ruining the internet for years.

    Just like there are bots on social media pushing a narrative, humans are being alienated from every aspect of modern society.

    What is a society for, when you can’t be a part of it?