• gustofwind@lemmy.world · 25 days ago

    until you have a coworker who loves using AI and produces an ungodly amount of work product in barely any time, and now you have to keep up

    • GreenKnight23@lemmy.world · 25 days ago

      I don’t keep up, I just stop interacting.

      I had someone spit out an insane amount of requirements for a project at me. I ignored them and moved on with my day.

      if someone brings me actionable tasks, I’ll work the tasks. if they give me busywork slop, it goes right on the pile of bullshit and gets ignored.

      my evals are reliant on deliverable goals and tasks, not sloppy bullshit.

      that said, if they want me to work from a slopument I’ll give them exactly what they slopped together and the best part is that I’ll have a paper trail of slop to point the finger away from myself.

  • friend_of_satan@lemmy.world · 26 days ago

    I’m so sick of fixing AI slop code, especially because there’s no love for people who fix the slop, only for the people who shipped the slop.

    • mrgoosmoos@lemmy.ca · 26 days ago

      Hell, I’m sick of fixing slop work from actual people.

      I am now semi-convinced that half of my co-workers are AI bots, due to some of the dumb shit that they say.

      like literally AI hallucinations and reversals, coming from real people

        • mrgoosmoos@lemmy.ca · 26 days ago

          the people who annoy me at work are the ones who don’t learn, though

          like how many times can you reforward the same chat messages explaining exactly what they’re asking you, and the question is “where do I find ____” lol

  • Rekorse@sh.itjust.works · 26 days ago

    Bosses aren’t oblivious; AI isn’t for the workers’ benefit. They need the workers to use the AI so it can improve and begin to replace them.

    • queermunist she/her@lemmy.ml · 26 days ago

      That’s part of how they’re oblivious - mass adoption won’t actually improve LLMs beyond a certain point, and we’re long past it. The tech is fundamentally limited in what it can actually do, and instead of recognizing the limitations and working within them, they’re pretending we’re gonna have AGI.

      • Rekorse@sh.itjust.works · 26 days ago

        No, but they don’t need mass adoption; they need their workers to figure out a way the tool can replace their work. Plenty of people will work to replace themselves, unfortunately. Whether it works out or not doesn’t matter; those types of businesses will just try the next “tool” that replaces labor when it comes along, too.

        • queermunist she/her@lemmy.ml · 26 days ago

          Whether it works or not matters to the investors. If it doesn’t work, they’ve sunk a lot of money, labor, and time into a boondoggle. They want to replace labor, but they want profit too. Businesses aren’t just infinite money machines that can keep throwing shit at the wall until something sticks; eventually the investors pull out when they don’t see the returns they expected.

          That said, it’s up to us to make sure the bosses don’t ride this out on golden parachutes.

          When the economy collapses we need to put them all in prison.

          • Rekorse@sh.itjust.works · 26 days ago

            Sure that’s all bad for AI in the long term, and maybe a few bosses in the short term, but the workers that are being targeted for replacement won’t stop being targeted for replacement. People should be abandoning companies that are doing this, but most don’t have the luxury to just quit their jobs. I think we focus far too much on the tools we use to replace people and not enough on the people who want to use tools to replace people. We could just stop supporting those people.

            • queermunist she/her@lemmy.ml · 26 days ago

              Okay, but that goes back to what I said before, the bosses are oblivious to how poor this technology actually is and are sleepwalking into a disaster. They’re trying to replace their workers with something that can’t replace them and this will have serious consequences, not just for the AI companies, but for the entire economy.

              That’s why we need to make sure they don’t escape on their golden parachutes.

              In the meantime, the workers could organize and demand the bosses stop trying to replace them.

  • despite_velasquez@lemmy.world · 26 days ago

    It’s undeniable that AI is great at problems with tight feedback loops, like software engineering.

    Most jobs don’t have the tight feedback loops that software engineering has.

    • SocialMediaRefugee@lemmy.world · 26 days ago

      It is pretty bad at things that are “black boxes” that require documentation to analyze. For instance, I was trying to debug an SSL issue with DB2 (IBM database), and ChatGPT and Copilot gave conflicting answers. They frequently gave commands that didn’t work, with great confidence of course. I had to keep feeding errors back to them. I even had to remind them that I was working in Linux and not Windows.
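A quick way to take the chatbot out of the loop on the TLS side is to ask openssl directly what the certificate says. This is only a hedged sketch: the hostname is a placeholder, and 50001 is merely a common choice for a DB2 SSL port, not a given; the runnable part below generates a throwaway self-signed certificate so it works offline.

```shell
# Against a live DB2 listener you would run something like:
#   openssl s_client -connect db2host.example:50001 -showcerts </dev/null
# (hostname and port are placeholders; check your instance's actual
# SSL service configuration).
# Self-contained demo: create a throwaway self-signed cert, then
# inspect it the same way you would inspect the one a server presents.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout key.pem -out cert.pem -subj "/CN=db2host.example" 2>/dev/null
openssl x509 -in cert.pem -noout -subject -enddate
```

Checking the certificate and handshake yourself gives you ground truth to feed back, instead of guessing which of two conflicting AI answers is right.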

      • AlecSadler@lemmy.dbzer0.com · 26 days ago

        FWIW, ChatGPT and Copilot are two of the worst AIs out there for things like this. At many gigs I’ve had they’re outright banned for use because of how garbage they are.

          • AlecSadler@lemmy.dbzer0.com · 25 days ago

            Claude Code, or Claude in general: notably Sonnet 4.5 and Opus 4.5.

            Gemini is also solid, though for coding I found it weaker than Claude; for heavy inference and reasoning it can be great, and it also supports a larger context window.

    • CandleTiger@programming.dev · 26 days ago

      It’s undeniable that AI is great at problems with tight feedback loops, like software engineering

      I, CandleTiger, do hereby deny that AI is great at software engineering.

    • laranis@lemmy.zip · 25 days ago (edited)

      One nit: they’re good at writing code. Specifically, code that has already been written. Software Engineers and Computer Scientists still need to exist for technology to evolve.

      • Mirror Giraffe@piefed.social · 26 days ago

        This. I was setting up a new service, and it scaffolded all the endpoints from the Swagger spec and helped me set up tooling and tests within a few hours. It also helped me research what has happened in the area since my last ms.

        Now, when adding the business logic, I’ll be doing most of it myself, as it tends to be a bit creative about what I’m trying to achieve and tends to forget to check my models, etc.

        It’s great at generic code, but has issues with specifics.

        • Infrapink@thebrainbin.org · 26 days ago

          I feel like if your code is so generic a generator can make it, you could achieve the same results faster, more reliably, and more energy-efficiently with a shell script or two.
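To make that concrete, here is a rough, self-contained sketch of “a shell script or two” doing endpoint scaffolding. Everything in it is hypothetical: the api.yaml sample, the handlers/ layout, and the naive grep that stands in for a real OpenAPI parser (a real spec would need yq, openapi-generator, or similar).

```shell
#!/bin/sh
# Hypothetical sketch: emit one stub file per path in a toy spec.

# Inline sample spec so the sketch is self-contained:
cat > api.yaml <<'EOF'
paths:
  /users:
    get: {}
  /orders:
    get: {}
EOF

mkdir -p handlers
# Pull the top-level paths out of the spec and emit one stub each.
# (Naive text matching, not real YAML parsing.)
grep -E '^  /' api.yaml | tr -d ' /:' | while read -r name; do
  cat > "handlers/${name}.sh" <<EOF
#!/bin/sh
echo "TODO: business logic for /${name}"
EOF
done
ls handlers
```

For a one-off it is crude, but it is fast, deterministic, and cheap to rerun, which is the point being made.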

          • Mirror Giraffe@piefed.social · 26 days ago

            A specific tool should definitely beat a generic one. If I were doing these things all the time, I would consider building something like that; scaffolding based on a Swagger spec seems pretty easily achievable. But since I do this every other year at most, and the setup would need to be updated with new techniques, it’s far from a valuable time investment for me to write.

  • Jankatarch@lemmy.world · 26 days ago

    And the only reason they can get away with not charging for the training and computation costs is that a bunch of rich people are essentially gambling a small portion of their generational wealth.

  • KneeTitts@lemmy.world · 26 days ago

    AI can be useful in certain circumstances; it’s great at speeding up research, for example (kinda proving it’s just a glorified search engine at this stage), but in my experience most business owners are way too dumb to know what is or is not useful to their employees’ work processes.

      • LwL@lemmy.world · 26 days ago

        Research in the sense of researching a problem you’re having, or getting an idea of how to start an implementation for something, is a great use case, and pretty much the only one I regularly use them for. Search engines usually fail to produce anything useful when describing the problem requires complex grammar.

        Research in an academic sense, yeah, they’re horrible.

          • LwL@lemmy.world · 25 days ago

            It really isn’t, but sure. If you’re dumb enough to assume what it spits out is gospel, it’s dogshit for that purpose too, but that’s a user issue. Not like random Stack Overflow answers are always exactly what you need either lol

            • FiniteBanjo@feddit.online · 25 days ago

              The irony of you calling it an ID-ten-T error, located between peripheral and chair, while you vehemently defend the use of AI to study a subject. You’re truly irredeemable.

              • Snowclone@lemmy.world · 25 days ago

                You are again misunderstanding what they are saying. They clearly said it had use as a search engine, not for research in any academic sense.

                • FiniteBanjo@feddit.online · 25 days ago

                  I am, again, not misunderstanding you, idk how you could possibly construe that. You’re a filthy slopper. It’s garbage for searching as it will inevitably misinform people with its hallucinations unlike an actual search engine.

  • fibojoly@sh.itjust.works · 26 days ago

    Our new tech lead loves fucking AI, which lets him refactor our Terraform (I was already doing that), write pipelines in GitLab, and lots of other shiny cool things (after many, many, many attempts, if his commit history is any indication).

    Funnily, he won’t touch our legacy code. Like, he just answers “that’s outside my perimeter” when he’s clearly the one who should be helping us handle that shit. Also, it’s for a mission-critical part of our company. But no, outside his perimeter. Gee, I wonder why.

  • Itdidnttrickledown@lemmy.world · 25 days ago

    I have a simple answer for why managers think it’s smart and workers think it’s dumb. The managers see all kinds of documentation from workers, and to them the AI slop looks the same. It looks the same because the managers never take the time to comprehend what they are reading.

      • Itdidnttrickledown@lemmy.world · 25 days ago

        Without a doubt. The skill set to be in management has nothing to do with intelligence. It has to do with selfish manipulation and a lack of empathy. That way you can be cruel without losing a second of sleep.

    • bampop@lemmy.world · 25 days ago

      I think it’s more that AI is a soulless bullshit generator with no imagination and no deep understanding, and managers tend to notice that it can do most of the work they do. There’s a lot of skill overlap with management there, so naturally they would be impressed with it.

  • TrackinDaKraken@lemmy.world · 26 days ago

    Management never has a clue what their employees actually do day-to-day. We’re just another black box to them, tracked on a spreadsheet by accounting. Stuff goes in, stuff comes out, you can’t explain that.

    • luciferofastora@feddit.org · 25 days ago

      I’m vaguely on the periphery of a project to create a sort of info-hub chat-bot. The project lead was really enthusiastic about getting me on board and helping me develop my skills in that direction.

      Apparently there’s a lot of people calling the wrong departments about stuff. Think along the stereotype of people calling the IT “Help Desk” for a broken light. The bot should help them find the right info, or at least the right department.

      The issue, according to management, is that information is spread all over the place. Some departments use Confluence, others maintain pages on the intranet webserver. One has their own platform for FAQ and tickets, except it’s not actually for tickets any more, which you’ll only find out when they unhelpfully close your ticket with that remark. Wanna guess what confused users do? Right, call some other department.

      The obvious solution would be getting each department to be more transparent and consistent about their information, responsibilities and ways to reach them, possibly even making them all provide their info on some shared knowledgebase with a useful search function. But that would require people to change their stuck habits.

      So instead they develop a bot supposed to know all the knowledgebases and access them for users, answer simple queries, point them the right way for complex ones and potentially even help them raise tickets with the relevant departments. Surely, that will improve things?

      The one time I tried it, I asked it a question that would have been my area of responsibility to see if people would actually find me or at least the general department. Yeah, nah, it pointed me at someone not just unrelated to that function or department, but also responsible for a different geographical area. IDK what they trained it on, but it probably didn’t include any mentions of that topic, which is fair, given it’s still in development.

      But instead of saying “I have no information on that” or directing me to a general contact, it confidently told me to do the thing it’s supposed to fix: bother the wrong person.

      And the project lead wonders why I didn’t immediately jump at the offer to join his department.

      • Jtotheb@lemmy.world · 25 days ago

        My wife, who works at a college, was recently trying to locate some information from an old college newspaper that may not have been digitized yet and used their new work AI for help finding it. It directed her to the school’s archives, but provided made-up contact info for the office, and also recommended she contact herself.

    • ThomasWilliams@lemmy.world · 25 days ago

      It’s really the middle management they don’t understand, not the floor staff: the people who do all the checking and compliance, which top management now thinks can be replaced by AI.

  • nucleative@lemmy.world · 26 days ago

    Any boss ramming a tool down their workers’ throats without understanding it or validating its usefulness is not a particularly good boss.

    • gravitas_deficiency@sh.itjust.works · 26 days ago

      There are bosses, and then there are directors, managers, and C-suites. Essentially, the people who don’t do any real fucking work are super impressed by it.