until you have a coworker that loves using AI and produces an ungodly amount of work product in barely any time and now you have to keep up
I don’t keep up, I just stop interaction.
I had someone spit out an insane amount of requirements for a project at me. I ignored them and moved on with my day.
if someone brings me actionable tasks, I’ll work the tasks. if they give me busywork slop, it goes right on the pile of bullshit and ignored.
my evals are reliant on deliverable goals and tasks, not sloppy bullshit.
that said, if they want me to work from a slopument I’ll give them exactly what they slopped together and the best part is that I’ll have a paper trail of slop to point the finger away from myself.
I’m so sick of fixing AI slop code, especially because there’s no love for people who fix the slop, only for the people who shipped the slop.
Hell I’m sick of fixing slop work from actual people
I am now semi-convinced that half of my co-workers are AI bots due to some of the dumb shit that they say
like literally AI hallucinations and reversals, coming from real people
When a human gives you bad code, you can pull them aside and coach them. With AI, that doesn’t work.
the people who annoy me at work are the ones who don’t learn, though
like how many times can you reforward the same chat messages explaining exactly what they’re asking you, and the question is “where do I find ____” lol
One of my college professors is mandating ChatGPT
Why would they do that
Maybe because students are using it anyways, so it’s better to teach how to use it responsibly, check for sources and not to trust blindly anything it outputs. But I am being optimistic.
Because they asked AI to tell them?
Bosses aren’t oblivious; AI isn’t for the workers’ benefit. They need the workers to use the AI, so it can improve and begin to replace them.
That’s part of how they’re oblivious - mass adoption won’t actually improve LLMs beyond a certain point, and we’re long past it. The tech is fundamentally limited in what it can actually do, and instead of recognizing the limitations and working within them, they’re pretending we’re gonna have AGI.
No, but they don’t need mass adoption; they need their workers to figure out a way the tool can replace their work. Plenty of people will work to replace themselves, unfortunately. Whether it works out or not doesn’t matter; those types of businesses will just try the next “tool” that replaces labor when it comes along too.
Whether it works or not matters to the investors. If it doesn’t work they’ve sunk a lot of money, labor, and time into a boondoggle. They want to replace labor, but they want profit too. Businesses aren’t just infinite money machines that can keep throwing shit at the wall until something sticks, eventually the investors pull out when they don’t see the returns they expected.
That said, it’s up to us to make sure the bosses don’t ride this out on golden parachutes.
When the economy collapses we need to put them all in prison.
Sure that’s all bad for AI in the long term, and maybe a few bosses in the short term, but the workers that are being targeted for replacement won’t stop being targeted for replacement. People should be abandoning companies that are doing this, but most don’t have the luxury to just quit their jobs. I think we focus far too much on the tools we use to replace people and not enough on the people who want to use tools to replace people. We could just stop supporting those people.
Okay, but that goes back to what I said before, the bosses are oblivious to how poor this technology actually is and are sleepwalking into a disaster. They’re trying to replace their workers with something that can’t replace them and this will have serious consequences, not just for the AI companies, but for the entire economy.
That’s why we need to make sure they don’t escape on their golden parachutes.
In the meantime, the workers could organize and demand the bosses stop trying to replace them.
It’s undeniable that AI is great at problems with tight feedback loops, like software engineering.
Most jobs don’t have the tight feedback loops that software engineering has
It is pretty bad at things that are “black boxes” that require documentation to analyze. For instance, I was trying to debug an SSL issue with DB2 (IBM’s database), and ChatGPT and Copilot gave conflicting answers. They frequently gave commands that didn’t work, with great confidence of course. I had to keep feeding errors back, and even had to remind it that I was working in Linux and not Windows.
FWIW, ChatGPT and Copilot are two of the worst AIs out there for things like this. At many gigs I’ve had they’re outright banned for use because of how garbage they are.
Which ones have you had recommended?
Claude Code, or Claude in general, notably Sonnet 4.5 and Opus 4.5
Gemini is also solid, though for coding I found it lesser than Claude. For heavy inference and reasoning it can be great, and it also supports a larger context window.
It’s undeniable that AI is great at problems with tight feedback loops, like software engineering
I, CandleTiger, do hereby deny that AI is great at software engineering.
it is totally deniable. Because it’s simply not true. It’s been studied.
One nit: they’re good at writing code. Specifically, code that has already been written. Software Engineers and Computer Scientists still need to exist for technology to evolve.
This. Was setting up a new service and it scaffolded all the endpoints after the swagger and helped me setup tooling, tests, within a few hours. Also helped me research what has happened in the area since my last ms.
Now when adding the business logic I’ll be doing most of it myself as it tends to be a bit creative about what I’m trying to achieve and tends to forget to check my models etc.
It’s great at generic code, has issues on specifics.
I feel like if your code is so generic a generator can make it, you could achieve the same results faster, more reliably, and more energy-efficiently with a shell script or two.
A specific tool should definitely beat a generic one. If I was doing these things all the time I would consider building something like that; scaffolding based on a swagger seems pretty easily achievable. But since I do this every other year tops, and the setup would need to be updated with new techniques, it’s far from a valuable time investment for me to write.
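For what it’s worth, the kind of one-off generator described above really is small. Here’s a minimal sketch, assuming an OpenAPI/swagger spec loaded as JSON; the stub naming scheme and the inline example spec are illustrative, not tied to any real framework or project:

```python
def scaffold_stubs(spec: dict) -> str:
    """Emit placeholder handler stubs for every path/method pair in an
    OpenAPI ("swagger") spec. Naming scheme is made up for illustration."""
    lines = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            # Turn "GET /users/{id}" into a function name like get_users_id
            name = method + path.replace("/", "_").replace("{", "").replace("}", "")
            summary = op.get("summary", "TODO: implement")
            lines.append(f"def {name}():")
            lines.append(f'    """{summary}"""')
            lines.append("    raise NotImplementedError")
            lines.append("")
    return "\n".join(lines)

# Tiny inline spec standing in for a real swagger file
# (in practice: spec = json.load(open("swagger.json")))
spec = {
    "paths": {
        "/users": {"get": {"summary": "List users"}},
        "/users/{id}": {"get": {"summary": "Fetch one user"}},
    }
}
print(scaffold_stubs(spec))
```

Point being, for a recurring workflow a dozen lines of deterministic generator beats re-prompting an LLM each time; for a once-every-two-years task, as the commenter says, neither investment may be worth it.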
they are stringing it along so they can get their golden parachutes and bounce.
And the only reason they can get away with not charging the training and computation costs is a bunch of rich people essentially gambling a small portion of their generational wealth.
AI can be useful in certain circumstances; it’s great at speeding up research, for example (kinda proving it’s just a glorified search engine at this stage), but in my experience most business owners are way too dumb to know what is or is not useful to their employees’ work processes.
The machines constantly lie, research is one of its worst use cases.
Research in the sense of researching a problem you’re having, or getting an idea of how to start an implementation for something, is a great use case and pretty much the only one I regularly use them for. Search engines usually fail to produce anything useful when describing the problem requires complex grammar.
Research in an academic sense yea they’re horrible.
Yes I speak English, it’s dog shit for your stated purpose. What word are you not understanding?
It really isn’t but sure. If you’re dumb enough to assume what it spits out is gospel it’s dogshit for that purpose too, but that’s a user issue. Not like random stackoverflow answers are always exactly what you need either lol
The irony of you calling it an ID-ten-T error, located between peripheral and chair, while you vehemently defend the use of AI to study a subject. You’re truly irredeemable.
You are again misunderstanding what they are saying. They clearly said it has use as a search engine, not for research in any academic sense.
I am, again, not misunderstanding you, idk how you could possibly construe that. You’re a filthy slopper. It’s garbage for searching as it will inevitably misinform people with its hallucinations unlike an actual search engine.
easy to be fast if you don’t care about accuracy
Dilbert manager energy
Our new tech lead loves fucking AI, which lets him refactor our terraform (I was already doing that), write pipelines in gitlab, and do lots of other shiny cool things (after many many many attempts, if his commit history is any indication).
Funnily, he won’t touch our legacy code. Like, he just answers “that’s outside my perimeter” when he’s clearly the one who should be helping us handle that shit. Also it’s for a mission critical part of our company. But no, outside his perimeter. Gee I wonder why.
All the sweet talk in the world ain’t gonna save their jobs when their ai babies take over
I have a simple answer for why managers think it’s smart and workers think it’s dumb. The managers see all kinds of documentation from workers, and to them the AI slop looks the same. It looks the same because the managers never take the time to comprehend what they are reading.
LLMs look smart if you have no idea what the fuck they’re on about. And management is full of Peters.
Without a doubt. The skill set to be in management has nothing to do with intelligence. It has to do with selfish manipulation and no empathy. That way you can be cruel without missing a second of sleep.
I think it’s more that AI is a soulless bullshit generator with no imagination and no deep understanding, and managers tend to notice that it can do most of the work they do. There’s a lot of skill overlap with management there, so naturally they would be impressed with it.
Of course it is, and that’s the point of my post.
Management never has a clue what their employees actually do day-to-day. We’re just another black box to them, tracked on a spreadsheet by accounting. Stuff goes in, stuff comes out, you can’t explain that.
I’m vaguely on the periphery of a project to create a sort of info-hub chat-bot. The project lead was really enthusiastic about getting me on board and helping me develop my skills in that direction.
Apparently there’s a lot of people calling the wrong departments about stuff. Think along the stereotype of people calling the IT “Help Desk” for a broken light. The bot should help them find the right info, or at least the right department.
The issue, according to management, is that information is spread all over the place. Some departments use Confluence, others maintain pages on the intranet webserver. One has their own platform for FAQ and tickets, except it’s not actually for tickets any more, which you’ll only find out when they unhelpfully close your ticket with that remark. Wanna guess what confused users do? Right, call some other department.
The obvious solution would be getting each department to be more transparent and consistent about their information, responsibilities and ways to reach them, possibly even making them all provide their info on some shared knowledgebase with a useful search function. But that would require people to change their stuck habits.
So instead they develop a bot supposed to know all the knowledgebases and access them for users, answer simple queries, point them the right way for complex ones and potentially even help them raise tickets with the relevant departments. Surely, that will improve things?
The one time I tried it, I asked it a question that would have been my area of responsibility to see if people would actually find me or at least the general department. Yeah, nah, it pointed me at someone not just unrelated to that function or department, but also responsible for a different geographical area. IDK what they trained it on, but it probably didn’t include any mentions of that topic, which is fair, given it’s still in development.
But instead of saying “I have no information on that” or direct me to a general contact, it confidently told me to do the thing it’s supposed to fix: bother the wrong person.
And the project lead wonders why I didn’t immediately jump at the offer to join his department.
My wife, who works at a college, was recently trying to locate some information from an old college newspaper that may not have been digitized yet and used their new work AI for help finding it. It directed her to the school’s archives, but provided made-up contact info for the office, and also recommended she contact herself.
It’s really the middle management they don’t understand, not the floor staff: the people who do all the checking and compliance, which top management now thinks can be replaced by AI.
Any boss ramming a tool down their workers’ throats without understanding it or validating its usefulness is not a particularly good boss.
There’s bosses, and then there’s directors, and managers, and c-suites. Essentially, the people who don’t do any real fucking work are super impressed by it.