yes but all the code will be wrong and you will spend your entire day chasing stupid mistakes and hallucinations in the code. I’d rather just write the code myself thanks.
Yeah! I can make my own stupid mistakes and hallucinations, thank you very much!
But… But I don’t want it to. 😮💨
yeah but then you have to fix everything in the code that they didn’t get right.
like using it to automate a shell is fine; but trusting it blindly and treating it as the finished product? you’re delusional.
Writing code is the reward for doing the thinking. If the LLM does it then software engineering is no fun.
It’s like painting - once you’ve finally finished the prep, which is 90% of the effort, actually getting to paint is the reward
What a great way to frame it, I love this! I typically spend something like 60-80% of time available for a given task thinking through approaches and trade-offs, etc. Usually there comes a point when the way forward becomes clear, even obvious.
After that? Bliss. I’m snapping together a LEGO set I designed, composed of pieces I picked (maybe made one or two new ones!), and luxuriating in how it all feels, when put together.
These fuckers at MicroShit have lost all the ability needed to read a room.
When do you reckon they could last do that?
Maybe after windows 8? Last time I can remember.
I’d say 97 was the last time.
So why don’t they use it to unfuck Windows 11… before I finish my coffee?
It was the AI that messed it up to begin with lol. Vibe coding has often required coders having to go back and spend even more time fixing it than if they had just done it themselves.
If you want it done before you finish your coffee, better tell it to start from scratch
best they can do is put more AI
I already finished my coffee too. :-/ Though I suppose I could throw on another pot while we wait.
I’ve finished several coffees since you posted this… pretty sure win11 is still fucked
Because you won’t have time to drink that coffee if you put this code into production
And it will leave you debugging strange code for two weeks afterward.
Love how they’re pretending that an LLM is useful for any task that needs precision.
It says it will finish the code, it doesn’t say the code will work.
I was going to say. The code won’t compile but it will be “finished”
A couple of agent iterations and it will compile. It definitely won’t do what you wanted, though, and if it does, it will be in the dumbest way possible.
Yeah you can definitely bully AI into giving you something that will run if you yell at it long enough. I don’t have that kind of patience
Edit: typically I see it just silently dump errors to /dev/null if you complain about it not working lol
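To spell out what I mean by dumping errors to /dev/null, here’s a sketch of the pattern in Python (the function and path are made up for illustration):

```python
def load_config(path):
    """Hypothetical helper in the style I keep seeing: the error is swallowed."""
    try:
        with open(path) as f:
            return f.read()
    except OSError:
        return ""  # the /dev/null move: the failure vanishes, the caller never knows

# looks like it "works" -- it just silently returns nothing on failure
print(repr(load_config("/no/such/file")))  # ''
```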
And people say that AI isn’t humanlike. That’s peak human behavior right there, having to bother someone out of procrastination mode.
The edit makes it even better, sweeping things under the rug? Hell yeah!
Also just because the code works, doesn’t mean it’s good code.
I’ve had to review code the other day which was clearly created by an LLM. Two classes needed to talk to each other in a bit of a complex way. So I would expect one class to create some kind of request data object, submit it to the other class, which then returns some kind of response data object.
What the LLM actually did was pretty shocking: it used reflection to reach from one class into the other class’s private properties that held the required data. It then just straight up stole the data and did the work itself (wrongly as well, I might add). I just about fell off my chair when I saw this.
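For anyone curious, a rough sketch of the pattern in Python (the actual code was in another language, and the class and attribute names here are invented):

```python
class Ledger:
    """Stand-in for the class holding the private data."""
    def __init__(self):
        self.__balance = 100.0          # "private" via Python name mangling
    def balance_for(self, account):     # the proper request/response path that was skipped
        return self.__balance

ledger = Ledger()
# what the generated code effectively did: bypass the public interface
# and grab the private attribute directly through its mangled name
stolen = getattr(ledger, "_Ledger__balance")
print(stolen)  # 100.0
```

The class offers a perfectly good public method, and the generated code ignores it to reach into internals anyway. That’s what made it such a review red flag.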
So I asked the dev, he said he didn’t fully understand what the LLM did, he wasn’t familiar with reflection. But since it seemed to work in the few tests he did and the unit tests the LLM generated passed, he thought it would be fine.
Also, the unit tests were wrong. I explained to the dev that even with humans it’s usually a bad idea to have the person who wrote the code also (exclusively) write the unit tests. Whenever possible have somebody else write the unit tests, so they don’t have the same assumptions and blind spots. With LLMs this is doubly true: it will just straight up lie in the unit tests, if they aren’t complete nonsense to begin with.
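A toy example of what I mean by a test sharing the code’s blind spot (hypothetical function, not the actual code I reviewed):

```python
def add_tax(price):
    """Supposed to return price plus 25% tax."""
    return price * 0.25          # bug: returns only the tax, not price + tax

def test_add_tax():
    # the generated "test" just pins the buggy output, so the bug looks verified
    assert add_tax(100) == 25.0

test_add_tax()  # passes, green checkmark, bug ships
```

A second pair of eyes writing the test would start from the spec (“100 plus 25% tax is 125”) instead of from the implementation, and the bug would surface immediately.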
I swear to the gods, LLMs don’t save time or money, they just give the illusion they do. Some task of a few hours will take 20 min and everyone claps. But then another task takes twice as long and we just don’t look at that. And the quality suffers a lot, without anyone really noticing.
They’ve been great for me at optimizing bite sized annoying tasks. They’re really bad at doing anything beyond that. Like astronomically bad.
Great description of a problem I noticed with most LLM generated code of any decent complexity. It will look fantastic at first but you will be truly up shit creek by the time you realise it didn’t generate a paddle.
Why would unit tests not be written by the same person? That doesn’t make a lot of sense…
They did say why they’re doing it
Whenever possible have somebody else write the unit tests, so they don’t have the same assumptions and blind spots.
Did that not make sense to you?
I usually wouldn’t do that, because it’s a bigger investment. But it certainly makes logical sense to me and is something teams can weigh and decide on.
So I asked the dev, he said he didn’t fully understand what the LLM did, he wasn’t familiar with reflection.
Big baffling facepalm moment.
If they would at least prefix the changeset description with that admission, it’d be easier to interpret and assess.
Who hasn’t encountered that one jerk who builds only new code to impress management, and never maintains or fixes existing code? I think of them as proof-of-concept posers. They make things that look flashy, impress the execs, and barely work for a single use case, then dump all the bugs, maintenance and actual architecture on the other devs. LLMs are going to be a gift to these people and a pain for everyone who actually knows how to engineer things well. They’ll encourage this kind of shallow flashiness and make the maintenance problems worse, but the execs will be convinced that only the LLM posers are productive and everyone else is sitting idle.
Make it do the shit I don’t want to do, then we’ll talk
Actually it won’t be finishing anything because code is disposable now and nobody cares what trivial app somebody can churn out
Technically true, but nobody said the code will be at all functional. I’m pretty sure I can finish about 800000 coffees before Copilot generates anything usable that is longer than 3 lines.
If that’s what they are aiming at, I feel like their AI is actually supposed to be the pilot and the user the copilot