“We have data on the performance of >50k engineers from 100s of companies. ~9.5% of software engineers do virtually nothing: Ghost Engineers.”
Last week, a tweet by Stanford researcher Yegor Denisov-Blanch went viral within Silicon Valley. “We have data on the performance of >50k engineers from 100s of companies,” he tweeted. “~9.5% of software engineers do virtually nothing: Ghost Engineers.”
Denisov-Blanch said that tech companies have given his research team access to their internal code repositories (their internal, private GitHubs, for example), and that for the last two years he and his team have been running an algorithm against individual employees’ code. He said this automated code review shows that nearly 10 percent of employees at the companies analyzed do essentially nothing, and are handsomely compensated for it. A paper about the project offers few details on how the review algorithm works, but it says the algorithm attempts to answer the same questions a human reviewer might ask about any specific segment of code, such as:
- “How difficult is the problem that this commit solves?
- How many hours would it take you to just write the code in this commit assuming you could fully focus on this task?
- How well structured is this source code relative to the previous commits? Quartile within this list
- How maintainable is this commit?”
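As a rough sketch of what a rubric-based automated review might look like, here is a minimal Python example. The four rubric questions are quoted from the paper; everything else (the function name, the prompt format, the idea of feeding a raw diff) is an assumption for illustration, not the Stanford team's actual implementation.

```python
# Hypothetical sketch of a rubric-based automated commit review.
# The rubric questions come from the paper; the prompt format and
# build_review_prompt helper are invented for illustration.

RUBRIC = [
    "How difficult is the problem that this commit solves?",
    "How many hours would it take you to just write the code in this "
    "commit assuming you could fully focus on this task?",
    "How well structured is this source code relative to the previous "
    "commits? Quartile within this list",
    "How maintainable is this commit?",
]

def build_review_prompt(diff: str) -> str:
    """Format a commit diff plus the rubric into a single prompt that
    could be sent to an LLM acting as the 'reviewer'."""
    questions = "\n".join(f"{i}. {q}" for i, q in enumerate(RUBRIC, 1))
    return f"Review the following commit:\n\n{diff}\n\nAnswer:\n{questions}"

prompt = build_review_prompt("diff --git a/app.py b/app.py\n+print('hello')")
print(RUBRIC[0] in prompt)  # True: every rubric question appears in the prompt
```

The interesting (and contested) part is not the prompt itself but whether a model's answers to these questions correlate with real productivity; the sketch only shows the shape of the pipeline.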
Ghost Engineers, as determined by his algorithm, perform at less than 10 percent of the level of the median software engineer (that is, they are measured as being at least 10 times less productive than the median worker).
Denisov-Blanch wrote that tens of thousands of software engineers could be laid off and that companies could save billions of dollars by doing so. “It is insane that ~9.5 percent of software engineers do almost nothing while collecting paychecks,” Denisov-Blanch tweeted. “This unfairly burdens teams, wastes company resources, blocks jobs for others, and limits humanity’s progress. It has to stop.”
The Stanford research has not yet been published in any form beyond a few graphs Denisov-Blanch shared on Twitter, and it has not been peer reviewed. But the fact that this sort of analysis is being done at all shows how focused tech companies have become on the idea of “overemployment,” in which people work multiple full-time jobs without the knowledge of their employers, and on getting workers to return to the office. Alongside Denisov-Blanch’s project, there has been an incredible amount of investment in worker surveillance tools. (Whether a ~9.5 percent rate of ineffective workers is high is hard to say; it’s unclear what percentage of workers overall are ineffective, or what other industries’ numbers look like.)
Over the weekend, a post on the r/sysadmin subreddit went viral both there and on the r/overemployed subreddit. In that post, a worker said they had just sat through a sales pitch from an unnamed workplace surveillance AI company. The product purportedly gives employees “red flags” if their desktop sits idle for “more than 30-60 seconds” (meaning “no ‘meaningful’ mouse and keyboard movement”), attempts to create a “productivity graph” based on computer behavior, and pits workers against each other based on the time it takes to complete specific tasks.
What is becoming clear is that companies are becoming obsessed with catching employees who are underperforming or who are functionally doing nothing at all, and, in a job market that has become much tougher for software engineers, are feeling emboldened to deploy new surveillance tactics.
“In the past, engineers wielded a lot of power at companies. If you lost your engineers or their trust or demotivated the team—companies were scared shitless by this possibility,” Denisov-Blanch told 404 Media in a phone interview. “Companies looked at having 10-15 percent of engineers being unproductive as the cost of doing business.”
Denisov-Blanch and his colleagues published a paper in September outlining an “algorithmic model” for doing code reviews that essentially assesses software engineer productivity. The paper claims that their algorithmic code assessment model “can estimate coding and implementation time with a high degree of accuracy,” essentially suggesting that it can judge worker performance as well as a human code reviewer can, but much more quickly and cheaply.
I asked Denisov-Blanch whether he thought his algorithm was scooping up people whose contributions can’t be judged from code commits and code analysis alone. He said he believes the algorithm controls for that, and that companies have flagged specific workers to be excluded from analysis because their job responsibilities extend beyond just pushing code.
“Companies are very interested when we find these people [the ghost engineers] and we run it by them and say ‘it looks like this person is not doing a lot, how does that fit in with their job responsibilities?’” Denisov-Blanch said. “They have to launch a low-key investigation and sometimes they tell us ‘they’re fine,’ and we can exclude them. Other times, they’re very surprised.”
He said that the algorithm they have developed attempts to analyze code quality in addition to simply analyzing the number of commits (or code pushes) an engineer has made, because number of commits is already a well-known performance metric that can easily be gamed by pushing meaningless updates or pushing then reverting updates over and over. “Some people write empty lines of code and do commits that are meaningless,” he said. “You would think this would be caught during the annual review process, but apparently it isn’t. We started this research because there was no good way to use data in a scalable way that’s transparent and objective around your software engineering team.”
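The gaming problem the researchers describe is easy to make concrete. Below is an illustrative Python sketch of a naive filter for exactly the padding commits mentioned above: empty commits and push-then-revert pairs. The `Commit` data structure and all field names are invented for this example; this is not the Stanford team's algorithm.

```python
# Illustrative only: a naive filter for "gamed" commits of the kind the
# article describes (empty changes, or a push that is later reverted).
# The Commit structure and its fields are made up for this sketch.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Commit:
    sha: str
    lines_changed: int       # net non-whitespace lines touched
    reverts: Optional[str]   # sha of the commit this one reverts, if any

def meaningful_commits(log: list) -> list:
    """Drop empty commits and revert pairs, keeping the rest."""
    reverted = {c.reverts for c in log if c.reverts}
    return [
        c for c in log
        if c.lines_changed > 0     # empty/whitespace-only padding counts for nothing
        and c.sha not in reverted  # the reverted original is cancelled out...
        and c.reverts is None      # ...and so is the revert itself
    ]

log = [
    Commit("a1", 120, None),  # real work
    Commit("b2", 0, None),    # empty commit, pure padding
    Commit("c3", 40, None),   # later reverted
    Commit("d4", 40, "c3"),   # the revert that cancels c3
]
print([c.sha for c in meaningful_commits(log)])  # ['a1']
```

Even this toy version shows why raw commit counts are a poor metric: three of the four commits above inflate the count while contributing nothing, which is presumably why the researchers layered a quality assessment on top.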
Much has been written about the rise of “overemployment” during the pandemic, where workers take on multiple full-time remote jobs and manage to juggle them. Some people have realized that they can do a passable enough job at work in just a few hours a day or less.
“I have friends who do this. There’s a lot of anecdotal evidence of people doing this for years and getting away with it. Working two, three, four hours a day and now there’s return-to-office mandates and they have to have their butt in a seat in an office for eight hours a day or so,” he said. “That may be where a lot of the friction with the return-to-office movement comes from, this notion that ‘I can’t work two jobs.’ I have friends, I call them at 11 am on a Wednesday and they’re sleeping, literally. I’m like, ‘Whoa, don’t you work in big tech?’ But nobody checks, and they’ve been doing that for years.”
Denisov-Blanch said that, with massive tech layoffs over the last few years and a more difficult job market, it is no longer the case that software engineers can quit or get laid off and get a new job making the same or more money almost immediately. Meta and X have both done huge rounds of layoffs, and Elon Musk famously claimed that X didn’t need those employees to keep the company running. When I asked Denisov-Blanch if his algorithm was being used by any companies in Silicon Valley to help inform layoffs, he said: “I can’t specifically comment on whether we were or were not involved in layoffs [at any company] because we’re under strict privacy agreements.”
The company signup page for the research project, however, tells companies that the “benefits of participation” in the project are “Use the results to support decision-making in your organization. Potentially reduce costs. Gain granular visibility into the output of your engineering processes.”
Denisov-Blanch said that he believes “very tactile workplace surveillance, things like looking at keystrokes—people are going to game them, and it creates a low trust environment and a toxic culture.” He said with his research he is “trying to not do surveillance,” but said that he imagines a future where engineers are judged more like salespeople, who get commission or laid off based on performance.
“Software engineering could be more like this, as long as the thing you’re building is not just counting lines or keystrokes,” he said. “With LLMs and AI, you can make it more meritocratic.”
Denisov-Blanch said he could not name any companies that are part of the study but said that since he posted his thread, “it has really resonated with people,” and that many more companies have reached out to him to sign up within the last few days.
This guy is such a waste of carbon. Don’t be fooled by his title as a “researcher” or him being in Stanford. He’s just another Tech Bro, pushing his “product” to greedy companies to make a few bucks for himself.
And his sponsor? This guy.
Both deserve the deepest level of hell!
Notice that the Daily Heil seems quite happy with such a tool.
It has not been peer reviewed.
I could make a paper in 5 minutes about how AI can be used to uniquely identify people by smelling their farts. Doesn’t mean anything unless it’s been peer reviewed.
Until this paper has been peer reviewed, I give it as much credit as I give a flat earth conspiracy person.
The old adage of the engineer paid to know where to tap an X comes to mind: https://quoteinvestigator.com/2017/03/06/tap/?amp=1
Frankly anyone telling you they can measure the value of a line of code without any background knowledge is selling BS.
But I welcome this new BS system as the previous system of managers not so secretly counting total commits and lines added was comically stupid.
You don’t pay me for what I do, you pay me for what I know…
Frankly anyone telling you they can measure the value of a line of code without any background knowledge is selling BS.
the previous system of managers not so secretly counting total commits and lines added was comically stupid
That has been known not to work since the 1970s. There’s probably something in The Mythical Man-Month ridiculing lines of code as a performance metric.
Some of the most productive work I ever did involved ripping out 80k lines of executable code and replacing it with 1500.
But I welcome this new BS system
I don’t. Fuck snitchware in all its forms.
Ha! Hahaha! Hahahahaha!
Do you want AI to push garbage/useless code to push garbage/useless metrics? Because this is how you get your most skilled employees to do that.
I’ve seen it first hand, but I don’t know if 9.5% is the correct number. One software guy has worked at my company for 11 years. He went through so much shit that at this point he doesn’t even sit under the software department anymore; he’s just under finance. All he does is upgrade GitLab once every quarter or so, and then he just watches TV and messes around with his homelab in his free time. He comes to the office a couple times a week for 3-4 hours to show everyone he is still alive, then goes home.
The legendary 0.1x engineer
The thing about being a big organization is that you need to have slack capacity most of the time in order to be able to go quickly in a different direction at certain times. If you don’t have excess capacity sitting idle, an unforeseen event can paralyze you.
And slack capacity can be used effectively e.g., spend some time on process improvement. There’s always some saw to sharpen or some technical debt to repay.
LET’S LAY THEM OFF. If everyone is unemployed we can actually work on eating the rich
we can actually work on eating the rich
I’ve got a simple metric to measure performance on that job. Executable Heads of Billionaires, EHOBs.
I think most people misunderstand what software engineers do. Writing code is only a small portion of the work for most. Analyzing defects and performance issues, supporting production support that ends up with unqualified people due to the way support is handled these days, writing documentation or supporting those who do, design work, QE/QA/QC support, code reviews, product meetings, and tons of other stuff. That’s why “AI” is not having any luck with just replacing even junior engineers, besides the fact that it just doesn’t work.
“hey guy we had to let Jon go. His numbers just weren’t holding up over the last two quarters”
“Wtf?! That’s our team lead! Who’s going to sit with product and tell them No when they ask for something insane?”
“Yeah! Who’s going to help with our PR review?”
“And what about the juniors? He always made it a point to do pairing sessions with them!”
“We have to let you go as from our analysis you do mostly nothing, mr senior engineer”
1 week later everything is crashing and no one knows why
Ah yes, the classic evaluation of stupid shit that ends up shooting the company in the foot.
Yep.
This question doesn’t address what else these engineers do besides write code.
Who knows how many meetings they’re involved in to constrain the crazy from senior management?
Who knows how many meetings they’re involved in to constrain the crazy from senior management?
This is more than half of my job. Telling the company owners/other departments “No”. Or changing their request to something actually reasonable and selling them that they want that instead.
Sometimes the only way to get heard is for them to go attempt the simple, stupid approach and fail. Then their successors might pay attention.
Yes, but there’s also people actually not doing anything. I am dev lead and after building a team, which was a lot of work, I am at a point where I am doing fuck all on most days. Maybe join a few meetings, make some decisions and work on my own stuff otherwise.
Yeah, there are plenty of truly pointless workers, I’m not denying that. But doing stupid metrics like commit counting or lines of code per day is stupid and counter productive, and it emphasizes the out of touch and inhuman methods of corporate idiots
Makes me think of a trend in F2P (free-to-play) gaming, where there was a correlation between play time and $ spent, so gaming companies would try and optimise for time played. They’d psychologically manipulate their players to spend more time in game with daily quests, battle passes, etc, all in an effort to raise revenues.
What they didn’t realise was that players spent time in game because it was fun, and they bought mtx because they enjoyed the game and wanted it to succeed. Optimising for play time had the opposite effect and made the game a chore. Instead of raising revenues, it actually lowered them.
This is why you always have to be careful when chasing metrics. If you pick wrong, it can have the opposite effect that you want.
This is why you always have to be careful when chasing metrics. If you pick wrong, it can have the opposite effect that you want.
I don’t know where the adage came from but I find it very true:
Once you turn a metric into a target, it ceases to be a good metric.
Goodhart’s law! One of my personal favorites after working in the field of healthcare regulatory reporting.
When your data “scientists” don’t understand the difference between causation and correlation
And why economists and sociologists are important to have in the room when marketing and sales heads throw stupid fucking ideas on the table.
Sneaking the old “this is why people want remote work” in there certainly makes this feel like something big tech created to push RTO.
For real. Dude seems to think people can jack around and do nothing in office
How much hubris/ignorance does this guy have to believe his algorithm is accurate enough to detect that “10%” of employees are deadbeats? What precision! If it found 50% deadbeats, that would mean the algorithm might be working.
The worst companies have only 10% deadbeats? Any company with only 10% deadbeats means their management team is doing a great job hiring/managing. A company with only 50% deadbeat managers would be outstanding. It’s a long article that I admittedly didn’t read all of. I got to the part where it said the details of his algorithm are basically unknown, which means his data means nothing. If someone can’t provide the proof for their claims, they have no merit.
An LLM that’s built entirely on code repo data, and is somehow claiming workers “do virtually nothing” without any sort of outside data, is insane.
One of my big beefs with ML/AI is that these tools can be used to wrap bad ideas in what I will call “machine legitimacy.” Which is another way of saying that there are many cases where these models are built up around a bunch of unrealistic assumptions, or trained on data that is not actually generalizable to the applied situation, but will still spit out a value. That value becomes the truth because it came from some automated process. People can’t critically interrogate it because the bad assumptions are hidden behind automation.
Yeah it’s similar to a computer spitting out 42 as the answer to life, the universe, and everything.
I’m not even going to bother to take this seriously at all.
There’s something to be said about unfulfilling and ‘bullshit jobs’. Aside from the potentially dubious methodology here, consider the implications of this ‘finding’.
How about look at the rentier and profit sapping features of these massive tech companies.