- cross-posted to:
- programming@programming.dev
Four months ago, we asked Are LLMs making Stack Overflow irrelevant? Data at the time suggested that the answer is likely “yes:”
It’s not dead until I stop getting 10 year old outdated answers in my searches!
I’ve lost count of the number of times I try to find something on SO, and it’s just someone posting the exact same example code as the answer. Or someone suggesting you just google it. Then I ask ChatGPT… and I get an answer.
Make no mistake. LLMs aren’t killing Stack Overflow. LLMs just arrived to finish it off. The stuff that was killing it was the regular posters there, and their passive-aggressive bullshit.
Yup. I once decided to spend an afternoon answering questions on a framework I was expert in, as a kind of profile-building exercise to help with job hunting, and after around the third smug self-satisfied comment picking me up on some piece of irrelevant bullshit I deleted my account.
I hate how cathartic it is to watch that mountain of bullies burn to the ground 😌
I never asked a question, despite using it daily. Too afraid of being berated 😅
Question closed as off-topic.
Question closed as off-topic.
Removed as duplicate of #264826376: “Question closed as duplicate.”
Sometimes my jokes need explaining...
I’m pointing out that questions on SO too often get closed as duplicates of adjacent (but distinctly different) questions, and I did so in the most confusing, recursive way possible.
Nothing passive about them; it was just regular aggressive. Made my programming coursework so much worse. Indian guys on YouTube, however, now those guys were helpful!
I’m not convinced that the number of questions asked is the correct metric. In the end the point is not to have a constant flow of questions, rather constant flow of answers found.
There is a point of proficiency in a language/library/whatever after which it is faster to find the answer in the code/documentation/test examples than to wait for another person at an even higher level to come and answer your question.
Maybe we simply filled out what needed to be asked in the beginner / bug-found / intermediate space and, apart from questions stemming from new versions etc., SO does not need more questions? The expectation for everything to constantly grow is unrealistic.
Honestly using the existing question stock to generate current-version answers using the current documentation as synthetic training data is probably the way to go.
As more and more libraries are open source on GitHub or gitlab or sourceforge or whateverthefuck, asking questions on the libraries themselves (as an issue) is often the right thing to do, too… Less centralised than SO but also the only people who care about how to do things in a lib are people using the lib, so…
Four months ago, we asked Are LLMs making Stack Overflow irrelevant?
“That’s a stupid question, marked as solved.”
marked as duplicate, see <other question from 2005, before LLMs were invented>
Anyone remember experts-exchange?
I remember when it didn’t have a dash. Until people started making fun of the old URL…
So easily avoided too
Ah yes, the place that never answered anything.
The sloppiest of slops before we got AI slop.
It was the pinterest of answering stuff
I used it in earnest! (to write shitty VB scripts and PHP websites)
Or if they had an answer, they paywalled it, until Google got pissed at them for including the answer in their SEO but blocking it once the user clicked through. Then they maliciously complied with Google’s demand to not censor by burying the answer under layers upon layers of ads and other “related” questions.
I was so glad to see SO eat their lunch.
It will endure as long as the LLMs on there know how to misinterpret the question and fire back snarky unhelpful answers about how clueless you are for asking in the first place.
This is interesting because a huge amount of AI “knowledge” comes from stack exchange.
Now I’ll go read the other comments and article to see if that’s already been mentioned :)
Ever ask a question on SO? I tell my students to search there but never, ever ask a question. The unmitigated hostility is not what new developers need or deserve. ChatGPT won’t humiliate you for asking a question that someone else has already asked.
I see this hot take often, and it isn’t entirely without merit, but it is mitigated by moderation; in some Stack communities better than others. I’ve been an active member for many years, and in my view it goes like this.
If you contribute a question without reading the rules and How to Ask a Good Question (you don’t provide a minimal reproducible example with code, you post images of code, etc.), you may get flamed out of town. That may feel bad, and it may be mean if the questioner didn’t know to read those. But they are there for you.
If, however, you ask a thoughtful question, give examples, show what you’ve tried, etc. you definitely can get quality, courteous help.
Doesn’t change that video killed the radio star here. The show is over.
Beginners are the least likely to ask thoughtful questions. We include slides in lectures about how to ask a question, but when there’s an assignment deadline and you’re inexperienced, it’s more likely you’re going to just blurt out “help me!” rather than provide a detailed explanation that doesn’t require repeated prompting. It takes time to learn how to work through an issue yourself before asking. Students are often facing time pressure and that can drive bad behavior. Correcting them is important, just don’t do it in a way that crushes their spirit.
100% understood and agreed. I don’t want to defend the bad behavior. It is out there among questioners and in the experienced community alike. Just saying it is possible to find quality help there.
I’ve asked questions on S.O. I’ve answered some too.
What I’ve found works well on s-o is
- Researching a bit first
- Asking a question properly*
- Including that search attempt to prove you’ve done some due-diligence
I’ve found even a dick like me can get a lot of leeway by showing I’ve put in the effort and asked properly.
*Same as Usenet
ChatGPT won’t humiliate you for asking a question that someone else has already asked.
I don’t know, being told what a good question that was and what a good boy I am every time I ask a stupid question feels pretty humiliating.
(Still better than SO)
That’s a pretty recent development, isn’t it? I remember ChatGPT being a lot more matter of factly earlier on.
Yep, old ChatGPT was much more blunt and factual.
Don’t really like the recent trend of every LLM talking to me like I’m in kindergarten.
Problem being that someone else asked the question 10 years ago and the answer is now irrelevant due to version changes. People with high scores are just early adopters who answered all of the easy questions. Hostile users generally can’t understand the question. The issue with llms answering your question is that they are going to be stuck in the current time period. In the future their answers will also be irrelevant due to version changes.
I mean, that is already a problem: if you ask a question, you have to be ready for the answer to be a mishmash of version conflicts.
But that is ok. ChatGPT is a tool that can either help you or hurt you. I like to think of it like a power hammer. If you are doing a roofing job, it can help you get things done faster compared to a manual hammer, but you still need to know how to build a roof to get started.
ChatGPT is great at helping you organize your thoughts or finding an answer to some error message buried in some log file, but you still need to know what questions to ask and you need to be ready for it to give you a stupid answer and how to get around that.
Earlier today I googled how to toggle full screen in dosbox-x and the AI-generated answer said to use alt+enter. Tried it and it didn’t work, so I look in the documentation and it turns out that they changed it to F12+f a while ago (probably to avoid interfering with actual dos input).
This is definitely already a problem.
Every LLM is shit at dealing with version changes. They don’t understand it as a concept, despite all their training data.
If LLMs just copied stack overflow they’d respond to every question with “Closed as duplicate. Question already answered.”
and link a slightly similar question whose answers can’t be used in your case because of the small difference. Also, it’s been outdated for four years.
or 13, in the case of Python questions, and they’re about Python 2
Said the same thing
is giving marked-as-duplicate vibes
That’s why I only post questions for bleeding-edge languages and code libraries. I have to answer them myself.
For me, strict rules are what make this website useful. No threads named “help me” is why I like reading it.
For newcomers there is https://stackoverflow.com/staging-ground
Even for non newcomers, having threads marked as duplicates for problems introduced by version changes that aren’t considered in the original question/answers is a major issue.
I forget where I heard the quote, but:
Stack Overflow is a great place to find answers. Stack Overflow is a terrible place to ask questions.
Their moderation approach is a big part of why it’s a great place to search for answers.
But if issues that are similar to another problem, yet not to the point of having the same solution, get closed as duplicates, is that really helping the overall quality of the answers on Stack Overflow?
I’ve never had an issue asking a question on stack overflow.
I’d wager a lot of ‘you people’ that have issues with it probably didn’t do enough research on your own.
There’s issues on both sides. A lot of people who ask questions are clearly just asking others to do their homework or otherwise haven’t made any effort, but there are also a lot of people who are unnecessarily hostile.
I definitely agree with this. I think the easier and kinder thing to do is to just not reply to posts like that.
Not necessarily directly, many people may have abandoned learning programming because of LLMs, rather than Stack Overflow specifically.
You’re pulling this out of your ass. That is completely made up.
I don’t think such a trend would be that big. And anyone who has used any LLM for programming learns very quickly that those are very far from replacing anyone.
People who know programming already, yes. People who are getting into it / want to get into it, see it as an amazing shortcut.
I had two working students already, who thought and communicated that they don’t really need to learn programming, because they can do it with ChatGPT / Q. It was quite infuriating.
students

When I was a student I despised the idea of typeless `var` in C#. Then a few years later at my day job I fully embraced C++ `auto`. I understand the frustration, but unfortunately being wrong is part of learning.
For real. You can tell how good a programmer someone is, by how good they think an LLM is at programming.
I use it to bounce ideas around with or get it to direct me in the right direction if I am stumped for further research, but it will be a cold day in Hell before I have it write more than the most gruntiest of grunt boilerplate code. It just can’t do it to a useful standard without a lot of oversight.
Same, it’s largely doing pretty much as the article implies, replacing StackOverflow for when I need the correct runes to do something specific.
I had a decently awarded account on SO because I joined it in 2012. I asked and answered questions. For the first few years it was fucking awesome as a professional developer. Then its popularity in Google search results made it too well known, and the comment quality dropped substantially. Then the fucking power users popped up and started flagging almost every one of my questions as duplicates while pointing to unrelated questions. The last time I really used SO was around 2017. I got too fed up to participate in the platform, because when I spent the time to make a well-formed question, it would just get shut down and my time wasted.
Had the same experience, almost exactly.
Like it or hate it (personally I prefer the latter, posting there I felt like a middle schooler with a PUNCH ME sticker on my face) it was a great source of indexable data on programming.
I wonder how this will affect future search and LLMs, now that all similar questions are being asked in private LLM threads.
I never once actually asked a question there. Partly because most of the time, the question I was asking had always been asked.
However, I have found the correct answer to 100s of questions there. Usually through google/ddg/kagi searches.
Not terribly surprising, Google would often direct me to StackOverflow threads as I was googling for an answer to a question. And as often as not, either the question was closed; or, instead of anyone providing an answer, the commenters would spiral off into questioning everything about the original question asker’s life choices. While I do get the whole XY Problem, this sort of thing seemed to be over-used on SO.
Granted, I don’t know if AI answers are any better. Sure, they can answer a lot of the simple questions, but I’ve not seen them be useful on hard, more obscure questions. Probably because those questions don’t have ready answers on SO.
the whole XY Problem
lol. I hate this. Just answer the damn question or don’t. I’m not asking you to validate if what I’m doing is weird or not. It’s weird! I know! That’s none of your business. Just answer the damn question or don’t. Simple as.
So here’s what I don’t get. LLMs were trained on data from places like SO. SO starts losing users, and thus content. Content that LLMs ingest to stay relevant.
So where will LLMs get their content after a certain point? Especially for new things that may come out or unique situations. It’s not like it’ll scrape the answer from a web page if people are just asking LLMs.
The snake eats its tail and it all degenerates into slop. Happy coding!
This is an area where synthetic data can be useful. For example, you could scrape the documentation and source code for a Python library and then use an existing LLM to generate questions and answers about the content to train future coding assistants on. As long as the training data gets well curated for quality it’s perfectly useful for this kind of thing, no need for an actual forum.
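A minimal sketch of that kind of pipeline. The chunking strategy and prompt template here are illustrative assumptions (as is the example `requests` excerpt); the actual call to an existing LLM is left out, since any provider’s API would do:

```python
# Sketch: turn documentation chunks into prompts that ask an existing
# LLM to generate Q&A pairs as synthetic training data. Chunking and
# the prompt wording are assumptions, not any specific tool's API.

def chunk_docs(text: str, max_chars: int = 500) -> list[str]:
    """Split documentation into roughly paragraph-sized chunks."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks, current = [], ""
    for p in paragraphs:
        if current and len(current) + len(p) > max_chars:
            chunks.append(current)
            current = p
        else:
            current = f"{current}\n\n{p}".strip()
    if current:
        chunks.append(current)
    return chunks

def make_qa_prompt(chunk: str, library: str, version: str) -> str:
    """Build a prompt for generating Q&A pairs grounded in one chunk,
    tagged with the library version so the data stays version-aware."""
    return (
        "You are writing training data for a coding assistant.\n"
        f"Library: {library} {version}\n"
        f"Documentation excerpt:\n{chunk}\n\n"
        "Write three question/answer pairs a developer might ask about "
        "this excerpt. Answers must only use facts from the excerpt."
    )

docs = (
    "requests.get(url) sends a GET request.\n\n"
    "Timeouts are set via the timeout parameter."
)
prompts = [make_qa_prompt(c, "requests", "2.32") for c in chunk_docs(docs)]
```

Each generated pair would then need the curation step mentioned above (filtering for factual agreement with the source chunk) before it’s worth training on.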
AI companies have a lot of clever people working for them, they’re aware of these problems.
You’ll never be able to capture every source of questions that humans might have in LLM training data.
That’s the neat thing, you don’t.
LLM training is primarily about getting the LLM to understand concepts. When you need it to be factual, or are working with it to solve novel problems, you can put a bunch of relevant information into the LLM’s context and it can use that even if it wasn’t explicitly trained on it. It’s called RAG, retrieval-augmented generation. Most of the general-purpose LLMs on the net these days do that, when you ask Copilot or Gemini about stuff it’ll often have footnotes in the response that point to the stuff that it searched up in the background and used as context.
So for a future Stack Overflow LLM replacement, I’d expect the LLM to be backed up by being able to search through relevant documentation and source code.
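A toy illustration of the retrieval half of RAG. Real systems rank by embedding similarity; plain word overlap stands in for it here, and the documentation snippets are made up:

```python
# Toy retrieval-augmented generation: pick the most relevant doc
# snippets for a question, then put them into the prompt as context.
# Word-overlap scoring is a stand-in for real embedding search.

DOC_SNIPPETS = [
    "asyncio.run() runs a coroutine and manages the event loop.",
    "pathlib.Path.glob() yields paths matching a pattern.",
    "json.dumps() serializes a Python object to a JSON string.",
]

def score(question: str, snippet: str) -> int:
    """Count words shared between the question and a snippet."""
    return len(set(question.lower().split()) & set(snippet.lower().split()))

def retrieve(question: str, snippets: list[str], k: int = 2) -> list[str]:
    """Return the top-k snippets by overlap score."""
    return sorted(snippets, key=lambda s: score(question, s), reverse=True)[:k]

def build_prompt(question: str) -> str:
    """Assemble the final prompt: retrieved context first, question last."""
    context = "\n".join(retrieve(question, DOC_SNIPPETS))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("How does asyncio.run() manage the event loop?")
```

The LLM then answers from the supplied context rather than from whatever (possibly outdated) version of the docs happened to be in its training set, which is the point being made above.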
Even then, the summarizer often fails or brings up the wrong thing 🤷
You’ll still have trouble comparing changes if it needs to look at multiple versions, especially parsing changelogs and matching them against specific version numbers.
How does this play out when you hold a human contributor to the same standards? They also often fail to summarize information accurately or bring up the wrong thing. Lots of answers on Stack Overflow are just plain wrong, or focus on the wrong thing, or don’t reference the correct sources (when they reference anything at all). The most common criticism of Stack Overflow I’m seeing is how its human contributors direct people to other threads and declare that the question is “already answered” there when it isn’t really.
LLMs can do a decent job. And right now they are as bad as they’re ever going to be.
Well trained humans are still more consistent and more predictable and easier to teach.
There’s no guarantee LLM will get reliably better at everything. It still makes some mistakes today that it did when introduced and nobody knows how to fix that yet
You’re still setting a high standard here. What counts as a “well trained” human and how many SO commenters count as that? Also “easier to teach” is complicated. It takes decades for a human to become well trained, an LLM can be trained in weeks. And an individual computer that’ll be running the LLM is “trained” in minutes, it just needs to load the model into memory. Once you have an LLM you can run as many instances of it as you want to spend money on.
There’s no guarantee LLM will get reliably better at everything
Never said they would. I said they’re as bad as they’re ever going to be, which allows for the possibility that they don’t get any better.
Even if they don’t, though, they’re still good enough to have killed Stack Overflow.
It still makes some mistakes today that it did when introduced and nobody knows how to fix that yet
And humans also make mistakes. Do we know how to fix that yet?
This is already a problem for LLMs now
Documentation will carry it a bit but yeah, it’ll be an issue
Because we all know how perfect documentation is. 😂
Fair point lol
The same question applies to all the other websites out there being mined to train LLMs. Google’s AI Overviews in search remove the need for people to visit the linked sites. Traffic plummets. Ads dry up, and the sites go out of business. No new content to train on 🤷🏻‍♂️
You are assuming that people act in logical ways.
This is only a problem right now if you think about it.
deleted by creator
They’re probably hoping to use people’s submitted code for training. But that seems like it will be diminishing returns
The need for the service that SO provided won’t go away. Eventually people will migrate to new places to discuss. LLM creators will either constantly scrape those as well, forcing them to implement more and more countermeasures and GenAI-poison, or the services themselves will enshittify and sell our content (i.e. the commons) to LLM-creators.
I worry that the replacement is more likely a move to platforms like Discord. I mean it’s already happened in a lot of projects.
Discord is terrible for this.
I hate Discord with a passion. Trying to get everyone I know away from it.
Yes, it’s what I was referring to in the second part.
I’ve never been accused of being a smart man.
If they move to Discord, nobody will ever be able to find the answers. They must use a website that is indexable by search engines or it will be pointless.
Yeah. But this already happens, unfortunately.
Even without LLMs, it’s possible StackOverflow would have eventually faded into irrelevance
Yeah, exactly. A lot of groups have a Discord :( or other forums where people ask questions. I know I’ve had to ask questions on Svelte’s Discord :( for example. And I think even once on some YouTube influencer’s Slack…
Sucks cuz both of those places are silos and my questions and answers are forever lost.
Projects that use Discord for support piss me right off. What a stupid way to keep answering the same question over and over again.
It’s not like discord is any better than SO. It’s a closed platform, often with no read access if you don’t want to register, and it’s not searchable in the slightest.
I would take SO any day over discord.
Yep. 200% agree. I still post questions on SO, but when I don’t get any answers, then I have to go to Discord… :(
Can people access Discord from corporate networks? I’m fucked if the Google answer gives me Reddit or GitHub, because they’re blocked.
I don’t know, but for Reddit you could try one of the redlib instances, and gothub for GitHub. I don’t think Discord has such frontends.
Github is blocked for you? Bruh
I had to open a ticket for them to unblock the Python PEP pages! I was trying to teach my intern PEP 8 and didn’t have access. Fucking crazy.
How does that even get on a blocklist 😭
Don’t want the slaves reading the GPL and getting ideas.