I was just commenting on how shit the Internet has become as a direct result of LLMs. Case in point - I wanted to look at how to set up a router table so I could do some woodworking. The first result started out halfway decent, but the second section switched abruptly to something about routers having wifi and Ethernet ports - confusing network routers with the power tool. Any human/editor would catch that mistake, but here it is.
I can only see this getting worse.
The Internet was shit before LLMs.
I’d say it was weird, not shit. It was hard to find niche sites, but once you did they tended to be super deep into the hobby, sport, movies, or games.
SEO (search engine optimization) was probably the first step down this path, where people would put white text on a white background with hundreds of words that they hoped a search engine would index.
It had its fair share of shit, and that gradually increased over time, but LLMs are a whole new level of flooding everything with zero-effort content.
It’s not just the internet.
Professionals (using the term loosely) are using LLMs to draft emails and reports, and then other professionals (?) are using LLMs to summarise those emails and reports.
I genuinely believe that the general effectiveness of written communication has regressed.
Yep. My work has pushed AI shit massively. Something like 53% of staff are using it. They’re using it to write client reports for them, all sorts. It’s honestly mad.
I’ve tried using an LLM for coding - specifically Copilot for VS Code. It generates accurate code maybe 4 times out of 10, which means I spend more time troubleshooting, correcting, and validating what it produces than actually writing code.
I like using GPT to generate PowerShell scripts; surprisingly, it’s pretty good at that. They’re small tasks, so it’s unlikely to go off the deep end.
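The kind of thing I mean (a made-up example, not one of mine - the paths are hypothetical, and I always read the script over before running anything):

    # Hypothetical task: move log files older than 30 days into an archive folder
    # (assumes C:\Logs\Archive already exists)
    $cutoff = (Get-Date).AddDays(-30)
    Get-ChildItem -Path C:\Logs -Filter *.log -Recurse -File |
        Where-Object { $_.LastWriteTime -lt $cutoff } |
        Move-Item -Destination C:\Logs\Archive

It’s small enough that you can read every line and convince yourself it’s right, which is exactly where these tools shine.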
Like all tools, it is good for some things and not others.
“Make me an OS to replace Windows” is going to fail. “Tell me the terminal command to rename a file” will succeed (concrete example below).
It’s up to the user to apply the tool where it’s useful. A person simply saying ‘My hammer is terrible at making screw holes’ doesn’t mean the hammer is a bad tool; it tells you the user is an idiot.
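To make the rename case concrete - a hypothetical one-liner, in PowerShell only because that’s a common default shell on Windows:

    # Rename a single file: the scale of task an LLM reliably gets right
    Rename-Item -Path .\draft.txt -NewName final.txt

Ask for that and you’ll get a working answer nearly every time; ask for an operating system and you won’t.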
I use it to construct regexes which, for my use cases, can get quite complicated. It’s pretty good at doing that.
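For example (a made-up case, simpler than my real ones - the log line and pattern are just for illustration):

    # Hypothetical example: pull ISO-8601 timestamps out of a log line
    $line = '2024-05-01T13:45:09Z [ERROR] disk full'
    $pattern = '\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(?:Z|[+-]\d{2}:\d{2})?'
    [regex]::Matches($line, $pattern) | ForEach-Object { $_.Value }

The nice part is it will also explain each piece of the pattern back to you, which makes its output easy to check.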
Apparently Claude 3.7 Sonnet is the best one for coding.
I feel like it’s not that bad if you use it for small things, like single lines instead of blocks of code - basically a glorified autocomplete.
Sometimes it’s nice not to use it, though, because it can feel distracting.
Truly, who could have predicted that a glorified autocomplete program is best at performing autocompletion.
Seriously, the world needs to stop calling it “AI” - it IS just autocomplete!
I find it most useful as a means of getting answers for stuff that has poor documentation. A couple of weeks ago ChatGPT gave me an answer whose keyword had no matches on Google at all. No idea where it got that from (probably some private codebase), but it worked.
I’m glad you had some independent way to verify that it was correct. Because I’ve asked it stuff Google doesn’t know, and it just invents plausible but wrong answers.
Relevant comic