I don’t know how people can be so easily taken in by a system that has been proven to be wrong about so many things. I got an AI search response just yesterday that dramatically understated an issue by citing an unscientific, ideologically driven website with every interest and reason to minimize said issue. The actual studies showed a 6x difference. It was blatant AF, and I can’t understand why anyone would rely on such a system for reliable, objective information or responses. I have noted several incorrect AI responses to queries, and people mindlessly citing said responses without verifying the data or their sources. People gonna get stupider, faster.
I don’t know how people can be so easily taken in by a system that has been proven to be wrong about so many things
Ahem. Wasn’t there an election recently, in some big country, with an uncanny similarity to that?
Yeah. Got me there.
That’s why I only use it as a starting point. It spits out “keywords” and a fuzzy gist of what I need, then I can verify or experiment on my own. It’s just a good place to start or a reminder of things you once knew.
An LLM is like talking to a rubber duck on drugs while also being on drugs yourself.
I like to use GPT to create practice tests for certification exams. Even if I give it very specific guidance to double-check what it thinks is a correct answer, it will gladly tell me I got questions wrong, and I will have to ask it to triple-check the right answer, which is what I actually answered.
And in that amount of time it probably would have been just as easy to type up a correct question and answer rather than try to repeatedly corral an AI into checking itself for an answer you already know. Your method works for you because you have the knowledge. The problem lies with people who don’t and will accept and use incorrect output.
Well, it makes me double check my knowledge, which helps me learn to some degree, but it’s not what I’m trying to make happen.
Neat snaps camera
Neat
You can tell by the way that it is!
It’s not often you get all this neatness in one place
I know a few people who are genuinely smart but got so deep into the AI fad that they are now using it almost exclusively.
They seem to be performing well, which is kind of scary, but sometimes they feel like MLM people with how pushy they are about using AI.
Most people don’t seem to understand how “dumb” AI is. And it’s scary when I read shit like people using AI for advice.
People also don’t realize how incredibly stupid humans can be. I don’t mean that in a judgemental or moral kind of way, I mean that the educational system has failed a lot of people.
There’s some % of people that could use AI for every decision in their lives and the outcome would be the same or better.
That’s even more terrifying IMO.
I was convinced about 20 years ago that at least 30% of humanity would struggle to pass a basic sentience test.
And it gets worse as they get older.
I have friends and relatives that used to be people. They used to have thoughts and feelings. They had convictions and reasons for those convictions.
Now, I have conversations with some of these people I’ve known for 20 and 30 years and they seem exasperated at the idea of even trying to think about something.
It’s not just complex topics, either. You can ask them what they saw on a recent trip, what they are reading, or how they feel about some show, and they look at you like the hospital intake lady from Idiocracy.
No, no. Not being judgemental and moral is how we got to this point in the first place. Telling someone they are acting foolishly used to be pretty normal. But after a couple of decades of internet white-knighting, correcting or even voicing opposition to obvious stupidity is just too exhausting.
Dunning-Kruger is winning.
New mental illness boutta drop.
Bath Salts GPT
People addicted to tech omg who could’ve guessed. Shocked I tell you.
It depends: are you in Soviet Russia ?
In the US, so as of 1/20/25, sadly yes.
I knew a guy I went to rehab with. Talked to him a while back and he invited me to his Discord server. It was him, like three self-trained LLMs, and a bunch of inactive people he had invited, like me. He would hold conversations with the LLMs like they had anything interesting or human to say, which they didn’t. Honestly a very disgusting image. I left because I figured he was on the shit again and had lost it, and I didn’t want to get dragged into anything.
Jesus that’s sad
Yeah. I tried talking to him about his AI use but I realized there was no point. He also mentioned he had tried RCs again and I was like alright you know you can’t handle that but fine… I know from experience you can’t convince addicts they are addicted to anything. People need to realize that themselves.
Not all RCs are created equal. Maybe his use has the same underlying issue as the AI friends: problems in his real life, and now he seeks simple solutions.
I’m not blindly dissing RCs or AI, just his use of them (since the post was about people with problematic uses of this tech, I gave an example). He can’t handle RCs historically; he slowly loses it and starts to use daily. We don’t live in the same country anymore and were never super close, so I can’t say exactly what his circumstances are right now.
I think many psychedelics, at the right time in life and for the right person, can produce life-lasting insight, even through problematic use. But he literally went to rehab because he had problems due to his use. There’s something he isn’t dealing with, that’s for sure. He doesn’t admit it is a problem either, which bugs me. It is one thing to give up and decide to just go wild, another to do it while pretending one is in control…
I plugged this into gpt and it couldn’t give me a coherent summary.
Anyone got a tl;dr? For those genuinely curious: I made this comment before reading, purely as a joke. Had no idea it would be funnier after reading.
It’s short and worth the read, however:
tl;dr you may be the target demographic of this study
Lol, now I’m not sure if the comment was satire. If so, bravo.
Probably being sarcastic, but you can’t be certain, unfortunately.
Based on the votes it seems like nobody is getting the joke here, but I liked it at least
Power Bot 'Em was a gem, I will say
Negative IQ points?
New DSM / ICD is dropping with AI dependency. But it’s unreadable because image generation was used for the text.
This is perfect for the billionaires in control. Now if you suggest that “hey, maybe these AI have developed enough to be sentient and sapient beings (not saying they are now) and probably deserve rights”, they can just label you (and that argument) mentally ill.
Foucault laughs somewhere
I need to read Amusing Ourselves to Death…
My notes on it https://fabien.benetou.fr/ReadingNotes/AmusingOurselvesToDeath
But yes, stop scrolling, read it.
I mean, I stopped in the middle of the grocery store and used it to choose the best frozen chicken tenders brand to put in my air fryer. …I am ok though. Yeah.
At the store it calculated which peanuts were cheaper: 3 pounds of shelled peanuts on sale, or 1 pound of no-shell peanuts at full price.
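For what it’s worth, the comparison being outsourced here is just a unit-price calculation. A minimal sketch (all prices are made up for illustration):

```python
def unit_price(total_price: float, pounds: float) -> float:
    """Price per pound for a given deal."""
    return total_price / pounds

# Hypothetical prices: 3 lb of shelled peanuts on sale for $6.00,
# 1 lb of no-shell peanuts at a full price of $2.50.
shelled_sale = unit_price(6.00, 3.0)
no_shell_full = unit_price(2.50, 1.0)

cheaper = "shelled on sale" if shelled_sale < no_shell_full else "no-shell at full price"
print(f"${shelled_sale:.2f}/lb vs ${no_shell_full:.2f}/lb -> {cheaper}")
```

Which is to say: divide, compare, done. That this is the kind of task being handed to an LLM is part of the joke below.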
That’s… Impressively braindead
That’s the joke!
lmao we’re so fucked :D
And sunshine hurts.
Said the vampire from Transylvania.
Correlation does not equal causation.
You have to be a little off to WANT to interact with ChatGPT that much in the first place.
I don’t understand what people even use it for.
Compiling medical documents into one, anything of that sort: summarizing, compiling, coding issues. It saves a wild amount of time compiling lab results, something a human could do but would take multitudes longer.
It definitely needs to be cross-referenced and fact-checked, as the image processing and general responses aren’t always perfect. It’ll get you 80 to 90 percent of the way there. For me it falls under “solving 20 percent of the problem gets you 80 percent of the way to your goal.” It needs a shitload more refinement. It’s a start, and it hasn’t been a straight progress path, as nothing is.
I use it to generate a little function in a programming language I don’t know so that I can kickstart what I need to look for.
There are a few people I know who use it for boilerplate templates for certain documents, who then of course go through it with a fine-toothed comb to add relevant context and fix obvious nonsense.
I can only imagine there are others who aren’t as stringent with the output.
Heck, my primary use for a bit was custom text adventure games, but ChatGPT has a few weaknesses in that department (very, very conflict-averse about beating up bad guys, etc.). There are probably ways to prompt-engineer around these limitations, but a) there are other, better-suited AI tools for this use case, b) text adventure was a prolific genre for a while, and a huge chunk made by actual humans can be found at ifdb.org, c) real, actual humans still make them (if a little artsier and moodier than I’d like most of the time), so eventually I stopped.
I did like the huge flexibility vs. the parser available in most human-made text adventures, though.
I use it to make all decisions, including what I will do each day and what I will say to people. I take no responsibility for any of my actions. If someone doesn’t like something I do, too bad. The genius AI knows better, and I only care about what it has to say.
I use it many times a day for coding and solving technical issues. But I don’t recognize what the article talks about at all. There’s nothing affective about my conversations, other than the fact that using typical human expression (like “thank you”) seems to increase the chances of good responses. Which is not surprising since it better matches the patterns that you want to evoke in the training data.
That said, yeah of course I become “addicted” to it and have a harder time coping without it, because it’s part of my workflow just like Google. How well would anybody be able to do things in tech or even life in general without a search engine? ChatGPT is just a refinement of that.