A few colleagues and I were sat at our desks the other day, and one of them asked the group, “if you were an animal, what animal would you be?”
I answered with my favourite animal, and we had a little discussion about it. My other colleague answered with two animals, and we tossed those answers back and forth, discussing them and making jokes. We asked the colleague who had asked the question what they thought they’d be, and we discussed their answer.
Regular, normal, light-hearted (time wasting lol) small talk at work between friendly coworkers.
We asked the fourth coworker. He said he’d ask ChatGPT.
It was a really weird moment. We all just kind of sat there. He said the animal it came back with, and that was that. Any further discussion was just “yeah that’s what it said” and we all just sort of went back to our work.
That was weird, right? Using ChatGPT for what is clearly just a little bit of friendly small talk? There’s no bad blood between any of us, we hang out a lot, but it just struck me as really weird and a little bit sad.
Honestly that’s the same with one of our friends.
He got sucked into the LLM rabbit hole and now just occasionally says some weird shit no one interacts with.
I have a feeling that brainrot is accelerated in these kinds of people due to a positive feedback loop as they become ostracized due to a noticeable “self-deterioration”.
Use LLM -> become brainrot -> can’t connect with others -> use more LLM -> become more brainrot -> more ostracized from society -> ad nauseam.
They’re pushing LLMs so fucking hard at work but I finally destroyed my personal OpenAI account and decided to go back to actually researching topics.
It just got to the point that I got tired of constantly rewriting the same fucking problem 20 million ways in hopes of finally getting the right answer. I kept noticing that if I just slowed down and looked at what it was doing I could find the flaw myself in seconds.
Dunno about you, but whenever I have to post something on Stack Overflow or similar sites, explaining in minute detail what my problem is, I often end up figuring out what’s wrong myself. I suppose that can work in many other areas, too.
This is exactly the concept behind Rubber Duck debugging!
Way before chatgpt, i had a good friend who was kind of behind. He was pretty much the only person i knew without a smartphone. None of my friend group had social media, so it’s not like it mattered much. We would talk for hours about movies and books we read. We talked about hidden meanings behind movies, if we couldn’t remember what actors were in a movie, we just discussed it and talked about it and maybe eventually we figured it out. Or not.
One day, he got a new iphone and that was basically when we stopped hanging out. He became terminally online, and we couldn’t have a conversation anymore. Every conversation i tried to have with him was just him googling the answer. What do you think about that movie? I’ll ask imdb if the movie is good. It was more like talking to google itself than an actual person.
I think that’s what the future is gonna be like. Everyone you talk to may just ask chatgpt for the “right” answer or the “best” thing to say. It’s already happening on dating platforms, where a lot of women i see just have the same generic AI introduction and say that they ask chatgpt for advice. That coupled with the fakest, AI enhanced, filter filled pictures, who are you even talking to? Not a real person it seems.
It’s already wreaking havoc in grad school and college, i was surprised it took this long to reach normal convo.
I think y’all need to kill the fourth guy.
I’ve started treating it as the last tool I reach for in my toolbox. When it first came out, I was all for it, but then people started taking pictures of plants and expecting it to reliably identify them, then asking it for nutrition advice, then asking it about weather and the news.
It’s useful for a small subset of people some of the time, but for the vast majority, it just makes things more difficult.
Hahaha that’s brutal 😂
You’re asking an objective question in a very biased against AI community. Are you sure you’re asking a legit question or are you just asking the question here to get the answer you want? Just a thought.
Of course I wanted to vent, you’re taking this as a much more objective question than I intended. I intended it as mostly rhetorical because, yes, it’s obviously very weird lol
If he does this again, gently correct him in a friendly, no-bad-blood manner.
I don’t think this is an AI problem.
It’s just a human interaction / small talk problem, which has existed since the dawn of time.
I personally have no idea what animal I would be and I doubt I’d really get involved in that conversation, beyond whatever it took to be polite and not unpleasant.
I wouldn’t have asked chatgpt because I hate chatgpt, but I can imagine why someone would do that as a polite non-answer.
Yeah I can see that, it’s definitely one of those annoying and inane questions.
In this context though, we’re friendly and have known each other for multiple years. We’ve definitely had more pointless conversations, which is why this interaction in particular stood out to me as particularly weird!
Of course it’s bullshit. It doesn’t give a fuck about favourite animals and can’t feel anything, not even about humans, let alone animals. If you asked it about Israel/Palestine, it would tell you it’s a “sensitive, complicated topic” because it’s been scraping Zionist-influenced Western news; it does not and cannot give a fuck about kids being starved, maimed, and amputated, THAT IS if it’s not forcibly pre-censored on the topic already.
Your fourth coworker might have been ChatGPT for a while, you just didn’t realize that.
I’d better count his fingers, you’re right
Make him do some captcha / do some prompt injection to test
Mh … not sure what I hate more, AI or small talk. This is a tough one.
Take your friend out back, put a bullet in their head.
Lol
I forget which article it was, but I remember a teacher wrote that students were using ChatGPT to answer “introduce yourself.”
To be fair, forced introduction sessions are the fucking worst. Can’t we just get on with things and get to know people organically as time goes on, rather than being forced to boil down who we are into a short, socially acceptable introduction that no one is going to remember or care about anyway?
Eh. There’s a balance. “Introduce yourself” is open ended enough.
No idea. Hey Siri, please read this post and answer OP’s question. Is it weird? /s
I am sorry, I can’t find “please read this post and answer OP’s question” in your contact list. Would you like to create a contact for them?
Siri, play EOTEOT.
I always hate these questions and never have an answer with any meaning. I’d never delegate to an LLM because I understand the goal of the question but I’d be cheering on the guy that did
It reminds me of a way to frame being a boring person in a conversation.
Boring people list what happened, and enumerate quantitative variables about something they experienced.
Interesting people share their subjective experience of something.