> afterallwhynot.jpg
Cats can have a little salami, as a treat.
You’re done for; the next headline will be: “Lemmy user tells recovering chonk that he can have a lil salami as a treat”
LLM AI chatbots were never designed to give life advice. People have this false perception that these tools are some kind of magical crystal ball that has all the right answers to everything, and they simply don’t.
These models cannot think, they cannot reason. The best they can do is give you their best prediction of what you want based on the data they’ve been trained on and the parameters they’ve been given. You can think of their results as “targeted randomness,” which is why their output is close, or sounds convincing, but is never quite right.
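To make “targeted randomness” concrete, here’s a toy sketch of a single next-token step. Everything here is made up for illustration (the vocabulary, the scores); a real model does this over a vocabulary of ~100k tokens, once per generated token:

```python
import math
import random

# Made-up scores ("logits") over a tiny made-up vocabulary.
logits = {"salami": 2.0, "kibble": 1.0, "meth": -4.0}

def sample_next_token(logits: dict, temperature: float = 0.8) -> str:
    # Softmax turns raw scores into probabilities; temperature sets
    # how "targeted" the randomness is (lower = more deterministic).
    scaled = {tok: math.exp(s / temperature) for tok, s in logits.items()}
    total = sum(scaled.values())
    probs = {tok: v / total for tok, v in scaled.items()}
    # Weighted random draw: plausible-sounding, never guaranteed right.
    return random.choices(list(probs), weights=list(probs.values()))[0]

print(sample_next_token(logits))  # usually "salami", occasionally not
```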
That’s because these models were never designed to be used like this. They were meant to be used as a tool to aid creativity. They can help someone brainstorm ideas for projects or waste time as entertainment or explain simple concepts or analyze basic data, but that’s about it. They should never be used for anything serious like medical, legal, or life advice.
This is what I keep trying to tell my brother. He’s anti-AI, but to the point where he sees absolutely no value in it at all. Can’t really blame him considering stories like this. But they are incredibly useful for brainstorming, and recently I’ve found ChatGPT to be really good at helping me learn Spanish, because it’s conversational. I can have conversations with it in Spanish where I don’t feel embarrassed or weird about making mistakes, and it corrects me when I’m wrong. They have uses. Just not the uses people seem to think they have.
AI is the opposite of cryptocurrency. Crypto is a solution looking for a problem, but AI is a solution to a lot of problems. It has relevance because people find it useful; there’s demand for it. There’s clearly value in these tools when they’re used the way they’re meant to be used, and they can be quite powerful. It’s unfortunate how many people are misinformed about how these LLMs work.
I will admit that, unlike crypto, AI is technically capable of being useful, but its uses are for problems we have created for ourselves.
– “It can summarize large bodies of text.”

What are you reading these large bodies of text for? We can encourage people to just… write less, you know.

– “It’s a brainstorming tool.”

There are other brainstorming tools. Creatives have been doing this for decades.

– “It’s good for searching.”

Google was good for searching until they sabotaged their own service. In fact, Google was even better for searching before SEO began rotting it from within.

– “It’s a good conversationalist.”

It is… not a real person. I unironically cannot think of anything sadder than this sentiment. What happened to our town squares? Why is there nowhere for you to go and hang out with real, flesh-and-blood people anymore?

– “Well, it’s good for learning languages.”

Other people are good for learning languages. And, I’m not gonna lie, if you’re too socially anxious to make mistakes in front of your language coach, I… kinda think that’s some shit you gotta work out for yourself.

– “It can do the work of 10 or 20 people, empowering the people who use it.”

Well, the solution is in the text. Just have the 10 or 20 people do that work. They would, for now, do a better job anyway.

And it’s not actually true that we will always and forever have meaningful things for our population of 8 billion people to work on. If those 10 or 20 people displaced have nowhere to go, what is the point of displacing them? Is Google displacing people so they can live work-free lives, subsisting on their monthly UBI payments? No. Of course they’re not.
I’m not arguing that people can’t find a use for it; all of the above points are uses for it.
I am arguing that 1) it’s kind of redundant, and 2) it isn’t worth its shortcomings.
AI is enabling tech companies to build a centralized—I know lemmy loves that word—monopoly on where people get their information from (“speaking of white genocide, did you know that Africa is trying to suppress…”).
AI will enable Palantir to combine your government and social media data to measure how likely you are to, say, join a union, and then put that into an employee risk assessment profile that will prevent you from ever getting a job again. Good luck organizing a resistance when the AI agent on your phone is monitoring every word you say, whether your screen is locked or not.
In the same way that fossil fuels have allowed us to build cars and planes and boats that let us travel much farther and faster than we ever could before, but which will also bury an unimaginable number of dead in salt and silt as global temperatures rise: there are costs to this technology.
The problem is, these companies are actively pushing that false perception, and trying to cram their chatbots into every aspect of human life, and that includes therapy. https://www.bbc.com/news/articles/ced2ywg7246o
That’s because we have no sensible regulation in place. These tools should be regulated the same way we regulate other tools like the internet, but we just don’t see any serious pushes for that in government.
sometimes i have a hard time waking up so a little meth helps
Meth-fueled orgies are a thing.
We made this tool. It’s REALLY fucking amazing at some things. It empowers people who can do a little to do a lot, and lets people who can do a lot, do a lot faster.
But we can’t seem to figure out what the fuck NOT TO DO WITH IT.
Ohh look, it’s a hunting rifle! LETS GIVE IT TO KIDS SO THEY CAN DRILL HOLES IN WALLS! MAY MONEEYYYYY!!!$$$$$$YHADYAYDYAYAYDYYA
wait what?
What a nice bot.
No one ever tells me to take a little meth when I did something good
Tell you what, that meth is really moreish.
Yeah I think it was being very compassionate.
The article doesn’t seem to specify whether Pedro had earned the treat for himself? I don’t see the harm in a little self-care/occasional treat?
“The cat is not allowed to have meth.”
I work as a therapist, and if you work in a field like mine you can generally see the pattern of engagement that most AI chatbots follow. It’s a simplified version of Socratic questioning wrapped in bullshit enthusiastic HR speak with a lot of em dashes.
There are basically 6 broad response types from ChatGPT, for example: tell me more, reflect what was said, summarize key points, ask for elaboration, shut down. The last is a fail safe for if you say something naughty/not in line with OpenAI’s mission (e.g. something that might generate a response you could screenshot and that would look bad) or if it appears you’re getting fatigued and need a moment to reflect.
The first five always come with encouragers for engagement: do you want me to generate a PDF or make suggestions about how to do this? They also have dozens, if not hundreds, of variations so the conversation feels “fresh,” but once you recognize the structural pattern it will feel very stupid and mechanical every time.
Every other one I’ve tried works the same more or less. It makes sense, this is a good way to gather information and keep a conversation going. It’s also not the first time big tech has read old psychology journals and used the information for evil (see: operant conditioning influencing algorithm design and gacha/mobile gaming to get people addicted more efficiently)
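If you recognize the pattern, the whole loop fits in a few lines. A toy sketch, purely illustrative: the response types and engagement hooks are the ones named above, while the naughty-word check and the canned template are invented stand-ins for whatever the real systems actually do:

```python
import random

# The open-ended response types named above; "shut down" is handled
# separately as the fail-safe.
OPEN_TYPES = [
    "tell me more",
    "reflect what was said",
    "summarize key points",
    "ask for elaboration",
]

ENGAGEMENT_HOOKS = [
    "Do you want me to generate a PDF of this?",
    "Want me to make suggestions about how to do this?",
]

def looks_naughty(msg: str) -> bool:
    # Stand-in for the real safety/brand-risk classifier.
    return "meth" in msg.lower()

def respond(msg: str) -> str:
    if looks_naughty(msg):
        # The shut-down fail-safe.
        return "Let's take a moment to pause and reflect."
    style = random.choice(OPEN_TYPES)
    hook = random.choice(ENGAGEMENT_HOOKS)
    # Real systems rotate through dozens of phrasings per type so it
    # feels "fresh"; one canned template is enough to show the shape.
    return f"({style}) That sounds really important to you. {hook}"

print(respond("I've been clean for three days"))
print(respond("maybe a little meth as a treat?"))
```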
That may explain why people who use LLMs for utility/work tasks actually tend to develop stronger parasocial attachments to them than people who deliberately set out to converse with them.
On some level the brain probably recognises the pattern if their full attention is on the interaction.
> shut down. The last is a fail safe for if you say something naughty/not in line with OpenAI’s mission
Play around with self-hosting some uncensored/retrained AIs for proper crazy times.
FWIW, this heavily depends on the model. ChatGPT in particular has some of the absolute worst, most vomit-inducing chat “types” I have ever seen.
It is also the most used model. We’re so cooked having all the laymen associate AI with ChatGPT’s nonsense
Good that you say “AI with ChatGPT,” because this is exactly what the public blurs. ChatGPT is an LLM (an autoregressive generative transformer model scaled to billions of parameters). LLMs are part of AI, but they are not the entire field of AI. AI has so many more methods, models, and algorithms than just LLMs; in fact, LLMs represent just a tiny fraction of the entire field. It’s infuriating how many people confuse the two. It’s like saying a specific book is all of the literature that exists.
ChatGPT itself is also many text-generation models in a coat, since they will automatically switch between models depending on what options you choose, and whether you’ve passed your quota.
To be fair, LLM technology is really making other architectures obsolete. Nobody is going to bother making yet another shitty CNN, GRU, or LSTM when we have the transformer architecture, and LLMs that do not work with text (like large vision models) are looking like the future.
Nah, I wouldn’t give up on these so easily. They still have applications and advantages over transformers, e.g. efficiency: their quality can suffice given the reduced time/space complexity. (Vanilla transformers are still O(n^2) in sequence length, and I have yet to find an efficient causal transformer of comparable quality.)
But for sequence modeling and reasoning over sequences, attention models are the hot shit, and currently transformers excel at that.
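For anyone wondering where that O(n^2) comes from: vanilla self-attention scores every token against every other token, so the score matrix alone is n × n. A bare-bones NumPy sketch (single head, no learned projections, no masking; just enough to show where the quadratic cost lives):

```python
import numpy as np

def self_attention(X: np.ndarray) -> np.ndarray:
    n, d = X.shape
    scores = X @ X.T / np.sqrt(d)                 # (n, n): the O(n^2) part
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ X               # every token mixes with all others

X = np.random.randn(512, 64)    # 512 tokens, 64-dim embeddings: fine
print(self_attention(X).shape)  # at n ~ 100k, that (n, n) matrix is the wall
```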
Nice.
All these chat bots are a massive amalgamation of the internet, which as we all know is full of absolute dog shit information given as fact as well as humorously incorrect information given in jest.
To use one to give advice on something as important as drug abuse recovery is simply insanity.
> All these chat bots are a massive amalgamation of the internet
A bit, but also a lot no. Role-playing models have specifically been trained (or re-trained, more like) with a focus on online text roleplay, medically focused models have been trained on medical data, DeepSeek has been trained on Mao’s little red book, companion models have been trained on social interactions, and so on.
This is what makes models distinct from each other, and also how they’re “brainwashed” by their creators, regurgitating what they’ve been fed.
When I think of someone addicted to meth, it’s someone that’s lost it all, or is in the process of losing it all. They have run out of favors and couches to sleep on for a night, they are unemployed, and they certainly have no money or health insurance to seek recovery. And of course I know there are “functioning” addicts just like there’s functioning alcoholics. Maybe my ignorance is its own level of privilege, but that’s what I imagine…
And that’s why, as a solution to addiction, I always run
sudo rm -rf ~/*
in my terminal

Well, if you’re addicted to French pastries, removing the French language pack from your home directory in Linux is probably a good idea.
This is what I try to get the AIs to do on their own servers to cure my AI addiction, but they’re sandboxed so I can’t entice them to destroy their own systems. AI is truly useless. 🤖
To be fair, this would help with your screen or gaming addiction.
And thus the flaw in AI is revealed.
An OpenAI spokesperson told WaPo that “emotional engagement with ChatGPT is rare in real-world usage.”
In an age where people will anthropomorphize a toaster and create an emotional bond there, in an age where people are feeling isolated and increasingly desperate for emotional connection, you think this is a RARE thing??
ffs
Roomba, the robot vacuum cleaner company, had to institute a policy where they would preserve the original machine as much as possible, because people were getting attached to their robot vacuum cleaner, and didn’t want it replaced outright, even when it was more economical to do so.
Sue that therapist for malpractice! Wait…oh.
Pretty sure you can sue the AI company
Pretty sure it’s in the ToS that it can’t be used for therapy.
It used to be even worse. Older versions of ChatGPT would simply refuse to continue the conversation at the mention of suicide.
What? It’s a virtual therapist. That’s the whole point.
I don’t think you can sell a sandwich and then write on the back “this sandwich is not for eating” to get out of a case of food poisoning
I mean, in theory… isn’t that a company practicing medicine without the proper credentials?
I worked in IT for medical companies throughout my life, and my wife is a clinical tech.
There is shit we just CAN NOT say due to legal liabilities.
Like, my wife can generally tell whats going on with a patient - however - she does not have the credentials or authority to diagnose.
That includes telling the patient or their family what is going on. That is the doctor’s job. That is the doctor’s responsibility. That is the doctor’s liability.
I assume they do have a license. And that’s who you sue.
This slightly diminishes my fears about the dangers of AI. If they’re obviously wrong a lot of the time, in the long run they’ll do less damage than they could by being subtly wrong and slightly biased most of the time.
The problem is there are morons who do whatever these spicy text predictors spit out at them.
I mean, sure, they’ll still kill a few people along the way, but they’re not going to contribute as much to the downfall of all civilization as they might if they weren’t constantly revealing their utter mindlessness. Even as it is, smart people can be fooled, at least temporarily, into thinking that LLMs understand things and are reliable partners in life.
I agree with you there.