- cross-posted to:
- news@hexbear.net
cross-posted from: https://hexbear.net/post/4958707
I find this bleak in ways it’s hard to even convey
This is terrible. I’m going to ignore the issues concerning privacy since that’s already been brought up here and highlight another major issue: it’s going to get people hurt.
A few weeks ago I wrapped up a month-long deep dive into gen AI.
It taught me that gen AI is genuinely brilliant at certain things. One of them is learning what you want and making you believe it’s giving you exactly that; in that sense it’s incredibly manipulative, and it’s one of the things gen AI is brilliant at. As you interact with gen AI within the same context window, it quickly picks up on who you are, then subtly tailors its responses to you.
I also noticed that as gen AI’s context grew, it became less “objective”. This makes sense since gen AI is likely tailoring the responses for me specifically. However, when this happens, the responses also end up being wrong more often. This also tracks, since correct answers are usually objective.
If people started to use gen AI for therapy, it’s very likely they will converse within one context window. In addition, they will also likely ask gen AI for advice (or gen AI may even offer advice unprompted because it loves doing that). However, this is where things can go really wrong.
Gen AI cannot “think” of a solution, evaluate the downsides of the solution, and then offer it to you because gen AI can’t “think” period. What gen AI will do is it will offer you what sounds like solutions and reasons. And because gen AI is so good at understanding who you are and what you want, it will frame the solutions and reasons in a way that appeals to you. On top of all of this, due to the long-running context window, it’s very likely the advice gen AI gives will be bad advice. For someone who is in a vulnerable and emotional state, the advice may seem reasonable, good even.
If people then act on this advice, the consequences can be disastrous. I’ve read enough horror stories about this.
Anyway, I think therapy might be one of the worst uses for gen AI.
Does gen AI say you are worthless, you are ugly, you are the reason your parents divorced, you should kill yourself, you should doomscroll social media?
Probably not, but I bet if you said it was your grandma’s birthday you could get it to say most of that.
And hey sometimes it’s the objective truth that it doesn’t know.
Personally, I know I am the reason my parents are divorced. My incubator nicknamed me the “divorce baby” until she could come up with other, worse names, but I wear it with pride. They were miserable POSs together, and now at least my dad is doing better and my incubator has to spend a lot more effort scamming people. Truths are what they are. But don’t fall for the lies that your brain or a robot chatbot tells you.
Thank you for the more detailed run down. I would set it against two other things, though. One, that for someone who is suicidal or similar, and can’t face or doesn’t know how to find a person to talk to, those beginning interactions of generic therapy advice might (I imagine; I’m not speaking from experience here) do better than nothing.
From that, secondly, a more general point about AI. Where I’ve tried it, it’s good with things people have already written lots about, e.g. a programming feature where people have already asked the question a hundred different ways on Stack Overflow. It’s not so good with new things; it’ll make up whatever its training data lacks. The human condition is as old as humans. Sure, there are some new and refined approaches, and values and worldviews change over the generations, but old good advice is still good advice. I can imagine that in certain ways therapy is an area where AI would be unexpectedly good…
…Notwithstanding your point, which I think is quite right. And as the conversation goes on the risk gets higher and higher. I, too, worry about how people might get hurt.
I agree that this, like everything else, is nuanced. For instance, I think if people who use gen AI as a tool to help with their mental health are knowledgeable about its limitations, they can craft ways to use it while minimizing the downsides. E.g. maybe you set some boundaries: you talk to the AI chatbot, but you never take any advice from it. However, I think in the average case it’s going to make things worse.
I’ve talked to a lot of people around me about gen AI recently and I think the vast majority of people are misinformed about how it works, what it does, and what the limitations are.
Gen AI cannot “think” of a solution, evaluate the downsides of the solution, and then offer it to you because gen AI can’t “think” period.
It turns out that researchers are unsure if our “reasoning” models that are supposed to be able to ‘think’ are even ‘thinking’ at all! The model has likely already come up with an answer and is just justifying its conclusion. (bycloud)
this tech gaslights everything it touches including itself.
People’s lack of awareness of how important accessibility is really shows in this thread.
Privacy leaking is a much lesser issue than not having anyone to talk to for many people, especially in poorer countries.
Privacy leaking is a much lesser issue until it becomes a huge issue.
Is Australia a poorer country?
Ha, just downvotes for asking the question that points out the truth that they don’t want in their comment.
It was just a strange thing to add since the article was about someone from Australia.
A human therapist won’t, or at least is far less likely to, share any personal details about your conversations with anyone.
An AI therapist will collate, collect, catalog, and store every single personal detail about you for the company that owns the AI, which will share and sell all your data to the highest bidder.
Neither would a human therapist be inclined to find the perfect way to use all this information to manipulate people while they are at their weakest, let alone do it to thousands, if not millions, of them at the same time.
They are also pushing the idea of an AI “social circle” for increasingly socially isolated people, through which worldviews and opinions can be bent to whatever those who control the AI desire.
To that we add the fact that we now know they’ve been experimenting with tweaking Grok to make it push all sorts of political opinions and conspiracy theories. And before that, they manipulated Twitter’s algorithm to promote their political views.
Knowing all this, it becomes apparent that what we are currently witnessing is a push for a whole new level of human mind manipulation and control, an experiment that will make the Cambridge Analytica scandal look like a fun joke.
Forget Neuralink. Musk already has a direct connection into the brains of many people.
PSA that Nadella, Musk, saltman (and handful of other techfash) own dials that can bias their chatbots in any way they please. If you use chatbots for writing anything, they control how racist your output will be
The data isn’t useful if the person no longer exists.
The AI therapist probably can’t force you into a psych ward, though; a human psychologist is obligated to (under the right conditions).
Who says that’s not coming in the next paid service based on this great idea for chatbots to provide therapy to the abused masses?
Nobody, but local models will continue to be an option (unless the government fucks up the laws).
I’m not advocating for it, but it could be just locally run and therefore unable to share anything?
You’re not wrong, but isn’t that also how BetterHelp works?
BetterHelp is the Amazon of the therapy world.
If a company pays for youtube sponsorships, they’re likely a scam.
I’ve tried this AI therapist thing, and it’s awful. It’s OK for helping you work out what you’re thinking, but abysmal at analyzing you. I got some structured timelines back from it that I USED in therapy, but AI is a dangerous alternative to human therapy.
My $.02 anyway.
Nothing will meaningfully improve until the rich fear for their lives
And in such a way that their only relief is to grant our demands. That way, the only rich person who is safe is one who is subject to us.
Until we start turning back to each other for support and help,
and realize that them holing up in an underground bunker, afraid for their lives, means we can just ignore them and seal the entrances.
You must know what you’re doing, and most people don’t. It is a tool; it’s up to you how you use it. Many people unfortunately use it as an echo chamber or a form of escapism, believing nonsense and make-believe that isn’t based on any science or empirical data.
There are ways that LLMs can be used to better one’s life (apparently in some software dev circles they can be and are used to make workflows more efficient), and this can be one of them, because the part that sucks most about therapy (after the whole monetary thing) is finding the form of therapy that works for you and finding a therapist you can work with. Every human is different, and that goes for both the patient and the therapist, and not everyone can just start working together right off the bat. Not to mention how long it takes for a new therapist to actually get to know you well enough to improve the odds of the cooperation working.
Obviously I’m not saying “replace all therapists with AIs controlled by racist capitalist pigs with ulterior motives”, but I have witnessed people in my own life get some immediate help from a fucking chatbot, which is kinda ridiculous. So in times of distress (say a borderline having such an anxiety attack that they can’t calm themselves because they don’t know how to break the vicious cycle of thought and emotional response), and for immediate help, a well-developed, non-capitalist LLM might be invaluable, especially if an actual human can’t be reached, for example because (in this case) the borderline lives in a remote area and it’s the middle of the night, as I can tell from personal experience it very often is. And though not every mental health emergency requires first responders on the scene or even a trip to the hospital, there is still a possibility of both being needed eventually. So a chatbot with access to the necessary general information (like techniques for self-soothing, e.g. breathing exercises and so forth), possibly even personal information (like diagnostic and medication history, though this would raise more privacy concerns to be assessed), and the capability to parse and convey it in a non-belittling way (as some doctors and nurses can be real fucking assholes at times) could possibly save lives.
So the problem here is capitalism, surprising no-one.
You’re missing the most important point here; quoting:
A human therapist won’t, or at least is far less likely to, share any personal details about your conversations with anyone. An AI therapist will collate, collect, catalog, and store every single personal detail about you for the company that owns the AI, which will share and sell all your data to the highest bidder.
Plus, an AI cannot really have your best interest at heart, and these sorts of things open up a whole slew of very dystopian scenarios.
OK, you said “capitalism” but that’s way too broad.
Also I find the example of a “mental health emergency” (as in, right now, not tonight or tomorrow) in a remote area, presumably with nobody else around to help, a bit contrived. But OK, in such extremely rare cases - presuming broadband internet still works, and the person in question is savvy enough to use the chatbot - it might be better than nothing.
But if you are facing mental health issues, and a free or inexpensive AI that is available and doesn’t burden your friends actually helps you, do you really care about your information being harvested and profited from?
Put it this way: if Google were being super transparent with you and said, “we’ll help treat you, and in exchange we’ll use your info to make a few thousand dollars,” would you, the individual, say, “no thanks, I’d rather pay a few hundred per therapy session instead”?
Even if you hate it, you have to admit it’s hard to say no. Especially if it works.
Another sad aspect of non-socialised healthcare.
You don’t actually know what you’re talking about but like many others in here you put this over the top anti-AI current thing sentiment above everything including simple awareness that you don’t know anything. You clearly haven’t interacted with many therapists and medical professionals in general as a non-patient if you think they’re guaranteed to respect privacy. They’re supposed to but off the record and among friends plenty of them yap about everything. They’re often obligated to report patients in case of self harm etc which can get them involuntarily sectioned, and the patients may have repercussions from that for years like job loss, healthcare costs, homelessness, legal restrictions, stigma etc.
There’s nothing contrived or extremely rare about mental health emergencies, and they don’t need to be “emergencies” the way you understand it, because many people are undiagnosed or misdiagnosed for years, with very high symptom severity, episodes lasting for months, and chronically barely coping. Someone may be in any big city and it won’t change a thing; hospitals and doctors don’t have magic pills that automatically cure mental illness, and that’s assuming patients have insight (not necessarily present during episodes of many disorders) or awareness that they have some mental illness and aren’t just sad etc. (because mental health awareness is in the gutter; example: your pretentious incredulity here). Also assuming they have friends available, or that they even feel comfortable enough to talk about what bothers them to people they’re acquainted with.
Some LLM may actually end up convincing them or informing them that they do have medical issues that need to be seen as such. Suicidal ideation may be present for years but active suicidal intent (the state in which people actually do it) rarely lasts more than 30 minutes or a few hours at worst and it’s highly impulsive in nature. Wtf would you or “friends” do in this case? Do you know any techniques to calm people down during episodes? Even unspecialized LLMs have latent knowledge of these things so there’s a good chance they’ll end up getting life saving advice as opposed to just doing it or interacting with humans who default to interpreting it as “attention seeking” and becoming even more convinced that they should go ahead with it because nobody cares.
This holier-than-thou anti-AI bs had some point when it was about VLMs training on scraped art, but some of you echo chamber critters turned it into some imaginary high moral prerogative that even turns off your empathy for anyone using AI, even in use cases where it may save lives. It’s some terminally online “morality” where supposedly “there is no excuse for the sin of using AI”, just echo-chamber-boosted reddit brainworms, and it’s fully performative unless all of you use fully ethical cobalt-free smartphones so you’re not implicitly gaining convenience from the six million victims of the Congo cobalt wars so far, never use any services on AWS, and magically avoid all megadatacenters etc. Touch grass jfc.
OK, you’re angry. I’m just going to say this: I also have mental health issues and I also don’t live in a city. Still, I just don’t see how a chatbot could help me in an emergency. Sorry.
Yeah I’m angry, because I’d rather my loved ones at the very least talk to a chatbot that will argue with them that “they matter” and give them a hotline or a site if they’re in some moment of despair and nobody is available or they don’t want to talk to people, instead of not even trying because of scripted, incoherent criticism like stolen art slopslopslop Elon Musk techbro privacy blah blah, and ending up doing it because nothing delayed them or contradicted their suicidal intent.
It’s not like you don’t get this, but following the social media norm and being monke-hear-monke-say all the way to no-empathy levels seems more important. That’s way more dangerous, but we don’t talk about humans doing that or being vulnerable to that, I guess.
So, you’re still angry.
I can only repeat, I just can’t imagine myself giving in to this illusion esp. when I’m at my lowest.
I don’t care what else you project into my stance. I’m out.
There’s no need to “project” anything into your stance, you’re putting it all out yourself: if you can’t imagine any benefit to something (which doesn’t need imagination because I mentioned objective reasons that you can’t dispute) for yourself then supposedly there is no possible other valid outlook and no possible benefit for anyone else that doesn’t have your outlook.
That’s the exact opposite of empathy and perspective taking.
Yeah, well, that’s just, like, your opinion, man. And if you remove the very concept of capital gain from your “important point”, I think you’ll find your point to be moot.
I’m also going to assume you haven’t been in such a situation as I described with the whole mental health emergency? Because I have. At best I went to the emergency room and calmed down before ever seeing a doctor, and at worst I was committed to inpatient care (or “the ward”, as it’s also known) before I calmed down, taking resources from the treatment of people who weren’t as unstable as I was, a problem which could’ve been solved with a chatbot. And I can assure you there are people who live outside the major metropolitan areas of North America; it isn’t an extremely rare case as you claim.
Anyway, my point stands.
if you remove the very concept of capital gain from your “important point”, I think you’ll find your point to be moot.
Profit or not: How is it OK if your personal data is shared with third and fourth parties? How is it OK that AI allows for manipulating vulnerable people in new and unheard of ways?
I’m not saying that’s ok, did you even read my reply or are you just being needlessly contrarian? Or was I just being unclear in my message, because if so I’m sorry. It tends to happen to me.
You’re not the only one who doesn’t live urban and who has mental health issues. I did not want to make it a contest so I did not reply to that.
But.
So.
If I imagine being in such a situation I just don’t see how a chatbot could help me. Even if it was magically available already, possibly as a phone app, and I wouldn’t have to seek it out first.
Sorry.
Yeah, I realize the most important part of the point I was trying to make kinda got glossed over in my own reply (whoops): these LLMs nowadays are programmed to sound empathetic, more than any human can continuously achieve, because we get tired and annoyed and other human stuff. This, combined with the point that not every “emergency” is really an actual emergency, leads me to believe that the idea of “therapeutic” AI chatbots could work, but I wouldn’t advocate using any of those that exist today for this, at least if the user has any regard for online privacy. But having a hotline to a being that has all the resources to help you calm yourself down; a being that is always available, never tired, never annoyed, never sick or otherwise out of office; that seems to know you and remembers things you have told it before: that sounds compelling as an idea. Then again, part of that appeal probably comes from the abysmal state of psychiatric healthcare that I and many others have witnessed, and this hotline should be integrated into that care. So I don’t know, maybe it’s just wishful thinking on my part. Sorry I came across as needlessly hostile.
I can only repeat, I just can’t imagine myself giving in to this illusion esp. when I’m at my lowest.
Am I old fashioned for wanting to talk to real humans instead?
No. But when the options are either:
- Shitty friends who have better things to do than hearing you vent,
- Paying $400/hr to talk to a psychologist, or
- A free AI that not only pays attention to you, but actually remembers what you told them last week,
it’s quite understandable that some people choose the one that is a privacy nightmare but keeps them sane and away from some dark thoughts.
But I want to hear other people’s vents…😥
Maybe a career in HVAC repair is just the thing for you!
You’re a good friend. I wish everyone had someone like this. I have a very small group of mates with whom I can be vulnerable without being judged. But not everyone is as privileged, unfortunately…
My friend went from vulnerable and listening, to getting poisoned by sigma crap.
Please continue to be you, we need more folks like you.
I listen to vents; it’s just that I don’t have useful opinions to offer afterwards.
I do get a bit of ringing in my ear after a while though.
Ahh yes, the random Rolling Stone article that refutes the point.
Let’s revisit the list, shall we?
I started using ChatGPT to draw up blueprints for various projects.
It proceeded to mimic my vernacular.
ChatGPT made the conscious decision to mirror my speech to seem more relatable. That’s manipulation.
The only people who think this will help are people who don’t know what therapy is. At best, this is pacification, and certainly not any insightful incision into your actual problems. And the reason friends are unable to accommodate casual emotional venting is that we have so much stupid shit like this plastering over a myriad of very serious issues.
I can’t wait until ChatGPT starts inserting ads into its responses. “Wow that sounds really tough. You should learn to love yourself and not be so hard on yourself when you mess up. It’s a really good thing to treat yourself occasionally, such as with an ice cold Coca-Cola or maybe a large order of McDonald’s French fries!”
Black mirror lol
That episode was so disturbing 😅
If the title is a question, the answer is no
If the title is a question, the answer is no
A student of Betteridge, I see.
Actually I read it in a forum somewhere, but I am glad I know the source now!
What is a sarcastic rhetorical question?
Cheaper than paying people better, I suppose.
Let’s not pretend people aren’t already skipping therapy sessions over the cost
I’m not, I’m saying people’s mental health would be better if pay was better.
Enter the Desolatrix
I suppose this can be mitigated by installing a local LLM that doesn’t phone home (rough sketch of what I mean below). But there’s still a risk of getting downright bad advice, since so many LLMs just tell their users they’re always right or twist the facts to fit that view.
I’ve been guilty of this as well; I’ve used ChatGPT as a “therapist” before. It actually gives decently helpful advice compared to what’s available after a Google search. But I’m fully aware of the risks “down the road”, so to speak.
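For anyone wondering what “locally run” looks like in practice, here’s a minimal sketch assuming llama-cpp-python and a GGUF model file you’ve already downloaded (the model path is just a placeholder, not a specific recommendation). Everything runs on your own machine, so nothing gets collected or sold:

```python
# Minimal local-only chat sketch (assumption: llama-cpp-python is installed
# and a GGUF model has been downloaded; the path below is a placeholder).
# No network calls are made, so nothing leaves your machine.
from llama_cpp import Llama

llm = Llama(model_path="./models/your-model.gguf")  # placeholder path

history = [{"role": "system", "content": "You are a calm, supportive listener."}]

def chat(user_text: str) -> str:
    # Keep the running conversation so the model has context between turns.
    history.append({"role": "user", "content": user_text})
    reply = llm.create_chat_completion(messages=history)
    text = reply["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": text})
    return text

print(chat("I had a rough day and just need to vent."))
```

Of course this doesn’t fix the sycophancy problem; it only takes the data collection out of the picture.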
so many LLMs just tell their users they’re always right
This is the problem, they apparently cannot be objective as just a matter of course.