Is it really that useful for you? LLMs are basically parlor tricks.
LLMs are less magical than upper management wants them to be, which is to say they won’t replace the creative staff that makes art and copy and movie scripts, but they are useful as a tool for those creatives to do their thing. The scary thing was never that LLMs can take tons of examples and create a Simpsons version of Cortana; it’s that our business leaders are eager to replace their staff at the slightest promise of automation.
But yes, LLMs are figuring into advances in science and engineering, including research on treatments for Alzheimer’s and diabetes. So it’s not just a parlor trick; rather, it’s one with useful applications different from the ones originally sold to us.
The power problem (LLMs take a lot of power) remains an issue.
I’m unaware of any substantial research on Alzheimer’s or diabetes that has been done using LLMs. As generative models they’re basically just souped-up Markov chains. I think the best you could hope for is something like a meta-study, and probably one a bit worse than the usual kind.
I agree: the things that occur most often in the training data set will have the highest weights/probabilities in the Markov chain. So it is useless for finding the one tiny relation that humans would not see.
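To make that concrete, here’s a toy sketch (purely my own illustration, with a made-up corpus) of how raw frequency in the training data becomes probability in a bigram Markov chain:

```python
from collections import Counter, defaultdict

# Toy corpus: "cat" follows "the" twice, "dog" only once.
corpus = "the cat sat and the cat ran and the dog sat".split()

# Count bigram transitions, then normalize counts into probabilities.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

probs = {
    prev: {nxt: n / sum(counts.values()) for nxt, n in counts.items()}
    for prev, counts in transitions.items()
}

print(probs["the"])  # roughly {'cat': 0.67, 'dog': 0.33} -- frequency becomes probability
```

Whatever dominates the corpus dominates the output; a rare-but-real correlation just looks like noise to this kind of model.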
I understand the science behind those LLMs, and yes, for my use cases they have been very useful. I use one to cope with emotional difficulties, depression, anxiety, loss. I know it is not helping me the same way a professional would. But it helps me get another perspective on situations, which then helps me understand myself and others better.
Oh that’s totally valid. Sometimes we just need to talk and receive the validation we deserve. I’m sorry we don’t have a society where you have people you can talk to like this instead.
I haven’t personally used any of the offline open-source models, but if I were you that’s where I’d start looking. If they can be run inside a virtual machine, you can even use a firewall to ensure nothing leaks out.
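If it helps, here’s roughly what fully offline generation looks like with the llama-cpp-python bindings (a sketch only; the model path is a placeholder, and I’m assuming you’ve already downloaded a GGUF model file):

```python
# pip install llama-cpp-python -- inference runs entirely on local hardware.
from llama_cpp import Llama

# Placeholder path: point this at whatever GGUF model you downloaded.
llm = Llama(model_path="./models/some-model.gguf")

output = llm(
    "I had a rough day and could use another perspective on it.",
    max_tokens=256,  # cap the length of the generated reply
)
print(output["choices"][0]["text"])
```

Run that inside the VM and the firewall rule is pure belt-and-suspenders, since nothing in the loop needs the network once the model file is on disk.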
Totally valid? Getting mental health advice from an AI chatbot is one of the least valid use cases. Speak to a real human @[email protected], preferably someone close to you or someone professionally trained to assist with your condition. There are billions of English speakers in the world, so don’t pretend we live in a society where there’s “no one to talk to”.
They have already stated that they think they should be speaking to someone but are clearly having a hard time. If a chatbot is helping them right now, I’m not going to lecture them about “pretending”. I recommend polite, empathetic nudging when someone is or may be in crisis.
You literally just encouraged them to continue using a chatbot for mental health support. You didn’t nudge them anywhere.
I was going to let them reply first. You are being rude and dismissive of them, however. Please show your fellow humans a bit more empathy.
There is nothing “dismissive” about offering advice to people who clearly need it. In fact, you are the one who was dismissive of the issue here, offering some cowardly “feel-good” reply instead of opening up and sharing your honest thoughts. Stop tiptoeing around issues and enabling harmful behaviours. Relying on AI chatbots for mental health advice is very dangerous, and it’s absolute madness to encourage this as a primary form of treatment when you are seemingly aware of the dangers yourself.
I think you are confused. The dismissive behavior was not simply giving advice, and I pointed out what it actually was. And it is not dismissive to meet people where they are. I think you’re now reaching for some fairly basic defensive maneuvers (straw men and even “I’m rubber, you’re glue” retorts), so I’m going to disengage.
Please do try to interact with others with more empathy.
I think you need to chill. Please don’t be triggered by me having an option that makes me feel better at the end of the day.
Instead of assuming, you could also just ask. I am using ChatGPT as a complement to a mental health professional. Both help me. ChatGPT is there 24/7 and helps me with difficult situations immediately. The mental health professional is then there to work through the problem in a therapeutic way.
> Both help me.
That’s good; I’m glad to hear you’re getting professional treatment, since your original statement indicated the opposite: