Ultimately everything can and will be blamed on “ai” getting shit wrong, and that will mop up what remains of whatever culpability and accountability corporations still have.
Just another example of why companies are tripping over themselves to force AI into everything without doing the real work of actually stopping this shit. It seems like it would need very direct rules in the code to just defer to a human tech in the event of not “knowing” the answer. Just like how human level 1 customer support will say that the situation needs a higher-level person to get correct help. All these bots being trained to focus on sounding correct above everything else is eventually going to cause much worse problems as greed and hype rule.
It seems like it would need very direct rules in the code to just defer to a human tech in the event of not “knowing” the answer.
That would require a wholly different technology with some ability to interpret the things it’s saying and assess their validity. It’s a lot more cost efficient to have your AI spew bullshit and do damage control afterwards.
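The “defer to a human” idea above can be sketched as a confidence-gated wrapper. This is a toy illustration only, not how any real support bot works; the threshold, the `BotAnswer` type, and the idea that the model even reports a usable confidence score are all made up for the example (and, as the reply above points out, that last assumption is exactly what current LLMs can’t deliver):

```python
from dataclasses import dataclass

# Made-up cutoff; a real system would have to tune this, assuming
# it had a trustworthy confidence signal at all.
ESCALATION_THRESHOLD = 0.8

@dataclass
class BotAnswer:
    text: str
    confidence: float  # hypothetical self-reported score in [0, 1]

def respond(answer: BotAnswer) -> str:
    """Return the bot's answer only if it clears the confidence bar;
    otherwise hand the ticket off, like an L1 rep escalating."""
    if answer.confidence < ESCALATION_THRESHOLD:
        return "I'm not sure about this one - routing you to a human agent."
    return answer.text

# A confident answer passes through; a shaky one escalates.
print(respond(BotAnswer("Your subscription renews on the 1st.", 0.95)))
print(respond(BotAnswer("Logins are limited to one device.", 0.40)))
```

The gating logic itself is trivial; the hard part is the `confidence` field, which is precisely the self-assessment ability the comment above says would require a wholly different technology.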
Well then, it seems it would be good motivation if more people found ways to force the bots to give deep discounts on things. You’d need to be clever in tricking the AI so it isn’t outright obvious in the prompts, to help avoid the companies just accusing the buyers of hacking.
Shortly afterward, several users publicly announced their subscription cancellations on Reddit, citing the non-existent policy as their reason. “I literally just cancelled my sub,” wrote the original Reddit poster, adding that their workplace was now “purging it completely.” Others joined in: “Yep, I’m canceling as well, this is asinine.” Soon after, moderators locked the Reddit thread and removed the original post.
Really kind of just slid that pretty notable part of the story in there.
This marks the latest instance of AI confabulations (also called “hallucinations”) causing potential business damage. Confabulations are a type of “creative gap-filling” response where AI models invent plausible-sounding but false information. Instead of admitting uncertainty, AI models often prioritize creating plausible, confident responses, even when that means manufacturing information from scratch.
It amazes me how many different terms we’ve assigned to “an LLM we call artificial ‘intelligence’ just made some shit up because it sounded like the most logical thing to say based on probability.”
“Lies.” It’s an old-fashioned term, but a beautiful term.
It’s funny how “AI bros” will argue that we shouldn’t use “lies” because that implies intelligence, and then simultaneously treat their LLM as intelligent
We love the word don’t we folks? Very popular word. We love it.
Seems like it’d be easier to say the piece of shit doesn’t work
Instead of admitting uncertainty
tech journalists stop assigning agency to the probability-based slop generator [Challenge: IMPOSSIBLE]
Officer, I didn’t commit tax fraud, it was simply a hallucination you see.
You joke, but the accounting industry is working on LLM tools for auditing and tax