I could see this simply resulting in every chatbot having a disclaimer that it might be spitting straight bullshit and you should not use it for legal advice.
At this point, I do consider this a positive outcome, too, because it’s not always made obvious whether you’re talking with something intelligent or just a text generator.
But yeah, I would still prefer it if companies simply had to provide intelligent support. This race to the bottom isn’t helping humanity.
Experts told the Vancouver Sun that Air Canada may have succeeded in avoiding liability in Moffatt’s case if its chatbot had warned customers that the information that the chatbot provided may not be accurate.
That won’t hold up, though.
I don’t know about that. From the article: