- cross-posted to:
- technology@beehaw.org
i thought they were already doing that? idk, i assume a lot of the news that gets read is AI generated. if you have a good prompter you can easily crank out a hundred thousand years' worth of fake headlines.
They were generated by the news sites, not google itself
If hypothetically a false headline on a reputable site led to an incident involving injury or death, could Google be found liable in any way?
Are you cooking something up?
No, because the google.com EULA that you sign by having someone in your family ever Google something redeems them of any liability and gives them the right to sacrifice your first born to AI
EULAs are not legally enforceable anyways
yet.
They’re becoming closer and closer to it though. Scary court decisions are being made, it won’t be long before someone tests it as a legal argument
Liability waivers don’t apply outside the US.
I doubt it.
They could hypothetically. Will they? Probably not.
If ~~hypothetically~~ when a false headline on a reputable site led to an incident involving injury or death, ~~could Google~~ is anyone found liable in any way? ~~~~ rarely
How else are we gonna carry around a pack full of skeleton parts for our necrarmy
Thanks for the archive link.
I kind of like the idea of a system allowing me to automatically remove clickbait and sensationalism from headlines and replace it with a good summary. But I really hate how Google is pushing that without customization, without consent and in such a crappy state.
didn’t this happen already? the thing is generating AI responses instead of showing me the results first and then I’m not clicking on it because I’m a person
it’s also de-listing a ton of websites and subpages of websites and continuing to scrape them with Gemini anyway
Apple had to turn it off for their summary mode after backlash, even though the option always had the “these summaries are generated by AI and can be inaccurate” warnings placed prominently.
Google doing this shit without warning or notice will get them in shit water. News portals and reporters are generally not too fond of their articles being completely misrepresented.
it’s not just a matter of misrepresentation. it’s directing traffic away from the websites which are creating the content, maybe depriving them of every means that they have of monetizing it
Well, the loss of traffic is a knock-on effect of the misrepresentation. So is the fact that every other portal will try to sling shit at the ones affected by it.
Clickbait and ai bullshit from Google feed is pretty much all I’ve ever seen from them in the past year.
So what’s happening here is Google is feeding headlines into a model with the instructions to generate a title of exactly 4 words.
Every example is 4 words.
Why they think 4 words is enough to communicate meaningfully, I do not know. The other thing is that whatever model they’re shoving into their products for free is awful, hence the making things up, and not knowing that an “exploit” in the context of a video game is not the same as the general use of the word.
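If that really is the setup, a minimal sketch of it might look like this (the prompt wording and function names are pure guesses on my part, not anything Google has published):

```python
# Guess at the suspected pipeline: wrap each headline in a prompt that
# demands an exactly-four-word title, then validate the model's output.

def build_prompt(headline: str) -> str:
    # Instruction inferred only from the fact that every example is 4 words.
    return f"Rewrite this headline as a title of exactly 4 words: {headline}"

def is_four_words(title: str) -> bool:
    # The trivial length check such a system would presumably run.
    return len(title.split()) == 4

# The observed output "Trump cry like baby" fits the pattern:
print(is_four_words("Trump cry like baby"))  # True
```

That length constraint alone would explain the telegram-style grammar of the results.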
I don’t think meaningful communication is a KPI they optimize for. More likely time spent in the Discover feed.
“Trump cry like baby”. Huh.