Summary
Court records in an ongoing lawsuit reveal that Meta staff allegedly downloaded 81.7 TB of pirated books from shadow libraries like Z-Library and LibGen to train its AI models.
Internal messages show employees raising ethical concerns, with one saying, “Torrenting from a corporate laptop doesn’t feel right.”
Meta reportedly took steps to hide the activity.
The case is part of a broader debate on AI data sourcing, with similar lawsuits against OpenAI and Nvidia.
I’m mostly upset that this puts z-library and libgen back high up on the anti-piracy enforcers’ radar.
It’s fucked these guys can pirate all this shit and make money off it. But if the masses access it, shut it the fuck down! Break encryption! Curb the laws! Penalize! Penalize! Penalize!
Remember Aaron Swartz? Do you think Zuck will go to prison too?
lolno. Fuckerberg won’t see the inside of a prison cell. He’s what we in the law industry like to call “rich”.
yes, he’ll be tormented by the feds to the point he’ll take his own life.
I said this in another thread on this subject: this is a very clear violation of the Berne Convention, and Meta could find itself in court all over the world because of this.
https://www.wipo.int/treaties/en/ip/berne/
Probably not, but I can hope.
Mega Chad. Keep on seeding Mark!
I don’t think you read the article. They leeched off the servers. They didn’t seed back.
They should be getting fined for hit-and-run violations!
That is an insane amount of data. I’m trying to fathom what 82TB of text files looks like and I can’t.
So… if we say every ebook is 10 MB (that’s well into the high end, only a few are that big),
that’s roughly 8.6 million 10 MB books.
AI says the average public library in the USA has 116,481 items (but that includes all media formats); if we go with that, 82 TB is about 74 average-sized libraries with no repeating content.
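For scale, here’s a quick back-of-envelope in Python (a rough sketch only, assuming binary units plus the 10 MB-per-ebook guess and the 116,481-items-per-library figure quoted above):

```python
# Back-of-envelope for the ~82 TB figure, for scale only.
# Assumptions (not from the article): binary units (1 TB = 2**40 bytes,
# 1 MB = 2**20 bytes), a generous 10 MB per ebook, and the
# 116,481-items-per-library average quoted in the comment above.

BYTES_PER_TB = 2**40
BYTES_PER_MB = 2**20

dump_bytes = 82 * BYTES_PER_TB       # ~82 TB of downloaded books
ebook_bytes = 10 * BYTES_PER_MB      # high-end per-ebook size
items_per_library = 116_481          # average US public library (all formats)

books = dump_bytes // ebook_bytes
libraries = books / items_per_library

print(f"{books:,} ten-megabyte books")                # 8,598,323
print(f"~{libraries:.1f} average public libraries")   # ~73.8
```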
NYPL has around 10 million books and an additional 10 million manuscripts in its collection. Over 54 million total items for lending.
Not the largest by far, but still mind boggling in size.
To torrent and ingest something of that size is crazy.
Damn, that’s huge.
Never seen a library that big before. The university here has about 1.5 million and that’s a big library.
I had to look it up, but the Library of Congress has over 30 million books. If I weren’t busy working on an exit from this country, I would have liked to take my kids to see it.
Perfect example of ‘rules for thee but not for me’. Assholes have no issue throwing the book at individuals infringing on copyright, then turn around and pull heinous shit like this. Heinous in their eyes, mind you.
Heinous when it doesn’t benefit THEM
As much as I hate much of the news about AI, I love the dilemma this puts the copyright lawyers and tech bros in. Either they admit that most copyright enforcement is a joke and stifles innovation, or they admit that creating AI from stolen works is standard practice and requires government intervention to get back on track.
Odds of severe punishment? Slim.
Word on the street is that they might be facing almost a dollar per petabyte.
Street value $8.7 trillion
Odds of severe punishment? ~~Slim~~ None.
Rules for thee, not for me.
And guess what will happen as a result of these discoveries - nothing.
Only if you’re rich like Fuckerberg. If you’re a poor person, it’s straight to the execution chamber.
That is like all of the books…
Yeah, they’re not exactly large in file size.
I guess they should have just borrowed them from the library.
Seed it Mark you asshole!
He can’t, the seed is stored in what Mark lacks!
The biggest hit and run in the history of torrenting.
What’s new here vs the last 87 articles that have been posted on this?
I didn’t see the others?
Time for the ol’ slap-on-the-wrist settlement of a few million dollars, or whatever Facebook makes in a day, if the courts even bother to function at this point.
Make it a royalty: until the data stops being used, they keep on paying.
Jammie Thomas had to pay over $9,000 per song she shared on Kazaa, and that was like 15 years ago. Inflation plus millions of shares should mean billions of dollars owed to the publishers… Plus deleting or forfeiting ownership of all the models trained on that data, naturally.
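Rough math on what that precedent would imply, as a sketch only (it assumes the roughly $9,250-per-work figure from that verdict and the ~8.6-million-book estimate from upthread, and ignores inflation and how courts actually set statutory damages):

```python
# Hypothetical damages math, for scale only.
# Assumptions (not established anywhere): $9,250 per infringed work, as in
# the 2007 Thomas-Rasset jury verdict, applied to the ~8.6 million books
# estimated upthread. Real statutory damages are decided per case and
# would not simply scale like this.

per_work_award = 9_250       # dollars per work in the Thomas-Rasset verdict
works = 8_600_000            # rough book count from the ~82 TB estimate

total = per_work_award * works
print(f"${total:,}")         # $79,550,000,000 -- tens of billions
```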