I created this account two days ago, but one of my posts ended up in the (metaphorical) hands of an AI powered search engine that has scraping capabilities. What do you guys think about this? How do you feel about your posts/content getting scraped off of the web and potentially being used by AI models and/or AI powered tools? Curious to hear your experiences and thoughts on this.
# Prompt Update
The prompt was something like, "What do you know about the user [email protected] on Lemmy? What can you tell me about his interests?" Initially, it generated a lot of fabricated information, but it would still include one or two accurate details. When I ran the test again, the response was much more accurate compared to the first attempt. It seems that as my account became more established, it became easier for the crawlers to find relevant information.
It even talked about this very post on item 3 and on the second bullet point of the “Notable Posts” section.
For more information, check this comment.
Edit¹: This is Perplexity. Perplexity AI is an advanced conversational search engine that enhances the research experience by providing concise, sourced answers to user queries. It operates by leveraging AI language models, such as GPT-4, to analyze information from various sources on the web. To gather that information, it employs data scraping: automated crawlers index and extract content from websites, including articles, summaries, and other relevant data, which then feeds the large language models (LLMs) it uses to generate responses to user queries. (12/28/2024)
Edit²: One could argue that data scraping by services like Perplexity raises privacy concerns because it collects and processes vast amounts of online information without explicit user consent, potentially including personal data, comments, or content that individuals posted without expecting it to be aggregated and/or analyzed by AI systems. One could also argue that this indiscriminate collection raises questions about data ownership, proper attribution, and the right to control how one's digital footprint is used in training AI models. (12/28/2024)
Edit³: I added the second image to the post and its description. (12/29/2024)
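For anyone curious what the crawl-and-extract step in Edit¹ looks like mechanically, here's a minimal, hypothetical sketch (not Perplexity's actual pipeline): parse a page, strip the markup down to visible text, and collect outbound links for the crawl frontier.

```python
# Hypothetical sketch of a crawler's extract step: pull visible text and
# outbound links from one HTML page. A real crawler would fetch over HTTP,
# respect robots.txt, and queue the discovered links.
from html.parser import HTMLParser

class PageExtractor(HTMLParser):
    """Collect visible text chunks and href links from an HTML document."""
    def __init__(self):
        super().__init__()
        self.text_parts: list[str] = []
        self.links: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_data(self, data):
        if data.strip():
            self.text_parts.append(data.strip())

# Toy page standing in for a fetched document.
html = '<p>AI scraping thread</p><a href="https://example.com/next">next</a>'
extractor = PageExtractor()
extractor.feed(html)
print(extractor.text_parts)
print(extractor.links)
```

The extracted text is what ends up indexed and summarized; the links are what let the crawler keep walking the web.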
Are you sure it’s not just performing a web search in the background, like ChatGPT and Bing do?
Yes, the platform in question is Perplexity AI, and it conducts web searches. When it performs a web search, it generally gathers and analyzes a substantial amount of data. This compiled information can be utilized in various ways, including creating profiles of specific individuals or users. The reason I bring this up is that some people might consider this a privacy concern.
I understand that Perplexity employs other language models to process queries and that the information it provides isn’t necessarily part of the training data used by these models. However, the primary concern for some people could be that their posts are being scraped (which raises a lot of privacy questions) and could also, potentially, be used to train AI models. Hence, the question.
While I try not to these days, sometimes I still state with authority that which I only believe to be true, and it then later turns out to have been a misunderstanding or confusion on my part.
And given that this is exactly the sort of thing that AIs do, I feel like they’ve been trained on far too many people like me already.
So, I’m just gonna keep doing what I have been. If an AI learns only from fallible humans without second guessing or oversight, that’s on its creators.
Now, if I was an artist or musician, media where accuracy and style are paramount, I might be a bit more concerned at being ripped off, but right now, they’re only hurting themselves.
I tested it out; it’s not very accurate and it seems to confuse users. But scraping has been a thing for decades, so this isn’t new.
Seems odd that someone from dbzer0 would be very concerned about data ownership. How come?
I don’t exactly know how Perplexity runs its service. I assume that their AI reacts to such a question by googling the name and then summarizing the results. You certainly received much less info about yourself than you could have gotten via a search engine.
See also: Forer Effect aka Barnum Effect
Seems odd that someone from dbzer0 would be very concerned about data ownership. How come?
That doesn’t make much sense. I created this post to spark a discussion and hear different perspectives on data ownership. While I’ve shared some initial points, I’m more interested in learning what others think about this topic rather than expressing concerns. Please feel free to share your thoughts – as you already have.
I don’t exactly know how Perplexity runs its service. I assume that their AI reacts to such a question by googling the name and then summarizing the results. You certainly received much less info about yourself than you could have gotten via a search engine.
Feel free to go back to the post and read the edits. They may help shed some light on this. I also recommend checking Perplexity’s official docs.
You’re aware that it’s in their best interest to make everyone think their """AI""" can execute advanced cognitive tasks, even if it has no ability to do so whatsoever and it’s mostly faked?
Taking what an """AI""" company has to say about their product at face value in this part of the hype cycle is questionable at best.
Especially now that we know that the deal between OpenAI and Microsoft is to declare that AGI has been developed once a system makes over $100 billion in profits.
https://gizmodo.com/leaked-documents-show-openai-has-a-very-clear-definition-of-agi-2000543339
They do not give a shit about the reality of their product.
You’re aware that it’s in their best interest to make everyone think their """AI""" can execute advanced cognitive tasks, even if it has no ability to do so whatsoever and it’s mostly faked?
Are you sure you read the edits in the post? Because they say the exact opposite: Perplexity isn’t all-powerful and all-knowing. It just crawls the web and uses other language models to “digest” what it finds. They are also developing their own LLMs. Ask Perplexity yourself or check the documentation.
Taking what an """AI""" company has to say about their product at face value in this part of the hype cycle is questionable at best.
Sure, that might be part of it, but they’ve always been very transparent about their reliance on third-party models and web crawlers. I’m not even sure what your point here is. Don’t take what they say at face value; test the claims yourself.
deleted by creator
Interesting question… I think it would be possible, yes. Poison the data, in a way.
If I have no other choice, then I’ll use my data to reduce AI to an unusable state, or at the very least a state where it’s aware that everything it spews out happens to be bullshit and ends each response with something like “but what I say likely isn’t true; please double-check with these sources…”, or something productive that reduces reliance on AI in general.
How do you feel about your content getting scraped by AI models?
I think famous Hollywood actress Margot Robbie summed my thoughts up pretty well.
I don’t like it, but I accept it as inevitable.
I wouldn’t say I go online with the intent of deceiving people, but I think it’s important in the modern day to seed in knowingly false details about your life, demographics, and identity here and there to prevent yourself from being doxxed online by AI.
I don’t care what the LLMs know about me if I am not actually a real person, even if my thoughts and ideas are real.
Hey, I know her, I’m pretty sure she’s in that one movie I watched!
I’m perfectly down with everything being scraped and slammed into AI the same way I’ve been down with search engines having it all for ages. I just want any models that contain information scraped from the public to be publicly available.
I’m pretty much fine with AIs scraping my data. What they can see is public knowledge and was already being scraped by search engines.
I object to:
- sites like Reddit whose entire existence is due to user content, deciding they can police and monetize my content. They have no right
- sharing of data, which includes more personal and identifiable data
- whatever the AI summarizes me as being treated as fact, such as by a company hr, regardless of context, accuracy, hallucinations
sites like Reddit whose entire existence is due to user content, deciding they can police and monetize my content. They have no right
Um, no, they do in fact have “every right” here. It’s shitty of course, but you explicitly gave them that right in the form of a perpetual, irrevocable, worldwide, etc. license to do whatever they like with everything you publish on their site.
They also have every right to “police” your content, especially if it’s objectionable. If you post vile shit, trolling or other societal garbage behaviour on the internet, nobody wants to see it.
Public knowledge about individuals, when condensed and analyzed in depth in huge databases, can patternize your entire existence, and you’re susceptible to being swayed in a certain direction in, for example, elections. Creating further division, and putting money into someone else’s pockets.
Maybe, but I can’t object too much if I put my content out in public. When forced to create an account I use minimal/false information and a unique generated email. I imagine those web sites can figure out how to aggregate my accounts (especially given the phone number requirement for 2FA), but there shouldn’t be enough public info for a scraper to build much of a profile.
Gotta think larger than yourself, though. What happens when your spouse uses real info? Your kids? Your parents? They’ll shadowplay your person with great accuracy and fill in the gaps. You don’t even have to “put content” out there; said databases can just put two and two together. How will you, or other users, even know you’re actually talking to a human? Perhaps you’re on Lemmy and we’re all bots trying to get you to admit fragments of your latest crimes in order to get you into jail for said crimes? Etcetera. At first glance this all looks harmless, but any accumulated information in huge databases is a major infringement of personal integrity at best, and complete control of your freedom at worst. The ultimate power is when someone can make you do X or Y and you don’t even realize you’re doing their bidding, but believe you have a choice when you don’t. (Similar to how it is in my living situation at home with my gf, that is :P jk.)
Hakuna matata. Happy new year
I completely agree, except that I think of them as multiple related privacy issues. In the scope of AI bots scraping my public content, most of these are out of scope.
What did you mean by “police” your content?
Probably not the right word, but my content should still be my content. I offered it to Reddit but that doesn’t mean they have the right to charge others for it or restrict it to others for commercial reasons.
Not the person you are replying to but Reddit does not make the content you created available for everyone (blocking crawlers, removing the free API) but instead sells it to the highest bidder.
Right, that’s my objection. After benefitting from my content, they police it, as in restrict other sites from seeing it, until it’s monetized. It’s not Reddit’s to charge money for.
I think it’s great, because there’s plenty of opportunity to covfefe
No matter how I feel about it, it’s one of those things I know I will never be able to do a fucking thing about, so all I can do is accept it as the new reality I live in.
I’ve been thinking for a while about how a text-oriented website would work if all the text in the database was rendered as SVG figures.
Not very friendly to the disabled?
Aside from that, accessibility standards are hardly considered even now, and I’d rather add a generated audio version option with some audio poisoning to mess with the AIs listening to it.
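For what it’s worth, the SVG idea a couple of comments up could look something like this (a hypothetical sketch). One catch: an SVG `<text>` element still carries the raw string, so scrapers can read it trivially; truly opaque output would mean converting glyphs to `<path>` outlines, which breaks screen readers exactly as noted.

```python
# Hypothetical sketch: wrap a comment's text in an SVG figure.
# Caveat: <text> keeps the string machine-readable, so this alone does not
# stop scrapers; real obfuscation would render glyphs as <path> outlines
# (and destroy accessibility in the process).
from xml.sax.saxutils import escape

def text_to_svg(text: str, width: int = 400, height: int = 40) -> str:
    """Return a minimal standalone SVG document displaying the given text."""
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">'
        f'<text x="10" y="25" font-family="sans-serif">{escape(text)}</text>'
        "</svg>"
    )

print(text_to_svg("Posts rendered as images, not text"))
```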
A lot of my comments are sarcastic shit posting, so if you want a good AI this is a bad idea
I feel a real problem with ai is not training them with curated content.
Lmao
Whatever I put on Lemmy or elsewhere on the fediverse implicitly grants a revocable license to everyone that allows them to view and replicate the verbatim content, by way of how the fediverse works. You may apply all the rights that e.g. fair use grants you of course but it does not grant you the right to perform derivative works; my content must be unaltered.
When I delete some piece of content, that license is effectively revoked and nobody is allowed to perform the verbatim content any longer. Continuing to do so is a clear copyright violation IMHO but it can be ethically fine in some specific cases (e.g. archival).
Due to the nature of how the fediverse works, you can’t expect it to take effect immediately, but it should at some point take effect, and I should be able to manually cause it to come into effect immediately by e.g. contacting an instance admin to ask for a removed post of mine to be removed on their instance as well.
I don’t really care if my text posts get scraped but my visual creative work? Na. I don’t like that.
Is it scraping or just searching?
RAG is a pretty common technique for making LLMs useful: the LLM “decides” it needs external data, and so it reaches out to a configured data source. Such a data source could be just plain ol’ Google.

I think their documentation will help shed some light on this. Reading my edits will hopefully clarify that too. Either way, I always recommend reading their docs! :)
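The RAG flow described above can be sketched in a few lines (a hypothetical toy: the retriever is a naive word-overlap ranker standing in for a real search backend, and the actual LLM call is omitted). The point is just that retrieved documents get stuffed into the prompt at query time rather than baked into the model's weights.

```python
# Toy RAG sketch: retrieve relevant documents, then build the augmented
# prompt an LLM would receive. The corpus and query are made up.

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (stand-in for a search engine)."""
    q_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble the retrieval-augmented prompt; a real system would send this to the LLM."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "user@example posts about Linux on Lemmy",
    "unrelated recipe for banana bread",
    "user@example discusses AI scraping and privacy",
]
prompt = build_prompt("What does user@example post about?", corpus)
print(prompt)
```

This is why the search-vs-training distinction matters: with RAG, your posts can show up in answers without ever being in the training set.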
I guess after a bit more consideration, my previous question doesn’t really matter.
Whether it’s scraped and baked into the model, or scraped, indexed, and used in RAG, they’re both the same ethically.
And I generally consider AI to be fairly unethical