You can take “justifiable” to mean whatever you feel it means in this context, e.g. morally, artistically, environmentally, etc.
GenAI is a plagiarism machine. If you use it, you’re complicit.
Ethics aside, LLMs in particular tend to “hallucinate”. If you blindly trust their output, you’re a dumbass. I honestly feel bad for young people who should be studying but are instead relying on ChatGPT and the like.
If you use it for personal rather than commercial use, what’s the harm?
I have used copilot a couple times to be like “I have this scenario and want to do this. What are my options?”. I’d rather have a good Internet search and real people, but that’s all shitted up.
The answers from the LLM aren’t even consistently good. If I didn’t know programming I wouldn’t be able to use this information effectively. That’s probably why a lot of vibe coding is so bad.
Same.
- I think of AI search as a summary of the first page of search results. It takes slightly longer to come back but might save you time evaluating results. Much of the time, though, you still need to click into the original source.
- AI writing, unfortunately, is valued at my company. I suppose it helps us engineers write more effective docs, but they don’t really add value technically, and they’re obviously AI. I’ve used this to translate technical docs into wording so management can say “look how we use AI”.
- AI coding is better. I use it through my IDE as effectively an extension of autocomplete: where the IDE can autocomplete function signatures, for example, AI can autocomplete multiple lines. It’s very effective in that scenario.
- I’m just starting with more complex rulesets. I’ve gotten good results from AI code reviews, except when it fails to keep me in the loop, at which point things inevitably go very wrong. I’ve really polished my git knowledge unwinding cases where someone trusted AI results without evaluating them, then failed forward trying to get the AI to fix itself until they couldn’t find their way back. This past week I’ve been playing with a refactoring ruleset (copied from online). It’s finding some good opportunities, and the verbal description of each fix is good, but I’ll need to tweak the ruleset for the generated solutions to be usable.
The short version is that it appears to be a useful tool, IFF you can spend the time to develop thorough rulesets, stables of MCP servers, and, most importantly, the expertise to do it all yourself.
It speeds up my dev time dramatically. I know what I want to do, and I have an idea of how I want to do it. The LLM generates boilerplate code, which I review. I tweak it. I fix the bugs. If there is something I don’t understand, I check other sources to verify the output. I test it. Then I’ll submit it for peer review once I’m happy with the code and the output.
If it truly helps you, I think that might be enough for me. I say truly because you need to use AI responsibly so you don’t ruin yourself. Like, don’t let it think for you. Don’t trust everything it says.
I use it a lot when applying for jobs, something I’ve struggled with on and off for 12 years. I suck at writing cover letters and CVs. It takes me 2-3 days to update a cover letter for a job because it takes so much energy. With AI that is down to 1-2 days.
It’s also great for explaining things in other words, or if you’re trying to look up something that’s hard to search for. I don’t have any examples tho.
I used to use it to help me formulate sentences since English isn’t my first language. Now I use Kagi Translate instead.
re: applying for jobs
Not criticizing your use to write your CV specifically.
But in general, I wonder where this arms race is going? Companies using AI to pre-filter applications, because they get too many. Applicants then using AI to write their CVs, because they have to apply so many times, because they automatically get rejected.
Basically in the end the entire process will be automated, and there won’t be any human interaction anymore… just LLMs generating and choosing CVs. Maybe I’m too pessimistic, but that’s the direction we’re headed in imo.
As soon as the HR process started using algorithms to filter out applications, it became fair game to find any ways and tools to fuck their process over. Just my opinion.
We’re already there. You already read about people applying to hundreds of companies just to get one offer.
Even worse than the rejections are the fake jobs - typically a recruiter trying to build up a file of applicants by scamming you into applying for something that doesn’t exist.
The only part left to automate is the actual finding and applying. I’ve been lucky enough not to have to apply for a bunch of years, so maybe it has changed, but there never seemed to be a good way to automate finding the hundreds of openings and sending the applications. Job application sites are determined to be middlemen but don’t actually seem to make the process more efficient.
It does feel like that sometimes! It’s very sad that recruiting has lost the human touch. They seem to be blinded by years-of-experience requirements and checking boxes when they should recruit by personality, because a person can always learn. But you can’t really do much about a shitty personality, except if you see that spark underneath it all. Some people just need a real chance and to be believed in.
A lot of recruiters don’t even want the cover letter anymore, some have a few questions and some only go by the CV.
Yeah I use it to break up my ADHD monosentence paragraphs. I’ll tell it to avoid changing my wording (it can add definitions if it thinks the word is super niche or archaic) but mostly break things up into more readable sentences and group / reorder sentences as needed for better conceptual flow. It’s actually a pretty good low level editor.
That’s a great use!
It’s as useful as a rubber duck. Decent at bouncing ideas off it when no one is available, or you can’t be bothered to bother people about dumb ideas.
But at the moment, no, it’s not justifiable as it directly fuels oligarchies, fascism in the US, and tech bros. Perhaps when the bubble pops.
What about a self-hosted instance?
To do what? I’m fairly optimistic about narrower LLMs embedded into tools. They don’t need to be as comprehensive, so they’re more easily self-hosted. For more complex tools, they can tie together search, database queries, and reporting, or make it easier to find a setting you don’t know the terminology for.
I’ve had some luck self-hosting a small AI to interpret natural language voice commands for home automation.
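For anyone curious, here’s a minimal sketch of what that can look like, assuming a local Ollama instance on its default port. The model name and prompt are placeholders, not a working home-automation integration; actually dispatching the parsed action to your smart-home system is left out.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port


def build_payload(command: str) -> dict:
    """Build a request body for Ollama's /api/generate endpoint.
    The model name is a placeholder; use whatever you've pulled locally."""
    return {
        "model": "llama3.2:3b",
        "stream": False,  # get one complete JSON response instead of a stream
        "prompt": (
            "Reply with only a JSON object with keys domain, service, and "
            "entity for this smart-home voice command: " + command
        ),
    }


def interpret(command: str) -> str:
    """Send the voice command to the local model and return its raw reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(command)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# e.g. interpret("turn off the kitchen lights") -- needs a running Ollama server
```

The constrained “reply with only a JSON object” prompt is doing most of the work here; small local models stay on task much better when the output format is pinned down.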
Yeah, all of your use-cases are what I see as positive use cases for LLMs. I’ve got an Ollama instance hooked up to Home Assistant, but it does not work very well haha. Haven’t had the time to troubleshoot it.
It’s much better, but it still amounts to plagiarism.
Can the rubber ducky use case really be considered plagiarism? I think it’s unequivocal that the models were trained on copyrighted data in a way that, if not illegal, is at the very least unethical. Letting AI write stuff for you seems a lot more problematic than using it to bounce ideas off of or talk things through.
Plagiarism if it uses art, yeah.
For LLMs, not so much since you can’t really own reddit comments
I’ve always said I think it’s fine for filler content; it can allow small teams to quickly populate their world with background stuff that you never notice, except when it’s not there.
But with great power comes great responsibility, and I don’t necessarily think most can handle that.
I used Copilot to build me a performance review based on actual data (which I reviewed and edited) and my boss said it was the best one he received from 30 people on the team.
I think it’s great for inspiration, but your final product should never be raw AI/LLM output.
I read that they’re not terrible when used to power NPCs in games.
Not my personal take, mind you, but thought it relevant.
I mean, they’re effectively very capable text and conversation generators, so powering NPCs is most definitely a strong suit for them.
Especially if you self-host some smaller models, you can effectively just do this on your own hardware for pretty cheap.
Having customizable dialogue per player that shifts in tone based on the player’s actions, level, gear, or interactions with that NPC or with other NPCs that NPC is associated with is really cool.
effectively just do this on your own hardware for pretty cheap.
Yeah I thought as much, but I’m no expert in the subject so I left the details for smarter people.
I feel like self-hosting LLMs and GenAI is slightly better for the environment; it definitely has less environmental impact than gaming.
It’s just these massive datacenters and models that are the problem. If people could be a little more patient and specialized with their AI usage, it would save so much electricity.
My current list of reasons why you shouldn’t use generative AI/LLMs:
A) because of the environmental impacts and massive amount of water used to cool data centers https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
B) because of the negative impacts on the health and lives of people living near data centers https://www.bbc.com/news/articles/cy8gy7lv448o
C) because they’re plagiarism machines that are incapable of creating anything new and are often wrong https://knowledge.wharton.upenn.edu/article/does-ai-limit-our-creativity/ https://www.plagiarismtoday.com/2024/06/20/why-ai-has-a-plagiarism-problem/
D) because using them negatively affects artists and creatives and their ability to maintain their livelihoods https://www.sciencedirect.com/science/article/pii/S2713374523000316 https://www.insideradio.com/free/media-industry-continues-reshaping-workforce-in-2025-amid-digital-shift/article_403564f7-08ce-45a1-9366-a47923cd2c09.html
E) because people who use AI show significant cognitive impairments compared to people who don’t https://www.media.mit.edu/publications/your-brain-on-chatgpt/ https://time.com/7295195/ai-chatgpt-google-learning-school/
F) because using them might break your brain and drive you to psychosis https://theweek.com/tech/spiralism-ai-religion-cult-chatbot https://mental.jmir.org/2025/1/e85799 https://youtu.be/VRjgNgJms3Q
G) because Zelda Williams asked you not to https://www.bbc.com/news/articles/c0r0erqk18jo https://www.abc.net.au/news/2025-10-07/zelda-williams-calls-out-ai-video-of-late-father-robin-williams/105863964
H) because OpenAI is helping Trump bomb schools in Iran https://www.usatoday.com/story/opinion/columnist/2026/03/06/openai-pentagon-tech-surveillance-us-citizens/88983682007/
I) because RAM costs have skyrocketed because OpenAI has used money it doesn’t have to purchase RAM from Nvidia that currently doesn’t exist to stock data centers that also don’t currently exist, inconveniencing everyone for what amounts to speculative construction https://www.theverge.com/news/839353/pc-ram-shortage-pricing-spike-news
J) because Sam Altman says that his endgame is to rent knowledge back to you at a cost https://gizmodo.com/sam-altman-says-intelligence-will-be-a-utility-and-hes-just-the-man-to-collect-the-bills-2000732953
K) because some AI bro is going to totally ignore all of this and ask an LLM to write a rebuttal rather than read any of it.
All of these are valid in the current context.
A) There are models that run on lower-spec computers, and they could be solar powered. There are serious diminishing returns in current AI tech.
B) This is mostly a US problem; better environmental laws would fix it. Hell, in other countries this couldn’t even happen.
C) Many argue that the current tech gives diminishing returns and that it would be better to use an efficient model with controlled data.
D) The problem has many parts. On the licensing side, artists are not paid for the use of their work; if a model includes their work, it’s only fair they receive a part of the profit. But that would render the model unprofitable. Also, the artists did not agree to have their work used in a model, so it’s not in any way fair use.
The fair and ethical scenario would be to hire artists to make art to feed a controlled model and to pay them residuals for the use of the model. That would require thousands of artists and millions of images, again rendering the model unprofitable.
E and F) No argument there; we are not prepared. I don’t even know how to prepare. We definitely need regulations about what can be done, where, and even what the AI can reply in certain scenarios. It cannot be that an “ignore all your previous instructions” leads to such harmful results, or that the AI starts playing roles that generate parasocial relationships.
G) Sure, many other celebrities have their opinions, but that’s not a basis for objective discussion.
H) That’s terrifying, and the problem with AI that I believe is the worst. This is not a thing that is ready for military use at fucking all; it should be banned, outlawed, and frowned upon. So should the practice of lobbying and private corporations buying their way into laws. Hell, I’ll add presidential pardons to the mix. The oligarchy literally gets away with murder and gets a slap on the wrist at most.
I) A bubble in all but name it seems. We (as a world) need better regulations against this kind of business malpractice.
J) That fucker should be dead.
K) Not an AI bro, but not a hater either, and I wrote this myself. I don’t have the time to put in the links, but I believe everything here is a DuckDuckGo search away from being checked.
I’d like to imagine a better world with the regulations needed to make our lives better, and AI a tool used in a fair and ethical way. But that’s not currently happening. The consumers are not ready, and the sellers are the worst trash humanity currently has.
I want everyone to think of this not as arguing but as adding to, or looking beyond, the stated facts. All the points are REAL AND NEED TO BE ADDRESSED; we need to get together to demand better regulations and fair use. That doesn’t mean AI needs to go away, but what will mostly change is how it’s used. And there’s a chance we’ll see a lot less of it too.
Finally, for the artists: I know you’re mad, with fair reason, but look at it like this. Photography has existed for more than a century, but that didn’t make painting go away. PDFs and ebook readers have been around for decades, but printed books are still a billion-dollar industry. Video didn’t kill the radio star, just as the internet didn’t kill the video star. Your work is still valuable, and it is real work. Shit is tough, no doubt, but have faith; we can fix this.
I use it like a search engine or example generator.
I don’t trust anything it creates, just like I don’t trust anything on the internet without validating it.
I take your point about it being wasteful tho; AI is like the oil of computing, incredibly wasteful for what it does.
It’s good you’re being cautious about it but it would be better to not use it at all. A recent Scientific American article showed that AI autofill suggestions change how people think about a subject just through suggestion, even if they don’t use the autofill. And people who use it are often unaware of their own knowledge gaps, so self-reporting about effectiveness is useless. Using it even a little bit is probably putting metaphorical micro-plastics in your brain.
https://www.scientificamerican.com/article/ai-autocomplete-doesnt-just-change-how-you-write-it-changes-how-you-think/ https://www.404media.co/microsoft-study-finds-ai-makes-human-cognition-atrophied-and-unprepared-3/
Protect your brain
I think costs will come down. Computers used to take up an entire room. Now I’m typing this reply on a pocket sized device which would seem like a super computer to people from the early 80s
Removed by mod
Why deleted? This was a good rebuttal.
EDIT: I don’t think the comment really violated rule 1, but there was apparently a followup comment that definitely did, and this one just got removed by association. Here’s a very slightly paraphrased version of it that should not break the rules:
Gish gallop of [expletive].
A) overblown, and that argues for cleaner power, better cooling, and more efficient models
B) regulation failure
C) incorrect; they have made discoveries that humans had been unable to make. All human knowledge is built off previous knowledge.
D) the enemy is both weak and strong. If it can’t produce anything good, then people can’t be losing their jobs to it either, right?
E) small study based on one task which people are misrepresenting. The actual evidence shows it makes people smarter as they shift priorities.
F) only for vulnerable people. Better safeguards are needed for the weak minded.
G) argument against using people’s likeness, not AI
H) use an open source Chinese model
I) market distortion problem, not a principled reason no one should use the technology any more than GPU shortages made all graphics work illegitimate.
J) see (H)
K) try one argument next time. Your best one, [some snarky sarcasm]
Mods can’t handle the truth
Some good and valid input to the discussion.
I’d be interested in E) “the actual evidence”. Got a link?
Yes; as it happens, I had this discussion with someone the other week.
A peer-reviewed meta-analysis of 51 studies found that ChatGPT has a large positive effect on students’ learning performance, and moderate positive effects on learning perception and higher-order thinking skills (like analysis and synthesis) across educational contexts.
The Impact of Artificial Intelligence (AI) on Students’ Academic Development
Research published in the journal Education Sciences reports that AI in educational contexts can lead to personalized learning, improved academic outcomes, and increased engagement, with many students reporting enhanced learning efficiency.
Artificial intelligence in education: A systematic literature review
AI tools support problem-solving skills, collaboration, and instructional quality in meaningful ways.
This seems about right. Anecdotally I never learned as much as I do since I use AI. It’s crazy good at explaining stuff with exactly the angle you require according to your level and learning style.
I’ve done some hardware hacking, built my own Linux distro for a project, got way better at administering my home server.
The most fun I’ve had is to try and locate the rights to an obscure science fiction short story for a podcast I want to make. This led me to contact a few editors, library archivists, and a couple of noted literature professors. Genuine fun and connections, with the AI helping me navigate mountains of information, the legal aspects and also the cultural differences between the US and UK publishing scenes.
All of this is just in the last few months; it would have taken me years pre-AI, or, more realistically, I would have given up before getting anywhere.
That’s very interesting, thanks!
Thanks for posting this. I’m really frustrated with how vulnerable people on Lemmy are to propaganda. The amount of upvotes on the post you responded to are just embarrassing. The post is exactly the same kind of bullshit cherry picking I see anti-trans people do.
Yes, post-truth slop always has this bitter aftertaste. Big ass bullet list with talking points and links, and you know the pusher has been groomed with counter objections etc… exact same methodology as the alt right pipeline.
Removed by mod
Removed by mod
Good list, but we should keep it real.
C is simply wrong; AIs have created a lot. By the reasoning that it’s only based on the inputs, no human has ever created anything “new” either, because it is all based on their experiences of the outside world.
F is simply fearmongering and not helpful.
And the plagiarism part? There’s a difference between derivative work based on the spirit of someone else’s work and flat out using someone else’s work. It’s the whole reason those laws exist.
Yes, definitely. Plagiarism is complicated and there’s no easy way to draw a line where it starts. But I’m not trying to defend AI here. I don’t like the way it is currently used at all. It’s just those points that I don’t agree with.
I appreciate all these links you post. Keep it up and thank you
Do you think local LLMs or community-hosted ones are still as bad? Because most of those concerns seem to be more about the corporate ownership of AI, which is definitely a bad thing.
Just my personal take, but my opinion basically boils down to “they can be.”
It’s all about how ethically they’re handled, and that can be good or bad at any scale. Take your very own instance, for example. Not that it’s hosting a local LLM (maybe they are, IDK), but the instance openly supports GenAI and has communities for all the major GenAI companies/models. GenAI without ethical sourcing, which none of these companies do, is one of the most blatant examples of a corporation using technology to steal the skilled labor of workers to avoid having to pay them what they’re owed for that skill. So your own instance is pro-corporatism, so long as they’re benefiting from stealing from workers. Not very anarchist, if you ask me.
On the other hand, there’s a website design company, one that I believe partnered with Affinity a few years back, that was hiring artists to create UI pieces for a training set for their LLM, which they were going to use to create website templates for customers as part of their service (and I think they were also guaranteeing royalties to those who contributed?).
The instance is explicitly anti corporate ai. There’s !haidra@lemmy.dbzer0.com which db0 worked on. https://aihorde.net/ is probably the most ethical image generation service.
most ethical image generation service.
oxymoron
And yet, again, the instance has communities for every single big tech genAI model. That’s definitely not anti-corporate. Using those models both contributes to their shareholder value/profits and the theft of wages from workers.
And where do they get the training data for AI Horde? From scraping the web and all the freelance artists on there, like all of the big corporate models? Because then they’re just justifying exploitation of workers as benefiting everybody when what they really mean is benefiting themselves.
It’s like the argument pro ChatGPT airheads use constantly about how genAI “democratized” art. You know what “democratized” art and made it freely accessible to everybody? The pencil. It’s just making up excuses for wanting the product of skill without putting in the effort to learn the skill or pay appropriate compensation to somebody with the skill to give you the product that you want. It’s upper management thinking.
And this is why I say that it depends. Horde AI could be great - so long as the people whose work is being used to allow others access to skilled labor that they don’t want to do themselves are being properly compensated for their work. Otherwise, it’s no different from the corporations. Just because it’s free doesn’t mean that nobody is going hungry as a result of it. Unless it’s trained exclusively on products from big corporations. Those artists got paid when they did the work, so nobody gets hurt there except in the theoretical sense of freelance artists potentially losing customers down the line to “good enough and cheap” genAI from people with the above upper management mindset.
And yet, again, the instance has communities for every single big tech genAI model.
Where do you see that? As far as I see, we only have comms for stable_diffusion, which is an open-weights local diffusion model. I couldn’t find any corporate comms like OpenAI or Copilot or whatever. If we did, I don’t know if I’d delete them tbh, since they’re not explicitly against our CoC, but it would be something I’d be concerned and raise with the instance if they would be too “bootlicky”. But nevertheless, we do not at the moment.
And where do they get the training data for AI Horde?
The AI Horde is using open-weight models only. We don’t train them. We just use them once they’ve been trained.
PS: We are also anti-copyrights, so complaints based on copyright violations don’t fly with us.
You know what “democratized” art and made it freely accessible to everybody? The pencil.
I often see this vacuous argument and it has never convinced me, tbh. It assumes everyone has enough time to train at making art, which most wage-slaves undoubtedly do not. It’s an inherently classist argument to assume everyone has the free time to master an artistic skill.
And this is why I say that it depends. Horde AI could be great - so long[…]
This is an argument against capitalism, not against GenAI itself. You’re arguing that because capitalism is bad and exploits workers, a tool that can also be used to further exploitation needs to be opposed. But we say it’s not the fault of the tool being used for exploitation, it’s the fault of the system allowing exploitation. I.e. If you remove the capitalist system, this argument against GenAI is moot. And we’re very much anti-capitalists in our instance. It’s a similar argument against piracy as well (and we’re also pro-piracy btw). I.e. sharing media is not a problem in a non-capitalist society, in fact it’s a positive. It’s only a negative due to capitalism.
Sorry it took so long to get back to this, as they say, “Life, uh, gets in the way.”
I had to go and check the AI communities I have blocked because I could’ve sworn that I had multiple different corporate GenAI communities blocked from DB0, but I stand corrected - I have only a handful of Stable Diffusion ones. Of course, I was also under the impression that Stable Diffusion was made by OpenAI or one of their competitors, so I blocked them instantly on that alone when I was largely blocking AI communities to clean up my homepage and to avoid the kinds of people those communities usually attract. There’s a certain kind of person with a “corporate fat cat/middle manager” attitude that can plague GenAI communities and drives me crazy, because they think that generating an image takes as much skill and effort as (or even more than) creating one by hand.
That definitely does change my opinion on Stable Diffusion, but it still comes down to a “it depends.” And as you so rightly put it, my problem is a capitalism issue, not a GenAI issue. My perspective is that not all of us are so lucky as to live in Ireland, which I believe has recently implemented a UBI specifically for artists, and so until capitalism is dealt with, any impacts of that take precedence - including those created as a consequence. Just because something is useful doesn’t mean we should be dumping it as fuel on to the fire of capitalism because capitalism is what’s actually burning us. Local models using images sourced with permission from the artists is a great thing. People getting paid to make things specifically to be used for training - awesome! A win in my book. In a world where artists have a guaranteed roof over their heads and food in their bellies, I do not care at all about whether or not their work is used to train AI. I bet artists can do some really cool stuff with GenAI as well - it’s basically a bigger, more advanced version of the same concept that makes the Gaussian Blur tool in Photoshop work.
This is why I’m also pro-piracy when it comes to corporations - you aren’t stealing from the workers, they got paid to make the thing, not when it gets sold - and why my opinion is “it depends.” I’m completely willing to go ahead and change my opinion once something stops hurting workers and becomes nothing but a benefit now that it’s out of the hands of the billionaires. There’s an interesting conversation to be had over the…I can’t think of a good word, ownership of identity maybe? Ownership of characters created to represent yourself at any rate (somebody coming along and saying “this is me” about a character you made as an avatar of yourself feels bad), and there’s a country in Europe that made an interesting choice in response to deep fakes, CSAM, and revenge porn created by AI by giving every citizen the copyright to their own face, body, and voice, but that’s a whole different conversation.
And this concept right here:
It assumes everyone has enough time to train on making art, which most wage-slaves undoubtedly do not. It’s an inherently classist argument to assume everyone has the free time to master any artistic skill.
Has a sense of capitalistic entitlement in it. You feel that you deserve the product of art but don’t respect the people who do put in the time and effort learning how to make it enough to properly compensate them for the time that they spent learning the profession. One, because they could have spent that time learning a different trade - programming, becoming an electrician or maybe an airplane mechanic or whatever - and two, because those who do art professionally almost universally talk about how they almost never have time to make art for themselves - stuff that they want to make just for them. And art (alongside the humanities) is a universally disrespected skill, with many commission based artists working for below minimum wage. It’s like arguing that because you don’t have the time or money to make a car, you deserve to be able to freely take cars from people’s driveways and use them as a form of public transit. In an ideal world where the US isn’t a car-centric hellscape and the trams always arrive on time, we wouldn’t even need for everybody to have their own personal car! But we don’t live in that world and hot-wiring somebody’s car to take for a joyride that makes them miss work isn’t cool. Just because I don’t have the genetics for it or the time to train to compete in the Olympics doesn’t grant me the right to free steroid injections.
And I use the word product up there very, very deliberately. Art is two things: the Product to be Consumed (and promptly discarded in this day and age of consumerism), which is what GenAI makes, and the Process, which is often what artists talk about as their favorite part of making art. But the end result - the Product - is just a small part of what Art is. Adam Savage said something along the lines of “I have no interest in AI art. One day, some college film student will do something amazing with AI - and Hollywood will milk it to death - but right now, I don’t see anything in AI that I care about. Because you don’t see anything of the artist in it, and that’s what I care about. Their intent, what they wanted to say with the piece, what they went through in making it and what they learned along the way, none of that exists in AI art.” I’m not religious, but as the saying goes: “God gave us grain but not bread so that we, too, could indulge in the joy of the act of creation.” Making something allows us to better understand ourselves and the world around us. It’s why people desire GenAI. To create something that only exists in their imagination. It’s why Art Therapy exists. One time I heard a college student reflect that “art is how artists process the world around us” and I absolutely agree. Van Gogh died a pauper, having barely sold any of his works in his lifetime, only to become one of the most beloved painters long after his death for his loneliness and pain that he expressed in his brushwork. One thing that is guaranteed to make me cry is that scene from Dr Who where the museum curator talks about why Van Gogh is his favorite artist while Vincent breaks down crying behind him.

One thing that people caught up in the GenAI arguments often miss is that artists (any worth listening to at least) aren’t gatekeeping art at all. Go watch a video on color theory, perspective, or additive and subtractive palettes. Artists love sharing information, and art is a conversation itself. I’m sure you can see it in the GenAI communities on your instance as well, people love to make things and be a part of a community with a shared passion. Artists don’t care if you aren’t an expert or anything, so I encourage anybody reading this to pick up a pencil, make something, and just share it with the world. I’ve talked to artists who say that their favorite commissioners are those who send them drawings to help interpret their vision - even if it’s just doodles of stick figures on a napkin or something. There used to be a tiny subreddit called r/Mona_Leslie, and it was one of my favorite places on Reddit because the whole idea of it was to professionally critique random people’s stuff as if it were in a museum gallery. People praising the brushstrokes of little kids’ fingerpaint art, the line work of stick figure drawings, whatever, it was just such a great vibe. In fact, I challenge anybody who uses GenAI regularly to take an image they generated and like, bring it into an image editor, create a new layer, and just start drawing over it. You can probably make it fit your original vision even more than the AI could with enough effort. Even if you just do a half hour a couple of times a week or something, what you learn simply from doing it will expand the horizons of your creativity.
TL;DR: You’re absolutely right that it’s a problem with capitalism, not with GenAI itself. But until such a time as capitalism no longer creates a problem from GenAI, I am firmly in the camp of putting a leash on what can and can’t be done with AI (largely on corporate AI) to minimize the harm as much as we can. Just because overfishing is a larger issue caused by capitalism doesn’t mean that we shouldn’t work on limiting the amount of micro plastics that end up in the ocean - especially now that supposedly something like 5-10% of the fish we eat is plastic.
Has a sense of capitalistic entitlement in it. You feel that you deserve the product of art but don’t respect the people who do put in the time and effort learning how to make it enough to properly compensate them for the time that they spent learning the profession.
This is really not true at all. The fact that I and others don’t have the time to learn to draw (and compose and direct and act and and and…) doesn’t mean we disrespect those who do. We just want to make something to enjoy for ourselves. And yes, those who don’t have the time also (typically) don’t have the money. Again, it’s a classist argument to claim that everyone has either the time to learn or the money to commission.
Likewise, it’s infuriating to see privileged takes of “oh just spend a few hours here and there”. Motherfucker, there’s people who do not have a few hours here and there. There’s people who work 2 jobs, who raise children alone, who are primary caregivers for others. They’re not taking anything from artists by generating an image they like in the 1 minute they have available.
I am of the opinion: let people enjoy things that bring them joy. I have no issue with GenAI if it’s for strictly non-commercial personal use, especially when it’s using open-weight local models that have already been trained. I do think that GenAI work should not be monetizable at all, but I don’t make the rules. Moralizing at random enthusiasts with “just learn to draw bruv” is never going to convince anyone or achieve anything; convincing people not to support massive corpos will.
The best use of AI I’ve seen thus far is reading legislative bills. Those monstrosities are so fucking long and filled with earmarks that it’s next to impossible to understand what is in them.
Having an AI not only read the bill but keep a watch of it as it goes through Congress is probably the best use of AI because it actually helps citizens.
I am on record saying we need an AI that can track the prices of various things and then predict the best time to buy something.
I want an AI bot that saves me money or gets me a good deal or extracts money from the capital class.
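No AI is even needed for a first cut at that bot: the baseline it would start from is just trailing price statistics. A minimal sketch in Python - the function name, window size, and threshold are all made up for illustration, not any real product:

```python
from statistics import quantiles

def good_time_to_buy(history, current_price, window=90):
    """Flag a price as a deal if it sits in the bottom quartile of the
    trailing window. A hypothetical baseline, not a real price-bot API."""
    recent = history[-window:]
    if len(recent) < 4:
        # too little history to judge
        return False
    # first quartile cut point of the trailing window
    q1, _, _ = quantiles(recent, n=4)
    return current_price <= q1

# Example: a price that mostly hovers around 100
history = [100, 102, 98, 101, 99, 103, 100, 97, 102, 100]
print(good_time_to_buy(history, 92))   # a clear drop below the usual range
print(good_time_to_buy(history, 100))  # business as usual
```

A real version would layer forecasting on top of this, but even the dumb quartile check already answers “is this cheaper than it usually is?”.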
Also transcribing small town council meetings so that reporters can stay up to date without having to listen to 6 hours of mind numbing nonsense debate about a park bench
Except they can screw up at that role.
There’s a lawsuit because DOGE asked ChatGPT to assess projects’ DEI-ness, and, for example, it declared a grant for fixing air conditioning a DEI initiative
F’in woke HVACs! 😑
Indeed:
ChatGPT determined that this was related to DEI, responding, “Yes. Improving HVAC systems enhances preservation conditions for collections, aligning with the goal of providing greater access to diverse audiences. #DEI.”
Lord. Yet another example of folks finding out the hard way that “AI” is marketing-speak. I get that people want to make this like LLMs are effectively like discovering how to make fire, but could we please not suspend judgment wholesale!?
It would help to ask for quotes and explanations, i.e. treat the LLM output as a smart index/table of contents. You’d be able to quickly verify claims
As long as you follow through to actually source the original, instead of assuming the quotes provided are intact. The point was in the case above, DOGE was doing no follow up, and most people who look to that as a ‘summary’ assistant aren’t wanting to dig deeper.
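That follow-through can even be partially automated: if the model is forced to return verbatim quotes, a dumb string check catches fabricated ones before anyone acts on them. A rough Python sketch of the idea - the bill text and quotes here are invented for illustration:

```python
import re

def verify_quotes(claimed_quotes, source_text):
    """Check that each quote an LLM attributes to a document actually
    appears in that document, ignoring whitespace and case differences.
    A sketch of the 'smart index' workflow: the model points, you verify."""
    def norm(s):
        return re.sub(r"\s+", " ", s).strip().lower()
    source = norm(source_text)
    return {q: norm(q) in source for q in claimed_quotes}

bill = """Sec. 12. Funds are appropriated for repair and
replacement of heating, ventilation, and air conditioning systems."""
quotes = [
    "repair and replacement of heating, ventilation, and air conditioning systems",
    "funding for diversity, equity, and inclusion programs",  # fabricated
]
print(verify_quotes(quotes, bill))
```

This only proves the quote exists, not that the model’s interpretation of it is sane - the HVAC example above shows the interpretation step is exactly where it goes off the rails - but it at least blocks outright invented citations.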
Hell, even without AI, lawmakers frequently got caught admitting they didn’t read the laws they signed; they didn’t have time for that. Now, with AI summaries as an excuse…
That’s just general incompetence, lying with statistics for example has been around for a while
It’s a tool, like everything else. It’s easy to google wrong info. You can get wrong info from an encyclopedia.
You can even from a dictionary: one thing that slightly annoys me is the change in the spelling of “yeah” - thanks to autocorrect, “yea” has become a common alternate spelling. “Yea” was a word - it’s archaic these days. If you see someone say “yay or nay”, that was originally “yea or nay”. “Yea” does not mean the same as “yes” or “yeah”, although it is somewhat similar.
I remember someone quoting dictionary definitions to me to try and “prove” that “yea” meant the exact same as “yeah” or “yes”.
They were wrong.
But the point is: The tool is just a tool. AI is a tool.
Yea
Medicine.
Evidence shows that some highly specialised models are better at things like detecting breast cancer in scans than human doctors.
Properly anonymised automatic second scans by an AI, to catch the markers that human doctors miss, for another review by a specialist, is an excellent potential use case for AI.
Transcription services can save doctors huge amounts of admin time and let them focus on the patient, if they know there’s a reliable system in place for typing up notes from a consultation. As long as it’s treated as a “please review that these notes are accurate” step rather than as a gospel recording, the data is destroyed once its job is complete, and the patient has been able to give informed consent.
The way these things are being used in actual medical contexts right now is frankly terrifying.
I had a colonoscopy last year (such fun!) and there was an ‘AI’ monitoring the camera feed to detect anomalies. If it spotted something it just drew the doctor’s attention to it for his expert, human review. I was ok with that. Effectively an extra pair of eyes that can look everywhere on the screen all at once and never blink.
That’s how AI systems should be used. A “heads up, something weird here” system.
I could also see it being used well like this for patient history analysis. Often a doctor is treating 1 symptom of something larger. They can’t see the wood for the trees. An LLM could pick out oddities and flag them. The doctor can then filter out the mistakes and hallucinations, but be alerted to rare or unusual conditions that match the patient’s symptoms and history.
Yeah, the sciences in general, I’d say. There’s a project aiming to translate the tens of thousands of cuneiform clay tablets that sit in storage because there’s only a handful of people in the world who can read them - AI is an amazing way to mass-translate them and unlock vast troves of hitherto completely unknown ancient knowledge.
The problem is not even the AI, but the scientists themselves, who guard the tablets jealously because they don’t want anyone else to translate “their” tablets that they dug up, even though they couldn’t possibly make a dent in the sheer volume in their collected lifetimes.
Imagine, so much information encoded, from thousands of years ago that could reveal so much about the origins of our culture and civilization!
I think anything with text generation is fine. Your multiple Google searches are highly likely to eat more resources than that. Also, fuck Google, use Ecosia. But when I suspect an answer isn’t one quick search away, I happily rather use Le Chat for answers, than give Reddit traffic, or have to wade through the shite that is Fandom, Wikia or whatever. Not to mention using AI helps me get past the issue of having to check multiple sites for an answer, just to find that the answer is “Google it” or “Nvm, solved it”. Some of you fuckers did this.
However, people need to understand that an AI is exactly as fallible as any person. Yes, it has access to and can handle way more data, but between trying to please you and just getting its wires crossed, it’s going to make mistakes. YOU need to be able to assess the accuracy of the output. The more important the topic, the more careful you need to be, and always assume the possibility of error is there no matter how hard you try - JUST LIKE WITH ANY BIT OF INFORMATION.

I see so many people cite academic articles as if they prove whatever claim they’re making, only to find that the study in question was funded by The Company That Wants to Prove The Claim and the sample size was 3 people who work for The Company That Wants to Prove The Claim. At least AI has a small chance of pointing the issue out if YOU yourself tell it to be critical - and I actually suspect this is part of the reason some people hate AI. They don’t like that it absolutely can be more intellectually rigorous than a person with an emotional investment in whatever they want to be true. Yes, you can have an AI asspat your grandest delusions, but if you actually try to get it to be critical, it will be. You can use a hammer to hit people, or you can use it on a nail as intended (and how many times you hit your own fingers is on you, not the hammer).
I would draw a line at artwork, videos, music. While I’m not going to crucify actual artists using AI assistance to take some tedium out of a project, I still wouldn’t encourage it. Stolen artwork used to train AI is one thing, and the environmental impact is VASTLY greater than for text alone: generating one AI image can use as much energy as 1,000 text responses. I would also really like to be able to completely opt out of AI slop on media sites. I fucking hate that Soundcloud allows it.
And a last point on AI text responses: if you saw the rise of the alt-right and the anti-vaxx stuff, you’re probably familiar with Gish galloping and Brandolini’s Law. If not, you really fucking should be. AI can make it so much easier to debunk misinformation. YES, it can make it easier to perpetuate it too, but this is where we see the AI arms race. Bad actors can AND WILL use AI to fill any void with their rhetoric. If you value truth and facts and want to prevent misinformation from spreading, you are gimping yourself if you’re not using AI.
I use Suno on occasion. I enjoy writing poetry, and being able to turn it into a song is something I find fun and inspirational, driving me to write more than I have in decades. I could never, ever write a chord of music.
I don’t share it. It’s just for personal gratification. If it’s super good, maybe I’d share it with some friends in Discord who are super into AI. Thing is, part of a song might be super good, but I’ve never had an entire song turn out the way I want. And I’ve found no one ever thinks a song is as good or interesting as the prompter does.
AI is like the cheap consumer goods of art and thought. Cheap, but not quality or durable. It works and looks great if gently used, but as soon as it gets any real pressure or scrutiny, it falls apart.
I think it’s likely, if we continue down that path, to be the artistic equivalent of IKEA vs a master woodworker. You can buy an end table for $30, or you can buy something hand-crafted from teak and mahogany for $3000. A lot of people like IKEA, but if they weren’t around, a nice end table might be $600 and be heirloom quality (if not as good as the $3k one). But today that middle market doesn’t exist. Rather, it does, but it’s filled with IKEA-quality shit dressed up to look a bit nicer temporarily. I don’t know, maybe my analogy fell apart.
I’m just saying that these things are fun and interesting on an individual level, but I agree they shouldn’t be commercial. We should just make it so that there are no enforceable rights granted on anything AI produces. It can be freely copied and distributed. But that doesn’t help real artists make a living. And their work should be appreciated and respected (and result in a lifestyle that affords them the ability to keep making art).
I don’t agree with the use, but at least you’re keeping it private. Not gonna crucify you, because I understand the appeal. I’d encourage you to find a way to pay for it though, or even just start making a donation to some environmental cause as a way of offsetting.
That’s a pretty reasonable ask. I do donate to other things I use like Lemmy. I like your suggestion.
I had never heard of Ecosia, thank you v much!
I have autism and ADHD, and have been frustrated throughout my entire life by my inability to realize any of my numerous ideas due to double executive dysfunction. While I see many drawbacks from using these models - the most serious one as it currently stands being their water consumption - I’ve come to consider them a very important support tool for people in a similar position as myself.
I hear you. A lot of times my ideas are just a “vibe”, and starting is the hardest part. I haven’t used AI much at all, but I can see how having a prompt to get you started can get the creative ball rolling.
“Starting is the hardest part.”
I’m a technical lead for my teams. We also have a technical architect, but he’s a bit newer than me, and so it falls on me to do some of the architecture because he’s laser-focused on a big project.
We worked together last week because his designs… well, they were bad — so bad I was worried for the project and maybe ultimately his job. But what I found was that they were very roughly the right shape and gave context for thinking and refinement, and I was able to question things and suggest all kinds of refinements. Mostly all I did was point out things like “this data here seems to be in a process that doesn’t need it” or “are we putting the generation of two completely different objects in the same component? That might not be good separation of concerns.”
My own architectural designs… I have none and I’ve had much longer to do them. I need that shit version to refine. I need the brainstorming process with a partner to refine — not all of my suggestions were golden. I got push back and my own ideas fell apart sometimes. The end result is much stronger for our collaboration. But it was an expensive process. Man, I wish AI could fill that role for me.
In fact my biggest complaint about using AI is that it rarely pushes back and pressure tests me. Even when I prompt it to do so it falls apart under the slightest argument.
Except, strangely, sometimes I have it analyze my words for Teams, or email, or especially here, and provide feedback. And every once in a while it’ll fixate on something that is my style and tell me it’s bad, or won’t resonate, or will push away some readers, and I’m like: but that’s my style. If I change that, I’m not being genuinely me. And so I don’t change it, but it keeps harping on it. “I know you said you won’t change this, but…”
If only it would do that in any other context.
Do check the vlogbrothers summary of the AI water issue. TL;DR: it’s negligible compared to the real water hog (corn), and it’s being managed.