I want to let people know why I’m strictly against using AI in anything I do without sounding like an ‘AI vegan’, especially in front of those who are genuinely ready to listen and do the same.

Any sources I find to cite for my viewpoint are either so mild they could pass for AI-generated themselves or filled with the author’s extremist views. I want to explain the situation in a way that is objective and simple to understand, yet alarming enough for people to take action.

  • corvus@lemmy.ml · +5/-3 · 13 days ago

    Most people are against AI because of what corporations are doing with it. But what do you expect corporations and governments to do with any new scientific or technological advance? Use it for the benefit of humanity? Are you going to stop using computers because corporations use them for their own benefit, harming the environment with their huge data centers? By rejecting this new technological advance you are refusing to take advantage of free and open-source AI tools, which you can run locally on your own computer, for whatever you consider a good cause. Fortunately, many people who care about other human beings are more intelligent and are starting to use AI for what it really is: A TOOL.
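
    For concreteness, here is a minimal sketch of what “running locally” can look like, assuming the open-source llama-cpp-python bindings and an already-downloaded GGUF weights file (the model path below is hypothetical):

    ```python
    # Fully local inference: no API key, no remote service, no new data
    # center load beyond the one-time training that already happened.
    from llama_cpp import Llama

    llm = Llama(model_path="./models/llama-3-8b.Q4_K_M.gguf", n_ctx=2048)  # hypothetical path

    result = llm("Q: What is a skip list? A:", max_tokens=64)
    print(result["choices"][0]["text"])
    ```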

    “According to HRF’s announcement, the initiative aims to help global audiences better understand the dual nature of artificial intelligence: while it can be used by dictatorships to suppress dissent and monitor populations, it can also be a powerful instrument of liberation when placed in the hands of those fighting for freedom.”

    HRF AI Initiative

  • givesomefucks@lemmy.world · +68/-6 · 13 days ago

    If it’s real life, just talk to them.

    If it’s online, especially here on Lemmy, there are a lot of AI-brain-rotted people who will just copy/paste your comments into a chatbot, and you’re wasting your time.

    They also tend to follow you around.

    They’ve lost so much of their brains to AI that even valid criticism of AI feels like a personal insult to them.

    • enchantedgoldapple@sopuli.xyz (OP) · +1 · 13 days ago

      They’ve lost so much of their brains to AI that even valid criticism of AI feels like a personal insult to them.

      That’s the issue. I do wish to warn them, or even just inform them, of what using AI recklessly could lead to.

      • givesomefucks@lemmy.world · +6/-1 · 13 days ago

        Why care?

        You want to go out and argue with people using logic when that part of their brain has literally atrophied.

        It’s not going to accomplish anything, and likely just drive them deeper into AI.

        Plenty of people who need help actually want it; put your energy towards them if you want to help people.

        • enchantedgoldapple@sopuli.xyz (OP) · +2/-3 · 13 days ago

          The post is about situations where I mention to people I know that I don’t use AI, and they ask why not. Instead of driving them off with “just because” or getting into jargon that is completely unfamiliar to them, I wish to properly explain why I made this decision and why they should too.

          I am also able to identify the people with whom there’s no point discussing this. I’m not asking how to convince them too.

          • givesomefucks@lemmy.world · +9 · 13 days ago

            I wish to properly explain why I made this decision and why they should too.

            You’re asking how to verbalize why you don’t like AI, but you won’t say why you don’t like AI…

            Let’s see if this helps. Imagine someone asks you:

            I don’t like pizza, how do I tell people the reasons why I don’t like pizza?

            How the absolute fuck would you know how to explain it when you don’t know why they don’t like pizza?

    • FaceDeer@fedia.io · +13/-10 · 13 days ago

      They’ve lost so much of their brains to AI that even valid criticism of AI feels like a personal insult to them.

      More likely they feel insulted by people saying how “brain-rotted” they are.

      • Carnelian@lemmy.world · +13/-3 · 13 days ago

        What would the inoffensive way of phrasing it be?

        Genuinely, every single pro-AI person I’ve spoken with, both IRL and online, has been clearly struggling cognitively. It’s like 10x worse than the effects of basic social media addiction. People also appear to actively change for the worse if they get conned into adopting it. Brain rot is apparently a symptom of AI use as literally as tooth rot is a symptom of smoking.

        Speaking of smoking and vaping: on top of being objectively bad for you, it’s lame and gross. Now that that narrative is firmly established, we have actually started seeing youth nicotine use decline rapidly again, just as it was declining before vaping became a thing.

        • FaceDeer@fedia.io · +7/-6 · 13 days ago

          What would the inoffensive way of phrasing it be?

          …and then you proceed to spend the next two paragraphs continuing to rant about how mentally deficient you think AI users are.

          Not that, for starters.

          • Carnelian@lemmy.world · +9/-4 · 13 days ago

            The lung capacity of smokers is deficient, yes? Is the mere fact offensive? Should we just not talk about how someone struggling to breathe as they walk up stairs is the direct result of their smoking?

              • Carnelian@lemmy.world · +5 · 13 days ago

                I don’t think it is, nor do I think name-dropping random fallacies without engaging with the topic makes for particularly good conversation. If you have issues with OP’s phrasing, it would benefit all of us moving forward if we found a better way to talk about it, yes?

                • FaceDeer@fedia.io · +4/-5 · 13 days ago

                  It’s not a random fallacy, it’s the one you’re engaging in. Look it up. Your analogy presupposes an answer to the question that is actually at hand. It’s the classic “have you stopped beating your wife” situation.

  • _cryptagion [he/him]@anarchist.nexus · +10 · 13 days ago

    Just say that you don’t want to use it. Why are you trying to find good reasons somebody else came up with for not using something you have to opt into in the first place? Just say “I don’t want to use genAI”. You don’t need to explain yourself any further than that.

    • corvus@lemmy.ml · +3 · 13 days ago

      That’s perfectly fine if someone just doesn’t want to use it, but he’s “strictly against” it and searching for reasons after the fact. Pretty irrational, IMO. It doesn’t surprise me; it’s the general trend on almost any subject nowadays, and you can’t blame AI for that.

  • canofcam@lemmy.world · +33/-1 · 12 days ago

    A discussion in good faith means treating the person you are speaking to with respect. It means not having ulterior motives. If you are having the discussion with the explicit purpose of changing their minds or, in your words, “alarming them to take action”, then that is by default a bad-faith discussion.

    If you want to discuss with a pro-AI person in good faith, you HAVE to be open to changing your own mind. That is the whole point of a good-faith discussion. Instead, you already believe you are correct, and you want to enter these discussions with objective ammunition to defeat somebody.

    How do you actually discuss in good faith? You ask for their opinions and are open to them, then you share your own in a respectful manner. You aren’t trying to “win”; you are just trying to understand and, in turn, help others understand your own POV.

    • 🔍🦘🛎@lemmy.world · +7 · 12 days ago

      Once you realize you can change your opinion about something after you learn more about it, it’s like a superpower. So many people’s only goal is to prove themselves right or safeguard their ego.

      It’s okay to admit a mistake. It’s normal to be wrong about things.

      • canofcam@lemmy.world · +3 · 11 days ago

        The problem is it’s incredibly rare to find others who are willing to change their minds in return, so every discussion ends with either you changing your mind or the other person getting agitated.

    • krooklochurm@lemmy.ca · +9/-2 · 12 days ago

      Chiming in here:

      Most of the arguments against AI - the most common ones being plagiarism and the ecological impact - are not things the people making the arguments give a flying fuck about in any other area.

      Having issues with the material the model is trained on isn’t an issue with AI; it’s an issue with unethical training practices, copyright law, capitalism. These are all valid complaints, by the way, but they have nothing to do with the underlying technology, merely with the way it’s been developed.

      For the ecological side of things: sure, AI uses a lot of power and a lot of data centers. So does the internet. Do you use that? So does the stock market. Do you use that? So do cars. Do you drive?

      I never heard anyone say “we need fewer data centers” until AI came along. What, all the other data centers are totally fine, but the ones being used for AI are evil? If you have an issue with the drastically increased power consumption of AI, you should be able to argue a stance that covers all data centers - assuming it’s something you give a fuck about. Which you don’t.

      If a model, once trained, is being used entirely locally on someone’s personal PC, do you have an issue with the ecological footprint of that? The power has been used. The model is trained.

      It’s absolutely valid to have an issue with the increased power consumption used to train AI models and everything else, but these are all issues with the HOW, not the ontological arguments against the tech that people think they are.

      It doesn’t make any of these criticisms invalid, but if you refuse to understand the nuance at work then you aren’t arguing in good faith.

      If you enslave children to build a house, the issue isn’t that you’re building a house, and it doesn’t mean houses are evil; the issue is that YOU’RE ENSLAVING CHILDREN.

      Like any complicated topic, there’s nuance to it, and anyone who refuses to engage with that and instead relies on dogmatic thinking isn’t being intellectually honest.

      • Frezik@lemmy.blahaj.zone · +10/-1 · 12 days ago

        I never heard anyone say “we need fewer data centers” until AI came along. What, all the other data centers are totally fine, but the ones being used for AI are evil? If you have an issue with the drastically increased power consumption of AI, you should be able to argue a stance that covers all data centers - assuming it’s something you give a fuck about. Which you don’t.

        AI data centers draw substantially more power than regular ones. Nobody was talking about spinning up nuclear reactors or buying out the next several years of turbine manufacturing for non-AI data centers. Hell, Microsoft gave money to a fusion startup to build a reactor; they’ve already broken ground, but it’s far from proven that they can actually make net power with fusion. They actually think they can supply power by 2028. This is delusion driven by the impossible goal of reaching AGI with current models.

        Your whole post misses the difference in scale involved. GPU power consumption isn’t comparable to standard web servers at all.
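
        To put rough numbers on that difference in scale (ballpark assumptions of mine, not figures from this thread):

        ```python
        # Back-of-the-envelope comparison of a conventional web server vs.
        # one 8-GPU AI training node. All figures are rough assumptions.
        web_server_w = 500            # assumed draw of a typical 1U web server
        gpu_node_w = 8 * 700 + 2000   # assumed: eight ~700 W accelerators + host overhead

        print(f"one GPU node ~ {gpu_node_w / web_server_w:.0f}x a web server")
        # -> one GPU node ~ 15x a web server; a cluster of thousands of such
        #    nodes is why facility demand gets discussed in gigawatts.
        ```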

      • aesthelete@lemmy.world · +6/-1 · 12 days ago

        For the ecological side of things: sure, AI uses a lot of power and a lot of data centers. So does the internet. Do you use that? So does the stock market. Do you use that? So do cars. Do you drive?

        There are many, many differences between AI data centers and ones that don’t have to run $500k GPU clusters. The latter require a lot less power, a lot less space, and a lot less cooling.

        Also, you’re implying here that your debate opponents are being intellectually dishonest while using the same weaselly arguments that people who argue in bad faith constantly employ.

        • krooklochurm@lemmy.ca · +2/-2 · 12 days ago

          The fact that a GPU data center uses more power than one without GPUs doesn’t matter at all.

          You’re completely missing the point.

          The sum total of power usage for all non-AI data centers is an ecological issue whether AI data centers use more, the same, or less power.

          All data centers have an ecological footprint, all use shitloads of power, and it doesn’t matter if one kind is worse than another.

          This is exactly what I was trying to point out in my comment.

          If I take a shit in a canoe, that’s a problem. Not an existential one, but a problem. If I then dump two more pounds of shit in the canoe, the first pound doesn’t go away; it’s still in the canoe, and it doesn’t stop being an issue just because there are now two more.

          You can have an issue with shit in the canoe on principle, which is fine. Then it’s all problematic.

          But if you’re fine with having one pound of shit in the canoe, and fine with three, but not okay with eleven, then the issue isn’t shit in the canoe; it’s the amount of shit in the canoe. Those are distinct issues.

          And it’s NOT intellectually honest to be okay with the first pound of shit in the canoe but not with the other two. You can’t point at those two pounds and say “this is abominable!” while ignoring the first pound. Because it’s all shit.

          • Frezik@lemmy.blahaj.zone · +4 · 12 days ago

            When a family in the global south uses coal to cook their food, they release CO2. When a billionaire flies around the continent on a private jet, they also release CO2.

            Do you consider the two to be equivalent in need or output?

          • aesthelete@lemmy.world · +7/-1 · 11 days ago

            And it’s NOT intellectually honest to be okay with the first pound of shit in the canoe but not with the other two. You can’t point at those two pounds and say “this is abominable!” while ignoring the first pound. Because it’s all shit.

            Sure, because that’s a terrible analogy.

            Gen AI data centers don’t just require more power and space; they require so much more power and space that they are driving up energy costs in the surrounding areas and becoming nearly impossible to build.

            People didn’t randomly become “anti-data center”. Many of them are watching their energy bills go up. I’m watching as they talk about building new coal plants to power “gigawatt” data centers.

            And it’s all so you can have more fucking chat bots.

  • SoftestSapphic@lemmy.world · +3/-1 · 12 days ago

    There isn’t a way to use AI in good faith.

    Either you are ignorant of the tech and its negative effects, or you aren’t.

    • Valmond@lemmy.world · +3 · 11 days ago

      What about cancer research? Or are you specifically talking about LLMs and image-generating AI?

      • SoftestSapphic@lemmy.world · +1 · 11 days ago

        Generative AI isn’t really useful except for slop.

        It’s kind of a cool idea to use it for finding unknown chemicals and things like that, but for media and most other uses it’s been a travesty.

        • Valmond@lemmy.world · +2/-1 · 11 days ago

          Generative AI, sure; it seems hard to find anything remotely useful for it (plus the environmental impact etc. is stupidly high).

          But neural networks are used everywhere in research: fast, cheap (a €2k graphics card can run them), and better than any other machine-learning approach.
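
          As a toy sketch of that point (made-up data; it runs on a consumer GPU if one is available, otherwise the CPU):

          ```python
          # Tiny neural network: trains in seconds on a consumer GPU or even a CPU.
          import torch
          import torch.nn as nn

          device = "cuda" if torch.cuda.is_available() else "cpu"
          model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10)).to(device)
          opt = torch.optim.Adam(model.parameters(), lr=1e-3)

          # Random stand-in data: 512 samples, 64 features, 10 classes.
          x = torch.randn(512, 64, device=device)
          y = torch.randint(0, 10, (512,), device=device)

          for _ in range(100):
              opt.zero_grad()
              loss = nn.functional.cross_entropy(model(x), y)
              loss.backward()
              opt.step()
          print(f"final loss on {device}: {loss.item():.3f}")
          ```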

          I’m not disagreeing here, just pointing out that not all AI is bad.

          • SoftestSapphic@lemmy.world · +1 · 11 days ago

            Neural networks are machine learning.

            I think a lot of the things we use machine learning and LLMs for are good ideas, but we were doing that before we slapped them together and called it AI.

            • Valmond@lemmy.world · +1/-1 · 11 days ago

              Or people stopped calling machine learning AI and used “AI” to hype neural networks instead, then switched it again to mean language models and generative networks.

              I mean, Deep Blue was AI back in the day, and so were pathfinding algorithms 🤷🏻‍♀️

              Anyway, I’m not arguing with you.

  • Nalivai@lemmy.world · +5 · 11 days ago

    “it looks like shit from a butt and sounds like shit from a butt, and if I wanted to look at a shit from a butt, I would do that for free”

  • FlashMobOfOne@lemmy.world · +18 · 12 days ago

    Very simple.

    It’s imprecise, and for your work, you’d like to be sure the work product you’re producing is top quality.

  • venusaur@lemmy.world · +10 · 13 days ago

    The most reasonable explanation I’ve heard/read is that generative AI is based on stealing content from human creators. Just don’t use the word “slop” and you’ll be good.

  • bstix@feddit.dk · +2 · 11 days ago

    You don’t need artificial intelligence. We already have intelligence at home.

  • solomonschuler@lemmy.zip · +9 · 12 days ago

    I just explained to a friend of mine why I don’t use AI. My hatred towards AI stems from people making it seem sentient, the companies’ business models, and, of course, privacy.

    First off, to clear up any misconception: AI is not a sentient being. It does not know how to think critically, and it’s incapable of forming thoughts outside the data it’s trained on. Technically speaking, an LLM behaves like a lossy compression model: it takes what is effectively petabytes of information and compresses it down to a mere ~40 GB of weights. When it “decompresses”, it doesn’t reproduce those petabytes of information; it reconstructs a response resembling what it was trained on.
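
    To see why that compression must be lossy, here is a back-of-the-envelope ratio (the sizes below are illustrative assumptions, not measurements):

    ```python
    # Rough ratio between claimed training-data volume and model size.
    training_data_bytes = 1e15  # assume ~1 petabyte of raw training data
    model_bytes = 40e9          # the ~40 GB of weights mentioned above

    print(f"ratio ~ {training_data_bytes / model_bytes:,.0f}:1")  # ~25,000:1
    # Lossless text compression manages maybe 3-10:1, so a 25,000:1 reduction
    # necessarily discards detail: the model keeps patterns, not the data itself.
    ```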

    There are several issues I can think of that make an LLM do poorly at its job. Remember, LLMs are trained exclusively on the internet, and as large as the internet is, it doesn’t have everything: your skip-list implementation is probably not going to match the code it was trained on. If you have a logic error in your skip-list implementation and you ask chatGPT “what’s the issue with my codebase”, it will notice that the code you provided isn’t what it was trained on and will actively try to “fix” it, digging you into a deeper rabbit hole than when you began the implementation.

    On the other hand, if you ask chatGPT to derive a truth table given a sum of minterms, it will never be correct unless the case is heavily documented (e.g. the truth table of an adder/subtractor). This is the simplest example I can give of how these LLMs cannot think critically, cannot recognize patterns, and only regurgitate the information they have been trained on. They will try to produce a solution, but they will always fail.
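
    For readers who haven’t seen the exercise, here is a minimal sketch of what deriving a truth table from a sum of minterms involves (the minterm list is made up for illustration):

    ```python
    # Derive a truth table from a sum of minterms: the output is 1 exactly
    # on the rows whose index appears in the minterm list.
    from itertools import product

    def truth_table(n_vars, minterms):
        for index, bits in enumerate(product([0, 1], repeat=n_vars)):
            print(*bits, "|", 1 if index in minterms else 0)

    # f(A, B, C) = m(1, 2, 4, 7), an example 3-variable function
    truth_table(3, {1, 2, 4, 7})
    ```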

    This leads me to my first point on why I refuse to use LLMs: they unintentionally fabricate a lot of information and treat it as if it’s true. When I started using chatGPT to fix my codebases or to solve problems like the one above, it induced a lot of doubt in the knowledge and intelligence I had gathered these past years in college.

    The second reason I don’t like LLMs is these companies’ business model. To reiterate, these tech billionaires create a bubble of delusion and fearmongering to keep their user base: headlines like “chatGPT-5 is terrifying” or “openAI has fired 70,000 employees over AI improvements”. They can do this because people see the headline and reinvest more money into the company, and because so many employees’ heads are up these tech giants’ asses, they will of course keep working with openAI. It is a fucking money-making loophole for these giants. If I ever end up accepting a job at openAI, I want my family to put me into a goddamn psych ward; that’s how much I frown on these unethical practices.

    I often joke about this with people who don’t believe it’s the case, but it’s becoming a more and more valid point in this fucked-up mess: if AI companies say they’ve fired X employees for “AI improvements”, why hasn’t that been adopted by defense companies/contractors or other parts of industry? It’s a rhetorical question, but it leads people to a better conclusion than “those X employees were fired because of AI improvements”.

    • mirshafie@europe.pub · +1/-2 · 11 days ago

      This really is a problem with expectations and hype though. And it will probably be a problem with cost as well.

      I think that LLMs are really cool. They’re way faster and more concise than traditional search engines at answering most questions nowadays. This is partly because search engines have degraded over the last 10 years, but LLMs blow them out of the water, in my opinion.

      And beyond that, I think you can generate some pretty cool things with it to use as a template. I’m not a programmer but I’m making a quite massive and relatively complicated application. That wouldn’t be possible without an LLM. Sure I still have to check every line and clean up a ton of code, and of course I realize that this is all going to have to go to a substantial code review and cleanup by real programmers if I’m ever going to ship it, but the thing I’m making is genuinely already better (in terms of performance and functionality) than a lot of what’s on the market. That has to count for something.

      Despite all that, I think we’re in the same kind of bubble now as we were in the early 2000s, except bigger. The oversell of AI comes from CEOs claiming (and, to the best of my judgement, actually believing) that LLMs will somehow magically transcend into AGI if they’re given enough compute. I think part of that stems from the massive (and unexpected) improvements that happened from GPT-2 to GPT-3.

      And lots of smart people (like Linus Torvalds, for example) point out that really, when you think about it, what is intelligence other than a glorified auto-correct? Our brains essentially function as lossy compression. So I think for some people it is incredibly alluring to believe that if we just throw more chips on the fire, a true consciousness will arise. And so we’re investing all of our extra money and our pension funds into this thing.

      And the irony is that I and millions of others can therefore use LLMs at a steep discount. So lots of people are quickly getting accustomed to LLMs, thinking they’re always going to be free or cheap, whereas it’s paid for by bubble money and it’s not likely to get much more efficient in the near future.

  • AA5B@lemmy.world · +3/-1 · 12 days ago

    Maybe part of the answer is not to be so strictly against it. AI is starting to be used in a variety of tools, and not all of your criticisms are valid for all of them. Being able to see where it is useful, and maybe even desirable, helps show that you’re not against the technology per se.

    For example, Zoom has an AI tool that generates meeting summaries. It’s pretty accurate with discussions, although it sometimes gets confused about who said what. That AI likely uses much less power and might not have been trained on copyrighted content.

  • s@piefed.world · +28/-2 · 13 days ago

    “It’s a machine made to bullshit. It sounds confident and it’s right enough of the time that it tricks people into not questioning when it is completely wrong and has just wholly made something up to appease the querent.”

  • NoSpotOfGround@lemmy.world · +11/-12 · 13 days ago

    What are some good reasons why AI is bad?

    There are legitimate reasons people worry about AI. Here are some of the strongest, clearly framed concerns:

    1. Bias and unfair decisions

    AI systems often learn from biased data and can unintentionally discriminate—against certain races, genders, ages, or socioeconomic groups—in hiring, lending, housing, policing, and more.

    2. Lack of transparency

    Many AI models act as “black boxes,” making decisions that are hard to explain. This creates problems when the stakes are high (medical diagnosis, legal decisions, etc.).

    3. Privacy risks

    AI can analyze huge amounts of personal data, track behavior, or identify people through facial recognition—often without explicit consent.

    4. Job displacement

    Automation threatens certain categories of work, particularly routine or repetitive jobs. Without proper planning, this can increase inequality and unemployment.

    5. Misinformation and deepfakes

    AI makes it easier to create convincing fake audio, video, or text. This can undermine trust in media, fuel propaganda, and destabilize democratic processes.

    6. Weaponization

    AI can be used in autonomous weapons, cyberattacks, targeted surveillance, or manipulation—raising serious security and ethical issues.

    7. Overreliance and loss of human skills

    As AI does more tasks, people may become too dependent, reducing critical thinking, creativity, or expertise in certain fields.

    8. Concentration of power

    Powerful AI tools tend to be controlled by a few big companies or governments, potentially leading to monopolies, inequality, and reduced individual autonomy.

    9. Alignment and control risks

    Advanced AI systems may behave in unexpected or harmful ways if their goals aren’t perfectly aligned with human values—even without malicious intent.

    10. Environmental impact

    Training large AI models consumes significant energy and resources, contributing to carbon emissions.


    If you want, I can also provide reasons why AI is good, help you construct an argument for a debate, or analyze specific risks more deeply.

    Were you looking for this kind of reply? If you can’t express why you hold an opinion, maybe the opinion is not well founded in the first place. (Not saying it’s wrong, just that it might not be justified/objective.)

    • Armok_the_bunny@lemmy.world · +22/-1 · 13 days ago

      Please, for the love of god, tell me you didn’t write that post with AI, because it really looks like that was written with AI.

      • NoSpotOfGround@lemmy.world · +14/-11 · 13 days ago

        Except for the first sentence and the last paragraph, it was AI. Honestly, it feels like OP is taunting us with such a vague question. We don’t even know why they dislike AI.

        I’m not an AI lover. It has its place and it’s a genuine step forward; it’s worth less than most proponents think, and more than detractors do.

        I only use it myself for documentation on the framework I program in, and it’s reasonably good for that, letting me extract information more quickly than reading through the docs. Otherwise I haven’t used it much.

        • athatet@lemmy.zip · +6 · 13 days ago

          “Good catch! I did make that up. I haven’t been able to parse your framework documentation yet”

        • enchantedgoldapple@sopuli.xyz (OP) · +7 · 13 days ago

          My question was genuine. I wasn’t an avid user of generative AI when it was first released, and lately I’ve decided against using it at all. I tried it on niche projects and it was completely unreliable. Its tone is bland, and the way it acts like a friend feels disturbing to me. Plus, the environmental destruction it is causing on such a large scale is honestly depressing to me.

          All that being said, it is not easy for me to communicate these points to someone as clearly as I have experienced them. It’s like making the case for privacy: casual users aren’t inherently aware of the consequences of using the tool and consider it a godsend. It will be difficult to convince them that the tool they cherish so much is not that great after all, so I am asking here what the best approach would be.

          • Blue_Morpho@lemmy.world · +5 · 13 days ago

            I wasn’t an avid user of generative AI when it was first released, and lately I’ve decided against using it at all. I tried it on niche projects and it was completely unreliable. Its tone is bland, and the way it acts like a friend feels disturbing to me. Plus, the environmental destruction it is causing on such a large scale is honestly depressing to me.

            Isn’t that exactly the answer you are looking for?

            • FaceDeer@fedia.io · +5/-1 · 13 days ago

              The “environmental destruction” angle is likely to cause trouble because it’s objectively debatable, and often presented in overblown or deceptive ways.

    • AmidFuror@fedia.io · +3/-2 · 13 days ago

      You beat me to it. To make it less obvious, I ask the AI to be concise, and I manually replace the em dashes with hyphens.
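
      For what it’s worth, that cleanup step is a one-liner (minimal sketch):

      ```python
      # Swap the telltale em dash (U+2014) for a plain hyphen.
      text = "The model loves these\u2014everywhere\u2014for no reason."
      print(text.replace("\u2014", "-"))
      ```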

      • FaceDeer@fedia.io · +1/-1 · 13 days ago

        I haven’t tested it, but I saw an article a little while back saying you can add “don’t use em dashes” to ChatGPT’s custom instructions and it’ll leave them out from the beginning.

        It’s kind of ridiculous that a perfectly ordinary punctuation mark has been given such stigma, but whatever, it’s an easy fix.