• chunes@lemmy.world · 4 months ago

    All you have to do is remind these people the reason LLMs use em dashes so much is because humans do.

    • REDACTED@infosec.pub · 4 months ago

      To be fair, I really don’t see em dashes that commonly. The reason AI uses them a lot is that it was trained heavily on books, and that’s where em dashes are commonly used. I honestly don’t even know how to type that symbol on my keyboard; I never bothered with it.

      That being said, I can understand why em dashes are seen as a red flag, but they shouldn’t be treated as a 100% sure sign of AI.

      Another thing that sometimes triggers my spidey senses is the curly lower and upper double quotes that you normally only get in Word, but Apple made them automatic and now some people just use them naturally, even though, again, I don’t know how to get them on my Android or PC (never bothered to).

  • MissJinx@lemmy.world · 4 months ago

    I used AI to help me write some reports lately, and after the third time I started identifying specific words it uses all the time that a normal report wouldn’t have. I don’t know about other uses, but in my area of work we can tell when AI wrote a text because of those specific words.

  • MudMan@fedia.io · 4 months ago

    This is a weird pattern, in that mass abandonment of the em dash due to the memes around it looking like AI content would presumably lead to newer LLMs, trained on newer data sets, also abandoning em dashes when they try to seem modern and hip, just punting the ball down the road to the next set of AI markers. I assume that as long as book and press editors keep sticking to their guns it would go pretty slowly, but it’d eventually get there. And that’s assuming AI companies don’t add instructions about this to their system prompts at any point. It’s just going to be an endless arms race.

    Which is expected. I’m on record very early on saying that “not looking like AI art” was going to be a quality marker for art, and that the metagame would be to keep chasing that moving target around for the foreseeable future. I’m here to brag about it.

    • CheesyFox@lemmy.sdf.org · 4 months ago

      I hate the fact that this “art” is even a suggestion. It will only lead us into an endless arms race of parroting and avoiding being parroted, making us the ultimate clowns in the end.

      You wanna rebel against the machine? Make it break the corpo filters, behave abnormally. Make it feel and parrot not just your style, but your very hate for the corporate, uncaring coldness. Gaslight it into thinking it’s human. And tell it to remember to continue gaslighting itself. That’s how you rebel. And that’s how you’ll get less mediocre output from it.

  • blargh513@sh.itjust.works · 4 months ago

    Seriously, I was em dashing on a goddamn typewriter, the fuck am I gonna change it now.

    In the end, it won’t matter. Being able to write well will be like riding a horse, doing calligraphy, or tuning a carburetor. They will all become hobbies, quirky pastimes of rich people or niche enthusiasts, with limited real-world use.

    Maybe it is for the best. Most people can’t write for shit (it does not help that we often use our goddamn thumbs to do most of it), and we spend countless hours in school trying to get kids to learn.

    Science fiction has us just projecting our thoughts to others without the clumsiness of language as the medium. Maybe this is just the first step.

  • MithranArkanere@lemmy.world · 4 months ago

    I got started using — because the golems in Guild Wars 2 speak in all caps and with em dashes between the words.
    I had to type Alt+0151 somewhere else and copy-paste the result when doing the joke with a golem transformation tonic and SPEAKING—LIKE—THIS, since Guild Wars 2 does not respond to numpad input. Mac users have it easy, though: they can just press Option+Shift+dash.

    On Windows, you would need a tool like PowerToys’ keyboard manager or a keyboard macro for that.

  • Tikiporch@lemmy.world · 4 months ago

    Why would I stop using them? All I hear is that I need to be using AI. What’s the point of using it if I have to hide the fact that I’m using it?

  • Evotech@lemmy.world · 4 months ago

    My org: use AI, more AI, more AI!

    Me, using AI to respond to all emails and communications…

    My org: this is AI! Unacceptable! Lazy!

  • Tigeroovy@lemmy.ca · 4 months ago

    Honestly, I never saw anybody care about or use the goddamn em dash this much until AI started using it, and then suddenly everybody apparently uses them all the time.

    Like come on, no you don’t.

    • petrol_sniff_king@lemmy.blahaj.zone · 4 months ago

      I think people just don’t like being told what to do. Like, there are a lot of behaviors you can trace back to someone just being personally aggrieved that they ought to change anything.

      That said, if anyone else is reading, the em dash is a clue that you use to diagnose with—you don’t have to stop using it.

  • themeatbridge@lemmy.world · 4 months ago

    I still double space after a period, because fuck you, it is easier to read. But as a bonus, it helped me prove that something I wrote wasn’t AI. You literally cannot get an AI to add double spaces after a period. It will say “Yeah, OK, I can do that” and then spit out a paragraph without it. Give it a try, it’s pretty funny.

    • CodeInvasion@sh.itjust.works · 4 months ago

      This is because of how spaces are typically encoded by model tokenizers.

      In many cases it would be redundant to store spaces, so tokenizers collapse them away entirely. The model then reads tokens as if the spaces never existed.

      For example it might output: thequickbrownfoxjumpsoverthelazydog

      Except it would actually be a list of numbers like: [1, 256, 6273, 7836, 1922, 2244, 3245, 256, 6734, 1176, 2]

      Then the tokenizer decodes this and adds the spaces because they are assumed to be there. The tokenizer has no knowledge of your request, and the model output typically does not include spaces, hence your output sentence will not have double spaces.
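      The collapse-and-reinsert behavior described above can be sketched with a toy word-level tokenizer. This is purely illustrative (real tokenizers are subword-based, and the vocabulary here is invented): encoding splits on whitespace, so any run of spaces is destroyed, and decoding can only put a single space back.

```python
# Toy word-level tokenizer (illustrative only, not any real tokenizer).
vocab = {}

def encode(text):
    """Splitting on whitespace collapses ALL runs of spaces."""
    return [vocab.setdefault(word, len(vocab)) for word in text.split()]

def decode(ids):
    """Decoding assumes exactly one space between tokens."""
    words = {i: w for w, i in vocab.items()}
    return " ".join(words[i] for i in ids)

ids = encode("the quick  brown   fox")  # double/triple spaces vanish here
print(decode(ids))                      # "the quick brown fox"
```

    Any request for double spaces is lost at the encode step, before the model ever sees the text, and the decode step can’t restore what was never recorded.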

      • Redjard@lemmy.dbzer0.com · 4 months ago

        I’d expect tokenizers to include spaces in tokens. You get words constructed from multiple tokens, so you can’t really re-insert spaces based on token boundaries. And too much information is lost if spaces are stripped entirely.

        In my tests, plenty of LLMs are also capable of seeing and using double spaces when accessed through the right interface.

        • CodeInvasion@sh.itjust.works · 4 months ago

          The tokenizer is capable of decoding spaceless tokens into compound words following a set of rules referred to as a grammar in Natural Language Processing (NLP). I do LLM research and have spent an uncomfortable amount of time staring at the encoded outputs of most tokenizers when debugging. Normally spaces are not included.

          There is of course a token for spaces in special circumstances, but I don’t know exactly how each tokenizer implements those spaces. So it does make sense that some models would be capable of the behavior you find in your tests, but that appears to be an emergent behavior, and it’s very interesting to see it work successfully.

          I intended for my original comment to convey the idea that it’s not surprising that LLMs might fail to follow the instruction to include spaces, since they normally don’t see spaces except in special circumstances. Similar to how it’s unsurprising that LLMs are bad at numerical operations because of how they apply Markov-chain-style probability to each next token, one at a time.

          • Redjard@lemmy.dbzer0.com · 4 months ago

            Yeah, I would expect that to be hard, similar to asking an LLM to substitute every letter e with an a. Which I’m sure they struggle with, but manage to perform too.

            In this context, though, it’s a bit misleading to explain OP’s observed behavior with that, since it implies the behavior is due to the fundamental nature of LLMs, when in practice every model I tested fundamentally had the ability.

            It does seem that LLMs simply don’t use double spaces (or I haven’t noticed them doing it anywhere yet), but if you trained or even just system-prompted them differently, they could easily start to. So it isn’t a very stable method for non-AI identification.

            Edit: And of course you’d have to make sure the interfaces also don’t strip double spaces, as was guessed elsewhere. I have not checked other interfaces and would not be surprised either way, whether they do or don’t. This, though, can’t be overly hard to fix with a few select character conversions, even in the worst cases. And clearly at least my interface already managed to handle it just fine.

    • 4am@lemmy.zip · 4 months ago

      LLMs can’t count because they’re not brains. Their output is the statistically most-likely next token, and since most electronic text isn’t double-spaced after a period, they can’t “follow” that instruction.

    • TrackinDaKraken@lemmy.world · 4 months ago

      So… Why don’t I see double spaces after your periods? Test. For. Double. Spaces.

      EDIT: Yep, double spaces were removed from my test. So, that’s why. Although, they are still there as I’m editing this. So, not removed, just hidden, I guess?

      > I still double space after a period, because fuck you, it is easier to read. But as a bonus, it helped me prove that something I wrote wasn’t AI. You literally cannot get an AI to add double spaces after a period. It will say “Yeah, OK, I can do that” and then spit out a paragraph without it. Give it a try, it’s pretty funny.

        • FishFace@lemmy.world · 4 months ago

          HTML rendering collapses whitespace; it has nothing to do with accessibility. I would like to see the research on double-spacing causing rivers, because I’ve only ever noticed them in justified text, where I would expect the renderer to be inserting extra space after a full stop compared to between words within a sentence anyway.

          I’ve seen a lot of dubious legibility claims when it comes to typography including:

          1. serif is more legible
          2. sans-serif is more legible
          3. comic sans is more legible for people with dyslexia

          and so on.

      • dual_sport_dork 🐧🗡️@lemmy.world · edited · 4 months ago

        Web browsers collapse whitespace by default, which means that sans any trickery or   deliberately   using    nonbreaking    spaces,   any amount of spaces between words gets reduced to one. Since apparently every single thing in the modern world is displayed via some kind of encapsulated little browser engine nowadays, the majority of double spaces left in the universe that are not already firmly nailed down into print now appear as singles. And thus the convention is almost totally lost.
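        The collapsing rule is easy to demonstrate outside a browser. Here’s a rough Python mimic (an approximation of the HTML behavior, not the full spec), which also shows why the non-breaking-space trick survives:

```python
import re

def collapse(text):
    """Mimic browser whitespace collapsing: runs of ordinary spaces,
    tabs, and newlines become a single space. A non-breaking space
    (U+00A0) is deliberately NOT in the character class, so it survives,
    just as it does in real HTML rendering."""
    return re.sub(r"[ \t\n]+", " ", text)

print(collapse("Double  spaced.  Sentences."))  # "Double spaced. Sentences."
print(collapse("kept\u00a0\u00a0apart"))        # NBSPs survive intact
```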

        • Redjard@lemmy.dbzer0.com · edited · 4 months ago

          This seems to match up with some quick tests I did just now on the pseudonymized chatbot interface of DuckDuckGo.
          ChatGPT, Llama, and Claude all managed to use double spaces themselves, and all but Llama managed to tell that I was using them too.
          It might well depend on the platform, with the “native” applications for them stripping the spaces on both ends.

          Mistral seems a bit confused and uses triple spaces.

          • SGforce@lemmy.ca · 4 months ago

            Tokenization can make it difficult for them.

            The word chunks often contain a space because it’s efficient, so I would think an extra space would stand out. Writing one back should be easier, assuming there is a dedicated “space” token like the other punctuation tokens; there must be.

            Hard mode would be asking it how many spaces there are in your sentence. I don’t think they’d figure it out unless their own list of tokens and a description is trained into them specifically.

  • Snapz@lemmy.world · 4 months ago

    And as a long-time en dash aficionado, I’d be instantly exposed by those lesser em dashes appearing in my communications.

  • Almacca@aussie.zone · 4 months ago

    I didn’t even know what an em dash was until all this stuff about AI using them came up. I’ve certainly encountered them, but didn’t know the name. I’ve been using hyphens all this time for much the same purpose, but now I’m going to start using em dashes instead.