• overload@sopuli.xyz

      I go one further and also use public/private key pairs that my acquaintances must use to decrypt the scrambled letters I mail them.

    • spyd3r@sh.itjust.works

      I use some decoder ring I found in a cereal box, it’s totally secure.

      Be sure to drink your Ovaltine.

  • phoneymouse@lemmy.world

    The US Govt 5 years ago: e2e encryption is for terrorists. The govt should have backdoors.

    The US Govt now: Oh fuck, our back door got breached, everyone quick use e2e encryption asap!

    • theherk@lemmy.world

      Different parts of the government. Both existed then and now. There has long been a substantial portion of the government, especially defense and intelligence, that relies on encrypted comms and storage.

        • elucubra@sopuli.xyz

          I have never understood why electronic communications are not protected the way physical mail is.

          • Astronauticaldb@lemmy.world

            Lobbying, as well as development issues, I would assume. I’m no real developer just yet, but I’d imagine creating robust security protocols is time-consuming, and thinking of every possible vulnerability is not entirely worth it.

            • sugar_in_your_tea@sh.itjust.works

              No, security is pretty easy and has been for decades. PGP has been a thing since 1991, and other encryption schemes were a thing long before. ProtonMail uses PGP and SMTP, the latter of which predates PGP by about a decade (though modern SMTP with extensions wasn’t a thing until 1995).

              So at least for email, there’s little technical reason why we couldn’t all use top of the line security. It’s slightly more annoying because you need to trade keys, but email services could totally make it pretty easy (e.g. send the PGP key with the first email, and the email service sends it with an encrypted reply and stores them for later use).
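
              For illustration, here’s a rough sketch of that flow using the python-gnupg wrapper (the addresses, file names, and keyring path are made up, and it assumes GnuPG itself is installed):

              ```python
              import gnupg  # python-gnupg wrapper around an installed GnuPG binary

              gpg = gnupg.GPG(gnupghome="/home/alice/.gnupg")  # hypothetical keyring location

              # Import the public key Bob attached to his first email.
              with open("bob_public_key.asc") as f:
                  gpg.import_keys(f.read())

              # Encrypt the reply to Bob's key; only Bob's private key can decrypt it.
              encrypted = gpg.encrypt("Meet at noon?", recipients=["bob@example.com"],
                                      always_trust=True)
              print(str(encrypted))  # ASCII-armored ciphertext, safe to send over plain SMTP
              ```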

              The reason we don’t is because servers wouldn’t be able to read our email. The legitimate use case here is searching (Tuta solves this by searching on the client, ProtonMail stores unencrypted subject lines), and 20 years ago, that would’ve been a hardship with people moving to web services. Today, phones can store emails, so searching isn’t an issue anymore; it probably comes down to being able to sell your data.

              Many to many encryption is more complicated (e.g. Lemmy or Discord), so I understand why chat took a while to be end to end encrypted (Matrix can do this, for example), but there are plenty of FOSS examples today, and pretty much every device has encryption acceleration in the CPU, so there’s no technical reason why it’s impractical today.

              The reason it’s not ubiquitous today is that data is really valuable, both to police and advertisers.

          • JackbyDev@programming.dev

            Because physical mail can be easily opened with a warrant. Encryption can be nigh impossible to break. The idea of a vault that cannot be opened no matter how hard you try is something that scares law makers.

          • ayyy@sh.itjust.works

            Because the USA has been a broken fascist husk ever since the red scare and has been in slow decline ever since.

    • sugar_in_your_tea@sh.itjust.works

      More like 23 years ago when the Patriot Act was signed, and every time it has been re-authorized/renamed since. Every President since Bush Jr. is complicit, and I’m guessing most of them in the previous 70-ish years (or more) wish they could’ve had that bill as well.

      • dan@upvote.au

        I laughed so much at that. Encryption is literally just long complicated numbers combined with other long complicated numbers using mathematical formulae. You can’t ban maths.
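
        For instance, textbook RSA is nothing but modular arithmetic on big numbers. A toy sketch in Python (deliberately tiny, insecure primes, purely to show the math):

        ```python
        # Toy RSA with tiny primes, just to show that encryption is only arithmetic.
        p, q = 61, 53
        n = p * q                          # public modulus
        phi = (p - 1) * (q - 1)
        e = 17                             # public exponent
        d = pow(e, -1, phi)                # private exponent (modular inverse)

        message = 42
        ciphertext = pow(message, e, n)    # encrypt: m^e mod n
        plaintext = pow(ciphertext, d, n)  # decrypt: c^d mod n
        assert plaintext == message
        ```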

        If I remember correctly, there’s also a law in Australia where they can force tech companies to introduce backdoors in their systems and encryption algorithms, and the company must not tell anyone about it. AFAIK they haven’t tried to actually use that power yet, but it made the (already relatively stagnant) tech market in Australia even worse. Working in tech is the main reason I left Australia for the USA - there’s just so many more opportunities and significantly higher paying jobs for software developers in Silicon Valley.

        • rottingleaf@lemmy.world

          I laughed so much at that. Encryption is literally just long complicated numbers combined with other long complicated numbers using mathematical formulae. You can’t ban maths.

          Now laugh at banning chemistry and physics (guns and explosives and narcotics). Take a laugh at banning murder too - how do you ban every action leading to someone’s death?

          and the company must not tell anyone about it

          Any “must not tell” law is crap. Unless you signed some NDA knowing full well what it is about.

          Any kind of “national secret disclosure” punishment when you didn’t sign anything to get that national secret is the same.

          It’s an order given to a free person, not a voluntarily taken obligation.

          That said, you can’t fight force with words.

        • sugar_in_your_tea@sh.itjust.works

          You can try, and in the US, we have export restrictions on cryptography (ITAR restrictions), so certain products cannot be exported. But you can print out the algorithm and carry it on a plane, so I’m not sure what the point is…

  • circuitfarmer@lemmy.sdf.org

    It’s probably also good practice to assume that not all encrypted apps are created equal. Google’s RCS messaging, for example, says “end-to-end encrypted”, which sounds like it would be a direct and equal competitor to something like Signal. But Google regularly makes money off of your personal data. It does not behoove a company like Google to protect your data.

    Start assuming every corporation is evil. At worst you lose some time getting educated on options.

    • kingthrillgore@lemmy.ml

      If it’s not open source and audited yearly, it’s compromised. Your best options for secure comms are Signal and Matrix.

    • mosiacmango@lemm.ee

      End to end is end to end. It’s either “the devices sign the messages with keys that never leave the device, so no 3rd party can ever compromise them” or it’s not.

      Signal is a more trustworthy org, but Google isn’t going to fuck around with this service to make money. They make their money off you by keeping you in the Google ecosystem and data harvesting elsewhere.
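
      As a rough illustration of that model (a toy sketch with PyNaCl, not Google’s or Signal’s actual protocol): each device keeps its private key locally, only public keys are exchanged, and the relay only ever sees ciphertext:

      ```python
      from nacl.public import PrivateKey, Box

      # Each device generates its own keypair; private keys never leave the device.
      alice_private = PrivateKey.generate()
      bob_private = PrivateKey.generate()

      # Only the public keys are exchanged (this is all a relay server can see).
      alice_public, bob_public = alice_private.public_key, bob_private.public_key

      # Alice encrypts for Bob; anything in transit is just ciphertext.
      ciphertext = Box(alice_private, bob_public).encrypt(b"hello, this is private")

      # Bob decrypts with his private key and Alice's public key.
      assert Box(bob_private, alice_public).decrypt(ciphertext) == b"hello, this is private"
      ```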

      • jagged_circle@feddit.nl

        They do encrypt it, and they likely don’t send the messages unencrypted.

        Likely what’s happening is they’re extracting keywords to determine what you’re talking about (namely what products you might buy) on the device itself, and then uploading those categories (again, encrypted) up to their servers for storing and selling.

        This doesn’t invalidate their claim of e2ee and still lets them profit off of your data. If you want to avoid this, only install apps with open source clients.

        • mosiacmango@lemm.ee

          E2EE means a 3rd party can’t extract anything in the messages at all, by definition.

          If they are doing the above, it’s not E2EE, and they are liable for massive legal damages.

          • jagged_circle@feddit.nl

            That’s not what it means. It means that a third party cannot decrypt it on their servers.

            Of course, if the “third party” is actually decrypting it on your device, then they can read the messages. I don’t know why this is not clear to you.

      • renzev@lemmy.world

        Of course our app is end-to-end encrypted! The ends being your device and our server, that is.

        • jagged_circle@feddit.nl

          That’s literally what Zoom said early in the pandemic.

          Then all the businesses in the world gave them truckloads of money, the industry called them out on it, and they hired teams of cryptographers to build an actual E2EE system.

      • sem@lemmy.blahaj.zone

        It could be end to end encrypted and safe on the network, but if Google is in charge of the device, what’s to say they’re not reading the message after it’s unencrypted? To be fair this would compromise signal or any other app on Android as well

        • mosiacmango@lemm.ee

          That’s a different threat model that verges on “most astonishing corporate espionage in human history and greatest threat to corporate personhood” possible for Google. It would require thousands if not tens of thousands of Google employees coordinating in utter secrecy to commit an unheard-of crime that would be punishable by death in many circumstances.

          If they have backdoored all Android phones and are actively exploiting them in nefarious ways not explained in their various TOS, then they are exposing themselves to ungodly amounts of legal and regulatory risk.

          I expect no board of directors wants a trillion dollars of company worth to evaporate overnight, and would likely not be okay backdooring literally billions of phones from just a fiduciary standpoint.

          • rottingleaf@lemmy.world

            How do spyware services used by nation-state customers, like Pegasus, work?

            They use backdoors in commonly used platforms on an industrial scale.

            Maybe some of them are vulnerabilities due to honest mistakes. The problem is that the majority of such vulnerabilities also carry denial-of-service risks in widespread usage, which means they get found quickly enough.

            • mosiacmango@lemm.ee

              So your stance is that Google is applying self-designed malware to its own services to violate its own policies and harvest data that could bring intense legal, financial, and reputational harm to it as an org if it were ever discovered?

              Seems far fetched.

              • rottingleaf@lemmy.world

                Legal and financial - doubt it. Reputational - counter-propaganda is a thing.

                I think your worldview lags behind our current reality. I mean, even in 30-years old reality it would seem a bit naive.

                Also you’ve ignored me mentioning things like Pegasus, from our current, not hypothetical, reality.

                • mosiacmango@lemm.ee

                  So yes.

                  You think a nearly trillion dollar public company has an internal division that writes malware against flaws in its own software in order to harvest data from its own apps. It does this to gain just a bit more data about people it already has a lot of data on, because why not purposely leave active zero days in your own software, right?

                  That is wildly conspiratorial thinking, and honestly plain FUD. It undermines serious, actual privacy issues the company has when you make up wild cabals that are running double secret malware attacks against themselves inside Google.

          • circuitfarmer@lemmy.sdf.org

            It would require thousands if not tens of thousands of Google employees coordinating in utter secrecy

            This is usually used for things like the Moon landing, where so many folks worked for NASA that it would have been impossible to fake the landing and keep it secret.

            But it doesn’t really apply here. We know for example that NSA backdoors exist in Windows. Were those a concerted effort by MS employees? Does everyone working on the project have access to every part of the code?

            It just isn’t how development works at this scale.

            • rottingleaf@lemmy.world

              This is usually used for things like the Moon Landing, where so many folks worked for NASA to make it entirely impossible that the landing was faked.

              I think it’s also confirmed by radio transmissions from the Moon received in real time right then by USSR and other countries.

            • Pips@lemmy.sdf.org

              Ok but no one is arguing Windows is encrypted. Google is specifically stating, in a way that could get them sued for shitloads of money, that their messaging protocol is E2EE. They have explicitly described how it is E2EE. Google can be a bad company while still doing this thing within the bounds we all understand. For example, just because the chat can’t be backdoored doesn’t mean the device can’t be.

              • rottingleaf@lemmy.world

                Telegram has its supposedly E2EE protocol which isn’t used by most of Telegram users, but also there have been a few questionable traits found in it.

                Google is trusted a bit more than Pavel Durov, but it can well do a similar thing.

                And yes, Android is a much larger heap of hay where they can hide a needle.

        • mosiacmango@lemm.ee

          That’s a different tech. End to end is cut and dried in how it works. If you do anything to data mine it, it’s not end to end anymore.

          Only the users involved in end to end can access the data in that chat. Everyone else sees encrypted data, i.e. noise. If there are any backdoors or any methods to pull data out, you can’t bill it as end to end.

          • micballin@lemmy.world

            They can just claim archived or deleted messages don’t qualify for end-to-end encryption in their privacy policy, or something equally vague. If they invent their own program, they can invent the loophole for how the data is processed.

            • cheesemoo@lemmy.world

              Or the content is encrypted, but the metadata isn’t, so they can market to you based on who you talk to and what they buy, etc.

              • rottingleaf@lemmy.world

                Provided they have an open API and don’t ban alternative clients, one could build something kinda similar to Tor on top of this system, hiding the identities, and the channels between them, from the service provider.

                Meaning messages routed through a few hops over different users.
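
                Something like the layered encryption below is what that implies (a toy sketch with PyNaCl sealed boxes; the hop users are made up). Each relay can only strip its own layer, so no single user sees both the sender and the plaintext:

                ```python
                from nacl.public import PrivateKey, SealedBox

                # Three made-up relay users; each holds only their own private key.
                hops = [PrivateKey.generate() for _ in range(3)]

                # Wrap the message in one encryption layer per hop, innermost first.
                packet = b"meet at the usual place"
                for hop in reversed(hops):
                    packet = SealedBox(hop.public_key).encrypt(packet)

                # Each relay strips exactly one layer; only the last sees the plaintext.
                for hop in hops:
                    packet = SealedBox(hop).decrypt(packet)
                print(packet)  # b'meet at the usual place'
                ```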

                Sadly for all these services to have open APIs, there needs to be force applied. And you can’t force someone far stronger than you and with the state on their side.

              • mosiacmango@lemm.ee

                This part is likely, but not what we are talking about. Who you know and how you interact with them is separate from the fact that the content of the messages is not decryptable by anyone but the participants, by design. There is no “quasi” end to end. It’s an either/or situation.

                • sugar_in_your_tea@sh.itjust.works

                  It doesn’t matter if the content is encrypted in transit if Google can access the content in the app after decryption. That doesn’t violate E2EE, and they could easily exfiltrate the data through Google Play Services, which is a hard requirement.

                  I don’t trust them until the app is FOSS, doesn’t rely on Google Play Services, and is independently verified to not send data or metadata to their servers. Until then, I won’t use it.

            • circuitfarmer@lemmy.sdf.org

              Exactly. We know corporations regularly use marketing and doublespeak to obscure the fact that they operate for their interests and their interests alone. Again, the interests of corporations are not altruistic, regardless of the image they may want to project.

              Why should we trust them to “innovate” without independent audit?

            • mosiacmango@lemm.ee

              The messages are encrypted with cryptographic keys on the users’ phones that never leave the device. They are not decryptable in any way by Google or anyone else. That’s the very nature of E2EE.

              How end-to-end encryption works

              When you use the Google Messages app to send end-to-end encrypted messages, all chats, including their text and any files or media, are encrypted as the data travels between devices. Encryption converts data into scrambled text. The unreadable text can only be decoded with a secret key.

              The secret key is a number that’s:

              Created on your device and the device you message. It exists only on these two devices.

              Not shared with Google, anyone else, or other devices.

              Generated again for each message.

              Deleted from the sender’s device when the encrypted message is created, and deleted from the receiver’s device when the message is decrypted.

              Neither Google or other third parties can read end-to-end encrypted messages because they don’t have the key.

              They can’t fuck with it, at all, by design. That’s the whole point. Even if they created “archived” messages to datamine, all they would have is the noise.

          • ITGuyLevi@programming.dev

            End to end doesn’t say anything about where keys are stored; it can be end-to-end encrypted and someone else can still have access to the keys.

          • circuitfarmer@lemmy.sdf.org

            You are suggesting that “end-to-end” is some kind of legally codified phrase. It just isn’t. If Google were to steal data from a system claiming to be end-to-end encrypted, no one would be surprised.

            I think your point is: if that were the case, the messages wouldn’t have been end-to-end encrypted, by definition. Which is fine. I’m saying we shouldn’t trust a giant corporation that makes money off of selling personal data when it claims something actually is end-to-end encrypted.

            By the same token, don’t trust Microsoft when they say Windows is secure.

            • mosiacmango@lemm.ee

              It’s a specific, technical phrase that means one thing only, and yes, Google’s RCS meets that standard:

              https://support.google.com/messages/answer/10262381?hl=en

              How end-to-end encryption works

              When you use the Google Messages app to send end-to-end encrypted messages, all chats, including their text and any files or media, are encrypted as the data travels between devices. Encryption converts data into scrambled text. The unreadable text can only be decoded with a secret key.

              The secret key is a number that’s:

              Created on your device and the device you message. It exists only on these two devices.

              Not shared with Google, anyone else, or other devices.

              Generated again for each message.

              Deleted from the sender’s device when the encrypted message is created, and deleted from the receiver’s device when the message is decrypted.

              Neither Google or other third parties can read end-to-end encrypted messages because they don’t have the key.

              They have more technical information here if you want to deep dive about the literal implementation.

              You shouldn’t trust any corporation, but needless FUD detracts from their actual issues.

              • circuitfarmer@lemmy.sdf.org

                You are missing my point.

                I don’t deny the definition of E2EE. What I question is whether or not RCS does in fact meet the standard.

                You provided a link from Google itself as verification. That is… not useful.

                Has there been an independent audit on RCS? Why or why not?

                • mosiacmango@lemm.ee

                  Not that I can find. Can you post Signal’s most recent independent audit?

                  Many of these orgs don’t post public audits like this. It’s not common, even for open source players like Signal.

                  What we do have is a megacorp stating its technical implementation extremely explicitly for a well defined security protocol, for a service meant to directly compete with iMessage. If they are violating that, it opens them up to huge legal liability and reputational harm. Neither of these is worth data mining this specific service.

              • sugar_in_your_tea@sh.itjust.works

                Even if we assume they don’t have a backdoor (which is probably accurate), they can still exfiltrate any data they want through Google Play services after it’s decrypted.

                They’re an ad company, so they have a vested interest in doing that. So I don’t trust them. If they make it FOSS and not rely on Google Play services, I might trust them, but I’d probably use a fork instead.

      • CatLikeLemming@lemmy.blahaj.zone

        Note that it doesn’t mean metadata is encrypted. They may not know what you sent, but they may very well know you message your mum twice a day and who your close friends are that you message often, that kinda stuff. There’s a good bit you can do with metadata about messages combined with the data they gather through other services.
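
        In other words, even with the body opaque, the provider can still log something like this for every message (a hypothetical illustration):

        ```python
        # Hypothetical metadata record a provider could keep even with E2EE content.
        message_event = {
            "sender": "+1-555-0100",              # who
            "recipient": "+1-555-0199",           # talks to whom
            "timestamp": "2024-12-05T08:14:02Z",  # when, and how often
            "size_bytes": 1432,                   # roughly how much
            "body": "<ciphertext, unreadable>",   # the only part E2EE protects
        }
        ```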

      • MonkderVierte@lemmy.ml

        With end to end, what matters is who has the key: you or the provider. And Google could still read your messages before they are encrypted.

        • mosiacmango@lemm.ee

          You have the key, not the provider. They are explicit about this in the implementation.

          They can only read the messages before encryption if they are backdooring all Android phones in an act of global sabotage. Pretty high consequences for some low-stakes data.

          • ITGuyLevi@programming.dev

            I’m pretty sure the key is stored on the device, which is backed up to Google. I cannot say for sure if they do or don’t backup your keyring, but I feel better not using it.

        • sugar_in_your_tea@sh.itjust.works

          Yup, they can read anything you can, and send whatever part they want through Google Play services. I don’t trust them, so I don’t use Messenger or Play services on my GrapheneOS device.

      • 𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍@midwest.social

        End to end could still - especially with a company like Google - include data collection on the device. They could even “end to end” encrypt sending it to Google in the side channel. If you want to be generous, they would perform the aggregation in-device and don’t track the content verbatim, but the point stands: e2e is no guarantee of privacy. You have to also trust that the app itself isn’t recording metrics, and I absolutely do not trust Google to not do this.

        They make so much of their money from profiling and ads. No way they’re not going to collect analytics. Heck, if you use the stock keyboard, that’s collecting analytics about the texts you’re typing into Signal, much less Google’s RCS.

      • zergtoshi@lemmy.world

        Signal doesn’t harvest, use, or sell metadata; Google may do that.
        E2E encryption doesn’t protect from that.
        Signal is orders of magnitude more trustworthy than Google in that regard.

        • mosiacmango@lemm.ee

          Agreed. That still doesn’t mean Google is not doing E2EE for its RCS service.

          I’m not arguing Google is trustworthy or better than Signal. I’m arguing that E2EE has a specific meaning that most people in this thread do not appear to understand.

          • zergtoshi@lemmy.world

            Sure!
            I was merely trying to raise awareness for the need to bring privacy protection to a level beyond E2EE, although E2EE is a very important and useful step.

        • renzev@lemmy.world

          There’s also Session, a fork of Signal which claims that their decentralised protocol makes it impossible/very difficult for them to harvest metadata, even if they wanted to. Though I personally can’t vouch for how accurate their claims are.

    • s_s@lemm.ee

      End-to-end encryption matters if your device isn’t actively trying to sabotage your privacy.

      If you run Android, Google is guilty of that.

      If you run Windows in a non-enterprise environment Microsoft is guilty of that.

      If you run iOS or MacOS, Apple is (very likely) guilty of that.

  • Obinice@lemmy.world

    Real encrypted apps, …or just the ones their own government can use to spy on them?

  • mox@lemmy.sdf.org

    End-to-end encryption is indispensable. Our legislators (no matter where we live) need to be made to understand this next time they try to outlaw it.

      • pdxfed@lemmy.world

        “you wouldn’t put a dump truck full of movies on a snowy road without chains on the tires would you?”

      • sugar_in_your_tea@sh.itjust.works

        Ew.

        Think of it like this:

        • no encryption - sending a postcard
        • client to server encryption - dropping off the postcard at the post office instead of the mailbox
        • end to end encryption - security envelope in the mailbox
        • read receipts - registered mail

        Hopefully you’re less wrong now Mr/Mrs legislator.

  • walden@sub.wetshaving.social

    Sounds bad I guess, but the USA has been spying on us for a long time now. Is the bad part that it’s China?

    • rottingleaf@lemmy.world

      Yes. Wars happen. Even corrupt politicians are nicer when their control base is inside the country.

      • treadful@lemmy.zip

        RTFA

        The third has been systems that telecommunications companies use in compliance with the Commission on Accreditation for Law Enforcement Agencies (CALEA), which allows law enforcement and intelligence agencies with court orders to track individuals’ communications. CALEA systems can include classified court orders from the Foreign Intelligence Surveillance Court, which processes some U.S. intelligence court orders.

      • stinky@redlemmy.com

        Wouldn’t surprise me. “We’re doing this to be helpful to you!” is actually moustached disney villain behavior.

        ^ similar to the prisoners with cats gimmick. “look how nice we’re being to our prisoners” is actually “stop yelling at your bunkmate or we’ll take away your cat”

    • mox@lemmy.sdf.org

      When a whole nation’s communications are intercepted by another entity, yes, the bad part is that it’s another nation. Especially an adversarial one.

      This is not about individuals’ personal privacy. It’s about things that happen at a much larger scale. For example, leverage for political influence, or leaking of sensitive info that sometimes finds its way into unsecured channels. Mass surveillance is powerful.

  • PagingDoctorLove@lemmy.world

    Question for more tech savvy people: should I be worried about wiping old data, and if so for which apps? Just messaging apps, or also email and social media? Or can I just use the encrypted apps moving forward?

    • kava@lemmy.world

      the safest perspective to have is this -

      every single thing you send online is going to be there forever. “the cloud” is just someone else’s server, and that counts as online. even end to end encryption isn’t necessarily going to save you.

      for example iCloud backup is encrypted. but Apple in the past has kept a copy of your encryption key on your iCloud. why? because consumers who choose to encrypt and lose their passwords are gonna freak out when all their data is effectively gone forever.

      so when FBI comes a’knocking to Apple with a subpoena… once they get access to that encryption key it doesn’t matter if you have the strongest encryption in the world
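
      a quick sketch of why an escrowed key defeats even strong encryption (using python’s cryptography library; the “escrow” here is just a variable standing in for the copy a provider keeps):

      ```python
      from cryptography.fernet import Fernet

      key = Fernet.generate_key()   # your backup encryption key
      escrowed_copy = key           # provider keeps a copy "for account recovery"

      token = Fernet(key).encrypt(b"my private backup")

      # whoever is handed the escrowed copy (e.g. via subpoena) decrypts just as easily
      print(Fernet(escrowed_copy).decrypt(token))  # b'my private backup'
      ```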

      my advice

      never ever ever write something online that you do not want everybody in the world seeing.

      to put on my tin foil hat, i believe the government probably has access to methods that break modern encryption. in theory, with quantum computers it shouldn’t be difficult.

      • PagingDoctorLove@lemmy.world

        I agree with you and I don’t put anything that I would consider questionable online, at least not these days. I’m just having a hard time figuring out what adjustments to make in addition to worrying about personal things I’ve already shared, like my gender and race. You know what I mean? I’m a married woman, and I have info in various places about our family planning choices, to give an example. That’s really starting to worry me, but how can I even begin to delete my data? It’s everywhere. Every doctor has their own patient portal, I have multiple email accounts, and I don’t even want to think about the dumb shit I might have posted when YouTube comment sections were new.

        It’s all really overwhelming.

        • kava@lemmy.world

          yeah i just try not to think about it. I’m glad I was in the myspace generation during my teenage years. so I was actually able to just delete my myspace later on as an adult

          i feel worse for the kids growing up today. they don’t fully understand the implications of what they are posting online. anything and everything is being recorded forever. my generation got a chance to be a stupid kid and have it be forgotten. today’s kids don’t get that opportunity

          the best you can do, though, is just stop posting potentially damaging things online. you can’t change what you already posted. and 999 times out of a thousand, it’s not gonna hurt you.

          i understand the overwhelmed feeling though

      • archomrade [he/him]@midwest.social

        I’d imagine operating a quantum computer for blanket surveillance is cost-prohibitive, but yea, if you’ve given them reason to look at you just assume they have the means to break your encryption.

    • WhyJiffie@sh.itjust.works

      just wanted to add that deleting an app will not result in deletion of your data stored in the cloud (e.g. your emails)

      • PagingDoctorLove@lemmy.world

        That I do know. I’m not worried about emails, or really anything specific. My online activity is pretty tame, but that’s within the context of a country with a functioning democracy that treats women like free humans. Not a surveillance state that plans to criminalize reproductive healthcare and turn women into sex slaves. I guess the problem I’m having is I don’t know how much I need to change my online habits because I have no idea how bad things are going to get.

        • WhyJiffie@sh.itjust.works

          that’s great to hear. in your case not wiping emails and social media is not that much of a danger, I would assume, but I would do it anyway, even if I were not a woman, just for the sake of it not being used (theoretically) for ads and such anymore. but be sure you have backed up every email and post you will delete, and store it securely.

    • sugar_in_your_tea@sh.itjust.works

      That depends on the privacy protections where you live and the policies of each service:

      • most places in the US - they already have your data and aren’t obligated to delete it
      • outside the EU - probably the same as the US
      • the EU or select states (e.g. CA) - you have some protections and a legal obligation to honor delete requests

      For the first two, I wouldn’t bother. I personally poisoned my data with Reddit before leaving, because I’ve heard of them reversing deletions. For the third, deleting may make sense.

      But in general, I’d keep your other accounts open until you fully transition to the new one.

      Below is information when considering a replacement service.

      Anything where data is stored on a server you don’t directly control can be leaked or subpoenaed from the org that owns that server. Any unencrypted communication can be intercepted, and any regular encryption (HTTPS) can be logged by that server (e.g. under court order without notifying the customer).

      Even “secure” services can be ordered to keep logs. Here’s an example from Proton Mail, and here’s one involving Tutanota.

      So it depends on your threat model, or in other words, who you’re trying to keep away from your data. Just think about how screwed you might be if:

      • a hacker dumps the server’s data
      • a police agency secretly orders recording of data and metadata
      • someone steals your device
      • the police confiscate your device

      The answers to the above should help you decide which type of service you’d feel comfortable with, and what tradeoffs you’re willing to make.

      • NιƙƙιDιɱҽʂ@lemmy.world

        Check out your old Reddit account. I poisoned my data, too, then deleted it, but they restored it completely like the bastards they are. I deleted my 2FA too, so it’s there forever now.

        • sugar_in_your_tea@sh.itjust.works

          Yup, I figured that would be the case. I “deleted” my account, so I can’t go verify, but I let it sit for a couple weeks and my poisoned posts were still there (even got a couple replies asking WTF is up w/ my comments).

          So yeah, not sure if my data is still there or not, but at least I tried.

          • Buddahriffic@lemmy.world

            Thing is, if they have backups, even editing data doesn’t do anything. Or they could even just have it set up to only display the most recent version but still keep each edit on the db. Wouldn’t even be hard to implement. Hell, it wouldn’t even be that hard to implement a historical series of diffs so they don’t have to store the full comments for each edit if the edit is a small one.
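
            Something along these lines with Python’s standard difflib would do it (a toy sketch; the comment text is made up):

            ```python
            import difflib

            original = "I love this product, totally recommend it".splitlines(keepends=True)
            edited   = "I have no opinion on this product".splitlines(keepends=True)

            # Store only the delta for the edit instead of the full old comment...
            delta = list(difflib.ndiff(original, edited))

            # ...and reconstruct either version on demand (1 = old text, 2 = new text).
            assert "".join(difflib.restore(delta, 1)) == "I love this product, totally recommend it"
            assert "".join(difflib.restore(delta, 2)) == "I have no opinion on this product"
            ```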

            Like if I wanted to run a service that made it easier to find interesting data, part of that would be to flag deletes and edits as “whatever was there before has a higher chance of being interesting”.

            Once something is posted, IMO just assume that it can’t be unposted and trying to unpost it might work similarly to the Streisand effect.

            Even here. Sure, the source is open and I’d bet looking at the delete and edit functions would make it look like everything is fine. But other federated servers don’t have to run the same code and can react to delete and edit directives from other servers however they want. The main difference between this platform and Reddit in regards to control over posted information is the fediverse can’t prevent entities from accessing the data for free (albeit with less user metadata like IP and email).

            • sugar_in_your_tea@sh.itjust.works

              it wouldn’t even be that hard to implement a historical series of diffs

              And external services provide this as well, like those services where you can find deleted comments (or the internet archive).

              I just try to disassociate my identity as much as I can from sites like Reddit. I never used my email on Reddit, and I haven’t used mine here. I’m guessing an enterprising individual could triangulate who I am based on my posts (though I do post false information sometimes), but that’s a lot less likely than if I handed over that association (i.e. through Facebook or whatever).

              Do what you can, but yeah, assume that everything you post on the internet exists forever.

  • Lost_My_Mind@lemmy.world

    Everybody’s aunt at Thanksgiving:

    “I should be fine. I only trust the facebook with my information. Oh, did I tell you? We have 33 more cousins we didn’t know about. I found out on 23andme.com. All of them want to borrow money.”

  • Maeve@kbin.earth

    Oh gee, forcing companies to leave backdoors for the government might compromise security, everyone. Who’d have thunk it? 🤦

    • rottingleaf@lemmy.world

      They knew; they were putting in backdoors when they needed them.

      Now the new administration will take half of the blame in public opinion (that’s how this works) and also half of the profits, so they won’t investigate too strictly those who’ve done such things.

      But also words don’t cost anything. They can afford to say the obvious after the deed has been done.

  • OldManBOMBIN@lemmy.world

    Just stop using your electronic devices. Not like they don’t all have monitors built in already anyway. Every connected device could be sending screenshots home and we’d never know. I mean, I guess you could use something like Wireshark to monitor your home network, but something tells me nowadays there are ways around even that. I’m not a certified network tech or even a script kiddie, but I don’t trust my tech as far as my dog can throw it. I just try to secure through obfuscation as much as possible. Everyone thinks I have carbon monoxide poisoning, but it’s a small price to pay for peace of mind - even a small one.

  • iknowitwheniseeit@lemmynsfw.com

    From RFC 2804:

    • The IETF believes that adding a requirement for wiretapping will make affected protocol designs considerably more complex. Experience has shown that complexity almost inevitably jeopardizes the security of communications even when it is not being tapped by any legal means; there are also obvious risks raised by having to protect the access to the wiretap. This is in conflict with the goal of freedom from security loopholes.

    https://datatracker.ietf.org/doc/rfc2804/

    This was written in 2000 in response to US government requests to add backdoors to voice-over-IP (VoIP) standards.

    It was recognized 25 years ago that having tapping capabilities is fundamentally insecure.

    • rottingleaf@lemmy.world

      It was always recognized.

      Every time I go to the Interwebs and read what people have to say on security, it’s always the same high horse absolutism.

      I read Attwood’s book on Asperger’s syndrome a couple of weeks ago. There, such absolutism was mentioned as a natural trait of aspies, but one that, when applied to social power dynamics or any military logic, gets you assroped in jail.

      People who want to spy on you or read all your communications understand, too, that general security suffers, but simply not having that power is out of the question for them, and with the power they already have, the security effect on them personally won’t be too big.

      It’s a social problem of the concept of personal freedom being vilified in the Western world via association with organized crime, terrorism, anarchism, you get the idea.

      It’s not hard to see that the pattern here is that these things are chosen because they challenge the state’s authority and power, because, well, the subsets of what’s called organized crime and terrorism that can be prevented by surveillance are not what people generally consider bad, and anarchism is not something bad in any form.

      What’s more important, people labeled that way do not need to challenge the state if the state is functional, as in representative, not oppressive, and not a tool for some groups to hurt other groups.

      As we’ve seen throughout world history, what’s called organized crime and what’s called terrorism are sometimes necessary to resolve deadlocks in a society. It has never happened in history that a society could function by its formalized laws for long without breaking their consistency. And it has never happened that an oppressed group/ideology/movement would be able to make its case in accordance with the laws made by its oppressor.

      Why I’m typing all this - it’s not a technical problem. It’s a problem of bad people who should be afraid not being afraid and thus acting, and good people who should be afraid not being afraid and thus not acting.

    • sugar_in_your_tea@sh.itjust.works

      You don’t need technical knowledge to see the problem.

      If you live in an apartment and your landlord has a master key, then all an attacker needs to do is get that master key. In an apartment complex, maybe that’s okay because who’s going to break in to the landlord’s office? But on the internet, tons of people are trying to break in every day, and eventually someone will get the key.

      Even for the landlord, I’d rather them have a copy of my key than a master key, because that way they’d need to steal my key specifically.

  • Bluetreefrog@lemmy.world

    Interpretation - the NSA can now crack all common encryption methods, so let’s disadvantage our adversaries at no real cost to us.

    • Lofenyy@sh.itjust.works

      I vaguely recall Bruce Schneier saying that there is good evidence that the NSA cannot crack certain encryption methods. At the time, RSA was on the list. Maybe common methods mean roll-your-own corporate encryption, but it’s my understanding that GNUpg and similar software are safe.

      • jagged_circle@feddit.nl

        Well, GPG doesn’t have PFS, but it’s a good starting point to say hello and then upgrade to some better encrypted messaging app.