This isn’t a gloat post. In fact, I was completely oblivious to this massive outage until I tried to check my bank balance and it wouldn’t log in.

Apparently Visa Paywave, banks, some TV networks, EFTPOS, etc. have gone down. Flights have had to be cancelled as some airlines’ systems have also gone down, gas stations and public transport systems are inoperable, and numerous Windows systems and Microsoft services are affected. (At least according to one of my local MSM outlets.)

Seems insane to me that one company’s messed-up update could cause so much global disruption and take down so many systems :/ This is exactly why centralisation of services, and large corporations gobbling up smaller companies to become behemoth services, is so dangerous.

  • SeattleRain@lemmy.world
    4 months ago

    It’s proving that POSIX architecture is necessary even if it requires additional computer literacy on the part of users and admins.

    The risk of hacking a monolithic system like the Windows OS (which is essentially what Crowdstrike does to get so deeply embedded and be so effective at endpoint protection) is that if you screw up, the whole thing comes tumbling down.

  • TCB13@lemmy.world
    4 months ago

    While I don’t totally disagree with you, this has mostly nothing to do with Windows and everything to do with a piece of corporate spyware garbage that some IT Manager decided to install. If tools like that existed for Linux, doing what they do to the OS, trust me, we would be seeing kernel panics as well.

    • Mikina@programming.dev
      4 months ago

      I wouldn’t call Crowdstrike corporate spyware garbage. I work as a Red Teamer in cybersecurity, and EDRs are the bane of my existence - they are useful, and pretty good at what they do. In the last few years I’ve been struggling more and more with the engagements we do, because EDRs just get in the way and catch a lot of what would have passed undetected a month ago. Staying on top of them with our tooling is getting more and more difficult, and I would call that a good thing.

      I’ve recently tested a company without EDR, and boy was it a treat. I’m not defending Crowdstrike - to call this a major fuckup is a great understatement - but calling it “corporate spyware garbage” feels a little unfair. EDRs do make a difference, and this wasn’t an issue with their product in itself, but with the irresponsibility of their patch management.

      • TCB13@lemmy.world
        4 months ago

        Fair enough.

        Still, this fiasco proved once again that the biggest threat to IT is sometimes on the inside. At the end of the day a bunch of people decided to buy Crowdstrike and got screwed over. Some of them actually had good reason to use a product like that; for others it was just paranoia and FOMO.

      • Jako301@feddit.de
        4 months ago

        Why should it be? A faulty software update from a 3rd party crashes the operating system. The exact same thing could happen to Linux hosts as well, with how much access those IPSec programs usually get.

          • jet@hackertalks.com
            4 months ago

            You’re fixated on the wrong part of the story. A synchronized supply-chain update taking out global infrastructure isn’t a Windows problem; this happens on Linux too!

            Just because a drunk driver crashes their BMW into a school doesn’t mean drunk driving is only a BMW vehicle problem.

            • limelight79@lemm.ee
              4 months ago

              I love how quickly everyone has forgotten about that xz attack.

              I use and love Linux and have for over two decades now, but I’m not going to sit here and claim that something similar to the current Windows issue can’t happen to Linux.

              • Aniki 🌱🌿@lemmings.world
                4 months ago

                xz attack

                That has nothing to do with this. That was a security vulnerability - solved in record time, blame placed where it was due, and patched within hours.

                • limelight79@lemm.ee
                  4 months ago

                  You’re missing the point. That compromised xz made it into some production distributions. The point here is that shit can happen to Linux, too.

            • Aniki 🌱🌿@lemmings.world
              4 months ago

              If BMW makes a car that has square wheels and everyone needs to install round wheels so the fucking thing works, you can’t blame a company for making wheels.

              It’s a Microsoft problem through and through.

              • jet@hackertalks.com
                4 months ago

                Your counter to the BMW drunk-driver example didn’t address drunk driving in Volvos, Toyotas, Fords… you just introduced a variable that you’re upset with. BMWs having weird wheels has nothing to do with drunk-driving incidents.

                Again, you’re focused on the wrong thing; this story is a warning about supply-chain issues.

                You’re just memeing on the hate for Windows.

                Have you never seen a DNS outage, an Ansible outage, a Terraform outage, a RADIUS outage, a database schema-change outage, a router firmware-update outage?

                • Aniki 🌱🌿@lemmings.world
                  4 months ago

                  Again, you’re talking about something I am not. I am talking about THIS problem, right here, which is categorically a Windows problem, in that it’s not on the Linux kernel stack, or Mac. How is this NOT a Windows problem??

      • marcos@lemmy.world
        4 months ago

        It is in the sense that Windows admins are the ones that like to buy this kind of shit and use it. It’s not in the sense that Windows itself was broken somehow.

      • DigitalDilemma@lemmy.ml
        4 months ago

        The fault seems to be 90/10 CS, MS.

        MS allegedly pushed a bad update. Ok, it happens. Crowdstrike’s initial statement seems to be blaming that.

        CS software csagent.sys took exception to this and royally shit the bed, disabling the entire computer. I don’t think it should EVER do that, so the weight of blame must lie with them.

        The really problematic part is, of course, the need to manually remediate these machines. I’ve just spent the morning of my day off doing just that. Thanks, Crowdstrike.

        EDIT: Turns out it was 100% Crowdstrike, and the update was theirs. The initial press release from CS seemed to be blaming Microsoft for an update, but that now looks to be misleading.

      • kautau@lemmy.world
        4 months ago

        And if it was a kernel-level driver that failed, Linux machines would fail to boot too. The number of people seeing this and saying “MS bad” (which is true, but has nothing to do with this) instead of “how does an 83-billion-dollar IT security firm push an update this fucked?” is hilarious.

        • Badabinski@kbin.earth
          4 months ago

          Falcon uses eBPF on Linux nowadays. It’s still an irritating piece of software, but it won’t make your boxen fail to boot.

          edit: well, this is a bad take. I should avoid commenting on shit when I’m sleep deprived and filled with meeting dread.

            • Badabinski@kbin.earth
              4 months ago

              Were you using the kernel module? We’re using Flatcar which doesn’t support their .ko, and we haven’t been getting panics on any of our machines (of which there are many).

              • Bitrot@lemmy.sdf.org
                4 months ago

                Nah it was specifically related to their usage of BPF with the Red Hat kernel, since fixed by Red Hat. Symptom was, you update your system and then it panics. Still usable if you selected a previous kernel at boot though.

    • biscuitswalrus@aussie.zone
      4 months ago

      Hate to break it to you, but most IT managers don’t care about Crowdstrike: they’re forced to choose some kind of EDR to complete audits. But yes, things like Crowdstrike, Huntress, SentinelOne, and even Microsoft Defender all run on Linux too.

  • Asidonhopo@lemmy.world
    4 months ago

    US and UK flights are grounded because of the issue, and banks, media, and some businesses aren’t fully functioning. Likely we’ll see more effects as the day goes on.

  • Treczoks@lemmy.world
    4 months ago

    Same here. I was totally busy writing software in a new language and a new framework, and had a gazillion tabs open on Google and Stack Exchange. I didn’t notice any network issues until I was on my way home, when the Windows f-up was the one big thing in the radio news. Looks like Windows admins will have a busy weekend.

  • aard@kyu.de
    4 months ago

    The annoying aspect, as somebody with decades of IT experience: what should happen is that Crowdstrike gets sued into oblivion, and the people responsible for buying that shit have an epiphany and properly look at how they are doing their infra.

    But what will happen is that they’ll just buy a new Crowdstrike product that promises to mitigate the fallout of them fucking up again.

    • 0x0@programming.dev
      4 months ago

      decades of IT experience

      Try any changes - especially upgrades - on local test environments before applying them in production?

      The scary bit is what most in the industry already know: critical systems are held together with duct tape and maintained by juniors ’cos they’re the cheapest Big Money can find. And even if not, “There’s no time” or “It’s too expensive” are probably the most common answers a PowerPoint manager will give to a serious technical issue being raised.

      The Earth will keep turning.

      • ik5pvx@lemmy.world
        4 months ago

        Unfortunately Falcon self-updates, and it will not work properly if you don’t let it.

        Also add “customer has rejected the maintenance window” to your list.

        • marcos@lemmy.world
          4 months ago

          Well, “don’t have self-upgrading shit in your production environment” also applies.

          As in “if you bought something like this, there’s a problem with you”.

      • goodgame@feddit.uk
        4 months ago

        Some years back I was the ‘Head’ of systems stuff at a national telco that provided the national telco infra. Part of my job was to manage the national systems upgrades. I had the stop/go decision to deploy, and indeed pushed the ‘enter’ button to do it. I was a complete PowerPoint manager and had no clue what I was doing; it was total Accidental Empires, and I should not have been there. Luckily I got away with it for a few years. It was horrifically stressful and not the way to mitigate national risk. I feel for the CrowdStrike engineers. I wonder if the latest embargo on Russian oil sales is in any way connected?

        • 0x0@programming.dev
          4 months ago

          I wonder if the latest embargo on Russian oil sales is in anyway connected?

          Doubt it, but it’s ironic that this happens shortly after Kaspersky gets banned.

      • HumanPenguin@feddit.uk
        4 months ago

        Not OP, but that is how it used to be done. The issue is that the attacks we have seen over the years - i.e. ransomware attacks etc. - have made corps feel they need to fix and update instantly to avoid attacks. So they depend on the corp they pay for the software to test before rollout.

        Auto-update is a two-edged sword. Without it, attackers will take advantage of the delays. With it… well, today.

        • 0x0@programming.dev
          4 months ago

          I’d wager most ransomware relies on old vulnerabilities. Yes, keep your software updated but you don’t need the latest and greatest delivered right to production without any kind of test first.

          • HumanPenguin@feddit.uk
            4 months ago

            Very much so. But the vulnerabilities tend not to be discovered (by developers) until an attack happens, and auto-updates are generally how the spread of attacks is limited.

            Open source can help slightly, since both good and bad actors unrelated to development can see the code, so it is more common for alerts to land before attacks. But it is far from a fix-all.

            Generally though, the time between discovery and fix is a worry for big corps, which is why auto-updates have been accepted with less manual intervention than was common in the past.

            • SayCyberOnceMore@feddit.uk
              4 months ago

              I would add that a lot of attacks happen after a fix has been released - i.e. compare the previous release with the patch and bingo, there’s the vulnerability.

              But agreed, patching should happen regularly, just with a few days’ delay after the supplier releases it.
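
              That delayed-rollout policy is simple to encode in whatever drives your patching. A toy sketch of the idea (the update list and field names are invented for illustration):

              ```python
              # Toy sketch of "patch regularly, but a few days behind the vendor":
              # only promote updates that have aged past a quarantine window.
              from datetime import datetime, timedelta, timezone

              QUARANTINE = timedelta(days=5)

              available = [
                  {"id": "agent-7.16.1", "released": datetime(2024, 7, 14, tzinfo=timezone.utc)},
                  {"id": "agent-7.16.2", "released": datetime(2024, 7, 18, tzinfo=timezone.utc)},
              ]

              now = datetime.now(timezone.utc)
              safe = [u for u in available if now - u["released"] >= QUARANTINE]
              print([u["id"] for u in safe])  # only sufficiently aged updates get rolled out
              ```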

        • Avatar_of_Self@lemmy.world
          4 months ago

          I get the sentiment, but defense in depth is a methodology to live by in IT, and auto-updating via the Internet is not a good risk to take in general. For example, should Crowdstrike just disappear one day, your entire infrastructure shouldn’t be at enormous risk, nor should critical services. Even if it’s your anti-virus, a virus or ransomware shouldn’t be able to easily propagate through the enterprise. If it did, then it is doubtful something like Crowdstrike is going to be able to update and suddenly reverse course. If it could, then you’re just lucky the ransomware that made it through didn’t do anything in defense of itself (disconnecting from the network, blocking CIDRs like Crowdstrike’s update servers, blocking processes, whatever). And frankly, you can still update those clients from your own AV update server - a product you’d be using anyway if you aren’t allowing updates from the Internet, in order to roll them out to dev first with phasing and/or schedules from your own infrastructure.

          Crowdstrike is just another lesson in that.

  • Tenkard@lemmy.ml
    4 months ago

    I would be too, except Firefox just started crashing on Wayland all morning D;

      • Tenkard@lemmy.ml
        4 months ago

        Yes, but I upgraded to 555 at least a week or two ago and it started crashing a couple of days ago. I think there’s an issue with explicit sync:

        explicit sync is used, but no acquire point is set

        If you Google this you’ll find various bug reports

  • Swarfega@lemm.ee
    4 months ago

    I’ve just spent the past 6 hours booting into safe mode and deleting Crowdstrike files on servers.
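
    For reference, the fix that was circulating is exactly that: boot into safe mode and remove the offending channel file from the CrowdStrike driver directory. A minimal sketch of the deletion step (install path and file pattern as publicly reported; treat it as illustrative and check vendor guidance first):

    ```python
    # Sketch of the manual remediation step: from safe mode, delete the
    # bad CrowdStrike channel file(s). Path per public reporting.
    import glob
    import os

    DRIVER_DIR = r"C:\Windows\System32\drivers\CrowdStrike"

    for path in glob.glob(os.path.join(DRIVER_DIR, "C-00000291*.sys")):
        print(f"removing {path}")
        os.remove(path)
    ```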

    • allywilson@lemmy.ml
      4 months ago

      Feel you there. 4 hours here. All of them cloud instances where getting access to the actual console isn’t as easy as it should be, and trying to hit F8 to get the boot menu for safe mode can take a very long time.

      • Swarfega@lemm.ee
        4 months ago

        Ha! Yes. Same issue. Clicking Reset in vSphere and then quickly switching tabs to hold down F8 has been a ball ache to say the least!

        • Blank@lemmy.world
          4 months ago

          Just go into settings and add a boot delay, then set it back when you’re done.
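
          If you have a lot of VMs to touch, the boot delay can also be set programmatically. A hypothetical pyvmomi sketch (host, credentials, and VM name are made up):

          ```python
          # Hypothetical pyvmomi sketch: raise a VM's BIOS boot delay so
          # there's time to catch F8 on the console, then set it back.
          import ssl
          from pyVim.connect import SmartConnect, Disconnect
          from pyVmomi import vim

          ctx = ssl._create_unverified_context()  # lab convenience only
          si = SmartConnect(host="vcenter.example.com", user="administrator",
                            pwd="secret", sslContext=ctx)

          def set_boot_delay(vm, ms):
              # bootDelay is milliseconds spent waiting at the BIOS screen
              spec = vim.vm.ConfigSpec(bootOptions=vim.vm.BootOptions(bootDelay=ms))
              vm.ReconfigVM_Task(spec)

          vm = si.content.searchIndex.FindByDnsName(dnsName="server01", vmSearch=True)
          set_boot_delay(vm, 10000)  # 10 seconds to hit F8
          # ...remediate in safe mode, then: set_boot_delay(vm, 0)
          Disconnect(si)
          ```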

        • Avatar_of_Self@lemmy.world
          4 months ago

          What I usually do is set next boot to BIOS so I have time to get into the console and do whatever.

          Also, instead of using a browser, I prefer to connect VMware Workstation to vCenter so all the consoles insta-open in their own tabs in the workspace.

      • ArrogantAnalyst@infosec.pub
        4 months ago

        Since it has to happen in Windows safe mode, it seems very hard to automate the process. I haven’t seen a solution yet.

      • Swarfega@lemm.ee
        4 months ago

        Sadly not. Windows doesn’t boot. You can boot it into safe mode with networking, at which point maybe we could log in with Ansible to delete the file, but since it’s still manual work to get Windows into safe mode, there’s not much point.

        • lengau@midwest.social
          4 months ago

          It is theoretically automatable, but on bare metal it requires having hardware that’s not normally just sitting in every data centre, so it would still require someone to go and plug something into each machine.

          On VMs it’s more feasible, but on those VMs most people are probably just mounting the disk images and deleting the bad file to begin with.
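
          For the mount-and-delete route, something like libguestfs will do it without booting the guest at all. A hypothetical sketch using its Python bindings (disk path is made up):

          ```python
          # Hypothetical offline remediation: open the powered-off VM's disk,
          # mount the Windows filesystem, and delete the bad channel file.
          import guestfs

          g = guestfs.GuestFS(python_return_dict=True)
          g.add_drive_opts("/var/lib/libvirt/images/winserver.qcow2", readonly=0)
          g.launch()

          root = g.inspect_os()[0]                  # locate the Windows install
          mounts = g.inspect_get_mountpoints(root)  # e.g. {"/": "/dev/sda2"}
          g.mount(mounts["/"], "/")

          for f in g.glob_expand("/Windows/System32/drivers/CrowdStrike/C-00000291*.sys"):
              g.rm(f)

          g.shutdown()
          g.close()
          ```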

          • Swarfega@lemm.ee
            4 months ago

            I guess it depends on numbers too. We had 200 to work on. If you’re talking hundreds more, then looking at automation would be a better solution. In our scenario it was just easier to throw engineers at it. I honestly thought at first this was my weekend gone, but we got through them easily in the end.

          • Natanael@slrpnk.net
            4 months ago

            The real problem with VM setups is that the host system might have crashed too

  • Angry_Autist (he/him)@lemmy.world
    4 months ago

    Is there an easy way to silence every fuckdamn sanctimonious linux cultist from my lemmy experience?

    Secondly, this update fucked linux just as bad as windows, but keep huffing your own farts. You seem to like it.

    • Morphit @feddit.uk
      4 months ago

      I’d unsubscribe from [email protected] for a start.

      I’m pretty sure this update didn’t get pushed to linux endpoints, but sure, linux machines running the CrowdStrike driver are probably vulnerable to panicking on malformed config files. There are a lot of weirdos claiming this is a uniquely Windows issue.

      • Angry_Autist (he/him)@lemmy.world
        4 months ago

        Thanks for the tip, so glad Lemmy makes it easy to block communities.

        Also: it seems everyone is claiming it didn’t affect Linux, but as part of our corporate cleanup yesterday I had 8 Linux boxes I needed to drive to the office to throw a head on and reset their iDRAC. So sure, maybe they all just happened to fail at the same time, but in my 2 years on this site we’ve never had more than 1 down at a time, ever, and never for the same reason. I’m not the tech head of the site by any means and it certainly could be unrelated, but people with significantly greater experience than me in my org chalked this up to Crowdstrike.

  • suoko@feddit.it
    4 months ago

    A couple of days ago a Windows 2016 server started a license strike in my farm … Coincidence?

  • Reddfugee42@lemmy.world
    4 months ago

    Most people are completely oblivious because it only affects people using crowdstrike, which practically excludes general consumers.

    • 0ops@lemm.ee
      4 months ago

      I just had an Amazon package delayed for a week, it says. It doesn’t name names, but…

      A small number of deliveries may arrive a day later than anticipated due to a third-party technology outage.