The U.S. government’s road safety agency is again investigating Tesla’s “Full Self-Driving” system, this time after getting reports of crashes in low-visibility conditions, including one that killed a pedestrian.

The National Highway Traffic Safety Administration says in documents that it opened the probe on Thursday after the company reported four crashes in which Teslas entered areas of low visibility, including sun glare, fog and airborne dust.

In addition to the pedestrian’s death, another crash involved an injury, the agency said.

Investigators will look into the ability of “Full Self-Driving” to “detect and respond appropriately to reduced roadway visibility conditions, and if so, the contributing circumstances for these crashes.”

  • jordanlund@lemmy.world

    Maybe have a safety feature that refuses to engage self drive if it’s too foggy/rainy/snowy.

    • bcgm3@lemmy.world

      Inb4 someone on TikTok shows how to bypass that sensor by jamming an orange in it -__-

    • NotMyOldRedditName@lemmy.world

      Preventing the system from engaging in bad conditions is a lot easier than deciding what to do when those conditions develop suddenly.

      If it’s suddenly foggy, it needs to be able to handle the situation well.

      Cameras and lidar don’t work well in fog. Radar does, but it isn’t a primary sensor and can’t be driven on safely alone in any circumstance.

      So now you need to slow down (which humans will do), but also, since the sensors are failing or insufficient, safely get out of the way of what might be other vehicles coming up behind you, or slowed/stopped vehicles ahead of you.

      You could restrict the hours the system can be engaged, which would reduce the likelihood of certain events (e.g. morning fog, or head-on sun at sunrise/sunset), but there’s still unpredictability.
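A minimal sketch of the kind of fallback logic this comment describes; all thresholds, state names, and the visibility estimate here are hypothetical, not from any real system:

```python
# Hypothetical sketch of a degraded-visibility fallback policy.
# All thresholds and action names are illustrative.

def fallback_action(visibility_m: float, engaged: bool) -> str:
    """Pick a response given estimated visibility in meters."""
    if not engaged:
        # The easy case: refuse to engage below a floor.
        return "engage" if visibility_m >= 150 else "refuse_engage"
    # The hard case: conditions degraded while already driving.
    if visibility_m >= 150:
        return "normal"
    if visibility_m >= 60:
        return "slow_down"          # reduce speed, widen following gap
    if visibility_m >= 20:
        return "hand_over"          # alert the driver, keep slowing
    return "minimal_risk_stop"      # hazards on, pull over safely
```

For example, sudden fog dropping estimated visibility from 200 m to 40 m would move this sketch from "normal" to "hand_over" rather than silently carrying on at speed.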

    • DerArzt@lemmy.world

      Tesla Musk: Why would we need lidar? Just use visual cameras

      FTFY

  • elgordino@fedia.io

    If anyone was somehow still thinking RoboTaxi is ever going to be a thing: no, it’s not, because of reasons like this.

    • testfactor@lemmy.world

      It doesn’t have to not hit pedestrians. It just has to hit fewer pedestrians than the average human driver.

      • elgordino@fedia.io

        It needs to be way way better than ‘better than average’ if it’s ever going to be accepted by regulators and the public. Without better sensors I don’t believe it will ever make it. Waymo had the right idea here if you ask me.

        • sugar_in_your_tea@sh.itjust.works

          But why is that the standard? Shouldn’t “equivalent to average” be the standard? Because if self-driving cars can be at least as safe as a human, they can be improved to be much safer, whereas humans won’t improve.

          • medgremlin@midwest.social

            I’d accept that if the makers of the self-driving cars can be tried for vehicular manslaughter the same way a human would be. Humans carry civil and criminal liability, and at the moment, the companies that produce these things only have nominal civil liability. If Musk can go to prison for his self-driving cars killing people the same way a regular driver would, I’d be willing to lower the standard.

            • sugar_in_your_tea@sh.itjust.works

              Sure, but humans are only criminally liable if they fail the “reasonable person” standard (i.e. a “reasonable person” would have swerved out of the way, but you were distracted, therefore criminal negligence). So the court would need to prove that the makers of the self-driving system failed the “reasonable person” standard (i.e. a “reasonable person” would have done more testing in more scenarios before selling this product).

              So yeah, I agree that we should make certain positions within companies criminally liable for criminal actions, including negligence.

              • medgremlin@midwest.social

                I think the threshold for proving the “reasonable person” standard for companies should be extremely low. They are a complex organization that is supposed to have internal checks and reviews, so it should be very difficult for them to squirm out of liability. The C-suite should be first on the list for criminal liability so that they have a vested interest in ensuring that their products are actually safe.

                • sugar_in_your_tea@sh.itjust.works

                  Sure, the “reasonable person” would be a competitor who generally follows standard operating procedures. If they’re lagging behind the industry in safety or something, that’s evidence of criminal negligence.

                  And yes, the C-suite should absolutely be the first to look at, but the problem could very well come from someone in the middle trying to make their department look better than it is and lying to the C-suites. C-suites have a fiduciary responsibility to the shareholders, whereas their reports don’t, so they can have very different motivations.

      • ContrarianTrail@lemm.ee

        Exactly. The current rate is 80 deaths per day in the US alone. Even if we had self-driving cars proven to be 10 times safer than human drivers, we’d still see 8 news articles a day about people dying because of them. Taking this as ‘proof’ that they’re not safe is setting an impossible standard and effectively advocating for 30,000 yearly deaths, as if it’s somehow better to be killed by a human than by a robot.
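The arithmetic behind those figures checks out:

```python
# Sanity-checking the numbers in the comment above.
deaths_per_day = 80
print(deaths_per_day * 365)   # 29200, roughly the "30,000 yearly deaths" cited
print(deaths_per_day // 10)   # 8 daily deaths (and articles) even at 10x safer
```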

        • Dr. Moose@lemmy.world

          But they aren’t and likely never will be.

          And how are we to correct for a lack of safety then? With human drivers you obviously discourage dangerous driving through punishment. Who do you punish in a self-driving car?

        • III@lemmy.world

          The problem with this way of thinking is that there are solutions to eliminate accidents even without eliminating self-driving cars. By dismissing the concern you are saying nothing more than it isn’t worth exploring the kinds of improvements that will save lives.

          • Billiam@lemmy.world

            If you get killed by a robot, you can at least die knowing your death was the logical option and not a result of drunk driving, road rage, poor vehicle maintenance, panic, or any other of the dozens of ways humans are bad at decision-making.

        • ano_ba_to@sopuli.xyz

          “10 times safer than human drivers”, (except during specific visually difficult conditions which we knowingly can prevent but won’t because it’s 10 times safer than human drivers). In software, if we have replicable conditions that cause the program to fail, we fix those, even though the bug probably won’t kill anyone.

      • snooggums@lemmy.world

        That is the minimal outcome for an automated safety feature to be an improvement over human drivers.

        But if everyone else is using something you refused to adopt that would likely have avoided someone’s death, while misnaming your feature to mislead customers, then you are in legal trouble.

        When it comes to automation you need to be far better than humans because there will be a higher level of scrutiny. Kind of like how planes are massively safer than driving on average, but any incident where someone could have died gets a massive amount of attention.

      • dmention7@lemm.ee

        It’s a bit reductive to put it in terms of a binary choice between an average human driver and a full AI driver. I’d argue it has to hit fewer pedestrians than a human driver with the full suite of driver assists currently available to be viable.

        Self-driving is purely a convenience factor for personal vehicles and purely an economic factor for taxis and other commercial use. If a human driver assisted by all of the sensing and AI tools available is the safest option, that should be the de facto standard.

  • fluxion@lemmy.world

    National Highway Traffic Safety Administration is now definitely on Musk’s list of departments to cut if Trump makes him a high-ranking swamp monster

    • lurker8008@lemmy.world

      Why do you think Musk is dumping so much cash into boosting Trump? The plan all along is to get kickbacks like stopping the investigations, lawsuits, and regulations against him. Plus subsidies.

      Rich assholes don’t spend money without expectation of ROI

      He knows Democrats will crack down on shady practices so Trump is his best bet.

      • vxx@lemmy.world

        He’s not hoping for a kickback, he is offered a position as secretary of cost-cutting.

        He will be able to directly shut down everything he doesn’t like under the pretense of saving money.

        Trump is literally campaigning on the fact that government positions are up for sale under his admin.

        “I’m going to have Elon Musk — he is dying to do this… We’ll have a new position: secretary of cost-cutting, OK? Elon wants to do that,” the former president said.

    • skyspydude1@lemmy.world

      This is legitimately one of the real reasons Musk is pushing for Trump so hard. NHTSA (and all the other regulatory agencies) were effectively gutted completely by the Trump admin and it’s basically the entire reason Elon could grift his way to where he is today. The moment Biden got into office, basically every single agency in existence began investigating him and pushing blocks out of the proverbial Jenga tower of the various Musk companies. He’s praying that Trump will get elected and allow him to keep grifting, because otherwise he’s almost definitely going to jail, or at a minimum losing the vast majority of his empire.

    • WalnutLum@lemmy.ml

      Alongside the EPA for constantly getting in the way of the FAA trying to slip his SpaceX flight licenses through with a wink and a nudge instead of properly following regulations, and the FAA for trying to keep a semblance of legality through the whole process.

  • raspberriesareyummy@lemmy.world

    Charge the stupid fuck Tesla chain of decision making with murder. This bullshit “self driving” advertising is premeditated, that’s no longer manslaughter.

    And charge the driver(s) with manslaughter under aggravating circumstances.

    But oh no, muh profts, hurr-durrr…

    • Echo Dot@feddit.uk

      Wouldn’t it be death by negligence rather than premeditated murder? After all, I don’t think anyone at Tesla actually wanted this particular person to die; they just didn’t really care to take any action to prevent it.

      • raspberriesareyummy@lemmy.world

        “I aimed my rifle at that person’s head and pulled the trigger, but I swear I didn’t want them to die”

        Tesla should be broken up and reassembled with zero overlap in management.

        And yes, legally it won’t stick, but the shitty south african oligarch should absolutely be tried for murder.

  • billwashere@lemmy.world

    Makes you wonder if removing the lidar and using fucking cameras isn’t part of the problem… cheap bastards.

      • Echo Dot@feddit.uk

        Well, he said all sorts to try and justify it, but really it was a cost-cutting exercise. Of course it was a cost-cutting exercise; why else would they do it?

        Anyway, that explanation doesn’t make sense: if using lidar is a crutch, then surely it’s a good solution, right? It’s a bit like saying you shouldn’t use wings on your aircraft because that’s a crutch; you should be using the antigravity tech that we don’t have yet.

        In the long run there probably are going to be better solutions (that’s how civilizations advance), but those better solutions don’t exist yet, so… maybe we should use what we have.

  • NeoNachtwaechter@lemmy.world

    Investigators will look into the ability of “Full Self-Driving” to “detect and respond appropriately to reduced roadway visibility conditions”

    They will have to look long and hard…

  • Konala Koala@lemmy.world

    Every time I hear about a pedestrian being killed by something self-driving, it irks me as to why we are pushing for such technology.

    • PeroBasta@lemmy.world

      Because it is generally proven to save lives. You’ll never hear “thanks to the auto-brake system no one got injured and everything was boring as usual,” but it has happened a lot (including to me, first-hand).

      I don’t like Musk, but in general it’s a good thing to push self-driving cars IMO. I drive 2 hours per day and the number of times I see people doing reckless stuff at the wheel is crazy.

      • Konala Koala@lemmy.world

        No, it is not generally proven to save lives; you are listening to lies somewhere. It’s not a good thing to push self-driving cars, and Musk is the one being reckless. Plus he supports Trump and not Harris.

        • PeroBasta@lemmy.world

          The technology behind it is proven to save lives. The reaction time of the automatic full braking in a near-crash I had the “luck” of experiencing in a Volkswagen was outstanding.

          Same thing for the lane assist function if you are sleepy

          • Konala Koala@lemmy.world

            Same thing for the lane assist function if you are sleepy

            If you are sleepy behind the wheel, you need to pull over, get off the road, and take a rest.

            • PeroBasta@lemmy.world

              Thanks, mom. I brought up cases to prove my point; I’m not saying you should go on a road trip while sleepy.

      • ano_ba_to@sopuli.xyz

        Air travel is generally safer than driving too, but every accident is studied thoroughly. Self-driving is fine, but anyone trying to implement it should be held to a high standard. Boeing slacked off and they’re facing some backlash.

      • DillyDaily@lemmy.world

        This is the thing. Musk, and everything his company does in terms of labour and marketing, their whole ethos, is unethical as fuck, and I can’t stand that as a society we are celebrating Tesla.

        But self-driving cars are not inherently bad or dangerous to pursue as a technological advancement.

        Self-driving cars will kill people; they will hit pedestrians and crash into things.

        So do cars driven by humans.

        Human driven cars kill a lot of people.

        Self-driving cars need to be safer than human-driven cars to even consider letting them on the road, but we can’t truly expect a 0% accident rate from self-driving cars in the early days of the technology when we don’t expect that of human-driven cars.

    • Psythik@lemmy.world

      Because self-driving cars are safer than human drivers, when implemented properly. A proper one is absolutely loaded with sensors, radar, laser, sonar; not just some cameras like Tesla’s system.

      If you ever get the chance to, hop in a Waymo and you’ll become a believer too (currently available only in Cali and AZ). These little robotaxis see everything at all times, not just what’s in front of them like humans. I trust them more than I’d trust any human driver. They can avoid accidents that you and I would never see coming. Witnessed this first-hand.

        • Psythik@lemmy.world

          Again, ride in one yourself when you get the chance and I promise you you’ll change your mind immediately.

          • Konala Koala@lemmy.world

            Again, there’s not only no valid proof they are safe, but they are being used to put people out of work, like taxi and Uber drivers.

            • Psythik@lemmy.world

              It’s for the better. They will find other jobs. You sound like the people crying about coal mines being closed down.

    • Cethin@lemmy.zip

      The bad news is that people hitting and killing pedestrians is so common you don’t hear about it. Fuck Musk and all that, but some number of people are always going to get killed. Even an FSD system as close to perfect as possible would still occasionally kill someone at large enough scale, because there are too many variables to account for. If the numbers are lower than with a human driving, it’s a positive.

      We should be trying to move away from cars though ideally. Fuck electric cars, FSD cars, and all other cars. A bus, train, bike, or whatever else would be safer and better for the environment.

      • beanlink@lemmy.world

        Let’s install adaptive headlights to stop blinding people, stop allowing manufacturers to put chrome accents on the rear of vehicles (again, to stop blinding people), or even just make a smaller truck built for hauling actual building materials instead of lifting egos.

        NHTSA:

      • fne8w2ah@lemmy.world

        Public transport is the way to go, just need to break the cycle of six decades of automobile addiction.

      • Fuzzy_Red_Panda@lemm.ee

        Fuck Musk and all that, but some number of people are always going to get killed.

        That’s easy to say, but do you want to be one of the people who gets killed? I don’t.

        • Cethin@lemmy.zip

          Yeah, it’s that easy to say and yeah, I don’t want to be one of the people killed, driverless or not. Cars are fucking deadly. 20 pedestrians die every day to cars. If we can get that number down but have them die to FSD vehicles, that’s better. I don’t care who or what is driving.

          I’d rather not have cars everywhere, but if we do I want them to be as safe as possible (for everyone, not just the driver). If that includes FSD we should do it, even if the number of pedestrian deaths doesn’t hit zero (it never will) because the alternative is well above zero.

          • Blackmist@feddit.uk

            If I get killed by a driver on their phone, I want them to go to prison.

            If I’m killed by a Waymo or whatever, who goes to jail then?

            If we really must have self driving cars, limit their speed to 20mph in built up areas.

            • Cethin@lemmy.zip

              If you’re dead, why the fuck do you care who goes to jail? Shouldn’t we care more about people dying than revenge after?

  • demizerone@lemmy.world

    I purchased FSD when it was $8k. What a crock of shit. When I sold the car, it was the only thing that gave the car any value after 110k miles, and even then it was worth $1,500 at most.

  • breadsmasher@lemmy.world

    Eyes can’t see in low visibility.

    musk “we drive with our eyes, cameras are eyes. we dont need LiDAR”

    FSD kills someone because of low visibility just like with eyes

    musk reaction -

    • RandomStickman@fedia.io

      You’d think “we drive with our eyes, cameras are eyes” is an argument against only using cameras, but what do I know.

    • aramis87@fedia.io

      What pisses me off about this is that, in conditions of low visibility, the pedestrian can’t even hear the damned thing coming.

      • SmoothLiquidation@lemmy.world

        I hear electric cars all the time; they are not much quieter than an ICE car. We don’t need to strap lawn mowers to our cars in the name of safety.

        • 1984@lemmy.today

          I think they are a lot quieter. I’ve turned around and seen a car 5 meters away from me and been surprised. That never happens with fuel cars.

          I think if you are young, maybe there isn’t a big difference since you have perfect hearing. But middle-aged people lose quite a bit of that, unfortunately.

          • idunnololz@lemmy.world

            I’m relatively young and it can still be difficult to hear them especially the ones without a fake engine sound. Add some city noise and they can be completely inaudible.

            • spacesatan@lazysoci.al

              ‘city noise’ you mean ICE car noise. We should be trying to reduce noise pollution not compete with it.

              • idunnololz@lemmy.world

                It’s not safe for cars to be totally silent when moving, imo, since I’d imagine it makes pedestrians more likely to get run over.

      • III@lemmy.world

        Correction - older Teslas had radar, and Musk demanded it be removed because it cut into his profits. Not a huge difference but it does show how much of a shitbag he is.

      • normanwall@lemmy.world

        Honestly though, I’m a fucking idiot and even I can tell that Lidar might be needed for proper, safe FSD

    • flames5123@lemmy.world

      The cars used to have RADAR. But they got rid of that and even disabled it on older models when updating because they “only need cameras.”

      Cameras and RADAR would have been good enough for most all conditions…

    • Lucidlethargy@sh.itjust.works

      He really is a fucking idiot. But so few people can actually call him out… So he just never gets put in his place.

      Imagine your life with unlimited redos. That’s how he lives.

    • conciselyverbose@sh.itjust.works

      It’s worse than that, though. Our eyes are significantly better than cameras (with some exceptions at the high end) at adapting to varied lighting conditions, especially rapid changes.

      • Buddahriffic@lemmy.world

        Not only that, when we have trouble seeing things, we can adjust our speed to compensate (though to be fair, not all human drivers do, but I don’t think FSD should be modelled after the worst of human drivers). Does Tesla’s FSD go into a “drive slower” mode when it gets less certain about what it sees? Or do its algorithms always treat its best guess with high confidence?

      • jerkface@lemmy.ca

        Hard to credit without a source, modern cameras have way more dynamic range than the human eye.

        • magiccupcake@lemmy.world

          Not in one exposure. Human eyes are much better with dealing with extremely high contrasts.

          Cameras can be much more sensitive, but at the cost of overexposing brighter regions in an image.
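A toy numerical sketch of the single-exposure problem described above; the luminance values and the simple 8-bit clipping model are illustrative, not measurements of any real sensor:

```python
# Toy illustration: one 8-bit exposure can't hold a high-contrast
# scene, whichever exposure you pick. Luminance values are made up.
scene = [0.05, 1.0, 40.0, 5000.0]   # relative luminance: deep shadow .. sun glare

def capture(luminances, exposure):
    """Quantize to 8 bits at a single exposure; values clip at 255."""
    return [min(255, round(l * exposure)) for l in luminances]

bright = capture(scene, exposure=100)   # shadows resolved, glare clipped
dark = capture(scene, exposure=0.05)    # glare resolved, shadows crushed
print(bright)   # [5, 100, 255, 255]
print(dark)     # [0, 0, 2, 250]
```

Either exposure loses one end of the scene: the bright exposure clips both highlights to the same 255, and the dark exposure crushes the shadows to 0.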

          • conciselyverbose@sh.itjust.works

            They’re also pretty noisy in low light and generally need long exposures (a problem for a camera at high speed) to get sufficient input to see anything in the dark, especially if you aren’t spending thousands of dollars on massive sensors per camera.

            • jerkface@lemmy.ca

              I dunno what cameras you are using, but a standard full-frame sensor and an f/4 lens see way better in low light than the human eye. If I take a raw image off my camera, there is so much more dynamic range than I can see or a monitor can even represent: you can double the brightness at least four times (i.e. 16x brighter) and parts of the image that looked pure black to the eye become perfectly usable. There is so, so much more dynamic range than the human eye.

              • conciselyverbose@sh.itjust.works

                Do you know what the depth of field at f/4 looks like? It’s not anywhere in the neighborhood of suitable for a car, and it still takes a meaningful exposure length in low light conditions to get a picture at all, which is not suitable for driving at 30mph, let alone actually driving fast.

                That full frame sensor is also on a camera that’s several thousand dollars.

  • tekato@lemmy.world

    This is why you can’t have an AI make decisions on activities that could kill someone. AI models can’t say “I don’t know”, every input is forced to be classified as something they’ve seen before, effectively hallucinating when the input is unknown.
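A toy sketch of the point above: a plain softmax classifier distributes 100% of its belief over the classes it was trained on, so it has no structural way to say "I don't know" (the class names here are illustrative):

```python
import math

def softmax(logits):
    """Exponentiate and normalize: a distribution over known classes."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Logits for the only classes this toy model knows:
# pedestrian, cyclist, car. Even for a totally unfamiliar input,
# the output is a distribution over these three: the model is
# structurally forced to pick something.
probs = softmax([0.1, 0.0, -0.2])
print(sum(probs))   # sums to 1 (up to float rounding): no mass left for "none of the above"
```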

    • pycorax@lemmy.world

      I’m not very well versed in this but isn’t there a confidence value that some of these models are able to output?

      • FatCrab@lemmy.one

        All probabilistic models output a confidence value, and it’s very common and basic practice to gate downstream processes around that value. This person just doesn’t know what they’re talking about. Though, that puts them on about the same footing as Elono when it comes to AI/ML.
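A minimal sketch of the gating practice described above; the threshold value and labels are illustrative, not from any real stack:

```python
# Minimal sketch of gating a downstream action on model confidence.
# The floor value and the labels are illustrative.
CONFIDENCE_FLOOR = 0.8

def act_on_detection(label: str, confidence: float) -> str:
    """Hand a detection to planning only if it clears the floor."""
    if confidence >= CONFIDENCE_FLOOR:
        return f"plan_around:{label}"
    return "fallback:slow_and_reassess"

print(act_on_detection("pedestrian", 0.95))    # plan_around:pedestrian
print(act_on_detection("unknown_blob", 0.41))  # fallback:slow_and_reassess
```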

        • tekato@lemmy.world

          Right, which is why that marvelous confidence value got somebody run over.

          • FatCrab@lemmy.one

            Are you under the impression that I think Tesla’s approach to AI and computer vision is anything but fucking dumb? The person said a stupid and patently incorrect thing. I corrected them. Confidence values being literally baked into how most ML architectures work is unrelated to intentionally depriving your system of one of the most robust computer vision signals we can come up with right now.

            • tekato@lemmy.world

              Yes, but confidence values are not magic. These values are calculated based on how similar the current input is to previously observed inputs. If the type of input is unfamiliar to the model, what do you think happens? Usually, there will be a category with a high enough confidence score that it will be chosen as the correct one, while being wrong. Now, assume you somehow manage to not get a favorable confidence score for any decision. What do you think happens in that case? I never encountered this, but there can only be 3 possible paths: 1) Choose a random value. Not good. 2) Do nothing. Not good. 3) Rerun the model with slightly newer data. Maybe that helps, but in the case of driving a car, slightly newer data might be too late.

              • FatCrab@lemmy.one

                There’s plenty you could do if no label was produced with a sufficiently high confidence. These are continuous systems, so the idea of “rerunning” the model isn’t that crazy, but you could pair that with an automatic decrease in speed to generate more frames, stop the whole vehicle (safely of course), divert path, and I’m sure plenty more an actual domain and subject matter expert might come up with, or a whole team of them. But while we’re on the topic, it’s not really right to even label these as confidence values: they’re just output weightings associated with the respective labels. We’ve sort of decided they vaguely match up to something kind of sort of approximate to confidence values, but they aren’t based on a ground truth like I’m understanding your comment to imply; they derive entirely from the trained model weights and their confluence. Don’t really have anywhere to go with that thought beyond the observation itself.
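A sketch of the escalation options described above for a continuous perception loop; the thresholds, speeds, and action names are entirely hypothetical:

```python
# Sketch of escalating responses when no label clears a confidence
# floor in a continuous perception loop. All numbers are made up.

def low_confidence_policy(best_confidence: float, speed_mph: float):
    """Escalate: slow down to buy more frames, then stop if still blind."""
    if best_confidence >= 0.8:
        return ("proceed", speed_mph)
    if best_confidence >= 0.5:
        # Halving speed doubles the frames captured per meter traveled,
        # giving the model more chances to resolve the scene.
        return ("slow_and_rerun", max(10.0, speed_mph * 0.5))
    return ("minimal_risk_stop", 0.0)

print(low_confidence_policy(0.9, 40.0))   # ('proceed', 40.0)
print(low_confidence_policy(0.6, 40.0))   # ('slow_and_rerun', 20.0)
print(low_confidence_policy(0.2, 40.0))   # ('minimal_risk_stop', 0.0)
```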

  • finitebanjo@lemmy.world

    Really fucking stupid that we as a society intentionally choose to fuck around and find out rather than find out before we fuck around.

      • finitebanjo@lemmy.world

        By refusing to vote in competent regulatory bodies, the ones finding out are part of the problem with society’s ills. I don’t want specific people punished with prejudice; I want a rule of law that holds all people accountable as equals and averts harm before it can happen.