
  • Russia and Ukraine are two countries that have thrown everything they had at each other: from trained soldiers, to inmates, to ordinary people who’d probably never held a weapon before.

    At this point I imagine that having troops who are alive and actually trained, not emotionally and physically drained (if not outright mutilated) by years of fighting, is a big advantage.

    If I were taken from my home and suddenly sent to fight for my country, no matter how full of patriotic love I might be, one North Korean child with a knife would be enough to take me out.

  • I’m not sure we, as a society, are ready to trust ML models to do things that might affect lives. This is true for self-driving cars, and I expect it to be even more true for medicine. In particular, we can’t accept ML failures, even once they become statistically less likely than human errors.

    I don’t know if this is currently true or not, so please don’t shoot me for this specific example, but IF we had reliable stats showing that, everything else being equal, self-driving cars cause fewer accidents than humans, a machine error would still feel weird and alien, and harder for us to justify than a human one.

    “He was drinking too much because his partner left him”, “she was suffering from a health condition and had an episode while driving”… We have the illusion that we understand humans, and (to an extent) that this understanding helps us predict who we can trust not to drive us to our deaths or misdiagnose some STI and leave our genitals to wither. But machines? Even if they were 20% more reliable than humans, how would we know which ones to trust?