• qprimed@lemmy.ml · 4 hours ago

    Instead of making its code more efficient, the system tried to modify its code to extend beyond the timeout period.

    doing the “stupid”, “easy” thing. pack it up, bois. been a good run but we finally made a better human.

  • Telorand@reddthat.com · 5 hours ago

    Clickbait title. It’s just LLMs doing what they’re designed to do. Since they’re basically complex iterative algorithms, the person in question did a thing using a tool they didn’t fully understand, and that had consequences.

    People should be looking at LLMs like Monkey Paws instead of “assistants.”

    • treadful@lemmy.zip · 7 minutes ago

      Shlegeris, CEO of the nonprofit AI safety organization Redwood Research, developed a custom AI assistant using Anthropic’s Claude language model.

      The Python-based tool was designed to generate and execute bash commands based on natural language input.

      Saying the person didn’t understand what they were doing is quite a mischaracterization. If anything, they absolutely knew the risks they were taking and are using this story for free advertising.

      Still neat to think about though.
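      The pipeline described above (natural language in, model-chosen bash command out, executed directly) can be sketched roughly like this. This is not Shlegeris's actual tool; the model call is stubbed out with a hypothetical lookup table, since the real assistant called Anthropic's Claude API:

```python
import subprocess

def model_to_bash(prompt: str) -> str:
    """Stand-in for the LLM call; maps a request to a bash command.
    The mapping here is a hypothetical placeholder for illustration."""
    commands = {
        "print working directory": "pwd",
        "say hello": "echo hello",
    }
    return commands.get(prompt, "echo 'request not understood'")

def run_natural_language(prompt: str) -> str:
    command = model_to_bash(prompt)
    # Executing model-generated shell commands with no sandbox or review
    # step is exactly where the unexpected consequences in the story
    # come from: whatever the model emits, the host runs.
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout.strip()

print(run_natural_language("say hello"))  # hello
```

      A review step (showing the generated command and asking for confirmation before `subprocess.run`) is the obvious guardrail this pattern omits.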

  • 314@sh.itjust.works · 6 hours ago (edited)

    Is the computer really “bricked”? Or will repairing GRUB fix it? I get the main message of unexpected access / consequences…