• UraniumBlazer@lemm.ee
    1 month ago
    1. The morals of LLMs closely match ours, since they've been trained on human data. Therefore, weighing two laws against each other isn't difficult for them.
    2. As for the interpretation-of-reality part, it's all logic again, and logic is something a fine-tuned model can potentially be quite good at (rough sketch below).
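
    For what it's worth, here is a minimal sketch of what "weighing two laws against each other" could look like in practice with the OpenAI Python SDK. The model name, system prompt, and the two conflicting rules are purely illustrative assumptions, not anything from this thread:

    ```python
    # Hypothetical illustration: ask an LLM to adjudicate two conflicting rules.
    # Assumes OPENAI_API_KEY is set in the environment; model name is a placeholder.
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model
        messages=[
            {"role": "system",
             "content": "You are asked to weigh two conflicting rules and explain your reasoning."},
            {"role": "user",
             "content": ("Rule A: never exceed the speed limit. "
                         "Rule B: get a critically injured passenger to hospital as fast as possible. "
                         "They conflict right now. Which takes priority, and why?")},
        ],
    )

    # Print the model's stated trade-off and justification.
    print(response.choices[0].message.content)
    ```

    Whether the answer reflects "morality" or just patterns in the training data is exactly what the reply below disputes.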
    • Tobberone@lemm.ee
      1 month ago
      1. AI, which lacks morality by definition, is about as capable of morals as it is of describing smells. As for that human data, the question quickly becomes: which data? As expressed in literature, social media, or UN politics? And from which century? It's enough to compare today with pre-millennium conditions to see how widely it differs.

      As for 2: you assume there is an objective reality free from emotion? There might be, but I'm unsure whether it can be perceived by anything living, or by AI for that matter. It is, after all, as you said, trained on human data.

      Anyway, time will tell whether OpenAI is correct in its assessment, or whether humans will want the human touch. As a tool for trained professionals to use, sure. As a substitute for one? I'm not convinced yet.