• 4am@lemm.ee
    1 year ago

    Algebra and related symbolic mathematics are a language just like English. You can train a model to solve for variables, apply the various mathematical properties of equations, perform transformations, and so on, and it would be able to “speak” math the way other LLMs speak English.
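    A minimal sketch (mine, not from the thread) of the idea that algebraic solving can be treated as mechanical rewrite rules over symbol strings, which is the sense in which a model could learn it as a “grammar.” The `solve_linear` function and the `a*x + b = c` input format are illustrative assumptions:

```python
import re

def solve_linear(equation: str) -> float:
    """Solve an equation of the form 'a*x + b = c' by applying
    two rewrite rules over the symbol string: move b to the
    right-hand side, then divide by a. This is the kind of
    pattern-to-pattern transformation a model could learn."""
    m = re.fullmatch(
        r"\s*(-?\d+)\s*\*\s*x\s*([+-])\s*(\d+)\s*=\s*(-?\d+)\s*",
        equation,
    )
    if m is None:
        raise ValueError("expected the form 'a*x + b = c'")
    a, sign, b, c = int(m[1]), m[2], int(m[3]), int(m[4])
    b = b if sign == "+" else -b
    # "Subtract b from both sides, then divide by a"
    return (c - b) / a

print(solve_linear("3*x + 4 = 19"))  # 5.0
```

    The point of the sketch is that the transformation is purely syntactic: nothing in it “knows” what a quantity is, which mirrors how an LLM could emit correct algebra without handling meaning.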

    It’s no more self-aware than earlier generative AIs.

    • Diabolo96@lemmy.dbzer0.com
      1 year ago

      It’s not about whether it’s self-aware or not. It’s about when the illusion of consciousness becomes good enough that you constantly question whether there’s really a consciousness inside or not.

    • Lvxferre@lemmy.ml
      1 year ago

      Important detail: a language (like English, or Libras, or written Chinese) is a system that conveys general meaning, among other things. Without that, we can’t really claim that something is a language.

      This has the following consequences:

      • It’s at least possible that a hypothetical Q* model did reach an intelligence breakthrough, by handling abstract units of meaning (concepts) instead of “raw” tokens. However frankly, this whole story is looking more and more like OpenAI employees undergoing mass hysteria than like anything real.
      • There is some overlap between language and maths when it comes to logic. However, maths is not really a language. It’s like saying that dogs are cats because both have fur, you know?
      • LLMs don’t really speak. They’re great at replicating grammatical patterns, but, as their hallucinations show (the ones people often cherry-pick out), they don’t handle meaning.

      For reference on the third point, give this a check. I have further examples highlighting that LLMs don’t understand what they’re uttering, if you want.