- cross-posted to:
- [email protected]
There is a discussion on Hacker News, but feel free to comment here as well.
This is the best summary I could come up with:
OpenAI was reportedly working, before Sam Altman’s sacking, on an advanced system so powerful that it caused safety concerns among staff at the company.
The artificial intelligence model triggered such alarm with some OpenAI researchers that they wrote to the board of directors before Altman’s dismissal warning it could threaten humanity, Reuters reported.
The model, called Q* – and pronounced as “Q-Star” – was able to solve basic maths problems it had not seen before, according to the tech news site the Information, which added that the pace of development behind the system had alarmed some safety researchers.
The reports followed days of turmoil at San Francisco-based OpenAI, whose board sacked Altman last Friday but then reinstated him on Tuesday night after nearly all the company’s 750 staff threatened to resign if he was not brought back.
As part of the agreement in principle for Altman’s return, OpenAI will have a new board chaired by Bret Taylor, a former co-chief executive of software company Salesforce.
However, his brief successor as interim chief executive, Emmett Shear, wrote this week that the board “did not remove Sam over any specific disagreement on safety”.
The original article contains 504 words, the summary contains 192 words. Saved 62%. I’m a bot and I’m open source!
Algebra and related symbolic mathematics are a language just like English: you can train a model to solve for variables, apply the different mathematical properties of equations, do transformations, etc., and it would be able to “speak” math the way other LLMs speak English.
It’s not any more self-aware than earlier generative AIs.
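To make that framing concrete, here is a minimal sketch (an illustration and an assumption on my part, not anything OpenAI has described) of the kind of plain-text (equation, solution) pairs you could generate with sympy and fine-tune a language model on, so that it learns to “speak” algebra as ordinary token sequences:

```python
# Sketch only: generate (prompt, answer) text pairs for simple linear equations.
# An LLM fine-tuned on data like this sees algebra as just more tokens to predict.
import random
import sympy as sp

x = sp.symbols("x")

def make_example() -> tuple[str, str]:
    # Build a random linear equation a*x + b = c and solve it symbolically.
    a = random.randint(1, 9)
    b = random.randint(-9, 9)
    c = random.randint(-9, 9)
    lhs = a * x + b
    solution = sp.solve(sp.Eq(lhs, c), x)[0]
    # Serialize both sides as plain text, exactly how a language model would see them.
    prompt = f"Solve for x: {sp.sstr(lhs)} = {c}"
    answer = f"x = {sp.sstr(solution)}"
    return prompt, answer

if __name__ == "__main__":
    for _ in range(3):
        prompt, answer = make_example()
        print(prompt, "->", answer)
```

Nothing in that setup requires a separate math engine at inference time; the equations are just more text for the model to continue.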
It’s not about whether it’s self-aware or not. It’s about when it will have an illusion of consciousness convincing enough that you constantly question whether there’s really a consciousness inside or not.
Important detail: a language (like English, or Libras, or written Chinese) is a system that conveys general meaning, among other things. Without that, we can’t really claim that something is a language.
This has the following consequences:
- It’s at least possible that a hypothetical Q* model did reach an intelligence breakthrough, by handling abstract units of meaning (concepts) instead of “raw” tokens. Frankly, though, this whole story looks more and more like OpenAI employees undergoing mass hysteria than like anything real.
- There is some overlap between language and maths when it comes to logic. However, maths is not really a language. It’s like saying that dogs are cats because dogs have fur, you know?
- LLMs don’t really speak. They’re great at replicating grammatical patterns but, as shown by their hallucinations (which people often cherry-pick out), they don’t handle meaning.
For reference on the third point, give this a check; the toy sketch below illustrates the same idea. I have further examples showing that LLMs don’t understand what they’re uttering, if you want.
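Here is that toy sketch (deliberately simplified, and not a claim about how any production LLM works internally): a word-level bigram model reproduces the surface patterns of its training text while having no representation of meaning at all, which is exactly the pattern-replication-without-understanding distinction drawn in the third point.

```python
# Toy bigram "language model": it learns which word tends to follow which,
# and nothing else. There is no notion of meaning anywhere in this code.
import random
from collections import defaultdict

corpus = (
    "the model solves the equation . the model writes the proof . "
    "the proof solves the model . the equation writes the model ."
).split()

# Count which word follows which word in the training text.
transitions: dict[str, list[str]] = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    # Sample a continuation by repeatedly picking a word that followed the last one.
    words = [start]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g. "the proof solves the equation . the model writes"
```

The output looks grammatical because the statistics of the training text are grammatical, not because anything inside the model knows what a proof or an equation is.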
Got to be honest, this Q* model still sounds like a hoax to me. Is this supposed to be a “trust me”, i.e. “be gullible as a brick”, matter?
Hic Rhodus, hic salta. (“Here is Rhodes, jump here.”)