Not an expert so I might be wrong, but as far as I understand it, those specialised tools you describe are not even AI. It is all machine learning. Maybe to the end user it doesn’t matter, but people have this idea of an intelligent machine when it’s more like brute-force information feeding into a model system.
Don’t say AI when you mean AGI.
By definition AI (artificial intelligence) is any algorithm by which a computer system automatically adapts to and learns from its input. That definition also covers conventional algorithms that aren’t even based on neural nets. Machine learning is a subset of that.
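To make that concrete, here’s a minimal sketch (my own toy example, not any particular library) of a program that adapts to and learns from its input without any neural net: a nearest-centroid classifier that just keeps per-label running averages.

    # Toy learner: adapts to its input by updating per-label centroids.
    # No neural net anywhere, yet it "learns" in the sense defined above.
    from collections import defaultdict

    class NearestCentroid:
        def __init__(self):
            self.sums = defaultdict(lambda: None)   # per-label feature sums
            self.counts = defaultdict(int)          # per-label example counts

        def learn(self, features, label):
            # Fold the new example into that label's running centroid.
            if self.sums[label] is None:
                self.sums[label] = list(features)
            else:
                self.sums[label] = [s + f for s, f in zip(self.sums[label], features)]
            self.counts[label] += 1

        def predict(self, features):
            # Answer with the label whose centroid is closest to the input.
            def dist(label):
                centroid = [s / self.counts[label] for s in self.sums[label]]
                return sum((c - f) ** 2 for c, f in zip(centroid, features))
            return min(self.counts, key=dist)

    clf = NearestCentroid()
    clf.learn([1.0, 1.0], "a")
    clf.learn([5.0, 5.0], "b")
    print(clf.predict([1.2, 0.9]))  # -> "a"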
AGI (artificial general intelligence) is the thing you see in movies, the thing people project onto their LLM responses, and what’s driving this bubble. It is the final goal: a system able to do everything a human can, at least as well as a human. Pretty much all the actual experts agree we’re a long way from such a system.
It may be too late on this front, but don’t say AI when there isn’t any I to it.
Of course it could be successfully argued that humans (or at least a large number of them) are also missing the I, and are just spitting out the words that are expected of them based on the words that have been ingrained in them.
This is not up to you or me: AI is an area of expertise, a scientific field with a precise definition. Large, but well defined.
Intelligence: The ability to acquire, understand, and use knowledge.
A self-driving car is able to observe its surroundings, identify objects and change its behaviour accordingly. Thus a self-driving car is intelligent. What’s driving such a car? AI.
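Purely as an illustration of that observe, identify, react loop (a made-up sketch, nothing like a real driving stack, and every name in it is hypothetical):

    # Hypothetical observe -> identify -> act step, just to illustrate the point.
    from dataclasses import dataclass

    @dataclass
    class DetectedObject:
        kind: str          # e.g. "pedestrian", "car"
        distance_m: float  # distance ahead in metres

    def drive_step(objects):
        """Look at what was detected and change behaviour accordingly."""
        if any(o.kind == "pedestrian" and o.distance_m < 15 for o in objects):
            return {"throttle": 0.0, "brake": 1.0}   # someone ahead: stop
        if any(o.kind == "car" and o.distance_m < 30 for o in objects):
            return {"throttle": 0.2, "brake": 0.0}   # traffic ahead: ease off
        return {"throttle": 0.5, "brake": 0.0}       # clear road: cruise

    print(drive_step([DetectedObject("car", 22.0)]))  # -> eases off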
You’re free to disagree with how other people define words, but then don’t take part in their discussions expecting everyone to agree with your definition.
AI as a field of computer science is mostly about pushing computers to do things they weren’t good at before. Recognizing colored blocks in an image was AI until someone figured out a good way to do it. Playing chess at grandmaster levels was AI until someone figured out how to do it.
Along the way, it created a lot of really important tools: things like optimizing compilers, virtual memory, and runtime environments. The way computers work today was built on a lot of work that came out of the old MIT CSAIL labs. Saying “there’s no I to this AI” is an insult to their work.
“Recognizing colored blocks in an image was AI until someone figured out a good way to do it. Playing chess at grandmaster levels was AI until someone figured out how to do it.”
You make it sound like these systems stopped being AI the moment they actually succeeded at what they were designed to do. When you play chess against a computer, it’s AI you’re playing against.
That’s exactly what I’m getting at. AI is about pushing the boundary. Once the boundary is crossed, it’s not AI anymore.
Those chess engines don’t play like human players. If you were to look at how they actually determine their moves, you might conclude they’re not intelligent at all, by the same metrics you’re using to dismiss ChatGPT. But at this point, they are almost impossible for humans to beat.
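For anyone who hasn’t looked: the classical core of those engines is minimax search with alpha-beta pruning over huge numbers of positions, plus a hand-tuned evaluation function. Here’s a generic textbook sketch of that search, run on a toy hand-written game tree rather than real chess positions (not taken from any actual engine):

    # Generic minimax with alpha-beta pruning over an abstract game tree.
    def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
        moves = children(state)
        if depth == 0 or not moves:
            return evaluate(state)
        if maximizing:
            best = float("-inf")
            for child in moves:
                best = max(best, alphabeta(child, depth - 1, alpha, beta, False, children, evaluate))
                alpha = max(alpha, best)
                if beta <= alpha:
                    break   # prune: the opponent will never allow this line
            return best
        else:
            best = float("inf")
            for child in moves:
                best = min(best, alphabeta(child, depth - 1, alpha, beta, True, children, evaluate))
                beta = min(beta, best)
                if beta <= alpha:
                    break
            return best

    # Toy two-ply game tree standing in for chess positions and moves.
    tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
    scores = {"a1": 3, "a2": 5, "b1": -2, "b2": 9}
    best = alphabeta("root", 2, float("-inf"), float("inf"), True,
                     lambda s: tree.get(s, []), lambda s: scores.get(s, 0))
    print(best)  # -> 3: pick "a", assuming the opponent replies with its best move

That is brute force plus bookkeeping, which is exactly why it doesn’t look like human intelligence up close, and also why it wins.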
I’m not the person you originally replied to. At no point have I dismissed ChatGPT.
I disagree with your logic about the definition of AI. Intelligence is the ability to acquire, understand, and use knowledge. A chess-playing AI can see the board, understand the ramifications of each move, and respond to how the pieces are moved. That makes it intelligent - narrowly so, but intelligent nonetheless. And since it’s artificial too, it fits the definition of AI.