AI tools are helpful and cool as long as you know their limitations. AI doesn’t exist. There is no fidelity in AI. AI is built on biased data sets and will give...
I don’t think they said anything like “it can’t be intelligent because it’s wrong sometimes.” It’s more that the AI doesn’t exist outside of the prompts you feed it. Humans can introspect, reflect on the actions we’ve taken, and question what effect those actions had on the situation. Humans have desires: we can want to be more accurate and truthful, and we can reflect on how we might have failed at that in the past. And we can do this without being prompted by a similar situation. AI can’t. It takes an input, generates an output, wipes its hands, and calls it a day, whether it gave you a correct answer, a wrong answer, or a completely illegible sentence.
The previous guy and I agreed that you could trivially write a wrapper around it that gives it an internal monologue and feedback loop. So that limitation is artificial and easy to overcome, as a number of studies have shown.
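To make that concrete, here’s a minimal sketch of such a wrapper. It assumes a generic `complete(prompt) -> str` call into whatever model you’re using; that function name is a placeholder, not a real library API.

```python
# Minimal reflection-loop wrapper sketch. `complete` is a hypothetical
# stand-in for a call into your model of choice; plug in the real call.

def complete(prompt: str) -> str:
    raise NotImplementedError("plug in your model call here")

def answer_with_reflection(question: str, max_rounds: int = 3) -> str:
    draft = complete(question)
    for _ in range(max_rounds):
        # "Internal monologue": ask the model to critique its own draft.
        critique = complete(
            f"Question: {question}\nDraft answer: {draft}\n"
            "List any errors or gaps in the draft. Reply OK if there are none."
        )
        if critique.strip() == "OK":
            break
        # Feedback loop: revise the draft using the critique.
        draft = complete(
            f"Question: {question}\nDraft: {draft}\nCritique: {critique}\n"
            "Write an improved answer."
        )
    return draft
```

Nothing fancy, but it already gives the model a chance to notice a wrong or illegible output before handing it back, which is the kind of self-checking the comment above says it can’t do.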
And it’s also trivially easy to have the results of its actions go into that feedback loop and influence its weights and models.
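As a rough sketch of what “results going back into the weights” could look like in the simplest case: log each interaction with an outcome score, then periodically fine-tune on the examples that worked. The file name and scoring threshold here are illustrative assumptions, not any particular system’s design.

```python
# Sketch of closing the loop into training data: record outcomes, then
# filter for a later supervised fine-tuning (or RLHF-style) pass.
import json

LOG_PATH = "feedback_log.jsonl"  # hypothetical location

def record_outcome(prompt: str, response: str, reward: float) -> None:
    # Append one (prompt, response, reward) record to the log.
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(
            {"prompt": prompt, "response": response, "reward": reward}
        ) + "\n")

def build_finetune_set(min_reward: float = 0.8) -> list[dict]:
    # Keep only the interactions that went well; this becomes the
    # dataset used to update the model's weights.
    with open(LOG_PATH) as f:
        records = [json.loads(line) for line in f]
    return [r for r in records if r["reward"] >= min_reward]
```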
And is having wants and desires necessary to be an “intelligence”? That’s getting into the philosophy side of the house, but I would argue that’s superfluous.