I’m curious to know what happens if you ask ChatGPT to make you a text adventure based on that prompt.
Not curious enough to try it and play it myself, though.
It works okay for a while, but eventually it loses the plot. The storylines are usually pretty generic and washed out.
My god… they’ve reached PS1-era JRPG level in terms of video game storytelling…
It lacks cohesion the longer it goes on; it’s not so much “hallucinating” as losing the thread, losing the plot. Internal consistency goes out the window, previously made declarations are ignored, and established canon gets trampled on.
But that’s cuz it’s not AI, it’s just LLM all the way down.
Just for my ego, how long does it take to lose the plot?
Depends on complexity and the number of elements to keep track of, and varies between models and people. Try it out for yourself to see! :)
It’s kind of an exponential falloff: for a few lines it can follow concrete mathematical rules, for a few paragraphs it can remember basic story beats, and for a few pages it can just about remember your name.
LLMs are AI, just not AGI.