Most AI models out there are pretty brain dead as far as understanding goes. Cases like this expose the problem because it's abundantly clear the model is getting it wrong. Makes you wonder how much it gets wrong even when it isn't obvious.
Shitty Skynet doesn’t realize it’s teaching us how to hide from it
But where is the pink elephant?
prompting sure is funny <3
i… i like that part about it… i don't like image models but text models feel fun to prompt with
"prompt engineer" 🤮
ChatGPT: “don’t generate a dog, don’t generate a dog, don’t generate a dog”
Generates a dog.
Why wouldn’t you want a dog in your static? Why are you a horrible person?
I asked Mistral to "generate an image with no dog" and it did
The fact that it chose something else to generate instead makes me wonder if this is some sort of free will?
I think all the big image generators support negative prompts by now, so if it interpreted "no dog" as a negative prompt for "dog", the sampler actively steers the image away from anything dog-like while it generates. No free will, just a much more useful system than whatever OP is using.
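For what it's worth, that's literally one parameter if you run it yourself with the Hugging Face diffusers library. A minimal sketch, assuming a Stable Diffusion checkpoint (the model ID and prompt strings here are just examples):

```python
import torch
from diffusers import StableDiffusionPipeline

# Any Stable Diffusion checkpoint works here; this ID is only illustrative.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="random television static, noise, grain",
    negative_prompt="dog, puppy, animal",  # concepts to steer away from
    num_inference_steps=30,
).images[0]
image.save("static_no_dog.png")
```

The negative_prompt pushes the sampler away from the listed concepts at every denoising step, rather than filtering finished images after the fact.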
Mistral likely does “prompt enhancement,” aka feeding your prompt to an LLM first and asking it to expand it with more words.
So internally, a Mistral text LLM is probably writing out “sure! Here’s a long prompt with no dog: …” and then that part is fed to the image generator.
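A rough sketch of that two-stage flow, purely hypothetical (call_text_llm and call_image_model are stand-ins for whatever chat and image APIs are actually in use, not a real Mistral interface):

```python
def call_text_llm(instruction: str) -> str:
    # Stand-in for a real chat-completion call; returns a canned expansion here.
    return "Gray analog television static, heavy film grain, abstract noise, no subjects."

def call_image_model(prompt: str) -> str:
    # Stand-in for a real image-generation call; just echoes what it would receive.
    return f"[image generated from prompt: {prompt}]"

def rewrite_prompt(user_prompt: str) -> str:
    # The text LLM expands the request and can honor exclusions by simply
    # leaving the excluded thing out of the rewritten prompt.
    instruction = (
        "Rewrite this request as a detailed image-generation prompt, "
        "honoring exclusions by omitting them entirely:\n" + user_prompt
    )
    return call_text_llm(instruction)

def generate(user_prompt: str) -> str:
    expanded = rewrite_prompt(user_prompt)  # the rewrite can drop "dog" entirely
    return call_image_model(expanded)       # the image model never sees the word "dog"

print(generate("generate an image with no dog"))
```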
Other “LLMs” are truly multimodal and generate image output, hence they still get the word “dog” in the input.
Hmmm
That’s a land shrimp.
There could be a dog behind any one of those bushes though.
it just did what you wanted, since you asked for an image. free will would be if you asked it not to generate an image but it still did, if it just generated an image without you prompting it to, or if you asked for an image and it just didn’t respond
free will is when it generates an image of a billboard saying “suck my dongle, fleshbag”
fair enough
The only thing I have in common with this piece of shit software is we both can’t stop thinking about silly dogs
This is some Ceci n’est pas une pipe shit
Update:
lmfaao, ai tryna gaslight
Get gaslit idiot
"Want me to try again with even more randomized noise?" literally makes no sense if it had actually generated what you asked for (which the chatbot thinks it did).
Remember, “AI” (autocomplete idiocy) doesn’t know what sense is; it just continues words and displays what may seem to address at least some of the topic with no innate understanding of accuracy or truth.
Never forget that GPT-2 can literally be run in a giant Excel spreadsheet with no other program needed. It's not "smart" and is ultimately millions of formulae at work.
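On the "just formulae" point: the core attention step of a GPT-style layer really is plain spreadsheet-grade arithmetic. A toy numpy sketch with random weights and tiny sizes, nothing like a real model, causal mask omitted for brevity:

```python
import numpy as np

# Toy single-head self-attention: the same arithmetic a spreadsheet can do cell by cell.
seq_len, d_model = 4, 8
x = np.random.randn(seq_len, d_model)                      # token embeddings
Wq, Wk, Wv = (np.random.randn(d_model, d_model) for _ in range(3))

q, k, v = x @ Wq, x @ Wk, x @ Wv                           # three matrix multiplies
scores = q @ k.T / np.sqrt(d_model)                        # scaled dot products
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # row-wise softmax
out = weights @ v                                          # weighted sum of value vectors
print(out.shape)                                           # (4, 8): numbers in, numbers out
```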
Wow. I ABSOLUTELY saw an image of a dog in the middle. Our brain sure is fascinating sometimes.
AI: Hmm, yeah, they said “dog” and “without”. I got the dog so lemme draw a without real quick…
I see no dog in that image fellow human.
I am not sure what your issue is.
Beep boop.
Fellow human, you seem to be beeping like a robot. Might you need to consider visiting the human repair shop for some bench time?
That’s an anti-dog duh
I used to use Google Assistant to spell words I couldn't remember how to spell in my English classes (without looking at my phone), so the students could also hear the spelling out loud in a voice other than mine.
Me: “Hey Google, how do you spell millennium?” GA: “Millennium is spelled M-I-L-L-E-N-N-I-U-M.”
Now I ask Gemini: "Hey Google, how do you spell millennium?" Gemini: "Millennium".
Utterly useless.
I don’t get it, it’s just a picture of some static?