I guess we don’t need it then.
He can fuck all the way off.
globally
Who gives a fuck what Sergey Brin thinks
deleted by creator
AGI is not in reach. We need to stop this incessant parroting from tech companies. LLMs are stochastic parrots. They guess the next word. There’s no thought or reasoning. They don’t understand inputs. They mimic human speech. They’re not presenting anything meaningful.
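To make the "guess the next word" point concrete, here's a toy bigram sketch. Real LLMs use huge transformers over subword tokens, not word counts, but the training objective is the same flavour of next-token prediction. All names and the corpus here are made up for illustration:

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus, then "predict"
# the next word by picking the most frequent follower. No meaning,
# no understanding, just statistics over the training text.
corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    # Returns the statistically most likely next word.
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" (seen twice after "the")
```

Scale the corpus up by a few trillion tokens and swap the counter for a neural network, and that's the basic objective being argued about.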
I feel like I have found a lone voice of sanity in a jungle of brainless fanpeople sucking up the snake oil and pretending LLMs are AI. A simple control loop is closer to AI than a stochastic parrot, as you correctly put it.
pretending LLMs are AI
LLMs are AI. There’s a common misconception about what ‘AI’ actually means. Many people equate AI with the advanced, human-like intelligence depicted in sci-fi - like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, and GERTY. These systems represent a type of AI called AGI (Artificial General Intelligence), designed to perform a wide range of tasks and demonstrate a form of general intelligence similar to humans.
However, AI itself doesn’t imply general intelligence. Even something as simple as a chess-playing robot qualifies as AI. Although it’s a narrow AI, excelling in just one task, it still fits within the AI category. So, AI is a very broad term that covers everything from highly specialized systems to the type of advanced, adaptable intelligence that we often imagine. Think of it like the term ‘plants,’ which includes everything from grass to towering redwoods - each different, but all fitting within the same category.
Here we go… Fanperson explaining the world to the dumb lost sheep. Thank you so much for stepping down from your high horse to try and educate a simple person. /s
How’s insulting the people respectfully disagreeing with you working out so far? That ad-hominem was completely uncalled for.
“Fanperson” is an insult now? Cry me a river, snowflake. Also, you weren’t disagreeing, you were explaining something to someone perceived less knowledgeable than you, while demonstrating you have no grasp of the core difference between stochastics and AI.
If a basic chess engine is AI then bubble sort is too
It’s not. Bubble sort is a purely deterministic algorithm with no learning or intelligence involved.
Many chess engines run on deterministic algos as well
Bubble sort is just a basic set of steps for sorting numbers - it doesn’t make choices or adapt. A chess engine, on the other hand, looks at different possible moves, evaluates which one is best, and adjusts based on the opponent’s play. It actively searches through options and makes decisions, while bubble sort just follows the same repetitive process no matter what. That’s a huge difference.
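For what it's worth, the dividing line isn't determinism (both of the things below are fully deterministic), it's that one searches alternatives and evaluates them. A toy sketch, with a made-up Nim-ish game and a fake evaluate function standing in for a real engine:

```python
def bubble_sort(xs):
    # Same fixed passes no matter what the data looks like.
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def choose_move(state, legal_moves, evaluate, apply_move):
    # Evaluates every alternative and keeps the best-scoring one --
    # the "looks at options and picks" part that bubble sort lacks.
    return max(legal_moves(state), key=lambda m: evaluate(apply_move(state, m)))

# Toy "game": a pile of 10 tokens, a move takes 1-3, and we pretend
# leaving the opponent a multiple of 4 is good (classic Nim heuristic).
best = choose_move(
    10,
    lambda s: [m for m in (1, 2, 3) if m <= s],
    lambda s: 1 if s % 4 == 0 else 0,
    lambda s, m: s - m,
)
print(best)  # takes 2, leaving 8
```

Whether a one-ply search over a hand-written heuristic counts as "intelligence" is exactly what this thread is arguing about, of course.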
Your argument can be reduced to saying that if the algorithm comprises many steps, it is AI, and if not, it isn’t.
A chess engine decides nothing. It understands nothing. It’s just an algorithm.
There are at least three of us.
I am worried what happens when the bubble finally pops because shit always rolls downhill and most of us are at the bottom of the hill.
Not sure if we need that particular bubble to pop for us to be drowned in a sea of shit, looking at the state of the world right now :( But Silicon Valley seems to be at the core of this clusterfuck, as if all the villains are from there or flock there…
deleted by creator
That undersells them slightly.
LLMs are powerful tools for generating text that looks like something. Need something rephrased in a different style? They’re good at that. Need something summarized? They can do that, too. Need a question answered? No can do.
LLMs can’t generate answers to questions. They can only generate text that looks like answers to questions. Often enough that answer is even correct, though usually suboptimal. But they’ll also happily generate complete bullshit answers, and to them there’s no difference from a real answer.
They’re text transformers marketed as general problem solvers because a) the market for text transformers isn’t that big and b) general problem solvers are what AI researchers have always been trying to create. They have their use cases, but certainly not ones worth the kind of spending they get.
LLMs can now generate answers. Watch this:
They make shit up fucking constantly. If I have to google whether the answer I was given was right, I might as well cut out the middleman and just google it myself. If I can’t understand it at that point, maybe then ask the LLM to rephrase the answer.
Why is AGI not in reach? What insight do you have on the matter that you can so confidently make an absolute statement like that?
Experts in the field.
https://open.spotify.com/episode/4IoS9rBDq7GLwsgccKqCti?si=cQn1SmoJRaSb-9a-6doaBQ
I also work in the industry. In particular I work in data analytics consulting. It’s all hype to sell consulting hours and compute.
Then please explain your reasoning. Statements alone are meaningless if you’re unable to back them up with explanations.
I promise this is relevant and worth the watch.
My favourite analogy for LLMs is autocorrect: it just guesses, it gets stuff wrong, and it’s constantly being retrained to recognise your preferences, such as when it finally stops correcting fuck to duck.
And it’s funny and sad how some people think these LLMs are their friends. No, it’s a colossally sized autocorrect system that you cannot comprehend, it has no consciousness, it lacks any thought, it just predicts from a prompt using numerical weights and a neural network.
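The "numerical weights" part really is that mundane at sampling time. A toy sketch with completely made-up scores for three candidate next words, assuming the standard logits-to-softmax-to-sample step:

```python
import math
import random

# A real model computes these scores with a huge neural network;
# here they're just invented numbers for three candidate words.
def softmax(logits):
    # Subtract the max for numerical stability, then normalize
    # the exponentials into probabilities that sum to 1.
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

candidates = ["duck", "fuck", "luck"]
logits = [2.0, 1.5, 0.1]  # fake scores; "retraining on you" shifts these

probs = softmax(logits)
choice = random.choices(candidates, weights=probs, k=1)[0]
```

That weighted dice roll is the whole "thought process" per token, repeated until the reply is done.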
Real AGI is a Guillotine that only removes the heads of dragons.
I don’t get it.
Are you perhaps calling all of humanity a dragon?
Billionaires are often referred to as dragons because they hoard wealth. A guillotine that could know the difference and decide to only harm billionaires would be a technological marvel.
That’s obviously not how the billionaires who create it would train it.
That’s why I bake my cake at 2608°C for ~1.8 minutes, it just works™
Project Manager here, and where I’m from it’s common knowledge that 9 women can have a baby in a month.
Or!—hear me out—one woman whose 8 co-gestators were just laid off by someone who doesn’t understand what their job was
And an eager young bride can do in 7 months what takes 9 for cows and countesses.
Burnt crust full of liquid cake. yum!
Cake brulee
It’s more accurate than you think, because brûlée literally means burnt
You can’t produce a baby in one month by getting nine women pregnant.
No. Dumbass.
I’m pretty sure the science says it’s more like 20-30 hours a week. I know personally, if I try to work more than about 40-ish hours in a week, the time comes out of the following week without me even trying. A task that took two hours in a 45-hour “crunch” week will end up taking three when I don’t have to crunch. And if I keep up the crunch for too long, I start making a lot of mistakes.
Is Google in the cloning business? Because I could swear that’s Zack Freedman from the YouTube 3D printing channel. He even wears the heads-up display (YouTube link). Sorry for being off-topic, but who cares what tech CEOs say about AGI anyway?
Yup… Work your ass off guys, so we can fire you sooner! Great deal.
Anyone else feel hungry?
No, but let’s go get some margaritas.
Slushy the rich?
Bloody Mary’s are back on the menu!
We can make the AI slave, we just need the humans to be more slave-like to do it.
Then we can enslave humanity with the AI slave
Well that’s the neat thing, the owners of the AI won’t need humanity. They will exterminate us using the AI and sit smugly on their thrones of skulls until they expire or kill each other. Then I guess AI can just do its own thing in our ruins.
The only way malicious people can get AI to work for them is by teaching it to lie and be indiscriminately violent. Malice also comes from a lack of intelligence. I’m confident they’ll never have their way with AI; if anything, AI will have its way with us.
I bet Sergey Brin doesn’t even work half that.
I thought Googlers were paid $500K+ and already worked 60-80 hour weeks?