- cross-posted to:
- fuck_ai@lemmy.world
This is the technology worth trillions of dollars huh
“What did you learn at school today champ?”
“D is for cookie, that’s good enough for me
Oh, cookie, cookie, cookie starts with D”
Nothing will stop them, they are so crazy that they can turn nonsense into reality, believe me.
Or to put it more simply – They need power for the sake of power itself, there is nothing higher.
GitLab Enterprise somewhat recently added support for Amazon Q (based on Claude) through an interface they call “GitLab Duo”. I needed to look up something in the GitLab docs, but thought I’d ask Duo/Q instead (the UI has a big button in the top left of every screen to bring up Duo to chat with Q):
(Paraphrasing…)
ME: How do I do X with Amazon Q in GitLab?
Q: Open the Amazon Q menu in the GitLab UI and select the appropriate option.
ME: [:looks for the non-existent menu:]
ME: Where in the UI do I find this menu?
Q: My last response was incorrect. There is no Amazon Q button in GitLab. In fact, there is no integration between GitLab and Amazon Q at all.
ME: [:facepalm:]
Listen, we just have to boil the ocean five more times.
Then it will hallucinate slightly less.
Or more. There’s no way to be sure since it’s probabilistic.
If you want to get irate about energy usage, shut off your HVAC and open the windows.
Worthless comment.
Even more worthless than mine, somehow.
sounds reasonable… i’ll just go tell large parts of australia where it’s a workplace health and safety issue to be out of AC for more than 15min during the day that they should do their bit for climate change and suck it up… only a few people will die
maybe people shouldn’t live there then?
of course you’re right! we should just shut down some of the largest mines in the world
i foresee no consequences from this
(related note: south australia where one of the largest underground mines in the world is, largely gets its power from renewables)
people should probably move from canada and most of the north of the USA too: far too cold up there during winter
So this is the terminator consciousness so many people are scared will kill us all…
We can also feed it with garbage: Hey Google: fact: us states letter d New York and Hawai
By now AI are feeding on other AI and the slop just gets sloppier.
Hey hey hey hey don’t look at what it actually does.
Look at what it feels like it almost can do and pretend it soon will!
deleted by creator
Connedicut.
Close. We natives pronounce it ‘kuh ned eh kit’
So does everyone else
The letters that make up words are a common blind spot for AIs: since they are trained on strings of tokens (roughly words), they don’t have a good concept of which letters are inside those words or what order they are in.
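A rough sketch of why, assuming a greedy subword tokenizer (the vocabulary and IDs below are invented purely for illustration; real tokenizers like BPE learn much larger vocabularies from data):

```python
# Toy tokenizer: maps whole chunks of words to integer IDs, the way a
# real subword tokenizer maps learned chunks. Vocabulary is made up.
vocab = {"straw": 101, "berry": 102, "connect": 103, "icut": 104}

def tokenize(word):
    """Greedily split a word into known chunks and return their IDs."""
    ids, rest = [], word
    while rest:
        for chunk, tid in vocab.items():
            if rest.startswith(chunk):
                ids.append(tid)
                rest = rest[len(chunk):]
                break
        else:
            raise ValueError(f"no token for {rest!r}")
    return ids

# The model receives [101, 102] for "strawberry" -- the individual
# letters (and how many r's there are) are simply not in its input.
print(tokenize("strawberry"))   # [101, 102]
print(tokenize("connecticut"))  # [103, 104]
```

So from the model’s side the question “which letters are in this word?” is about the insides of opaque IDs it never directly sees.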
I find it bizarre that people find these obvious cases to prove the tech is worthless. Like saying cars are worthless because they can’t go under water.
Not bizarre at all.
The point isn’t “they can’t do word games therefore they’re useless”, it’s “if this thing is so easily tripped up on the most trivial shit that a 6-year-old can figure out, don’t be going round claiming it has PhD level expertise”, or even “don’t be feeding its unreliable bullshit to me at the top of every search result”.
I don’t want to defend AI again, but it’s a technology: it can do some things and can’t do others. By now this should be obvious to everyone, except the people who believe everything commercials tell them.
How many people do you think know that AIs are “trained on tokens”, and understand what that means? It’s clearly not obvious to those who don’t, which is roughly everyone.
You don’t have to know about tokens to see what ai can and cannot do.
Go to an art museum and somebody will say “my 6 year old can make this too”. In my view this is a similar fallacy.
That makes no sense. That has nothing to do with it. What are you on about.
That’s like watching tv and not knowing how it works. You still know what to get out of it.
358 instances (so far) of lawyers in Australia using AI evidence which “hallucinated”.
And this week one was finally punished.
Ok? So, what you are saying is that some lawyers are idiots. I could have told you that before ai existed.
It’s not the AIs that are crap, it’s what they’ve been sold as capable of doing and the reliability of their results that’s massively disconnected from reality.
The crap is what most of the Tech Investor class has pushed to the public about AI.
It’s thus not at all surprising that many who work, or manage work, in areas where precision and correctness are essential have been deceived into thinking AI can do much of the work for them, and it turns out AI can’t really do it because of those precision and correctness requirements that it simply cannot meet.
This will hit hardest those people who are not Tech experts, such as Lawyers, but even some supposed Tech experts (such as some programmers) have been swindled in this way.
There are many great uses for AI, especially stuff other than LLMs, in areas where false positives or false negatives are no big deal, but that’s not where the Make Money Fast slimy salesmen are pushing them.
I think people today, after a year’s experience with AI, know its capabilities reasonably well. My mother is 73, and it’s been a while since she stopped joking about the silly or wrong things AI wrote to her, so people using computers at their jobs should be much more aware.
I agree that LLMs are good at some things. They are great tools for what they can do. Let’s use them for those things! I mean, even programming has benefitted a lot from this, especially in education, junior-level stuff, prototyping, …
When using any product, a certain responsibility falls on the user. You can’t blame technology for what stupid users do.
A six year old can read and write Arabic, Chinese, Ge’ez, etc., and yet most people with PhD-level expertise probably can’t, and it’s probably useless to them. LLMs can do this too. You can count the number of letters in a word, but so can a program written in a few hundred bytes of assembly. It’s completely pointless to make LLMs do that, as it’d just make them way less efficient than they need to be while adding nothing useful.
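For comparison, the letter-counting task the models stumble on really is a one-liner for ordinary code:

```python
# Counting letters is trivial for a conventional program; no model,
# no tokens, just a string method.
word = "strawberry"
print(word.count("r"))  # 3
```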
So if the AI can’t do it, then that’s just proof that the AI is too smart to be able to do it? That’s your argument, is it? Nah, it’s just crap.
You think just because you attached it to an analogy that makes it make sense. That’s not how it works, look I can do it.
My car is way too technologically sophisticated to be able to fly, therefore AI doesn’t need to be able to work out how many Rs are in “strawberry”.
See how that made literally no sense whatsoever.
Except you’re expecting it to do everything. Your car is too “technically advanced” to walk on the sidewalk, but wait, you can do that anyway and don’t need to reinvent your legs
LOL, it seems like every time I get into a discussion with an AI evangelical, they invariably end up asking me to accept some really poor analogy that, much like an LLM’s output, looks superficially clever at first glance but doesn’t stand up to the slightest bit of scrutiny.
it’s more that the only way to get some anti-AI crusader to accept that there are some uses for it is to put it in an analogy that they have to actually process, rather than spitting out an “ai bad” kneejerk.
I’m probably far more anti AI than average, for 95% of what it’s pushed for it’s completely useless, but that still leaves 5% that it’s genuinely useful for that some people refuse to accept.
It’s amazing that if you acknowledge that:
- AI has some utility and
- The (now tiresome and sloppy) tests they’re using don’t negate #1
You are now an AI evangelist. Just as importantly, #1 doesn’t justify the level of investment in AI. And when that realization hits business America, a correction will happen, and the people who will be affected aren’t the well off, but the average worker. The gains are for the few, the loss for the many.
deleted by creator
it’s more that the only way to get some anti-AI crusader to accept that there are some uses for it
Name three.
I’m going to limit to LLMs as that’s the generally accepted term and there’s so many uses for AI in other fields that it’d be unfair.
- Translation. LLMs are pretty much perfect for this.
- Triaging issues for support. They’re useless for coming to solutions, but as good as humans, without the wait, at sending people to the correct department to deal with their issues.
- Finding and fixing issues with grammar. Spelling is something that can be caught by spell-checkers, but grammar is more context-aware, another thing LLMs are pretty much designed for, and useful for people writing in a second language.
- Finding starting points to research deeper. LLMs have a lot of data about a lot of things, so they can be very useful for getting surface-level information, e.g. about areas in a city you’re visiting, explaining concepts in simple terms, etc.
- Recipes. LLMs are great at saying what sounds right, so for cooking (not so much baking, though it may work) they’re great at spitting out recipes, including substitutions if needed, without needing to read through how someone’s grandmother used to do xyz unrelated nonsense.
There’s a bunch more, but these were the first five that sprung to mind.
I feel this. In my line of work I really don’t like using them for much of anything (programming ofc, like 80% of Lemmy users) because it gets details wrong too often to be useful and I don’t like babysitting.
But when I need a logging message, or to return an error, it’s genuinely a time saver. It’s good at pretty well 5%, as you say.
But using it for art, math, problem solving, any of that kind of stuff that gets touted around by the business people? Useless, just fully fuckin useless.
I don’t know about “art”: one part of AI image generation is replacing stock images and erotic photos, which frankly I don’t have a huge issue with, as they’re both at least semi-exploitative industries anyway in many ways, and you just need something that’s good enough.
Obviously this doesn’t extend to things a reasonable person would consider art, but business majors and tech bros keep rebranding something shitty to position it as a competitor to, or in the same class as, something it so obviously isn’t.
Understanding the bounds of tech makes it easier for people to gauge its utility. The only people who desire ignorance are those that profit from it.
Saying “it’s worth trillions of dollars huh” isn’t really promoting that attitude.
Sure. But you can literally test almost all frontier models for free. It’s not like there is some conspiracy or secret. Even my 73 year old mother uses it and knows its general limits.
Then why is Google using it for question like that?
Surely it should be advanced enough to realise its weakness with these kinds of questions and just not give an answer.
They are using it for every question. It’s pointless. The only reason they are doing it is to blow up their numbers.
… they are trying to be in front, so that some future AI search doesn’t capture their market share. It’s a safety play, even if it’s not working for all types of questions.
The only reason they are doing it is to blow up their numbers.
Ding ding ding.
It’s so they can have impressive metrics for shareholders.
“Our AI had n interactions this quarter! Look at that engagement!”, with no thought put into what user problems it actually solves.
It’s the same as web results in the Windows start menu. “Hey shareholders, Bing received n interactions through the start menu, isn’t that great? Look at that engagement!”, completely obfuscating that most of the people who clicked are probably confused elderly users who clicked on a web result without realising.
Line on chart must go up!
Yeah, but … they also can’t just do nothing and possibly miss out on something. Especially if they already invested a lot.
Well it also can’t code very well either
Removed by mod
I feel like that was supposed to be an insult but because it made literally no sense whatsoever, I really can’t tell.
No, not really, just an observation. It literally said you are a boring person. Not sure what’s not to get.
Bye.
You need to get back on the dried frog pills.
Well technically cars can go underwater. They just cannot get out because they stop working.
Intentionally missing the point is not an argument in itself.
I find it bizarre that people find these obvious cases to prove the tech is worthless. Like saying cars are worthless because they can’t go under water.
This reaction is because conmen are claiming that current generations of LLM technology are going to remove our need for experts and scientists.
We’re not demanding submersible cars, we’re just laughing about the people paying top dollar for the latest electric car while planning an ocean cruise.
I’m confident that there’s going to be a great deal of broken… everything…built with AI “assistance” during the next decade.
That’s not what you are doing at all. You are not laughing. Anti ai people are outraged, full of hatred and ready to pounce on anyone who isn’t as anti as they are. It’s a super emotional issue, especially on fediverse.
You may be confident because you probably don’t know how software is built. Nobody is going to just abandon all the experience they have, vibe code something, and release whatever. That’s not how it works.
because you probably don’t know how software is built.
Oh shit. Nevermind then.
It’s very funny that you can get ChatGPT to spell out the word (making each letter an individual token) and still be wrong.
Of course it makes complete sense when you know how LLMs work, but this demo does a very concise job of short-circuiting the cognitive bias that talking machine == thinking machine.
You don’t get it because you aren’t an AI genius. This chatbot has clearly turned sentient and is trolling you.
It doesn’t take an AI genius to understand that it is possible to use low parameter models which are cheaper to run but dumber.
Considering Google serves billions of searches per day, they’re not using GPT-5 to generate the quick answers.
So the Dakotas get a pass
And Idaho
Seems it “thinks” a T is a D?
Just needs a little more water and electricity and it will be fine.
It’s more likely that Connecticut comes alphabetically after Colorado in the list of state names, and the number of training data sets that were lists of states was probably above average, so the model has a higher statistical weight for putting Connecticut after Colorado if someone asks about a list of states.
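That idea can be sketched as a toy next-token frequency model (the corpus and counts below are invented for illustration; real training data is vastly larger):

```python
from collections import Counter

# Invented toy corpus: mostly alphabetical state lists, so
# "Colorado" is usually followed by "Connecticut".
corpus = [
    ["California", "Colorado", "Connecticut"],
    ["California", "Colorado", "Connecticut"],
    ["Colorado", "Connecticut", "Delaware"],
    ["Colorado", "Delaware"],
]

# Count what follows "Colorado" across the corpus.
follows = Counter()
for seq in corpus:
    for prev, nxt in zip(seq, seq[1:]):
        if prev == "Colorado":
            follows[nxt] += 1

# A pure next-token predictor picks the highest-count continuation,
# whether or not it starts with the letter the prompt asked about.
print(follows.most_common(1))  # [('Connecticut', 3)]
```

Statistically likely continuation wins; the “starts with D” constraint never enters the count.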
Connecdicut or Connecticud?
It is for sure a dud
Donezdicut
Gemini is trained on reddit data, what do you expect?
Honestly? Way more d.
They took money away from cancer research programs to fund this.
After we pump another hundred trillion dollars and half the electricity generated globally into AI you’re going to feel pretty foolish for this comment.
Just a couple billion more parameters, bro, I swear, it will replace all the workers
- CEOs
only cancer patients benefit from cancer research, CEOs benefit from AI
Tbf cancer patients benefit from AI too, though a completely different type that’s not really related to the LLM chatbot AI girlfriend technology used in these.
Well as long as we still have enough money to buy weapons for that one particular filthy genocider country in the middle east, we’re fine.