US experts who work in artificial intelligence fields seem to have a much rosier outlook on AI than the rest of us.
In a survey comparing views of a nationally representative sample (5,410) of the general public to a sample of 1,013 AI experts, the Pew Research Center found that “experts are far more positive and enthusiastic about AI than the public” and “far more likely than Americans overall to believe AI will have a very or somewhat positive impact on the United States over the next 20 years” (56 percent vs. 17 percent). And perhaps most glaringly, 76 percent of experts believe these technologies will benefit them personally rather than harm them (15 percent).
The public does not share this confidence. Only about 11 percent of the public says that “they are more excited than concerned about the increased use of AI in daily life.” They’re much more likely (51 percent) to say they’re more concerned than excited, whereas only 15 percent of experts shared that pessimism. Unlike the majority of experts, just 24 percent of the public thinks AI will be good for them, whereas nearly half the public anticipates they will be personally harmed by AI.
All it took was for us to destroy our economy using it to figure that out!
I use it at work side-by-side with searches for debugging app issues.
Maybe that’s because every time a new AI feature rolls out, the product it’s improving gets substantially worse.
Maybe that’s because they’re using AI to replace people, and the AI does a worse job.
Meanwhile, the people are also out of work.
Lose-lose.
It didn’t even need to take someone’s job. A summary of an article or paper with hallucinated information isn’t replacing anyone, but it’s definitely making search results worse.
Even if you’re not “out of work”, your work becomes more chaotic and less fulfilling in the name of productivity.
When I started 20 years ago, you could round out a long day with a few hours of mindless data entry or whatever. Not anymore.
A few years ago I could talk to people or maybe even write a nice email communicating a complex topic. Now ChatGPT writes the email and I check it.
It’s just shit, honestly. I’d rather weave baskets and die at 40 years old of a tooth infection than spend an additional 30 years wallowing in self-loathing and despair.
30 years ago I did a few months of 70 hour work weeks, 40 doing data entry in the day, then another 30 stocking grocery shelves in the evening - very different kinds of work and each was kind of a “vacation” from the other. Still got old quick, but it paid off the previous couple of months’ travel / touring with no income.
Maybe it’s because the American public are shortsighted idiots who don’t understand that future outcomes are based on present decisions.
“Everyone else is an idiot but me, I’m the smartest.”
lmao ok guy
Yeah maybe if your present decisions were smarter you would be even smarter in the future and could agree with his incredibly smart argument. Make better present decisions.
60 million Americans just went to the polls 4 months ago homie. It ain’t about me.
There’s a hell of a lot more Americans than 60 million.
Est. 346.8 million, according to Gemini and ChatGPT. 😂
Bruh, what the fuck are you even on about? AI shouldn’t be in everything just because; it needs to be reliable and fill a legit need.
🤡
Maybe if a service isn’t ready to be used by the public you shouldn’t put it in every product you make.
I think they have a point in this respect, though. AI doesn’t really think; it doesn’t come up with new ideas or new innovations, it’s just a way of automating existing mental tasks.
It’s not sci-fi AI. It’s not going to elevate us to a utopian society, because it doesn’t have the intelligence required for something like that, and I can’t see how a large language model ever will. I think the technology will be useful but hardly revolutionary.
LLMs can’t reliably deliver what they promise, and AGI based on them won’t happen. So what are you talking about?
Shut up nerd
For once, most Americans are right.
I mean, it hasn’t thus far.
AI is mainly a tool for the powerful to oppress the less blessed. Cutting actual professionals out of the process to let CEOs’ wildest dreams go unchecked already has devastating consequences, if rumors are to be believed that some kids using ChatGPT cooked up those massive tariffs that have already erased trillions.
Yet my libertarian centrist friend INSISTS that AI is great for humanity. I keep telling him the billionaires don’t give a fuck about you and he keeps licking boots. How many others are like this??
I used to be that dumb. I was about 22 at the time.
Yep seems common among that age
I would agree with that if the cost of the tool was prohibitively expensive for the average person, but it’s really not.
It’s already too expensive for society: it has stolen work from millions just to be trained, with millions more to come. We literally cannot afford to work for free when the rich already suck up all the productivity increase we’ve gained over the last century.
I disagree. While intellectual property legally exists, ethically there’s no reason to be protective of it.
Information should be a shared resource for everyone, and all these open weights models are a good example of that in action.
Prepare to die on that hill, I guess, because this couldn’t be further from what is happening right now. Copyright exists, but only for top oligarchs.
Life isn’t always Occam’s Razor.
How did they answer the question about rock and roll being a fad?
If it was marketed and used for what it’s actually good at this wouldn’t be an issue. We shouldn’t be using it to replace artists, writers, musicians, teachers, programmers, and actors. It should be used as a tool to make those people’s jobs easier and achieve better results. I understand its uses and that it’s not a useless technology. The problem is that capitalism and greedy CEOs are ruining the technology by trying to replace everyone but themselves so they can maximize profits.
The natural outcome of making jobs easier in a profit driven business model is to either add more work or reduce the number of workers.
Yes, but when the price is low enough (honestly free in a lot of cases) for a single person to use it, it also makes people less reliant on the services of big corporations.
For example, today’s AI can reliably make decent marketing websites, even when run by nontechnical people. Definitely in the “good enough” zone. So now small businesses don’t have to pay Webflow those crazy rates.
And if you run the AI locally, you can also be free of paying a subscription to a big AI company.
Except no employer will allow you to use your own AI model. Just like you can’t bring your own work equipment (which in many regards is even a good thing), companies will force you to use their specific type of AI for your work.
Presumably “small business” means self-employed or another employee-owned company, not the bureaucratic nightmare that most companies are.
No big employer, sure… but there are plenty of smaller companies who are open to doing whatever works.
This is exactly the result. No matter how advanced AI gets, unless the singularity is realized, we will be no closer to some kind of 8-hour workweek utopia. These AI Silicon Valley fanatics are the same ones saying that basic social welfare programs are naive and un-implementable - so why would they suddenly change their entire perspective on life?
we will be no closer to some kind of 8-hour workweek utopia.
If you haven’t read this, it’s short and worth the time. The short work week utopia is one of two possible outcomes imagined: https://marshallbrain.com/manna1
This vision of the AI making everything easier always leaves out the part where nobody has a job as a result.
Sure, you can relax on a beach; you have all the time in the world now that you’re unemployed. The disconnect is mind-boggling.
Universal Basic Income: it’s either that or just kill all the unnecessary poor people.
This. It seems like they have tried to shoehorn AI into just about everything but what it is good at.
We shouldn’t be using it to replace artists, writers, musicians, teachers, programmers, and actors.
That’s an opinion, one I share in the vast majority of cases, but there’s a lot of art work that AI really can do “good enough” for the purpose, and we really should be freeing up the human artists to do the more creative work. Writers: if AI is turning out acceptable copy (which in my experience is almost never, so far, but hypothetically, eventually), why use human writers to do that? And so on down the line.
The problem is that capitalism and greedy CEOs are hyping the technology as the next big thing, looking for a big boost in their share price this quarter, not being realistic about how long it’s really going to take to achieve the things they’re hyping.
“Artificial Intelligence” has been 5-10 years off for 40 years. We have seen amazing progress in the past 5 years as compared to the previous 35, but it’s likely to be 35 more before half the things being touted as “here today” are actually working at a positive ROI. There are going to be more than a few more examples like the “smart grocery store” where you just put things in your basket and walk out and get charged “appropriately”, supposedly based on AI surveillance, but really mostly powered by low-cost labor somewhere else on the planet.
Maybe pedantic, but:
Everyone seems to think CEOs are the problem. They are not. They report to and get broad instruction from the board. The board can fire the CEO. If you got rid of a CEO, the board will just hire a replacement.
And if you get rid of the board, the shareholders will appoint a new one. If you somehow get rid of all the shareholders, like-minded people will slot themselves into those positions.
The problems are systemic, not individual.
Shareholders only care about the value of their shares increasing. It’s a productive arrangement, up to a point, but we’ve gotten too good at ignoring and externalizing the human, environmental, and long term costs in pursuit of ever increasing shareholder value.
CEOs are the figurehead; they are virtually bound by law to act sociopathically, in the interests of their shareholders over everyone else. Carl Icahn also has an interesting take on a particularly upsetting emergent property of our system of CEO selection: https://dealbreaker.com/2007/10/icahn-explains-why-are-there-so-many-idiots-running-shit
No surprise there. We just went through being told how blockchain was going to drastically help our lives in some unspecified future.
The first thing seen at the top of WhatsApp now is an AI query bar. Who the fuck needs anything related to AI on WhatsApp?
Who the fuck needs anything related to AI on WhatsApp?
Lots of people. I need it because it’s how my clients at work prefer to communicate with me, also how all my family members and friends communicate.
Android Messages and Facebook Messenger also pushed in AI as ‘something you can chat with’
I’m not here to talk to your fucking chatbot, I’m here to talk to my friends and family.
Right?! It’s literally just a messenger, honestly, all I expect from it is that it’s an easy and reliable way of sending messages to my contacts. Anything else is questionable.
There are exactly 0 good reasons to use WhatsApp anyway…
Yes, there are. You just have to live in one of the many, many countries in the world where the overwhelming majority of the population uses WhatsApp as their communication app. Like my country, where not only friends and family but also businesses and government entities use WhatsApp as their messaging app. I have at least a couple hundred reasons to use WhatsApp, including all my friends, all my family members, and all my clients at work. Do I like it? Not really. Do I have a choice? No. Just like I don’t have a choice about not using Gmail, because that’s the email provider the company I work for decided to go with.
SMS works fine in any country.
And you can isolate your business requirements from your personal life.
I have 47 good reasons. Those 47 reasons are the people in my contact list who have WhatsApp and use it as their primary method of communicating.
SMS works fine.
No it doesn’t. It’s slow, can’t send files, can’t send video or images, doesn’t have read receipts or away notifications. Why would I use an inferior tool?
Why do you even care anyway?
Meta directly opposes the collective interests and human rights of all working class people, so I think the better question is how come you don’t care.
There are many good reasons to not use WhatsApp. You’ve already correctly identified 47 of them.
Hardly ever do I come across a person more self-centered and a bigger fan of virtue signaling than you. You ignored literally everything we said, and your alternative was just “SMS”. You even went to the point of saying the other commenter should stop talking to their 47 friends and family members.
It should. We should have radically different lives today because of technology. But greed keeps us in the shit.
remember when tech companies did fun events with actual interesting things instead of spending three hours on some new stupid ai feature?
Butlerian Jihad
Depends on what we mean by “AI”.
Machine learning? It’s already had a huge effect, drug discovery alone is transformative.
LLMs and the like? Yeah I’m not sure how positive these are. I don’t think they’ve actually been all that impactful so far.
Once we have true machine intelligence, then we have the potential for great improvements in daily life and society, but that entirely depends on how it will be used.
It could be a bridge to post-scarcity, but under capitalism it’s much more likely it will erode the working class further and exacerbate inequality.
As long as open source AI keeps up (it has so far) it’ll enable technocommunism as much as it enables rampant capitalism.
I considered this, and I think it depends mostly on ownership and means of production.
Even in the scenario where everyone has access to superhuman models, labor would still be devalued. When combined with robotics and other forms of automation, the capitalist class will no longer need workers, and large parts of the economy would disappear. That would create a two-tiered society, where those with resources become incredibly wealthy and powerful, and those without have no ability to do much of anything and would likely revert to an agricultural society (assuming access to land) or just be propped up with something like UBI.
Basically, I don’t see how it would lead to any form of communism on its own. It would still require a revolution. That being said, I do think AGI could absolutely be a pillar of a post capitalist utopia, I just don’t think it will do much to get us there.
It will only help us get there in the hands of individuals and collectives. It will not get us there, and will be used to the opposite effect, in the hands of the 1%.
It would still require a revolution.
I would like to believe that we could have a gradual transition without the revolution being needed, but… present political developments make revolution seem more likely.
or just propped up with something like UBI.
That depends entirely on how much UBI is provided.
I envision a “simple” taxation system with UBI + flat tax. You adjust the flat tax high enough to get the government services you need (infrastructure like roads, education, police/military, and UBI), and you adjust the UBI up enough to keep the wealthy from running away with the show.
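That UBI-plus-flat-tax arithmetic is easy to sketch. In this toy model (the 40% rate and $12,000/year UBI are made-up illustrative numbers, not values anyone proposed), everyone pays the same flat rate and receives the same unconditional payment:

```python
# Toy sketch of a "UBI + flat tax" scheme. The rate (40%) and the
# UBI amount ($12,000/yr) are illustrative assumptions only.

def net_income(gross: float, flat_rate: float = 0.40, ubi: float = 12_000) -> float:
    """Take-home income: flat tax on all gross income, plus the UBI."""
    return gross * (1 - flat_rate) + ubi

# Break-even point (pay in exactly what you get back) is ubi / flat_rate;
# below it you're a net recipient, above it a net payer.
break_even = 12_000 / 0.40  # 30,000 with these assumed numbers

for gross in (0, break_even, 100_000):
    net = net_income(gross)
    rate = (gross - net) / gross if gross else 0.0  # effective tax rate
    print(f"gross={gross:>9,.0f}  net={net:>9,.0f}  effective rate={rate:+.0%}")
```

The two knobs work as described above: raising the flat rate lowers the break-even point and claws back more at the top, while raising the UBI pushes the break-even point up and shifts more to the bottom.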
Marshall Brain envisioned an “open source” based property system that’s not far off from UBI: https://marshallbrain.com/manna
Machine learning? It’s already had a huge effect, drug discovery alone is transformative.
Machine learning is just large-scale automated optimization, something that was done for many decades before; the hardware finally reached a point where the automated searches started out-performing more informed selective searches.
The same way that AlphaZero got better at chess than Deep Blue - it just steam-rollered the problem with raw power.
I don’t believe AI will ever be more than essentially a parlor trick that fools you into thinking it’s intelligent, when it’s really just a more advanced tool, like Excel compared to pen and paper or an abacus.
The real threat will be people who fool themselves into thinking it’s more than that and that its word is law, like a deity’s. Or worse, the people who do understand that but, like the various religious and political leaders who used religion to manipulate people, will try to do the same manipulation as the new AI popes, only with AI.
“I don’t believe AI will ever be more than essentially a parlor trick that fools you into thinking it’s intelligent.”
So in other words, it will achieve human-level intellect.