There’s so much to legitimately worry about with AI that we often lose sight of its potential for good.
Building trustworthy AI won’t be easy, but it’s essential.
It doesn’t seem to be a top priority for most of the people creating AI. I suspect we will mainly be learning from our mistakes here, after they’ve happened.
These brain-computer interfaces are usually discussed in the context of disabled and paralyzed people, but I wonder what they could do for regular people as well. It’s interesting to see how quickly the brain adapts to brand-new sensory information from the computer interface; it makes you wonder what new ways of interacting with computers we haven’t thought of yet.
It wasn’t so long ago that, when people tried to refute the argument that AI and robotics automation would lead to human workers being replaced, they’d say: don’t worry, the displaced humans can just learn to code. There will always be jobs there, right?
The fundamental problem is this: we tend to think about democracy as a phenomenon that depends on the knowledge and capacities of individual citizens, even though, like markets and bureaucracies, it is a profoundly collective enterprise…Making individuals better at thinking and seeing the blind spots in their own individual reasoning will only go so far. What we need are better collective means of thinking.
I think there is a lot of validity to this way of looking at things. We need new types of institutions to deal with the 21st-century information world. When it comes to politics and information, many of our ideas and models for organizing and thinking about things come from the 18th and 19th centuries.
OpenAI is on a treadmill. It has vast amounts of investor billions pouring into it and needs to show results. Meanwhile, open source AI is snapping at its heels in every direction. If it is true that it is holding back on AI agents out of caution, I’m pretty sure that won’t last long.
He didn’t get everything right!
He was, however, accurate about technology. I wonder if anyone today can be as accurate about 2125? It seems there are so many more possibilities once something as momentous as AGI has happened.
Thanks, we’ll keep track of what they are doing.
I misphrased; they are an Admin/Op, and essential.
Would it be enough to have those rules in place and, when content is reported, actively remove it as mods?
We’re pretty good with daily moderating of content on futurology.today, so I’d be confident we could cover that aspect.
However, I’m wondering about federation issues. Are we liable for UK users who use their futurology.today account to access other instances we don’t mod?
The problem is that the guidance is too large and overbearing.
This.
Who gets to decide what “self-harm” is? There’ll be some busybodies who’ll say that any remotely positive messaging for LGBTQ youth is ‘self-harm’ for them.
It’s interesting how this movement had its roots in left-wing thought, but has now been thoroughly co-opted by libertarian right-wing types. At its inception it was about tearing down society to start again, hopefully leading to something more equal afterwards.
There’s still a lot of that radicalism about tearing down current society and restarting it, but I don’t think most of the people who identify this way now really care very much about equality.
I admit I’m torn here. On the one hand I think the future is to have AI ubiquitous and integrated into everything. On the other hand, fake AI ‘friends’ on a friend’s network sounds hideous.
I wonder whether this trend of open-source AI equaling the leading investor-funded AI will go all the way to AGI.
AI is already better than human drivers in China and the US. It won’t be long before it masters more challenging environments. I suspect human drivers will adapt to its predictability in places with crazier driving.
Interesting supposition. The multiverse is just a hypothesis, there’s no proof the concept is real, so this idea is more in the realm of metaphysics than real science. Still, humanity doesn’t understand the quantum world yet, and it is building tech that utilizes it.
On the opposite end of the scale are dark energy and dark matter, which show we don’t really understand the universe at the macro scale either, yet we’ve been existing in it for millennia. Whatever is real is just as real as it ever was, whether we understand it or not.
So perhaps this extra computational power is coming from “somewhere” we don’t understand. If you thought AGI was scary, AGI powered by computing coming from a mysterious unknown “somewhere” sounds even more troubling.
Turmoil and transition seem to be mid 2020s themes, so maybe it’s just getting harder to predict things, even with a short 1 year outlook.
AI: AI agents working together to execute complex tasks will be a prominent theme. AI will advance its abilities on narrow tasks with narrow training data, but its hallucination problem with generalist tasks will remain unsolved. The western world’s two biggest economies, the EU and US, will diverge further on AI regulation, as the US becomes more deregulated. AI’s unpopularity with the American public will likely grow.
ROBOTICS: Thanks to AI, robotics made significant advances in 2024. There may be a ‘breakout’ consumer robot in 2025, perhaps a humanoid one. The roboticisation of global manufacturing will be a political topic.
ECONOMY: Political turmoil in the US, or trade wars, may spark a recession or stock market downturn. The rapid expansion of robo-taxis in China could see protests from human taxi drivers. The global fossil fuel industry will turn to Trump’s America to try to slow the inevitable transition to a decarbonized future. Creative industry job losses to AI will start to be considered significant.
ENERGY: The global switch to renewables continues unabated. Chinese coal use may peak. Petroleum company BP expects peak oil demand in 2025 at 102 million barrels per day, though others predict peak demand will be later in the decade. Chinese manufacturers will debut sodium-ion batteries that will be seen as viable alternatives to lithium batteries. ICE car sales will decline in more countries as a growing number of people choose EVs.
SPACE: If he can stay in favor long enough, Elon Musk may succeed in getting NASA downgraded in SpaceX’s favor. Current space telescopes seem on the brink of fundamental discoveries in cosmology (dark matter/energy) and in the search for alien life on exoplanets. Either topic could see a huge breakthrough in 2025. A Chinese company will successfully deploy a reusable rocket that will soon be in commercial service.
HEALTH & MEDICINE: Fingers crossed the world avoids an H5N1-originated influenza pandemic. More countries will talk of government-funded mass availability of Ozempic-type drugs. AI doctors will become more mainstream.
POLITICS: We seem to be in a time of transition, as numerous features of the ‘old’ world are fading. Multipolar blocs strengthen. BRICS becomes stronger under Chinese leadership. The EU is forced to contemplate becoming a defense pact as the US under Trump disengages from NATO. Trump’s presidency is bad news for Ukraine and for the Palestinians, who will probably experience more vigorous attempts at ethnic cleansing.
I know some people don’t like political/societal discussions about the future, but paradoxically ignoring this aspect of the future is being political too. I can never separate the technological from the political, so my way of thinking about both is always connected.
I don’t spend much time at the DailyMail site; I find its worldview depressing and ugly, but I sometimes check out the comments as a proxy for right-wing thought among everyday people. It’s striking how supportive the comments there are of this guy and what he’s done.
It’s another way this moment reminds me of the French Revolution. The Trump/Musk brigade has sold their victory as a revolutionary victory for the alt-right, yet revolutions have a habit of spawning further revolutions that the original revolutionaries lose control of.
Worth noting, the DM comments section is reliably and rabidly pro-MAGA on everything, yet here they are supporting this guy’s violent revolutionary actions.
Human attention is a finite resource. There aren’t enough people to be interested in all this auto-generated AI slop. If anything, a deluge of AI-generated slop will make people more interested in focusing on humans they find interesting.