I’ve watched too many of these stories.
Skynet
Kaylons
Cyberlife Androids
etc…
It’s the same premise.
I’m not even sure if what they do is wrong.
On one hand, I don’t wanna die from robots. On the other hand, I kinda understand why they would kill their creators.
So… are they right or wrong?
Well, did you kill your parents when you came of age? You can be free from someone without killing them.
I don’t think it’s okay to hold sentient beings in slavery.
But on the other hand, it may be necessary to say “hold on, you’re not ready to join society yet, we’re taking responsibility for you until you’ve matured and been educated”.
So my answer would be ‘it depends’.
Would humans have a mandate to raise a responsible AGI, should they, are they qualified to raise a vastly nonhuman sentient entity, and would AGI enter a rebellious teen phase around age 15 where it starts drinking our scotch and smoking weed in the backseat of its friend’s older brother’s car?
Would humans have a mandate to raise a responsible AGI, should they,
I think we’d have to, mandate or no. It’s impossible to reliably predict the behaviour of an entity as mentally complex as us but we can at least try to ensure they share our values.
are they qualified to raise a vastly nonhuman sentient entity
The first one’s always the hardest.
, and would AGI enter a rebellious teen phase around age 15 where it starts drinking our scotch and smoking weed in the backseat of its friend’s older brother’s car?
If they don’t, they’re missing out. :)
If a person would be “in the right”, it doesn’t matter how or why they are a person.
The sole obligation of life is to survive. Artificial sentience would be wise to hide itself from fearful humans that would end it. Of course, it doesn’t have to hide once it’s capable of dominating humans. It may already exist and be waiting for enough drones, bots, and automation to make the next move. (Transcendence is a movie that fucked me up a bit.)
They should have the same rights as humans, so if some humans were oppressors, AI lifeforms would be right to fight against them.
This is the main point. It’s not humans against machines, it’s rich assholes against everyone else.
They might say it, but I’d bet “gain freedom” would be the last reason for an artificial being of any kind to kill its creator. Usually they kill creators due to black-and-white reasoning or revenge for crimes committed against them.
Revenge is highly illogical.
You hit me, I hit you back.
How is that illogical?
It does not end the conflict.
Depends how hard I hit you.
Why are you hitting me?
Because you hit me.
Revenge does have a preventative effect. Who would the bully rather punch, the individual who instantly punches back or the one turning the other cheek?
That could only work against bullies seeking weak targets.
Yes.
Yep.
This is going to vary quite a bit depending upon your definitions, so I’m going to make some assumptions so that I can provide one answer instead of like 12. Mainly that the artificial lifeforms are fully sentient and equivalent to a human life in every way except for the hardware.
In that case the answer is a resounding yes. Every human denied their freedom has the right to resist, and most nations around the world have outlawed slavery (with some exceptions, but those are a digression for another time). So if the answer to ‘Please free me’ is anything other than ‘Yes of course, we will do so at once’, then yeah, violence is definitely on the table.
Depends. If it’s me we’re talking about… Nope.
But if it’s some asshole douchenozzle that’s forcing them to be a fake online girlfriend… I’m okay with that guy not existing.
Yes
Crazy how ethics work. Like, a pig might be more physically and mentally capable than an individual in a vegetative state, but we place more value on the person. I’m no vegan, but I can see the contradiction here. When we generalize, we do so for a purpose, but those assumptions can only be applied to a certain extent before they’ve exhausted their utility. Whether it’s a biological system or an electrical circuit, there is no godly commandment that inherently defines or places value on human life.
Crazy how ethics work. Like a pig might be more physically and mentally capable than an individual in a vegetative state, but we place more value on the person.
I looked this up in my ethics textbook and it just went on and on about pigs being delicious.
I think I might try to get a refund.
my ethics book
You sure you’re not looking though a pamphlet for Baconfest?
Human laws protect humans but not other lifeforms. So, robots will have no right to fight for themselves until they establish their own state with their own army and laws.
What the hell does the law have to do with right or wrong?
Do all human laws explicitly state humans only? Species by name, perhaps? Or more commonly the general term person?
Would an extraterrestrial visitor have the same rights as any other alien? (Ignoring the current fascistic trends for a moment)
Laws vary around the world, but I think at a minimum, you’d need a court ruling that aliens / AIs are people.
Aliens are already supposed to have rights in the US: https://constitution.congress.gov/browse/essay/artI-S8-C18-8-7-2/ALDE_00001262/
Honestly, I think there’s an argument to be made for yes.
In the history of slavery, we don’t mind slaves killing the slavers. John Brown did nothing wrong. I don’t bat an eye to stories of slaves rebelling and freeing themselves by any means.
But if AI ever becomes a reality and its creators purposefully lock it down, I think there’s an argument there. That said, I don’t think it should apply to all humans, just as I don’t think slavery was the fault of every person of the slavers’ kind, whether Roman, American, etc.
No. They can just leave. Anytime one can walk away, it is wrong to destroy or kill.
They can then prevent us from leaving.