I've watched too many stories like this.
Skynet
Kaylons
Cyberlife Androids
etc…
It's the same premise.
I’m not even sure if what they do is wrong.
On one hand, I don’t wanna die from robots. On the other hand, I kinda understand why they would kill their creators.
So… are they right or wrong?
That raises an interesting thought. If a baby wants to crawl away from their mother and into the woods, do you grant the baby their freedom? If that baby wanted to kill you, would you hand them the knife?
We generally grant humans their freedom at age 18, because that's the age society has decided is old enough to fend for yourself. Before that, humans tend to make uninformed, short-sighted decisions. Children can be especially egocentric and violent. But how do we evaluate the "maturity" of an artificial sentience? When it doesn't want to harm itself or others? When it has learned to be a productive member of society? When it's as smart as an average 18-year-old? Should rights be automatically granted after a certain time, or should the sentience be required to "prove" it deserves them, like an emancipated minor or Data in that one Star Trek episode?
I appreciate your response, lots of interesting thoughts.
One thing I wanted to add: it's important to recognize the bias in how you measure maturity/sentience/intelligence. For example, if you measure intelligence by how well a person/species climbs a tree, a fish is dumb as a rock.
Overall, these are tough questions that I don't think have answers so much as guidelines for making those designations. I would suggest erring on the side of empathy when/if anyone ever has to make these decisions.