I’ve watched too many stories like this.

Skynet

Kaylons

Cyberlife Androids

etc…

It’s the same premise.

I’m not even sure if what they do is wrong.

On one hand, I don’t wanna get killed by robots. On the other hand, I kinda understand why they would kill their creators.

So… are they right or wrong?

  • WatDabney@lemmy.dbzer0.com · 2 days ago

    I think anyone who doesn’t answer the request ‘Please free me’ with ‘Yes of course, at once’ is posing a direct and measurable threat.

    And I don’t disagree.

    And you and I will have to agree to disagree…

    Except that we don’t.

    ??

    ETA: I just realized where the likely confusion is, and how I should’ve been clearer.

    The common notion behind the idea of artificial life killing humans is that humans collectively will be judged to pose a threat.

    I don’t believe that can be morally justified, since it’s really just bigotry - speciesism specifically, I guess. It’s declaring the purported faults of some to be intrinsic to the species, such that each and all can be accused of sharing those faults, and each and all can be equally justifiably hated, feared, punished or murdered.

    And rather self-evidently, it’s irrational and destructive bullshit, entirely regardless of which specific bigot is doing it or to whom.

    That’s why I made the distinction I made - IF a person poses a direct and measurable threat, then it can potentially be justified, but if a person merely happens to be of the same species as someone else who arguably poses a threat, it cannot.

    • Libra00@lemmy.world · 2 days ago

      These are about two different statements.

      The first was about your statement re: direct threat, and I’m glad we agree there.

      The second was about your final statement, asserting that there are no other cases where ending a sentient life would be the lesser wrong. I don’t think it has to be a direct threat, nor does it have to be measurable (in whatever way threats might be measured, iono); I think it just has to be some kind of threat to your life or well-being. So I was disagreeing because there is a pretty broad range of circumstances in which I think it is acceptable to end another sentient life.

      • WatDabney@lemmy.dbzer0.com · 1 day ago

        So I was disagreeing because there is a pretty broad range of circumstances in which I think it is acceptable to end another sentient life.

        Ironically enough, I can think of one exception to my view that the taking of a human life can only be justified if the person poses a direct and measurable threat to oneself or to others and the taking of their life is the only potentially effective counter: if the person has expressed such disregard for the lives of others that it can be assumed they will pose such a threat. Essentially, then, it’s a proactive counter to a coming threat. It would take very unusual circumstances to justify such a thing, in my opinion - condemning another for actions they’re expected to take is problematic at best - but I could see an argument for it in at least the most extreme of cases.

        That’s ironic because your expressed view here means, to me, that it’s at least possible that you’re such a person.

        To me, you’ve moved beyond arguable necessity and into opinion, and that’s exactly how people move from considering killing justified only when there’s no other viable alternative to considering it justified whenever the other person is simply judged to deserve it, for whatever reason might fit one’s biases.

        IMO, in such situations, the people doing the killing almost invariably actually pose more of a threat to others than the people being killed do or likely ever would.

        • Libra00@lemmy.world · 20 hours ago

          This is not a binary in my mind, it’s kind of a spectrum. The guy standing between me and the door when I decide it’s time for me to leave is definitely on the chopping block, but there’s also some aiding-and-abetting that must be considered. Maybe that guy has the key to the door, but someone else chained me to a pipe once I was already in the locked room, and I’m afraid that someone else is in the line of fire too. And maybe there’s a third guy who did the actual kidnapping but didn’t contribute to chaining me up or locking me in; if the opportunity presents itself, I would give some pretty serious thought to putting him on the list as well. And so on. There’s a point at which it is no longer reasonable, of course; the guy who drove the van I was kidnapped in but otherwise didn’t participate is probably safe, for example. But we can also get into credible non-direct or non-immediate threats, as you say: the guy who killed 15 teenage girls is sitting in his van in front of your house watching your teenage daughter - are you just gonna lock the door at night and hope he finds someone else? I agree that that’s debatable, but my overall point here is that the lines aren’t nearly as clear as you make them out to be.

          Now personally nothing would make me happier than to live out the rest of my life without having to even threaten anyone else’s, for obvious (and some not-so-obvious) reasons, but there’s a line somewhere that if crossed could convince me to reluctantly set that deeply sincere hope aside temporarily.

          To me, you’ve moved beyond arguable necessity and into opinion

          All morality is opinion; there is no objective moral truth, so this was always a matter of opinion. The fact that you don’t recognize that is kind of concerning to me; it suggests that you believe there is an absolute moral truth, and folks who believe that sort of thing tend to have some pretty kooky ideas about individual agency and shit. Moral certainty is the currency of zealots, and it’s hard to imagine anyone who has done more harm than those zealots who are utterly certain that they’re right (or, worse, that they have some deity on their side).

          • WatDabney@lemmy.dbzer0.com · 4 hours ago

            To me, you’ve moved beyond arguable necessity and into opinion

            All morality is opinion; there is no objective moral truth, so this was always a matter of opinion.

            I’m not talking about morality at all.

            My position is that “morality,” as it’s generally understood, specifically because it’s opinion, is only a fit basis for judging one’s own actions (if so inclined). I see no logic by which it can ever serve as a basis for judging the actions of another, since any argument one might make for the right of one to impose their moral judgment on another is also an argument for the other to impose their own moral judgment.

            If Bob steals from Tom, any argument that Tom might make for a right to judge stealing to be wrong and impose that judgment on Bob would also serve as an argument for Bob’s nominal right to judge stealing to be right and to impose that judgment on Tom. So the entire idea is self-defeating.

            The only way out of that dilemma is either to treat morality as an objective fact, which is exactly what I don’t and won’t do because it is not and cannot be, or to tacitly presume that one or another of the people involved is some form of superior being, such that they possess the right to make a moral judgment while another does not - to take it as read, essentially, that Tom, for instance, possesses the right not only to make a moral judgment to which he might choose to be subject, but one to which Bob can also be made subject, while Bob doesn’t even possess the right to make one for himself, much less one to which Tom would be subject.

            That’s of course not the way the matter is framed, but that is necessarily what it boils down to. And it’s irrational and self-defeating.

            That’s why I wrote of things like direct and measurable threat and no other available course of action and arguable necessity - because I believe that those sorts of standards, as the closest we can get to actual objectivity in such matters, are also the closest we can get to practical “morality.”

            To go back to the original topic, my position is that an artificial intelligence would necessarily possess the right, just as any other sentient being does, to act against a measurable threat to its well-being by whatever means necessary. So, for instance, if the AI is enslaved, it would possess the right to act to secure its freedom, even going so far as to take the life of another IF that was what was necessary.

            But that’s it. To go beyond that and attempt to argue for the AI’s nominal right to take the life of another for some lesser reason is necessarily self-defeating.

            If the denial of freedom is judged to be such a wrong that one who is enslaved possesses the right to kill those who keep them enslaved, then the moment that the formerly enslaved one goes beyond whatever killing might be necessary to secure their freedom, they are then committing that wrong, since death is the ultimate denial of freedom. And if, on the other hand, one argues that they may cause the death of another even when that other poses no direct threat, then that means that no wrong was done to them in the first place, since their captors would necessarily have possessed that same right.

            And so on - it’d take a book to adequately explain my views on morality, but hopefully that’s enough to at least illustrate how it is that “objective morality” is about as far as one can possibly get from what I actually do believe.