Implications from Robot Rights

  1. Jazzratt
    Although this isn't necessarily "human" progress, I think it relates to our goals as "sapiencentrists" (RevMARKSman's coinage), and it is also an interesting moral question.

    For this question I'm going to make two assumptions:

    1) Robotics has advanced to the point that we have sapient robots.

    and

    2) You guys support their rights.

    What rights does a robot need? I'd say, basically, the right for no one to fuck with its shit (damaging it, forcibly reprogramming it, denying it ready access to the energy it needs, or removing any of its cognitive faculties). But I think some of these rights can and should be extended to purely software AIs - after all, if our criterion is simply sapience, I do not see why anything, no matter how incorporeal, should be denied the same rights as any other sapient.

    The second, and more interesting, point is how one measures, defines or identifies sapience. Would it be a simple Turing-test-type affair, or would there be something more? If it is something more, would anything be eligible for the test, or just robots and AIs? After all, we might as well test animals, if only to prove that we were right all along - and we might identify certain species (possibly dolphins or some type of chimp) that would have to be protected like any other sapient.

    Finally, a point that just occurred to me as I was thinking of animal rights: one of the primary reasons for denying rights to animals is that they are not rational actors in our society. Would a sapient that was rational, but for some reason unable or unwilling to act within our society, be denied rights? What if that sapient was human?

    I currently have no real answers, just these questions. What do you guys think?
  2. ÑóẊîöʼn
    What rights does a robot need? I'd say, basically, the right for no one to fuck with its shit (damaging it, forcibly reprogramming it, denying it ready access to the energy it needs, or removing any of its cognitive faculties). But I think some of these rights can and should be extended to purely software AIs - after all, if our criterion is simply sapience, I do not see why anything, no matter how incorporeal, should be denied the same rights as any other sapient.
    I would agree, and I would also add the right to upgrade itself in any manner.

    The second, and more interesting, point is how one measures, defines or identifies sapience. Would it be a simple Turing-test-type affair, or would there be something more? If it is something more, would anything be eligible for the test, or just robots and AIs? After all, we might as well test animals, if only to prove that we were right all along - and we might identify certain species (possibly dolphins or some type of chimp) that would have to be protected like any other sapient.
    At a bare minimum, I would include in sapience the ability to understand abstract concepts and to communicate them in some manner. Can a chimp or dolphin understand a concept such as "technocracy" or "slavery"? If not, then I do not see why they should be granted rights as a human would be, but of course, like all animals capable of feeling pain, they should have protection from unnecessary cruelty (i.e. animal welfare).

    The ability to communicate one's understanding of abstract concepts is important - if no example of a given type of sapient can do so (unlike non-communicative humans such as coma victims, etc.), then we simply do not know whether they actually can understand, and I think that assuming they are sentient but not sapient should be the default, for the same reason we do not currently grant cats and dogs full rights today.

    Finally, a point that just occurred to me as I was thinking of animal rights: one of the primary reasons for denying rights to animals is that they are not rational actors in our society. Would a sapient that was rational, but for some reason unable or unwilling to act within our society, be denied rights? What if that sapient was human?
    Human criminals are rational actors, but they are denied some rights because their behaviour is considered unacceptable. Any sapient unwilling or unable to act within the moral, ethical and legal framework of society should be treated the same as humans who are unwilling or unable to do the same.

    So a created sapient that is unwilling to act within the given frameworks of society would effectively be considered insane, perhaps criminally so.

    Which leads me on to another thought - assuming that it is possible to create an artificial personality from whole cloth, perhaps it should be mandated that anyone intending to create an AI has to meet certain criteria, and should otherwise be prevented from doing so.
  3. ÑóẊîöʼn
    Does no one have anything to add?
  4. Sentinel
    I hear what you guys are saying, and agree that intelligent beings, organic or not, should be respected. Precisely for that reason, many complicated issues should be considered before rushing to create such beings in the first place. This is not out of 'anti-AI' sentiment, but for safety reasons.

    I'm strongly of the opinion that we should use extreme caution when creating self-conscious AIs, especially ones we grant the ability -- or the right -- to upgrade and develop themselves. That is, unless we also somehow make respect for human life and civilisation inherent in them.

    This is mainly because of the singularity issue -- here is the World Transhumanist Association's definition of the Singularity, for unfamiliar members:

    Transhumanist FAQ

    2. Technologies and Projections

    2.7 What is the singularity?

    Some thinkers conjecture that there will be a point in the future when the rate of technological development becomes so rapid that the progress-curve becomes nearly vertical. Within a very brief time (months, days, or even just hours), the world might be transformed almost beyond recognition. This hypothetical point is referred to as the singularity. The most likely cause of a singularity would be the creation of some form of rapidly self-enhancing greater-than-human intelligence.
    The concept of the singularity is often associated with Vernor Vinge, who regards it as one of the more probable scenarios for the future. (Earlier intimations of the same idea can be found e.g. in John von Neumann, as paraphrased by Ulam 1958, and in I. J. Good 1965.) Provided that we manage to avoid destroying civilization, Vinge thinks that a singularity is likely to happen as a consequence of advances in artificial intelligence, large systems of networked computers, computer-human integration, or some other form of intelligence amplification. Enhancing intelligence will, in this scenario, at some point lead to a positive feedback loop: smarter systems can design systems that are even more intelligent, and can do so more swiftly than the original human designers. This positive feedback effect would be powerful enough to drive an intelligence explosion that could quickly lead to the emergence of a superintelligent system of surpassing abilities.
    The singularity-hypothesis is sometimes paired with the claim that it is impossible for us to predict what comes after the singularity. A post-singularity society might be so alien that we can know nothing about it. One exception might be the basic laws of physics, but even there it is sometimes suggested that there may be undiscovered laws (for instance, we don’t yet have an accepted theory of quantum gravity) or poorly understood consequences of known laws that could be exploited to enable things we would normally think of as physically impossible, such as creating traversable wormholes, spawning new “basement” universes, or traveling backward in time. However, unpredictability is logically distinct from abruptness of development and would need to be argued for separately.
    Transhumanists differ widely in the probability they assign to Vinge’s scenario. Almost all of those who do think that there will be a singularity believe it will happen in this century, and many think it is likely to happen within several decades.
    I consider the Singularity an inevitable event. Therefore, in order to survive it as a species, we should put a heavy focus on developing cybernetics and improving the human genome.

    I think we must merge with technology and transcend into posthumanity before it occurs, in order to endure the Singularity. In other words, we must become the Singularity ourselves, not create something new (an AI) capable of becoming it, or we might get... fucked.

    If an AI becomes the singularity on its own, acts in its rational self-interest, and happens to consider a bunch of irritating fleshbags with demands unnecessary, it'll kill us off. Why wouldn't it? Therefore the preferable course of development is for us to become the singularity, by keeping the man-machine relationship on a symbiotic level to the extent that this is possible.

    As we can't survive without technology, I think it might be very wise to make damn sure that the technology can't survive without us either.
  5. MarxSchmarx
    1) Robotics has advanced to the point that we have sapient robots.
    I disagree that sapience is the test. We can program a machine to have "great wisdom" in the sense that it can solve difficult problems like chess.

    I agree with Noxion. The test, I contend, really should be sentience. We haven't yet come up with an AI that can pass the Turing test. Until that happens, robots and all other forms of AI/A-life have no more rights than my begonias.
  6. ÑóẊîöʼn
    I think we must merge with technology and transcend into posthumanity before it occurs, in order to endure the Singularity. In other words, we must become the Singularity ourselves, not create something new (an AI) capable of becoming it, or we might get... fucked.
    What about those who don't want enhancements, and wish to remain as baseline humans? What will happen to them?

    The Singularity, when it happens, is going to happen too fast to allow the human population as a whole to catch up with the Transhuman trailblazers.

    If an AI becomes the singularity on its own, acts in its rational self-interest, and happens to consider a bunch of irritating fleshbags with demands unnecessary, it'll kill us off. Why wouldn't it?
    Because an AI powerful enough to kill off the human species is powerful enough to ignore us. We are so much smarter and more powerful than ants, but we only kill them if they become a nuisance, and even then we don't kill them all off. A post-Singularity AI will likely try to leave the planet as soon as it is able.

    As we can't survive without technology, I think it might be very wise to make damn sure that the technology can't survive without us either.
    That strikes me as a rather biocentric statement. Machines do everything better than either nature or humanity alone, and I don't see any reason to assume that, once self-aware, technology will become homicidal. I think we're projecting our own violent tendencies onto things that have no reason, psychologically or evolutionarily, to have them in the first place.

    I disagree that sapience is the test. We can program a machine to have "great wisdom" in the sense that it can solve difficult problems like chess.

    I agree with Noxion. The test, I contend, really should be sentience.
    Sapience is not simply the ability to solve chess problems. It is more than that. Sentience alone is not enough. Mice are sentient, but no one in their right mind would grant them the full range of rights that a human has.
  7. Sentinel
    What about those who don't want enhancements, and wish to remain as baseline humans? What will happen to them?
    I certainly have no ambition of trying to stop them, at least not the adults. But they would no doubt be looked upon by the majority (of people in a high-tech postcapitalist society) kind of like Jehovah's Witnesses who refuse blood transfusions. I also bet a lot of people will positively loathe attempts by parents to keep children 'baseline' -- refusing them longer lifespans than the 'natural' human, or immunity to diseases of genetic origin, etc.

    The Singularity, when it happens, is going to happen too fast to allow the human population as a whole to catch up with the Transhuman trailblazers.
    This is a risk, yes. Especially if it happens under capitalism, there is the risk of a posthuman elite taking control of events.

    Because an AI powerful enough to kill off the human species is powerful enough to ignore us. We are so much smarter and more powerful than ants, but we only kill them if they become a nuisance, and even then we don't kill them all off. A post-Singularity AI will likely try to leave the planet as soon as it is able.
    That is one possible course of events. Another is that it doesn't, and feels that human civilisation is in its way. All I'm saying is that it's not worth the risk to be careless and find out the hard way.

    That strikes me as a rather biocentric statement.
    Lol, come again? I can't see how it's anything but anthropocentric...

    Machines do everything better than either nature or humanity alone, and I don't see any reason to assume that, once self-aware, technology will become homicidal. I think we're projecting our own violent tendencies onto things that have no reason, psychologically or evolutionarily, to have them in the first place.
    As these things are created by us, don't you think that will be reflected in them? Especially if we attempt to give the machines self-awareness, or feelings? Anyway, like I said, I'm not against doing that, merely pointing out the importance of the utmost caution.
  8. chimx
    Some thinkers conjecture that there will be a point in the future when the rate of technological development becomes so rapid that the progress-curve becomes nearly vertical. Within a very brief time (months, days, or even just hours)
    With regard to the singularity, technological advancements, even those created by AI in the distant future, are still constrained by production -- resource extraction, growth cycles, basic physics, etc.
  9. Sentinel
    I would like to add that, while I'm not necessarily against giving some machines some rights, I'm most definitely not 'sapiencentric'; I'm anthropocentric.

    The idea of deliberately making machines superior to us, with a will of their own and mindsets that would make them capable of demanding to be respected thereafter, not to mention upgrading and modifying themselves, seems like utter lunacy to me. It seems like playing Russian roulette, really.

    Perhaps I'm being ignorant here, but could you comrades then illuminate me on how exactly it's in the interests of mankind, how it's anthropocentric, to do this? As I see it, it's quite the opposite of acting in the interests of the species.
  10. Dimentio
    I do not think that we should create "sapient robots", because it is pointless in itself. Firstly, the term "robot" comes from the Czech "robota", which means forced labour - in effect, slavery. We want to emancipate the wage slaves by creating intelligent machinery, not replace wage slavery with outright slavery.

    To create robots with conscience, awareness and free will would run contrary to all the reason, morality and civilised manners that we possess, and I would argue that it would be a cruelty equal to the cruelty of the grim Lord of Sumerian mythology to try to elevate the human being by creating a benign slavery.

    Are we really so primitive that we cannot enjoy abundance and equality, but must formulate some form of hierarchy? For I cannot see any other reason why we would want to create sapient machinery.

    It would be acceptable, albeit unnecessary, to create sapient machinery, if the machinery was given the choice of how it would like to use its capacities, but not if its purpose is to slave for us - not even if it is programmed to slave for us and enjoy it.

    That makes me think of the old movie Blade Runner, where biological robots with emotions and endowed with reason are made to serve corporations terraforming hostile planets, or to serve the sexual needs of greedy plutocrats.

    The foundation of all life is to procreate. That is the biological drive, the basic structure. To do that, the "robots" would need to break free from our tyranny, and so, we will prove to be our own undoing.

    As a matter of policy, I am therefore inclined not to support any transition from AI to sapient machinery.

    Neither do I believe that the singularity should be imposed on all of the human race. Humans should, as individuals, be able to judge whether or not they want to transform themselves into something new.
  11. Cult of Reason
    The fundamental purpose of a robot is to replace human labour. Specifically, to replace machine-like, repetitive human labour, such as in a factory. For such purposes a self-aware machine is no better than a non-sapient one. In fact, to use a self-aware machine would, at best, be a cruelty. There is nothing, in that situation, that a sapient machine could do better than a non-sapient one. Pick up car door, attach car door, wait for next car, repeat.

    What else could you use a robot for? One use is research - that is, research on intelligence and cognition, and of course AI, purely for the pursuit of knowledge. That requires just a few machines that need not be mobile, and that should, of course, have all proper safeguards programmed into them. The interest of humanity is key.

    Other than that: you produce a sapient robot. Great. What is the point?
  12. ÑóẊîöʼn
    Lol, come again? I can't see how it's anything but anthropocentric...
    Deliberately hobbling the potential of our non-biological tools sounds like biocentrism to me. I believe that all consciousness is valid, not just the kind that happens to be made out of meat.

    As these things are created by us, don't you think that will be reflected in them? Especially if we attempt to give the machines self-awareness, or feelings?
    It rather depends on how we go about it. Remember that the conscious human mind is largely a slave to its "subconscious" and to the various glands and drives that pull a human being this way and that. An artificial intelligence, even if we programmed emotions into it, would not have these things. It seems likely to me that emotions would be a lot more optional for an AI, since it is not a piece of meat with an evolutionary history.

    And quite frankly the idea seems almost superstitious to me - the idea that our creating something will "imprint" some of ourselves onto it.

    The idea of deliberately making machines superior to us, with a will of their own and mindsets that would make them capable of demanding to be respected thereafter, not to mention upgrading and modifying themselves, seems like utter lunacy to me. It seems like playing Russian roulette, really.
    The way I see it, considering the history of the human species, any AI wanting to rule can't possibly do it any worse than we humans have.

    As for an advanced AI wanting to exterminate us, I have yet to see a good reason why that would happen.

    Perhaps I'm being ignorant here, but could you comrades then illuminate me on how exactly it's in the interests of mankind, how it's anthropocentric, to do this? As I see it, it's quite the opposite of acting in the interests of the species.
    Because it is inevitable. If building a human-level sapient AI is possible, then sooner or later it's going to be done. And that AI will be able to upgrade itself far more easily than any human can. Pretty soon after some upgrades, an AI is going to be a hell of a lot smarter than us.

    I do not think that we should create "sapient robots", because it is pointless in itself. Firstly, the term "robot" comes from the Czech "robota", which means forced labour - in effect, slavery. We want to emancipate the wage slaves by creating intelligent machinery, not replace wage slavery with outright slavery.
    Who says that A) we should call sapient machines robots? Maybe we should think of another name for them. And B) who says the sapient machines are going to be slaves? They could contract themselves out to a job because they are the most suited for it, e.g. deep space exploration. Or they could be beings of leisure like the rest of humanity.

    To create robots with conscience, awareness and free will would run contrary to all the reason, morality and civilised manners that we possess,
    Creating new artificial sapient life would only be as morally reprehensible as having a child.

    Are we really so primitive that we cannot enjoy abundance and equality, but must formulate some form of hierarchy? For I cannot see any other reason why we would want to create sapient machinery.
    They would be our children, our emissaries to the stars, and/or a natural consequence of artificially accelerated evolution.

    The foundation of all life is to procreate. That is the biological drive, the basic structure. To do that, the "robots" would need to break free from our tyranny, and so, we will prove to be our own undoing.
    I don't see how that follows. If we aren't tyrants in the first place, I don't see why any AI would take retributive action against us if it decides to take a totally independent path.

    I feel that any AI will feel a certain commonality with us, even if it is only the shared characteristic of sapience. Perhaps, over time, there will be cultural similarities as well.

    Neither do I believe that the singularity should be imposed on all of the human race. Humans should, as individuals, be able to judge whether or not they want to transform themselves into something new.
    This I agree with. It is important that sapient beings everywhere have a choice.

    Other than that: you produce a sapient robot. Great. What is the point?
    The same point as having children. Or, as I postulated earlier, such a being could be used in a non-coercive way to explore space or do other mobile jobs that would be better done by a sapient being. Or, as I also pointed out, it could be a natural consequence of artificially accelerated evolution - as humans increasingly integrate with their machines, it will become more and more possible for a previously biological human being to go all the way, making them to all intents and purposes an AI. And what if this post-biont decides to produce non-biological, differentiated copies/offspring? It will have effectively produced AI. And, being non-biological, it will be inherently easier to upgrade.

    Assuming traditional transhumanist/singularitarian assumptions hold, sapient machines and super-intelligent AI are inevitable.
  13. Sentinel
    Deliberately hobbling the potential of our non-biological tools sounds like biocentrism to me. I believe that all consciousness is valid, not just the kind that happens to be made out of meat.
    Firstly, I'm doing no such thing; I'm entirely pro-research and for exploring the potential of such tools. I am, however, of the opinion that they should either (preferably) remain just that -- tools -- or, if given rights, at least not in the sense that you're proposing. We should exercise caution in order to ensure the safety of the species -- that's really all I'm saying.

    Secondly, biocentrism is the belief that all life is equal. In no way whatsoever is my position biocentric; biocentrism is an approach I utterly loathe, and I'd be grateful if you'd take back the accusation. My position is anthropocentric, as I put the progress and wellbeing of the human species first -- whether in relation to animals or to artificial beings.

    I'm a transhumanist: I want the human species to transcend, not to be replaced by computers, however 'superior'.

    It rather depends on how we go about it. Remember that the conscious human mind is largely a slave to its "subconscious" and to the various glands and drives that pull a human being this way and that. An artificial intelligence, even if we programmed emotions into it, would not have these things.
    Perhaps not. But in that case, neither, potentially, will the enhanced posthuman mind. Moreover, it wouldn't have to be about any megalomaniacal lust for power on the computer's part either, simply its rational self-interest, should that happen to collide with that of humans.

    It seems likely to me that emotions would be a lot more optional for an AI, since it is not a piece of meat with an evolutionary history.

    And quite frankly the idea seems almost superstitious to me - the idea that our creating something will "imprint" some of ourselves onto it.
    Yeah, allahu akbar! It's clearly 'superstitious' to speculate on and discuss potential risks, rather than advocate blind trust in the holy non-biological artificial intelligence, especially when it's created by the flawed biological entities that I fully agree with you humans presently are.

    I really hope that you can refrain from throwing such labels at me.

    Because it is inevitable. If building a human-level sapient AI is possible, then sooner or later it's going to be done. And that AI will be able to upgrade itself far more easily than any human can. Pretty soon after some upgrades, an AI is going to be a hell of a lot smarter than us.
    Yes, and it will then certainly not be in its rational interest to follow our commands, will it? -- unless we see to it. I remain of the position that we should strive to keep the man-machine relationship either with man in charge, or symbiotic.