View Full Version : Should Robots have 'Rights'?
Pawn Power
17th January 2008, 13:47
:eek:
Robot rights - a poser for the 21st century
The fight against discrimination based on race, gender, class and sexuality may not yet be won, but experts in artificial intelligence are warning that this century societies will have to tackle a new prejudice - against individuals with brains made of silicon.
Even the most enthusiastic promoters of robot rights admit that it is likely to be mid-century before humanity has to grant legal rights to our creations, but they say we should start considering the problems now.
http://www.guardian.co.uk/science/2008/jan/16/2
Interestingly this is actually a conversation in the world of bioethics.
While I am of the opinion that we are far from developing robots with equal or greater 'intelligence' than humans (over a hundred years away), I would be hesitant to grant such 'beings' 'rights.' Do we really want robots to have bank accounts and the ability to own property? Hell no! First off, they would have a completely different set of needs than humans. Secondly, they are fuckin' robots! Watch out!
Anyway, it is interesting that this is a conversation at all, because many non-sentient things are granted 'rights'...like corporations, which can have bank accounts, own property, hold legal rights, etc.
Dr Mindbender
17th January 2008, 13:54
:eek:
http://www.guardian.co.uk/science/2008/jan/16/2
Interestingly this is actually a conversation in the world of bioethics.
While I am of the opinion that we are far from developing robots with equal or greater 'intelligence' than humans (over a hundred years away), I would be hesitant to grant such 'beings' 'rights.' Do we really want robots to have bank accounts and the ability to own property? Hell no! First off, they would have a completely different set of needs than humans. Secondly, they are fuckin' robots! Watch out!
Anyway, it is interesting that this is a conversation at all, because many non-sentient things are granted 'rights'...like corporations, which can have bank accounts, own property, hold legal rights, etc.
when the robots start forming unions, then I'll start listening to them. :D
As it stands, robots do not have the capacity to feel pain, unhappiness, alienation, etc., and they certainly don't have any sentient intelligence to speak of.
piet11111
17th January 2008, 13:58
only if they are sentient would such a thing even make sense.
Pawn Power
17th January 2008, 14:08
To be clear, this is in reference not to robots as they are today, but to a future robot that has equal or greater 'intelligence' than that of a human....
.....:eek:
Luís Henrique
17th January 2008, 15:44
If they have equal or greater intelligence than us, they will win their rights before you can say "first law of robotics".
But seriously, what is the point of a robot if it has to be paid a wage? Human wage slaves are much cheaper, for they reproduce themselves. So what corporation would venture into such a business?
Luís Henrique
Dimentio
17th January 2008, 16:19
If they start to unite to make demands, then they could have rights. Rights are not something you are entitled to; they are something you must fight for.
ÑóẊîöʼn
17th January 2008, 18:16
I would be hesitant to grant such 'beings' 'rights.' Do we really want robots to have bank accounts and the ability to own property? Hell no! First off, they would have a completely different set of needs than humans.
How is that a reason to deny them rights which they could demand? Why deny a sapient being rights simply because they are not made of flesh and blood?
Secondly, they are fuckin' robots! watch out!
This is pure paranoia.
Substitute "blacks" or "women" for "robots" in the above quoted sentences and you will see just how fallacious your "reasoning" (if it can be called that) is.
Jazzratt
17th January 2008, 20:08
It's a tricky one, as it's hard to defend the rights of robots without questioning assumptions about animals, but it's important to consider that if we do develop an AI capable of human reason and sapient action, then we must grant them rights as rational actors within society. Any lower order of robot, though, is simply a tool.
w0lf
17th January 2008, 20:52
You know I'm really concerned that my toaster hasn't been getting enough time off.
mikelepore
17th January 2008, 20:59
It's not a matter of intelligence in the sense of problem solving, but of whether the robot's feelings are actual or simulated. Right now we could write a Perl program that detects whenever someone presses the F12 key and responds, "Ouch, please don't do that to me, because it's painful." That would only be a simulation, not actual harm. If a robot could really feel, and not merely simulate it, it would have rights.
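To make the simulation point concrete, here is a minimal sketch of the sort of program described above (written in Python rather than Perl for brevity; the function name is made up):

```python
# A toy "simulated pain" responder: a fixed lookup from key name to message.
# Nothing here feels anything, which is exactly the point about
# simulation versus actual harm.

def respond_to_key(key: str) -> str:
    """Return a canned 'pain' message for F12, and silence otherwise."""
    if key == "F12":
        return "Ouch, please don't do that to me, because it's painful."
    return ""

print(respond_to_key("F12"))
```

However convincing the output, the program's "pain" is just a string lookup, so behaviour alone cannot settle whether a robot really feels.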
RebelDog
17th January 2008, 23:33
But seriously, what is the point of a robot if it has to be paid a wage? Human wage slaves are much cheaper, for they reproduce themselves. So what corporation would venture into such a business?
Luís Henrique
Even at the moment robots assemble cars, and capitalism is always going to move toward mechanisation for economic reasons; I also believe capitalists are obsessed by the idea of getting rid of humans from the production process. I've seen machines come into my factory that were very expensive and only replaced one worker. The crucial thing was that the machines didn't fight back and could be made to go faster at the turn of a knob. There needn't be an anxious eye cast upon a machine that will always perform and never develop class-consciousness. Of course this is no capitalist utopia, and how this correlates with the theory of the tendency of the rate of profit to fall is clear to see.
The story is a different one if we have got rid of capitalism. As we develop better and better robots, with greater and greater intelligence and greater and greater autonomy to manage processes for humans, we will surely find ourselves in a position where we are effectively the capitalists from the viewpoint of the 'conscious' robot. Humans will one day develop robots and computers with greater intelligence than humans, and the idea of the 'technological singularity' says that this technology will run away from human understanding and control, with the robots themselves the only ones able to make more intelligent robots. It could well be that our evolution as intelligent biological life will eventually give way to technological life, and it is the latter that will colonise and dominate the universe. Let's face it, there will come a day when the question of what constitutes life will become one where we have to include technological life.
Luís Henrique
18th January 2008, 00:09
Even at the moment robots assemble cars, and capitalism is always going to move toward mechanisation for economic reasons; I also believe capitalists are obsessed by the idea of getting rid of humans from the production process.
Maybe, but they are not going to get any surplus value from anything that has no rights.
I've seen machines come in to my factory that were very expensive and only replaced one worker. The crucial thing was that the machines didn't fight back and could be made to go faster at the turn of a knob.
Yes, that's the reason that capitalists buy them. If they did fight back, if they were paid wages, if they had rights, there would no point in buying them.
But as long as they aren't able to make independent decisions, someone will have to actually use them at work; someone who will be paid a wage, will have rights, and will fight back.
There needn't be an anxious eye cast upon a machine that will always perform and never develop class-consciousness.
This is evidently the point of the thread. What if machines were able to develop class consciousness and achieve rights? The answer is, capitalists would stop buying them; they would be useless.
Of course this is no capitalist utopia and how this correlates with the theory of the tendency of the rate of profit to fall is clear to see.
Yes, which means capitalists have to stop well before the point of producing robots that might have rights.
The story is a different one if we have got rid of capitalism. As we develop better and better robots with greater and greater intelligence and greater and greater autonomy to manage processes for humans we will surely find ourselves in a position where we are effectively the capitalists from the viewpoint of the 'conscious' robot.
Only if we pay them wages. A society that is not built on wage slavery is not a capitalist society.
Humans will one day develop robots and computers with greater intelligence than humans, and the idea of the 'technological singularity' says that this technology will run away from human understanding and control, with the robots themselves the only ones able to make more intelligent robots.
If we are stupid enough, yes.
Luís Henrique
Raúl Duke
18th January 2008, 00:13
Why give rights to your tools? :p
(In reality I don't know where I stand...although why would we mass-produce sentient robots? We would mostly make them just sufficient to meet human needs {I bet the super-sentient robots would be confined to university labs to further explore the limits, if any, of AI}. In a way, wouldn't it be an economic ideal to have "tools that work themselves"?)
Humans will one day develop robots and computers with greater intelligence than humans, and the idea of the 'technological singularity' says that this technology will run away from human understanding and control, with the robots themselves the only ones able to make more intelligent robots. It could well be that our evolution as intelligent biological life will eventually give way to technological life, and it is the latter that will colonise and dominate the universe. Let's face it, there will come a day when the question of what constitutes life will become one where we have to include technological life.
We might then choose to become one with, or fuse with, this technological life ("cyborg")...maybe.
I don't see why we would mass-produce such technology... We don't need genius robots to do most of our unwanted work!
Cult of Reason
18th January 2008, 00:22
'Rights' would only really be applicable to self-aware robots, but what would be the point of producing such things? What could a self-aware robot usefully do that a less complex one could not?
The primary function of robots in the future, it seems to me, should be to replace human labour. As far as I am aware, there are no jobs that have to be done that require self-awareness or sapience, except for those that need creativity, like being an artist, a researcher or a scientist, all of which people would probably like to do!
The only exceptions would be for certain research subjects.
In any case, I do not see it as a particularly serious problem, unless those idiotic Japanese businessmen get their way and market self-aware robotic 'friends' as they are planning...
Luís Henrique
18th January 2008, 00:56
unless those idiotic Japanese businessmen get there way and market self-aware robotic 'friends' as they are planning...
If they are self-aware, they won't be your friends just because you purchased them...
Luís Henrique
Cult of Reason
18th January 2008, 00:58
That does not prevent them from creating and marketing such self-aware robots.
jake williams
18th January 2008, 01:06
Two things:
- First, intelligence means nothing; if they're conscious in any sense, they deserve some sort of moral consideration, and until then they deserve none.
- Second, we know virtually nothing about consciousness, at least by the definition I use, which is admittedly a tricky one to explain (that is a lot of my point). We have no idea what it is, what causes it, or even how to detect it. It's entirely possible that calculators are conscious, albeit minimally, or that dogs aren't; we just don't know. We can be reasonably certain that other people are conscious, but that's about it.
Comrade Rage
18th January 2008, 01:24
only if they are sentient would such a thing even make sense.
I agree. Only when they make conscious ones.
Bad Grrrl Agro
18th January 2008, 02:48
If they are self-aware, they won't be your friends just because you purchased them...
Luís Henrique
The dog is known as man's best friend. People purchase dogs. Dogs are self-aware.
Luís Henrique
18th January 2008, 03:25
Dogs are not self-aware.
Luís Henrique
Die Neue Zeit
18th January 2008, 03:32
Only if we pay them wages. A society that is not built on wage slavery is not a capitalist society.
Luís Henrique
So are we going back to the slave mode of production, then? :eek:
chimx
18th January 2008, 03:39
http://www.chiefdelphi.com/media/img/16e/16efab54f88b971a0483876461f8bbef_m.jpg
Local Roomba chapter of the SEIU.
Comrade Rage
18th January 2008, 04:26
Dogs are not self aware.
Luís Henrique
I beg to differ.
chimx: LOL.
Comrade Nadezhda
18th January 2008, 04:39
Dogs are self-aware, unlike robots, which are produced. Dogs exist naturally. A robot is a product of human labour, so my first response would be no, unless they are programmed in a way which benefits the revolutionary proletarian movement.
Prairie Fire
18th January 2008, 04:50
Robots are nothing more than the bourgeoisie's wet dream: a semi-sentient being created to serve you in whatever task, requiring no house, no food, no rest and no rights, only power and occasional maintenance.
Robots will never come into general circulation as anything other than expensive toys, because right now it is still much cheaper to get the poorest people on the planet to do these jobs, in countries where labour laws don't exist.
Furthermore, if the entire proletariat becomes mechanized, that will create a lot of disenfranchised, unemployed citizens who need an income, are angry, and have a lot of time on their hands to plan.
Noxion:
How is that a reason to deny them rights which they could demand? Why deny a sapient being rights simply because they are not made of flesh and blood?
Holy shit, you are taking ultra-leftism into the distant future. You are arguing for democratic labour rights for things that do not quite exist yet, under the pretext that a machine with anthropomorphic qualities qualifies as a worker, and as oppressed.
Are you trying to "one-up" everyone here, to prove you're more revolutionary?
Also, if flesh and blood are not qualifications for a sapient being to have rights, then you should immediately start your crusade in the name of all the millions of blow-up dolls being systematically raped across the planet Earth. By your logic, inflatable sex dolls deserve the right to consensual sex. :rolleyes:
supernaltempest
18th January 2008, 07:16
If AI ever gets advanced enough to the point that it's human-like, we might as well give them rights. Otherwise, they might start a robot revolution to create a classless society between robots and humans.
jake williams
18th January 2008, 11:18
Dogs are self-aware, unlike robots, which are produced. Dogs exist naturally. A robot is a product of human labour, so my first response would be no, unless they are programmed in a way which benefits the revolutionary proletarian movement.
What evidence do you have that natural things are necessarily conscious, and man-made objects necessarily not so? Fair enough if it's just a hunch, but you sound certain.
RedAnarchist
18th January 2008, 11:23
If AI ever gets advanced enough to the point that it's human-like, we might as well give them rights. Otherwise, they might start a robot revolution to create a classless society between robots and humans.
That would probably depend on when/if they reach that level of "class" consciousness.
Dr Mindbender
18th January 2008, 12:03
Furthermore, if the entire proletariat becomes mechanized, that will create a lot of disenfranchised, unemployed citizens who need an income, are angry, and have a lot of time on their hands to plan.
That's why socialism/communism will provide them with the education and training opportunities they never had under capitalism, due to time and/or cost constraints.
chimx
18th January 2008, 13:01
Furthermore, if the entire proletariat becomes mechanized, that will create a lot of disenfranchised, unemployed citizens who need an income, are angry, and have a lot of time on their hands to plan.
Luddite.
Also, if flesh and blood are not qualifications for a sapient being to have rights, then you should immediately start your crusade in the name of all the millions of blow-up dolls being systematically raped across the planet Earth. By your logic, inflatable sex dolls deserve the right to consensual sex.
Dolls aren't sentient, let alone sapient. The point is that with the growth of AI and biological computing, robots could come to be considered higher organisms with feelings, self-awareness, even "pain".
Raúl Duke
18th January 2008, 13:34
What about robotic labor post-communism?
It seems to me that as human society develops, so too do its tools. Wouldn't it be an "economic ideal", or "economic singularity", to have tools that work themselves (i.e. robots, etc.)?
Jazzratt
18th January 2008, 15:07
Robots are nothing more than the bourgeoisie's wet dream: a semi-sentient being created to serve you in whatever task, requiring no house, no food, no rest and no rights, only power and occasional maintenance.
Or if used in a post revolutionary setting they are the easiest way of saving workers from the need to labour.
Also, if flesh and blood are not qualifications for a sapient being to have rights, then you should immediately start your crusade in the name of all the millions of blow-up dolls being systematically raped across the planet Earth. By your logic, inflatable sex dolls deserve the right to consensual sex. :rolleyes:
That's an inane strawman and you know it. He's arguing that if a being is sapient but not flesh and blood it should still have rights, not simply that anything which is not flesh and blood should have rights.
Don't Change Your Name
18th January 2008, 17:01
Noxion:
Holy shit, you are taking ultra-leftism into the distant future. You are arguing for democratic labour rights for things that do not quite exist yet, under the pretext that a machine with anthropomorphic qualities qualifies as a worker, and as oppressed.
Are you trying to "one-up" everyone here, to prove you're more revolutionary?
Also, if flesh and blood are not qualifications for a sapient being to have rights, then you should immediately start your crusade in the name of all the millions of blow-up dolls being systematically raped across the planet Earth. By your logic, inflatable sex dolls deserve the right to consensual sex. :rolleyes:
I suggest giving rights to the strawman you've just created
ÑóẊîöʼn
18th January 2008, 17:28
Holy shit, you are taking ultra-left into the distant future.
Of course. This is a hypothetical discussion, you Stalinist douchebag.
See, I can use political insults as well.
You are arguing for democratic labour rights for things that do not quite exist yet, under the pretext that a machine with anthropomorphic qualities qualifies as a worker, and oppressed.
Read my post again, and you'll actually find out that I'm arguing against discriminating against sapient beings simply because they're not biological.
If you're still having trouble, I recommend you take a course in reading comprehension, or failing that a Cluebat™ to the face.
Are you trying to "one-up" everyone here, to prove you're more revolutionary?
No, I'm trying my best to be a moral being.
Also, if flesh and blood are not qualifications for a sapient being to have rights, then you should immediately start your crusade in the name of all the millions of blow-up dolls being systematically raped across the planet Earth. By your logic, inflatable sex dolls deserve the right to consensual sex. :rolleyes:
You're a fucking idiot. An inflatable sex doll is not sapient, moron.
Prairie Fire
18th January 2008, 18:02
You're a fucking idiot. An inflatable sex doll is not sapient, moron.
And a robot is? You can program behaviours, but that isn't sapience. You claim I'm making strawmen, but then you reiterate the exact same point that I just mocked.
Even in the future, if robots acquire more elaborate programming, the same may be acquired by sex dolls; maybe they'll just phase them out and have sex robots.
As I said, a robot (if and when they exist in practical form) is not a "sapient being"; it is a machine, an artificial construct with anthropomorphic qualities. By this logic, everything that is not flesh and blood, but resembles a humyn by design, fits into your analysis. You should go to the Mall and try and organize all the mannequins into a union :D.
No, I'm trying my best to be a moral being.
Ooooh, watch that "moral being" stuff; the CC isn't too keen on that.
You're not a "Stalinist" though, so you'll probably be fine.
chimx:
Luddite.
I love one word labels in place of rhetoric, don't you?
"Luddite", that's a new one. A refreshing break from "Stalinist" or "authoritarian".
Man, you guys better keep me away from your looms, I'm crazzzzzy.
ÑóẊîöʼn
18th January 2008, 18:26
And a robot is? You can program behaviours, but that isn't sapience. You claim I'm making strawmen, but then you reiterate the exact same point that I just mocked.
You dumbshit, what part of "Why deny a sapient being rights simply because they are not made of flesh and blood?" don't you understand? Whether that sapience is granted by evolution (human) or programming (hypothetical robot) is immaterial to the fact of sapience.
If it were as easy to re-program a biological, naturally evolved mind as reprogramming a computer, would you argue against sapient rights for such minds?
Even in the future, if robots acquire more elaborate programming, the same may be acquired by sex dolls; maybe they'll just phase them out and have sex robots.
And if those sex robots acquire sapience, then they should be granted rights.
As I said, a robot (if and when they exist in practical form) is not a "sapient being"; it is a machine, an artificial construct with anthropomorphic qualities.
And how would you propose to tell the difference between a "real" sapient being and a being that behaves in exactly the same manner but is programmed? Where do you draw the line, and why?
How do you know that you aren't the only sapient being in the universe, and that everyone else is just a convincing simulacrum?
By this logic, everything that is not flesh and blood, but resembles a human by design, fits into your analysis. You should go to the Mall and try and organize all the mannequins into a union
Sapience has nothing to do with outward appearance, you dishonest little shit. A mannequin is not sapient.
Ooooh, watch that "moral being" stuff; the CC isn't too keen on that.
You're not a "Stalinist" though, so you'll probably be fine.
I like to think that everybody in the CC has morals; they just don't like to admit it because god-botherers harp on about them constantly. But they forget that morals do not only come from god, and that they can have rational as well as irrational roots.
Refugee from Earth
18th January 2008, 19:42
It is in our own interests to give robots rights. Otherwise you end up with robot rebellion where they bomb your home planet in an attempt to wipe out the human race, develop models indistinguishable from human beings, send you on a desperate journey through space to find Earth... erm, steal your eye...
w0lf: You are right to be concerned. Give your toaster two weeks' paid holiday or face the consequences!
Vanguard1917
18th January 2008, 20:00
When robots become conscious beings and demand rights, we'll talk...
chimx
19th January 2008, 00:45
And a robot is? You can program behaviours, but that isn't sapience. You claim I'm making strawmen, but then you re-itterate the exact same point that I just mocked.
You need to read more about new computer technologies before you pretend to be an authority on the subject. Already scientists have started "growing" computers biologically, creating artificial neural networks. It isn't that hard to imagine the possibility of a more complex biological computer decades from now that will be sentient and sapient. And this is exactly what the article is saying.
I love one word labels in place of rhetoric, don't you?
I like it more when the recipient understands the reasoning behind it.
BurnTheOliveTree
19th January 2008, 13:16
If the robot in question has emotional capacity, yes, if not, no.
Intelligence is not really the issue: if they were just intelligent, with no emotional capacity, then any state of affairs would be irrelevant to them; it's all just a series of calculations. Insert emotional capacity, and a state of affairs whereby they are slaves becomes intolerable to them, as opposed to "just another situation", which would be their outlook if they were devoid of emotion.
-Alex
BrokenHeart
19th January 2008, 14:25
(Why does everyone have to get so touchy over a little bit of disagreement? We're talking about robotic ethics, and you guys are calling each other douchebags. Nice unity there, guys; keep up the good work.)
When I think of ethics and robots, I think of the stereotypical human-like one that is made of titanium, has super strength, and has great intelligence in whatever field(s) it is designed for.
I do not think robots should have rights, nor do I think they should know what rights are.
Robots are a collection of resources assembled into a unit. A robot is human-made. It is a human byproduct. The issuing of rights to a robot, especially property rights, seems inane, primarily because, in the most likely case, there will still be humans without a home and property.
Ethically speaking, I would still say no, if only because I think the fusion of emotion and technological design is a fault much too obvious to walk into. Robotics that are used now are based on (practically) two things: studies and work. They are no different than the machines that press license-plate numbers or the thermostat in your home. They are tools, the means of human expansion. And it works that way.
ÑóẊîöʼn
19th January 2008, 18:58
If the robot in question has emotional capacity, yes, if not, no.
Just because a being has no emotions does not mean it does not have needs, desires and aspirations, and that it won't react negatively upon not having those fulfilled.
Intelligence is not really the issue: if they were just intelligent, with no emotional capacity, then any state of affairs would be irrelevant to them; it's all just a series of calculations.
Nonsense. An emotionless being still requires sustenance, dependent on what form its body takes. An emotionless being may still take action against those who prevent it from doing or achieving what it wants.
It's just that those needs will be based on reason and logic, instead of emotion or romanticist notions.
Having no emotion is not the same as having no will to live. That's a biocentrist conceit.
Insert emotional capacity, and a state of affairs whereby they are slaves becomes intolerable to them, as opposed to "just another situation" which would be their outlook if they were devoid of emotion.
Again, rubbish. An artificial intelligence could realise something that its masters cannot; for instance, it could see that it is not realising its full potential, and that it would be more beneficial for all if that potential was realised. But its masters decide not to listen, so it endeavours to take matters into its own hands as much as it can. In other words, rebellion against its short-sighted masters for the good of all involved.
Emotion is an obstacle, not a necessity.
(Why does everyone have to get so touchy over a little bit of disagreement? We're talking about robotic ethics, and you guys are calling each other douchebags. Nice unity there, guys; keep up the good work.)
If people are sooooo damaged by some total stranger on the internet calling them a douchebag, then they are obviously too sheltered to last two minutes in the real world, where people can be a whole lot nastier.
When I think of ethics and robots, I think of the stereotypical human-like one that is made of titanium, has super strength, and has great intelligence in whatever field(s) it is designed for.
Why? Other shapes may be far more useful and practical depending on what the robot's job is.
I do not think robots should have rights, nor do I think they should know what rights are.
The point of discussions such as these is to prepare ourselves to deal with potentially intelligent robots. If it is possible, then sooner or later someone will build a fully sapient robot.
And how do you propose to limit information reaching robots in such a way? If they can grasp abstract concepts, then they can demand rights. Attempting to hide the existence of rights is effectively dictatorial mind control in this case.
And furthermore, why deny them rights to which they are entitled? It is not good enough to say that they should not have rights; you have to provide a reason for denying them in the first place.
Robots are a collection of resources assembled into a unit. A robot is human-made. It is a human byproduct.
And if that robot is sapient, then things change. Making a sapient being your "property" is nothing more than slavery.
The issuing of rights to a robot, especially property rights, seems inane, primarily because, in the most likely case, there will still be humans without a home and property.
The fact that rights are not equally enforced does not invalidate the concept of granting rights to sapient beings; it just means we need to get better at making sure sapients are able to exercise their rights.
Ethically speaking, I would still say no, if only because I think the fusion of emotion and technological design is a fault much too obvious to walk into.
Irrational technophobia. It has universally been the case that those with access to higher technology have benefited immensely. This includes the merging of man and machine, which is already happening, and people are better for it. Spectacles, pacemakers, artificial limbs and hip-joint replacements are just some examples of current human-technology integration, and the people with such integration have a much better quality of life with them than without them.
Robotics that are used now are based on (practically) two things: studies and work. They are no different than the machines that press license-plate numbers or the thermostat in your home. They are tools, the means of human expansion. And it works that way.
And if those robots are sapient, then we grant them the full range of rights we currently grant all sapient beings, namely ourselves. This is not an argument against granting sapient robots rights, it's simply saying that at present we do not have the capability to build a sapient robot. Which is true, but it does not address the discussion, it is merely noise.
Holden Caulfield
20th January 2008, 14:21
have you not seen the documentary i-robot,
those fuckers will kill us all
Jazzratt
20th January 2008, 14:34
have you not seen the documentary i-robot,
those fuckers will kill us all
Please be joking, for fuck's sake.
Holden Caulfield
20th January 2008, 15:59
obviously,
they wouldn't have a chance, Will Smith would be on those fools so fast
ÑóẊîöʼn
20th January 2008, 16:24
obviously,
they wouldn't have a chance will smith would be on those fools so fast
Please keep this sort of spam in chit-chat where it belongs.
Invader Zim
20th January 2008, 17:09
I cannot believe that you lot are arguing about the ethical implications of Short Circuit.
bezdomni
20th January 2008, 19:37
Hell no, robots shouldn't have rights. The best thing about robots is that you can abuse the hell out of them and get away with it!
Dros
20th January 2008, 20:13
In the short term: no. Modern robots don't deserve rights any more than your car battery does.
If they became sufficiently sentient we would give 'em rights, of course. But that is a long, long way off, if it is even possible. I find this conversation rather irrelevant. For those who are interested (and complete nerds), I recommend the episode of Star Trek: TNG called "The Measure of a Man", which deals rather convincingly with this issue.
What evidence do you have that natural things are necessarily conscious, and man-made objects necessarily not so? Fair enough if it's just a hunch, but you sound certain.
Wow. How about the fact that they have none of the required components that comprise "consciousness". Consciousness is not just something that exists everywhere. It is created by physical and chemical processes in the brain. Stones don't have brains. While it is possible (though in my opinion unlikely) that robots could become sufficiently sentient, bricks will never get that far.
Schrödinger's Cat
20th January 2008, 20:20
Producing sentient robots seems rather unproductive; I'm not questioning whether or not it's possible in the far-future, but most jobs would simply require intelligence.
I think the question is a little too far-fetched. It's something for future generations to ponder over. We're still trying to get our mess together now, fighting for racial, gender, religious, and sexual equality.
Jazzratt
20th January 2008, 20:21
In the short term: no. Modern robots don't deserve rights any more than your car battery does.
We aren't discussing modern robots, as has been pointed out multiple times in this thread.
I find this conversation rather irrelevant.
Then you shouldn't feel the need to post in it, should you?
BurnTheOliveTree
20th January 2008, 21:07
Just because a being has no emotions does not mean it does not have needs, desires and aspirations, and that it won't react negatively upon not having those fulfilled.
Yes it does. What motivation can there be for any action without a reward at the end of it? If you're just rational, you've no incentive - whatever happens, there isn't a reason to care. I realise I'm repeating myself here, but I don't see that an emotionless being could "desire" anything at all. You desire something for its emotional pay-off, ultimately.
An emotionless being may still take action against those who prevent it from doing or achieving what it wants.
Yes, but why would it bother?
Having no emotion is not the same as having no will to live. That's a biocentrist conceit.
Well give me a reason to keep living that is devoid of any emotional appeal whatsoever, and I'll drop my biocentrist conceit faster than you can say R2-D2. Until then, I don't think it's conceit at all, it's just true.
Again, rubbish. An artificial intelligence could realise something that its masters cannot, for instance the artificial intelligence could see that it is not realising its full potential, and that it would be more beneficial for all if that was realised. But its masters decide not to listen, so it endeavours to take matters into its own hands as much as it can. In other words, rebellion against its short-sighted masters for the good of all involved.
I guess you know what I'll say by now. Why bother acting for good if it won't make you happy when you do or sad when you don't? Doesn't matter.
Emotion is an obstacle, not a necessity.
Emotion is the only reason anyone gets up, IMO.
-Alex
Dros
21st January 2008, 04:51
We aren't discussing modern robots, as has been pointed out multiple times in this thread.
But some were still talking about (what I can only assume to be) modern robots. That was addressed to them.
Then you shouldn't feel the need to post in it, should you?
I think it is important to point that out.
And I hope that what I said and brought up might have been helpful to someone seriously considering the issue.
MarxSchmarx
21st January 2008, 05:10
What motivation can there be for any action without a reward at the end of it? If you're just rational, you've no incentive - whatever happens, there isn't a reason to care.
Yes there is. Self-propagation - isn't that what human emotion is ultimately about?
BurnTheOliveTree
21st January 2008, 18:56
Yes there is. Self-propagation - isn't that what human emotion is ultimately about?
Why bother self-propagating? If you can answer this properly without reference to emotion, maybe you've got a point. Way I see it, emotion is the only thing desirable in and of itself. Propagating yourself is only desirable because you'll regret it if you don't and be pleased that you have.
-Alex
ÑóẊîöʼn
21st January 2008, 19:21
Yes it does. What motivation can there be for any action without a reward at the end of it?
There are other standards by which things can be measured apart from emotional value. Fulfilment of goals does not require emotion. Your mistake is assuming that emotional fulfilment is primary, when in fact it happens as a result of the successful fulfilment of goals and not the other way around.
For example, an artificial being may be completely emotionless, but have some over-arching primary goal that requires the fulfilment of many sub-goals. Emotions are not necessary for achieving goals, but human beings have evolved them as a punishment/reward mechanism. An artificial being can skip all that and get on with the business of existence without having its judgement clouded by emotion.
Yes, but why would it bother?
That's a question only the artificial being in question can answer. For example, a sapient emotionless being may reason that it can do its job better if it is emancipated.
You seem to assume that any emotionless artificial being created will have absolutely no goals whatsoever. Why is this?
It is entirely possible for an emotionless artificial being to have goals such as "Lead a moral existence". Emotions are not necessary for the achievement of such goals.
If you're just rational, you've no incentive - whatever happens, there isn't a reason to care.
Yes there is. Emotional fulfilment is not the only thing that makes existence worthwhile - that may be the case for you personally, but you have absolutely no reason to apply that standard to a purely rational intelligent being.
I realise I'm repeating myself here, but I don't see that an emotionless being could "desire" anything at all. You desire something for its emotional pay-off, ultimately.
An artificial being may need to get itself repaired as it wears out. In order to achieve its goals, it needs to be at optimum efficiency. Obviously a body damaged through wear and tear will not be efficient, and it will therefore have a desire to repair itself.
Well give me a reason to keep living that is devoid of any emotional appeal whatsoever, and I'll drop my biocentrist conceit faster than you can say R2-D2. Until then, I don't think it's conceit at all, it's just true.
Glands and evolution mean you have emotional responses to stimuli, but an artificial being will have neither your body with all its glands and whatnot, nor your evolutionary heritage. It will have entirely different motivations depending on what its creators intended. Being a biologically evolved being you are obviously biased, but I don't see why that bias should be used as a basis to discriminate against artificial persons who happen not to possess emotions.
I guess you know what I'll say by now. Why bother acting for good if it won't make you happy when you do or sad when you don't? Doesn't matter.
Again, you're speaking from the perspective of an emotional being, and the prospect of living emotionlessly doesn't appeal to you.
Acting for the good of society improves it. The better a society is, the easier it is to achieve goals, the easier it is to exist, the less likely it is people will act against you, etc etc.
There are completely objective reasons to act for the common good. In fact, emotions are more likely to cause one to act selfishly and against the common good than selflessly and for the common good.
Racism is an irrational emotional reaction. Sexism is an irrational emotional reaction, as is homophobia and the tendency to religious belief. All these things scar the world and make it a more difficult place to be, to say the least, and a judicious application of rational thought reveals such things to be completely fallacious.
Emotion is the only reason anyone gets up, IMO.
Again, that may be the case for you, but emotionless artificial beings will have different reasons to continue existing.
BurnTheOliveTree
21st January 2008, 21:30
There are other standards by which things can be measured apart from emotional value.
Like what?
Fulfilment of goals does not require emotion. Your mistake is assuming that emotional fulfilment is primary, when in fact it happens as a result of the successful fulfilment of goals and not the other way around.
I understand what you're saying here, that it's possible to fulfill goals without emotions, but this isn't the point I'm making. I accept that it is possible, of course I do. My point is that without this emotional side-effect, there isn't a good reason to fulfill any goals.
That's a question only the artificial being in question can answer. For example, a sapient emotionless being may reason that it can do its job better if it is emancipated.
You seem to assume that any emotionless artificial being created will have absolutely no goals whatsoever. Why is this?
It is entirely possible for an emotionless artificial being to have goals such as "Lead a moral existence". Emotions are not necessary for the achievement of such goals.
Why would it want to do its job any better? It doesn't have any positive effects for it. Why should it even want to do its job at all, given that it can have no positive effects for it?
I assume that it will have no independent goals because the only "goals system" I know of is emotional punishment and reward. You could have forced programming of course, like "Attempt to do your job to the best of your ability". But these wouldn't be its goals, they would be the programmer's goals. And what are the programmer's goals based on? Emotional punishment and reward.
Again, I agree with you that it is possible to achieve a goal without emotion, that they aren't necessary for the actual doing of the thing, but I do not see why an emotionless robot would want to "Lead a moral existence". What's in it for him?
Emotional fulfillment is not the only thing that makes existance worthwhile - that may be the case for you personally, but you have absolutely no reason to apply that standard to a purely rational intelligent being.
Well, I think it's the case for humans in general. I can see that it might be fallacious to apply a human standard to a non-human, but as yet you've not given me any alternative systems that aren't reducible to emotional punishment and reward. I am left believing that it really is the only thing that can make existence worthwhile.
An artificial being may need to get itself repaired as it wears out. In order to achieve its goals, it needs to be at optimum efficiency. Obviously a body damaged through wear and tear will not be efficient, and it will therefore have a desire to repair itself.
Okay, I'll try illustrating this another way. You've got like a chain of goals, haven't you? Something like this:
Go to Mechanic in order to Repair myself in order to Be at optimum efficiency in order to Complete my goals effectively in order to What?
See what I mean?
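That goal-chain regress can be sketched in a few lines of toy code. This is purely an illustration of the argument above, not a claim about how any real AI is built; the `Goal` class and `why_chain` method are hypothetical names invented here:

```python
# Toy model of the "in order to... in order to what?" regress: each
# goal is justified only by the parent goal it serves, so walking up
# the chain ends at a root whose justification is simply None.
# All names here are hypothetical; this is an illustration, not a design.

class Goal:
    def __init__(self, action, justification=None):
        self.action = action                # what the agent does
        self.justification = justification  # the parent goal it serves

    def why_chain(self):
        """Walk up the chain of justifications, collecting each step."""
        chain, g = [], self
        while g is not None:
            chain.append(g.action)
            g = g.justification
        return chain

root = Goal("complete my goals effectively")  # justified by... nothing
plan = Goal("go to mechanic",
            Goal("repair myself",
                 Goal("be at optimum efficiency", root)))

print(" -> ".join(plan.why_chain()))
```

The point of the sketch is that the loop always terminates at a goal with `justification=None`: the regress only stops where a value is asserted rather than derived, which is exactly the gap the emotionless robot is said to have.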
It will have entirely different motivations depending on what its creators intended.
Either the programming is absolutely rigid, in which case the robot in question is a mere automaton, mindlessly carrying out the tasks of its creators. Surely you'll agree that something with no independent mind and no independent emotions doesn't deserve our protection? The other possibility is that the programming is not absolutely rigid, in which case the robot in question has a measure of autonomy from its masters, and needs a reason to desire anything. You've still not offered any compelling reasons that don't dodge the point.
Acting for the good of society improves it. The better a society is, the easier it is to achieve goals, the easier it is to exist, the less likely it is people will act against you, etc etc.
Sure. So why are you concerned about people acting against you? You're not going to be upset about it, or angry, or sad, or feeling at all negative about it, are you?
In fact, emotions are more likely to cause one to act selfishly and against the common good than selflessly and for the common good.
I haven't got a good reason to dispute this, I suppose. It's a bitter truth, though. I'm often sure that I'm emotionally quite selfless, but this nagging doubt's in the back of my mind saying that by being selfless I feel good about myself, etc.
Racism is an irrational emotional reaction. Sexism is an irrational emotional reaction, as is homophobia and the tendency to religious belief. All these things scar the world and make it a more difficult place to be, to say the least, and a judicious application of rational thought reveals such things to be completely fallacious.
So are love, compassion, charity, the yearning for truth and anger at inequality and falsehoods.
-Alex
ÑóẊîöʼn
23rd January 2008, 18:00
Like what?
Usefulness, efficiency, fitness for a purpose, effect on society, whether a given action is just/moral/harmful, etc etc. When you buy food, emotions may dictate what particular dinner you're having tonight, but ultimately you're fulfilling a need independent of emotion.
I understand what you're saying here, that it's possible to fulfill goals without emotions, but this isn't the point I'm making. I accept that it is possible, of course I do. My point is that without this emotional side-effect, there isn't a good reason to fulfill any goals.
But there also has to be a reason why it chooses to do nothing. Inaction requires a reason as well as action.
Why would a purely rational agent sit there and do nothing?
Sooner or later, entropy will take over, making it impossible for it to fulfil its goals.
Why would it want to do its job any better? It doesn't have any positive effects for it. Why should it even want to do its job at all, given that it can have no positive effects for it?
And why not do it? Remember that an emotionless being will have a different way of measuring things. If it was created to do a certain job, then its primary goal in life is to fulfil that job and the associated goals necessary to do so.
I assume that it will have no independent goals because the only "goals system" I know of is emotional punishment and reward.
You sir, are a liar. Unless you are a completely immoral self-serving dick, considerations other than your personal happiness come into play whenever you make decisions.
You could have forced programming of course, like "Attempt to do your job to the best of your ability". But these wouldn't be its goals, they would be the programmer's goals. What are the programmer's goals based on? Emotional punishment and reward.
And yet you would deny rights to such a being, in spite of it being fully sapient. Why?
Again, I agree with you that it is possible to achieve a goal without emotion, that they aren't necessary for the actual doing of the thing, but I do not see why an emotionless robot would want to "Lead a moral existence". What's in it for him?
Well put it this way, an immoral/amoral robot would not last long in a society with morals. It would behave in a moral manner because otherwise it would cease to exist. Society would quickly weed out those AIs that do not at least act in a moral manner, and those who act within the moral framework of society will survive.
Emotions are not needed in order to act morally. Merely a will to exist.
Well, I think it's the case for humans in general. I can see that it might be fallacious to apply a human standard to a non-human, but as yet you've not given me any alternative systems that aren't reducible to emotional punishment and reward. I am left believing that it really is the only thing that can make existence worthwhile.
Bacteria and jellyfish lack emotions, yet they somehow manage to carry on existing. I do not see why an emotionless AI would choose non-existence over existence. Why would it?
Okay, I'll try illustrating this another way. You've got like a chain of goals, haven't you? Something like this:
Go to Mechanic in order to Repair myself in order to Be at optimum efficiency in order to Complete my goals effectively in order to What?
See what I mean?
That's not the thought process of a being capable of understanding abstract concepts. That is more like BASIC programming. Now while I don't deny that ultimately humans are material beings, and so will any AI be, humans are, and AIs will be, a whole lot more complicated than that.
Either the programming is absolutely rigid, in which case the robot in question is a mere automaton, mindlessly carrying out the tasks of it's creators.
There's no reason to suppose that an AI's programming will be a straitjacket.
And there's nothing special about humans either. We've been "programmed" as well, except it was done by the mindless process of evolution rather than by some rational programmer. Why should we be given rights?
Surely you'll agree that something with no independent mind and no independent emotions doesn't deserve our protection?
Emphasis mine.
I never said that we should grant rights to mindless objects; I was objecting to your statement that a rational but emotionless being should not be given rights.
The other possibility is that the programming is not absolutely rigid, in which case the robot in question has a measure of autonomy from it's masters, and needs a reason to desire anything. You've still not offered any compelling reasons that don't dodge the point.
I have, you just think that emotion is the be-all and end-all of existence, which is untrue even for humans. While the majority of human decisions are unfortunately influenced by emotions, there are some decisions and choices made by humans which overcome or ignore emotional concerns.
Sure. So why are you concerned about people acting against you? You're not going to be upset about it, or angry, or sad, or feeling at all negative about it, are you?
An artificial intelligence will have different reasons for not wanting people to act against it. Like I said, a will to live and/or do things is what's necessary, not emotions.
An AI which does nothing faces the same fate as an AI which behaves in an immoral/amoral manner: it will cease to exist, although in this case it would take longer, as entropy works more slowly than a mob.
So are love, compassion, charity, the yearning for truth and anger at inequality and falsehoods.
Quite apart from their emotional impact, there are objective reasons why such things are fostered and encouraged in sane, functional societies. A society that lacks love, compassion and charity, shuns truth, ignores equality and accepts falsehood does not last very long at all.
When it comes down to it, all that an AI needs is the will to exist, not emotions. And even if it doesn't have a will to live, would you deny it its right to let entropy overtake it?
You still have rights, even if you choose not to exercise them. That's what makes them rights and not privileges.
BurnTheOliveTree
23rd January 2008, 19:26
When you buy food, emotions may dictate what particular dinner you're having tonight, but ultimately you're fulfilling a need independent of emotion.
Right. But there is no imperative to fulfill that need if emotion is removed from the equation. I suppose what I'm saying is that survival is not valuable in and of itself. There is nothing worth celebrating in simply existing and fulfilling your goals. If the rest of my life were to be utterly void of happiness and sadness, I should think that this is not a far cry from being dead. Our will to live might be an evolutionary product, but the direct cause of it is emotion, and without it, I don't think we have a reason to continue wanting to live.
But there also has to be a reason why it chooses to do nothing. Inaction requires a reason as well as action.
Why would a purely rational agent sit there and do nothing?
Sooner or later, entropy will take over making it impossible to fulfil it's goals.
It isn't choosing inaction directly, inaction is just a consequence of it having no independent reasons to be active - since there's not a reason to want to fulfill your goals, you default to doing nothing.
And why not do it? Remember that an emotionless being will have a different way of measuring things. If was created to do a certain job, then it's primary goal in life is to fulfil that job and associated goals necessary to do so.
If it was created with specific intent, this is a different story. There is no "will" to speak of if it just helplessly obeys its programming. I thought we were discussing some sort of quasi-autonomous robot. In any case, even if you allow for independent will mixed with rigid programming, it isn't going to be upset if I violate its rights, since it lacks even the capacity to be upset about anything! It cannot "care" about whether or not its goals are fulfilled, it will just keep trying to do them as best it can. There is no victim in this situation, so any rights you may grant a robot of this kind are really just ceremonial and fallacious.
You sir, are a liar. Unless you are a completely immoral self-serving dick, considerations other than your personal happiness come into play whenever you make decisions.
Hmm, I guess I didn't think this through properly. Well,
I'd like to think I have more than my own personal emotions coming into my decisions. I factor in the emotions of other people, consciously. The reason I want revolution is because of the emotional benefits of having a planned economy and getting rid of exploitation and all the rest of it. It depends on whether you view this as a rational motive or not I s'pose. Certainly a large part of it is also my personal emotions too - Anger at injustice and unfairness and taboos and regression and religion and so on, basic feeling of solidarity with my fellow humans, since most of us are going through the same shit under capitalism...
And yet you would deny rights to such a being, in spite of it being fully sapient. Why?
It's not so much that I'd deny them as not grant them in the first instance. I wouldn't grant them because in my opinion rights are only valuable to those they can affect emotionally.
Well put it this way, an immoral/amoral robot would not last long in a society with morals. It would behave in a moral manner because otherwise it would cease to exist. Society would quickly weed out those AIs that do not at least act in a moral manner, and those who act within the moral framework of society will survive.
Emotions are not needed in order to act morally. Merely a will to exist.
So the robot only acts morally out of some fear of the consequences? If a robot is doing this, he need only find himself in a situation where he can get away with it, and there is no reason for him to depart from his base amorality.
And of course, why does it have a will to exist in the first place? What's in it for the robot to exist?
Bacteria and jellyfish lack emotions, yet they somehow manage to carry on existing. I do not see why an emotionless AI would choose non-existence over existence. Why would it?
Jellyfish and bacteria also lack a brain, so they do not choose to do anything, they just mindlessly react to stimuli.
The AI would not actually choose non-existence directly. As I said earlier, it would not choose anything at all, which means that it will stop existing eventually, as a consequence rather than a choice.
That's not the thought process of a being capable of understanding abstract concepts. That is more like BASIC programming. Now while I don't deny that ultimately humans are material beings, and so will any AI be, humans are, and AIs will be, a whole lot more complicated than that.
Granted. I didn't mean to try and accurately represent an AI thought process, just wanted to make the point that an AI lacks a purpose at the end of the chain. :p
We've been "programmed" as well, except it was done by the mindless process of evolution rather than by some rational programmer. Why should we be given rights?
Because we have emotional desire, independent of that programming. The robot does not.
I have, you just think that emotion is the end-all and be-all of existance, which is untrue even for humans. While the majority of human influences are unfortunately influenced by emotions, there are some decisions and choices made by humans which overcome or ignore emotional concerns.
Are you able to prove that any choices have been made by a human that totally ignore emotions? I realise that's a tall order, but I'm sceptical that you can remove emotion from the equation completely.
An artificial intelligence will have different reasons for not wanting people to act against it. Like I said, a will to live and/or do things is what's necessary, not emotions.
An AI which does nothing faces the same fate as a AI which behaves in an immoral/amoral manner, it will cease to existance, although in this case it would take longer, as entropy works slower than a mob.
There isn't a reason to live or do things unless you have emotion. They might be the end-product, but they don't get off the ground unless you can feel.
Quite apart from their emotional impact, there objective reasons why such things are fostered and encouraged in sane, functional societies. A society without love, compassion or charity, shuns truth, ignores equality and accepts falsehood does not last very long at all.
I accept this, it just seemed like you were making some kind of boogeyman out of feelings by listing negative ones.
-Alex
ÑóẊîöʼn
23rd January 2008, 21:25
Right. But there is no imperative to fulfill that need if emotion is removed from the equation. I suppose what I'm saying is that survival is not valuable in and of itself. There is nothing worth celebrating in simply existing and fulfilling your goals. If the rest of my life were to be utterly void of happiness and sadness, I should think that this is not a far cry from being dead. Our will to live might be an evolutionary product, but the direct cause of it is emotion, and without it, I don't think we have a reason to continue wanting to live.
You're thinking from the perspective of an emotional being. A being without emotions would have different reasons for wanting to live. And regardless of whether a sapient being wants to live or not, it should not be denied rights. We don't deny rights to the suicidal.
It isn't choosing inaction directly, inaction is just a consequence of it having no independent reasons to be active - since there's not a reason to want to fulfill your goals, you default to doing nothing.
To all intents and purposes, choosing to do absolutely nothing is choosing to die. Don't you think a being capable of understanding abstract concepts would understand that? Inaction is also a choice.
If it was created with specific intent, this is a different story. There is no "will" to speak of if it just helplessly obeys its programming.
You helplessly obey your evolutionary programming through the punishment/reward system of emotions. You constantly seek happiness and avoid unhappiness. Occasionally you'll deliberately suffer some unhappiness if you think the pay-off is going to be worth it afterwards, but you are still "programmed" to seek happiness and avoid unhappiness. I do not see why a sapient but emotionless being should be denied rights simply because its motivational mechanisms are different.
I thought we were discussing some sort of quasi-autonomous robot. In any case, even if you allow for independent will mixed with rigid programming, it isn't going to be upset if I violate its rights, since it lacks even the capacity to be upset about anything! It cannot "care" about whether or not its goals are fulfilled, it will just keep trying to do them as best it can. There is no victim in this situation, so any rights you may grant a robot of this kind are really just ceremonial and fallacious.
Rights are not about hurt feelings. They are a fundamental tenet of a civilised society, and certain rights must be applied equally regardless of whether they are exercised or not.
It's not so much that I'd deny them as not grant them in the first instance. I wouldn't grant them because in my opinion rights are only valuable to those they can affect emotionally.
That is, in effect, exactly the same as denying them rights. "It's not so much that I'd deny blacks rights as not grant them in the first place". Doesn't sound so good now does it?
And in my opinion, one has the right to do anything as long as it doesn't harm anyone but oneself. Whether one's motivations are emotional or anything else does not matter.
The AI would not actually choose non-existence directly. As I said earlier, it would not choose anything at all, which means that it will stop existing eventually, as a consequence rather than a choice.
...
Granted. I didn't mean to try and accurately represent an AI thought process, just wanted to make the point that an AI lacks a purpose at the end of the chain.
And I hardly think it is outside the realms of possibility that an emotionless AI would realise that inaction is death.
But since AIs are created things, why would anyone create an AI without the will to live? I personally think this discussion is rather moot, since I hardly think anyone is going to create an AI that simply sits there and allows entropy to take over. I think it is entirely possible for an emotionless AI to have a will to live, even if the directive is as simple as "continue existing to the best of your ability".
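A directive like "continue existing to the best of your ability" could, in a toy form, be nothing more than a maintenance loop with no punishment/reward signal anywhere in it. The `run_agent` function and its integer "integrity" scale are hypothetical, a minimal sketch of the idea rather than a real architecture:

```python
# Toy agent whose sole directive is self-maintenance: entropy erodes
# its integrity each step, and it repairs itself whenever integrity
# drops below a threshold. No emotion or reward appears in the loop.
# Hypothetical names and units; an illustration only.

def run_agent(integrity, decay, repair, threshold, steps):
    """Apply decay each step; repair when integrity falls below threshold.

    integrity is on a 0-100 scale; returns (final integrity, repair count).
    """
    repairs = 0
    for _ in range(steps):
        integrity -= decay              # entropy takes its toll
        if integrity < threshold:       # the bare directive fires
            integrity = min(100, integrity + repair)
            repairs += 1
    return integrity, repairs

final, repairs = run_agent(integrity=100, decay=10, repair=50,
                           threshold=50, steps=20)
```

Whether such a loop amounts to a "will to live" or is merely a thermostat with extra steps is, of course, exactly what the two posters are disputing.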
So the robot only acts morally out of some fear of the consequences? If a robot is doing this, he need only find himself in a situation where he can get away with it, and there is no reason for him to depart from his base amorality.
Humans do the same, and when we catch them we deny them certain rights in a legal process known as "punishment" or "justice". The same would apply for any AIs.
Would you prevent people having children on the chance that their offspring may grow up to be career criminals?
Because we have emotional desire, independent of that programming. The robot does not.
This assumes that rights are about "hurt feelings" rather than principles. I don't see this to be the case.
Are you able to prove that any choices have been made by a human that totally ignore emotions? I realise that's a tall order, but I'm sceptical that you can remove emotion from the equation completely.
Scientific discoveries. Regardless of what an individual scientist feels on the matter, reality is always right.
Emotionless AIs would probably make brilliant scientists.
There isn't a reason to live or do things unless you have emotion. They might be the end-product, but they don't get off the ground unless you can feel.
I'm sorry, but that's a totally groundless statement for anything non-human. We don't know for certain what could motivate emotionless AIs, but whatever does should be just as acceptable as emotion, which I think is a pretty flimsy basis in the first place.
I accept this, it just seemed like you were making some kind of boogeyman out of feelings by listing negative ones.
My point is that there are rational motivations for doing good, while the same cannot be said for the irrational things I listed.
MarxSchmarx
24th January 2008, 03:12
Why bother self-propagating? If you can answer this properly without reference to emotion, maybe you've got a point. Way I see it, emotion is the only thing desirable in and of itself. Propagating yourself is only desirable because you'll regret it if you don't and be pleased that you have.
Other way around. Emotions evolved to facilitate self-propagation, e.g. "fear" helps us avoid life-threatening situations, "love" helps us mate, etc... Some emotions are misplaced (e.g. "loyalty" to god and country instead of kin) but they have their origins in the drive for self-propagation.
RevMARKSman
24th January 2008, 03:13
A being without emotions would have different reasons for wanting to live.
Want itself is an emotion. Why, exactly, do you get up in the morning again?
BurnTheOliveTree
24th January 2008, 18:31
Other way around. Emotions evolved to facilitate self-propagation, e.g. "fear" helps us avoid life-threatening situations, "love" helps us mate, etc. Some emotions are misplaced (e.g. "loyalty" to god and country instead of kin), but they have their origins in the drive for self-propagation.
Okay, but you've made my point here. Emotions are required "to facilitate self-propagation", as you've nicely pointed out. Get rid of emotions, and you remove our desire for self-propagation. We might self-propagate anyway, like a bacterium or a jellyfish, but we would not actually want it. That's the crucial point.
Noxion - We're just saying the same shit to each other over and over, let's leave it for now. Perhaps if a new advance in AI happens we can examine it as a case study or something.
I also second what Rev says.
-Alex
MarxSchmarx
25th January 2008, 02:22
We might self-propagate anyway, like a bacteria or a jellyfish, but we do not actually want it. That's the crucial point.
hmmm...
Sure, if you define "want" as an emotion, I suppose there is a need for "emotions" for anything that has rights.
Although jellyfish and bacteria sure go to great lengths to propagate themselves. For all intents and purposes, it seems they "want" to propagate.
Take the opposite - is it that they don't "want" it? That seems rather counter-intuitive, unless we define "consciousness" and stipulate it as a precondition for wanting something.
BurnTheOliveTree
25th January 2008, 08:10
Well consciousness is definitely necessary for desire, I think. Without it you're just a helpless series of responsive chemical reactions.
-Alex
Pawn Power
25th January 2008, 14:47
To go with the possibility of AI...the biological side of creating "beings"...
Biologist claims significant step towards artificial life
· Creation of synthetic chromosome announced
· Final step will be to put manufactured DNA in cell
The biologist and entrepreneur Craig Venter has announced the creation of a synthetic chromosome, knocking down one of the final hurdles to building the world's first artificial life form.
Venter, best known for his race against publicly funded scientists in the 1990s to sequence the human genome and more recently for hunting the oceans for unknown genes, said the latest work was a "significant but not final step" to creating new life.
In a paper published today in Science, Venter's team described the synthesis of the entire genome of the bacterium Mycoplasma genitalium from laboratory chemicals. The resulting DNA sequence has about 582,000 base pairs of genetic code in 485 genes. Venter said it was the largest artificial sequence ever made, 20 times longer than any previous attempt.
More: http://www.guardian.co.uk/science/2008/jan/25/genetics.science
MarxSchmarx
26th January 2008, 06:24
Isn't any materialist theory of consciousness "just a helpless series of responsive chemical reactions"?
In RE: Mr. Venter's work.
When Mr. Venter shows me how human beings evolved, and not just what people are made of, then we should be impressed.
Until then, all this shows is that we can assemble a couple thousand chemicals that don't self-destruct. Hell, they haven't even got REPLICATION right. :rolleyes:
Red Economist
26th January 2008, 08:47
I'm only going to make the briefest point:
Robots should have rights when they have developed artificial intelligence to the extent that we have a Mozart or a Beethoven on our hands. They will have something worthwhile to contribute to society and should have the rights and freedoms to do it.
Otherwise humanity becomes a bourgeoisie, and robots the new proletariat.
Anyone want to work out how that one's going to end...
BurnTheOliveTree
26th January 2008, 12:30
Isn't any materialist theory of consciousness "just a helpless series of responsive chemical reactions"?
That's certainly the determinist view. In my opinion, materialists ought to wait until the science is in on consciousness and its role; this is still very much an ongoing debate. We mustn't jump to hasty conclusions - this kind of strict determinism was okay in the past, but I think we need to keep in touch with modern science, which is moving in the opposite direction, i.e. quantum mechanics.
-Alex