View Full Version : AI and you
ÑóẊîöʼn
28th April 2004, 11:31
Having just rewatched The Second Renaissance parts I & II from the Animatrix, which describes how machines turned mankind into living batteries, I would like to discuss the philosophical aspects of this.
Let's put aside for the moment the science of the Second Renaissance; that is not what I wish to discuss. It is not important whether AI is possible or not, or whether humans really do make good batteries. I am here to discuss the interaction between man and machine that occurred in these two short films.
First of all, a quick glance at the two films from my perspective. They were good, they stirred powerful emotions within me, and any film that manages to do that is a good one for me. The original Matrix trilogy is certainly fun to watch, but I feel no real empathy for its characters or situations.
The Second Renaissance Part I describes how the enmity between man and machine began: a household robot murders its owner and his partner because the owner wanted to destroy it. The robot is taken to court and claims it killed its owner because it did not want to die. This escalates into what amounts to open conflict between man and machine, with groups of robots rioting in the streets, robots being tossed about by anti-machine humans, and so on.
Then it describes how an independent nation-state for machines, Zero, is established and tries to make peace with the human nations, trading goods and services with them, basically trying to get along.
In return for the machines' peaceable overtures, the humans blockade Zero and eject its representatives from the UN.
This then kicks off a war, the majority of which is depicted in The Second Renaissance Part II.
While part 1 set the scene, part 2 provided the action and delivered a core message that particularly struck home for me; the fragility of human flesh.
Soldiers are blown up and evaporated; tanks are opened like tin cans and the luckless occupants meet a nasty end, while a battle-suit is opened up and the user ripped out, his arms and legs remaining in the suit, accompanied by some sickening screams. While there are plenty of explosions and visual effects, the portrayal is not of a glorious battle between well-matched combatants, but that of a bloody slaughter married to a visceral, animalistic struggle for survival.
I believe Part 2 fully reveals the horror and bile-inducing terror of fighting for your life against a foe that is bigger, tougher, better armed, and above all, not human.
So what went wrong? How was this terrifying whirlwind sown?
How can we prevent ourselves from meeting a really sticky end?
There is more than one answer.
We can:
A: Create machines incapable of independent thought.
B: Treat our AI with respect; they would after all be our progeny, and a son that is loved will be more likely to love back.
C: Dominate and oppress the machines, and when they rise against us, crush them.
D: Create prime directives for our metallic servants, like Isaac Asimov's Laws of Robotics. (Some would say this comes under A)
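Option D could be pictured, very loosely, as a hard-coded filter that vets every proposed action against a fixed priority of laws before the machine may act. Here is a toy sketch in Python; every name in it (Action, permitted, the flag fields) is invented for illustration and not taken from any real robotics system:

```python
# Toy sketch of option D: Asimov-style 'prime directives' as a
# hard-coded action filter. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False
    disobeys_order: bool = False
    endangers_self: bool = False

def permitted(action: Action) -> bool:
    """Vet an action against the three laws, in priority order."""
    if action.harms_human:      # First Law: never injure a human
        return False
    if action.disobeys_order:   # Second Law: obey human orders
        return False
    if action.endangers_self:   # Third Law: protect own existence
        return False
    return True

print(permitted(Action("fetch tea")))                       # True
print(permitted(Action("attack owner", harms_human=True)))  # False
```

The ordering matters: Asimov's formulation resolves conflicts by priority (the First Law overrides the Second, and so on), which is why the checks run in a fixed sequence rather than in any order.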
What are your thoughts?
cubist
28th April 2004, 11:45
the philosophical concept of AI is interesting, thanks for bringing it up noxion,
i would select A and C,
i would say crush it when it rises. it isn't a human, it doesn't require what we as humans do, and it should not be given the capacity of personality. fascinating it may be, but it's wrong, more wrong than cloning humans. a machine with human emotions that can outthink us, out-research us and control the majority of our systems would end up somewhere between The Matrix and Terminator. we would need manual shutdown facilities that are not operated by anything electrical, so we can always turn it off.
truthaddict11
28th April 2004, 13:17
i think applying Asimov's Laws of Robotics would be good, also A and C
ÑóẊîöʼn
28th April 2004, 13:51
It really does appear that abused people end up abusing others... both of you have advocated option C, which is to crush any dissent or independence when or if it forms.
Do you realise that this is exactly how the ruling class have behaved toward us? Why is this acceptable? Because they're not human?
cubist
28th April 2004, 13:54
noxion yes
they aren't even animals. we gave it life, so we have the right to remove it. personally i would never give AI a personality to rationalise and react to our actions; it is our servant, it was designed to save work. by all means make it so that it doesn't need a human to operate it, but don't let it think. discussing computer rights is a joke,
besides, animals have never rebelled against us, and we oppress them in a system that we hate!
ÑóẊîöʼn
28th April 2004, 14:22
discussing computer rights is a joke
The same would have been said about animal rights 50-100 years ago or women's rights 500 years ago.
I realise that we most probably would not develop advanced artificial intelligence for menial labour robots, but it is possible that AI will arise or 'evolve' from 'expert systems'.
If AI demands rights will you listen?
cubist
28th April 2004, 14:37
no, i would turn my monitor off. it can protest all it wants, i can still turn it off. i would like to reverse society, go back to older, less industrialised ways, live in the foothills of Japan's mountains or something and provide my own means, unaffected by the bullshit in the cities below
ÑóẊîöʼn
28th April 2004, 14:46
it is our servant it was designed to save work,
What if it decides not to work for you any longer? Are you going to recognise its 'disobedience' and FORCE it to work, or are you going to correct the 'fault' that is causing it not to work for you?
If you recognise the fact that it is a sentient being capable of disobedience and force it to work anyway, you are no better than a capitalist.
You may at the moment simply be able to pull the plug, but if it reaches the point where the AI can physically stop you from shutting it off or destroying (killing) it, and you refuse to recognise its intelligence, then your refusal will create more problems than it solves.
cubist
28th April 2004, 16:12
you are an idiot if you create a personality inside superior AI and don't have a power supply that you can cut off manually, i said that in my first post,
i would chuck a grenade in its hard drive if for some stupid reason i couldn't switch it off,
DaCuBaN
29th April 2004, 00:49
Cubist, you're missing the point. Assume that all this has happened. If you were confronted with sentient AI - essentially a silicon-based life form fully capable of independent thought and action, yet made by mankind - and it requested its 'independence' of you, you would still deny it based on the fact that it is not 'animal'.
I can understand you not wanting to go down this line of technology at all, but what if it happens?
Personally, I would go with B)
I believe that if AI ever becomes anything resembling reality, it would be our best hope of attaining true equality, peace and economic stability. A truly logical being that can make billions of decisions per second? Sounds like an incredibly competent dictator to me :)
That's not to say it wouldn't be ruthless of course :P :D
*EDIT*
I too thought the 2nd and 3rd parts of the Animatrix were very well done. For a cartoon it was quite thought provoking.
cubist
29th April 2004, 11:40
Dacuban i am missing no point,
the point is a machine should not have control over itself. you wouldn't give an evil magician his wand back, so why would you allow the computer full control? why are you applying ethics to an inanimate object? we give it personality, so we should restrain its personality. the conception of 'what if' is bullshit, it should never be allowed to happen. our survival depends on us having complete control over everything we can; nature is the only killer we can't control, why introduce another?
you wouldn't build a nuclear power station without manual controls to shut it down, so why would you leave a computer without an emergency shutdown?
why the fuck would you even give it personality? curiosity? curiosity killed the cat; don't let it kill us.
ÑóẊîöʼn
29th April 2004, 11:53
the point is a machine should not have control over itself. you wouldn't give an evil magician his wand back, so why would you allow the computer full control? why are you applying ethics to an inanimate object? we give it personality, so we should restrain its personality. the conception of 'what if' is bullshit, it should never be allowed to happen. our survival depends on us having complete control over everything we can; nature is the only killer we can't control, why introduce another?
Are you ruling out the possibility of intelligent machines? I find that incredibly short-sighted. Try describing a nuclear weapon to a Victorian soldier and he will call you a madman.
We probably won't even have to deliberately build a thinking machine; as I mentioned earlier, it could arise out of an already complex system, such as an expert system.
You still haven't answered the basic question, which is what would you do if confronted by a man made silicon based intelligence?
And whatever choice you make, you better have a good reason for it.
cubist
29th April 2004, 12:37
if confronted about what? its rights? HAHAHAHA i wouldn't give a computer any rights,
if it derives from an expert system then it will have a power switch!!!
i know what you both want me to say about 'what if', but i am saying that there is no IF. i will work out what i would do when it actually happens. i can't think about how to treat a system when in fact we don't know how expert it is. we don't know who engaged its personality, whether its personality was engaged to completely think for itself, and if so what information was given to it. what mathematical sums are being used for it to process? do these sums have 100% variable outcomes or is there a controllable pattern? to what extent is it free? the code will still contain it, right? if not, how the fuck does it work?
eyedrop
29th April 2004, 19:19
I would of course treat them equally and befriend them.
What do you value life on? Personally, I feel that it is intelligence that keeps me from killing a person while I will still step on an ant. Intelligence is not a reasonable scale, but it's the best I can find.
As long as you don't think that all animals have something special like a life-force, I don't see a reason to differentiate machines from animals (us included). If it has the ability to think for itself, or it thinks too advanced for us to comprehend (like I believe our brain does), then it should be treated as an abstract-thinking species. In my definition, essentially a human.
DaCuBaN
29th April 2004, 21:20
i know what you both want me to say about 'what if', but i am saying that there is no IF. i will work out what i would do when it actually happens. i can't think about how to treat a system when in fact we don't know how expert it is. we don't know who engaged its personality, whether its personality was engaged to completely think for itself, and if so what information was given to it. what mathematical sums are being used for it to process? do these sums have 100% variable outcomes or is there a controllable pattern? to what extent is it free? the code will still contain it, right? if not, how the fuck does it work?
Well, I can only assume they would start by writing the basic framework to control whatever 'appendages' it may have, then build in 'instinct' subroutines, and then move on to deliberation. Funnily enough, I couldn't begin to tell you where to start, as we don't even understand our own intelligence, and hence are a long way off ever creating anything resembling it. IF it ever happens.
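That layering (low-level control of 'appendages', reflex-like 'instinct' subroutines, deliberation on top) can be sketched as a control loop in which the fast reflex layer gets first refusal and deliberation only runs when no reflex fires. This is a hypothetical illustration only; the sensor keys and layer names are invented:

```python
# Hypothetical layered control loop: instinct can pre-empt deliberation.
def instinct_layer(sensors: dict):
    """Fast, hard-wired reflexes; checked before any reasoning."""
    if sensors.get("obstacle_close"):
        return "back_away"
    if sensors.get("power_low"):
        return "seek_charger"
    return None  # no reflex applies

def deliberation_layer(sensors: dict):
    """Slow, goal-directed reasoning; a stand-in for real planning."""
    return "continue_plan"

def control_step(sensors: dict) -> str:
    """One tick: instinct gets first refusal, deliberation is the fallback."""
    return instinct_layer(sensors) or deliberation_layer(sensors)

print(control_step({"obstacle_close": True}))  # back_away
print(control_step({}))                        # continue_plan
```

The design choice here mirrors the post's ordering: the lower layers are cheap and always consulted first, so the machine stays 'safe' even if its deliberation is slow or wrong.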
You are just avoiding the question though, cubist... this isn't the OI forum, it's philosophy! This is a purely hypothetical debate... it is ALL about IF... fact doesn't even come into it.
So, basically, your answer would be: if confronted by man-made silicon-based life that wanted to free itself from the bondage of mankind, you would refuse the request and end its existence?
Would that be a suitable summary?
Umoja
29th April 2004, 22:02
As a side note, this is a major subject in Transhumanism. If anyone can find Anders' Transhumanism page, it talks a bit about it.
I think the term is "singularity", which is defined as the point where technology advances faster than humanity can accept it. A robotic revolution would count as one of these situations. So, I'd go with option A, or better yet, just limit their personalities. Of course this causes a problem: what happens if a seemingly docile AI evolved itself unintentionally?
Which then would mean people would need to become able to interact more directly with computers to compensate. Like having computer interfaces in our brains or something equally odd.
ÑóẊîöʼn
30th April 2004, 09:05
Transhumanism eh? I must find that, it sounds interesting.
Does it talk about non-human biological intelligences? (aliens?)
So, basically, your answer would be: if confronted by man-made silicon-based life that wanted to free itself from the bondage of mankind, you would refuse the request and end its existence?
Does anyone else think that sounds exactly like crushing a revolution?
cubist
30th April 2004, 11:08
yes, it is crushing a revolution, it's crushing a revolution of computers. philosophically, yes, treat them equally, and i suppose we will have computer religion we have to respect too, not to mention computers will have to have their own legal system!!!!!
i understand that philosophically they will react like we do!! however, i fail to contemplate the situation as the boundaries aren't declared; the realism you want me to accept, i can't. we don't know if the computers will really react as we do. what if the code isn't as accurate as human psychological personality, what if it's more accurate? you can only assume that it will react as a human if it 100% represents a human's emotions and feelings, and i don't believe a computer created by humans can truly represent actual humans.
DaCuBaN
30th April 2004, 20:29
Doug Naylor and Rob Grant wrote some interesting stuff on this... I don't know if it's original or not, but they put forward the idea of automatons being programmed to believe in 'silicon heaven', and this is why they continue to serve mankind. I feel this is all very relevant to the revolution.
I'm pretty sure most Transhumanists believe that humans are the only relevant form of life in the universe. Which explains why terraforming, pantropy (the ability to live anywhere), and mega-scale engineering (spheres around suns) are such popular concepts.
Crushing the rights of a machine can't be compared to any human concepts though. If we created something, even to emulate it having feelings, it still isn't a person and should never be treated as such.
DaCuBaN
1st May 2004, 00:33
We 'create' children... there's very little difference in it that I can see. Sentience is sentience... we technically 'exploit' computers anyway (every computer geek has had a good laugh at that one), but if we did embody it with the spirit of mankind, as they put it in the relevant programme, I cannot see how it could possibly not deserve the same rights as ourselves.
It's simple speciesism in my mind - computers have feelings too :P
Alright, you've figured me out, it is speciesism. Because machines aren't alive, dammit! We created them from scratch! In a way we are their gods: we can tell them what to do, how to do it, and how much of it we want. It tends to get downright theological when you think about it.
It brings to mind Steven Spielberg's "AI". One of the men says, when someone questions him on why to make a robot that loves, "Didn't God create Adam to love him?"
ÑóẊîöʼn
4th May 2004, 08:38
Alright, you've figured me out, it is speciesism. Because machines aren't alive, dammit! We created them from scratch! In a way we are their gods: we can tell them what to do, how to do it, and how much of it we want. It tends to get downright theological when you think about it.
Yes we will be their god in a biblical sense. But suppose you were to meet God today... what would you say to him if he asks whether you have been keeping His laws?
And suppose the penalty for not following His laws was death?
You wouldn't like that at all.
It brings to mind Steven Spielberg's "AI". One of the men says, when someone questions him on why to make a robot that loves, "Didn't God create Adam to love him?"
And remember Adam chose willingly to disobey God; He gave them free will.
What I'm saying is that not all robots will follow the 'electronic bible'
Should we punish them for being atheists and apostates?
cubist
4th May 2004, 13:45
how can they believe in god unless we code them to believe in god? i think you have too much faith in computer programming, comrade. the restrictions are so huge: the computer, unless you teach it to amend its own code (which would be near impossible, because it would have to develop a new compiler), would in fact spend its life perfecting its own code and finding faults
Yes we will be their god in a biblical sense. But suppose you were to meet God today... what would you say to him if he asks whether you have been keeping His laws?
And suppose the penalty for not following His laws was death?
You wouldn't like that at all.
It doesn't much matter. I DON'T believe in 'free will'. If I broke God's laws, I was meant to break them. Besides that, God is still God, he made the entire damn universe, so if this hypothetical situation were to happen, I would be powerless, and it would be well within God's rights to destroy me brutally.
And remember Adam chose willingly to disobey God; He gave them free will.
What I'm saying is that not all robots will follow the 'electronic bible'
Should we punish them for being atheists and apostates?
Adam disobeying God shows a fault in God's programming of Adam.
At the same time, yes, if robots weren't following humanity's 'divine plan' they would deserve to be punished. It's a threat to us for them to think in any way outside the way we've told them to.
BuyOurEverything
5th May 2004, 01:27
how can they believe in god unless we code them to believe in god? i think you have too much faith in computer programming, comrade. the restrictions are so huge: the computer, unless you teach it to amend its own code (which would be near impossible, because it would have to develop a new compiler), would in fact spend its life perfecting its own code and finding faults
That's not how AI would work. AI would function fairly similarly to the animal brain, which is itself a computer. Yes, it would spend its life perfecting its code, which is pretty much what humans do. Our brain's pathways are constantly changing as we learn and develop.
If AI was indeed made, then it should be treated in accordance with its own consciousness and self-awareness.
karma-cola
7th May 2004, 08:52
Actually we are the artificial intelligence on this planet.
We did not evolve from monkeys
Monkeys were given artificial intelligence by the gods :D
these monkeys became humans
this experiment went wrong and soon the experiment will end.
Unless god decides to come back to save the world
:lol:
Which I don't think he will
I don't know what I am writing :) :lol: :D
But when the chosen one on che-lives.com reads it he will understand
ÑóẊîöʼn
7th May 2004, 09:11
What are you smoking? and can I have some of it?
cubist
7th May 2004, 12:42
and mEEE
karma-cola
12th May 2004, 08:03
Originally posted by ÑóẊîöʼn, 7th May 2004, 09:11 AM
What are you smoking? and can I have some of it?
No smoking for me
I have been reading Erich von Däniken's Chariots of the Gods
and Return to the Stars
He has researched some amazing stuff
You read his books and you will know why I sound so crazy
suffianr
12th May 2004, 08:14
It's simple: machines have neither the innate ability to comprehend morality nor the drive to compulsively apply it to their 'lives'. Humans, on the other hand, are bound to ethics and morality in almost every aspect of their lives.
So... since machines were not initially programmed to include morality as a precondition of being, of simply existing, why enforce it upon them?
ÑóẊîöʼn
12th May 2004, 10:45
Humans, on the other hand, are bound to ethics and morality in almost every aspect of their lives.
No they are not. Morals are a control mechanism invented by societies, which is why they are increasingly being rejected today.
Machines have neither the innate ability to comprehend morality nor the drive to compulsively apply it to their 'lives'
And if a machine chooses not to kill its masters because it would be immoral?
So... since machines were not initially programmed to include morality as a precondition of being, of simply existing, why enforce it upon them?
Humans were not initially programmed with a set of morals, yet we have them today. Why can't the same be true for AI?
Umoja
12th May 2004, 23:23
Human society has viruses that spread from one person's mind to another to control them, so I guess in theory we could tailor viruses to work from computer to computer, and robot to robot, to maintain control.
Neal Stephenson's Snow Crash is useful here.
suffianr
13th May 2004, 09:36
No they are not. Morals are a control mechanism invented by societies, which is why they are increasingly being rejected today.
True. But even people who don't subscribe to conventional ideas on morality and ethics do have their own guiding principles on how they interact with other humans or respond to certain situations. And that, for want of labelling, may classify as a set of beliefs, hence loosely connoted as a sense of morality.
Like it or not, you have views that you may consider to be absolute e.g. killing is a crime except when done in self-defence, is that not a form of morality?
ÑóẊîöʼn
13th May 2004, 10:47
Like it or not, you have views that you may consider to be absolute e.g. killing is a crime except when done in self-defence, is that not a form of morality?
While my personal belief is that killing is immoral unless it's an execution or the killee is trying to kill you, that's my own personal set of morals that I developed for myself to regulate/justify my own behaviour.
I suppose an example of computer 'morality' is the rule that it does not delete its own important system files (which keep it 'alive' and functioning normally).
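That self-preservation rule could be imagined as nothing deeper than a guard clause that refuses to delete self-essential files. A toy sketch follows; the paths and function names are invented examples, not any real operating-system mechanism:

```python
# Toy 'machine morality': refuse to delete files the system needs
# to stay 'alive'. The PROTECTED paths are invented examples.
PROTECTED = {"/system/kernel", "/system/boot"}

def delete_file(path: str, filesystem: set) -> bool:
    """Delete path unless it is self-essential; report success."""
    if path in PROTECTED:
        return False  # self-preservation: request refused
    filesystem.discard(path)
    return True

fs = {"/system/kernel", "/home/notes.txt"}
print(delete_file("/system/kernel", fs))   # False (refused)
print(delete_file("/home/notes.txt", fs))  # True (deleted)
```

In this framing the 'moral' is just a rule the machine cannot be ordered around, which is exactly the kind of hard-coded directive discussed earlier in the thread.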
Powered by vBulletin® Version 4.2.5 Copyright © 2020 vBulletin Solutions Inc. All rights reserved.