
View Full Version : The Great Computer Debate



Stormin Norman
8th July 2002, 15:29
Let's try something different today. We have all heard the arguments for and against communism. Although it is a pressing social issue of the day, why not investigate another of today's metaphysical questions: the role of computers in virtually all aspects of our lives. From quantum computing to biological computers, the advancements in the scientific arena pose just as many interesting ethics questions as the field of biotechnology. Computing power, as well as our infrastructure's dependence on such systems, puts us humans in quite the predicament. Will merging our minds with the machines be the only way for us to ensure our own survival in a world dominated by computers? Will nanotechnology progress and eventually wind up in the hands of the average citizen? If so, what is to stop the user from doing incredible damage to humanity? What is the government's role in such an industry?

Aside from those questions, let's ponder the increasing gap in computer literacy. Will the technological divide create a class of people unable to adapt to an ever-changing world? Will this divide be used as a way to create an underprivileged class relegated to the position of slaves? As you can see, the implications of our dependence on computers, and the exponential rate at which they are able to process information, leave all of us humans in a compromising position. This is yet another of today's problems that must be addressed.
The following article appeared in Wired magazine some months ago. It was written by Bill Joy, cofounder of Sun Microsystems and an expert in the Java programming language. In it, Joy uses the writings of Ted Kaczynski to help explain the dangers presented by our technological age. At the time it was a controversial article. It sparked a debate over whether ethics boards ought to be established to review the dangers of computers and make determinations as to what should be allowed. This issue could be paralleled with the bio-ethics debate over whether to clone humans or conduct stem cell research. Read it and consider the questions that have been posed by this subject. Maybe we can get an interesting conversation going.

Why the future doesn't need us.

Our most powerful 21st-century technologies - robotics, genetic engineering, and nanotech - are threatening to make humans an endangered species.
By Bill Joy

From the moment I became involved in the creation of new technologies, their ethical dimensions have concerned me, but it was only in the autumn of 1998 that I became anxiously aware of how great are the dangers facing us in the 21st century. I can date the onset of my unease to the day I met Ray Kurzweil, the deservedly famous inventor of the first reading machine for the blind and many other amazing things.
Ray and I were both speakers at George Gilder's Telecosm conference, and I encountered him by chance in the bar of the hotel after both our sessions were over. I was sitting with John Searle, a Berkeley philosopher who studies consciousness. While we were talking, Ray approached and a conversation began, the subject of which haunts me to this day.
I had missed Ray's talk and the subsequent panel that Ray and John had been on, and they now picked right up where they'd left off, with Ray saying that the rate of improvement of technology was going to accelerate and that we were going to become robots or fuse with robots or something like that, and John countering that this couldn't happen, because the robots couldn't be conscious.
While I had heard such talk before, I had always felt sentient robots were in the realm of science fiction. But now, from someone I respected, I was hearing a strong argument that they were a near-term possibility. I was taken aback, especially given Ray's proven ability to imagine and create the future. I already knew that new technologies like genetic engineering and nanotechnology were giving us the power to remake the world, but a realistic and imminent scenario for intelligent robots surprised me.
It's easy to get jaded about such breakthroughs. We hear in the news almost every day of some kind of technological or scientific advance. Yet this was no ordinary prediction. In the hotel bar, Ray gave me a partial preprint of his then-forthcoming book The Age of Spiritual Machines, which outlined a utopia he foresaw - one in which humans gained near immortality by becoming one with robotic technology. On reading it, my sense of unease only intensified; I felt sure he had to be understating the dangers, understating the probability of a bad outcome along this path.
I found myself most troubled by a passage detailing a dystopian scenario:
THE NEW LUDDITE CHALLENGE
First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained.
If the machines are permitted to make all their own decisions, we can't make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines' decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won't be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.
On the other hand it is possible that human control over the machines may be retained. In that case the average man may have control over certain private machines of his own, such as his car or his personal computer, but control over large systems of machines will be in the hands of a tiny elite - just as it is today, but with two differences. Due to improved techniques the elite will have greater control over the masses; and because human work will no longer be necessary the masses will be superfluous, a useless burden on the system. If the elite is ruthless they may simply decide to exterminate the mass of humanity. If they are humane they may use propaganda or other psychological or biological techniques to reduce the birth rate until the mass of humanity becomes extinct, leaving the world to the elite. Or, if the elite consists of soft-hearted liberals, they may decide to play the role of good shepherds to the rest of the human race. They will see to it that everyone's physical needs are satisfied, that all children are raised under psychologically hygienic conditions, that everyone has a wholesome hobby to keep him busy, and that anyone who may become dissatisfied undergoes "treatment" to cure his "problem." Of course, life will be so purposeless that people will have to be biologically or psychologically engineered either to remove their need for the power process or make them "sublimate" their drive for power into some harmless hobby. These engineered human beings may be happy in such a society, but they will most certainly not be free. They will have been reduced to the status of domestic animals.1

1 The passage Kurzweil quotes is from Kaczynski's Unabomber Manifesto, which was published jointly, under duress, by The New York Times and The Washington Post to attempt to bring his campaign of terror to an end. I agree with David Gelernter, who said about their decision:
"It was a tough call for the newspapers. To say yes would be giving in to terrorism, and for all they knew he was lying anyway. On the other hand, to say yes might stop the killing. There was also a chance that someone would read the tract and get a hunch about the author; and that is exactly what happened. The suspect's brother read it, and it rang a bell.
"I would have told them not to publish. I'm glad they didn't ask me. I guess."
(Drawing Life: Surviving the Unabomber. Free Press, 1997: 120.)

Bill Joy, cofounder and Chief Scientist of Sun Microsystems, was cochair of the presidential commission on the future of IT research, and is coauthor of The Java Language Specification. His work on the Jini pervasive computing technology was featured in Wired 6.08.

Stormin Norman
8th July 2002, 19:32
What, are my posts too long for you? Is it too hard to discuss anything other than topics on which you have already formulated your thinking? Do you guys just hate me and ignore me in hopes that I go away? Come on, if you look hard enough at the questions you will find that this topic is relevant to the class distinctions that many of you completely despise.

RedCeltic
8th July 2002, 19:50
Do you guys just hate me and ignore me in hopes that I go away?

Yes.

Lardlad95
8th July 2002, 19:54
Come on, Celtic. If we just say that we don't talk to him out of spite, he'll feel that he's won.

Personally, I feel we are already too dependent on technology as it is....oh wait, I'm on a computer.

But if you think about it, every generation has it easier than the one before; that's how society progresses.

As far as biotechnology goes, people are already putting chips inside themselves. Personally I think it's stupid except for medical reasons, like if they need to identify you when you're unconscious or dead and they have no contacts or ID for you.

Michael De Panama
8th July 2002, 20:04
As much as I love technology, I feel that one day this world is just going to overdose on it.

Lardlad95
8th July 2002, 20:06
I never knew you could OD on computers....I'm pretty sure some internet junkie will learn how to.

Stormin Norman
8th July 2002, 20:10
No, I am not talking about the inconvenience of a power outage or the lack of a computer. Sentient machines! A thinking network that can determine that we have become obsolete would be a dangerous revolution in the world of technology. Never in history has technology progressed at such a quick rate.
Unless you are talking about chips embedded in pacemakers to regulate heart palpitations, chip technology is still in its early stages. Implanting chips to improve memory or bio-processes remains something of the future. Would you get an upgrade if you could, or do you find it to be a morally repugnant idea? Should people who are able to afford such 'improvements' be allowed to give themselves that competitive edge?

RedCeltic
8th July 2002, 20:20
I was just messing around with him.. :biggrin:

It's an interesting article and it does make one think.. just how far people will take biotechnology.

We had a debate on this board once about stem cell research and if cloning of the human cell was going too far.

Personally, I was under the belief that cloning a cell is different from cloning a human, and that if it would save lives then it would be worthwhile.

However, reading some posts from my friend Nickadermus started to make me wonder. Just how far would science go in biotechnology? Is there a line that we should not cross? And if so where is it?

These are perplexing questions that leave one wondering: where is the "moral ground" in all of this?

Lardlad95
8th July 2002, 20:25
Very Ray Bradbury, Norman. Of course it's a possibility.

Scientists have already started to play God in several other fields. For some reason they don't know when to quit.

I guess they would view these things as accomplishments and not realize the potential for them to find us obsolete and dispose of us.

If these things were created, it would be very likely that we would be found useless.

As technology progresses, the risks and dangers seem to be increasing even faster.

RedCeltic
8th July 2002, 20:28
On one hand, it seems perfectly moral to save lives if the technology is available.

On the other hand, does it seem moral to turn the human race into superhumans?

I can see both points, and I'm not clear on where I stand on it.

Lardlad95
8th July 2002, 20:34
Superhumans....hey, that makes me think.

This is off subject, but the US talked about how the Soviets created all these Russian supermen...yet the comic book hero Captain America was genetically engineered.

OK, I said that; now let's get back to the serious talk.

Stormin Norman
8th July 2002, 20:43
Entirely different topic, RedCeltic, but an interesting one. Nanotech and sentient machines are the current topic, but I will speak to the notion of stem cell research.
Imagine the chicken egg industry. Try to think of the enormous scale at which eggs for food must be produced. Think about the machinery, the production process, and possibly the hidden underbelly of its existence. You know, the things the consumer is better off not knowing. Now picture an industry of the same size and scale. Hopefully, it is a more sterile environment. The difference is that it is completely reliant on stem cells harvested from human embryos. Think of the blood; really get some good imagery going. The danger of stem cell research does not necessarily lie in the research itself, but rather in the industry that its discoveries could spawn. Don't you think that if a large profit were involved, the bio-tech sector would create such a morbid industry? An economy of scale dependent on the dead.
'But it is noble to want to save lives.' What do you think the consequences of preventing people from dying naturally would be? I submit that a weakening of the human race as a whole would occur, because no matter how hard we try to ignore it, survival of the fittest must be the template for the survival of humans as a species. If we allow those who would otherwise die to live, we effectively lower the human race to the lowest common denominator. Aside from that, our world supposedly surpassed its carrying capacity a couple of years ago. Wouldn't wars and famine increase as a result of overpopulation? I am not talking eugenics. I am talking about the natural order of things. How many times must we disrupt the natural cycles before we are rendered extinct?

Stormin Norman
8th July 2002, 20:47
Sorry, I have to get some sleep. Keep the discussion going; I will come back later and respond to everyone's posts.