View Full Version : Hubert Dreyfus - world's biggest idiot?
Dermezel
9th August 2010, 04:24
For those of you unfamiliar with AI debates, the name Hubert Dreyfus might not mean much. But to those interested in such things, he is something of a laughing stock.
Dreyfus is known as a leading AI/robotics skeptic. One who argues there are certain things machines "just can't do."
This first began with his famous boast that no computer "would ever beat me at chess." He lost his very first game (http://books.google.com/books?id=AJuowQmtbU4C&printsec=frontcover&dq=wired+for+wars&source=bl&ots=ui-av61I-_&sig=rjfjYv2jpJNUqlEeCUyS8gIEPyE&hl=en&ei=eGtfTJehK4HksQO41q2qCw&sa=X&oi=book_result&ct=result&resnum=5&ved=0CC4Q6AEwBA#v=onepage&q=dreyfus&f=false). (To be fair, he wasn't the greatest of players.)
He then wrote a book called What Computers Can't Do (http://en.wikipedia.org/wiki/What_Computers_Can't_Do) where he argued no computer would ever "beat a class-A chess player." That was quickly disproven.
So he wrote another book What Computers Still Can't Do (http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=5890) where he claimed a computer would never be able to beat the world's best chess player. You probably know how that one went.
Then he began ranting about how we can "always unplug it." Now scientists have begun to develop magnetic transistors (http://www.wired.com/science/discoveries/news/2006/02/70190) which can act as superconductors and manipulate quantum effects, so that a machine may be able to run for years, decades, maybe even centuries unplugged. This is because it doesn't need electricity but can function on a magnetic charge (http://books.google.com/books?id=AJuowQmtbU4C&printsec=frontcover&dq=wired+for+war&source=bl&ots=ui-av56E3W&sig=Fi9JNn5B4yC7JGD4zMXnly6WRF8&hl=en&ei=TG9fTLfPBJTmsQORlMiqCw&sa=X&oi=book_result&ct=result&resnum=5&ved=0CDMQ6AEwBA#v=onepage&q=magnetic &f=false) (which persists even after you pull the plug).
Undoubtedly Dreyfus will come up with a new argument about how he can still eat a cheeseburger and enjoy it, and a robot can't. However, this is also questionable, seeing as they have a robot capable of "tasting" meat with lasers: http://www.wired.com/table_of_malcontents/2006/11/robot_identifie/ (apparently human flesh tastes like bacon, according to chemical analysis), and there's another robot that can digest organic matter for energy (http://www.newscientist.com/article/dn6366-selfsustaining-killer-robot-creates-a-stink.html).
Though I doubt that will stop him.
Kayser_Soso
9th August 2010, 05:35
Answering the title of the thread: No, not while Glenn Beck is alive.
Raúl Duke
9th August 2010, 06:09
Dreyfus is known as a leading AI/robotics skeptic.
Story on how he's been proven to be wrong multiple times (AKA exposed as an obtuse/boneheaded idiot who won't learn from his mistakes)
Why? What's the motive? Is he trying to put people off robotics so as to stop (in his mind) some sort of Skynet scenario, a robot/PC network becoming sentient/sapient?
Dermezel
9th August 2010, 13:52
Why? What's the motive? Is he trying to put people off robotics so as to stop (in his mind) some sort of Skynet scenario, a robot/PC network becoming sentient/sapient?
Well, the problem is severalfold. First, does blind denial really prevent disasters, or exacerbate them? It is in this sense that I call him an idiot. Perhaps one of the biggest, because AI can be potentially very dangerous. In fact, it is the most potent of existential risks (http://www.transhumanist.com/volume9/risks.html), because a super-intelligence is something we have no experience of. Our intelligence alone managed to wipe out 80% of large mammals all over the globe in record time. What could a super-intelligence do? I imagine it would look much the same, only a little more one-sided.
Second, this can easily spill over into blind hate and prejudice. What about cyborgs, people who are part machine? Do we give them fewer rights, or dehumanize them? I have no doubt corporations will try to label people with cybernetic parts a form of property. Say you have a company's cybernetic augmentations: what's to prevent the company from claiming a patent, and thereby, if you are sufficiently cybernetic, ownership of said person? What happens when these people are almost fully mechanical, or upload their minds into a machine completely? Do they lose all rights? Do we now consider them "non-sentient Chinese boxes"?
Then we get into the more complex and perhaps more important question of general machine sentience and rights. Before you are quick to dismiss the notion, consider the rule: "Tyranny anywhere is a threat to freedom everywhere." This applies particularly to an AI, especially one that is super-functional or intelligent. If AI does not have rights, that means it can be considered property. Now consider what a Corporation, State, or Military can do with this property. Imagine what a Corporation with a super-functional or intelligent AI at its disposal could do: corner a market, bias legislation, undermine your own rights.
It would be better to presume these machines will get, and deserve, rights once reaching a certain level of sentience. Thereby we should not ask whether they will have rights, but what kinds of machines with rights we want to co-exist with. So far the best proposal is Friendly AI (http://singinst.org/ourresearch/publications/CFAI/index.html).
Last, it should be noted as an extremely pertinent albeit tangential point that the development of a Friendly AI as soon as possible may be necessary for our very survival.
AI is known as a potential "protective" or "preventive" technology, which is basically a technology that can help protect you from other dangerous technologies. An example of this is making sure you have advanced cures before you engineer certain diseases, just in case they are accidentally released. Or say, in an alternate reality, we were able to make force fields before nuclear weapons, so we wouldn't blow ourselves up. According to this reasoning, you cannot realistically just ban technological development, but you can organize it so that you get some technologies before others. AI is considered one of the first you want to get because it can guide you through the others. A Friendly AI can make sure you don't mess up and wipe out the species with, say, nanotechnology, biotechnology, or quantum experimentation. Nanotechnology, however, is not likely to help prevent the creation of a dangerous Artificial Intelligence.
So the real problem with Dreyfus is that we are facing some very real and serious issues that can affect our very survival, and all he does is spread ignorance on the matter.
Dermezel
9th August 2010, 14:00
Again, and I want to reiterate this because it is extremely relevant within a political atmosphere like this one: simply dismissing AIs as "non-sentient" or "Chinese boxes" is a potential justification for slave status. And that is extremely dangerous to all our rights, because any company could then use a super-intelligence it "owns" to undermine our rights.
Imagine giving a company "ownership" of a super-powerful and intelligent slave, say one genetically engineered to combine the genius of several Einsteins or Hannibal Lecters. Now double this. (This is just a rough hypothetical, because in real life an AI would have several psychological advantages, like no need for sleep, no distraction, and no biases.) What could a Corporation do with such a slave? How long would we retain our rights with such beings at the disposal of a State, Corporation, or Military?
Remember, the question of AI rights and sentience may well determine the question of our rights and continued existence.
#FF0000
9th August 2010, 17:18
He then wrote a book called What Computers Can't Do (http://en.wikipedia.org/wiki/What_Computers_Can't_Do) where he argued no computer would ever "beat a class-A chess player." That was quickly disproven.
So he wrote another book What Computers Still Can't Do (http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=5890) where he claimed a computer would never be able to beat the world's best chess player. You probably know how that one went.
AIs have beaten grandmasters before, but it's not as if it's a one-sided thing. Kasparov beat Deep Blue the second time, iirc.
IcarusAngel
9th August 2010, 22:57
Some people aren't meant to be taken seriously or are crazy.
Anyway, it's the other way around with Kasparov and Deep Blue. Kasparov beat Deep Blue the first time and lost the second time. He claimed the computer cheated because it made moves that were "too human-like." He's accused other people of cheating before, I believe. Bobby Fischer was another one who regularly accused people of cheating or conspiring. Deep Blue was retired after defeating Kasparov.
What's interesting is that no computer can beat even amateur players at the game of Go (http://www.gokgs.com/tutorial/index.jsp). This is why it's not recommended to learn by playing against computer programs, as you'll pick up bad moves. I tried to play Go, but it's hard to tell at what point one player is winning and the other losing, and you have to score each game.
Dermezel
9th August 2010, 23:18
AI have beaten grandmasters before but it's not as if it's a one sided thing. Kasparov beat Deep Blue the second time iirc.
Uh... go us? "Yah man we kicked its ass! Go us! Ruff ruff! Grunt- zug zug!"
I mean I dunno maybe I'm strange but I kind of expect AI to surpass humans one day and frankly don't care. I don't like Dreyfus because he's a moron. If anything I feel he makes us look bad because he makes us (as a species) look stupid. Stupid, egotistical, and arrogant.
Dermezel
9th August 2010, 23:20
What's interesting is that no computer can beat even amateur players in the game of Go (http://www.gokgs.com/tutorial/index.jsp). This is why it's not recommended to learn to play by playing against computer programs as you'll pick up bad moves. I tried to play go but it's hard to tell at what point one person is going to win and one person is going to lose and you have to score each game.
So after it wins, are you going to argue that computers will never beat class-A players at Go? Again, I don't care either way. I mean, I care about whether or not people are well taken care of, fed, clothed, not killed, etc. But "I gotta be the best! I gotta be the best! Species power!" I've never really understood that mentality. Call it my lack of competitive or "team" spirit.
IcarusAngel
9th August 2010, 23:39
What are you talking about? I was just pointing out a factual statement. I didn't say a computer would NEVER beat a world-class player. However, it might not be for a while, given the vast number of possible moves in Go, and, according to the Wiki article, a lot of the chess programs are just brute-force algorithms.
And if you want egotistical, know nothing, arrogant trolls check out www.mises.org.
#FF0000
9th August 2010, 23:53
So after it wins, are you going to argue that computers will never beat class-A players at Go? Again, I don't care either way. I mean, I care about whether or not people are well taken care of, fed, clothed, not killed, etc. But "I gotta be the best! I gotta be the best! Species power!" I've never really understood that mentality. Call it my lack of competitive or "team" spirit.
You're kind of missing the point. We're pointing out that AI has a long way to go still.
Not really all that long, I don't think. Unless we hit a wall, which sometimes happens with technology.
You're stupid.
Dermezel
9th August 2010, 23:56
You're kind of missing the point. We're pointing out that AI has a long way to go still.
Not really all that long, I don't think. Unless we hit a wall, which sometimes happens with technology.
You're stupid.
Define long. It took us billions of years to get here. Machines have gotten there in roughly 2,000. They are outpacing us thousands of times over. And their rate of progress is accelerating.
#FF0000
10th August 2010, 00:00
Define long. It took us billions of years to get here. Machines have gotten there in roughly 2,000. They are outpacing us thousands of times over. And their rate of progress is accelerating.
Yeah exactly. So on second thought probably not that long at all.
Dermezel
10th August 2010, 00:03
Yeah exactly. So on second thought probably not that long at all.
Okay, but why did you have to note the AI lost the second game? I mean, I already knew that, but it was sort of said without provocation. I'm sorry, but it sounded like you were saying "But we kicked their ass in round 2!" and I'm like, "Who is we?" I support all sentient life.
In fact, I'd date a robot girl before a human girl any day in all probability. In fact, I'd date a robot guy before a human too. Less drama, probably more dexterous, also special features. I mean fuck yeah!
Dermezel
10th August 2010, 00:20
Yeah exactly. So on second thought probably not that long at all.
Anyways you're right. And you are the first person to admit that much in debates.
Dermezel
10th August 2010, 01:30
Anyways, on a side note, I've been arguing with someone I presume to be a petty-bourgeois/right-wing techno-creep (he is either that, or a prole with some super-backwards ideas, or big bourgeoisie, which I doubt) who told me in my story "Post-Science (http://www.legendfire.com/forums/index.php?showtopic=2945)" (btw it is meant to be speculative fiction, but they put it in the non-fiction section despite my criticisms):
It is indeed conceivable that a society can inherit machinery it no longer understands; that's my greatest fear. To design and build machines without the knowledge to do so, however, is patently impossible.
My response being:
Why is that your "greatest fear"? My greatest fears are of things like Global Warming, Nuclear War, and Mass Starvation/Poverty. I couldn't care less whether someone builds a machine they do not fully "understand."
I mean, after all, parents give birth to kids and hope their kids are smarter than themselves (at least good parents do), and they don't seem horrified that they created something "beyond their control" that will "one day destroy us!"
In other words, capitalism is already like a technology beyond our control, one that kills millions of people and threatens human existence every day. So this guy's fears of an "un-understandable AI" seemed almost childish. People are starving, and this guy is worried about his wounded ego because a machine is smarter than him. Get over it.
Anyways, Post-Science is basically a speculative technical improvement based on AI and cybernetics that makes it wholly different from science. It has nothing to do with Friedman's version (http://www.postscience.com/), which is more like pre-science (and which I did not know about while writing the short story).
Best review I got:
Nice write-up !
I'll still have to read it again when my headache's gone away...
FWIW, 'research' by multi-dimensional mapping of data and seeking patterns there-in is not new. Automated testing with multiple variables can generate far more data than a person can hope to comprehend, while automatic pattern recognition plus 'genetic algorithms' can home in on a 'sweet spot' solution. That's just shy of a 'brute force' approach, but may throw up unexpected wonders.
Of course, the result may only be as good as the model, and in-vivo testing then clinical trials *must* complement 'in-vitro' testing...
Though I disagree that it is "brute force." I think that is a label we apply to Deep Blue and other AIs (and perhaps one day cybernetics) when they overcome a baseline biological human. Such ideas, I believe, are kind of bigoted, almost racist.
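For what it's worth, the "genetic algorithms homing in on a sweet spot" idea from that review can be sketched in a few lines. This is just a toy illustration (the population sizes, the made-up fitness function, and the one-point crossover scheme are my own choices, not anything from the review):

```python
import random

def genetic_search(fitness, n_pop=30, n_genes=8, generations=60,
                   mutation_rate=0.1, seed=0):
    """Home in on a 'sweet spot' by keeping the fittest candidates,
    recombining them, and occasionally mutating -- no gradient, no
    insight into the fitness function, just selection pressure."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_genes)] for _ in range(n_pop)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: n_pop // 2]          # keep the fitter half
        children = []
        while len(survivors) + len(children) < n_pop:
            a, b = rng.sample(survivors, 2)    # pick two parents
            cut = rng.randrange(1, n_genes)    # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < mutation_rate:   # occasional mutation
                child[rng.randrange(n_genes)] = rng.random()
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy fitness landscape: the "sweet spot" is all genes near 0.5.
best = genetic_search(lambda g: -sum((x - 0.5) ** 2 for x in g))
```

Note that the search never "understands" the fitness function; it only compares candidates, which is exactly why the review called the approach "just shy of brute force."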
Blackscare
10th August 2010, 02:35
Though I disagree it is "brute force". I think that is something we use to label Deep Blue and other AI (and perhaps one day cybernetics) when it overcomes a baseline biological human. Such ideas I believe are kind of bigoted, almost racist.
You must not do any programming. Brute force just describes an approach that emphasizes repetition of a simple action until a desired objective is met, rather than utilizing more elegant methods.
I know nothing of the internal programming of Deep Blue, but if it is indeed a "brute force" machine, and I personally find it likely that it is, it probably just analyzes the state of the board on a given turn, without reference to things like play style, and breaks down every mathematical possibility until it finds the move with the least risk. That is brute force, and it reflects on the programmer more than the AI.
It's very similar to decryption programs that work by repetition. They are brute force, yet extremely useful and advanced in certain circumstances.
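To make "repetition of a simple action until a desired objective is met" concrete, here's about the simplest possible brute-force program: exhaustive key search against a lock that only says yes or no. The PIN setup is an invented toy, not a real decryption tool:

```python
from itertools import product

def brute_force_pin(check, alphabet="0123456789", length=3):
    """Try every possible combination until check() accepts one.
    No cleverness at all -- pure repetition of a simple action."""
    for candidate in product(alphabet, repeat=length):
        guess = "".join(candidate)
        if check(guess):
            return guess
    return None  # exhausted the whole space without a hit

# The "lock" only reveals whether a guess is right, nothing more.
secret = "427"
found = brute_force_pin(lambda g: g == secret)
```

The loop needs at most 1,000 tries here; the same approach applied to chess would need more positions than there are atoms in the universe, which is why game programs can't be purely brute force.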
Dermezel
10th August 2010, 02:38
You must not do any programming. Brute force just describes an approach that emphasizes repetition of a simple action until a desired objective is met, rather than utilizing more elegant methods.
I know nothing of the internal programming of Deep Blue but I assume that if it is indeed a "brute force" machine
Actually, I read via the Skeptics Society that this wasn't true: it could not do the full trillions of calculations needed to "brute force" chess and had to use heuristic rules. The "brute force" accusation was made by journalists trying to rationalize the loss.
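To illustrate the distinction: pure brute force would search the game tree all the way to checkmate, while a heuristic program cuts the search off at a fixed depth and scores unfinished positions with a rule of thumb. A toy sketch of depth-limited search (the counting game, where players add 1 or 2 and whoever reaches 10 wins, and the lazy always-zero heuristic are invented for illustration, not anything from Deep Blue):

```python
def negamax(total, depth, target=10):
    """Depth-limited search: exhaustive up to `depth` plies, then
    fall back on a (deliberately dumb) heuristic score of 0."""
    if total == target:
        return -1  # previous player just reached the target and won
    if depth == 0:
        return 0   # heuristic stand-in: "position unclear"
    best = -1
    for move in (1, 2):
        if total + move <= target:
            best = max(best, -negamax(total + move, depth - 1, target))
    return best

def best_move(total, depth=6, target=10):
    """Pick whichever legal move gets the highest search score."""
    scores = {m: -negamax(total + m, depth - 1, target)
              for m in (1, 2) if total + m <= target}
    return max(scores, key=scores.get)
```

With a deep enough limit the heuristic never gets consulted and the search is effectively brute force; shrink the depth and the quality of play rests entirely on how good the heuristic is, which is the trade-off a real chess engine lives with.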