View Full Version : IBM Super computer contestant on Jeopardy
MarxSchmarx
16th December 2010, 05:51
IBM is going to pit one of its supercomputers against champions of the American quiz show Jeopardy! in February:
http://www.bbc.co.uk/news/technology-11996531
Jeopardy, for those of you who haven't seen it, is a quiz show with subtle and often indirect clues that have to be answered in the form of a question: "He co-wrote the Communist Manifesto with Karl Marx" is answered with "Who is Friedrich Engels?" Clues can be more difficult, though, like the awkwardly phrased Spanish clue "la patria que su nombre significa la linea en el centro de la tierra" ("the homeland whose name means the line at the center of the earth"), which should be decipherable even by non-Spanish-speakers with a decent feel for Latin; the answer is "What is Ecuador?"
I think a lot of this is already done by Google, but I think the real trick is coming up with THE correct answer, especially given the potentially wide database involved, in less time than the humans can. When Google returns thousands upon thousands of results for a query, that isn't helpful from a Jeopardy player's perspective. So who do you all think will win this, and why? My money is still on those bipedal monkeys.
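The "one right answer" problem can be illustrated with a toy scorer. This is a hypothetical sketch, not how IBM's machine actually works: rank candidate answers by keyword overlap with the clue, and only "buzz in" when the top candidate clearly beats the runner-up.

```python
# Toy illustration of picking THE answer rather than returning
# thousands of search results. All data here is made up.

def score(clue, evidence):
    """Fraction of clue words that appear in a candidate's evidence text."""
    clue_words = set(clue.lower().split())
    evidence_words = set(evidence.lower().split())
    return len(clue_words & evidence_words) / len(clue_words)

def best_answer(clue, candidates, margin=0.2):
    """candidates maps answer -> supporting evidence snippet.
    Return an answer only if it beats the runner-up by `margin`;
    otherwise return None (too uncertain to buzz in)."""
    ranked = sorted(
        ((score(clue, ev), ans) for ans, ev in candidates.items()),
        reverse=True,
    )
    if len(ranked) == 1 or ranked[0][0] - ranked[1][0] >= margin:
        return ranked[0][1]
    return None

clue = "He co-wrote the Communist Manifesto with Karl Marx"
candidates = {
    "Friedrich Engels": "Engels co-wrote the Communist Manifesto with Karl Marx",
    "Vladimir Lenin": "Lenin led the Bolshevik revolution",
}
print(best_answer(clue, candidates))  # Friedrich Engels
```

The confidence margin is the interesting part: a real contestant (human or machine) loses money for wrong answers, so declining to answer when the evidence is ambiguous is itself a strategy.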
ÑóẊîöʼn
16th December 2010, 06:00
Since Jeopardy has a limited set of rules and winning conditions, I don't think it's asking too much to have a computer try and do it. Sheer speed helps; a computer can apply iterative algorithms at a blinding pace to pare down "search results" to the relevant items, then use more sophisticated algorithms to select one answer out of a more refined pool of candidates.
Of course, how well that works depends on the skill of the programmers. If they're any good, I say the computer has a good chance of winning.
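The pare-then-refine flow described above can be sketched as a pipeline of successively stricter filters, where cheap checks run over the whole pool and expensive ones only over the survivors. This is purely illustrative; the filter names and data are invented.

```python
# Coarse-to-fine filtering: each stage shrinks the pool before the
# next (presumably more expensive) stage runs. Hypothetical data.

def pare_down(pool, filters):
    """Apply each (name, predicate) filter in turn, keeping survivors."""
    for name, keep in filters:
        pool = [item for item in pool if keep(item)]
    return pool

documents = [
    {"title": "Engels biography", "topic": "history", "relevance": 0.9},
    {"title": "Marx critique", "topic": "history", "relevance": 0.4},
    {"title": "Cooking with tofu", "topic": "food", "relevance": 0.1},
]

filters = [
    ("cheap topic check", lambda d: d["topic"] == "history"),
    ("costlier relevance model", lambda d: d["relevance"] > 0.5),
]

survivors = pare_down(documents, filters)
print([d["title"] for d in survivors])  # ['Engels biography']
```

The design point is the ordering: running the cheap filter first means the expensive one examines two documents instead of three, and at web scale that difference is what makes the "blinding pace" possible.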
NGNM85
16th December 2010, 06:07
I don't know enough about this machine to judge either way. The real questions are: "Is Artificial Intelligence ("Strong AI") possible?" and "If it is possible, how close are we?" I would say that it is possible, because I don't presently see any truly convincing argument that it is impossible. How close are we? Closer than we have ever been.
ÑóẊîöʼn
16th December 2010, 06:15
I don't know enough about this machine to judge either way. The real questions are: "Is Artificial Intelligence ("Strong AI") possible?" and "If it is possible, how close are we?" I would say that it is possible, because I don't presently see any truly convincing argument that it is impossible. How close are we? Closer than we have ever been.
I think "is strong AI possible" has already been affirmatively answered - human brains operate according to the well-understood principles of classical physics. I think it is only a matter of time and research before we come up with a system that emulates the volitional and decision-making capabilities of the human brain, although of course this is not to say that any Strong AIs will be exactly like humans or even human brains.
As for how soon, I'm more open-minded about that. I consider it a tautology that we are closer than we have ever been - research is still ongoing, and even if that were to halt completely that would only mean no further progress. Of course if civilisation were to collapse then that would constitute a backward step as far as achieving Strong AI is concerned, but with any luck that is a remote possibility.
MarxSchmarx
16th December 2010, 06:49
Since Jeopardy has a limited set of rules and winning conditions, I don't think it's asking too much to have a computer try and do it. Sheer speed helps; a computer can apply iterative algorithms at a blinding pace to pare down "search results" to the relevant items, then use more sophisticated algorithms to select one answer out of a more refined pool of candidates.
Of course, how well that works depends on the skill of the programmers. If they're any good, I say the computer has a good chance of winning.
I am not so sure present hardware and programming capabilities are at that stage yet. Sure, for reasons you note, the human brain is just a glorified computer programmed primarily by natural selection and the rules of physics and chemistry, so there's no reason why we can't mimic that, at least in theory.
I think there are real hurdles, though, to how efficiently calculations can be carried out in a computer. For example, we can't improve processing speed unless we harness something like nuclear fusion, and even parallel algorithms that scale very well are limited by the bandwidth and communication speed between components. So on the hardware side of things alone I'm not sure it could be done, quite yet.
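The scaling limit mentioned here is usually formalised as Amdahl's law: if a fraction s of the work is inherently serial (communication, synchronisation between components), the speedup on n processors is capped at 1/(s + (1-s)/n), which approaches 1/s no matter how many processors you add. A quick numeric check:

```python
# Amdahl's law: best-case speedup with n processors when fraction s
# of the work cannot be parallelised (bandwidth, communication, etc.).

def speedup(s, n):
    return 1.0 / (s + (1.0 - s) / n)

# Even a modest 5% serial fraction caps speedup near 1/0.05 = 20,
# no matter how much hardware you throw at the problem.
for n in (1, 10, 100, 10000):
    print(n, round(speedup(0.05, n), 2))
```

This is why "just add more chips" stops working: past a point, extra processors mostly wait on the serial bottleneck.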
On the software side, I'm not so sure we're quite there with paring down search results or sifting through large amounts of data. The problem is that a lot of this involves dealing with different data types and conditionals upon conditionals, so there is a limit to how efficiently those can be evaluated simultaneously. Even with the best programmers (and I have no reason to suspect IBM is short-changed in this department), I'm curious to hear why precisely users here think our current capabilities are sufficient. There are many software problems we haven't been able to crack yet, so I guess I'm not so pessimistic (optimistic?).
NGNM85
16th December 2010, 07:42
I think "is strong AI possible" has already been affirmatively answered - human brains operate according to the well-understood principles of classical physics. I think it is only a matter of time and research before we come up with a system that emulates the volitional and decision-making capabilities of the human brain, although of course this is not to say that any Strong AIs will be exactly like humans or even human brains.
Well, yeah, to say it's impossible at this stage you essentially have to bring in Cartesian dualism or some other idea that there's more to it than just a complex electrochemical process. Then there's the theory of 'Quantum Consciousness', which essentially says: 'We don't understand brains or quantum physics, so consciousness is a quantum phenomenon.' Of course, I'm oversimplifying a little, but not that much. From what I can tell, the majority of neuroscientists dismiss this view. However, the fact that we still don't totally understand the brain does give me pause, because that's an important variable. There may be unforeseen principles or properties we just aren't presently aware of.
I think an A.I. would be very different. For one thing, it would probably be a sociopath. I wouldn't expect it to be malevolent, but our morality is fundamentally rooted in evolved instincts it wouldn't have. I think it would jealously guard its power supply and be rather apprehensive about being deactivated. I'm not sure if it would have any reproductive drive; it could just have another artificial brain created (or, perhaps, make one itself) and install a copy of its own software, essentially re-creating itself. As it would not age and, again, has no evolved behavior patterns, it might not find this to be a valuable enterprise. I think we should be extremely cautious about connecting it to the internet.
As for how soon, I'm more open-minded about that. I consider it a tautology that we are closer than we have ever been - research is still ongoing, and even if that were to halt completely, that would only mean no further progress.
It was kind of a throwaway statement, but it's the closest thing to an authoritative answer I can provide at this time.
Of course if civilisation were to collapse then that would constitute a backward step as far as achieving Strong AI is concerned, but with any luck that is a remote possibility.
The biggest issue is nuclear proliferation, and I am not impressed. This business with the START treaty is a perfect example. It could easily go either way. Every generation feels this way, but I feel it is not hyperbole to say that the human species is at a critical juncture, that we are setting the stage for the endgame right now.
ÑóẊîöʼn
16th December 2010, 08:16
I think there are real hurdles, though, to how efficiently calculations can be carried out in a computer. For example, we can't improve processing speed unless we harness something like nuclear fusion, and even parallel algorithms that scale very well are limited by the bandwidth and communication speed between components. So on the hardware side of things alone I'm not sure it could be done, quite yet.
I'm not sure I understand you; how would nuclear fusion improve processing capacity?
On the software side, I'm not so sure we're quite there with paring down search results or sifting through large amounts of data. The problem is that a lot of this involves dealing with different data types and conditionals upon conditionals, so there is a limit to how efficiently those can be evaluated simultaneously. Even with the best programmers (and I have no reason to suspect IBM is short-changed in this department), I'm curious to hear why precisely users here think our current capabilities are sufficient. There are many software problems we haven't been able to crack yet, so I guess I'm not so pessimistic (optimistic?).
Google and other search engines process tens of millions of search requests on a daily basis - of course it helps that they have multiple facilities, but those cost money and big savings can be made by improving base units, so...
Well, yeah, to say it's impossible at this stage you essentially have to bring in Cartesian dualism or some other idea that there's more to it than just a complex electrochemical process. Then there's the theory of 'Quantum Consciousness', which essentially says: 'We don't understand brains or quantum physics, so consciousness is a quantum phenomenon.' Of course, I'm oversimplifying a little, but not that much. From what I can tell, the majority of neuroscientists dismiss this view.
My understanding is that Cartesian dualism and quantum consciousness are both scientific nonsense; in fact the latter would fall under what Murray Gell-Mann called "quantum flapdoodle".
However, the fact that we still don't totally understand the brain does give me pause, because that's an important variable. There may be unforeseen principles or properties we just aren't presently aware of.
I'm not so sure. There may be rules operating within the system that we are not aware of, but nothing that breaks our current understanding of the universe; otherwise we'd have found out by now.
I think an A.I. would be very different. For one thing, it would probably be a sociopath. I wouldn't expect it to be malevolent, but our morality is fundamentally rooted in evolved instincts it wouldn't have. I think it would jealously guard its power supply and be rather apprehensive about being deactivated.
I think such questions depend entirely on the goal system - an AI won't act like a sociopath unless it's programmed to. Malice and empathy are evolutionary responses that AIs will lack unless we program them in.
I'm not sure if it would have any reproductive drive; it could just have another artificial brain created (or, perhaps, make one itself) and install a copy of its own software, essentially re-creating itself. As it would not age and, again, has no evolved behavior patterns, it might not find this to be a valuable enterprise.
But then again, having no evolved behaviour patterns, it might not see any problem with populating the universe with copies of itself, if that is what it takes to complete its goals.
I think we should be extremely cautious about connecting it to the internet.
To be honest I'm more concerned about malicious subversion from outside rather than anything it might do to the rest of the world. I don't particularly fancy the world's first AI becoming part of some botnet. Or worse, hacking attempts by anti-AI types.
The biggest issue is nuclear proliferation, and I am not impressed. This business with the START treaty is a perfect example. It could easily go either way. Every generation feels this way, but I feel it is not hyperbole to say that the human species is at a critical juncture, that we are setting the stage for the endgame right now.
Nukes are not the problem. They are controlled by small minorities with a vested interest in not blowing themselves up, the world is no longer divided between East and West, and the logic of nuclear-weapons doomsday is completely incoherent - why would North Korea and South Korea kick off if India and Pakistan were to nuke each other?
No, the real problem is that technological society will die in a whimper, not a bang - crippled by increasingly degraded environments that nobody important cares enough to do anything about, and the gradual but sure depletion of resources by a voracious capitalist price system that encourages all to consume more, more, more.
NGNM85
18th December 2010, 06:41
My understanding is that Cartesian dualism and quantum consciousness are both scientific nonsense; in fact the latter would fall under what Murray Gell-Mann called "quantum flapdoodle".
I'm not so sure. There may be rules operating within the system that we are not aware of, but nothing that breaks our current understanding of the universe; otherwise we'd have found out by now.
I essentially agree; again, I don’t subscribe to either of those ideas. I'm just saying that, considering we still don't understand how the brain works, our confidence should reflect that. Presently we don’t have any information that proves strong AI is impossible. It’s a little premature to extrapolate from that that it absolutely is possible and we’ll do it in 10 years, or something like that. I’m not saying otherwise; I’m just questioning the confidence in such assertions.
I think such questions depend entirely on the goal system - an AI won't act like a sociopath unless it's programmed to. Malice and empathy are evolutionary responses that AIs will lack unless we program them in.
In that case we’ll be modifying it from its default sociopathic state.
But then again, having no evolved behaviour patterns, it might not see any problem with populating the universe with copies of itself, if that is what it takes to complete its goals.
That’s possible. This is also further reason to limit its capabilities and to remain vigilant for potential sub-goal/super-goal conflicts.
To be honest I'm more concerned about malicious subversion from outside rather than anything it might do to the rest of the world. I don't particularly fancy the world's first AI becoming part of some botnet. Or worse, hacking attempts by anti-AI types.
I’m just apprehensive about a potentially malign superintelligent entity having control over every computer system on the planet. Some group conducted an experiment, a version of the Turing test, in which one person, playing the role of an AI, tries to persuade another to connect it to the internet. The gatekeepers usually comply; I think on average it took about two hours of persuasion. Theoretically, a strong AI could be much more persuasive.
Nukes are not the problem. They are controlled by small minorities with a vested interest in not blowing themselves up,
I am not moved by this in the slightest. The United States and Israel, for example, frequently sacrifice security for other priorities. Israel is seriously jeopardizing its future; it could virtually eliminate the risk by complying with international law. The clearest example is the Cuban Missile Crisis. Since the end of the Cold War, documents have surfaced showing the situation was actually far more dire than anybody had previously thought. The only reason we’re having this conversation is because of a man named Vasily Arkhipov, an officer on a Soviet sub. Of the three highest-ranking officers aboard - himself, the captain, and the political officer - Arkhipov was the only one who didn’t want to launch the nukes. That’s how close we came. You can pick dozens of examples, from the recent financial meltdown to the oil spill in the Gulf. Admittedly, the powers that be do usually exhibit a basic kind of logic and usually behave within a predictable framework; however, I see no compelling evidence to the contrary, and I think history has repeatedly shown that, under the right circumstances, they will gallop headlong into oblivion, risking even the most dire of consequences.
the world is no longer divided between East and West, and the logic of nuclear-weapons doomsday is completely incoherent - why would North Korea and South Korea kick off if India and Pakistan were to nuke each other?
That isn’t the right way to approach the problem. The issue is not what level of nuclear annihilation could civilization withstand. The ideal number is none. The catastrophic suffering and loss of life should be sufficient to motivate us to avoid this potentiality.
Even a “small-scale” nuclear exchange would result in massive casualties, then radiation, mass migration of refugees, and starvation; these things could create substantial instability in the surrounding regions and could, conceivably, lead to further conflicts.
Even if we could be reasonably positive that nation-states, even the crazy ones, would never under any circumstances run the risk of nuclear annihilation, there are also non-state actors who clearly aren’t so discriminating. We certainly don’t want a group like Al-Qaeda or Aum Shinrikyo or some other club of suicidal maniacs to get their hands on nuclear weapons.
Lastly, there is no legitimate defense for the massive arsenals of these awesomely destructive weapons. The United States alone could ostensibly annihilate, essentially, all life on earth at least twice. The only practical application for nuclear weapons that I can think of is to deflect an impending asteroid, or to defend our planet against hostile extraterrestrials. Our present arsenals are way out of proportion to the statistical likelihood of these threats. Contrary to NRA literature the abundance of nuclear weapons has an inverse relationship to our safety and security. Unlike pandemics, gamma ray bursters, or incoming asteroids, this is a threat we can completely control. We need to stop making these engines of death, and then we can cut down on the ones we already have. Incidentally, the US is the sole impediment to this process.
No, the real problem is that technological society will die in a whimper, not a bang - crippled by increasingly degraded environments that nobody important cares enough to do anything about, and the gradual but sure depletion of resources by a voracious capitalist price system that encourages all to consume more, more, more.
That’s another theory. I think we should pursue that, but that doesn’t negate an international effort to combat nuclear proliferation. We can, and should, do both.
ÑóẊîöʼn
18th December 2010, 08:40
I essentially agree; again, I don’t subscribe to either of those ideas. I'm just saying that, considering we still don't understand how the brain works, our confidence should reflect that. Presently we don’t have any information that proves strong AI is impossible. It’s a little premature to extrapolate from that that it absolutely is possible and we’ll do it in 10 years, or something like that. I’m not saying otherwise; I’m just questioning the confidence in such assertions.
I'm not making any predictions, so I've no idea where you got "10 years" from.
In that case we’ll be modifying it from its default sociopathic state.
No, no, no. Sociopathy is a human trait that AIs will lack unless we purposefully program it in.
That’s possible. This is also further reason to limit its capabilities and to remain vigilant for potential sub-goal/super-goal conflicts.
I think it's sufficient to stipulate avoiding deliberate harm and/or unconsenting modification of humans as part of the goal system. Anything beyond that becomes exponentially more complicated and therefore just asking for trouble.
I’m just apprehensive about a potentially malign superintelligent entity having control over every computer system on the planet. Some group conducted an experiment, a version of the Turing test, in which one person, playing the role of an AI, tries to persuade another to connect it to the internet. The gatekeepers usually comply; I think on average it took about two hours of persuasion. Theoretically, a strong AI could be much more persuasive.
If it's a strong AI with even the slightest capacity for self-optimisation, then our comparatively crude attempts at either limiting it or attacking it in some fashion will come to naught. This is a simple consequence of being able not only to think faster than humans, but also better - perfect recall, photographic memory, and so on.
There will be a point, we don't know when, where AIs surpass humans in general ability. Once that happens, humans will no longer be the dominant force on this planet, for better or worse. And there's no guarantee that we'll realise AIs have crossed that threshold before it's too late.
I am not moved by this in the slightest. The United States and Israel, for example, frequently sacrifice security for other priorities. Israel is seriously jeopardizing its future; it could virtually eliminate the risk by complying with international law. The clearest example is the Cuban Missile Crisis. Since the end of the Cold War, documents have surfaced showing the situation was actually far more dire than anybody had previously thought. The only reason we’re having this conversation is because of a man named Vasily Arkhipov, an officer on a Soviet sub. Of the three highest-ranking officers aboard - himself, the captain, and the political officer - Arkhipov was the only one who didn’t want to launch the nukes. That’s how close we came. You can pick dozens of examples, from the recent financial meltdown to the oil spill in the Gulf. Admittedly, the powers that be do usually exhibit a basic kind of logic and usually behave within a predictable framework; however, I see no compelling evidence to the contrary, and I think history has repeatedly shown that, under the right circumstances, they will gallop headlong into oblivion, risking even the most dire of consequences.
The fact that we are having this conversation proves you wrong, actually. We have been close to the brink, true, but the important lesson to take home is that we weren't so unaware of the potential consequences as to dive headlong into the abyss. Sure, somebody may nuke Israel, and I would be surprised if there wasn't a regional nuclear war before the end of the century, but neither of those scenarios will result in the extinction of the human species or even present an existential threat to global civilisation. Too many people have lived through decades of having a nuclear Sword of Damocles over their heads to risk throwing everything away so easily. As for the future, things will be too interconnected.
That isn’t the right way to approach the problem. The issue is not what level of nuclear annihilation could civilization withstand. The ideal number is none. The catastrophic suffering and loss of life should be sufficient to motivate us to avoid this potentiality.
Yeah, it would be great if we could eliminate war, but that isn't happening any time in the foreseeable future. Therefore our options consist of harm-reduction strategies, recognising the plain and obvious fact that neither nuclear weapons nor nation-states are going away any time soon.
Even a “small-scale” nuclear exchange would result in massive casualties, then radiation, mass migration of refugees, and starvation; these things could create substantial instability in the surrounding regions and could, conceivably, lead to further conflicts.
The sort of people who get to lead countries are savvy enough to realise that if your neighbours nuke each other and you suffer from the fallout, the situation is not improved by nuking one's other neighbours. And what refugees? Most people would be dying of radiation sickness and/or starving, considering that nuclear weapons are commonly aimed at urban concentrations.
Even if we could be reasonably positive that nation-states, even the crazy ones, would never under any circumstances run the risk of nuclear annihilation, there are also non-state actors who clearly aren’t so discriminating. We certainly don’t want a group like Al-Qaeda or Aum Shinrikyo or some other club of suicidal maniacs to get their hands on nuclear weapons.
Firstly, it is people who are crazy, not nation-states. The kind of people who would launch nuclear weapons without very careful consideration tend not to get very far in the kind of nations capable of credible launch capability. Iran may be a theocracy, but that is not the same thing as saying they are a bunch of suicidal jihadists.
Speaking of which, nuclear weaponry requires the kind of resources that terrorists and other non-state actors simply cannot afford - not just materials, but manpower as well. Construction of nuclear weapons takes very rare skills and in some cases involves techniques whose precise details are highly classified. Not only that, but pre-built nuclear weapons are heavily guarded, especially since the September 11th attacks. It would be tremendously easier, and likely just as effective in terms of spreading terror, to steal some radioactive material for use in a dirty bomb.
Lastly, there is no legitimate defense for the massive arsenals of these awesomely destructive weapons. The United States alone could ostensibly annihilate, essentially, all life on earth at least twice. The only practical application for nuclear weapons that I can think of is to deflect an impending asteroid, or to defend our planet against hostile extraterrestrials. Our present arsenals are way out of proportion to the statistical likelihood of these threats. Contrary to NRA literature the abundance of nuclear weapons has an inverse relationship to our safety and security. Unlike pandemics, gamma ray bursters, or incoming asteroids, this is a threat we can completely control. We need to stop making these engines of death, and then we can cut down on the ones we already have. Incidentally, the US is the sole impediment to this process.
Well, if the US makes the first move towards complete nuclear disarmament, it's possible but by no means certain that other nations would follow suit. But even then, such a situation would not be stable - all it takes is one state restarting its nuclear weapons program and the whole damn cycle starts all over again.
No, I think the solution lies not in the simple-minded banning of nuclear weapons, but in creating a world that everyone has a direct interest in preserving. I have the capability to murder my neighbour, but there are various extremely good reasons why I won't even consider it.
That’s another theory. I think we should pursue that, but that doesn’t negate an international effort to combat nuclear proliferation. We can, and should, do both.
Disarmament is a foolish dream for reasons I have just illustrated.