
View Full Version : why the singularity isn't going to happen



bcbm
15th October 2010, 19:04
I don't believe in the Singularity for the same reason I don't believe in Heaven.

Once I met a Singularity zealot who claimed that eating potato chips after the Singularity would induce sublime ecstasy. Our senses would be so heightened that we could completely focus our whole attention on the ultimate chippiness of the chip. For him, the Singularity was just like Sunday school Heaven, full of turbo versions of everything we love down here on Earth. But instead of an all-powerful God zotting angel puppies into existence for our pleasure, we would be using the supposed tools of the Singularity like nanotech and A.I. to conjure up the tastiest junk food ever.

That is not a vision of social progress; it is, in fact, a complete failure to imagine how technology might change society in the future.
Though it's easy to parody the poor guy who talked about potato chips after the Singularity, his faith is emblematic of Singularitarian beliefs. Many scientifically minded people believe the Singularity is a time in the future when human civilization will be completely transformed by technologies, specifically A.I. and machines that can control matter at an atomic level (for a full definition of what I mean by the Singularity, read my backgrounder on it (http://io9.com/5534848/what-is-the-singularity-and-will-you-live-to-see-it)). The problem with this idea is that it's a completely unrealistic view of how technology changes everyday life.

Case in point: Penicillin. Discovered because of advances in biology, and refined through advances in biotechnology, this drug cured many diseases that had been killing people for centuries. It was in every sense of the term a Singularity-level technology. And yet in the long term, it wound up leaving us just as vulnerable to disease. Bacteria mutated, creating nastier infections than we've ever seen before. Now we're turning to probiotics rather than antibiotics; we're investigating gene therapies to surmount the troubles we've created by massively deploying penicillin and its derivatives.

continued:
http://io9.com/5661534/why-the-singularity-isnt-going-to-happen?skyline=true&s=i

Kiev Communard
15th October 2010, 19:20
Yes, the idea of "God-like" singularitarians walking the heavens the day after the limits of Moore's Law are hit is pretty much ridiculous. Yet many technological breakthroughs could still be used to achieve a great deal of progress - provided that these technologies are freed from the imperatives of the market and the nation state.

bcbm
15th October 2010, 19:23
yet many technological breakthroughs could still be used to achieve a great deal of progress

I don't think anyone disputes this, but the author's point is that progress will not solve all problems - it will solve some while creating new ones.

Invincible Summer
15th October 2010, 19:36
I used to be pretty into this sort of stuff, but now I'm feeling pretty skeptical. I mean, the utopian concept of the Singularity that most of its proponents describe would basically require a post-scarcity scenario with an egalitarian outlook that would not be possible under capitalism. Kurzweil and others contend that prices for technology have come down and continue to do so, but a price is still a price, and even with federally subsidized nanotechnology or whatever, the people who would benefit most probably won't be able to afford it.

So if we need a communist revolution first, then who the fuck knows when anything remotely resembling the Singularity will happen.

Summerspeaker
15th October 2010, 22:21
Kurzweil's brand of Singularitarianism simply adds a juicy techno-utopian reward to the classic program of status-quo support. As long as we keep on doing what we're doing (the system that made Kurzweil rich), SCIENCE will conveniently solve all our problems.

However, while self-serving politics alone reasonably prompt suspicion, they don't invalidate technological predictions and aspirations. Kurzweil and company have convinced me of the possibility (never inevitability) of radically transformative developments within the next few decades.

For instance, a recursively self-improving artificial intelligence might well gain the ability to dispense doped-up potato chips to every human being on the planet. Superintelligence won't necessarily follow the old rules of technological advance. Singularitarians like Eliezer Yudkowsky and Michael Anissimov use this potential to argue for building friendly AI, letting it take over, and hoping for the best.

As little as I trust them (or any small group of independent scientists) to carry out their dream, the more believable scenario of military AI looks far worse. The technologies transhumanists fantasize about would enable a regime of social control beyond even the sophisticated and largely successful one that exists today. The narcotic snack foods mentioned in the article would likely echo the drugs in Brave New World more than anything else. This sort of thing poses a real threat to the revolutionary cause. To give a crude example, they're researching the next generation of less-lethal weapons only a handful of miles from where I live.

I think we on the left need to take transhumanism seriously both as a threat and as an opportunity.

ckaihatsu
16th October 2010, 00:47
For imaginative things like this I think we should make sure to rake it over the coals of a class analysis and see what remains intact.

My pet theory is that the wealthy -- of any era -- need some kind of "intellectual" status symbol to go with their *material* status symbols and personality-displacing wealth. Their wealth-driven egos can't *but* ache for some kind of concocted *social* identity for themselves, one which would then enable them to spray their diarrhea all over the public in socially acceptable ways.

In the past the clergy had the whole supernatural deity thing going, and of course there was an internal hierarchy based on *that* mythology (and wealth / power). These days that's old-hat, so I think we're seeing newer, more techno-centric versions of the trans-humanity mythos -- wielded at will by those whose names have already been vetted by the bourgeois community.

ÑóẊîöʼn
16th October 2010, 13:03
The guy describing what potato chips will taste like after the Singularity is an idiot:


There's another safeguard that isn't in the Principles. It's the idea I originally wrote Staring into the Singularity to emphasize. It's this one last piece of advice: Don't go Utopian.

Don't describe Life after Singularity in glowing terms. Don't describe it at all. I think the all-time low point in predicting the future came in the few brief paragraphs of Unbounding the Future that I read, when they described a pedestrian being run over and his hand miraculously healing. That's ridiculous. Pedestrian? Run over? Hand? Cars in a nanotech world? Why not just have a bunch of apes describe the ease of getting bananas with a human mind?

In the words of Drexler:

"I would emphasize that I have been invited to give talks at places like the physical sciences colloquium series at IBM's main research center, at Xerox PARC, and so forth, so these ideas are being taken seriously by serious technical people, but it is a mixed reaction. You want that reaction to be as positive as possible, so I plead with everyone to please keep the level of cultishness and bullshit down [Yudkowsky's emphasis], and even to be rather restrained in talking about wild consequences, which are in fact true and technically defensible, because they don't sound that way. People need to have their thinking grow into longer-term consequences gradually; you don't begin there."

The problem with people expounding their Utopian visions of a nanotech world is that their consequences aren't wild enough. Looking at stories of instantly healing wounds, or any material object being instantly available, doesn't give you the sense of looking into the future. It gives you the sense that you're looking into an unimaginative person's childhood fantasy of omnipotence, and that predisposes you to treat nanotechnology the same way. Worse, it attracts other people with unimaginative fantasies of omnipotence. There's no better way to turn into a bunch of parlor pinks, sipping coffee and planning the Revolution without actually doing anything.

...

In a moment of insanity, I subscribed to the Extropian mailing list. These people know what "Singularity" means. In theory, they know what's coming. And yet, even as I write, folk who really ought to know better are arguing over whether transhumans will have enough computing power to simulate private Universes, whether the amount of computing power available to transhumans is limited by the laws of physics, whether someone uploaded into a trans-computer is really the same person or just an amazing soybean imitation, and - least believably of all - whether our unimaginably intelligent future selves will still be having sex.

Why is this our concern? Why do we need to know this? Can it not be that maybe, just maybe, these problems can wait until after we're five times as smart and some of our blind spots have been filled? Right now, every human being on this planet has one concern: How do we get to the Singularity as fast as possible? What happens afterward is not our problem and I deplore those gosh-wow, unimaginative, so-cloying-they-make-you-throw-up, and just plain boring and unimaginative pictures of a future with unlimited resources and completely unaltered mortals. Leave the problems of transhumanity to the transhumans. Our chances of getting anything right are the same as a fish designing a working airplane out of algae and pebbles.

Summerspeaker
16th October 2010, 17:31
While I don't see much worthwhile coming from talk of ecstasy-inducing potato chips, Yudkowsky does not offer a superior alternative in this case. Utopianism gets a bad rap. Socialist utopian tales like William Morris's News from Nowhere provide considerably more political content than Yudkowsky's mad rush to superintelligence. He recommends a fuller surrender to the coming machine god than the junk food enthusiast cited in the opening post. We're too lowly to even dream of wonders in store. The only meaningful human endeavor in this narrative becomes the one final technical pursuit: friendly AI.

No thank you.

ÑóẊîöʼn
16th October 2010, 18:30
While I don't see much worthwhile coming from talk of ecstasy-inducing potato chips, Yudkowsky does not offer a superior alternative in this case. Utopianism gets a bad rap. Socialist utopian tales like William Morris's News from Nowhere provide considerably more political content than Yudkowsky's mad rush to superintelligence. He recommends a fuller surrender to the coming machine god than the junk food enthusiast cited in the opening post. We're too lowly to even dream of wonders in store. The only meaningful human endeavor in this narrative becomes the one final technical pursuit: friendly AI.

No thank you.

It's hardly a "mad rush" - such terminology implies going forward without a thought to the potential consequences, but if the aim is Friendly AI then that's not exactly a slapdash effort careless of the consequences.

Further, the fact that human intelligence has its limits would not make us "lower" (funny that a self-ascribed anarchist thinks in such hierarchical terms) than a superintelligence, only less complicated and therefore more limited in our abilities.

I fully support the creation of Friendly AI with the ability to modify itself. It would represent an enormous boon to the species, save for those individuals who hanker for the days when men were created by gods, rather than the other way around.

Summerspeaker
16th October 2010, 21:29
Further, the fact that human intelligence has its limits would not make us "lower" (funny that a self-ascribed anarchist thinks in such hierarchical terms) than a superintelligence, only less complicated and therefore more limited in our abilities.

Friendly AI advocates such as Yudkowsky, Michael Anissimov, and Nick Bostrom construct it in hierarchical terms. Bostrom, in particular, writes about the necessity for a singleton (http://www.nickbostrom.com/fut/singleton.html), a good old-fashioned centralized authority. It's a broad concept, but superintelligence emerges as a key means for establishing a singleton. I'm not making up the hierarchy. How can you reconcile such a goal with the anarchist tradition? Yudkowsky and company even make democratic socialists (http://ieet.org/index.php/IEET/print/3670) uncomfortable.


I fully support the creation of Friendly AI with the ability to modify itself. It would represent an enormous boon to the species, save for those individuals who hanker for the days when men were created by gods, rather than the other way around.

What about those of us who say, "No gods, no masters"? Friendly AI is a lot like a benevolent dictatorship. Might be nice if you could pull it off but awfully hard to achieve and dangerous if it goes wrong. Even if I were convinced of its desirability, I see little reason to trust any current research structures to get friendly AI right. As soon as strong AI becomes practical the military will do it first unless somebody's sleeping on the job or there's a massive campaign in favor of the globally friendly version. The dream of bypassing social struggle through technology has a plausible basis in reality but deadly political implications. I don't see any realistic substitute for revolution.

ÑóẊîöʼn
16th October 2010, 22:15
Friendly AI advocates such as a Yudkowsky, Michael Anissimov, and Nick Bostrom construct it in hierarchical terms. Bostrom, in particular, writes about the necessity for a singleton (http://www.nickbostrom.com/fut/singleton.html), a good old-fashioned centralized authority.

From what I read, a "singleton" could just as easily be a collective intelligence, with no single member more important than any other. What defines a singleton is unity of action and purpose, not centralised political authority.

In any case, it is presented as a hypothesis, in which case it will turn out to be true or false no matter if it offends our political sensibilities or not.


What about those of us who say, "No gods, no masters"? Friendly AI is a lot like a benevolent dictatorship. Might be nice if you could pull it off but awfully hard to achieve and dangerous if it goes wrong. Even if I were convinced of its desirability, I see little reason to trust any current research structures to get friendly AI right. As soon as strong AI becomes practical the military will do it first unless somebody's sleeping on the job or there's a massive campaign in favor of the globally friendly version. The dream of bypassing social struggle through technology has a plausible basis in reality but deadly political implications. I don't see any realistic substitute for revolution.

A Friendly AI would only be a dictator (benevolent or otherwise) if it was programmed to be one.

If the military is seriously interested in Strong AI for their applications (as opposed to the expert systems and remote drones they actually seem to be going for) then they're more foolish than I thought.

As for technology solving social problems, I'm ultimately a pragmatist. If in the future it turns out that the Singularity solves the problems worth solving, then that will be all to the good. Until then, I'm hedging my bets and leaving open the possibility that social change will come as a result of other means. I wouldn't be on this website otherwise.

Summerspeaker
17th October 2010, 00:14
From what I read, a "singleton" could just as easily be a collective intelligence, with no single member more important than any other. What defines a singleton is unity of action and purpose, not centralised political authority.

According to Bostrom, a singleton cannot tolerate any challenge to its "supremacy" and exerts "effective control over major features of its domain (including taxation and territorial allocation)." That's great for liberals, but fundamentally incompatible with anarchism as far as I can tell. Considering what Anissimov at least personally thinks of anarchism and socialism, this isn't surprising.


A Friendly AI would only be a dictator (benevolent or otherwise) if it was programmed to be one.

As described by Singularitarians, a superintelligence would quickly become physically dominant by virtue of its nature. We humans wouldn't materially matter under those circumstances. Only other superintelligences would, and a singleton wouldn't let independent ones arise.


If the military is seriously interested in Strong AI for their applications (as opposed to expert systems and remote drones they actually seem to be going for) then they're more foolish than I thought.

If they believed the hype, they would have to build a narrowly interested AI, prevent anyone else from building an AI, or renounce the imperialist game. See above. Once a universally friendly AI appears, the United States (and every other country) loses in traditional power politics terms. I have trouble picturing the bosses throwing in the towel voluntarily. The Singularity Institute for Artificial Intelligence could only succeed by either incredible luck or a massive popular campaign.

ÑóẊîöʼn
17th October 2010, 01:33
According to Bostrom, a singleton cannot tolerate any challenge to its "supremacy" and exerts "effective control over major features of its domain (including taxation and territorial allocation)." That's great for liberals, but fundamentally incompatible with anarchism as far as I can tell. Considering what Anissimov at least personally thinks of anarchism and socialism, this isn't surprising.

Would a global communist society (which would constitute a "singleton" under Bostrom's definition) tolerate any challenges to its economic and territorial supremacy? I think not!


As described by Singularitarians, a superintelligence would quickly become physically dominant by virtue of its nature. We humans wouldn't materially matter under those circumstances. Only other superintelligences would, and a singleton wouldn't let independent ones arise.

Whether humans matter or not depends on the nature of the superintelligence, surely?


If they believed the hype, they would have to build a narrowly interested AI, prevent anyone else from building an AI, or renounce the imperialist game. See above.

Number three isn't happening for obvious reasons. Number two would be difficult to enforce, to say the least. Number one is likely, but self-improving Strong AI has various issues relating to control, and the last thing that any general wants is a piece of equipment (which is how most in the military would regard an AI, Strong or no, rather than regarding it more accurately as a person) to start making its "own" decisions.


Once a universally friendly AI appears, the United States (and every other country) loses in traditional power politics terms. I have trouble picturing the bosses throwing in the towel voluntarily.

Who said they would? But in a straight-up power struggle between a Strong AI and all human governments, my money (and my support) would be on Team Silicon.


The Singularity Institute for Artificial Intelligence could only succeed by either incredible luck or a massive popular campaign.

Programming a Friendly AI is an engineering issue, not a matter of luck or popularity.

9
17th October 2010, 02:00
what the hell is singularity? :(

ÑóẊîöʼn
17th October 2010, 02:23
what the hell is singularity? :(

Wikipedia is your friend (http://en.wikipedia.org/wiki/Technological_singularity)

Yudkowsky's 5-minute intro (http://yudkowsky.net/singularity/intro)

Summerspeaker
17th October 2010, 07:47
Would a global communist society (which would constitute a "singleton" under Bostrom's definition) tolerate any challenges to its economic and territorial supremacy? I think not!

I was talking about anarchism specifically for a reason. Certain forms of communism could indeed fulfill Bostrom's definition. I can even imagine (and have imagined (http://queersingularity.wordpress.com/2010/06/15/posthuman-political-theory/)) outcomes tolerably in line with my morality. I would hardly begrudge a singleton that strictly limited itself to useful production and violence prevention. The critical problem of trust remains, however. Anarchists favor decentralization and freedom because actors with power over others overwhelmingly abuse it.


Whether humans matter or not depends on the nature of the superintelligence, surely?

I'm talking about pure material power in this respect. A superintelligence with access to molecular manufacturing could literally create a post-scarcity economy from scratch wherever energy and matter supplies allowed. Trying to fight such an entity would be an exercise in futility. At least under the current system workers have material leverage in the production system. A general strike would stop everything. The bosses right now need us. The aforementioned superintelligence wouldn't.


Number two would be difficult to enforce, to say the least. Number one is likely, but self-improving Strong AI has various issues relating to control, and the last thing that any general wants is a piece of equipment (which is how most in the military would regard an AI, Strong or no, rather than regarding it more accurately as a person) to start making its "own" decisions.

With renunciation off the table, they'll have to attempt state AI and/or suppression. Under the established assumptions, we're talking about the fate of the world here. The risk of opposing AI far outweighs the risk of your own AI going rogue. The SIAI plan of independent researchers cooking up universally friendly superintelligence makes about as much sense as an alternative history where private scientists invented the atom bomb while governments stood around twiddling their thumbs. It can only happen if states show unusual incompetence or discard their customary goals.


But in a straight-up power struggle between a Strong AI and all human governments, my money (and my support) would be on Team Silicon.

Assuming a friendly AI were created, same here. It's getting to that stage that I find implausible without messy political struggle.


Programming a Friendly AI is an engineering issue, not a matter of luck or popularity.

Any development as profoundly transformative and potentially catastrophic as superhuman artificial intelligence ethically demands popular engagement and support. Its appearance would affect the lives of every human being on the planet on a fundamental level. They deserve input on the process.

ÑóẊîöʼn
17th October 2010, 08:25
I was talking about anarchism specifically for a reason. Certain forms of communism could indeed fulfill Bostrom's definition. I can even imagine (and have imagined (http://queersingularity.wordpress.com/2010/06/15/posthuman-political-theory/)) outcomes tolerably in line with my morality. I would hardly begrudge a singleton that strictly limited itself to useful production and violence prevention. The critical problem of trust remains, however. Anarchists favor decentralization and freedom because actors with power over others overwhelmingly abuse it.

Well, I don't take decentralisation to be a universal good. Both decentralisation and its counterpart can be optimal solutions and can work together, depending on the context. It remains to be seen whether a worldwide classless society is viable in a wholly decentralised mode.


I'm talking about pure material power in this respect. A superintelligence with access to molecular manufacturing could literally create a post-scarcity economy from scratch wherever energy and matter supplies allowed. Trying to fight such an entity would be an exercise in futility. At least under the current system workers have material leverage in the production system. A general strike would stop everything. The bosses right now need us. The aforementioned superintelligence wouldn't.

I'm not seeing anything wrong with this. Why fight a Friendly AI beyond reactionary anti-cybernetic sentiment?


With renunciation off the table, they'll have to attempt state AI and/or suppression. Under the established assumptions, we're talking about the fate of the world here. The risk of opposing AI far outweighs the risk of your own AI going rogue.

So far nation-states have proven themselves to be universally useless at effectively confronting issues that affect the world as a whole (see: climate change). I see no reason why this should change in the foreseeable future.


The SIAI plan of independent researchers cooking up universally friendly superintelligence makes about as much sense as an alternative history where private scientists invented the atom bomb while governments stood around twiddling their thumbs. It can only happen if states show unusual incompetence or discard their customary goals.

I don't think your analogy works. Atomic bombs are primarily weapons; AIs are general purpose constructs. Building bombs in the old days took loads of really expensive custom-built equipment and fiddly techniques, whereas programming can now be done on relatively cheap commercially available hardware, and you can back up your work.


Any development as profoundly transformative and potentially catastrophic as superhuman artificial intelligence ethically demands popular engagement and support. Its appearance would affect the lives of every human being on the planet on a fundamental level. They deserve input on the process.

What can they add apart from "I like it" or "I don't like it"? If Strong AI is possible, then its development is inevitable barring civilisation-wrecking catastrophe, and as far as I can see the SIAI is currently our best option.

Summerspeaker
17th October 2010, 23:08
I'm not seeing anything wrong with this. Why fight a Friendly AI beyond reactionary anti-cybernetic sentiment?

You're not at all concerned about a single technological development that purports to invalidate traditional social struggle? I'm sure you actually are, considering that recognition of the danger involved generated the friendly AI movement in the first place. Anissimov and company argue for moving forward primarily because making good AI seems the best way to prevent bad AI. I'm glad folks are thinking about the issue, but I'm not convinced by the SIAI platform. Even if I accepted their framework, I see no credible way to prevent narrowly interested AI and achieve friendly AI barring a massive popular campaign. Without one, either the military will beat them to the punch or they'll accidentally encode their own values into the machine and we'll be stuck with bourgeois positivism for the rest of eternity.


So far nation-states have proven themselves to be universally useless at effectively confronting issues that affect the world as a whole (see: climate change).

They're proactive (to say the least) about confronting direct threats to their power. Unlike climate change, friendly AI would be exactly that. The U.S. government monitors and periodically harasses everyone from liberal peace activists to Al-Qaeda. Counting on them to misjudge the risk or feasibility of artificial intelligence strikes me as wildly optimistic.


I don't think your analogy works. Atomic bombs are primarily weapons; AIs are general purpose constructs.

As advertised, strong AI would be the mightiest weapon ever invented.


Building bombs in the old days took loads of really expensive custom-built equipment and fiddly techniques; whereas programming can now be done on relatively cheap commercially available hardware, and you can back up your work.

If we get to the point where individuals or small groups can easily program superintelligence, achieving friendly AI becomes less likely (http://www.acceleratingfuture.com/michael/blog/2010/08/the-overall-risk-seems-to-be-minimized/). Yudkowsky flatly states that Moore's Law is the enemy.


What can they add apart from "I like it" or "I don't like it"?

While that would be enough, a wide range of possibilities exist. The species might decide to go slower (or faster) than Yudkowsky wants. We could reject the notion of salvation by a tiny clique of intellectuals and demand broader scrutiny of the details of friendly AI code.


If Strong AI is possible, then its development is inevitable barring civilisation-wrecking catastrophe, and as far as I can see the SIAI is currently our best option.

Wait a second. From whence comes this assumption that everything possible is inevitable?

ÑóẊîöʼn
17th October 2010, 23:55
You're not at all concerned about a single technological development that purports to invalidate traditional social struggle? I'm sure you actually are, considering that recognition of the danger involved generated the friendly AI movement in the first place. Anissimov and company argue for moving forward primarily because making good AI seems the best way to prevent bad AI. I'm glad folks are thinking about the issue, but I'm not convinced by the SIAI platform. Even if I accepted their framework, I see no credible way to prevent narrowly interested AI and achieve friendly AI barring a massive popular campaign. Without one, either the military will beat them to the punch or they'll accidentally encode their own values into the machine and we'll be stuck with bourgeois positivism for the rest of eternity.

The military is perfectly capable of conducting their own AI projects regardless of support for alternatives.

What's "bourgeois positivism"?


They're proactive (to say the least) about confronting direct threats to their power. Unlike climate change, friendly AI would be exactly that. The U.S. government monitors and periodically harasses everyone from liberal peace activists to Al-Qaeda. Counting on them to misjudge the risk or feasibility of artificial intelligence strikes me as wildly optimistic.

Climate change is also a direct threat to imperial power because it is politically destabilising - sure, at the moment the less developed nations are acting as a cushion, but there is only so much they can take before the effects spill over elsewhere.

In fact, if I remember correctly the Pentagon commissioned a report (http://www.climate.org/topics/climate-change/pentagon-study-climate-change.html) on the geopolitical effects of climate change and its impact on global stability. Its conclusions weren't pretty.

Military awareness of an issue does not automatically translate into meaningful action on the part of the nation-state as a whole.


As advertised, strong AI would be the mightiest weapon ever invented.

Only in the sense that intelligence can be a weapon.


If we get to the point where individuals or small groups can easily program superintelligence, achieving friendly AI becomes less likely (http://www.acceleratingfuture.com/michael/blog/2010/08/the-overall-risk-seems-to-be-minimized/). Yudkowsky flatly states that Moore's Law is the enemy.

The SIAI evidently have good reason to believe that they are up to the task as a non-profit organisation.


While that would be enough, a wide range of possibilities exist. The species might decide to go slower (or faster) than Yudkowsky wants. We could reject the notion of salvation by a tiny clique of intellectuals and demand broader scrutiny of the details of friendly AI code.

Slowing down the development of Friendly AI increases the possibility of Unfriendly AI arising first, and doubtless SIAI are working as fast as they can to achieve their goals. As for scrutiny, only a very tiny proportion of the world's population is qualified to do so meaningfully.


Wait a second. From whence comes this assumption that everything possible is inevitable?

In the case of Strong AI, there's always going to be someone somewhere working to achieve it, and sooner or later one of them is going to succeed.

Summerspeaker
18th October 2010, 00:48
The military is perfectly capable of conducting their own AI projects regardless of support for alternatives.

Ideally, we'd abolish the imperialist states before proceeding to the AI debate. If that's out of the question, a serious attempt at friendly AI should work within the existing political system to secure robust policies against narrow AI by individual countries. Perhaps SIAI will employ this approach when the time is right, but I see little evidence of it at the moment. To the contrary, Yudkowsky and company express disdain for democracy and existing political institutions.


What's "bourgeois positivism"?

Bourgeois applies with its customary meanings. By positivism in this case I mean the belief in acquiring objectively correct knowledge through empirical observation and the scientific method. Scientism (http://www.skepdic.com/scientism.html) might be a better word to use.


Military awareness of an issue does not automatically translate into meaningful action on the part of the nation-state as a whole.

They can't solve climate change by the tried-and-true methods such as imprisonment, assassination, and governmental coup. AI researchers, on the other hand, make juicy targets for the first two. Similarly, there's no silver-bullet climate change solution to fund while strong AI is the holy grail of techno-fixes.


As for scrutiny, only a very tiny proportion of the world's population is qualified to do so meaningfully.

Given the stakes, increasing that number could be a wise move. Again, morally the people must have a say in such a globally transformative development.


In the case of Strong AI, there's always going to be someone somewhere working to achieve it, and sooner or later one of them is going to succeed.

I'm not convinced of the first proposition. Doesn't that also make engineered super-plague, ecophagy (http://en.wikipedia.org/wiki/Ecophagy), and other calamities equally inevitable? I suspect a society where nobody successfully pursued strong AI could exist even without draconian controls. The proper mixture of technical limitations, public awareness, and community values does the trick. The SIAI claim that we have to make supreme friendly AI or mean AI will get us isn't fundamentally different from the tired mainstream belief that only the state's monopoly on force prevents us from killing each other.

ÑóẊîöʼn
18th October 2010, 01:38
Ideally, we'd abolish the imperialist states before proceeding to the AI debate. If that's out of the question, a serious attempt at friendly AI should work within the existing political system to secure robust policies against narrow AI by individual countries.

Not only does suppression of technology go against the principles that technophiles hold dear, it just plain doesn't work.


Perhaps SIAI will employ this approach when the time is right, but I see little evidence of it at the moment. To the contrary, Yudkowsky and company express disdain for democracy and existing political institutions.

And so they should, they're very smart people.


Bourgeois applies with its customary meanings. By positivism in this case I mean the belief in acquiring objectively correct knowledge through empirical observation and the scientific method. Scientism (http://www.skepdic.com/scientism.html) might be a better word to use.

I'm always wary when people start using words like "scientism". In my experience they tend to be postmodernist babblers or have an ulterior (and usually anti-scientific) agenda. Just because something is bourgeois does not automatically make it bad - the rise of modern science (and by extension effective methods of discovering truths about the universe) coincides with the rise of the bourgeoisie.


They can't solve climate change by the tried-and-true methods such as imprisonment, assassination, and governmental coup. AI researchers, on the other hand, make juicy targets for the first two. Similarly, there's no silver-bullet climate change solution to fund while strong AI is the holy grail of techno-fixes.

Such tactics are very high-profile, especially against targets in developed countries - certainly many people would notice if Yudkowsky were to mysteriously disappear or if SIAI property were damaged. The moment that happens, you can expect every AI researcher to immediately leave the US for greener pastures.


Given the stakes, increasing that number could be a wise move. Again, morally the people must have a say in such a globally transformative development.

If you want more people to have a greater understanding of the implications of Strong AI, the best way to do that would be to support the improvement of education the world over. There are plenty of other good reasons also.

But until then, most people simply aren't mentally prepared to deal with the issue in a rational manner. Democratic discourse must give those in the know greater weight, otherwise it is simply vulgar populism.


I'm not convinced of the first proposition. Doesn't that also make engineered super-plague, ecophagy (http://en.wikipedia.org/wiki/Ecophagy), and other calamities equally inevitable?

People may want a biological weapon as a geopolitical trump card, or they may want molecular nanotechnology in order to kick-start a post-scarcity society, but nobody outside of a James Bond movie wants to actually destroy the world.


I suspect a society where nobody successfully pursued strong AI could exist even without draconian controls. The proper mixture of technical limitations, public awareness, and community values does the trick. The SIAI claim that we have to make supreme friendly AI or mean AI will get us isn't fundamentally different from the tired mainstream belief that only the state's monopoly on force prevents us from killing each other.

Except that it happens to be true. What do you think would happen in the event that a less-than-Friendly AI appears, without a Friendly AI around to stamp it out or at least keep it in check?

ContrarianLemming
18th October 2010, 01:48
By singularity, are we referring to the idea of a tech singularity? The idea that eventually we'll create an AI that can create an AI smarter than itself, and then another smarter AI, and so on.

the novels in the series "the Culture" seem so relevant

ÑóẊîöʼn
18th October 2010, 01:54
By singularity, are we referring to the idea of a tech singularity? The idea that eventually we'll create an AI that can create an AI smarter than itself, and then another smarter AI, and so on.

Pretty much. I posted a couple of links to another enquirer earlier in the thread - check them out.

black magick hustla
18th October 2010, 08:49
singularitarian pride worldwide

Kuppo Shakur
18th October 2010, 20:55
singularitarian pride worldwide
EW.

ÑóẊîöʼn
18th October 2010, 22:34
EW.

What's Electronic Warfare got to do with it?

Amphictyonis
19th October 2010, 01:16
Let's create another food chain where we aren't at the top! That will work out well?

ÑóẊîöʼn
19th October 2010, 01:22
Let's create another food chain where we aren't at the top! That will work out well?

I think it's more like a food tree in actual nature, meaning that a predator doesn't necessarily eat all of the "lower" animals. Besides, we aren't talking about animals here. We're talking about intelligences equal to or greater than human.

Amphictyonis
19th October 2010, 02:01
Would you think twice about spraying a colony of ants that had infested your kitchen?

ÑóẊîöʼn
19th October 2010, 02:23
Would you think twice about spraying a colony of ants that had infested your kitchen?

That analogy only works in terms of power, not co-habitation. A Strong AI might not think much of our reasoning capabilities, but they would still be able to engage with us on our level, unlike us and ants.

Amphictyonis
19th October 2010, 02:39
That analogy only works in terms of power, not co-habitation. A Strong AI might not think much of our reasoning capabilities, but they would still be able to engage with us on our level, unlike us and ants.

If you're talking about a 'singularity' where AI can exponentially advance to the point of plateau, I'd assume the human condition would cease to make sense to such a 'being', if it ever did in the first place. Mankind has to be critical of such endeavors. To just blindly throw caution to the wind without exploring worst-case scenarios would be unintelligent. I'm all for advances in technology but the idea of a singularity bothers me.

I'm also of the opinion we need some major social advancements before we start making exponential advances in technology. Which should come first? The chicken or the egg?

Summerspeaker
19th October 2010, 03:05
Not only does suppression of technology go against the principles that technophiles hold dear, it just plain doesn't work.

Unless you have a FAI singleton, you mean, as the central argument for FAI's necessity involves the suppression of technology. If a FAI can't stop genocidal genies it won't help us. The SIAI platform itself rejects regulation on the basis of its perceived influence on the field. If they're right and good intentions from researchers will prevent narrowly interested/unfriendly superintelligence, why couldn't the universally acknowledged dangers forestall research altogether?


And so they should, they're very smart people.

How does a leftist buy into Singularitarian elitism and obsession with intelligence rankings? These aspects alone would make me suspicious of the movement. Needless to say I have more contempt than respect for representative democracy in its existing forms, but folks deserve power over issues that affect them.


I'm always wary when people start using words like "scientism". In my experience they tend to be postmodernist babblers or have an ulterior (and usually anti-scientific) agenda.

The first applies to me to the extent that I accept postmodernist philosophical contributions. I've no anti-scientific agenda whatsoever. In general, I'm a fan. It's when people try to paint science as making normative claims that I take exception. The scientific method cannot tell us what to value. See James Hughes for details on the pitfalls of the cult of rationality (http://ieet.org/index.php/IEET/more/hughes20100108/).


Except that it happens to be true. What do you think would happen in the event that a less-than-Friendly AI appears, without a Friendly AI around to stamp it out or at least keep it in check?

Even if you're down with truth claims, don't you think making them about hypothetical technology constitutes a stretch? As far as we know, superhuman intelligence does not currently exist. As such, we're on decidedly shaky ground. The SIAI program rests on a giant pile of assumptions. A big one here is that only FAI has a meaningful chance of preventing unfriendly AI. There's a decent argument for it, just as there is for police, but I'm not at all convinced. Narratives of security so often serve oppression. Anarchists have traditionally been willing to live without the patriarch's protection. As Shevek says in The Dispossessed, "Freedom is never very safe."

synthesis
19th October 2010, 05:09
Sorry to get off-topic, but:



Case in point: Penicillin. Discovered because of advances in biology, and refined through advances in biotechnology, this drug cured many diseases that had been killing people for centuries. It was in every sense of the term a Singularity-level technology. And yet in the long term, it wound up leaving us just as vulnerable to disease. Bacteria mutated, creating nastier infections than we've ever seen before. Now we're turning to pro-biotics rather than anti-biotics; we're investigating gene therapies to surmount the troubles we've created by massively deploying penicillin and its derivatives.

My understanding was that penicillin had been around for some time before it was actually called "penicillin," in that people had been using moldy bread as a "traditional cure" for centuries, just that no one understood the science behind it. Also, why wouldn't bacteria also mutate in response to pro-biotic treatments?

ÑóẊîöʼn
19th October 2010, 17:57
If you're talking about a 'singularity' where AI can exponentially advance to the point of plateau, I'd assume the human condition would cease to make sense to such a 'being', if it ever did in the first place. Mankind has to be critical of such endeavors. To just blindly throw caution to the wind without exploring worst-case scenarios would be unintelligent. I'm all for advances in technology but the idea of a singularity bothers me.

Working on Friendly AI is the opposite of throwing caution to the wind.


I'm also of the opinion we need some major social advancements before we start making exponential advances in technology. Which should come first? The chicken or the egg?

Well, maybe we'll achieve a communist society before the singularity happens - predictions may turn out to have been on the optimistic side. But I think the mere fact that some humans desire a communist society is evidence enough of our advancement.


Unless you have a FAI singleton, you mean, as the central argument for FAI's necessity involves the suppression of technology.

Wrong. People could still be allowed to build AIs, just not Unfriendly ones.


How does a leftist buy into Singularitarian elitism and obsession with intelligence rankings? These aspects alone would make me suspicious of the movement. Needless to say I have more contempt than respect for representative democracy in its existing forms, but folks deserve power over issues that affect them.

Elitist? I've yet to hear a Singularitarian claim that they are immune to the same biases and shortcomings that all humans experience. They stress the importance of knowledge, but they also advocate that knowledge be widely available (hence the non-suppression). The more people have access to knowledge and education, the better informed their decisions will be. Everyone deserves to be less wrong.


The first applies to me to the extent that I accept postmodernist philosophical contributions. I've no anti-scientific agenda whatsoever. In general, I'm a fan. It's when people try to paint science as making normative claims that I take exception. The scientific method cannot tell us what to value.

That's true, but scientific knowledge can inform morality nonetheless. Example: science doesn't tell us it's wrong to burn witches (normative claim) but it can tell us there is no evidence for witches being able to perform magic (descriptive claim). Burning people for doing stuff they can't have done strikes me as an evil thing to do.


Even if you're down with truth claims, don't you think making them about hypothetical technology constitutes a stretch? As far as we know, superhuman intelligence does not currently exist. As such, we're on decidedly shaky ground. The SIAI program rests on a giant pile of assumptions. A big one here is that only FAI has a meaningful chance of preventing unfriendly AI. There's a decent argument for it, just as there is for police, but I'm not at all convinced. Narratives of security so often serve oppression. Anarchists have traditionally been willing to live without the patriarch's protection. As Shevek says in The Dispossessed, "Freedom is never very safe."

You admit there is a decent argument for it, and I'm pretty sure even bourgeois liberals would program a Friendly AI to value humanity's existence over its own, unlike the police, whose self-preserving tendencies can lead to corruption, so I don't think your comparison with police is adequate.

Amphictyonis
20th October 2010, 02:47
Working on Friendly AI is the opposite of throwing caution to the wind.





If we create a 'thing' which has the ability to exponentially surpass human intelligence there's no way we can predict the potentially malevolent or benign nature of such a thing. Hell, if it were created under our current system - you must know the military funds all such projects. If it's of no use to the military it doesn't get done.

Also, if from the moment you were born you had a road map outlining exactly what to do, what would be the fun of living? Part of life is the struggle - figuring things out, learning, overcoming. I'm not sure I'd want some super computer making all my decisions for me. I'd rather do that collectively.

Summerspeaker
20th October 2010, 03:03
Elitism runs strong amongst the Singularity movement in the often-voiced sentiment that the tiny minority of knowledgeable folks with high IQs should make the meaningful decisions for the entire species without regard to what everyone else thinks. Yudkowsky explicitly dismisses mass organizing in favor of winning funding from the rich (http://www.scribd.com/doc/39277469/Artificial-Intelligence-as-a-Positive-and-Negative-Factor-in-Global-Risk). If that's not elitism, what is?

(Incidentally, he caricatures communist history and rejects communism in the same paper.)

ÑóẊîöʼn
20th October 2010, 08:52
If we create a 'thing' which has the ability to exponentially surpass human intelligence there's no way we can predict the potentially malevolent or benign nature of such a thing.

Not so. Firstly, intelligence does not guarantee belligerence, and further, why would an AI explicitly constructed from the start to be Friendly suddenly veer off into a genocidal mode?


Hell, if it were created under our current system - you must know the military funds all such projects. If it's of no use to the military it doesn't get done.

That must be why the LHC was never constructed. OH WAIT.


Also, if from the moment you were born you had a road map outlining exactly what to do, what would be the fun of living? Part of life is the struggle - figuring things out, learning, overcoming. I'm not sure I'd want some super computer making all my decisions for me. I'd rather do that collectively.

I hardly think a Friendly AI would eliminate all struggle from life - just the struggles that no amount of unaided human activity would solve.


Elitism runs strong amongst the Singularity movement in the often-voiced sentiment that the tiny minority of knowledgeable folks with high IQs should make the meaningful decisions for the entire species without regard to what everyone else thinks. Yudkowsky explicitly dismisses mass organizing in favor of winning funding from the rich (http://www.scribd.com/doc/39277469/Artificial-Intelligence-as-a-Positive-and-Negative-Factor-in-Global-Risk). If that's not elitism, what is?

(Incidentally, he caricatures communist history and rejects communism in the same paper.)

It would be helpful if you highlighted the relevant sections.

CommunityBeliever
20th October 2010, 11:50
I am not interested in a prolonged discussion, however, since this topic is discussing artificial intelligence I felt compelled to formulate a reply.


the singularity isn't going to happen

It will happen sooner than you think! Recently, we have seen a lot of progress from a variety of approaches. With the mathematical approach we have seen advancements from lambda calculus, Lisp and self code generation. However, perhaps the most interesting of advancements in recent times has been from modeling the neocortex. Look up artificial neural networks (http://en.wikipedia.org/wiki/Artificial_neural_network) and hierarchical temporal memory (http://en.wikipedia.org/wiki/Hierarchical_temporal_memory).
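For readers who haven't encountered artificial neural networks, a toy example may help: a single perceptron, the simplest possible network, learning the AND function. This is generic textbook material (the function names are mine), not code from any actual AI project:

```python
# A perceptron is a single artificial "neuron": a weighted sum of inputs
# passed through a threshold. Training nudges the weights towards fewer
# misclassifications - the basic ingredient larger networks build on.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), target) pairs, targets 0 or 1."""
    w1 = w2 = bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
            err = target - out            # 0 if correct, +/-1 if wrong
            w1 += lr * err * x1           # shift weights towards the target
            w2 += lr * err * x2
            bias += lr * err
    return w1, w2, bias

def predict(weights, x1, x2):
    w1, w2, bias = weights
    return 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights = train_perceptron(AND)
print([predict(weights, a, b) for (a, b), _ in AND])  # [0, 0, 0, 1]
```

A single perceptron can only learn linearly separable functions (Minsky and Papert's famous objection); the modern networks mentioned above stack many such units in layers to get past that limit.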

All these approaches to AI will culminate, through the Internet, into a Seed AI that will in turn create what we know of as the Singularity, a point where our ability to model time fails.


Building bombs in the old days took loads of really expensive custom-built equipment and fiddly techniques; whereas programming can now be done on relatively cheap commercially available hardware, and you can backup your work.

The amount of hardware that will be necessary to formulate Strong AI is immense. It will require thousands of computers, super-computers, and other expensive equipment.

And the amount of code to formulate a Strong AI is also immense, such that it will only come about from a team of millions of programmers.

No organization, government or corporation, has the power necessary to formulate this, so this raises the question: what organization will?

The answer to that is the Internet. This AI will only arise from a world-wide collective effort of millions of computer scientists, an effort which is currently known as the open source movement (http://www.opensource.org/). Furthermore, the massive amount of computational power necessary to run this AI will come from the collective pooling of thousands of computers on the Internet.


it's easy to parody the poor guy who talked about potato chips

The term singularity carries negative connotations as a result of the irrational/quasi-religious claims from loons like this person and Ray Kurzweil. Therefore the neutral term Seed AI is preferable.


It's when people try to paint science as making normative claims that I take exception. The scientific method cannot tell us what to value.

Science can answer all of our descriptive questions, however, prescriptive questions are indeed a problem.

Science cannot tell us what to value, however, it can tell us what is valued. First of all, by observing all things that exist we can clearly extrapolate that survival is valued, for entities that do not value their own survival do not have a place in existence.

Therefore, from the fact that survival is valued we can derive an approach to prescriptive values known as perfectionism. In the perfectionist approach all flaws, such as irrationality and disability, are actively eliminated because they may hinder survival.

Besides eliminating flaws, perfectionists constantly seek to improve themselves by accumulating knowledge and technology because those things may aid in survival.

By combining the scientific approach with the perfectionist approach no individuality is necessary, so this can be undertaken by a singular being.


Bostrom, in particular, writes about the necessity for a singleton, a good old-fashioned centralized authority

That is an accurate hypothesis, however, I maintain that it is more precise to say that a singular being will end up dominating society.

ÑóẊîöʼn
20th October 2010, 12:02
The amount of hardware that will be necessary to formulate Strong AI is immense. It will require thousands of computers, super-computers, and other expensive equipment.

And the amount of code to formulate a Strong AI is also immense, such that it will only come about from a team of millions of programmers.

No organization, government or corporation, has the power necessary to formulate this, so this raises the question: what organization will?

This appears to contradict your previous statement directed towards Annalee Newitz. Who else has been making the advancements you mention, if not private organisations? Although governments don't seem interested in researching the subject, at least not directly.


The answer to that is the Internet. This AI will only arise from a world-wide collective effort of millions of computer scientists, an effort which is currently known as the open source movement (http://www.opensource.org/). Furthermore, the massive amount of computational power necessary to run this AI will come from the collective pooling of thousands of computers on the Internet.

Why can't private organisations or even governments harness this power? SETI is certainly able to harness the spare computer power of subscribers in order to perform data analysis. I imagine if the SIAI were to start a similar project they would find plenty of subscribers.

CommunityBeliever
20th October 2010, 12:08
Why can't private organisations or even governments harness this power?

Of course that is theoretically possible, however, I believe it is likely that the collective power of the Internet will need to be harnessed to go about formulating this Strong AI.

ÑóẊîöʼn
20th October 2010, 13:35
Of course that is theoretically possible, however, I believe it is likely that the collective power of the Internet will need to be harnessed to go about formulating this Strong AI.

Like I said, it seems plausible that the SIAI could do something to harness at least some of that power - I imagine if asked, most SIAI supporters would be ready and willing to install a free program that uses their computer as part of a distributed network for data analysis and/or problem solving.
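The SETI-style pooling described above can be sketched in a few lines. This is a hypothetical illustration of the general idea - split a job into independent work units, process each on a volunteer's machine, merge the results - not the actual protocol of SETI@home or any SIAI project:

```python
# Volunteer computing in miniature: a coordinator splits a large job into
# independent work units, each volunteer machine crunches one unit with its
# spare CPU time, and the coordinator merges the returned partial results.

def make_work_units(data, unit_size):
    """Split the full dataset into independent chunks."""
    return [data[i:i + unit_size] for i in range(0, len(data), unit_size)]

def volunteer_process(unit):
    """Stand-in for one volunteer's analysis: find the strongest
    'signal' in the chunk it was handed."""
    return max(unit)

def coordinate(data, unit_size=4):
    units = make_work_units(data, unit_size)
    # In a real system each call would run on a different subscriber's
    # machine; the loop here just shows that the units are independent.
    partial_results = [volunteer_process(u) for u in units]
    return max(partial_results)

readings = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8]
print(coordinate(readings))  # 9
```

The catch, of course, is that only "embarrassingly parallel" jobs decompose this cleanly; whether Strong AI research does is exactly what's in dispute.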

Ovi
20th October 2010, 14:59
It will happen sooner than you think! Recently, we have seen a lot of progress from a variety of approaches. With the mathematical approach we have seen advancements from lambda calculus, Lisp and self code generation. However, perhaps the most interesting of advancements in recent times has been from modeling the neocortex. Look up artificial neural networks (http://en.wikipedia.org/wiki/Artificial_neural_network) and hierarchical temporal memory (http://en.wikipedia.org/wiki/Hierarchical_temporal_memory).

Artificial neural networks aren't something new, and much of the excitement over them wore off as they failed to accomplish what researchers hoped for, which led to their abandonment in 1969. According to the AI winter article (http://en.wikipedia.org/wiki/AI_winter), interest revived a decade later, but the debate of connectionism vs. computationalism is far from over.

Summerspeaker
20th October 2010, 16:06
Here (http://docs.google.com/viewer?a=v&q=cache:kH51SGcaPEcJ:singinst.org/upload/artificial-intelligence-risk.pdf+yudkowsky+%2B+communism&hl=en&gl=us&pid=bl&srcid=ADGEEShXAVwRIkbTT5t1mUBTwTq9p-90gWnlccHHurvVTxtkqVTt9GQLEutjUNOZACOKhjAvyl42CNFf dnkFrQY_oyPrbYsFr5V6NrEaE8TNVz4PVn3-I3gfxIUpXc9lkjLLetbh8IQj&sig=AHIEtbT1XdZ-hZ1i7poD6Mg3DnPt27JwVg) is a searchable version of that Yudkowsky paper. Just look for "communism" and "funding".

ÑóẊîöʼn
20th October 2010, 16:31
Elitism runs strong amongst the Singularity movement in the often-voiced sentiment that the tiny minority of knowledgeable folks with high IQs should make the meaningful decisions for the entire species without regard to what everyone else thinks. Yudkowsky explicitly dismisses mass organizing in favor of winning funding from the rich (http://www.scribd.com/doc/39277469/Artificial-Intelligence-as-a-Positive-and-Negative-Factor-in-Global-Risk). If that's not elitism, what is?

The investment of time and effort needed for the creation and maintenance of a mass movement represents a significant distraction from the primary goal - creation of Friendly AI. Of course, that assumes that such a mass movement is sure to take off, which is by no means certain. It is more appropriate to convince rich people to give you money (they have a lot of it to spare, after all), because under the current system acquiring money can get you certain things much quicker than trying to build mass support - sad but true.


(Incidentally, he caricatures communist history and rejects communism in the same paper.)

That's not my impression:


The folly of programming an AI to implement communism, or any other political system, is that you're programming means instead of ends. You're programming in a fixed decision, without that decision being re-evaluable after acquiring improved empirical knowledge about the results of communism. You are giving the AI a fixed decision without telling the AI how to re-evaluate, at a higher level of intelligence, the fallible process which produced that decision.

Yudkowsky here looks to me like he is criticising what he perceives to be the means of communism, not the ends, which I have not seen him criticise.

Further, he is discussing the implications of programming an AI to implement communism, and not discussing whether it is possible to achieve communism at all.

Summerspeaker
20th October 2010, 21:11
What reason do the rest of us have to trust FAI developers funded by exploiters and explicitly uninterested in our input? Wouldn't the elite prefer AI that maintained their privilege over the universally friendly AI Yudkowsky and company want to build? Any scheme that expects people to cheerfully act against their own interests strikes me as implausible.

As for communism, he gives the American pop culture version of the movement's history. "Communism seemed like a pretty good idea until people actually tried it and catastrophe followed." Presenting the Russian Revolution in this fashion shows either lack of knowledge about the real process or a loose and dangerous interpretative style. Perhaps both. As we all know, the horrors of Stalin need not discredit communism as a whole.

ÑóẊîöʼn
20th October 2010, 21:22
What reason do the rest of us have to trust FAI developers funded by exploiters and explicitly uninterested in our input? Wouldn't the elite prefer AI that maintained their privilege over the universally friendly AI Yudkowsky and company want to build?

No, because that wouldn't work for the same reasons that a communist AI wouldn't work. If an AI favours a certain section of humanity over others, then it cannot be Friendly because potentially all humans could end up in its "don't care" category.


As for communism, he gives the American pop culture version of the movement's history. "Communism seemed like a pretty good idea until people actually tried it and catastrophe followed." Presenting the Russian Revolution in this fashion shows either lack of knowledge about the real process or a loose and dangerous interpretative style. Perhaps both. As we all know, the horrors of Stalin need not discredit communism as a whole.

Well, substitute "Marxism-Leninism" for "communism" and it works just as well.

Summerspeaker
21st October 2010, 00:44
No, because that wouldn't work for the same reasons that a communist AI wouldn't work.

That's highly speculative and also comparing apples to oranges. A narrowly interested or simply obedient AI need not favor a certain political program.


If an AI favours a certain section of humanity over others, then it cannot be Friendly because potentially all humans could end up in its "don't care" category.

Obviously, narrowly interested intelligence works - we see it around us each day. While this sort of thing might be more dangerous than FAI, the bosses wouldn't necessarily care if it keeps them on top. I'm not the only one saying this sort of thing, by the way. Anissimov often notes the danger of narrowly interested AI. Even if it's vastly riskier than FAI, elites still might try it out of ignorance or hubris. I suggest Bill Hibbard's critique of SIAI (http://www.ssec.wisc.edu/%7Ebillh/g/SIAI_critique.html) if you haven't already read it. Old but good.


Well, substitute "Marxism-Leninism" for "communism" and it works just as well.

Yes, but that's not what Yudkowsky wrote.

Amphictyonis
21st October 2010, 09:05
NoXion said - "That must be why the LHC was never constructed. OH WAIT."

How do you know the LHC doesn't have some sort of military use? As if splitting the atom didn't?

ÑóẊîöʼn
21st October 2010, 18:15
That's highly speculative and also comparing apples to oranges. A narrowly interested or simply obedient AI need not favor a certain political program.

Such an AI would be inferior to the sort of Friendly AI that SIAI wants to develop.


Obviously, narrowly interested intelligence works - we see it around us each day. While this sort of thing might be more dangerous than FAI, the bosses wouldn't necessarily care if it keeps them on top. I'm not the only one saying this sort of thing, by the way. Anissimov often notes the danger of narrowly interested AI. Even if it's vastly riskier than FAI, elites still might try it out of ignorance or hubris. I suggest Bill Hibbard's critique of SIAI (http://www.ssec.wisc.edu/%7Ebillh/g/SIAI_critique.html) if you haven't already read it. Old but good.

That criticism does not adequately address the need for a generalised FAI; rather, it is a criticism of SIAI as an organisation. Fair enough, but that does not deal with the narrowly interested AIs and "evil genie" AIs that you mentioned.


Yes, but that's not what Yudkowsky wrote.

So? He's a human being, for goodness' sake; I can correct him without hubris.


NoXion said - "That must be why the LHC was never constructed. OH WAIT."

How do you know the LHC doesn't have some sort of military use? As if splitting the atom didn't?

You're making the claim, so you tell me.

Summerspeaker
21st October 2010, 23:38
Such an AI would be inferior to the sort of Friendly AI that SIAI wants to develop.

Inferior in what sense?


So? He's a human being, for goodness' sake; I can correct him without hubris.

:confused: Indeed you can. My point is the way he wrote it reflects poorly on his knowledge of and opinion toward leftist politics. Perhaps that doesn't matter for the technical matter of FAI design, but I wouldn't count on it.

This has been an enjoyable discussion. I think we've reached the point where there's not a whole lot more to say. I'm curious as to how you integrate your support for FAI/SIAI with traditional social struggle.

Amphictyonis
22nd October 2010, 00:07
You're making the claim, so you tell me.

I'm not in the Department of Defense inner circle. That's like asking a grocery clerk (during WW2) what the Manhattan Project is. Almost all of our modern technology has been born out of the 'military-industrial complex' - be it the US military or those of other Western nations. It's a sad state of affairs we're in. Even NASA is largely for military use: the domination of space (the immediate space around Earth).

Most of the current world leaders are not into facilitating equality or material abundance - they're into maintaining class society, false scarcity and control of human populations. I have every reason to believe AI would simply be used to maintain the current paradigm. Even if we were living in an advanced communist world I'd have reservations - a 'Matrix' or 'Terminator' type scenario could indeed manifest. It's silly bringing up movies, but hell, they have a point.

ÑóẊîöʼn
22nd October 2010, 01:04
Inferior in what sense?

An AI with narrow interests, or one that only acts when asked to by humans, is less powerful than a general AI with its own volition.


:confused: Indeed you can. My point is the way he wrote it reflects poorly on his knowledge of and opinion toward leftist politics. Perhaps that doesn't matter for the technical matter of FAI design, but I wouldn't count on it.

This has been an enjoyable discussion. I think we've reached the point where there's not a whole lot more to say. I'm curious as to how you integrate your support for FAI/SIAI with traditional social struggle.

Because "traditional social struggle" is the responsibility of all of us, not a small bunch of AI researchers. I'm personally not sure what will come first, a communist society or a Singularity, so I see no problem with advocating both.


I'm not in the Department of Defense inner circle. That's like asking a grocery clerk (during WW2) what the Manhattan Project is.

The LHC is a multi-national civilian operation; the Manhattan Project was an explicitly American military project that was pretty much a secret. It doesn't make sense to compare the two.


Almost all of our modern technology has been born out of the 'military-industrial complex' - be it the US military or those of other Western nations. It's a sad state of affairs we're in. Even NASA is largely for military use: the domination of space (the immediate space around Earth).

If it's domination of space they're after, they're not doing a very good job at it. True domination would consist of industrialising Earth orbit and thus achieving complete economic domination over the planet, as well as permanent supply security for quite a few resources.

Hell, NASA could establish a project for moving asteroids around, and plausibly claim it's for civilian mining purposes, but they don't.


Most of the current world leaders are not into facilitating equality or material abundance - they're into maintaining class society, false scarcity and control of human populations. I have every reason to believe AI would simply be used to maintain the current paradigm.

That applies to practically any technological advancement. So what should we advocate when it comes to that area? Regression? Stasis? Moving forward? One of those three has to happen, and I'm not convinced that the first two will be at all helpful. At least the lattermost offers the opportunity for big changes to occur.


Even if we were living in an advanced communist world I'd have reservations - a 'Matrix' or 'Terminator' type scenario could indeed manifest. It's silly bringing up movies, but hell, they have a point.

What point is that? Nothing like Skynet would be built in a communist world, and even if it were possible to choose life in a virtual world I doubt that everyone on the planet would choose to upload themselves.

Summerspeaker
22nd October 2010, 03:42
Because "traditional social struggle" is the responsibility of all of us, not a small bunch of AI researchers. I'm personally not sure what will come first, a communist society or a Singularity, so I see no problem with advocating both.

Should I take it that you mean you don't integrate them? (If so, that's a reasonable position.) I'm interested because I spend perhaps too much of my time trying to promote anarchism, communism, and radical feminism in the transhumanist community. If you've got a grand synthesis I'd love to hear it.


If it's domination of space they're after, they're not doing a very good job at it.

The U.S. military seeks space supremacy as the ultimate high ground relative to earth. They want primarily the ability to attack from orbit and shoot down satellites. I recently watched the powerful film Pax Americana (http://www.pax-americana.com/) on this subject. Scary stuff.


At least the lattermost offers the opportunity for big changes to occur.

This draws me to futurism more than anything else. The technologies transhumanists dream about may offer an opening for revolutionary social change.

ÑóẊîöʼn
22nd October 2010, 04:40
Should I take it that you mean you don't integrate them? (If so, that's a reasonable position.) I'm interested because I spend perhaps too much of my time trying to promote anarchism, communism, and radical feminism in the transhumanist community. If you've got a grand synthesis I'd love to hear it.

I wish I could tell you that I have, but I've devoted more time personally to the synthesis of anarchism and technocracy, which I feel has the higher priority.

Nevertheless when it comes to Transhumanism I have two principles in addition to those commonly held by Transhumanists (http://www.aleph.se/Trans/Cultural/Philosophy/Transhumanist_Principles.html):

1. Availability: All people should have the right to access transformative technology or to transcend their natural limitations, if they so choose.

2. Escape clause: provisions should be made for those who do not want to live in a high-tech Transhuman society. This could take any form, from Transhumans leaving Earth behind as a reserve for baseline humans, to the establishment of pre-Singularity baseline colonies on Earth-like planets.


The U.S. military seeks space supremacy as the ultimate high ground relative to earth. They want primarily the ability to attack from orbit and shoot down satellites. I recently watched the powerful film Pax Americana (http://www.pax-americana.com/) on this subject. Scary stuff.

The efforts of the US in this area seem rather half-hearted to me, especially given China's recent interest in space.


This draws me to futurism more than anything else. The technologies transhumanists dream about may offer an opening for revolutionary social change.

Indeed, but since social action as well as technological development is important, a revolutionary synthesis of Transhumanism is most likely necessary - I would welcome any ideas on this. Otherwise, the promotion of Transhumanist technologies could be left solely in the hands of the god-wannabes and libertarian dolts, which would mean shooting ourselves in the foot even in the event of a successful social revolution - sure, capitalism may be dead and gone, but the universe is indifferent to our moral character, and rejection of Transhumanism would represent a missed opportunity on the greatest scale yet.

Amphictyonis
24th October 2010, 22:53
The LHC is a multi-national civilian operation, the Manhattan Project was an explicitly American military project that was pretty much a secret. It doesn't make sense to compare the two.

US Federal Funding for LHC:

http://www.phenix.bnl.gov/WWW/lists/phenix-news-l/msg00084.html


And we have EU state funding:

"The reason lies with long-term planning and commitment to science, an area where sadly the United States has in recent times often fallen short. Each European member of CERN pledges a certain amount every year, depending on its gross national product. Thus the designers of the LHC could count on designated funding over the many years required to get the enterprise up and running. Already, the upgrades of coming years are being programmed. Foresight and persistence are the keys to the LHC’s success."

"American researchers form a large contingent in the major LHC experiments. They are proud to contribute to such a pivotal venture. Although the United States is not a member of CERN, it donates ample funds toward LHC research. While celebrating Europe’s achievements, however, many American physicists still quietly mourn what could have taken place at home."


http://www.wired.com/wiredscience/2009/09/collider_excerpt/

Revy
25th October 2010, 03:33
Isn't the Singularity a load of self-indulgent crap?

It doesn't seem to be about technology for the good of humanity, only about the wow factor - but it also seems to be about how evil or dangerous technology will be. I guess that's the divide between the pro- and anti-Singularity sides? Their babble can be so confusing that sometimes you don't know what they're about.

Salyut
25th October 2010, 07:00
Call me when hard AI research actually goes somewhere.

Amphictyonis
28th October 2010, 03:29
There's a show on the TV thing right now with Michio Kaku talking about the Singularity. Noxion should watch it. I'll post it as soon as it goes up on YouTube. Many of my concerns are being brought up - specifically the fact that we cannot predict how AI would act, and that it could predict/override our attempts to control it (amongst many other things). One of the most well-known advocates for AI/the Singularity is talking about a "Nanny AI" which would "help" mankind by monitoring all communications, cameras etc. in order to "keep us safe". A centralized global Big Brother. Pfft. If anything of that nature arises in my lifetime I'll convert to primitivism ;)

Summerspeaker
31st October 2010, 20:33
I just wrote a blog post (http://queersingularity.wordpress.com/2010/10/31/ben-goertzel-rejects-siais-scary-idea/) on AGI researcher Ben Goertzel's critique of the Singularity Institute. Goertzel considers the notion of provably Friendly AI likely unfeasible and fears of hard takeoff somewhat exaggerated. He suggests accepting the risks and proceeding carefully but ambitiously with AGI development.

Thirsty Crow
1st November 2010, 04:00
I have a question regarding Artificial Intelligence and problem solving...Bear with me, it may come off as technophobic or something, but it's an honest question.

If we were to envision such a solution to the organization of production and management of social affairs, what consequences would appear with regard to human capability of problem solving?
In other words, what social practices could be instituted so that human intellectual capacities do not wither away...?

Probably I'm not making any sense. But if someone manages to decode what I cannot express but in this code, feel free to criticize and/or answer.

Amphictyonis
1st November 2010, 07:21
I have a question regarding Artificial Intelligence and problem solving...Bear with me, it may come off as technophobic or something, but it's an honest question.

If we were to envision such a solution to the organization of production and management of social affairs, what consequences would appear with regard to human capability of problem solving?
In other words, what social practices could be instituted so that human intellectual capacities do not wither away...?

Probably I'm not making any sense. But if someone manages to decode what I cannot express but in this code, feel free to criticize and/or answer.

http://www.youtube.com/watch?v=u9s7afoYI-M

ÑóẊîöʼn
1st November 2010, 07:48
I have a question regarding Artificial Intelligence and problem solving...Bear with me, it may come off as technophobic or something, but it's an honest question.

If we were to envision such a solution to the organization of production and management of social affairs, what consequences would appear with regard to human capability of problem solving?
In other words, what social practices could be instituted so that human intellectual capacities do not wither away...?

Probably I'm not making any sense. But if someone manages to decode what I cannot express but in this code, feel free to criticize and/or answer.

I think I understand what you're saying.

I reckon that games would form part of the solution - and I don't mean just computer games. Practically any human activity can be turned into a game, something done for its own sake but which at the same time stimulates and exercises us in certain areas, depending on the game in question. Games can involve mental exercises from arithmetic to roleplaying, physical exercises from sex to cooking, and cultural exercises such as visual arts, prose and self-identification/social groupings. Even the process of creating games can be turned into a game.

For those of us who are AIs or who choose to upload, we'd probably come up with something like Infinite Fun Space (http://www.nada.kth.se/~asa/Ethics/infinite.html) to while away the time in our virtual communities (http://en.wikipedia.org/wiki/Diaspora_(novel)#The_Polises). But of course I could be wrong.