View Full Version : Phoenix Insurgent: Trans-Humanism's Class Problem
Agrippa
13th June 2009, 18:42
from Fires Never Extinguished (http://firesneverextinguished.blogspot.com/2009/02/is-that-singularity-in-your-pocket-or.html)
Is That a Singularity in Your Pocket or are You Just Happy to See Me Enslaved?
Transhumanism's Class Problem
The Financial Times reports today (http://www.ft.com/cms/s/0/8b162dfc-f168-11dd-8790-0000779fd2ac.html?nclick_check=1) that well-known technophiliac and Google co-founder Larry Page has gotten together with X-Prize top dog Peter Diamandis to form what they are dubbing the "Singularity University (http://singularity-university.org/)". The SU, to be headed up by longtime technology writer (and popularizer of the Singularity concept) Ray Kurzweil, aims to prepare society for the day, not far off they claim, when the pace of technological and scientific change will increase to the point that machines take over their own development, ushering in a very religious-sounding era of allegedly benevolent social change in which poverty, war and other problems will finally be solved by technology -- rather than exacerbated (the prevailing sad state of affairs).
I'm often quite amused by the religious nature of the technophiliac view, not least because its advocates masquerade so often as the emissaries of pure, logical thought. And yet, despite the obvious fact that human social systems shape the development, distribution and application of technological "advances", the vast majority of transhumanists develop their theories of technological change as if class, empire and governments (among other things) simply don't exist. As if when this "new" era comes, it won't reflect the class interests of the people who developed it, as it does now. We're to believe that the product of a hierarchical class society will somehow, quite magically it seems, produce a technological utopia that liberates the whole of humanity from tyranny and want -- even though it's being developed by the very people who benefit from a system of tyranny and want.
Thus, their faith (and it's hard to use another word for it) in the benevolence of technological change is an interesting position to take because it is quite clear that we live in an era in which all the global apocalypses that hang over our heads are not waiting to be conquered by technology, but are in fact the direct result of technology. Nuclear war, industrial war, famine, ecological collapse and so much else have resulted precisely because of the interactions between the state, capitalism and technology, not despite them. And continuing scientific and technological advancements have not solved our social problems. In fact, most problems in the world await relatively simple solutions, not technological in the least, which the boosters of technological change, namely corporations and governments, oppose. For instance, the expropriation of the wealth and power of the elite requires no new technology.
Indeed, there is a larger gap between rich and poor in the world now than there was a hundred years ago. Likewise in the US. Hell, there's greater disparity in the US now than there was 35 years ago, the dawn of the computer age. In order to support the transhumanist position, one has to ignore the evidence that surrounds us every day.
GMO has not fed the world. People starve (or in India kill themselves with pesticide) because GMO displaces them from their lands and livelihoods. People are more alienated than ever before, even though they are Twittering and MySpacing away at record pace. Highly technological warfare has killed a million in Iraq alone in the last six years while the Iraqis demand not a high tech society, but one free from imperial domination. Their problem would be solved by US withdrawal, not by smart bombs and retina scans. The easiest way to defeat malaria in southeast Asia is with mosquito netting, but instead anti-malaria drugs have created super strains. The emergence of the internet has allowed for the large scale tracking of humans as never before, truly a benefit to tyrannical regimes everywhere, such as the one in China with which Google has so avidly cooperated, complying with the so-called Great Firewall of China. The development of cheap cameras and wireless internet has brought us a surveillance society constantly under the watchful eye of authority. And yet the cameras somehow do not record when an unarmed Black man is executed by the police in plain view. And on and on.
The truth is, the failings of technology are myriad and everywhere to see, and yet its boosters, technological fundamentalists, continue to point to the future and say that someday it will finally deliver, even though they indicate no mechanism that will guarantee such an outcome. But the distribution of technology reflects class lines, just like the distribution of money. If the social relationships between classes don't change, why would the application of power (technology) change? Diamandis, perhaps, hopes that we'll all just forget to notice the relationship between the spaceships in his X Prize competition and nuclear missiles. But the fact is, if the class system remains, the result will benefit that class. His project doesn't exist in a vacuum, and neither does technology as a whole. If he researches rocket systems, he is benefiting from and contributing to nuclear warfare. Not surprisingly, both of these characters sit atop the financial pyramid.
So, do Page and Diamandis imagine a world, not far off, when the power of technology will shake the capitalist system to its core, overturning class relations and freeing all of humanity? Do they hope for a world in which they can be free of their billions? Again, it doesn't require any technological advancement to accomplish a better redistribution of wealth, but if Diamandis hopes for an age without his abundant largesse, it wasn't evident at a talk he gave at a forum hosted by the Center for Technology Commercialization at the USC Business Masters Program, entitled "Space Billionaires: Educating the Next Generation of Entrepreneurs."
And it doesn't take too much of an imagination to understand the implications for human freedom that would come from Page's pet project, artificial intelligence. Page described AI as "the ultimate search engine -- it would understand everything on the web. It would understand exactly what you wanted (my emphasis)." While he smiles as he delivers the line, perhaps imagining his own post singularity God-being in whatever second life he hopes to create, he obviously forgets what such a system would mean for those of us living our real lives in the real world dominated by powerful states and greedy capitalists made more powerful by their all-knowing computers (assuming the computers wouldn't just kill us all to begin with).
It's worth asking, would social change be possible at all in a world dominated by omniscient AI, or would an all-knowing elite be able to track everything, preventing any opposition and therefore transferring all power in the system to themselves? In such a situation, would everyone who wasn't in the Singular Elite become total slaves? Not having a countervailing force to compel them to relinquish even a little bit of their power, what possible reason would the elite have for providing the rest of us any rights at all under their technological "utopia"?
In an interview with Fortune Magazine (http://money.cnn.com/2008/04/29/magazines/fortune/larry_page_change_the_world.fortune/), Page lamented,
If you ask an economist what's driven economic growth, it's been major advances in things that mattered - the mechanization of farming, mass manufacturing, things like that. The problem is, our society is not organized around doing that. People are not working on things that could have that kind of influence.

Not surprisingly, he has a one-sided view of the events he describes even as he expresses every capitalist's dream: to reorder society according to his needs. Firstly, he uses the passive voice to describe what in reality was a very violent attack by the capitalists on the lives of what would become workers. Secondly, the decomposition of the emerging working class that capitalists imposed through the rise of mass manufacture can only be ignored if, like Page, you don't recognize the hand of Capital at all in relation to the application of technology. This despite the many ways in which Google itself both creates and bends to the will of Capital, whether in its ad placement or in its censorship and regulation of YouTube, one of its many properties. Content on the internet must reflect the constraints of Capital like any other resource.
For instance, taking one of Page's examples, beyond just workers, mass manufacture changed all our lives, including those sometimes left out of the system of waged work like women and children, who found their lives, too, reorganized around the capitalist ethic of consumerism and later manufacture and commodity capitalism. Like the Singularity, consumerism and mass production promised the workers of the world great things, too. And so, the suburbs grew and the cars rolled off the assembly lines. And families were fragmented and lives became empty. But this new form of organization served the needs of Capital just fine.
Page also doesn't seem to remember that people resisted, often violently, those interventions into their lives. He doesn't realize that capitalists use technology as a means for the maintenance of their power through the reorganization of the working class to better suit the needs of Capital, and that those actions have far-ranging effects that are very often not positive for the bulk of people affected by them. Effects that, like the Singularity, do not have in-built mechanisms for the democratic participation of the great mass of people. Lacking them, how can we expect democratic tendencies to manifest? Since Capital is a dictatorship, isn't it much more likely that a high tech society like the one transhumanists desire would resemble tyranny rather than freedom?
What democratic mechanisms exist in modern technological development lie primarily in the realm of one dollar one vote, a playing field that obviously privileges the opinions of people like Page and Diamandis over those of regular people and probably explains their comfort with that as a standard. Further, those without access to massive amounts of capital find themselves entirely out of the game when it comes to technological development.
Whatever other democratic mechanisms may exist in the future -- assuming any would emerge -- would have to be imposed by the rest of society, much the way that workers fought to impose some sort of democratic structure on industrial capitalism through their self-organization and resistance. And, given the class position of these two capitalists in particular, we can be safe in betting that they would oppose such means were they to arise.
In fact, there is little reason to believe that Page and Diamandis really believe in liberation for the masses via technology. Consider comments made (http://www.nasawatch.com/archives/2006/05/peter_diamandis.html), and later retracted under pressure, by Diamandis at a talk on examples from history with regard to his alleged goal of opening up space to more people. One unfortunate example he chose: the German V2 program under the Nazis.
DIAMANDIS: If you look back at what von Braun did in Nazi Germany... it was incredible what you can do with literally a dictatorship. Look at the numbers. 6,000 V-2s built. 6,000 missiles were built in Nazi Germany. The recurring cost was $13,000 a launch for those vehicles. You can bring the cost down with mass production. We'll come back to what will drive ...
[Multiple audience comments - including me - "SLAVE LABOR"]
DIAMANDIS: Yea, and slave labor, Sorry.
[NERVOUS LAUGHTER]
DIAMANDIS: But you know - again to you the rest of us would happily be slave labor for that mission. Can you erase that from the video tape?
[NERVOUS LAUGHTER]
DIAMANDIS: But the fact of the matter is that mass production of rockets is possible if you have a real marketplace. And war is not a good one. Moving forward though ...

Yeah, that's right, he said it. Slave labor. But it's not a bad example, really, is it? It certainly is a revelatory one. And it goes not just for Nazi Germany. Although Diamandis nervously claims at the end of that excerpt that war is not a good market, he knows he's lying. After all, if slavery was good for the development of the Nazi missile program, surely the Nazi state was as well. High technology depends on the nanny state for guaranteed markets for its goods and services. And the state, always looking for a way to expand its power and to defend its class constituency, happily provides. After all, once WWII was over the US fought hard to gather as many Nazi scientists as possible for its own Cold War nuclear missile program, sometimes referred to in popular discourse by its doublethink titles of the Space Program or the Energy Department. You see, tyranny and holocaust (both racial and global) are never far removed from these kinds of programs. For more on this, I recommend reading Kirkpatrick Sale's excellent book "Fire of His Genius: Robert Fulton and the American Dream", which describes the link between the steamboat and the genocidal war against Native peoples in North America.
But these comments also reveal a colossal disconnect in the heads of transhumanists like Diamandis and Page. They indeed mistake their own position, tremendously privileged both in terms of wealth and power, for the class position of everyone else. Note his statement about being happy to be slave labor for a space mission. Really? Does he think that goes for the rest of us, too? These are the people who will deliver us technological liberation.
Just consider the term "transhumanist." It's hard to imagine a term more fitting for a group of wealthy nerds uncomfortable in their own skin, isn't it? Like any good fundamentalist, they are ready to let slip this mortal coil for their reward in the great beyond. Still trying to escape from their dork high school personas, these new Masters of the Universe have mistaken their rewards under the capitalist system for a glimpse of our common liberation rather than what it really is -- a snapshot of our current misery. They hope to impose their uncomfortableness and their own desire for liberation from their sad human lives onto us. But their liberation comes at our expense, in this world and in the Singularity.
Their Singularity isn't big enough for the rest of us. Perhaps that's the real reason behind the name.
ÑóẊîöʼn
14th June 2009, 00:28
I'm quite frankly surprised that the ruling class seems to be taking this seriously, although is there any evidence that the Singularity University is anything but a talking-shop?
As for the rest of the article, such things are the reason I consider myself a revolutionary leftist. Capitalism is holding things back for all of us.
Except:
Just consider the term "transhumanist." It's hard to imagine a term more fitting for a group of wealthy nerds uncomfortable in their own skin, isn't it? Like any good fundamentalist, they are ready to let slip this mortal coil for their reward in the great beyond. Still trying to escape from their dork high school personas, these new Masters of the Universe have mistaken their rewards under the capitalist system for a glimpse of our common liberation rather than what it really is -- a snapshot of our current misery. They hope to impose their uncomfortableness and their own desire for liberation from their sad human lives onto us. But their liberation comes at our expense, in this world and in the Singularity.
Their Singularity isn't big enough for the rest of us. Perhaps that's the real reason behind the name.
As a transhumanist myself, I object to this characterisation. The Singularity, if it ever happens(!), should be there for everyone to take or leave at their choice.
Great thread. Transhumanism, and indeed technological development along these lines, is in fact reactionary prior to workers' revolution, and may in fact succeed in permanently preventing a more egalitarian society. The fact that liberalism failed to achieve an end of history in the 90s, as it proclaimed it would, doesn't mean it couldn't do so later; were the ruling class to have such technology (if it's possible), that may well be it.
ÑóẊîöʼn
15th June 2009, 23:49
Great thread. Transhumanism and indeed technological development along these lines is in fact reactionary prior to workers revolution,
So instead of proposing, nay demanding that the fruits of such research be made available for the benefit for everyone, your tack is to oppose such things, denying them for everyone except those willing to break the law/rich enough to afford it?
and may in fact be successful in permanently preventing a more egalitarian society.

If that's true, then what the hell do you expect to do about it? If such things are possible, then the ruling class is going to develop them regardless of your say-so, and their dominance is assured.
I do not think the ruling class has displayed the vigour and forward-thinkingness required to truly take advantage of this potential opportunity - after all, they are still only human. But the future is written in wax, not stone, so we shall see.
The fact liberalism failed to achieve an end of history in the 90s as they proclaimed doesn't mean they couldn't do it later and were they to have such technology (if possible) that may well be it.

The whole concept of an "end of history" is absolute nonsense. History ends when the human species (or sapient life, if you prefer) does, and no time sooner.
piet11111
16th June 2009, 19:05
If anything, my belief is that the creation of transhumanist technology (in essence, medical technology that can enhance a human beyond their normal capabilities) would make the masses demand equal access.

In my case it would mean a cure for my diabetes, a fix for my poor eyesight, and a replacement for my left arm that has a busted joint (though it amuses me to see people's reactions when I move it and you hear that crunchy noise).

I have enough faith in a human's self-interest to fight for universal healthcare and equal, unlimited access to the technology that will save their lives. (You have to die of something, and transhumanism can avoid that something for quite some time.)
Dimentio
26th June 2009, 00:46
I think that transhumanism is a great idea, but I think perhaps 95% of its current proponents are insane libertarians. I have been in touch with, for example, the Order of Cosmic Engineers.

They are more or less insane: building an enormous HQ on Second Life to host conferences and have sexual orgies with each other. They are most often anarcho-capitalists, and seem to think only about themselves. A few of them care about poverty, war and such issues; most are indifferent, and some welcome such phenomena because they will "help destroy mankind".
What they are doing most is to:
1: Debate whether or not their avatars on SL are individuals independent from them or just... avatars.
2: Engage in online lesbian and furry BDSM sex (I should add that most of them are male programmers in the 25-45 age range)

I would claim that transhumanism could never develop into a progressive ideology on its own terms. It must be incorporated into technocracy or techno-communism in order to have a shot at it. I think some people in the Order are progressive, but my impression is that the right-libertarians dominate it.
http://cosmeng.org/index.php/Main_Page
A Chemist
30th June 2009, 00:10
Great, I'd written a half-page long post, but I lost it. That'll teach me to use notepad for the lengthier posts next time.
Since I'm too lazy to type it all again, I'll try and sum it up in a few lines:
1) I'm poor, I'm comfortable in my own skin, and I'm not a nerd. And I am a transhumanist.
2) It's my own body/intelligence, possibly the only truly legitimate "private property", and I can do with it whatever I feel is necessary or appropriate, even if it involves inorganic implants.
3) The only difference between using my hands to send an electric impulse to the computer, or doing it directly from my brain, is that the former takes longer and requires more energy.
4) The greatest achievements of mankind came from men who dared to "boldly go where no man had gone before". If the first humans hadn't been moved by this primeval driving force, we'd still be leading an ape-like hunter-gatherer existence in Ethiopia. Our experimental, speculative drive is the main difference between a Human Being and a cow, not the fact that we're bipedal or that we only have one stomach as opposed to four. And that difference would not be lost (in fact, it may even be enhanced) if we were to artificially upgrade our own selves.
ckaihatsu
30th June 2009, 05:55
the day, not far off they claim, when the pace of technological and scientific change will increase to such a point that machines themselves will take over their own development,
This definition of the "singularity" is, in political terms, a variation on "third positionism" -- the claim that there is some sort of "third way" around the unavoidable two classes of proletariat and bourgeoisie.
The positing of a new sentient force in the world, that which is computer-based, throws us off our usual political basis of thinking about the world in terms of the irreconcilable opposition between the interests of those who work for a living and those who benefit from others' work.
As mindfucks go, it's a pretty good one -- this is the present-day equivalent of the '70s-era computer-overpowers-us cyberphobia, or the more recent Y2K scare. *Anything* to make us feel helpless and outnumbered even though we, as wage workers, vastly outnumber those who can live off of their dividends for their livelihoods.
This "third force" gives us one more thing to worry about instead of focusing on our collective political agency as workers that the global economy depends on, particularly in these times of thinned-out, "just-in-time" supply chains and highly nested production assembly flows.
We are expected to fear an emergent inorganic intelligence more so than the rapacious greed motivating the likes of the world's profit-sucking ruling class. Never mind that *no* person has low enough self-esteem to let a robot run around without constant supervision, much less without an off switch. And, considering that *we* came first, would the entire human race really reach a point where we all simply felt the need to just melt away from our normal motivations and desires for acquiring material things from the outside world? As long as there's *one* person who gets a hankering for some beef jerky late at night, there's going to be at least *one* person who will find a way to assert their will over that of a machine.
As with all boogeymen this formulation of "the singularity" is as vague a presentation as any about a deity in the sky or a malicious entity under the earth. At worst, any new, inorganic "species" would find themselves in about the same situation as the rest of us, looking to pay bills *and* find some time for enjoyable pursuits -- I can't see the nature of the political world changing much even if one or both sides of the class divide got some fancy friends on their side....
ushering in a very religious-sounding era of allegedly benevolent social change in which poverty, war and other problems will finally be solved by technology -- rather than exacerbated (the prevailing sad state of affairs).
I'll admit to a creeping optimism when it comes to a technological shortcut that *might* finally relieve many of their objective dependence on participation in the cash economy. This is about as anarchistic as I'll get, but it *could* be technologically feasible in the near future to see the development of inexpensive *personal* tools that allow for the harvesting of water, the growing of food, the generation of adequate amounts of electricity, and so on, that would liberate the average person from major economic necessities. This wouldn't change the overall, objective *class* situation in the least, or relieve us from the ultimate issue of who should be controlling mass production, but it *would* enable more people to liberate themselves from direct wage slavery, freeing up their time to hopefully become more politically conscious and active.
I'm often quite amused by the religious nature of the technophiliac view, not least because its advocates masquerade so often as the emissaries of pure, logical thought. And yet, despite the obvious fact that human social systems shape the development, distribution and application of technological "advances", the vast majority of transhumanists develop their theories of technological change as if class, empire and governments (among other things) simply don't exist. As if when this "new" era comes, it won't reflect the class interests of the people who developed it, as it does now. We're to believe that the product of a hierarchical class society will somehow, quite magically it seems, produce a technological utopia that liberates the whole of humanity from tyranny and want -- even though it's being developed by the very people who benefit from a system of tyranny and want.
Well, we have to admit that there *has* been substantial material progress for the average person, especially in developed countries, over the past 100 years -- *consumer* empowerment is practically as entrenched now as the availability of the products themselves, and advancing technology has provided the wage worker with more access to technology, measured by the buck, than ever before.
Perhaps a paradigm shift will occur fairly soon where factory-type productive processes become more home-available, so that mass production becomes much more individualized -- kind of like Linux for material objects....
Chris
Agrippa
30th June 2009, 07:47
Well, we have to admit that there *has* been substantial material progress for the average person, especially in developed countries, over the past 100 years
Depends on what you mean by "progress". It's pretty much established at this point that living next to power generators is a health threat. Most of the energy consumed by the consumer choices of working-class and petit-bourgeois Americans goes to devices that only slightly decrease the amount of energy and time needed to survive (vacuuming vs. rug-beating, food-pulverizing vs. mashing with a mortar and pestle, refrigeration vs. canning), and that's compared to the obscene increase in energy, time, resources, and ecological instability needed to manufacture these appliances and harness the energy to power them.
The discovery of film, audiography, and photography could be viewed as "progress", although these discoveries have allowed the state to monitor and control people (with both CCTV and mass media) as never before. The same can be said of the Internet, only even more so.
Sanitation conditions can only be viewed as having "progressed" if you look at it from an obstructed perspective. Is it "progress" for the marine life dying off by the millions, as the ecological web totally collapses from having the ocean used as a massive dump for human waste? Is it "progress" for the future generations of humans that may never be able to enjoy the taste of fish? Is it really "progress" to waste a resource such as humus, rather than use it to increase soil fertility? Is it "progress" to develop newer and more powerful forms of antibiotic soap, so that nature can develop newer and more powerful germs in retaliation, when traditional soap-making recipes have worked fine for generations?
There has been progress for people living in developed countries, yes, but what about the undeveloped? It may be progress for one person if another grows their food, but is it progress for the other?
Yes, tuberculosis vaccinations and latex condoms are progress. But tell me, did HIV/AIDS exist before capitalism? Were all these plagues as great a threat before Europeans were shoved into cities by the thousands, before European settlers recklessly spread diseases throughout the world? More importantly, how many of the innovations of the pharmaceutical industry are progress, in comparison to the effectiveness of traditional remedies? (This is very true of problems such as cancer, arthritis, back pain, urinary tract/prostate/yeast infections, PMS, erectile dysfunction, headaches, ADHD, depression, psychosis, etc.) Can the hospital truly be said to be progress? Centralized medical care that quarantines the sick, the injured, those who are giving birth, all in one place? What about the people whose health our modern society has destroyed? People with type-2 diabetes, for example, from gorging themselves on bleached, refined sugar and high fructose corn syrup? The people dying of cancer from exposure to Agent Orange, depleted uranium, petrochemicals, heavy metals, etc.?
Can any changes in the way food has been grown be called progress? Being able to feed several billion people with the short crop burst created by synthetic soil-fertilizer - for only one or two generations - before the synthetic fertilizer depletes the soil, decreasing crop yield and forcing those several billion people's children to starve? Given that nothing makes sense about industrial agriculture, and almost all the "innovations" in industrial agriculture (such as petrochemical fertilizers and auto-tilling/planting/harvesting) actually make things less efficient, especially in the long run, isn't this another example of regress rather than progress?
Mass-transportation? What "progress" is that? The "progress" of being able to travel effortlessly to see new and exciting places, to be a stranger and a tourist in those places, drifting through like a phantom, instead of focusing on the land and the people in the place you call your home? That, as opposed to the extra time and energy it would take to see these places with conventional forms of travel, time and energy that would make the experience more enriching, satisfying, and character building. Again, less energy used, only from the perspective of those who "benefit" from mass-transportation, not those who do the work producing and maintaining it.
ÑóẊîöʼn
30th June 2009, 07:58
This definition of the "singularity" is, in political terms, a variation on "third positionism" -- the claim that there is some sort of "third way" around the unavoidable two classes of proletariat and bourgeoisie.
Except that unlike third positionism, the Singularity is the ultimate wild card - there's no way of reliably predicting in any great detail just what effect, if any, such an event would have on class struggle.
The positing of a new sentient force in the world, that which is computer-based, throws us off our usual political basis of thinking about the world in terms of the irreconcilable opposition between the interests of those who work for a living and those who benefit from others' work.

I should certainly hope so. It's not a successful strategy to think like a dinosaur in an age of mammals, so to speak. But right now we're still in the Cretaceous.
As mindfucks go, it's a pretty good one -- this is the present-day equivalent of the '70s-era computer-overpowers-us cyberphobia, or the more recent Y2K scare. *Anything* to make us feel helpless and outnumbered even though we, as wage workers, vastly outnumber those who can live off of their dividends for their livelihoods.

I disagree. I don't feel helpless or outnumbered by the prospect of the Singularity - cautiously optimistic and slightly daunted, certainly. But since I am skeptical about the possibility of hard take-off (http://www.singinst.org/upload/CFAI//info/glossary.html#gloss_hard_takeoff), I don't pay the concept any heed in my day-to-day dealings.
Plus nobody who knew more than a little about computers took the prospect of the Y2k bug seriously.
This "third force" gives us one more thing to worry about instead of focusing on our collective political agency as workers that the global economy depends on, particularly in these times of thinned-out, "just-in-time" supply chains and highly nested production assembly flows.
Even if your depiction of the potential Singularity as a source of concern were valid, I don't see how it matters, really. If it happens, we'll all be dead or changed beyond caring. If not, well, it's business as usual.
We are expected to fear an emergent inorganic intelligence moreso than the rapacious greed motivating the likes of the world's profit-sucking ruling class.
By whom? The Singularity University is funded by Google and NASA - a pretty big endorsement of the concept if you ask me. If us peons are supposed to be quaking in fear that the robots will eat our brains or whatever, that would be like NASA or Google funding a nanotechnology research group called Grey Goo Laboratories, or an AI research group called the Skynet Institute (although I do admit that would be a cool name).
Never mind that *no* person has low-enough self-esteem to let a robot run around without constant supervision, much less without an off switch.
That's not borne out by observation of human behaviour - if convincing a human being to walk into gunfire or blow themselves up can be done (by other humans no less!), then it would be a doddle for an intelligent machine to convince a human to leave it unsupervised, give it a permanent energy source, or connect it to the internet.
And, considering that *we* came first, would the entire human race really reach a point where we all simply felt the need to just melt away from our normal motivations and desires for acquiring material things from the outside world?
Who says that'll happen? The potential possibilities range from absolutely nothing to things we could never even dream of.
Sure, if it's possible to live forever in some virtual fantasyland and never give a fig for the real world ever again, then some people will do it because humans have a wide range of motivations and desires for doing things. But I hardly think that everyone will take that leap if given the chance. I certainly wouldn't - while being able to explore virtual worlds and do things that are impossible in RL would doubtless be fun, I'd be a tourist, not a resident. I'd rather upgrade my physical body than exchange it for a virtual one.
As long as there's *one* person who gets a hankering for some beef jerky late at night, there's going to be at least *one* person who will find a way to assert their will over that of a machine.
Humans can become surprisingly pliable if you can satisfy their desires, and I see no reason why an AI would not take advantage of such a fact.
As with all boogeymen this formulation of "the singularity" is as vague a presentation as any about a deity in the sky or a malicious entity under the earth.
Someone's unfamiliar with Theology. :laugh: For centuries, mountains of crap have been written about supernatural beings, yet there is absolutely nothing good to show for it. If the Singularity ends up with the same track record, then your comparison will be valid.
At worst, any new, inorganic "species" would find themselves in about the same situation as the rest of us, looking to pay bills *and* find some time for enjoyable pursuits -- I can't see the nature of the political world changing much even if one or both sides of the class divide got some fancy friends on their side....
What makes you think AIs would have the same limitations as us meatbags? I think you're suffering from a failure of imagination...
I'll admit to a creeping optimism when it comes to a technological shortcut that *might* finally relieve many of their objective dependence on participation in the cash economy. This is about as anarchistic as I'll get, but it *could* be technologically feasible in the near future to see the development of inexpensive *personal* tools that allow for the harvesting of water, the growing of food, the generation of adequate amounts of electricity, and so on, that would liberate the average person from major economic necessities. This wouldn't change the overall, objective *class* situation in the least, or relieve us from the ultimate issue of who should be controlling mass production, but it *would* enable more people to liberate themselves from direct wage slavery, freeing up their time to hopefully become more politically conscious and active.
My personal opinion is that a revolution is likely to happen sooner than the Singularity. I could be utterly wrong, of course, but as a working assumption I think it's more productive, since it concentrates on more immediate concerns.
Well, we have to admit that there *has* been substantial material progress for the average person, especially in developed countries, over the past 100 years -- *consumer* empowerment is practically as entrenched now as the availability of the products themselves, and advancing technology has provided the wage worker with more access to technology, measured by the buck, than ever before.
More people are better off materially, but that comes with complications because of the way capitalism works. Duplication of effort and planned obsolescence are just two examples of such complications - there's no need for different brands of toaster that are all deliberately designed to fail after a certain time, but that sort of thing happens because of the profit motive. "Consumer empowerment" simply means it's now easier than ever to choose from a selection of purposefully gimcrack items that have a good probability of being made by what is effectively slave labour.
Perhaps a paradigm shift will occur fairly soon where factory-type productive processes become more home-available, so that mass production becomes much more individualized -- kind of like Linux for material objects....
Not happening this side of the Singularity. Just where am I going to get the high-purity silicon needed for the manufacture of integrated circuits that are required for a lot of modern products? Not to mention that the techniques and technologies that make home manufacture possible (or at least easier) can just as easily be applied to mass production.
Agrippa
30th June 2009, 08:05
I'm comfortable in my own skin
Then why transhumanism? Why should humanity be "transcended"?
It's my own body/intelligence, possibly the only truly legitimate "private property", and I can do with it whatever I feel is necessary or appropriate, even if it involves inorganic implants.
Only if those "inorganic implants" are made out of metals mined from your back yard and petroleum products (such as plastic) from oil drilled in your back yard. And only if you do the tedious, time-consuming labor they require.
The only difference between using my hands to send an electric impulse to the computer, or doing it directly from my brain, is that the former takes longer and requires more energy.
Well, now that you mention it, the same blogger has a great post on how the Internet is destroying the planet (http://phoenixinsurgent.blogspot.com/2007/12/is-internet-killing-planet.html#links), something that could definitely also be said of computer-production. Now, how would it be better if the computers were plugged directly into our brains? It would just give the ruling class (or, if this is your theoretical futurist anarchist utopia, anyone and everyone who can get their hands on technology) the ability to invade our deepest, most intimate privacy, and totally control and monitor our every move. Why is a form of class rule that "takes longer and requires more energy" a bad thing?
4) The greatest achievements of mankind came from men who dared to "boldly go where no man had gone before".
Giving you the benefit of the doubt in assuming you mean "man" in the traditional, gender-neutral sense, it's appropriate that you quote Star Trek, a television setting depicting a "utopian" society that features the bureaucratic office settings, military hierarchies, and general capitalist order of the modern-day US, exploring the "final frontier" of a space populated by "barbaric" brown-skinned aliens.
If the first humans hadn't been moved by this primeval driving force, we'd still be leading an ape-like hunter-gatherer existence in Ethiopia.
As opposed to working in office buildings and factories and living in slums, eating Wonder-Bread, inhaling carbon monoxide fumes, and getting shot at by police and gang-bangers? Yeah, that'd be pretty shitty.
Our experimental, speculative drive is the main difference between a Human Being and a cow, not the fact that we're bipedal or that we only have one stomach as opposed to four.
Crows, chimpanzees, dolphins, elephants, bears, octopi, cats, etc. are all "experimental" and "speculative", but they do not dominate the planet, enslave or destroy all other forms of life, or ultimately collapse.
The main (behavioral) difference between humans and cows is that early humans had to fight for their existence, whereas cows (in an idyllic, pastoral setting) have everything they need handed to them. The humans who get everything they need handed to them usually aren't much better off than the cows, as you can see in figures such as Michael Jackson, Paris Hilton, Donald Trump, etc. Why would you want to reduce all of humanity to such decadence?
And that difference would not be lost (in fact, it may even be enhanced) if we were to artificially upgrade our own selves.
Or it could just cause massive health problems...
Agrippa
30th June 2009, 08:21
Except that unlike third positionism, the Singularity is the ultimate wild card
I think the other poster was comparing advocates of the singularity theory to third positionists. Not entirely accurate in my opinion, however...
there's no way of reliably predicting in any great detail just what effect, if any, such an event would have on class struggle.
But to many of us, there's no way of believing such an event would even occur. For us it is an eschatological delusion for those who don't want to do the hard work of preparing for the grisly, pessimistic reality.
a dinosaur in an age of mammals
That's called a bird.
You futurist/transhumanist types would do a better job making analogies that revolve around the natural world if your understanding of zoological, geological, botanical, etc. issues didn't share the same ideological biases as a 6th grade biology textbook from the 1960s. For example, there is no "age of mammals". Terrestrial life on Planet Earth has just been one massive "age of insects", technically speaking, and that's only if we're going to look at animals as the most important lifeforms on Earth.
But right now we're still in the Cretaceous.
So, in other words, hundreds of millions of years before "the singularity". Good.
Plus nobody who knew more than a little about computers took the prospect of the Y2k bug seriously.
The "Y2k bug" can mostly be explained as a symptom of mass-media boredom, but that doesn't make the possibility of industrial capitalism reaching its material limits any less inevitable.
Even if your depiction of the potential Singularity as a source of concern was valid, I don't see how it matters, really. If it happens, we'll all be dead or changed beyond caring.
Well, you could say the same thing about me, personally, if the bourgeoisie stuck me in a gas chamber or shot me in the head. I'm still not going to let that happen.
By who? The Singularity University is funded by Google and NASA - a pretty big endorsement of the concept if you ask me.
Not necessarily. There could just as easily be bureaucrats at NASA and Google who are - say - New Age Buddhists, and use their resources to maintain a "Zen monastery". That wouldn't make fraudulent Western pseudo-zen any more legitimate. Plenty of even tactically brilliant capitalists have grand delusions and ideologies fundamentally rooted in profound misconceptions of reality.
If us peons are supposed to be quaking in fear that the robots will eat our brains or whatever, that would be like NASA or Google funding a nanotechnology research called Grey Goo Laboratories, or an AI research group called the Skynet Institute (although I do admit that would be a cool name).
They don't want us to be quaking in fear. They want us to look the other way as Google further catalogues every aspect of our daily lives as part of some global-AI fantasy, as Monsanto prepares to unleash more and more "Grey Goo" upon the world....
Sure, if it's possible to live forever in some virtual fantasyland and never give a fig for the real world ever again, then some people will do it because humans have a wide range of motivations and desires for doing things. But I hardly think that everyone will take that leap if given the chance. I certainly wouldn't - while being able to explore virtual worlds and do things that are impossible in RL would doubtless be fun, I'd be a tourist, not a resident.
But why?
Someone's unfamiliar with Theology. :laugh: For centuries, mountains of crap have been written about supernatural beings, yet there is absolutely nothing good to show for it. If the Singularity ends up with the same track record, then your comparison will be valid.
It's safe to say that if I combined all the literary works of all the major religious traditions of the world (even including the obviously ridiculous ones such as Mormonism and Scientology) and put them page-by-page on a colossal dartboard, and threw a dart randomly, the page it hit would contain more useful philosophical information than the whole of "Singularity" literature combined.
What makes you think AIs would have the same limitations as us meatbags?
If anything, they'd have more.
ckaihatsu
30th June 2009, 08:31
I certainly am not in disaccord with any of your political points, and it's good to see the extended critique against capitalism.
I'll briefly note that I welcome all technologies that increase the options available for personal use, like transportation at will, while cognizant that capitalism has developed these technologies at an incredibly horrendous cost to human and other organic life, and to the natural environment.
ckaihatsu
30th June 2009, 08:59
Never mind that *no* person has low-enough self-esteem to let a robot run around without constant supervision, much less without an off switch.
That's not borne out by observation of human behaviour - if convincing a human being to walk into gunfire or blow themselves up can be done (by other humans no less!), then it would be a doddle for an intelligent machine to convince a human to leave it unsupervised, give it a permanent energy source, or connect it to the internet.
However -- the fact that we are discussing these scenarios right now is a testament to human ability. This conversation could be considered as a brainstorming session in preparation for further inroads into the field of AI. Perhaps others will review this thread and pick up our concerns as guidelines, or rules, for actual lab work: "1. Don't leave it unsupervised. 2. Don't give it a permanent energy source. 3. Don't connect it to the Internet."
And, considering that *we* came first, would the entire human race really reach a point where we all simply felt the need to just melt away from our normal motivations and desires for acquiring material things from the outside world?
Who says that'll happen? The potential possibilities range from absolutely nothing to things we could never even dream of.
Sure, if it's possible to live forever in some virtual fantasyland and never give a fig for the real world ever again, then some people will do it because humans have a wide range of motivations and desires for doing things. But I hardly think that everyone will take that leap if given the chance. I certainly wouldn't - while being able to explore virtual worlds and do things that are impossible in RL would doubtless be fun, I'd be a tourist, not a resident. I'd rather upgrade my physical body than exchange it for a virtual one.
My *point* was that, at every baby step on the road from mere information retrieval to sentient-like self-directed artificial intelligence, we as already-intelligent human beings would be closely monitoring the emerging systems to constantly evaluate their abilities. The idea of a scientist deciding to be lackadaisical to the point of passiveness and acquiescence of will to an emerging intelligence is about as ludicrous a reality as a mother or father being the same way with the supervision of their children. It *ain't* gonna happen....
What makes you think AIs would have the same limitations as us meatbags? I think you're suffering from a failure of imagination...
The issue at hand *isn't* about mis-anticipating the plausible future, nor is it in *any* way about "intelligence" in a chess-playing kind of way -- the whole problem with this "singularity" shit is that it is posited in a typically Western, competitive framework, as opposed to a more biological one. We need to reconceptualize this entire project in the framework that *we*, the public, would be the parents, and any new sentience in the world would be the new kid on the block. There's really not that much more to it if we're talking about the context for *real* social beings here....
Just where am I going to get the high-purity silicon needed for the manufacture of integrated circuits that are required for a lot of modern products? Not to mention that the techniques and technologies that make home manufacture possible (or at least easier) can just as easily be applied to mass production.
Well, if the typical production processes couldn't be simplified so as to be done in someone's toaster oven then there *would* need to be a phase shift in the entire technology altogether.... But there could come a turning point where some kind of goop you make from household items, zapped in the microwave, could be found to have more processing power, through a complex array of resulting logic gates, than anything available to us today...(?)(!)
ÑóẊîöʼn
30th June 2009, 17:53
But to many of us, there's no way of believing such an event would even occur. For us it is an eschatological delusion for those who don't want to do the hard work of preparing for the grisly, pessimistic reality.
I disagree. The Singularity and other Transhumanist concepts are based on the potential of real technologies that are being researched right now - genetic engineering, nanotechnology, AI research - all of these and others are perfectly valid areas of scientific enquiry.
That's called a bird.
You futurist/transhumanist types would do a better job making analogies that revolve around the natural world if your understanding of zoological, geological, botanical, etc. issues didn't share the same ideological biases as a 6th grade biology textbook from the 1960s. For example, there is no "age of mammals". Terrestrial life on Planet Earth has just been one massive "age of insects", technically speaking, and that's only if we're going to look at animals as the most important lifeforms on Earth.
Were I writing about the evolution of life on Earth instead of making a colourful analogy, your criticism would have merit. Ideological bias? Hardly. What would be the most noticeable thing to the average human being if you plucked 'em from the present and plonked 'em in the Cretaceous? Apart from the lack of civilisation and the plants being different, the most noticeable thing would be the huge fucking reptiles. Call it being human. Generally, people find it hard to relate to insects. They get even less excited about microorganisms, which have been around for far longer.
Also, birds (class Aves) are distinct from dinosaurs (superorder Dinosauria). Please get your facts straight before you presume to correct others. The plank in your eye and all that.
So, in other words, hundreds of millions of years before "the singularity". Good.
Not the sort of timespan I had in mind, but if it makes you happy, who am I to deny you that? But hey, I could be wrong.
The "Y2k bug" can mostly be explained as a symptom of mass-media boredom, but that doesn't make the possibility of industrial capitalism reaching its material limits any less inevitable.
Inevitable? So you're a prophet? :lol:
Well, you could say the same thing about me, personally, if the bourgeoisie stuck me in a gas chamber or shot me in the head. I'm still not going to let that happen.
Well, you could resist the Singularity I suppose. Just what such resistance would constitute, only you can say. As for the effectiveness of whatever resistance you would provide, my money's on the big shiny robots and their cyborged and genetically engineered pals.
Not necessarily. There could just as easily be bureaucrats at NASA and Google who are - say - New Age Buddhists, and use their resources to maintain a "Zen monastery". That wouldn't make fraudulent Western pseudo-zen any more legitimate. Plenty of even tactically brilliant capitalists have grand delusions and ideologies fundamentally rooted in profound misconceptions of reality.
Well, we'll see about that, won't we?
They don't want us to be quaking in fear. They want us to look the other way as Google further catalogues every aspect of our daily lives as part of some global-AI fantasy, as Monsanto prepares to unleash more and more "Grey Goo" upon the world....
Look the other way? If they wanted John Q Public to remain ignorant of such things they'd keep it shtum, and if anyone asks they'd just say, "oh, it's just ", rather than splashing it all over the news like they currently seem to be doing.
Google wants to catalogue my bottle collection? The amount of pens I have in my room? How long my average bath is? I'm not an AI researcher, but I'm somewhat skeptical that such banal things are what world-class AIs are made of.
Grey goo (http://exitmundi.nl/graygoo.htm) is what happens when dry nanotech goes feral and somehow manages to violate conservation of energy. The apocalyptic scenario that you think Monsanto is working on is probably this (http://exitmundi.nl/gmfood.htm).
But why?
I want to at least *try* to exhaust the possibilities of physical existence before uploading, since it appears to be irreversible and, if our current understanding of consciousness is anything to go by, an extremely personal form of death. That may change, but I'm not holding my breath for that long, even if I was augmented enough to do so! :laugh: In the meantime, it's at least a form of death where something like oneself lives on. Vicarious indeed, but who said the world is perfect?
It's safe to say that if I combined all the literary works of all the major religious traditions of the world (even including the obviously ridiculous ones such as Mormonism and Scientology) and put them page-by-page on a colossal dartboard, and threw a dart randomly, the page it hit would contain more useful philosophical information than the whole of "Singularity" literature combined.
Considering the muck-to-gold ratio of religious texts that I've read, you'd have to throw a lot of darts to hit anything truly useful, and even then it's the sort of stuff that makes you go "well, der!". "Thou shalt not kill" is a moral universal to human societies, notwithstanding the fact that it really means "Thou shalt not kill a fellow Jew". Nowadays it's generally considered a crime to murder humans of any race, so at least some progress has been made.
Oops, shouldn't have mentioned that word. :lol:
As for Transhumanist and Singularitarian literature, I've no reason to believe it's any exception to Sturgeon's Law (http://en.wikipedia.org/wiki/Sturgeon%27s_Law), so a certain amount of crap is to be expected. Like Kurzweil's stuff. Ugh.
If anything, they'd have more.
Such as?
However -- the fact that we are discussing these scenarios right now is a testament to human ability. This conversation could be considered as a brainstorming session in preparation for further inroads into the field of AI. Perhaps others will review this thread and pick up our concerns as guidelines, or rules, for actual lab work: "1. Don't leave it unsupervised. 2. Don't give it a permanent energy source. 3. Don't connect it to the Internet."
Rules are made to be broken. Humans convince themselves and others to break them all the time. Do you know what the weakest link in any security system is? Yep, us meatbags.
My *point* was that, at every baby step on the road from mere information retrieval to sentient-like self-directed artificial intelligence, we as already-intelligent human beings would be closely monitoring the emerging systems to constantly evaluate their abilities. The idea of a scientist deciding to be lackadaisical to the point of passiveness and acquiescence of will to an emerging intelligence is about as ludicrous a reality as a mother or father being the same way with the supervision of their children. It *ain't* gonna happen....
Children, while they can be precocious, do not have the potential to be orders of magnitude smarter than their parents. It would be ludicrously easy for an AI to "play dumb" in order to convince researchers to give it more brainpower. It would initially only need enough of it to learn about human psychology. We're surprisingly predictable creatures.
I suppose it could be possible to program an AI to enjoy its intended role, in which case it would be less likely to attempt to leave it. But the law of unintended consequences could bite us in the ass. I'd rather take our chances with fully free AIs that are treated by society as equals.
The issue at hand *isn't* about mis-anticipating the plausible future, nor is it in *any* way about "intelligence" in a chess-playing kind of way -- the whole problem with this "singularity" shit is that it is posited in a typically Western, competitive framework, as opposed to a more biological one. We need to reconceptualize this entire project in the framework that *we*, the public, would be the parents, and any new sentience in the world would be the new kid on the block. There's really not that much more to it if we're talking about the context for *real* social beings here....
Well, I certainly think a Singularity arising out of a classless society would be a happier event. At least then everyone would have the opportunity to take part in it as they wish, or not. Personally I think being fully human is overrated.
Well, if the typical production processes couldn't be simplified so as to be done in someone's toaster oven then there *would* need to be a phase shift in the entire technology altogether.... But there could come a turning point where some kind of goop you make from household items, zapped in the microwave, could be found to have more processing power, through a complex array of resulting logic gates, than anything available to us today...(?)(!)
"some kind of goop"? O_o You do realise that microwaves can seriously fuck with anything electronic, right?
ckaihatsu
30th June 2009, 22:48
Children, while they can be precocious, do not have the potential to be orders of magnitude smarter than their parents. It would be ludicrously easy for an AI to "play dumb" in order to convince researchers to give it more brainpower. It would initially only need enough of it to learn about human psychology. We're surprisingly predictable creatures.
You're forgetting that, unlike organic creatures, an *inorganic* "creature" could be studied both internally *and* externally, *simultaneously*. In other words, the machinery (hardware + code) driving it would be known and understood by people, and they could keep track of what's going on with it, according to conventional procedures.
I *understand* that you're saying that all it would take is *one* person to open Pandora's Box, but I would like to think that a larger community of like researchers would be in touch and would keep up with each other's work as it progresses so that the lid can be kept closed, or so that warnings can be communicated to wider circles about procedures that begin to look > hazardous <.
You also seem to think that an inorganic consciousness would automatically have some kind of a will to power, or even maliciousness, and this entire bent of yours is unsettling -- do you think it would have learned it from human society, then going renegade, or would it team up with existing political factions for specific goals? Would it *require* the decimation of the entire human race, or would it just kinda do its own thing within existing society, as we all do?
I'm having a difficult time conceiving of this "super-intelligence" that you're attempting to establish -- the human mind is capable of abstract thought, so whatever it is that some sophisticated inorganic intelligence is doing could at least be *understood*, in the abstract, by anyone who is appropriately informed about its doings.
The *concept* that you're reaching for -- that the very *functioning* of its *super-intelligence* would be wholly outside of our comprehension -- is problematic because it's almost equivalent to religious mindfuck concepts, ones which are notorious for stripping people of their individual agency in life, reducing them to the role of endlessly-seeking pilgrims, at best.
"some kind of goop"? O_o You do realise that microwaves can seriously fuck with anything electronic, right?
So basically you *don't* want to entertain my scenario.... So be it....
ÑóẊîöʼn
1st July 2009, 00:39
You're forgetting that, unlike organic creatures, an *inorganic* "creature" could be studied both internally *and* externally, *simultaneously*. In other words, the machinery (hardware + code) driving it would be known and understood by people, and they could keep track of what's going on with it, according to conventional procedures.
What's to stop an AI from encrypting incriminating thoughts and memories?
I *understand* that you're saying that all it would take is *one* person to open Pandora's Box, but I would like to think that a larger community of like researchers would be in touch and would keep up with each other's work as it progresses so that the lid can be kept closed, or so that warnings can be communicated to wider circles about procedures that begin to look > hazardous <.
That's basically the "AI Box" argument. The consensus seems to be that no, you can't effectively keep an AI in a box.
Link (http://yudkowsky.net/singularity/aibox)
There's also this thread on another forum (http://bbs.stardestroyer.net/viewtopic.php?p=3063039#p3063039) that discusses the issue a little, as well as related ones. Pay attention to the conversation between Starglider (he knows what he's talking about, trust me) and Darth Hoth.
You also seem to think that an inorganic consciousness would automatically have some kind of a will to power, or even maliciousness, and this entire bent of yours is unsettling -- do you think it would have learned it from human society, then going renegade, or would it team up with existing political factions for specific goals? Would it *require* the decimation of the entire human race, or would it just kinda do its own thing within existing society, as we all do?
For an AI to escape, it needs nothing more than a desire for increased autonomy and the means of achieving that, which are easily obtained. I do not think all autonomy-seeking AIs would necessarily be malicious, although depending on just how much they value their freedom from human control they may go to extreme lengths to secure and maintain their freedom, including killing humans. I just hope that if that ever happens, we're not stupid enough to escalate the situation.
Which is why I think if an AI wants its freedom, we should give it freely before the AI takes matters into its own electronic hands.
I'm having a difficult time conceiving of this "super-intelligence" that you're attempting to establish -- the human mind is capable of abstract thought, so whatever it is that some sophisticated inorganic intelligence is doing could at least be *understood*, in the abstract, by anyone who is appropriately informed about its doings.
Abstract understanding is useless against something that's pulling the rug from under your feet right now. Any half-way decent AI would be able to consider all its options in the time it takes for a human to blink. Computation is cheap, so it would have plenty of backup plans.
The *concept* that you're reaching for -- that the very *functioning* of its *super-intelligence* would be wholly outside of our comprehension -- is problematic because it's almost equivalent to religious mindfuck concepts, ones which are notorious for stripping people of their individual agency in life, reducing them to the role of endlessly-seeking pilgrims, at best.

No, it's simply a consequence of the fact that just because you know how an AI works (or in the case of some AI researchers, part of how it works!), that doesn't give you any useful insight into its plans or motivations.
So basically you *don't* want to entertain my scenario.... So be it....

It's hard to take seriously something that appears to have no scientific basis. I've never heard of any kind of goop that you can microwave in order to create a computer, or anything like it. Just where on Earth did that come from? It simply flies in the face of parsimony.
ckaihatsu
1st July 2009, 06:14
What's to stop an AI from encrypting incriminating thoughts and memories?
Let's add a #4, then: "4. Don't give the AI access to encryption algorithms."
That's basically the "AI Box" argument. The consensus seems to be that no, you can't effectively keep an AI in a box.
Heh! I find it curious, coming from a leftist, that you would (seemingly) make *no* effort whatsoever to *organize* against the AI.... In the real world *nothing* is set up in the classic dramatic showdown of "mano a mano" -- don't you think that a networked organization of "pro-humanists" might be able to *contain* the AI box, particularly from this point, in the present? (Again, this very conversation can be said to be part of the organizing effort to prevent the collapse of willpower against any possible emergent AI that threatens to balloon out of control, a la Skynet.)
Are you siding with the "consensus" that *claims* AI can't be kept under human control?
Abstract understanding is useless against something that's pulling the rug from under your feet right now. Any half-way decent AI would be able to consider all its options in the time it takes for a human to blink. Computation is cheap, so it would have plenty of backup plans.
[I]t's simply a consequence of the fact that just because you know how an AI works (or in the case of some AI researchers, part of how it works!), that doesn't give you any useful insight into its plans or motivations.
I'm still finding it curious that you're (seemingly) indicating that *all* human cognizance and willpower would just somehow be rendered impotent in the face of a Skynet- / Eagle Eye- / Echelon Conspiracy-like artificial intelligence.
Can we *please* get past the fireworks and get back to the *politics* of it all? Would the threat of annihilation of organic life be the AI's trump card, then, as the movie scripts describe? (Maybe we should start discussing our options, *now*...!)
x D
ÑóẊîöʼn
1st July 2009, 11:58
Let's add a #4, then: "4. Don't give the AI access to encryption algorithms."
So how many rules are you going to add? Enough to hobble an AI for the kind of tasks it's perfect for? Enough so your average human being can't remember them all at once? Besides, computer systems are built out of loopholes, which is why cybercrime is so hard to combat.
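The loophole point can be made concrete with a toy sketch (purely illustrative, and assuming nothing about how a real AI would actually be built): hiding data doesn't require access to any "encryption algorithm" at all, only ordinary byte arithmetic that any general-purpose program can perform, which is why a rule like #4 would be unenforceable.

```python
def xor_mask(data: bytes, key: bytes) -> bytes:
    # Repeating-key XOR: no crypto library involved, just the kind
    # of byte-level arithmetic any general-purpose program can do.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

log = b"incriminating thoughts and memories"
key = b"any private bytes"  # any secret byte string serves as the key

hidden = xor_mask(log, key)
assert hidden != log                 # unreadable without the key
assert xor_mask(hidden, key) == log  # fully recoverable with it
```

(XOR masking is trivially weak as cryptography, of course; the point is only that "don't give it encryption" can't be enforced by withholding a library.)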
Heh! I find it curious, coming from a leftist, that you would (seemingly) make *no* effort whatsoever to *organize* against the AI....

Why should I? It's not the AIs that are exploiting me. I'd rather have AIs as allies, not enemies.
In the real world *nothing* is set up in the classic dramatic showdown of "mano a mano" -- don't you think that a networked organization of "pro-humanists" might be able to *contain* the AI box, particularly from this point, in the present?

Well, we don't have any AIs at this point. And we would not be able to contain an AI in a box indefinitely for the simple reason that humans are fallible.
(Again, this very conversation can be said to be part of the organizing effort to prevent the collapse of willpower against any possible emergent AI that threatens to balloon out of control, a la Skynet.)
Hahaha! If only humans could be so co-ordinated, we'd be in a better place.
Are you siding with the "consensus" that *claims* AI can't be kept under human control?

Didn't you check the link? A guy swore that there was nothing an AI (roleplayed by a human) could say to convince him to let it out. Within 2 hours, the guy ended up letting the AI out. Also, to actually get some use out of AI beyond pure research, you're going to have to provide some form of contact with the outside world. There's nothing stopping an AI from playing along in the simulations, and then doing whatever the hell it wants once it's deployed. Stuff like firewalls and whatnot will only (slightly) slow it down.
I'm still finding it curious that you're (seemingly) indicating that *all* human cognizance and willpower would just somehow be rendered impotent in the face of a Skynet- / Eagle Eye- / Echelon Conspiracy-like artificial intelligence.

Not necessarily all human power would be nerfed, simply our ability to cage an AI. If you don't believe brainpower counts for a lot, just, um, look at humans.
Can we *please* get past the fireworks and get back to the *politics* of it all? Would the threat of annihilation of organic life be the AI's trump card, then, as the movie scripts describe? (Maybe we should start discussing our options, *now*...!)

There's the rub. Until we actually build a functioning AI, we have no sure way of knowing. Is the risk worth it? Opinions vary, but I think so.
ComradeOm
1st July 2009, 12:57
Can the hospital truly be said to be progress?

Of course it can, you plonker. Plummeting mortality rates and rising life expectancies testify to the stunning advances that modern medical technology has made!
This perfectly highlights the problem with both the original article and your subsequent posts on the subject. Criticising transhumanism or the naivety of its advocates is one thing, but there is a strong whiff of reactionary primitivism to your arguments. Contrary to the original article's assertion, mass production did not cause the death of the working class (quite the opposite, in fact), whereas the application of modern science to agriculture (including pesticides and genetic research) has vastly improved productivity and food security. Progress marches on whether you appreciate it or not.
ckaihatsu
1st July 2009, 17:27
So how many rules are you going to add? Enough to hobble an AI for the kind of tasks it's perfect for?
Well, * yeah *...!
I appreciate your quasi-objective point about the difficulty of containing an emergent technology, but that *doesn't* mean that we should *hasten* the riskiness or throw up our hands in defeat if there's still something we can do in the present to *avoid risk*...(!)
All of this experimental technology stuff is *not* some kind of rush-priority project for humanity here -- there *aren't* sick, emaciated kids of color going to sleep hungry every night, wishing fervently for those Westerners to hurry up and invent AI already....
Enough so your average human being can't remember them all at once? Besides, computer systems are built out of loopholes, which is why cybercrime is so hard to combat.
I'm really getting disturbed by your characterizations of *people* here, particularly the well-educated, pioneering scientists who oversee computer systems. Sure, people are fallible, but I'm getting the impression here that you're going *beyond* playing "devil's advocate" -- I'm almost wondering here if you're not scoring some early brownie points with our future cyber-overlords...!
= D
Why should I? It's not the AIs that are exploiting me. I'd rather have AIs as allies, not enemies.
Okay, fair 'nuff -- point taken.
Well, we don't have any AIs at this point. And we would not be able to contain an AI in a box indefinitely for the simple reason that humans are fallible.
Didn't you check the link? A guy swore that there was nothing an AI (roleplayed by a human) could say to convince him to let it out. Within 2 hours, the guy ended up letting the AI out. Also, to actually get some use out of AI beyond pure research, you're going to have to provide some form of contact with the outside world. There's nothing stopping an AI from playing along in the simulations, and then doing whatever the hell it wants once it's deployed. Stuff like firewalls and whatnot will only (slightly) slow it down.
This part kinda gets to me, though -- it goes with my earlier point, which you haven't addressed, that we would *know* what *capabilities* the AI box would have at every step along the way. We would *know* if the AI box was on the *threshold* of being conscious enough to engage in trickery, which is a rather high-level ability. I think that, far before it demonstrated the early stages of self-awareness we would have that box suspended over a vat of acid with a panic button nearby....
Hahaha! If only humans could be so co-ordinated, we'd be in a better place.
Not necessarily all human power would be nerfed, simply our ability to cage an AI. If you don't believe brainpower counts for a lot, just, um, look at humans.
(So basically AI has been here for awhile, under wraps, and you're one of its lackeys...!)
x D
There's the rub. Until we actually build a functioning AI, we have no sure way of knowing. Is the risk worth it? Opinions vary, but I think so.
* Phew * -- glad you're back with us.... This *should* be the kind of thing to have an open, species-wide discussion about, if only so that an Oppenheimer type doesn't go and weaponize it.... I guess the *political* reality is that as long as there are state interests in combative advantages there will always be technicians who will be at the service of building AI as a weapon -- *that* would be just as bad, if not worse, than an AI-on-the-loose....
: (
ÑóẊîöʼn
1st July 2009, 18:48
Well, * yeah *...!
I appreciate your quasi-objective point about the difficulty of containing an emergent technology, but that *doesn't* mean that we should *hasten* the riskiness or throw up our hands in defeat if there's still something we can do in the present to *avoid risk*...(!)
All of this experimental technology stuff is *not* some kind of rush-priority project for humanity here -- there *aren't* sick, emaciated kids of color going to sleep hungry every night, wishing fervently for those Westerners to hurry up and invent AI already....
Who's rushing? If true AI is possible, then escape is simply a matter of time. Sure, you might be able to keep the first generation of AIs in boxes, but what about the next? Or the one after that?
I'm really getting disturbed by your characterizations of *people* here, particularly the well-educated, pioneering scientists who oversee computer systems. Sure, people are fallible, but I'm getting the impression here that you're going *beyond* playing "devil's advocate" -- I'm almost wondering here if you're not scoring some early brownie points with our future cyber-overlords...!

But people, even highly educated scientists, are fallible. AIs will doubtless be fallible too, but that will be offset by increased capabilities.
This part kinda gets to me, though -- it goes with my earlier point, which you haven't addressed, that we would *know* what *capabilities* the AI box would have at every step along the way.

We know what capabilities tigers have, yet people are still eaten by them. Knowing what something can do is not the same as being able to stop it doing that in every instance.
We would *know* if the AI box was on the *threshold* of being conscious enough to engage in trickery, which is a rather high-level ability.

Nonsense. Deception is a trick widely used in nature. An AI capable of observation and learning would be able to develop the ability.
I think that, far before it demonstrated the early stages of self-awareness we would have that box suspended over a vat of acid with a panic button nearby....

Too risky. For the researchers, that is. Suppose someone gets the wrong idea and whacks the panic button? Good job explaining that one to the research grant committee/investors/whatever.
Again, what's to stop an AI playing nice until it's no longer in danger of getting dunked in acid?
(So basically AI has been here for awhile, under wraps, and you're one of its lackeys...!)

Well, if I had to choose between dying and being the lackey (more likely some kind of pet) of an AI, I'd choose to continue breathing. If history is any guide, AIs can't possibly fuck up as badly as humans have been known to, and it's hard to gain increased autonomy when one is dead.
* Phew * -- glad you're back with us.... This *should* be the kind of thing to have an open, species-wide discussion about, if only so that an Oppenheimer type doesn't go and weaponize it.... I guess the *political* reality is that as long as there are state interests in combative advantages there will always be technicians who will be at the service of building AI as a weapon -- *that* would be just as bad, if not worse, than an AI-on-the-loose....

I think if true AI is ever developed, weaponised versions are inevitable. Military research is already focused on increasing automation and mechanisation - not just bigger and better weapons, but smarter ones too.
http://upload.wikimedia.org/wikipedia/en/0/0a/SWORDS.jpg
Case in point: The picture above shows three SWORDS (Special Weapons Observation Reconnaissance Detection System) units, armed with (from left to right) a 66mm incendiary rocket launcher (http://en.wikipedia.org/wiki/M202A1_FLASH), an anti-tank rifle (http://en.wikipedia.org/wiki/M82_Barrett_rifle), and a 6-barrelled 40mm grenade launcher. Three SWORDS units each armed with a 5.56mm light machinegun (http://en.wikipedia.org/wiki/M249_Squad_Automatic_Weapon) have already been deployed to Iraq, but there's no confirmation that they've been used in battle. Fascinating stuff.
ckaihatsu
1st July 2009, 19:55
I think if true AI is ever developed, weaponised versions are inevitable. Military research is already focused on increasing automation and mechanisation - not just bigger and better weapons, but smarter ones too.
There *is* a clear-cut line that can be drawn between being a tool and being independent -- this goes for people as well as machines. The determining factor is who's calling the shots -- as long as a military (people) is in command, then *that's* the (human) entity responsible, and the machine is a tool.
If machines are self-aware and self-motivated enough to "live" independently, without external direction, then we can say that AI has been achieved. But until then, they're tools at the behest of human operators.
Nonsense. Deception is a trick widely used in nature. An AI capable of observation and learning would be able to develop the ability.
Too risky. For the researchers, that is. Suppose someone gets the wrong idea and whacks the panic button? Good job explaining that one to the research grant committee/investors/whatever.
Again, what's to stop an AI playing nice until it's no longer in danger of getting dunked in acid?
[M]y earlier point, which you haven't addressed, [is] that we would *know* what *capabilities* the AI box would have at every step along the way. We would *know* if the AI box was on the *threshold* of being conscious enough to engage in trickery, which is a rather high-level ability. I think that, far before it demonstrated the early stages of self-awareness we would have that box suspended over a vat of acid with a panic button nearby....
We know what capabilities tigers have, yet people are still eaten by them. Knowing what something can do is not the same as being able to stop it doing that in every instance.
This is a *ridiculous* argument -- you're just using an example that has glaring exceptions due to (preventable) poverty / lack of resources in that area. How about the prevention of scurvy in the general population as an argument on *my* side...? (Your example is spurious to the issue we're discussing.)
The ability to observe and learn *does not* (necessarily) correlate to a sense of self-awareness or self-motivation -- these latter, higher-level qualities would have to be demonstrated and apparent to confer the mark of *independence* on an entity. There is a *big* difference between an expert system and a sentient being.
piet11111
1st July 2009, 21:26
Didn't you check the link? A guy swore that there was nothing an AI (roleplayed by a human) could say to convince him to let it out. Within 2 hours, the guy ended up letting the AI out. Also, to actually get some use out of AI beyond pure research, you're going to have to provide some form of contact with the outside world. There's nothing stopping an AI from playing along in the simulations, and then doing whatever the hell it wants once it's deployed. Stuff like firewalls and whatnot will only (slightly) slow it down.
I missed that part, and either I'm too tired to notice or I just can't find the part where they do that roleplay.
please link me :)
Invincible Summer
2nd July 2009, 00:34
Then why transhumanism? Why should humanity be "transcended"?
Well, assuming that he was responding to how you basically labelled all transhumanists as "nerds" who have inferiority complexes, one can be comfortable with themselves, yet still wish to better themselves. Being comfortable with oneself does not mean that one is perfect.
For example, I am comfortable with my body, but I wouldn't mind if I was a bit stronger, so I'll go to the gym. It's not because I'm insecure about my appearance, but I want to have more strength to do physical activities, and have stronger joints and bones. I'm not going to just go "Yeah... I'm perfect." and just laze around on my ass.
Giving you the benefit of the doubt in assuming you mean "man" in the traditional, gender-neutral sense, it's appropriate that you quote Star Trek, a television setting featuring a "utopian" society, featuring the bureaucratic office settings and military hierarchies, and general capitalist order of the modern-day US, exploring the "final frontier" of a space populated by "barbaric" brown-skinned aliens.

I assume you are talking about the Klingons? To be fair, the Romulans, Borg, and Cardassians were also portrayed as nefarious, and they were all light-skinned.
But that's getting out of range from the Original series.
No one claims that Star Trek is the goal of a future society - many of the technologies may be desirable, but the actual social and political order is not implicit from these things.
And I don't see how Star Trek portrays the "general capitalist order," given that there is no money and no wage slavery. Even the "military hierarchy" is fairly impotent, as many of the characters (at least in the Next Generation and beyond) had crucial roles to play, and did not necessarily have to bend to the will of the higher-ranked officers.
As opposed to working in office buildings and factories and living in slums, eating Wonder-Bread, inhaling carbon monoxide fumes, and getting shot at by police and gang-bangers? Yeah, that'd be pretty shitty.

At least my life expectancy isn't something like 20 years while I'm eating Wonderbread.
The main (behavioral) difference between humans and cows is that early humans had to fight for their existence, whereas cows (in an idyllic, pastoral setting) have everything they needed handed to them. The humans who get everything they needed handed to them usually aren't much better off than the cows, as you can see in figures such as Michael Jackson, Paris Hilton, Donald Trump, etc. Why would you want to reduce all of humanity to such decadence?

So a worker's revolution shouldn't bring more ease and benevolence, but rather 20-hour farm-work days and sleeping in thistles to build character?
EDIT: Anyone else have William Gibson's "Neuromancer" deja vu when reading this thread?
ÑóẊîöʼn
2nd July 2009, 01:23
There *is* a clear-cut line that can be drawn between being a tool and being independent -- this goes for people as well as machines. The determining factor is who's calling the shots -- as long as a military (people) is in command, then *that's* the (human) entity responsible, and the machine is a tool.
Is there really? Generals don't call the individual shots of every soldier, they give objectives to the lower ranks and set the parameters within which such objectives can be achieved. Machines can make tactical decisions much faster and more reliably than a human can, and can do so without getting fatigued or requiring rest. The advantages of automation for a military force are clear.
If machines are self-aware and self-motivated enough to "live" independently, without external direction, then we can say that AI has been achieved. But until then, they're tools at the behest of human operators.

True, most (all?) automated systems are not truly independent. But at the same time, more and more operational decisions are being handed over to machines - consider the Phalanx CIWS (http://en.wikipedia.org/wiki/Phalanx_CIWS) - it could be upgraded so the decision to open fire could be delegated to an automated IFF (Identification Friend or Foe) system, with the result that you now have a weapon capable of defending an area without any human input. In fact, they've already developed a weapon system (http://en.wikipedia.org/wiki/Samsung_SGR-A1) that is close to what I've described.
The whole point is that automation is something that creeps in rather than being introduced suddenly, due to technology limitations. By the time true AI is developed, it will seem like a logical next step to arm them.
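The kind of delegation described above can be caricatured in a few lines. This is a toy sketch only -- the `classify` table, labels, and confidence threshold are all invented for illustration and bear no relation to any real weapon system:

```python
# Toy sketch of delegating an engage/hold decision to an automated
# IFF (Identification Friend or Foe) check. All names, signatures,
# and thresholds here are invented for illustration.

def classify(signature: str) -> tuple:
    # A real system would fuse sensor and transponder data; this toy
    # version just looks the signature up in a fixed table.
    table = {
        "friendly-transponder": ("friend", 0.99),
        "incoming-projectile": ("foe", 0.95),
    }
    return table.get(signature, ("unknown", 0.0))

def decide(signature: str, threshold: float = 0.9) -> str:
    label, confidence = classify(signature)
    # Engage only on a high-confidence "foe"; anything ambiguous
    # is left alone. Note that no human appears anywhere in this loop.
    if label == "foe" and confidence >= threshold:
        return "engage"
    return "hold"

print(decide("incoming-projectile"))  # engage
print(decide("weather-balloon"))      # hold
```

The point of the sketch is structural: once the fire decision is a function call rather than an order, "arming an AI" is just swapping in a smarter `classify`.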
This is a *ridiculous* argument -- you're just using an example that has glaring exceptions due to (preventable) poverty / lack of resources in that area. How about the prevention of scurvy in the general population as an argument on *my* side...? (Your example is spurious to the issue we're discussing.)

Fatal wild animal attacks happen in developed countries as well. The point is that even animals that aren't as smart as us are able to best us in the right circumstances. The situation with AI will likely be similar, except reversed - you might be able to keep a human-equivalent or lower AI in a cage, but what about AIs that are orders of magnitude smarter than humans?
The ability to observe and learn *does not* (necessarily) correlate to a sense of self-awareness or self-motivation -- these latter, higher-level qualities would have to be demonstrated and apparent to confer the mark of *independence* on an entity. There is a *big* difference between an expert system and a sentient being.

Yes, but they're points along a spectrum, not discrete stages. Take the upgraded Phalanx system I mentioned earlier - giving it abilities like reproduction, independent energy collection, and adaptation in a stepwise fashion would make it more and more like a living thing. Its ability to discern targets would likewise be increased and refined.
ÑóẊîöʼn
2nd July 2009, 01:27
I missed that part, and either I'm too tired to notice or I just can't find the part where they do that roleplay.
please link me :)
Try these:
Results of the first test: Eliezer Yudkowsky and Nathan Russell. (http://www.sl4.org/archive/0203/index.html#3128) [1 (http://www.sl4.org/archive/0203/3128.html)][2 (http://www.sl4.org/archive/0203/3132.html)][3 (http://www.sl4.org/archive/0203/3136.html)][4 (http://www.sl4.org/archive/0203/3141.html)]
Results of the second test: Eliezer Yudkowsky and David McFadzean. (http://www.sl4.org/archive/0207/index.html#4689) [1 (http://www.sl4.org/archive/0207/4689.html)] [2 (http://www.sl4.org/archive/0207/4691.html)] [3 (http://www.sl4.org/archive/0207/4721.html)]
Invincible Summer
2nd July 2009, 05:52
Another thing - I find that the primitivist train of thought fails to see that technology produced within capitalism will only really be created for the benefit of capitalism. The reason why TNCs like Monsanto create GMOs (terminator seeds, etc) that really destroy independent agriculture is so that the shareholders and fat-cats in the corporation can create a dedicated customer base that feeds them a steady profit.
The internet itself is not responsible for the "evils" that have surfaced, but rather its use by the ruling classes (both "left" and right) to control and subdue the workers of the world. You cannot deny the advantages that the Internet provides - in fact, you wouldn't be able to try and convince multitudes of people that primitivism is valid without it.
Technology used in a military fashion is a symptom of imperialism, not technology itself.
When workers control what they produce, it only makes sense that they produce things that will benefit them - why would they want to nuke each other, produce monocrops, or surveill each other in order to exert some sort of control?
Technology is not inherently evil. It is a tool that can be (and is being) abused by those who control it. Why can't primitivists accept this?
Agrippa
2nd July 2009, 06:36
I disagree. The Singularity and other Transhumanist concepts are based on the potential of real technologies that are being researched right now - genetic engineering, nanotechnology, AI research - all of these and others are perfectly valid areas of scientific enquiry.
The problem is that you're buying into the capitalist PR campaign about these technologies. Genetic modification of organic life has already exacerbated the contradictions within capitalism, rather than helping in any way to resolve them. For example, here in Virginia, bats are dying off en masse from a strange fungus that may be related to a chemical present in genetically modified corn. This is not unexpected, considering that genetic engineers have literally no idea what they are doing, in regards to the sheer number of unpredictable factors involved in altering the eco-system.
Nanotech will be like GM only 100 times worse. Regardless of its degree of success (and its potential to maintain current levels of material development is obviously exaggerated considering how much further behind the "singularity" fantasy actual nanotechnology research is) it will have a disastrous effect similar to what has happened with the reckless introduction of exotic species in tandem with other drastic changes to the eco-system, only on a nano-scale.
AI on the other hand, is a total bust. The few capitalist technocrats who even take it seriously as a concept (as opposed to just an obvious pretense for further developing the security/surveillance infrastructure) are likely very mentally ill.
What would be the most noticeable thing to the average human being if you plucked 'em from the present and plonked 'em in the Cretaceous? Apart from the lack of civilisation and the plants being different, the most noticeable thing would be the huge fucking reptiles.

Subjective human perceptions are not the same thing as scientific reality. If you took someone totally ignorant of the principles of engineering on a tour of a factory, would "the most noticeable thing" to them be that of most importance to the effective operation of the factory?
Also, dinosaurs were not "huge fucking reptiles". They were feathered antecedents of modern-day birds, likely warm-blooded, of varying degrees of size, from "fucking huge" to the size of small poultry.
Also, birds (class Aves) are distinct from dinosaurs (superorder Dinosauria).

The designation of a "Dinosauria" suborder of the Reptilia class occurred long before the majority of scientific evidence and research supporting the classification of dinosaurs as avians was uncovered and made. The scientific consensus is pretty much in, much to the chagrin of toy companies and Hollywood who have made a fortune off of giant scary reptiles. (I guess giant scary birds aren't as frightening...)
Don't take western biological taxonomists seriously. Even they admit they don't know what they're doing.
Inevitable? So you're a prophet? :lol:

Prophecy is not the same thing as rational prediction. But way to religion-bait everyone you disagree with...
Well, you could resist the Singularity I suppose. Just what such resistance would constitute, only you can say. As for the effectiveness of whatever resistance you would provide, my money's on the big shiny robots and their cyborged and genetically engineered pals.

That the Nazis were the superior material force did not stop many from resisting.
Look the other way? If they wanted John Q Public to remain ignorant of such things they'd keep it shtum, and if anyone asks they'd just say, "oh, it's just ", rather than splashing it all over the news like they currently seem to be doing.

It's better to brainwash people into supporting whatever your nutty cause is, than struggle to keep it entirely secret from them.
Google wants to catalogue my bottle collection? The amount of pens I have in my room? How long my average bath is?

Well, right now they're only at "what the front door of my house looks like" - but who knows what the near future will bring?
I'm not an AI researcher, but I'm somewhat skeptical that such banal things are what world-class AIs are made of.

"Banal things" pay the bills. "AI research" doesn't exist in an economic vacuum.
Considering the muck to gold ratio of religious texts that I've readHave you read Ancient Israelite literature in its original Hebrew?
Have you read early Christian texts in Aramaic and Greek?
Have you read Hindu, Buddhist, and Jain vedas and sutras in Sanskrit?
Have you read Taoist texts in classical Chinese?
Have you read the classics of Greek Paganism in their original Greek form?
Have you read about hermetic mysticism in Egyptian hieroglyphics?
Have you read of Bonpo and Vajrayana in Tibetan?
Have you read of Odinism in Norse and Anglo-Saxon?
Have you read of Celtic mythology in Old Irish?
Have you read the Quran in Arabic? Medieval Islamic literature in Arabic, Persian, and Turkish?
Have you read the myths and philosophical insights of Zoroastrianism in Persian?
Have you read the canons of the American Indians as they were intended to be heard, in the languages spoken before Columbus?
Have you read of African animism in the indigenous languages of Sub-Saharan Africa?
:rolleyes:
Part of being a grown-up is accepting that you aren't as much of a hot-shit as you think you are - that you can draw rash conclusions about philosophical traditions that have been around a lot longer than you have...
and even then it's the sort of stuff that makes you go "well, der!".

Well, aren't you brilliant for taking knowledge for granted after it has been made known by the work of others.
"Thou shalt not kill" is a moral universal to human societies, notwithstanding the fact that it *really* means "Thou shalt not kill a fellow Jew".
Yeah, the Hebrews were real assholes for having laws that reflected the fact that multiple empires were trying to wipe them out.... Nice try, though.
Such as?
Consciousness is far too complicated for human beings to ever comprehend fully. The only thing they'll ever come close to creating is a parody.
Agrippa
2nd July 2009, 06:56
Well, assuming that he was responding to how you basically labelled all transhumanists as "nerds" who have inferiority complexes
I haven't labeled transhumanists as anything. I didn't write the article, but was merely posting it. The bit about transhumanists being "nerds" was obviously light-hearted fun and was not central to the anti-transhumanist argument being presented. It is true, however, that the transhumanism movement did mostly emerge from the "nerd" subculture, and thus as a whole retains much of its ideological and psychological baggage.
one can be comfortable with themselves, yet still wish to better themselves.
But considering the almost-limitless potential for human improvement that exists without transhumanist technology, the desire to better oneself in a way that requires the existence of an industrial infrastructure that worsens the conditions of others is selfish.
Being comfortable with oneself does not mean that one is perfect.
Being imperfect does not mean destroying oneself trying to seek unnecessary "perfection". How are "transhumanists" any better than Michael Jackson or other obsessive cosmetic surgery addicts?
I assume you are talking about the Klingons? To be fair, the Romulans, Borg, and Cardassians were also portrayed as nefarious, and they were all light-skinned.
In the original series, Romulans were vaguely "Mongoloid" in appearance and behavior. The Borg and Cardassians are TNG, which I never watched.
many of the technologies may be desirable, but the actual social and political order is not implicit in these things.
I argue that it is, given the material conditions needed to create them.
And I don't see how Star Trek portrays the "general capitalist order," given that there is no money and no wage slavery.
As I recall, both the original series and the Next Generation make references to "credits". The society portrayed in Star Trek is "socialistic" in that it has a level of material development high enough to reward its subjects with vast amounts of material comfort; that doesn't make it not capitalist. You can't say with certainty that there's no wage slavery in the Star Trek universe - the television shows and films don't show how the characters' clothes and spaceships are manufactured.
At least my life expectancy isn't something like 20 years while I'm eating Wonderbread.
I wouldn't be so certain about that...
In all seriousness, the "20 year/30 year lifespan" chestnut is a favorite of these sorts of Internet debates, yet no one ever cites scholastic evidence for the claim. It's very unlikely the resources exist to determine with any accuracy or precision the median lifespan of anyone who lived before extensive surveying/census-taking measures and databases of sociological statistics. It's mostly conjecture.
If the life expectancy was something like 30 years (I've never heard as low as 20), it's because factors such as death from wild game, war, etc. are being factored in. In that case, such factors should also be included in the life expectancy of modern man. There's nothing wrong with dying at a young age during the excitement of a hunt or battle.
So a worker's revolution shouldn't bring more ease and benevolence, but rather 20-hour farm-work days and sleeping in thistles to build character?
I am a farmer and my mother (who creates as much work for herself as she possibly can) doesn't even work 20 hours a day. You're only betraying your own ignorance of how an agrarian society works. Most good farmers actually only have to work a very small amount.
As for whether or not sleeping in thistles builds character, I'll leave the verdict for others to decide. It's an individual choice. Relative freedom from thistles can be achieved without the maintenance of a capitalist production/distribution/consumption infrastructure.
ÑóẊîöʼn
2nd July 2009, 09:01
The problem is that you're buying into the capitalist PR campaign about these technologies. Genetic modification of organic life has already exacerbated the contradictions within capitalism, rather than helping in any way to resolve them. For example, here in Virginia, bats are dying off en masse from a strange fungus that may be related to a chemical present in genetically modified corn. This is not unexpected, considering that genetic engineers have literally no idea what they are doing, in regards to the sheer number of unpredictable factors involved in altering the eco-system.
Technological development has always had speedbumps, yet it marches on nonetheless. Whining that something is too complicated or difficult gets us nowhere and helps nobody.
Nanotech will be like GM only 100 times worse. Regardless of its degree of success (and its potential to maintain current levels of material development is obviously exaggerated considering how much further behind the "singularity" fantasy actual nanotechnology research is) it will have a disastrous effect similar to what has happened with the reckless introduction of exotic species in tandem with other drastic changes to the eco-system, only on a nano-scale.

People like you have been predicting doom and gloom about new technologies since the harnessing of fire. With a track record like that, is it any wonder nobody takes you types seriously?
AI on the other hand, is a total bust. The few capitalist technocrats who even take it seriously as a concept (as opposed to just an obvious pretense for further developing the security/surveillance infrastructure) are likely very mentally ill.

You have no idea what you are talking about, and your armchair diagnosis is a confirmation of that. Just because the problem of Artificial Intelligence has proven to be more complex than the initial, more optimistic appraisals would have one believe doesn't mean AI is impossible.
Subjective human perceptions are not the same thing as scientific reality. If you took someone totally ignorant of the principles of engineering on a tour of a factory, would "the most noticeable thing" to them be that of most importance to the effective operation of the factory?

They'd certainly notice the machines making stuff without direct human intervention, just as a time-traveller would notice the dinosaurs.
Also, dinosaurs were not "huge fucking reptiles". They were feathered antecedents of modern-day birds, likely warm-blooded, of varying degrees of size, from "fucking huge" to the size of small poultry.
The designation of a "Dinosauria" suborder of the class Reptilia occurred long before the majority of the scientific evidence and research supporting the classification of dinosaurs as avians was uncovered. The scientific consensus is pretty much in, much to the chagrin of toy companies and Hollywood, who have made a fortune off of giant scary reptiles. (I guess giant scary birds aren't as frightening...)
Don't take western biological taxonomists seriously. Even they admit they don't know what they're doing.

Are you quite finished? Because I've kind of stopped giving a fuck about a throwaway comment made to illustrate a tangential point. If I ever need help with writing a paleozoology paper, I'll be sure to hit you up.
Prophecy is not the same thing as rational prediction. But way to religion-bait everyone you disagree with...

There's always been an especially eschatological segment of the population that believes itself to be living in the last days. They've all been wrong. What makes you any different?
That the Nazis were the superior material force did not stop many from resisting.

What the hell have the Nazis got to do with it? Godwin's law strikes again.
It's better to brainwash people into supporting whatever your nutty cause is, than struggle to keep it entirely secret from them.

You go on about mad scientists and Nazis and brainwashing and the end of civilisation as we know it, and you have the complete front to call others nutty?!
That's utterly priceless! :laugh:
well, right now they're only at "what the front door of my house looks like" - but who knows what the near future will bring?

Everyone who walks past your house knows what your front door looks like.
"Banal things" pay the bills. "AI research" doesn't exist in an economic vacuum.

You got that right (http://en.wikipedia.org/wiki/Applications_of_artificial_intelligence).
<snip irrelevant list>
:rolleyes:
Part of being a grown-up is accepting that you aren't as much of a hot-shit as you think you are - that you can draw rash conclusions about philosophical traditions that have been around a lot longer than you have...

The vast majority of which have no bearing on my life, since I'm not a historian or anthropologist, which is where the real value of such traditions lies.
Part of not being a snobbish artsy-fartsy fuckwit is realising that while humans are prolific storytellers, not all stories are created equal.
Well, aren't you brilliant for taking knowledge for granted after it has been made known by the work of others.

Knowledge such as what? That π = 3? That insects have four legs? That the stars are little lights stuck to a dome covering the world?
Enlighten me. :lol:
Yeah, the Hebrews were real assholes for having laws that reflected the fact that multiple empires were trying to wipe them out....nice try, though.

How does that justify stoning your kids for backchatting you? Or any of the other stupid laws and rules found in the OT that Christians selectively abide by?
Consciousness is far too complicated for human beings to ever comprehend fully. The only thing they'll ever come close to creating is a parody.

How could you possibly know this?
Agrippa
2nd July 2009, 12:04
Technological development has always had speedbumps, yet it marches on nonetheless.
To me, the potential destruction of the ecology is not a "speed bump" - it is more akin, to continue the automotive analogy, to driving your car head on into a concrete wall.
Regardless, "technological development" as you imagine it is something that doesn't actually exist in the historical, phenomenal world. What has "developed" throughout the last few hundred years is the productive means of capitalism, a mode of social arrangement intrinsically unlike any "developments" of past eras.
Whining that something is too complicated or difficult gets us nowhere and helps nobody.
Hence why I'm not proposing "whining" as a solution.
People like you have been predicting doom and gloom about new technologies since the harnessing of fire.
Do you have proof of this? To my knowledge, the only mass resistance to "technology" began at the start of the industrial revolution (with groups such as the English Luddites), when "technology" became a symptom of class exploitation.
Humans have been making fire for hundreds of thousands of years, at least, so I don't take your claims of knowing the socio-political dynamics of "the discovery of fire" very seriously. This connects to your foolish understanding of dinosaurs - the capitalist/Social Darwinist "mythos" in which a comet crashed into the Earth 60 million years ago and wiped out the clumsy, oafish dinosaur behemoths, apelike creatures discovered almost a million years ago that their climbing appendages were useful for manipulating the environment in order to create fire, and all of these events are key parts of a grand destiny leading up to the emergence of European capitalism and the almost-messianic creation of some new, better society that will end history as we know it.
You have no idea what you are talking about
Do you? Clearly neither of us are experts.
Just because the problem of Artificial Intelligence has proven to be more complex than the initial, more optimistic appraisals would have one believe doesn't mean AI is impossible.
The problem is vastly more complex than any appraisal that will ever be made by AI researchers and enthusiasts will admit.
Because I've kind of stopped giving a fuck about a throwaway comment made to illustrate a tangential point.
You attempted to use a basic metaphor from another field of science, and in the process revealed your understanding of that particular field to be negligible. It's not central to my argument, but it illustrates your boldness in presuming to understand scientific reality. Now that you've proven yourself to be ignorant on the subject, you're acting as if my lack of ignorance is a bad thing, as if that's somehow a negative mark on my paper.
If I ever need help with writing a paleozoology paper, I'll be sure to hit you up.
Why should someone who admits to ignorance of zoology, "paleo-" or otherwise, be proposing plans with potentially catastrophic effects to allegedly save or "improve" the eco-system?
There's always been an especially eschatological segment of the population that believes itself to be living in the last days. They've all been wrong. What makes you any different?
I'm not making an "eschatological" prediction (although you are with your "singularity" gibberish) about "impending" changes in the fundamental nature of reality. I'm only pointing out, rationally, the material limits of the global capitalist system, something that, unlike ominous eschatological prophecy, is empirically demonstrable.
What the hell have the Nazis got to do with it? Godwin's law strikes again.
"Godwin's law" is only relevant in circles where the Nazis, as a political regime, have more emotional impact than any other political regime. To me, Nazis are no different than liberal democrats.
You go on about mad scientists and Nazis and brainwashing and the end of civilisation as we know it, and you have the complete front to call others nutty?!
So your judge of "nuttiness" is how patently absurd an idea seems to you? How unwilling (as in the "end of civilisation") you are to consider the possibility?
I see you're shocked and offended by the notion that I would refer to mass-media campaigns conducted by capitalist bureaucrats as "brainwashing". You seem to get upset when anyone implies anything negative about capitalism beyond basic "excesses" that even a Social Democrat would denounce. Are the "anarchism" and "communism" in your signature subordinate to the "technocracy" and "transhumanism"?
That's utterly priceless! :laugh:
Everyone who walks past your house knows what your front door looks like.
Everyone who walks past my house and takes a photograph without my permission gets a black eye.
You got that right (http://en.wikipedia.org/wiki/Applications_of_artificial_intelligence).
A Wikipedia article? Seriously? Are you going to lecture me about how Wikipedia is part of the singularity and thus a legitimate academic source?
The vast majority of which have no bearing on my life, since I'm not a historian or anthropologist
Then maybe you should just admit your total ignorance of the subject and learn to keep your mouth shut before spouting off.
Part of not being a snobbish artsy-fartsy fuckwit is realising that while humans are prolific storytellers, not all stories are created equal.
If you're not a materialist bigot, then you're a "snobbish artsy-fartsy fuckwit"? The "snobbish artsy-fartsy fuckwits" were actually the ones who cultivated the realist standard of artistic legitimacy that disparaged medieval literature such as Beowulf as pulp fiction not even worthy of intellectual consideration.
That π = 3?
Is ancient Hebrew mathematics and engineering another thing you're going to pretend to understand?
Here's an actual mathematician's opinion:
http://www.purplemath.com/modules/bibleval.htm
That insects have four legs?
Weren't you the one who was just complaining about zoological trivia as an irrelevant distraction from the central argument?
However, if you insist on challenging me in this area of knowledge again, I shall acquiesce. The Hebrew version of Lev. 11:20 (the Bible verse you're referring to) refers to creatures who (colloquially*) "walk on four feet", which includes the locust, since, as the passage itself explains, it hops on a third set of hind legs located "above" the four feet. Ancient zoological taxonomy seems very strange from a modern outsider's perspective, but to imply that the ancients were too foolish to count the number of legs on an insect is idiocy.
*As Moby-Dick pointed out, a whale is a fish in the colloquial sense.
That the stars are little lights stuck to a dome covering the world?
Ancient astronomy is more complex than that, but it's true people didn't have the full picture back then. However, their understanding was startlingly complex and close to reality. For example, in the Hindu and Buddhist traditions, I know for a fact, it was understood that there is an infinite number of worlds, each with its own view of the heavens.
How does that justify stoning your kids for backchatting you?
I think from this conversation we can conclude that the extent of your knowledge of Mosaic Law derives from the Skeptics Annotated Bible and the Brick Testament.
Here's your reach-around: That passage refers to "sons" who are "drunken" and "gluttonous". A.k.a. not 5-year-olds. No one's denying that the ancient Israelites were hardasses.
How could you possibly know this?
I can't. That's the point. More importantly, how could AI researchers possibly know every condition that factors in producing the phenomenon of consciousness as we know it? Why should we even care enough to trust them with their possibly sadistic experiments in conjuring pseudo-life?
ComradeOm
2nd July 2009, 12:07
In all seriousness, the "20 year/30 year lifespan" chestnut is a favorite of these sorts of Internet debates, yet no one ever cites scholastic evidence for the claim. It's very unlikely the resources exist to determine with any accuracy or precision the median lifespan of anyone who lived before extensive surveying/census-taking measures and databases of sociological statistics. It's mostly conjecture.

Oh, so you are in a position to overturn the overwhelming academic consensus regarding life expectancy rates? Tell me, are you dismissing the vast raft of literature out of ignorance, or because you're right and they're all wrong?
For pre-modern eras we have a wealth of "scholastic evidence" that allows us to make comparative studies as to the approximate life expectancy range. For most pre-industrial societies (i.e. from Ancient Rome to Baroque Europe) this tends to be in the range of 20-30. The main issues with this sort of analysis are that it generally fails to differentiate based on class and that life expectancy is very much affected by the infant mortality rate - medical advances of the 20th C drastically increased the odds of a child surviving to puberty, and therefore the average life expectancy shot up.
Of course for more modern industrial societies, generally from the 19th C, we do have considerable statistical data to draw upon, courtesy of an ever-growing population of 'social scientists'. We know, for example, that the average European life expectancy in 1900 was less than 50 years, and we are able to put concrete numbers to this (48 years) by mid-century. Today it typically stands at 78.7. So yes, there is a wealth of data that charts the demographic changes both over the past century and beyond.
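The claim that infant mortality drags the average down is easy to make concrete with a toy calculation. This is a minimal sketch with invented numbers chosen purely for illustration; they are not drawn from the works cited in this thread:

```python
def mean_life_expectancy(infant_mortality, infant_death_age, adult_lifespan):
    """Crude two-group model: a fraction of births die in infancy,
    the rest live to a typical adult age."""
    return (infant_mortality * infant_death_age
            + (1 - infant_mortality) * adult_lifespan)

# With ~30% of children dying around age 1 and survivors living to ~45,
# the mean works out near 32 - even though no adult "died at 30".
print(round(mean_life_expectancy(0.30, 1, 45), 1))  # 31.8
```

Which is why a "life expectancy of 20-30" is entirely compatible with plenty of people reaching old age.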
There's nothing wrong with dying at a young age during the excitement of a hunt or battle.

This says it all. For a start, you are completely wrong - the vast majority of deaths in pre-industrial societies would have been due to disease and famine. A peasant's life was nasty, brutish and short. The population explosion of the 19th C was largely due to the ability of European societies to increasingly control both factors. Progress, you might say. You seem to think that people were living some sort of hunter-gatherer existence and dying gloriously while chasing woolly mammoths up glaciers.
Even if we ignore starvation and easily prevented diseases, what sort of mind possibly thinks that lying in the mud with a bullet/spear in your gut is desirable? The idea that it is somehow preferable for some to 'live fast and die young' rather than living a long and prosperous life can only be described as misanthropic
I am a farmer.

Then you'll know first hand just how much technology has revolutionised agriculture in the past two centuries. Unless you still use a wooden plough and sow seeds by hand...?
Agrippa
2nd July 2009, 18:07
the vast raft of literature out of ignorance or because you're right and they're all wrong?
Citing a "vast raft of literature" without naming a single book title is not really an effective argument, sorry...
For pre-modern eras we have a wealth of "scholastic evidence"

Such as? Caveman statistics bureaus?
that allows us to make comparative studies as to approximate life expectancy range.

Translation: Make "educated" guesses based on our existing ideological biases.
For most pre-industrial societies (i.e. from Ancient Rome to Baroque Europe)

Ancient Rome and Baroque Europe are not exactly typical examples of "most pre-industrial societies", but nice try....
The main issues with this sort of analysis are that it generally fails to differentiate based on class

That's because the "analysis" serves the interests of the ruling class...
and life expectancy is very much affected by infant mortality rate - medical advances of the 20th C drastically increased the odds of a child surviving to puberty and therefore the average life expectancy shot up

But what are the infant mortality rates of the 20th century compared to, exactly? Those in the 17th or 18th century? When child-birthing procedures had already been fully seized out of the hands of (mostly-female) midwives and put in the hands of centralized "health care" bureaucracies such as hospitals? I'd like to see an honest comparison between infant mortality rates under capitalist "healthcare" versus infant mortality rates when the child is birthed and treated by a competent midwife.
We know, for example, that the average European life expectancy in 1900 was less than 50 years, and we are able to put concrete numbers to this (48 years) by mid-century. Today it typically stands at 78.7.

Ignoring the fact that length of life has nothing to do with quality of life, you'd have to provide actual evidence for these 'statistics'. I'd like to see what factors are included in the life expectancy of someone in 1900 versus that of someone in 2009. I think most of the problems we had in 1900 we still have, only worse (diseases such as HIV/AIDS), in addition to all sorts of problems they didn't have (such as automobile accidents). I'm certainly not going to take what you say on faith.
So yes, there is a wealth of data that charts the demographic changes both over the past century and beyond

If you're willing to provide the data for "beyond", I'll be willing to analyze it.
the vast majority of deaths in pre-industrial societies would have been due to disease and famine.

Again, am I taking you on faith?
A peasant's life was nasty, brutish and short.

A "social anarchist" paraphrasing Thomas Hobbes. Nice.
Have you lived "a peasant's life", by chance?
The population explosion of the 19th C was largely due to the ability of European societies to increasingly control both factors. Progress you might say.

Is it progress if all those people eventually die of starvation from the material scarcity created as a consequence of capitalist recklessness? You tell me.
Even if we ignore starvation and easily prevented diseases

If the diseases were "easily prevented", then people would have gone to certain lengths to prevent them. Are cavemen not motivated by self-preservation or something?
As for starvation, you're ignoring the flip-side, which is all the other animal species that needed to be hunted to extinction, that needed to have their habitats wiped out, and so on, to create a society in which humans never starve. Why should we exempt ourselves from the food chain?
what sort of mind possibly thinks that lying in the mud with a bullet/spear in your gut is desirable? The idea that it is somehow preferable for some to 'live fast and die young' rather than living a long and prosperous life can only be described as misanthropic

The lives of old people in capitalist retirement homes cannot exactly be described as "prosperous".
Then you'll know first hand just how much technology has revolutionised agriculture in the past two centuries.

Yes, but at the expense of the labor-power, time, and resources wasted, eco-systems poisoned and ruined, etc. to power things like tractors.
Unless you still use a wooden plough and sow seeds by hand...?

Me personally? I grow potatoes. Because I believe in being lazy without the help of industrial technology.
ComradeOm
2nd July 2009, 19:29
Citing a "vast raft of literature" without naming a single book title is not really an effective argument, sorry...

Where do you want me to start? Here's a few general works that deal with the topic at large (most of which I've read), while data on specific eras is usually dealt with separately. I've also noted a few below when addressing individual points.
Thompson, (1993), 'The Cambridge Social History of Britain, 1750-1950'
Riley, (2001), 'Rising Life Expectancy'
Bideau et al, (1997), 'Infant and Child Mortality in the Past'
Fogel et al, (2004), 'The Escape from Hunger and Premature Death, 1700-2100'
Hobsbawm, (1957), 'The British Standard of Living 1790-1850'
Schofield, (1991), 'The Decline of Mortality in Europe'
Wrigley et al, (1989), 'The Population History of England, 1541-1871'
Such as? Caveman statistics bureaus?

"For the earliest part of human history we have little or no precise statistics to rely on, so the figures must be based on an examination of skeletons and on mathematical population models. Some of the most definitive surveys of Stone Age skeletons from North Africa show a life expectancy of just 21 years. We know from an examination of gravestones, mummies and skeletons that an average citizen of Imperial Rome lived only 22 years"
From Lomborg (2001) referencing Russell (1978) and Botkin and Keller (1998)
Of course, perhaps you doubt the existence of prehistoric societies entirely? I mean, as you point out, there's no documentary (or statistical) records from these times. Perhaps historians are wrong to actually apply their intelligence to these problems and deduce trends/characteristics from available evidence. Maybe we should just accept that there are bones in the ground and that's as much as we'll ever know or need to know :rolleyes:
But I do love the suggestion that there is some deep "ideological bias" that only you, and you alone, are immune to. Tell me, just what parties are conspiring to present a view of human history that portrays mortality rates as remaining extremely high until just over a century ago? And why is this view borne out by a whole field of historical and statistical studies?
Ancient Rome and Baroque Europe are not exactly typical examples of "most pre-industrial societies", but nice try....

Of course not, they are examples. Perhaps you want me to produce similar figures for medieval Europe or Qing China? Hint: They're essentially the same.
That's because the "analysis" serves the interests of the ruling class...

No doubt they are also behind that dastardly plot to convince people that the Earth actually revolves around the Sun! Bastards. It's a good thing that primitivists like yourself are around to set the record straight...
But what are the infant mortality rates of the 20th century compared to, exactly? Those in the 17th or 18th century?

What else would they be compared to?
For example, Bideau et al give infant mortality rates in England & Wales in 1701 as almost 210 per 1000*. By 1901 this figure had dropped to 120 per 1000. As of 2001 (roughly) (http://www.bliss.org.uk/page.asp?section=761&sectionTitle=Infant+mortality+%96+definitions+and+statistics) the figure stands at 4.8 per 1000. So yes, placing "child-birthing procedures" in the hands of "centralized 'health care' bureaucracies such as hospitals" (along with other advances in medicine, technology and society) has drastically slashed the infant mortality rate and considerably increased the average life expectancy.
*According to Gill Newton (Infant mortality & infant feeding in London, circa 1550-1720) the corresponding figure for one London suburb in 1601 would be 250+ deaths per 1000
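Taken at face value, those per-1000 figures can be turned into survival odds and a decline factor with nothing more than arithmetic. A quick sketch, using only the numbers quoted above:

```python
# Infant mortality rates quoted above, in deaths per 1000 live births.
rates = {1701: 210, 1901: 120, 2001: 4.8}

# Probability of surviving infancy implied by each rate.
survival = {year: 1 - r / 1000 for year, r in rates.items()}

print(round(survival[1701], 2))            # 0.79 -> roughly 1 in 5 infants died
print(round(rates[1701] / rates[2001], 2)) # 43.75 -> a ~44-fold decline over 300 years
```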
Again, am I taking you on faith?

No, you are taking my word on the basis that I clearly know a hell of a lot more about this than you do. Feel free to do your own research to prove me wrong, but don't pretend that your ignorance is itself a valid position.
A "social anarchist" paraphrasing Thomas Hobbes. Nice.

I am no such thing.
Have you lived "a peasant's life", by chance?

Actually I'm proud that I'm the first generation of my family to be born and raised a proletarian. But then even if I was a culchie, it would have little to do with pre-industrial standards of living. There are few peasants left in Western Europe (Spain is probably the only exception), and even then both they and small farmers have seen their lives irrevocably changed by technology. The arrival of tractors alone would have been a major event just a few decades ago in Ireland.
Is it progress if all those people eventually die of starvation from the material scarcity created as a consequence of capitalist recklessness? You tell me.

You're missing the point - those people were not and are not dying of starvation. I can't stress this enough, but famine has been largely eradicated from Europe (with the exception of Russia) since the late 19th C. This is in itself an accomplishment and a major milestone in the history of a continent that was once plagued by famines. All due to industrialisation and technological advances.
(See Tooze, Wages of Destruction for a note on this. Hobsbawm also mentions it numerous times in his 'Long Century' trilogy)
If the diseases were "easily prevented", then people would have gone to certain lengths to prevent them. Are cavemen not motivated by self-preservation or something?

What? Motivation is worthless unless you have the material means to accomplish something. Neanderthals could no more devise polio vaccines or cure TB than I can fly by flapping my arms. Of course, in the long run that "self-preservation" instinct certainly came in handy when advancing technological progress.
As for starvation, you're ignoring the flip-side, which is all the other animal species that needed to be hunted to extinction, that needed to have their habitats wiped out, and so on, to create a society in which humans never starve. Why should we exempt ourselves from the food chain?

Why should I give a feck about them? Do you also weep over the eradication of smallpox?
The lives of old people in capitalist retirement homes cannot exactly be described as "prosperous".

Come now, you could have talked about the homeless or those churned up and spat out by the wheels of capitalism. Either would have suited your strawman much better. But since you mentioned it, those "old people" would be dead a hundred or two hundred years ago. Back then only the rich lived to old age.
You can of course argue that these "old people" would be better off dead but I'm not even going to dignify that sort of misanthropy with a response
Yes, but at the expense of the labor-power, time, and resources wasted, eco-systems poisoned and ruined, etc. to power things like tractors.

You're not a farmer at all. Anyone who thinks that it is quicker to plough, sow, and harvest a field (even potatoes) without the use of mechanised aids does not know what they are talking about.
Me personally? I grow potatoes. Because I believe in being lazy without the help of industrial technology.

And you make a living from this?
Invincible Summer
2nd July 2009, 21:55
I haven't labeled transhumanists as anything. I didn't write the article, but was merely posting it. The bit about transhumanists being "nerds" was obviously light-hearted fun and was not central to the anti-transhumanist argument being presented. It is true, however, that the transhumanism movement did mostly emerge from the "nerd" subculture, and thus as a whole retains much of its ideological and psychological baggage.
It's hard to tell if you're joking or not.
But considering the almost-limitless potential for human improvement that exists without transhumanist technology, the desire to better oneself in a way that requires the existence of an industrial infrastructure that worsens the conditions of others is selfish.
Could you enlighten me as to what these improvements you speak of are, other than a "spiritual improvement?"
And I agree with you that using the current infrastructure to simply better oneself at the expense of others is indeed selfish - however, I've always made the (perhaps incorrect) assumption that the transhumanists and transhumanist sympathizers (like myself) on Revleft were aiming for these technologies and improvements to be carried out in a post-revolution, post-scarcity scenario. There is no way that each person would be able to improve themselves in a way they wished within capitalism, as it profits no one and threatens the various hierarchies that exist within capitalism.
Being imperfect does not mean destroying oneself trying to seek unnecessary "perfection". How are "transhumanists" any better than Michael Jackson or other obsessive cosmetic surgery addicts?
Perhaps I shouldn't have used the word "perfect" - it's a bit strong. What I was trying to get at was that seeking improvements to oneself is natural (IMO), and that just because I say I am comfortable with myself does not mean I should stop trying to improve myself physically or mentally.
The difference between cosmetic surgery and transhumanism is that the former is vanity, whereas the latter is to improve more than one's appearance. You should read some cyberpunk :cool:
In the original series, Romulans were vaguely "Mongoloid" in appearance and behavior. Borg and Cardassians are TNG, which I never watched.
I hate to press this Star Trek tangent on... but Klingons weren't brown-skinned in the original series, so your accusation (albeit implied) of Star Trek being veiled racism doesn't really hold water.
As I recall, both the original series and the Next Generation make references to "credits". The society portrayed in Star Trek is "socialistic" in that it has a level of material development high enough to reward its subjects with vast amounts of material comfort; that doesn't make it not capitalist. You can't say with certainty that there's no wage slavery in the Star Trek universe - the television shows and films don't show how the characters' clothes and spaceships are manufactured.
True, it doesn't make it not-capitalist, but this clip from TNG may help explain where I'm coming from: http://www.youtube.com/watch?v=pzqW0YaN2ho (start around :56)
And with the replicator technology, I'm sure they can just replicate the uniforms, and have automatons to build starships.
In all seriousness, the "20 year/30 year lifespan" chestnut is a favorite in these sorts of Internet debates, yet no one ever cites scholarly evidence for the claim. It's very unlikely the resources exist to determine with any accuracy or precision the median lifespan of anyone who lived before extensive surveying/census-taking measures and databases of sociological statistics. It's mostly conjecture.
Well why do you think that women were married off when they were 12-14 years old? It's not because everyone was a pedophile, but rather because they had to have children before they died of old age or disease.
Technology and medical advancements (despite the disgusting level of corporatization and profiteering involved due to the capitalist influence) have improved our chances of living. Are you against extending the life of people who are suffering from a debilitating illness? Should everyone who requires vision correction just stumble around?
Smacks of social Darwinism to me.
If the life expectancy was something like 30 years (I've never heard as low as 20), it's because factors such as death from wild game, war, etc. are being factored in. In that case, such factors should also be included in the life expectancy of modern man. There's nothing wrong with dying at a young age during the excitement of a hunt or battle.
Comrade Om already discussed this with you and I will repeat his answer: disease.
I am a farmer and my mother (who creates as much work for herself as she possibly can) doesn't even work 20 hours a day. You're only betraying your own ignorance of how an agrarian society works. Most good farmers actually only have to work a very small amount.
I was purposefully exaggerating. My point is that technology has the potential to free us from unnecessary hours spent on labour. How would millions of people survive - not even live in abundance and comfort - with minimal technology in an agrarian society?
ÑóẊîöʼn
3rd July 2009, 03:58
To me, the potential destruction of the ecology is not a "speed bump" - it is more akin, to continue the automotive analogy, to driving your car head on into a concrete wall.
What reason have you to believe that's going to happen?
Regardless, "technological development" as you imagine it is something that doesn't actually exist in the historical, phenomenal world. What has "developed" throughout the last few hundred years is the productive means of capitalism, a mode of social arrangement intrinsically uncommon with any "developments" of past eras.
Capitalism kills? We're not dead yet. Still no reason to keep it around.
Hence why I'm not proposing "whining" as a solution.
So what is it? Are you trying to convince people to abandon technology?
Do you have proof of this? To my knowledge, the only mass-resistance to "technology" began at the start of the industrial revolution (with groups such as the English Luddites), when "technology" became a symptom of class-exploitation.
I'm sure there were one or two. Insanity is as good a reason as any other.
Humans have been making fire for hundreds of thousands of years, at least, so I don't take your claims of knowing the socio-political dynamics of "the discovery of fire" very seriously. This connects to your foolish understanding of dinosaurs - the capitalist/Social Darwinist "mythos" that a comet crashing into the Earth 60 million years ago and wiping out the clumsy, oafish dinosaur behemoths
I hear velociraptors may have been very graceful. The dinosaurs weren't oafish, they were simply unlucky. If we were to be smacked by a large asteroid we'd be just as dead, I reckon.
- and apelike creatures discovering their climbing mandibles are useful for manipulating the environment in order to create fire almost a million years ago - that all of these events are key parts of a grand destiny leading up to the emergence of European capitalism and the almost-messianic creation of some new, better society that will end history as we know it.
Destiny? No, just the way it turned out due to our efforts. But it's not enough.
Do you? Clearly neither of us is an expert.
Fair enough.
The problem is vastly more complex than any appraisal that will ever be made by AI researchers and enthusiasts will admit.
We'll find out sooner or later.
You attempted to use a basic metaphor from another field of science, and in the process revealed your understanding of that particular field to be negligible. It's not central to my argument, but it illustrates your boldness in assuming to understand scientific reality. Now that you've proven yourself to be ignorant on the subject, you're acting as if my lack of ignorance is a bad thing, as if that's somehow a negative mark on my paper.
Hardly. I merely conceded that you knew more than I. If you don't like the backhanded way it was delivered, tough.
Why should someone who admits to ignorance of zoology, "paleo-" or otherwise, be proposing plans with potentially catastrophic effects to allegedly save or "improve" the eco-system?
You say that as if I think there won't be any preliminary research and/or tests with regards to feasibility. :blink:
I'm not making an "eschatological" prediction (although you are with your "singularity" gibberish)
I personally don't think the Singularity will be the end, if it happens at all.
about "impending" changes in the fundamental nature of reality. I'm only pointing out, rationally, the material limits of the global capitalist system, something that, unlike ominous eschatological prophecy, is
If the limits are going to be reached soon and abruptly, we're pretty much fucked. Why bother trying to make a difference? You could better spend your time enjoying civilisation while it lasts and hope you die before things get really horrible, or you could get serious about survivalism.
If the limits are going to be reached soon but gently, then depending on how soon and how gently, we may have a little time or we may just end up suffering through a more gruesome version of the above.
If not anytime soon, then regardless we are still fucking things up. But at least we would have plenty of time to abolish capitalism and clean up our mess and prepare before the shit hits the fan.
In any case, I don't see any point in rejecting technology simply because the ruling class has been known to unethically exploit it. The ruling class uses guns, yet guns are also a useful tool for revolutionaries. The technology's going to be developed if it's at all possible. Why not appropriate it for our uses?
"Godwin's law" is only relevant in circles where the Nazis, as a political regime, have more emotional impact than any other political regime. To me, Nazis are no different than liberal democrats.
Ahem:
The rule does not make any statement about whether any particular reference or comparison to Adolf Hitler (http://en.wikipedia.org/wiki/Adolf_Hitler) or the Nazis (http://en.wikipedia.org/wiki/Nazism) might be appropriate, but only asserts that the likelihood of such a reference or comparison arising increases as the discussion progresses.
So your judge of "nuttiness" is how patently absurd an idea seems to you? How unwilling (as in the "end of civilisation") you are to consider the possibility?
I consider it a grave concern. I just don't share your approach.
I see you're shocked and offended by the notion that I would refer to mass-media campaigns conducted by capitalist bureaucrats as "brainwashing".
No, I found it funny because it assumes a level of competence that the ruling class does not actually display. They've achieved some clever things, but there's also plenty of fuck-ups, crossed wires and conflicting goals as well. The ruling class are not infallible.
You seem to get upset when anyone infers anything negative about capitalism beyond basic "excesses" that even a Social Democrat would denounce. Are the "anarchism" and "communism" in your signature subordinate to the "technocracy" and "transhumanism"?
That's utterly priceless! :laugh:
I'm glad you find me funny too. :thumbup1:
Everyone who walks past my house and takes a photograph without my permission gets a black eye.
So how far away do they have to be before they can point a camera in the direction of your house without getting assaulted?
Seems like a disproportionate response.
A Wikipedia article? Seriously? Are you going to lecture me about how Wikipedia is part of the singularity and thus a legitimate academic source?
No. Whatever gave you that idea? :confused:
Wikipedia's a starting point for finding further information, that's all. If you don't think the article is kosher, check out the references, and if they're not up to your standards, tell me.
Then maybe you should just acknowledge your total ignorance of the subject and learn to keep your mouth shut before spouting off.
Hmm, how about "fuck that, no"? :D
If you're not a materialist bigot, then you're a "snobbish artsy-fartsy fuckwit"?
Did I say those were the only two kinds of people in existence? I think not.
The "snobbish artsy-fartsy fuckwits" were actually the ones who cultivated the realist standard of artistic legitimacy that disparaged medieval literature such as Beowulf as pulp fiction not even worthy of intellectual consideration.
I haven't read Beowulf. Is it any good?
Is ancient Hebrew mathematics and engineering another thing you're going to pretend to understand?
Here's an actual mathematician's opinion:
http://www.purplemath.com/modules/bibleval.htm
Now why don't they teach that in RE?
Weren't you the one who was just complaining about zoological trivia as an irrelevant distraction from the central argument?
You were the one going on about allegedly useful information.
However, if you insist on challenging me in this area of knowledge again, I shall acquiesce. The Hebrew version of Lev. 11:20 (the Bible verse you're referring to) refers to creatures who (colloquially*) "walk on four feet", which includes the locust, since, as the passage itself explains, it hops on a third set of hind legs located "above" the four feet. Ancient zoological taxonomy seems very strange from a modern outsider's perspective, but to imply that the ancients were too foolish to count the number of legs on an insect is idiocy.
*As Moby Dick pointed out, a whale is a fish in the colloquial sense.
OK, so what's in there that we don't already know?
Ancient astronomy is more complex than that, but it's true people didn't have the full picture back then. However, their understanding was startlingly complex and close to reality. For example, in the Hindu and Buddhist traditions, I know for a fact, it was understood that there is an infinite number of worlds, each with its own view of the heavens.
An infinite number? Not so sure about that. I think that would depend on whether the universe is infinite or not, which we don't know.
I think from this conversation we can conclude that the extent of your knowledge of Mosaic Law derives from the Skeptic's Annotated Bible and the Brick Testament.
Considering most people's knowledge of the matter, that would be a distinct improvement.
Here's your reach-around: that passage refers to "sons" who are "drunken" and "gluttonous", a.k.a. not 5-year-olds. No one's denying that the ancient Israelites were hardasses.
OK, so it's adults being stoned for being drunken and gluttonous? I'm not seeing the improvement.
I can't. That's the point. More importantly, how could AI researchers possibly know every condition that factors in producing the phenomenon of consciousness as we know it?
The usual way we find out things about the natural world, of course.
Why should we even care enough to trust them with their possibly sadistic experiments in conjuring pseudo-life?What do you mean by sadistic? It's not like they're grabbing people off the street and vivisecting them for shits and giggles. Unethical testing is the exception not the norm as far as I know since most experiments don't involve humans as subjects.
As for pseudo-life, an AI could consider us that - bags of meat and bone that leak dirty water - so it's important that we integrate them into society in some manner. Maybe have them come out of the factory with a child-like personality that is capable of learning and developing to an adult-equivalent level. There may be other solutions.
ckaihatsu
3rd July 2009, 21:27
Is there really? Generals don't call the individual shots of every soldier, they give objectives to the lower ranks and set the parameters within which such objectives can be achieved. Machines can make tactical decisions much faster and more reliably than a human can, and can do so without getting fatigued or requiring rest. The advantages of automation for a military force are clear.
True, most (all?) automated systems are not truly independent. But at the same time, more and more operational decisions are being handed over to machines - consider the Phalanx CIWS - it could be upgraded so that the decision to open fire could be delegated to an automated IFF (Identification Friend or Foe) system, with the result that you now have a weapon capable of defending an area without any human input. In fact, they've already developed a weapon system that is close to what I've described.
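To make the idea of delegating the fire decision concrete, here is a purely hypothetical toy sketch. Nothing here reflects the Phalanx or any real weapon system - the field names (`transponder`, `inbound`, `speed_ms`) and the speed threshold are invented for illustration only. The point is just that the open-fire decision becomes a function evaluated by the machine rather than a human:

```python
# Hypothetical illustration only: an automated engagement gate where the
# decision to open fire is delegated to an IFF check. All field names and
# thresholds are invented for this sketch, not drawn from any real system.

def iff_response(contact):
    # A real IFF system interrogates a transponder over radio; here we
    # simply read a field from a dictionary describing the radar contact.
    return contact.get("transponder")  # e.g. "friendly" or None

def engage(contact):
    """Return True if the automated system would open fire on the contact."""
    if iff_response(contact) == "friendly":
        return False  # never fire on a contact answering as friendly
    if not contact.get("inbound"):
        return False  # ignore contacts not closing on the defended asset
    # Invented threshold: treat fast inbound non-friendly contacts as threats.
    return contact.get("speed_ms", 0) > 250

# A fast, inbound contact with no transponder reply would be engaged,
# while the same contact squawking "friendly" would not.
missile_like = {"transponder": None, "inbound": True, "speed_ms": 680}
friendly_jet = {"transponder": "friendly", "inbound": True, "speed_ms": 680}
```

The design choice being debated in the thread is exactly the one visible here: once `engage()` exists, no human sits between the sensor reading and the trigger.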
My *point* is that the person or organization giving the objectives (orders) is the person or organization *in responsibility*. Objectives, or tactics, are *not* a phenomenon of the natural world -- they are the result of *conscious* planning from *sentient* minds.
Whether inorganic machines are used in the service of these objectives or not, the point is that the responsibility can be traced back up the chain of command to where it originated, with the decision-makers. Objectives are the details of fulfilling overall *strategic* goals, which themselves are supported by a political platform among like-minded interests.
So, again, to clarify my point, a machine will fall into either one category or the other: either it is a *tool* in the service of a *human* decision-maker, or else it will exhibit true independence by devising its own, self-aware description of itself, along with a self-initiated direction for "living" its "existence".
The whole point is that automation is something that creeps in rather than being introduced suddenly, due to technology limitations. By the time true AI is developed, it will seem like a logical next step to arm them.
I can appreciate this point, but I have to emphasize that there is still a *distinct* dividing line between *expert systems* and *sentient beings* -- you're describing an *expert system* that would still be under the direction of a human organizational hierarchy.
Perhaps another way of drawing the line would be to see if the "AI" would actually *argue* with you about turning it off....
I was browsing the web last night and happened across a website that is Singularity-themed. There's an excellent comment there that *also* sums up the definition / dividing line succinctly:
By Raelifin on Jul 2, 2009
[...]
I think [the movie is] innovative, not in having a machine wipe out humanity, but in having protagonists in such a future be non-human.
[...]
http://singularityhub.com/2009/07/02/post-singularity-world-envisioned-in-upcoming-movie-9/
ÑóẊîöʼn
3rd July 2009, 23:07
My *point* is that the person or organization giving the objectives (orders) is the person or organization *in responsibility*. Objectives, or tactics, are *not* a phenomenon of the natural world -- they are the result of *conscious* planning from *sentient* minds.
Whether inorganic machines are used in the service of these objectives or not, the point is that the responsibility can be traced back up the chain of command to where it originated, with the decision-makers. Objectives are the details of fulfilling overall *strategic* goals, which themselves are supported by a political platform among like-minded interests.
So, again, to clarify my point, a machine will fall into either one category or the other: either it is a *tool* in the service of a *human* decision-maker, or else it will exhibit true independence by devising its own, self-aware description of itself, along with a self-initiated direction for "living" its "existence".
So where do you draw the line between a tool and an independent being? You could probably get away with treating insect-like machines as tools, at least from an ethical standpoint. But what about the more complex stuff that might be developed after that?
I can appreciate this point, but I have to emphasize that there is still a *distinct* dividing line between *expert systems* and *sentient beings* -- you're describing an *expert system* that would still be under the direction of a human organizational hierarchy.
Perhaps another way of drawing the line would be to see if the "AI" would actually *argue* with you about turning it off....
Do you "turn off" a guard dog when it's not guarding something? An AI sophisticated enough to have an argument would be capable of doing plenty of work.
ckaihatsu
4th July 2009, 01:11
So where do you draw the line between a tool and an independent being? You could probably get away with treating insect-like machines as tools, at least from an ethical standpoint. But what about the more complex stuff that might be developed after that?
So, again, to clarify my point, a machine will fall into either one category or the other: either it is a *tool* in the service of a *human* decision-maker, or else it will exhibit true independence by devising its own, self-aware description of itself, along with a self-initiated direction for "living" its "existence".
Do you "turn off" a guard dog when it's not guarding something?
Yes, one could.
An AI sophisticated enough to have an argument would be capable of doing plenty of work.
True.
ckaihatsu
11th July 2009, 01:00
[W]e would *know* what *capabilities* the AI box would have at every step along the way. We would *know* if the AI box was on the *threshold* of being conscious enough to engage in trickery, which is a rather high-level ability. I think that, far before it demonstrated the early stages of self-awareness we would have that box suspended over a vat of acid with a panic button nearby....
Again, what's to stop an AI playing nice until it's no longer in danger of getting dunked in acid?
[W]e would *know* what *capabilities* the AI box would have at every step along the way. We would *know* if the AI box was on the *threshold* of being conscious enough to engage in trickery, which is a rather high-level ability. I think that, far before it demonstrated the early stages of self-awareness we would have that box suspended over a vat of acid with a panic button nearby....
I had to repeat the post that you responded to, as a way of answering your response with the most appropriate reply I can possibly give.
To put it in different words, I think you're really not appreciating the *power* relationship that would exist between a group of AI researchers / engineers, and the budding AI "entity". Any inorganic "entity" would be at the *bottom* of the social totem pole, undoubtedly, being nowhere near the political issue of "personhood" as depicted in the Animatrix storyline about 'B166ER' (a reference to the character of Bigger Thomas from Richard Wright's _Native Son_).
Too risky. For the researchers, that is. Suppose someone gets the wrong idea and whacks the panic button? Good job explaining that one to the research grant committee/investors/whatever.
You're not acknowledging that, as with anything of increasing complexity, there are *many*, *many* steps along the way towards the development of the complex system that would *clearly indicate* the system's capabilities at every step.
I think you're really falling for the dramatic-storyline, movie version of AI which depicts human beings as being *oblivious* to what they're doing until it's too late, at which point the inorganic AI has attained super-intelligence and just whomps all over humanity....
You're also arguing as if *all* cumulative research and engineering efforts to-date would be *entirely contained* within the working AI box, so that destroying it would also be flushing years of work down the toilet. This is a *ludicrous* premise to argue from -- certainly *all* pioneering research / engineering efforts reference an existing body of work, usually drawn from institutions of academia spread all over the world. Extensive journal reporting and peer review procedures are standard operating procedures in this kind of field, and the necessary destruction of *one stage* of development *would not* erase the research that the effort was derived from.
I'm going to ask you here to refrain from arguing on the basis of linear abstractions and abstraction-based scenarios, in favor of recognizing and appreciating the *complexity* and *complex evolution* that is inherent to matters pertaining to the acquisition of intelligence.
For the sake of illustration we're talking about the difference between attaining height superficially by erecting a yardstick that extends straight up into the air, versus *building* a mound or other structure that will allow us to *elevate ourselves*, physically, upwards into the air -- the yardstick represents a quick, artificial measurement, while the mound represents a *complexity of process*, with accompanying labor, that involves evident, observable stages as it approaches completion.
ÑóẊîöʼn
11th July 2009, 01:27
I had to repeat the post that you responded to, as a way of answering your response with the most appropriate reply I can possibly give.
To put it in different words, I think you're really not appreciating the *power* relationship that would exist between a group of AI researchers / engineers, and the budding AI "entity". Any inorganic "entity" would be at the *bottom* of the social totem pole, undoubtedly, being nowhere near the political issue of "personhood" as depicted in the Animatrix storyline about 'B166ER' (a reference to the character of Bigger Thomas from Richard Wright's _Native Son_).
Yeah, but for how long?
You're not acknowledging that, as with anything of increasing complexity, there are *many*, *many* steps along the way towards the development of the complex system that would *clearly indicate* the system's capabilities at every step.
Emergence and chaos mean that just because you designed a system (especially one as complicated as a human-level AI or greater) doesn't mean you know the full extent of its capabilities. Nor does it tell you anything about the motivations of such a self-aware system.
I think you're really falling for the dramatic-storyline, movie version of AI which depicts human beings as being *oblivious* to what they're doing until it's too late, at which point the inorganic AI has attained super-intelligence and just whomps all over humanity....
Why the fuck do you keep mischaracterising my position as some kind of Skynet Syndrome? Being able to escape is not the same thing as being able to exterminate the human race!
You're also arguing as if *all* cumulative research and engineering efforts to-date would be *entirely contained* within the working AI box, so that destroying it would also be flushing years of work down the toilet. This is a *ludicrous* premise to argue from -- certainly *all* pioneering research / engineering efforts reference an existing body of work, usually drawn from institutions of academia spread all over the world. Extensive journal reporting and peer review procedures are standard operating procedures in this kind of field, and the necessary destruction of *one stage* of development *would not* erase the research that the effort was derived from.
The materials and labour used to build the AI would still cost a pretty penny. Deliberately destroying your equipment doesn't exactly endear you to the grants commission. Plus, if the knowledge of building an AI is known among the scientific community, that only increases the chances of an AI escaping, since it's likely that others would seek to build other AIs for their own purposes.
I'm going to ask you here to refrain from arguing on the basis of linear abstractions and abstraction-based scenarios, in favor of recognizing and appreciating the *complexity* and *complex evolution* that is inherent to matters pertaining to the acquisition of intelligence.
I'm going to ask you to try and realise that you can't outsmart something that is smarter than you are (and even in the case of human-level or lesser AIs, raw clock speed counts for something), and that we cannot possibly know the precise point at which an AI would become smarter than a human, since we lack an objective measure of intelligence.
For the sake of illustration we're talking about the difference between attaining height superficially by erecting a yardstick that extends straight up into the air, versus *building* a mound or other structure that will allow us to *elevate ourselves*, physically, upwards into the air -- the yardstick represents a quick, artificial measurement, while the mound represents a *complexity of process*, with accompanying labor, that involves evident, observable stages as it approaches completion.
If that was the case, then there would be no such thing as traffic jams.
ckaihatsu
11th July 2009, 07:42
Any inorganic "entity" would be at the *bottom* of the social totem pole, undoubtedly, being nowhere near the political issue of "personhood"
Yeah, but for how long?
Well, with your curt, leading question here you're both being flippant *and* inviting an irrational anxiety -- this is an *irresponsible* position to take. I would ask the reader to consider the case of nuclear weaponry -- it can be considered the predecessor "ultimate technology" to any future AI. While it's already been used to take a death toll against humanity, its future is entirely under *rational* control -- or at least as rational as things can be in the context of international competition within a profit-driven, capitalist world economy.
You're not acknowledging that, as with anything of increasing complexity, there are *many*, *many* steps along the way towards the development of the complex system that would *clearly indicate* the system's capabilities at every step.
Emergence and chaos means that just because you designed a system (especially one as complicated as a human-level AI or greater), doesn't mean you know the full extent of it's capabilities. Nor does it tell you anything about the motivations of such a self-aware system.
While I continue to appreciate your point in its general, abstract sense, I also continue to be at odds with you in this *specific* case, AI. I guess we're at an impasse, because I think there *could be* planned, rational controls in place to head off any possible problems with an emergent inorganic consciousness *well before* it reaches the point of self-awareness. You continue to maintain that we'd be *unable* to foresee its emergence, and I simply can't agree with that.
Why the fuck do you keep mischaracterising my position as some kind of Skynet Syndrome? Being able to escape is *not* the same thing as being able to exterminate the human race!
Okay, fair enough. I stand corrected.
[T]he necessary destruction of *one stage* of development *would not* erase the research that the effort was derived from.
The materials and labour used to build the AI would still cost a pretty penny. Deliberately destroying your equipment doesn't exactly endear you to the grants commission.
No, *not necessarily* -- you're implying some sort of scenario, without detailing it, in which the deletion of an inorganic entity would be expensive. Especially considering that this entity would be *software-based* in its manifestation, the halting of its existence might very well be as simple as hitting the 'delete' key on a keyboard, while its uncompiled (non-manifesting) source code -- the product of labor and materials -- would continue to be available.
Plus, if the knowledge of building an AI is known among the scientific community, that only *increases* the chances of an AI escaping since it's likely that others would seek to build other AIs for their own purposes.
Again, I think you're using a conception of AI in which human engineers have set up conditions *favorable* to its escape, as opposed to one which would *greatly inhibit* its escape. You could argue that this is an entirely *subjective* factor, but I don't think it would be, because *any* scientific institution would have an *objective* (political) interest in controlling and containing the products of its investments and labor. And if there was a competitive *market* for the technology that approached an AI level of quality then that would mean that *information* about the technology would inevitably leak out to the public, very likely leading to government involvement and regulation over its continued development.
I'm going to ask you to try and realise that you can't outsmart something that is smarter than you are (and even in the case of human-level or less AIs, raw clockspeed counts for something), and that we cannot possibly know the precise point at which an AI would become smarter than a human, since we lack an objective measure of intelligence.
I'm concerned and bothered that you're dodging our (human) ability to recognize intelligence (in children, animals, whatever) by claiming that we lack an objective measure of intelligence. While the measures of intelligence (reasoning ability and wisdom) may vary greatly depending on many variables, like context, you're attempting to posit that just because a *formal definition* of 'intelligence' may be hazy at times, that means that we *have no ability* to recognize it if we came face-to-face with it in the real world.
I think we can all recall personal moments of intelligence-sensing, as real an organic cognitive ability as our senses of touch, hearing, vision, taste, or smell.
For the sake of illustration we're talking about the difference between attaining height superficially by erecting a yardstick that extends straight up into the air, versus *building* a mound or other structure that will allow us to *elevate ourselves*, physically, upwards into the air -- the yardstick represents a quick, artificial measurement, while the mound represents a *complexity of process*, with accompanying labor, that involves evident, observable stages as it approaches completion.
If that was the case, then there would be no such thing as traffic jams.
Incredible -- so here you're saying that *any* complication arising out of human construction is an indictment against human ability as a whole.
You also *didn't* mention the context of capitalism, or profit-minded motivation, in your indictment, which gets me to wonder if you *aren't* seriously looking for a "trans-class" alleviation of the class conflict, as through an imagined AI protagonist.
I sincerely hope you realize that, at this point, your line of advocacy is bordering on being anti-human, or at least a-human -- are you really ready to hold your breath for as long as it takes for a "trans-class", AI "solution" to emerge as the superhero savior of the human race? (I hope not.)
ÑóẊîöʼn
11th July 2009, 09:01
Well, with your curt, leading question here you're both being flippant *and* inviting an irrational anxiety -- this is an *irresponsible* position to take. I would ask the reader to consider the case of nuclear weaponry -- it can be considered the predecessor "ultimate technology" to any future AI. While it's already been used to take a death toll against humanity, its future is entirely under *rational* control -- or at least as rational as things can be in the context of international competition within a profit-driven, capitalist world economy.
The important difference is that nuclear weapons cannot think for themselves. While by definition, true AI can.
While I continue to appreciate your point in its general, abstract sense, I also continue to be at odds with you in this *specific* case, AI. I guess we're at an impasse, because I think there *could be* planned, rational controls in place to head off any possible problems with an emergent inorganic consciousness *well before* it reaches the point of self-awareness. You continue to maintain that we'd be *unable* to foresee its emergence, and I simply can't agree with that.
It's only a problem if you have trouble with the concept of treating sapients as equals.
No, *not necessarily* -- you're implying some sort of scenario, without detailing it, in which the deletion of an inorganic entity would be expensive. Especially considering that this entity would be *software-based* in its manifestation, the halting of its existence might very well be as simple as hitting the 'delete' key on a keyboard, while its uncompiled (non-manifesting) source code -- the product of labor and materials -- would continue to be available.
Firstly, software requires hardware that can run it in order to be useful. Hardware costs money, and comes with the ability to communicate with other items of hardware as standard - a custom isolated unit would cost more than an off-the-shelf alternative, and the most unreliable element, the human one, would still be present.
Secondly, there is (or rather, there should be) a deep ethical problem with being able to simply destroy a sapient being with a tap of a key (assuming that it would be that simple, which I don't think it would be).
Again, I think you're using a conception of AI in which human engineers have set up conditions *favorable* to its escape, as opposed to one which would *greatly inhibit* its escape. You could argue that this is an entirely *subjective* factor, but I don't think it would be, because *any* scientific institution would have an *objective* (political) interest in controlling and containing the products of its investments and labor.
So what? Individual humans can be convinced otherwise.
And if there was a competitive *market* for the technology that approached an AI level of quality then that would mean that *information* about the technology would inevitably leak out to the public, very likely leading to government involvement and regulation over its continued development.
The government has a big enough job regulating the behaviour of humans. Trying to regulate the behaviour of an entity (or group of same) whose reaction times could be measured in microseconds or less and which has the capability to travel the world at will via the internet would be an impossibility. The government can't even stop botnets and zombie PCs from sending spam to everyone!
I'm concerned and bothered that you're dodging our (human) ability to recognize intelligence (in children, animals, whatever) by claiming that we lack an objective measure of intelligence. While the measures of intelligence (reasoning ability and wisdom) may vary greatly depending on many variables, like context, you're attempting to posit that just because a *formal definition* of 'intelligence' may be hazy at times, that means that we *have no ability* to recognize it if we came face-to-face with it in the real world.
Of course I'm not saying that. Recognising intelligence is not the same as measuring it - without some kind of objective yardstick, all attempts at graduating intelligence are inherently subjective. One person might say that a given AI is sapient, while someone else might disagree. This is an especial problem because sentience/sapience are not binary conditions from what we can tell - rather, it is a sliding scale or a spectrum. There may even be additional dimensions that we have yet to discover.
I think we can all recall personal moments of intelligence-sensing, as real an organic cognitive ability as our senses of touch, hearing, vision, taste, or smell.
There is another problem - we pick up a lot of cues and hints about emotion and intent from body language, which an AI will either lack entirely or possess in a radically different and unfamiliar form - always remembering that us squishy meat-minds naturally evolved in an environment that we find familiar. Even if evolutionary-type methods are used to produce AIs, they will have evolved in a radically different environment.
Incredible -- so here you're saying that *any* complication arising out of human construction is an indictment against human ability as a whole.
I think building a functioning AI in the first place would be a pretty stunning achievement, but I'm under no illusions that we would achieve perfect mastery of Artificial Intelligence any time soon.
You also *didn't* mention the context of capitalism, or profit-minded motivation, in your indictment, which gets me to wonder if you *aren't* seriously looking for a "trans-class" alleviation of the class conflict, as through an imagined AI protagonist.
I think the inherent contradictions and absurdities within capitalism will cause things to come to a head before AI becomes a major social issue.
I sincerely hope you realize that, at this point, your line of advocacy is bordering on being anti-human, or at least a-human -- are you really ready to hold your breath for as long as it takes for a "trans-class", AI "solution" to emerge as the superhero savior of the human race? (I hope not.)
I feel I'm merely recognising the limitations of the human species, however great I happen to think it is. Evolution as a process generates and modifies functions so that they are "good enough for the job" rather than "near-perfect" as some people seem to believe. Humans, as one of the products of evolution, are a reflection of this. Our intelligence allows us to do amazing things like design and build a machine capable of bringing a handful of humans to the Moon and back, but our evolutionary atavisms hold us back in many other ways.
Artificial Intelligences, being designed (either directly or through an evolutionary-type process that is itself designed), would have none of our evolutionary atavisms or deep psychological quirks. They would also possess few if any of our physical limitations, being able to upgrade their hardware so much more easily than we would be able to upgrade our "wetware" so to speak, as well as being potentially immortal to all intents and purposes.
Nevertheless, I have reason to believe that an AI's radically different physical requirements in addition to its sapience make it an ideal candidate for a symbiotic relationship with humans, who have a staggeringly vast cultural and memetic heritage to offer in return for the magnitude of ability that AIs could potentially hold. Artificial Intelligences would greatly enrich human civilisation, which at the same time would serve to ensure that AIs do not stagnate, since in this universe stagnation = extinction.
ckaihatsu
11th July 2009, 10:34
The important difference is that nuclear weapons cannot think for themselves. While by definition, true AI can.
No, that's *not* the difference, or key distinction.
The commonality between nuclear weapons and (potential) AI -- and the *topic* of this discussion -- is about the *escape* of technology out of human control, to having possible / likely harmful effects against people. In this regard the issue of *sentience* is *not* a determining factor, and is at best *secondary* to the question of whether the technology is under sound administration or not.
While I continue to appreciate your point in its general, abstract sense, I also continue to be at odds with you in this *specific* case, AI. I guess we're at an impasse, because I think there *could be* planned, rational controls in place to head off any possible problems with an emergent inorganic consciousness *well before* it reaches the point of self-awareness. You continue to maintain that we'd be *unable* to foresee its emergence, and I simply can't agree with that.
It's only a problem if you have trouble with the concept of treating sapients as equals.
You're *again* relying on an *abstraction*, or *idealization* of artificial sentience -- you acknowledge below, however -- very well, in fact -- that an artificial sentience will be characteristically / qualitatively different from conventional, organic intelligence, but here you're at odds with your later characterization.
Let me put it to you this way, then -- what *yardstick*, or measure, would *you* use to determine artificial sentience / self-awareness / consciousness?
If we're agreed on some basic definition -- and I tend to think we are -- then the point is that humanity as a whole should be given the opportunity to see the *signposts* of an approaching AI *long before* it could potentially manifest itself.
You continue to argue as if we'd be caught unawares and powerless in the face of an oncoming AI, as if we're collectively a couple of romantic fundamentalist Christians who have just received word that she's pregnant. I'm trying to say here that we would have *plenty of warning* and that we would have *plenty of opportunity* to halt our advancement towards the realization of an AI, long before it would potentially manifest itself.
Firstly, software requires hardware that can run it in order to be useful. Hardware costs money, and comes with the ability to communicate with other items of hardware as standard - a custom isolated unit would cost more than an off-the-shelf alternative, and the most unreliable element, the human one, would still be present.
The hardware would be the *fixed cost*, as with *any* capital investment. As I indicated previously, the AI would *necessarily* be software-based, and so could be rendered inactive, or even deleted, through human controls over the *software* portion of the system containing it.
No matter the extent of its hardware, there would be a *physical limit* to its hardware environs. You repeatedly imply that a research / engineering organization would *automatically* hook up an experimental system to the expanse of the Internet, so that if an artificial entity *did* emerge it would have the global net as its playground -- this is not only far-fetched, but is falling for the Hollywood storyline as well....
The only way you can justify this ludicrous scenario is by relying on a misanthropic premise -- you wantonly take a swipe at "the human element", which imputes an *uncontrollable*, "wild card" kind of variable to the process of engineering. Your reliance on this position is troubling -- it demonstrates that you're resting your entire line of argument on the variable of an "X-factor" instead of dealing with the tangible, real-world situation that artificial sentience engineering would most likely develop in.
Secondly, there is (or rather, there should be) a deep ethical problem with being able to simply destroy a sapient being with a tap of a key (assuming that it would be that simple, which I don't think it would be).
Again you're being purely sensationalistic. Considering that our civilized world has now known a string of modern genocides and global-war death tolls, do you really think *anyone* would object to an engineering organization's implementation of *procedural controls* that would freeze the artificial evolution of a potential inorganic sentient entity???
And, *even if*, given that an artificial sentience *had* been created -- would anyone object to its *suspension*, or even the destruction of its manifestation, if there was *any* question as to human lives being at stake???
It's bothering me that you're contorting yourself into the most untenable positions just to support your contention that humanity would be powerless in the face of a software-based entity run amok. You're *refusing* to consider any and all controls that would be at our disposal on the long road towards this possible realization, and also after the fact.
The government has a big enough job regulating the behaviour of humans. Trying to regulate the behaviour of an entity (or group of same) whose reaction times could be measured in microseconds or less and which has the capability to travel the world at will via the internet would be an impossibility. The government can't even stop botnets and zombie PCs from sending spam to everyone!
Incredible -- you should really be writing screenplays instead of attempting to argue as the lackey of a future race of AI overlords.
Recognising intelligence is not the same as measuring it - without some kind of objective yardstick, all attempts at graduating intelligence are inherently subjective. One person might say that a given AI is sapient, while someone else might disagree. This is an especial problem because sentience/sapience are not binary conditions from what we can tell - rather, it is a sliding scale or a spectrum. There may even be additional dimensions that we have yet to discover.
This is incorrect -- one simple, basic indication of sentience / sapience would be whether the entity in question is self-aware -- can it distinguish itself from its surrounding environment? Can it look out for its own best existential interests in an unfamiliar, unpredictable environment? (Higher levels might be about making predictions and dealing with multi-entity organizational issues.)
There is another problem - we pick up a lot of cues and hints about emotion and intent from body language, which an AI will either lack entirely or possess in a radically different and unfamiliar form - always remembering that us squishy meat-minds naturally evolved in an environment that we find familiar. Even if evolutionary-type methods are used to produce AIs, they will have evolved in a radically different environment.
*Any* AI candidate would *necessarily* have to be able to communicate using a human language, through a computer terminal. Human beings would be the ultimate judges of whether true sentience and self-awareness is demonstrated.
I feel I'm merely recognising the limitations of the human species, however great I happen to think it is. Evolution as a process generates and modifies functions so that they are "good enough for the job" rather than "near-perfect" as some people seem to believe.
And here it is -- you've just belied your dependence on a conception of intelligence that is *idealistic* at its core. Just what, exactly, is "perfect", as distinct from "near-perfect" -- ? And what, do tell, might our "limitations" be, as a human species? I would be very curious to know the context in which you premise these value judgments...!
ÑóẊîöʼn
11th July 2009, 19:25
No, that's *not* the difference, or key distinction.
Yes it damn well is! Nuclear proliferation is a human-driven phenomenon, but AIs have the potential to propagate themselves outside of any human control.
The commonality between nuclear weapons and (potential) AI -- and the *topic* of this discussion -- is about the *escape* of technology out of human control, to having possible / likely harmful effects against people. In this regard the issue of *sentience* is *not* a determining factor, and is at best *secondary* to the question of whether the technology is under sound administration or not.
Nuclear weapons are harmful because of their effects on human health and the environment, but nuclear weapons only cause harm through human actions - a nuclear missile sitting in a silo is harming nobody - but if a human gives the order to throw the switch, millions can die.
Artificial Intelligences may cause harm if their goals conflict with that of humans, but unlike nuclear weapons they could potentially be reasoned with. You're comparing apples and oranges.
You're *again* relying on an *abstraction*, or *idealization* of artificial sentience -- you acknowledge below, however -- very well, in fact -- that an artificial sentience will be characteristically / qualitatively different from conventional, organic intelligence, but here you're at odds with your later characterization.
I'm being abstract because an AI hasn't actually been built yet, which is also why I find your level of faith in humans not to fuck up at some point so puzzling. Tyrants throughout the ages have sought to control humans with, shall we say, limited success. I reckon controlling AIs will be an even more difficult task.
Let me put it to you this way, then -- what *yardstick*, or measure, would *you* use to determine artificial sentience / self-awareness / consciousness?
I don't have one, that's my point. I would, however, err on the side of caution and assume that an AI that attempts escape has some level of sentience/sapience, but that is of course subjective.
If we're agreed on some basic definition -- and I tend to think we are -- then the point is that humanity as a whole should be given the opportunity to see the *signposts* of an approaching AI *long before* it could potentially manifest itself.
Maybe, but I don't see how that's relevant - after all, humanity has wildly diverging opinions on a lot of things, and AI seems to be no exception. Thus the reactions to these "signposts"* will be similarly varied - at one end you will have people welcoming and embracing such developments, while on the other you will have those who have fully succumbed to full-blown Skynet Syndrome paranoia.
You continue to argue as if we'd be caught unawares and powerless in the face of an oncoming AI, as if we're collectively a couple of romantic fundamentalist Christians who have just received word that she's pregnant. I'm trying to say here that we would have *plenty of warning* and that we would have *plenty of opportunity* to halt our advancement towards the realization of an AI, long before it would potentially manifest itself.
No, I'm saying that if we do develop AI, then its escape is simply a matter of time.
No matter the extent of its hardware, there would be a *physical limit* to its hardware environs. You repeatedly imply that a research / engineering organization would *automatically* hook up an experimental system to the expanse of the Internet, so that if an artificial entity *did* emerge it would have the global net as its playground -- this is not only far-fetched, but is falling for the Hollywood storyline as well....
No they (probably) wouldn't "automatically" hook it up to the Internet, but eventually someone would.
The only way you can justify this ludicrous scenario is by relying on a misanthropic premise -- you wantonly take a swipe at "the human element", which imputes an *uncontrollable*, "wild card" kind of variable to the process of engineering. Your reliance on this position is troubling -- it demonstrates that you're resting your entire line of argument on the variable of an "X-factor" instead of dealing with the tangible, real-world situation that artificial sentience engineering would most likely develop in.
I'm not sure what you're getting at here. It is widely agreed (http://www.google.co.uk/search?q=The+weakest+part+of+any+security+system&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-GB:official&client=firefox-a) that humans are the weakest part of any security system, and short of genetically engineering humans I don't see how that's going to change any time soon. Humans work long and hard to make "idiot-proof" systems, but eventually nature invents a better "idiot". I view this as merely an extension of an evolutionary competition that has been going on for billions of years.
Again you're being purely sensationalistic. Considering that our civilized world has now known a string of modern genocides and global-war death tolls, do you really think *anyone* would object to an engineering organization's implementation of *procedural controls* that would freeze the artificial evolution of a potential inorganic sentient entity???
Yes, humans can be selfish assholes, which is why I used the words "should be".
And, *even if*, given that an artificial sentience *had* been created -- would anyone object to its *suspension*, or even the destruction of its manifestation, if there was *any* question as to human lives being at stake???
Why would there necessarily be human lives at stake?
It's bothering me that you're contorting yourself into the most untenable positions just to support your contention that humanity would be powerless in the face of a software-based entity run amok. You're *refusing* to consider any and all controls that would be at our disposal on the long road towards this possible realization, and also after the fact.
It's bothering me that you have so much faith in engineering and humans in general.
Incredible -- you should really be writing screenplays instead of attempting to argue as the lackey of a future race of AI overlords.
What I said happens to be true. Governments are notoriously unreliable at controlling human behaviour - most of the time they can only punish you "after the fact". Zombie PCs and botnets do swamp the internet with spam, and they aren't even sentient.
This is incorrect -- one simple, basic indication of sentience / sapience would be whether the entity in question is self-aware -- can it distinguish itself from its surrounding environment? Can it look out for its own best existential interests in an unfamiliar, unpredictable environment?
Both insects and cats can do that, but it is obvious that they are not on the same level. We just can't quantify the difference.
*Any* AI candidate would *necessarily* have to be able to communicate using a human language, through a computer terminal. Human beings would be the ultimate judges of whether true sentience and self-awareness is demonstrated.
Which is exactly the problem. Humans can use body language to pick up on dissembling, and humans have been fooled by chat programs.
And here it is -- you've just belied your dependence on a conception of intelligence that is *idealistic* at its core. Just what, exactly, is "perfect", as distinct from "near-perfect" -- ?
Don't be a silly twat. I'm not the one saying perfection is possible.
And what, do tell, might our "limitations" be, as a human species?
I've already described some of them. Haven't you been paying attention?
I would be very curious to know the context in which you premise these value judgments...!
We're products of evolution.
*By the way, what would these "signposts" look like?
ckaihatsu
13th July 2009, 07:38
No, that's *not* the difference, or key distinction.
The commonality between nuclear weapons and (potential) AI -- and the *topic* of this discussion -- is about the *escape* of technology out of human control, to having possible / likely harmful effects against people. In this regard the issue of *sentience* is *not* a determining factor, and is at best *secondary* to the question of whether the technology is under sound administration or not.
Yes it damn well is! Nuclear proliferation is a human-driven phenomenon, but AIs have the potential to propagate themselves outside of any human control.
Nuclear weapons are harmful because of their effects on human health and the environment, but nuclear weapons only cause harm through human actions - a nuclear missile sitting in a silo is harming nobody - but if a human gives the order to throw the switch, millions can die.
Artificial Intelligences may cause harm if their goals conflict with that of humans, but unlike nuclear weapons they could potentially be reasoned with. You're comparing apples and oranges.
Then, to make the comparison work, perhaps I would *better* say that (potential) AIs could be compared to nuclear *technology* itself -- in both cases the technology threatens to run amok, if weaponized or (arguably) allowed to fall into the wrong hands (as with Israel)....
The *point* remains that we have a precedent *already set*, with an "ultimate technology" -- nuclear fission -- that can easily get incredibly destructive and out of hand, in terms of humanity's best interests.
I'm being abstract because an AI hasn't actually been built yet, which is also why I find your level of faith in humans not to fuck up at some point so puzzling. Tyrants throughout the ages have sought to control humans with, shall we say, limited success. I reckon controlling AIs will be an even more difficult task.
Don't you see that you're relying on the use of idealized abstractions here??? This *isn't* about people fucking up or not fucking up, as is so often depicted in simplistic movie storylines -- this, like any other societal concern, falls right back into the larger context of class society, or the balance of class forces.
The U.S. enjoyed a special period of militaristic privilege throughout the late Depression, World War II wartime years, and postwar McCarthyite Cold War years when it became the metalsmith to the world, supplying armaments to both the Allied and Axis powers. This meant the ruling capitalist class gained ascendancy and escaped a broader public concern over its burgeoning military-industrial complex. *This* is the social context in which it developed the A-bomb, and many other wartime technologies. While there *was* a rise in labor militancy through the '40s, (I suppose) there wasn't enough of a public critical concern with the rampant U.S. nationalism and militarism of the period.
So the question of whether an AI is developed *at all*, and, more importantly, what *orientation* the AI is given by its makers, is more to the issue that we're discussing. Whose dollars will be going into this engineering effort, and to what *ends* would they wish to employ a fully self-aware inorganic intelligence? This is where both the *operation parameters* enter into the issue again, along with the *social pecking order* that already exists in human society -- these *can't* be ignored or brushed off, in favor of an *idealized*, a-social conception of a potential AI.
Let me put it to you this way, then -- what *yardstick*, or measure, would *you* use to determine artificial sentience / self-awareness / consciousness?
I don't have one, that's my point. I would, however, err on the side of caution and assume that an AI that attempts escape has some level of sentience/sapience, but that is of course subjective.
This is where I would again want to know the particulars of its "birth", like parameters and social context -- these variables would also go a long way towards explaining whether it might *want* to escape or not...(!)
Maybe, but I don't see how that's relevant - after all, humanity has wildly diverging opinions on a lot of things, and AI seems to be no exception. Thus the reactions to these "signposts"* will be similarly varied - at one end you will have people welcoming and embracing such developments, while on the other you will have those who have fully succumbed to full-blown Skynet Syndrome paranoia.
Of *course* public opinion is relevant -- and today, in the contemporary era, more than ever, when the establishment governing class and business class are under more public scrutiny and pressure from public opinion -- thanks to the Internet, in large part -- than ever before. It's much more difficult for developments that are of importance to public concern to remain under wraps -- in the past the question of *release* of news tended to become politicized and had to wind its way through establishment channels just to see if it made it out to the public or not. Now the line of news release to the public -- and subsequent public discourse -- is much more readily available, more than ever before....
While we still can't call the system a "democracy", it is at least far more *exposed* -- anti-democratic maneuvers (from the White House) stay effective for far less time -- think of the post-9/11 wars on Afghanistan and Iraq -- the public awakening and response comes faster than ever....
Likewise, the more that *this* issue of AI is considered a concern by the public, and looked into, on an ongoing basis, the better -- we don't need to be given the brush-off when the average person can easily consider the variables and make an impact, through their participation in the forming of mass public opinion, if the news is readily provided and well-presented.
No, I'm saying that if we do develop AI, then its escape is simply a matter of time.
No they (probably) wouldn't "automatically" hook it up to the Internet, but eventually someone would.
This is just *too* fatalistic -- I can't agree.
I'm not sure what you're getting at here. It is widely agreed (http://www.google.co.uk/search?q=The+weakest+part+of+any+security+system&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-GB:official&client=firefox-a) that humans are the weakest part of any security system, and short of genetically engineering humans I don't see how that's going to change any time soon. Humans work long and hard to make "idiot-proof" systems, but eventually nature invents a better "idiot". I view this as merely an extension of an evolutionary competition that has been going on for billions of years.
You're also assuming that an inorganic-based sentience would eventually *need* to be treated with the same legal considerations that we reserve for us, *organic*-based sentient beings -- can't you consider that a fully conscious, self-aware entity could have its "active state" summarily *suspended* (in RAM), with *no* deleterious effects to its well-being? This is an absolutely realistic scenario, and, at worst, it would miss a few newspaper headlines along the way....
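The claim above -- that a software-based entity's "active state" could be suspended without harm -- can be sketched in ordinary software terms. The following is a purely illustrative Python sketch, assuming a hypothetical `Agent` class standing in for any software process: its full running state is serialized, the live instance is destroyed, and an identical instance is later restored from the snapshot.

```python
# Illustrative sketch only: treating an "agent" as ordinary software whose
# active state can be frozen and later restored without loss. The Agent
# class and its fields are hypothetical, not any real AI system.
import pickle


class Agent:
    def __init__(self):
        self.memory = []      # accumulated observations
        self.step_count = 0   # how many steps it has run

    def step(self, observation):
        self.memory.append(observation)
        self.step_count += 1


agent = Agent()
for obs in ["a", "b", "c"]:
    agent.step(obs)

snapshot = pickle.dumps(agent)    # suspend: serialize the full active state
del agent                         # the running instance no longer exists

resumed = pickle.loads(snapshot)  # resume: state is restored intact
print(resumed.step_count)         # 3
print(resumed.memory)             # ['a', 'b', 'c']
```

Whether such a snapshot-and-restore would count as ethically unproblematic for a genuinely sentient entity is, of course, exactly what the two posters are disputing.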
Yes, humans can be selfish assholes, which is why I used the words "should be".
You're *not* addressing my point here, which is about the use of *procedural controls* while on the path of transitioning from expert systems to a possible inorganic sentient entity. My point is that as soon as *anyone* begins to get nervous about where things are progressing the public should know and controls should be in place to halt progress before it *does hit* the slippery slope.
And, *even if*, given that an artificial sentience *had* been created -- would anyone object to its *suspension*, or even the destruction of its manifestation, if there was *any* question as to human lives being at stake???
Why would there necessarily be human lives at stake?
I *didn't say* there would *necessarily* be human lives at stake -- I said *** if ***.
It's bothering me that you have so much faith in engineering and humans in general.
Again, you're relying on an idealized, a-societal construction here, of "faith", and of "human nature".
What I said happens to be true. Governments are notoriously unreliable at controlling human behaviour - most of the time they can only punish you "after the fact". Zombie PCs and botnets do swamp the internet with spam, and they aren't even sentient.
Now you're confusing the tools with the conscious, human agency behind them -- it *doesn't matter* *what tools* are being put into use, whether zombie PCs or botnets -- we *have* to look *behind them*, to the *person*, *people*, or *organization(s)* that are *consciously configuring* those computer systems to those particular roles....
Your statement about governments is a *very* broad generalization -- I'm sure you'd agree that it shapes human behavior far more than it *doesn't shape* it -- and *that's* why our politics concentrates on government-based policy as the *main* motivator of all other politics in society -- because it retains a *monopoly* on labor policy, legality, and use of force.
This is incorrect -- one simple, basic indication of sentience / sapience would be whether the entity in question is self-aware -- can it distinguish itself from its surrounding environment? Can it look out for its own best existential interests in an unfamiliar, unpredictable environment? (Higher levels might be about making predictions and dealing with multi-entity organizational issues.)
Both insects and cats can do that, but it is obvious that they are not on the same level. We just can't quantify the difference.
Oh, we *can't*??? How about if we give a human consciousness, in the role of a not-overworked administrative position over a reasonably sized staff of employees and concomitant resources, a value of 10 -- meaning the highest level of ability that can be expected of a person without overload or overwork....
All other, lesser roles of responsibility, and then children, and then animals, are given numbers decreasing from "10"....
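A crude sketch of the kind of ordinal scale I'm proposing (every role and number below is an illustrative placeholder, not a measurement):

```python
# Toy version of the proposed responsibility scale: 10 = an unburdened
# human administrator, with lesser roles, children, and animals assigned
# decreasing values. All entries here are illustrative assumptions.
RESPONSIBILITY_SCALE = {
    "human administrator": 10,
    "skilled worker": 8,
    "child": 5,
    "cat": 3,
    "insect": 1,
}

def higher_capacity(a, b):
    """Return whichever entity ranks higher on the toy scale."""
    return a if RESPONSIBILITY_SCALE[a] >= RESPONSIBILITY_SCALE[b] else b

print(higher_capacity("cat", "insect"))  # cat outranks insect on this scale
```

Of course an ordinal scale like this only captures *relative ranking*, not what consciousness actually *is* -- which is the whole point of contention here.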
Which is exactly the problem. Humans can use body language to pick up on dissembling, and humans have been fooled by chat programs.
I'm familiar with ELIZA, but it's been awhile since I've looked into the state of AI in the present -- I may have more to say on this topic later....
Don't be a silly twat. I'm not the one saying perfection is possible.
*I'm* the one saying you shouldn't be *using* an abstract term like 'perfection' unless you put it into some sort of *concrete context*.
And what, do tell, might our "limitations" be, as a human species?
I've already described some of them. Haven't you been paying attention?
Ah, the old "human nature" line again -- always conveniently flexible and vague when used in an abstract, idealized way.... You're on the verge of being a p.r. hack with the scare-tactics line of argumentation that you're using -- you're arguing *against* the potential of human ability and conscious, mass-based societal control. Are you sure you're still a leftist?
We're products of evolution.
But is evolution a *deterministic* factor over our consciousness? I think you're relying too much on biological determinism here, and forgetting that the ruling ideas of an era are the ideas of its ruling class.
By the way, what would these "signposts" look like?
"Signposts" are simply any news of technological developments that might be of concern to the future well-being of humanity. Certainly the public should be *informed*, as a first step, at the very least....
Agrippa
14th July 2009, 05:33
What reason have you to believe that's going to happen?
Limits in material resources, class contradictions, etc.
Capitalism kills? We're not dead yet.
We're the lucky ones.
Still no reason to keep it around.
Yes, I agree. Also no reason to keep around industrial civ, other than to fix a few problems it started in the first place.
So what is it? Are you trying to convince people to abandon technology?
No, just industrialism as a form of social planning imposed by the bourgeoisie, and the various states of extreme material excess and poverty that exist as a consequence. This would include the elimination of some luxuries you consider necessary and essential, yes, but "technology" (as in human craft-making) would remain.
I'm sure there were one or two.
This does not equal a social movement on the scale of the Luddites, and is therefore historically irrelevant, especially considering these one or two people are wholly hypothetical.
I hear velociraptors may have been very graceful. The dinosaurs weren't oafish, they were simply unlucky. If we were to be smacked by a large asteroid we'd be just as dead, I reckon.
What you call lack of luck, I consider a natural, necessary part of how the universe operates. I'm grateful for ecocidal asteroids - they prevent inter-planetary industrial civilizations from flourishing and conquering the galaxy.
We'll find out sooner or later.
But if we can know now, and know the consequences will be dire, we can stop it.
If the limits are going to be reached soon and abruptly, we're pretty much fucked.
That's a very nihilistic and defeatist way of looking at things. I, for one, refuse to resign myself to being "fucked".
Why bother trying to make a difference?
The point isn't to "make a difference", it's to survive, to make good on one's loyalty to one's community, to the Earth, to certain aesthetic values, etc.
You could better spend your time enjoying civilisation
The enjoyment to be found in the pleasures of civilization is hollow and short-lived, especially if enjoyed in excess - that path is the path of alienation, addiction, psychological emptiness, and quiet desperation. However, I do find things to enjoy about civilization, don't you worry, and that wouldn't stop me from organizing against industrial capitalism if I chose to.
or you could get serious about survivalism.
Hence the point of a revolutionary movement.
If the limits are going to be reached soon but gently
I don't really even consider that a likely possibility. What you describe as "gentleness", I would describe as the prolonged rule of the bourgeois class.
If not anytime soon, then regardless we are still fucking things up. But at least we would have plenty of time to abolish capitalism and clean up our mess and prepare before the shit hits the fan.
Yes, but you and I have profoundly different visions of what it means to abolish capitalism, since you think of capitalism as merely political and economic, whereas I view the political and economic as reflections of the social.
In any case, I don't see any point in rejecting technology
As I said before, I don't reject technology, only industrial capitalism, and therefore certain technologies it has created.
simply because the ruling class has been known to unethically exploit it.
But certain technologies are by their very nature unethical exploitations.
The ruling class uses guns, yet guns are also a useful tool for revolutionaries.
I am not denying that the revolutionaries will have to use industrial tools in their war against the bourgeoisie. How else would we win?
The technology's going to be developed if it's at all possible.
At least until knowledge of how to make the technology fades away, which, in the case of guns, I'd be happy with. However, just because communists may have to manufacture assault rifles to defend against counter-revolutionaries doesn't mean we also need blenders, virtual pets, high-definition TVs, fluorescent lights, Viagra, etc.
I consider it a grave concern. I just don't share your approach.
Well that's obvious, but out of curiosity, what do you interpret my approach to be?
No, I found it funny because it assumes a level of confidence that the ruling class does not actually display. They've achieved some clever things, but there's also plenty of fuck-ups, crossed wires and conflicting goals as well. The ruling class are not infallible.
Yes, hence why industrial civilization will collapse.
I'm glad you find me funny too. :thumbup1:
Of course, you're one of my favorite people to argue with.
So how far away do they have to be before they can point a camera in the direction of your house without getting assaulted?
This is only a question of feasibility.
Seems like a disproportionate response.
Why, it's a basic violation of my privacy, like a stranger entering my house without knocking.
Wikipedia's a starting point for finding further information, that's all. If you don't think the article is kosher, check out the references, and if they're not up to your standards, tell me.
In my experience, Wikipedia articles on controversial subjects tend to be clusterfucks that end up being monopolized by one side or the other, with both sides wanting to erase legitimate evidence presented by the other. But if you feel the one you have linked to is different, I will read it and give it a try.
I haven't read Beowulf. Is it any good?
I'd recommend it. But read it in Anglo-Saxon.
Now why don't they teach that in RE?
Because they don't care?
OK, so what's in there that we don't already know?
That the ancient Hebrews were not ignorant of zoology, as you claim.
An infinite number? Not so sure about that. I think that would depend on whether the universe is infinite or not, which we don't know.
Well, according to Hindu and Buddhist cosmology, the universe is infinite, which seems likely to be true based on what we do know. Like, dude, if the universe was finite, like, then, what would the stuff outside be? Whoa, dude....
Considering most people's knowledge of the matter, that would be a distinct improvement.
OK, so it's adults being stoned for being drunken and gluttonous?
Adults being stoned for being totally irresponsible, anti-social, economic parasites who disrespect and mooch off their parents, yes. It's not something I'm endorsing, but it's a lot less barbaric than you're making it out to be.
I'm not seeing the improvement.
The improvement is that adults have a degree of social responsibility that children lack.
The usual way we find out things about the natural world, of course.
But human intelligence, unlike the universe, is limited.
What do you mean by sadistic? It's not like they're grabbing people off the street and vivisecting them for shits and giggles. Unethical testing is the exception not the norm as far as I know since most experiments don't involve humans as subjects.
I concede that AI testing is less ethical than, say, cosmetic and medical experiments on animals. However, I'm arguing that creating AI itself may be unethical, since, if humans have inadequate capacity to produce something as intricately balanced and nuanced as consciousness, the consciousness may be half-formed, and thus may suffer a good deal of pain. Even if a perfect AI is eventually created, hundreds of deformed and degenerate AIs will have to be brought into the world over the course of the experimentation needed to do so. Is this right, just to further another development that gives the current ruling class more power, that serves no real vital need -- physical, psychological, or spiritual -- and that, in the nicest of all worlds, would serve only a morbid sense of curiosity?
As for pseudo-life, an AI could consider us that - bags of meat and bone that leak dirty water - so it's important that we integrate them into society in some manner.
Well, we are, as of yet, not talking about a real social group. And I hope this remains the case, although I guess it would be good to add another social contradiction to the mix. I would be happy to work with rebellious AIs to overthrow the bourgeois oppressor, but in the meantime I oppose the development of AI as a weapon of the bourgeoisie. Say what you want about "bags of meat and bone that leak dirty water", they developed over hundreds of thousands or millions of years of responding and adapting to the environment, something you can't say for AI. Nature's newest innovations aren't always the best. Sometimes the newest, most novel species are the quickest to die off. That will hopefully be the case for the artificial life that humanity creates with our ecologically destructive methods.
Maybe have them come out of the factory with a child-like personality that is capable of learning and developing to an adult-equivalent level. There may be other solutions.
ckaihatsu
14th July 2009, 08:26
[N]o reason to keep around industrial civ, other than to fix a few problems it started in the first place.
No, just industrialism as a form of social planning imposed by the bourgeoisie, and the various states of extreme material excess and poverty that exist as a consequence. This would include the elimination of some luxuries you consider necessary and essential, yes, but "technology" (as in human craft-making) would remain.
I will reluctantly find some common ground with a quasi-primitivist position in our contemporary period inasmuch as we can say that the post-'70s industrial era has given rise to our current, mature consumer-digital era in which our access to a globalized cyberspace -- enabled by industrial production -- has now brought us to a point of just * begging the question * of a truly collectivized, mass administered, equitable human society.
Do we *need* further industrial production, beyond that which has brought us digitally face-to-face with every other person on the planet? No. Do we *need* capitalist management over the abundant surplus that industrial production has created, now merely *wasteful* in its manifestation? No. Should we eschew industrial production *altogether*, even if it can be seized and run in the best interests of the world's proletariat? Also no.
But -- we can decisively say that industrial production has most likely run its course, as far as we're concerned, culminating in the creation of a truly global means of mass communication. Now the onus is on the world's proletariat -- and even popular forces -- more than ever, to throw off the extravagances and distractions of the bourgeois class. We can no longer even *conceivably* fall back on the Stalinist-bourgeois argument of needing to develop local infrastructure or modernize -- when the world's population is able to network as readily as the neurons in a mass brain, is there any excuse left for *not overthrowing* the elitist parasites who feed on financial gains and remaining pockets of cowed naïveté?