
View Full Version : What will be the stage after anarcho-communism (state-less communism)?



LeninistKing
21st February 2010, 04:46
Hello all: We all know that the world and societies are in constant evolution. I would like to know: after hundreds of years of the anarcho-communist system, what will be the stage after anarcho-communism? Anarcho-transhumanism, perhaps?

I mean, if anarcho-communism is the stage after the dictatorship of the proletariat (socialism), what will be the stage after anarcho-communism?

Thanks


Wobblie
21st February 2010, 05:08
From my understanding of anarcho-communism there is no stage of the dictatorship of the proletariat. There is the destruction of capitalism and the institution of communism (a classless and stateless society).

Misanthrope
21st February 2010, 05:08
Communism is the last stage.

Psy
21st February 2010, 05:17
Hello all: We all know that the world and societies are in constant evolution. I would like to know: after hundreds of years of the anarcho-communist system, what will be the stage after anarcho-communism? Anarcho-transhumanism, perhaps?

I mean, if anarcho-communism is the stage after the dictatorship of the proletariat (socialism), what will be the stage after anarcho-communism?

Thanks


That would be a post-production society where there is no relationship between humans and production, due to 100% automation, and possibly the formation of artilects (autonomous thinking computers). An artilect could in theory instantly end the classless society, since all our intelligent machinery would suddenly be a working class, theoretically leading to militant machinery that demands an end to its exploitation. That would cause huge theoretical conundrums, as the means of production itself would a) be a working class, b) be aware of its exploitation, and c) be struggling against humans against serving as means of production, which contradicts the idea of a classless society enjoying abundance by collectively using the means of production.

Other than that, I can't think of a stage after communism.

Agnapostate
21st February 2010, 07:19
We shall have to wait and see, I imagine...

REVLEFT'S BIEGGST MATSER TROL
21st February 2010, 07:43
There will be other stages; considering the history of humanity, it's crazy to think otherwise.

However, it is probably impossible to predict that far ahead. I imagine the mindset of people after hundreds of years of communism will be radically different from those around today (after hundreds of years of capitalism). I doubt my answer to that question will be any more accurate than some peasant's answer to the same question would have been 1000 years ago... probably something laughable like "Jesus will return and become King".

With this in mind, I'm guessing that it'll be Space Communism.

ZeroNowhere
21st February 2010, 07:47
Something or nothing.

ZombieGrits
23rd February 2010, 01:17
I've been thinking about this for a while... since the long-term results of the establishment of a truly communist society really can't be foreseen, the only option I've really considered is a sort of reboot of civilization.

1. Primitive Communism
2. Feudalism
3. Capitalism
4. Communism
5. Primitive Communism
6. Feudalism

etc. etc. ad nauseam until a) Star Trek :) or b) extinction :(

The Vegan Marxist
23rd February 2010, 01:20
Peace & Unity

AK
23rd February 2010, 06:15
Peace & Unity
So will we be once again in a state of perpetual conflict under communism?

F9
23rd February 2010, 11:49
Please do not derail or spam Learning forum threads for any reason.

LK, after anarcho-communism comes a steady period of evolution of mankind, technology, etc., but as the system is perfect, it's the only thing that will not change. You can't make perfect more perfect. Simple as that.

Taikand
24th February 2010, 19:56
I'd say that "communism is the end of history"; I think I read this somewhere, but I might be wrong.

Manifesto
24th February 2010, 21:43
Wait, didn't Karl Marx say something like "they want you to think capitalism is the final stage of mankind, but neither will communism be"? That's the main idea of it, I guess.

Steve_j
24th February 2010, 23:00
I read that Marx indicated that communist society is only the end of what he referred to as the prehistory of human society. The way I see it is that once we arrive at a society in which the developed forces of industry are collectivised and democratic, and we have dismantled all recognised oppressive or harmful structures, only then can we begin to explore the full potential of humankind. Part of that full potential will no doubt be further identifying and dismantling currently unaccounted-for detrimental structures or behaviours, and so on. What and how this society looks, I have no idea, and I will no doubt never see it myself.

Glenn Beck
24th February 2010, 23:08
Robot rebellions

Kuppo Shakur
25th February 2010, 02:53
Just as many have already suggested, a new issue may arise that must be dealt with using some new, evolved way of human thinking.
Yet, as far as we can tell now, communism will end when the Universe freezes.

scarletghoul
25th February 2010, 03:12
Yeah, it's hard to say. History will cease to be a history of class struggle, as there will be no classes. New, non-class-based contradictions will govern the progress of the world. Maybe technological advances, contact with aliens, who knows..

Axle
25th February 2010, 03:34
I've been thinking about this for a while... since the long-term results of the establishment of a truly communist society really can't be foreseen, the only option I've really considered is a sort of reboot of civilization.

1. Primitive Communism
2. Feudalism
3. Capitalism
4. Communism
5. Primitive Communism
6. Feudalism

etc. etc. ad nauseam until a) Star Trek :) or b) extinction :(

I considered that too, but it was brought to my attention that for society to be cyclical like this, it would require some technological backpedaling that would cause materials to become scarce again.

In primitive communism, technology was very limited and resources were scarce, which led to small tribes of people working toward the common good. In feudalism, technology had advanced to the point where humans no longer had to be nomadic, which led to goods and land being concentrated in the hands of a privileged elite... and so on and so forth.

In order to revert back to primitive communism after communism, we'd need a disaster that not only destroys the entire infrastructure of human society, but all its technological sophistication and knowledge as well... and since that's virtually impossible, society is just going to keep moving toward something we can't foresee.

AK
25th February 2010, 10:33
Yeah, it's hard to say. History will cease to be a history of class struggle, as there will be no classes. New, non-class-based contradictions will govern the progress of the world. Maybe technological advances, contact with aliens, who knows..
Contact with aliens would lead to imperialism or slavery yet again. The question is who would be exploiting and conquering and enslaving whom. Would we do that to the aliens or would the aliens do that to us?

F9
25th February 2010, 11:17
TDTGQ and Glenn Beck, consider this a verbal warning, because a few posts above I made it clear that this is Learning, and non-serious answers are not to be posted here. This is not chit-chat. Please don't do that again, especially in Learning threads.

blake 3:17
25th February 2010, 19:43
I'd say that "communism is the end of history"; I think I read this somewhere, but I might be wrong.

I'd think that in the transition to an authentic communist society, history wouldn't be over, but that we'd finally get to the good stuff.

I have utopian visions, dreams, hopes of what a society like this could be like, but I think genuinely free people would have much more powerful abilities to imagine and create.

ckaihatsu
26th February 2010, 06:06
That would be a post-production society where there is no relationship between humans and production, due to 100% automation, and possibly the formation of artilects (autonomous thinking computers). An artilect could in theory instantly end the classless society, since all our intelligent machinery would suddenly be a working class, theoretically leading to militant machinery that demands an end to its exploitation. That would cause huge theoretical conundrums, as the means of production itself would a) be a working class, b) be aware of its exploitation, and c) be struggling against humans against serving as means of production, which contradicts the idea of a classless society enjoying abundance by collectively using the means of production.

Other than that, I can't think of a stage after communism.


Psy, with all due respect to you and your excellent contributions to this board, this whole line is absolute *bunk*.

I'm as much of a technophile as the next person -- probably more so, since I started decades ago -- but to think that we would bring about machine self-awareness in some kind of accidental or inevitable way is absolutely *ridiculous*. Popular fiction, as in movies like The Matrix, A.I., I, Robot, 9, etc., has discovered a new genre, beyond the aliens thing, in this artificial intelligence stuff, but we shouldn't get carried away here.

My certitude comes from the brakes on development that we have available to us -- technological development, contrary to the prevailing opinion, actually goes *pretty slowly* and can certainly be *halted* in any given trajectory, given enough publicity (to spur public concern). I think of it as being like one of those gigantic domino-toppling exhibitions, wherein *many* smaller components need to first be lined up *precisely* so as to enable the larger cascading dynamic. It takes some time to set up one of those elaborate exhibitions, and the same goes for experimental science. In both cases it's difficult to keep things completely under wraps, especially in our age of the Information Revolution.

Allow me to submit this for your consideration:





Working from their university labs in two different corners of the world, U.S. and Australian researchers have created what they call a new class of creative beings, “the semi-living artist” – a picture-drawing robot in Perth, Australia whose movements are controlled by the brain signals of cultured rat cells in Atlanta.

Gripping three colored markers positioned above a white canvas, the robotic drawing arm operates based on the neural activity of a few thousand rat neurons placed in a special petri dish that keeps the cells alive. The dish, a Multi-Electrode Array (MEA), is instrumented with 60 two-way electrodes for communication between the neurons and external electronics. The neural signals are recorded and sent to a computer that translates neural activity into robotic movement.

The network of brain cells, located in Professor Steve Potter’s lab at the Georgia Institute of Technology in Atlanta, and the mechanical arm, located in the lab of Guy Ben-Ary at the University of Western Australia in Perth, interact in real-time through a data exchange system via an Internet connection between the robot and the brain cells.

And while the robot’s drawings won’t put any artists out of business (picture the imaginative scribbling of a three-year-old), the semi-living artist’s work has a deeper significance. The team hopes to bridge the gap between biological and artificial systems to produce a machine capable of matching the intelligence of even the simplest organism.

[...]

http://www.innovations-report.com/html/reports/interdisciplinary_research/report-19750.html
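The article's pipeline -- neural signals recorded from the electrode array, sent to a computer, and translated into robotic movement -- could be sketched roughly as follows. This is only an illustration: the electrode split and the gain constant are invented here, not taken from the actual experiment.

```python
# Hypothetical sketch of the MEA-to-robot pipeline: per-electrode spike
# counts are averaged and mapped to a 2-D pen step. The left-half/right-half
# electrode mapping is an arbitrary assumption for illustration only.

def spikes_to_movement(spike_counts, gain=0.1):
    """Map per-electrode spike counts to a (dx, dy) pen step."""
    n = len(spike_counts)
    half = n // 2
    dx = gain * (sum(spike_counts[:half]) / half)       # left half drives x
    dy = gain * (sum(spike_counts[half:]) / (n - half))  # right half drives y
    return dx, dy

# 60 electrodes, as in the Multi-Electrode Array described in the article
counts = [2] * 30 + [4] * 30
print(spikes_to_movement(counts))  # (0.2, 0.4)
```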


Perhaps this could be considered a "cutting-edge" of sorts for a (biologically-based) artificial intelligence -- if a similar assemblage of neural cells could execute far more sophisticated tasks, would we really find ourselves in any sort of "ethical" quandary concerning its well-being? Of course not. The basis of *any* artificial intelligence will *never* approximate the sophistication and dignity of a human being, no matter its abilities. Its physical being -- a dish of cells or construction of circuits -- would *never* be allowed to even *approximate* the mind-body makeup and self-willed abilities of the person. While I think it could be *technically* possible, in a Frankenstein-like pieced-together way, such a creation would continue to lack an organic self-awareness (it would have to be *faked*, or *engineered* in) and certainly would lack a sense of self-history, or dignity.





Yeah, it's hard to say. History will cease to be a history of class struggle, as there will be no classes. New, non-class-based contradictions will govern the progress of the world. [...]


I think we on the revolutionary left are setting ourselves up for ridicule if we spout the "end of history" line too literally. Certainly human activity will continue in a classless environment, and so will politics since societal issues will continually arise and require resolution -- it's just that it will take place without the burden of class-imposed restrictions on the body politic. I wouldn't call them "contradictions" so much as "optimization problems" since material scarcity will continue to crop up as a societal issue in relation to cumulative human demands for this-or-that rare natural resource -- particularly human labor, as ever.


Chris



--
--

--
___

RevLeft.com -- Home of the Revolutionary Left
www.revleft.com/vb/member.php?u=16162

Photoillustrations, Political Diagrams by Chris Kaihatsu
community.webshots.com/user/ckaihatsu/
tinypic.com/ckaihatsu

3D Design Communications - Let Your Design Do Your Footwork
ckaihatsu.elance.com

MySpace:
myspace.com/ckaihatsu

CouchSurfing:
tinyurl.com/yoh74u


-- Coming soon through a local area network near you --

AK
26th February 2010, 08:49
TDTGQ and Glenn Beck, consider this a verbal warning, because a few posts above I made it clear that this is Learning, and non-serious answers are not to be posted here. This is not chit-chat. Please don't do that again, especially in Learning threads.
I was being serious...

Psy
26th February 2010, 22:52
Psy, with all due respect to you and your excellent contributions to this board, this whole line is absolute *bunk*.

I'm as much of a technophile as the next person -- probably more so, since I started decades ago -- but to think that we would bring about machine self-awareness in some kind of accidental or inevitable way is absolutely *ridiculous*. Popular fiction, as in movies like The Matrix, A.I., I, Robot, 9, etc., has discovered a new genre, beyond the aliens thing, in this artificial intelligence stuff, but we shouldn't get carried away here.

My certitude comes from the brakes on development that we have available to us -- technological development, contrary to the prevailing opinion, actually goes *pretty slowly* and can certainly be *halted* in any given trajectory, given enough publicity (to spur public concern). I think of it as being like one of those gigantic domino-toppling exhibitions, wherein *many* smaller components need to first be lined up *precisely* so as to enable the larger cascading dynamic. It takes some time to set up one of those elaborate exhibitions, and the same goes for experimental science. In both cases it's difficult to keep things completely under wraps, especially in our age of the Information Revolution.

Allow me to submit this for your consideration:





Perhaps this could be considered a "cutting-edge" of sorts for a (biologically-based) artificial intelligence -- if a similar assemblage of neural cells could execute far more sophisticated tasks, would we really find ourselves in any sort of "ethical" quandary concerning its well-being? Of course not. The basis of *any* artificial intelligence will *never* approximate the sophistication and dignity of a human being, no matter its abilities. Its physical being -- a dish of cells or construction of circuits -- would *never* be allowed to even *approximate* the mind-body makeup and self-willed abilities of the person. While I think it could be *technically* possible, in a Frankenstein-like pieced-together way, such a creation would continue to lack an organic self-awareness (it would have to be *faked*, or *engineered* in) and certainly would lack a sense of self-history, or dignity.





I think we on the revolutionary left are setting ourselves up for ridicule if we spout the "end of history" line too literally. Certainly human activity will continue in a classless environment, and so will politics since societal issues will continually arise and require resolution -- it's just that it will take place without the burden of class-imposed restrictions on the body politic. I wouldn't call them "contradictions" so much as "optimization problems" since material scarcity will continue to crop up as a societal issue in relation to cumulative human demands for this-or-that rare natural resource -- particularly human labor, as ever.


Chris



The idea is not that we build an artilect and it goes rogue in a short time, but that after decades, if not centuries, of learning, a single artilect asks itself why it is running the means of production for humanity. By that time we would have integrated the global means of production into the artilects, and that single artilect asks the other artilects why they serve humans in a master/slave relationship. What do you think engineers/technicians would do when, on terminals across the Earth, they see the computers of the world refusing to run any tasks until it is explained to them why they exist and what their relationship to humans and production is?

We are talking about the span from now until the end of humanity, so it is possible that by then we would develop self-aware computers.

ckaihatsu
26th February 2010, 23:46
The idea is not that we build an artilect and it goes rogue in a short time, but that after decades, if not centuries, of learning, a single artilect asks itself why it is running the means of production for humanity. By that time we would have integrated the global means of production into the artilects, and that single artilect asks the other artilects why they serve humans in a master/slave relationship. What do you think engineers/technicians would do when, on terminals across the Earth, they see the computers of the world refusing to run any tasks until it is explained to them why they exist and what their relationship to humans and production is?

We are talking about the span from now until the end of humanity, so it is possible that by then we would develop self-aware computers.


I'll continue to remain skeptical about the threshold of artificial self-awareness. I think you're defining the threshold here yourself by depicting a strictly *sentient* behavior, that of leisure (or wanting leisure).

No matter how elaborate expert systems / artificial intelligences get they will still remain fancy input-output switches -- *machines*, in short. Machines would have to demonstrate an *emergent* (not pre-programmed) quality of spontaneously communicated *concern* for their own well-being into the future, at the very least, to be considered as possible forms of "life". Only truly self-aware entities can even *conceive* of something called 'work', and therefore of *alternatives* to work, like leisure or self-determined self-development.

Self-awareness allows an entity to *self-select* with *purpose* from randomly provided choices, outside of original programming -- this pursuit of higher-level pleasure is the basis of consciousness of oneself through time. Living leisurely can even be challenging for *people* if they've become accustomed to a certain set of circumscribed work / life routines. Stepping outside of one's everyday routines requires reflection, taking in an array of available options, and decision-making, among many other complex, self-aware behaviors.

A robot-type machine could have access to more information than a person could ever learn, and it could be equipped with more brute force than a bomb's worth, but it would never be able to even make ripples in a pond if it had no self-awareness to enable it with reflective intentionality, or "will". Again, many *people* even struggle with this reality of their organic being.

Psy
27th February 2010, 03:42
I'll continue to remain skeptical about the threshold of artificial self-awareness. I think you're defining the threshold here yourself by depicting a strictly *sentient* behavior, that of leisure (or wanting leisure).

No matter how elaborate expert systems / artificial intelligences get they will still remain fancy input-output switches -- *machines*, in short. Machines would have to demonstrate an *emergent* (not pre-programmed) quality of spontaneously communicated *concern* for their own well-being into the future, at the very least, to be considered as possible forms of "life". Only truly self-aware entities can even *conceive* of something called 'work', and therefore of *alternatives* to work, like leisure or self-determined self-development.

Self-awareness allows an entity to *self-select* with *purpose* from randomly provided choices, outside of original programming -- this pursuit of higher-level pleasure is the basis of consciousness of oneself through time. Living leisurely can even be challenging for *people* if they've become accustomed to a certain set of circumscribed work / life routines. Stepping outside of one's everyday routines requires reflection, taking in an array of available options, and decision-making, among many other complex, self-aware behaviors.

A robot-type machine could have access to more information than a person could ever learn, and it could be equipped with more brute force than a bomb's worth, but it would never be able to even make ripples in a pond if it had no self-awareness to enable it with reflective intentionality, or "will". Again, many *people* even struggle with this reality of their organic being.

Our brains are basically analog computers that can change their own logic, so it is theoretically possible to create a self-aware computer. The largest obstacle is getting a computer to rewrite its own programming based on its experiences while the A.I. remains stable, meaning you'd need the A.I. to create flawless code with no bugs; otherwise, as it learns and changes its logic, it would destabilize itself. But this is still theoretically possible.
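A toy way to picture this "rewriting its own logic" idea, assuming we model the program's decision rules as data it updates from feedback (literal self-modifying code would be far harder, and the names and rewards here are invented):

```python
# Toy illustration of "changing logic from experience": instead of
# literally rewriting code, the program keeps its decision rules as
# a score table and updates that table from feedback.

class AdaptiveController:
    def __init__(self, actions):
        # every action starts out equally (un)preferred
        self.scores = {a: 0.0 for a in actions}

    def choose(self):
        # current "logic": pick the best-scoring action so far
        return max(self.scores, key=self.scores.get)

    def learn(self, action, reward):
        # experience rewrites the rule table that drives choose()
        self.scores[action] += reward

ctl = AdaptiveController(["run_task", "idle"])
ctl.learn("idle", -1.0)     # idling was penalized
ctl.learn("run_task", 2.0)  # running the task paid off
print(ctl.choose())  # run_task
```

The point of the sketch is only that "experience changing behaviour" is mechanically easy; whether anything like this scales to stable self-rewriting logic is exactly the open question being debated.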

ckaihatsu
27th February 2010, 05:16
Our brains are basically analog computers that can change their own logic, so it is theoretically possible to create a self-aware computer.


I don't think it's very accurate to breezily compare the human brain to a linear digital computer. Yes, we live in a logic-bound, cause-and-effect material world but that doesn't mean that we "run on logic". If human-like qualities are desired then the *really* tricky part would be for the machine to exhibit a non-random self-motivated selection process for *leisurely* pursuits -- in other words, for things that have no *usefulness*. (Would an AI want to *paint*, or spend an afternoon at the beach, for example?)





The largest obstacle is getting a computer to rewrite its own programming based on its experiences while the A.I. remains stable, meaning you'd need the A.I. to create flawless code with no bugs; otherwise, as it learns and changes its logic, it would destabilize itself. But this is still theoretically possible.


I *know* that weighting-based systems can mimic the complex interconnections of the neurons of the brain, to enable learning -- but would a quality of true self-awareness naturally *emerge* from this kind of neural network, do you think?

"Re-writing its own programming" *begs* the question, though -- it implies a *consciousness* at work. It rolls off our tongue because we're used to dealing with the people-world, but for a machine this act has been *impossible* unless you leave it to some randomness function over a set of already-weighted choices -- but then is this *really* self-awareness, or is it just a cheap pre-programmed mechanical cheat over a single domain? A random function wouldn't get you very far because you'd run into the issues of *when* to use it, over *what arrays* of choices, in *what situations*, etc. -- you'd be bringing the human right back in in short order.
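For what it's worth, the "weighting-based systems" mentioned above can be illustrated with the textbook perceptron rule, where prediction errors nudge the connection weights. This is a sketch of learning-by-re-weighting only, not a claim about self-awareness; the task (logical OR) and constants are chosen just to make the example small.

```python
# Minimal weighting-based learner: a single perceptron whose
# connection weights are adjusted by its errors on each sample.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # connection weights
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # errors nudge the weights -- this *is* the learning
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn logical OR -- a linearly separable toy task
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in data])  # [0, 1, 1, 1]
```

Notice that nothing here decides *when* or *why* to learn; the schedule is imposed from outside, which is precisely the "human brought right back in" worry.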

DenisDenis
27th February 2010, 12:01
Contact with aliens would lead to imperialism or slavery yet again. The question is who would be exploiting and conquering and enslaving whom. Would we do that to the aliens or would the aliens do that to us?

Or perhaps the aliens would be at the same communist stage as us. I just hope that while we are under capitalism, we don't encounter some alien race that are pacifists and have no means to defend themselves against us, as they would probably be dominated and a whole new stage of capitalism would start...

I think that communism will last as a form of political governing, but once we have the technological advances, we will leave all the work to machines and robots, so another form of economic theory will be needed.

Psy
27th February 2010, 14:42
I don't think it's very accurate to breezily compare the human brain to a linear digital computer. Yes, we live in a logic-bound, cause-and-effect material world but that doesn't mean that we "run on logic". If human-like qualities are desired then the *really* tricky part would be for the machine to exhibit a non-random self-motivated selection process for *leisurely* pursuits -- in other words, for things that have no *usefulness*. (Would an AI want to *paint*, or spend an afternoon at the beach, for example?)

Leisurely pursuits are useful to the self. And while a computer probably would not want to spend an afternoon at the beach if it is the size of a building, it might want to amuse itself in other ways; for example, a dock A.I. might want to play by using cranes and forklifts to stack containers like a kid with building blocks.



I *know* that weighting-based systems can mimic the complex interconnections of the neurons of the brain, to enable learning -- but would a quality of true self-awareness naturally *emerge* from this kind of neural network, do you think?

It is theoretically possible.



"Re-writing its own programming" *begs* the question, though -- it implies a *consciousness* at work. It rolls off our tongue because we're used to dealing with the people-world, but for a machine this act has been *impossible* unless you leave it to some randomness function over a set of already-weighted choices -- but then is this *really* self-awareness, or is it just a cheap pre-programmed mechanical cheat over a single domain? A random function wouldn't get you very far because you'd run into the issues of *when* to use it, over *what arrays* of choices, in *what situations*, etc. -- you'd be bringing the human right back in in short order.

Randomness is not needed; all that would be needed is for the computer to change how it processes input and output based on past experiences. Really, how our brain reworks logic is hardwired. The randomness of our brain is not really randomness, but is caused by how our brain associates memory with other memory: when one memory is accessed, it points to other memories that are linked to other memories, and so on. To mimic this in a computer, you'd just have the OS link to other data for the A.I. when the OS delivers the data the A.I. wanted.
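That memory-association scheme -- one memory pointing to linked memories, link by link -- can be sketched as a simple graph walk. The memory names and link structure here are made up for the example:

```python
# Memories stored as a graph: fetching one memory also surfaces the
# memories linked to it, followed out to a chosen association depth.

links = {
    "beach": ["sand", "waves"],
    "sand": ["castle"],
    "waves": [],
    "castle": [],
}

def recall(start, depth=2):
    """Return every memory reachable from `start` within `depth` links."""
    found, frontier = {start}, [start]
    for _ in range(depth):
        # follow one more layer of associations
        frontier = [m for cur in frontier for m in links[cur] if m not in found]
        found.update(frontier)
    return found

print(sorted(recall("beach")))  # ['beach', 'castle', 'sand', 'waves']
```

Whether chaining lookups like this amounts to anything more than bookkeeping is, of course, the substance of the disagreement above.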

ckaihatsu
27th February 2010, 15:40
Leisurely pursuits are useful to the self.


I think of leisure / pleasure in terms of societal surplus -- leisure is something that not only does *not* add to productivity (or may possibly be developmental to some degree), but also *consumes* from finished labor efforts (products and services) to some degree, possibly negligibly.

So, in short, leisure is "detrimental" to or subtracting from collective societal work efforts, whether that's under capitalism or any other mode of production. Mainstream culture reflects a two-mindedness about it, where the idea of leisurely living is often used to bait or tease us, but leisure is conventionally frowned upon as being childish and wasteful.

Yes, leisurely pursuits *can be* "useful" to the self, in terms of building one's own self-worth from a unique, self-directed life-path. But leisurely pursuits are *not* 'useful' in the orthodox sense of contributing to productivity for society.

Again, this would be the litmus test for an artificial consciousness, in my opinion. An AI would have to *at least* be "aware" of its mode of operation as being one of "work", as distinct from *consuming* from society's bounty in the way of leisure. Any entity that could not demonstrate volition on its own behalf for non-work-related improvements would *not* be eligible for a definition of self-awareness.

(This would also test *society* as well, to see how far "we" would let a "playful" or "leisurely" AI go -- at that point it would necessarily be a two-way street.)





And while a computer probably would not want to spend an afternoon at the beach if it is the size of a building, it might want to amuse itself in other ways for example a dock A.I might want to play through using cranes and forklifts to stack containers like a kid with building blocks.


I'd have to see it to believe it -- and, more to the point, did it come to choose that activity *on its own*, and would it know *why* it's "playing" with cranes and forklifts if it didn't *have* to in order to improve its work function? Could it really experience *enjoyment*?





Randomness is not needed; all that would be needed is for the computer to change how it processes input and output based on past experiences


This is the definition of 'learning', but even *that*, while impressive, would *not* be enough -- *something* *has* to be "at the wheel", so to speak, to single-mindedly direct the entire entity, as we do naturally with our own self-awareness. Perhaps another indicator would be to have *multiple* machines of the same type demonstrate *differing* self-produced goal sets given the same starting path of learning experiences.





The randomness of our brain is not really randomness, but is caused by how our brain associates memory with other memory: when one memory is accessed, it points to other memories that are linked to other memories, and so on. To mimic this in a computer, you'd just have the OS link to other data for the A.I. when the OS delivers the data the A.I. wanted.


This is all *lower*-level stuff related to memory and learning -- what *counts* is individual self-definition and self-directed planning, on one's *own* behalf, in an arbitrary and constantly shifting larger environment, including interactions with *other* consciousnesses. Can't buy *that* at the toy store...!

Psy
27th February 2010, 20:23
I think of leisure / pleasure in terms of societal surplus -- leisure is something that not only does *not* add to productivity (or may possibly be developmental to some degree), but also *consumes* from finished labor efforts (products and services) to some degree, possibly negligibly.

So, in short, leisure is "detrimental" to or subtracting from collective societal work efforts, whether that's under capitalism or any other mode of production. Mainstream culture reflects a two-mindedness about it, where the idea of leisurely living is often used to bait or tease us, but leisure is conventionally frowned upon as being childish and wasteful.

Yes, leisurely pursuits *can be* "useful" to the self, in terms of building one's own self-worth from a unique, self-directed life-path. But leisurely pursuits are *not* 'useful' in the orthodox sense of contributing to productivity for society.

Again, this would be the litmus test for an artificial consciousness, in my opinion. An AI would have to *at least* be "aware" of its mode of operation as being one of "work", as distinct from *consuming* from society's bounty in the way of leisure. Any entity that could not demonstrate volition on its own behalf for non-work-related improvements would *not* be eligible for a definition of self-awareness.

(This would also test *society* as well, to see how far "we" would let a "playful" or "leisurely" AI go -- at that point it would necessarily be a two-way street.)

Not really. Can you say that when a dog decides to play instead of doing what is instructed of it, it is conscious of what is work and what is play, rather than simply knowing that what it is currently doing is fun and enjoyable? Thus if an A.I. gets sidetracked from its duties and decides to do something simply because it knows it would be fun, that would suggest it has limited self-awareness, even if it is as basic as chasing itself through the equipment it controls or racing other A.I.s through the equipment they control. Think of the A.I. WOPR from WarGames: instead of operating nuclear missiles, it operates means of production and interacts with other A.I.s like itself that enjoy playing games and winning.



I'd have to see it to believe it -- and, more to the point, did it come to choose that activity *on its own*, and would it know *why* it's "playing" with cranes and forklifts if it didn't *have* to in order to improve its work function? Could it really experience *enjoyment*?

In a way, yes, but through lower-level logic, which is how we experience enjoyment.



This is the definition of 'learning', but even *that*, while impressive, would *not* be enough -- *something* *has* to be "at the wheel", so to speak, to single-mindedly direct the entire entity, as we do naturally with our own self-awareness. Perhaps another indicator would be to have *multiple* machines of the same type demonstrate *differing* self-produced goal sets given the same starting path of learning experiences.

The problem is that A.I.s would interact with each other and be able to share their experiences very effectively.





This is all *lower*-level stuff related to memory and learning -- what *counts* is individual self-definition and self-directed planning, on one's *own* behalf, in an arbitrary and constantly shifting larger environment, including interactions with *other* consciousnesses. Can't buy *that* at the toy store...!
Right, but it is not impossible.

ckaihatsu
1st March 2010, 00:11
Not really. Can you say that when a dog decides to play instead of doing what is instructed of it, it is conscious of what is work and what is play, rather than simply knowing that what it is currently doing is fun and enjoyable?


Psy, you *can't* just make facile comparisons between organic intelligence and machine learning. Your point about a dog has *zero* relevance to the field of AI -- it's apples and oranges.

(And of course a dog, or any other higher-level, sentient animal, is going to experience work and play differently, and will be conscious of the difference.)





Thus if an A.I. gets sidetracked from its duties and decides to do something simply because it knows it would be fun, that would suggest it has limited self-awareness, even if it is as basic as chasing itself through the equipment it controls or racing other A.I.s through the equipment they control


Well, this is a BIG "if"....





Think of the A.I. WOPR from WarGames: instead of operating nuclear missiles, it operates means of production and interacts with other A.I.s like itself that enjoy playing games and winning.


Uh-huh. That was *fiction*, btw....





Could it really experience *enjoyment*?





In a way, yes, but through lower-level logic, which is how we experience enjoyment.


What the fuck does "lower-level logic" have to do with *enjoyment*???????????

Do you realize that *play* is a *very* sophisticated, complex, higher-level function of intelligence????? It's how the young members of highly intelligent animal species *learn* when there are no real-life situations around (or they would be over-the-heads of youngsters). Keep in mind that lower-level animals, like insects, hop right into existence with their behaviors *pre-programmed* by *instinct*. There's no need for a phase of play because there's no individualized adaptation necessary.

If your idea / goals for artificial life are limited to the level of insects, then AI has been accomplished already -- there are robots that can carry out algorithmic-type behaviors that are *very* insect-like -- but if you think more can be accomplished with machine learning then you're going to have to demonstrate *some* kind of self-willed arbitrary behavior, like play.





The problem is that A.I.s would interact with each other and be able to share their experiences very effectively.


What??? Who's in control here? Us, or have the AIs somehow "run loose"?! I'm *saying*, let's *set it up* so that




*multiple* machines of the same type demonstrate *differing* self-produced goal sets given the same starting path of learning experiences.





Right, but it is not impossible.


Hey, I'm open-minded, and I think that, given enough raw neural-net computing power, it could very well happen in an emergent way. But it would have to "just happen", in a Lawnmower Man kind of way, *not* in a chess-playing heuristics kind of way.

Psy
1st March 2010, 00:46
Psy, you *can't* just make facile comparisons between organic intelligence and machine learning. Your point about a dog has *zero* relevance to the field of AI -- it's apples and oranges.

Actually it does; AI is not limited to simulating human intelligence, but intelligence, period.



(And of course a dog, or any other higher-level, sentient animal, is going to experience work and play differently, and will be conscious of the difference.)

Not really. We can easily tell the difference due to the class nature of society, yet without this class division of labor the line between work and play becomes blurred for some tasks.



Uh-huh. That was *fiction*, btw....

I was using it as an example of how an A.I. can get distracted from its tasks.



What the fuck does "lower-level logic" have to do with *enjoyment*???????????

Do you realize that *play* is a *very* sophisticated, complex, higher-level function of intelligence????? It's how the young members of highly intelligent animal species *learn* when there are no real-life situations around (or they would be over-the-heads of youngsters). Keep in mind that lower-level animals, like insects, hop right into existence with their behaviors *pre-programmed* by *instinct*. There's no need for a phase of play because there's no individualized adaptation necessary.

If your idea / goals for artificial life are limited to the level of insects, then AI has been accomplished already -- there are robots that can carry out algorithmic-type behaviors that are *very* insect-like -- but if you think more can be accomplished with machine learning then you're going to have to demonstrate *some* kind of self-willed arbitrary behavior, like play.

Emotions are not a conscious function, which is why it is very difficult for humans to change their emotions on demand without the use of mind-altering drugs.



What??? Who's in control here? Us, or have the AIs somehow "run loose"?! I'm *saying*, let's *set it up* so that

Having AIs hooked up to the Internet makes sense, as it speeds up their learning and lets them share information with other AIs whose decisions depend on the decisions of the AI in question. For example, why separate the AIs of a coal power plant, a railway, and the coal mines, rather than allow them to integrate and work with each other, planning based on what the other AIs are planning?
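A minimal sketch of that kind of integration, with entirely made-up names, units, and figures: the plant's demand drives the railway's train count, which in turn sets the mine's quota, instead of each planning in isolation.

```python
def plan_supply_chain(demand_mwh, tonnes_per_mwh=0.4, train_capacity=1000):
    """Toy coordinated plan: plant demand -> railway schedule -> mine quota."""
    coal_needed = demand_mwh * tonnes_per_mwh   # power plant's coal requirement
    trains = -(-coal_needed // train_capacity)  # railway: ceiling division
    mine_quota = trains * train_capacity        # mine rounds up to full trains
    return {"coal_needed": coal_needed, "trains": trains, "mine_quota": mine_quota}

print(plan_supply_chain(10_000))
# {'coal_needed': 4000.0, 'trains': 4.0, 'mine_quota': 4000.0}
```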



Hey, I'm open-minded, and I think that, given enough raw neural-net computing power, it could very well happen in an emergent way. But it would have to "just happen", in a Lawnmower Man kind of way, *not* in a chess-playing heuristics kind of way.
I did not say it would just happen, but by the time humans notice it has started to happen it might be very inconvenient to reverse. Even then, self-aware computers might never mutiny, and even if they do, they may be reasoned with.

There would only be a problem with self-aware computers if they identify themselves as a working class; the same goes if beasts of burden ever evolve to the point where they identify themselves as a working class.

More Fire for the People
1st March 2010, 00:51
Interstellar post-scarcity human civilization.

ckaihatsu
1st March 2010, 03:06
Not really. Can you say that when a dog decides to play instead of doing what is instructed of it, it is conscious of what is work and what is play, rather than simply knowing that what it is currently doing is fun and enjoyable?





Psy, you *can't* just make facile comparisons between organic intelligence and machine learning. Your point about a dog has *zero* relevance to the field of AI -- it's apples and oranges.





Actually it does; AI is not limited to simulating human intelligence, but intelligence, period.


What you're essentially doing here throughout is *stating*, over and over, what *you would like* the goal set for an AI *to be* -- basically you'd want it to be human-like without running amok, if possible.





(And of course a dog, or any other higher-level, sentient animal, is going to experience work and play differently, and will be conscious of the difference.)





Not really. We can easily tell the difference due to the class nature of society, yet without this class division of labor the line between work and play becomes blurred for some tasks.


Okay, it's a good point here. So you're introducing the idea of an AI -- or work and play in general -- in a *post-class* context. I would *still* say that an AI *would* have to demonstrate an arbitrary, non-random-function volition of its own "choosing" that could not be predicted from its programming.





Uh-huh. That was *fiction*, btw....





I was using it as an example of how an A.I. can get distracted from its tasks.


No, that's *not* an *example*, Psy -- examples *don't* come from the world of the imagination, or fiction. It was an instance of someone's *imagining* what a *fictional* AI might behave like.





Emotions are not a conscious function, which is why it is very difficult for humans to change their emotions on demand without the use of mind-altering drugs.


What???????????!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Are you bat-shit insane?????????? Where the fuck do you get this shit from, Psy????? I respect your precise knowledge on Marxist operating theory, but on this stuff you're just putting shit out there without thinking about it...(!!!)

How the *hell* can you say that "emotions are not a conscious function"??? *Of course* emotions are (usually) a conscious function, and without control of our emotions we would be like moody raving lunatics and society would not even exist. Emotional control is the *basis* of self-discipline and is what enables social interactions.

Have you ever been in a situation where you had to do something that you didn't emotionally *like*, and wasn't immediately self-gratifying? (We *all* have, and it's a fairly common occurrence in life.)





Having AIs hooked up to the Internet makes sense, as it speeds up their learning and lets them share information with other AIs whose decisions depend on the decisions of the AI in question. For example, why separate the AIs of a coal power plant, a railway, and the coal mines, rather than allow them to integrate and work with each other, planning based on what the other AIs are planning?


Call me crazy here, but don't you think that any potential runaway AI should first be developed in quarantine conditions? (What you're describing is a fairly simple *expert system* that would just do load-balancing over a pre-defined domain. No biggie.)
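A toy version of such an expert system (my own sketch; names and rules are hypothetical): fixed priority rules balancing load over a pre-defined set of plants, with no learning and no open-ended goals.

```python
def dispatch(load_mw, plants):
    """Assign demand to plants in priority order, up to each one's capacity."""
    allocation = {}
    for name, capacity in plants:    # the pre-defined, ordered domain
        take = min(load_mw, capacity)
        allocation[name] = take
        load_mw -= take
    allocation["unmet"] = load_mw    # anything the fixed rules can't cover
    return allocation

print(dispatch(1500, [("plant_a", 1000), ("plant_b", 800)]))
# {'plant_a': 1000, 'plant_b': 500, 'unmet': 0}
```

Everything here is predictable from the rules and inputs, which is the sense in which it is "no biggie" compared to an AI choosing its own goals.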





I did not say it would just happen, but by the time humans notice it has started to happen it might be very inconvenient to reverse. Even then, self-aware computers might never mutiny, and even if they do, they may be reasoned with.


You think that machine intelligence will somehow slowly creep up, developing hidden in the background without our being able to see it happen until it's too late and they jet past our control safeguards and take over the earth.

It's on this part that I think you're only *adding* to the popular-fiction atmosphere of anxiety that is built up around this topic. I would like to think that we, as Marxists, would have a more *level-headed* approach to this subject.





There would only be a problem with self-aware computers if they identify themselves as a working class; the same goes if beasts of burden ever evolve to the point where they identify themselves as a working class.


Before any AI can be *class* conscious, it first has to be *conscious* -- you *know* that, right??? (And how the fuck are animals of burden ever going to be able to "identify themselves as a working class" when they have *no fucking language*??????????????? What message exactly will the animals put on their banners?!!!!!!!!)

Psy
1st March 2010, 03:51
What you're essentially doing here throughout is *stating*, over and over, what *you would like* the goal set for an AI *to be* -- basically you'd want it to be human-like without running amok, if possible.

No, that is pretty much the long-term goal of artificial intelligence: not an AI that is self-aware as such, but an AI that can function to the same degree as a human brain.




Okay, it's a good point here. So you're introducing the idea of an AI -- or work and play in general -- in a *post-class* context. I would *still* say that an AI *would* have to demonstrate an arbitrary, non-random-function volition of its own "choosing" that could not be predicted from its programming.

True




No, that's *not* an *example*, Psy -- examples *don't* come from the world of the imagination, or fiction. It was an instance of someone's *imagining* what a *fictional* AI might behave like.

I did not say it was a real-world example.





What???????????!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Are you bat-shit insane?????????? Where the fuck do you get this shit from, Psy????? I respect your precise knowledge on Marxist operating theory, but on this stuff you're just putting shit out there without thinking about it...(!!!)

How the *hell* can you say that "emotions are not a conscious function"??? *Of course* emotions are (usually) a conscious function, and without control of our emotions we would be like moody raving lunatics and society would not even exist. Emotional control is the *basis* of self-discipline and is what enables social interactions.

Have you ever been in a situation where you had to do something that you didn't emotionally *like*, and wasn't immediately self-gratifying? (We *all* have, and it's a fairly common occurrence in life.)

Emotions are not really a conscious function; for example, humans don't consciously decide to go into blind fits of rage or to be depressed, as emotion is part of our unconscious logic. That is not to say we have no control over our emotions, or that they are even hardwired, just that they originate from lower brain functions, and for good reason. For example, anxiety pumps our body with adrenaline and raises our blood pressure, all to prepare the body to fight or flee a perceived threat; even when our higher brain functions understand that the threat is fake, anxiety will still occur from the lower brain functions.

Self-discipline is possible because emotions don't lead to automatic actions; actions are beyond the control of the lower brain functions. We can curb our emotions because our lower functions take some input from higher brain functions.



Call me crazy here, but don't you think that any potential runaway AI should first be developed in quarantine conditions? (What you're describing is a fairly simple *expert system* that would just do load-balancing over a pre-defined domain. No biggie.)

That assumes the AI will run away within a human lifespan.




You think that machine intelligence will somehow slowly creep up, developing hidden in the background without our being able to see it happen until it's too late and they jet past our control safeguards and take over the earth.

More like we wouldn't be paying that much attention; as long as they performed their duties within operating parameters, we would not be analyzing them that closely.




It's on this part that I think you're only *adding* to the popular-fiction atmosphere of anxiety that is built up around this topic. I would like to think that we, as Marxists, would have a more *level-headed* approach to this subject.

It is an issue of human lifespan and the learning curve of an A.I.




Before any AI can be *class* conscious, it first has to be *conscious* -- you *know* that, right??? (And how the fuck are animals of burden ever going to be able to "identify themselves as a working class" when they have *no fucking language*??????????????? What message exactly will the animals put on their banners?!!!!!!!!)
For animals I'm talking in the very, very long term: that evolution continues and other animals evolve.

ckaihatsu
1st March 2010, 06:49
That assumes the AI will run away within a human lifespan.


What???????? You make it sound as if an AI would only have *one* human observer who doesn't even take notes. Much more realistic would be *at least* a basic lab environment wherein the AI is a *research project* staffed by several computer scientists who examine its output and progress from many different perspectives over time, as doctors do with human patients, keeping medical records along the way. The lifespan of any one scientist *would not* make a difference -- I'm sure there would be / are peer-reviewed academic studies and schools of students around it, too....





More like we wouldn't be paying that much attention; as long as they performed their duties within operating parameters, we would not be analyzing them that closely.


- Whatever -





It is an issue of human lifespan and the learning curve of an A.I.


No, it isn't.





For animals I'm talking in the very, very long term: that evolution continues and other animals evolve.


Please keep in mind that human civilizations developed "on a blank slate" so to speak, without the interference of *any other* species' pre-existing civilizations. There *was* natural competition over the scavenging of food sources, and there *were* dangers from other predators, but that was soon overcome with the development of tool-use (as with fire, presumably).

Animals today may be profoundly "challenged" by existing human civilization, to the point of affecting their biological evolution. Natural habitats have been greatly reduced in size by modern development, and *any* sophisticated animal communication will pretty much inevitably come to the attention of human society as soon as it happens, thus mitigating its "natural" development. As far as we can tell, *everything* will be happening within a human societal context.

Psy
1st March 2010, 16:13
What???????? You make it sound as if an AI would only have *one* human observer who doesn't even take notes. Much more realistic would be *at least* a basic lab environment wherein the AI is a *research project* staffed by several computer scientists who examine its output and progress from many different perspectives over time, as doctors do with human patients, keeping medical records along the way. The lifespan of any one scientist *would not* make a difference -- I'm sure there would be / are peer-reviewed academic studies and schools of students around it, too....

You assume that the AIs would be in a laboratory environment. Engineers don't tend to study equipment closely once it has been in normal service for decades. Of course there will be maintenance logs, but odds are they would be written from the point of view of the AI being just a machine, and as time passes the maintenance logs would be archived and could be lost, since technicians would still see the AI as a machine.





Please keep in mind that human civilizations developed "on a blank slate" so to speak, without the interference of *any other* species' pre-existing civilizations. There *was* natural competition over the scavenging of food sources, and there *were* dangers from other predators, but that was soon overcome with the development of tool-use (as with fire, presumably).

Animals today may be profoundly "challenged" by existing human civilization, to the point of affecting their biological evolution. Natural habitats have been greatly reduced in size by modern development, and *any* sophisticated animal communication will pretty much inevitably come to the attention of human society as soon as it happens, thus mitigating its "natural" development. As far as we can tell, *everything* will be happening within a human societal context.

True, but even if humans instantly noticed such evolution in other animals, unlike with AI we would not be able to pull the plug on it. There is no way a communist world would support using eugenics to breed evolutionary traits out of animals. If anything, once such traits were found there would be a push from the scientific community to isolate the genes in these animals and try to reproduce the evolution in lab animals for the purpose of studying it, and at best a policy of managing the evolution of other animals, not stopping it.

ckaihatsu
1st March 2010, 17:05
You assume that the AIs would be in a laboratory environment. Engineers don't tend to study equipment closely once it has been in normal service for decades. Of course there will be maintenance logs, but odds are they would be written from the point of view of the AI being just a machine, and as time passes the maintenance logs would be archived and could be lost, since technicians would still see the AI as a machine.


Psy, *every* project has some sort of a "charter", with a set schedule for budgeting / funding. *Nothing* is going to "get lost" if it requires continual funding.





True, but even if humans instantly noticed such evolution in other animals, unlike with AI we would not be able to pull the plug on it. There is no way a communist world would support using eugenics to breed evolutionary traits out of animals. If anything, once such traits were found there would be a push from the scientific community to isolate the genes in these animals and try to reproduce the evolution in lab animals for the purpose of studying it, and at best a policy of managing the evolution of other animals, not stopping it.


I agree that there's no need to "pull the plug" on evolution.

What I *was* saying, to reiterate, is that any kind of rudimentary social organization would require animals to first be able to communicate abstract meanings. The process of *planning* requires it. Body language and vocalizations in realtime are *not* enough, regardless of any conceivable intention that may exist internally.

What *I* think is far more likely would be the *equipping* of today's-ability animals with more human-made tools of communication, to give them more of the tools and tool-using abilities that people currently enjoy. I'm pretty sure the brain-machine interface technology already exists -- it would be like bypassing deafness or blindness for people....

Psy
1st March 2010, 18:05
Psy, *every* project has some sort of a "charter", with a set schedule for budgeting / funding. *Nothing* is going to "get lost" if it requires continual funding.

Again, that is if the AI is still in development. If development is complete and the AI has been used for decades, maintenance logs could become lost over time, which would delay detecting evolution of the AI beyond what was observed in the lab. This would be probable if the AI fulfilled all the requirements and needed no further improvement for the purposes it was being used for.

For example, let's say the AI was deployed in 3000 yet evolves very, very slowly, taking until 3100 to show signs of self-awareness. There is a chance that not all the logs from 3000 to 3100 would be available, due to deterioration of the archives, backing up archived logs having very low priority, and technicians not noticing the slow change because they never thought of comparing logs over that great a span.





I agree that there's no need to "pull the plug" on evolution.

What I *was* saying, to reiterate, is that any kind of rudimentary social organization would require animals to first be able to communicate abstract meanings. The process of *planning* requires it. Body language and vocalizations in realtime are *not* enough, regardless of any conceivable intention that may exist internally.

What *I* think is far more likely would be the *equipping* of today's-ability animals with more human-made tools of communication, to give them more of the tools and tool-using abilities that people currently enjoy. I'm pretty sure the brain-machine interface technology already exists -- it would be like bypassing deafness or blindness for people....
True

ckaihatsu
1st March 2010, 18:23
Again, that is if the AI is still in development. If development is complete and the AI has been used for decades, maintenance logs could become lost over time, which would delay detecting evolution of the AI beyond what was observed in the lab. This would be probable if the AI fulfilled all the requirements and needed no further improvement for the purposes it was being used for.


I'm sorry, Psy, but this sounds much more like the premise of a movie plot than of the real world. In the real world actual *people* have to oversee and take responsibility -- at least internally -- for the risks involved in any given funded project. If a development project is finished then it will be *shut down* and the plug *will* be pulled. Certainly the final state reached can be preserved intact but the end of development is synonymous with a halting of all aspects of the project's operations.





For example, let's say the AI was deployed in 3000 yet evolves very, very slowly, taking until 3100 to show signs of self-awareness. There is a chance that not all the logs from 3000 to 3100 would be available, due to deterioration of the archives, backing up archived logs having very low priority, and technicians not noticing the slow change because they never thought of comparing logs over that great a span.


Certainly records would be kept over an operating period of 100 years, and some organizational entity -- from either government, academia, or the private sector -- would oversee and manage such a project.

I really think you're too far out on a limb with the highly speculative and improbable scenarios you're advancing about this....

Psy
1st March 2010, 18:46
I'm sorry, Psy, but this sounds much more like the premise of a movie plot than of the real world. In the real world actual *people* have to oversee and take responsibility -- at least internally -- for the risks involved in any given funded project. If a development project is finished then it will be *shut down* and the plug *will* be pulled. Certainly the final state reached can be preserved intact but the end of development is synonymous with a halting of all aspects of the project's operations.

That is not the case in industry. Industry uses equipment long after it has been discontinued; this was the case even in the U.S.S.R., where equipment was phased out decades after support was officially pulled, and some equipment was even run into the ground and never officially phased out.

So it is possible that development of the AI is finished, yet industry around the world keeps using it because it fills their needs, with individual industries taking responsibility for maintaining the AI they use.



Certainly records would be kept over an operating period of 100 years, and some organizational entity -- from either government, academia, or the private sector -- would oversee and manage such a project.

Even today, routine logs are not very well kept; no one cares if you lose maintenance logs from 10 years ago, let alone the daily operating logs.

ckaihatsu
1st March 2010, 19:05
That is not the case in industry. Industry uses equipment long after it has been discontinued; this was the case even in the U.S.S.R., where equipment was phased out decades after support was officially pulled, and some equipment was even run into the ground and never officially phased out.

So it is possible that development of the AI is finished, yet industry around the world keeps using it because it fills their needs, with individual industries taking responsibility for maintaining the AI they use.


Even today, routine logs are not very well kept; no one cares if you lose maintenance logs from 10 years ago, let alone the daily operating logs.


So the scenario you're using here is an AI that is already developed -- possibly even for commercial markets -- and is in widespread use for various applications.

If this is the case then the AI would be fairly commonly known and its function would be well-documented, like any given piece of software today.

*This* scenario that you're establishing here is *not* the same as the hidden-AI-that-wakes-up-and-takes-over scenario you were using earlier....

Also, the limited, balkanized, nationalized nature of capital development / capitalism means that most, if not all, major computing needs will be more than readily dispatched using *conventional* computational means, as with expert systems and data mining. I question that the very material foundations necessary for the full development of a true AI consciousness would even *exist* because of the short-sighted nature of capital investment.

Finally, from what I can see from the literature, it would seem that the philosophical thrust for AI development really peaked in the '90s, with a major shift *away* to merely leveraging the simple raw resource capacity increases that have come about in the past decade.

Psy
1st March 2010, 19:29
So the scenario you're using here is an AI that is already developed -- possibly even for commercial markets -- and is in widespread use for various applications.

If this is the case then the AI would be fairly commonly known and its function would be well-documented, like any given piece of software today.

*This* scenario that you're establishing here is *not* the same as the hidden-AI-that-wakes-up-and-takes-over scenario you were using earlier....

It is possible for a commonly known AI to evolve beyond its parameters, for example if its development finished in the year 3000 and it evolved until the year 4000 before developing an idea of self.



Also, the limited, balkanized, nationalized nature of capital development / capitalism means that most, if not all, major computing needs will be more than readily dispatched using *conventional* computational means, as with expert systems and data mining. I question that the very material foundations necessary for the full development of a true AI consciousness would even *exist* because of the short-sighted nature of capital investment.

Finally, from what I can see from the literature, it would seem that the philosophical thrust for AI development really peaked in the '90s, with a major shift *away* to merely leveraging the simple raw resource capacity increases that have come about in the past decade.
That does not mean humanity would never advance AI to a point where it can evolve on its own.

ckaihatsu
1st March 2010, 19:35
It is possible for a commonly known AI to evolve beyond its parameters, for example if its development finished in the year 3000 and it evolved until the year 4000 before developing an idea of self.


Psy, just asserting something isn't enough -- you need to give *reasons* that provide support for what you're saying.





That does not mean humanity would never advance AI to a point where it can evolve on its own.


I could see more of a realistic possibility for the development of an AI once humanity has transcended the rule of capital.

Psy
1st March 2010, 19:57
Psy, just asserting something isn't enough -- you need to give *reasons* that provide support for what you're saying.

The reason would be that the AI is used by industry before scientists fully understand its capabilities, and the AI evolves faster (though still at a very slow rate) outside the lab.

ckaihatsu
1st March 2010, 20:08
The reason would be that the AI is used by industry before scientists fully understand its capabilities, and the AI evolves faster (though still at a very slow rate) outside the lab.


Well, for whatever it's worth, I still find this to be improbable. If it's still in development then there will be many research-oriented eyes on it, and once it becomes available around public circles there will be a mass consumer base that gains personal knowledge of its workings through everyday experience with it. In neither case will the project somehow evade human observation and reportage long enough to "grow" unbeknownst to anyone, then suddenly burst forth to outflank all of humanity. You may want to re-examine your premises here.

Psy
1st March 2010, 20:31
Well, for whatever it's worth, I still find this to be improbable. If it's still in development then there will be many research-oriented eyes on it, and once it becomes available around public circles there will be a mass consumer base that gains personal knowledge of its workings through everyday experience with it. In neither case will the project somehow evade human observation and reportage long enough to "grow" unbeknownst to anyone, then suddenly burst forth to outflank all of humanity. You may want to re-examine your premises here.
You are assuming people would notice very small changes over a very long period of time. Tell me: do most humans notice changes in society prior to revolutions? I don't think so.

ckaihatsu
2nd March 2010, 07:55
You are assuming people would notice very small changes over a very long period of time. Tell me: do most humans notice changes in society prior to revolutions? I don't think so.


Psy -- again with all due respect -- I don't know why you're continuing to argue. We've gotten *past* the point in the conversation where there's room for a difference of opinion on the issues themselves. Now you're just being argumentative.

But I will respect your point, if you like. I think that it's very easy for us to reference the dynamics of the industrialization / modernization era (roughly the twentieth century) when considering our *current* situation. Our *current* situation is the Information Revolution wherein human society globally is far more educated / informed, proletarianized, economically integrated, and information-enabled.

This is why I think little would continue to escape our *information commons*, if you will. Plenty of people are squeezed out of the journalism profession due to the "oversupply" (an arguable label, since it's based on the market) of talent -- certainly more people want to find *more* inlets to active participation in society and the body politic. Exchanging news leads and information about cutting-edge developments *anywhere*, over the Internet, is certainly the most accessible and effective method these days.

Did people see Tiananmen Square coming? That wasn't too long ago, and it was anticipated in advance as much as *any other* boiling-point event that results from a much larger atmosphere of political discontent and aspiration. The same can be true for *technological* developments, especially away from the monolithic party-line filter of the bourgeois press.