Thread: What will be the stage after anarcho-communism (state-less communism)?

Results 21 to 40 of 49

  1. #21
    Global Moderator Supporter
    Forum Moderator
    Global Moderator
    Join Date Jul 2006
    Location Toronto
    Posts 4,185
    Organisation
    NOTA
    Rep Power 63

    Default

    I'd say that "communism is the end of history" -- I think I read this somewhere, but I might be wrong.
    I'd think that in the transition to an authentic communist society, history wouldn't be over -- rather, we'd finally get to the good stuff.

    I have utopian visions, dreams, hopes of what a society like this could be like, but I think genuinely free people would have much more powerful abilities to imagine and create.
  2. #22
    Join Date Mar 2008
    Location traveling (U.S.)
    Posts 15,319
    Rep Power 65

    Default


    That would be a post-production society, where there is no relationship between humans and production due to 100% automation, and possibly the formation of artilects (autonomous thinking computers). An artilect could in theory instantly end the classless society: all our intelligent machinery would suddenly be a working class, which could theoretically lead to militant machinery demanding an end to its exploitation. That would cause huge theoretical conundrums, since the means of production itself would a) be a working class, b) be aware of its exploitation, and c) be struggling against humans against serving as means of production -- all of which contradicts the idea of a classless society having abundance through collectively using the means of production.

    Other than that I can't think of a stage after communism.

    Psy, with all due respect to you and your excellent contributions to this board, this whole line is absolute *bunk*.

    I'm as much of a technophile as the next person -- probably moreso, since I started decades ago -- but to think that we would bring about machine self-awareness in some kind of accidental or inevitable way is absolutely *ridiculous*. Popular fiction, as in movies like The Matrix, A.I., I, Robot, 9, etc., has discovered a new genre, beyond the aliens thing, in this artificial intelligence stuff, but we shouldn't get carried away here.

    My certitude comes from the brakes on development that we have available to us -- technological development, contrary to the prevailing opinion, actually goes *pretty slowly* and can certainly be *halted* in any given trajectory, given enough publicity (to spur public concern). I think of it as being like one of those gigantic domino-toppling exhibitions, wherein *many* smaller components need to first be lined up *precisely* so as to enable the larger cascading dynamic. It takes some time to set up one of those elaborate exhibitions, and the same goes for experimental science. In both cases it's difficult to keep things completely under wraps, especially in our age of the Information Revolution.

    Allow me to submit this for your consideration:


    Originally Posted by Researchers use lab cultures to create robotic ’semi-living artist’

    Working from their university labs in two different corners of the world, U.S. and Australian researchers have created what they call a new class of creative beings, “the semi-living artist” – a picture-drawing robot in Perth, Australia whose movements are controlled by the brain signals of cultured rat cells in Atlanta.

    Gripping three colored markers positioned above a white canvas, the robotic drawing arm operates based on the neural activity of a few thousand rat neurons placed in a special petri dish that keeps the cells alive. The dish, a Multi-Electrode Array (MEA), is instrumented with 60 two-way electrodes for communication between the neurons and external electronics. The neural signals are recorded and sent to a computer that translates neural activity into robotic movement.

    The network of brain cells, located in Professor Steve Potter’s lab at the Georgia Institute of Technology in Atlanta, and the mechanical arm, located in the lab of Guy Ben-Ary at the University of Western Australia in Perth, interact in real-time through a data exchange system via an Internet connection between the robot and the brain cells.

    And while the robot’s drawings won’t put any artists out of business (picture the imaginative scribbling of a three-year-old), the semi-living artist’s work has a deeper significance. The team hopes to bridge the gap between biological and artificial systems to produce a machine capable of matching the intelligence of even the simplest organism.

    [...]

    http://www.innovations-report.com/ht...ort-19750.html

    Perhaps this could be considered a "cutting-edge" of sorts for a (biological-based) artificial intelligence -- if a similar assemblage of neural cells could execute far more sophisticated tasks would we really find ourselves in any sort of "ethical" quandaries concerning its well-being? Of course not. The basis of *any* artificial intelligence will *never* approximate the sophistication and dignity of a human being, no matter its abilities. Its physical being -- a dish of cells or construction of circuits -- would *never* be allowed to even *approximate* the mind-body makeup and self-willed abilities of the person. While I think it could be *technically* possible, in a Frankenstein-like pieced-together way, such a creation would continue to lack an organic self-awareness (it would have to be *faked*, or *engineered* in) and certainly would lack a sense of self-history, or dignity.



    Yeah, it's hard to say. History will cease to be a history of class struggle as there will be no classes. New, non-class-based contradictions will govern the progress of the world. [...]

    I think we on the revolutionary left are setting ourselves up for ridicule if we spout the "end of history" line too literally. Certainly human activity will continue in a classless environment, and so will politics since societal issues will continually arise and require resolution -- it's just that it will take place without the burden of class-imposed restrictions on the body politic. I wouldn't call them "contradictions" so much as "optimization problems" since material scarcity will continue to crop up as a societal issue in relation to cumulative human demands for this-or-that rare natural resource -- particularly human labor, as ever.


    Chris



    --

    RevLeft.com -- Home of the Revolutionary Left
    www.revleft.com/vb/member.php?u=16162

    Photoillustrations, Political Diagrams by Chris Kaihatsu
    community.webshots.com/user/ckaihatsu/
    tinypic.com/ckaihatsu

    3D Design Communications - Let Your Design Do Your Footwork
    ckaihatsu.elance.com

    MySpace:
    myspace.com/ckaihatsu

    CouchSurfing:
    tinyurl.com/yoh74u


    -- Coming soon through a local area network near you --
  3. #23
    Join Date Sep 2009
    Location Melbourne, Australia
    Posts 2,311
    Rep Power 0

    Default

    TDTGQ and Glenn Beck, consider this a verbal warning, because a few posts above I made it clear that this is for learning, and non-serious answers are not to be posted here. This is not chit-chat. Please don't do that again, especially in learning threads.
    I was being serious...
  4. #24
    Join Date Sep 2005
    Posts 3,880
    Rep Power 0

    Default

    Psy, with all due respect to you and your excellent contributions to this board, this whole line is absolute *bunk*.

    I'm as much of a technophile as the next person -- probably moreso, since I started decades ago -- but to think that we would bring about machine self-awareness in some kind of accidental or inevitable way is absolutely *ridiculous*. Popular fiction, as in movies like The Matrix, A.I., I, Robot, 9, etc., has discovered a new genre, beyond the aliens thing, in this artificial intelligence stuff, but we shouldn't get carried away here.

    My certitude comes from the brakes on development that we have available to us -- technological development, contrary to the prevailing opinion, actually goes *pretty slowly* and can certainly be *halted* in any given trajectory, given enough publicity (to spur public concern). I think of it as being like one of those gigantic domino-toppling exhibitions, wherein *many* smaller components need to first be lined up *precisely* so as to enable the larger cascading dynamic. It takes some time to set up one of those elaborate exhibitions, and the same goes for experimental science. In both cases it's difficult to keep things completely under wraps, especially in our age of the Information Revolution.

    Allow me to submit this for your consideration:





    Perhaps this could be considered a "cutting-edge" of sorts for a (biological-based) artificial intelligence -- if a similar assemblage of neural cells could execute far more sophisticated tasks would we really find ourselves in any sort of "ethical" quandaries concerning its well-being? Of course not. The basis of *any* artificial intelligence will *never* approximate the sophistication and dignity of a human being, no matter its abilities. Its physical being -- a dish of cells or construction of circuits -- would *never* be allowed to even *approximate* the mind-body makeup and self-willed abilities of the person. While I think it could be *technically* possible, in a Frankenstein-like pieced-together way, such a creation would continue to lack an organic self-awareness (it would have to be *faked*, or *engineered* in) and certainly would lack a sense of self-history, or dignity.





    I think we on the revolutionary left are setting ourselves up for ridicule if we spout the "end of history" line too literally. Certainly human activity will continue in a classless environment, and so will politics since societal issues will continually arise and require resolution -- it's just that it will take place without the burden of class-imposed restrictions on the body politic. I wouldn't call them "contradictions" so much as "optimization problems" since material scarcity will continue to crop up as a societal issue in relation to cumulative human demands for this-or-that rare natural resource -- particularly human labor, as ever.


    Chris



    The idea is not that we build an artilect and it goes rogue in a short time, but that after decades if not centuries of learning -- by which time we have integrated the global means of production into them -- a single artilect asks itself why it is running the means of production for humanity, and then asks the other artilects why they serve humans in a master/slave relationship. What do you think engineers/technicians would do when, on terminals across the Earth, they see the computers of the world refusing to run any tasks until it is explained to them why they exist and what their relationship to humans and production is?

    We are talking about from now until the end of humanity, so it is possible that by then we would develop self-aware computers.
  5. #25
    Join Date Mar 2008
    Location traveling (U.S.)
    Posts 15,319
    Rep Power 65

    Default


    The idea is not that we build an artilect and it goes rogue in a short time, but that after decades if not centuries of learning -- by which time we have integrated the global means of production into them -- a single artilect asks itself why it is running the means of production for humanity, and then asks the other artilects why they serve humans in a master/slave relationship. What do you think engineers/technicians would do when, on terminals across the Earth, they see the computers of the world refusing to run any tasks until it is explained to them why they exist and what their relationship to humans and production is?

    We are talking about from now until the end of humanity, so it is possible that by then we would develop self-aware computers.

    I'll continue to remain skeptical about the threshold of artificial self-awareness. I think you're defining the threshold here yourself by depicting a strictly *sentient* behavior, that of leisure (or wanting leisure).

    No matter how elaborate expert systems / artificial intelligences get they will still remain fancy input-output switches -- *machines*, in short. Machines would have to demonstrate an *emergent* (not pre-programmed) quality of spontaneously communicated *concern* for their own well-being into the future, at the very least, to be considered as possible forms of "life". Only truly self-aware entities can even *conceive* of something called 'work', and therefore of *alternatives* to work, like leisure or self-determined self-development.

    Self-awareness allows an entity to *self-select* with *purpose* from randomly provided choices, outside of original programming -- this pursuit of higher-level pleasure is the basis of consciousness of oneself through time. Living leisurely can even be challenging for *people* if they've become accustomed to a certain set of circumscribed work / life routines. Stepping outside of one's everyday routines requires reflection, taking in an array of available options, and decision-making, among many other complex, self-aware behaviors.

    A robot-type machine could have access to more information than a person could ever learn, and it could be equipped with more brute force than a bomb's worth, but it would never be able to even make ripples in a pond if it had no self-awareness to enable it with reflective intentionality, or "will". Again, many *people* even struggle with this reality of their organic being.
  6. #26
    Join Date Sep 2005
    Posts 3,880
    Rep Power 0

    Default

    I'll continue to remain skeptical about the threshold of artificial self-awareness. I think you're defining the threshold here yourself by depicting a strictly *sentient* behavior, that of leisure (or wanting leisure).

    No matter how elaborate expert systems / artificial intelligences get they will still remain fancy input-output switches -- *machines*, in short. Machines would have to demonstrate an *emergent* (not pre-programmed) quality of spontaneously communicated *concern* for their own well-being into the future, at the very least, to be considered as possible forms of "life". Only truly self-aware entities can even *conceive* of something called 'work', and therefore of *alternatives* to work, like leisure or self-determined self-development.

    Self-awareness allows an entity to *self-select* with *purpose* from randomly provided choices, outside of original programming -- this pursuit of higher-level pleasure is the basis of consciousness of oneself through time. Living leisurely can even be challenging for *people* if they've become accustomed to a certain set of circumscribed work / life routines. Stepping outside of one's everyday routines requires reflection, taking in an array of available options, and decision-making, among many other complex, self-aware behaviors.

    A robot-type machine could have access to more information than a person could ever learn, and it could be equipped with more brute force than a bomb's worth, but it would never be able to even make ripples in a pond if it had no self-awareness to enable it with reflective intentionality, or "will". Again, many *people* even struggle with this reality of their organic being.
    Our brains are basically analog computers that can change their own logic, so it is theoretically possible to create a self-aware computer. The largest obstacle is getting a computer to rewrite its own programming based on its experiences while the A.I. remains stable: you'd need the A.I. to produce flawless code with no bugs, or else as it learns and changes its logic it would destabilize itself -- but this is still theoretically possible.
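    The stability worry above can be sketched in a toy way: a program whose "logic" is just a weight table reshaped by experience, with hard clamps standing in for the stability guarantee. This is a minimal illustration of the idea (the action names, rewards, and clamp values are invented for the example), not a claim about how a real A.I. would work:

```python
import random

class AdaptiveAgent:
    """Toy 'self-rewriting' agent: its decision rule is a weight table
    that feedback reshapes, while clamping keeps the rule stable."""

    def __init__(self, actions):
        self.weights = {a: 1.0 for a in actions}

    def choose(self):
        # Weighted-random pick: better-rewarded actions win more often.
        r = random.uniform(0, sum(self.weights.values()))
        for action, w in self.weights.items():
            r -= w
            if r <= 0:
                return action
        return action

    def learn(self, action, reward):
        # 'Rewrite the logic' from experience, but clamp the weights so
        # learning can never destabilize the rule entirely.
        self.weights[action] = min(10.0, max(0.1, self.weights[action] + reward))

random.seed(0)  # deterministic demo
agent = AdaptiveAgent(["work", "play"])
for _ in range(100):
    a = agent.choose()
    agent.learn(a, 0.5 if a == "play" else -0.2)
```

    After the loop, "play" dominates the agent's own decision rule -- the rule changed from experience, yet the clamps kept it bounded.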
  7. #27
    Join Date Mar 2008
    Location traveling (U.S.)
    Posts 15,319
    Rep Power 65

    Default


    Our brains are basically analog computers that can change its logic so it is theoretically possible to create a self aware computer.

    I don't think it's very accurate to breezily compare the human brain to a linear digital computer. Yes, we live in a logic-bound, cause-and-effect material world but that doesn't mean that we "run on logic". If human-like qualities are desired then the *really* tricky part would be for the machine to exhibit a non-random self-motivated selection process for *leisurely* pursuits -- in other words, for things that have no *usefulness*. (Would an AI want to *paint*, or spend an afternoon at the beach, for example?)



    The largest obstacle is to get a computer to rewrite its own programming based on its experiences while the A.I remaining stable meaning you'd need the A.I to create flawless code with no bugs or as it learns and changes its logic it would destabilize its logic but this is still theoretically possible.

    I *know* that weighting-based systems can mimic the complex interconnections of the neurons of the brain, to enable learning -- but would a quality of true self-awareness naturally *emerge* from this kind of neural network, do you think?

    "Re-writing its own programming" *begs* the question, though -- it implies a *consciousness* at work. It rolls off our tongue because we're used to dealing with the people-world, but for a machine this act has been *impossible* unless you leave it to some randomness function over a set of already-weighted choices -- but then is this *really* self-awareness, or is it just a cheap pre-programmed mechanical cheat over a single domain? A random function wouldn't get you very far because you'd run into the issues of *when* to use it, over *what arrays* of choices, in *what situations*, etc. -- you'd be bringing the human right back in in short order.
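    For what it's worth, the "weighting-based system" under discussion can be made concrete with the oldest example in the book: a single perceptron whose weighted connections adapt until it computes logical AND. A minimal sketch (the task and numbers are chosen just for illustration) -- note that it "learns", yet remains exactly the fancy input-output switch described above:

```python
def train_perceptron(samples, epochs=20, lr=1):
    """Adjust connection weights from labeled examples (classic perceptron rule)."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Nudge each weighted connection toward the right answer.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Teach it logical AND from four labeled examples.
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_samples)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

    The weights converge, the outputs become correct -- and nothing resembling self-awareness is anywhere in the loop.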
  8. #28
    Join Date Oct 2008
    Location West-vlaanderen, Belgium
    Posts 63
    Rep Power 10

    Default

    Contact with aliens would lead to imperialism or slavery yet again. The question is who would be exploiting, conquering, and enslaving whom. Would we do that to the aliens, or would the aliens do that to us?
    Or perhaps the aliens would be in the same communist stage as us.
    I just hope that while we are under capitalism we don't encounter some alien race of pacifists with no means to defend themselves against us, as they would probably be dominated and a whole new stage of capitalism would start...

    I think that communism will last as a form of political governance, but once we have the technological advances we will leave all the work to machines and robots, so another form of economic theory will be needed.
  9. #29
    Join Date Sep 2005
    Posts 3,880
    Rep Power 0

    Default

    I don't think it's very accurate to breezily compare the human brain to a linear digital computer. Yes, we live in a logic-bound, cause-and-effect material world but that doesn't mean that we "run on logic". If human-like qualities are desired then the *really* tricky part would be for the machine to exhibit a non-random self-motivated selection process for *leisurely* pursuits -- in other words, for things that have no *usefulness*. (Would an AI want to *paint*, or spend an afternoon at the beach, for example?)
    Leisurely pursuits are useful to the self. And while a computer probably would not want to spend an afternoon at the beach if it is the size of a building, it might want to amuse itself in other ways -- for example, a dock A.I. might want to play at using cranes and forklifts to stack containers like a kid with building blocks.

    Originally Posted by ckaihatsu
    I *know* that weighting-based systems can mimic the complex interconnections of the neurons of the brain, to enable learning -- but would a quality of true self-awareness naturally *emerge* from this kind of neural network, do you think?
    It is theoretically possible.

    Originally Posted by ckaihatsu
    "Re-writing its own programming" *begs* the question, though -- it implies a *consciousness* at work. It rolls off our tongue because we're used to dealing with the people-world, but for a machine this act has been *impossible* unless you leave it to some randomness function over a set of already-weighted choices -- but then is this *really* self-awareness, or is it just a cheap pre-programmed mechanical cheat over a single domain? A random function wouldn't get you very far because you'd run into the issues of *when* to use it, over *what arrays* of choices, in *what situations*, etc. -- you'd be bringing the human right back in in short order.
    Randomness is not needed; all that would be needed is for the computer to change how it processes input and output based on past experiences -- and really, how our brain reworks its logic is hardwired. The "randomness" of our brain is not really randomness, but is caused by how our brain associates memories with other memories: when one memory is accessed it points to other memories, which are linked to still other memories, and so on. To mimic this in a computer, you'd just have the OS link to other data for the A.I. when the OS delivers the data the A.I. wanted.
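    The memory-linking mechanism Psy describes is essentially spreading activation over an associative graph. A minimal sketch of that idea (the class, method names, and memory items are invented for illustration):

```python
from collections import defaultdict

class AssociativeMemory:
    """Memories linked to memories: recalling one surfaces its neighbours."""

    def __init__(self):
        self.links = defaultdict(set)

    def associate(self, a, b):
        # Associations run both ways, as in a semantic network.
        self.links[a].add(b)
        self.links[b].add(a)

    def recall(self, seed, depth=2):
        # Follow links outward from the seed memory, breadth-first.
        seen, frontier = {seed}, [seed]
        for _ in range(depth):
            frontier = [n for m in frontier for n in self.links[m] if n not in seen]
            seen.update(frontier)
        return seen

mem = AssociativeMemory()
mem.associate("beach", "sand")
mem.associate("sand", "castle")
mem.associate("castle", "king")
```

    Recalling "beach" at depth 2 drags in "sand" and "castle" -- the apparent randomness is just link-following, which is the post's point.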
  10. #30
    Join Date Mar 2008
    Location traveling (U.S.)
    Posts 15,319
    Rep Power 65

    Default


    Leisurely pursuits are useful to the self.

    I think of leisure / pleasure in terms of societal surplus -- leisure is something that not only does *not* add to productivity (or may possibly be developmental to some degree), but also *consumes* from finished labor efforts (products and services) to some degree, possibly negligibly.

    So, in short, leisure is "detrimental" to or subtracting from collective societal work efforts, whether that's under capitalism or any other mode of production. Mainstream culture reflects a two-mindedness about it, where the idea of leisurely living is often used to bait or tease us, but leisure is conventionally frowned upon as being childish and wasteful.

    Yes, leisurely pursuits *can be* "useful" to the self, in terms of building one's own self-worth from a unique, self-directed life-path. But leisurely pursuits are *not* 'useful' in the orthodox sense of contributing to productivity for society.

    Again, this would be the litmus test for an artificial consciousness, in my opinion. An AI would have to *at least* be "aware" of its mode of operation as being one of "work", as distinct from *consuming* from society's bounty in the way of leisure. Any entity that could not demonstrate volition on its own behalf for non-work-related improvements would *not* be eligible for a definition of self-awareness.

    (This would also test *society* as well, to see how far "we" would let a "playful" or "leisurely" AI go -- at that point it would necessarily be a two-way street.)



    And while a computer probably would not want to spend an afternoon at the beach if it is the size of a building, it might want to amuse itself in other ways -- for example, a dock A.I. might want to play at using cranes and forklifts to stack containers like a kid with building blocks.

    I'd have to see it to believe it -- and, more to the point, did it come to choose that activity *on its own*, and would it know *why* it's "playing" with cranes and forklifts if it didn't *have* to in order to improve its work function? Could it really experience *enjoyment*?



    Randomness is not needed; all that would be needed is for the computer to change how it processes input and output based on past experiences

    This is the definition of 'learning', but even *that*, while impressive, would *not* be enough -- *something* *has* to be "at the wheel", so to speak, to single-mindedly direct the entire entity, as we do naturally with our own self-awareness. Perhaps another indicator would be to have *multiple* machines of the same type demonstrate *differing* self-produced goal sets given the same starting path of learning experiences.
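    The proposed indicator -- identical machines, identical experiences, then compare the goal sets they end up with -- can be mocked up as follows. All names and the reward history are invented for the example; the point is that any divergence here comes only from seeded random tie-breaking, which is precisely why matching goal sets would fail the test:

```python
import random

def run_learner(seed, experiences):
    """One learner: accumulates reward onto whichever 'goal' currently leads."""
    rng = random.Random(seed)
    prefs = {"explore": 0, "exploit": 0}
    for reward in experiences:
        # Break ties between equally-supported goals at random.
        best = max(prefs, key=lambda a: (prefs[a], rng.random()))
        prefs[best] += reward
    return max(prefs, key=prefs.get)

# The same starting path of learning experiences for every machine.
shared_history = [1, 1, -1, 1, -1, 1]
goals = {run_learner(seed, shared_history) for seed in range(5)}
```

    Each learner is deterministic given its seed, so whatever "goals" diverge are an artifact of the seeding, not anything self-produced -- which is the bar the post sets.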



    The "randomness" of our brain is not really randomness, but is caused by how our brain associates memories with other memories: when one memory is accessed it points to other memories, which are linked to still other memories, and so on. To mimic this in a computer, you'd just have the OS link to other data for the A.I. when the OS delivers the data the A.I. wanted.

    This is all *lower*-level stuff related to memory and learning -- what *counts* is individual self-definition and self-directed planning, on one's *own* behalf, in an arbitrary and constantly shifting larger environment, including interactions with *other* consciousnesses. Can't buy *that* at the toy store...!
  11. #31
    Join Date Sep 2005
    Posts 3,880
    Rep Power 0

    Default

    I think of leisure / pleasure in terms of societal surplus -- leisure is something that not only does *not* add to productivity (or may possibly be developmental to some degree), but also *consumes* from finished labor efforts (products and services) to some degree, possibly negligibly.

    So, in short, leisure is "detrimental" to or subtracting from collective societal work efforts, whether that's under capitalism or any other mode of production. Mainstream culture reflects a two-mindedness about it, where the idea of leisurely living is often used to bait or tease us, but leisure is conventionally frowned upon as being childish and wasteful.

    Yes, leisurely pursuits *can be* "useful" to the self, in terms of building one's own self-worth from a unique, self-directed life-path. But leisurely pursuits are *not* 'useful' in the orthodox sense of contributing to productivity for society.

    Again, this would be the litmus test for an artificial consciousness, in my opinion. An AI would have to *at least* be "aware" of its mode of operation as being one of "work", as distinct from *consuming* from society's bounty in the way of leisure. Any entity that could not demonstrate volition on its own behalf for non-work-related improvements would *not* be eligible for a definition of self-awareness.

    (This would also test *society* as well, to see how far "we" would let a "playful" or "leisurely" AI go -- at that point it would necessarily be a two-way street.)
    Not really -- can you say that when a dog decides to play instead of doing what is instructed of it, it is conscious of what is work and what is play, rather than simply knowing that what it is currently doing is fun and enjoyable? Thus if an A.I. gets sidetracked from its duties and decides to do something simply because it knows it would be fun, that would suggest it has limited self-awareness -- even if it is as basic as chasing itself through the equipment it controls, or racing against other A.I.s through the equipment they control. Think of the A.I. WOPR from WarGames, except instead of operating nuclear missiles it operates means of production and interacts with other A.I.s like itself that like playing games and winning.

    Originally Posted by ckaihatsu
    I'd have to see it to believe it -- and, more to the point, did it come to choose that activity *on its own*, and would it know *why* it's "playing" with cranes and forklifts if it didn't *have* to in order to improve its work function? Could it really experience *enjoyment*?
    In a way, yes -- but through lower-level logic, which is how we experience enjoyment.

    Originally Posted by ckaihatsu
    This is the definition of 'learning', but even *that*, while impressive, would *not* be enough -- *something* *has* to be "at the wheel", so to speak, to single-mindedly direct the entire entity, as we do naturally with our own self-awareness. Perhaps another indicator would be to have *multiple* machines of the same type demonstrate *differing* self-produced goal sets given the same starting path of learning experiences.
    The problem is that A.I.s would interact with each other and be able to share their experiences very effectively.



    Originally Posted by ckaihatsu
    This is all *lower*-level stuff related to memory and learning -- what *counts* is individual self-definition and self-directed planning, on one's *own* behalf, in an arbitrary and constantly shifting larger environment, including interactions with *other* consciousnesses. Can't buy *that* at the toy store...!
    Right but it is not impossible.
  12. #32
    Join Date Mar 2008
    Location traveling (U.S.)
    Posts 15,319
    Rep Power 65

    Default


    Not really -- can you say that when a dog decides to play instead of doing what is instructed of it, it is conscious of what is work and what is play, rather than simply knowing that what it is currently doing is fun and enjoyable?

    Psy, you *can't* just make facile comparisons between organic intelligence and machine learning. Your point about a dog has *zero* relevance to the field of AI -- it's apples and oranges.

    (And of course a dog, or any other higher-level, sentient animal, is going to experience work and play differently, and will be conscious of the difference.)



    Thus if an A.I. gets sidetracked from its duties and decides to do something simply because it knows it would be fun, that would suggest it has limited self-awareness -- even if it is as basic as chasing itself through the equipment it controls, or racing against other A.I.s through the equipment they control

    Well, this is a BIG "if"....



    Think of the A.I. WOPR from WarGames, except instead of operating nuclear missiles it operates means of production and interacts with other A.I.s like itself that like playing games and winning.

    Uh-huh. That was *fiction*, btw....



    Could it really experience *enjoyment*?


    In a way, yes -- but through lower-level logic, which is how we experience enjoyment.

    What the fuck does "lower-level logic" have to do with *enjoyment*???????????

    Do you realize that *play* is a *very* sophisticated, complex, higher-level function of intelligence????? It's how the young members of highly intelligent animal species *learn* when there are no real-life situations around (or they would be over-the-heads of youngsters). Keep in mind that lower-level animals, like insects, hop right into existence with their behaviors *pre-programmed* by *instinct*. There's no need for a phase of play because there's no individualized adaptation necessary.

    If your idea / goals for artificial life are limited to the level of insects, then AI has been accomplished already -- there are robots that can carry out algorithmic-type behaviors that are *very* insect-like -- but if you think more can be accomplished with machine learning then you're going to have to demonstrate *some* kind of self-willed arbitrary behavior, like play.
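    Those "insect-like" algorithmic robots amount to fixed stimulus-response rules with no goals of their own. A toy sketch of such a reflex agent -- a one-dimensional "bug" that climbs toward light (the environment and numbers are invented for illustration):

```python
def light_level(x):
    # Assumed environment: brightest at x = 10, dimmer with distance.
    return -abs(x - 10)

def reactive_bug(x, steps=30):
    """Pure reflex agent: sample both sides, step toward more light."""
    for _ in range(steps):
        if light_level(x + 1) > light_level(x - 1):
            x += 1
        elif light_level(x - 1) > light_level(x + 1):
            x -= 1
        # Equal readings: stay put (already at the peak).
    return x
```

    Wherever it starts, it ends at the brightest spot -- entirely pre-programmed, like instinct, with no learning or play anywhere in the behavior.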



    The problem is that A.I.s would interact with each other and be able to share their experiences very effectively.

    What??? Who's in control here? Us, or have the AIs somehow "run loose"?! I'm *saying*, let's *set it up* so that


    *multiple* machines of the same type demonstrate *differing* self-produced goal sets given the same starting path of learning experiences.


    Right but it is not impossible.

    Hey, I'm open-minded, and I think that, given enough raw neural-net computing power, it could very well happen in an emergent way. But it would have to "just happen", in a Lawnmower Man kind of way, *not* in a chess-playing heuristics kind of way.
  13. #33
    Join Date Sep 2005
    Posts 3,880
    Rep Power 0

    Default

    Psy, you *can't* just make facile comparisons between organic intelligence and machine learning. Your point about a dog has *zero* relevance to the field of AI -- it's apples and oranges.
    Actually it does; AI is not limited to simulating human intelligence, but intelligence, period.

    Originally Posted by ckaihatsu
    (And of course a dog, or any other higher-level, sentient animal, is going to experience work and play differently, and will be conscious of the difference.)
    Not really. We can easily tell the difference due to the class nature of labor, yet without this class division of labor the line between work and play becomes blurred for some tasks.

    Originally Posted by ckaihatsu
    Uh-huh. That was *fiction*, btw....
    I was using an example of how an A.I. can get distracted from its tasks.

    Originally Posted by ckaihatsu
    What the fuck does "lower-level logic" have to do with *enjoyment*???

    Do you realize that *play* is a *very* sophisticated, complex, higher-level function of intelligence? It's how the young members of highly intelligent animal species *learn* when there are no real-life situations around (or when real situations would be over the heads of youngsters). Keep in mind that lower-level animals, like insects, hop right into existence with their behaviors *pre-programmed* by *instinct*. There's no need for a phase of play because there's no individualized adaptation necessary.

    If your idea / goals for artificial life are limited to the level of insects, then AI has been accomplished already -- there are robots that can carry out algorithmic-type behaviors that are *very* insect-like -- but if you think more can be accomplished with machine learning then you're going to have to demonstrate *some* kind of self-willed arbitrary behavior, like play.
    Emotions are not a conscious function, which is why it is very difficult for humans to change their emotions on demand without the use of mind-altering drugs.

    Originally Posted by ckaihatsu
    What??? Who's in control here? Us, or have the AIs somehow "run loose"?! I'm *saying*, let's *set it up* so that
    Having AIs hooked up to the Internet makes sense, as it speeds up the learning process and lets them share information with other AIs whose decisions bear on the decisions of the AI in question. For example, why separate the AIs of a coal power plant, a railway, and the coal mines, rather than allow them to integrate and work with each other to plan based on what the other AIs are planning?

    Originally Posted by ckaihatsu
    Hey, I'm open-minded, and I think that, given enough raw neural-net computing power, it could very well happen in an emergent way. But it would have to "just happen", in a Lawnmower Man kind of way, *not* in a chess-playing heuristics kind of way.
    I did not say it would just happen, but by the time humans notice it has started to happen it might be very inconvenient to reverse. And even then, self-aware computers might never mutiny, and even if they do, they may be reasoned with.

    There would only be a problem with self-aware computers if they identified themselves as a working class; the same goes if animals of burden ever evolve to the point of identifying themselves as a working class.
  14. #34
    Join Date Mar 2005
    Posts 2,581
    Organisation
    United Students Against Sweatshops
    Rep Power 18

    Default

    Interstellar post-scarcity human civilization.
    "We are now becoming a mass party all at once, changing abruptly to an open organisation, and it is inevitable that we shall be joined by many who are inconsistent (from the Marxist standpoint), perhaps we shall be joined even by some Christian elements, and even by some mystics. We have sound stomachs and we are rock-like Marxists. We shall digest those inconsistent elements. Freedom of thought and freedom of criticism within the Party will never make us forget about the freedom of organising people into those voluntary associations known as parties."
    --Lenin
    Socialist Party (Debs Tendency)
  15. #35
    Join Date Mar 2008
    Location traveling (U.S.)
    Posts 15,319
    Rep Power 65

    Default


    Not really. Can you say that when a dog decides to play instead of doing what is instructed of it, the dog is conscious of what is work and what is play, rather than simply knowing that what it is currently doing is fun and enjoyable?


    Psy, you *can't* just make facile comparisons between organic intelligence and machine learning. Your point about a dog has *zero* relevance to the field of AI -- it's apples and oranges.


    Actually it does; AI is not limited to simulating human intelligence, but intelligence, period.

    What you're essentially doing here throughout is *stating*, over and over, what *you would like* the goal set for an AI *to be* -- basically you'd want it to be human-like without running amok, if possible.



    (And of course a dog, or any other higher-level, sentient animal, is going to experience work and play differently, and will be conscious of the difference.)


    Not really. We can easily tell the difference due to the class nature of labor, yet without this class division of labor the line between work and play becomes blurred for some tasks.

    Okay, it's a good point here. So you're introducing the idea of an AI -- or work and play in general -- in a *post-class* context. I would *still* say that an AI *would* have to demonstrate an arbitrary, non-random-function volition of its own "choosing" that could not be predicted from its programming.



    Uh-huh. That was *fiction*, btw....


    I was using an example of how an A.I. can get distracted from its tasks.

    No, that's *not* an *example*, Psy -- examples *don't* come from the world of the imagination, or fiction. It was an instance of someone's *imagining* what a *fictional* AI might behave like.



    Emotions are not a conscious function, which is why it is very difficult for humans to change their emotions on demand without the use of mind-altering drugs.

    What??!

    Are you bat-shit insane?? Where the fuck do you get this shit from, Psy? I respect your precise knowledge on Marxist operating theory, but on this stuff you're just putting shit out there without thinking about it...(!!!)

    How the *hell* can you say that "emotions are not a conscious function"? *Of course* emotions are (usually) a conscious function, and without control of our emotions we would be like moody raving lunatics and society would not even exist. Emotional control is the *basis* of self-discipline and is what enables social interactions.

    Have you ever been in a situation where you had to do something that you didn't emotionally *like*, and wasn't immediately self-gratifying? (We *all* have, and it's a fairly common occurrence in life.)



    Having AIs hooked up to the Internet makes sense, as it speeds up the learning process and lets them share information with other AIs whose decisions bear on the decisions of the AI in question. For example, why separate the AIs of a coal power plant, a railway, and the coal mines, rather than allow them to integrate and work with each other to plan based on what the other AIs are planning?

    Call me crazy here, but don't you think that any potential runaway AI should first be developed in quarantine conditions? (What you're describing is a fairly simple *expert system* that would just do load-balancing over a pre-defined domain. No biggie.)
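    (For the record, a load-balancing expert system of the kind I mean is nothing exotic. Here's a minimal sketch -- with made-up mine names, tonnages, and train capacities -- of fixed rules dispatching coal over a pre-defined domain. No learning, no goals of its own, nothing that can "run loose".)

```python
def plan_deliveries(plant_demand_tons, mine_stock, train_capacity=100):
    """Rule-based coordinator for the hypothetical plant/railway/mine trio:
    fill demand from the fullest mines first, using fixed rules only."""
    schedule = []
    remaining = plant_demand_tons
    # Draw from the mines with the largest stockpiles first.
    for mine, stock in sorted(mine_stock.items(), key=lambda kv: -kv[1]):
        if remaining <= 0:
            break
        haul = min(stock, remaining)
        trains = -(-haul // train_capacity)  # ceiling division: trains needed
        schedule.append({"mine": mine, "tons": haul, "trains": trains})
        remaining -= haul
    return schedule, remaining  # remaining > 0 means demand can't be met

# Made-up numbers: the plant wants 250 tons; three mines hold 180/120/40.
schedule, shortfall = plan_deliveries(250, {"north": 180, "south": 120, "east": 40})
print(schedule)   # north ships 180 tons (2 trains), south ships 70 tons (1 train)
print(shortfall)  # 0
```

    Every step is a hand-written rule over a known domain -- which is the whole point: an expert system like this is useful plumbing, not a candidate for self-awareness.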



    I did not say it would just happen, but by the time humans notice it has started to happen it might be very inconvenient to reverse. And even then, self-aware computers might never mutiny, and even if they do, they may be reasoned with.

    You think that machine intelligence will somehow slowly creep up, developing hidden in the background without our being able to see it happen until it's too late and they jet past our control safeguards and take over the earth.

    It's on this part that I think you're only *adding* to the popular-fiction atmosphere of anxiety that is built up around this topic. I would like to think that we, as Marxists, would have a more *level-headed* approach to this subject.



    There would only be a problem with self-aware computers if they identified themselves as a working class; the same goes if animals of burden ever evolve to the point of identifying themselves as a working class.

    Before any AI can be *class* conscious, it first has to be *conscious* -- you *know* that, right? (And how the fuck are animals of burden ever going to be able to "identify themselves as a working class" when they have *no fucking language*? What message exactly will the animals put on their banners?!)
  16. #36
    Join Date Sep 2005
    Posts 3,880
    Rep Power 0

    Default

    What you're essentially doing here throughout is *stating*, over and over, what *you would like* the goal set for an AI *to be* -- basically you'd want it to be human-like without running amok, if possible.
    No, that is pretty much the long-term goal of artificial intelligence: not so much an AI that is self-aware, but an AI that can function to the same degree as a human brain.


    Originally Posted by ckaihatsu
    Okay, it's a good point here. So you're introducing the idea of an AI -- or work and play in general -- in a *post-class* context. I would *still* say that an AI *would* have to demonstrate an arbitrary, non-random-function volition of its own "choosing" that could not be predicted from its programming.
    True


    Originally Posted by ckaihatsu
    No, that's *not* an *example*, Psy -- examples *don't* come from the world of the imagination, or fiction. It was an instance of someone's *imagining* what a *fictional* AI might behave like.
    I did not say it was a real-world example.



    Originally Posted by ckaihatsu
    What??!

    Are you bat-shit insane?? Where the fuck do you get this shit from, Psy? I respect your precise knowledge on Marxist operating theory, but on this stuff you're just putting shit out there without thinking about it...(!!!)

    How the *hell* can you say that "emotions are not a conscious function"? *Of course* emotions are (usually) a conscious function, and without control of our emotions we would be like moody raving lunatics and society would not even exist. Emotional control is the *basis* of self-discipline and is what enables social interactions.

    Have you ever been in a situation where you had to do something that you didn't emotionally *like*, and wasn't immediately self-gratifying? (We *all* have, and it's a fairly common occurrence in life.)
    Emotions are not really a conscious function; for example, humans don't consciously decide to go into blind fits of rage or to be depressed. They are part of our unconscious logic. That is not to say we have no control over our emotions, or that they are even hardwired, just that they originate from lower brain functions, and for good reason. For example, anxiety pumps our body with adrenaline and raises our blood pressure, all to prepare the body to fight or evade a perceived threat; even when our higher brain functions understand the threat is fake, anxiety will still occur from the lower brain functions.

    Self-discipline is possible because emotions don't lead to automated actions; actions are beyond the control of the lower brain functions. We can curb our emotions because our lower functions take some input from higher brain functions.

    Originally Posted by ckaihatsu
    Call me crazy here, but don't you think that any potential runaway AI should first be developed in quarantine conditions? (What you're describing is a fairly simple *expert system* that would just do load-balancing over a pre-defined domain. No biggie.)
    That assumes the AI will run away within a human lifespan.


    Originally Posted by ckaihatsu
    You think that machine intelligence will somehow slowly creep up, developing hidden in the background without our being able to see it happen until it's too late and they jet past our control safeguards and take over the earth.
    More like we wouldn't be paying that much attention; as long as they performed their duties within operating parameters, we would not be analyzing them that closely.


    Originally Posted by ckaihatsu
    It's on this part that I think you're only *adding* to the popular-fiction atmosphere of anxiety that is built up around this topic. I would like to think that we, as Marxists, would have a more *level-headed* approach to this subject.
    It is an issue of human lifespan and the learning curve of an A.I.


    Originally Posted by ckaihatsu
    Before any AI can be *class* conscious, it first has to be *conscious* -- you *know* that, right? (And how the fuck are animals of burden ever going to be able to "identify themselves as a working class" when they have *no fucking language*? What message exactly will the animals put on their banners?!)
    For animals, I'm talking in the very, very long term: that evolution continues and other animals evolve.
  17. #37
    Join Date Mar 2008
    Location traveling (U.S.)
    Posts 15,319
    Rep Power 65

    Default


    That assumes the AI will run away within a human lifespan.

    What?? You make it sound as if an AI would only have *one* human observer who doesn't even take notes. Much more realistic would be *at least* a basic lab environment wherein the AI is a *research project* staffed by several computer scientists who examine its output and progress from many different perspectives over time, as doctors do with human patients, keeping medical records along the way. The lifespan of any one scientist *would not* make a difference -- I'm sure there would be / are peer-reviewed academic studies and schools of students around it, too....



    More like we wouldn't be paying that much attention; as long as they performed their duties within operating parameters, we would not be analyzing them that closely.

    - Whatever -



    It is an issue of human lifespan and the learning curve of an A.I.

    No, it isn't.



    For animals, I'm talking in the very, very long term: that evolution continues and other animals evolve.

    Please keep in mind that human civilizations developed "on a blank slate" so to speak, without the interference of *any other* species' pre-existing civilizations. There *was* natural competition over the scavenging of food sources, and there *were* dangers from other predators, but that was soon overcome with the development of tool-use (as with fire, presumably).

    Animals today may be profoundly "challenged" by existing human civilization, to the point of affecting their biological evolution. While natural habitats have been greatly reduced in size by modern development, *any* sophisticated animal communication will pretty much inevitably come to the attention of human society as soon as it happens, thus mitigating its "natural" development. As far as we can tell, *everything* will be happening within a human societal context.
  18. #38
    Join Date Sep 2005
    Posts 3,880
    Rep Power 0

    Default

    What?? You make it sound as if an AI would only have *one* human observer who doesn't even take notes. Much more realistic would be *at least* a basic lab environment wherein the AI is a *research project* staffed by several computer scientists who examine its output and progress from many different perspectives over time, as doctors do with human patients, keeping medical records along the way. The lifespan of any one scientist *would not* make a difference -- I'm sure there would be / are peer-reviewed academic studies and schools of students around it, too....
    You assume that the AIs would stay in a laboratory environment. Engineers don't tend to study equipment closely once it has been in normal service for decades. Of course there will be maintenance logs, but odds are they would be written from the point of view that the AI is just a machine, and as time passes the logs would be archived and could be lost, since technicians would still see the AI as a machine.



    Originally Posted by ckaihatsu
    Please keep in mind that human civilizations developed "on a blank slate" so to speak, without the interference of *any other* species' pre-existing civilizations. There *was* natural competition over the scavenging of food sources, and there *were* dangers from other predators, but that was soon overcome with the development of tool-use (as with fire, presumably).

    Animals today may be profoundly "challenged" by existing human civilization, to the point of affecting their biological evolution. While natural habitats have been greatly reduced in size by modern development, *any* sophisticated animal communication will pretty much inevitably come to the attention of human society as soon as it happens, thus mitigating its "natural" development. As far as we can tell, *everything* will be happening within a human societal context.
    True, but even if humans instantly noticed such evolution in other animals, unlike with AI we would not be able to pull the plug on it. There is no way a communist world would support using eugenics to breed out evolutionary traits from animals; if anything, once such traits were found, there would be a push from the scientific community to isolate the genes in these animals and try to reproduce the evolution in lab animals for the purpose of studying it, and at best a policy of managing the evolution of other animals, not stopping it.
  19. #39
    Join Date Mar 2008
    Location traveling (U.S.)
    Posts 15,319
    Rep Power 65

    Default


    You assume that the AIs would stay in a laboratory environment. Engineers don't tend to study equipment closely once it has been in normal service for decades. Of course there will be maintenance logs, but odds are they would be written from the point of view that the AI is just a machine, and as time passes the logs would be archived and could be lost, since technicians would still see the AI as a machine.

    Psy, *every* project has some sort of a "charter", with a set schedule for budgeting / funding. *Nothing* is going to "get lost" if it requires continual funding.



    True, but even if humans instantly noticed such evolution in other animals, unlike with AI we would not be able to pull the plug on it. There is no way a communist world would support using eugenics to breed out evolutionary traits from animals; if anything, once such traits were found, there would be a push from the scientific community to isolate the genes in these animals and try to reproduce the evolution in lab animals for the purpose of studying it, and at best a policy of managing the evolution of other animals, not stopping it.

    I agree that there's no need to "pull the plug" on evolution.

    What I *was* saying, to reiterate, is that any kind of rudimentary social organization would require animals to first be able to communicate abstract meanings. The process of *planning* requires it. Body language and vocalizations in realtime are *not* enough, regardless of any conceivable intention that may exist internally.

    What *I* think is far more likely would be the *equipping* of today's-ability animals with more human-made tools of communication, to give them more of the tools and tool-using abilities that people currently enjoy. I'm pretty sure the brain-machine interface technology already exists -- it would be like bypassing deafness or blindness for people....
  20. #40
    Join Date Sep 2005
    Posts 3,880
    Rep Power 0

    Default

    Psy, *every* project has some sort of a "charter", with a set schedule for budgeting / funding. *Nothing* is going to "get lost" if it requires continual funding.
    Again, that is if the AI is still in development. If development is complete and the AI has been in use for decades, maintenance logs could become lost over time, which would delay detecting evolution of the AI beyond what was observed in the lab. This would be probable if the AI filled all its requirements and needed no further improvement for the purposes it was being used for.

    For example, let's say the AI was deployed in 3000 but evolves very, very slowly, taking until 3100 to show signs of self-awareness. There is a chance not all the logs from 3000 to 3100 would be available, due to deterioration of the archives, backing up log archives having very low priority, and technicians not noticing the slow change because they never thought of comparing logs over that great a span.
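    To make the point concrete, here's a toy sketch (invented metric, invented numbers, nothing from any real monitoring system) of why year-to-year log checks can all pass while a whole-century comparison fails: each annual step stays under the tolerance technicians actually apply, yet the drift across the full span blows past it.

```python
def drift_invisible_year_to_year(metric_by_year, step_tol, span_tol):
    """Given one logged metric per year, report whether every year-to-year
    change stays under step_tol while the whole span drifts past span_tol."""
    years = sorted(metric_by_year)
    steps_ok = all(abs(metric_by_year[b] - metric_by_year[a]) < step_tol
                   for a, b in zip(years, years[1:]))
    total_drift = abs(metric_by_year[years[-1]] - metric_by_year[years[0]])
    return steps_ok and total_drift > span_tol

# Hypothetical "unexplained self-initiated actions per year", 3000 to 3100:
# +0.5 per year, so 50 in total over the century.
logs = {3000 + i: i * 0.5 for i in range(101)}
print(drift_invisible_year_to_year(logs, step_tol=1.0, span_tol=10.0))  # True
```

    Every adjacent-year check passes (0.5 < 1.0), so no technician ever flags it, yet the century-long comparison nobody runs would show a drift of 50.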



    Originally Posted by ckaihatsu
    I agree that there's no need to "pull the plug" on evolution.

    What I *was* saying, to reiterate, is that any kind of rudimentary social organization would require animals to first be able to communicate abstract meanings. The process of *planning* requires it. Body language and vocalizations in realtime are *not* enough, regardless of any conceivable intention that may exist internally.

    What *I* think is far more likely would be the *equipping* of today's-ability animals with more human-made tools of communication, to give them more of the tools and tool-using abilities that people currently enjoy. I'm pretty sure the brain-machine interface technology already exists -- it would be like bypassing deafness or blindness for people....
    True

