
View Full Version : Marx & AI



Strannik
29th June 2011, 10:26
From what I read of Marx, I understand that only workers can be exploited. Machinery does not produce new labor; it is the "frozen" labor of workers. I guess this means that if a capitalist cuts back a worker's inputs, the worker will usually adapt, while if a capitalist cuts back a machine's inputs, the machine will stop functioning or break down.

Does this mean that "exploitability" is a property of intelligence? That if one built a machine that could be exploited as Marx describes, that machine should be considered intelligent?

Mather
29th June 2011, 11:10
To be honest, I don't think we will ever find out what Marx's opinions on the subject of AI would have been. Likewise, it is a subject that has not really been covered by marxist theorists and writers.

Broletariat
29th June 2011, 12:10
Human labour is unique in its value-creating properties because value is a social relationship within human society.

AI cannot create value (and therefore be exploited) because it is not a member of human society.

Strannik
29th June 2011, 12:26
So in order to be considered intelligent, one would have to be (to function as) a member of human society?

Mather
29th June 2011, 16:33
So in order to be considered intelligent, one would have to be (to function as) a member of human society?

I think the issue of defining AI intelligence is a separate question to that of whether AI life can fit into and live within human society.

Also, given that there has been very little discussion on the topic of AI amongst marxists and anarchists, there is no real idea as to what role AI would play in relation to the means of production and class society.

Mr. Cervantes
29th June 2011, 16:38
I think it's interesting how technological advance has been used to a great degree in exploiting people, more specifically the working class.

More and more in this technological era, we are coming to the notion that specific types of human beings are obsolete, unable to adapt to modern society, and looked upon as no longer necessary to even exist. In this current era we are all on the verge of a technocracy, if we haven't already entered one.

I think as technology advances there will only be even more disregard for basic human life in general.

There is this cultic mantra that technology is bringing the world closer together and progressively improving all our lives. I think it is actually the direct opposite.

Technology in the hands of the ruling class is creating a world where the elite social classes have it easier in enslaving and controlling the rest of us. It is the elite ruling class that controls the prospects and applications of technology, to which it forcibly makes the rest of the populace submit.

Humanity's technological advance also means that every couple of years we find new innovations in creating weapons to kill ourselves in modernized war.

I'm somewhat of a Luddite, I guess, in that I do not see salvation in technology whatsoever; for me, as technology grows and becomes more innovative, so does our social damnation into oppression.

Broletariat
29th June 2011, 18:16
Also, given that there has been very little discussion on the topic of AI amongst marxists and anarchists, there is no real idea as to what role AI would play in relation to the means of production and class society.

AI would have to be considered as members of human society with all the rights of humans in order to create value. Otherwise, they're just means of production.

Mather
29th June 2011, 21:44
AI would have to be considered as members of human society with all the rights of humans in order to create value. Otherwise, they're just means of production.

Sadly I don't know enough on this particular subject to say any more really, other than the fact that if AI were to come into existence it would complicate a lot of things.

Broletariat
29th June 2011, 21:47
Sadly I don't know enough on this particular subject to say any more really, other than the fact that if AI were to come into existence it would complicate a lot of things.
An understanding of value as a social relationship makes this subject pretty clear.

It would probably be somewhat complicated to integrate AI into human society during a transition phase.

Mather
29th June 2011, 22:32
I would like to say that this thread is about what Marx's view of AI would have been had he been alive today and what implications AI would pose for marxist theory. I am willing to respond to your post so as to state my view but if you wish to start a debate about technology per se then please do so in another thread so as not to divert this one.

PS: I couldn't disagree more with your view on technology.

Broletariat
29th June 2011, 22:37
I would like to say that this thread is about what Marx's view of AI would have been had he been alive today and what implications AI would pose for marxist theory. I am willing to respond to your post so as to state my view but if you wish to start a debate about technology per se then please do so in another thread so as not to divert this one.

PS: I couldn't disagree more with your view on technology.

I feel like I've accurately espoused Marx's view on AI, I'm not really sure what you're talking about in most of this post to be honest.

Mather
29th June 2011, 22:39
I feel like I've accurately espoused Marx's view on AI, I'm not really sure what you're talking about in most of this post to be honest.

I was responding to NicolasCervantes.

I forgot to quote him, sorry.

Broletariat
29th June 2011, 22:52
I was responding to NicolasCervantes.

I forgot to quote him, sorry.

Oh alright, I was quite confused :P

Mr. Cervantes
30th June 2011, 07:43
I would like to say that this thread is about what Marx's view of AI would have been had he been alive today and what implications AI would pose for marxist theory. I am willing to respond to your post so as to state my view but if you wish to start a debate about technology per se then please do so in another thread so as not to divert this one.

PS: I couldn't disagree more with your view on technology.

Very well.

http://www.revleft.com/vb/technology-and-disruption-t157242/index.html?p=2159229

New thread created. I would like a civil debate.

Strannik
30th June 2011, 10:14
I was inspired by an article (sorry, I can't post links) that is written from a bourgeois perspective and asks what the status of artificially intelligent organisms would be in a future society (which for the author, naturally, is capitalist). For example, would robots (artificial self-aware organisms) be born "in debt" to those who produced them?

Actually, bourgeois A.I. research does not have a commonly accepted definition of A.I. either, so I was wondering whether marxist theory could offer a better framework.

From the discussion in this thread so far, I gather that intelligence is the ability to participate consciously in social relationships. Value is one such social relationship, which means that only intelligent systems can produce value, and only those who produce value can be exploited.

Let's consider these examples:

An ordinary worker in a factory is replaced with a machine, directed by a preprogrammed microcontroller, capable of performing all of the worker's labor operations.

A worker is replaced with a self-aware AI with equal or superior level of intelligence and ability to participate in social relations.

It's clear that we as a society would not be in a social relationship with the microcontroller, but we were in a social relationship with the worker who was replaced. Presumably, if the AI were truly intelligent, we would also be in a social relationship with it. So the difference between a worker and a microcontroller can't lie in the activities they perform on the production line. It lies in their ability to function as active and conscious members of social production: THEY would be able to do anything, not just what they were designed for.

So the definition of intelligence is the capability for creative labor?

Blackscare
30th June 2011, 10:43
AI cannot create value (and therefore be exploited) because it is not a member of human society.

I think that the issue of value creation is a separate issue from whether or not an AI could be "exploited". I understand that a machine, as Marx conceptualized them in his time, is essentially a "static" entity that only "creates" value when used by a human being. Value and exploitation are tied to human labor because only human beings can manipulate tools and machines to make them produce value. And a human being compelled to produce value under capitalism must be exploited. Or at least that was the case. Marx may explicitly state that value is tied only to human labor, but that may be because in his time that was the only possibility and glaringly obvious. Who is to say what Marx would have thought about the concept of AI that could operate, manage, and maintain productive forces alone?


I think that if you had an AI that could manage the operation of simpler machines, it would basically be serving the same function as a human being in an abstract sense. I think reducing the issue of value strictly to a matter of social relations is a bit reductionist, in a day and age drastically different from Marx's own.



If, hypothetically, an AI built by humans, manipulating more simple machines that were also built by humans, would be able to reproduce said machines and itself, eventually replacing all human-made parts, could it be said to at that point independently create value? Taking it farther, if the AI was drawing energy from a plant built and operated by itself or from another AI, if the materials required for production were gathered and transported by AIs, and in fact it was "isolated" from the product of human work in every way, could it not create value?


I suppose, though, that if you go back far enough, it all comes down to human labor in the end. The technology itself would be the product of human labor, and even if a scenario as described above happened, the result would be, essentially, just an advanced machine doing what it was constructed to do, owing its existence to man no matter how insulated its production became from human labor, or even if it was designed by AI as well. I don't know, this is sort of a mindfuck for me. I never thought my mind would run in circles like this talking about such a concrete system as Marxism. :P


Would the AI equivalent of human labor, the origin of value, possibly be a measurement of CPU cycles or some such? Although, unless we started treating the robotic underclass (metallic scum!) as individuals who could, for whatever reason, buy things, I doubt that calculating their "labor" would even be necessary.

Now, the problem with this scenario is that, IMO, automation on such a scale could only be brought about by socialism (which would realistically entail human efforts to build such a system, I'm not seriously suggesting that an AI could or would replace all industry on it's own) and would result in communism (eliminating all or the vast majority of jobs would make this necessary, and is exactly why the full potential of AI and automation can never be released under capitalism; there needs to be buyers of goods). If all goods are produced in an automated fashion, and there are far too few workers to support a market, the exchange of capital for goods would be impossible. Without a market as we think about it today, how could ANY goods have value recognizable in any form that would be at all relevant to human affairs?


Exploitation, however, is something that I think would not be a factor with AI. I mean, maybe if it started saying how much it liked kittens and that it was in love with a Dell or something, I'd have to rethink that, but I have my doubts that any sort of AI would have that sort of intelligence. In the stricter sense, it wouldn't be "working" to sustain itself economically without a choice in the matter, as humans are exploited; it would "work" simply because it was designed to work.



Xanax and Vyvanse make for an odd combination of total muddle-headedness and a compulsion to continue typing, sorry :P

Blackscare
30th June 2011, 11:20
I think it's interesting how technological advance has been used to a great degree in exploiting people, more specifically the working class.

More and more in this technological era, we are coming to the notion that specific types of human beings are obsolete, unable to adapt to modern society, and looked upon as no longer necessary to even exist. In this current era we are all on the verge of a technocracy, if we haven't already entered one.

I think as technology advances there will only be even more disregard for basic human life in general.

There is this cultic mantra that technology is bringing the world closer together and progressively improving all our lives. I think it is actually the direct opposite.


Well, this is true only in the context of capitalism, really. I won't really touch the weapons bit because, well, in a communist world this wouldn't really be a problem.


Under capitalism, automation is more of a means of wage suppression than anything else. The capitalist class, as a whole, has no interest in fully implementing automation because it would eliminate demand. Automation reduces the number of workers and makes those that remain more easily replaced and less essential to production. You made reference to the ruling class, to be fair, but I still think that you have too grim a view of technology. Industrial automation will ultimately be the only way to achieve communism and liberate people from work.


I feel that, failing some sort of near-future revolutionary cascade that isn't too likely to happen, the "final" worldwide communist revolution will result from the bourgeoisie's inability to reconcile the obvious advantages of automation and AI with the need to harvest surplus-value. The only answer to this will be intentionally suppressing its widespread use to an "acceptable" level, probably aided by unions and the like trying to protect human jobs from the "threat" of automation. It will become increasingly apparent to people that the progress of automation is being hampered by the ruling class, even if they originally supported such policy in their own self-interest, as automation in the home and throughout certain sectors of society becomes advanced and prevalent enough to make it obvious that it could be implemented throughout the economy without the working class losing out. The old argument that "capitalism has its flaws but it's the best system around" will be basically a joke, as anyone will be able to see that it clearly can't cope with such massive progress. Since capitalism is prone to ups and downs, so to speak, it's still pretty hard to convince people that capitalism is a doomed system altogether, even in times of crisis. People believe it will bounce back, etc. People need to lose faith in capitalism. This won't be the result of all our best efforts, or really happen, until a new system (something we proselytize for) to replace capitalism meets with a clear route to its implementation. Of course, economic crisis will have to precipitate this.

Broletariat
30th June 2011, 12:12
A few points I want to make.

1. Exploitation in the economic sense. Something that doesn't create value can't be exploited in the economic sense.

2. In a completely automated society as you have proposed, no one has a job, no one gets paid, nobody can sell anything to people who don't have money, there is no value.

3. I really don't think talking about value in terms of social relationships is "reductionist" because that's exactly what value is. It is no more and no less than a social relationship designed to distribute human labour across the spheres of production.

Strannik
30th June 2011, 14:03
I also think that value has to be a social construct, because the concept of value only has meaning when it is interpreted by humans.

As I remember now, labor is not the sole source of value. Nature is also a source of value. Natural resources can be interpreted by humans as valuable and natural processes can create value independent of human action.

Thus, when we have a fully automated, self-maintaining factory, it could be seen as a source of "natural value"; a system of such human-independent processes would be "interactive nature".

If such a factory were private property, then considering that people would have nothing to do there, the owner could only demand "servitude" for the use of the factory. But this system is not capitalism; it's feudalism.

By the way, what about the technological singularity, the point where humans lose control over technological processes? Can this be possible from a Marxist point of view?

Broletariat
30th June 2011, 16:08
As I remember now, labor is not the sole source of value. Nature is also a source of value. Natural resources can be interpreted by humans as valuable and natural processes can create value independent of human action.

You're remembering wrong; Marx said this:

"We see, then, that labour is not the only source of material wealth, of use values produced by labour. As William Petty puts it, labour is its father and the earth its mother. "

He was speaking of USE-value, not value, here. This causes your next bit to cease to follow.


If such a factory were private property, then considering that people would have nothing to do there, the owner could only demand "servitude" for the use of the factory. But this system is not capitalism; it's feudalism.

I don't see why you'd say this at all. If there was nothing to do at the factory, what could the owner possibly want in "servitude" from the people? Sex slaves or something?


By the way, what about the technological singularity, the point where humans lose control over technological processes? Can this be possible from a Marxist point of view?

Don't see why not.

Strannik
1st July 2011, 12:53
You're remembering wrong; Marx said this:

"We see, then, that labour is not the only source of material wealth, of use values produced by labour. As William Petty puts it, labour is its father and the earth its mother. "

He was speaking of USE-value not value here. This causes your next bit to cease to follow.

I don't see why you'd say this at all. If there was nothing to do at the factory, what could the owner possibly want in "servitude" from the people? Sex slaves or something?

Don't see why not.

Well, what I had in mind was that Marx would consider AI-led factories to belong to the realm of the "mother", and labor would be the act of "convincing" these factories to produce something that people consider useful.

Correct, this high-tech feudalism idea seems crazy :)

About the singularity, what I really wanted to ask is: would Marx have considered it a positive or a negative thing?

Broletariat
1st July 2011, 17:32
Well, what I had in mind was that Marx would consider AI-led factories to belong to the realm of the "mother", and labor would be the act of "convincing" these factories to produce something that people consider useful.

I don't see what prevents us from classifying current factories like this either.


About the singularity, what I really wanted to ask is: would Marx have considered it a positive or a negative thing?

That, I have no idea about.

Mr. Cervantes
6th July 2011, 08:18
Wouldn't artificial intelligence or machines in general fall under Marx's concept of dead labor?

Is there not a communist separation of dead labor versus living labor? Machines versus working people?

Broletariat
6th July 2011, 17:28
Wouldn't artificial intelligence or machines in general fall under Marx's concept of dead labor?

Yes


Is there not a communist separation of dead labor versus living labor? Machines versus working people?
There is, it's the distinction between constant and variable capital.

ckaihatsu
24th July 2011, 05:12
So the definition of intelligence is the capability for creative labor?





From the discussion in this thread so far, I gather that intelligence is the ability to participate consciously in social relationships. Value is one such social relationship, which means that only intelligent systems can produce value, and only those who produce value can be exploited.





I'm somewhat of a Luddite, I guess, in that I do not see salvation in technology whatsoever; for me, as technology grows and becomes more innovative, so does our social damnation into oppression.





By the way, what about the technological singularity, the point where humans lose control over technological processes? Can this be possible from a Marxist point of view?








Given that people will most likely continue to be around, it's very difficult to even *conceive* of a clean separation between human intelligence / activity and any possible, potential "AI". Certainly popular fiction amps up the dramatic qualities to make the imagined AI's creation something akin to the birthing of a metallic alien, but in reality, then as before, the very nature and form of any technologies will take on the qualities that human intelligence imbues into them, usually for specific applications.

So, in short, if there's no general political and economic backing for the creation of a purely autonomous artificial life form then it just won't happen -- instead, specific applications will prevail, like the one that's the topic of this thread.

However, that said, I think the substance of the concern behind standard cyberphobic AI-run-amok nightmares is that machine "intelligence" can be "faked" to a considerable extent simply by piling on layers of abstraction. Wouldn't any newcomer be fairly dazzled by today's cell phones that can call a person at the mere mention of that name into the phone? It *looks* like some kind of intelligent device, but we know that it's just a "trick" of using several layers of sub-systems in overlapping ways to deliver the end-user functionality we see and use.
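The layering described above is easy to sketch in code. Each stage below is a trivial string operation or table lookup, yet the composed pipeline looks "smart" from the outside; the contact names and numbers are invented purely for illustration:

```python
# Invented contact list standing in for the phone's address book.
CONTACTS = {"alice": "+1-555-0100", "bob": "+1-555-0199"}

def recognize_speech(audio: str) -> str:
    # Stand-in for a speech-recognition layer: here it just normalizes text.
    return audio.strip().lower()

def parse_command(text: str):
    # Trivial pattern match for "call <name>" commands.
    if text.startswith("call "):
        return ("dial", text[len("call "):])
    return ("unknown", text)

def dial(name: str) -> str:
    # Dumb table lookup, not understanding.
    return CONTACTS.get(name, "no such contact")

def assistant(audio: str) -> str:
    # Each layer is simple; only their composition appears intelligent.
    action, arg = parse_command(recognize_speech(audio))
    if action == "dial":
        return dial(arg)
    return "sorry?"

print(assistant("Call Alice"))
```

None of the individual layers knows anything; the apparent intelligence lives entirely in how the layers are stacked, which is the point being made about the cell phone.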








[W]e have to present ourselves to others a certain way, in realtime, going forward. This cannot merely be done mechanically or algorithmically, because if that were the case then one would be a machine oneself. Rather we give certain importance to past events and experiences in our lives, blending them into our present state of being.

Any artificial life that could not imbue its past with some kind of arbitrary, relative, yet self-selected subjective meaning could not be said to have a 'soul' in the secular sense of the term -- it would not be able to reference its own being from the past and so would be limited to referencing Wikipedia, at best.


Interpersonal Meanings

http://postimage.org/image/1d5a6d1c4/

MarxSchmarx
24th July 2011, 05:15
Interpersonal Meanings

http://postimage.org/image/1d5a6d1c4/

Sorry, I don't understand what that picture is getting at. Could you elaborate?

ckaihatsu
24th July 2011, 05:41
Sorry, I don't understand what that picture is getting at. Could you elaborate?


Sure! How much time you got...?

Heh -- for the context of this topic I'll just say that a potential artificial consciousness would have to be able to subjectively place a given "experience" of its own into that framework in an arbitrary self-determined way. The act of making such a subjective value judgment would demonstrate self-awareness, just as with anything political-oriented that we readily do as socialized human beings.

ÑóẊîöʼn
24th July 2011, 06:03
[W]e have to present ourselves to others a certain way, in realtime, going forward. This cannot merely be done mechanically or algorithmically, because if that were the case then one would be a machine oneself.

This sounds like you're arguing that there is something "special" about human intelligence, rather than it just being a particular subset of minds-in-general, like so:

http://i78.photobucket.com/albums/j99/NoXion604/summit_challenge_24.png
(taken from HERE (http://www.acceleratingfuture.com/people-blog/2007/the-challenge-of-friendly-ai/), it's well worth reading)

AIs, being created rather than evolved, can theoretically have any kind of mental architecture. The human brain, despite its status as the most complex human organ and its billions of individual neurons, has only a limited number of different states. AIs would not have such limitations.

ckaihatsu
24th July 2011, 06:21
This sounds like you're arguing that there is something "special" about human intelligence, rather than it just being a particular subset of minds-in-general, like so:


Well, as usual, you're advancing something that's purely imaginary and hypothetical, at best.





AIs, being created rather than evolved, can theoretically have any kind of mental architecture. The human brain, despite its status as the most complex human organ and its billions of individual neurons, has only a limited number of different states. AIs would not have such limitations.


The key operative word here being 'would' -- as in 'if'.

And, more technically, *any* typical computer these days is able to run "more states" -- through multitasking -- than the average person's consciousness handles at any given moment, but that confers no quality of self-aware consciousness. It just makes for a more powerful glorified calculator.

Jose Gracchus
24th July 2011, 07:10
If we're talking about 'conscious' or 'hard' AI then who knows. But there is existing AI, and it is related to production: it helps route 'just-in-time' production and distribution networks, it makes a huge and growing percentage of the stock market and bond market trades and purchases nowadays, automatically and autonomously.

ckaihatsu
24th July 2011, 07:20
You do realize that the approach you referenced to consciousness -- whether existing or potential -- is based in *idealism*, right???

Any talk of 'morals' or 'values' -- especially in this context -- serves to *abstract* one's goal-setting to the point of meaninglessness. Piling a bunch of them on top of each other, like "truth, justice, freedom, individuality, art, music, love, friendship" and then going out into the world to do them would only make one's life into a caricature.

By putting such a list out in front of oneself a person would be turning themselves into a machine -- the *opposite* of this field which claims to aim at turning machines into people.








Is it necessarily a good thing to have a powerful AI with the terminal value of keeping humans healthy? The classic story “With Folded Hands” by Jack Williamson is about AI’s that try to keep humanity happy by keeping them in nice, safe nursery playpens and lobotomize anyone who isn’t happy enough. Williamson’s AI’s had the terminal values of health and happiness. And who’s against health and happiness? But Williamson’s AI’s did not have terminal values for truth, justice, freedom, individuality, art, music, love, friendship, or any of the other things that we are happy about and stay alive for. The “Folded Hands” AI traded off security against freedom, without caring about the other half of the equation, without having a term for freedom in their utility function. Just because health is a good thing doesn’t mean that an ultra-powerful agent that only cares about health is a good thing.

It could be extremely unpleasant to be around an ultra-powerful moral agent that shares only some, rather than all, of your terminal values. And I’ve probably got hundreds of terminal values. I can’t print out the complete list any more than I can print out the positions of all the neurons in my cerebral cortex.




What we need is not to point an AI at our current values, but point an AI at the moral trajectory we would follow over time.

http://www.acceleratingfuture.com/people-blog/2007/the-challenge-of-friendly-ai/


I'll maintain that self-awareness is demonstrated through *independent* value-judgment-making and goal-setting, especially using arbitrary self-selected information from past experiences.


Consciousness, A Material Definition

http://postimage.org/image/35t4i1jc4/

ckaihatsu
24th July 2011, 19:26
If we're talking about 'conscious' or 'hard' AI then who knows. But there is existing AI, and it is related to production: it helps route 'just-in-time' production and distribution networks, it makes a huge and growing percentage of the stock market and bond market trades and purchases nowadays, automatically and autonomously.


The use of the term 'AI' for this is a misnomer. It's more accurately called an 'expert system' because its domain is very specific and well-defined, like that for a game of chess.

Jose Gracchus
24th July 2011, 20:40
I feel more comfortable using industry terminology, rather than idiosyncratic ones invented on the spot.

ckaihatsu
24th July 2011, 20:48
I feel more comfortable using industry terminology, rather than idiosyncratic ones invented on the spot.


An expert system is software that uses a knowledge base of human expertise for problem solving, or to clarify uncertainties where normally one or more human experts would need to be consulted. Expert systems are most common in a specific problem domain, and are a traditional application and/or subfield of artificial intelligence (AI).

http://en.wikipedia.org/wiki/Expert_system
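A classic expert system of the kind that definition describes is little more than forward-chaining over a hand-written rule base, which is why its competence stays confined to one narrow domain. A minimal sketch, with rules and facts invented purely for illustration:

```python
# Invented rule base: (set of conditions that must all hold, conclusion to add).
RULES = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "see_doctor"),
]

def infer(facts):
    """Forward-chain over RULES until no new conclusions appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            # Fire a rule only if all its conditions hold and it adds something new.
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(infer({"has_fever", "has_cough", "short_of_breath"})))
```

Everything the system "knows" was typed in by a human expert beforehand; outside the rule base it can conclude nothing at all, which is the sense in which its domain is, like chess, specific and well-defined.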

ÑóẊîöʼn
25th July 2011, 01:54
Well, as usual, you're advancing something that's purely imaginary and hypothetical, at best.

If we're going to be creating Friendly AI, and there's no reason as long as civilisation stays around that we won't, we have to start somewhere. I don't see how intelligence is any less achievable artificially than powered flight.


The key operative word here being 'would' -- as in 'if'.

And, more technically, *any* typical computer these days is able to run "more states" -- through multitasking -- than the average person's consciousness handles at any given moment, but that confers no quality of self-aware consciousness. It just makes for a more powerful glorified calculator.

Unless you believe in some form of Cartesian dualism, how can you say that the algorithms that lead to the emergence of intelligent behaviour are not reproducible in machines?


You do realize that the approach you referenced to consciousness -- whether existing or potential -- is based in *idealism*, right???

Any talk of 'morals' or 'values' -- especially in this context -- serves to *abstract* one's goal-setting to the point of meaninglessness. Piling a bunch of them on top of each other, like "truth, justice, freedom, individuality, art, music, love, friendship" and then going out into the world to do them would only make one's life into a caricature.

By putting such a list out in front of oneself a person would be turning themselves into a machine -- the *opposite* of this field which claims to aim at turning machines into people.

That's the whole point of Eliezer's example of the Folded Hands AI. It's not good enough to simply give a laundry list of "good things" for the AI to uphold, we have to work out what makes us capable of metamoral reasoning and apply that to Artificial Intelligence.


I'll maintain that self-awareness is demonstrated through *independent* value-judgment-making and goal-setting, especially using arbitrary self-selected information from past experiences.

I don't see how a computer couldn't do the same thing with the appropriate programming.

ckaihatsu
25th July 2011, 02:12
If we're going to be creating Friendly AI, and there's no reason as long as civilisation stays around that we won't,


No, this is *very* presumptuous about a society's sustained technological trajectories -- we could just as breezily assume that the space program would put people on Mars, or that weapons development would surpass the neutron bomb.





we have to start somewhere. I don't see how intelligence is any less achievable artificially than powered flight.


'Intelligence' is an abstraction -- it lends itself all-too-easily to institutionalization and commodification. I've provided an alternative to your grandiose imaginings of an artificial "intelligence" with a clear definition of 'consciousness', or self-awareness.





Unless you believe in some form of Cartesian dualism, how can you say that the algorithms that lead to the emergence of intelligent behaviour are not reproducible in machines?


Again you're positing the actual existence of what's *only* imaginary -- "algorithms that lead to the emergence of intelligent behavior".





That's the whole point of Eliezer's example of the Folded Hands AI. It's not good enough to simply give a laundry list of "good things" for the AI to uphold, we have to work out what makes us capable of metamoral reasoning and apply that to Artificial Intelligence.




I don't see how a computer couldn't do the same thing with the appropriate programming.


I guess I couldn't stop you even if I wanted to, huh?


= )

ÑóẊîöʼn
25th July 2011, 03:40
No, this is *very* presumptuous about a society's sustained technological trajectories -- we could just as breezily assume that the space program would put people on Mars, or that weapons development would surpass the neutron bomb.

Unlike your other examples, AI is simply too useful a tool to pass up. Any technological society would have myriad applications for AI technology.


'Intelligence' is an abstraction -- it lends itself all-too-easily to institutionalization and commodification. I've provided an alternative to your grandiose imaginings of an artificial "intelligence" with a clear definition of 'consciousness', or self-awareness.

Intelligence is not an abstraction - objects with intelligence can be observed to behave much differently to objects without it.


Again you're positing the actual existence of what's *only* imaginary -- "algorithms that lead to the emergence of intelligent behavior".

There must be some kind of physical process going on to produce intelligence, just as physical processes are necessary to powered flight. Whatever that is, what reason have we to believe we can't build a machine that "does intelligence" better than we can?

If our planes can fly further and faster than any naturally evolved animal, why can't our AIs think faster and better than even the smartest human?


I guess I couldn't stop you even if I wanted to, huh?

You've not provided any compelling arguments to suggest that Strong AI is impossible.

ckaihatsu
25th July 2011, 04:45
Intelligence is not an abstraction -


Yes, yes it is -- we should take as a baseline that (organic) organisms are genetically made to adapt to their environments and to thrive as much as possible, given those found conditions.

Obviously we as human beings are able to *change* the environment according to our will, and we have cognitive faculties that complement this ability -- internally playing out different "scenarios" symbolically.

But to use the term 'intelligence' means that you have to *define* 'intelligence', and then provide some kind of test that purports to *measure* 'intelligence'. This is where the *arbitrariness* sets in, because many will contend that a posited "test" is actually *controversial* at "measuring" "intelligence".

Everything beyond what we do to satisfy our most basic biological needs is actually *social* and *cultural*, and each kind of culture has its own areas of knowledge and expertise that it considers to be important. A test measuring 'intelligence' or 'capability' in one culture may tend to be wildly different from what *another* culture tends to think of as 'intelligent' or 'capable'.





objects with intelligence can be observed to behave much differently to objects without it.


What objects, exactly? (More imaginings on your part?) (Or do you actually mean 'robotics using expert systems' -- ?)





There must be some kind of physical process going on to produce intelligence, just as physical processes are necessary to powered flight. Whatever that is, what reason have we to believe we can't build a machine that "does intelligence" better than we can?


Because your definition of 'intelligence' so far rests entirely on referencing *other* abstract terms.





If our planes can fly further and faster than any naturally evolved animal, why can't our AIs think faster and better than even the smartest human?


Because "your AIs" don't exist. (You continue to use hyperbole.)





You've not provided any compelling arguments to suggest that Strong AI is impossible.


Yes, my argument is that people will decide to continue with *separate*, *non*-conscious electronic machinery to *augment* their own task-processing abilities. Due to a lack of mass interest, sufficient human labor will remain unavailable to bring a full "artificial consciousness" to fruition.

ÑóẊîöʼn
25th July 2011, 20:23
Yes, yes it is -- we should take as a baseline that (organic) organisms are genetically made to adapt to their environments and to thrive as much as possible, given those found conditions.

Obviously we as human beings are able to *change* the environment according to our will, and we have cognitive faculties that complement this ability -- internally playing out different "scenarios" symbolically.

Why can't that behaviour be replicated, or even bettered, by a machine?


But to use the term 'intelligence' means that you have to *define* 'intelligence', and then provide some kind of test that purports to *measure* 'intelligence'. This is where the *arbitrariness* sets in, because many will contend that a posited "test" is actually *controversial* at "measuring" "intelligence".

Measuring intelligence among humans is controversial because human abilities typically don't vary by much. But surely you can't deny that an average human is more intelligent than an average cat, which is in turn more intelligent than, say, any plane's autopilot function.


Beyond what we do to satisfy our most basic biological needs is actually *social* and *cultural*, and each kind of culture has its own areas of knowledge and expertise that it considers to be important. A test measuring 'intelligence' or 'capability' in one culture may tend to be wildly different from what *another* culture tends to think of as 'intelligent' or 'capable'.

Intelligence is about problem-solving capability, something that is applicable no matter the culture. Social and cultural interactions are mediated through our brains, not our kidneys.


What objects, exactly? (More imaginings on your part?) (Or do you actually mean 'robotics using expert systems' -- ?)

Biological organisms display a wide range of behaviours not associated with inanimate objects, and the range roughly tracks with the ratio of neural tissue to other tissues. Humans display a capability for planning for the future more advanced than any other known animal, and our brain-to-body ratio is huge.


Because your definition of 'intelligence' so far rests entirely on referencing *other* abstract terms.

Intelligence is demonstrable, not abstract. Objects with intelligence have goals, and that goal-seeking behaviour can be clearly observed. You can't reduce it to a single number, but that doesn't mean it isn't there.


Because "your AIs" don't exist. (You continue to use hyperbole.)

I'm asking you why they can't be created in the first place, not why they can't be created today.


Yes, my argument is that people will decide to continue with *separate*, *non*-conscious electronic machinery to *augment* their own task-processing abilities. Due to a lack of mass interest sufficient human labor will remain unavailable to bring a full "artificial consciousness" to fruition.

"Lack of interest"? What rock have you been living under? AI is of significant interest to many industries, as well as advocacy and research organisations.

JustMovement
25th July 2011, 20:39
interesting questions. As regards to the increasing use of robots (not AI though) Peter Thompson makes this interesting point:

In economic terms, what before was a tangible surplus product is now transformed into intangible surplus value. You enter into this apparently free contract with an employer but the wage you draw from that employment is only a part of the value you create. Just as before a portion of the cabbages and linen you made belonged to the master, now a proportion of the monetary value you make through the production process belongs to the employer and you will only be employed if a competitive rate of surplus value can be generated through your labour. This is at the root of Marx's version of the labour theory of value. The employer will provide the machines or tools for the completion of the task (constant capital) while the worker provides the labour power (variable capital). The employer will always be trying to improve labour productivity and can do so in various ways, but all of them boil down to improving the gap between your wage and the amount of value created by your labour power.

This means that for Marx the commodity labour power has a special character in that it is the only commodity which can be employed to increase value, while all the others are merely reified forms of dead human labour, useless without labour input. An advanced car-producing robot no more creates value than does a peasant's shovel. In theory there is no difference here to previous epochs where we accept the labour theory of value because it is measured in tons of cabbages and yards of linen but now that it becomes a commodified and monetarised relationship it also becomes a quasi-mystical one, with value apparently emerging mysteriously out of all sorts of transactions and technologies and with market mechanisms and competition wiping out and obfuscating the distinction between what it costs to produce something and its price.

On these threads, for example, a critique of Marx has emerged which posits a kind of paradoxical capitalist utopia in which we have reached 100% automation of production with no labour input at all anywhere by anyone. This reductio ad absurdum is of course as realistic as the world of Arnie's Terminator or of Joh Fredersen's Metropolis in which workers become surplus to requirements, but it does serve to illustrate a point because the further question then emerges as to how the goods produced are going to be purchased if no one is earning any wages through the productive process.

Under capitalism labour productivity may improve massively, but it can never be reduced to zero because that would remove all demand for the goods produced. You would then have to distribute commodities or vouchers to the entire population based on some sort of criteria not linked to labour input and then where do we end up? Oh, of course, at communism, in which each gives according to their ability and receives according to their need. Capitalist competition over labour productivity thus not only produces its own gravediggers but also provides the shovels (or robots) to finish the job.

from http://www.guardian.co.uk/commentisfree/belief/2011/may/09/karl-marx-part-6-economics

PS: if anyone can put it in spoilers that would be great, I tried but it didn't work.
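The wage/value gap Thompson describes can be made concrete with a quick sketch (the figures below are hypothetical, purely for illustration; "rate of surplus value" is surplus value divided by the wage, i.e. variable capital):

```python
# Hypothetical illustration of Marx's rate of surplus value:
# a day's labour creates more value than the wage paid for it,
# and the difference (surplus value) goes to the employer.

def rate_of_surplus_value(value_created: float, wage: float) -> float:
    """Surplus value divided by variable capital (the wage)."""
    surplus = value_created - wage
    return surplus / wage

# Suppose a day's labour-power creates 300 units of value
# but the worker is paid a wage of 100 units.
surplus = 300 - 100                      # 200 units of surplus value
rate = rate_of_surplus_value(300, 100)   # rate of surplus value = 2.0

print(surplus, rate)  # → 200 2.0
```

Improving "labour productivity" in Thompson's sense just means widening this gap: raising `value_created` (or cutting `wage`) pushes the rate up, which is why the constant-capital robot itself adds nothing new to the numerator.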

ckaihatsu
25th July 2011, 22:39
[W]e as human beings are able to *change* the environment according to our will, and we have cognitive faculties that complement this ability -- internally playing out different "scenarios" symbolically.





Why can't that behaviour be replicated, or even bettered, by a machine?


But all you're looking for here is more *information*, and/or more chess-like computational power. This enhanced performance would still make for an analogue of what we have today -- standard general-purpose computers and dedicated expert systems.





Measuring intelligence among humans is controversial because human abilities typically don't vary by much. But surely you can't deny that an average human is more intelligent than an average cat, which is in turn more intelligent than, say, any plane's autopilot function.


I'll actually call this hierarchy of yours into question. First let me eliminate the autopilot function from the list since that's an expert system. For any remaining *organic* consciousness we need to look at *where* and *how* that organism's abilities can be applied. All animals except ourselves do *not* have the ability to gainfully manipulate their environments, and so they are incapable of self-motivated labor, much less *organizing* labor among themselves.

To *really* test extended cognitive potentials there would have to be some kind of elaborate experiment in which lab animals' physical abilities are augmented with machinery and their situation engineered to where they'd have to use those artificial abilities to survive and reproduce. Note that I am *not* necessarily recommending this experiment actually be done.





Social and cultural interactions are mediated through our brains, not our kidneys.


Uh, yes, thank you for that.





Intelligence is about problem-solving capability, something that is applicable no matter the culture.


Okay, even taking that general definition as valid, now you have to take on the responsibility for defining *that* -- what kinds of problems solved, exactly, would demonstrate 'intelligence' -- ? (And whatever definition you posit here for that would *not* be objective, or free of controversy.)





Biological organisms display a wide range of behaviours not associated with inanimate objects, and the range roughly tracks with the ratio of neural tissue to other tissues. Humans display a capability for planning for the future more advanced than any other known animal, and our brain-to-body ratio is huge.




Intelligence is demonstrable, not abstract. Objects with intelligence have goals, and that goal-seeking behaviour can be clearly observed. You can't reduce it to a single number, but that doesn't mean it isn't there.


If an object has been *assigned* a goal then it's an expert system. It would have to somehow devise *from scratch* a goal of its own, and commit to goal-oriented actions over time, to be considered as being an artificial consciousness / AI.





"Lack of interest"? What rock have you been living under? AI is of significant interest to many industries, as well as advocacy and research organisations.




I'm asking you why they can't be created in the first place, not why they can't be created today.


I can only repeat what I already said:




[M]y argument is that people will decide to continue with *separate*, *non*-conscious electronic machinery to *augment* their own task-processing abilities. Due to a lack of mass interest, sufficient human labor will remain unavailable to bring a full "artificial consciousness" to fruition.