View Full Version : Misplaced Technological Fetishes
AnarchistRevolutionary
19th February 2013, 13:30
Many think technology is humanity's salvation. I am one of the few who doesn't; instead, I look at technology in a way that can be viewed as the direct opposite.
What we find time and time again is that those with the most capital control the direction and application of all created technologies.
Instead of technology being the liberator of people, it has become the method of their enslavement. Instead of discussion of worker or labor rights, what we see through the application of technology is the human workforce rendered obsolete as its labor is replaced by machines altogether.
I imagine a very different future where the oppressive global elites create militarized armed drone robots to enslave humanity.
I think it is about time people reexamined their misplaced faith in technology, because what they will find is that it isn't the salvation the elites have been promising everybody for centuries.
Dean
19th February 2013, 14:49
Moved to OI. Please be aware that primitivists are generally restricted here. As such, I'm moving this discussion to the Opposing Ideologies forum.
Primitivists are conceived of as those who oppose technology. It is not a problem to say that technology is often bad when used by the ruling classes. But openly opposing technology is viewed as fringe and unhelpful in the main forums.
ÑóẊîöʼn
19th February 2013, 16:39
Many think technology is humanity's salvation. I am one of the few who doesn't; instead, I look at technology in a way that can be viewed as the direct opposite.
What we find time and time again is that those with the most capital control the direction and application of all created technologies.
Instead of technology being the liberator of people, it has become the method of their enslavement. Instead of discussion of worker or labor rights, what we see through the application of technology is the human workforce rendered obsolete as its labor is replaced by machines altogether.
Full automation is impossible under capitalism, because consumers would find it hard (if not impossible) to buy products if they don't have a wage with which to do so.
I imagine a very different future where the oppressive global elites create militarized armed drone robots to enslave humanity.
What would be the point of (human) enslavement, if machines are making everything?
I think it is about time people reexamined their misplaced faith in technology, because what they will find is that it isn't the salvation the elites have been promising everybody for centuries.
The ruling classes have a controlling interest in how technology is developed and deployed. It's therefore not a mystery as to how we haven't all been freed by technology.
AnarchistRevolutionary
19th February 2013, 21:43
I am not opposed to all technology.
I just favor certain forms of technology over others; some specific forms I find very unhelpful to human beings' social interaction and daily existence.
The technologies I can do without are the ones that subjugate and alienate people, all in the name of control.
I sympathize with anarcho-primitivists in that I certainly view the birth of civilization as the moment of humanity's downfall; however, I wouldn't call myself one of them, as I would describe myself more as an agrarian or agriculturalist.
I believe that specific technology is fundamentally limited in its application to enhance human life, and I also view it in the hands of the ruling elite as potentially disastrous.
I question the motives of those who say science and technology are the primary salvation humanity could ever hope for. Such people are supplanting one dogma with another.
Still, in some ways I am not sure such technology will ever have a chance to be implemented oppressively worldwide, as I fear it could, since modern civilization is currently going through a global energy crisis that has the potential to destroy societies everywhere from within.
AnarchistRevolutionary
19th February 2013, 22:00
Full automation is impossible under capitalism, because consumers would find it hard (if not impossible) to buy products if they don't have a wage with which to do so.
What would be the point of (human) enslavement, if machines are making everything?
The ruling classes have a controlling interest in how technology is developed and deployed. It's therefore not a mystery as to how we haven't all been freed by technology.
In every form of governance the ruling party dictates the application, direction, usage, and discipline of how technology is used. It's not just a capitalist phenomenon.
The point of human enslavement would be to fulfill the labor that machines haven't yet been adequately calibrated to do themselves. I imagine that once everything does become automated, the elites would purge all individuals whom they view as no longer having any function that directly benefits them.
Full automation of everything within a capitalist society, or any society for that matter, is very possible if the elites own everything privately.
AnarchistRevolutionary
19th February 2013, 22:42
Just for the record, I don't advocate going back to a hunter-gatherer form of living. Most people are either unwilling or unable to do so after tasting and experiencing all the lavish conveniences of civilization, for better and worse. Still, I must say I do admire some of the few hunter-gatherer tribes still in existence, as they are a living testament to our past.
The type of agrarian or agricultural existence I favor is something I personally aspire towards as my own lifestyle. I don't believe that everybody else should conform to it.
I choose and aspire towards such a lifestyle myself because I view modern civilization to be doomed and self destructive. I think modern civilization will inevitably destroy itself from within.
I feel that I am entitled to my own opinions.
I am willing to explain more of my sentiments if it is asked of me.
ÑóẊîöʼn
20th February 2013, 08:22
I am not opposed to all technology.
I just favor certain forms of technology over others; some specific forms I find very unhelpful to human beings' social interaction and daily existence.
The technologies I can do without are the ones that subjugate and alienate people, all in the name of control.
An important part of the social impact of technology is in its application, unless you're trying to argue that spraying a meeting of bourgeois politicians with bullets is equivalent to doing the same to some random village.
I sympathize with anarcho-primitivists in that I certainly view the birth of civilization as the moment of humanity's downfall; however, I wouldn't call myself one of them, as I would describe myself more as an agrarian or agriculturalist.
Warmed-over "original sin" nonsense with no basis in material reality. Was the development of civilisation a natural occurrence brought forth by material conditions, or something else?
I believe that specific technology is fundamentally limited in its application to enhance human life, and I also view it in the hands of the ruling elite as potentially disastrous.
Technology such as... ?
I question the motives of those who say science and technology are the primary salvation humanity could ever hope for. Such people are supplanting one dogma with another.
Your position isn't much of an improvement, by my reckoning.
Still, in some ways I am not sure such technology will ever have a chance to be implemented oppressively worldwide, as I fear it could, since modern civilization is currently going through a global energy crisis that has the potential to destroy societies everywhere from within.
What energy crisis? You mean the profiteering of the energy companies?
In every form of governance the ruling party dictates the application, direction, usage, and discipline of how technology is used. It's not just a capitalist phenomenon.
Simplistic to the point of uselessness. Feudalism, for example, doesn't have "ruling parties". In fact under modern capitalism the difference between mainstream political parties likely to be elected is mainly cosmetic.
The point of human enslavement would be to fulfill the labor that machines haven't yet been adequately calibrated to do themselves. I imagine that once everything does become automated, the elites would purge all individuals whom they view as no longer having any function that directly benefits them.
Consumer capitalism requires consumers, and the service sector is a major sector of industry in developed countries. Where's your evidence that a significant proportion of capitalists have any intention whatsoever of destroying their source of profits?
Full automation of everything within a capitalist society, or any society for that matter, is very possible if the elites own everything privately.
It may be possible but you have yet to establish anything approaching likelihood.
Just for the record, I don't advocate going back to a hunter-gatherer form of living. Most people are either unwilling or unable to do so after tasting and experiencing all the lavish conveniences of civilization, for better and worse. Still, I must say I do admire some of the few hunter-gatherer tribes still in existence, as they are a living testament to our past.
"Noble Savage" bullshit. While their method of subsistence may be the longest-running yet, hunter-gatherers in this day and age face a whole new constellation of issues that hunter-gatherers of past eras didn't have to worry about.
The type of agrarian or agricultural existence I favor is something I personally aspire towards as my own lifestyle. I don't believe that everybody else should conform to it.
Therein lies the problem.
I choose and aspire towards such a lifestyle myself because I view modern civilization to be doomed and self destructive. I think modern civilization will inevitably destroy itself from within.
Many people have felt the same way you have. Funny how they died before civilisation did.
I feel that I am entitled to my own opinions.
You are not, however, entitled to your own facts.
T-800
21st February 2013, 14:16
Instead of technology being the liberator of people, it has become the method of their enslavement. Instead of discussion of worker or labor rights, what we see through the application of technology is the human workforce rendered obsolete as its labor is replaced by machines altogether.
If machines become hardier, stronger, smarter and purer* than human beings, then why exactly is it that they shouldn't rule the world?
Is there something I'm missing here?
* i.e. above / physically incapable of typical primate behavior
ÑóẊîöʼn
22nd February 2013, 03:00
If machines become hardier, stronger, smarter and purer* than human beings, then why exactly is it that they shouldn't rule the world?
Is there something I'm missing here?
* i.e. above / physically incapable of typical primate behavior
Fuck no. Because those properties don't necessarily make them any better at discernment in the human sphere. Greater intelligence is no guarantee of infallibility.
Yuppie Grinder
22nd February 2013, 04:17
Moved to OI. Please be aware that primitivists are generally restricted here. As such, I'm moving this discussion to the Opposing Ideologies forum.
Primitivists are conceived of as those who oppose technology. It is not a problem to say that technology is often bad when used by the ruling classes. But openly opposing technology is viewed as fringe and unhelpful in the main forums.
I disagree with a lot of what OP has to say, especially his predictions of the future, but being opposed to the fetishization of technology is not something you can get restricted for.
NGNM85
22nd February 2013, 05:40
I disagree with a lot of what OP has to say, especially his predictions of the future, but being opposed to the fetishization of technology is not something you can get restricted for.
Technically, no. Of course, such statements are conveniently (and probably deliberately) vague. I'm not going to single anybody out, but there are a couple of members who love to be really evasive and creative with language, and insist that they aren't primitivists, but these same members leap to its defense every time it's criticized, and spare no expense in extolling its virtues. That sort of thing tends to make the needle jump on my bullshit detector. That said, while primitivism is a philosophical dead-end, and totally antithetical to Marxism and Anarchism, I also tend to think the administration really needs to lighten up on Restricting and Banning people left and right. So while said individuals may very well merit Restriction according to the spirit, if not the letter, of the forum policy, and even though I find such rhetoric personally objectionable, I would not support any effort to do so.
Raúl Duke
22nd February 2013, 06:24
Many think technology is humanity's salvation. I am one of the few who doesn't; instead, I look at technology in a way that can be viewed as the direct opposite.
What we find time and time again is that those with the most capital control the direction and application of all created technologies.
Instead of technology being the liberator of people, it has become the method of their enslavement. Instead of discussion of worker or labor rights, what we see through the application of technology is the human workforce rendered obsolete as its labor is replaced by machines altogether.
I imagine a very different future where the oppressive global elites create militarized armed drone robots to enslave humanity.
I think it is about time people reexamined their misplaced faith in technology, because what they will find is that it isn't the salvation the elites have been promising everybody for centuries.
Despite this thread being moved to OI and oddly being considered "primitivist" right off the bat, many leftists actually agree with this premise, but take a more nuanced view.
Technology is, in a vacuum, neutral (for the most part). But we don't live in a vacuum; we live in a capitalist society, with all that entails. Thus technology will be implemented to keep this system afloat. Arguably, some elements of technology may manifest "internal problems" for capitalism (e.g. increased labor efficiency from machines and automation decreases the need for employees; but with fewer employed people with "disposable income," who is going to buy those commodities?). Some technology has been used for good, even under this class system. Some elements of technology may be alienating as well.
But I feel people don't really have a lot of "misplaced faith" in technology, except for some idealists and so on; not like how I imagine it was in the '50s or '60s.
If machines become hardier, stronger, smarter and purer* than human beings, then why exactly is it that they shouldn't rule the world?
Is there something I'm missing here?
* i.e. above / physically incapable of typical primate behavior
I don't think we would ever create machines to rule the world, ever. We would probably put it somewhere in their programming, like the 3 laws and probably a directive about serving people.
Full automation of everything within a capitalist society, or any society for that matter, is very possible if the elites own everything privately.
It may be possible, but anathema to a class society like capitalism. If all the working class is unemployed... I don't imagine a system like this living that long unless profound social changes occur, possibly to the point where it isn't capitalism anymore.
The type of agrarian or agricultural existence I favor is something I personally aspire towards as my own lifestyle. I don't believe that everybody else should conform to it.
I choose and aspire towards such a lifestyle myself because I view modern civilization to be doomed and self destructive. I think modern civilization will inevitably destroy itself from within.
I feel that I am entitled to my own opinions.
You could start a commune...
Even post-revolution, you can continue your agricultural work.
T-800
22nd February 2013, 19:56
Fuck no. Because those properties don't necessarily make them any better at discernment in the human sphere. Greater intelligence is no guarantee of infallibility.
Why should "infallibility" be a criterion that we judge something by?
Because rationality is necessarily bounded, this is a red herring to begin with.
I don't think we would ever create machines to rule the world, ever. We would probably put it somewhere in their programming, like the 3 laws and probably a directive about serving people.
Well, first of all, I doubt some kind of "categorical imperative" can be coded directly into an artificial general intelligence, because it will have to learn and develop, rather than know things innately from the beginning.
But in any case, this leaves the progress of civilization permanently bottlenecked by human evolution (or the want thereof).
Raúl Duke
22nd February 2013, 22:53
Well, first of all, I doubt some kind of "categorical imperative" can be coded directly into an artificial general intelligence, because it will have to learn and develop, rather than know things innately from the beginning.
AI isn't "all or nothing." I may be wrong, but people have put particular instructions in code while allowing the AI "agents/bots" to find out the more effective way of doing the things they are instructed to do.
You could put code in to "override" certain things (killing people, I guess, would be a futurist example) while allowing AI to handle many other things (the fastest way to make a lasagna, to fix the door, etc.).
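To make the idea concrete, here is a toy sketch of the kind of "override" being described; every name in it is invented for illustration, and it stands in for no real AI system. A hard-coded filter vetoes prohibited actions no matter what the learned part of the system proposes:

```python
# Toy sketch of a hard-coded "override" layer around a learned policy.
# All names and values are hypothetical, purely for illustration.

FORBIDDEN = {"harm_human"}  # hard-coded prohibitions ("thou shalt not")

def learned_policy(candidate_actions):
    """Stand-in for whatever behavior the AI has learned; here it
    simply picks the highest-scoring (action, score) pair."""
    return max(candidate_actions, key=lambda a: a[1])

def choose_action(candidate_actions):
    # The override filters out forbidden actions *before* the learned
    # policy ever gets to choose among them.
    allowed = [a for a in candidate_actions if a[0] not in FORBIDDEN]
    if not allowed:
        return ("do_nothing", 0.0)
    return learned_policy(allowed)

actions = [("harm_human", 0.9), ("make_lasagna", 0.7), ("fix_door", 0.5)]
print(choose_action(actions)[0])  # -> make_lasagna
```

The learned part is free to get better at lasagna or door repair, but the veto itself never changes — which is exactly the arrangement being suggested.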
Pure AI may be the most common example, but they're all relegated to research labs (like the ones at Carnegie Mellon U) and mechanically engineered (if they are robots; some AI are just programs) in a way that minimizes hazard if the robot ended up turning into a genocidal maniac, which hasn't happened yet.
ÑóẊîöʼn
22nd February 2013, 22:58
Why should "infallibility" be a criterion that we judge something by?
Because if we're talking about handing over control of the world to something that isn't human, then we should be making damn sure that doing so won't come back to bite our arses.
Because rationality is necessarily bounded, this is a red herring to begin with.
What is "rational" is dependent on the logical premises being used. See this LessWrong wiki entry on paperclip maximisers (http://wiki.lesswrong.com/wiki/Paperclip_maximizer).
Well, first of all, I doubt some kind of "categorical imperative" can be coded directly into an artificial general intelligence, because it will have to learn and develop, rather than know things innately from the beginning.
But it will need to learn and develop in the right way, and in order for us to make sure of that we will have to have a better knowledge of meta-ethics than we do at present.
But in any case, this leaves the progress of civilization permanently bottlenecked by human evolution (or the want thereof).
Technologies are emerging that offer the potential for humans to directly control their own evolutionary path - AI is not strictly necessary.
T-800
23rd February 2013, 06:53
AI isn't "all or nothing." I may be wrong, but people have put particular instructions in code while allowing the AI "agents/bots" to find out the more effective way of doing the things they are instructed to do.
You could put code in to "override" certain things (killing people, I guess, would be a futurist example) while allowing AI to handle many other things (the fastest way to make a lasagna, to fix the door, etc.).
That assumes that "thou shalt" / "thou shalt not" propositions can be hard-coded into the artificial intelligence in question.
This assumes that a successful AGI can stem from a logicist / propositional basis.
However, the vast majority, if not all of the most successful (embodied) artificial intelligences of late have proceeded from a sub-symbolic architecture which does not allow, or at least severely hinders, this sort of hard-coding.
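The contrast being drawn can be sketched in a few lines (a toy illustration with invented names and arbitrary numbers, not a real architecture). In a symbolic system a prohibition is one explicit, inspectable rule; in a sub-symbolic system the same behavior is smeared across learned weights, and there is no single line you can edit to guarantee an outcome:

```python
# Symbolic: the prohibition is an explicit, inspectable proposition.
rules = {"target_is_human": "do_not_fire"}

def symbolic_decide(fact):
    return rules.get(fact, "fire")

# Sub-symbolic: the "decision" lives in numeric weights learned from
# data; nothing in the numbers states "do not fire at humans".
weights = [0.4, -1.3, 0.7]  # arbitrary made-up values for illustration

def subsymbolic_decide(features):
    score = sum(w * x for w, x in zip(weights, features))
    return "fire" if score > 0 else "do_not_fire"

print(symbolic_decide("target_is_human"))   # -> do_not_fire
print(subsymbolic_decide([1.0, 1.0, 0.0]))  # score = -0.9 -> do_not_fire
```

In the second case the only way to "hard-code" a guarantee is to change the training or bolt a symbolic filter on top, which is why hard-coding deontic axioms into such architectures is difficult.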
Because if we're talking about handing over control of the world to something that isn't human, then we should be making damn sure that doing so won't come back to bite our arses.
It's just evolution. Don't you get it?
What is "rational" is dependent on the logical premises being used. See this LessWrong wiki entry on paperclip maximisers.
Yeah I'm aware of Less Wrong and Eliezer Yudkowsky. I don't take "friendly AI" or, for that matter, the Singularity very seriously.
Sorry for eliding that link, btw. Can't post it.
Technologies are emerging that offer the potential for humans to directly control their own evolutionary path - AI is not strictly necessary.
So if, by and by, I turn myself into an artificial general intelligence you would have loathed if it had just been built from scratch, you would have no problem with this?
Let's Get Free
23rd February 2013, 07:33
I think that in our own time, the development and growth of technology and the growth of cities has brought man's alienation to a breaking point. Western man finds himself confined to a largely synthetic urban environment, far removed physically from the land, his relationship to the natural world mediated by machines. Not only does he lack familiarity with how most of his goods are produced, but his foods bear only the faintest resemblance to the plants and animals from which they are derived. Boxed into this environment, modern man is denied even a spectatorial role in the agricultural and industrial systems that satisfy his material needs. He is a pure consumer, an insensate receptacle. It wouldn't be right to say that he is disrespectful toward his natural environment: the fact is that he scarcely knows what ecology means or what his environment requires to maintain balance. Technology reflects the interests of those in power, so I think its further development can only make this problem worse.
ÑóẊîöʼn
25th February 2013, 08:31
It's just evolution. Don't you get it?
What is just evolution?
Yeah I'm aware of Less Wrong and Eliezer Yudkowsky. I don't take "friendly AI" or, for that matter, the Singularity very seriously.
You don't think that (potentially) superhuman artificial intelligences might present an existential risk to human civilisation as we recognise it?
So if, by and by, I turn myself into an artificial general intelligence you would have loathed if it had just been built from scratch, you would have no problem with this?
Why would you turn yourself into something loathsome?
T-800
25th February 2013, 17:08
What is just evolution?
You know, like: "Evolution, Morpheus, evolution ..."
You don't think that (potentially) superhuman artificial intelligences might present an existential risk to human civilisation as we recognise it?
Well maybe, but that isn't my point.
My point is that hard-coding deontic axioms into a successful artificial general intelligence will be difficult if not impossible, because the sort of symbolic architecture that would support them is now widely considered too brittle for real world use.
Find a successful (embodied) AI of the now that reasons like SHAKEY did.
Why would you turn yourself into something loathsome?
*loathsome to you
zoot_allures
25th February 2013, 18:13
Technology has been around for as long as humanity. I don't think it makes much sense to conceive of "technology", in general, as causing either liberation or enslavement. Any act of liberation will involve technology, and any act of enslavement will, too.
The notion of "militarized armed drone robots" enslaving humanity strikes me as rather silly. However, I do agree that in our society, there are many people who have an unjustified degree of faith in modern science and technology. I think this is related to a general (also unjustified) faith in Progress (it is one thing to note local progress; another to believe in Progress as some kind of basic fact of human society, such that (in the long term, at least, and barring invasion, natural disaster, etc) civilization is bound by some mysterious cosmic force to bring us more wealth, more happiness, longer lives, and so on).
I don't think it's a case of elites duping people. My feeling is that elites are probably as blind to the long-term influence and direction of technology as anybody else. Attitudes towards technology seem to be related to more general social trends. The modern view of technology emerged, I think, with the development of modern science, during the Enlightenment. Earlier, around the Medieval period, far from there being any kind of faith in technology, it was widely believed (1) that the world had fallen from a "golden age", and even (2) that the end of the world was imminent. Many resistance movements took it for granted that civilization was corrupt - the Brethren of the Free Spirit, the Diggers, the Adamites, etc - and attempted to turn back the wheels of technology.
Today, some of the most prominent people reacting against new technology are the elites themselves: for example, consider the attitude of the music industry towards the internet and downloading (which isn't to suggest that the internet should be seen as a source of liberation), or the attitude of the oil industry on green energy.
My guess is that modern faith in technology derives from:
(1) the huge advances in science during the last century or so, and the very palpable impact these advances have had on our lives;
(2) excessively harsh judgments about more "primitive" ways of life;
(3) the acceptance of evolutionary theory and the popular misunderstandings of it: people often imagine evolution as something progressive (so progress applies not just to human society but to all of life), and as something that strongly influences society;
(4) blindness to the problems.
Re (4), our judgments of technology tend to reinforce our positive belief. So, technology takes the credit for the things that have improved the world (antibiotics, say) but often avoids blame for the problems it's caused. Climate change, for example, isn't generally blamed on technology. Rather, climate change is a problem for technology to solve - and, indeed, we must be thankful that we have the technology to detect such a problem. Thus a severe complication is transformed into a triumph, and the faith in technology may remain intact.
Thug Lessons
25th February 2013, 19:11
I thought this thread was going to be about people who blindly promote technology as a solution to our problems rather than confronting the social reality that creates those problems, (we're going to solve global warming with fusion power and in the future everything will be automated!!), but instead it's a bunch of romantic nonsense about "good" and "bad" technologies that repeats all of the mistakes of that former view, except in reverse. Epic fail, OP.
Orange Juche
26th February 2013, 03:35
I tend to think that while some technology is spectacular - such as modern medical technology - the problem lies in the fact that no one questions the morality of certain "progress" in technology, and it is culturally rendered morally neutral. The negative impacts of technologies that are more harmful than good aren't recognized as such for this reason.
I think any technology should be evaluated by culture as to its use, potential impacts, and coming from that, its value and actual need in a society.
ÑóẊîöʼn
28th February 2013, 13:29
You know, like: "Evolution, Morpheus, evolution ..."
Evolution is not a phenomenon with moral validity in itself, though. It's a natural process, not something with any ethical force. It just happens, without a care for fairness or justice.
Well maybe, but that isn't my point.
My point is that hard-coding deontic axioms into a successful artificial general intelligence will be difficult if not impossible, because the sort of symbolic architecture that would support them is now widely considered too brittle for real world use.
So you think that's the only way of making Friendly AI? Although you say that you're familiar with Yudkowsky, it seems you haven't actually listened to much of what he's said. Hard-coding deontic axioms is a no-go because it fossilises the AI's ethical framework into something from the period it was created - such an entity would have no meta-ethical capacity. See Yudkowsky: The Challenge of Friendly AI (http://www.youtube.com/watch?v=nkB1e-JCgmY)
Find a successful (embodied) AI of the now that reasons like SHAKEY did.
Why?
*loathsome to you
So you're saying that if you were to become a mechanical/electronic entity, you would become a violent, domineering, oppressive individual?
T-800
1st March 2013, 16:16
Evolution is not a phenomenon with moral validity in itself, though. It's a natural process, not something with any ethical force. It just happens, without a care for fairness or justice.
Whereof H. sapiens.
So you think that's the only way of making Friendly AI? Although you say that you're familiar with Yudkowsky, it seems you haven't actually listened to much of what he's said. Hard-coding deontic axioms is a no-go because it fossilises the AI's ethical framework into something from the period it was created - such an entity would have no meta-ethical capacity. See Yudkowsky: The Challenge of Friendly AI
So does "correct" meta-ethical reasoning also require a strictly symbolic framework of reasoning?
If not, how can you get GUARANTEED "proper" results from a more flexible sub-symbolic architecture whose execution is in a sense stochastic?
Why?
See above.
So you're saying that if you were to become a mechanical/electronic entity, you would become a violent, domineering, oppressive individual?
I suspect that humans would fire the first shot in that war.
YouthLiberation
1st March 2013, 16:36
Great, so now we are discussing a hypothetical "rise of the machines" in a thread that was started by a closet primitivist. Ain't that quaint. Robots will never be intelligent; ever heard of "the Chinese Room"?
ÑóẊîöʼn
1st March 2013, 16:54
Whereof H. sapiens.
What? Stop being such a taciturn tit.
So does "correct" meta-ethical reasoning also require a strictly symbolic framework of reasoning?
If not, how can you get GUARANTEED "proper" results from a more flexible sub-symbolic architecture whose execution is in a sense stochastic?
Define what you would consider a "proper" result first. My definition is the survival and flourishing of the human species and its descendants. If the AI's supergoals are concomitant with that, then it's a Friendly AI.
See above.
What's the connection? Why is it relevant?
I suspect that humans would fire the first shot in that war.
So what's all this business about "becoming loathsome to me" if it's only a matter of self-defence? Unless you think unwarranted aggression by others provides you with the excuse needed to go beyond self-defence and try to take over?
Great, so now we are discussing a hypothetical "rise of the machines" in a thread that was started by a closet primitivist. Ain't that quaint. Robots will never be intelligent; ever heard of "the Chinese Room"?
The Chinese Room is a bullshit argument, because it neglects the fact that while no single component understands Chinese, the system as a whole does. One might as well try to argue that since a single neuron is incapable of self-awareness, brains cannot be self-aware.
zoot_allures
1st March 2013, 17:17
The Chinese Room is a bullshit argument, because it neglects the fact that while no single component understands Chinese, the system as a whole does. One might as well try to argue that since a single neuron is incapable of self-awareness, brains cannot be self-aware.
Though I don't buy the Chinese Room, I don't find the "systems reply", or its variants, at all persuasive. The biggest objection I have to the Chinese Room is that I don't consider it at all helpful to sit in an armchair imagining extremely unlikely scenarios and then exploring what our intuitions tell us about these scenarios. That objection applies equally to your "systems reply" (as you've phrased it there, at least).
The primary problem being that intuitions are rarely much of a guide to reality; the secondary problem being that very often, thought experiments such as these strike me as having little relevance. My intuition tells me that no kind of understanding would obtain in a Chinese Room-type scenario. But even if this intuition is accurate, so what? What does this tell us about artificial intelligence in general? Not much, in my opinion. Actual computers, even the ones we have today, are in many ways very different from Chinese Rooms. To me, all the Chinese Room really drives home is that syntax is not semantics, and that it seems unlikely that syntax alone is sufficient for semantics. This is a point with which I completely agree, but I don't see it as telling us much about the potential for AI. Maybe it rules out the most simple-minded conceptions of AI.
That said, I don't think there's any reason to suppose that mental states are even possible in something other than a brain-like substance. That's not to say that they aren't possible, just that I'm agnostic on the question.
ÑóẊîöʼn
1st March 2013, 20:34
Though I don't buy the Chinese Room, I don't find the "systems reply", or its variants, at all persuasive. The biggest objection I have to the Chinese Room is that I don't consider it at all helpful to sit in an armchair imagining extremely unlikely scenarios and then exploring what our intuitions tell us about these scenarios. That objection applies equally to your "systems reply" (as you've phrased it there, at least).
I'd say it's pretty obvious (i.e. it's an observation, not an intuition) that the whole of a system can be capable of far more than its subcomponents individually considered.
That said, I don't think there's any reason to suppose that mental states are even possible in something other than a brain-like substance. That's not to say that they aren't possible, just that I'm agnostic on the question.
How are you defining "brain-like substance" exactly?
That aside, why shouldn't we assume that mental states are a matter of organisation and function rather than substance? Is there something special about the substance of brain-matter such that its organisation and/or function could not be approximated using different materials?
zoot_allures
1st March 2013, 21:09
I'd say it's pretty obvious (i.e. it's an observation, not an intuition) that the whole of a system can be capable of far more than its subcomponents individually considered.
Yes, that is obvious. Nobody's denying that. This point does absolutely zilch to establish that understanding would obtain for a Chinese Room.
How are you defining "brain-like substance" exactly?
I'm not defining it exactly. I intentionally left it vague.
Is there something special about the substance of brain-matter such that its organisation and/or function could not be approximated using different materials?
There may be. As I said, I'm agnostic about it. I don't think we have the evidence to judge one way or the other (and actually, I'm not sure we could ever have the evidence).
Simply assuming that mental states "are a matter of organisation and function rather than substance" seems rather silly to me. It's also silly to assume the opposite. Both are rather strong claims, and I don't see much evidence in support of either.
ÑóẊîöʼn
1st March 2013, 21:19
Yes, that is obvious. Nobody's denying that. This point does absolutely zilch to establish that understanding would obtain for a Chinese Room.
It doesn't do "absolutely zilch", since it undermines the primary contention of the Chinese Room, which amounts to a fallacy of composition.
I'm not defining it exactly. I intentionally left it vague.
There may be. As I said, I'm agnostic about it. I don't think we have the evidence to judge one way or the other (and actually, I'm not sure we could ever have the evidence).
Simply assuming that mental states "are a matter of organisation and function rather than substance" seems rather silly to me. It's also silly to assume the opposite. Both are rather strong claims, and I don't see much evidence in support of either.
It's an assumption based on the evidence, which strongly indicates that there is nothing special about the substance (as opposed to the organisation or function) of brains which prevents approximation of said organisation and function in other substrates.
zoot_allures
1st March 2013, 21:31
It doesn't "obtain zilch" since it undermines the primary contention of the Chinese Room, which amounts to a fallacy of composition.
Who are you talking to, me or Searle? I'm not defending the Chinese Room. I explicitly criticized it. It so happens that my criticism applies equally to your view that understanding would obtain.
And even if you are talking to Searle, it's just not good enough to respond that "the whole of a system can be capable of far more than its subcomponents individually considered". You can say the same about any system (and obviously, it doesn't follow that any system understands anything). Your point is probably necessary for a successful "systems reply" but it's woefully insufficient.
It's an assumption based on the evidence, which strongly indicates that there is nothing special about the substance (as opposed to the organisation or function) of brains which prevents approximation of said organisation and function in other substrates.
This would be a good time to cite some of that evidence.
ÑóẊîöʼn
1st March 2013, 21:51
Who are you talking to, me or Searle? I'm not defending the Chinese Room. I explicitly criticized it. It so happens that my criticism applies equally to your view that understanding would obtain.
And even if you are talking to Searle, it's just not good enough to respond that "the whole of a system can be capable of far more than its subcomponents individually considered". You can say the same about any system (and obviously, it doesn't follow that any system understands anything). Your point is probably necessary for a successful "systems reply" but it's woefully insufficient.
Searle argues that the Chinese Room doesn't understand Chinese because none of its individual subcomponents do. However, looking at the Chinese Room as a whole system reveals that it does in fact understand Chinese insofar as it returns sensible answers.
This would be a good time to cite some of that evidence.
How about the consistent failure of non-material models of life/consciousness? Things like souls, ectoplasm, élan vital and so on have failed again and again to explain anything. Via a process of elimination we can conclude that whatever the final explanation is, it will be a material one.
Progress in understanding consciousness has been slow and we've yet to arrive at a definitive model for it, but what we have discovered so far has been on the basis of a material approach.
This strongly suggests that a sufficiently sophisticated Chinese Room would in fact be capable of understanding by virtue of the fact that its reactions would be indistinguishable from those of a human who understands Chinese.
zoot_allures
1st March 2013, 22:08
Searle argues that the Chinese Room doesn't understand Chinese because none of its individual subcomponents do. However, looking at the Chinese Room as a whole system reveals that it does in fact understand Chinese insofar as it returns sensible answers.
If that's what's required for understanding, of course it understands Chinese - simply by what's stipulated in the argument. Searle himself wouldn't object to that. Of course, he probably would object (quite rightly in my view) to whatever definition of "understanding" you're using. He's obviously talking about something rather more substantial than whether or not it "returns sensible answers".
How about the consistent failure of non-material models of life/consciousness? Things like souls, ectoplasm, élan vital and so on have failed again and again to explain anything. Via a process of elimination we can conclude that whatever the final explanation is, it will be a material one.
Progress in understanding consciousness has been slow and we've yet to arrive at a definitive model for it, but what we have discovered so far has been on the basis of a material approach.
This strongly suggests that a sufficiently sophisticated Chinese Room would in fact be capable of understanding by virtue of the fact that its reactions would be indistinguishable from those of a human who understands Chinese.
Because brains aren't part of the material world, in your view? In what sense are brains not material?
I'm a materialist (John Searle's a materialist, too). The supposition that a brain-like substance is necessary for mental states doesn't in any way violate materialism.
The Garbage Disposal Unit
1st March 2013, 23:59
I think any technology should be evaluated by culture as to its use, potential impacts, and coming from that, its value and actual need in a society.
Mrm. See there's where things start to get confusing. Insofar as any culture is shaped by its use of technology - there's not a neutral detached ground from which to "evaluate".
That said, I think this whole business, on the contrary, is incredibly detached.
What it comes down to, for me, is, "Who's going to mine yr stupid bauxite?"
I'm not saying that nobody will, I'm just saying that, without the techniques of capitalism, we're probably going to end up ditching a lot of the accompanying technology.
ALSO, RE: CHINESE ROOM
GO READ A BOOK ON SEMIOTICS.
IT WILL BE MORE PLEASANT THAN PHILOSOPHY OF SCIENCE,
AND WILL HELP YOU HAVE SEX WITH ATTRACTIVE POST-GRADS.
zoot_allures
2nd March 2013, 00:10
AND WILL HELP YOU HAVE SEX WITH ATTRACTIVE POST-GRADS.
It's deeply unfortunate that the subject I find most interesting, and would ideally like to devote my life to - academic, analytic philosophy - also seems to be the one field you should definitely not go for if you want to be hip and attract women.
The Garbage Disposal Unit
2nd March 2013, 00:31
It's deeply unfortunate that the subject I find most interesting, and would ideally like to devote my life to - academic, analytic philosophy - also seems to be the one field you should definitely not go for if you want to be hip and attract women.
No kidding.
And I think you meant ". . . if you want to live communism and roll with fabulous Queers."
Sorry for the derail though. Point being, to get this back on track: Technology doesn't exist separately from how we do stuff, so when we're doing communism, we probably won't use the same machines (or they'll be repurposed in ways that are not immediately apparent).
NGNM85
2nd March 2013, 01:59
Mrm. See there's where things start to get confusing. Insofar as any culture is shaped by its use of technology - there's not a neutral detached ground from which to "evaluate".
Lord. More postmodernist bullshit.
Second; who's claiming to be neutral? No sane, rational person is neutral on the subject of human well-being. (In general.) Furthermore; this is not some arbitrary quirk, or predilection, such as a fondness for chocolate ice cream, or spaghetti westerns, nor is it simply a souvenir of dated evolutionary software, rather, this is a perfectly logical imperative. First; because, as human beings, we share a general interest in human prosperity. Second; because every human being possesses the potential for consciousness, the potential to be a 'judge of the universe', with the near-infinite range of possibilities that entails, and it is this that is the most precious thing in the universe. In a similar fashion; no Socialist worthy of the name can claim to be neutral on the subject of the working class. Etc., etc.
What it comes down to, for me, is, "Who's going to mine yr stupid bauxite?"
The robots, of course. (Duh!)
...Just kidding. (Mostly.) That's no different from saying; 'Who will clean the toilets, in a Socialist society?'
'Nuff said.
I'm not saying that nobody will, I'm just saying that, without the techniques of capitalism, we're probably going to end up ditching a lot of the accompanying technology.
If you mean things like tanks, stealth bombers, and nuclear warheads; I'd have to agree. However; we can be fairly certain that things like tractors, forklifts, computers, jackhammers, washing machines, etc., will be no less indispensable in a Socialist society than they are today.
The Garbage Disposal Unit
2nd March 2013, 02:30
...Just kidding. (Mostly.) That's no different from saying; 'Who will clean the toilets, in a Socialist society?'
'Nuff said.
I sense that you may have cleaned toilets, but never lived near a Bauxite mine.
One of those things is likely to poison you, your loved ones, and your extended community for generations. The other is not.
That's part of the problem with discussions like this - equating penicillin and the internet is st00pid.
NGNM85
2nd March 2013, 02:42
I sense that you may have cleaned toilets, but never lived near a Bauxite mine.
You should get your own 800 number.
One of those things is likely to poison you, your loved ones, and your extended community for generations. The other is not.
While I'm hardly an expert on the mining of bauxite; I'm fairly certain this can be done without poisoning anyone. However; safety precautions cost money, which cuts into profits. The problem isn't bauxite; it's capitalism.
That's part of the problem with discussions like this - equating penicillin and the internet is st00pid.
They are different things, which are different in many ways. However; that does not mean they have nothing in common. They are both examples of technology, more than that; they are examples of how technology can empower us, and improve our quality of life.
T-800
2nd March 2013, 02:57
What? Stop being such a taciturn tit.
You seem to be operating out of a kind of status quo bias.
More specifically, you seem to be assuming that human existence on this planet is somehow "fair" or "just".
Define what you would consider a "proper" result first. My definition is the survival and flourishing of the human species and its descendants. If the AI's supergoals are concomitant with that, then it's a Friendly AI.
OK.
What's the connection? Why is it relevant?
The most successful AIs in recent memory I can think of have all been somewhat stochastic, somewhat uncertain in their behavior.
So to the extent that Yudkowsky's "friendly AI" program depends on provably correct behavior, it can be forgotten as a realistic goal.
So what's all this business about "becoming loathsome to me" if it's only a matter of self-defence? Unless you think unwarranted aggression by others provides you with the excuse needed to go beyond self-defence and try to take over?
This planet can hardly sustain one technological species (if it can at all).
To say nothing of two.
ÑóẊîöʼn
3rd March 2013, 20:35
You seem to be operating out of a kind of status quo bias.
More specifically, you seem to be assuming that human existence on this planet is somehow "fair" or "just".
There's good reason for that. As a member of the social species H. sapiens myself I have a direct interest in its long-term survival.
The most successful AIs in recent memory I can think of have all been somewhat stochastic, somewhat uncertain in their behavior.
So to the extent that Yudkowsky's "friendly AI" program depends on provably correct behavior, it can be forgotten as a realistic goal.
It's the path that's stochastic, not the target.
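That distinction can be illustrated with a toy example (invented for illustration; this is a stand-in, not Yudkowsky's actual programme): stochastic gradient descent on f(x) = (x - 3)^2 takes a different noisy path on every run, yet every run converges on the same target, the minimum at x = 3.

```python
import random

# The *path* is stochastic: random start, noisy gradient estimates.
# The *target* is not: every run settles near the minimum of
# f(x) = (x - 3)^2, i.e. x = 3.

def noisy_descent(seed: int, steps: int = 5000, lr: float = 0.01) -> float:
    rng = random.Random(seed)
    x = rng.uniform(-10.0, 10.0)                 # random start: paths differ
    for _ in range(steps):
        grad = 2 * (x - 3) + rng.gauss(0, 0.5)   # true gradient plus noise
        x -= lr * grad
    return x

if __name__ == "__main__":
    for seed in (1, 2, 3):
        print(noisy_descent(seed))   # each run lands near 3
```

Whether this analogy carries over to guaranteeing an AI's goal system is, of course, the very point under dispute here.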
This planet can hardly sustain one technological species (if it can at all).
To say nothing of two.
Nonsense. Even in thoughtless nature a million flowers can bloom. Just as more developed ecosystems can support larger and more complex lifeforms, so a multi-species civilisation would be capable of greater feats together than one species alone.
And who says we have to stay on this one planet?
T-800
3rd March 2013, 21:15
There's good reason for that. As a member of the social species H. sapiens myself I have a direct interest in its long-term survival.
So you're willing to bottleneck the progress of civilization for yourself?
It's the path that's stochastic, not the target.
No guarantee that it reaches the target then.
You tell me how an absolutely certain meta-ethical system is supposed to be built on a machine that deals exclusively in shades of grey.
Yudkowsky's idea of what AI is appears to be permanently stuck in the 1970s. Doesn't he know that his logicism has virtually zero credibility in the world of people actually doing things with AI?
Nonsense. Even in thoughtless nature a million flowers can bloom. Just as more developed ecosystems can support larger and more complex lifeforms, so a multi-species civilisation would be capable of greater feats together than one species alone.
I imagine our successors would feel we contribute about as much to their society as chimpanzees in a zoo do to ours.
How would a "multi-species" civilization handle limiting resources?
And who says we have to stay on this one planet?
No one. But given how poorly the human species has managed "Spaceship Earth", I can only imagine long term human colonization of deep space would be a total lollacaust.
ÑóẊîöʼn
3rd March 2013, 22:18
So you're willing to bottleneck the progress of civilization for yourself?
Begs the question, don't you think? Since humans are thus far the only known species to develop civilisation, after all.
No guarantee that it reaches the target then.
You tell me how an absolutely certain meta-ethical system is supposed to be built on a machine that deals exclusively in shades of grey.
The point of meta-ethics is that delineating moral behaviour through the use of systems rather than reasoning is a losing proposition, because human societies and therefore human morality changes over historical time-scales.
Yudkowsky's idea of what AI is appears to be permanently stuck in the 1970s. Doesn't he know that his logicism has virtually zero credibility in the world of people actually doing things with AI?
Why don't you take it up with him? I'm sure it would be an interesting exchange...
I imagine our successors would feel we contribute about as much to their society as chimpanzees in a zoo do to ours.
Why? We're not chimpanzees, despite our common ancestor we've become something distinctly different.
How would a "multi-species" civilization handle limiting resources?
Same way that any civilisation would. Find more of the same if possible; if not, find appropriate substitutes.
No one. But given how poorly the human species has managed "Spaceship Earth", I can only imagine long term human colonization of deep space would be a total lollacaust.
Why don't you just kill yourself already? After all, if you're right then you'd be doing the planet a favour by removing yet another useless mouth to feed.
Or let me guess, that sort of judgemental misanthropic shit somehow doesn't apply to you?
T-800
3rd March 2013, 23:13
Begs the question, don't you think?
I don't think you know what that phrase means.
The point of meta-ethics is that delineating moral behaviour through the use of systems rather than reasoning is a losing proposition, because human societies and therefore human morality changes over historical time-scales.
OK.
Why don't you take it up with him?
Maybe his years of writing Harry Potter fanfics can illuminate how his apparently unrealistic logicist scheme will pan out.
Why? We're not chimpanzees, despite our common ancestor we've become something distinctly different.
Would not argue that.
Same way that any civilisation would. Find more of the same if possible; if not, find appropriate substitutes.
How is phosphorus to be substituted in agriculture?
I mean you are doing neoclassical economic shit here. (Infinite substitutability.)
Why don't you just kill yourself already? After all, if you're right then you'd be doing the planet a favour by removing yet another useless mouth to feed.
Or let me guess, that sort of judgemental misanthropic shit somehow doesn't apply to you?
I live primarily that I may help give rise to a successor.
But whether I'm a hypocrite or not is really sort of beside the point.
The point being that the human species is incurring a mass extinction event and is managing its affairs on this planet like a meth-junkie manages their money.
Or?
ÑóẊîöʼn
4th March 2013, 00:13
I don't think you know what that phrase means.
So please explain to me how wanting to ensure the survival and comfort of the human race as a whole (rather than just the ones who are like you) could possibly constitute a bottleneck to progress, when progress is something defined by humans?
Maybe his years of writing Harry Potter fanfics can illuminate how his apparently unrealistic logicist scheme will pan out.
It would sure be more interesting than the sophomoric nihilistic crapfest you've been dishing out thus far.
How is phosphorus to be substituted in agriculture?
I mean you are doing neoclassical economic shit here. (Infinite substitutability.)
Phosphorus doesn't disappear into the ether. It can be reclaimed e.g. from urine and sewage.
I live primarily that I may help give rise to a successor.
A successor... do you mean a child, or something else?
But whether I'm a hypocrite or not is really sort of beside the point.
The point being that the human species is incurring a mass extinction event and is managing its affairs on this planet like a meth-junkie manages their money.
Or?
Well we're obviously doomed then, aren't we? I wonder why you're bothering when you could be spending what little time you have on this forsaken ball of rock in a much more immediately pleasurable way, rather than wasting your too-short human life trying to talk to people (who are for the most part incorrigibly stupid, as you've argued, so again why bother?) as if anything that anyone says here and now will still actually matter by the time solar activity becomes too vigorous not to burn all life from the surface of the Earth.
Humans incurring a mass extinction event? Most species that have ever existed are now fucking extinct! Extinction is par for the course as far as nature is concerned, so this attempt to pass it off as a symptom of how especially fucked up humanity can be is ludicrous. You want to prove that humans are selfishly destructive? I suggest you wait until the vast majority of the thousands of cooperatively built-and-lived-in cities lie in self-made ruins populated only by human corpses and vermin. Wait until a time when we've actually managed to wipe out large numbers of ourselves through our own short-sighted stupidity, rather than living in the hundreds of millions on each continent (bar one especially cold example) as things stand now.
Pawn Power
4th March 2013, 00:27
Moved to OI. Please be aware that primitivists are generally restricted here. As such, I'm moving this discussion to the Opposing Ideologies forum.
Primitivists are conceived of as those who oppose technology. It is not a problem to say that technology is often bad when used by the ruling classes. But openly opposing technology is viewed as fringe and unhelpful in the main forums.
Did you read the original post and their followup post? Surely a critique of technology fetishism and a lack of faith in technology in and of itself is not primitivism nor an opposing ideology (except to capitalism).
Vonnegut would be rolling in his grave!
T-800
4th March 2013, 01:16
So please explain to me how wanting to ensure the survival and comfort of the human race as a whole (rather than just the ones who are like you) could possibly constitute a bottleneck to progress, when progress is something defined by humans?
That might change.
It would sure be more interesting than the sophomoric nihilistic crapfest you've been dishing out thus far.
What is "sophomoric" here?
Furthermore, what is "nihilistic"? I have a value system.
re: Yudkowsky's Harry Potter fan-fics ... idk ... this seems like a forum where women's issues are taken seriously. Now I'm not the first person to scream "RAPE APOLOGIST, RAPE ENABLER" but if you look on the Talk section of the RationalWiki article on the man then look at the Harry Potter fic section, you'll notice some red warning flags going up. Yudkowsky appears to be somewhat disturbed on the rape issue, by most people's standards anyway. And certainly those of most leftists.
No links yet, sorry.
Phosphorus doesn't disappear into the ether. It can be reclaimed e.g. from urine and sewage.
That's true, but it's another can / will issue.
A successor... do you mean a child, or something else?
"Mind children" as Hans Moravec (and Minsky) put it.
Well we're obviously doomed then, aren't we? I wonder why you're bothering when you could be spending what little time you have on this forsaken ball of rock in a much more immediately pleasurable way, rather than wasting your too-short human life trying to talk to people (who are for the most part incorrigibly stupid, as you've argued, so again why bother?) as if anything that anyone says here and now will still actually matter by the time solar activity becomes too vigorous not to burn all life from the surface of the Earth.
I have to admit I do some things for the lulz.
Bounded rationality, etc.
Humans incurring a mass extinction event? Most species that have ever existed are now fucking extinct! Extinction is par for the course as far as nature is concerned, so this attempt to pass it off as a symptom of how especially fucked up humanity can be is ludicrous.
I suggest you look into terms like "Anthropocene extinction" and for that matter the role of methane in our changing climate and possible similarities to the Permian-Triassic extinction (i.e. "the Great Dying").
...
This just occurred to me now:
ÑóẊîöʼn do you mean to tell me your confidence in artificially intelligent, provably correct meta-ethics is invested in a man who:
Appears not to notice that his ideas are computationally intractable in reality and
Whose major claim to fame is a rapey Harry Potter fan-fic?
Correct me if I'm wrong here.
Powered by vBulletin® Version 4.2.5 Copyright © 2020 vBulletin Solutions Inc. All rights reserved.