
View Full Version : Could automation be taken for granted by a society?



Psy
2nd April 2008, 04:05
If we automate most of industrial society, could there be a risk of society taking it for granted and losing the skills necessary to maintain it?

If computers can dispatch robots to keep the industrial world going with very little need for human intervention, would there be a risk of humans abandoning work? Relying on automation to the point where, when there is a problem the computers can't correct, they crash and the industrial world grinds to a halt, leaving an unskilled population unable to even comprehend how their production systems work?

Edit: typo in the subject, it should be automation.

MarxSchmarx
4th April 2008, 03:55
The need to improve machines and industrial systems requires human input. We are a long way off from machines being able to identify opportunities for improvement and remedy those entirely automatically.

I'd bet we needn't be too worried about this possible "automation collapse" for a long time. In everything from toasters to airplanes, machines fail all the time. Because of their complex interdependencies, maintaining the integrity of the whole requires cognitive abilities only humans have displayed. AI is a long way off from these tasks.

ckaihatsu
8th April 2008, 11:52
If we automate most of industrial society, could there be a risk of society taking it for granted and losing the skills necessary to maintain it?


Automation requires specialized knowledge -- as long as human civilization keeps and maintains libraries (and/or networks of digitally stored information) the knowledge will be there for the learning and using.



If computers can dispatch robots to keep the industrial world going with very little need for human intervention, would there be a risk of humans abandoning work?


I think the point of being a revolutionary is to argue and fight for a world in which human labor *is* obsolete. There are plenty of types of jobs that have already been automated, freeing people to do other, higher-level things with their lives. I don't think we should argue for a policy of employment-for-employment's-sake where people are put into stupid, busywork-type jobs just so that they can earn a paycheck according to the laws of the exploitative, market-based system of economics.

Now that we've developed mechanical and digital switching, we have no need for people (women, historically) to do manual telephone circuit switching -- the very image of professional women with headsets, forever unplugging and replugging phone plugs into giant panels of phone jack outlets in order to route phone calls, now feels anachronistic and pointless as jobs go.

Same thing for transportation by sedan chair carried on servants' shoulders, or math with slide rules -- if our day should consider that to be "work" then I'd rather see unemployment.

Again, I think the point of having civilization and cities at all -- as opposed to living life on family farms -- is to allow people to have a greater variety of lives to choose from, with *more* automation at our disposal.

In past ages only nobles and royalty had access to music on command, as they had the means to hire or force musicians to play for them. Today the average consumer can access thousands of hours' worth of musical recordings, from a worldwide selection -- should we ditch all that in order to employ more musicians? I think not.



Relying on automation to the point where, when there is a problem the computers can't correct, they crash and the industrial world grinds to a halt, leaving an unskilled population unable to even comprehend how their production systems work?


This sounds like a dystopian, science-fiction-type scenario to me. Mainstream society really freaked out when computerization started becoming more commonplace, around the '60s and '70s, which led to all sorts of cyberphobia horror fantasies, like the HAL computer from "2001: A Space Odyssey".

The thing about automation, in the first place, is that *it works*. That's not to say that it works perfectly, right off the drawing board -- I actually argue that technological progress has been *too slow* thanks to the incrementalism of the markets, where every tiny improvement is marketed and sold as a whole new thing to consumers, thereby hampering faster progress.

In the realm of automation we can speak of a point of increasing returns, where certain functions get to the point of becoming so well-developed and useful that consumers enjoy ever-increasing returns from them with ever-diminishing needs for human maintenance. As an example I might point to a simple steel table -- the material is pretty much perfect, for our needs, and will easily outlast many, many consumers' lives, as long as it is not intentionally destroyed.

A more complex example might be a common handheld, solar-powered calculator, or an electric clock -- you get the idea.

The unfounded distress is that our civilization would develop some sort of artificial, human-level-or-above consciousness which would turn out to be malevolent and would outstrip humanity's abilities to control and disable it, running amok and destroying all of us.

I guess I can't honestly give a blanket assurance against this, any more than I can assure that atomic weapons will never be used again, but I like to think that the worst era of contentiousness in human development -- industrialization and modernization -- is over. I think we've passed a critical mass in human education and comfort -- not to mention proletarianization -- one which the masses of the earth would be hard-pressed to give up to a rapidly shrinking ruling elite.

What this means is that there are more eyes, ears, and brains than ever before to double- and triple-check everything going on. We don't mind having cars that can recognize our voices to play music on command, but we *would* draw the line at those cars being able to communicate with each other at will to discuss driving destinations, if that were even possible.

Every step of automation is really a technical universe of its own, so -- for now at least -- we have enough separation of the tools from each other to prevent a total breakdown, or a total runaway monster.



The need to improve machines and industrial systems requires human input. We are a long way off from machines being able to identify opportunities for improvement and remedy those entirely automatically.


The premise of the Matrix movie series speaks to this theme, where the artificially intelligent machines begin to venture into self-development, and then eventual hegemony, in a gradual sort of way.

I'd like to point out that this line of thought may be symptomatic of our market-based society's preoccupation with the supply side of things -- we've been conditioned to think in terms of how to invent and build things for the market, when what really matters is what we *need* and *want* (collectively) from society / civilization.

Should we even *bother* to invent a computer system that tries to anticipate, and then supply our desires? Why not instead figure out for *ourselves* what we desire in the course of our lives, and then *demand* that productive capacity go towards fulfilling those demands, specifically? The market does accomplish this, to some extent, for those who can pay for it, but that's hardly everyone, of course. It's also a very incremental, piecemeal, and wasteful process under the markets....

Instead of trying to invent some sort of all-purpose butler we can continue to develop artificial intelligence (expert systems) for particular needs, which may or may not interlock, to limited degrees.

If anything I think that increased automation challenges us to figure out how to lead more enlightened lives, given that we can't waste our lives with pointless busywork anymore. At worst we can always be mindless hedonists, but usually one finds out that even pleasure-seeking can be as challenging as any other pursuit.


Chris





Dimentio
8th April 2008, 14:25
If we automate most of industrial society, could there be a risk of society taking it for granted and losing the skills necessary to maintain it?

If computers can dispatch robots to keep the industrial world going with very little need for human intervention, would there be a risk of humans abandoning work? Relying on automation to the point where, when there is a problem the computers can't correct, they crash and the industrial world grinds to a halt, leaving an unskilled population unable to even comprehend how their production systems work?

Edit: typo in the subject, it should be automation.

We are already there today. Take a typical western society and deprive it of energy, and we will see a collapse of infrastructure. Therefore it is important to teach the youth basic survival skills.

MarxSchmarx
11th April 2008, 07:31
I'd like to point out that this line of thought may be symptomatic of our market-based society's preoccupation with the supply side of things -- we've been conditioned to think in terms of how to invent and build things for the market, when what really matters is what we *need* and *want* (collectively) from society / civilization.

Should we even *bother* to invent a computer system that tries to anticipate, and then supply our desires? Why not instead figure out for *ourselves* what we desire in the course of our lives, and then *demand* that productive capacity go towards fulfilling those demands, specifically?

I rather think we should "demand" the free time that we save by not having to go back and tinker with a machine in the future.



Instead of trying to invent some sort of all-purpose butler we can continue to develop artificial intelligence (expert systems) for particular needs, which may or may not interlock, to limited degrees.


I'm all for an ad hoc approach to AI research - in fact, it's the real driver of the field. Still, stepping back and synthesizing the known results, and attempting to generalize to conceivable (but not concrete) future situations can only help develop better solutions to particular problems in the long run.

ckaihatsu
11th April 2008, 08:59
We are already there today. Take a typical western society and deprive it of energy, and we will see a collapse of infrastructure. Therefore it is important to teach the youth basic survival skills.

(Serpent)


I rather think we should "demand" the free time that we save by not having to go back and tinker with a machine in the future.

(MarxSchmarx)


These comments are both troubling to me -- really they sound anti-progressive. Serpent, you're saying that Western society, in some areas, could possibly suffer *total energy collapse* and force millions to have to revert to *basic survival skills* in nature -- ??? I think that's far too pessimistic and unrealistic.

And MarxSchmarx, you're saying that we should abandon machines altogether in favor of using the time saved as free time for ourselves?

I would much rather see automation improved so that there *is no* maintenance needed -- like the switches used for routing Internet traffic. I have an Ethernet hub right over here that never needs any maintenance -- all of society's infrastructure should eventually be like that.


I'm all for an ad hoc approach to AI research - in fact, it's the real driver of the field. Still, stepping back and synthesizing the known results, and attempting to generalize to conceivable (but not concrete) future situations can only help develop better solutions to particular problems in the long run.

(MarxSchmarx)


So here you're saying that all research in the field of AI should be done with *thought experiments* alone, with no implementations at all, no clinical studies, no field testing?

My only concern with fields like this (AI) and others that are potentially Frankensteinian, like genetic engineering, is that the systems should not be interconnected *too* much, or given too much creative autonomy. I think -- with current systems -- we *could* build a robot that *could* go haywire, in a destructive way, but *someone* would have to *consciously* set it up that way in the first place. Therefore there's accountability -- it's not a robot-built-a-robot kind of situation....


Chris

Psy
11th April 2008, 17:17
This sounds like a dystopian, science-fiction-type scenario to me. Mainstream society really freaked out when computerization started becoming more commonplace, around the '60s and '70s, which led to all sorts of cyberphobia horror fantasies, like the HAL computer from "2001: A Space Odyssey".

The thing about automation, in the first place, is that *it works*. That's not to say that it works perfectly, right off the drawing board -- I actually argue that technological progress has been *too slow* thanks to the incrementalism of the markets, where every tiny improvement is marketed and sold as a whole new thing to consumers, thereby hampering faster progress.

In the realm of automation we can speak of a point of increasing returns, where certain functions get to the point of becoming so well-developed and useful that consumers enjoy ever-increasing returns from them with ever-diminishing needs for human maintenance. As an example I might point to a simple steel table -- the material is pretty much perfect, for our needs, and will easily outlast many, many consumers' lives, as long as it is not intentionally destroyed.

A more complex example might be a common handheld, solar-powered calculator, or an electric clock -- you get the idea.

The unfounded distress is that our civilization would develop some sort of artificial, human-level-or-above consciousness which would turn out to be malevolent and would outstrip humanity's abilities to control and disable it, running amok and destroying all of us.

Actually, I was watching a documentary about what would happen if humans suddenly popped out of existence: our automated systems wouldn't be able to last more than a few weeks, as auxiliary power systems would eventually fail, and within decades structures would start to fail due to lack of maintenance.

So my thought was not computers running amok but systems failing with no one to fix them. If coal power plants were automatically fed by automated freight trains (add automatic track-repair vehicles dispatched automatically by computers) that were automatically loaded from coal mines that were automatically mined, then society would have reached a point where large-scale production can exist without human intervention for decades.

You could have factories without any workers, where computers decided how much to produce based on computers tracking consumption and automatically adjusting production levels. At that point you could have people who are clueless about how production is planned, because that too has been automated, programmed into computers decades earlier.
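To make the idea concrete: the planning logic described here -- output tracked against consumption and adjusted automatically, no human in the loop -- can be sketched as a simple feedback controller. Everything below (the function name, the numbers, the proportional-control choice) is invented for illustration; a real plant's control system would be far more involved.

```python
def adjust_production(consumed, inventory, target_inventory=100.0, gain=0.5):
    """Return the next production level given observed consumption.

    A simple proportional controller: produce what was consumed, plus a
    correction that pulls inventory back toward a target buffer.
    """
    error = target_inventory - inventory
    return max(0.0, consumed + gain * error)  # output can't go negative

# Simulate years of unattended operation with an occasional demand spike.
inventory, output = 100.0, 50.0
for year in range(30):
    consumed = 50.0 + (5.0 if year % 7 == 0 else 0.0)
    inventory += output - consumed
    output = adjust_production(consumed, inventory)
```

The point of the sketch is that nothing in it requires anyone to understand *why* the rule works once it's programmed in -- which is exactly the situation being described, decades on.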

ckaihatsu
11th April 2008, 17:53
The scenario of automation you describe, Psy, would be absolutely fine in my book -- keep in mind all we'd need would be a book lying around somewhere (or a dedicated information kiosk nearby with its own power supply) to make sure that future generations would be able to understand and repair the machines, if need be.

Psy
11th April 2008, 19:22
The scenario of automation you describe, Psy, would be absolutely fine in my book -- keep in mind all we'd need would be a book lying around somewhere (or a dedicated information kiosk nearby with its own power supply) to make sure that future generations would be able to understand and repair the machines, if need be.
Well, they would need to be repaired and eventually reprogrammed, as they wouldn't be artificial intelligences but simply production turned into a routine that a system can automate. The problem is that while you can automate routine maintenance, you can't easily automate troubleshooting and uncommon repairs; also, as equipment ages, it acts up in more unpredictable ways.

The dystopian scenario would be forgotten systems failing, and engineers all of a sudden having to deal with systems created way before their time, with technology so old they had never been exposed to it. Think how well current young American engineers would fare working with systems from the '60s and '70s, even with documentation. Think what would happen if the engineers who know how the stuff works are in retirement and haven't touched it in decades.

ckaihatsu
12th April 2008, 02:28
Well, they would need to be repaired and eventually reprogrammed, as they wouldn't be artificial intelligences but simply production turned into a routine that a system can automate. The problem is that while you can automate routine maintenance, you can't easily automate troubleshooting and uncommon repairs; also, as equipment ages, it acts up in more unpredictable ways.

The dystopian scenario would be forgotten systems failing, and engineers all of a sudden having to deal with systems created way before their time, with technology so old they had never been exposed to it. Think how well current young American engineers would fare working with systems from the '60s and '70s, even with documentation. Think what would happen if the engineers who know how the stuff works are in retirement and haven't touched it in decades.


Yeah -- I hear ya. So I guess what we can conclude from this is that we would never want to have a total "hands-off" attitude towards the machinery we employ. And, as you're pointing out, it would probably be impossible anyway.

In general automation allows us to eliminate the "grunt work" from society by placing it on inanimate, unconscious machinery, using energy. It frees us up for higher-level tasks like engineering, and person-to-person stuff.

If many more tasks became sufficiently routinized / automated then perhaps we wouldn't even need to check in on them except maybe once every generation or so -- look at how smooth many of modern civilization's technological routines have become already, *despite* capitalism and the markets -- roads, plumbing / water systems, sewage treatment, agriculture, and so on. Under communism we could eliminate many more routine, life-sapping "careers" by automating them, *without* eliminating oversight -- vaster areas of production could be further integrated and made more efficient, and monitored by a security-guard kind of employee from a single laptop. In the rare cases of things needing maintenance the appropriate people would be on call and would simply make a professional stop to check in and see what was going on.

Academia would continue to educate engineers, doctors, educators, and other professionals to keep the respective fields up-to-date so that improvements to the machinery could be made and implemented.

MarxSchmarx
12th April 2008, 04:50
I would much rather see automation improved so that there *is no* maintenance needed -- like the switches used for routing Internet traffic. I have an Ethernet hub right over here that never needs any maintenance -- all of society's infrastructure should eventually be like that.

Yep, that was what I was getting at. You said it much better than I could. :)





So here you're saying that all research in the field of AI should be done with *thought experiments* alone, with no implementations at all, no clinical studies, no field testing?

What I said was that there is a place for thought experiments (I prefer "theory") in AI research, not that it should be the dominant kind (much less only kind) of AI research.


My only concern with fields like this (AI) and others that are potentially Frankensteinian, like genetic engineering, is that the systems should not be interconnected *too* much, or given too much creative autonomy. I think -- with current systems -- we *could* build a robot that *could* go haywire, in a destructive way, but *someone* would have to *consciously* set it up that way in the first place. Therefore there's accountability -- it's not a robot-built-a-robot kind of situation....


Well, these are very legitimate concerns. But they are also avoidable if scientists take the proper precautions. For instance, with proper containment, much of the concern about GMOs escaping and wreaking havoc on the local ecosystem could be avoided: we could grow GMO tilapia in tanks in the middle of the Sahara, knowing that any escapees are doomed.