View Full Version : a centrally planned economy controlled by a computer?



danyboy27
11th January 2011, 17:29
Would it be possible, and what kind of result would it have?

discussion open!

Summerspeaker
11th January 2011, 17:34
Depends on the computer. If it's a properly programmed superintelligence, the results should be good.

#FF0000
11th January 2011, 17:50
I don't know about that. Wired had a piece last month (I think) on Artificial Intelligence, and how it's pretty much entirely different from human intelligence. Like a warehouse run according to human intelligence is going to have items sorted according to size and type and all that, while an artificial intelligence would just sort of have things randomly assorted throughout the warehouse. If the AI needs to get something from the warehouse, it just knows where any given item is, so it just goes to the nearest item it's looking for.

So, yeah, the AI thing might not be a good idea. But Chile under Allende had a pretty advanced computer system that was supposed to be used to manage the economy and other things like disaster relief. Cybersyn, I think it was called. It seems like a tool like that is almost necessary at this point for a socialist society.

ComradeOm
11th January 2011, 18:18
Computers are heavily involved in today's planning processes because 99% of planning is routine calculations. It would be perfectly possible to apply modern technology and software to running a planned economy (after all, a single laptop today is vastly more powerful than the supercomputers of the 1960s). However I would never delegate ultimate control to a computer for the simple reason that they are stupid. Computers are very, very stupid

A human planner will know instinctively that ordering ten years worth of widgets is a bad idea. A computer will not know this unless you tell it so. Hence the need to set parameters to condition the computer's actions. These rules can be as complex and comprehensive as you like but they will never capture everything, never be complete. That's why - barring the emergence of sentient AI, which raises a whole different set of questions - human oversight is essential. We need people signing off on everything to make sure that the system is functioning as designed. Computers can do the grunt work but there is no substitute for a trained planner in control

Red Commissar
11th January 2011, 19:44
I think the use of computers (particularly the ones we have now and are continuing to develop), as well as the advances in communication, would greatly aid central planning. It would still need human operators, but it would cut down on the bureaucracy that could grow around it, gather data more efficiently, and act from there more quickly.

ExUnoDisceOmnes
11th January 2011, 19:57
With the abolition of wasteful labor, perhaps it would be possible to allocate LOTS of resources towards more effective technology for central planning. It seems that the more advanced our machines are, the better the potential for our system is in terms of striving towards more perfect use-value based economics.

mykittyhasaboner
11th January 2011, 20:00
To be brief, can planning be done by a computer?

No way. Even if it were reasonable to trust an AI to plan human activities, you would need more than one computer.

Can planning be done by utilizing computers?

There is no other way. The kind of information needed to rationally plan an economy is so vast and variable that it is impossible to get the desired results without some kind of vast network of computers. That is what was needed to solve the infamous "calculation problem" and must be utilized by socialist societies.

BrandonHerygers
11th January 2011, 21:48
I think AI is definitely a bad idea. I mean, has anyone ever watched The Terminator or Eagle Eye?

Salyut
11th January 2011, 23:16
So, yeah, the AI thing might not be a good idea. But Chile under Allende had a pretty advanced computer system that was supposed to be used to manage the economy and other things like disaster relief. Cybersyn, I think it was called. It seems like a tool like that is almost necessary at this point for a socialist society.

Paul Cockshott wrote a book on that - Towards a New Socialism. I think he actually posts in the economics forum.

Rooster
11th January 2011, 23:24
Computers were being used towards the end of the USSR to help with the planned economy. The problem with computers is that if you have a bad plan, they will carry out that plan.

ckaihatsu
12th January 2011, 02:11
A few related points here:

Given that people will most likely continue to be around, it's very difficult to even *conceive* of a clean separation between human intelligence / activity and any possible, potential "AI". Certainly popular fiction amps up the dramatic qualities to make the imagined AI's creation something akin to the birthing of a metallic alien, but in reality, then as before, the very nature and form of any technologies will take on the qualities that human intelligence imbues into them, usually for specific applications.

So, in short, if there's no general political and economic backing for the creation of a purely autonomous artificial life form then it just won't happen -- instead, specific applications will prevail, like the one that's the topic of this thread.

However, that said, I think the substance of concerns behind standard cyberphobia AI-run-amok nightmares is that machine "intelligence" can be "faked" to considerable extents by simply piling on layers of abstraction. Wouldn't any newcomer be fairly dazzled by today's cell phones that can call a person just by speaking that person's name into the phone? It *looks* like some kind of intelligent device, but we know that it's just a "trick" of using several layers of sub-systems in overlapping ways that produce the end-user functionality we see and use.

So, on the subject of a computerized centrally planned economy, there *would* be an 'off' switch, and there *would* be a system of organization that is people-accessible. There would also be fallbacks to more-conventional systems of human intervention as built-in, redundant modes of operation.

However, the *leveraging* that advanced computerized systems can accomplish is well-known, even to today's consumers, as with the voice-activated phone example. By simply computer-systematizing all component elements within a given domain, the *description* and *organization* of all of those elements then becomes accessible, like any website page on the Internet.

I'll argue that the highest, most abstracted level of conception is 'supply' and 'demand' -- for actual human need, that is. I have a model, attached, and at my blog entry, that outlines a certain scope for this "uppermost" layer of functioning, with all subsumed processes readily open to currently existing, regular routines of computerization and automation.


communist supply & demand -- Model of Material Factors

http://postimage.org/image/35sw8csv8/

danyboy27
12th January 2011, 02:25
Very good posts so far!

I didn't mention an AI, and to me an AI couldn't really control everything efficiently.

But supercomputers could run simulations, projections, and the calculations of the various factors needed for the economy, making it more efficient and less unpredictable than during the era of the Soviet Union, when it was totally up to the bureaucrats to decide what to produce, when, and how.

It's so funny how many politicians think that the capitalist system prevailed over the Soviet Union.

Look at how politicians and bankers are running the show today: does it sound logical or even sane? This whole thing is run like a damn insane asylum!

Rooster
12th January 2011, 02:31
It's so funny how many politicians think that the capitalist system prevailed over the Soviet Union.

But it did prevail over the Soviet Union.

ÑóẊîöʼn
12th January 2011, 03:03
A human planner will know instinctively that ordering ten years worth of widgets is a bad idea. A computer will not know this unless you tell it so. Hence the need to set parameters to condition the computer's actions. These rules can be as complex and comprehensive as you like but they will never capture everything, never be complete.

Nonsense. If the average expected lifetime of a widget is known, then that can be entered as a variable in the planning AI's calculations. Thus if it receives an "unexpected" order (i.e. not according to any known plan or projection, for example) for half a billion widgets, it can reject it as an "out of spec" order (if the order was received from a fellow AI or subsystem, in which case the receiving AI would also notify the appropriate facilities that the other AI or subsystem is sending orders out-of-spec), or flag it so that it requires user (human) verification in order to execute.
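To make that concrete, here is a toy version of such a check in Python; the function name, figures, and two-year horizon are all invented for the example, not taken from any real planning system.

```python
# Toy version of the "out of spec" check described above: compare an
# incoming order against projected demand over a planning horizon.
# All names and figures here are invented for illustration.

def check_order(quantity, annual_demand, horizon_years=2.0):
    """Flag orders that exceed projected demand over the planning horizon."""
    limit = annual_demand * horizon_years
    if quantity > limit:
        return "REJECT: out of spec, needs human verification"
    return "ACCEPT"

# half a billion widgets against a projection of fifty million a year
print(check_order(500_000_000, annual_demand=50_000_000))
print(check_order(40_000_000, annual_demand=50_000_000))
```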


That's why - barring the emergence of sentient AI, which raises a whole different set of questions - human oversight is essential. We need people signing off on everything to make sure that the system is functioning as designed. Computers can do the grunt work but there is no substitute for a trained planner in control

The thing is, unless you're actually dictating peoples' consumption habits, there's no need for any real "control" on the part of either AI or humans. There only needs to be rapid response to emerging trends and sudden events.


To be brief, can planning be done by a computer?

No way. Even if it were reasonable to trust an AI to plan human activities, you would need more than one computer.

It depends on what you mean exactly by "computer". An AI limited to one machine or terminal probably wouldn't be able to achieve much. But if it could at least access other machines or terminals then it would not be so limited.

ckaihatsu
12th January 2011, 03:15
Very good posts so far!

I didn't mention an AI, and to me an AI couldn't really control everything efficiently.

But supercomputers could run simulations, projections, and the calculations of the various factors needed for the economy, making it more efficient and less unpredictable than during the era of the Soviet Union, when it was totally up to the bureaucrats to decide what to produce, when, and how.


Well, technically speaking, those two mechanical methods would be virtually the same, especially in practice -- overall it's a problem of *optimization*, or resolving a certain "landscape" of elements to a certain "map" of pre-defined desired outcomes:

Learning algorithm

The goal of learning in the self-organizing map is to cause different parts of the network to respond similarly to certain input patterns. This is partly motivated by how visual, auditory or other sensory information is handled in separate parts of the cerebral cortex in the human brain.[3]

http://en.wikipedia.org/wiki/Self-organizing_map


http://upload.wikimedia.org/wikipedia/commons/9/91/Somtraining.svg

An illustration of the training of a self-organizing map. The blue blob is the distribution of the training data, and the small white disc is the current training sample drawn from that distribution. At first (left) the SOM nodes are arbitrarily positioned in the data space. The node nearest to the training node (highlighted in yellow) is selected, and is moved towards the training datum, as (to a lesser extent) are its neighbours on the grid. After many iterations the grid tends to approximate the data distribution (right).
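For the curious, the training rule in that quoted description can be sketched in a few lines of Python. This is a toy, pure-Python version: the grid size, learning rate, neighbourhood radius, and sample data are all arbitrary choices for the example.

```python
# Toy self-organizing map in the spirit of the quoted description: the
# node nearest each training sample moves toward it, and its grid
# neighbours move less. All parameters here are arbitrary.
import math
import random

random.seed(0)
GRID = 5  # 5x5 grid of nodes, each with a 2-D weight vector
nodes = {(i, j): [random.random(), random.random()]
         for i in range(GRID) for j in range(GRID)}

def train(samples, iters=2000, lr0=0.5, radius0=float(GRID)):
    for t in range(iters):
        x = random.choice(samples)
        lr = lr0 * (1 - t / iters)                    # learning rate decays
        radius = max(radius0 * (1 - t / iters), 0.5)  # neighbourhood shrinks
        # best-matching unit: the node whose weights are closest to the sample
        bmu = min(nodes, key=lambda k: (nodes[k][0] - x[0]) ** 2
                                       + (nodes[k][1] - x[1]) ** 2)
        for k, w in nodes.items():
            d = math.hypot(k[0] - bmu[0], k[1] - bmu[1])  # distance on the grid
            if d <= radius:
                h = math.exp(-d * d / (2 * radius * radius))
                w[0] += lr * h * (x[0] - w[0])
                w[1] += lr * h * (x[1] - w[1])

# training data clustered around (0.8, 0.2); the grid drifts toward it
data = [[0.8 + random.gauss(0, 0.05), 0.2 + random.gauss(0, 0.05)]
        for _ in range(200)]
train(data)
```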

It's so funny how many politicians think that the capitalist system prevailed over the Soviet Union.

Look at how politicians and bankers are running the show today: does it sound logical or even sane? This whole thing is run like a damn insane asylum!


Obviously the problem with the Stalinist state was *over*-planning, meaning that the strictly *technical* function of resolving logistics got caught up with the *political* dynamics of the ruling bureaucracy.

Today the problem is that the *logistics* is allowed a "life" of its own, and there's (almost) *no* political oversight. The market mechanism as a whole is expected to be as conscious and rational as any living, thinking person, when it's *not*: it's *non-conscious*. I call it "autopilot", and we're seeing results in the world's economy similar to what would happen if a plane in flight were left to fly unaided for an extended period without anyone actually directing it. What happens in both cases is... a crash!

ÑóẊîöʼn
12th January 2011, 03:58
Today the problem is that the *logistics* is allowed a "life" of its own, and there's (almost) *no* political oversight. The market mechanism as a whole is expected to be as conscious and rational as any living, thinking person, when it's *not*: it's *non-conscious*. I call it "autopilot", and we're seeing results in the world's economy similar to what would happen if a plane in flight were left to fly unaided for an extended period without anyone actually directing it. What happens in both cases is... a crash!

Actually, modern commercial airliners can take off, fly, and land themselves. Pilots are there mostly for oversight and emergencies, and because humans are squeamish about having machines in control, even though the machines are actually safer than unpredictable humans.

And the market's problem is not so much efficiency, but that the efficiency is misplaced - the market is very good at making money, but not very good at providing human needs. If I remember correctly, most stock trading these days is done on computers, but the recent economic crisis was a "user error", because computers don't share our sense of greed.

ckaihatsu
12th January 2011, 04:45
Okay then it's *zombie* economics, the kind that has no brain *and* chews through *yours*...!

Blamelessman
12th January 2011, 09:36
Would it be possible, and what kind of result would it have?

discussion open!

The Soviets DID do this in the 70s and 80s! I recall reading in an almanac that the Soviets were aiming to computerize the Soviet economy by the 1970s. However, I think it may just have been an advanced form of bookkeeping. Judging by the eventual outcome, I guess it was a really shit program they were running.

robbo203
12th January 2011, 09:59
The Soviets DID do this in the 70s and 80s! I recall reading in an almanac that the Soviets were aiming to computerize the Soviet economy by the 1970s. However, I think it may just have been an advanced form of bookkeeping. Judging by the eventual outcome, I guess it was a really shit program they were running.


Soviet state capitalism was not a centrally planned economy in the sense that a single society-wide plan governed production. This is a complete myth. In fact, not a single plan emanating from GOSPLAN was ever strictly fulfilled; plans were constantly modified to fit in with changing economic realities. Economic reality governed the plan, not the other way round. There was far more decentralisation than is sometimes allowed for, with state enterprise managers doing a fair bit of wheeling and dealing in the background.


Central planning in the classical sense of a single society-wide plan, in which all inputs and outputs are matched up within a Leontief-type matrix, is a complete and utter nonsense from start to finish. It really doesn't matter how advanced your computer technology is or how adept your planners are at planning; what kills this absurd idea stone dead is the simple fact that you cannot get the real world to conform to the exigencies of the plan. Even the slightest perturbation in the real world, like a drought in Upper Volta, would have knock-on consequences that would require reconfiguration of the plan in its entirety. This follows from the simple fact that all inputs and outputs are matched up in the original plan, so if the ratios change then the plan has to change. But the ratios will always change, meaning the plan will have to be constantly changed, meaning it will never get a chance to be put into operation.
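The knock-on effect being described is visible even in a toy Leontief model: change one element of final demand and every sector's required output changes. A sketch, with coefficients invented purely for illustration:

```python
# Toy Leontief input-output model: gross output x solves x = A x + d,
# i.e. x = (I - A)^-1 d, where d is final demand. All coefficients
# here are invented for illustration.
import numpy as np

A = np.array([[0.2, 0.3, 0.1],   # inputs each sector needs
              [0.1, 0.1, 0.4],   # per unit of its output
              [0.3, 0.2, 0.1]])
I = np.eye(3)

d1 = np.array([100.0, 200.0, 150.0])  # final demand under "the plan"
d2 = np.array([100.0, 200.0, 140.0])  # a drought trims one sector's demand

x1 = np.linalg.solve(I - A, d1)
x2 = np.linalg.solve(I - A, d2)
print(x1 - x2)  # every sector's required gross output shifts, not just the third
```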

I wish people would knock this crackpot idea on the head once and for all. A communist society must be an essentially self-regulating, self-ordering one, employing a feedback mechanism in some ways like the market, except that there would be no market at all. This necessarily means a communist society will essentially be a decentralised society, with perhaps a degree of centralisation, but definitely not a centrally planned economy in its classical sense.

ComradeOm
12th January 2011, 10:06
Nonsense. If the average expected lifetime of a widget is known, then that can be entered as a variable in the planning AI's calculations. Thus if it receives an "unexpected" order (i.e. not according to any known plan or projection, for example) for half a billion widgets, it can reject it as an "out of spec" order

Exactly: someone has to enter that parameter. If a situation arises for which the computer does not have a programmed response then it cannot cope. You will never adequately foresee or factor in the countless variables that can arise from day-to-day. This is true even at factory level, as I can testify to from experience

To continue with the widget example, what would actually happen is that the quantity ordered would either be flagged up by the system or otherwise noticed by a planner. The latter would either know the cause immediately or make a few phone calls. It could be a simple data error, could be a new product coming through, could be a model upgrade for which the documentation was not yet complete, etc, it could be anything. An AI could not be expected to know this because it requires more than calculations


The thing is, unless you're actually dictating peoples' consumption habits, there's no need for any real "control" on the part of either AI or humans. There only needs to be rapid response to emerging trends and sudden events.

Yes, it's called 'planning'. Recording demand is only the first step in the planning process. This demand has to be aggregated, compared to capacity, broken down into production orders and then issued. At each step computers can be used to do calculations and pass information, but control of the process must rest with specialist planners


But supercomputers could run simulations, projections, and the calculations of the various factors needed for the economy, making it more efficient and less unpredictable than during the era of the Soviet Union, when it was totally up to the bureaucrats to decide what to produce, when, and how.

We already do this today, both on a micro and macro scale. Really, you cannot overstate the advances made in increasing computational power over the past two decades. Ironically much of the basis for today's computer-aided planning was laid by Soviet mathematicians and planners half a century ago. Only now can we make proper practical use of their techniques and formulae

ÑóẊîöʼn
12th January 2011, 10:44
Exactly: someone has to enter that parameter. If a situation arises for which the computer does not have a programmed response then it cannot cope.

A situation such as what? If humans can predict and deal with an event, such as a natural disaster, then surely any AI worth ver salt would be programmed likewise.


You will never adequately foresee or factor in the countless variables that can arise from day-to-day. This is true even at factory level, as I can testify to from experience

At the macro-economic level, fluctuations average out into statistically predictable trends. Computers can deal with variability and uncertainty just fine, and real-life variables are almost never completely random anyway.
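That averaging-out is easy to demonstrate numerically. This sketch (all figures invented) pools more and more consumers and watches the relative spread of aggregate demand shrink, roughly as 1/sqrt(n):

```python
# Quick numerical check of "fluctuations average out": the relative
# spread of aggregate demand shrinks roughly as 1/sqrt(n) as more
# consumers are pooled. All figures are invented.
import random
import statistics

random.seed(1)

def relative_spread(n_consumers, trials=400):
    """Std. dev. of total demand divided by its mean, over many trials."""
    totals = [sum(random.gauss(10, 3) for _ in range(n_consumers))
              for _ in range(trials)]
    return statistics.stdev(totals) / statistics.mean(totals)

print(relative_spread(10))    # noisy
print(relative_spread(1000))  # far steadier
```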


To continue with the widget example, what would actually happen is that the quantity ordered would either be flagged up by the system or otherwise noticed by a planner. The latter would either know the cause immediately or make a few phone calls. It could be a simple data error, could be a new product coming through, could be a model upgrade for which the documentation was not yet complete, etc, it could be anything. An AI could not be expected to know this because it requires more than calculations

Actually, even here the human could be replaced with programming, like so:

1: Anomalous order received.
2: Initiate checksum algorithm to determine whether the order is the result of corrupted data.
3: Check the design database for new models and/or documentation.
4a: If the design is missing or corrupted, return the appropriate error.
4b: If documentation is missing, generate a standard form email berating the designers for being idiots and releasing a design without documentation. Refuse all orders of this design (i.e. return the "undocumented design" error) until the appropriately-labelled documentation appears in the database.
5: Perform other checks as needed.
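Those steps translate into code readily enough. A toy sketch, with stub "databases" as plain dictionaries and a trivial checksum, purely for illustration:

```python
# The numbered steps above, sketched as code. The "databases" are plain
# dictionaries and the checksum is trivial; everything here is
# illustrative, not a real planning system.

designs = {"widget-mk2": {"documented": True}}  # stub design database

def payload_checksum(name):
    return sum(ord(c) for c in name) % 256

def validate_order(order, known_designs):
    # Step 2: checksum test for corrupted order data
    if order["checksum"] != payload_checksum(order["design"]):
        return "error: corrupted order data"
    # Step 3: look the design up in the database
    design = known_designs.get(order["design"])
    # Step 4a: design missing entirely
    if design is None:
        return "error: unknown or missing design"
    # Step 4b: design exists but its documentation is incomplete
    if not design["documented"]:
        return "error: undocumented design (designers notified)"
    # Step 5: further checks would go here
    return "ok"

good = {"design": "widget-mk2", "checksum": payload_checksum("widget-mk2")}
print(validate_order(good, designs))  # prints "ok"
```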


Yes, its called 'planning'. Recording demand is only the first step in the planning process. This demand has to be aggregated, compared to capacity, broken down into production orders and then issued. At each step computers can be used to do calculations and pass information but control of the process must rest with specialist planners

Why? Computers are perfect for the job. If consumption of good Y is on a statistically significant upward trend, then naturally one increases production of good Y appropriately. I'm no more convinced that economic planning is the sole province of human brains than I'm convinced of the same for playing chess.
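A minimal sketch of that trend rule, assuming invented weekly figures and a crude two-standard-error significance threshold:

```python
# Sketch of "statistically significant upward trend -> raise production":
# fit a least-squares slope to recent consumption figures and compare it
# with its standard error. Data and threshold are invented.
import math

def trend_slope(series):
    """Return (slope, standard error of the slope) for a least-squares fit."""
    n = len(series)
    mx, my = (n - 1) / 2, sum(series) / n
    sxx = sum((x - mx) ** 2 for x in range(n))
    slope = sum((x - mx) * (y - my) for x, y in enumerate(series)) / sxx
    resid = [y - (my + slope * (x - mx)) for x, y in enumerate(series)]
    se = math.sqrt(sum(r * r for r in resid) / ((n - 2) * sxx))
    return slope, se

weekly_consumption = [100, 103, 101, 106, 108, 107, 112, 114]
slope, se = trend_slope(weekly_consumption)
if slope > 2 * se:  # crude significance test
    print("increase production of good Y")
```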

ckaihatsu
12th January 2011, 10:50
Really, you cannot overstate the advances made in increasing computational power over the past two decades. Ironically much of the basis for today's computer-aided planning was laid by Soviet mathematicians and planners half a century ago. Only now can we make proper practical use of their techniques and formulae


The capitalists will sell us the very computers we'll use to out-compute them...!


x D

ComradeOm
12th January 2011, 11:57
A situation such as what? If humans can predict and deal with an event, such as a natural disaster, then surely any AI worth ver salt would be programmed likewise.

Not without the development of sentient AI. Humans - through instinct, reason and training - are able to adapt to any crisis or unforeseen event. Computers are incapable of dealing with any scenario that falls outside of their programming. This includes unforeseen events, which are, by definition, unexpected and unaccounted for

Now you could potentially programme every last eventuality into an AI. You wouldn't get them all of course, an impossibility, but even accounting for 90% of the variables and possible permutations in an entire economy would be a colossal undertaking. Which raises the question - why? Why expend this vast effort, which will never be complete, on putting an AI in charge? You would get marginal benefit (I would suggest no benefit) from a truly mammoth undertaking


At the macro-economic level, fluctuations average out into statistically predictable trends. Computers can deal with variability and uncertainty just fine, and real-life variables are almost never completely random anyway.

Aggregate planning is not sufficient. You can't run any planning system on the basis of averages. That's just a recipe for disaster and waste. Such rough-cut planning is not sufficient for allocating materials or production targets. Any truly planned national system has to go at least one level deeper


Actually, even here the human could be replaced with programming, like so:

1: Anomalous order received.
2: Initiate checksum algorithm to determine whether the order is the result of corrupted data.
3: Check the design database for new models and/or documentation.
4a: If the design is missing or corrupted, return the appropriate error.
4b: If documentation is missing, generate a standard form email berating the designers for being idiots and releasing a design without documentation. Refuse all orders of this design (i.e. return the "undocumented design" error) until the appropriately-labelled documentation appears in the database.
5: Perform other checks as needed.

Let's look at this then:

2: How does the computer know that this data is incorrect? It need not be 'corrupted' but just a simple mistype or an idiot. The golden rule with all such systems is that the data you get out is only as good as the data you put in. If someone misplaces a decimal point on an order form the computer has no way of knowing that this is a fuck up. It simply takes the data and processes it, perhaps noting at the end of the iterative process that the results fall 'outside of spec'. In contrast, a human planner would spot the mistake instantly

4a: And...? Flagging an error does not fix an error. Who receives this error?

4b: You don't understand. There is nothing wrong with ordering materials when the documentation is being processed. Lead times often dictate that there is no choice otherwise; short of stopping production at least. However because the paper trail is not complete, often a lengthy process in itself, the new demand will not show up in the computer's plan. The new product is not yet on the system until the documentation has been signed off. Indeed in the case of upgrades the software will continue to plan materials for the old model until it is told to do otherwise. What it lacks is the flexibility and the foresight to adequately react to the broader picture

Besides, an automated email is next to worthless. Simply holding up the process by returning 'documentation errors' is entirely unacceptable. And these are just a few simple situations that I've witnessed first hand. Extrapolate this to a national scale and imagine the problems


Why? Computers are perfect for the job. If consumption of good Y is on a statistically significant upward trend, then naturally one increases production of good Y appropriately. I'm no more convinced that economic planning is the sole province of human brains than I'm convinced of the same for playing chess.

Chess is easy. Very few variables and a limited set of deterministic moves. Creating a computer to play chess was only a matter of waiting for enough computational power to handle the problem. We now have that, just as we have it for computer planning tools. Actually putting the computer in charge, i.e. doing more than crunching numbers or solving simple problems, is entirely different

In contrast to chess, planning software for a single factory is likely to handle thousands of parts and hundreds of SKUs, each with associated parameters, forecasts and demands. Obviously this is different from a national economic plan, but it does give a small example of the sheer scale of the problem. The planning principles are also the same - it's not just a matter of "increasing production" but of assessing capacity, breaking new aggregate targets down, assigning these production orders and assessing the impact on other products or facilities. All in a continual loop. It's easy to see why this is impossible without computers, but also easy to see why human judgement is required at every stage of the process
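That aggregate-compare-break-down loop, in miniature (all figures invented; the rationing rule is exactly the kind of call a human planner would make):

```python
# The loop described above, in miniature: aggregate demand, compare it
# with capacity, and break the total into per-facility production
# orders. All figures are invented for illustration.

def plan(demands, capacity, facilities):
    total = sum(demands.values())  # aggregate the recorded demand
    if total > capacity:
        total = capacity           # ration down to capacity (how to ration
                                   # is a human call in any real system)
    # break the aggregate target into production orders per facility
    share = total / len(facilities)
    return {f: share for f in facilities}

orders = plan({"north": 300, "south": 500}, capacity=600,
              facilities=["plant-a", "plant-b", "plant-c"])
print(orders)  # 600 units of capacity split evenly across three plants
```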

ckaihatsu
12th January 2011, 12:19
2: How does the computer know that this data is incorrect? It need not be 'corrupted' but just a simple mistype or an idiot. The golden rule with all such systems is that the data you get out is only as good as the data you put in. If someone misplaces a decimal point on an order form the computer has no way of knowing that this is a fuck up. It simply takes the data and processes it, perhaps noting at the end of the iterative process that the results fall 'outside of spec'. In contrast, a human planner would spot the mistake instantly

Really, you cannot overstate the advances made in increasing computational power over the past two decades. Ironically much of the basis for today's computer-aided planning was laid by Soviet mathematicians and planners half a century ago. Only now can we make proper practical use of their techniques and formulae


I'm going to have to side with NoXion on this issue.

The computational capacity that currently exists is far more than adequate for fully computerizing -- if it hasn't been done already -- the entire material realm of manufactured products and ongoing labor. The logistics are already in place, too, using markets and financing, for the rudimentary tracking of all such elements through time. *And*, sophisticated types of dynamic modeling already exist for *other* complex applications like weather forecasting and the like.

So really it's just a matter of re-tooling, as was just stated in another thread:

[A] planned economy would simply be reprogramming the major AI institutions (which increasingly are making mass trade decisions independently on behalf of bourgeois today) to run on different parameters with various automatic, participative, deliberative, and high political data-inputs of democratic and need based demand, parameter-setting values. Literally a programming-organizational change from "profit-based" to "democratic-and-basic-need-based". I've heard some about this being the bleeding edge of finance capital. Does anyone have any information on this?

ComradeOm
12th January 2011, 12:45
I'm going to have to side with NoXion on this issue.

The computational capacity that currently exists is far more than adequate for fully computerizing -- if it hasn't been done already -- the entire material realm of manufactured products and ongoing labor. The logistics are already in place, too, using markets and financing, for the rudimentary tracking of all such elements through time. *And*, sophisticated types of dynamic modeling already exist for *other* complex applications like weather forecasting and the like

Sheer computational power aside, we are nowhere close to integrated economic planning. This is not a technological problem but an institutional one. There are no structures, programmes, institutions, etc, that exist today that can plan and effect a true economic plan. There is nothing to 're-tool'. We are talking not just macro-economic modelling (which is simply a rediscovery of French planisme) but the extension of planning principles to govern all economic transactions. The software required currently exists, or at least its genesis does, at the lowest level (MRP/ERP and stochastic simulation) but we are many years (and a social revolution) away from a comprehensive planned economy. It is nowhere near as simple as 'switching' software priorities from 'profit' to 'need'

ckaihatsu
12th January 2011, 13:12
Sheer computational power aside, we are nowhere close to integrated economic planning. This is not a technological problem but an institutional one. There are no structures, programmes, institutions, etc, that exist today that can plan and effect a true economic plan.


Yes, I agree that there would have to be a hand-in-glove integration of a collective workers' co-administration over mass public policy, with all supply chains throughout all industrial means of mass production.

There is nothing to 're-tool'.


In a sense there might be, though I'm not particularly well-versed on the particulars. Without an institutional structure, as you're noting, the *overhead* for any possible "re-tooling" simply *doesn't exist*.

We are talking not just macro-economic modelling (which is simply a rediscovery of French planisme) but the extension of planning principles to govern all economic transactions.


Then this part would require a new 'overhead' for a post-capitalist political economy....

The software required currently exists, or at least its genesis does, at the lowest level (MRP/ERP and stochastic simulation) but we are many years (and a social revolution) away from a comprehensive planned economy. It is nowhere near as simple as 'switching' software priorities from 'profit' to 'need'


Yeah, from what I know about the *current* functioning of the (capitalist) economy, everything is already in place from a *logistical* standpoint or else there wouldn't be financial markets, supply chains, transport, and so on.

What's unique today is that everything is seamless in the realm of description and user interface, such that the Internet / web as it exists would be the ideal *interface*, or "front end" for all functioning on the "back end".

It *is* quite an extrapolation to call for a full hand-over of a post-capitalist *political economy* to a (potential) AI system, and so we should maintain our politics of keeping such mass co-administration in the hands of the workers of the world themselves.

Jimmie Higgins
12th January 2011, 13:52
That's not a centrally planned economy I'd want to live under. How would a computer know what a population wanted and needed? If it's a machine to collect and crunch data to help people plan what to produce, that's one thing. But a computer to analyze and decide what's needed to be produced is just bizarre - I'm fighting to put the producers democratically in charge of production, not a computer.

ÑóẊîöʼn
12th January 2011, 14:11
That's not a centrally planned economy I'd want to live under. How would a computer know what a population wanted and needed? If it's a machine to collect and crunch data to help people plan what to produce, that's one thing. But a computer to analyze and decide what's needed to be produced is just bizarre - I'm fighting to put the producers democratically in charge of production, not a computer.

Humans in large numbers are laughably predictable. Strip away the tinsel and ornamentation of "branding" and you will find that goods and services of all types serve some fairly basic drives. Even shit that's useless for all practical purposes has some psychological reason behind its existence - for example as a status symbol.

Capitalism obscures the issue by meeting psychosocial needs with material goods.

Jimmie Higgins
12th January 2011, 14:25
Humans in large numbers are laughably predictable. Strip away the tinsel and ornamentation of "branding" and you will find that goods and services of all types serve some fairly basic drives. Even shit that's useless for all practical purposes has some psychological reason behind its existence - for example as a status symbol.

Capitalism obscures the issue by meeting psychosocial needs with material goods.

I'm not talking about wanting a red or blue sports car, I'm talking about how we organize society, set our priorities and so on. Like I said, if it's a machine designed to automate and carry out decisions made democratically by people, or an AI that has been set with certain directives and priorities, then that's one thing: a tool. An AI program without human input, on the other hand, would not be able to "decide" what priorities people have, or whether it was more important to use resources building a subway or a hospital. Would a computer program know how a community wanted to set itself up... and would we want that? Or would we want people to be able to experiment with ways of organizing themselves if we had a healthy surplus and the time and resources?

Automation and AI will no doubt be important for workers trying to free themselves from mundane and boring but necessary tasks that don't require much debate or decision-making, but to me, it's a leap to think that a program would be able to take the place of normal human decision-making when it comes to fundamental questions of how we want to live and organize our daily lives.
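The "tool, not decision-maker" division of labour can be made concrete: people set the parameters democratically, and the machine only executes within them. A minimal hypothetical sketch, with all names and numbers invented:

```python
# Hypothetical sketch: the computer proposes replenishment orders, but
# hard ceilings set by human planners bound what it may commit to --
# so it can never order ten years' worth of widgets by mistake.

def propose_order(monthly_demand, on_hand, target_months, max_months):
    """Order up to target_months of cover, clamped to the
    democratically decided ceiling of max_months of cover."""
    target = min(target_months, max_months) * monthly_demand
    return max(0.0, target - on_hand)

# A runaway forecast asks for 120 months of widgets; the ceiling holds:
print(propose_order(monthly_demand=1000.0, on_hand=2000.0,
                    target_months=120, max_months=6))  # 4000.0
```

The interesting decision -- what the ceiling should be -- stays with people; the computer just does the arithmetic.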

ckaihatsu
12th January 2011, 15:45
I'm going to suggest here that there is a distinction to be made between that which is related to being *political* (co-administrative) and that which is related to *consumption*.

For logistics -- and similarly, for life's journeys, too -- the routine may resemble a train ride, in which certain lines of forward progress have already been selected (by oneself or mass workers' society, hopefully), and so one rides the length of that segment, *until* it reaches a terminal or hub of some sort. At that point there are choices to be made, and unforeseen factors and options may present themselves as well -- this would correspond to *political* decision-making at the mass scale. Once done, a certain "distance" of forward trajectory may be fairly easily surmised, through extrapolation, as with demographics or data on consumption patterns. (Note that this is congruent with the empirical observation of 'punctuated equilibrium', from other fields.)

In my own model there is a component that speaks to this in a concrete manner (from post #11):





Propagation

consumption [demand] -- Individuals may create templates of political priority lists for the sake of convenience, modifiable at any time until the date of activation -- regular, repeating orders can be submitted into an automated workflow for no interruption of service or orders
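For what it's worth, the template / standing-order mechanism described above is easy to sketch in code. This is only an illustration of the workflow idea, with all names invented:

```python
# Sketch of a standing consumption order: it repeats automatically on
# a fixed cycle, and stays modifiable or cancellable by its owner at
# any time before the next activation date. All names are illustrative.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class StandingOrder:
    item: str
    quantity: int
    next_activation: date
    interval_days: int
    active: bool = True

    def due(self, today: date) -> bool:
        return self.active and today >= self.next_activation

    def fulfil(self) -> None:
        # roll forward one cycle -- no interruption of service
        self.next_activation += timedelta(days=self.interval_days)

order = StandingOrder("bread", 2, date(2011, 1, 12), interval_days=7)
if order.due(date(2011, 1, 12)):
    order.fulfil()
print(order.next_activation)  # 2011-01-19
```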

ÑóẊîöʼn
12th January 2011, 16:33
I'm not talking about wanting a red or blue sports car, I'm talking about how we organize society, set our priorities and so on. Like I said, if it's a machine designed to automate and carry out decisions made democratically by people, or an AI that has been set with certain directives and priorities, then that's one thing: a tool. An AI program without human input, on the other hand, would not be able to "decide" what priorities people have, or whether it was more important to use resources building a subway or a hospital. Would a computer program know how a community wanted to set itself up... and would we want that? Or would we want people to be able to experiment with ways of organizing themselves if we had a healthy surplus and the time and resources?

Automation and AI will no doubt be important for workers trying to free themselves from mundane and boring but necessary tasks that don't require much debate or decision-making, but to me, it's a leap to think that a program would be able to take the place of normal human decision-making when it comes to fundamental questions of how we want to live and organize our daily lives.

I think that ultimately comes down to what role in society people in the future will want AIs to play, and that's something we cannot know for certain. But what I am certain of is that people who are comfortable are less willing to rock the boat. So it may turn out that Friendly AIs end up taking over and running the world not through force, but because humans are lazy, short-sighted and generally not up to the task.

Terrible as that may sound, that's one of the better scenarios, because at least the human species would be alive and comfortable.

ComradeOm
12th January 2011, 16:55
Yeah, from what I know about the *current* functioning of the (capitalist) economy, everything is already in place from a *logistical* standpoint or else there wouldn't be financial markets, supply chains, transport, and so on.

Keep in mind that all those also existed a century ago. What we've done in the past two decades is significantly enhance human control over these areas through automation/computers. We're just better at managing supply chains et al than we used to be. The role of computers is to ease the computational burden (by doing all those pesky calculations) and allow the planners to manage affairs more proactively. Now, there has been a (very) slow march towards integration but, even at the lowest level, it is still far from complete. Only the larger corporations have fully integrated ERP packages to coordinate all aspects of the business, and even these are not particularly efficient

What is obviously missing, from a socialist perspective, is a replacement for the market. Handling operational level activities via software is fine but currently market mechanisms remain dominant in the economy and there is no sign of this changing under capitalism. That is the missing link: the creation of a central planning superstructure to coordinate production across multiple sectors and enterprises. Unfortunately, there is no capitalist equivalent (other than the market) and constructing such an institution is a task that will have to wait until post-revolution
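To illustrate what "the lowest level" looks like in practice: the core MRP routine is a bill-of-materials explosion, which converts demand for finished goods into gross requirements for every component. A toy sketch, with an invented product structure:

```python
# Toy bill-of-materials explosion, the heart of any MRP package:
# walk the product structure and accumulate gross requirements.
# The BOM below is invented for illustration.
from collections import defaultdict

BOM = {
    "bicycle": {"frame": 1, "wheel": 2},
    "wheel": {"rim": 1, "spoke": 36},
}

def explode(item, qty, req=None):
    """Accumulate gross component requirements for qty units of item."""
    req = req if req is not None else defaultdict(int)
    for part, per_unit in BOM.get(item, {}).items():
        req[part] += qty * per_unit
        explode(part, qty * per_unit, req)   # recurse into sub-assemblies
    return req

print(dict(explode("bicycle", 100)))
# {'frame': 100, 'wheel': 200, 'rim': 200, 'spoke': 7200}
```

A production system would also net off inventory and offset by lead times, but the calculation itself really is this mechanical -- which is precisely why it computerizes so readily.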


So it may turn out that Friendly AIs end up taking over and running the world not through force, but because humans are lazy, short-sighted and generally not up to the task.

A very strange attitude for a socialist to take. How about the workers take up the challenge and manage the world as they like?

ÑóẊîöʼn
12th January 2011, 17:24
A very strange attitude for a socialist to take. How about the workers take up the challenge and manage the world as they like?

What if it turns out they don't want to? I doubt that "managing the world" is all fun and games.

ckaihatsu
12th January 2011, 17:45
What if it turns out they don't want to? I doubt that "managing the world" is all fun and games.


Here's the "math" on that:

Consider how many people *currently* do volunteer work -- the work is dictated by others, the volunteers retain no claim to their work products, and all the while they continue to incur the regular daily expenses of living.

Now consider a post-capitalist environment in which people would have full, albeit collectively mediated, access to the entirety of the world's means of mass industrial production and all lesser implements, including a proportionate share of its total material output. As long as there was no serious, well-founded political opposition to a certain use, it would be fully available -- probably similar to today's licensing to drive a car on the streets or to practice certain specialized professions.

So, given such a potential realistic scenario, do you really think that that world of 7 billion+ would be so hard-pressed to find enough people willing to be pro-active in such a society, for their own good and that of others, un-exploited, un-oppressed, and working uncoerced in solidarity with every single other person on the globe -- ? -- !

danyboy27
12th January 2011, 17:54
Well, what we do with the AI is only a question of democratic politics, and seriously I don't see a problem with that, no matter the outcome.

Such a system could allow us to undertake great and gigantic projects with speed and efficiency, without being forced to starve half of the world to death.

The only real drawback of this that I can see is the tremendous force such a system could possess, and the issues it could likely cause us because of it.

The people could willingly set the AI on a project that, in the end, would be pointless and result in the waste of manpower, time and resources.

ÑóẊîöʼn
12th January 2011, 17:56
Now consider a post-capitalist environment in which people would have full, albeit collectively mediated, access to the entirety of the world's means of mass industrial production and all lesser implements, including a proportionate share of its total material output. As long as there was no serious, well-founded political opposition to a certain use, it would be fully available -- probably similar to today's licensing to drive a car on the streets or to practice certain specialized professions.

Looks like you're confusing means and motives to me.


So, given such a potential realistic scenario, do you really think that that world of 7 billion+ would be so hard-pressed to find enough people willing to be pro-active in such a society, for their own good and that of others, un-exploited, un-oppressed, and working uncoerced in solidarity with every single other person on the globe -- ? -- !

Isn't it the case that in politics those most willing to do a job also happen to be those who are least suited to it?

ckaihatsu
12th January 2011, 18:20
Now consider a post-capitalist environment in which people would have full, albeit collectively mediated, access to the entirety of the world's means of mass industrial production and all lesser implements, including a proportionate share of its total material output. As long as there was no serious, well-founded political opposition to a certain use, it would be fully available -- probably similar to today's licensing to drive a car on the streets or to practice certain specialized professions.





Looks like you're confusing means and motives to me.


On the whole, aren't people's motivations affected by the kind of social environment they're in?





Isn't it the case that in politics those most willing to do a job also happen to be those who are least suited to it?


This is just vacuous pessimism -- in a collectivized workers' society each person would have an equitable political presence for general societal matters and also complementing any work roles they may have. Your "managing the world" conception is a misguided one since political involvements would *not* be privileged or specialized, by the definition of communism.





The only real drawback of this that I can see is the tremendous force such a system could possess, and the issues it could likely cause us because of it.

The people could willingly set the AI on a project that, in the end, would be pointless and result in the waste of manpower, time and resources.


This is a characteristic inherent to the very practice of science (or artistry) itself -- at greater scales more complexity is necessarily involved, but there is also more collective intelligence tasked to the investigation or project as well.

trivas7
12th January 2011, 18:32
[...] If its a properly programed superintelligence, results should be good.
How would that happen? And what qualifies as good results? This is a wish for the premise of 'The Matrix' to become reality.

ComradeOm
12th January 2011, 19:02
What if it turns out they don't want to? I doubt that "managing the world" is all fun and games.

And the rewards are just as great. But to come back to this, what sort of a socialist doesn't advocate the ascendancy of the working class? :confused:

ÑóẊîöʼn
12th January 2011, 23:32
And the rewards are just as great. But to come back to this, what sort of a socialist doesn't advocate the ascendancy of the working class? :confused:

Oh, I advocate it. I just don't have any delusions about such an event being some kind of end or beginning to history. I like to think on longer timescales than normal, and as far as the species is concerned, any kind of stasis is stagnation and inevitable extinction. Sooner or later (relatively speaking) the human species will no longer be top dog on this planet, and I'd like to think that even though we would no longer be leading from the front, we should at least have a comfortable ringside seat on proceedings.

ckaihatsu
13th January 2011, 09:35
As a side note, and for whatever it's worth, I'll just note that I don't share this technological fatalism, and I would prefer to see *you* disabused of it as well....





any kind of stasis is stagnation and inevitable extinction.


If recorded history has shown us anything, it's that there hasn't been *that* much stagnation, and now, in the technological present, the regular person's options for activities have exploded exponentially, at least in raw physical availability.

Point being that this is hardly the correct historical era within which to be concerned about stasis for humanity's culture, etc...(!)





Sooner or later (relatively speaking) the human species will no longer be top dog on this planet,


The very conceptual construction of a superior-conscious AI is a stretch as far as I can surmise.

There would have to be incremental stages of development far preceding any kind of accomplishment that would produce some kind of feisty, willful ghost-in-the-machine -- note that the only reason computer viruses even exist in the present day is because of the kingdom-like dominance Microsoft has over the operating system market (the Windows OS has continued its default, layered construction that *allows* an intermediary "gap" within the processing of instructions at which outside pieces of code, or viruses, may attack -- note that viruses are practically unheard of for other operating systems).

If a hypothetical AI ghost is constrained to the realm of circuitry and all component systems (major servers) have backup systems that may be rebooted into fresh, clean states of operation, then it follows that a simple social coordination -- as through the mainstream news -- would be sufficient to systematically deny such a hypothetical entity any and all locations for its virus-like spreading.

And this would be a *worst-case* scenario -- I'd imagine that many eyes and ears would be all over stages of preceding incremental development, even if kept under wraps in the private sector, generally akin to the field of genetic research. In short the engineering of a hypothetical AI ghost-in-the-machine would be highly politicized long before it could reach such a proposed state of being.

ÑóẊîöʼn
13th January 2011, 18:49
If recorded history has shown us anything, it's that there hasn't been *that* much stagnation, and now, in the technological present, the regular person's options for activities have exploded exponentially, at least in raw physical availability.

That should also tell you that the past is an unreliable guide to the future. How many Sumerians even suspected that a thing such as an aircraft carrying hundreds of people could exist?


Point being that this is hardly the correct historical era within which to be concerned about stasis for humanity's culture, etc...(!)

It's not culture in particular I'm concerned about. It's our overall rate of development relative to the rest of the universe. Sure, things are moving quickly (but not quickly enough in the right directions in my opinion) as of this moment, and while I think a technological singularity is something that can and should happen, I am not dogmatic about it; I'm fully aware of the possibility that the future may not turn out as prognosticators indicated, and that we should prepare for regression as well as advance.

Hell, there are even some who seek to take humanity into at least some kind of technologically regressive state, and who's to say that social progress wouldn't be obliterated along with the technological?

I suspect that as capitalism limps its way into the future and as technology advances, attitudes towards technology will become increasingly polarised, and primitivist and quasi-primitivist sentiment will become increasingly popular as a result.

If some form of primitivism becomes the dominant social norm, then we are finished. Not necessarily the rapid, cinema-friendly extinction of the kind brought about by asteroids and supervolcano eruptions (although these could happen, and we'd be in even less of a position to do anything about it than now), but the inevitable whimpering fadeout that has befallen so many species on this planet before.


The very conceptual construction of a superior-conscious AI is a stretch as far as I can surmise.

There would have to be incremental stages of development far preceding any kind of accomplishment that would produce some kind of feisty, willful ghost-in-the-machine -- note that the only reason computer viruses even exist in the present day is because of the kingdom-like dominance Microsoft has over the operating system market (the Windows OS has continued its default, layered construction that *allows* an intermediary "gap" within the processing of instructions at which outside pieces of code, or viruses, may attack -- note that viruses are practically unheard of for other operating systems).

One of the most effective strategies that malware uses to spread itself is social engineering - something that an AI would be able to use even with non-MS operating systems, since it concentrates on the weakest part of any system - the human brain.


If a hypothetical AI ghost is constrained to the realm of circuitry and all component systems (major servers) have backup systems that may be rebooted into fresh, clean states of operation, then it follows that a simple social coordination -- as through the mainstream news -- would be sufficient to systematically deny such a hypothetical entity any and all locations for its virus-like spreading.

You're fucking kidding, right? Despite nearly a decade of news coverage, people still fall for the same old scams that enable all sorts of malware to take root in their systems. You might be able to get corporate and government systems to beef up their security, but home users? Forget it.

This isn't even taking into account the potential for an AI to suborn legitimate processes in ways that dumb viruses can only dream of, so to speak.


And this would be a *worst-case* scenario -- I'd imagine that many eyes and ears would be all over stages of preceding incremental development, even if kept under wraps in the private sector, generally akin to the field of genetic research. In short the engineering of a hypothetical AI ghost-in-the-machine would be highly politicized long before it could reach such a proposed state of being.

So who's watching AI developers right now? And given incremental development, what possible event could occur that would precipitate a surge of political and regulatory interest in AI development?

ckaihatsu
13th January 2011, 20:15
If recorded history has shown us anything, it's that there hasn't been *that* much stagnation, and now, in the technological present, the regular person's options for activities have exploded exponentially, at least in raw physical availability.





That should also tell you that the past is an unreliable guide to the future. How many Sumerians even suspected that a thing such as an aircraft carrying hundreds of people could exist?


Okay, you're giving a counter-argument on 'historical precedent', but I'm saying *present* expanded technological capacities are already here, so, again, activity availability has increased overall due to the net and now-mature, now-affordable hardware and software combinations for the average person.





It's not culture in particular I'm concerned about. It's our overall rate of development relative to the rest of the universe.


Um, I haven't checked in recently with "the rest of the universe" -- any news from them lately? (grin)





Sure, things are moving quickly (but not quickly enough in the right directions in my opinion) as of this moment,


Agreed.





and while I think a technological singularity is something that can and should happen, I am not dogmatic about it; I'm fully aware of the possibility that the future may not turn out as prognosticators indicated, and that we should prepare for regression as well as advance.

Hell, there are even some who seek to take humanity into at least some kind of technologically regressive state, and who's to say that social progress wouldn't be obliterated along with the technological?


Eh....

My take is that U.S. hegemonic imperialism, based on military might, is pretty firmly entrenched -- this provides a distinct First-World-level focus of attention for mass political struggles from below. Already the net, as a re-emerging generalized global society, has stepped up to be called 'the second superpower'. This may be the basis for a sounder populist, and even revolutionary, mass sentiment that resonates and reverberates all over the world, all at once. Certainly it's an improvement over past decades in which the general left looked to the U.S.S.R. as the political counterweight to U.S. imperialism....





I suspect that as capitalism limps its way into the future and as technology advances, attitudes towards technology will become increasingly polarised, and primitivist and quasi-primitivist sentiment will become increasingly popular as a result.


No, I'd have to disagree here -- the time for that is past since technology was *cruder* in past decades and more open to such criticism (and paranoia, from misunderstandings).





If some form of primitivism becomes the dominant social norm, then we are finished. Not necessarily the rapid, cinema-friendly extinction of the kind brought about by asteroids and supervolcano eruptions (although these could happen, and we'd be in even less of a position to do anything about it than now), but the inevitable whimpering fadeout that has befallen so many species on this planet before.


Sounds more like a dark fantasy....





One of the most effective strategies that malware uses to spread itself is social engineering - something that an AI would be able to use even with non-MS operating systems, since it concentrates on the weakest part of any system - the human brain.


Eh, more pessimism / fantasy....





You're fucking kidding, right? Despite nearly a decade of news coverage, people still fall for the same old scams that enable all sorts of malware to take root in their systems. You might be able to get corporate and government systems to beef up their security, but home users? Forget it.

This isn't even taking into account the potential for an AI to suborn legitimate processes in ways that dumb viruses can only dream of, so to speak.


Pass. Still exaggerated, unfounded pessimism.





So who's watching AI developers right now? And given incremental development, what possible event could occur that would precipitate a surge of political and regulatory interest in AI development?


All it takes is one whistleblower -- mass attention catches on pretty quickly if the magnitude is substantial enough.

Jimmie Higgins
13th January 2011, 20:47
So it may turn out that Friendly AIs end up taking over and running the world not through force, but because humans are lazy, short-sighted and generally not up to the task.

Terrible as that may sound, that's one of the better scenarios, because at least the human species would be alive and comfortable.

Ok, it may be a more basic difference in views of humanity. I tend to believe that people are pretty good at working for things they want and need, and can be far-sighted - many early societies found ways to live in more balance with their surroundings in order to maintain a steady and stable life. Capitalism, on the other hand, is completely driven by economic expansion and profits, which make people very short-sighted and make today's rulers very impressionistic planners, because next quarter's profits take precedence over the well-being of the next generation. People are only lazy when it comes to today's alienated labor - doing tasks that are boring and unappealing. That's a great place for AI in the future: make computers count beans and make decisions based on basic parameters that people have already democratically decided. But I think people will want the creative planning and the power to decide how to use the surplus and so on.

ÑóẊîöʼn
13th January 2011, 21:50
Okay, you're giving a counter-argument on 'historical precedent', but I'm saying *present* expanded technological capacities are already here, so, again, activity availability has increased overall due to the net and now-mature, now-affordable hardware and software combinations for the average person.

As popular and user-friendly software such as Windows and OS X demonstrate, technological availability does not necessarily translate into technical ability. You do not need to know the inner workings of a PC (or most other devices) in order to use it.


Um, I haven't checked in recently with "the rest of the universe" -- any news from them lately? (grin)

Hilarious. :mellow:


Eh....

My take is that U.S. hegemonic imperialism, based on military might, is pretty firmly entrenched -- this provides a distinct First-World-level focus of attention for mass political struggles from below. Already the net, as a re-emerging generalized global society, has stepped up to be called 'the second superpower'. This may be the basis for a sounder populist, and even revolutionary, mass sentiment that resonates and reverberates all over the world, all at once. Certainly it's an improvement over past decades in which the general left looked to the U.S.S.R. as the political counterweight to U.S. imperialism....

Well, I genuinely hope that's the case, but legislation like the UK's "extreme porn" (http://www.theregister.co.uk/2010/08/25/pain_olympics/) law, the government's involvement of unaccountable bodies such as the Internet Watch Foundation (http://en.wikipedia.org/wiki/Internet_Watch_Foundation#Criticism), and capital's collusion with the state in lobbying for more power for the cops (http://www.theregister.co.uk/2010/11/25/nominet_crime/), making it more difficult for sites such as FITwatch (http://www.fitwatch.org.uk) to operate, lead me to believe that enough in the ruling class (in the UK at least) believe a neutral Net presents a threat to their hegemony.


No, I'd have to disagree here -- the time for that is past since technology was *cruder* in past decades and more open to such criticism (and paranoia, from misunderstandings).

And the misunderstandings, and hence the paranoia, have only increased as technology has become more complex and thus more mysterious to the ignorant. Despite the internet being increasingly easier to use, people still believe in stuff like HAARP (http://en.wikipedia.org/wiki/HAARP) being a secret superweapon that can cause earthquakes, out of a combination of ignorance and fear.


Sounds more like a dark fantasy....

So you deny that a technologically static species would eventually become extinct?


Eh, more pessimism / fantasy....

So you deny that people open attachments from anonymous strangers, against their own best interests?


Pass. Still exaggerated, unfounded pessimism.

Your cast-iron certainty that AIs could not even achieve the same things that human hackers can is astoundingly naive. AIs have numerous inherent advantages (incidentally the same kind of advantages that make them attractive projects to build) and sooner or later someone is going to give an AI root access of some kind - and I want the AI in question to be Friendly (http://singinst.org/upload/CFAI.html) when that happens.


All it takes is one whistleblower -- mass attention catches on pretty quickly if the magnitude is substantial enough.

That assumes that anyone but the AI verself knows anything is wrong. The AI will report any malfunction, because it's highly likely the supergoal requires proper maintenance of the AI, but if it has been deliberately coded to be anything but Friendly then the problem might not be apparent until it's too late.

It may happen first as a relatively minor event, in which case I can only hope that we won't overreact. Criminalising AI research would only drive it into the hands of criminals and other malcontents.


Ok, it may be a more basic difference in views of humanity. I tend to believe that people are pretty good at working for things they want and need, and can be far-sighted - many early societies found ways to live in more balance with their surroundings in order to maintain a steady and stable life.

They didn't know what we know now; that the universe is fundamentally indifferent to life. It doesn't hate us or love us; it nourishes and destroys with the same blind unintentionality for each.


Capitalism, on the other hand, is completely driven by economic expansion and profits, which make people very short-sighted and make today's rulers very impressionistic planners, because next quarter's profits take precedence over the well-being of the next generation.

Agreed.


People are only lazy when it comes to today's alienated labor - doing tasks that are boring and unappealing. That's a great place for AI in the future: make computers count beans and make decisions based on basic parameters that people have already democratically decided. But I think people will want the creative planning and the power to decide how to use the surplus and so on.

Don't get me wrong - human laziness, creatively applied, has led to some of our greatest achievements.

ckaihatsu
13th January 2011, 22:16
As popular and user-friendly software such as Windows and OS X demonstrate, technological availability does not necessarily translate into technical ability. You do not need to know the inner workings of a PC (or most other devices) in order to use it.


Uh, *yeah* -- you're making my point for me....





Well, I genuinely hope that's the case, but legislation like the UK's "extreme porn" (http://www.theregister.co.uk/2010/08/25/pain_olympics/) law, the government's involvement of unaccountable bodies such as the Internet Watch Foundation (http://en.wikipedia.org/wiki/Internet_Watch_Foundation#Criticism), and capital's collusion with the state in lobbying for more power for the cops (http://www.theregister.co.uk/2010/11/25/nominet_crime/), making it more difficult for sites such as FITwatch (http://www.fitwatch.org.uk) to operate, lead me to believe that enough in the ruling class (in the UK at least) believe a neutral Net presents a threat to their hegemony.


Noted.





And the misunderstandings, and hence the paranoia, have only increased as technology has become more complex and thus more mysterious to the ignorant. Despite the internet becoming increasingly easy to use, people still believe in stuff like HAARP (http://en.wikipedia.org/wiki/HAARP) being a secret superweapon that can cause earthquakes, out of a combination of ignorance and fear.


What about the geese?





So you deny that people open attachments from anonymous strangers, against their own best interests?


I suppose, but it's also easily remedied -- one could delete the program and restart. Worst case is re-installing the system software.





Your cast-iron certainty that AIs could not even achieve the same things that human hackers can is astoundingly naive. AIs have numerous inherent advantages (incidentally the same kind of advantages that make them attractive projects to build) and sooner or later someone is going to give an AI root access of some kind - and I want the AI in question to be Friendly (http://singinst.org/upload/CFAI.html) when that happens.


Look -- with all due respect, what you keep harping on is entirely formalistic and hypothetical. You keep making this giant leap from current, *linear-oriented* circuitry to some kind of out-from-nowhere, surprisingly complex and sophisticated silicon-based entity.

Your contentions are so over-extended that you have to rely solely on salesmanship -- note that you don't ever reference any news regarding benchmark developments whatsoever. Just describing something in extensive detail doesn't make it reality....





That assumes that anyone but the AI verself knows anything is wrong. The AI will report any malfunction because it's highly likely the supergoal requires proper maintenance of the AI, but if it has been deliberately coded to be anything but Friendly then the problem might not be apparent until things are too late.


More presuppositions, postulations, and hypotheticals....





It may happen first as a relatively minor event, in which case I can only hope that we won't overreact. Criminalising AI research would only drive it into the hands of criminals and other malcontents.


Perhaps you'd be more useful here if you just passed along some information that pertains to the current state of artificial learning developments....

ÑóẊîöʼn
14th January 2011, 00:13
Uh, *yeah* -- you're making my point for me....

The point is that advanced technological civilisation requires specialist knowledge to function. If that knowledge is lost, it's going to take a long time to regain it, if we ever do.


What about the geese?

I suppose there are people who blame their deaths on HAARP as well. Capitalist mismanagement of technology is often misidentified as being a symptom of the technology itself, and this, combined with ignorance, can lead to suspicion over technological activities even when it's not warranted.


I suppose, but it's also easily remedied -- one could delete the program and restart. Worst case is re-installing the system software.

And what about the thousands of other PCs that form part of a botnet, most of their users not even realising they're compromised?


Look -- with all due respect, what you keep harping on is entirely formalistic and hypothetical. You keep making this giant leap from current, *linear-oriented* circuitry to some kind of out-from-nowhere, surprisingly complex and sophisticated silicon-based entity.

It's not going to come out of nowhere. It's going to be the consequence of an unFriendly AI, built by humans, that gets loose.


Your contentions are so over-extended that you have to rely solely on salesmanship -- note that you don't ever reference any news regarding benchmark developments whatsoever. Just describing something in extensive detail doesn't make it reality....

I don't claim to know when it will happen, but I have good reasons to believe it will happen, barring some worldwide apocalypse or a popular resurgence of primitivism.


More presuppositions, postulations, and hypotheticals....

Pot, meet kettle.


Perhaps you'd be more useful here if you just passed along some information that pertains to the current state of artificial learning developments....

Perhaps you should stop making posts that amount to borderline trolling. :rolleyes:

ckaihatsu
14th January 2011, 00:47
The point is that advanced technological civilisation requires specialist knowledge to function. If that knowledge is lost, it's going to take a long time to regain it, if we ever do.


You've just shifted the topic of this point, as you often do in your discussions -- now it's shifted to another one of your pessimistic, apocalyptic diatribes.





And what about the thousands of other PCs that form part of a botnet, most of their users not even realising they're compromised?


Tell them to switch to Linux.





It's not going to come out of nowhere. It's going to be the consequence of an unFriendly AI, built by humans, that gets loose.


I'll gamble here and take the chance that in hindsight I'll be called flippant -- there, now you're being Chicken Little.





More presuppositions, postulations, and hypotheticals....





Pot, meet kettle.


No, not really -- the burden of proof is on those who make assertions. You're putting forth a line of technological fatalism that strains credulity. Again, my advice is to provide us with news of where artificial learning is at these days -- that would at least give us a real-world frame of reference by which to weigh your hypothetical scenarios.

Magón
14th January 2011, 01:20
Just thought I'd post this up. Seemed like the thread needed it more than ever.

http://2.bp.blogspot.com/_uu34lpOGIcA/ST82QaPrQ1I/AAAAAAAAABE/KZZDFMb9AUQ/s400/terminator_salvation_movie.jpg

Jimmie Higgins
14th January 2011, 17:01
Don't get me wrong - human laziness, creatively applied, has led to some of our greatest achievements.Favorite quote of the week:)

Jazzratt
15th January 2011, 02:36
Just thought I'd post this up. Seemed like the thread needed it more than ever.

http://2.bp.blogspot.com/_uu34lpOGIcA/ST82QaPrQ1I/AAAAAAAAABE/KZZDFMb9AUQ/s400/terminator_salvation_movie.jpg Really? You see, to me what it looks like, far from something this thread needs, is an almost entirely non-sequitur image that really doesn't add to the discussion. In light of that, what this thread really needs is for me to give you a verbal warning for spam.

Have a verbal warning for spam :)

Magón
15th January 2011, 15:26
Really? You see to me what it looks like, far from something this thread needs, is an almost entirely non-sequitur image that really doesn't add to the discussion. In the light of that what this thread really needs is for me to give you a verbal warning for spam.

Have a verbal warning for spam :)

Why? This thread is talking about a centrally planned economy controlled by a computer. In Terminator movies, Skynet is that controlling computer. I was simply just pointing out how some might feel to having a computer controlling their economy, etc.

(And it was a little joke, not meant to be seen as spam.)

Jazzratt
15th January 2011, 16:09
Why? This thread is talking about a centrally planned economy controlled by a computer. In Terminator movies, Skynet is that controlling computer. I was simply just pointing out how some might feel to having a computer controlling their economy, etc.

(And it was a little joke, not meant to be seen as spam.) I'm not going to argue the toss with you over it in this thread. If you feel the warning was truly unnecessary you can PM me. To me the picture really did just look like a non-sequitur and you should probably have given the context you gave in this post in that one.

I am going to argue with you on the relevance of Terminator though, or more generally with the various "be terrified of AI!" films people use (inevitably, tediously) as analogies in these arguments. Citing fiction is lazy and there really is no reason to think it will turn out the way it does in these stories - usually the AI taking over or going berserk does so as a result of a series of convenient, basic errors on the part of the humans and thanks to a convergence of unlikely events.

ComradeMan
15th January 2011, 16:14
Welcome to the world of the Borg!!!

ComradeOm
15th January 2011, 21:10
I am going to argue with you on the relevance of Terminator though, or more generally with the various "be terrified of AI!" films people use (inevitably, tediously) as analogies in these arguments. Citing fiction is lazy and there really is no reason to think it will turn out the way it does in these stories - usually the AI taking over or going berserk does so as a result of a series of convenient, basic errors on the part of the humans and thanks to a convergence of unlikely events.I might be too charitable here, but I read the picture as commenting on the degree to which the above few posts resemble little more than science fiction. Does it really make a difference if the Super-AI-That-Has-Supplanted-Humans-And-Now-Governs-The-World is inherently good or bad? It's still so speculative as to be within the realms of science fiction. Nothing wrong with reading or discussing that but it's not very relevant to economic planning

ÑóẊîöʼn
15th January 2011, 21:59
I might be too charitable here, but I read the picture as commenting on the degree to which the above few posts resemble little more than science fiction. Does it really make a difference if the Super-AI-That-Has-Supplanted-Humans-And-Now-Governs-The-World is inherently good or bad? It's still so speculative as to be within the realms of science fiction. Nothing wrong with reading or discussing that but it's not very relevant to economic planning

The way I see it, there are at least a couple of fairly logical reasons why the topic is worth exploring seriously:

A) So far, absolutely nothing we have discovered about organic tissue indicates that it is special or unique with regards to embodying intelligence - it is therefore reasonable to assume that at some point in the future, barring technological regression of some kind, intelligence will be embodied in technology somehow.

B) Although we may not achieve human-or-greater artificial intelligence any time soon, perhaps not within our lifetimes - the exact same can be said for a socialist economy that is more than just "capitalism with a human face".

Given the above, I don't get how planning the economics of a society that doesn't exist, and may never exist, is any less fantastical or speculative than discussing the very real possibility of AI-based economics.

ComradeOm
15th January 2011, 22:09
Given the above, I don't get how planning the economics of a society that doesn't exist, and may never exist, is any less fantastical or speculative than discussing the very real possibility of AI-based economics.My posts discuss a planning process and related software that already exist. Everything I've talked about so far is technologically feasible today. The question is simply how we arrange that on a national level. This is entirely different from basing visions of social order on wild assumptions about future technological advances
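For what it's worth, the kind of routine planning calculation being alluded to here can be sketched in a few lines. This is a toy Leontief input-output model - every sector name and coefficient below is made up purely for illustration: given a matrix of inter-industry requirements and a vector of final demand, the gross output each sector must produce is just the solution of a linear system.

```python
import numpy as np

# Hypothetical 3-sector input-output table (all numbers invented).
# A[i, j] = units of sector i's output consumed to make one unit of sector j's output.
A = np.array([
    [0.10, 0.20, 0.05],   # steel used by steel, machinery, food
    [0.15, 0.05, 0.10],   # machinery used by each sector
    [0.00, 0.05, 0.10],   # food used by each sector
])

d = np.array([100.0, 50.0, 200.0])  # final (consumer) demand per sector

# Gross output x must cover both intermediate use and final demand:
# x = A @ x + d, which rearranges to (I - A) @ x = d.
x = np.linalg.solve(np.eye(3) - A, d)
```

Real planning software layers capacity limits, time periods and so on over this, but the core is routine linear algebra - exactly the "grunt work" a computer is good at, with humans deciding what goes in `A` and `d`.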

ckaihatsu
15th January 2011, 23:07
Been down *this* road before....





Given that any possible AI entity will necessarily have to be developed within the context of *existing* human society and its concerns -- primarily for its *own* self-determination and well-being -- and that *any* decision takes place within a real-world domain / situation that is *understandable* by the human intellect, it follows that those (human) parties involved in a conflict or decision will *not* relinquish their self-interest, or sovereignty, to their claim so that it can be handled in a substitutionist way by some outside third party whether human or artificial.

The way that CB looks to some possible future technology as the source of resolution of all human-concerned conflicts is politically technologically idealist (and substitutionist). It's *exactly* as bad as any working-class person looking to the Democratic Party as the source of resolution of class-based conflicts.





I'll argue here, then, that your definition of 'AI' is really more in line with the definition of an 'expert system' (see Wikipedia) -- as long as there's human societal supervision over the artificial tool then it's *not* socially independent and self-aware.





"Programmed to enjoy" is a *very* interesting choice of phrase -- I'll assert that it's actually a contradiction in terms. If an entity is directed, or programmed, to do something then that social directive from without actually *displaces* an entity's self-determination, including the possibility of enjoyment, or pleasure. By extension we could base our measurement of self-awareness on this issue of whether an entity has been *pre-determined* from without for a certain kind of action, or if it / they have developed it themselves from *within*.





It's the fallacy that results from too much abstraction of the *individual* human intellect -- the premise is that our brainpower is fundamentally "lacking", humanity needs "saving", and so the "superhero" arrives in the form of the circuitry that we have spawned.

I'll repeat that either we have tools of increasing sophistication that are under our authority, or else an imagined independent artificial entity would merely have to find its place among our 7 billions. Anything beyond these is screenplay-worthy.





Singularitarianism

[...]

In July 2009, academics and technical experts attended a conference organized by the Association for the Advancement of Artificial Intelligence to discuss the potential impact of robots and computers and the impact of the hypothetical possibility that they could become self-sufficient and able to make their own decisions.

[...]

They noted that self-awareness as depicted in science-fiction is probably unlikely

[...]

http://en.wikipedia.org/wiki/Singularitarianism





'Singularity' is a fantastical abstraction that lends itself well to fiction, particularly because of the anthropomorphization involved.

More realistically -- and don't quote me here (heh) -- is that various specialized expert systems could be combined together in more generalized ways to *simulate* the Wall-E-like robot that we all want to hug and bond with so badly (heh) -- (what's the opposite of xenophobia?). Okay, so given the right kind of Avatar-like front-end they could wind up being about as addictive as those "virtual pets" that are around now, especially for the younger set, but deep down I think the *knowing* that it's artificial will be the ever-present pane-shatterer. And if they simulate attitude with us we'll tell them to shut the fuck up because they're just shiny versions of Wikipedia.

(Human 'intelligence' is an abstraction -- human *intentionality*, not so much.)

Baseball
16th January 2011, 02:56
That is the missing link: the creation of a central planning superstructure to coordinate production across multiple sectors and enterprises. Unfortunately, there is no capitalist equivalent (other than the market) and constructing such an institution is a task that will have to wait until post-revolution


So the answer remains "no" - it's not possible. And the reason remains the same as it has been now for over a century: socialists just do not have an answer on how to replace capitalism.

ComradeMan
16th January 2011, 11:45
So the answer remains "no" its not possible. And the reason remains the same as it has been now for over a century- socialists just do not have an answer on how to replace capitalism.

Hold on a second... the OP was about a centrally planned economy by computer not on the whole issue of socialism vs capitalism.

In answer to the OP, I don't think a centrally planned economy run by computer is possible and/or desirable.

Secondly, since when do all socialists support the idea of centrally planned economies? Some do - but not all. I personally think centrally planned economies are doomed to become state capitalism.

The problem is with economism.

ckaihatsu
16th January 2011, 11:45
That is the missing link: the creation of a central planning superstructure to coordinate production across multiple sectors and enterprises. Unfortunately, there is no capitalist equivalent (other than the market) and constructing such an institution is a task that will have to wait until post-revolution





So the answer remains "no" - it's not possible. And the reason remains the same as it has been now for over a century: socialists just do not have an answer on how to replace capitalism.


This is an unsound accusation -- that socialists are somehow "holding up" social progress because there's no "blueprint" for how revolutionary workers during a revolution should set things up for *after* the revolution.

This is no different than blaming engineers for potholes in the road -- more to the point is what is the *existing* government and larger population doing to overhaul the way things are administered so that stopgap measures like pothole-filling aren't relied on as the normal practice for maintaining transportation -- ? -- !

Likewise, we *know* that the market system doesn't work for running the economy -- the definitive proof was in 2008 when the financial elites had to go hat-in-hand to the politicians anyway, to receive public money to fix the balance sheets.

As socialists we say cut out the "middleman" and have public funds (the result of actual labor done by the working class) be distributed according to where human needs require them the most. Obviously capitalists take too much of a "cut", through profit, to allow these collective funds to be distributed equitably.

For *any* political person this *general* revolution of practice over how things are *currently* done is more than enough of a plan -- if not yet a fully-worked-out institution -- to bring people together in support of such a revolution of the working class. And, just as we wouldn't blame a highway engineer for potholes, we also shouldn't blame the biologist for not-curing cancer -- it takes a worldwide will and effort to re-focus the direction of society towards bringing about a more-productive framework for the whole world, away from capitalism.

*However* -- that said, I happened to take a particular interest in the "blueprint" aspect of a potential socialist society -- just how detailed a proposal *can* we, as revolutionaries, present to the workers of the world for consideration? A recent post spurred me to do some editing on past material I've developed here at RevLeft:


Can someone give a concise description of the communist politic and economic system?

http://www.revleft.com/vb/showpost.php?p=1988428&postcount=11

Robert
16th January 2011, 15:58
Likewise, we *know* that the market system doesn't work for running the economy -- the definitive proof was in 2008 when the financial elites had to go hat-in-hand to the politicians anyway, to receive public money to fix the balance sheets.Well ... we won't get to the bottom of this question based on what happened in 2008, because in the first place, an argument can be made that the subprime mortgage crisis was caused in part by governmental pressures to reduce lending standards, this so as to increase home ownership across all socio economic levels. We'll never know what would have happened without those wrong-headed initiatives.

Moreover, there are very, very few pure free marketeers anymore, so if you are arguing that the market system standing alone -- with no central banks and no currency controllers and no laws other than supply and demand -- can't run the economy well, no one argues the contrary.

And don't cite the Miseans. Please!

ckaihatsu
16th January 2011, 18:09
Well ... we won't get to the bottom of this question based on what happened in 2008, because in the first place, an argument can be made that the subprime mortgage crisis was caused in part by governmental pressures to reduce lending standards, this so as to increase home ownership across all socio economic levels.




We'll never know what would have happened without those wrong-headed initiatives.


An argument can also be made that, instead of scapegoating the capitalists' lapdog -- government -- we should look to see *why* otherwise-rational investors would start putting up the money for shitty-ass, long-shot speculative investments in the subprime mortgage sector.... Weren't they aware that these investments were roughly on par with *junk bonds* -- ???!

Just doing the math -- forensic reasoning, if you will -- we can realize that there must have been a *massive* overhang of cheap capital, along with the regular, dependable coddling (unregulated underwriting of risk) of capital from government. And, sure enough, history proves this reasoning to be correct -- massive bailouts using public funds resulted when such long-shot, risky investments predictably went to shit.

Whatever the political "storyline" for making this the respectable norm doesn't concern us, nor does it matter after the shit hits the fan -- it's public money that was used to keep the charade going.

Robert
16th January 2011, 20:53
An argument can also be made that, instead of scapegoating the capitalists' lapdog -- government -- we should look to see *why* otherwise-rational investors would start putting up the money for shitty-ass, long-shot speculative investments in the subprime mortgage sector.... Weren't they aware that these investments were roughly on par with *junk bonds* -- ???!A "junk bond" is not called "junk" because it is worthless or even "certain to fail." It's "junk" because it's high risk and has a low rating. Any start up company issuing debt (bonds) is issuing "junk," which is just a street term by the way.

But to your question: rational investors buy them because they are risk takers and the bonds have a high rate of return, assuming the company doesn't crater. http://en.wikipedia.org/wiki/High-yield_debt
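To put rough numbers on that trade-off - every figure below is hypothetical, chosen purely for illustration - a high coupon can leave the expected return positive even with a real chance of default:

```python
# Back-of-envelope expected one-year return on a hypothetical "junk" bond:
# 12% coupon, assumed 10% chance of default with 40% recovery of face value.
face = 1000.0
payoff_if_ok = face * 1.12          # principal plus coupon, paid in full
payoff_if_default = face * 0.40     # partial recovery after default
p_default = 0.10

expected_payoff = (1 - p_default) * payoff_if_ok + p_default * payoff_if_default
expected_return = expected_payoff / face - 1
print(f"{expected_return:.1%}")  # prints 4.8% - well below the promised 12%
```

The gap between the promised 12% and the expected 4.8% is the price of the default risk; crank the assumed default probability up and the expected return goes negative, which is roughly what happened to holders of the worst subprime paper.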

Second, not everyone who invested in CDO's (backed by the subprime mortgages) got burned. Some sold out before the crash, and some were investing in packages that were only partially comprised of bad debt.

And some subprime borrowers never defaulted at all; they continue to live in their overpriced and oversized houses that they don't need. And they continue to pay their mortgages if they have jobs.

But not to dodge your question, some investors got burned, some were greedy, some were conned, some were too lazy or did not have time to read the prospectuses, and some were passive investors just putting money in a retirement plan and they had no idea whether the fund manager was buying CDO's junk bonds, blue chip stocks, a mixture, or anything else. A mess, ain't it?

Final point: your complaint that public money was used to "keep the charade going," I'd quibble a bit and say public money was used to clean up the mess, not "keep the charade going."

But subprime investing took on a sick life of its own that government, I agree, failed to stop.

Now take a look at this and I'll shut up: it's a graph showing the growth of a mutual fund I'm familiar with that is heavily -- but not exclusively -- invested in "Fannie Mae" 30 year mortgages. Fannie Mae itself is in conservatorship RIGHT NOW because they got killed in the crisis. But not every mortgage they bought from banks was subprime; some were "prime"; those mortgages are performing very well, and the frigging fund is UP OVER 30% since the bottom of 2009. That's ordinary people buying in since 2009. These are your "otherwise rational investors." If they bought in in '08 and hadn't sold in the trough, they'd be doing fine. Nuts, ain't it?:lol:

http://cxa.marketwatch.com/Fidelity/Charts/GrowthOf10KFundCompare.aspx?fundid=&ticker=FBALX&period=10yr&provider=3&initial=10000&freq=P1M

ComradeMan
16th January 2011, 20:56
Robert is right about junk bonds.

Robert
16th January 2011, 20:58
Robert is right about junk bonds. Yeah, they're great!

Uh, I got a few for sale. Wanna buy some? :cool:

RGacky3
17th January 2011, 08:19
Well ... we won't get to the bottom of this question based on what happened in 2008, because in the first place, an argument can be made that the subprime mortgage crisis was caused in part by governmental pressures to reduce lending standards, this so as to increase home ownership across all socio economic levels. We'll never know what would have happened without those wrong-headed initiatives.


Actually we DO know what happened; there have been tons and tons of papers written about it, tons of examinations into it. The only people that believe that argument about governmental pressures are right-wing nut jobs - yeah, the poor banks were FORCED by the government to make billions in hedging bad mortgages.

We do know what would have happened, because those initiatives were barely part of the picture.

Keep in mind Fannie and Freddie were privatized by Clinton.


An argument can also be made that, instead of scapegoating the capitalists' lapdog -- government -- we should look to see *why* otherwise-rational investors would start putting up the money for shitty-ass, long-shot speculative investments in the subprime mortgage sector.... Weren't they aware that these investments were roughly on par with *junk bonds* -- ???!


Because they could package those junk bonds up, through some trickery get them rated relatively high, and sell them off to people who, using good judgement, bought them since they have a relatively high rating. Most of these banks made good money that way; some of them, like Lehman Brothers, started buying their own mortgages (in order to make sales bonuses, which IS logical).

Whether these investments fail or not doesn't matter; they're making bank no matter what, they're getting their bonuses.


But not to dodge your question, some investors got burned, some were greedy, some were conned, some were too lazy or did not have time to read the prospectuses, and some were passive investors just putting money in a retirement plan and they had no idea whether the fund manager was buying CDO's junk bonds, blue chip stocks, a mixture, or anything else. A mess, ain't it?


Ultimately you're putting it all down to human fallacy, which is ridiculous. Overall, people acted rationally when it came down to profits; the people that got burned were not the greedy people, it's the ones that invested in what looked to be solid investments.

You can't put it down to human fallacy - it's systemic.


Final point: your complaint that public money was used to "keep the charade going," I'd quibble a bit and say public money was used to clean up the mess, not "keep the charade going."


Has the mess been cleaned up? Has the charade stopped? Is there meaningful regulation happening? There's gonna be another crash.


Now take a look at this and I'll shut up: it's a graph showing the growth of a mutual fund I'm familiar with that is heavily -- but not exclusively -- invested in "Fannie Mae" 30 year mortgages. Fannie Mae itself is in conservatorship RIGHT NOW because they got killed in the crisis. But not every mortgage they bought from banks was subprime; some were "prime"; those mortgages are performing very well, and the frigging fund is UP OVER 30% since the bottom of 2009. That's ordinary people buying in since 2009. These are your "otherwise rational investors." If they bought in in '08 and hadn't sold in the trough, they'd be doing fine.

That's a WAY over-generalization. Keep in mind Fannie is state-backed; it's private, but state-backed (like a regular bank), so you can't really gauge the whole market on it.

But sure, when an earthquake happens not every building collapses.

ComradeMan
17th January 2011, 09:54
Yeah, they're great!

Uh, I got a few for sale. Wanna buy some? :cool:

Err.... no thank you if it's all the same.
;)

Baseball
17th January 2011, 12:51
Hold on a second... the OP was about a centrally planned economy by computer not on the whole issue of socialism vs capitalism.

Correct. However, the objection raised by that fellow was that simply plugging in a computer does nothing - it needs a system by which to plan. The fellow then lamented the lack of one by the socialists.
Hence, my response.



Secondly, since when do all socialists support the idea of centrally planned economies?

The rational ones do.

ComradeMan
17th January 2011, 12:55
The rational ones do.

Why? Explain... centrally planned economies have been quite unsuccessful in many cases and when successful lead to state capitalism- or worse, the state as a quasi-fascistic corporation.


Economism, economism - reducing the entire gamut of human existence to fucking economics all the time; you work to live, you don't live to work.

ComradeOm
17th January 2011, 13:02
socialists just do not have an answer on how to replace capitalism.All because I personally don't happen to have a detailed blueprint for post-revolution economic structures? Eh... no

Baseball
17th January 2011, 13:23
Why? Explain... centrally planned economies have been quite unsuccessful in many cases

Correct. But it is the only rational way for socialism to be conceived.


and when successful lead to state capitalism-

No. It has led to the logical application of socialism




Economism, economism - reducing the entire gamut of human existence to fucking economics all the time; you work to live, you don't live to work.

And...

Baseball
17th January 2011, 13:37
For *any* political person this *general* revolution of practice over how things are *currently* done is more than enough of a plan -- if not yet a fully-worked-out institution -- to bring people together in support of such a revolution of the working class. And, just as we wouldn't blame a highway engineer for potholes, we also shouldn't blame the biologist for not-curing cancer -- it takes a worldwide will and effort to re-focus the direction of society towards bringing about a more-productive framework for the whole world, away from capitalism.


The problem though is that criticizing capitalism is in no way a "re-focus" on creating a "more-productive framework for the whole of the world, away from capitalism." It is just a criticism of capitalism.
Despite his or her subsequent denials, 'ComradeOm' was spot on in his or her objections to a computer controlling a centrally planned economy - socialists have no bloody clue how to create a framework for a "more-productive" world. None anyhow that stand the scrutiny of other socialists or defenders of capitalism. Can't program a computer to run an economy if the programmers don't know what to program.


*However* -- that said, I happened to take a particular interest in the "blueprint" aspect of a potential socialist society -- just how detailed a proposal *can* we, as revolutionaries, present to the workers of the world for consideration?

You can't- this is true. So socialists historically have started small, and simply proposed blueprints to the local workers. And this spiraled into the thread on Opposing ideologies regarding socialism's historic support for nationalism and whether this is a positive step or not.

ckaihatsu
17th January 2011, 18:18
Can't program a computer to run an economy if the programmers don't know what to program.


*However* -- that said, I happened to take a particular interest in the "blueprint" aspect of a potential socialist society -- just how detailed a proposal *can* we, as revolutionaries, present to the workers of the world for consideration?


You can't- this is true. So socialists historically have started small, and simply proposed blueprints to the local workers. And this spiraled into the thread on Opposing ideologies regarding socialism's historic support for nationalism and whether this is a positive step or not.


Here -- the following text is at my User Profile, in the Visitor Messages section. It's a summation of the fuller explanation, which is at this post:


Can someone give a concise description of the communist politic and economic system?

http://www.revleft.com/vb/showpost.php?p=1988428&postcount=11





[I'm] of the position that a post-capitalist system of abstracted material valuations -- if any -- should *not* represent / be transferable for actual material items. Instead, with all goods and services, assets and resources being *collectivized*, the material domain would be basically freely available, like nature itself, though mediated through a collective-political process.

What's always at issue is human *labor* -- *that's* what I think should be the 'independent variable' to be qualified and quantified as well as possible, to serve as the determining source of all other political and economic activity in a post-capitalist social environment. In my conception (accessible as a model at my blog entry) self-selected actions of freely given liberated labor would entitle the laborer to, in turn, authorize the same from others, going forward, in a like proportionate quantity.

Since all of the material proceeds (goods and services) from such liberated labor effort would already have been pre-planned by the larger collective-political process, the output of all liberated labor would always be *collectivized* and *not* under the control of any individual liberated laborer, or grouping of liberated laborers. Therefore there would be no need for the abstract valuation of material items (goods and services) whatsoever -- only the co-administration of them as collective assets and resources according to their basic physical properties.

http://www.revleft.com/vb/showpost.php?p=1983762&postcount=11
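The entitlement mechanism described above -- where freely given labor lets the laborer authorize a like proportionate quantity of labor from others, with no pricing of goods at all -- could be sketched very roughly as follows. This is a hypothetical illustration only; the `LaborLedger` class, its method names, and its rules are my own assumptions, not part of the model linked above:

```python
# Hypothetical sketch of the "liberated labor" accounting described above.
# All names and rules here are illustrative assumptions.

class LaborLedger:
    """Tracks hours of freely given labor per person.

    Performing labor credits the laborer; that same balance lets them
    authorize a like quantity of labor from others. Goods themselves
    are never priced or owned -- only labor hours are accounted for.
    """

    def __init__(self):
        self.balances = {}  # person -> hours available to authorize

    def record_labor(self, worker, hours):
        """Self-selected labor entitles the worker to authorize the same amount."""
        self.balances[worker] = self.balances.get(worker, 0.0) + hours

    def authorize(self, requester, hours):
        """Spend one's balance to authorize an equal quantity of others' labor."""
        available = self.balances.get(requester, 0.0)
        if hours > available:
            raise ValueError("cannot authorize more labor than one has given")
        self.balances[requester] = available - hours
        return hours


ledger = LaborLedger()
ledger.record_labor("alice", 8.0)   # Alice gives a day of liberated labor...
ledger.authorize("alice", 3.0)      # ...and may authorize 3 hours from others
```

Note that nothing in the sketch values any material item; the only quantity tracked is labor, which matches the claim that goods would be co-administered "according to their basic physical properties" rather than priced.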

Psy
17th January 2011, 18:34
Exactly: someone has to enter that parameter. If a situation arises for which the computer does not have a programmed response then it cannot cope. You will never adequately foresee or factor in the countless variables that can arise from day-to-day. This is true even at factory level, as I can testify to from experience

Right, which is why there tends to be a fail-safe error response in such automated systems. For example, with computerized signalling on railways, if the dispatch computer is confused it will throw up red signals for the blocks in question, then send an alarm to the human operator: the dispatch computer knows the trains will eventually occupy the same track if they continue, but doesn't know how to get them to pass each other in that particular case.

In planning, it would mean the planning computers would fail over to what we consider a safe state; humans would straighten it out later, then update the software to try to avoid such failures.
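The fail-safe pattern above can be sketched in a few lines: when the planning computer hits a case its programmed rules don't cover, it falls back to a known-safe state and alarms a human operator, rather than guessing. This is an illustrative sketch only; the names (`plan_step`, `SAFE_STATE`, the rule format) are my own assumptions:

```python
# Minimal sketch of the fail-safe pattern described above: an unhandled
# case triggers a known-safe default plus a human alarm. All names are
# illustrative assumptions.

SAFE_STATE = "halt_new_orders"   # assumed safe default, like an all-red signal

def plan_step(order, rules, alert_human):
    """Apply the first programmed rule that matches; otherwise fail safe."""
    for condition, action in rules:
        if condition(order):
            return action(order)
    # No rule covers this situation: do the planning equivalent of
    # throwing up red signals, then hand the case to a person.
    alert_human(f"unhandled case: {order!r}")
    return SAFE_STATE


alerts = []
rules = [
    (lambda o: o["qty"] <= 100, lambda o: "approve"),  # routine order sizes
]

assert plan_step({"item": "widgets", "qty": 50}, rules, alerts.append) == "approve"
# Ten years' worth of widgets: no rule matches, so we fail safe and alarm.
assert plan_step({"item": "widgets", "qty": 10**6}, rules, alerts.append) == SAFE_STATE
```

The design choice mirrors the railway example: the computer never invents a response to a situation it wasn't programmed for; it only ever stops safely and escalates.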

ComradeOm
17th January 2011, 18:35
Despite his or her subsequent denials, 'ComradeOm' was spot on in his or her objections to a computer controlling a centrally planned economy- socialists have no bloody clue how to create a framework for a "more-productive" world

I suggest that you read my posts again, Baseball. My assertion is not that a planned economy is impossible - indeed I argue that the principles, tools and technology are there - but that computers are not up to the task of directing any post-revolution economy. This is a problem with the nature of AI, not with economic planning


The problem though is that criticizing capitalism is in no way a "re-focus" on creating a "more-productive framework for the whole of the world, away from capitalism." It is just a criticism of capitalism

Unless of course one was to take this critique and use it as a starting point for their own conception of an improved world. Sort of what socialists have been doing for, oh, about a century and a half now :rolleyes:

Dimentio
17th January 2011, 18:51
Would it be possible and what kind of result it would have?

discussion open!

I don't like the idea so very much, which is one of three disagreements I have with the TVP. The problem is that such a centralised system would be vulnerable to hostile activities and to takeover, due to the extremely limited access to it (Fresco estimates 10,000 of the world population).

I prefer Energy Accounting, which combines the best traits of the market system with the best traits of a planned economy.

danyboy27
17th January 2011, 18:55
I don't like the idea so very much, which is one of three disagreements I have with the TVP. The problem is that such a centralised system would be vulnerable to hostile activities and to takeover, due to the extremely limited access to it (Fresco estimates 10,000 of the world population).

I prefer Energy Accounting, which combines the best traits of the market system with the best traits of a planned economy.

what is energy accounting?

ÑóẊîöʼn
17th January 2011, 21:08
what is energy accounting?

Brief introduction HERE (http://www.eoslife.eu/index.php?option=com_content&task=view&id=84&Itemid=103).