
Thomas Kuhn: model philosopher and dynamat-ician?



Die Neue Zeit
27th March 2009, 00:26
A few years back I had the distinct privilege of taking a brief course on the so-called "philosophy of science," in which I came across a few words regarding Thomas Kuhn and paradigm shifts. Yesterday, I read an article by Steve Keen (the post-Keynesian author of Debunking Economics) that mentions paradigm shifts in economics:

http://www.rgemonitor.com/us-monitor/256145/neoclassical_economics_mad_bad_and_dangerous_to_know


The pedagogic pressure from students and the wider community has to be matched by the accelerated development of alternatives to neoclassical economics. Though we know much more today about the innate flaws in neoclassical thought than was known at the time of the Great Depression (Keen 2001), the development of a fully-fledged alternative to it is still a long way off. There are multiple alternative schools of thought extant—from Post Keynesian to Evolutionary and Behavioural Economics, and Econophysics—but these are not developed enough to provide a fully fledged alternative to neoclassical economics.

This should not dissuade us from dispensing completely with the neoclassical approach. For some substantial period, and especially while the actual economy remains in turmoil, we have to accept a period of turmoil in the teaching of and research into economics. Hanging on to parts of a failed paradigm simply because it has components that other schools lack would be a tragic mistake, because it is from precisely such relics that a neoclassical vision could once again become dominant when—or rather if—the market economy emerges from this crisis.

...

The economic theory that should eventually emerge from the rejection of neoclassical economics and the basic adoption of dynamic methods will come much closer than neoclassical economics could ever do to meeting Marshall’s dictum that “The Mecca of the economist lies in economic biology rather than in economic dynamics” (Marshall 1920: xiv). As Veblen correctly surmised over a century ago (Veblen 1898), the failure of economics to become an evolutionary science is the product of the optimising framework of the underlying paradigm, which is inherently antithetical to the process of evolutionary change. This reason, above all others, is why the neoclassical mantra that the economy must be perceived as the outcome of the decisions of utility maximising individuals must be rejected.



Was Thomas Kuhn a model philosopher, and could he be considered one who emphasized dynamics? Also, was he a Materialist or an Idealist?

Rosa Lichtenstein
27th March 2009, 00:54
Kuhn wasn't really a philosopher as such, but a physicist who turned to the history of science.

He was most certainly not an idealist, but then he wasn't a materialist either.

He was a sort of Wittgensteinian, which means that he, like me, was more of a 'Nothing-at-all-ist' (not to be confused with a nihilist) when it came to philosophical theories.

Better articles on Kuhn:

http://www.uea.ac.uk/~j339/Kuhnnatkinds.htm

http://www.uea.ac.uk/~j339/howtounderstand.htm

http://www.uea.ac.uk/~j339/KuhnWittgenstein.htm

http://www.uea.ac.uk/~j339/sharrockandreadagainstfuller.pdf

And this is by far the best book on Kuhn:

http://www.polity.co.uk/book.asp?ref=9780745619286

If you live in the UK, you can obtain a copy for less than £14:

http://www.amazon.co.uk/gp/offer-listing/0745619290/ref=sr_1_olp_5?ie=UTF8&s=books&qid=1238111498&sr=1-5

and if you live in the US, for less than $16:

http://www.amazon.com/gp/offer-listing/0745619290/ref=dp_olp_new?ie=UTF8&qid=1238111596&sr=1-1&condition=new

JimFar
27th March 2009, 02:22
George Reisch, in his How the Cold War Transformed Philosophy of Science, drew a comparison between the ideas and careers of the logical empiricist philosopher (and physicist) Philipp Frank and Thomas Kuhn. He noted that both men taught at Harvard, and that both had worked under, and been close to, Harvard president James B. Conant.

Reisch discerned a number of underlying similarities between Frank's ideas and Kuhn's. Both men were critical of overly formalistic approaches to the philosophy of science, and both emphasized the history of science as a basis for assessing the adequacy of philosophical accounts. Both men advanced similar accounts of how science develops and changes over time, with an emphasis on how scientific change occurs when accepted beliefs come under strain from the accumulation of contradictory evidence, so that eventually these beliefs are overthrown and replaced by new theories.

Reisch found Kuhn's notion of scientific paradigms to have been anticipated by Frank's notion of analogy as guiding scientific thinking. Both men emphasized the discontinuous nature of scientific change.

Where Reisch saw the two men as differing in their accounts of science is that Kuhn tended to see scientific change as driven by professional, sociological, and psychological dynamics internal to scientific communities. Frank certainly acknowledged the importance of these dynamics, but he also looked to external social, economic, and political forces as driving scientific change. More importantly for Reisch, the two men drew different conclusions from their analyses of scientific change regarding its implications for professionalization and specialization in science studies. Frank argued that it is neither possible nor desirable for us to draw sharp distinctions between science and philosophy. For him, during periods of revolutionary change, the distinctions between science and philosophy tend to disappear. Kuhn, on the other hand, argued that it was desirable for us to draw and maintain sharp distinctions between science and philosophy. Indeed, in his opinion the future progress of science depended on this.

Reisch saw Kuhn's views as fitting the academic climate of his time which saw an increased role by the Federal government in funding scientific research, which in turn promoted an ever increasing professionalization and specialization.

Invariance
16th April 2009, 15:06
Steve Keen follows more of Lakatos' model, which, whilst influenced by Kuhn, criticised what Lakatos saw as its reduction of science to 'mob psychology.' It was similar to Kuhn's view insofar as the research program includes a hard core of principles which is taken beyond criticism (a bit like how normal science doesn't question its basic axioms until in crisis). However, Lakatos thought that research programs could be evaluated in an objective way by comparing them over time, whereas Kuhn essentially argued for the incommensurability of opposing paradigms. A progressive research program modifies its 'protective belt' of claims to allow for new predictions. A degenerative research program, on the other hand, merely reacts to anomalies, and only tries to build up a mounting weight of 'evidence' to keep itself from attack. Hence, Keen seems to employ this model over Kuhn's.

Unfortunately, we can only know if one research program is better than another after the fact. Further, Lakatos doesn't think that a research program should be abandoned just because it is degenerative - it may yet become progressive. Lakatos also has some 'odd' views on making the history of science seem as rational as it can be - to him, that is the point of the philosopher. Lakatos takes it as a given that scientists reject Newton in favor of Einstein; the philosopher's job is to explain 'why.'

In that sense, it is a very big departure from Kuhn - Kuhn wanted to show science 'warts and all' and emphasise its dogmatism and that it is not necessarily a 'cumulative' discipline, even if scientists, philosophers and historians make it seem that way.

I think Kuhn has a very dynamic, dare I say dialectical, approach - a failing paradigm which is unable to explain anomalies is replaced with another paradigm. All the while, scientists and philosophers are trying to explain the process as rational - a bit like how bourgeois social scientists explain history and revolutions as a mere sequence of logical events.

Here is where I question whether economics is a science - it simply hasn't progressed - it might even have gone backwards. This, in part, is owed to its being completely dominated by bourgeois ideology. Sraffa, Keynes, Robinson, et al. had little or no effect on modern economics: the anomalies remain, yet the economists continue as if there are none. (I still think economics is a science, because I accept the Kuhnian view of science, but that doesn't mean I have to accept science as true! In fact, by one measure of counting, the vast majority of past science has been shown to be false, and what is considered science today will one day probably be considered false, or at least scientifically lacking.)

And from memory, Kuhn, like Popper, rejected Marxism as a science - as did Lakatos if I am not mistaken.

(Edit: there are also other reasons why it hasn't progressed. One of them is dumping political economy. Another is the poor application of mathematics.)

Invariance
16th April 2009, 15:29
Incidentally, the chapter on the philosophy of science in Debunking Economics is the only chapter which is fully available free of charge. Since I cannot link it, I will post the chapter in its entirety; Keen explains it better than I did:

Economics would have us believe that it is a science, fully able to stand tall beside the more conventional physical sciences and mathematics.

After the preceding chapters, you may be inclined to doubt that belief. Surely, whatever ‘science’ is, one might hope that it is undertaken with more impartiality, regard for the facts and logical consistency than economics has displayed.

However, the critiques of conventional economics which form the substance of this book were devised by critical economists (and sometimes, inadvertently, by conventional economists themselves) and some of these critiques have been acknowledged as valid by some conventional economists. There is also a small but robust minority working on other approaches to economic analysis, as you’ll find in Chapter 14. There are thus some systematic and logical aspects to what economists in general do, which could qualify as scientific behaviour.

The position I favour is that economics is a science, but a rather pathological one. I am particularly critical of what has occurred since 1950, but I still hold out hope of better behaviour in the future. But before better behaviour can take widespread root, economics will have to wean itself from a methodological myth. This is the proposition, first put by Milton Friedman, that a theory cannot be judged by its assumptions, but only by the accuracy of its predictions.

Leaving aside the question of whether economics has ever accurately predicted anything, the argument that “the more significant the theory, the more unrealistic [are] the assumptions” is simply bad philosophy.

The kernel

Have you heard the joke about the chemist, physicist and economist who are shipwrecked on a desert isle, with a huge supply of canned baked beans as their only food? The chemist says that he can start a fire using the neighbouring palm trees, and calculate the temperature at which a can will explode. The physicist says that she can work out the trajectory of each of the baked beans, so that they can be collected and eaten. The economist says “Hang on guys, you’re doing it the hard way. Let’s assume we have a can opener.”

That assumption is not too different to the type of assumption that economists routinely make, and yet they defend themselves on the apparently convincing grounds that the assumptions don’t matter–a theory can only be evaluated on the basis of the accuracy of its predictions.

This methodological defence is invalid, because it confuses ‘negligibility’ assumptions, which argue that some minor details can be ignored, with ‘domain’ assumptions, that determine the range of applicability of a given theory. Assumptions also do matter to economists, in that they genuinely believe that their theories describe reality, and they reject economic argument that is not based upon their preferred set of assumptions.

The roadmap

In this chapter I outline the paper in which Friedman introduced the notion that ‘assumptions don’t matter’. Following Musgrave, I classify assumptions under three headings: negligibility assumptions, domain assumptions, and heuristic assumptions. Friedman’s paradoxical statement that “the more significant the theory, the more unrealistic the assumptions” is only partially true of the first class of assumptions, and manifestly untrue of the latter two classes. Finally, I detail the many ways in which assumptions do matter to economists.

A paradoxical proposition

There would be few if any academic economists who have not had a lecture disturbed by some recalcitrant student, interjecting that the assumptions of the model being discussed are unrealistic. Fortunately, there is a simple weapon at hand: an appeal to the authority of Milton Friedman that a theory can’t be judged by its assumptions, but only by how well its predictions accord with reality.
In fact, Friedman’s case went further: he argued that unrealistic assumptions were the hallmark of good theory. In what Paul Samuelson later dubbed ‘the F-twist’, Friedman argued that

Truly important and significant hypotheses will be found to have “assumptions” that are wildly inaccurate descriptive representations of reality, and, in general, the more significant the theory, the more unrealistic the assumptions (in this sense). The reason is simple. A hypothesis is important if it “explains” much by little, that is, if it abstracts the common and crucial elements from the mass of complex and detailed circumstances surrounding the phenomena to be explained and permits valid predictions on the basis of them alone. To be important, therefore, a hypothesis must be descriptively false in its assumptions; it takes account of, and accounts for, none of the many other attendant circumstances, since its very success shows them to be irrelevant for the phenomena to be explained.

To put this point less paradoxically, the relevant question to ask about the “assumptions” of a theory is not whether they are descriptively “realistic”, for they never are, but whether they are sufficiently good approximations for the purpose in hand. And this question can be answered only by seeing whether the theory works, which means whether it yields sufficiently accurate predictions. (Friedman 1953)

The F-twist

The proposition that a theory is not regarded as a description of reality, but merely a way of predicting the future, is known as ‘instrumentalism’. This position is superficially appealing, and sufficiently persuasive to quieten the average interjector. It appears scientific, in that most scientists would admit that their theories can never exactly describe reality. It also implies a healthy dose of theoretical agnosticism, in that the economist is purportedly detached from her theory, and is only really interested in ‘the facts’.

However despite its superficial appeal, instrumentalism suffers from several flaws which were clearly set out by the philosopher Alan Musgrave in 1981. Musgrave argued that there were three classes of assumptions, and that Friedman’s dictum was only partially true in the least important of them.

Negligibility assumptions

Negligibility assumptions state that some aspect of reality has little or no effect on the phenomenon under investigation. Friedman’s paper made heavy use of the example of a ball being dropped near the earth, which fell very nearly ‘as if’ it had been dropped in a vacuum. In this instance it was valid to assume that the ball was falling in a vacuum, since air resistance has negligible impact on the ball’s flight. However the same was obviously not true of a feather dropped under the same circumstances.
Friedman argued that though it was unrealistic to say ‘assume the ball was dropped in a vacuum’, the theory of gravity had great explanatory power: it explained much (the acceleration of bodies in free fall close to the earth) with very little (a gravitational constant and simple calculus). This theory should only be dropped in favour of another if a rival is at least as accurate and equally acceptable on other grounds, or “when there exists a theory that is known to yield better predictions but only at a greater cost” (Friedman 1953).
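Friedman's falling-ball example can be checked numerically. The sketch below is mine, not Friedman's or Keen's, and its parameter values (a 2 m drop, a 1 kg ball, a modest quadratic drag coefficient) are assumptions chosen purely for illustration:

```python
# Illustrative sketch (not from the chapter): compare the fall time of a dense
# ball over a short drop with and without air resistance, to show why 'assume
# a vacuum' is a negligibility assumption here. Parameter values are assumed.

def fall_time(height, mass, drag_coeff=0.0, g=9.81, dt=1e-4):
    """Time (seconds) to fall `height` metres, with quadratic drag."""
    y, v, t = 0.0, 0.0, 0.0
    while y < height:
        a = g - (drag_coeff / mass) * v * v  # drag opposes the motion
        v += a * dt
        y += v * dt
        t += dt
    return t

t_vacuum = fall_time(height=2.0, mass=1.0)                   # no drag
t_air = fall_time(height=2.0, mass=1.0, drag_coeff=0.01)     # modest drag
# The two times differ by well under two per cent: for a dense ball, ignoring
# air resistance is negligible. For a feather (tiny mass, large drag), the
# same assumption would fail badly.
```

For the dense ball the drag term barely perturbs the answer, which is exactly Musgrave's point below: the 'vacuum' assumption is really a true claim that air resistance is irrelevant for this phenomenon.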

Musgrave argued that many of Friedman’s musings were reasonable in this domain, but even here his ‘dialectical’ proposition that “the more significant the theory, the more unrealistic the assumptions” is overblown. In fact, it is possible to rephrase these ‘unrealistic’ statements as ‘realistic’ ones: for example, it is realistic to say that air resistance is negligible for dense bodies falling from rest over short distances. As Musgrave put it, these assumptions:

are not necessarily “descriptively false”, for they do not assert that present factors are absent but rather that they are “irrelevant for the phenomena to be explained”... Galileo’s assumption that air-resistance was negligible for the phenomena he investigated was a true statement about reality, and an important part of the explanation Galileo gave of those phenomena. (Musgrave 1981)

However negligibility assumptions are the minnows of the assumptions family. Far more important are domain assumptions, and it is these to which rightly troubled students often object.

Domain assumptions

A domain assumption specifies the conditions under which a particular theory will apply. If those conditions do not apply, then neither does the theory.

An economic example of this is the assumption that risk can be used as a proxy for uncertainty–an assumption that permeates the conventional theories of macroeconomics and finance, which we will investigate in Chapters 9 and 10.

Risk applies to situations in which the regularity of past events is a reliable guide to the course of future events. Gambling gives us many such examples: if a tossed coin is seen to land showing heads roughly half the time, then you can reliably bet that there will be a 50:50 chance of heads in the future. If anyone bet you that heads would in future come up only 40% of the time, it would be sensible to take the bet. A risky event will have a probability associated with it, and a variance of outcomes around those probabilities, which can be reliably estimated using the techniques of statistics.
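Keen's point that risk is quantifiable from past regularities can be shown with a short simulation. This sketch is mine, not part of the chapter; the fair coin and the number of tosses are assumptions for illustration:

```python
# Illustrative sketch (not from the chapter): a risky event's probability and
# the spread of its outcomes can be reliably estimated from past regularities.
import random

random.seed(1)
tosses = [random.random() < 0.5 for _ in range(10_000)]  # a fair coin

p_hat = sum(tosses) / len(tosses)   # estimated P(heads) from past tosses
var_hat = p_hat * (1 - p_hat)       # estimated variance of a single toss
# p_hat lands close to 0.5, so betting against someone who claims heads will
# come up only 40% of the time is a well-grounded, quantifiable risk.
# Uncertainty, by contrast, is precisely the case where no such estimate
# from past data exists.
```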

Uncertainty applies when the past provides no reliable guide to future events. Though the fact that we cannot predict the future is the essence of the human condition, the very nebulousness of uncertainty means that many people–and certainly the vast majority of economists–have difficulty grasping the concept. As a result, they act as if the quantifiable concept of risk can be safely substituted for unquantifiable uncertainty.

A somewhat intimate example might illustrate the fallacy of identifying uncertainty with risk. Imagine that you are very attracted to a particular individual, and that you know this person has gone out with 20% of those who have asked him or her out in the past. Does this mean that you have a 20% chance of being lucky if you ‘pop the question’?

Of course not. Each instance of attraction between two people is a unique event, and the past behaviour of the object of your desires provides no guide as to how your advances will be received. How he or she will react cannot be reduced to some statistical prediction based on past apparent regularities. From your perspective, their reaction is truly uncertain–and this uncertainty is at the root of much of the angst that romantic attraction generates.

A similar observation can be made about each new business investment. Even if similar investments have been made in the past, the economic environment of a new investment differs from those which have gone before. Past trends therefore cannot be confidently extrapolated to predict future performance–but this procedure is the essential assumption behind using statistics to calculate risk.

The assumption that risk can be used as a proxy for uncertainty when evaluating investments is therefore unrealistic. A theory that makes such an assumption is quite clearly not better than an alternative one which does not–quite the opposite in fact. This assumption says that the domain of relevance of the theory is a world in which the future is simply subject to chance.

Since there is no such world, the domain of applicability of theories which make such an unrealistic assumption is ‘nowhere’. Yet assumptions of this type abound in economic theory (especially so, it must be said, in the work of Milton Friedman).
Such an assumption should be made only if it fits into Musgrave’s third class, the heuristic assumption.

Heuristic assumptions

A heuristic assumption is one which is known to be false, but which is made as a first step towards a more general theory. Musgrave gives the example of Newton’s assumption that the solar system consisted only of the sun and the earth. This gave rise to the theory that planets would follow elliptical orbits (which is a reasonable medium-term guide to actual planetary orbits in our solar system).

The next major step came with Poincaré in 1899, who tried to develop a formula describing planetary motion in a system with more than one planet. His proof that there was no such formula–and that the actual orbits would interact in wildly unpredictable ways–ushered in what is now known as ‘chaos theory’ or ‘complexity theory’ (though it lay dormant for 68 years until modern computers allowed its accidental re-discovery).
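Poincaré's result concerned planetary orbits, but the hallmark he uncovered, sensitive dependence on initial conditions, can be sketched with a much simpler textbook system. The logistic map below is my substitute illustration, not anything from the chapter or from Poincaré's own equations:

```python
# Illustrative sketch (not from the chapter): sensitive dependence on initial
# conditions, the hallmark of chaos, shown with the logistic map
# x -> r*x*(1-x) in its chaotic regime (r = 4), in place of Poincaré's
# planetary equations.

def logistic_orbit(x0, r=4.0, steps=50):
    """Iterate the logistic map `steps` times from starting point x0."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-10)  # a minuscule perturbation of the start
# After 50 steps the two trajectories typically bear no resemblance to each
# other: tiny differences in the start are amplified until no formula can
# predict the long-run orbit from the initial conditions.
```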

The modern theory of planetary behaviour now recognises that the stable orbits of our solar system can only have evolved–over an enormous period of time–from far less stable orbits, which must have led to collisions between proto-planets. It is now accepted that the Moon, for example, was the product of a collision between another proto-planet and the early Earth.

Collisions are not possible in a single planet solar system–the kind of system that Newton assumed to derive his initial theory. Though that heuristic assumption was a major step in the development of the scientific mode of thinking about astronomy, dropping it led to a better theory, not a worse one.

When heuristic assumptions are made consciously by a theorist in the course of developing a theory, they are normally explicitly described as such. For instance, when developing the theory of relativity, Einstein at one point stated that the distance covered by a person walking from one side to the other of a moving train is equal to the sum of the distance covered by the train, and the width of the carriage. However, he continued that “We shall see later that this result … cannot be maintained; in other words, the law that we have just written down does not hold in reality. For the time being, however, we shall assume its correctness” (Einstein 1916). When Einstein dropped this heuristic assumption, the theory of relativity was the result.

The greater realism at the heart of Einstein’s theory transformed our understanding of reality, and dramatically expanded the physical and intellectual capabilities of our species. Yet if we accept Friedman’s methodology, then we would have to argue that Einstein’s theory was poorer than Newton’s because it was more realistic.
In general then, and contrary to Friedman, abandoning a factually false heuristic assumption will normally lead to a better theory–not a worse one.

Judging the assumptions

Theories can therefore be evaluated by their assumptions to some extent, if one has an intelligent taxonomy of assumptions. A theory may well draw power from ‘unrealistic’ assumptions if those assumptions assert, rightly, that some factors are unimportant in determining the phenomena under investigation. But it will be hobbled if those assumptions specify the domain of the theory, and real world phenomena are outside that domain.

These assumptions may be justified if they are simply heuristic devices used to simplify the process of deriving a more general theory–but only if that more general theory is in fact derived. Economists often imply, when they fob off some critical student, that the unrealistic assumptions in introductory economics courses are dropped in more advanced theory–which portrays these assumptions as heuristic tools. In fact, as preceding chapters have illustrated, the assumptions used in more advanced theory are often more unrealistic than those presented in introductory lectures.

Scientific realism vs instrumentalism

Musgrave also points out that most scientists reject an instrumental view of science in favour of ‘scientific realism’–the belief that scientific theories should not merely predict reality but should, in some sense, represent it.

Ironically, this is actually the belief that most economists have about economic theory. Friedman’s instrumentalism is little more than a smokescreen behind which to hide when one wishes to quell a budding class rebellion. It is often evident to the student objector that, though professing that the assumptions don’t matter, his teachers continue to use the same small class of assumptions over and over again: rational utility maximising individuals, profit maximising firms, and a plethora of ancillary assumptions built on these foundations.

These assumptions are used because economists believe that these assumptions do capture essential elements of reality, and regard any theory which does not use these building blocks as ‘unrealistic’. This belief is most clearly seen in the manner in which the ‘Bibles’ of economics, its academic journals, filter out papers that do not make this core set of assumptions.

Assumptions do matter–to economists

The proposition that assumptions don’t matter implies that economists would be quite willing to accept a theory which assumed irrational behaviour if the model generated results which accorded with observation. It also implies that the development of economic theory would be driven primarily by the desire to produce theories that provide a closer fit to observed data.

Both these implications are strongly at variance with reality.
As any non-orthodox economist knows, it is almost impossible to have an article accepted into one of the mainstream academic economic journals unless it has the full panoply of economic assumptions: rational behaviour (according to the economic definition of rational!), markets that are always in equilibrium, risk as an acceptable proxy for uncertainty, and so on. When it comes to safeguarding the channels of academic advancement, little else matters apart from preserving the set of assumptions that defines economic orthodoxy.

Similarly, the development of economic theory over time has been propelled by the desire to make every aspect of it conform to the preferred economic model. Macroeconomics, when it first began, bore little resemblance to microeconomics. Fifty years later, macroeconomics is effectively a branch of microeconomics. As I outline in Chapter 9, a major factor behind this tribal coup was the belief that, regardless of its predictive validity, macroeconomics was unsound because its assumptions did not accord with those of microeconomics. It was therefore extensively revised, especially during the 1970s and 1980s, so that macroeconomic theory was more consistent with microeconomic assumptions. Far from assumptions not mattering to economists, assumptions in fact drove the development of economic theory.

Assumptions and logic

Assumptions matter in a more profound sense because, as this book shows, assumptions can be logically incoherent. For example, as discussed in Chapter 4 and the extension to it on the web, the economic model of the firm is internally contradictory. Economists wish to prove that perfect competition is superior to monopoly, but to prove this result, they must show that the marginal product of numerous small firms equals the marginal product of one large firm. The only situation in which this is possible is when marginal product curves are horizontal–but with horizontal marginal product curves, the output of a perfectly competitive industry is indeterminate (because a horizontal marginal cost curve can never intersect a horizontal marginal revenue curve). The assumptions of the theory are thus logically inconsistent.

A theory that contains logically inconsistent assumptions will be a bad theory–and, as this book shows, economics is replete with logical inconsistencies.

This is a science?

The behaviour of economists hardly fits the stereotype of scientists as dispassionate seekers of truth. But their behaviour does fit modern, sociological theories of how scientists behave.

Briefly, these theories argue that each ‘science’ is as much a society as it is an intellectual discipline. A collection of scholars in a science will share a perspective on what defines their discipline, and what constitutes scientific behaviour. This shared mind set includes core beliefs, which cannot be challenged without threatening your membership of the group (and hence your status as a scientist), ancillary beliefs which are somewhat malleable, a set of analytic techniques, and as yet unsolved problems to which these techniques should be applied. The core beliefs are known as the ‘hard core’–since they cannot be altered without rejecting, in some crucial sense, the very foundations of the science. The ancillary beliefs are known as the ‘protective belt’, since their function is to protect the core beliefs from attack.

The scholars expect that their beliefs and techniques will be able to solve the outstanding problems, thus increasing the explanatory power of their science. If they fail, then the first response is to adjust the ancillary beliefs rather than the core propositions. Only when the problem proves both intractable and crucial is there any possibility that core beliefs will be abandoned, leading to the formation of a new school of thought–or the ascendancy of an existing rival school. While a school of thought is expanding the range of phenomena it can explain using its core beliefs–by experiments that confirm its predictions, or extensions of its theories to novel areas–then it is said to be a ‘progressive’ scientific research program which manifests a ‘positive heuristic’. If instead, experimental results contradict its predictions, and its theories are adjusted to rationalize these failures, then it is said to be ‘degenerative’ with a ‘negative heuristic’.

It is possible for more than one such collection of scholars to exist in a science at any one time, so that it makes sense to speak of schools of thought within a science. Each school of thought will compete with the others, emphasising their weaknesses and its own strengths.

Clearly this sociological description of a science fits the historical record of economics. At the beginning of the third millennium, there are at least five schools of thought. The neoclassical school is clearly dominant, but there are several other competing schools–in particular, the Post Keynesian, Austrian, and evolutionary schools of economics. Each is developing its own approach to explaining similar phenomena, and there is clearly a rivalry between the minority schools and neoclassical economics–with the other schools criticising neoclassical economics while it largely ignores its rivals.
However, it might be thought that this provides a fairly demeaning perspective on science itself. Surely this behaviour is aberrant, and true sciences are beyond this petty bickering? No, strange as it may seem, a similar picture can be painted even of the queen of sciences, physics.

Quantum uncertainty?

In order to comprehend some of the bizarre results of experimental particle physics, most physicists argue that matter is in some sense ‘probabilistic’, and that the observer fundamentally affects reality. If an observer tries to ‘tie down’ one aspect of a particle–say, its location–then some other aspect becomes fundamentally unknowable. Physicists say that an elementary particle is always in a ‘superposition’ of both states, and testing for one leads to the other state resolving itself in a completely random way. The act of observing a particle thus directly–but unpredictably–alters its state. This is not because of any statistical properties of large numbers of electrons, but because randomness is an inherent feature of fundamental particles.

Two crucial aspects of this ‘Copenhagen school’ interpretation of quantum reality are (a) that particles can be treated as ‘wave functions’ in what is known as the wave-particle duality, so that a fundamental particle can be completely represented by its wave function; and (b) that there are two sets of physical laws, one which applies when there is no observer (‘superposition’) and one which applies when there is an observer.

The most famous popular representation of what this means, when put in terms of everyday objects, is ‘Schrödinger’s Cat’. This is a thought experiment in which a box contains a cat, a radioactive element, and a vial of poison. If the radioactive element emits a particle, the vial opens and the cat dies. If it doesn’t, the cat lives.

What state is the cat in before an experimenter opens the lid to see whether it is alive or dead? In the Copenhagen school interpretation, the cat is in a superposition of being both alive and dead. The act of the observer opening the box resolves the cat into one or other state.

But this is not the only way to make sense of the experimental data. A rival interpretation, established by David Bohm, provides a completely deterministic interpretation, with none of the ‘quantum uncertainty’ of the Copenhagen school. It can explain the same experimental results as can the Copenhagen school–and some which it can’t explain–without resorting to the apparently metaphysical position that the observer somehow affects reality at the quantum level. In Bohm’s theory, Schrödinger’s cat is either alive and well if the radioactive element hasn’t emitted a particle, or dead if it has, independent of the human observer who eventually opens the box to check.
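The observational equivalence of the two interpretations can be caricatured in a few lines of code. This is a deliberately crude toy, not real quantum mechanics (in Bohm's actual theory the hidden variable is a particle position guided by the wave function, not a coin flip): the point is only that a theory which randomises at the moment of measurement and a theory which fixes the outcome deterministically in advance can generate identical observed statistics.

```python
import random

def copenhagen_measure(p_alive):
    # Copenhagen (toy version): the outcome is genuinely undetermined
    # until the box is opened; observation resolves it at random.
    return "alive" if random.random() < p_alive else "dead"

def bohm_prepare(p_alive):
    # Bohm (toy version): a 'hidden variable' fixes the outcome at
    # preparation time, before anyone looks.
    return "alive" if random.random() < p_alive else "dead"

def bohm_measure(hidden_state):
    return hidden_state  # deterministic: observation merely reveals it

# Over many trials the two stories yield identical statistics, which is
# why no straightforward experiment distinguishes them.
N = 100_000
cop = sum(copenhagen_measure(0.5) == "alive" for _ in range(N))
boh = sum(bohm_measure(bohm_prepare(0.5)) == "alive" for _ in range(N))
print(cop / N, boh / N)  # both close to 0.5
```

Where the interpretations differ is in their metaphysics (is the randomness real, or only apparent?), not in the frequencies any experiment records.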

How have physicists reacted to this co-existence of two rival explanations of reality? As the physicist David Albert sees it, in much the same way that economists have reacted to alternative schools of thought–by refusing to take them seriously. It is worth citing Albert at some length to show that, quite possibly, scientists in other disciplines are no different to economists when it comes to their reaction to intellectual challenges to accepted dogma:

Despite all the rather spectacular advantages of Bohm’s theory, an almost universal refusal even to consider it, and an almost universal allegiance to the standard formulation of quantum mechanics, has persisted in physics, astonishingly, throughout most of the past 40 years. Many researchers have perennially dismissed Bohm’s theory on the grounds that it granted a privileged mathematical role to particles. The complaint was that this assignment would ruin the symmetry between position and momentum ... as if ruining that symmetry amounted to a more serious affront to scientific reason than the radical undermining, in the Copenhagen formulation, of the very idea of an objective reality. Others dismissed Bohm’s theory because it made no empirical predictions (no obvious ones, that is) that differed from those of the standard interpretation–as if the fact that those two formulations had much in common on that score somehow transparently favoured one of them over the other. Still others cited “proofs” in the literature ... that no deterministic replacement for quantum mechanics of the kind that Bohm had already accomplished was even possible. (Albert 1994)

At this sociological level, therefore, economics appears to be very similar to the conventional sciences. However, even at this level, the current state of economics can be distinguished from, for example, the current state of astronomy.

A degenerate scientific research program

There was a time when the neoclassical school of economics was clearly progressive, while its main rival was clearly degenerate. When the neoclassical school coalesced in the 1870s in the works of Jevons, Menger and Walras, the preceding classical school was in crisis. The classical school always had a difficulty in explaining the relationship between what it called value, and prices; yet it insisted that value was in some way fundamental to the determination of price. This problem was accentuated by the work of the final member of the classical school, Karl Marx (the subject of Chapter 13).

At the same time, the neoclassical school was expanding its core belief that human behaviour was driven by the desire to maximise utility. This had developed from a guiding principle in Bentham’s hands, to a coherent theory of consumer and producer behaviour in the hands of Jevons, and to an explanation for the overall coordination of a market economy in Walras. At the turn of the 20th century, neoclassical economists were confident that their science could continue expanding its explanation of the economy. It was clearly then a progressive scientific research program.

Though the majority of economists still believe that this is the case today, there are manifest signs that this is no longer true. Instead, the theory today is degenerate: rather than expanding the range of phenomena it can explain, the leading edge of the theory is dominated by adjusting the protective belt of ancillary beliefs to defend the hard core beliefs from attack. For example, the Sonnenschein-Mantel-Debreu conditions (discussed in Chapter 2) are a way of maintaining the hard core belief that individual behaviour is driven by utility maximisation, despite the proof that individual preferences cannot be aggregated. A similar interpretation could be given of responses of neoclassical economics to the many logical problems documented in this book.
But the problems with economics go beyond just this, since if economics were as fully a science as astronomy, eventually its litany of failures would lead to at least a general acknowledgement of crisis.

The incredible inertness of economics

What makes economics different from and inferior to other sciences is the irrational tenacity with which it holds to its core beliefs in the face of either contrary factual evidence, or theoretical critiques that establish fundamental inconsistencies in its intellectual apparatus.

The discovery, for example, that firms believe they experience constant or falling marginal costs (Eiteman & Guthrie 1952), and generally set prices by placing a markup on average cost, led not to the abandonment of the economic theory of price setting, but to a welter of papers arguing that in a competitive market, the effect of markup pricing was the same as if firms did consciously equate marginal cost to marginal revenue (Langlois 1989). On the same note, Sraffa’s theoretical argument that diminishing marginal returns were unlikely to occur in practice was ignored.
As a result, students at the end of the 20th century are receiving much the same instruction about how firms set prices as did their counterparts at the end of the 19th century.

Physical sciences hold onto their core beliefs with some tenacity, but nowhere near this much–even Albert’s paper goes on to observe that “serious students of the foundations of quantum mechanics rarely defend the standard formulation anymore” (Albert 1994). As a result, revolutions in physical sciences–where one dominant paradigm is replaced by another–occur much more frequently than they do in economics. Often, these revolutions outpace the popular understanding of a science.
Astronomy provides an example of this. I expect that most lay people think that the dominant theory of how the universe came into being is the ‘Big Bang’. In this theory, the universe originated in a ‘quantum singularity’ some 12-15 billion years ago. This explosion kick-started matter and time, leading to the immense universe we observe today. Back in the 1950s, this theory won out against its rival, that the universe had always been in a ‘steady state’ of expansion.

The Big Bang was indeed the dominant theory for some time–until it was pointed out that, according to calculations from quantum mechanics, the Big Bang would have resulted in a universe consisting of a mere handful of elementary particles.

A rival theory then developed which argued that, for a substantial period of time, the laws of physics of the current universe did not apply. Matter, for example, could move much faster than the speed of light. This ‘inflationary universe’ theory has subsequently been embellished to predict that there are many universes–as opposed to the one universe postulated by the Big Bang.

The shifts from the Big Bang paradigm, to the inflationary universe, to ‘multiverses’, are big ones conceptually. The first envisages a single finite universe, while the last muses that ours may be only one of many universes, each with different ‘fundamental’ physical laws. But the science of astronomy made this move over a period of about 20 years, and it continues to undergo development today. Now even the inflationary/multiverse theory is under challenge, as measurements imply that the rate of expansion of the universe is actually increasing with time.

Economics, in contrast, has had only one acknowledged revolutionary episode in the last century–the Keynesian revolution during the 1930s. Yet at the end of the 20th century, the dominant school of thought in economics retains almost nothing from that revolution, and in fact appears as a direct descendant of pre-Keynesian neoclassical economics.

Think of the many revolutions in our understanding of the physical world which have occurred in the 20th century: from Newtonian to Einsteinian physics; from Mendelian genetics to DNA and the human genome; from determinism to chaos theory. Any scientist from the 19th century would be bewildered by what is commonplace today in his discipline–save an economist.

Why is economics so resistant to change? Is it because everything economists believed at the end of the 19th century was correct? Hardly, as this book shows. Instead, to understand the incredible inertness of economics, we have to consider an essential difference between social sciences in general and the physical sciences, and the thorny topic of ideology.

My kingdom for an experiment

In the 19th century, scientists and philosophers of science generally believed that what distinguished the social sciences from the physical sciences was that the latter could undertake experiments to test their theories, whereas the former could not. In the 20th century, Popper instead argued that the distinction between a science–like physics–and a non-science–like astrology–was not that one could undertake experiments and the other could not, but that one made falsifiable statements, while the other did not. Popper’s distinction between science and non-science did not map neatly onto the ‘experiments versus no experiments’ distinction, but it did tend to play down the importance of experimentation in deciding what was and what was not a science.

The history of economics implies that Popper’s distinction does not give sufficient attention to whether or not a falsifiable statement can in fact be experimentally falsified. For example, Milton Friedman is famous as the father of the now relatively defunct sub-branch of economics known as monetarism. One falsifiable statement he made was that inflation is caused by the government increasing the money supply more rapidly than the economy is growing.

This implied that, to reduce inflation, all the government had to do was to increase the money supply more slowly than the economy was growing. This was the basis of the economic policies of Margaret Thatcher, yet eventually this approach was abandoned. One reason why was that the government was never able to meet its targets for the rate of growth of the money supply–it might aim to increase it by, say, 6%, only to see it grow by 11%. Also, the relationship between the three crucial variables in Friedman’s theory–the rate of inflation, the rate of growth of the economy, and the rate of growth of the money supply–was never as watertight in practice as it appeared to be in his theory.
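Friedman's claim rests on the quantity theory of money, MV = PQ: with velocity V held roughly constant, taking growth rates gives inflation as approximately money growth minus real output growth. A minimal sketch of that arithmetic follows; the 2% output-growth figure is purely illustrative, while the 6% target and 11% outturn echo the hypothetical numbers in the text.

```python
def implied_inflation(money_growth, output_growth, velocity_growth=0.0):
    # Quantity theory identity MV = PQ in growth-rate form:
    # inflation ~= money growth + velocity growth - real output growth.
    return money_growth + velocity_growth - output_growth

# A 6% money-growth target versus an 11% outturn, with the economy
# (hypothetically) growing at 2% a year:
print(implied_inflation(0.06, 0.02))  # target implies roughly 4% inflation
print(implied_inflation(0.11, 0.02))  # outturn implies roughly 9% inflation
```

The `velocity_growth` term is exactly where the theory slipped in practice: velocity turned out not to be constant, so the tidy link between money growth and inflation was far looser than the identity suggests.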

You could thus argue that Friedman’s statement–that inflation is caused by the government expanding the money supply faster than the rate of growth of the economy–had been falsified. Did this lead Milton and his supporters to abandon his theory? Of course not: monetarists instead argued that all sorts of attenuating features disturbed the results.

In other words, because the monetarist experiment in Great Britain wasn’t a controlled experiment, monetarist economists could refuse to accept that their theory had been falsified.

The same observation can be made about Marxist economists, and their attitude towards the data on Marx’s theory that the rate of profit would tend to fall, or the inevitability of socialism, and so on. In other words, this isn’t just a disease of the political Right, but an endemic problem in economics: without the ability to undertake controlled experiments, statements which could be falsified will be unfalsifiable in practice. Economists of all persuasions are therefore liable to hang on to beliefs that they argue are scientific, but which in the end are ideological.

The experience of another social science, psychology, provides some support for the argument that the ability to undertake experiments is crucial to scientific progress. For much of the 20th century, psychology was dominated by the ‘behaviourist’ school. This school argued that an organism’s behaviour had to be understood as a response to an external stimulus: it was ‘unscientific’ to postulate any unobservable mental processes of the organism which mediated between the stimulus and the response. To this school, complex behaviour–such as playing a piano–had to be understood as a chain of stimuli and responses. However, experiments showed that ‘even average pianists move their hands too quickly for the tactile information to pass along the sensory nerves to the central nervous system and for the command to move the hands to be sent down the motor nerves... Therefore, the behaviourist hypothesis that each new action is a response to an external stimulus is implausible’ (Bond 2000).

This and several other experimental falsifications of behaviourism led to its demise, and replacement by cognitive psychology, which accepts that “there are cognitive processes that determine our behaviour … which we, as psychologists, must explain, even if they are not directly observable” (Bond 2000). Thus psychology, with the help of experiments, was able to undergo a revolution from one dominant school to another–while economics continues to be dominated by the same school (which, ironically, has a very behaviourist view of human behaviour). Unless it develops a means to undertake experiments to test rival theories, economics may be unable to break from the grip of ideology.

Equilibrium and an invisible ideology

Economics as a discipline arose at a time when English society was in the final stages of removing the controls of the feudal system from its mercantile/capitalist economy. In this climate, economic theory had a definite (and beneficial) political role: it provided a counter to the religious ideology that once supported the feudal order, and which still influenced how people thought about society. In the feudal system the pre-ordained hierarchy of king, lord, servant and serf was justified on the basis of the ‘divine right of Kings’. The King was God’s representative on earth, and the social structure which flowed down from him was a reflection of God’s wishes.

This structure was nothing if not ordered, but this order imposed severe restrictions on the now dominant classes of merchants and industrialists. At virtually every step, merchants were met with government controls and tariffs. When they railed against these imposts, the reply came back that they were needed to ensure social order.

Economic theory–then rightly called political economy–provided the merchants with a crucial ideological rejoinder. A system of government was not needed to ensure order: instead, social order would arise naturally in a market system in which each individual followed his own self-interest. Smith’s phrase ‘the invisible hand’ came along rather late in the process, but the notion played a key role in the political and social transformations of the late 18th and early 19th centuries.

An essential aspect of this market social order was equilibrium.

From the outset, economists presumed that the market system would achieve equilibrium. Indeed, the achievement of equilibrium was often touted as an advantage of the free market over any system where prices were set by fiat. Equilibrium was therefore an essential notion of the economic defence of capitalism: the equilibrium of the capitalist market would replace the legislative order of the now defunct feudal hierarchy.
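As a toy illustration of what economists mean by market equilibrium, a Walrasian price-adjustment ('tatonnement') process can be sketched in a few lines. The linear demand and supply curves below are invented for the sketch, and the smooth convergence it displays is itself an assumption, precisely the one the text goes on to question.

```python
def excess_demand(price, a=10.0, b=1.0, c=2.0):
    # Toy linear market: demand = a - b*price, supply = c*price.
    demand = a - b * price
    supply = c * price
    return demand - supply

def tatonnement(price=1.0, step=0.1, iterations=200):
    # Walrasian adjustment: raise the price when demand exceeds supply,
    # lower it when supply exceeds demand.
    for _ in range(iterations):
        price += step * excess_demand(price)
    return price

p_star = tatonnement()
print(round(p_star, 4))  # settles near 10/3, where demand equals supply
```

With these made-up curves the process converges because the adjustment is a contraction; nothing in the sketch guarantees that a real economy, or even a less convenient model, behaves this way.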

More importantly, whereas the feudal order endowed only the well-born with welfare, the equilibrium of the market would guarantee the best possible welfare for all members of society. The level of individual welfare would reflect the individual’s contribution to society: people would enjoy the lifestyle they deserved, rather than the lifestyle into which they had been born.

If, instead of equilibrium, economists had promised that capitalism would deliver chaos; if, instead of meritocracy, economists had said that the market could concentrate inequality, then economists could have hindered rather than helped the transition to capitalism (though they more likely would have been ignored).

By the middle of the 19th century, the transition to capitalism was complete: what was left of feudalism was a mere vestige. But rather than the promised equilibrium, 19th century capitalism was wracked by cycles and enormous disparities of wealth. A major depression occurred roughly every 20 years, workers’ conditions would improve and then rapidly deteriorate, prices rise and then fall, banks expand and then collapse. New ‘robber barons’ replaced the barons of old. It appeared that, while promising a meritocratic equilibrium, capitalism had instead delivered unbalanced chaos. A new political challenge arose: that of socialism.

Once again, economics rose to the challenge, and once again equilibrium was a central tenet. This time the defence was mounted by what we today call neoclassical economics, since classical economics had been turned into a weapon against capitalism by the last great classical economist, Karl Marx.

In contrast to the hand-waving of Smith, the neoclassical economists of the late 19th century provided a substantive mathematical analysis of how equilibrium could be achieved by an idealised market economy, and how this equilibrium could be fair to all. However, unlike the earlier classical championing of capitalism, this technical edifice provided very little in the way of libertarian slogans for the battle against the ideology of socialism. Instead of arming capitalism’s defenders with rhetoric to deploy against socialists, it gave birth to the academic discipline of economics.

Capitalism eventually transcended the challenge of socialism, with little real assistance from economic theory. But while the economics had little impact upon capitalism, the need to defend capitalism had a profound impact upon the nature of economic theory. The defensive imperative, and the role of equilibrium in that defence, cemented equilibrium’s role as a core belief of economic theory.

At the beginning of the 3rd millennium, there is no competing social system against which capitalism must prove its superiority. Feudalism is long dead, and those socialist societies which remain are either socialist in name only, or bit players on the world stage.

Today, most economists imperiously dismiss the notion that ideology plays any part in their thinking. The profession has in fact devised the term ‘positive economics’ to signify economic theory without any value judgments, while describing economics with value judgments as ‘normative economics’–and the positive is exalted far above the normative.

Yet ideology innately lurks within ‘positive economics’ in the form of the core belief in equilibrium. As previous chapters have shown, economic theory has contorted itself to ensure that it reaches the conclusion that a market economy will achieve equilibrium.[4] The defence of this core belief is what has made economics so resistant to change, since virtually every challenge to economic theory has called upon it to abandon the concept of equilibrium. It has refused to do so, and thus each challenge–Sraffa’s critique, the calamity of the Great Depression, Keynes’s challenge, the modern science of complexity–has been repulsed, ignored, or belittled.

This core belief explains why economists tend to be extreme conservatives on major policy debates, while simultaneously believing that they are non-ideological, and are in fact motivated by knowledge rather than bias.

If you believe that a free market system will naturally tend towards equilibrium–and also that equilibrium embodies the highest possible welfare for the highest number–then ipso facto, any system other than a complete free market will produce disequilibrium and reduce welfare. You will therefore oppose minimum wage legislation and social security payments–because they will lead to disequilibrium in the labour market. You will oppose price controls–because they will cause disequilibrium in product markets. You will argue for private provision of services–such as education, health, welfare, perhaps even police–because governments, untrammelled by the discipline of supply and demand, will either under or oversupply the market (and charge too much or too little for the service).

In fact, the only policies you will support are ones that make the real world conform more closely to your economic model. Thus you may support anti-monopoly laws–because your theory tells you that monopolies are bad. You may support anti-union laws, because your theory asserts that collective bargaining will distort labour market outcomes.

And you will do all this without being ideological.

Really?

Yes, really–in that most economists genuinely believe that their policy positions are informed by scientific knowledge, rather than by personal bias or religious-style dogma. Economists are truly sincere in their belief that their policy recommendations will make the world a better place for everyone in it–so sincere, in fact, that they often act against their own self-interest.

For example, there is little doubt that an effective academic union could increase the wages paid to academic economists. If economists were truly self-motivated–if they behaved like the entirely self-interested rational economic man of their models–they would do well to support academic unions, since the negative impacts they predict unions to have would fall on other individuals (fee-paying students and unemployed academics). But instead, one often finds that economists are the least unionised of academics, and they frequently argue against actions that, according to their theories, could conceivably benefit the minority of academics at the expense of the greater community. However ideological economists may appear to their critics, in their hearts they are sincerely non-partisan–and, ironically, altruistic.

But non-partisan in self-belief does not mean non-partisan in reality. With equilibrium both encapsulating and obscuring so many ideological issues in economics, the slavish devotion to the concept forces economists into politically reactionary and intellectually contradictory positions.

Of course, if economists were right that equilibrium embodies the best possible outcome for the greatest number, then their apparently ideological policy positions would be justified–if the economy always headed back to equilibrium when disturbed from its Nirvana. In the next chapter, we’ll put aside the critiques which establish that the building blocks of equilibrium are invalid, and instead ask whether economic equilibrium, as defined by economic theory, is in fact stable.

gilhyle
21st April 2009, 23:33
Kuhn's major work is highly readable and I recommend it strongly - by contrast I found Lakatos schematic and somewhat ideological. Kuhn is a much better historian. Philipp Frank I haven't read - he sounds interesting in relation to science, which is patently not 'defined' but just exists as a social and economic phenomenon - the nearby thread on scientific method illustrates well the myth-making which continues in this area, which Kuhn rightly sought to escape.

Invariance
22nd April 2009, 02:52
Gilhyle - you make a fair point on Lakatos' scheme being ideological. But then again, which philosophy can we claim isn't ideological? I'm not one to reduce philosophy of science to politics, but I think one should take Lakatos' history into account when considering why he came up with what he did; to Lakatos, Kuhn placed too much emphasis on, and too much trust in, science as a social practice.

'As in political revolutions, so in paradigm choice, there is no standard higher than the consent of the relevant community.' (Kuhn)

Lakatos had been part of the Marxist resistance in Germany, after the war held a post in government, but was later imprisoned by that government for three years for 'revisionism' (I'm not particularly sure what he 'revised', if anyone could explain that). And later on he was a figure of the '56 uprising, after which he had to flee the country.

If science rests on mere 'mob psychology', without permitting rules on what is acceptable science, the results will be the same as what happens in societies without rules and procedures; we'll get undesirable results and a lack of progress.

So I think Lakatos had very real reasons for being so 'schematic', but whether that is how science works, or should work, is another matter.

Invariance
22nd April 2009, 02:59
I also agree on Kuhn being highly readable, and Lakatos being less so - which is not unconnected to their philosophical approaches. I would highly recommend everyone read Kuhn's work. It's quite short and readable.

The Structure of Scientific Revolutions by Thomas Kuhn (http://www.megaupload.com/?d=AUWUKJ0F)

The Methodology of Scientific Research Programmes by Imre Lakatos (http://www.megaupload.com/?d=1MHJIVWH)

gilhyle
24th April 2009, 00:04
Lakatos had been part of the Marxist resistance in Germany, after the war held a post in government, but was later imprisoned by that government for three years for 'revisionism' (I'm not particularly sure what he 'revised', if anyone could explain that). And later on he was a figure of the '56 uprising, after which he had to flee the country.


While it is irrelevant to me, I think this version of Lakatos' personal history somewhat overstates his pretty disgraceful role in the resistance movement in ...Hungary, if I recall correctly... and understates his careerist approach in the post-war period... and then overstates his role in '56, which - if I recall correctly - he just used as an opportunity to defect.

However, I don't judge him by his biography.

gilhyle
24th April 2009, 00:08
If science rests on mere 'mob psychology', without permitting rules on what is acceptable science, the results will be the same as what happens in societies without rules and procedures; we'll get undesirable results and a lack of progress.


This is, however, the reality. Science is - in practice - full of misconceived projects, progress needlessly held back, needed research projects not done, possibilities not investigated, views lacking credibility surviving long after they should have been rejected, censorship, plagiarism, sloppiness, personal financial corruption, etc. - as well as (even during the imperialist epoch of capitalism) an incredible amount of great work, precise, clear analysis, brilliant experimentation, etc.

Invariance
26th April 2009, 19:42
While it is irrelevant to me, I think this version of Lakatos' personal history somewhat overstates his pretty disgraceful role in the resistance movement in ....Hungary if I recall correctly

Yes, it was Hungary, not Germany - that was a slip of the mind, obviously, as there was no uprising in 'Germany' in '56. But now you have me interested: how was his role in the resistance movement disgraceful? So far as I remember, his mother and grandmother were both killed in Auschwitz, being Jewish. And he was a 'communist' during that time. Disgraceful how?


.....and understates his careerist approach in the post war period...and then overstates his role in 56 which - if I recall correctly - he just used as a opportunity to defect.

Careerist? Perhaps - most academics are. Merely using it as an opportunity to defect? I'm not so convinced; he was warned of his arrest and fled. I don't see why he wouldn't have fled earlier, given he had been imprisoned previously. Mind you, my only biographical information on him is from lecturers, so it's necessarily incomplete. Still, I'm interested in what you know about his history that I don't.


However, I dont judge him by his biography.

Well, neither do I. I just thought it relevant to raise his 'For Method' approach as influenced by his political mistrust of institutions.


This is, however, the reality. Science is - in practice - full of misconceived projects, progress needlessly held back, needed research projects not done, possibilities not investigated, views lacking credibility surviving long after they should have been rejected, censorship, plagiarism, sloppiness, personal financial corruption etc.....as well as (even during the imperialsit epoch of capitalism) an incredible amount of great work, precise clear analsysis, brilliant experimentation etc.

I quite agree.

gilhyle
27th April 2009, 00:21
disgraceful

Lakatos was part of a communist cell which didn't actually do anything in the resistance, but mostly hid out; they did have some Jewish members. Each one hid in a different place and they met up to read and discuss. The only thing they did was to deal with the fact that one of them did not have a safe place to hide. They met together and decided that the only way for the rest of them to be safe was for her to commit suicide. She was led to the forest and asked to kill herself, which she did. That, to my mind, was disgraceful - they should have taken the risk of hiding her with one of themselves.