Against Hayek
From Critiques Of Libertarianism
<!-- you can have any number of categories here -->
[[Category:Allin Cottrell]] [[Category:Paul Cockshott]] [[Category:Socialist Calculation Debate]] [[Category:Friedrich von Hayek]]
<!-- 1 URL must be followed by >= 0 Other URL and Old URL and 1 End URL.-->
{{URL | url = http://mpra.ub.uni-muenchen.de/6062/1/MPRA_paper_6062.pdf}}
<!-- {{Other URL | url = }} -->
<!-- {{Old URL | url = }} -->
{{End URL}}
{{DES | des = "Hayek and his followers have grossly overestimated the difficulties of carrying out rational socialist planning. They have coupled this with an exaggerated idea of the effectiveness of the free market as an economic regulator." | show=}}
<!-- insert wiki page text here -->
<!-- DPL has problems with categories that have a single quote in them. Use these explicit workarounds. -->
<!-- otherwise, we would use {{Links}} and {{Quotes}} -->
{{List|title=Against Hayek|links=true}}
{{Quotations|title=Against Hayek|quotes=true}}
{{Text |

Contents

1 Hayek on Information and Knowledge
2 Prices as a telecoms system
  2.1 Information loss
  2.2 Why can prices work at all?
  2.3 When centralisation helps
3 Is economic coordination tractable?
  3.1 Can millions of planning equations be solved?
4 Information in planned and market economies
  4.1 How much information is needed?
  4.2 The argument from dynamics
5 Conclusion
A A simple planning program
  A.1 calcstep

Chapter 1

Hayek on Information and Knowledge

Examination of the economics of information is associated with the Hayekian school. Friedrich August von Hayek (1899–1992) was an Austrian economist and political philosopher, noted for his defense of liberal democracy and free-market capitalism against socialist and collectivist thought in the mid-20th century.
Hayek's ideas acquired a practical relevance from their political adoption, first by the Thatcher government in Britain in the 1980s and later by post-Soviet governments in Russia and Eastern Europe. We consider that he made fundamental errors in his analysis of economic information, errors which, when they became the basis for practical policy, had catastrophic effects on economic co-ordination and performance. Prices, according to Hayek, provide the telecoms system of the economy, a means by which knowledge is diffused and disseminated. Whereas the present authors strongly believe in the applicability of the methods of natural science to the study of social phenomena, Hayek (1955) was concerned to distinguish radically between the two domains of investigation. In the natural sciences, advances involve recognizing that things are not what they seem. Science dissolves the immediate categories of subjective experience and replaces them with underlying, often hidden, causes. The study of society, on the other hand, has to take as its raw material the ideas and beliefs of people in society.

The facts studied by social science differ from the facts of the physical sciences in being beliefs or opinions held by particular people, beliefs which as such are our data, irrespective of whether they are true or false, and which, moreover, we cannot directly observe in the minds of people but which we can recognize from what they say or do merely because we have ourselves a mind similar to theirs. (Hayek, 1955, p. 28)

He argues that there is an irreducible subjective element to the subject matter of the social sciences which is absent from the physical sciences.

[M]ost of the objects of social or human action are not "objective facts" in the special narrow sense in which the term is used in the Sciences and contrasted to "opinions", and they cannot at all be defined in physical terms. So far as human actions are concerned, things are what the acting people think they are.
(Hayek, 1955, p. 27)

His paradigm for the social or moral sciences is that society must be understood in terms of men's conscious reflected actions, it being assumed that people are constantly consciously choosing between different possible courses of action. Any collective phenomena must thus be conceived of as the unintended outcome of the decisions of individual conscious actors. This imposes a fundamental dichotomy between the study of nature and of society, since in dealing with natural phenomena it may be reasonable to suppose that the individual scientist can know all the relevant information, while in the social context this condition cannot possibly be met.

We believe that Hayek's objection is fundamentally misplaced. Even Laplace, who is famously cited as an advocate of determinism, argued that although the universe was in principle predictable to the smallest detail, this was in practice impossible because of limited knowledge, and that science thus had to have recourse to probability theory. Certainly since Boltzmann it has been understood how collective phenomena arise as 'unintended' or emergent outcomes of a mass of uncoordinated processes. The work of Wright (2003) shows how the law of value arises in a similar way. But he did not have to model consciousness on the part of the economic actors to get this result.

In Hayek's view, there were two knowledge forms: scientific knowledge (understood as knowledge of general laws) versus "unorganized knowledge" or "knowledge of the particular circumstances of time and place". The former, he says, may be susceptible of centralization via a "body of suitably chosen experts" (Hayek (1945), p. 521) but the latter is a different matter.

[P]ractically every individual has some advantage over others in that he possesses unique information of which beneficial use might be made, but of which use can be made only if the decisions depending on it are left to him or are made with his active cooperation.
(Hayek (1945), pp. 521–22)

Hayek is thinking here of "knowledge of people, of local conditions, and special circumstances" (Hayek (1945), p. 522), e.g., of the fact that a certain machine is not fully employed, or of a skill that could be better utilized. He also cites the sort of specific, localized knowledge relied upon by shippers and arbitrageurs. He claims that this sort of knowledge is often seriously undervalued by those who consider general scientific knowledge as paradigmatic. But this leaves out of account a whole layer of knowledge that is crucial for economic activity, namely knowledge of specific technologies, knowledge captured in designs, knowledge captured in software.[1] Such knowledge is not reducible to general scientific law (it is generally a non-trivial problem to move from a relevant scientific theory to a workable industrial innovation), but neither is it so time- or place-specific that it is non-communicable. The licensing and transfer of technologies in a capitalist context shows this quite clearly. It also misses out the tendency of capitalist society to capture ever more human knowledge in objective form:

once a worker's knowledge is captured as structural capital, you can then do away with the worker. In industrial capitalism the worker's surplus labor was expropriated, but you had to retain the worker as long as you wanted to make use of his labor. The worker still owned his labor power, and sold it for his wages. But in the new economy, knowledge is both labor and the means of production, both of which are expropriated and turned into structural capital for the exclusive use of the corporation. Thus, intellectual capital can be totally alienated from the worker. Not only is the value of the labor stolen, but the labor itself.

[1] It would be anachronistic to accuse Hayek of not seeing knowledge in software, but in his day knowledge already existed in the control programs for automatic machines, for instance pianola rolls.
(Harris, 1996)

Hayek's notion of knowledge existing solely 'in the mind' is an obstacle to understanding. It is by now all but universal practice for firms to keep records of their inputs and outputs in the form of some sort of computer spreadsheet. These computer files form an image of the firm's input–output characteristics, an image which is readily transferable.[2] Further, even the sort of 'particular' knowledge which Hayek thought too localized to be susceptible to centralization is now routinely centralized. Take his example of the information possessed by shippers. In the 1970s American Airlines achieved the position of the world's largest airline, to a great extent on the strength of their development of the SABRE system of computerized booking of flights (Gibbs, 1994). Since then we have come to take it for granted that we will be able to tap into the Internet to determine where and when there are flights available from just about any A to any B across the world. Hayek's appeal to localized knowledge in this sort of context may have been appropriate at the time of writing, but it is now clearly outdated.

[2] Admittedly, such an image does not of itself provide any information on how, for instance, a particularly favorable set of input–output relations can be achieved, only that it is possible.

Chapter 2

Prices as a telecoms system

Prices, according to Hayek, provide the telecoms system of the economy. But how adequate is this telecoms system, and how much information can it really transmit? While insisting that very specific, localized knowledge is essential to economic decision making, Hayek clearly recognizes that the "man on the spot" needs to know more than just his immediate circumstances before he can act effectively. Hence there arises the problem of "communicating to him such further information as he needs to fit his decisions into the whole pattern of changes of the larger economic system" (Hayek, 1945, p.
525). How much does he need to know? Fortuitously, only that which is conveyed by prices. Hayek constructs an example to illustrate his point:

Assume that somewhere in the world a new opportunity for the use of some raw material, say tin, has arisen, or that one of the sources of supply of tin has been eliminated. It does not matter for our purpose, and it is very significant that it does not matter, which of these two causes has made tin more scarce. All that the users of tin need to know is that some of the tin they used to consume is now more profitably employed elsewhere, and that in consequence they must economize tin. There is no need for the great majority of them even to know where the more urgent need has arisen, or in favor of what other uses they ought to husband the supply. (Hayek, 1945, p. 526)

Despite the absence of any such overview, the effects of the disturbance in the tin market will ramify throughout the economy just the same.

The whole acts as one market, not because any of its members survey the whole field, but because their limited individual fields of vision sufficiently overlap so that through many intermediaries the relevant information is communicated to all. (ibid.)

Therefore the significant thing about the price system is "the economy of knowledge with which it operates" (Hayek, 1945, pp. 526–7). He drives his point home thus:

It is more than a metaphor to describe the price system as a kind of machinery for registering change, or a system of telecommunications which enables individual producers to watch merely the movement of a few pointers, as an engineer might watch the hands of a few dials, in order to adjust their activities to changes of which they may never know more than is reflected in the price movements. (Hayek, 1945, p. 527)

He admits that the adjustments produced via the price system are not perfect in the sense of general equilibrium theory, but they are nonetheless a "marvel" of economical coordination. (ibid.)
Hayek's example of the tin market bears careful examination. Two preliminary points should be made.

First, the market system does manage to achieve a reasonable degree of coordination of economic activities. The "anarchy of the market" is far from total chaos. In the end the law of value acts through the fluctuation of prices. Fluctuations of prices about values do function to regulate the allocation of labour between branches of production.

Second, even in a planned economy there will always be scope for the disappointment of expectations, for projects that looked promising ex ante to turn out to be failures, and so on. Failures of coordination are not confined to market systems.

That said, it is nonetheless clear that Hayek grossly overstates his case. In order to make rational decisions relating to changing one's usage of tin, one has to know whether a rise in price is likely to be permanent or transient, and that requires knowing why the price has risen. The current price signal is never enough in itself. Has tin become more expensive temporarily, due, say, to a strike by the tin miners? Or are we approaching the exhaustion of readily available reserves? Actions that are rational in the one case will be quite inappropriate in the other. Prices in themselves provide adequate knowledge for rational calculation only if they are at their long-run equilibrium levels, but of course for Hayek they never are.

On this point it is useful to refer back to Hayek's own theory of the trade cycle,[1] in which the 'misinformation' conveyed by disequilibrium prices can cause very substantial macro-economic distortions. In Hayek's cycle theory, the disequilibrium price that can do such damage is the rate of interest, but clearly the same sort of argument applies at the micro level too. Decentralized profit-maximizing responses to unsustainable prices for tin or RAM chips are equally capable of generating misinvestment and subsequent chaos.
At minimum, prices may be said to carry information regarding the terms on which various commodities may currently be exchanged, via the mediation of money (so long as markets clear, which is not always the case). It does not follow, however, that a knowledge of these exchange ratios enables agents to calculate the profitability, let alone the social usefulness, of producing various commodities. A commodity can be produced at a profit if its price exceeds the sum of the prices of the inputs required to produce it, using the production method which yields the least such sum, but the use of current prices in this calculation is legitimate only in a static context: either prices are unchanging or production and sale take zero time. Hayek, of course, stresses constant change as the rule, so he is hardly in a position to entertain this sort of assumption. Whether production of commodity x will in fact prove profitable or not depends on future prices as well as current prices. And whether production of x currently appears profitable depends on current expectations of future prices.

If we start from the assumption that prices will almost certainly not remain unchanged in future, how are agents supposed to form their expectations? One possibility is that they are able to gather sufficient relevant information to make a definite forecast of the changes that are likely to occur. This clearly requires that they know much more than just current prices. They must know the process whereby prices are formed, and form forecasts of the evolution of the various factors (at any rate, the more important of them) that bear upon price determination. Hayek's informational minimalism is then substantially breached.

[1] Hayek (1935); see also Lawlor and Horn (1992) and Cottrell (1994).

A second possibility is that described by Keynes (1936) (esp.
chapter 12): agents are so much in the dark on the future that, although they are sure that some (unspecified) change will occur, they fall back upon the convention of assuming that tomorrow's prices will equal today's. This enables them to form a conventional assessment of the profitability of producing various commodities, using current price information alone; but the cost of this approach (from the standpoint of a defender of the efficiency of the market) is the recognition that those ex ante assessments will be regularly and perhaps substantially wrong.

Prices do convey objective information about the social costs of production: through the noise of their fluctuations the signal of value shines through. Because of this they may well function as a regulator of production. Divergences of prices above or below values could serve to attract or repel labour resources into and from branches of production. It is one thing to recognize that this is possible, another to assess its importance in regulating the economy.

Posted prices are not the only telecoms system the economy has. Actual orders for commodities are another. Firms set prices and then get orders which are specified in quantities. If a business manager paid attention only to the prices she sold things at and ignored the quantities being ordered, the firm would not survive long. A priori one cannot say whether the price system or the quantity system is more significant in regulating the economy. One has to know how flexible firms actually are in adjusting their prices in response to sales, and then to compare this with how often they adjust their actions in response to changes in orders. Consider a supermarket: how many price adjustments does it make in a day compared to the number of new quantitative orders it places with its warehouse?
Or consider a TV factory: how often does the factory respond to orders with a change in price as compared to how often it responds by adjusting the current level of production? Consider a design engineer deciding what components to use in a new set-top box for digital TV. Should the engineer base their choice solely on component price, or should they take into account information such as availability, what stocks are held by suppliers, and the existence of second sources?

The relative importance of the price channel and the quantity channel in inter-firm communication is an open question. One could answer it either by empirical studies of business practice or by multi-agent simulations similar to those described earlier in the book, but extended to incorporate input/output tables coding the flows between industries. Given such a model one could vary the rules used by firms to respond to orders, between variants in which the firms responded primarily to quantity signals and ones in which the firms responded primarily to price signals. Initial investigations by one of the authors seem to indicate that economies more reliant upon price signaling can be subject to catastrophic instabilities. Fluctuations in deliveries can lead to key industries collapsing and the whole economy shutting down.

2.1 Information loss

Hayek is certainly right to say that prices involve an economy of information, since the process by which a price is formed is entropy reducing. If we consider an input/output table, we see that it is a square matrix. A full input/output table of an economy with n products would contain n² numbers. But the prices of these products can be encoded in a vector of only n distinct numbers.
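The scale of this reduction is easy to make concrete by simple counting. In the sketch below the economy size n and the bits-per-number figure b are invented, illustrative values:

```python
import math

# A full input/output table of an n-product economy holds n*n numbers,
# while the price vector holds only n. If each recorded number carries
# b bits, the table's entropy is about n*n*b bits and the vector's is
# n*b bits, so the vector's entropy grows roughly as the square root
# of the table's. Both n and b are invented, illustrative figures.

n = 1000   # number of distinct products
b = 32     # bits per recorded quantity or price

H_I = n * n * b   # entropy of the interconnection table, in bits
H_P = n * b       # entropy of the price vector, in bits

print(H_I, H_P)
print(H_P == round(math.sqrt(H_I * b)))  # prints True
```

The identity holds exactly here: H_P equals the square root of H_I up to the constant factor of the square root of b, which is the sense of the approximate law discussed next.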
Let us assume that the entropy of interconnection of an economy, H_I, is encoded in the input/output table; then the entropy of the price vector, H_P, grows according to the law

H_P ≈ √H_I

We will see later that this treatment somewhat overestimates the entropy of interconnection, but it is clear that there is a very substantial information reduction going on here.

2.2 Why can prices work at all?

How then can such a reduced information structure function to regulate the economy? How can it work if it allows "individual producers to watch merely the movement of a few pointers"? We will leave aside for now the relative importance of the price and quantity channels in economic information flows, and concentrate on how a single vector of prices might act as a regulator for a complex matrix of inter-sector flows. There seem to be two basic reasons why it could work:

1. The universality of human labour means that it is possible to associate with each commodity a single scalar number, price, which indirectly represents the amount of labour that was used to make it. Deviations of relative prices from relative values can then allow labour to move from where it is less socially necessary to where it is more necessary. But this is only possible because all economic activity comes down in the end to human activity. Were that not the case, a single indicator would not be sufficient to regulate the consumption of inputs that were fundamentally of different dimensions. It is only because the dimension of all inputs is ultimately labour, direct or indirect, that prices can regulate activity.

2. Another answer lies in the computational tractability of systems of linear equations. Consider the method that we gave in Cottrell and Cockshott (1992) for computing the labour values of commodities from an input/output table. We made an initial estimate of the value of each commodity and then used the I/O table to make successively more precise estimates.
What we have here is an iterative functional system, in which we repeatedly apply a function to the value vector to arrive at a new value vector. Because the mapping is what is termed a contractive affine transform, the functional system has an attractor to which it converges. (For a discussion of such systems see Barnsley (1988), in particular Chapter 3.) This attractor is the system of labour values. The system must constitute a contractive transform because any viable economy must have a net surplus product in its basic sector. Hence an initial error in the estimate of the value of an input commodity is spread over a larger quantity of the commodity on output, and thus after an iteration the percentage error must decline.

The process that we described algorithmically in Cottrell and Cockshott (1992) is what happens in a distributed manner in a real economy as prices are being formed. Firms add up wage costs and costs of other commodity inputs, add a mark-up and set their prices accordingly. This distributed algorithm, which is nowadays carried out by a combination of people and company computers, is structurally similar to the one we described. It too constitutes a contractive affine transform, which converges on a price vector.[2]

The exact attractor is not relevant at this point; what is relevant is that the iterative functional system has a stable attractor. It has this because the process of economic production can be well approximated by a piecewise contractive linear transform on price or value space. Were it the case that production processes were strongly non-linear, such that the output of, say, corn were a polynomial function of its inputs, then the iterative functional system would be highly unstable, and the evolution of the entire price system would be completely chaotic and unpredictable. Prices would then be useless as a guide to economic activity. For the instability of such systems see Becker and Dorfler (1989) or Baker and Gollub (1990).
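The iteration just described can be sketched in a few lines of Python. The three-product economy below is invented for illustration; the one property that matters is that each product uses up less than one unit of inputs per unit of output (a net surplus), which makes the update a contraction whose attractor is the labour-value vector:

```python
# Iterative computation of labour values from an input/output table,
# in the spirit of the method of Cottrell and Cockshott (1992).
# The three-product economy is a made-up example.

# A[i][j] = units of product j used up to make one unit of product i
A = [[0.1, 0.3, 0.0],
     [0.2, 0.1, 0.2],
     [0.0, 0.2, 0.1]]
direct_labour = [1.0, 0.5, 2.0]   # hours of direct labour per unit

values = [0.0, 0.0, 0.0]          # crude initial estimate
for _ in range(60):               # each pass shrinks the error
    values = [direct_labour[i]
              + sum(A[i][j] * values[j] for j in range(3))
              for i in range(3)]

# At the attractor, value = direct labour + value of inputs used up.
residuals = [abs(direct_labour[i]
                 + sum(A[i][j] * values[j] for j in range(3))
                 - values[i]) for i in range(3)]
print(max(residuals) < 1e-9)      # prints True: the iteration converged
```

Each pass corresponds to one round of firms re-costing their outputs from the current estimates of their inputs' values, which is the distributed process the text describes.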
Neither of the two factors above is specific to a market economy. Labour is the key universal resource in any society prior to full robotisation. By the full version of the Church–Turing thesis,[3] if a problem could be solved by a distributed collection of human computers, then it can be solved by a universal computer. If it is tractable for a distributed collection of humans, it is also algorithmically tractable when calculated by the computers of a socialist planning agency. The very factors which make the price system relatively stable and useful are the factors which make socialist economic calculation tractable. Computing the labour value of each product is tractable,[4] hence labour values could be used as a basis for pricing in a planned economy, transmitting basically the same information as is transmitted in prices.

Having argued that the centralized processing of much economic information is tractable, we now consider its desirability. When economic calculation is viewed as a computational process, the advantages of calculation on a distributed or decentralized basis are far from evident; the question hinges on how a multiplicity of facts about production possibilities in different branches of the economy interrelate. The interrelation of facts is, partially, an image in the field of information of the real interrelation of the branches of the economy. The outputs of one activity act as inputs for another: this is the real interdependence. In addition, there are potential interactions where different branches of production function as alternative users of inputs.

It is important to distinguish the two types of interaction. The first represents real flows of material and is a static property of a snapshot of the economy. The second, the variation in potential uses for goods, is not a property of the real economy but of the phase space of possible economies. The latter is part of the economic problem insofar as this is considered to be a search for optimal points within this phase space.

[2] Empirical evidence (Petrovic, 1987; Ochoa, 1989; Cockshott and Cottrell, 1997; Michaelson et al., 1995; Shaikh, 1998; Cockshott and Cottrell, 2003) indicates that the price vector it converges on lies somewhere in between the vector of labour values and the vector of Sraffian prices.

[3] Every function that can be physically computed can be computed by a Turing machine. Informally, the Church–Turing thesis states that our notion of algorithm can be made precise and that computers can run those algorithms. Furthermore, a computer can theoretically run any algorithm; in other words, all ordinary computers are equivalent to each other in terms of theoretical computational power, and it is not possible to build a calculation device that is more powerful than the simplest computer (a Turing machine).

[4] The computational complexity of iteratively determining labour values is relatively low, significantly lower than that of computing a strict matrix inverse, which is the way the problem is normally specified in the literature. Naive matrix inversion has complexity N³, and optimal versions exist with complexity N^2.38 (see Numerical Recipes Software (1988), page 104). The iterative approximation method has complexity kN², where k is the number of iterations required to get an acceptably accurate answer. The answer converges rapidly, so acceptable results are obtained with k < 10. In fact, disaggregated input/output matrices are typically sparse, with most elements being zero, which allows further significant speedups by compacting the data to elide the zero elements. The resulting complexity is of order kNM, where M is the mean number of direct inputs that go to make an output. For fully disaggregated tables M grows much more slowly than N, so the overall complexity is significantly less than N².
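The claim that sparsity brings the cost of a value-iteration pass down to about NM operations can be sketched as follows. Storing only the nonzero input coefficients makes each pass cost roughly N times M operations rather than N times N; the randomly generated economy below is purely illustrative:

```python
import random

# Sparse labour-value iteration: each product has only ~M direct inputs,
# so one pass costs about N*M operations instead of N*N. The economy is
# randomly generated and purely illustrative; coefficients are kept
# small so that production yields a surplus and the update contracts.

random.seed(0)
N, M = 5000, 4   # products, and mean number of direct inputs per product

# inputs[i] maps each input product j to the amount of j used per unit of i
inputs = [{random.randrange(N): random.uniform(0.01, 0.1) for _ in range(M)}
          for _ in range(N)]
labour = [random.uniform(0.1, 2.0) for _ in range(N)]

values = [0.0] * N
for _ in range(30):               # k passes, each of cost ~ N*M
    values = [labour[i] + sum(a * values[j] for j, a in inputs[i].items())
              for i in range(N)]

# Check the fixed point: value = direct labour + value of inputs used up
worst = max(abs(labour[i] + sum(a * values[j] for j, a in inputs[i].items())
                - values[i]) for i in range(N))
print(worst < 1e-9)               # prints True: converged within k = 30 passes
```

Here k = 30 passes over 5,000 products with 4 inputs each is some 600,000 multiply-adds, trivially small next to the N³ of naive inversion for the same N.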
According to neo-classical economic theory, the evolution of a real market economy—the real interdependencies between branches—provides the search procedure by which these optima are sought. The economy describes a trajectory through its phase space. This trajectory is the product of the trajectories of all of the individual economic agents, with these individual agents deciding upon their next position on the basis of the information they get from the price system.

Following up on Hayek's metaphor of the price system as a telecoms system or machinery for registering changes, the market economy as a whole acts as a single processor.[5] A single processor, because at any one point in time it can be characterized by a single state vector that defines its position in the phase space of the economic problem. Moreover, this processor operates with a very slow cycle time, since the transmission of information is bounded by the rate of change of prices. To produce an alteration in prices, there must be a change in the real movement of goods (we are abstracting here from the small number of highly specialized futures markets). Thus the speed of information transmission is tied to the speed with which real goods can be moved or new production facilities brought on line. In sum, a market economy performs a single-threaded search through its state space, with a relatively slow set of adjustments to its position, the speed of adjustments being determined by how fast the real economy can move.

Contrast this now with what can potentially be done if the relevant facts can be concentrated, not in one place—that would be impossible—but within a small volume of space. If the information is gathered into one or more computing machines, these can search the possible state space without any change in the real economy. Here the question of whether to concentrate the information is very relevant.
It is a basic property of the universe that no portion of it can affect another in less time than it takes for light to propagate between them. Suppose one had all the relevant information spread around a network of computers across the country. Assume any one of these could send a message to any other. Suppose that this network was now instructed to simulate possible states of the economy in order to search for optima. The evolution from one simulated state to another could proceed as fast as the computers could exchange information regarding their own current state. Given that electronic signals between them travel at the speed of light, this will be far faster than a real economy can evolve.

[5] If we take neo-classical theory in its own terms, the processor would have to be an analogue processor, since the maths of neo-classical theory is cast in terms of real variables. According to Velupillai (2003) this fundamentally undermines many of its conclusions. However, as Cockshott and Michaelson (2007) have argued, analogue computation with real numbers is, for physical reasons, a fantasy. Moreover, all economic transactions are done in integer quantities of money.

2.3 When centralisation helps

Whether planning is implemented using central supercomputers, a distributed network of local machines, or some combination of these is an essentially pragmatic issue relating to the technology available. There are, however, a number of practical advantages from the centralisation of certain computation and control facilities.

The speed with which a complex decision-making apparatus can function depends both upon how fast information can propagate through it, and upon how fast its individual components can respond to this information. One of the arguments against the market is that the price signals it transmits have, except in financial markets, a relatively slow rate of propagation.
This is because changes in price come about through changes in production, and their frequency is bounded by the rate at which productive capacity can be adjusted. This implies a relatively long, and very costly, cycle time: we typically measure business cycles as having a duration of 3 to 7 years. In contrast, a cybernetic planning system could work out the intermediate and capital goods implications of a change in consumer demand in hours or days. Just how fast it would work would depend on whether the calculation used distributed or centralised computing techniques.

One component of a cybernetic control system has to be distributed. Clearly it is the Airbus factories that have the information about what parts are used to make an A340, and the car plants that have the information about what parts are used to make a Mondeo. This information approximates to what Hayek and the Austrian school of economics call contextual or tacit knowledge, but it is of course no longer human knowledge. Literally nobody knows what parts go into an A340. The information, too vast for a human to handle, is stored in a relational database. At an earlier stage of industrial development it would have been dealt with by a complex system of paper records. Again the knowledge would have been objective, residing in objects rather than in human brains. The very possibility of large-scale, co-ordinated industrial activity rests upon the existence of such objectivised information.

The information to construct the parts explosion is generated by a computerised design process within the collaborating factories of Airbus Industrie. In a cybernetically controlled socialist economy, the parts explosion data for the A340, along with the parts explosion data for other products, would have to be computationally combined to arrive at a balanced production plan. This computation could be done either in a distributed or a centralised way.
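A parts explosion of the kind described can be sketched as a recursive walk over bills of materials. The product names and quantities below are invented stand-ins, not real A340 data; a real system would, as the text notes, hold them in a relational database:

```python
# Toy parts explosion: compute the total requirement for every part
# implied by an output target, by recursively walking bills of
# materials. Names and quantities are invented. The recursion assumes
# the bill-of-materials graph is acyclic, as real parts hierarchies are.

bom = {
    "airliner": {"wing": 2, "engine": 4, "seat": 300},
    "wing":     {"spar": 5, "panel": 40},
    "engine":   {"turbine_blade": 70, "casing": 1},
}

def explode(product, quantity, totals):
    """Accumulate the total requirement for every part beneath `product`."""
    for part, per_unit in bom.get(product, {}).items():
        needed = per_unit * quantity
        totals[part] = totals.get(part, 0) + needed
        explode(part, needed, totals)
    return totals

plan = explode("airliner", 10, {})
print(plan["wing"], plan["spar"], plan["turbine_blade"])  # 20 100 2800
```

Summing such exploded requirements over the output targets for all products is exactly the combination into a balanced plan that the text describes, and it reduces to the input/output arithmetic discussed in Chapter 2.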
In the one case it would proceed by the exchange of messages between local computers; in the other, the parts explosion data would be transmitted to a single processing center to be handled by highly parallel supercomputers.

If one uses widely distributed parallel processors, the speed of computation tends to be markedly slower than when one uses tightly coupled parallel machines. If the computation requires extensive inter-communication of information, as those involved in economic equilibration do, then it becomes bounded by the transmission speed of messages from one part of the computational system to another. A tightly coupled computing system with n processors will tend to compute faster than a distributed system with n equivalent processors. This is because the communication channels between processors are shorter in the tightly coupled system, and in consequence messages travelling at the speed of light pass between processors in less time.

A cybernetic system of economic control using computer technology will be faster than a market one, since the electronic transmission of messages between computing centres is orders of magnitude faster than a process of price adjustments brought about by overshooting or undershooting demand; but because of the light-speed limit on electronic messages there are advantages to centralising part of the computational process in the cybernetic system.

Chapter 3
Is economic coordination tractable?

It may be objected that the sheer scale of the economic problem is such that, although conceivable in principle, such computations would be unrealisable in practice (Hayek (1955);[1] see also Nove (1983)). Given modern computer technology this is far from the case, as we show in Section 3.1. However, neo-classical economists and the Austrian school have a very different concept of equilibrium from us. Our concept is that of statistical equilibrium as described by Farjoun and Machover (1983).
Statistical equilibrium is not a point in phase space, but a region defined by certain macroscopic variables, such that there is a large set of microscopic conditions that are compatible with it. The concept of equilibrium with which Hayek was familiar was that of a mechanical equilibrium: a unique position in phase space at which all forces acting on the economy come into balance. Arrow and Debreu (1954) supposedly established the existence of this sort of equilibrium for competitive economies, but as Velupillai (2003) showed, their proof rested on theorems that are only valid in non-constructive mathematics.

Why does it matter whether Arrow used constructive or non-constructive mathematics? Because only constructive mathematics has an algorithmic implementation and is guaranteed to be effectively computable. But even if

1. a mechanical economic equilibrium can be proven to exist, and
2. it can be shown that there is an effective procedure by which this equilibrium can be determined, i.e., that it is in principle computable,

there is still the question of its computational tractability. What complexity order governs the computational process that arrives at the solution?

Suppose that an equilibrium exists, but that all algorithms to search for it are NP-hard; that is, the algorithms may have a running time that is exponential in the size of the problem. This is just what has been shown by Deng and Huang (2006).

[1] The specific reference here is to p. 43, and more particularly to note 37 on pp. 212–213, of The Counter-Revolution of Science. In the note, Hayek appeals to the judgment of Pareto and Cournot that the solution of a system of equations representing the conditions of general equilibrium would be practically infeasible. This is perhaps worth emphasizing in view of the tendency of Hayek's modern supporters to play down the computational issue.

Their result might at first seem to support Hayek's contention that the problem of rational economic
planning is computationally intractable. In Hayek's day the notion of NP-hardness had not been invented, but he would seem to have been retrospectively vindicated. Problems with a computational cost that grows as O(e^n) soon become astronomically difficult to solve.

We mean astronomical in a literal sense. One can readily specify an NP-hard problem that involves searching more possibilities than there are atoms in the universe before arriving at a definite answer. Such problems, although in principle finite, are beyond any practical solution. But this knife cuts with two edges. On the one hand it shows that no planning computer could solve the neo-classical problem of economic equilibrium. On the other it shows that no collection of millions of individuals interacting via the market could solve it either. In neo-classical economics, the number of constraints on the equilibrium will be proportional, among other things, to the number of economic actors n. The computational resource constituted by the actors will be proportional to n, but the cost of the computation will grow as e^n. Computational resources grow linearly; computational costs grow exponentially. This means that a market economy could never have sufficient computational resources to find its own mechanical equilibrium.

It follows that the problem of finding the neo-classical equilibrium is a mirage. No planning system could discover it, but nor could the market. The neo-classical problem of equilibrium misrepresents what capitalist economies actually do, and it also sets an impossible goal for socialist planning.

If one dispenses with the notion of mechanical equilibrium and replaces it with statistical equilibrium, one arrives at a problem that is much more tractable. The simulations described by Wright (2005, 2003) show that a market economy can rapidly converge on this sort of equilibrium. But as we have argued above, this is because regulation by the law of value is computationally tractable.
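The sense in which an O(e^n) search is "astronomical in a literal sense" can be made concrete with a few lines of arithmetic. The sketch below (our own illustration, using the common order-of-magnitude estimate of 10^80 atoms in the observable universe) finds how small a problem already defeats exhaustive search:

```python
import math

ATOMS_IN_UNIVERSE = 1e80  # common order-of-magnitude estimate

# Find the smallest problem size n at which an e**n-step search
# would need more steps than there are atoms in the universe.
n = 1
while math.exp(n) < ATOMS_IN_UNIVERSE:
    n += 1
# n is now 185: a "universe-sized" budget is exhausted by a problem of
# size 185, while the computational resources (the economic actors
# themselves) grow only linearly with n.
```

At n = 185 the exponential cost already exceeds any conceivable physical resource, which is the two-edged knife of the argument: neither a planning computer nor the market's n actors can pay an e^n bill.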
This same tractability can be exploited in a socialist planning system. Economic planning does not have to solve the impossible problem of neo-classical equilibrium; it merely has to apply the law of value more efficiently.

3.1 Can millions of planning equations be solved?

If we assume that the economy retains some form of market for consumer goods, as proposed by Lange, to provide information on final requirements, then the process of deriving a balanced plan is tractable. Let us take a very simple example: an economy with 4 types of goods, which we will call bread, corn, coal and iron. In order to mine coal, both iron and coal are used as inputs. To make bread we need corn for the flour and coal to bake it. To grow the corn, iron tools and seed corn are required. The making of iron itself demands coal and more iron implements. We can describe this as a set of four processes:

1 ton iron  ← 0.05 ton iron + 2 ton coal + 20 days labour
1 ton coal  ← 0.2 ton coal + 0.1 ton iron + 3 days labour
1 ton corn  ← 0.1 ton corn + 0.02 ton iron + 10 days labour
1 ton bread ← 1.5 ton corn + 0.5 ton coal + 1 day labour

Assume, following Lange (1938), that the planning authorities have a current estimate of consumer demand for final outputs. The planners start with the required net output, shown on the first line of Table 3.1. We assume that 20000 tons of coal and 1000 tons of bread are the consumer goods required. They estimate how much iron, corn, coal, and labour would be directly consumed in producing the final output: 2000 tons of iron, 1500 tons of corn and 4500 additional tons of coal. They add the intermediate inputs to the net output to get a first estimate of the gross usage of goods. Since this estimate involved producing more iron, coal and corn than they had at first allowed for, they repeat the calculation to get a second estimate of the gross usage of goods. Each time they repeat the process they get a different total requirement of iron, coal, corn and labour, as shown in Table 3.1.

Table 3.1: Convergence of gross production on that required for the final net product

                            iron   coal   corn   bread   labour
Net output                     0  20000      0    1000        0
1st estimate gross usage    2000  24500   1500    1000    61000
2nd estimate                2580  29400   1650    1000   129500
3rd estimate                3102  31540   1665    1000   157300
4th estimate                3342  33012   1666    1000   174310
(hidden steps)                ..     ..     ..      ..       ..
                            3708  34895   1667    1000   196510
                            3708  34895   1667    1000   196515
20th estimate gross usage   3708  34896   1667    1000   196517

Does this confirm the claims of Hayek that the equations necessary for socialist planning are unsolvable? No, it does not. The answers differ each time round, but the differences between successive answers get smaller and smaller. Eventually, after 20 attempts in this example, the planners get a consistent result: if the population is to consume 20000 tons of coal and 1000 tons of bread, then the gross output of iron must be 3708 tons, that of coal 34896 tons and that of corn 1667 tons.

Is it feasible to scale this up to the number of goods produced in a real economy? Whilst the calculations would be impossibly tedious to do by hand, they are readily automated. Table 3.1 was produced by running the computer algorithm given in Appendix A. If detailed planning is to be feasible, we need to know:

1. How many types of goods an economy produces.
2. How many types of inputs are used to produce each output.
3. How fast a computer program running the algorithm would be for the scale of data given by (1) and (2).

Table 3.2 illustrates the effect of running the planning algorithm on a cheap personal computer of 2004 vintage. We determined the calculation time for economies whose number of industries ranged from one thousand to one million. Two different assumptions were tested for the way in which the mean number of inputs used to make a good depends on the complexity of the economy.
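As a check on the claim that such calculations are readily automated, the iterative procedure behind Table 3.1 can be sketched in a few lines. This is our own reconstruction from the coefficients of the four processes above, not the program of Appendix A:

```python
GOODS = ["iron", "coal", "corn", "bread"]

# A[i][j]: tons of good i consumed per ton of good j produced,
# taken from the four processes above (bread is not used as an input).
A = [
    # iron  coal  corn  bread   <- output
    [0.05, 0.10, 0.02, 0.0],  # iron used
    [2.00, 0.20, 0.00, 0.5],  # coal used
    [0.00, 0.00, 0.10, 1.5],  # corn used
    [0.00, 0.00, 0.00, 0.0],  # bread used
]
LABOUR = [20.0, 3.0, 10.0, 1.0]  # days of labour per ton of each output

net = [0.0, 20000.0, 0.0, 1000.0]  # required net output: coal and bread

def balance(net, A, steps):
    """Repeatedly add the intermediate inputs implied by the current
    gross-output estimate, as the planners do in Table 3.1."""
    gross = net[:]
    for _ in range(steps):
        gross = [net[i] + sum(A[i][j] * gross[j] for j in range(4))
                 for i in range(4)]
    return gross

gross = balance(net, A, steps=100)  # run well past convergence
labour = sum(l * g for l, g in zip(LABOUR, gross))
# gross rounds to [3708, 34896, 1667, 1000], the final row of Table 3.1
```

Doubling the number of steps leaves the rounded figures unchanged, which is exactly the convergence criterion the planners use in the text.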
It is clear that the number of direct inputs used to manufacture each product is only a tiny fraction of the range of goods produced in an economy. It is also plausible that as industrial complexity develops, the mean number of inputs used to produce each output will also grow, but more slowly. In the first part of Table 3.2 it is assumed that the mean number of inputs (M) grows as the square root of the number of final outputs (N). In the second part of the table the growth of M is assumed to follow a logarithmic law.

Table 3.2: Timings for applying the planning algorithm in Appendix A to model economies of different sizes. Timings were performed on a 3 GHz Intel Xeon running Linux, with 2 GB of memory.

Law         Industries N   Mean inputs M   CPU time (s)   Memory
M = √N      1,000          30              0.1            150KB
            10,000         100             3.8            5MB
            40,000         200             33.8           64MB
            160,000        400             77.1           512MB
            320,000        600             166.0          1.5G
M ≈ log N   1,000          30              0.1            150KB
            10,000         40              1.6            2.4MB
            100,000        50              5.8            40MB
            1,000,000      60              68.2           480MB

It can be seen that calculation times are modest even for very big economic models. The apparently daunting million-equation foe yields gracefully to the modest home computer. The limiting factor in the experiments is computer memory. The largest model tested required 1.5 gigabytes of memory; since the usable data space of a P4 processor is at most 2 gigabytes, larger models would have required a more advanced 64-bit computer.

The experiment went up to 1 million products. The number of industrial products in the Soviet economy was estimated by Nove (1983) to be around 10 million. Nove believed this number was so huge as to rule out any possibility of constructing a balanced disaggregated plan. This may well have been true with the computer technology available in the 1970s, but the situation is now quite different. A single PC could compute a disaggregated plan for a smallish economy like Sweden in a couple of minutes. Suppose we want to plan a continental-scale economy.
It might have 10 million products. Let us assume that the average number of inputs required to produce each output is a very large 2000. On the basis of Table 3.2 this would require a computer with 80 gigabytes of memory: Euro 6000 at 2006 prices. Using a single 2006-vintage 64-bit processor the computation would take of the order of an hour.

The algorithm we have presented is for a single processor, but the problem lends itself well to parallelisation. A Beowulf cluster of PCs, costing perhaps Euro 40,000, could probably cut the compute time to under 10 minutes. More sophisticated algorithms capable of allocating fixed capital stocks have comparable complexities and running times.[2]

[2] Cockshott (1990), Cottrell and Cockshott (1992).

The compute time required is sufficiently short for a planning authority, should it so wish, to be able to perform the operation on a daily basis. In performing this calculation the planners arrive at the various scales of production at which the market economy would operate were it able to attain equilibrium. Faced with an exogenous change, the planners can compute the new equilibrium position and issue directives to production units to move directly to it. This direct move will involve the physical movement of goods, laying of foundations, fitting out of buildings, etc., and will therefore take some considerable time.

We now have two times: the time of calculation and the time of physical adjustment. If we assume that the calculation is performed with an iterative algorithm, we find that in practice it will converge acceptably within a dozen iterations. Since each of these iterations would take a few minutes on a supercomputer, the overall time would probably be under an hour. In a market economy, even making the most favourable assumptions about its ability to adjust stably to equilibrium, the individual iterations will take a time proportional to the physical adjustment time.
The overall relaxation period would be around a dozen times as long as that in the planned system (assuming a dozen convergence steps).

Chapter 4
Information in planned and market economies

It is one of the progressive features of capitalism that the process of competition forces some degree of convergence upon least-cost methods of production (even if the cost in question is monetary cost of production, which reflects social cost in a partial and distorted manner). Hayek reminds us, and rightly so, that this convergence may in fact be far from complete. Firms producing the same commodity (and perhaps even using the same basic technology) may co-exist for extended periods despite having quite divergent costs of production. If the law of one price applies to the products in question, the less efficient producers will make lower profits and/or pay lower wages.

The question arises whether convergence on best practice could be enforced more effectively in a planned system. This may be the case. If all workers are paid at a uniform rate for work done, it will be impossible for inefficient producers to mask their inefficiency by paying low wages. Indeed, with the sort of labour-time accounting system advocated elsewhere (Cottrell and Cockshott (1989), (1993)), differentials in productive efficiency will be immediately apparent. Not only that, but there should be a broader range of mechanisms for eliminating differentials once they are spotted. A private firm may realize that a competitor is producing at lower cost, but short of industrial espionage may have no way of finding out how this is achieved. Convergence of efficiency, if it is attained at all, may have to wait until the less efficient producer is driven out of business and its market share usurped by more efficient rivals.
In the context of a planned system, on the other hand, some of the managers or technical experts from the more efficient enterprises might, for instance, be seconded as consultants to the less efficient enterprises. One can also imagine, in the absence of commercial secrecy, economy-wide wikis on which the people concerned with operating particular technologies, or producing particular products, share their tips and tricks for maximizing efficiency.

4.1 How much information is needed?

One of Hayek's most fundamental arguments is that the efficient functioning of an economy involves making use of a great deal of distributed information, and that the task of centralizing this information is practically impossible. In what follows we attempt to put this argument to a quantitative test. We compare the information transmission costs implicit in a market system and a planned system, and examine how the respective costs grow as a function of the scale of the economy. Communications cost is a measure of the work done to centralize or disseminate economic information: we will use the conceptual apparatus of algorithmic information theory (Chaitin (1999)) to measure this cost.

Our strategy is first to consider the dynamic problem of how fast, and with what communications overhead, an economy can stabilize. We will demonstrate that this can be done faster and at less communications cost by the planned system. We consider initially the dynamics of convergence on a fixed target, since the control system with the faster impulse response will also be faster at tracking a moving target.

Consider an economy E = [A, c, r, w] with n producers, each producing distinct products using technology matrix A, with a well-defined vector of final consumption expenditure c that is independent of the prices of the n products, an exogenously given wage rate w and a compatible rate of profit r.
Then there exists a possible Sraffian solution e = [U, p], where U is the commodity flow matrix and p a price vector. We will assume, as is the case in commercial arithmetic, that all quantities are expressed to some finite precision rather than being real numbers. How much information is required to specify this solution?

The argument that follows is relatively insensitive to the exact way we have specified the starting condition from which a solution is to be sought. This is because we consider convergence in information space. Recall that we have in Section ?? expressed scepticism about the existence of a given rate of profit r as assumed in Sraffian theory. We are not concerned with showing that a capitalist economy does converge towards a solution; that can be left to the neo-classical and neo-Ricardian economists. Whether or not such a convergence tendency actually exists, let us concede that it does for the sake of the current argument.

Assuming that we have some efficient binary encoding method, and that I(s) is a measure in bits of the information content of the data structure s using this method, the solution can be specified by I(e), or, since the solution is in a sense given in the starting conditions, by I(E) + I(ps), where ps is a program to solve an arbitrary system of Sraffian equations. In general we have I(e) ≤ I(E) + I(ps). In what follows we will assume that I(e) is specified by I(E) + I(ps).

Let I(x|y) be the conditional or relative information (Chaitin (1987)) of x given y. The conditional information associated with any arbitrary configuration of the economy, k = [Uk, pk], may then be expressed relative to the solution, e, as I(k|e). If k is in the neighbourhood of e we should expect that I(k|e) ≤ I(k).
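To make the price side of such a solution e concrete, the price vector p can be obtained by simple fixed-point iteration. The sketch below reuses the technology coefficients of the Section 3.1 example; the wage w = 1 and profit rate r = 0.2 are illustrative values of our own, not parameters fixed by the text:

```python
# A[i][j]: input of good i per unit of good j, from the Section 3.1 example
A = [
    [0.05, 0.10, 0.02, 0.0],
    [2.00, 0.20, 0.00, 0.5],
    [0.00, 0.00, 0.10, 1.5],
    [0.00, 0.00, 0.00, 0.0],
]
l = [20.0, 3.0, 10.0, 1.0]  # direct labour per unit of each output
w, r = 1.0, 0.2             # assumed wage and profit rate (illustrative)

# Sraffian price equation with wages paid ex post:
#   p[j] = (1 + r) * sum_i A[i][j] * p[i] + w * l[j]
p = [0.0] * 4
for _ in range(500):
    p = [(1 + r) * sum(A[i][j] * p[i] for i in range(4)) + w * l[j]
         for j in range(4)]

# After iteration, p satisfies the price equation to within rounding error.
residual = max(abs(p[j] - ((1 + r) * sum(A[i][j] * p[i] for i in range(4))
                           + w * l[j])) for j in range(4))
```

The iteration converges because the profit-augmented input matrix has spectral radius below one for this r; all quantities are finite-precision numbers, in line with the assumption of commercial arithmetic above.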
For instance, suppose that we can derive Uk from A and an intensity vector uk, which specifies the rate at which each industry operates; then I(k|e) ≤ I(uk) + I(pk) + I(pu), where pu is a program to compute Uk from some A and some uk. Since Uk is a matrix and uk a vector, each of scale n, we can assume that I(Uk) > I(uk). As the economy converges on a solution, the conditional information required to specify it will shrink, since uk starts to approximate to ue.[1]

[1] Note that this information measure of the distance from equilibrium, based on a sum of logarithms, differs from a simple Euclidean measure, based on a sum of squares. The information measure is more sensitive to a multiplicity of small errors than to one large error. Because of the equivalence between information and entropy it also measures the conditional entropy of the system.

Intuitively, we only have to supply the difference vector between the two, and this will require less and less information to encode, the smaller the distance between uk and ue. A similar argument applies to the two price vectors pk and pe. If we assume that the system follows a dynamic law that causes it to converge towards a solution, then we should have the relation I(k_{t+1}|e) < I(k_t|e).

Now construct a model of the amount of information that has to be transmitted between the producers of a market economy in order to move it towards a solution. Make the simplifying assumptions that all production processes take one time step to operate, and that the whole process evolves synchronously. Assume the process starts just after production has finished, with the economy in some random non-equilibrium state. Further assume that each firm starts out with a given selling price for its product. Each firm i carries out the following procedure:

1. It writes to all its suppliers asking them their current prices.
2. It replies to all price requests that it gets, quoting its current price pi.
3. It opens and reads all price quotes from its suppliers.
4. It estimates its current per-unit cost of production.
5. It calculates the anticipated profitability of production.
6. If this is above r it increases its target production rate ui by some fraction. If profitability is below r a proportionate reduction is made.
7. It now calculates how much of each input j is required to sustain that production.
8. It sends off to each of its suppliers j an order for amount Uij of their product.
9. It opens all orders that it has received and
   (a) totals them up;
   (b) if the total is greater than the available product, it scales down each order proportionately to ensure that what it can supply is fairly distributed among its customers;
   (c) it dispatches the (partially) filled orders to its customers;
   (d) if it has no remaining stocks it increases its selling price by some increasing function of the level of excess orders, while if it has stocks left over it reduces its price by some increasing function of the remaining stock.
10. It receives all deliveries of inputs and determines at what scale it can actually proceed with production.
11. It commences production for the next period.

Experience with computer models of this type of system indicates that if the readiness of producers to change prices is too great, the system can be grossly unstable. We will assume that the price changes are sufficiently small to ensure that only damped oscillations occur. The condition for movement towards a solution is then that, over a sufficiently large ensemble of points k in phase space, the mean effect of an iteration of the above procedure is to decrease the mean error for each economic variable by some factor 0 ≤ g < 1. Under such circumstances, while the convergence time in vector space will clearly follow a logarithmic law (to converge by a factor of D in vector space will take time of order log_{1/g}(D)), in information space the convergence time will be linear because of the logarithmic nature of information measures.
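The link between geometric convergence in vector space and linear convergence in information space can be illustrated numerically. The encoding cost of roughly log2(|error| + 1) bits per variable is our own rough stand-in for an efficient code, not a construction from the text:

```python
import math

def info_bits(errors):
    """Approximate bits needed to encode a vector of deviations
    from the solution -- a stand-in for I(k|e)."""
    return sum(math.log2(abs(e) + 1.0) for e in errors)

g = 0.5                           # each step halves every error (illustrative)
errors = [float(1 << 20)] * 10    # start far from the solution
trace = []
for _ in range(6):
    trace.append(info_bits(errors))
    errors = [e * g for e in errors]

drops = [trace[t] - trace[t + 1] for t in range(5)]
# Each iteration removes an almost constant number of bits (here ~10,
# i.e. one bit per variable per halving): the vector-space error shrinks
# geometrically, but the information measure falls linearly.
```

This is exactly the behaviour asserted above: a constant factor g per step in vector space translates into a constant number of bits per step in information space.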
Thus if at time t the distance from equilibrium is I(kt|e), convergence to within a distance ε will take a time of order

    (I(kt|e) − ε) / (δ log(1/g))

where δ is a constant related to the number of economic variables that alter by a mean factor of g each step. The convergence time in information space, for small ε, will thus approximate to a linear function of I(k|e), which we can write as ∆I(k|e).

We are now in a position to express the communications costs of reducing the conditional entropy of the economy to some level ε. Communication takes place at steps 1, 2, 8 and 9c of the procedure. How many messages does each supplier have to send, and how much information must they contain? Letters through the mail contain much redundant pro-forma information: we will assume that this is eliminated and the messages reduced to their bare essentials. The whole of the pro forma will be treated as a single symbol in a limited alphabet of message types. A request for a quote would thus be the pair [R, H], where R is a symbol indicating that the message is a quotation request, and H the home address of the requester. A quote would be the pair [Q, P], with Q indicating that the message is a quote and P being the price. An order would similarly be represented by [O, Uij], and with each delivery would go a dispatch note [N, U′ij] indicating the actual amount delivered, where U′ij ≤ Uij.

If we assume that each of n firms has on average m suppliers, the number of messages of each type per iteration of the procedure will be nm. Since we have an alphabet of message types (R, Q, O, N) with cardinality 4, these symbols can be encoded in 2 bits each. We will further assume that (H, P, Uij, U′ij) can each be encoded in binary numbers of b bits. We thus obtain an expression for the communications cost of an iteration of 4nm(b + 2). Taking into account the number of iterations, the cost of approaching the equilibrium will be 4nm(b + 2)∆I(k|e).
Let us now contrast this with what would be required in a planned economy. Here there are two distinct procedures, that followed by the (state-owned) firm and that followed by the planning bureau. The model of socialist economy we are describing is roughly that given in Lange (1938) or Cottrell and Cockshott (1992). The firms do the following:

1. In the first time period:
   (a) they send to the planners a message listing their address, their technical input coefficients and their current output stocks;
   (b) they receive instructions from the planners about how much of each of their outputs is to be sent to each of their users;
   (c) they send the goods with appropriate dispatch notes to their users;
   (d) they receive goods inward, read the dispatch notes and calculate their new production level;
   (e) they commence production.
2. They then repeatedly perform the same sequence, replacing step 1a with:
   (a) they send to the planners a message giving their current output stocks.

The planning bureau performs the complementary procedure:

1. In the first period:
   (a) they read the details of stocks and technical coefficients from all of their producers;
   (b) they compute the equilibrium point e from the technical coefficients and the final demand;
   (c) they compute a turnpike path (Dorfman et al. (1958)) from the current output structure to the equilibrium output structure;
   (d) they send out orders for firms to make deliveries consistent with moving along that path.
2. In the second and subsequent periods:
   (a) they read messages giving the extent to which output targets have been met;
   (b) they compute a turnpike path from the current output structure to the equilibrium output structure;
   (c) they send out orders for firms to make deliveries consistent with moving along that path.

We assume that with computer technology the steps (b) and (c) can be undertaken in a time that is small relative to the production period (see Section 3.1 above).
Comparing the respective information flows, it is clear that the number of orders and dispatch notes sent per iteration is invariant between the two modes of organization of production. The only difference is that in the planned case the orders come from the center whereas in the market they come from the customers. These messages will again account for a communications load of 2nm(b + 2). The difference is that in the planned system there is no exchange of price information. Instead, on the first iteration there is a transmission of information about stocks and technical coefficients. Since any coefficient takes two numbers to specify, the communications load per firm will be (1 + 2m)b. For n firms this approximates to the 2nm(b + 2) that was required to communicate the price data. The difference comes on subsequent iterations where, assuming no technical change, there is no need to update the planners' record of the technology matrix. On the i − 1 subsequent iterations, the planning system therefore has to exchange only about half as much information as the market system. Furthermore, since the planned economy moves on a turnpike path to equilibrium, its convergence time will be less than that of the market economy. The consequent communications cost is 2nm(b + 2)(2 + (i − 1)), where i < ∆I(k|e).

The consequence is that, contrary to Hayek's claims, the amount of information that would have to be transmitted in a planned system is substantially lower than for a market system. The centralized gathering of information is less onerous than the commercial correspondence required by the market. Hayek's error comes from focusing on the price channel to the exclusion of the quantity channel. In addition, the convergence time of the market system is slower. The implications of faster convergence for adaptation to changing, rather than stable, conditions of production and consumption are obvious.
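Plugging illustrative figures into the two cost expressions, 4nm(b + 2)∆I(k|e) for the market and 2nm(b + 2)(2 + (i − 1)) for the plan, gives a feel for the scale of the difference. All the numerical values below are our own assumptions, chosen only for the sake of the example:

```python
n = 1_000_000   # firms (assumed)
m = 40          # average suppliers per firm (assumed)
b = 32          # bits per address/price/quantity field (assumed)
delta_I = 100   # market iterations to approach equilibrium (assumed)
i = 12          # planning iterations (assumed, respecting i < delta_I)

market_bits = 4 * n * m * (b + 2) * delta_I         # 4nm(b+2) * deltaI
planned_bits = 2 * n * m * (b + 2) * (2 + (i - 1))  # 2nm(b+2)(2+(i-1))

ratio = market_bits / planned_bits
# With these assumptions the market system transmits roughly 15 times
# as many bits as the planned system to reach the same neighbourhood
# of equilibrium.
```

The gap is driven by two independent factors visible in the formulas: the plan's per-iteration load is half the market's, and the turnpike path needs far fewer iterations (i versus ∆I(k|e)).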
In addition, it should be noted that in our model for the market, we have ignored any information that has to be sent around the system in order to make payments. In practice, with the sending of invoices, cheques, receipts, clearing of cheques, etc., the information flow in the market system is likely to be several times as high as our estimates. The higher communications overheads of market economies are reflected in the numbers of office workers they employ, which in turn leaves its mark on the architecture of cities, as witnessed by the skylines of Moscow and New York in the 1980s.

4.2 The argument from dynamics

Does Hayek's concentration on the dynamic aspect of prices, price as a means of dynamically transmitting information, make any sense? In one way it does. In Section ?? we showed that the information content of a price in the UK was less than 14 bits. If we consider today's price of a cup of coffee as an example, then yesterday's price was probably the same. If the price changes only once a year, then for 364 days the only information that it conveys is that the price has not changed. The information content of this, −log2(364/365), is about 0.0039 of a bit. Then, when the price does change, its information content is −log2(1/365) + b, where b is the number of bits needed to encode the price increase. For a reasonable value of the increase, say 10 pence, the whole amounts to some 12 bits. So on the day the price changes, it conveys some 3000 times as much information as it did on every other day of the year. It is thus almost certainly true that most of the information in a price series is encoded in the price changes. From the standpoint of someone observing and reacting to prices, the changes are all-important. But this is a viewpoint internal to the dynamics of the market system. One has to ask if the information thus conveyed has a more general import.
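The coffee-price arithmetic above is easily reproduced. Treating a price change as an event of probability 1/365 on any given day, and taking roughly log2(10) bits to encode a 10 pence rise (our assumed coding for b):

```python
import math

# Information conveyed on a day when the price does NOT change:
bits_no_change = -math.log2(364 / 365)    # ~0.004 bits

# Information conveyed on the day it DOES change: the surprise of the
# change itself plus b bits to encode the size of the increase.
b = math.log2(10)                          # ~3.3 bits for a 10p rise (assumed)
bits_change = -math.log2(1 / 365) + b      # ~11.8 bits, i.e. "some 12 bits"

ratio = bits_change / bits_no_change       # ~3000, as stated in the text
```

The calculation confirms the text's figures: the change-day price carries roughly three thousand times the information of a no-change day.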
The price changes experienced by a firm in a market economy can arise from many different causes, but we have to consider which of these represent information that is independent of the social form of production. We can divide the changes into those that are direct results of events external to the price system, and those which are internal to it. The discovery of new oil reserves or an increase in the birth rate would directly impinge upon the price of oil or of baby clothes. These represent changes in the needs or production capabilities of society, and any system of economic regulation should have means of responding to them. On the other side, we must count a fall in the price of acrylic feedstocks and a fall in the price of acrylic sweaters among the second- and third-order internally generated changes consequent upon a fall in oil prices. In the same category would go the rise in house prices that follows an expansion of credit, any fluctuation in share prices, or the general fall in prices that marks the onset of a recession. These are all changes generated by the internal dynamics of a market system, and as such irrelevant to the consideration of non-market economies. Hayek is of course right that the planning problem is greatly simplified if there are no changes, but it does not follow from this that all the changes of a market economy are potential problems for a planned one.

Chapter 5
Conclusion

We have argued that Hayek and his followers have grossly overestimated the difficulties of carrying out rational socialist planning. They have coupled this with an exaggerated idea of the effectiveness of the free market as an economic regulator. Their fundamental theoretical errors are:

1. To talk about information in a general and non-quantitative way. This leads them to overestimate how important information about prices is, compared to the other information flows that regulate quantities and qualities of goods.

2.
To talk in a vague way about the intractability of socialist calculation, without attempting to be systematic about what these alleged difficulties are. Once one specifies what calculations actually have to be done, one can see that these general objections are without substance.

The coherence of an economy is basically maintained by regular exchanges of information about quantities in material rather than monetary units. In the USSR these information flows about material units were coordinated through the planning system. Being antagonistic to anything that smacked of Neurath's calculation in kind (Neurath 2004), the Hayekians systematically underestimated the importance of these quantitative measures in economic regulation.

The western economists who had criticised the socialist system as inefficient had anticipated that the inauguration of a market economy would lead to accelerated economic growth in the USSR. Instead it regressed from a super-power to an economic basket case. It became dominated by gangsterism. Its industries collapsed and it experienced untold millions of premature deaths, revealed in the statistics of a shocking drop in life expectancy (Table 5.2).

A discipline less sure of itself than economics might question its starting hypothesis when an experiment went so drastically wrong. Two of today's leading Hayekians have instead attempted to use the Searlean distinction between syntax and semantics to explain this signal failure of economic advice (Boettke and Subrick 2002). They claim that shock therapy in the USSR changed the syntax of the economy but not the semantics: "Just because the political structure collapsed, there is no reason to assume that the social structure did. Social arrangements persisted prior to and after the fall of communism. The reformers and western advisors failed to acknowledge that the newly freed countries were not tabula rasa.
They were instead countries that had residents who held beliefs about the world and the structure of society."

These beliefs and attitudes that persisted from socialism are then blamed for the economic collapse.¹ What Boettke and Subrick are attempting to move towards with their syntax/semantics distinction applied to a society is something very like Marx's distinction between base and superstructure.² Marx was concerned from the outset with the historical process of transition between forms of economy: modes of production. Once the Austrian economists became proponents of social engineering, they started to encroach, albeit in reverse gear, upon the traditional concerns of Marxian economics: transitions between modes of production. But they approached it with a theoretical framework inimical to the object under study. Faced with the manifest failure of their policies,³ they are reduced to metaphors borrowed from linguistics to explain it. They and the whole Austrian school are unwilling to contemplate the possibility that they were fundamentally wrong in their faith in the organising and communications ability of the market.

Table 5.2: Excess deaths following Hayekian policies in Russia

Year   Total Mortality (1000s)   Excess Relative to 1986 (1000s)
1986         1,498.0                      0.0
1987         1,531.6                     33.6
1988         1,569.1                     71.1
1989         1,583.8                     85.8
1990         1,656.0                    158.0
1991         1,690.7                    192.7
1992         1,807.4                    309.4
1993         2,129.3                    631.3
1994         2,301.4                    803.4
1995         2,203.8                    705.8
1996         2,082.2                    584.2
1997         2,015.8                    517.8
1998         1,988.7                    490.7
1999         2,144.3                    646.3
2000         2,225.3                    727.3
2001         2,251.8                    753.8
Total excess deaths: 6,711,200

¹This is reminiscent of the way the 'poisonous weeds from the past' in men's minds were an explanation for economic problems in China during the Cultural Revolution.
²It might be objected that there was a metaphorical character to this distinction in Marx. So there was.
But a century and more of theoretical writings by other Marxists have given a dense social-theoretical content to what were once architectural metaphors. It remains to be seen whether the Austrian school can achieve a similar theoretical development of Boettke's syntax/semantics dichotomy.
³An embarrassed admission of which they give: "Since the fall of the communism, the former Soviet Bloc countries have had an extremely difficult time moving towards a market economy." Boettke and Subrick (2002), section 4.

Appendix A: A simple planning program

This algorithm performs the planning calculations that are presented in Section 3.1.

program plan;
type
  good = (iron, coal, corn, bread, labour);
  consv = array[good] of real;
const
  usage: array[good, 1..3] of real = (
    (0.05, 2.0, 20.0),
    (0.2, 0.1, 3.0),
    (0.1, 0.02, 10.0),
    (1.5, 0.5, 1.0),
    (0, 0, 0));
  inputs: array[good, 1..3] of good = (
    (iron, coal, labour),
    (coal, iron, labour),
    (corn, iron, labour),
    (corn, coal, labour),
    (corn, coal, labour));
  demand: consv = (0, 2e4, 0, 1e3, 0);
var
  used, previous: consv;
  l: integer;
procedure calcstep; (see Section A.1)
begin
  used ← demand;
  previous ← 0;
  writeln(iron, coal, corn, bread, labour);
  write(round(used));
  for l ← 1 to 20 do calcstep;
end.

A.1 calcstep

This performs one step of the plan balancing by adding up the ingredients used to make the previous step of the iteration.

procedure calcstep;
var
  i, k: good;
  j: integer;
  temp: consv;
begin
  temp ← 0;
  for i ← iron to labour do
    for j ← 1 to 3 do
    begin
      k ← inputs[i, j];
      temp[k] ← temp[k] + (used[i] − previous[i]) × usage[i, j];
    end;
  previous ← used;
  used ← used + temp;
  write(round(used));
end;

Bibliography

Arrow, K. and Debreu, G.: 1954, Existence of an equilibrium for a competitive economy, Econometrica 22(3), 265–290.

Baker, G. L. and Gollub, J. P.: 1990, Chaotic Dynamics, Cambridge University Press.

Barnsley, M.: 1988, Fractals Everywhere, Academic Press.

Becker, K. H.
and Dorfler, M.: 1989, Dynamical Systems and Fractals, Cambridge University Press.

Boettke, P. J. and Subrick, J. R.: 2002, From the philosophy of mind to the philosophy of the market, Journal of Economic Methodology 9(1), 53–64.

Chaitin, G.: 1987, Information, Randomness and Incompleteness, World Scientific.

Chaitin, G.: 1999, Information and randomness: a survey of algorithmic information theory, The Unknowable, Springer, Singapore.

Cockshott, P. and Michaelson, G.: 2007, Are there new models of computation? A reply to Wegner and Eberbach, Computer Journal. To appear.

Cockshott, W. P.: 1990, Application of artificial intelligence techniques to economic planning, Future Computing Systems 2, 429–443.

Cockshott, W. P. and Cottrell, A. F.: 1997, Labour time versus alternative value bases: a research note, Cambridge Journal of Economics 21, 545–549.

Cockshott, W. P. and Cottrell, A. F.: 2003, A note on the organic composition of capital and profit rates, Cambridge Journal of Economics 27, 749–754.

Cottrell, A.: 1994, Hayek's early cycle theory re-examined, Cambridge Journal of Economics 18, 197–212.

Cottrell, A. and Cockshott, P.: 1992, Towards a New Socialism, Bertrand Russell Press, Nottingham.

Cottrell, A. and Cockshott, W. P.: 1989, Labour value and socialist economic calculation, Economy and Society 18, 71–99.

Deng, X. and Huang, L.: 2006, On the complexity of market equilibria with maximum social welfare, Information Processing Letters 97(1), 4–11.

Dorfman, R., Samuelson, P. and Solow, R.: 1958, Linear Programming and Economic Analysis, McGraw Hill, New York.

Farjoun, E. and Machover, M.: 1983, Laws of Chaos, a Probabilistic Approach to Political Economy, Verso, London.

Gibbs, W. W.: 1994, Software's chronic crisis, Scientific American 271, 86–95.

Harris, J.: 1996, From das capital to dos capital: a look at recent theories of value, Technical report, Chicago Third Wave Study Group.

Hayek, F. A.: 1935, Prices and Production, Routledge, London.

Hayek, F. A.: 1945, The use of knowledge in society, American Economic Review 35(4), 519–530.

Hayek, F. A.: 1955, The Counter-Revolution of Science, The Free Press, New York.

Keynes, J. M.: 1936, The General Theory of Employment, Interest and Money, Macmillan, London.

Lange, O.: 1938, On the Economic Theory of Socialism, University of Minnesota Press.

Lawlor, M. S. and Horn, B. L.: 1992, Notes on the Sraffa–Hayek exchange, Review of Political Economy 4.

Michaelson, G., Cockshott, W. P. and Cottrell, A. F.: 1995, Testing Marx: some new results from UK data, Capital and Class pp. 103–129.

Neurath, O.: 2004, Economic plan and calculation in kind, Otto Neurath: Economic Writings 1904–1945.

Nove, A.: 1983, The Economics of Feasible Socialism, George Allen and Unwin, London.

Ochoa, E. M.: 1989, Values, prices, and wage–profit curves in the US economy, Cambridge Journal of Economics 13, 413–429.

Petrovic, P.: 1987, The deviation of production prices from labour values: some methodology and empirical evidence, Cambridge Journal of Economics 11, 197–210.

Shaikh, A. M.: 1998, The empirical strength of the labour theory of value, in R. Bellofiore (ed.), Marxian Economics: A Reappraisal, Vol. 2, Macmillan, pp. 225–251.

Software, N. R.: 1988, Numerical Recipes in C, Cambridge University Press.

Velupillai, K.: 2003, Essays on Computable Economics, Methodology and the Philosophy of Science, Technical report, Universita' Degli Studi di Trento, Dipartimento Di Economia.

Wright, I.: 2003, Simulating the law of value, submitted for publication; preprint at http://www.unifr.ch/econophysics/articoli/fichier/WrightLawOfValue.pdf

Wright, I.: 2005, The social architecture of capitalism, Physica A: Statistical Mechanics and its Applications 346(3–4), 589–620.
}}