The Calculus of Consent: Logical Foundations of Constitutional Democracy
By James M. Buchanan and Gordon Tullock
This is a book about the political organization of a society of free men. Its methodology, its conceptual apparatus, and its analytics are derived, essentially, from the discipline that has as its subject the economic organization of such a society. Students and scholars in politics will share with us an interest in the central problems under consideration. Their colleagues in economics will share with us an interest in the construction of the argument. This work lies squarely along that mythical, and mystical, borderline between these two prodigal offsprings of political economy. [From the Preface]
First Pub. Date
1962
Publisher
Indianapolis, IN: Liberty Fund, Inc.
Pub. Date
1999
Comments
Foreword by Robert D. Tollison.
Copyright
The text of this edition is copyright: Foreword, coauthor note, and indexes ©:1999 by Liberty Fund, Inc. Content (including Preface) from The Calculus of Consent, by James M. Buchanan and Gordon Tullock, ©: 1962 by the University of Michigan. Published by the University of Michigan Press. Used with permission. Unauthorized reproduction of this publication is prohibited by Federal Law. Except as permitted under the Copyright Act of 1976, no part of this publication may be reproduced or distributed in any form or by any means, or stored in a data base or retrieval system, without prior permission of the publisher. For more information, contact the University of Michigan Press: http://www.press.umich.edu. Picture of James M. Buchanan and Gordon Tullock: File photo detail, courtesy Liberty Fund, Inc.
- Foreword
- Ch. 1, Introduction
- Ch. 2, The Individualistic Postulate
- Ch. 3, Politics and the Economic Nexus
- Ch. 4, Individual Rationality in Social Choice
- Ch. 5, The Organization of Human Activity
- Ch. 6, A Generalized Economic Theory of Constitutions
- Ch. 7, The Rule of Unanimity
- Ch. 8, The Costs of Decision-Making
- Ch. 9, The Structure of the Models
- Ch. 10, Simple Majority Voting
- Ch. 11, Simple Majority Voting and the Theory of Games
- Ch. 12, Majority Rule, Game Theory, and Pareto Optimality
- Ch. 13, Pareto Optimality, External Costs, and Income Redistribution
- Ch. 14, The Range and Extent of Collective Action
- Ch. 15, Qualified Majority Voting Rules, Representation, and the Interdependence of Constitutional Variables
- Ch. 16, The Bicameral Legislature
- Ch. 17, The Orthodox Model of Majority Rule
- Ch. 18, Democratic Ethics and Economic Efficiency
- Ch. 19, Pressure Groups, Special Interests, and the Constitution
- Ch. 20, The Politics of the Good Society
- Appendix 1, Marginal Notes on Reading Political Philosophy
- Appendix 2, Theoretical Forerunners
2. Theoretical Forerunners
by Gordon Tullock
Introduction
Although the theory presented in this book (as Appendix 1 indicates) had some foreshadowings in political science proper, its true intellectual roots lie in other areas. Economics and probability theory are its major sources, but it also owes a good deal to a series of investigations in a poorly defined field which I shall call the “strict theory of politics.” It is with this latter field that the bulk of this Appendix will concern itself, largely because any more general discussion of the history of ideas in economics and in probability is beyond both my competence and my interests. Nevertheless, some remarks about the development of probability theory and economics will be of assistance in setting the theory in its proper place among the disciplines.
The theory of permutations and combinations, which eventually developed into statistics, game theory, and modern decision theory, started out with the analysis of games of chance. A game of chance in its pure form involves a device of some sort which produces various results with varying probabilities. The initial work in what we now call statistics was an exploration of the relative frequency with which various results may be expected to appear. It might be regarded as an attempt to determine the proper way to place bets. Among gambling games, however, there are a number in which the gains or losses of some given player depend not only on the performance of a device but also on the actions of another player. In such games, although simple probability calculations are normally of some assistance to a shrewd player, they cannot give a complete set of instructions on proper play.
In these “games of strategy,” to use a modern term, if one party chooses a strategy, then this strategy will form part of the data which the other party should consider in choosing his own strategy. This is obviously true if each party announces his strategy, but it is also true if each party tries to conceal his strategy. In the latter case each will try to guess the other’s strategy, while choosing a strategy for himself which will not be anticipated by his opponent. In each case an individual’s choice of strategy depends on his opponent’s choice or on his estimate of his opponent’s choice. Examining the games with which they were familiar, the mathematicians discovered that any effort to specify the “correct” rules for a player wishing to win as much as possible led to an infinite regress. If the proper strategy for player A was strategy 1, then player B should take that fact into account and choose strategy 2, but if B chose strategy 2, then 1 was not the proper strategy for A, who should choose 3, etc. These early investigators, therefore, concluded that this type of problem was insoluble and confined their investigations to pure games of chance.
Since the investigations of these mathematicians developed eventually into the wonders of modern statistics, we can hardly criticize their decision, but other investigators had unknowingly found the clue to the solution of most strategic games. The presence of the infinite regress in games (in the old sense of the word, i.e., amusement games) is a contrived result. It comes from the fact that the games are human inventions and that the inventors aim at making games fair, interesting, and unpredictable. A well-designed game does lead to the infinite regress which disturbed the mathematicians, but there is no reason to believe that the real world has been carefully designed to be fair.*83 In the real world the process of adjustment to the strategies of the other players may well lead to a perfectly definite result. Returning to the example in the last paragraph, it may well be that after player B has chosen strategy 4 and A has responded by choosing 5, neither can better himself by shifting to another strategy. Strategy 4 may be the best response to 5, and 5 the best reply to 4. In this event the parties have reached a situation which is called a “saddle point” in modern game theory.
A set of cases where the individual “players’ ” attempts to adjust to the strategies chosen by other “players” lead to a determinate result was early discovered in the economic field, thus establishing the science of economics. The early economists discovered that if a large enough number of people were engaged in buying and selling something and each attempted to adjust his strategy to the strategy (guessed or observed) of the others, then this would lead to a perfectly definite result.*84 This result (the situation which would arise when each player had successfully adjusted his strategy to that of all the others, and no player still wished to make changes) was labeled by economists “equilibrium,” a term which is really operationally identical to the game theorists’ “saddle point.” If we were inventing a terminology de novo, I would opt for “saddle point” rather than “equilibrium” as the name for this condition. “Equilibrium” is widely used in the biological and physical sciences, but with a rather different meaning. This leads to a good deal of unnecessary confusion. It was not normally assumed that equilibrium would ever be achieved—there were always too many endogenous changes for that—but a continuous tendency to approach a continually changing equilibrium point was demonstrated.
This made human behavior in certain areas reasonably predictable. It further turned out to be possible to investigate what type of equilibrium would result from various “rules of the game,” and from this examination to decide which sets of such “rules” were most likely to lead to desired results. From this developed political economy, the science of improving social institutions. Economics progressed rapidly, and today it is by far the most highly developed of the social sciences. At the same time, the mathematicians were developing simple permutations and combinations into the wonder of modern statistics. Neither group appeared to recognize the existence of the relationship between the two fields that I have sketched above.
Eventually Von Neumann discovered a solution for two-person games of strategy. Specifically, he discovered two special cases in which the efforts of two players to adjust their strategies to each other would not lead to an infinite regress. The first of these two special cases—strict dominance, in which one of the players has among his possible strategies one which is superior to any other, regardless of what the other player does—is of no great importance for our present purposes. Clearly, this leads quickly and easily to a determinate result.
The second special case—the saddle point—is much more interesting. Assuming that there is no strict dominance, a game has a saddle point if the mutual efforts of the two players to adjust their strategy to each other would lead to a determinate result. This, of course, assumes that each player knows his opponent’s strategy, and Von Neumann, therefore, introduced a special and very interesting version of the economists’ “perfect knowledge” assumption. Von Neumann advises each player to act on the assumption that his opponent will make no mistakes; specifically, if player A is able to decide that strategy 2 is the proper one for him, he should realize that player B will also figure this out and choose his strategy on the assumption that A’s strategy is 2. Thus, strategy 2 can only be a good strategy for A if it is to his advantage, even assuming that B knows that A is using 2. It can be seen that all of this is simply a way of assuming perfect knowledge without using the magic words. In fact, the assumptions are much stronger than those used in economics since knowledge of another’s intentions is normally not included in the area where information is “perfect” in the economic model.
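The saddle-point condition described above can be sketched in code. The following is a minimal illustration of my own, not drawn from the text: in a zero-sum payoff matrix, a saddle point is an entry that is simultaneously the minimum of its row and the maximum of its column, so that neither player gains by deviating once both have found it. The payoff matrix is invented for the example.

```python
def find_saddle_points(payoffs):
    """Return (row, col) pairs where neither player gains by deviating.

    payoffs[r][c] is the amount the column player pays the row player
    when row chooses strategy r and column chooses strategy c.
    """
    points = []
    for r, row in enumerate(payoffs):
        for c, value in enumerate(row):
            row_min = min(row)                    # row player's worst case in row r
            col_max = max(p[c] for p in payoffs)  # column player's worst case in col c
            if value == row_min and value == col_max:
                points.append((r, c))
    return points

# A hypothetical game: the entry 5 at (row 1, column 1) is both the
# minimum of its row and the maximum of its column.
game = [
    [4, 3, 8],
    [6, 5, 7],
    [2, 1, 9],
]
print(find_saddle_points(game))  # [(1, 1)]
```

Mutual adjustment of strategies stops at such a point: strategy 1 is the row player's best reply to column's strategy 1, and conversely, which is the determinate result the early mathematicians thought impossible.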
The reader will have noted that my explanation of game theory differs somewhat from that normally given. This is principally the result of my desire to emphasize the similarities between it and economics. In spite of the different approach, it seems likely that anyone familiar with game theory will realize that my description is operationally identical to the conventional one. One difference between game theory and economics, however, deserves emphasis. Game theory studies the behavior of individuals in a “game” with given rules. Economics does the same, but the end or purpose of the investigation in economics is to choose between alternative sets of rules.*85 We study what the outcome of the “game” will be, but with the objective of making improvements in the rules.
Game theory normally accepts the “rules” as given. Under its assumptions the game with a saddle point does come to a perfectly determinate conclusion, and there is no infinite regress. This result, of course, comes from the structure of the game, and there is no implication that all games have a saddle point. If there is a saddle point, mutual adjustment of strategies will lead to a determinate equilibrium. Von Neumann, however, went further and demonstrated that a game which had no saddle point could be converted into a larger game in which the strategies of each party were decisions as to the type of randomized procedure which should be adopted to choose between the various strategies in the original game. This larger game has a definite saddle point in all cases, although it may be most difficult to calculate. These mixed strategies are most interesting ideas, although currently they can be computed for few real situations.
The application of this apparatus to the real world, begun by Von Neumann and Morgenstern and since greatly expanded by numerous others, has been one of the more important intellectual roots of our present work. With economics, it provided the bulk of our intellectual tools. Fortunately we were able to avoid the problems raised by mixed strategies, and, equally fortunately, the most recent developments in economics were almost perfectly suited to our needs. In particular, the recent developments in the theory of choice have been basic to our work. Specifically, we are indebted to modern game theory and modern economics for a theoretical apparatus and for three major guidelines for our investigation. (1) Modern utility theory, which has largely been developed by economists but which has also benefited greatly from the work of the game theorists, led us to concentrate on the calculus of the individual decision-maker. (2) From game theory in particular, but also from our economic background, we were led into a search for “solutions” to well-defined “political games.” (3) Political economy and the search for criteria in modern statistics led us into a search for the “optimal” set of “political rules of the game,” as conceived by the utility-maximizing individual.
The Search for a Majority Rule
In addition to these major fields of study, the much less well-known and undeveloped field which I have called “the strict theory of politics” has also influenced our work. In view of the rather limited number of people who are familiar with this field, it is necessary to discuss it in some detail. The strict theory of politics can be divided into three areas. The first of these, which has been named the “theory of committees and elections” by Duncan Black, will be the subject of this section. This will be followed by a section on the “theory of parties and candidates” and a final brief section on the “theory of constitutions.” My knowledge in the first area, like the title I have given it, comes almost entirely from the work of Duncan Black.*86 I shall also follow his organizational example in separating the history of the subject prior to the mid-twentieth century from the modern period exemplified by Black and Arrow.
Black’s book contains, as Part II, an excellent discussion of the early history of the subject. I will, therefore, merely indicate the general outline of the work done before Black revived the subject and refer the reader to Black’s most excellent account for further details. The story begins with three French mathematicians and physicists writing in the period of the French Revolution. Borda opened the study and made important contributions. He was followed by Condorcet, who produced a study of the utmost importance which, unfortunately, was so badly presented that no one prior to Black appears to have understood it. Laplace added a few details to the structure as it stood. It should be noted that all of these men were much interested in the development of probability theory, and Condorcet presented his theory erroneously (this is the error which has led to his being so long misunderstood) as a branch of the mathematics of probability.
No one seems to have paid much attention to this work, and the only later development which Black was able to locate occurred in 1907 when E. J. Nanson produced a memoir on elections which clearly showed a familiarity with the work of Borda and Condorcet. His addition to the received theory was slight; the same can be said of the contributions of George H. Hallet and Francis Galton.
In the long interval between the development of the ideas of the three Frenchmen and their reappearance in the work of Nanson, another man had turned his mind to the problem. The Reverend C. L. Dodgson (Lewis Carroll), in addition to his work in formal logic and the Alice series, produced three pamphlets on voting methods. This subject is treated by Black in a particularly masterly manner, and I must refer the reader interested in the details to his account.*87 Only two matters should be referred to here. In the first place, Black has succeeded in proving that Carroll’s work was entirely original; he had not taken his ideas from Borda or Condorcet. Secondly, it is clear that we have only fragments of Lewis Carroll’s work in the field. He was writing a book which was never printed, but the pamphlets themselves show unmistakable evidence of being only part of a much larger body of knowledge.
But so far I have talked about who and when, and totally ignored the what. What, then, were these people investigating? From the fact that their work attracted so little notice and that it tended to be forgotten and then reinvented,*88 one might assume that it was not very important. In fact, I think that the tendency for the subject to be swept under a variety of rugs can be attributed to the importance of the challenge which it presented to traditional democratic doctrine. These investigators had found a problem which lay at the heart of traditional theory and which resisted all attempts to solve it. In a period in which democracy was almost a religion it is no wonder that most investigators turned aside.
Traditional democratic theory depends on majority voting. There are all sorts of problems about who shall vote (quorums, representation, etc.), but it is generally agreed that a majority of some group of people will eventually decide the issue. The problem which puzzled Condorcet, Carroll, Laplace, and Black was that involved in finding a system of voting which would lead to a majority which could reasonably be regarded as the genuine will of a majority of the group. To people who have not looked into the problem, this seems a foolish inquiry; it seems obvious that a majority is a majority and that is that. In reality the problem is a most difficult one.
In investigating the problem, all of the workers in this field used basically the same method. In the first place, they examined the problem of deciding an issue or group of issues in a single election. The investigation of logrolling, which interconnects different issues and different votes, was completely ignored by them. Presumably, they felt that this was more complicated than a single issue and hoped to develop a theory of logrolling after they understood the “simpler” problem. As we have shown in this book, logrolling eliminates the basic problem, so this whole line of investigation can now be regarded as simply an examination of the special case where there is no logrolling.
The second similarity in the methods of these investigators is that they all used the same mathematical device. They assume a number of voters confronted with a number of alternatives (candidates or bills), and they assume that each voter knows which of these alternatives he prefers. The more recent workers have used a matrix form of presentation in which each voter is represented by a vertical column and his order of preference by the place a given alternative occupies on that column.
v1   v2   v3
------------
A    C    B
B    A    C
C    B    A
Thus, voter v1 prefers A to B and B to C. From matrices of this sort it is possible to work out the results of various voting procedures, and research has largely consisted of assuming various preference orders and then testing out specific voting procedures on the assumed matrix. The problem which has puzzled the workers in this field has been the difficulty of discovering a procedure which does not lead to paradoxes.
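The "working out" of results from such a matrix can be sketched in a few lines of code (mine, not the early investigators'): each voter is represented by his ranked column, and any two alternatives are compared by counting which one more voters rank higher.

```python
from itertools import combinations

def pairwise_winner(profile, x, y):
    """Return whichever of x, y a majority of voters rank higher."""
    x_votes = sum(1 for ranking in profile if ranking.index(x) < ranking.index(y))
    return x if x_votes > len(profile) - x_votes else y

# The three-voter matrix from the text, one list per column (v1, v2, v3),
# read top preference first.
profile = [
    ["A", "B", "C"],   # v1
    ["C", "A", "B"],   # v2
    ["B", "C", "A"],   # v3
]

for x, y in combinations("ABC", 2):
    print(f"{x} vs {y}: majority prefers {pairwise_winner(profile, x, y)}")
```

Running specific voting procedures against assumed matrices of this sort is, as the text says, essentially what the research in this field consisted of.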
If a group of people are confronted with the problem of making a choice between a number of different ways of dealing with a given problem, it may be that a majority of them have one of the possible ways as their most preferred alternative. If this is so, no problem arises;*89 there clearly is a majority. More commonly, however, none of the possible courses of action is the first preference of a majority of the voters, a fact which is reflected in the popular view that democracy requires a willingness to compromise. If there are only two alternatives, of course, one will have a majority, and if there are only three, it is not unlikely that one will be preferred over all the others by a majority of the voters; but as the number of possible alternatives increases, the possibility that one will be preferred by a majority over all the others rapidly declines.
This being so, a number of procedures have been worked out for dealing with the problem of reaching a decision in cases where there is no alternative that is the first preference of a majority. These procedures may be divided into two general classes: those that reach a decision by some sort of manipulation of the votes but without a true majority; and those which restrict the choices confronting the voter in such a way that he is finally confronted with a choice between two, which naturally results in one or the other getting a majority. Two examples of the first type may be given. One is the system used to elect members of Parliament in England, where the candidate who receives the most votes is declared elected regardless of whether he has a true majority (this system is commonly called plurality voting). The other is a point system: each voter marks his first, second, third, etc., preferences among the candidates; his first preference is then given, say, 5 voting points, his second 4, etc.; the points are added, and the candidate who has the most is declared elected.
The disadvantage of these systems is that they may elect people whom the majority of the voters dislike. To take an extreme example, suppose five men are running for some office. Candidate A is favored by 21 per cent of the voters; B, C, and D are each favored by 20 per cent of the voters; and E is favored by 19 per cent. A would be declared elected under the plurality system, although it might well be the case that 79 per cent of the voters would prefer B to A.*90 Clearly, this is an odd result, and it is extremely hard to argue that this is the rule of the majority. The second method mentioned above is also subject to this difficulty. It, too, is likely to elect a man who is regarded as worse than some other candidate by a majority of the voters. In fact, all of the systems which fall in this general classification are subject to this criticism and hence cannot really be called majority rule.
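The plurality example can be made concrete with a small simulation. This is my own construction: only the first-preference percentages (21, 20, 20, 20, 19) and the fact that 79 per cent rank B above A come from the text; the full ballot orderings are invented to satisfy those two conditions.

```python
from collections import Counter

# 100 hypothetical voters; orderings below A's supporters are invented,
# but every non-A voter ranks B above A, as in the text's example.
ballots = (
    [["A", "C", "B", "D", "E"]] * 21 +   # A's 21 supporters
    [["B", "A", "C", "D", "E"]] * 20 +
    [["C", "B", "D", "E", "A"]] * 20 +
    [["D", "B", "C", "E", "A"]] * 20 +
    [["E", "B", "D", "C", "A"]] * 19
)

plurality = Counter(b[0] for b in ballots).most_common(1)[0][0]
prefer_b = sum(1 for b in ballots if b.index("B") < b.index("A"))

print(f"Plurality winner: {plurality}")         # A, with only 21 first-place votes
print(f"Voters preferring B to A: {prefer_b}")  # 79 of 100
```

A is elected with 21 per cent of the first-place votes even though 79 of the 100 voters would prefer B, which is the oddity the text points to.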
Among those systems which rely on restricting choice in order to force a majority vote, we can again examine two examples. The first will be a system not infrequently used in private-club elections in which all candidates are listed, a vote is taken, and the lowest is discarded. The process is repeated until only two remain, and one of these will then gain a majority over the other.*91 As in our previous examples, the result may be most unsatisfactory. It is quite possible for a candidate to be eliminated in the early stages who is preferred by a majority over the eventual victor. Again, is this majority rule?
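The defect of the elimination procedure can also be exhibited concretely. The nine-voter profile below is my own invention, chosen so that B beats both A and C in head-to-head votes yet has the fewest first-place votes and is dropped in the first round.

```python
from collections import Counter

def eliminate_until_two(ballots):
    """Club-election procedure sketched in the text: repeatedly drop the
    candidate with the fewest first-place votes; when two remain, return
    the pairwise majority winner between them."""
    remaining = set(ballots[0])
    while len(remaining) > 2:
        tallies = Counter(next(c for c in b if c in remaining) for b in ballots)
        remaining.discard(min(tallies, key=tallies.get))
    x, y = sorted(remaining)
    x_votes = sum(1 for b in ballots if b.index(x) < b.index(y))
    return x if x_votes > len(ballots) - x_votes else y

# Hypothetical profile: first-place tallies are A:4, C:3, B:2, so B is
# eliminated first -- yet B beats A (5-4) and beats C (6-3) pairwise.
ballots = ([["A", "B", "C"]] * 4 +
           [["C", "B", "A"]] * 3 +
           [["B", "C", "A"]] * 2)

print(eliminate_until_two(ballots))  # C
```

C is finally elected, although a majority of the voters prefer the eliminated candidate B to C, which is exactly the objection raised in the text.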
All but one of the methods of forcing a majority by restricting choices are subject to this objection. The unique method which escapes this problem and which is used in almost all parliamentary bodies is to require that all votes be taken on a two-choice basis. Since only two choices are presented to the voters, one must get a majority of the votes cast. The rules of order are an elaborate and superficially highly logical system for forcing any possible collection of proposals into a series of specific motions which can be voted on in simple yes-no terms. In theory, all possible alternatives can be voted on in a series of pairs, each against each of the others, and the one which beats all of the others can reasonably be considered to have majority support. Unfortunately this process, which is the theoretical basis of all modern parliamentary procedure, leads directly into the worst of the voting paradoxes, the cyclical majority.
Suppose we have 101 voters who propose to choose among three measures, A, B, and C. Suppose further that the preferences of the voters among these measures are as follows:
50    1   50
------------
A     C    B
B     A    C
C     B    A
Now we put the matter to a vote, taking each issue against each of the others. In the choice between A and B, A wins; in the choice between A and C, C wins; but, unfortunately, in the choice between B and C, B wins. There is no choice which can be considered the will of the majority. Nor is this a special and unlikely arrangement of preferences. No general function has yet been calculated to show what portion of possible preference patterns would lead to this result, but it seems likely that where there is any sizable number of possible issues and voters this is very common—quite probably this is the normal case.*92
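The cycle in the 101-voter example can be verified mechanically. The following sketch is mine; it encodes the three voting blocs as weighted rankings and confirms that every alternative loses to some other, so no alternative can claim majority support against all comers.

```python
def beats(profile, x, y):
    """True if a weighted majority of voters rank x above y."""
    x_weight = sum(w for w, r in profile if r.index(x) < r.index(y))
    total = sum(w for w, _ in profile)
    return x_weight > total - x_weight

# The 101-voter profile from the text: (bloc size, preference order).
profile = [
    (50, ["A", "B", "C"]),
    (1,  ["C", "A", "B"]),
    (50, ["B", "C", "A"]),
]

print(beats(profile, "A", "B"))   # True: A wins 51-50
print(beats(profile, "C", "A"))   # True: C wins 51-50
print(beats(profile, "B", "C"))   # True: B wins 100-1 -- a cycle

# No alternative beats both of the others.
condorcet = [x for x in "ABC"
             if all(beats(profile, x, y) for y in "ABC" if y != x)]
print(condorcet)                  # []
```

The empty result in the last line is the cyclical majority: pairwise voting under the rules of order simply goes around the circle A-beats-B, B-beats-C, C-beats-A.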
In actual parliamentary practice we never find examples of this sort of thing occurring. The most likely explanation for this would appear to be quite simple: most decision-making bodies which follow Robert’s Rules in taking decisions make a number of decisions, and consequently logrolling is possible. If logrolling is the norm (and it will be no secret to the reader that we think it is), then the problem of the cyclical majority vanishes. There are two other possible explanations for the absence of evidence of cyclical majorities in functioning parliamentary bodies, but they are both complicated and unlikely so I shall not attempt to discuss them here. However, one thing should be said: asserting that either of them was the correct explanation would, by logical implication, involve a very serious attack on the whole idea of democracy.
Thus the problem stood when Black took it up. Although he made some improvements in the analysis so far described, and produced the first comprehensive presentation of the matter, his principal contribution was his discovery of the “single-peaked preference curve.”*93 It may be that the possible choices can be arranged on a single line in such a way that any individual will always prefer a choice which is closer to his own to any that is farther away. It seems likely that a good many of the issues in active political life are of that sort, particularly those that are involved in the familiar “left-right” continuum. Black demonstrated that in this situation no paradox develops. Voting on the issues in pairs, the normal parliamentary manner, simply leads to the alternative preferred by the median voter. Again, it is not obvious that this is “the will of the majority,” but at least it is nonparadoxical.
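Black's single-peaked result can be illustrated with a small sketch of my own (the positions and voters are invented): options and voters' ideal points lie on one line, each voter prefers whichever option is closer to his ideal, and the median voter's position beats every alternative in a pairwise vote.

```python
def closer_wins(ideals, x, y):
    """Pairwise vote between positions x and y, each voter choosing the
    option nearer his own ideal point on the line."""
    x_votes = sum(1 for i in ideals if abs(i - x) < abs(i - y))
    return x if x_votes > len(ideals) - x_votes else y

ideals = [1, 3, 4, 7, 9]                   # five voters on a left-right line
median = sorted(ideals)[len(ideals) // 2]  # the median voter's ideal: 4

# The median position defeats every other option pairwise, so voting on
# the issues in pairs converges on it with no cycle.
for option in ideals:
    if option != median:
        assert closer_wins(ideals, median, option) == median
print(f"The median voter's position ({median}) beats every alternative.")
```

This is why, when preferences are single-peaked, the normal parliamentary procedure of pairwise votes yields a determinate, nonparadoxical outcome: the alternative preferred by the median voter.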
Black thus demonstrated that many issues are decided on in a manner which can legitimately be called “majority” rule, but there still remained those issues which were not “single-peaked” and which, therefore, led to the paradoxes which we have discussed. It was at this point that Kenneth Arrow published the only work in this field which has had any significant effect on the scholarly community.*94 In spite of the difficulties of reading it arising from a quasi-mathematical style, Arrow’s book is widely known. Since I shall be somewhat critical of the book, I should start by saying that this relative fame is, in my opinion, quite justified. In detail, I think Arrow’s position is open to criticism, but he was the first to indicate, however vaguely, the real significance of the discoveries that we have been discussing.
All of the previous writers in this field have concerned themselves largely with attempts to develop procedures which would avoid the problems which we have been discussing. Arrow had the courage to say that they could not be avoided. Although his presentation was difficult and elliptical, the disproof of the “will of the majority” theory of democracy was implicit in his work. The impact of his book can readily be understood, and the rather forbidding format of his work, although it scared off potential readers, probably also gave it an appearance of rigor and logic which was very convincing. Altogether, the book was the sort which should have a wide impact, and it has had considerable effect.
Having said this, I wish now to turn to some criticisms of the book, at least as it now is interpreted. It should be noted that these criticisms do not go to the heart of Arrow’s achievement. They are basically disagreements with certain interpretations of his basic argument rather than with the argument itself. Arrow sets up a number of criteria which he feels any decision-making system should fulfill, and then presents a demonstration that voting does not meet them.
To start our discussion with an examination of some of his criteria, Arrow has been severely criticized for requiring “rationality” in the voting outcomes.*95 His critics point out that any decision-making process is a device or instrumentality. It has no mind, and therefore we should not expect rationality. As a methodological individualist, I agree with Arrow’s critics, but, in the context of the time in which his book was published, the rationality or irrationality of the process was of some importance. It was published in 1951 at the end of a century in which democratic governments had steadily increased the proportion of decisions which were made by governmental means. At that time a large part of the intellectual community felt that the solution for many problems was that of turning operational control over to a democratic government.
If, however, governments are to serve this function of solving practically all problems and operating a very large part of the total economic apparatus, clearly they must function in a rational way. It is hard to argue that a given function should be transferred to the government if governmental decision processes are closely analogous to flipping coins. Thus, a person who believes in widespread government activities must at least be disappointed by irrationality in governmental decision-making processes. From the standpoint of the authors of this book, some irrational behavior on the part of the government is inevitable under any feasible decision-making rule. This fact should be taken into account in deciding whether or not to entrust a given activity to the government. Due to the predominance of processes in which votes are traded, where the particular type of irrationality described by Arrow is impossible,*96 the basic irrationality of governmental decision-making becomes less important, but the impact of Arrow’s work on people whose views of the proper role of the government were more idealistic is readily understandable.
Arrow also says (p. 59): “Similarly, the market mechanism does not create a rational social choice.” As Buchanan has shown,*97 this involves a misunderstanding of the nature of the market process. It does not produce a “social choice” of any sort, as such. Rationality or irrationality is here completely irrelevant. This is of considerable importance for our present work since democratic voting (in the view of the authors of this book) also does not produce a “social choice,” as such. Hence, here also “rationality” is not to be considered an absolute requirement.
The second criterion postulated by Arrow is independence of irrelevant alternatives. In England it is frequently the case that the Liberal party has no chance of electing an M.P. from a given constituency; nevertheless, the decision by the Liberal party on whether or not to run a candidate may be decisive as between a Conservative or a Labour victory. Thus, the outcome is dependent upon the presence or absence of an “irrelevant”*98 candidate. In fact, this problem is simply the one we have discussed earlier: that a voting process may select a candidate who is considered less attractive than some other by a majority of the voters. Arrow chose to criticize the logical coherence of the result in keeping with his general approach. From our standpoint, the problem raised by these voting procedures is that they lead to results which are less desired by the majority than some other results.
Now it happens to be true that all voting procedures except the process prescribed by the rules of order, that is, taking all the feasible alternatives against each other in pairs, are subject to this problem.*99 This being so, the criterion rules out all but one method of voting. Since the one remaining method is subject to the problem of the cyclical majority, it is clear that no method is available which will work without flaws. Nevertheless, if we simply try to find the best method, not the perfect one, it seems likely that our most promising field lies among the systems which are not independent of “irrelevant” alternatives.
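The difficulty with the pairwise procedure can be made concrete. The sketch below uses a hypothetical three-voter, three-alternative profile (invented for illustration, not taken from the text) and runs every pairwise majority vote; it exhibits the classic cycle in which each alternative is beaten by some other.

```python
from itertools import combinations

# Hypothetical profile: three voters, each ranking alternatives X, Y, Z
# from most to least preferred.  This is the pattern that produces a
# cyclical majority.
voters = [
    ["X", "Y", "Z"],
    ["Y", "Z", "X"],
    ["Z", "X", "Y"],
]

def majority_winner(a, b, voters):
    """Return whichever of a, b a majority prefers in a pairwise vote."""
    a_votes = sum(1 for ranking in voters if ranking.index(a) < ranking.index(b))
    return a if a_votes > len(voters) / 2 else b

# Taking all the alternatives against each other in pairs:
for a, b in combinations(["X", "Y", "Z"], 2):
    print(f"{a} vs {b}: majority prefers {majority_winner(a, b, voters)}")
```

Run on this profile, X defeats Y, Y defeats Z, and yet Z defeats X, so the pairwise procedure yields no final result.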
The last of Arrow’s criteria which I wish to discuss is that the outcome should “not be imposed.” Arrow obviously included this criterion in order to rule out any method which would decide policy without regard for individual preferences. Unfortunately, the wording he chose rules out all possible voting rules except unanimity if there is logrolling. I do not think this was deliberate on his part, but in any event it is true. If decisions are made by some voting rule of less than unanimity and if they result from logrolling, then “there will be some pair of alternatives, X and Y, such that the community can never express a preference for Y over X no matter what the tastes of all individuals are.” By Arrow’s definition, therefore, the result is imposed.
An example will make the matter clear. Suppose we return to the road model, but this time we assume that the 100 farmers live in northern Michigan. We shall assume that road-repair work is impossible in the winter, but, on the other hand, people are too busy in the summer-crop season to engage in “politicking.” The normal procedure, therefore, is to vote on all road repairs in the winter but have the actual work done in the following summer. By early spring all the road-repair bills have been enacted, but none has yet been implemented. If the bargaining in the winter has proceeded to full equilibrium, then every individual farmer faces the prospect of spending more of his income in purchasing road repairs than he would freely choose. Suppose, at this point, it was proposed that 1 per cent less repairing be done on each road during the summer. On our assumptions this alternative would be unanimously approved if presented, but such an alternative could never be selected under simple majority rule.*100
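The logic of this example can be checked with a toy version of the road model. The numbers below are invented for illustration (three farmers rather than one hundred; each repair benefits its own farmer by 10 and costs 12, financed by equal taxes), but they reproduce the structure of the argument: the uniform cut is a unanimous improvement on the logrolling equilibrium, yet any two-farmer majority has a still better alternative and so never adopts it.

```python
# Toy numbers, not from the book: repairing farmer i's road gives
# farmer i a benefit of 10 and costs 12, taxed equally on all three.
BENEFIT, COST, N = 10.0, 12.0, 3

def payoff(repair_levels, farmer):
    """Net payoff to `farmer` when road i is repaired at level
    repair_levels[i] (1.0 = full repair), costs shared equally."""
    benefit = BENEFIT * repair_levels[farmer]
    tax = sum(COST * level for level in repair_levels) / N
    return benefit - tax

full = [1.0, 1.0, 1.0]           # the full logrolling equilibrium
cut = [0.99, 0.99, 0.99]         # the proposed uniform 1 per cent reduction

# The uniform cut would be unanimously approved if presented ...
assert all(payoff(cut, i) > payoff(full, i) for i in range(N))

# ... yet farmers 0 and 1, a simple majority, do better still by
# cutting only farmer 2's road, so a majority never chooses the
# uniform reduction.
majority_plan = [1.0, 1.0, 0.0]
assert all(payoff(majority_plan, i) > payoff(cut, i) for i in (0, 1))
```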
Thus, reaching decisions by a series of less-than-unanimous votes interconnected by logrolling violates the nonimposition criterion. Since we have pointed out at great length that the outcome will be nonoptimal, this does not disturb us greatly. It should be recognized that an imposed decision, in Arrow’s terminology, may be the best available outcome. To sum up, all means of reaching decisions by voting will, in at least some cases, reach rather unsatisfactory results. This fact should be taken into account in deciding whether some given activity should be carried on under conditions requiring decisions by voting, but it is not an insuperable obstacle to democratic government.
Turning now to Arrow’s proof of the general (im)possibility theorem, it should be noted that it is general possibility which is involved. Arrow is interested in the question of whether some given method of voting will, in every conceivable case, produce a satisfactory result. He proves that there is no voting rule which will meet this test in choosing between three or more alternatives. He does not, however, disprove the existence of a voting rule which would function unexceptionably for 99,999,999,999,999,999,999,999,999,999 cases out of each 100,000,000,000,000,000,000,000,000,000. I suspect that complex combinations of the sort invented by Nanson*101 can be built up to reduce the anomalies to any desired proportion. As in all other cases of successive approximations, the onerousness of the procedure would increase as a power of the accuracy.
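Nanson’s basic rule, on which such combinations would be built, can be sketched briefly: compute Borda scores, eliminate every candidate whose score is at or below the average, and repeat among the survivors. The sketch below is a minimal implementation with an invented nine-voter profile; whether iterating such devices reduces the anomalies to any desired proportion is, as stated above, a conjecture.

```python
def borda(candidates, rankings):
    """Borda scores restricted to the surviving candidates."""
    scores = {c: 0 for c in candidates}
    for ranking in rankings:
        rest = [c for c in ranking if c in candidates]
        for points, c in enumerate(reversed(rest)):  # last place scores 0
            scores[c] += points
    return scores

def nanson(candidates, rankings):
    """Nanson's method: repeatedly drop every candidate whose Borda
    score is at or below the average, until one candidate remains."""
    remaining = set(candidates)
    while len(remaining) > 1:
        scores = borda(remaining, rankings)
        average = sum(scores.values()) / len(scores)
        survivors = {c for c, s in scores.items() if s > average}
        if not survivors:          # all tied, e.g. a perfect cycle: stop
            break
        remaining = survivors
    return remaining

# Hypothetical profile: plurality would elect A, but C defeats both
# A and B in pairwise votes; Nanson's method selects C.
rankings = 4 * [["A", "C", "B"]] + 3 * [["B", "C", "A"]] + 2 * [["C", "B", "A"]]
print(nanson(["A", "B", "C"], rankings))
```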
The proof itself is extremely simple, although Arrow’s presentation of it is not. He assumes (p. 58, 30-1-2) the preference pattern which leads to a cyclical majority, and then demonstrates that it leads to a “contradiction.” (X is preferred to Y and Y is preferred to Z, but Z is preferred to X.) The form which he has chosen—discussion of the rationality of a nonthinking institution—is unfortunate, but it is still true that putting alternatives against each other in pairs does not lead to a final result if there is a cyclical majority. Of course, putting alternatives against each other in pairs is not the only method of voting. Arrow’s whole “proof” (pp. 51-59) makes no sense if it is applied to voting methods other than pairwise comparisons. In fact, Arrow’s insistence on “independence of irrelevant alternatives” eliminates all methods of voting except that used in his “proof.” He never proves this nor does he even mention that it plays this part in his reasoning, but since it is, in fact, true, he can be forgiven for this omission. In any event much can be forgiven the man who took the nettle in his hand. Arrow was the first to dare to challenge the traditional theory of democracy by saying that no voting rule leading to rule by “the will of the majority” was possible.
The Behavior of Politicians
The “theory of candidates and parties” treats politicians like entrepreneurs and parties like corporations or partnerships. It is based on the view that politicians want to get elected or re-elected, and that parties are simply voluntary coalitions of politicians organized for the purpose of winning elections. A corporation serves the individual economic ends of those who organize it, yet can be treated as a functional individual for some purposes. Similarly, a party serves the individual political interests of those who organize it, but can be considered as a unified body for some purposes. Altogether, this branch of investigation strongly resembles the “theory of the firm” in economics. In economics, of course, this was a relatively late development, coming long after political economy. Why it came early in politics, I do not know. It may merely reflect the fact that “strict political theory” has largely been developed in the fifteen years since the end of World War II, a very short period.
In any event this branch of political theory has been mainly developed by individuals whose basic training is in economics. The reasons for this are fairly clear. Although the subject matter is that normally studied by the political scientists, the methods are entirely economic. Almost any citizen of a democracy will know something about the subject matter of political science, but knowledge of the methodological technique of economics is not so universal. The average economist knows economic method and some political science, while the average political scientist has little facility with the mathematical techniques of the economist. Since the new field requires knowledge of both economic method and political reality, it may be predicted that economists would come closer to possessing the desired combination of knowledge.
One can find certain foreshadowings of the “theory of candidates and elections” in the work of a number of modern economists. Hotelling*102 and Schumpeter,*103 in particular, made contributions. Basically, however, the theory has been developed by two people, Anthony Downs and myself. Since Downs’ book*104 is fairly well known, even if not so widely read as might be hoped, while my contribution consists of a chapter in a book which has been circulated only in preliminary form,*105 I may perhaps be forgiven if I emphasize my own contribution.
The formal theory in this field has been largely based on Duncan Black’s single-peaked preference curve. Although both Downs and I do discuss other possible structures, our basic picture of political preference can be equated to the left-right political continuum of the conventional political scientist. If we consider an individual candidate running for office, then both the desires of the voters for various governmental policies and the structure of the voting rules should be taken into account in determining his position on various issues. In working on this subject, I ignored the complex voting rules which have been developed by the theorists of committees and elections and confined myself to a few schemes. In one, the candidate must get 50 per cent of the votes to win, and it can readily be demonstrated that this will lead the opposing candidates to adopt closely similar positions on the issues. It also has a tendency to limit the number of active candidates in any one election to two.
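The convergence result under the 50 per cent rule can be simulated. The sketch below assumes a hypothetical electorate spread along a left-right scale from 0 to 100, with each voter supporting the nearer of two candidates; letting the candidates alternately adopt vote-maximizing positions, both settle at the median voter's position.

```python
import statistics

# A hypothetical electorate of eleven voters on a 0-100 left-right scale.
voters = [10, 25, 30, 42, 50, 55, 61, 70, 85, 90, 95]

def votes_for(a, b):
    """Votes cast for a candidate at position a against one at b
    (indifferent voters split evenly)."""
    total = 0.0
    for x in voters:
        if abs(x - a) < abs(x - b):
            total += 1
        elif abs(x - a) == abs(x - b):
            total += 0.5
    return total

def best_response(b):
    """The vote-maximizing position against an opponent fixed at b."""
    return max(range(101), key=lambda a: votes_for(a, b))

# Starting from opposite extremes, alternating best replies draw the
# two candidates together at the median voter (position 55 here).
a, b = 0, 100
for _ in range(20):
    a = best_response(b)
    b = best_response(a)

print(a, b, statistics.median(voters))
```

The same dynamics, run under a rule requiring much less than half the votes, would not produce this convergence, which is the contrast drawn in the next paragraph.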
Another possible scheme, much used in Europe, permits a number of candidates, say five, to be elected from each constituency. In this case, a candidate can insure his own victory by obtaining 20 per cent of the votes, and may win with less. Here, there is no tendency for the candidates to take similar positions; on the contrary, they will be spread over the full spectrum of voter opinion. These demonstrations carry over to party organizations too. The frequent lament that American and British parties are much alike, instead of representing different ideologies or, sometimes, classes, is thus a criticism of the voting rules rather than of our politicians. Further, although the European parties are ideologically different, government requires a coalition of such minority parties, and these coalitions are about as similar in their policies as are the two parties of the English-speaking world.
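The arithmetic behind the 20 per cent figure can be checked, on the simplifying assumption that the five highest vote-getters in the constituency are elected (actual European systems use more elaborate list rules, so this is an illustrative model only): a candidate misses a seat only if five rivals out-poll him, and vote shares cannot permit that once his own share is large enough.

```python
import math

def max_rivals_strictly_above(share):
    """With vote share `share`, the greatest number of rivals who can
    each hold a strictly larger share: k rivals each above `share`
    must satisfy k * share < 1 - share."""
    return math.ceil((1 - share) / share) - 1

# In a five-seat constituency a candidate is elected whenever at most
# four rivals out-poll him.
assert max_rivals_strictly_above(0.20) <= 4        # 20 per cent always wins a seat
assert max_rivals_strictly_above(1/6 + 1e-9) <= 4  # just over one sixth already suffices
```

The second assertion reflects the text's remark that a candidate "may win with less" than 20 per cent: any share strictly above one sixth already guarantees a place among the top five.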
That the structure of the political alliances which we call parties is probably largely a reflection of the voting rules has been dimly realized in England since about the last quarter of the nineteenth century. Periodically, English scholars will argue that their system of single-member constituencies, with victory in the constituency going to whoever gets a plurality, leads to a two-party system. Undoubtedly it does have such a tendency, but the fact that since the last reform bill England has always had three parties, and that it has not infrequently been necessary to turn to coalitions between two of them to get a majority in Parliament, indicates that the tendency is merely a tendency. Still, it does seem likely that we would be able to deduce an “equilibrium” party structure from any constitutional voting scheme. Actually doing so with the rather complicated systems in use in most democracies must await further research. It would appear a particularly good field for an investigator looking for something important to do.
So much for my work, which covers a broad field rather lightly. Downs instead has covered a narrow field intensively. Basically he considers the British political system as it existed from 1945 to the date of publication of his book. From this system he removed the Liberal party, the House of Lords, and the University members, which will be generally accepted as only a minor simplification of the real world. He also made two structural changes of minor importance: elections occur at regular intervals instead of at the desire of the prime minister, and the Cabinet is elected directly by popular vote rather than indirectly through Parliament. Given the present organization of the parties in Great Britain, the latter is surely a permissible simplification, although it is startling at first glance.
Having produced this simplified model of what is already the simplest governmental system now in use in any democracy, Downs proceeds to analyze its functioning. Even at this highly simplified level,*106 he finds it necessary to introduce a set of functions referring to information held by individuals and the cost of obtaining more. This series of functions is also of considerable utility in economics, but the problem of inadequate information is less pressing there. One of Downs’ more surprising conclusions is that a rational man will devote little effort to becoming well informed before voting. Since the mass of the voters clearly follow his advice, and since the traditional students of the problem continually call for more informed voting, it would appear that his approach is more realistic than those offered by the traditionalists.
This is a general characteristic of the Downs model—his conclusions are highly realistic. From a rather limited number of basic premises, most of which would not be seriously questioned, he produces by strictly logical reasoning a set of conclusions. These conclusions seem to fit the real world rather well, thus serving as a validation of the whole process. Among these conclusions are a number which we might call negative characteristics of democracy. These are matters, such as the relative lack of information of the voter, which have been widely noted but which have been regarded by traditional students as defects in the process. In the traditional view such defects result from failures on the part of the voters or politicians to “do their duty.” As a result, throughout the history of political theory there has been much preaching aimed at “improving” the voter. Downs’ demonstration that the voter was, in fact, behaving sensibly not only suggests why all of this preaching has been unsuccessful but also indicates that the preachers have been wrong.
There seems to be no point in further summarization of Downs’ main conclusions. He himself has included a summary after each chapter of his book and a final summary chapter listing all of his more important conclusions, and the curious reader can thus quickly gain the main points of his argument. If the summary leads the student on to read the whole book, so much the better. Discussion of possible further research in the field, however, does seem desirable.
In the first place, Downs’ supersimplified model of party government, taken by itself, can no doubt be further investigated. It has the very great advantage of being the easiest possible research tool. Further, many conclusions drawn from this very simple model will also be applicable to all party systems. Nevertheless, investigation of more complex systems would seem called for. Both Downs and myself have done some work on multiparty systems such as the French, but this is merely a beginning. Introduction of more complicated models should eventually lead to a good understanding of party dynamics in almost all of the democracies.
The internal politics of parties is, strictly speaking, not part of Downs’ basic model, but, in fact, he does discuss the situation of a minority within a party which is dissatisfied with the party policies. His conclusions fit the present crisis in the British Labour party very well, although they are far from a complete explanation. This field, however, is a particularly large one, and the connection between the individual active party member and the party itself should be a major area for further research. Again, it seems likely that investigation of systems more complex than that developed by Downs will be the eventual objective, but the pioneers would probably be wise to confine themselves to his supersimple model.
We, the People
The “theory of constitutions” concerns itself with a discussion of the effects of various possible democratic constitutions. These constitutions, in the book to which this essay is an appendix, are evaluated entirely in terms of their effects on individual citizens. Now that the subject has been opened up, it does not seem unlikely that others will attempt to use the same system—but without our consistent individualism. Whether this will be possible or not cannot be foretold, but at the moment the system is entirely based on individual preferences. This individualistic position raises no particular problems in connection with the “theory of committees and elections” and only apparent problems with the “theory of candidates and parties.”
In this book we have said little about parties*107 and elected decision-makers. Although there have been some exceptions, we have normally assumed that decisions are made by direct popular vote. Where we have discussed elected legislatures, we have assumed that the legislator simply votes according to the majority preference in his district. This is obviously a simplification of the real world, and it might seem inconsistent with the role of the politician in the theory of candidates and parties. In fact, there is no inconsistency. The details have not yet been fully worked out, but the two branches of the “strict theory of politics” merely amount to looking at the same phenomena from two different viewpoints. The theory of candidates and parties investigates the methods of winning elections with the preferences of the voters and the constitution taken as constant. The theory of constitutions, as we have used it, investigates the constitutional method of maximizing the extent to which the voter achieves his goals with the behavior of politicians as a constant.
The pattern of behavior on the part of politicians deduced by Downs and myself is taken into account in the theory of constitutions. Again, there are problems in detail, but a politician aiming at maximizing his support in the next election will follow a course of action which fits neatly into the theory of constitutions. The situation is similar to the relationship between general economics and the theory of the firm. General economics proves that a certain social organization will maximize the degree to which individual desires are met. The theory of the firm investigates how individual businessmen or corporations achieve their ends. The two theories integrate neatly because both are based on the same basic assumptions about human behavior. These assumptions are also those of the theories of candidates and parties, and of constitutions; hence we may expect all of these theories to fit together.
The basic forerunners of the theory of constitutions in the strict theory of politics have been found in the work discussed in the second and third sections of this Appendix. However, there have been some investigators who have done preliminary work directly in this area. Buchanan demonstrated that the State must be considered as merely a device, not an end in itself.*108 A State, qua State, does not have either preferences or aversions and can feel no pleasure or pain.*109 Samuelson went on to point out that every citizen would agree to the establishment of the State because it provides a method of providing services needed by all.*110 He also pointed out that this universal agreement would extend to an agreement to coerce individuals who attempted to obtain the advantages of membership in the State without paying the cost.
Another investigation relevant to the theory of constitutions was carried on by Karl A. Wittfogel.*111 Although his principal field of investigation lay outside the area in which democracy has developed, the contrast between the “hydraulic” State and the “multicentered” State with which our own history deals greatly increases our understanding of our own institutions. From our standpoint, the main lesson to be learned is that the State should not have a monopoly of force. The oriental states were “too strong for society,” and we should do everything in our power to avoid a similar situation. The State should have enough power to “keep the peace” but not enough to provide temptation to ambitious men. The State should never be given enough power to prevent genuinely popular uprisings against it.
The work of Rutledge Vining had a major effect on both of us, largely through his emphasis on the necessity of separating consideration of what “rules of the game” were most satisfactory from the consideration of the strategy to be followed under a given set of “rules.”
A further area in which quite a bit of research has been done and which can, in a sense, be taken as supporting the position we have taken in this book is the statistical study of voting behavior. With the great modern development of statistical methods, it was inevitable that investigators would eventually turn to voting records as a source of information about politics. A great deal of work has now been done in this field, and a vast literature now exists consisting of statistical investigations of the influence of various factors on voting. We have not made any thorough attempt to survey this literature, but from what reading we have done it would appear that this work largely supports our basic position and contradicts the traditional view.
In addition to these investigations which influenced our work, we have found two clear-cut cases of previous work directly in the “theory of constitutions.” The first of these, by Wicksell,*112 involves a fairly sophisticated discussion of an important constitutional problem together with recommendations for specific constitutional changes. The particular problem he discussed is now “one with Nineveh and Tyre,” but his approach is still of considerable interest. The second example is an article by J. Roland Pennock,*113 a most elegant example of what can be done in this field. It had no influence on our work, but only because we had overlooked it when it first appeared and just found it recently.
Both Wicksell and Pennock overlooked the problem of the costs of decision in choosing the optimal constitutional rule. We are not in a position to criticize them on this point since we both made the same mistakes in our own earlier work. In Buchanan’s “Positive Economics, Welfare Economics, and Political Economy” and my “Some Problems of Majority Voting”*114 the costs of decision-making are ignored, although these two articles clearly fall within the “theory of constitutions.”
In summary, although good work has been done in the field, the “strict theory of politics” is still an underdeveloped area. One of the purposes of this book is to attract resources, in the form of research work, into the field. There are few more promising areas for original work.