Here’s a cut-and-pasted Amazon review of *The Macrodynamics of Capitalism: Elements for a Synthesis of Marx, Keynes and Schumpeter*. It’s a bit heavy and I’ve ignored the maths so can’t vouch for it.

I’m basically slapping it up here for my own future reference, but Troppodillians might be interested to have a squiz. Without endorsing it wholeheartedly, it’s of interest to questions about the methodology of economics and social sciences. I’ve never waded through Keynes’s Treatise on Probability but have always been aware that there’s a lot of unity between it and his whole approach to social science (in his case economics). And there isn’t much that Keynes wrote that I don’t find myself in sympathetic agreement with. I can’t say the same for anyone in the neoclassical camp, though of course there are lots of things to be learned from them.

This is an interesting book that overlooks the connections between the approach to decision making and expectations formation presented by Keynes in his A Treatise on Probability (TP) in 1921 (the same material is also in the 1908 Cambridge Fellowship Thesis version) and Keynes’s integration of this approach into the mathematical elasticity analysis of chapters 20 and 21 of the GT (1936), using an aggregated version of the theory of purely competitive firms based on a micro foundation that assumed one fixed input, capital goods, such as plants, factories, machinery, or equipment, and one variable input, labor.

The author states the following:

“In sum, the aim of this chapter is… to isolate as precisely as possible the basic logical flaw in the Classical system and to go on then to show the limitations of current seemingly consistent reformulations of this approach” (2008, p.27).

The author then starts his search in chapters 2 and 3 of the GT. He never gets to the core of Keynes’s theory presented in chapters 20 and 21. The problem is that there is no basic logical flaw in the Classical model. It is a logically consistent system that is a special case of a much more general case that Keynes had already analyzed in general decision-theoretic terms in the TP.

The basic classical case comes from the work of Jeremy Bentham. James Mill, David Ricardo, Nassau Senior, J. B. Say, etc., were his students. Bentham’s approach is diametrically opposed to the approach of Adam Smith. Smith’s approach is an updated and improved version of the political and economic thought of the Old and New Testaments, Plato, Aristotle, Augustine, Aquinas, and Albert the Great, the teacher of Aquinas. It can be summed up in the term virtue ethics. Bentham sought to completely replace this approach with his Benthamite Utilitarianism, which assumes a rational economic man who can calculate the odds (risks) of different alternative courses of action. This rational economic calculator will then seek to maximize his utility over time. Bentham claimed that all humans can make such calculations and are able to choose the best or optimal course of action over time. The fundamental foundation of Bentham’s approach is that each individual decision maker can obtain an optimal information set that allows him to know all of the relevant odds and outcomes before he makes a decision by choosing one of the many different courses of action he is confronted by. Classical and neoclassical economics is merely the mathematical representation of Bentham’s Utilitarianism. Keynes realized that there was no fundamental difference on this basic, crucial point between the classical economics of Bentham’s students, such as Mill, Ricardo, Say, and Senior, and neoclassical economics.

This mathematical representation was first specified by the EMV (Expected Monetary Value) rule in the 19th century and by the SEU (Subjective Expected Utility) rule in the 20th century. Both the EMV and SEU rules are simply extensions of the purely mathematical laws of the probability calculus, the addition and multiplication rules needed to apply decision trees and tree diagrams. Both rules require that all probabilities or weights used in every decision be linear and additive. Keynes proved that these assumptions were very special cases of the general case, in which the weights or probabilities are non-linear and non-additive. Keynes’s technical and mathematical genius allowed him to formulate the generalization in chapter 26 of the TP. He called this generalization a “conventional coefficient of risk and weight, c”. His critique of the EMV and/or EU rules centered on the fact that they ignored non-linearity.

The following demonstration presents Keynes’s position. The best way to understand what Keynes did in the General Theory is to compare his general theory of decision making, as illustrated by the c coefficient model Keynes presented in chapter 26 of A Treatise on Probability (1921), with SEU. One can then effortlessly conclude that “modern” macroeconomics is just a very special case of Keynes’s general theory which assumes linearity and additivity so that one can assume a normal distribution.

Keynes presented a clear-cut mathematical, technical analysis of ambiguity aversion using his conventional coefficient of risk and weight (uncertainty), c, in chapter 26 of the TP. A very specific example of Keynes’s non-linear and non-additive approach to probability in chapters 15, 17, 20, and 22 of the TP was worked out in great detail by Keynes in chapter 26 using his conventional coefficient of risk and weight, c, on p.314 and in footnote 2 on p.314. Edgeworth, in his 1922 article on “The Philosophy of Chance” in Mind, was certainly correct in asking for the help of the readers of that philosophy journal in order to figure out the whats and whys involved in the application of Keynes’s c coefficient. This will be provided for the reader below, since it was never done in Mind or anywhere else, with the exception of Brady’s work.

The foundation of Neoclassical economics is merely the mathematical development of a theoretical approach first proposed by Jeremy Bentham in 1787. Bentham claimed that all individuals have the capability to calculate the odds and outcomes and act on the expected value (the probability times the outcome) in a rational way. This can be expressed by the following, where p is the probability of success and A is the outcome:

Maximize pA.

The modern version of this is to maximize pU(A), where p is a subjective probability that is additive, linear, precise, and exact, and U(A) is a Von Neumann–Morgenstern utility function. The goal is to

Maximize pU(A).

The modern name for Benthamite Utilitarianism in neoclassical economics is SEU theory (Subjective Expected Utility). Therefore, a microeconomic foundation based on utility maximization is just Benthamite Utilitarianism updated with modern mathematical techniques. Modern macro is all SEU theory.
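For the non-economist reader, the SEU rule the review keeps referring to can be sketched in a few lines. This is a minimal illustration only: the two gambles, their probabilities, and the log utility function are my own made-up assumptions, not taken from the review or from any particular text.

```python
# Minimal sketch of the SEU rule: among alternative actions, choose
# the one that maximizes p * U(A). All numbers here are illustrative.
import math

def seu(p, A, U=math.log):
    """Subjective expected utility of outcome A with probability p."""
    return p * U(A)

# Two hypothetical gambles: a likely modest payoff vs. a long shot.
gambles = {"safe": (0.9, 50.0), "long_shot": (0.1, 5000.0)}
best = max(gambles, key=lambda name: seu(*gambles[name]))
print(best)  # "safe": 0.9*ln(50) ≈ 3.52 beats 0.1*ln(5000) ≈ 0.85
```

Note that the rule presumes p is a single, precise, additive number — exactly the assumption the review says Keynes rejected as a special case.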

Keynes rejected Benthamite Utilitarianism as a very special case that would only hold under the special assumptions of the subjectivist, Bayesian model: that all probabilities are additive, linear, precise, single-number answers that obey the mathematical laws of the probability calculus.

Keynes specifies his conventional coefficient of risk and weight, c, model in chapter 26 of the TP on p.314 and footnote 2 on p.314, as a counterweight to the Benthamite Utilitarian approach.

Essentially, Keynes’s generalized model is given by

c = 2pw/[(1+q)(1+w)],

where w is Keynes’s weight of the evidence variable, which measures the completeness of the relevant, available evidence upon which the probabilities p and q are calculated. (Benthamite Utilitarians assume that the value of w is always 1.) w is an index defined on the unit interval between 0 and 1, p is the probability of success, and q is the probability of failure. p and q sum to 1 if they are additive, which requires that w = 1. Keynes’s c coefficient can be rewritten as

c = p · [1/(1+q)] · [2w/(1+w)].

Now multiply by A or U(A). One obtains

cA = p · [1/(1+q)] · [2w/(1+w)] · A.

The goal is to maximize cA or cU(A). The weight 1/(1+q) deals with non-linearity. The weight 2w/(1+w) deals with non-additivity. Modern macroeconomics amounts to nothing more than the claim that c = p, so that cA = pA (or cU(A) = pU(A)).
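Since the review never carries the arithmetic through, here is a minimal sketch of the comparison, assuming nothing beyond the formula above. The function names and demonstration values are mine, purely illustrative, not Keynes’s or Brady’s.

```python
# Keynes's conventional coefficient of risk and weight,
# c = 2pw / ((1+q)(1+w)), from ch. 26 of the TP, vs. the SEU rule.
# Function names and demo numbers are illustrative assumptions.

def keynes_c(p, q, w):
    """p discounted by 1/(1+q) (non-linearity) and 2w/(1+w) (non-additivity)."""
    return 2.0 * p * w / ((1.0 + q) * (1.0 + w))

def keynes_value(p, q, w, A):
    """Keynes's rule: maximize cA."""
    return keynes_c(p, q, w) * A

def seu_value(p, A):
    """Benthamite / SEU rule: maximize pA."""
    return p * A

# Special case: with complete evidence (w = 1) and q = 0, both
# correction factors equal 1, c collapses to p, and cA = pA.
assert abs(keynes_c(0.6, 0.0, 1.0) - 0.6) < 1e-12

# With incomplete evidence (w < 1) the same prospect is discounted
# relative to its SEU value:
p, q, w, A = 0.6, 0.4, 0.5, 100.0
print(keynes_value(p, q, w, A))  # 2*0.6*0.5/(1.4*1.5)*100 ≈ 28.57
print(seu_value(p, A))           # 60.0
```

On these made-up numbers the ambiguity-averse Keynesian valuation is less than half the SEU valuation, which is the sense in which incomplete evidence (w < 1) discounts a prospect under Keynes’s rule.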

It is now straightforward to see that the neoclassical microfoundations of macroeconomics assume that all probabilities are additive and linear. This is nothing but a special case of Keynes’s generalized decision rule to maximize cA, or cU(A), as opposed to the Benthamite Utilitarian pA or neoclassical pU(A). It is now clear that Keynes had created general theories of macroeconomics, probability, and decision making between 1921 and 1936. Keynes’s accomplishments, once understood, make him the only rival to Einstein for the title of the greatest scientist of the 20th century. Economists have only a very vague, hazy, cloudy understanding of Keynes’s distinction between risk and uncertainty. It is this distinction that has to be grasped first before any economist has any hope of understanding what Keynes means in the GT.

The conclusion is very straightforward. New Classical, New Keynesian, rational expectationist, and real business cycle theorists use the rule to maximize pU(A). Keynes used the rule to maximize cU(A). This is the same type of rule used by the overwhelmingly ambiguity-averse decision makers who populate the real world.

QED!

Nicholas, can you just remind us what it was that the Keynesian theory had to explain that could not be explained by “classical” economic theories?

In case it helps to background Keynes on probability, the idea is to predict the future, or to make rational predictions about the probability of events.

There are three situations to consider:

1. Where strict causality or determinism operates, for example an unsuspended apple falls to the ground.

2. Situations of “risk”, where known numerical probabilities can be applied to events, like the probability of throwing a particular number using a fair die.

3. Situations of “uncertainty” where more or less unique events are unfolding like a football or cricket game and reliable numerical probabilities do not apply. (Except where a player is helping to generate an event which is normally unpredictable, like a no ball).

The holy grail of probability theory is to find some way to calculate “inductive” probabilities that apply to unfolding events.

The problem of induction in the philosophy of science has become the problem of assigning numerical probability values to general scientific theories.

That is how Keynes related to the philosophy of science, although his work in probability theory was done before he made his name as an economist and has generally been forgotten, partly because that program seems to have failed in the philosophy of science and has been replaced lately by Bayesian (subjective) probability theory.

I’m not quite sure what you’re getting at Rafe. Keynes built the logical relations implied in his own model of the way the economy worked from his own ‘classical’ background, using the method of the classical economists and departing from it where he thought warranted. So I guess Keynes is your proof that Keynesian theory can be explained by classical theories.

The contrast I drew was with the neoclassicals, not the classicals (it was just that the review draws our attention to the deep methodological congruity between Bentham and the neoclassicals).

Keynes was, to the surprise of many, quite diffident about Hicks’s very clever reduction of the theory to the IS-LM curves. I think it’s fairly clear, though I’m not particularly well read on it, so I may have to stand corrected, that Keynes felt that this was too mechanical a representation. Of course a representation of the guts of his model would have to be that, but Keynes emphasised unpredictability, and the IS-LM approach somehow suggests stability of relationships. Minsky took Keynes in the other direction, which emphasised inherent instability – an instability that meant that stability for any length of time created the conditions for its own destruction and descent into instability.

Keynes was also highly sceptical of Tinbergen’s attempts to build a big model of the macro-economy, describing the assumptions of stability between relationships as ‘black magic’. But on we went and on we go, largely ignoring the caveats.

I think Keynes would have appreciated our recent correspondent’s contrast between usable and unusable knowledge.

Still something tells me I may not have understood what you were asking.

I will try to ask the question a different way. What problem did Keynes solve that could not be solved by some other, classical or neoclassical theory?

How did he depart from classical economics?

And what is the difference between classical and neoclassical economics?

Well your second question doesn’t get me much beyond my first answer I’m afraid.

Anyway, perhaps it would be less elliptical if you made your points by way of assertion rather than question.

http://en.wikipedia.org/wiki/Principle_of_insufficient_reason

Bernoulli 1654–1705

Bayes 1702–1761

Laplace 1749–1827

For rediscovering what had been done a century before? Keynes thought he was smarter than Bayes, and we went through 50 years of people trying to disprove Bayes’ Theorem only to come back to it nowadays. All this pedestal building just to get access to a bit more government debt and spending. It’s getting disgusting.

If you want to give credit for a recent major advancement in probability theory consider Andrey Markov.

Please can economists just take that tiny step and look beyond their own discipline? If you don’t find something interesting in 6 months then at least you can say you gave it a go, and get back to business as usual.