At the limits of our knowledge

What should one do in a discipline once it is clear that its knowledge cannot be grounded in anything underlying that can be measured reasonably accurately, and once you know that you cannot construct a consistent story tying all the sub-problems in the field together?

We are at this stage in all areas of economics and the social sciences that I know of. As I argued in a previous post, for purely technical reasons we cannot generate useful models of money. Not because we cannot think of the right arguments, or because we lack information about how money arose historically, but because what is mathematically solvable and useful in further investigations is so limited.

The same problem turns up with models of disequilibrium, social interaction, life-time planning, optimal control of complex organisations, and so on.

It is not just in theory that we keep hitting barriers of tractability. We can’t measure anything with a high degree of accuracy either. Everything we measure, even at the level of brain activity, is not what we really had in mind, and it is measured very inaccurately, with little hope of much improvement.

Individual income, for instance, understood in theory-land as our ability to draw on resources, is in practice measured quite poorly. Individuals themselves don’t know their income all that well, and tax authorities only deal with the monetary part that they can potentially grab, and miss important chunks even of that. The goods and services we get for free or that have been promised to us (and are hence every bit as much a part of income as a salary) are not included in either the measures of the tax authorities or self-reports. Environmental services that we get now and in the future (sunshine, biodiversity, a happy life for our kids) are even harder to measure, even if some brave researcher attempts to do so. Once you are prepared to think like a philosopher, you get into even more fundamental problems: the concept of individual income presupposes a notion of an ‘individual’, which is an exceptionally tough notion to define and measure if you think about it.

What goes for income goes for everything populating our representations of an economy: ‘peers’, ‘competition’, ‘health’, ‘causes of death’, ‘productivity’, ‘R&D’, ‘technology’, ‘GDP’, ‘expectations’ – the list goes on and on. There is not a single economic notion that can actually be measured with much confidence. It is not that our measures are useless and tell us nothing, but rather that they are poor proxies of what we wanted to know. Well-known economists have described this problem in various ways, with Paul Krugman stating in ‘Peddling Prosperity’ that “the economy can’t be put in a box”.

In academic environments where truth is valued, you get to hear these things as a starting academic and every young scientist worth their salt closes their ears when they hear this. The search for something that can be said with certainty and that is not subject to the seeming arbitrariness of ‘judgment calls’ is a key motivator for a scientist and to give up at the outset is not healthy for the soul. Hence the realities of the mathematical and measurement barriers between a scientist and the ‘complete truth’ are only squared up to after long exploration, and even then only reluctantly.
What reactions can one have if one accepts that it is indeed beyond us to have a consistent story of economics and that nothing is measured with great precision?

  1. True denial of the barriers. With fresh hope, new generations try anew. If you are oblivious to the notion that you have to make decisions, you can quite happily find truth in theory land, though even there you will have to maintain a certain level of unintelligence not to see that most models written down are inconsistent with other models written down. In empirical land there is always the hope that new ways of measuring things will improve accuracy. Brain scans, lab-experiments, national statistics, cohort panels, world-wide consistent datasets, etc., have flooded the market in recent decades. We have never had so much data available at the click of a mouse. There is then always the hope that the next batch of numbers truly measures what we wanted to measure and that hope keeps us going. Of course, the world is also full of people who will quite clearly see all the problems with mainstream economics as it currently is, but think they have already hit upon the crystal ball that makes them privy to the total truth if only the world takes the time to find out about it. When done whole-heartedly that is, in a sense, also a fresh take on the problem.
  2. Deceit I: to pretend that the relations found between measured variables are the undoubted truth, mainly in order to get more admiration and resources from the onlookers. Journalists in particular are susceptible to this kind of deceit. They don’t want to be given a hundred measures of the unemployment rate. They want a hard and fast ‘verifiable’ number, and they will always find someone to supply it to them.
  3. Deceit II: to pretend that what has been found in the world of some theory is actually useful and has been applied in the real world. Statements along these lines include things like ‘The Nobel Prize winner Mirrlees proved income taxes should not be too progressive and his work has had great influence on government’. Ha! Other statements along those lines often heard include ‘the fundamental welfare theorem shows the potential efficiency of markets’. The notion that the incredibly abstract theoretical world of Mirrlees and the welfare theorems has much to do with the actual world of taxation and markets is pure hubris on the side of theorists and their acolytes.
  4. Deceit III: to have one’s cake and eat it by basing decisions on one thing and defending them based on something else. For instance, the actual reason for economists to increase interest rates when inflation is high is mainly the hope that past experience will repeat itself, and not to be the odd one out amongst central bankers by doing something else, whereas the official defence is a tome of models and figures. Similarly, a favourite defence when other people criticise economists for assuming that people are greedy (Homo Economicus) is that economists make no assumptions at all about what people want and allow for each individual to be completely unique in their wants, subject only to some regularity conditions of the ‘preferences’. The accuser is then speaking of perceived actions of economic policy makers whereas the defender is thinking of axiomatic preference bundles and other theories that do not actually inform any decisions.
  5. Absolutist Retreat: to re-define the discipline as the study of those things that can be known for certain (even if conditional on assumptions that do not overlap) and to refuse to be drawn into statements about policies or implications. This is an honest approach that re-defines economics as an intellectual pursuit rather than an applicable science but of course will elicit the question of the onlooker as to why their taxes should pay for the study of inapplicable and internally inconsistent knowledge.
  6. Self-serving Retreat/Deceit: to re-define the discipline as the study of those things that can be known for certain but to encourage others to make the leap from the snippets of knowledge to policies and implications. This is an accusation one can level at the ‘randomistas’ in economics who insist that every problem is unique and that one needs randomised trials to get at the truth. Taking that line seriously, nothing they find in any experiment would then be applicable to anyone else (or even the same persons a second time around). One might, for instance, argue that we need to experiment with higher minimum wages amongst car-cleaners in Perth to know whether ‘high minimum wages cost jobs’. There is an obvious intuitive appeal to this, but of course if every situation is truly unique then how can one know that a random experiment on one population will have the same outcome on another population, at another time, or even on a larger group at the same time? The answer is that one doesn’t, and one is hence appealing to the tendency of the onlooker to see more in the number that is produced than is truly there. Hence the ‘result’ on Perth car-cleaners might not apply to all other professions, might not hold at all when feedbacks from such a large change are taken into account, and of course depends anyway on the rather fuzzy notion of what a ‘job’ is and on the ability to keep track of the car-cleaners included in the Perth experiment (which is usually not a trivial problem in actual experiments: neither the people who agree to be monitored nor the people running the experiments are ‘standard’).
  7. Reactionary Retreat: to reject formalism and measurement in their entirety as a bad job and to rely on expert judgment, verbal theories, and a historical database of anecdotes and perceived ‘lessons’. This of course is ultimately even harder to sell to new students (and thus, in terms of evolution, a dead-end), but has the added disadvantage that formalism and data are not a universally poor fit. There are areas where formalism and measurement have improved our decisions and our understanding. Just because there are no certainties doesn’t mean there aren’t degrees of good and bad guesses, and formalism can help with the guesses.
  8. Pragmatism I: to give up on the notion that any large area of economics comes with a consistent story, but to adopt heuristics on how to arrive at local knowledge and local institutions to muddle along. You are then in the business of comparing the outcomes of different processes, which of course is messy in itself, but it allows for a higher-order science of what the more or less successful strategies are (obviously this is only useful in sub-fields).
  9. Pragmatism II: to give up on the Grand Truth and to go into a ‘horses for courses’ set-up where one applies formalism in one area, historical knowledge in another, and yet other models and historical stories in another. The ‘unscientific’ aspect of this type of pragmatism is the question of how to decide what model and what data to rely on in which circumstances. The ‘feelers’ amongst us choose their explanatory framework based on gut instinct. Brilliant though some of those individuals may be, it’s not a very useful approach because we can’t teach it to the younger scientists and furthermore it means that the true strategies of the ‘feelers’ are beyond scrutiny and improvement. Hence an attempt to do ‘science’ at the next level is to start to formulate rules-of-thumb, heuristics, and even overt models with measurement to decide on which mental strategy to use in which situation. In short, one then accepts that the sub-models and sub-data are imperfect and inconsistent with other sub-models and sub-data but one nevertheless tries to make some progress by designing and discussing actual formalised rules as to which way of looking at a problem is appropriate when. You are then ultimately in the business of competing schools of thought.

Obviously, my personal favourite is (9). As a result I feel that highly successful ‘intuitive economists’ who on instinct decide how they approach a problem and what the most important characteristics of a new situation are, are duty bound to try and formalise what they do in actuality: to capture their own instinct in heuristics that can be held up to scrutiny. That way one can hope to make some progress in economic thinking after all.


14 Responses to At the limits of our knowledge

  1. James says:

    Wow, frank acknowledgement that economics isn’t very useful… From an economist! But I think you underestimate the ability of new tools to help. Money can probably be modeled with agent-based modelling techniques, by making explicit the heuristics in agents’ brains that make money useful. Have more faith in the new guard.

    • Paul Frijters says:

      Economics is very useful, but one should be open about its limitations and what economics is really about.

      Agent-based simulation modelling using behavioural heuristics is, in my experience (and I have done agent-based simulation myself), not very useful for illuminating the emergence of money. You need so many unrealistic assumptions to set a simulation in motion.

      I have hopes for agent-based modelling in the area of human conflict and perhaps complex market interactions, but that is a future hope.
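      [Editor’s note: Paul’s point about needing many arbitrary assumptions can be illustrated with a toy sketch. This is not any model of his; every choice below – the pairing rule, the belief-update rate, the initial beliefs, the very idea that ‘money’ is a token accepted with some probability – is an assumption invented purely for illustration.]

```python
import random

def simulate_token_acceptance(n_agents=200, rounds=50, learning_rate=0.2, seed=1):
    """Toy sketch of self-fulfilling token acceptance.

    Each agent i holds a belief belief[i] that others will accept the
    token as payment. Each round, agents are randomly paired; the
    responder accepts the token with probability equal to her own
    belief, and both partners then nudge their beliefs toward what
    they just observed. Returns the average belief after all rounds.

    Every parameter here (pairing, update rule, initial beliefs) is an
    arbitrary modelling assumption -- which is exactly the problem.
    """
    rng = random.Random(seed)
    # initial beliefs: an arbitrary assumption (uniform on [0, 1])
    belief = [rng.random() for _ in range(n_agents)]

    for _ in range(rounds):
        order = list(range(n_agents))
        rng.shuffle(order)                 # arbitrary random pairing
        observed = {}
        for a, b in zip(order[::2], order[1::2]):
            # responder b accepts with probability equal to her belief
            accepted = rng.random() < belief[b]
            observed[a] = 1.0 if accepted else 0.0
            observed[b] = observed[a]
        # each agent nudges her belief toward what she just saw
        for i, obs in observed.items():
            belief[i] += learning_rate * (obs - belief[i])

    return sum(belief) / n_agents
```

      Even in this tiny sketch, whether ‘money’ catches on is driven by the seed, the learning rate, and the pairing rule rather than by anything economically substantive, which gives a flavour of why one needs so many unrealistic assumptions just to set such a simulation in motion.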

  2. Dan says:

    This is a good, brave and humble (and useful!) post. I note that it might have some resonance with Yanis Varoufakis’ upcoming book.

    • Paul Frijters says:

      thanks for the link. I was once on a discussion panel with him and we agree on many of the problems in economics. I do now and then read Yanis’ thoughts but missed this latest one. His use of the word ‘pseudo-science’ though in his Preface is a disservice to social science. He should lead by example and work on an easy-to-understand intellectual alternative to the current paradigms. Rage against the machine only gets you so far.

      • Dan says:

        Respectfully, I disagree.

        If YV was uninterested in developing useful, generalisable findings I don’t think he’d have accepted this gig.

        Nor do I think he would have written The Global Minotaur, which to me is the most readable account of global financial history I’ve yet read (not an easy subject, I grant).

        Nor do I think he’d have developed his Modest Proposal for the eurozone, which I think is only useless insofar as it hasn’t been used.

        • Paul Frijters says:

          Dan,

          he is a busy boy and I am a fan, but I don’t see his Valve gig as an attempt at a different vision of economics.

          Let us leave a discussion of the Global Minotaur to another day. It’s a topic worthy of a workshop! I find it more in the ‘rage against the machine’ category than you probably do. Similarly, I was on a panel in which he talked about his plan for Europe so I do know about it.

  3. john r walker says:

    Number 9 sounds a bit like an art of economics (no bad thing).
    Economics definitely involves representation of representations (of representations) – ‘completeness’ is therefore definitely not possible – true and mutually exclusive formulations are inevitable.

    I have read about scientists and IT/AI types working on “precisely vague” systems for the sort of stuff you are talking about – does anybody know more?

  4. desipis says:

    Number 9 also seems a bit like engineering, particularly systems engineering. There’s a certain amount of using ‘knowable’ and well studied parts (‘science’) as well as using a certain amount of experience and judgement (‘art’). Systems engineering involves using a heterogeneous set of not completely understood parts together in order to make a whole (as well as identifying where currently available parts won’t cut the mustard and novel solutions are required).

  5. Marks says:

    It seems to me that 9 is the closest to the engineering approach, which is to make as much of the existing science as possible, set up a rough model with ‘safety margins’, then use science and real world verification to do the necessary checks. Then iterate until happy with the reliability of the outcome.

    So, it is a leapfrog affair. Use some science to set up models. Check what works or does not using scientific method. Then incrementally adjust the model as new science is undertaken. Note that models can be developed for various sectors then added together when the science and/or evidence justifies that addition. This process can take a century or two, mind. :)

    This approach actually allows for focus of applied scientific research as well. If a particular model has some major safety factors in its early stages, then scientific research into those areas with high safety factors is not only obvious, but also where the biggest payoffs usually are. Of course, that does not stop the theoretical scientists from researching to discover new models.

    The attraction of this approach is that it is relatively simple to set up a rough model with a small number of controls and variables, and incrementally sophisticate the system. It also means that the theorists get to play with their theories in universities, and do not inflict untried models on the world before some sort of validation.

    One very big hurdle to developing models is the argument that since economic analysis contains so many variables, it is impossible to model, and therefore modelling is doomed to failure. One could observe that it is impossible to tell which way each electron in an electrical system spins so setting up a power network is doomed to failure. Whether or not effective system control or modelling is in the too hard basket depends on whether or not one is employing realistic enough simplifications that enable the numbers to be crunched.

    Having said that, I am not sure that there is consensus from the economic community of what exactly it wants from economic modelling. Do people want to go down to the detail of being able to model every aspect of the economy down to ABS data collection unit level, or just an overall modelling giving a good idea of gross activity to much higher levels (sector/State/National)?

    Given the relatively small number of controls in place in economic management, it would seem that the subject is amenable to improvement. (I would suggest in this regard that laws and regulations are not so much controls as constraints).

    • Paul frijters says:

      The reality in applied land is often already that we are in horses-for-courses land, but the pretence of certainty is kept up for the clients who demand to see some magic in an Appendix.
      The post is less about the reality of selling economics, though, and more about the most honest way forward in terms of understanding the system. In the real world all sorts of other considerations arise, including career concerns and the limited degree to which the buyers have time for us.
      On macro for instance my reading of the current situation is that the main practical role of big macro models inside economic institutions like the RBA is to protect the budget of those going through the basic data by means of deceit. And to be fair, it is a local optimum to do this.

      • Marks says:

        True, but if the RBA were thinking of something like splitting bonds into several classes with different interest rates for the same maturity (e.g. requiring Australian banks to hold reserves in a mix of bonds reflecting the mix of housing/other loans and overseas/local deposits), would it not be a local and (more) global optimum to shed the deceit?

        Note, I am not proposing the scenario above, merely giving it as an instance where the RBA would need to model before it introduced such measures, and would presumably not be well served by a deceitful model, given the amount of external scrutiny such a measure would receive.

  6. hc says:

    Paul, I enjoyed this destructive post.

    I think things were healthier when simple models were used and recognised for what they were – enlightening parables that gave you some bearings but which didn’t map the world. Nostalgic thoughts of Samuelsonian economics and some of the nice essays by Ken Arrow.

    We enjoyed ourselves and were more modest about policy.

    The incentives to become empirical have been motivated by the fact that much low-hanging fruit has been picked and research outputs are needed to get doctorates and promotions. Hang any thoughts of witch-doctory. And of course the mainstreaming of time series analysis and the development of massive time series and panel databases have assisted in resolving the originality problem but yielded almost no major insights.

    Empirical macroeconomics and econometrics have been wastelands. What has been shown? Convincing evidence on labour supply elasticities? Fiscal multipliers? Money demand interest elasticities? I can’t think of a single major issue among these that has been close to being resolved.

    When the GFC occurred modern macroeconomics disappeared and economists started arguing about Keynesian fiscal issues that were explained to me when I was a first-year.

    Microeconomics is a bit better since there are many good applied issues that can be profitably analysed using simple partial equilibrium models. But this is unpopular because there is increased scope for rational criticism of unclear thinking and simple logic doesn’t demonstrate math-technological muscle.

    • paul frijters says:

      “When the GFC occurred modern macroeconomics disappeared and economists started arguing about Keynesian fiscal issues that were explained to me when I was a first-year.”

      Yes, what a curious phenomenon that was. Bernanke was suddenly talking about trust and confidence as if the literature he was part of was dedicated to that issue, and the rest of these policy types too were suddenly experts in things they as editors and referees kept rejecting from the journals and textbooks. And after the GFC died down, it was business as usual for rational expectations! I am still waiting for the AER to run macro-models where recessions are not mass holidays (now dressed up as leisure choices due to unexpected shocks in productivity), but perhaps I have not paid enough attention.

  7. john r walker says:

    paul
    I feel that the drift of what you have written is like this:
    Any study that even partly includes the ‘observer’ as the subject of study is going to be fundamentally incomplete. But is this really such a big problem that we should either drop it altogether (rage?) or retreat to an academic study of what is clearly objective, i.e. study economies that do not include humans?
