Following James Farrell’s efforts to catalogue the many different things that lay folk and professionals mean by the word ‘belief’, I wanted to try to tackle the question from an economics point of view. Given that the methods and mindsets of economists are an amalgam of those of other sciences, we first need to review how different stereotypical scientists from various disciplines would answer this question. Before getting to the perspective of economists, this medium-sized essay will therefore first present the views of a mathematician, a statistician, and a modern cognitive psychologist (or at least how I think of them).
For a mathematician, a belief is not a well-defined concept: in mathematics land, things are either assumed, taken as proven, or yet to be (dis)proved. There are no ‘shades in between’, and hence nearly all common-sense uses of the word ‘belief’ would be uninterpretable within the language of mathematics. Yet, various mathematical concepts themselves come close to what the layman thinks of as a belief, ranging from the axiom to the conjecture.
On the far certainty end of the spectrum there is the axiom, which is the undoubted premise that something is true and can be presumed to be true forever and for everything one wants to use the axiom for. To say you believe an axiom to hold means talking about an ‘inner belief’, in the sense that it is not possible to verify or refute an axiom by any outside measure: it is an article of faith stemming from revealed internal knowledge. The notion of an axiom is almost religious in content in that it depends on some inner revelation of truth immune to all observations. A pure mathematician does not even worry about whether an axiom holds ‘in reality’, because a mathematician thinks of an axiom as an unquestioned assumption: there doesn’t have to be any outside reality in which it holds. There is then still the mystery of why one axiom is interesting to a mathematician and another is not, but one can go through life as a mathematician without ever worrying about that. Yet, as soon as one is to take the results of mathematical inquiry as useful in any outside context, the validity of ‘translated’ axioms does matter.
The other end of the spectrum of mathematical language that comes close to the word ‘belief’ is the ‘conjecture’, such as the conjecture that Fermat’s last theorem is true (something now considered proven within the standard axioms of mathematics, but long considered ‘probably true but unproven’). A conjecture is a conditional statement that is either a tautology given its assumptions (and hence true) or not a tautology given its assumptions (and hence untrue). The interesting thing about the conjecture is that its truth depends only on assumptions already made, but it is not trivial to establish whether the assumptions encapsulate the conjecture or not. This creates philosophical distinctions between various types of knowledge. One needs a certain degree of fudge, though, to have any interpretation of what it would have meant to ‘believe’ that Fermat’s theorem was true. One would almost have to envisage the possibility of hundreds of such theorems, of which some high proportion would eventually turn out to be true. Thinking of such an ex ante universe of theorems requires fudge, and the statement that one believes an unproven theorem to be true is not itself interpretable within mathematics.
For a statistician, the notion of a belief has a meaning in the context of a measuring process: before the measurement of any phenomenon, one can meaningfully say that one believes the outcome to be within a certain range with a certain frequency of observations. A belief is then a kind of prediction. A properly-stated belief would be a statement of the form ‘I believe that if we measure occurrences of X, the observed values will fall between the values Y and Z at least M percent of the time if the number of measured occurrences goes to infinity’.
There are a couple of important points here. The first is that the question of what X ‘actually’ is, is a metaphysical one, i.e. not of real concern to the statistician. What matters to the statistician is how X is measured, because its measurement defines X as an empirical concept. Whether god really exists is hence not a proper question to a statistician; what really matters is how god is measured. In this context, the labelling of X as god is actually arbitrary to the statistician: a statistician deals purely in the relations between measured objects, without necessarily allowing himself an opinion as to the relation between measured objects and unmeasured abstractions. Another interesting point is that to the statistician, a belief is a prediction that either turns out to be confirmed by the data or not with a certain probability, i.e. in finite data the initial prediction can only be refuted or confirmed with certainty if it was stated in terms of ‘all or nothing’. Any stated belief that is fractional (such as that half the number of dice thrown will show a number above 4) can only be dismissed with a certain probability, never for certain. Hence to a statistician there is, in most cases, no such thing as true or false; there is only likely and unlikely.
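The point that a fractional belief can be made unlikely but never certainly false can be sketched with a toy calculation (the throw counts below are invented for illustration):

```python
from math import comb

def prob_at_most(n, k, p):
    """P(X <= k) for X ~ Binomial(n, p), computed exactly from the definition."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Stated fractional belief: 'half of all throws will show a number above 4'.
# (For a fair die the true chance of a 5 or 6 is 1/3, so the belief is wrong,
# yet finite data can never make it false with certainty.)
n, above_4 = 600, 205                    # invented data: 205 of 600 throws showed 5 or 6
p_value = prob_at_most(n, above_4, 0.5)  # chance of so few successes if the belief held
```

Because every term in the sum is strictly positive, the p-value is tiny but never exactly zero: the data can make the belief arbitrarily unlikely, never certainly false.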
Hence, to a statistician, laymen’s beliefs can sometimes be sensible and sometimes not. Someone who says he believes it more likely that Gillard will win the next election is expressing a fairly well-defined belief for a statistician, on which he would be able to give an ex-post confidence interval, because it will be fairly clear to the statistician what the observed future event is and how to interpret observations on it. Yet, someone who says he believes he is a better-than-average car driver is not making any sense to the statistician unless he gives an empirical operationalisation of the notion of ‘better’. Someone who says he believes the circumference of a circle is always two times pi times its radius would only be making sense to a statistician if he gave the statistician the heuristic via which to measure abstract instances of this statement. Someone who says he believes the world started just 5 minutes ago is making a statement that is unverifiable in the absence of time travel, and hence the statistician would not be able to assign it any probability unless more structure were put on the statement. As a singular statement it would hence be nonsensical.
A final thing to say about the statistician’s view of beliefs is that knowledge within statistics is, as in mathematics, ultimately internally revealed: a statistical statement about probabilities depends on prior information. Put crudely, a probability of a certain outcome is the relative frequency with which that outcome would be observed over all the instances in which one would have the exact same starting information. The validity, presence, and interpretation of that prior information derive from unquestioned truths, i.e. assumptions. Hence the statistical world, even in its Bayesian subsection, is full of internal truths, such as ‘priors’, knowledge of the ‘sampling universe’, ‘distributional assumptions’, etc. Only within these truths (such as that an actual die can only have six outcomes and is fair) can one meaningfully speak of information and therefore probabilities. At the end of the day, then, the statistician’s view of beliefs is much like that of the mathematician, in that the only real truth is the knowledge of assumptions and axioms that have been revealed internally, and all statements of beliefs about reality are statements conditional upon that internal knowledge. Data only exists within prior theory. An important issue that makes any real-world application of statistics subject to leaps of pure faith is that the assumptions must truly be unquestionable: the die must truly be fair and have only six outcomes (or at least the deviation must be known with certainty to be within particular bounds). The situation in which someone knows absolutely nothing with complete certainty (which is how one would often want to think of a belief) does not allow for any statistical inferences to be made: somewhere along the line the application of statistics has to use the fudge of ‘if we assume this…’.
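The dependence of probability statements on internally held prior truths can be sketched in a few lines of Bayesian updating; the flat prior, the grid of candidate biases, and the coin-flip data below are all assumptions of the example:

```python
# Assumed 'sampling universe': the coin's chance of heads is one of these values.
grid = [i / 100 for i in range(1, 100)]
prior = [1 / len(grid)] * len(grid)   # assumed flat prior: itself an article of faith

heads, tails = 7, 3                   # invented data: 7 heads in 10 flips

# Bayes' rule: posterior is proportional to prior times likelihood.
likelihood = [p**heads * (1 - p)**tails for p in grid]
unnorm = [pr * li for pr, li in zip(prior, likelihood)]
total = sum(unnorm)
posterior = [u / total for u in unnorm]

# The resulting 'belief' that the coin favours heads (bias above one half):
belief_heads_biased = sum(po for p, po in zip(grid, posterior) if p > 0.5)
```

Every number that comes out is conditional on the grid and the prior that went in: change those unquestioned inputs, and the same ten flips yield a different belief.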
To a modern cognitive psychologist, the meaning of the word belief is itself a purely empirical issue, i.e. a question of what goes on inside the minds of the users of the word ‘belief’. The question of what a belief is then subdivides into the question of how a human brain actually processes information and language, and how best to typify this process so as to relate utterances of the word ‘belief’ to stylised representations of what might lie behind them. One goes on inside the mind of the subject and the other goes on in the mind of the cognitive psychologist explaining what goes on. By necessity, we can only talk about the latter while we pretend to talk about the former, trusting to competition between cognitive psychologists to yield a useful representation. These two subdivisions are both subject to exceptionally tricky philosophical questions, such as what the nature of uttered, remembered, associated, and other forms of language is, and what an outside simplified model of internal cognitive processes ‘really’ means. Libraries have been written on both, and I don’t feel I know enough about them to say much more than that the answer is essentially one of practicality.
Using the classic fudge that we lack true internal information via which to judge the findings of psychologists but pretending they are all above board anyway because we hope they will return that favour when it comes to us, there are several items of note that cognitive psychologists have come up with regarding how our minds work that are important for an understanding of beliefs:
1. Almost no normal human subject is capable of thinking in terms of probabilities. It requires an exceptionally trained mind to think of the world in entirely probabilistic terms. Hence in people’s minds, the world is either flat or it isn’t, not flat with probability p and unflat with probability (1-p). The statistical view of uttered beliefs is therefore one that only trained individuals can relate to, and they have to make a great effort in each instance where they are asked things like ‘with what probability do you think X will occur?’. Answers to questions like ‘do you think X will occur?’ roll off most people’s tongues in an instant. Yet, to the statistician the latter question is completely nonsensical, and its interpretation requires a god-like knowledge as to how much probability a ‘yes’ refers to (this is a serious problem in my line of research, where I ask people how happy they think they will be in 5 years’ time). To foreshadow the general importance of this issue for economics, you only need to reflect on the fact that many business confidence indicators simply add up the number of people who say ‘yes’ to a question like ‘do you think you will have more orders in the next 3 months?’. To a pure statistician and mathematician (like Manski, who has gone on about this in several papers fuming at the sloppiness of such questions), the answers are meaningless. Most economists using such data, though, even if they know about this conceptual problem, simply ignore the fact that they don’t really know what they are measuring, clinging to usage value with lines like ‘I don’t care what it measures as long as it predicts something I do care about’.
2. What people believe to be true in one area can be inconsistent with what they believe in another area, without giving the least bit of bother to the believer. Hence someone who says he is a devout Catholic and takes the bible literally is nevertheless quite happy to ignore passages of the bible he doesn’t like (such as stoning unmarried couples, having foreign slaves, and the various other absurd dictates in Leviticus and other parts). When pressed, people simply refer to other passages that allow them a general cop-out on the consistency of interpretation. Such inconsistencies are entirely normal everywhere, though. The economic theorist who one day writes down a model of the whole economy that presumes perfect markets, and the other day writes down a micro-model of a particular sub-market with strong market imperfections, is in principle also guilty of double-dipping, in that he writes down assumptions incompatible with previous assumptions and will only have the vaguest internal ‘gut feeling’ that this is somehow alright in some non-formalised way. To the true mathematician, though, any application of one economic model to reality invalidates the application of any other non-nested model, meaning that to the true mathematician at least 99.99% of applied economic theory is false. If we adhered to that kind of rigour in reality, we might as well stop being economists. Inconsistent beliefs are thus a part of any applied science as well as of normal life. In normal life, inconsistent beliefs allow us to have a pleasant conversation with a person at one point in time, smoothed over by the tacit application of a truly temporary belief in each other’s goodness, and nevertheless be ready to switch to a different belief in an instant when that person requests a loan.
3. People have mental models about the real world, ranging from what a tree is to how the economy functions, to the world of mathematics, to the motivations within our families. These mental models come complete with automated emotional responses, activated memory-areas, plans-of-actions, network heuristics that activate particular mental models in particular situations, etc. Humans make up mental models all the time and it appears to be a basic survival strategy for us to do this. Essentially mathematics and statistics are just examples of mental models. These mental models are situational though, with some mental models activated much more often than others. Hence, some beliefs are more integrated in our various mental response patterns than others, meaning we can act as-if we believe the basic rules of calculus 90% of the time, whilst only acting as-if we believe that we can’t trust our daughters with young men 30% of the time. Cues that allow us to switch between models can be subtle and occur many times per second regarding different objects. We for instance automatically scan our visual inputs for danger using mental models of what is dangerous and what is not, quite apart from the mental model we might simultaneously be using when speaking to someone. From a practical point of view, this gives a neat notion of what a belief is, i.e. a relation within a mental model. It makes it clear that a belief is only relevant in a context of mental models and decision situations. Outside of those contexts the belief does not need to have meaning.
4. From an empirical point of view, the word ‘belief’ can denote different aspects of our mental models. The statement ‘I believe Ireland is in Europe’ is hence really a statement about the mental model we have of Ireland in our memory. Note that to the statistician this is a very difficult belief to interpret, because the notion that an individual only assigns a certain probability to whether Ireland is in Europe begs the question of what the observation space is in which it is possible that Ireland is not in Europe (nevertheless, a person with this belief would, if trained, be able to make sensible statistical statements). There are also statements of belief that denote a sense of identity and hence tell you individuals have incorporated a whole mental model in which the belief is but one relation. Such statements include the belief that Don Bradman is the greatest cricketer ever, or that Spain was favoured by the referee in the latest World Cup final. Such statements are not probabilistic or ‘true’ statements in any sense of the word but are rather statements signifying the adoption of a particular mental model by the utterer. In a similar vein, one can have statements rationalising former actions in order to uphold a particular internal mental model of something, such as when someone who beats up his wife says he ‘believed her to be egging him on’. Such a statement is again one of the adoption of a whole mental model, but this time complete with active distortions of one’s own memory and appeals to outside mental models of appropriate blame. Finally, one has statements like ‘I believe the traffic light is red’, which is a statement of a perception and as such ‘true’ if the statement is made to the self. It neither means the traffic light is truly red nor that the perceiver takes it as possible that it is anything but red, but it is highly predictive of further action (i.e. the person believing it will stop) and we frequently trust our lives to such beliefs, i.e. it appears to be a very reliable form of mental modelling.
5. An interesting finding of neuroscience is that we are naturally prone to ‘believe’ what we see and hear, i.e. to doubt our own senses is something we have to learn and is perceived as abnormal (making magic tricks and cognitive fallacies a source of amusement). In turn, this means that beliefs do not really follow from an objective appraisal of information or even the objective building-up of an internal predictive mental model (as an economic theorist often would like to think and as a properly trained scientist should do). Rather, it needs an internal apparatus of doubt and self-checking mental models to prevent outside stimuli from automatically becoming beliefs. Given that anything experienced often gains credence, all kinds of things can become situational beliefs that are in violent disagreement with our own interests or already existing mental models.
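The problem in point 1 with adding up ‘yes’ answers in confidence surveys can be made concrete with a small invented example: two groups of respondents with very different subjective probabilities produce exactly the same binary indicator.

```python
def yes_share(probs, threshold=0.5):
    """Share answering 'yes', assuming (hypothetically) that a respondent says
    'yes' whenever their subjective probability exceeds some threshold."""
    return sum(p > threshold for p in probs) / len(probs)

confident = [0.95, 0.90, 0.85, 0.40]   # firms nearly certain of more orders
lukewarm  = [0.55, 0.52, 0.51, 0.40]   # firms barely leaning towards 'yes'

indicator_same = yes_share(confident) == yes_share(lukewarm)   # both 0.75
avg_confident = sum(confident) / len(confident)                # ~0.775
avg_lukewarm = sum(lukewarm) / len(lukewarm)                   # ~0.495
```

The binary indicator is identical while the underlying average subjective probability differs by almost thirty percentage points, which is exactly the information the ‘yes’ count throws away.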
Amongst economists, all the views above have their place.
Beliefs as a form of tautology dependent on unquestioned axioms have a good example in the ‘revealed preference’ literature, where belief is entirely circular: ‘I believe X is the best choice for a person because that person chose it. Had the person not chosen X, it would not have been the best.’ The statement is circular because there is no outside definition of ‘the best’, and hence the statement is self-confirmatory, just like any proper mathematical statement. Truth in this context is one of divine internal revelation of the appropriate assumptions and as such is not scientific at all. Economic theory is full of tautologies like this, such as that every situation is by definition in equilibrium. Within the internal world of models, beliefs like this are tautologies, but as soon as someone uses the associations between the labels used in such models and the same words used in reality as a basis for statements about reality, one essentially treats the prior tautology as a form of divine internal revelation true everywhere (often in defiance of actual data, i.e. inconsistent with other mental models applied at the same time).
Statistical views of beliefs are perhaps best exemplified by the Bayesian community within economics, which comes up with Bayesian models of decision making and of the interpretation of data. For instance, economists who, on the basis of the data, think it more likely that higher minimum wages cost jobs than create jobs are quite explicitly using a fairly Bayesian view of beliefs. Note that if one follows any interpretation of data to its logical conclusion, one is once again forced to rely on unquestioned prior knowledge about how to interpret observations, set sample spaces, define what a ‘job’ is and what ‘create’ means, etc.
Identity-type beliefs in economics are common when it comes to sub-tribes that organise themselves as insiders versus outsiders using certain beliefs, such as experimenters who pretend they believe that lying to students seriously contaminates the future pool of subjects and that hence papers by labs that do this should all be refused, or macro-economists who pretend they believe real agents are aware of the true model of the economy and refuse to publish papers with other assumptions. Such beliefs are invariably contradictory, but since the real driver is not the thirst for knowledge but the thirst for a successful career, this is glossed over. For instance, if economic agents were truly equipped with the correct view of how the economy works, why bother doing economic research at all and not simply step outside and ask the person at the bus station what GDP is going to be next year? The very activity of economic research is hence inconsistent with the belief that agents in the economy act as-if they know what is going on, but because we find it too hard to come up with the model of everything, we muddle on, ignoring such inconsistencies but still using particular beliefs to keep others away from our table.
Ex-post rationalising beliefs are common when it concerns historical events, such as the belief that the Great Depression was made worse by the gold standard, or that the 2010 introduction of the mining tax in Australia was not presented in its most positive light.
Observational beliefs are prevalent in applied economics, such as when economists believe they see a tragedy of the commons when it concerns fishing in international waters.
Memory beliefs include the belief that the economy of the Roman Empire was dependent on an increasing set of territories to supply new slaves or that Australia’s economy grew faster during the GFC than the OECD average.
The potential criticisms and observations made on these types of beliefs for other sciences carry through when they are held by economists: the beliefs are situational (non-transitive) and, if you follow them up, invariably rely on the internally revealed structure of our mental models formed over lifetimes. In the absence of the mental model of everything, it is not clear what any of those beliefs are worth ‘in reality’, and indeed notions of falsification, probability, and verification are not really applicable to them in any clean sense. All we have to go on are heuristics that have been seen to work in particular areas, but of which it is not clear that they are all that useful in economics. For instance, the practice of challenging any assumption popular enough to get published on the basis that it fails to perfectly predict what happens in a lab might be useful in many areas, but in economics it is not clear it buys us anything. Perhaps most interesting is that the adoption and evolution of economic ideas is linked to the adoption and evolution of mental models. The evolutionary drivers of that race are not just the inter-subjectively agreed-upon notions of verification and falsification that define ‘scientists’, but also include whether ideas are inherently appealing to students in the market for additional mental models (i.e. internal success), how easily they fit onto the existing mental models of anyone who hears them (whether useful or not), and of course whether they help the adopter survive and procreate in both a literal and a career sense. Theories of the evolution of scientific thought are precisely about the different directions in which these driving forces pull.
After this quick tour of various sciences, what is then the ‘best’ answer to the question of what a belief is? My best answer, which is a mental model in itself, is that a belief is a relation within an internal mental model. That makes it by definition situational and only unquestionably true within the world of that internal model. The various forms and limitations of beliefs come from the various forms and limitations of mental models. On the whole, they do not relate to truth or probability as a one-to-one mapping. Only for highly trained individuals can they become related to truth or probability. Also, beliefs never stand alone and are mere parts of the mental model they are an aspect of, complete with action-plans, memories, associations, etc. To treat beliefs as separate entities is like looking at windscreen wipers without thinking about cars. The problem in trying to look at cars is that everyone has a different car.