Predicting the Future

November 30, 2010

Humans are many magnitudes worse at predicting the future than they think they are – especially when they try to think too much. I’ll try to provide some insight into why this is so, why it is a problem, and what to do about it.

Worse, we do not even know that we are unable to predict. When a prediction fails, we forget that it was ever made; we have a selection bias towards retaining memories of our successful predictions, which gives us greater confidence that we predicted the past and can therefore predict the future. Furthermore, when we look back at the past ten years it all looks quite easy to understand – we formulate a variety of narratives and connect everything together in a way that makes sense and gives the impression that each step was inevitable.

The brain indexes information by stories, or (usually short) narratives, and we use these stories to pass this selected information from one person to another. Consider an imaginary Mr. Smith who purchases shares in company X. Asked in an interview why he decided to buy shares, he tells interesting stories about the company’s future prospects and the methods he used to appraise its value. Yet if we painstakingly collect all the information that lies outside his own model of how to account for the decision, we find that he had just received a tax refund and had some spare money – and that this was the primary, most influential cause.

Our brains do not have the capacity to memorise even a tiny fraction of the jagged edges, exceptions and near-infinitely complex details that make up the true world. Furthermore, we view past experiences through the lens of surviving evidence – usually a tiny and thus profoundly biased impression of the entire history. Starting from even that tiny subset of history we are able to observe, our narrative-oriented memory stores information more efficiently by taking the few surviving details and reducing them further still. The resulting stories make our understanding of the past appear much more logical and well connected than it really was, and as a result we become enormously overconfident that we understand the future.

Stock market chartists fail because they curve-fit the raw data and become convinced that the patterns are the data. The raw data without the curves looks quite random (and largely is), but the added curves and other pen-strokes produce a summary – analogous to a narrative, a selected subset of the information that can be comprehended – that makes the past look predictable rather than random. These poor chartists then infer that the future is also predictable rather than random . . . and must wait until they have lost significant money before the remorse builds up enough to overcome the psychological bias.

We not only make these predictions and believe them, which alone could be harmless; unfortunately we go further and refine or optimise our decisions when taking action in anticipation of this ‘understandable’ future. We hastily optimise because we want the best result, given that we ‘know’ what will most likely happen. The problem with optimising is that it increases our fragility – we now perform perfectly if the expected future takes hold, but worse than without the optimisation when the future turns out differently.
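The chartists’ trap is easy to make concrete. Here is a minimal sketch in Python (using numpy; the walk length, polynomial degree and random seed are my own arbitrary choices, not anything taken from a real charting method): fit a smooth curve to pure noise, and the past suddenly looks tame, while the extrapolated ‘pattern’ says nothing about what comes next.

```python
import numpy as np

rng = np.random.default_rng(42)
walk = np.cumsum(rng.standard_normal(200))   # a pure random walk: no pattern by construction

# "Draw the curves": fit a smooth polynomial through the history.
t = np.linspace(0.0, 1.0, len(walk))
coeffs = np.polyfit(t, walk, deg=8)
curve = np.polyval(coeffs, t)
in_sample = np.mean((walk - curve) ** 2)     # small: the curve hugs the noise

# Extrapolate the fitted "pattern" 20 steps ahead and compare it with a
# genuine continuation of the same random walk.
step = t[1] - t[0]
future_t = 1.0 + step * np.arange(1, 21)
predicted = np.polyval(coeffs, future_t)
actual = walk[-1] + np.cumsum(rng.standard_normal(20))
out_of_sample = np.mean((actual - predicted) ** 2)

print(f"in-sample error:     {in_sample:10.2f}")      # the past looks tame and predictable
print(f"out-of-sample error: {out_of_sample:10.2f}")  # the inferred pattern typically falls apart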
Here are some examples of this over-optimising tendency:
1. Going on a long bush-walk and carefully calculating how to carry ‘just enough’ water, to save weight and make the trip as easy as possible (carrying too much water is a vastly smaller risk than carrying too little – the sketch after this list makes the asymmetry concrete)

2. Raising debt when confidence in future capital growth is high (this makes you fragile exactly when you need robustness, because the growth probably isn’t there – future asset growth can be poorest exactly when one is, like everyone else around, most confident)

3. Believing in a particular technology (over-predicting its future importance, or even just its survival – harmless on its own) and then, here is the mistake, committing heavily to its use (over-optimising resources), making yourself far more fragile. If we could predict the future, the optimised technology would be the best choice; since we cannot, the least optimised and most robust technology (one that keeps working however the future unfolds) is the better choice. The problem is that we think we can predict, and so we tend towards over-optimisation.

4. Selling a kidney. A sophisticated banking analyst, applying the same methods outside of economics, would further require us to have only one of each organ instead of two in order to be optimised for performance. We have redundant organs for good reason – over millions of years, evolution has learned (and taught our genes) the hard way what it takes to survive risk, factoring in many more events than we are capable of modelling.

5. Using mathematical equations to calculate risks and rewards and then making an intricate financial decision – the more we optimise, the more fragile the decision becomes to the future not turning out the way we predicted (and our predictions are wrong much more often than we expect).

6. Favouring insurance against recently occurring catastrophes (their narrative is held more tightly in memory, so they appear more predictable and more likely) over catastrophes that have not occurred for a long time. There should, however, be little or no such preference.

7. In some cases, favouring insurance policies that are more specifically defined (listing concrete covered events) and so carry more narrative (making the events appear more predictable through stronger memory association) over broadly defined policies with less specification. The preference should of course run in the opposite direction – the broadest policy is simply the best. By seeking more optimisation (specialisation, a reduction in scope) in a policy, we end up with a less useful, less robust policy.
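The asymmetry running through these examples, most plainly in the bush-walk, can also be sketched in a few lines of Python. This is a toy model with payoff numbers I have invented purely for illustration, not a real planning method: carried capacity has a small linear cost, while running short is catastrophic.

```python
def payoff(plan_capacity: float, actual_demand: float) -> float:
    """Small linear cost for what we carry; a large penalty for any shortfall."""
    carrying_cost = 0.1 * plan_capacity
    shortfall_cost = 100.0 if actual_demand > plan_capacity else 0.0
    return -(carrying_cost + shortfall_cost)

optimised_plan = 3.0   # 'just enough' for the predicted demand of 3 litres
robust_plan = 5.0      # a wasteful-looking margin

# The future undershooting, matching, or exceeding the prediction.
for actual in (2.5, 3.0, 4.0):
    print(f"demand={actual:.1f}  "
          f"optimised={payoff(optimised_plan, actual):8.2f}  "
          f"robust={payoff(robust_plan, actual):8.2f}")
```

The optimised plan wins by a fraction whenever the prediction holds and is ruined when it does not; the robust plan pays a small, constant premium everywhere. Optimising trades a bounded, visible cost for an unbounded, invisible one.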

Since we are overconfident about our view of the future and tend to over-optimise (to our own harm), what is the solution? First, understand that we do not really know the past (for the reasons discussed at the opening) or the future (our overconfidence arises from a biased account of the past) – in other words, understand that we don’t really know anything. Once that is achieved, the second step is to build an environment around us that is robust to the whole variety of possible outcomes, including the unlikely ones, that might lie ahead. The reverse is also helpful to consider: the largest mistake is to start by being confident about one particular version of the future (harmless on its own) and then heavily optimise everything we do for maximum success under that version (unless you enjoy blowing up).
