What I call Platonicity, after the ideas (and personality) of the philosopher Plato, is our tendency to mistake the map for the territory, to focus on pure and well-defined “forms,” whether objects, like triangles, or social notions, like utopias (societies built according to some blueprint of what “makes sense”), even nationalities. When these ideas and crisp constructs inhabit our minds, we privilege them over other less elegant objects, those with messier and less tractable structures (an idea that I will elaborate progressively throughout this book).
The Platonic fold is the explosive boundary where the Platonic mind-set enters into contact with messy reality, where the gap between what you know and what you think you know becomes dangerously wide. It is here that the Black Swan is produced.
We tend to treat our knowledge as personal property to be protected and defended. It is an ornament that allows us to rise in the pecking order.
History is opaque. You see what comes out, not the script that produces events, the generator of history. There is a fundamental incompleteness in your grasp of such events, since you do not see what’s inside the box, how the mechanisms work. What I call the generator of historical events is different from the events themselves, much as the minds of the gods cannot be read just by witnessing their deeds. You are very likely to be fooled about their intentions.
The human mind suffers from three ailments as it comes into contact with history, what I call the triplet of opacity. They are:
a. the illusion of understanding, or how everyone thinks he knows what is going on in a world that is more complicated (or random) than he realizes
b. the retrospective distortion, or how we can assess matters only after the fact, as if they were in a rearview mirror (history seems clearer and more organized in history books than in empirical reality)
c. the overvaluation of factual information and the handicap of authoritative and learned people, particularly when they create categories—when they “Platonify”
Much of what took place would have been deemed completely crazy with respect to the past. Yet it did not seem that crazy after the events. This retrospective plausibility causes a discounting of the rarity and conceivability of the event.
Our minds are wonderful explanation machines, capable of making sense out of almost anything, capable of mounting explanations for all manner of phenomena, and generally incapable of accepting the idea of unpredictability. These events were unexplainable, but intelligent people thought they were capable of providing convincing explanations for them—after the fact. Furthermore, the more intelligent the person, the better sounding the explanation. What’s more worrisome is that all these beliefs and accounts appeared to be logically coherent and devoid of inconsistencies.
Events present themselves to us in a distorted way. Consider the nature of information: of the millions, maybe even trillions, of small facts that prevail before an event occurs, only a few will turn out to be relevant later to your understanding of what happened. Because your memory is limited and filtered, you will be inclined to remember those data that subsequently match the facts.
Categorizing is necessary for humans, but it becomes pathological when the category is seen as definitive, preventing people from considering the fuzziness of boundaries, let alone revising their categories.
Categorizing always produces reduction in true complexity. It is a manifestation of the Black Swan generator, that unshakable Platonicity that I defined in the Prologue. Any reduction of the world around us can have explosive consequences since it rules out some sources of uncertainty; it drives us to a misunderstanding of the fabric of the world.
In Extremistan, inequalities are such that one single observation can disproportionately impact the aggregate, or the total.
So while weight, height, and calorie consumption are from Mediocristan, wealth is not. Almost all social matters are from Extremistan. Another way to say it is that social quantities are informational, not physical: you cannot touch them. Money in a bank account is something important, but certainly not physical. As such it can take any value without necessitating the expenditure of energy. It is just a number!
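To make the contrast concrete, here is a minimal simulation, not from the book, asking how much of the total a single observation can represent in each province; the Gaussian and Pareto parameters are illustrative assumptions:

```python
import random

random.seed(42)
N = 10_000

# Mediocristan: human heights in cm, roughly Gaussian.
heights = [random.gauss(170, 10) for _ in range(N)]

# Extremistan: wealth, modeled here with a Pareto (power-law) tail.
# The tail exponent 1.2 is an illustrative choice, not an empirical estimate.
wealths = [random.paretovariate(1.2) for _ in range(N)]

for name, xs in [("height", heights), ("wealth", wealths)]:
    share = max(xs) / sum(xs)
    print(f"largest single {name} observation = {share:.2%} of the total")
```

In the Gaussian sample the largest observation is a negligible sliver of the total; in the power-law sample it can be a sizable fraction, and that fraction does not reliably shrink as you add data.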
Note that before the advent of modern technology, wars used to belong to Mediocristan. It is hard to kill many people if you need to slaughter them one at a time. Today, with tools of mass destruction, all it takes is a button, a nutcase, or a small error to wipe out the planet.
Look at the implication for the Black Swan. Extremistan can produce Black Swans, and does, since a few occurrences have had huge influences on history.
Now, there are other themes arising from our blindness to the Black Swan:
a. We focus on preselected segments of the seen and generalize from it to the unseen: the error of confirmation.
b. We fool ourselves with stories that cater to our Platonic thirst for distinct patterns: the narrative fallacy.
c. We behave as if the Black Swan does not exist: human nature is not programmed for Black Swans.
d. What we see is not necessarily all that is there. History hides Black Swans from us and gives us a mistaken idea about the odds of these events: this is the distortion of silent evidence.
e. We “tunnel”: that is, we focus on a few well-defined sources of uncertainty, on too specific a list of Black Swans (at the expense of the others that do not easily come to mind).
Let us call it the domain specificity of our reactions. By domain-specific I mean that our reactions, our mode of thinking, our intuitions, depend on the context in which the matter is presented, what evolutionary psychologists call the “domain” of the object or the event. The classroom is a domain; real life is another. We react to a piece of information not on its logical merit, but on the basis of which framework surrounds it, and how it registers with our social-emotional system.
We may have learned things wrong from our ancestors. I speculate here that we probably inherited the instincts adequate for survival in the East African Great Lakes region where we presumably hail from, but these instincts are certainly not well adapted to the present, post-alphabet, intensely informational, and statistically complex environment.
Indeed our environment is a bit more complex than we (and our institutions) seem to realize. How? The modern world, being Extremistan, is dominated by rare—very rare—events. It can deliver a Black Swan after thousands and thousands of white ones, so we need to withhold judgment for longer than we are inclined to. As I said in Chapter 3, it is impossible—biologically impossible—to run into a human several hundred miles tall, so our intuitions rule these events out. But the sales of a book or the magnitude of social events do not follow such strictures. It takes a lot more than a thousand days to accept that a writer is ungifted, a market will not crash, a war will not happen, a project is hopeless, a country is “our ally,” a company will not go bust, a brokerage-house security analyst is not a charlatan, or a neighbor will not attack us. In the distant past, humans could make inferences far more accurately and quickly.
We like stories, we like to summarize, and we like to simplify, i.e., to reduce the dimension of matters.
The narrative fallacy addresses our limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship, upon them. Explanations bind facts together. They make them all the more easily remembered; they help them make more sense. Where this propensity can go wrong is when it increases our impression of understanding.
We, members of the human variety of primates, have a hunger for rules because we need to reduce the dimension of matters so they can get into our heads. Or, rather, sadly, so we can squeeze them into our heads. The more random information is, the greater the dimensionality, and thus the more difficult to summarize. The more you summarize, the more order you put in, the less randomness. Hence the same condition that makes us simplify pushes us to think that the world is less random than it actually is.
And the Black Swan is what we leave out of simplification.
Both the artistic and scientific enterprises are the product of our need to reduce dimensions and inflict some order on things. Think of the world around you, laden with trillions of details. Try to describe it and you will find yourself tempted to weave a thread into what you are saying. A novel, a story, a myth, or a tale, all have the same function: they spare us from the complexity of the world and shield us from its randomness. Myths impart order to the disorder of human perception and the perceived “chaos of human experience.”
Indeed, many severe psychological disorders accompany the feeling of loss of control of—being able to “make sense” of—one’s environment.
Platonicity affects us here once again. The very same desire for order, interestingly, applies to scientific pursuits—it is just that, unlike art, the (stated) purpose of science is to get to the truth, not to give you a feeling of organization or make you feel better. We tend to use knowledge as therapy.
Our tendency to perceive—to impose—narrativity and causality is a symptom of the same disease—dimension reduction. Moreover, like causality, narrativity has a chronological dimension and leads to the perception of the flow of time. Causality makes time flow in a single direction, and so does narrativity.
But memory and the arrow of time can get mixed up. Narrativity can viciously affect the remembrance of past events as follows: we will tend to more easily remember those facts from our past that fit a narrative, while we tend to neglect others that do not appear to play a causal role in that narrative. Consider that we recall events in our memory all the while knowing the answer of what happened subsequently. It is literally impossible to ignore posterior information when solving a problem. This tendency to remember not the true sequence of events but a reconstructed one will make history appear in hindsight to be far more explainable than it actually was—or is.
Conventional wisdom holds that memory is like a serial recording device, something like a computer diskette. In reality, memory is dynamic—not static—like a sheet of paper on which new texts (or new versions of the same text) will be continuously recorded, thanks to the power of posterior information. (In a remarkable insight, the nineteenth-century Parisian poet Charles Baudelaire compared our memory to a palimpsest, a type of parchment on which old texts can be erased and new ones written over them.) Memory is more of a self-serving dynamic revision machine: you remember the last time you remembered the event and, without realizing it, change the story at every subsequent remembrance.
So we pull memories along causative lines, revising them involuntarily and unconsciously. We continuously renarrate past events in the light of what appears to make what we think of as logical sense after these events occur.
Narrative, as well as its associated mechanism of salience of the sensational fact, can mess up our projection of the odds.
System 2, the cogitative one, is what we normally call thinking. It is what you use in a classroom, as it is effortful (even for Frenchmen), reasoned, slow, logical, serial, progressive, and self-aware (you can follow the steps in your reasoning). It makes fewer mistakes than the experiential system, and, since you know how you derived your result, you can retrace your steps and correct them in an adaptive manner.
Most of our mistakes in reasoning come from using System 1 when we are in fact thinking that we are using System 2. How? Since we react without thinking and introspection, the main property of System 1 is our lack of awareness of using it!
Much of the trouble with human nature resides in our inability to use much of System 2, or to use it in a prolonged way without having to take a long beach vacation. In addition, we often just forget to use it.
I’ll conclude by saying that our misunderstanding of the Black Swan can be largely attributed to our using System 1, i.e., narratives, and the sensational—as well as the emotional—which imposes on us a wrong map of the likelihood of events. On a day-to-day basis, we are not introspective enough to realize that we understand what is going on a little less than warranted from a dispassionate observation of our experiences.
Our intuitions are not cut out for nonlinearities. Consider our life in a primitive environment where process and result are closely connected. You are thirsty; drinking brings you adequate satisfaction. Or even in a not-so-primitive environment, when you engage in building, say, a bridge or a stone house, more work will lead to more apparent results, so your mood is propped up by visible continuous feedback.
In a primitive environment, the relevant is the sensational. This applies to our knowledge. When we try to collect information about the world around us, we tend to be guided by our biology, and our attention flows effortlessly toward the sensational—not the relevant so much as the sensational. Somehow the guidance system has gone wrong in the process of our coevolution with our habitat—it was transplanted into a world in which the relevant is often boring, nonsensational.
Furthermore, we think that if, say, two variables are causally linked, then a steady input in one variable should always yield a result in the other one. Our emotional apparatus is designed for linear causality. For instance, if you study every day, you expect to learn something in proportion to your studies. If you feel that you are not going anywhere, your emotions will cause you to become demoralized. But modern reality rarely gives us the privilege of a satisfying, linear, positive progression: you may think about a problem for a year and learn nothing; then, unless you are disheartened by the emptiness of the results and give up, something will come to you in a flash.
These nonlinear relationships are ubiquitous in life. Linear relationships are truly the exception; we only focus on them in classrooms and textbooks because they are easier to understand.
Some blindness to the odds or an obsession with their own positive Black Swan is necessary for entrepreneurs to function. The venture capitalist is the one who gets the shekels. The economist William Baumol calls this “a touch of madness.” This may indeed apply to all concentrated businesses: when you look at the empirical record, you see that not only do venture capitalists do better than entrepreneurs, but publishers do better than writers, dealers do better than artists, and science does better than scientists (about 50 percent of scientific and scholarly papers, costing months, sometimes years, of effort, are never truly read). The person involved in such gambles is paid in a currency other than material success: hope.
As a matter of fact, your happiness depends far more on the number of instances of positive feelings, what psychologists call “positive affect,” than on their intensity when they hit. In other words, good news is good news first; how good matters rather little. So to have a pleasant life you should spread these small “affects” across time as evenly as possible. Plenty of mildly good news is preferable to one single lump of great news.
Say you attribute the success of the nineteenth-century novelist Honoré de Balzac to his superior “realism,” “insights,” “sensitivity,” “treatment of characters,” “ability to keep the reader riveted,” and so on. These may be deemed “superior” qualities that lead to superior performance if, and only if, those who lack what we call talent also lack these qualities. But what if there are dozens of comparable literary masterpieces that happened to perish? And, following my logic, if there are indeed many perished manuscripts with similar attributes, then, I regret to say, your idol Balzac was just the beneficiary of disproportionate luck compared to his peers. Furthermore, you may be committing an injustice to others by favoring him.
My point, I will repeat, is not that Balzac is untalented, but that he is less uniquely talented than we think. Just consider the thousands of writers now completely vanished from consciousness: their record does not enter into analyses. We do not see the tons of rejected manuscripts because these writers have never been published.
Numerous studies of millionaires aimed at figuring out the skills required for hotshotness use the following methodology. They take a population of hotshots, those with big titles and big jobs, and study their attributes. They look at what those big guns have in common: courage, risk taking, optimism, and so on, and infer that these traits, most notably risk taking, help you to become successful. You would also probably get the same impression if you read CEOs’ ghostwritten autobiographies or attended their presentations to fawning MBA students.
Now take a look at the cemetery. It is quite difficult to do so because people who fail do not seem to write memoirs, and, if they did, those business publishers I know would not even consider giving them the courtesy of a returned phone call (as to returned e-mail, fuhgedit). Readers would not pay $26.95 for a story of failure, even if you convinced them that it had more useful tricks than a story of success. The entire notion of biography is grounded in the arbitrary ascription of a causal relation between specified traits and subsequent events. The graveyard of failed persons will be full of people who shared the following traits: courage, risk taking, optimism, et cetera. Just like the population of millionaires. There may be some differences in skills, but what truly separates the two is for the most part a single factor: luck.
If you look at the population of beginning gamblers taken as a whole, you can be close to certain that one of them (but you do not know in advance which one) will show stellar results just by luck. So, from the reference point of the beginning cohort, this is not a big deal. But from the reference point of the winner (who, and this is key, does not take the losers into account), a long string of wins will appear to be too extraordinary an occurrence to be explained by luck.
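A sketch of the point, under the simplest possible assumption: fair coin flips, zero skill anywhere.

```python
import random

random.seed(7)
GAMBLERS, ROUNDS = 10_000, 40

def career():
    # Each round is a fair coin: +1 on a win, -1 on a loss. No skill anywhere.
    return sum(random.choice((1, -1)) for _ in range(ROUNDS))

results = [career() for _ in range(GAMBLERS)]
best = max(results)
wins = (best + ROUNDS) // 2  # convert net score back to a win count
print(f"best of {GAMBLERS} gamblers won {wins}/{ROUNDS} rounds by pure luck")
# Viewed from the cohort, an extreme record is expected; viewed from the
# winner alone, ignoring the thousands of losers, it looks like genius.
```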
We have to accept the fuzziness of the familiar “because” no matter how queasy it makes us feel (and it does make us queasy to remove the analgesic illusion of causality). I repeat that we are explanation-seeking animals who tend to think that everything has an identifiable cause and grab the most apparent one as the explanation. Yet there may not be a visible because; to the contrary, frequently there is nothing, not even a spectrum of possible explanations. But silent evidence masks this fact. Whenever our survival is in play, the very notion of because is severely weakened. The condition of survival drowns all possible explanations. The Aristotelian “because” is not there to account for a solid link between two items, but rather, as we saw in Chapter 6, to cater to our hidden weakness for imparting explanations.
Note here that I am not saying causes do not exist; do not use this argument to avoid trying to learn from history. All I am saying is that it is not so simple; be suspicious of the “because” and handle it with care—particularly in situations where you suspect silent evidence.
We have seen several varieties of the silent evidence that cause deformations in our perception of empirical reality, making it appear more explainable (and more stable) than it actually is. In addition to the confirmation error and the narrative fallacy, the manifestations of silent evidence further distort the role and importance of Black Swans. In fact, they cause a gross overestimation at times (say, with literary success), and underestimation at others (the stability of history; the stability of our human species).
I said earlier that our perceptual system may not react to what does not lie in front of our eyes, or what does not arouse our emotional attention. We are made to be superficial, to heed what we see and not heed what does not vividly come to mind. We wage a double war against silent evidence. The unconscious part of our inferential mechanism (and there is one) will ignore the cemetery, even if we are intellectually aware of the need to take it into account. Out of sight, out of mind: we harbor a natural, even physical, scorn of the abstract.
What is the ludic fallacy? Ludic comes from ludus, Latin for games.
In real life you do not know the odds; you need to discover them, and the sources of uncertainty are not defined. Economists, who do not consider what was discovered by noneconomists worthwhile, draw an artificial distinction between Knightian risks (which you can compute) and Knightian uncertainty (which you cannot compute), after one Frank Knight, who rediscovered the notion of unknown uncertainty and did a lot of thinking but perhaps never took risks, or perhaps lived in the vicinity of a casino. Had he taken economic or financial risks he would have realized that these “computable” risks are largely absent from real life! They are laboratory contraptions!
Yet we automatically, spontaneously associate chance with these Platonified games. I find it infuriating to listen to people who, upon being informed that I specialize in problems of chance, immediately shower me with references to dice. Two illustrators for a paperback edition of one of my books spontaneously and independently added a die to the cover and below every chapter, throwing me into a state of rage. The editor, familiar with my thinking, warned them to “avoid the ludic fallacy,” as if it were a well-known intellectual violation.
The cosmetic and the Platonic rise naturally to the surface. This is a simple extension of the problem of knowledge. It is simply that one side of Eco’s library, the one we never see, has the property of being ignored. This is also the problem of silent evidence. It is why we do not see Black Swans: we worry about those that happened, not those that may happen but did not. It is why we Platonify, liking known schemas and well-organized knowledge—to the point of blindness to reality. It is why we fall for the problem of induction, why we confirm. It is why those who “study” and fare well in school have a tendency to be suckers for the ludic fallacy.
Alas, we are not manufactured, in our current edition of the human race, to understand abstract matters—we need context. Randomness and uncertainty are abstractions. We respect what has happened, ignoring what could have happened. In other words, we are naturally shallow and superficial—and we do not know it. This is not a psychological problem; it comes from the main property of information. The dark side of the moon is harder to see; beaming light on it costs energy. In the same way, beaming light on the unseen is costly in both computational and mental effort.
I propose that if you want a simple step to a higher form of life, as distant from the animal as you can get, then you may have to denarrate, that is, shut down the television set, minimize time spent reading newspapers, ignore the blogs. Train your reasoning abilities to control your decisions; nudge System 1 (the heuristic or experiential system) out of the important ones. Train yourself to spot the difference between the sensational and the empirical. This insulation from the toxicity of the world will have an additional benefit: it will improve your well-being.
Prediction, not narration, is the real test of our understanding of the world.
I find it scandalous that in spite of the empirical record we continue to project into the future as if we were good at it, using tools and methods that exclude rare events. Prediction is firmly institutionalized in our world. We are suckers for those who help us navigate uncertainty, whether the fortune-teller or the “well-published” (dull) academics or civil servants using phony mathematics.
Epistemic arrogance bears a double effect: we overestimate what we know, and underestimate uncertainty, by compressing the range of possible uncertain states (i.e., by reducing the space of the unknown).
The more information you give someone, the more hypotheses they will formulate along the way, and the worse off they will be. They see more random noise and mistake it for information.
The problem is that our ideas are sticky: once we produce a theory, we are not likely to change our minds—so those who delay developing their theories are better off. When you develop your opinions on the basis of weak evidence, you will have difficulty interpreting subsequent information that contradicts these opinions, even if this new information is obviously more accurate. Two mechanisms are at play here: the confirmation bias that we saw in Chapter 5, and belief perseverance, the tendency not to reverse opinions you already have. Remember that we treat ideas like possessions, and it will be hard for us to part with them.
Poincaré’s reasoning was simple: as you project into the future you may need an increasing amount of precision about the dynamics of the process that you are modeling, since your error rate grows very rapidly. The problem is that near precision is not possible since the degradation of your forecast compounds abruptly—you would eventually need to figure out the past with infinite precision. Poincaré showed this in a very simple case, famously known as the “three-body problem.”
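A minimal numeric sketch of this compounding, using the logistic map as a stand-in for the three-body problem (my substitution; the three-body problem itself needs more machinery, but the mechanism of sensitive dependence is the same):

```python
# Two "pasts" that differ only at the tenth decimal place.
x, y = 0.4, 0.4 + 1e-10
for step in range(1, 61):
    # Iterate the chaotic logistic map x -> 3.9 * x * (1 - x).
    x, y = 3.9 * x * (1 - x), 3.9 * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: gap = {abs(x - y):.3e}")
# The gap grows roughly exponentially: to push the forecast horizon out,
# you need ever more precision about the initial state.
```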
Someone with a low degree of epistemic arrogance is not too visible, like a shy person at a cocktail party. We are not predisposed to respect humble people, those who try to suspend judgment. Now contemplate epistemic humility. Think of someone heavily introspective, tortured by the awareness of his own ignorance. He lacks the courage of the idiot, yet has the rare guts to say “I don’t know.” He does not mind looking like a fool or, worse, an ignoramus. He hesitates, he will not commit, and he agonizes over the consequences of being wrong. He introspects, introspects, and introspects until he reaches physical and nervous exhaustion.
This does not necessarily mean that he lacks confidence, only that he holds his own knowledge to be suspect. I will call such a person an epistemocrat; the province where the laws are structured with this kind of human fallibility in mind I will call an epistemocracy.
The major modern epistemocrat is Montaigne.
The notion of future mixed with chance, not a deterministic extension of your perception of the past, is a mental operation that our mind cannot perform. Chance is too fuzzy to be a category by itself. There is an asymmetry between past and future, and it is too subtle for us to understand naturally.
History is useful for the thrill of knowing the past, and for the narrative (indeed), provided it remains a harmless narrative. One should learn under severe caution. History is certainly not a place to theorize or derive general knowledge, nor is it meant to help in the future, without some caution. We can get negative confirmation from history, which is invaluable, but we get plenty of illusions of knowledge along with it.
The lesson for the small is: be human! Accept that being human involves some amount of epistemic arrogance in running your affairs. Do not be ashamed of that. Do not try to always withhold judgment—opinions are the stuff of life. Do not try to avoid predicting—yes, after this diatribe about prediction I am not urging you to stop being a fool. Just be a fool in the right places.
What you should avoid is unnecessary dependence on large-scale harmful predictions—those and only those. Avoid the big subjects that may hurt your future: be fooled in small matters, not in the large. Do not listen to economic forecasters or to predictors in social science (they are mere entertainers), but do make your own forecast for the picnic. By all means, demand certainty for the next picnic; but avoid government social-security forecasts for the year 2040.
I am trying here to generalize to real life the notion of the “barbell” strategy I used as a trader, which is as follows. If you know that you are vulnerable to prediction errors, and if you accept that most “risk measures” are flawed, because of the Black Swan, then your strategy is to be as hyperconservative and hyperaggressive as you can be instead of being mildly aggressive or conservative. Instead of putting your money in “medium risk” investments (how do you know it is medium risk? by listening to tenure-seeking “experts”?), you need to put a portion, say 85 to 90 percent, in extremely safe instruments, like Treasury bills—as safe a class of instruments as you can manage to find on this planet. The remaining 10 to 15 percent you put in extremely speculative bets, as leveraged as possible (like options), preferably venture capital–style portfolios.
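A toy one-period comparison, with every return figure below an illustrative assumption of mine rather than anything from the text or from market data:

```python
# Toy comparison of a barbell vs. a "medium risk" portfolio over one period.
# All return figures are illustrative assumptions, not market data.

def barbell(crash: bool) -> float:
    safe = 0.90 * 1.04                      # 90% in T-bills at an assumed ~4%
    spec = 0.10 * (0.0 if crash else 10.0)  # 10% in bets: wiped out, or 10x
    return safe + spec

def medium(crash: bool) -> float:
    return 0.60 if crash else 1.08          # "medium risk": -40% or +8%

for crash in (False, True):
    label = "crash " if crash else "normal"
    print(f"{label}: barbell -> {barbell(crash):.2f}, medium -> {medium(crash):.2f}")
# The barbell caps the downside at a known floor (the safe 90% plus interest)
# while leaving the upside open; "medium risk" hides its true tail.
```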
Seize any opportunity, or anything that looks like opportunity. They are rare, much rarer than you think. Remember that positive Black Swans have a necessary first step: you need to be exposed to them. Many people do not realize that they are getting a lucky break in life when they get it. If a big publisher (or a big art dealer or a movie executive or a hotshot banker or a big thinker) suggests an appointment, cancel anything you have planned: you may never see such a window open up again. I am sometimes shocked at how little people realize that these opportunities do not grow on trees. Collect as many free nonlottery tickets (those with open-ended payoffs) as you can, and, once they start paying off, do not discard them. Work hard, not in grunt work, but in chasing such opportunities and maximizing exposure to them. This makes living in big cities invaluable because you increase the odds of serendipitous encounters—you gain exposure to the envelope of serendipity.
This idea that in order to make a decision you need to focus on the consequences (which you can know) rather than the probability (which you can’t know) is the central idea of uncertainty.
Effectively, if free markets have been successful, it is precisely because they allow the trial-and-error process I call “stochastic tinkering” on the part of competing individual operators who fall for the narrative fallacy—but are effectively collectively partaking of a grand project. We are increasingly learning to practice stochastic tinkering without knowing it—thanks to overconfident entrepreneurs, naïve investors, greedy investment bankers, and aggressive venture capitalists brought together by the free-market system.
Random outcomes, or an arbitrary situation, can also explain success, and provide the initial push that leads to a winner-take-all result. A person can get slightly ahead for entirely random reasons; because we like to imitate one another, we will flock to him. The world of contagion is so underestimated!
In sociology, Matthew effects bear the less literary name “cumulative advantage.” This theory can easily apply to companies, businessmen, actors, writers, and anyone else who benefits from past success. If you get published in The New Yorker because the color of your letterhead attracted the attention of the editor, who was daydreaming of daisies, the resultant reward can follow you for life. More significantly, it will follow others for life. Failure is also cumulative; losers are likely to also lose in the future, even if we don’t take into account the mechanism of demoralization that might exacerbate it and cause additional failure.
A great illustration of preferential attachment can be seen in the mushrooming use of English as a lingua franca—though not for its intrinsic qualities, but because people need to use one single language, or stick to one as much as possible, when they are having a conversation. So whatever language appears to have the upper hand will suddenly draw people in droves; its usage will spread like an epidemic, and other languages will be rapidly dislodged.
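A minimal sketch of cumulative advantage, under a hypothetical setup: ten equally good “languages,” and each new speaker choosing in proportion to current usage.

```python
import random

random.seed(1)
counts = [1] * 10          # ten "languages," all starting equal
for _ in range(100_000):   # each new speaker...
    # ...picks a language with probability proportional to current usage.
    pick = random.choices(range(10), weights=counts)[0]
    counts[pick] += 1

counts.sort(reverse=True)
print("usage shares:", [round(c / sum(counts), 3) for c in counts])
```

Nothing distinguishes the options at the start; small early leads get amplified until one language takes most of the speakers.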
The world, epistemologically, is literally a different place to a bottom-up empiricist. We don’t have the luxury of sitting down to read the equation that governs the universe; we just observe data and make an assumption about what the real process might be, and “calibrate” by adjusting our equation in accordance with additional information. As events present themselves to us, we compare what we see to what we expected to see. It is usually a humbling process, particularly for someone aware of the narrative fallacy, to discover that history runs forward, not backward. As much as one thinks that businessmen have big egos, these people are often humbled by reminders of the differences between decision and results, between precise models and reality.
What I am talking about is opacity, incompleteness of information, the invisibility of the generator of the world. History does not reveal its mind to us—we need to guess what’s inside of it.
I want to be broadly right rather than precisely wrong. Elegance in the theories is often indicative of Platonicity and weakness—it invites you to seek elegance for elegance’s sake. A theory is like medicine (or government): often useless, sometimes necessary, always self-serving, and on occasion lethal. So it needs to be used with care, moderation, and close adult supervision.
The idea is not to correct mistakes and eliminate randomness from social and economic life through monetary policy, subsidies, and so on. The idea is simply to let human mistakes and miscalculations remain confined, and to prevent their spreading through the system, as Mother Nature does. Reducing volatility and ordinary randomness increases exposure to Black Swans—it creates an artificial quiet.
Our aversion to variability and desire for order, and our acting on those feelings, have helped precipitate severe crises. Making something artificially bigger (instead of letting it die early if it cannot survive stressors) makes it more and more vulnerable to a very severe collapse—as I showed with the Black Swan vulnerability associated with an increase in size.
Also, when I was railing against models, social scientists kept repeating that they knew it and that there is a saying, “all models are wrong, but some are useful”—not understanding that the real problem is that “some are harmful.” Very harmful.
It is particularly shocking that people do what are called “stress tests” by taking the worst possible past deviation as an anchor event to project the worst possible future deviation, not thinking that they would have failed to account for that past deviation had they used the same method on the day before the occurrence of that past anchor event.
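A sketch of why that anchoring fails, using synthetic fat-tailed “daily moves” (an illustrative assumption, not market data):

```python
import random

random.seed(3)

def move():
    # A fat-tailed daily move: ratio of Gaussians has Cauchy-like tails
    # (the denominator is floored to keep values finite). Purely illustrative.
    return random.gauss(0, 1) / max(abs(random.gauss(0, 1)), 0.1)

worst_so_far = 0.0
for day in range(5_000):
    m = abs(move())
    if worst_so_far > 0 and m > 1.5 * worst_so_far:
        print(f"day {day}: move {m:.1f} vs. prior 'worst case' {worst_so_far:.1f}")
    worst_so_far = max(worst_so_far, m)
# Each new record dwarfs the "worst case" that a stress test anchored on the
# previous record would have used the day before.
```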
Since Plato, only philosophers have spent time discussing what Truth is, and for a reason: it is unusable in practice. By focusing on the True/False distinction, epistemology remained, with very few exceptions, prisoner of an inconsequential, and highly incomplete, 2-D framework. The missing third dimension is, of course, the consequence of the True, and the severity of the False, the expectation. In other words, the payoff from decisions, the impact and magnitude of the result of such a decision. Sometimes one may be wrong and the mistake may turn out to be inconsequential.
Most statistical education is based on these asymptotic, Platonic properties, yet we live in the real world, which rarely resembles the asymptote. Statistical theorists know it, or claim to know it, but not your regular user of statistics who talks about “evidence” while writing papers. Furthermore, this compounds what I called the ludic fallacy: most of what students of mathematical statistics do is assume a structure similar to the closed structures of games, typically with a priori known probability. Yet the problem we have is not so much making computations once you know the probabilities, but finding the true distribution for the horizon concerned. Many of our knowledge problems come from this tension between a priori and a posteriori.
There is a measure called kurtosis that the reader does not need to bother with, but that represents “how fat the tails are,” that is, how much rare events play a role. Well, often, with ten thousand pieces of data, forty years of daily observations, one single observation represents 90 percent of the kurtosis! Sampling error is too large for any statistical inference about how non-Gaussian something is, meaning that if you miss a single number, you miss the whole thing.
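A sketch of that concentration, with a synthetic fat-tailed sample standing in for the forty years of daily observations:

```python
import random

random.seed(0)
N = 10_000  # roughly forty years of daily observations

# Fat-tailed sample: a ratio of Gaussians has very fat, Cauchy-like tails
# (denominator floored to avoid division blow-ups). Illustrative, not data.
xs = [random.gauss(0, 1) / max(abs(random.gauss(0, 1)), 1e-3) for _ in range(N)]

fourth = [x**4 for x in xs]   # kurtosis is driven by the fourth moment
share = max(fourth) / sum(fourth)
print(f"one observation out of {N} carries {share:.0%} of the fourth moment")
# Drop that single point and the estimate of "how fat the tails are" collapses:
# sampling error swamps any inference about non-Gaussianity.
```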
The results show that, clearly, we have good intuitions when it comes to Mediocristan, but horribly poor ones when it comes to Extremistan—yet economic life is almost all Extremistan. We do not have good intuition for that atypicality of large deviations.
A general principle is that, while in the first three quadrants you can use the best model or theory you can find, and rely on it, doing so is dangerous in the Fourth Quadrant: no theory or model should be better than just any theory or model.
In other words, the Fourth Quadrant is where the difference between absence of evidence and evidence of absence becomes acute.