Antifragile by Nassim Nicholas Taleb: Summary & Notes

Rated: 10/10

Available at: Amazon

ISBN: 9781400067824

Related: The Black Swan, Skin in the Game

Summary

An excellent book that has influenced my thinking more than almost any other, and continues to be a reference I go back to. This book builds on The Black Swan.

Taleb puts forward evidence and definitions for fragility, robustness, and antifragility, explains where they apply in life, and why you want to strive for antifragility. Great for improving contrarian thinking.

Notes

Prologue

  • Anything that has more upside than downside from random events (or certain shocks) is antifragile; the reverse is fragile.
  • The largest source of fragility: absence of skin in the game.
  • Soviet-Harvard delusion: the (unscientific) overestimation of the reach of scientific knowledge.
  • Less is more and usually more effective.
  • Being accommodating towards anyone committing a nefarious action condones it.


Book I - The Antifragile: An Introduction

Chapter 1: Between Damocles and Hydra

  • Mithridatization: the result of an exposure to a small dose of a substance that, over time, makes one immune to additional, larger quantities of it.
  • Not quite antifragility, but close. In other words, you need some stressors.


Chapter 2: Overcompensation and Overreaction Everywhere

  • If you need something done, give it to the busiest person.
  • Lucretius problem: we consider the biggest object of any kind that we have seen in our lives or heard about as the largest item that can possibly exist.
  • Information is antifragile: it benefits more from attempts to harm it than efforts to promote it.
  • Heuristic: to estimate quality of research, take the caliber of the highest detractor, or the caliber of the lowest detractor whom the author answers in print - whichever is lower.
  • Some jobs and professions are fragile - a mid-level bank employee with a mortgage, for example - while others are antifragile, like writers or artists.
  • Heuristic: those who dress outrageously are robust or antifragile; those who dress in a suit and tie are fragile to information about them.


Chapter 3: The Cat and the Washing Machine

  • Everything that has life in it is to some extent antifragile.
  • Distinction between complex and non-complex systems: complex systems have many interdependencies.
  • Causal opacity: it is hard to see the arrow from cause to consequence, making much of conventional methods of analysis, in addition to standard logic, inapplicable.


Chapter 4: What Kills Me Makes Others Stronger

  • The fragility of every startup is necessary for the economy to be antifragile, and that’s what makes, among other things, entrepreneurship work: the fragility of individual entrepreneurs and their necessarily high failure rate.
  • Random stressors (within reason, i.e., no extinction) help species evolve quickly and improve.
  • Errors are valuable as long as they are made in isolation, and learned from. They help the overall system improve (example: airlines; counter: economy - highly inter-dependent).
  • He who has never sinned is less reliable than he who has only sinned once. And someone who has made plenty of errors—though never the same error more than once—is more reliable than someone who has never made any.
  • Nature and nature-like systems want local overconfidence - the failure of individual economic agents is necessary for the whole to improve.
  • Government bailouts disrupt this natural system, and transfer fragility from the collective to the unfit.
  • My dream—the solution—is that we would have a National Entrepreneur Day, with the following message:
  • Most of you will fail, disrespected, impoverished, but we are grateful for the risks you are taking and the sacrifices you are making for the sake of the economic growth of the planet and pulling others out of poverty. You are at the source of our antifragility. Our nation thanks you.


Book II: Modernity and the Denial of Antifragility

Chapter 5: The Souk and the Office Building

  • The more variability you observe in a system, the less Black Swan–prone it is. Risks are visible here, while elsewhere, they are invisible.
  • Switzerland does not have a large central government, but rather a collection of municipal entities called ‘cantons’, which govern themselves.
  • Eye contact with one’s peers changes one’s behaviour.
  • Lobbyists cannot exist in a municipality or small region.
  • Switzerland also has much more apprenticeship than education; more techne (crafts and know-how) than episteme (book knowledge, know-what).

The Great Turkey Problem

  • We can also see from the turkey story the mother of all harmful mistakes: mistaking absence of evidence (of harm) for evidence of absence, a mistake that we will see tends to prevail in intellectual circles and one that is grounded in the social sciences.


Chapter 6: Tell Them I Love (Some) Randomness

  • The longer one goes without a market trauma, the worse the damage when commotion occurs.
  • A donkey equally famished and thirsty caught at an equal distance between food and water would unavoidably die of hunger or thirst. But he can be saved thanks to a random nudge one way or the other. This metaphor is named Buridan’s Donkey, after the medieval philosopher Jean de Buridan.

What to Tell the Foreign Policy Makers

  • To summarize, the problem with artificially suppressed volatility is not just that the system tends to become extremely fragile; it is that, at the same time, it exhibits no visible risks.
  • Seeking stability by achieving stability (and forgetting the second step) has been a great sucker game for economic and foreign policies.
  • One of life’s packages: no stability without volatility.


Chapter 7: Naive Intervention

  • Iatrogenics: net loss, damage from treatment in excess of the benefits (usually hidden or delayed).
  • Compounded by the “agency problem”: when one party has personal interests that are divorced from those of the one using his services (the principal).
  • Anything in which there is naive intervention - or even intervention - will have iatrogenics.
  • Theories are superfragile, while phenomenologies stay (and are robust).
  • We tend to over-intervene in areas with minimal benefits (and large risks), and under-intervene in areas where it’s necessary, like emergencies.
  • What should we control? As a rule, intervening to limit size (of companies, airports, or sources of pollution), concentration, and speed are beneficial in reducing Black Swan risks.
  • To conclude, the best way to mitigate interventionism is to ration the supply of information, as naturalistically as possible. This is hard to accept in the age of the Internet. It has been very hard for me to explain that the more data you get, the less you know what’s going on, and the more iatrogenics you will cause. People are still under the illusion that “science” means more data.

Catalyst-as-Cause Confusion

  • Obama’s mistake illustrates the illusion of local causal chains—that is, confusing catalysts for causes and assuming that one can know which catalyst will produce which effect. The final episode of the upheaval in Egypt was unpredictable for all observers, especially those involved.
  • Political and economic "tail events" are unpredictable, and their probabilities are not scientifically measurable. No matter how many dollars are spent on research, predicting revolutions is not the same as counting cards; humans will never be able to turn politics and economics into the tractable randomness of blackjack.


Chapter 8: Prediction as a Child of Modernity

  • There are ample empirical findings to the effect that providing someone with a random numerical forecast increases his risk taking, even if the person knows the projections are random.
  • To see how redundancy is a nonpredictive, or rather a less predictive, mode of action, let us use the argument of Chapter 2: if you have extra cash in the bank (in addition to stockpiles of tradable goods such as cans of Spam and hummus and gold bars in the basement), you don’t need to know with precision which event will cause potential difficulties.


Book III: A Nonpredictive View of the World

Chapter 9: Fat Tony and the Fragilistas

  • There is another dimension to the need to focus on actions and avoid words: the health-eroding dependence on external recognition. People are cruel and unfair in the way they confer recognition, so it is best to stay out of that game. Stay robust to how others treat you.


Chapter 10: Seneca’s Upside and Downside

  • Stoicism makes you desire the challenge of a calamity. And Stoics look down on luxury: about a fellow who led a lavish life, Seneca wrote: "He is in debt, whether he borrowed from another person or from fortune."
  • Stoicism, seen this way, becomes pure robustness—for the attainment of a state of immunity from one’s external circumstances, good or bad, and an absence of fragility to decisions made by fate, is robustness. Random events won’t affect us either way (we are too strong to lose, and not greedy enough to enjoy the upside), so we stay in the middle column of the Triad.
  • Success brings an asymmetry: you now have a lot more to lose than to gain.
  • My idea of the modern Stoic sage is someone who transforms fear into prudence, pain into information, mistakes into initiation, and desire into undertaking.
  • Simple test: if I have "nothing to lose" then it is all gain and I am antifragile.


Chapter 11: Never Marry the Rock Star

  • The barbell (or bimodal) strategy is a way to achieve antifragility and move to the right side of the Triad.
  • The first step toward antifragility consists in first decreasing downside, rather than increasing upside; that is, by lowering exposure to negative Black Swans and letting natural antifragility work by itself.

Seneca’s Barbell

  • The barbell (a bar with weights on both ends that weight lifters use) is meant to illustrate the idea of a combination of extremes kept separate, with avoidance of the middle. In our context it is not necessarily symmetric: it is just composed of two extremes, with nothing in the center. One can also call it, more technically, a bimodal strategy, as it has two distinct modes rather than a single, central one.
  • For antifragility is the combination aggressiveness plus paranoia—clip your downside, protect yourself from extreme harm, and let the upside, the positive Black Swans, take care of itself.
  • Where can this apply? Work, investing, social policy, exercise.
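
A minimal sketch of the investing version of the barbell, not from the book: put most of the capital in something near-riskless and a small fraction in highly speculative bets. The 90/10 split and the payoff distribution below are illustrative assumptions, chosen only to show that the worst case is capped by construction while the upside stays open-ended.

```python
# Sketch of a barbell allocation (illustrative numbers, not from the book):
# ~90% in a near-riskless asset, ~10% in speculative bets with open-ended upside.
import random

random.seed(42)

SAFE_FRACTION = 0.90   # assume the safe sleeve roughly preserves capital
RISKY_FRACTION = 0.10  # assume the risky sleeve usually goes to zero, rarely pays 20x

def barbell_outcome() -> float:
    """Portfolio multiple after one period under the assumed split."""
    safe = SAFE_FRACTION * 1.00
    risky = RISKY_FRACTION * (20.0 if random.random() < 0.10 else 0.0)
    return safe + risky

outcomes = [barbell_outcome() for _ in range(100_000)]
print(f"worst case : {min(outcomes):.2f}x  (loss clipped at the risky sleeve)")
print(f"best case  : {max(outcomes):.2f}x")
print(f"average    : {sum(outcomes) / len(outcomes):.2f}x")
```

The point is structural: the maximum loss is fixed by the size of the risky sleeve, not by any forecast of what the risky bets will do.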


Book IV: Optionality, Technology, and the Intelligence of Antifragility

  • Teleological fallacy: the illusion that you know exactly where you are going, and that you knew exactly where you were going in the past, and that others have succeeded in the past by knowing where they were going.


Chapter 12: Thales’ Sweet Grapes

  • This kind of sum I’ve called in my vernacular f*** you money—a sum large enough to get most, if not all, of the advantages of wealth (the most important one being independence and the ability to only occupy your mind with matters that interest you) but not its side effects, such as having to attend a black-tie charity event and being forced to listen to a polite exposition of the details of a marble-rich house renovation.
  • The worst side effect of wealth is the social associations it forces on its victims, as people with big houses tend to end up socializing with other people with big houses.

Option and Symmetry

  • The formula in Chapter 10 was: antifragility equals more to gain than to lose equals more upside than downside equals asymmetry (favorable) equals likes volatility. And if you make more when you are right than you are hurt when you are wrong, then you will benefit, in the long run, from volatility (and the reverse). You are only harmed if you repeatedly pay too much for the option.

Things That Like Dispersion

  • One property of the option: it does not care about the average outcome, only the favorable ones (since the downside doesn’t count beyond a certain point). Authors, artists, and even philosophers are much better off having a very small number of fanatics behind them than a large number of people who appreciate their work.
  • Beyond books, consider this simple heuristic: your work and ideas, whether in politics, the arts, or other domains, are antifragile if, instead of having one hundred percent of the people finding your mission acceptable or mildly commendable, you are better off having a high percentage of people disliking you and your message (even intensely), combined with a low percentage of extremely loyal and enthusiastic supporters. Options like dispersion of outcomes and don’t care about the average too much.
  • No one at present dares to state the obvious: growth in society may not come from raising the average the Asian way, but from increasing the number of people in the “tails,” that small, very small number of risk takers crazy enough to have ideas of their own, those endowed with that very rare ability called imagination, that rarer quality called courage, and who make things happen.

How to Be Stupid

  • If you "have optionality," you don’t have much need for what is commonly called intelligence, knowledge, insight, skills, and these complicated things that take place in our brain cells. For you don’t have to be right that often. All you need is the wisdom to not do unintelligent things to hurt yourself (some acts of omission) and recognize favorable outcomes when they occur.

Nature and Options

  • Let us call trial and error tinkering when it presents small errors and large gains.

The Rationality

  • To crystallize, take this description of an option:
  • Option = asymmetry + rationality
  • The rationality part lies in keeping what is good and ditching the bad, knowing to take the profits.


Chapter 13: Lecturing Birds on How to Fly

  • The error of naive rationalism leads to overestimating the role and necessity of the second type, academic knowledge, in human affairs—and degrading the uncodifiable, more complex, intuitive, or experience-based type.
  • This is called the Baconian linear model, after the philosopher of science Francis Bacon; I am adapting its representation by the scientist Terence Kealey (who, crucially, as a biochemist, is a practicing scientist, not a historian of science) as follows:
  • Academia → Applied Science and Technology → Practice
  • While this model may be valid in some very narrow (but highly advertised) instances, such as building the atomic bomb, the exact reverse seems to be true in most of the domains I’ve examined. Or, at least, this model is not guaranteed to be true and, what is shocking, we have no rigorous evidence that it is true.
  • So we are blind to the possibility of the alternative process, or the role of such a process, a loop:
  • Random Tinkering (antifragile) → Heuristics (technology) → Practice and Apprenticeship → Random Tinkering (antifragile) → Heuristics (technology) → Practice and Apprenticeship


Chapter 14: When Two Things Are Not the “Same Thing”

  • Now let’s look at evidence of the direction of the causal arrow, that is, whether it is true that lecture-driven knowledge leads to prosperity. Serious empirical investigation (largely thanks to one Lant Pritchett, then a World Bank economist) shows no evidence that raising the general level of education raises income at the level of a country. But we know the opposite is true, that wealth leads to the rise of education—not an optical illusion.
  • Further, let me remind the reader that scholarship and organized education are not the same.
  • Also, note that I am not saying that universities do not generate knowledge at all and do not help growth (outside, of course, of most standard economics and other superstitions that set us back); all I am saying is that their role is overly hyped-up and that their members seem to exploit some of our gullibility in establishing wrong causal links, mostly on superficial impressions.

Polished Dinner Partners

  • Entrepreneurs are selected to be just doers, not thinkers, and doers do, they don’t talk, and it would be unfair, wrong, and downright insulting to measure them in the talk department.

Prometheus and Epimetheus

  • All this does not mean that tinkering and trial and error are devoid of narrative: they are just not overly dependent on the narrative being true—the narrative is not epistemological but instrumental. For instance, religious stories might have no value as narratives, but they may get you to do something convex and antifragile you otherwise would not do, like mitigate risks.
  • Expert problems (in which the expert knows a lot but less than he thinks he does) often bring fragilities, and acceptance of ignorance the reverse. Expert problems put you on the wrong side of asymmetry.
  • When you are fragile you need to know a lot more than when you are antifragile. Conversely, when you think you know more than you do, you are fragile (to error).


Chapter 15: History Written by the Losers

Governments Should Spend on Nonteleological Tinkering, Not Research

  • There is no evidence that strategic plans work.
  • Instead, money should go to tinkerers who you can trust to milk the option.
  • When engaging in tinkering, you incur a lot of small losses, then once in a while you find something rather significant. Such methodology will show nasty attributes when seen from the outside—it hides its qualities, not its defects.
  • In the antifragile case (of positive asymmetries, positive Black Swan businesses), such as trial and error, the sample track record will tend to underestimate the long-term average; it will hide the qualities, not the defects.

To Fail Seven Times, Plus or Minus Two

  • Let me stop to issue rules based on the chapter so far. (i) Look for optionality; in fact, rank things according to optionality, (ii) preferably with open-ended, not closed-ended, payoffs; (iii) Do not invest in business plans but in people, so look for someone capable of changing six or seven times over his career, or more (an idea that is part of the modus operandi of the venture capitalist Marc Andreessen); one gets immunity from the backfit narratives of the business plan by investing in people. It is simply more robust to do so; (iv) Make sure you are barbelled, whatever that means in your business.


Chapter 16: A Lesson in Disorder

  • The largest hindrance to the development of children: the soccer mom.
  • Explanation: they try to eliminate trial and error, the antifragility, from children’s lives.
  • Provided we have the right type of rigor, we need randomness, mess, adventures, uncertainty, self-discovery, near-traumatic episodes, all these things that make life worth living, compared to the structured, fake, and ineffective life of an empty-suit CEO with a preset schedule and an alarm clock.


Chapter 17: Fat Tony Debates Socrates

  • Textbook “knowledge” misses a dimension, the hidden asymmetry of benefits—just like the notion of average. The payoff, what happens to you (the benefits or harm from it), is always the most important thing, not the event itself.


Book V: The Nonlinear and the Nonlinear

Chapter 18: On the Difference Between a Large Stone and a Thousand Pebbles

A Simple Rule to Detect the Fragile

  • For the fragile, shocks bring higher harm as their intensity increases (up to a certain level).
  • Your car is fragile. Driving it into a wall at 50 mph causes far more damage than driving it into the same wall ten times at 5 mph.
  • Let me reexpress my previous rule:
  • For the fragile, the cumulative effect of small shocks is smaller than the single effect of an equivalent single large shock.
  • Now let us flip the argument and consider the antifragile. Antifragility, too, is grounded in nonlinearities, nonlinear responses.
  • For the antifragile, shocks bring more benefits (equivalently, less harm) as their intensity increases (up to a point). (A numerical sketch of both rules follows this list.)
  • A simple case—known heuristically by weight lifters. Lifting one hundred pounds once brings more benefits than lifting fifty pounds twice, and certainly a lot more than lifting one pound a hundred times.
  • In project management, Bent Flyvbjerg has shown firm evidence that an increase in the size of projects maps to poor outcomes and higher and higher costs of delays as a proportion of the total budget. But there is a nuance: it is the size per segment of the project that matters, not the entire project.
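
A numerical sketch of the two rules above. The quadratic and power-law forms are assumptions for illustration, not formulas from the book; the only point is the shape of the response.

```python
# Illustrative convex responses (the exponents are assumptions, not from the book).
def harm(speed_mph: float) -> float:
    # Fragile response: harm accelerates with the intensity of the shock.
    return speed_mph ** 2

def benefit(pounds: float) -> float:
    # Antifragile response: benefit grows faster than proportionally with the stressor.
    return pounds ** 1.5

# Fragile: one 50 mph crash vs ten 5 mph bumps.
print(harm(50), 10 * harm(5))  # 2500 vs 250: the single large shock dominates

# Antifragile: one 100 lb lift vs two 50 lb lifts vs a hundred 1 lb lifts.
print(benefit(100), 2 * benefit(50), 100 * benefit(1))  # 1000.0 vs ~707.1 vs 100.0
```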

Why Planes Don’t Arrive Early

  • Because travel time cannot really be negative, uncertainty tends to cause delays, making arrival time increase, almost never decrease. Or it makes arrival time decrease by just minutes, but increase by hours, an obvious asymmetry. Anything unexpected, any shock, any volatility, is much more likely to extend the total flying time. (A sketch of this asymmetry follows this list.)
  • Indeed, governments do not need wars at all to get us in trouble with deficits: the underestimation of the costs of their projects is chronic for the very same reason 98 percent of contemporary projects have overruns. They just end up spending more than they tell us. This has led me to install a governmental golden rule: no borrowing allowed, forced fiscal balance.
  • To conclude this chapter, fragility in any domain, from a porcelain cup to an organism, to a political system, to the size of a firm, or to delays in airports, resides in the nonlinear.
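
A sketch of the arrival-time asymmetry mentioned above: shocks can add hours but shave off only minutes, so the more volatility, the later the average arrival. The flight length, the cap on time saved, and the shock distribution are illustrative assumptions.

```python
# Asymmetric response of arrival time to volatility (illustrative numbers).
import random

random.seed(1)
SCHEDULED = 360        # a hypothetical six-hour flight, in minutes
MAX_TIME_SAVED = 10    # assume tailwinds can shave off at most ~10 minutes

def actual_time(volatility: float) -> float:
    shock = random.gauss(0, volatility)
    return SCHEDULED + max(shock, -MAX_TIME_SAVED)  # delays unbounded, savings capped

for vol in (5, 30, 90):
    mean = sum(actual_time(vol) for _ in range(100_000)) / 100_000
    print(f"volatility {vol:>2} min -> average trip ~ {mean:.0f} min")
```

More uncertainty moves the average in one direction only, which is the signature of a concave (fragile) exposure.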


Chapter 19: The Philosopher’s Stone and Its Inverse

  • Fragility (and antifragility) detection heuristic: Let’s say you want to check whether a town is overoptimized. Say you measure that when traffic increases by ten thousand cars, travel time grows by ten minutes. But if traffic increases by ten thousand more cars, travel time now extends by an extra thirty minutes. Such acceleration of traffic time shows that traffic is fragile and you have too many cars and need to reduce traffic until the acceleration becomes mild (acceleration, I repeat, is acute concavity, or negative convexity effect).
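
A minimal sketch of this detection heuristic, using the traffic numbers above. The travel_time function is a hypothetical stand-in for whatever response you can actually measure; the test is simply whether equal increments produce accelerating damage.

```python
# Detection heuristic: probe with equal-sized increments and check whether
# the marginal effect accelerates (positive second difference = fragility).
def travel_time(extra_cars: int) -> float:
    # Hypothetical convex response consistent with the numbers above:
    # +10,000 cars -> +10 minutes; +20,000 cars -> +40 minutes in total.
    return 10.0 * (extra_cars / 10_000) ** 2

def is_fragile(response, base: int, step: int) -> bool:
    first = response(base + step) - response(base)
    second = response(base + 2 * step) - response(base + step)
    return second > first

print(is_fragile(travel_time, 0, 10_000))  # True: this traffic system is fragile
```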

The Idea of Positive and Negative Model Error

  • So—and this is the key to the Triad—we can classify things by three simple distinctions: things that, in the long run, like disturbances (or errors), things that are neutral to them, and those that dislike them. By now we have seen that evolution likes disturbances. We saw that discovery likes disturbances. Some forecasts are hurt by uncertainty—and, like travel time, one needs a buffer. Airlines figured out how to do it, but not governments, when they estimate deficits.
  • Finally, this method can show us where the math in economic models is bogus—which models are fragile and which ones are not. Simply do a small change in the assumptions, and look at how large the effect, and if there is acceleration of such effect. Acceleration implies—as with Fannie Mae—that someone relying on the model blows up from Black Swan effects.

How to Lose a Grandmother

  • The variability often turns out to be much more important than the average. The notion of average is of no significance when one is fragile to variations—the dispersion in possible thermal outcomes here matters much more.
  • Let us call that second piece of information the second-order effect, or, more precisely, the convexity effect.
  • “Never cross a river that is, on average, four feet deep.”

Now the Philosopher’s Stone

  • The following note would allow us to understand:
  • (a) The severity of the problem of conflation (mistaking the price of oil for geopolitics, or mistaking a profitable bet for good forecasting—not convexity of payoff and optionality).
  • (b) Why anything with optionality has a long-term advantage—and how to measure it.
  • (c) An additional subtle property called Jensen’s inequality.
  • Recall the traffic example from Chapter 18: 90,000 cars for one hour, then 110,000 cars for the next, for an average of 100,000, and traffic will be horrendous. On the other hand, assume we have 100,000 cars for each of the two hours, and traffic will be smooth and time in traffic short.
  • The number of cars is the something, a variable; traffic time is the function of something. The behavior of the function is such that it is, as we said, "not the same thing." We can see here that the function of something becomes different from the something under nonlinearities.
  • (a) The more nonlinear, the more the function of something divorces itself from the something.
  • (b) The more volatile the something—the more uncertainty—the more the function divorces itself from the something. Let us consider the average number of cars again. The function (travel time) depends more on the volatility around the average. Things degrade if there is unevenness of distribution.
  • (c) If the function is convex (antifragile), then the average of the function of something is going to be higher than the function of the average of something. And the reverse when the function is concave (fragile).
  • As an example for (c), which is a more complicated version of the bias, assume that the function under question is the squaring function (multiply a number by itself). This is a convex function. Take a conventional die (six sides) and consider a payoff equal to the number it lands on, that is, you get paid a number equivalent to what the die shows—1 if it lands on 1, 2 if it lands on 2, up to 6 if it lands on 6. The square of the expected (average) payoff is then ((1+2+3+4+5+6)/6)² = 3.5², which is 12.25. So the function of the average equals 12.25.
  • But the average of the function is as follows. Take the square of every payoff, (1²+2²+3²+4²+5²+6²)/6, that is, the average square payoff, and you can see that the average of the function equals 15.17. (A short numerical check follows this list.)
  • So, since squaring is a convex function, the average of the square payoff is higher than the square of the average payoff. The difference here between 15.17 and 12.25 is what I call the hidden benefit of antifragility—here, a 24 percent “edge."
  • There are two biases: one elementary convexity effect, leading to mistaking the properties of the average of something (here 3.5) and those of a (convex) function of something (here 15.17), and the second, more involved, in mistaking an average of a function for a function of an average, here 15.17 for 12.25. The latter represents optionality.
  • Someone with a linear payoff needs to be right more than 50 percent of the time. Someone with a convex payoff, much less.
  • The hidden benefit of antifragility is that you can guess worse than random and still end up outperforming. Here lies the power of optionality—your function of something is very convex, so you can be wrong and still do fine—the more uncertainty, the better.
  • This explains my statement that you can be dumb and antifragile and still do very well.
  • This hidden "convexity bias" comes from a mathematical property called Jensen’s inequality. This is what the common discourse on innovation is missing. If you ignore the convexity bias, you are missing a chunk of what makes the nonlinear world go round. And it is a fact that such an idea is missing from the discourse.
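
A quick check of the die arithmetic above, with squaring as the convex payoff; this is just Jensen's inequality in a few lines.

```python
# Average of a convex function vs. function of the average (Jensen's inequality).
faces = [1, 2, 3, 4, 5, 6]

function_of_average = (sum(faces) / len(faces)) ** 2           # 3.5^2 = 12.25
average_of_function = sum(x ** 2 for x in faces) / len(faces)  # 91/6 ≈ 15.17

print(function_of_average, round(average_of_function, 2))
print(f"convexity edge: {average_of_function / function_of_average - 1:.0%}")  # ~24%
```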

How to Transform Gold into Mud: The Inverse Philosopher’s Stone

  • Let me summarize the argument: if you have favorable asymmetries, or positive convexity, options being a special case, then in the long run you will do reasonably well, outperforming the average in the presence of uncertainty. The more uncertainty, the more role for optionality to kick in, and the more you will outperform.


Book VI: Via Negativa

  • But if we cannot express what something is exactly, we can say something about what it is not - the indirect rather than the direct expression.

Where is the Charlatan?

  • I have used all my life a wonderfully simple heuristic: charlatans are recognizable in that they will give you positive advice, and only positive advice.
  • Yet in practice it is the negative that’s used by the pros, those selected by evolution: chess grandmasters usually win by not losing; people become rich by not going bust (particularly when others do); religions are mostly about interdicts; the learning of life is about what to avoid. You reduce most of your personal risks of accident thanks to a small number of measures.

Subtractive Knowledge

  • Now when it comes to knowledge, the same applies. The greatest - and most robust - contribution to knowledge consists in removing what we think is wrong - subtractive epistemology.
  • In life, antifragility is reached by not being a sucker.
  • So the central tenet of the epistemology I advocate is as follows: we know a lot more what is wrong than what is right, or, phrased according to the fragile/robust classification, negative knowledge (what is wrong, what does not work) is more robust to error than positive knowledge (what is right, what works). So knowledge grows by subtraction much more than by addition—given that what we know today might turn out to be wrong but what we know to be wrong cannot turn out to be right, at least not easily.
  • I discovered that I had been intuitively using the less-is-more idea as an aid in decision making (contrary to the method of putting a series of pros and cons side by side on a computer screen). For instance, if you have more than one reason to do something (choose a doctor or veterinarian, hire a gardener or an employee, marry a person, go on a trip), just don’t do it. It does not mean that one reason is better than two, just that by invoking more than one reason you are trying to convince yourself to do something. Obvious decisions (robust to error) require no more than a single reason.
  • I have often followed what I call Bergson’s razor: “A philosopher should be known for one single idea, not more” (I can’t source it to Bergson, but the rule is good enough).
  • So, a heuristic: if someone has a long bio, I skip him.


Chapter 20: Time and Fragility

  • Technology is at its best when it is invisible. I am convinced that technology is of greatest benefit when it displaces the deleterious, unnatural, alienating, and, most of all, inherently fragile preceding technology.

To Age in Reverse: The Lindy Effect

  • The nonperishable is anything that does not have an organic unavoidable expiration date. The perishable is typically an object, the nonperishable has an informational nature to it. A single car is perishable, but the automobile as a technology has survived about a century (and we will speculate should survive another one). Humans die, but their genes—a code—do not necessarily.
  • For the perishable, every additional day in its life translates into a shorter additional life expectancy. For the nonperishable, every additional day may imply a longer life expectancy.
  • So the longer a technology lives, the longer it can be expected to live.
  • If a book has been in print for forty years, I can expect it to be in print for another forty years.
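
A small simulation of the Lindy effect under an assumed power-law (Pareto) lifetime distribution; the tail exponent and sample size are illustrative, not claims from the book. Conditional on having survived to a given age, expected remaining life grows with that age (estimates far in the tail are noisy because the distribution is heavy-tailed).

```python
# Lindy effect sketch: remaining life expectancy grows with age for a
# power-law (nonperishable-like) lifetime distribution. Parameters are illustrative.
import random

random.seed(7)
ALPHA = 2.0  # tail exponent; with alpha = 2, expected remaining life ~ current age

lifetimes = [random.paretovariate(ALPHA) for _ in range(1_000_000)]

for age in (5, 10, 20, 40):
    survivors = [t for t in lifetimes if t > age]
    expected_remaining = sum(t - age for t in survivors) / len(survivors)
    print(f"survived to {age:>2}: expected remaining life ~ {expected_remaining:.1f}")
```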

Neomania and Treadmill Effects

  • These impulses to buy new things that will eventually lose their novelty, particularly when compared to newer things, are called treadmill effects.
  • So, we can apply criteria of fragility and robustness to the handling of information—the fragile in that context is, like technology, what does not stand the test of time. The best filtering heuristic, therefore, consists in taking into account the age of books and scientific papers.
  • One of my students (who was majoring in, of all subjects, economics) asked me for a rule on what to read. "As little as feasible from the last twenty years, except history books that are not about the last fifty years," I blurted out.

What Does Not Make Sense

  • Let’s take this idea of Empedocles’ dog a bit further: If something that makes no sense to you (say, religion—if you are an atheist—or some age-old habit or practice called irrational); if that something has been around for a very, very long time, then, irrational or not, you can expect it to stick around much longer, and outlive those who call for its demise.


Chapter 21: Medicine, Convexity, and Opacity

  • Simple, quite simple decision rules and heuristics emerge from this chapter. Via negativa, of course (by removal of the unnatural): only resort to medical techniques when the health payoff is very large (say, saving a life) and visibly exceeds its potential harm, such as incontrovertibly needed surgery or lifesaving medicine (penicillin).

First Principle of Iatrogenics

  • Now we can see the pattern: iatrogenics, being a cost-benefit situation, usually results from the treacherous condition in which the benefits are small, and visible—and the costs very large, delayed, and hidden. And of course, the potential costs are much worse than the cumulative gains.

Second Principle of Iatrogenics

  • Second principle of iatrogenics: it is not linear. We should not take risks with near-healthy people; but we should take a lot, a lot more risks with those deemed in danger.

How to Medicate Half the Population

  • Medicine has a hard time grasping normal variability in samples - it is hard sometimes to translate the difference between "statistically significant" and "significant" in effect. A certain disease might marginally lower your life expectancy, but it can be deemed to do so with "high statistical significance," prompting panic when in fact all these studies might be saying is that they established, with a significant statistical margin, that in some cases (say, 1 percent of cases) patients are likely to be harmed by it.


Chapter 22: To Live Long, but Not Too Long

  • So there are many hidden jewels in via negativa applied to medicine. For instance, telling people not to smoke seems to be the greatest medical contribution of the last sixty years.
  • If true wealth consists in worriless sleeping, clear conscience, reciprocal gratitude, absence of envy, good appetite, muscle strength, physical energy, frequent laughs, no meals alone, no gym class, some physical labor (or hobby), good bowel movements, no meeting rooms, and periodic surprises, then it is largely subtractive (elimination of iatrogenics).
  • I am convinced (an inevitable result of nonlinearity) that we are antifragile to randomness in food delivery and composition—at least over a certain range, or number of days.
  • And there is this antifragility to the stressor of the fast, as it makes the wanted food taste better and can produce euphoria in one’s system. Breaking a fast feels like the exact opposite of a hangover.
  • Walking effortlessly, at a pace below the stress level, can have some benefits—or, as I speculate, is necessary for humans, perhaps as necessary as sleep.


Chapter 23: Skin in the Game: Antifragility and Optionality at the Expense of Others

  • Heroism is the exact inverse of the agency problem: someone elects to bear the disadvantage (risks his own life, or harm to himself, or, in milder forms, accepts to deprive himself of some benefits) for the sake of others.
  • A half-man (or, rather, half-person) is not someone who does not have an opinion, just someone who does not take risks for it.

Hammurabi

  • To me, every opinion maker needs to have "skin in the game" in the event of harm caused by reliance on his information or opinion.
  • Further, anyone producing a forecast or making an economic analysis needs to have something to lose from it, given that others rely on those forecasts.
  • The second heuristic is that we need to build redundancy, a margin of safety, avoiding optimization, mitigating (even removing) asymmetries in our sensitivity to risk.
  • No opinion without risk; and, of course, no risk without hope for return.
  • There is another central element of ancient Mediterranean ethics: Factum tacendo, crimen facias acrius: For Publilius Syrus, he who does not stop a crime is an accomplice. (I’ve stated my own version of this in the prologue, which needs to be reiterated: if you see fraud and don’t say fraud, you are a fraud.)

The Stiglitz Syndrome

  • Stiglitz Syndrome = fragilista (with good intentions) + ex post cherry-picking
  • Finally, the cure to many ethical problems maps to the exact cure for the Stiglitz effect, which I state now.
  • Never ask anyone for their opinion, forecast, or recommendation. Just ask them what they have—or don’t have—in their portfolio.
  • The psychologist Gerd Gigerenzer has a simple heuristic. Never ask the doctor what you should do. Ask him what he would do if he were in your place. You would be surprised at the difference.

The Problem of Frequency, or How to Lose Arguments

  • To put it in Fat Tony terms: suckers try to be right, nonsuckers try to make the buck, or:
  • Suckers try to win arguments, nonsuckers try to win.

The Antifragility and Ethics of (Large) Corporations

  • A rule then hit me: with the exception of, say, drug dealers, small companies and artisans tend to sell us healthy products, ones that seem naturally and spontaneously needed; larger ones—including pharmaceutical giants—are likely to be in the business of producing wholesale iatrogenics, taking our money, and then, to add insult to injury, hijacking the state thanks to their army of lobbyists.

Artisans, Marketing, and the Cheapest to Deliver

  • Another attribute of the artisanal. There is no product that I particularly like that I have discovered through advertising and marketing: cheeses, wine, meats, eggs, tomatoes, basil leaves, apples, restaurants, barbers, art, books, hotels, shoes, shirts, eyeglasses, pants, olives, olive oil, etc. The same applies to cities, museums, art, novels, music, painting, sculpture. These may have been “marketed" in some sense, by making people aware of their existence, but this isn’t how I came to use them—word of mouth is a potent naturalistic filter. Actually, the only filter.


Chapter 24: Fitting Ethics to a Profession

Wealth Without Independence

  • There is a phenomenon called the treadmill effect, similar to what we saw with neomania: you need to make more and more to stay in the same place. Greed is antifragile—though not its victims.
  • The point isn’t that making a living in a profession is inherently bad; rather, it’s that such a person becomes automatically suspect when dealing with public affairs, matters that involve others. The definition of the free man, according to Aristotle, is one who is free with his opinions—as a side effect of being free with his time.
  • In other words, for Fat Tony, it was a very, very specific definition of a free person: someone who cannot be squeezed into doing something he would otherwise never do.
  • In a municipality (rather than a nation-state), shame is the penalty for violation of ethics, making things more symmetric.
  • A simple solution, but quite drastic: anyone who goes into public service should not be allowed to subsequently earn more from any commercial activity than the income of the highest paid civil servant.
  • If someone has an opinion, like, say, the banking system is fragile and should collapse, I want him invested in it so he is harmed if the audience for his opinion is harmed.
  • But when general statements about the collective welfare are made, instead, absence of investment is what is required. Via negativa.
  • Increasingly, data can only truly deliver via negativa–style knowledge—it can be effectively used to debunk, not confirm.


Conclusion

  • As usual at the end of the journey, while I was looking at the entire manuscript on a restaurant table, someone from a Semitic culture asked me to explain my book standing on one leg.
  • Shaiy’s extraction was: Everything gains or loses from volatility. Fragility is what loses from volatility and uncertainty.
  • The best way to verify that you are alive is by checking if you like variations. Remember that food would not have a taste if it weren’t for hunger; results are meaningless without effort, joy without sadness, convictions without uncertainty, and an ethical life isn’t so when stripped of personal risks.

Want to get my latest book notes? Subscribe to my newsletter to get one email a week with new book notes, blog posts, and favorite articles.
