There are not many books about risk management that have been translated into 23 languages. Nassim Nicholas Taleb’s bestselling Fooled by Randomness is one of them. But Taleb’s insights are not to everyone’s liking. The central thesis of Fooled by Randomness is that human brains have evolved to see causal patterns even in situations where no cause and effect exists. This leads us to overestimate our ability to predict the future and to overly reward businesspeople who take dumb chances but get lucky. That is not the kind of message that will endear Taleb to lucky (and rich) businesspeople, or to anyone who imagines this world can be neatly explained during the course of a story or within the length of a tweet. What are the implications for risk managers, the professionals paid to manage luck (both good and bad)?
Risk managers should be wary of neat patterns and convenient explanations, because of the damage that might be done if they later prove unreliable. However, risk managers are not computers. They are human beings too. Whilst Taleb is an advocate of better risk management, and especially the thorough use of data and mathematics to explore potential downside risk, he is unforgiving in his criticism of human foibles, and he makes sharp observations about the risk managers he came across whilst working as a Wall Street trader. This excerpt can be found near the end of the second chapter of Fooled by Randomness, in a section entitled “Risk Managers”:
The risk managers’ job feels strange… They are limited in their power to stop profitable traders from taking risks, given that they would, ex post, be accused… of costing the shareholder some precious opportunity shekels. On the other hand, the occurrence of a blowup would cause them to be responsible for it. What to do in such circumstances?
Their focus becomes to play politics, cover themselves by issuing vaguely phrased internal memoranda that warn against risk-taking activities yet stop short of completely condemning it, lest they lose their job. Like a doctor torn between the two types of errors, the false positive (telling the patient he has cancer when in fact he does not) and the false negative (telling the patient he is healthy when in fact he has cancer), they need to balance their existence with the fact that they inherently need some margin of error in their business.
Taleb continues this theme in the following section, entitled “Epiphenomena”:
From the standpoint of an institution, the existence of a risk manager has less to do with actual risk reduction than it has to do with the impression of risk reduction. Philosophers since Hume and modern psychologists have been studying the concept of epiphenomenalism, or when one has the illusion of cause-and-effect. Does the compass move the boat? By “watching” your risks, are you effectively reducing them or are you giving yourself the feeling that you are doing your duty? Are you like a chief executive officer or just an observing press officer? Is such illusion of control harmful?
This seems to be an inherently bad proposition for the risk manager. The risk manager will be blamed if something goes wrong, because the disaster was not prevented, and despised if everything goes well, because his or her work increases costs but seemingly delivers no benefits. And even if risk managers provide measures of the value they add to the business, those measures cannot be objectively confirmed, because it is impossible to compare the outcome in this universe with the outcome in a hypothetical universe where the risk manager does not exist.
What can be done to make the risk manager’s fate less terrible? One solution might be to acknowledge that the risk manager cannot possibly please everyone. Many people prefer to live a stupid life where positive outcomes depend upon a combination of good fortune and the social advantages that accrue from agreeing with widespread but often mistaken explanations of how things work. In other words, they hedge their bets by choosing to believe the same things that other people believe, guaranteeing they are only proven wrong when most other people are proven wrong. But these people will never reward a superior strategy for managing risk, because they manage their personal risk by choosing to make average decisions. A superior risk manager will not succeed by pleasing an average decision-maker, so it is not worth trying to persuade everyone of his or her talents. Risk managers should focus instead on educating a chosen few capable of understanding the paradoxes involved in managing risk. Taleb follows a similar strategy in his own life.
By definition, I go against the grain, so it should come as no surprise that my style and methods are neither popular nor easy to understand. But I have a dilemma: On the one hand, I work with others in the real world… So my wish is for people in general to remain fools of randomness (so I can trade against them), yet for there to be a minority intelligent enough to value my methods and hire my services. In other words, I need people to remain fools of randomness, but not all of them.
Such a strategy is never going to receive the most retweets or the most ‘likes’ on LinkedIn. But therein lies the point: the most intelligent approach is unlikely to be the one that everyone can understand and appreciate. If you want to successfully manage risk then you should know who your audience is, and what information and ideas will influence the decisions they make. Success involves pleasing the people who matter, not pleasing everybody. And Taleb has shown how the strategy can be successful. Though many dislike Taleb’s conclusions, Fooled by Randomness would not have been published in 23 languages if nobody appreciated his way of thinking about risk.