"It may fall to the political scientists rather than the economists to give us a complete story of what happened" [during the financial crisis].
To make this contrast more stark, compare the authoritative and conclusive accident reports of the National Transportation Safety Board (NTSB)—which investigates and documents the who–what–when–where–and–why of every single plane crash—with the twenty-one separate and sometimes inconsistent accounts of the financial crisis we’ve just reviewed (and more books are surely forthcoming). Why is there such a difference? The answer is simple: complexity and human behavior.
While airplanes often crash because of human behavior or “pilot error,” the causes of such accidents can usually be accurately and definitively determined with sufficient investigatory resources. Typically there are a small number of human actors involved—the pilots, an air traffic controller, and perhaps some maintenance crew. Also, the nature of accidents in this domain is fairly tightly constrained: an airplane loses aerodynamic lift and falls to the ground. While there may be many underlying reasons for such an outcome, investigators often have a pretty clear idea of where to look. In other words, we have sufficiently precise models for how airplanes fly so that we can almost always determine the specific causal factors for their failure through relatively linear chains of physical investigation and logical deduction. Human behavior is just one part of that chain, and thanks to flight data recorders and the relatively narrow set of operations that piloting an aircraft involves—for example, the pilot must lower the landing gear before the plane can land, and there’s only one way to lower it—the complexity of the human/machine interface isn’t beyond the collective intellectual horsepower of the NTSB’s teams of expert investigators.
Now compare this highly structured context with piloting an investment bank, where the “instrument panel” is the steady stream of news reports, market data, internal memos, emails, text messages, and vague impressions that a CEO is bombarded with almost 24/7, not all of which is true; where the “flight controls” are often human subordinates, not mechanical devices or electronic switches; and where there is no single “flight data recorder,” but rather hundreds of distinct narratives from various stakeholders with different motivations and intentions, generating both fact and fantasy. If we want to determine whether or not the failure of Lehman Brothers was due to “pilot error,” like the NTSB, we need to reconstruct the exact state of Lehman prior to the accident, deduce the state of mind of all the executives involved at the time, determine which errors of commission and omission they made, and rule out all but one of the many possible explanations of the realized course of events.
Given that we can't even agree on a set of facts surrounding the financial crisis, nor do we fully understand what the "correct" operation of a financial institution ought to be in every circumstance, the challenges facing economists are far greater than those faced by the NTSB. However, the stakes are also far higher, as we've witnessed over the past four years. There is a great deal to be learned from the NTSB's methods and enviable track record, as Fielding, Lo, and Yang (2011) illustrate in their case study of this remarkable organization. And one of the most basic elements of their success is starting with a single set of incontrovertible facts. In other words, we need the equivalent of the "black box" flight data recorder for the financial industry; otherwise we may never get to the bottom of any serious financial accident.--Andrew W. Lo (2012), Journal of Economic Literature.
The passages quoted above come from a widely read, wonderful, extended survey of books on the financial crisis by economists and journalists. Andrew Lo (MIT) is co-author of one of the most important and influential papers in the history of stock-market analysis, which denies that markets behave as a random walk--a key feature of the efficient market hypothesis [recall and here]. It has over 7,000 citations. Oddly enough, his most important paper, "The National Transportation Safety Board: A Model for Systemic Risk Management," has only 17 citations. (This tells us the market for ideas in financial economics is not efficient.) Odder yet, when Lo draws on this research in the passage quoted from his 2012 review article, he takes extremely incomplete lessons from it. Let me explain.
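For readers curious about the substance of that random-walk result: assuming the paper in question is Lo and MacKinlay's 1988 variance-ratio study (the citation count fits), its core idea can be sketched in a few lines of Python. Under a random walk, the variance of q-period log returns is q times the variance of one-period returns, so the ratio of the two should be about 1. This is only a toy version under that assumption: the published test adds bias corrections and heteroskedasticity-robust statistics that are omitted here, and the function name and simulated data are merely illustrative.

```python
import numpy as np

def variance_ratio(prices, q):
    """Variance ratio VR(q): roughly 1 if log prices follow a random walk.

    A minimal sketch of the idea behind a variance-ratio test; it omits
    the bias corrections and robust standard errors of the published test.
    """
    log_p = np.log(np.asarray(prices, dtype=float))
    r1 = np.diff(log_p)            # one-period log returns
    rq = log_p[q:] - log_p[:-q]    # overlapping q-period log returns
    # Under a random walk, Var(q-period return) = q * Var(one-period return)
    return rq.var(ddof=1) / (q * r1.var(ddof=1))

# Example: a simulated random walk should give VR(q) close to 1;
# positively autocorrelated returns push VR(q) above 1.
rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 5000)))
print(round(variance_ratio(prices, 5), 2))  # typically close to 1.0
```

For weekly U.S. index returns, Lo and MacKinlay found ratios significantly above 1, which is how the paper rejects the random walk.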
I first encountered the idea of a financial cockpit when I met Jean-Pierre Fouque in Santa Barbara (his office was right down the hall when I was visiting the philosophy department). The idea is deceptively simple: make sure that key policy-makers (at central banks and other oversight agencies) have real-time access to relevant data to monitor systemic risk in the market. As Lo notes (in note 23, attached to the quoted passage), something like this was "behind the Dodd-Frank Act's creation of the Office of Financial Research." (See also David Warsh's account of Lo's 2008 testimony to Congress.) The guidelines of the (more international) Financial Stability Board also tacitly rely on bank regulators having access to relevant data in real time so that they can regulate and monitor, say, shadow-banking entities.
A moment's reflection makes clear that even in airplanes nobody (inside or outside the cockpit) has access to all the relevant data before or during a crash. A further moment's reflection reminds us that lots of participants in markets have incentives to (a) hide their data or (b) generate dummy data to gain a competitive advantage (and they would surely not trust the probity of the regulators never to abuse the data for public or private gain). Moreover, and this is a point that Lo and his co-authors make in his overlooked piece on the NTSB, there is (with the exception of outside lawyers) nobody who benefits from airplane crashes, whereas in trades gone sour there may well be counterparties that gain (and there are examples of folk who made a killing during the crash). Not to mention the fact that nobody really knows what data should or should not be included in that financial cockpit; as Lo's review reminds us, we simply don't understand what happened yet, and so we don't have reliable models to tell us which data are salient. As we know from nuclear power station disasters [many of which involve mistakes in reading the data in the control room] and other complex-systems failures, including airplanes (recall), a complex system in which monitoring the health of the system is not part of the normal functioning of its most important (and well-trained, if not drilled) actors is fundamentally unstable.
So, even if we do figure out what happened during 2007-9, odds are that during the next crisis systemic risk will be generated, in part, by products that have not yet been invented, created by people who do not feel it's their job to care for the system. There is an important policy lesson here: all approaches that count on regulators of complex systems (such as financial markets) having access to, and knowing how to act on, data during a crisis are not robust. To be sure, regulators don't fly entirely in the dark and don't only make bad decisions: the Treasury Department's guarantee of money market funds almost certainly prevented a total meltdown of the financial system in 2008 (recall).
This is not to deny the significance of having high-quality data that may be useful for reconstructing what happened after the fact (as Lo suggests, and this may have been all he intended to suggest). But such a post-facto perspective misses some of the key features of air-travel success: (i) the ongoing drilling and training of pilots, air-traffic controllers, and others who are part of the system; (ii) the extensive and expensive stress testing of planes and components before commercial flight--in the JEL article, Lo misleadingly suggests that we really know how planes fly; as I learned from George Smith (a failure expert), that's true at a high level of generality, but despite cheap and massive computer power, in practice we are constantly dealing with extremely subtle approximations;* (iii) the ongoing feedback loop between design, practice (including minor failures), investigation, and regulation. Moreover (and now drawing on Lo's research on the NTSB): (iv) at the NTSB, the separation of the regulation and investigation functions--this is a key feature identified by Lo and his colleagues; that is, the "paradox of less regulatory authority yielding greater influence is one of the most striking characteristics of the NTSB;" (v) at the NTSB there is a culture of openness and an "inclusive atmosphere for all stakeholders involved" (again quoting Lo et al.). Most astonishingly of all, (vi) "Each of the [NTSB's] board members and employees undergoes a rigorous ethical review process and is subject to ongoing restrictions such as not having any financial interests in transportation-related companies so as to minimize actual and potential conflicts." (Section 2.3 makes for interesting reading for those of us who note that economists and financial regulators routinely act as if financial incentives do not apply to them!) And, not trivially, (vii) work at the NTSB is a capstone, not a stepping-stone to future careers, which contrasts with the revolving door we encounter between Wall Street, central banks, academia, and politics. It is disconcerting that when Lo writes for a wide audience of fellow economists in the JEL,** he does not challenge them to reflect on his best insights.
*UPDATE: Susan Sterrett points out on Facebook "that sometimes there are phenomena of a previously unknown sort that get found out after an investigation, and sometimes it is a phenomenon that arises only for some specific aircraft designs."
**My list of seven (i-vii) overlooks other fine best practices identified by Fielding, Lo, and Yang at the NTSB.