On Bank Runs
The following is an excerpt from Re-Architecting Trust: The Curse of History and the Crypto Cure for Money, Markets and Platforms.
One of the most important services that commercial banks provide is the practice of maturity transformation, otherwise known as borrowing short term and lending long term. Although we don’t usually think of them as such, banks are among the largest natural borrowers in the economy. Every dollar deposited by a consumer or business is technically a loan — cash swapped for a ledger entry with the promise of someday getting it back, possibly with interest. Customer deposits are recorded on a bank’s balance sheet as liabilities. On the other side of the ledger are mostly the loans it dishes out using funds borrowed from its depositors. These are a bank’s assets.
As a general rule, depositors who lend their money to a bank do so for short periods and can request their money back at will. After all, what good is a checking account if you can’t make a payment when you need to? Depositors pay for these privileges by accepting less interest than they otherwise would, which is why more restrictive deposits like CDs pay higher rates. Those who borrow from a bank, on the other hand, like home buyers looking for a mortgage, gladly pay higher interest for long-dated loans that get repaid on a fixed schedule. Maturity transformation is the tricky business of intermediating between these disparate needs, the reward for which is the interest rate differential between what the bank charges borrowers and pays depositors, a difference that can be quite large under the right conditions. As the old joke goes, being a banker is all about practicing the 3–6–3 rule: borrow at 3 percent, lend at 6 percent, and hit the golf tee by 3 p.m.
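For readers who like to see the arithmetic, here is a minimal sketch in Python of the spread business the joke describes. Only the 3 percent and 6 percent rates come from the joke; the deposit and loan balances are hypothetical.

```python
# Illustrative sketch only: the 3 and 6 percent rates come from the joke above;
# the balance-sheet figures are hypothetical.
deposits = 100_000_000        # short-term funds borrowed from depositors ($)
deposit_rate = 0.03           # interest paid to depositors ("borrow at 3 percent")
loans = 90_000_000            # long-dated loans funded by those deposits ($)
loan_rate = 0.06              # interest charged to borrowers ("lend at 6 percent")

interest_earned = loans * loan_rate
interest_paid = deposits * deposit_rate
net_interest_income = interest_earned - interest_paid

print(f"Interest earned:  ${interest_earned:,.0f}")
print(f"Interest paid:    ${interest_paid:,.0f}")
print(f"Spread income:    ${net_interest_income:,.0f} per year")
```

The spread income only materializes, of course, if the borrowers keep paying and the depositors don’t all ask for their money back at once.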
The derogatory tone of this joke is somewhat unfair, as it assumes that anyone could do it. Bankers need to be experts at deciding which borrowers are creditworthy. They also need to stagger their borrowing and lending so there is always enough money coming in to cover whatever has to go out. Most of all, they need to fulfill their duty as anchors of trust. Loan officers might only ever interact with borrowers, but they actually work on behalf of the depositors, lending out their money in a (hopefully) responsible fashion. Too little risk and the depositors won’t earn much interest. Too much, and they may not get their money back.
Walking the line between enough and too much is more art than science, and it was particularly so in the 1920s, a decade that would eventually earn the adjective roaring. Risk is a relative concept and has different textures throughout the economic cycle. Lending money to stock traders who put only 10 percent down and take the rest on margin — a practice pioneered centuries earlier by the Banque Royale — might seem like a dangerous idea, but it worked well for everyone during the five-year span leading up to the October crash, when the Dow Jones Industrial Average more than tripled. Any conservative risk officer would have looked foolish for trying to curtail the practice.
Bull markets have a way of loosening standards, and lenders who don’t join the party are taking a different kind of risk, that of disappointing their shareholders. Margin lending was big business in the 1920s, particularly for banks in New York, many of which commingled commercial banking with securities services. In a setup that would foreshadow their “financial supermarket” status a century later, both National City Bank of New York (Citibank) and Chase National (JPMorgan) had securities underwriting and brokerage affiliates. Each affiliate was technically a separate company from the bank, but the twin activities of lending and trading were so intertwined that the stock certificate for the brokerage business was simply printed on the back of the bank’s.
When the stock market crashed on Black Tuesday, the selling was so amplified by margin lending that the losses were transmitted directly to the banking sector. Borrowers who couldn’t put up additional funds had their shares liquidated, driving prices even lower and thereby pushing other borrowers into margin calls. The problem was further exacerbated by the now-familiar delays between trading and settlement (for securities) and between messaging and settlement (for payments). Cash raised by a broker selling stock did not actually arrive until days later. The resulting liquidity crunch strained the balance sheets of certain banks.
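The feedback loop is easier to see with the margin arithmetic written out. In the sketch below, only the 10 percent down payment comes from the text; the maintenance threshold and the price declines are assumptions chosen for illustration.

```python
# Hypothetical sketch of margin-call arithmetic; only the 10 percent down payment
# comes from the text above.
position_value = 10_000        # stock bought on margin ($)
trader_equity = 1_000          # the trader puts 10 percent down
broker_loan = position_value - trader_equity

maintenance_margin = 0.05      # assumed threshold below which the broker demands more cash

def equity_ratio(price_drop):
    """Trader's equity as a share of the position after prices fall by price_drop."""
    new_value = position_value * (1 - price_drop)
    new_equity = new_value - broker_loan   # the broker's loan does not shrink with prices
    return new_equity / new_value

for drop in (0.03, 0.06, 0.10):
    ratio = equity_ratio(drop)
    status = "margin call -> forced selling" if ratio < maintenance_margin else "ok"
    print(f"{drop:.0%} price drop: equity ratio {ratio:.1%} ({status})")
```

In this toy setup a 6 percent decline already triggers forced selling, and a 10 percent decline wipes out the leveraged trader entirely, which is precisely the selling pressure that pushed prices down further.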
Then there was the banking system’s heavy commitment to real estate, a lumbering asset that, unlike stocks, could not be unloaded with a call to the exchange. Property bubbles have a long history of causing problems for lenders. Under normal circumstances, real estate is the ideal collateral. Unlike securities (which can be diluted by the issuer) or personal property (which can be stolen), physical property is fixed, unique, and durable — thus the use of the adjective real in its name. Real estate also mostly goes up in value. Together, these traits make it generally safe to lend against.
But therein lies the rub, because too much borrowing and leverage can turn a real estate boom into a bubble. Manhattan real estate prices climbed by more than 50 percent in the decade leading up to the crash, with the average American household more than tripling its mortgage debt as a percentage of total net worth. All of that was great for the profits of the financial sector. Then came the collapse, and property prices fell by over 70 percent in five years, driving even the most conservative mortgages underwater. American banks foreclosed on a record number of properties during the Depression, but with limited benefit, as the money they got back was often a fraction of what they had loaned out. Manhattan prices didn’t recover until 1960.
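A rough loan-to-value sketch shows why a decline of that magnitude spares almost no lender. The 70 percent price drop comes from the text; the loan-to-value ratios are hypothetical, and amortization is ignored for simplicity.

```python
# The 70 percent price decline comes from the text; the loan-to-value ratios are
# hypothetical, and amortization is ignored for simplicity.
price_decline = 0.70
original_value = 100_000

for ltv in (0.25, 0.50, 0.80):                 # loan as a share of the pre-crash value
    loan_balance = original_value * ltv
    new_value = original_value * (1 - price_decline)
    status = "underwater" if loan_balance > new_value else "still above water"
    print(f"LTV {ltv:.0%}: property worth ${new_value:,.0f} against a "
          f"${loan_balance:,.0f} loan -> {status}")
```

In this scenario, only a borrower who had put down three-quarters of the purchase price stays above water, which is why even conservative lenders took losses on foreclosure.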
None of these challenges to the basic business of banking need be catastrophic. Banks that find themselves shorthanded have many tools at their disposal. They can sell off their own assets, raise additional capital from shareholders, or borrow from other banks. If worse comes to worst, they can call for help from a Higher Authority. Not God, but the next best thing in financial services: the central bank. Indeed, a major force in the evolution of central banks over the centuries has been the need to occasionally rescue their more pedestrian, commercial cousins. To stave off a crisis, central banks can provide an emergency loan, usually against valuable but currently illiquid assets, such as a basket of mortgages. But this “lender of last resort” facility is meant only for banks with a problem of liquidity, not one of insolvency. In theory, the last thing a central bank wants to do is send the message that it’s willing to bail out reckless bankers.
The First Domino
All these issues — reduced risk standards, too much overlap between commercial banks and their securities counterparts, and overexposure to the real estate market — came to a head for the American banking system in the waning weeks of 1929. But very few commercial banks failed that year, thanks to close attention from state and federal banking authorities and a few heroic measures by the then relatively young Federal Reserve. Clients, for their part, continued to believe in their institutions of trust — at least for a while longer.
But the ongoing deterioration of the economy, combined with even greater declines in the prices of stocks and real estate, would ultimately lead to an avalanche of failures in the ensuing three years, a shattering of the trust framework that would not cease until President Franklin Delano Roosevelt declared a nationwide bank holiday on his second day in office. Today we understand the first major crack to have begun with the gathering of a small crowd outside of an otherwise unremarkable bank branch in the Bronx.
The story of what happened to the aptly named Bank of United States (a publicly traded company that had no affiliation with the US government) was in many ways the story of America in the first few decades of the twentieth century. It was founded by a German immigrant, who in 1913 opened its first branch among the now historic tenement houses of Manhattan’s Lower East Side. It catered to immigrants and small businesses. It grew slowly until it was taken over by the founder’s son, an Ivy League graduate, in the late 1920s. It then expanded rapidly via a series of mergers and acquisitions, eventually becoming the third-largest bank in New York by assets and the largest in the country by number of customers. It would fail over the span of two days.
The Bank of United States was a poster child for all that could go wrong with any trusted entity in a prolonged expansion. The company had expanded faster than its own operations or risk officers could keep up with. It had lent aggressively to low-quality real estate projects and tied up depositor funds in its own executives’ misadventures in the stock market. It did not have the cleanest books. State and federal banking regulators were concerned about its health even before the crash and did everything they could to pull it through the immediate aftermath. But falling prices continued to take their toll on the asset side of its ledger, and rising unemployment led to diminishing deposits. By the summer of 1930, the only reasonable path forward was to merge with several other banks, with the blessing of concerned regulators. A tentative deal with three other New York banks was announced in late November.
Opacity is the ultimate kryptonite for any trust framework. When people don’t know what is going on and have no way to verify it for themselves, they will assume the worst. When it comes to banks, the paramount question is whether they can get their money back. Those who are uncertain will show up and find out. On the morning of Tuesday, December 9, the New York Times ran a short article revealing that the proposed merger had been called off on account of irreconcilable differences between the negotiating parties. The next day, rumors of solvency issues drew a crowd to a small branch in the Bronx.
Some people withdrew their money, just to be safe. Others followed suit, and the crowd grew bigger. Seeing that more and more people were taking out their money, less concerned depositors decided to line up as well, sometimes only to chat with a banker. The police were called in to control the crowd, one that would ultimately grow to over twenty thousand people. Crowds also began to show up at some of the bank’s other branches, in far-flung locations like Brooklyn. Everyone who wanted to withdraw their money did, but the public’s anxiety grew. The run was on, and spreading, alongside ever more terrifying rumors.
Bank runs are a lot like stampedes: once they get going, it doesn’t matter whether there is any actual danger. At a certain point, not running is the most dangerous thing to do. Later that evening, the New York Police Department announced that it would post two officers outside each branch the next morning to prevent “undue excitement.” But the issue turned out to be moot. At an emergency late-night meeting of the bank’s executives, state and federal authorities, and other leaders of New York’s financial community, the decision was made to accept the inevitable and shut the bank down.
The collapse of the Bank of United States was not the first major bank failure in the aftermath of the crash, but it was the loudest. Its proximity to the country’s financial capital, combined with round-the-clock coverage in newspapers like the New York Times, spread doubt throughout the land. Doubt in a financial system is contagious. If that bank could fail, some depositors must have wondered, why can’t mine? There is no financial cost for withdrawing one’s money out of fear — only an inconvenience — and the customers who showed up first in the Bronx turned out to be the smartest. Not surprisingly, bank failures accelerated in the early 1930s, with each collapse making the next one that much more likely. The people’s trust had been violated, and society as a whole would pay a price.
Ironically, when all was said and done, customers of the Bank of United States still recovered 80 percent of their money. Bank failures generally don’t mean that depositors lose everything unless there is outright fraud. What they do lose is access to their capital (in the short term) and whatever value is irrecoverable from the bank’s loan and investment portfolio, a figure that, even for this poorly run bank, turned out to be only 20 percent. And yet a lack of transparency into its books and growing mistrust of its executives still led to a panic. It did not matter that — as the New York Times would report the next day — the run was triggered by a false rumor. Trust in the economic situation was frayed twelve months after the crash, so a rumor was enough to take down one bank, and the failure of that bank was the spark that started an inferno.
Banking regulators would later report that customers of the Bank of United States had withdrawn almost 25 percent of their deposits in the weeks leading up to the run, with almost half of that money taken out in the final few days. Such numbers would have had a chilling effect on depositors elsewhere, making them more likely to opt for the mattress. Economic historians cite the loss of confidence in the banking system as one important reason for the unique severity of the Great Depression.
The problem was understood even then, as a New York pastor stated starkly in a sermon delivered in the waning days of 1930:
“If bank closings are caused by whispers of false tales, then they are the worst type of traitors the world has ever seen. Beside them Judas was an angel. Such destroyers not only bring runs on banks but they undermine faith, without which neither commerce or happiness is possible.”
Rumors and “false tales” thrive in a financial system built on opacity and discretionary control. The problem wasn’t that people suddenly stopped believing in the idea of banking; rather, they lost faith in those particular banks at that particular moment in time. Had there been some way for depositors to be certain that the bankers hadn’t misbehaved, perhaps by directly inspecting their books — the proverbial “trust, but verify” — then most of the runs might never have happened. But such luxuries do not exist in banking as it has been architected for the past five hundred years. Regulators are supposed to solve this problem with rules and audits, but they have a nasty habit of being behind the curve just when the public needs them most.
The contagion that took down a significant portion of the American banking system ended after the weeklong bank holiday declared by President Roosevelt. Remarkably, the holiday ended with people lining up to redeposit their money at banks across America, newly assured that their funds would be safe. Credit for that shift in psychology goes to FDR’s decisiveness (which included his first nationally broadcast “fireside chat”) and a transformative piece of legislation passed by Congress called the Emergency Banking Act.
The bill paid heed to the fact that the crisis was as much psychological as it was financial, and it granted the federal government extraordinary new powers to address both. It dramatically expanded the Fed’s “lender of last resort” functionality and allowed the federal government itself to buy the stock of ailing banks if need be. It also created new mechanisms for unwinding troubled banks in a manner that would protect depositors. Last, but certainly not least, it granted the president emergency powers to do whatever else needed to be done to end the crisis.
Government to the Rescue
History would later remember the introduction of deposit insurance via the Federal Deposit Insurance Corporation (FDIC) as the turning point of the crisis. But that solution came later and wasn’t all that powerful, as it imposed strict limits on how much protection each depositor could obtain. What made the Emergency Banking Act potent was the implication that going forward, all deposits in every bank would be guaranteed by the government, one way or another.
The president and his supporters went out of their way to communicate that implication, with impressive results. Not only was a substantial portion of the cash that had been withdrawn before the holiday returned, but the stock market — which had also been closed during the holiday — surged by 15 percent upon the resumption of trading, making March 15, 1933, the single biggest up day in the 130-year history of the Dow Jones Industrial Average. The US government had turned a bazooka of trust on an ailing banking system, and confidence was restored. Not only would the number of bank failures fall dramatically, but the economy itself would begin to recover. History would remember this as the turning point of the Great Depression.
If trust in a financial system were a static measure, something that could be raised with laws and regulations and then left in an elevated position, then our story would end here, and the American banking system would have been uninteresting ever since. The challenge is that prolonged stability in any trust framework can be self-defeating.
The more trusting the beneficiaries of the framework, the greater the free-rider temptations for the people in charge of it. So goes the curse of history: it is how trusted currencies get diluted by their issuers and how popular platforms turn exploitative toward their users. In the case of financial intermediaries, an abundance of stability allows bankers to become reckless with depositor capital, again. But now the depositors don’t care.
In economics, moral hazard is defined as “the lack of incentive to avoid risk where there is protection against its consequences,” a phenomenon to which anyone who has ever driven a rental car more recklessly than their own can relate. The great banking interventions of the Depression era did not solve the problem of an economic crisis caused by a financial one. The interventions simply delayed the arrival of the next one, and made it bigger.
Depositors who believe that either they or their bank will always get bailed out are less discerning about which bank to use. After all, if every bank that is given a charter by the government is also guaranteed by it — either explicitly via deposit insurance or implicitly via emergency bailouts — then why not just use the most convenient bank? Or the one that pays the highest interest? Indeed, the most prudent course of action might be to actively seek out riskier banks. If things go well, they’ll make depositors more money. If they don’t, someone else will foot the bill. Heads I win, tails you lose.
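The asymmetry can be made concrete with a toy expected-value comparison. All of the numbers below are hypothetical; the point is only that a guarantee removes the downside that would otherwise steer depositors toward the safer bank.

```python
# Toy numbers, all hypothetical, to show how a guarantee flips the depositor's incentive.
deposit = 10_000
safe_rate, risky_rate = 0.02, 0.05
risky_failure_prob = 0.15            # assumed chance the risky bank fails within the year
recovery_without_guarantee = 0.80    # assumed share of the deposit recovered in an uninsured failure

def expected_payout(rate, failure_prob, recovery_in_failure):
    """Expected value of the deposit after one year."""
    return ((1 - failure_prob) * deposit * (1 + rate)
            + failure_prob * deposit * recovery_in_failure)

safe = expected_payout(safe_rate, 0.0, 1.0)
risky_uninsured = expected_payout(risky_rate, risky_failure_prob, recovery_without_guarantee)
risky_insured = expected_payout(risky_rate, risky_failure_prob, 1.0)   # principal made whole

print(f"Safe bank:                    ${safe:,.0f}")
print(f"Risky bank, no guarantee:     ${risky_uninsured:,.0f}")   # worse than the safe bank
print(f"Risky bank, with a guarantee: ${risky_insured:,.0f}")     # now the better bet
```

Without the guarantee, the occasional loss makes the risky bank a bad deal; with it, the higher rate is pure upside, so the “rational” depositor chases yield and stops asking questions.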
The unintended consequences of America’s new approach to banking first surfaced during the savings and loan crisis of the 1980s. S&Ls, also known as thrifts, are limited-purpose financial institutions meant to help working-class people save money or buy a house. They grew in popularity in the decades after the Second World War, when a growing population, low unemployment, and suburban expansion led to rising demand for savings accounts and mortgages — particularly from institutions that offered better terms than traditional banks.
As with the banking crisis of the 1930s, the collapse of the thrift industry was caused by a nasty combination of a challenging economic period and over-levered balance sheets, particularly those tied to risky real estate. Lack of transparency, poor internal controls, and flat-footed regulators were also factors. But there was no widespread panic this time around, and no runs, because savers believed that they were protected, either by government-run deposit insurance or by some other kind of bailout. They were right. Despite the collapse of over one thousand thrifts in the span of nine years, depositors hardly lost anything. Taxpayers, on the other hand, lost over $100 billion.
The moral hazard of the Depression-era programs meant to stabilize banking now had a price tag, and a steep one at that. To make matters worse, the cost was socialized across an entire nation. Citizens who had nothing to do with thrifts and never benefited from their existence were nevertheless billed for their folly. This was not supposed to happen. There is a reason why America’s primary banking backstop is called the Federal Deposit Insurance Corporation. The money needed to rescue failed banks is supposed to come from premiums charged to all banks. But the premiums collected from thousands of thrifts across the span of several decades turned out to be woefully inadequate, covering less than 20 percent of the money spent on the rescue.
The concern that government protections in banking might ultimately encourage reckless behavior is as old as the protections themselves. No figure was more influential in enacting them than Franklin Delano Roosevelt, and yet he expressed his own fears in a newspaper article published less than a year before he signed the FDIC into existence. He predicted that deposit insurance would “lead to laxity in bank management and carelessness on the part of both banker and depositor,” eventually causing “an impossible drain on the Federal Treasury.”
It would take seventy-five years, but history would prove him right.