Categories
Backwardness of law

Solved: The Problem of Indeterminacy in the Law

It’s not a problem of language. It’s a problem of writing:

Socrates makes the point in Plato’s dialogue [Phaedrus] that writing will not help in the search for truth. He compares writing to painting — paintings look like living beings, but if you ask them a question, they are mute. If you ask written words a question, you get the same answer over and over. Writing cannot distinguish between suitable and unsuitable readers: it can be ill-treated or unfairly abused, but it cannot defend itself. In contrast, truths found in the art of dialectic can defend themselves. Thus, the spoken is superior to the written word!

Michael D. Coe, Breaking the Maya Code 14 (2012).

The trouble is, if you don’t write it down, then you must enact people.

Categories
Miscellany

And Then There’s the Carrot

When April’s jobs numbers disappointed, the governors of South Carolina and Montana responded by cutting unemployment benefits. The idea is that, given the low minimum wage in those states, some low-wage workers are making more money by staying home and cashing unemployment checks than they could make working. Stop mailing the checks and they will get back to work—and work is good, because work means more restaurants can reopen and stay open for longer hours, making consumers happy.

Herein, of course, the stick: make workers’ lives worse so that they will cry uncle and go back to work.

Then there’s the carrot: convert the unemployment benefits into handouts. That is: stop conditioning the delivery of those unemployment checks on proof that the worker is unemployed.

That would send workers back to work, because when you live at the bottom end of the income distribution you can always use more money. Even if you are already getting an extra $300 per week in unemployment, you are not exactly living the good life. You will go back to work to earn another $300 so long as the government won’t cut your $300 unemployment check for doing so.

Removing the requirement that a worker be unemployed to receive the current batch of extended federal unemployment benefits gives workers a carrot for working: the extra wages they can earn, above and beyond those unemployment benefits, by going back to work.

I know removing that requirement undermines the paper rationale for unemployment insurance, which is to tide over people who are looking for work. But this country has a major inequality problem; this would be a one-time thing; Congress has already appropriated the money; and those currently receiving benefits (who, under this plan, would be the only ones eligible to continue receiving payments even if they go back to work) are more likely to be poor and deserving than the vast swaths of the middle class that received stimulus checks over the past year.

(It might be a little unfair to those who did find jobs and went back to work, or those who didn’t lie about looking for work in order to get their checks. But as between a little unfairness and the stick, I say we go with unfairness. And if there’s money in the extended unemployment budget to pay out benefits to those who went off unemployment early, then we should do that, and the unfairness would be much reduced.)

Plus, the fix that President Biden has mooted—making sure people on unemployment really are looking for work—is impossible to execute without a lot of unnecessary bureaucracy, which the Administration is unlikely to pursue seriously anyway.

If you have the choice between achieving efficiency (getting people back to work) using the carrot or the stick, and the carrot is there, why not use it?

The worst you could do is make the poor a little richer.

Categories
Miscellany

Ménière’s Disease and the COVID Vaccines

A lifelong friend writes:

I took the first shot of the Pfizer COVID vaccine on January 16. Two days later I developed a feeling of fullness in my left ear, some sinus congestion, and chills, which I took to be side effects of the vaccine. The next day, the chills and congestion were gone, but the feeling of fullness–almost water-loggedness–in my ear has persisted since. But that was just the beginning. On January 29, my left ear started ringing and I had an attack of vertigo and vomiting that lasted several hours: the world spins around you making all forms of physical activity, including walking, impossible. Every day since then, I’ve had an attack of vertigo that lasts more than an hour, and the ringing and sense of fullness in my ear have ebbed and flowed, making it difficult to concentrate; indeed, the experience has been debilitating.

I saw a doctor yesterday, an eminent expert on hearing and balance, and he diagnosed Ménière’s disease. I asked him if this was triggered by the vaccine and he said that he has been seeing cases like this arising from the vaccines, as well as from COVID itself; he suggested that the vaccines may trigger inflammation in the ear that the body’s immune response does not eliminate. He recommended that, given my apparent reaction to the first shot, I not take the booster shot of the vaccine.

I saw another doctor today, and he insisted that there could be no connection between the vaccine and my condition and suggested that the fact that it started two days after I had taken the vaccine was pure chance. This doctor said that I should take the booster.

Although I have never had this sense of fullness or ringing in my ear before, and have never had daily attacks of vertigo before, I have had a total of three or four bouts of vertigo in the past; those occurred six to ten years ago, and I had had none since, until now.

Categories
Miscellany Monopolization Philoeconomica

Was Personalized Pricing the Epstein Grift?

The Times reports that pedophile Jeffrey Epstein earned more than $100 million from private equity magnate Leon Black in exchange for providing some “idea-generator”-type tax advice on a handful of Black’s family trusts, advice that Black still had to pay his own tax lawyers to implement.

Does that mean that Epstein, who was a college dropout, was a self-taught tax genius? Not likely.

But it does suggest that Epstein knew the value of personalized pricing. Here’s the key passage from the article:

Jack Blum, a Washington lawyer who has led corruption investigations for several Senate committees, said he was surprised by the size of the fees Mr. Epstein’s work commanded. “You could be the best lawyer in Manhattan working on the most complicated trusts and estates and it would never come anywhere close to that kind of money,” he said.

Matthew Goldstein & Steve Eder, What Jeffrey Epstein Did to Earn $158 Million From Leon Black, N.Y. Times (Jan. 26, 2021).

So what gives?

The answer is that tax lawyers price for the marginal consumer: the client at the margin of using their services. They not only serve magnates like Leon Black, but also the merely rich, like an executive mentioned in the Times article whom Epstein initially refused to take on as a client for being insufficiently wealthy.

The merely rich can’t afford $100 million, so, to get their business, tax lawyers must charge them lower fees. When the truly rich, like Leon Black, go looking for tax advice, they knock on these lawyers’ doors, and the lawyers charge them about the same price they charge everyone else.

They don’t try to charge higher fees to their wealthiest clients because tax law is a reasonably competitive industry. You need to be smart to work at the high end of the field, but tax is not a field in which “the best are easily ten times better than the average.”

And for the many who do have what it takes, the cost of entry into the market is relatively low: all you need is a JD and an LLM, which cost a few hundred thousand dollars to obtain, about the amount needed to open a dry cleaner or a pizzeria (okay, there’s also the opportunity cost of time spent in school, but we are still probably only talking about the high six figures).

So if you start raising your fees above what the marginal client is willing to pay, your super-rich inframarginal clients will take their business to another tax lawyer who is still pricing for the marginal client. So you, too, continue to price for the marginal client.

But what if you could find a way to charge your richest clients prices personalized to them, and not have them jump ship to your competitor?

It looks like Epstein’s grift was figuring out how to do that.

The answer, as in so many other lines of business, was to make tax advice into a luxury product: to make the product exclusive.

The Times tells us that Epstein sold himself to clients as a genius who would only give tax advice to the richest of the rich. He cultivated the image of being, not some pathetic, overworked, upwardly-mobile professional, but one of them, a fellow member of the super-rich who was willing to cut other members in on secrets that only they could access because of who they were.

Exclusivity creates brand loyalty, and brand loyalty means that you stop shopping around; you are willing to pay a price determined by what you can afford, rather than what competitors are offering. You are willing to pay, in other words, a personalized price.

Graphically, the tax market may have looked like this:

Gerrit De Geest observes in Rents: How Marketing Causes Inequality that in today’s economy, it’s not those who make, or those who distribute, who earn all the profits; it’s those who do the marketing. That’s where all the rents live. Competition drives profits to zero for all save those who beguile.

It seems somehow fitting that this economy would spawn a figure like Epstein, who sold tax advice but didn’t even bother to do his legal work in house. He didn’t really sell tax advice; he marketed it.

As the Times recounts, Epstein referred one acquaintance to outside tax lawyers, whom the acquaintance then paid for tax advice, and then Epstein, having never mentioned a fee to this acquaintance, sent him a bill for 10% of the purported tax savings that the lawyers, and not Epstein, had created.

That 10% was the price of enchantment, nothing more.

But you still have to wonder how a private equity guy like Black, whose business revolves around deals hammered out by armies of lawyers and shaped by tax considerations, could have thought he was getting something special from Epstein.

Did he really think tax was like music, and it was worth paying his Mozart to dream up a tune, even if Black still had to pay someone else to write all the notes down for him?

Maybe he didn’t, and there’s more left to tell in this story.

Or maybe we need a new razor: Never attribute to conspiracy what can otherwise be attributed to marketing.

Categories
Monopolization World

Unlearning Trade

First we thought the inherent superiority of our political system would defeat the Chinese Communist Party. Now that we’re coming to terms with the fact that it didn’t, we seem to think that the inherent superiority of free markets will defeat China instead.

Clearly, we’re not taking learning into account.

But I don’t mean that we haven’t learned from our mistaken view that China would become more democratic as it became wealthier.

I mean that in assuming that China’s embrace of a new closed door policy will cause its technological competitiveness to wither, we are literally failing to take the relationship between learning and output into account.

The Wall Street Journal argues that by picking fights with the West, and getting itself banned from engaging in semiconductor trade with the US as a result, China has put itself in the deeply wasteful position of having to recreate a native semiconductor industry from scratch. If the moonshot fails, Chinese high tech firms will lag, and the country’s race to global dominance will be lost.

It would have been much better, argues the Journal, for China to have continued to make nice with the West and enjoy the benefits of trade, not least of which is the ability to leverage what others do best—like making semiconductors—to enable China to do what it does best—like making smartphones and 5G infrastructure.

The Achilles heel of this and all free trade arguments is that they don’t take innovation into account, and specifically that most valuable of all forms of innovation: learning by doing.

The fact that China is not an efficient producer of semiconductors today, and would be better off trading with those who are, does not mean that China cannot learn to be an efficient producer of semiconductors tomorrow.

And if China is able to learn, then the money it pours into starting more or less from scratch now won’t be wasted.

Instead, it will be the most important investment China has ever made, because it will buy not only a valuable skill, but something more valuable still: independence and a shot at world domination. The future belongs to high tech, the hardest thing to do in high tech is chips, and so if you’ve got the best chips, you will win eventually.

The key to learning is doing: the more you make, the better you get at making, which is why semiconductors have a downward sloping learning curve. As production volumes increase, cost falls and falls and falls.
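The downward-sloping learning curve described above is often modeled as Wright’s law, under which unit cost falls by a fixed fraction every time cumulative production doubles. Here is a minimal sketch; the function and the 20% learning rate are illustrative assumptions, not figures from this post:

```python
import math

def unit_cost(initial_cost, cumulative_units, learning_rate=0.20):
    """Cost of a unit under Wright's law: each doubling of cumulative
    output cuts cost by `learning_rate` (here a hypothetical 20%)."""
    b = math.log(1 - learning_rate) / math.log(2)  # progress exponent
    return initial_cost * cumulative_units ** b

# Every doubling of production trims another 20% off unit cost:
for n in [1, 2, 4, 8, 16]:
    print(n, round(unit_cost(100.0, n), 1))
# 1 100.0 / 2 80.0 / 4 64.0 / 8 51.2 / 16 41.0
```

The compounding is the point: a producer four doublings ahead is already making the same thing at well under half the cost, which is why conceding production volume to a rival concedes the learning along with it.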

That in turn means that if you want to produce the difficult-to-make things that render countries rich and powerful, the opposite of free trade dogma is required: you must shut out foreign competition, freeing up domestic demand for your native industries, so that those industries can ramp up supply and start marching down the learning curve.

If you don’t do that, then your domestic market will buy from foreign producers, helping them learn, not you.

Of course, too much protection can also be a problem. If your domestic industries are not subject to competitive pressures, they won’t have an incentive to learn. That can particularly vex small countries whose internal demand can only support one or two firms in a given market. But for a country the size of China, that’s not a problem. (Indeed, it’s no accident that free trade ideology has roots in Western Europe, home to lots of small- and medium-sized countries.)

So by picking fights with the West at a moment in its development when it has plenty of domestic demand for semiconductors (think Huawei) China is really just binding itself to the mast: committing its domestic market to its native semiconductor operations. It is forcing itself to learn.

And China does know how to learn. America installed the first solar panel in 1956, on the Vanguard I satellite. But at that time a single panel cost the equivalent of $500,000 today, meaning that we weren’t very good at applying the technology. As we made more solar panels, however, we got much better, as the solar learning curve below shows. But by the early 2000s learning had stagnated at around $5 per module.

Then China, which is energy poor but for coal—a mature technology that promises few gains from innovation—embraced solar, installing panels across its vast peripheral deserts.

By doing, China learned to do better, driving price south of 50 cents per module by 2019, making solar power the cheapest in the world today, more so even than coal or gas, and coming to dominate the global solar industry.

Will China walk just as quickly down the semiconductor learning curve? You can bet on it. And the country’s leadership in the new technology of quantum computing—the future of chips—means that it is not starting all that far behind its global competitors.

So when the Wall Street Journal says things like this:

Beijing is essentially now engaged in a massive, long-shot attempt to build from the ground up an advanced semiconductor manufacturing capability that doesn’t depend on foreign suppliers—churning through gargantuan amounts of the Chinese people’s money in the process. Rather than trying to reinvent the wheel, a better economic strategy would be to mend its relations with the West and reform China’s dysfunctional credit system—then import chips and let Chinese markets and Chinese companies decide what China is really good at.

Nathaniel Taplin, China’s State Capitalism Collides With Its Technological Ambitions, Wall St. J. (Jan. 2, 2021).

I have to wonder at its lack of learning.

And as I have pointed out elsewhere, the really funny thing about this mode of thought—the notion that a country is better off not trying to do the things that it is not right now good at doing—is that those who love it most also tend to be those who, when they turn their gaze to domestic markets, talk most about innovation and learning, and the need to protect firms from too much competition in order to promote them.

They argue in favor of monopoly and against regulation at home on the ground that shelter from competition is a necessary reward for innovation, that though big firms may destroy “static competition”—competition over price by firms with fixed levels of technical skill—doing so actually enables “dynamic competition”—competition to learn and innovate that eventually leads to far greater benefits for society.

So they ought to know better than to assume that a new Chinese closed door policy will save America from China.

Indeed, the Journal’s faith in free trade reminds me a bit of Ah Q, the eponymous antihero of The True Story of Ah Q, by the great early 20th century Chinese writer Lu Xun.

Ah Q’s talent, you see, was convincing himself he was the winner whenever he lost a fight.

To be sure, Ah Q was a metaphor for the much-oppressed China of a century ago, whereas America is still on top today.

But mentality is fate.


Categories
Antitrust Monopolization

The Assault on the Printed Page

When the New York Journal cabled Mark Twain in London on June 2, 1897 to inquire whether he was gravely ill, Twain famously replied that the reports of his death were greatly exaggerated.

William Randolph Hearst, the Journal’s publisher, could have saved his scoop by having Twain shot on the spot. Fortunately, he didn’t, and we got another 13 years and “Captain Stormfield’s Visit to Heaven” out of the great humorist.

Publishers of university textbooks wouldn’t have been so patient.

Reports of the demise of the printed page, popular since the dawn of the Internet, have turned out to be greatly exaggerated: sales of print books are surging.

So textbook publishers have decided to kill the printed page themselves.

According to a recent antitrust class action brought by university students, all the big names in textbook publishing have been working together to funnel cash to universities in exchange for commitments to assign online-only textbooks to students instead of print books.

It’s working: more than 1,000 universities have agreed to assign publishers’ online-only editions, millions of students have already been forced to purchase them, and publishers are preparing to phase out print textbooks entirely.

Studies show that students, like most readers, prefer the printed page, and textbook companies have seemingly had no problem jacking prices up to astronomical levels in recent years, with the average textbook in a core undergraduate course like statistics retailing for more than $300. So what do publishers have to gain from their assault on the printed page?

A lot, it turns out.

The Rise and Fall of the Internet Used Textbook Market

Eliminating print allows publishers to wipe out competitors that have depressed sales for years.

Before the Internet, textbook publishers had little to fear from the used book market, apart from an occasional copy with a yellow “Used” sticker on the spine that would make its way onto the shelf of a university bookstore.

The Internet changed that, by creating a national–indeed, international–market for used textbooks. Sales volumes of new textbooks plummeted, as students could now pass books along to each other from semester to semester through the medium of online booksellers.

For years, publishers more than offset their losses by jacking up new book prices, but it turns out that there is a limit even to what students with no-questions-asked access to loans are willing to spend for a new textbook.

Indeed, just as excessive tax increases can reduce tax revenues, excessive textbook price increases reduce profits as students start locating bootleg copies on the Internet or shaming their professors into distributing textbook pdfs in violation of copyright rules.

Publishers tried to stem the tide by accelerating the rate at which they put out new textbook editions, even–and rather humorously–in such timeless subjects as basic physics, in order to drive used books to obsolescence and force students to come back to the market for new books.

It didn’t work, which is not to say that it put the major textbook publishers in jeopardy of closing up shop. Textbooks remain the most profitable books in publishing. But publishers preferred to go back to minting money at the old rate. And that’s where online-only books come in.

The Supreme Court has held as recently as 2013 that publishers cannot prohibit students from reselling their textbooks. But it is a staple of Internet law that online publishers can prohibit users from reselling access codes for online material. By killing the printed page, publishers kill the used book market.

The Antitrust Case against the Publishers

There was just one wrinkle that publishers couldn’t iron out on their own: getting universities to assign online-only books. To achieve that, publishers had to buy off the universities, and violate the antitrust laws.

Paying someone to deny your competitors an essential input is called “exclusive dealing” in antitrust lingo, and it’s illegal if the perpetrators have market power and the denial does not help them improve their own products.

But that’s just what publishers do when they pay universities to assign online-only books.

A university’s textbook choices are an essential input into the used book business. If schools don’t assign print books, used book sellers have no textbooks to resell in competition with publishers’ new books.

With used book sellers frozen out of the market, publishers end up with 100% of the textbook market, far in excess of the market shares generally required by the courts to establish market power.

And students end up blinking into the glare of an inferior product.

So this should be an easy antitrust case. But before an increasingly pro-business judiciary, it is anyone’s guess whether the courts will actually get this one right.

From Bad to Worse in the Information Age

The rise of the Internet used book market twenty years ago was itself a disaster for those who love books.

Back in the 1990s, biblioagnostics–those who were indifferent between studying off a new book or a used one–subsidized the bibliophiles who much preferred new books, because the ‘agnostics had to buy new books they didn’t want for lack of a robust used book market. Sales to ‘agnostics kept prices down, enabling bibliophiles to buy new books they could not otherwise afford.

In freeing ‘agnostics to save their money and buy used books instead, the Internet put an end to that subsidy, forcing bibliophiles who could not afford $300 for a new edition to put up with tattered, highlighter-marred tomes.

But if publishers now succeed at killing the printed page, everyone will suffer, not just bibliophiles. For the other thing publishers have to gain from the move to online is the demise of the university itself.

The Assault on the Printed Page Is an Assault on the University

Publishers sell more than just online textbooks. They sell everything a school needs for online learning, offering tests, quizzes, lecture notes, and PowerPoint slides to go along with the textbooks they peddle. And they hire university faculty members to teach instructors how to teach their materials.

When universities accept cash from the publishers in exchange for moving books online, and allow their faculties to indulge in the teaching aids that publishers offer as a perk to make the switch, schools effectively outsource instruction to publishers.

It is not hard to imagine publishers one day cutting out the middleman by offering courses and degrees directly to students. That would wipe out virtually all academic scholarship, save for the sponsored research common in the hard sciences.

Tuition covers about half the cost of instruction at universities, with the difference coming from subsidies. But faculty spend up to half of their putative instructional time producing scholarship, which means that student tuition dollars mostly pay for research, not teaching.

The publishers, however, won’t get any subsidies if they take over the instructional function, so the fees they will charge students will go entirely to instruction. But unless universities are able to make a strong case for conducting scholarship without teaching, which seems unlikely, the subsidies will dry up, and so there will be no money, either tuition or subsidies, left for scholarship.
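The arithmetic in the two paragraphs above can be made explicit. The numbers below are the post’s stylized figures (tuition covering about half of cost, up to half of faculty time going to scholarship), normalized for illustration; they are not real university data:

```python
# Stylized figures from the post, normalized to 100; not real data.
total_cost = 100.0                # all instructional spending
tuition = 0.5 * total_cost        # tuition covers about half
subsidies = total_cost - tuition  # the rest comes from subsidies

research_share = 0.5              # up to half of faculty time
research_cost = research_share * total_cost
teaching_cost = total_cost - research_cost

# Under these assumptions, tuition exactly matches the cost of
# scholarship: one can equally say tuition pays for research and
# subsidies pay for teaching. If publishers take over teaching,
# the subsidy half (and the funding for scholarship) disappears.
print(tuition, research_cost)  # 50.0 50.0
```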

Universities have been able to force students to pay for scholarship because university education is an oligopoly: brand loyalty–universities call it reputation–makes entry into the market by startups almost impossible, allowing schools to choose their prices without fear of competition.

But it is a virtuous oligopoly that subsidizes a public service, much the way the local advertising monopolies enjoyed by newspapers for most of the 20th century subsidized investigative reporting that was not strictly necessary to attract readers (tabloid headlines suffice for that).

The Internet has already come for newspapers, which lack the extreme brand loyalty enjoyed by universities, but one day it will come for universities too.

If their complicity in the assault on the printed page is any indication, they won’t know what hit them.

(I thank Chris Bradley for comments on a draft of this post.)

Categories
Miscellany Monopolization

Dynamic Pricing Meets Music Licensing?

Just when you thought the most toxic of information age innovations had already spread as widely as possible:

The big publishers — which are all divisions of the major record conglomerates — own far too much material to exploit it all properly, he says. Sony/ATV, for example, has nearly five million songs in its portfolio. . . . In its place, he posits a bold but somewhat vague plan called “song management,” in which leaner companies look after smaller collections of high-value hits, and each track is held to a profit-and-loss analysis to ensure its value is maximized.

Ben Sisario, This Man Is Betting $1.7 Billion on the Rights to Your Favorite Songs, N.Y. Times (Dec. 18, 2020).

The big publishers block-license their songs, which means that they don’t adjust the prices of individual songs based on shifts in the willingness of licensees to pay for them. It sounds like Mercuriadis wants to capture additional profits by pricing songs dynamically–jacking prices up during periods when buyers are willing to pay more–which is why he can afford to pay more for song rights himself. “Song management” is the tell: In hospitality, which pioneered the practice in the context of hotel rooms and airline tickets, they call it revenue management.
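To make the contrast concrete, here is a toy sketch of the difference between a flat block-license fee and a dynamically priced one; the function names, demand multipliers, and fee levels are all hypothetical:

```python
# Toy contrast between block licensing and dynamic pricing of a song.
# All names and numbers are hypothetical, for illustration only.

def revenue_block(flat_fee, demand_by_period):
    """Flat per-period fee, regardless of how hot the song is."""
    return flat_fee * len(demand_by_period)

def revenue_dynamic(base_fee, demand_by_period):
    """Fee scaled to each period's willingness to pay, e.g. a spike
    when the song lands in a hit TV show."""
    return sum(base_fee * d for d in demand_by_period)

# Hypothetical demand multipliers over six licensing periods,
# with a viral spike in period three:
demand = [1.0, 1.0, 3.0, 1.5, 1.0, 1.0]
print(revenue_block(100, demand))    # 600
print(revenue_dynamic(100, demand))  # 850.0
```

The dynamic pricer captures the spike that the block licensor gives away, which is why a buyer of catalogs who plans to price this way can afford to outbid everyone else for the rights.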

Categories
Meta Miscellany

F-Stop

Source: Adapted from N.Y.Times.

The thing that astonishes me about photography is the proof it seems to provide that the past was real. I should never, ever have thought that was the case were it not for photography; my own memories appear so much like dream images to me. They crystallize, like Stendhal’s twigs pulled from the salt mines encrusted with diamonds, until I cannot be sure that they were real. Whatever led scientists to pick, out of the vast spectrum of possible explanations for memory, out of the fairies and gods, the view that memory is just the lasting impression made by light upon the brain?

Categories
Antitrust Monopolization

Antitrust’s Long-Lived Japanese Business Paradox

One strand of the new antitrust is the notion that bigness is fragile: concentrate wealth and power in a single firm and you put all your eggs in one basket. If the firm fails, the economy is done. Better to break up your behemoths to give the economy resilience in the face of crises.

It turns out, however, that the lesson of Japan’s millennia-old firms is quite the opposite: the best way to last a thousand years is to cultivate a monopoly position.

At least according to the Times, which reports that “the Japanese companies that have endured the longest have often been defined by . . . an accumulation of large cash reserves,” which economics teaches is only possible for firms that have market power, and consequently the ability to raise prices above costs and stash the difference.

Indeed, according to the Times, many long-lived firms “started during the 200-year period, beginning in the 17th century, when Japan largely sealed itself off from the outside world, providing a stable business environment.” Read: when the government limited competition.

But wait. Aren’t Japan’s old firms small businesses? Today’s antitrust movement is all for giving small businesses their own mini-monopolies. Possibly because when a small business fails the economy won’t come down with it. (But more likely because this isn’t really about antitrust, but about wealth redistribution.)

So doesn’t the longevity of Japan’s businesses actually support the view that small is resilient? It turns out no. Japan counts big companies like Nintendo among its long-lived firms.

But America doesn’t need medieval Japanese business wisdom to understand that it’s competition, and not monopoly, that’s fragile. We have Schumpeter, who made the resilience of the big the centerpiece of his theory of creative destruction.

He argued that in a world that is more like a stormy sea than the water cooler at the Chicago Board of Trade, the apparent excesses of monopoly are in fact mostly examples of redundancy, the mainmast’s apparently unnecessary girth useful when the big storm hits.

Schumpeter writes:

If for instance a war risk is insurable, nobody objects to a firm’s collecting the cost of this insurance from the buyers of its products. But that risk is no less an element in long-run costs, if there are no facilities for insuring against it, in which case a price strategy aiming at the same end will seem to involve unnecessary restriction and to be productive of excess profits. . . . In analyzing such business strategy ex visu for a given point of time, the investigating economist or government agent sees price policies that seem to him predatory and restrictions of output that seemed to him synonymous with loss of opportunities to produce. He does not see that restrictions of this type are, in the conditions of the perennial gale [of creative destruction], incidents, often unavoidable incidents, of a long-run process of expansion which they protect rather than impede.

Joseph A. Schumpeter, Capitalism, Socialism and Democracy 88 (Harper & Row 1975).

When the Times writes of Japan’s long-lived firms that “[l]arge enterprises in particular keep substantial reserves to ensure that they can continue issuing paychecks and meet their other financial obligations in the event of an economic downturn or a crisis,” it’s hard not to see these firms’ business philosophy as fundamentally Schumpeterian.

Of course, there’s a limit to this kind of thinking. Business longevity and economic growth are two different things. I really am glad we’re not still using fax machines. But although there are plenty of problems with monopoly, fragility is not one of them.

Categories
Antitrust Meta Philoeconomica

Liu et al. and the Good and Bad in Economics

Liu et al.’s paper trying to connect market concentration to low interest rates reflects everything that’s good and bad about economics.

The Good Is the Story

The good is that the paper tells a plausible story about why the current era’s low interest rates might actually be the cause of the low productivity growth and increasing markups we are observing, as well as the increasing market concentration we might also be observing.

The story is that low interest rates encourage investment in innovation, but investment in innovation paradoxically discourages competition against dominant firms, because low rates allow dominant firms to invest more heavily in innovation in order to defend their dominant positions.

The result is fewer challenges to market dominance and therefore less investment in innovation and consequently lower productivity growth, increasing markups, and increasing market concentration.

Plausible does not mean believable, however.

The notion that corporate boards across America are deciding not to invest in innovation because they think dominant firms’ easy access to capital will allow them to win any innovation war is farfetched, to say the least.

“Gosh, it’s too bad rates are so low, otherwise we might have a chance to beat the iPhone,” said one Google Pixel executive to another never.

And it’s a bit too convenient that this monopoly-power-based explanation for two of the major stylized facts of the age–low interest rates and low productivity growth–would come along at just the moment when the news media is splashing antitrust across everyone’s screens for its own private purposes.

But plausibility is at least helpful to the understanding (as I will explain more below), and the gap between it and believability is not the bad part of economics on display in Liu et al.

The Bad Is the General Equilibrium

The bad part is the authors’ general equilibrium model.

They think they need the model to show that the discouragement competitors feel at the thought of dominant firms making large investments in innovation to thwart them outweighs the incentive that lower interest rates give competitors, along with dominant firms, to invest in innovation.

If not, then competitors might put aside their fears and invest anyway, productivity growth would increase after all, and concentration would fall.

Trouble is, no general equilibrium model can answer this question, because general equilibrium models are not themselves even approximately plausible models of the real world, and economists have known this since the early 1970s.

Intellectually Bankrupt for a While Now

Once upon a time economists thought they could write down a model of the economy entire. The model they came up with was built around the concept of equilibrium, which basically meant that economists would hypothesize the kind of bargains that economic agents would be willing to strike with each other–most famously, that buyers and sellers will trade at a price at which supply equals demand–and then show how resources would be allocated were everyone in the economy in fact to trade according to the hypothesized bargaining principles.

As Frank Ackerman recounts in his aptly-titled assessment of general equilibrium, “Still Dead After All These Years: Interpreting the Failure of General Equilibrium Theory,” trouble came in the form of a 1972 proof, now known as the Sonnenschein-Mantel-Debreu Theorem, that there is never any guarantee that actual economic agents will bargain their way to the bargaining outcomes–the equilibria–that form the foundation of the model.

In order for buyers and sellers of a good to trade at a price that equalizes supply and demand, the quantity of the good bid by buyers must equal the quantity supplied at the bid price. If the price doesn’t start at the level that equalizes supply and demand–and there’s no reason to suppose it should–then the price must move up or down to get to equilibrium.

But every time the price moves, it affects the budgets of buyers and sellers, who must then adjust their bids across all the other markets in which they participate, in order to rebalance their budgets. But that in turn means prices in the other markets must change to rebalance supply and demand in those markets.

The proof showed that there is no guarantee that the adjustments won’t just cause prices to move in infinite circles, an increase here triggering a reduction there that triggers another reduction here that triggers an increase back there, and so on, forever.

Thus there is no reason to suppose that prices will ever get to the places that general equilibrium assumes that they will always reach, and so general equilibrium models describe economies that don’t exist.
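The circularity described above can be made concrete with a toy simulation. The excess-demand field below is invented purely for illustration–it is not derived from any actual preferences, and the Sonnenschein-Mantel-Debreu result is a far more general statement–but it shows a two-market system in which the price-adjustment process orbits the equilibrium instead of ever settling there:

```python
import math

# Toy tatonnement: each round, the auctioneer moves each price in
# proportion to that market's excess demand. In this invented example,
# market 1's excess demand depends on market 2's price and vice versa,
# with opposite signs, so adjustment circles the equilibrium at
# (p1*, p2*) = (1, 1) rather than converging to it.

P_STAR = (1.0, 1.0)

def excess_demand(p1, p2):
    d1, d2 = p1 - P_STAR[0], p2 - P_STAR[1]
    return (-d2, d1)

def tatonnement(p1, p2, step=0.1, rounds=200):
    """Return the distance from equilibrium after each adjustment round."""
    dists = []
    for _ in range(rounds):
        z1, z2 = excess_demand(p1, p2)
        p1, p2 = p1 + step * z1, p2 + step * z2
        dists.append(math.hypot(p1 - P_STAR[0], p2 - P_STAR[1]))
    return dists

dists = tatonnement(1.5, 1.0)
# The process never closes in on equilibrium; with discrete adjustment
# steps it slowly spirals outward instead.
print(dists[0], dists[-1])
```

Each adjustment in one market perturbs the other, and the distance from equilibrium never shrinks: the prices chase each other in circles, which is exactly the possibility the 1972 proof says can never be ruled out.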

Liu et al.’s model describes an economy with concentrated markets, so it doesn’t just rely on the supply-equals-demand definition of equilibrium targeted by the Sonnenschein-Mantel-Debreu Theorem, a definition of equilibrium that seeks to model trade in competitive markets. But the flaw in general equilibrium models is actually even greater when the models make assumptions about bargaining in concentrated markets.

We can kind-of see why, in competitive markets, an economic agent would be happy to trade at a price that equalizes supply and demand, because if the agent holds out for a higher price, some other agent waiting in the wings will jump into the market and do the deal at the prevailing price.

But in concentrated markets, in which the number of firms is few, and there is no competitor waiting in the wings to do a deal that an economic agent rejects, holding out for a better price is always a realistic option. And so there’s never even the semblance of a guarantee that whatever price the particular equilibrium definition suggests should be the one at which trade takes place in the model would actually be the price upon which real world parties would agree. Buyer or seller might hold out for a better deal at a different price.

Indeed, in such game theoretic worlds, there is not even a guarantee that any deal at all will be done, much less a deal at the particular price dictated by the particular bargaining model arbitrarily favored by the model’s authors. Bob Cooter called this possibility the Hobbes Theorem–that in a world in which every agent holds out for the best possible deal, one that extracts the most value from others, no deals will ever get done and the economy will be laid to waste.

The bottom line is that all general equilibrium models, including Liu et al.’s, make unjustified assumptions about the prices at which goods trade, not to mention whether trade will take place at all.

But are they at least good as approximations of reality? The answer is no. There’s no reason to suppose that they get prices only a little wrong.

That makes Liu et al.’s attempt to use general equilibrium to prove things about the economy something of a farce. And their attempt to “calibrate” the model by plugging actual numbers from the economy into it in order to have it spit out numbers quantifying the effect of low interest rates on productivity, absurd.

If general equilibrium models are not accurate depictions of the economy, then using them to try to quantify actual economic effects is meaningless. And a reader who doesn’t know better might well come away from the paper with a false impression of the precision with which Liu et al. are able to make their economic arguments about the real world.

So Why Is It Still Used?

But if general equilibrium is a bad description of reality, why do economists still use it?

It Creates a Clear Pecking Order

Partly because solving general equilibrium models is hard, and success is clearly observable, so keeping general equilibrium models in the economic toolkit provides a way of deciding which economists should get ahead and be famous: namely, those who can work the models.

By contrast, lots of economists can tell plausible, even believable, stories about the world, and it can take decades to learn which was actually right, making promotion and tenure decisions based on economic stories a more fraught, and necessarily political, undertaking.

Indeed, it is not without a certain amount of pride that Liu et al. write in their introduction that

[w]e bring a new methodology to this literature by analytically solving for the recursive value functions when the discount rate is small. This new technique enables us to provide sharp, analytical characterizations of the asymptotic equilibrium as discounting tends to zero, even as the ergodic state space becomes infinitely large. The technique should be applicable to other stochastic games of strategic interactions with a large state space and low discounting.

Ernest Liu et al., Low Interest Rates, Market Power, and Productivity Growth 63 (NBER Working Paper, Aug. 2020).

Part of the appeal of the paper to the authors is that they found a new way to solve the particular category of models they employ. The irony is that technical advances of this kind in general equilibrium economics are like the invention of the coaxial escapement for mechanical watches in 1976: a brilliant advance on a useless technology.

It’s an Article of Faith

But there’s another reason why use of general equilibrium persists: wishful thinking. I suspect that somewhere deep down economists who devote their lives to these models believe that an edifice so complex and all-encompassing must be useful, particularly since there are no other totalizing approaches to mathematically modeling the economy on offer.

Surely, think Liu et al., the fact that they can prove that in a general equilibrium model low interest rates drive up concentration and drive down productivity growth must at least marginally increase the likelihood that the same is actually true in the real world.

The sad truth is that, after Sonnenschein-Mantel-Debreu, they simply have no basis for believing that. It is purely a matter of faith.

Numeracy Is Charismatic

Finally, general equilibrium persists because working really complicated models makes economics into a priesthood. The effect is exactly the same as the effect that writing had on an ancient world in which literacy was rare.

In the ancient world, reading and writing were hard and mysterious things that most people couldn’t do, and so they commanded respect. (It’s not an accident that after the invention of writing each world religion chose to idolize a book.) Similarly, economics–and general equilibrium in particular–is something really hard that most literate people, indeed, even most highly-educated people and even most social scientists, cannot do.

And so it commands respect.

I have long savored the way the mathematical economist gives the literary humanist a dose of his own medicine. The readers and writers lorded it over the illiterate for so long, making the common man shut up because he couldn’t read the signs. It seems fitting that the mathematical economists should now lord their numeracy over the merely literate, telling the literate that they now should shut up, because they cannot read the signs.

It is no accident, I think, that one often hears economists go on about the importance of “numeracy,” as if to turn the knife a bit in the poet’s side. Numeracy is, in the end, the literacy of the literate. But schadenfreude shouldn’t stop us from recognizing that general equilibrium has no more purchase on reality than the Bhagavad Gita.

To be sure, economists’ own love affair with general equilibrium has cooled somewhat since the Great Recession, which seems to have accelerated a move from theoretical work in economics (of which general equilibrium modeling is an important part) to empirical work.

But it’s important to note here that economists have in many ways been reconstituting the priesthood in their empirical work.

For economists do not conduct empirics the way you might expect them to, by going out and talking to people and learning about how businesses function. Instead, they prefer to analyze data sets for patterns, a mathematically-intensive task that is conveniently conducive to the sort of technical arms race that economists also pursue in general equilibrium theory.

If once the standard for admission to the cloister was fluency in the latest general equilibrium techniques, now it is fluency in the latest econometric techniques. These too overawe non-economists, leaving them to feel that they have nothing to contribute because they do not speak the language.

Back to the Good

But general equilibrium’s intellectual bankruptcy is not economics’ intellectual bankruptcy, and does not even mean that Liu et al.’s paper is without value.

For economic thinking can be an aid to thought when used properly. That value appears clearly in Liu et al.’s basic and plausible argument that low interest rates can lead to higher concentration and lower productivity growth. Few antitrust scholars have considered the connection between interest rates and market concentration, and the basic story Liu et al. tell gives antitrusters something to think about.

What makes Liu et al.’s story helpful, in contrast to the general equilibrium model they pursue later in the paper, is that it is about tendencies alone, rather than about attempting to reconcile all possible tendencies and fully characterize their net product, as general equilibrium tries to do.

All other branches of knowledge undertake such simple storytelling, and indeed limit themselves to it, and so one might say that economics is at its best when it is no more ambitious in its claims than any other branch of knowledge.

When a medical doctor advises you to reduce the amount of trace arsenic in your diet, he makes a claim about tendencies, all else held equal. He does not claim to account for the possibility that reducing your arsenic intake will reduce your tolerance for arsenic and therefore leave you unprotected against an intentional poisoning attempt by a colleague.

If the doctor were to try to take all possible effects of a reduction in arsenic intake into account, he would fail to provide you with any useful knowledge, but he would succeed at mimicking a general equilibrium economist.

When Liu et al. move from the story they tell in their introduction to their general equilibrium model, they try to pin down the overall effect of interest rates on the economy, accounting for how every resulting price change in one market influences prices in all other markets. That is, they try in a sense to simulate an economy in a highly stylized way, like a doctor trying to balance the probability that trace arsenic intake will give you cancer against the probability that it will save you from a poisoning attempt. Of course they must fail.

When they are not deriding it as mere “intuition,” economists call the good economics to which I refer “partial equilibrium” economics, because it doesn’t seek to characterize equilibria in all markets, but instead focuses on tendencies. It is the kind of economics that serves as a staple for antitrust analysis.

What will a monopolist’s increase in price do to output? If demand is falling in price–people buy less as price rises–then obviously output will go down. And what will that mean for the value that consumers get from the product? It must fall, because they are paying more, so we can say that consumer welfare falls.
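That partial-equilibrium claim can be checked with simple arithmetic. Here is a minimal numeric sketch using a hypothetical linear demand curve, q = 100 - p (an assumed example for illustration, not anything drawn from Liu et al. or any real market):

```python
def quantity(price):
    # Hypothetical linear demand: q = 100 - p (zero once p reaches 100).
    return max(0.0, 100.0 - price)

def consumer_surplus(price):
    # For linear demand q = 100 - p, consumer surplus is the triangle
    # under the demand curve and above the price: (1/2) * q^2.
    q = quantity(price)
    return 0.5 * q * q

# A monopolist raising price from 40 to 60:
print(quantity(40), quantity(60))                  # output falls: 60.0 -> 40.0
print(consumer_surplus(40), consumer_surplus(60))  # welfare falls: 1800.0 -> 800.0
```

Raising the price from 40 to 60 cuts output from 60 to 40 units and consumer surplus from 1800 to 800: consumers buy less and pay more for what they do buy, so consumer welfare falls, just as the tendency analysis says.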

Of course, the higher prices might cause consumers to purchase more of another product, and economies of scale in production of that other product might actually cause its price to fall, and the result might then be that consumer welfare is not reduced after all.

But trying to incorporate such knock-on effects abstractly into our thought only serves to reduce our understanding, burying it under a pile of what-ifs, just as concerns about poisoning attempts make it impossible to think clearly about the health effects of drinking contaminated water.

If the knock-on effects predominate, then we must learn that the hard way, by acting first on our analysis of tendencies. And even if we do learn that the knock-on effects are important, we will not respond by trying to take all effects into account general-equilibrium style–for that would gain us nothing but difficulty–but instead we will respond by flipping our emphasis, and taking the knock-on effects to be the principal effects. We will assume that the point of ingesting arsenic is to deter poisoning, and forget about the original set of tendencies that once concerned us, namely, the health benefits of avoiding arsenic.

Our human understanding can do no more. But faith is not really about understanding.

(Could it be that general equilibrium models are themselves just about identifying tendencies, showing, perhaps, that a particular set of tendencies persists even when a whole bunch of counter-effects are thrown at it? In principle, yes. Which is why very small general equilibrium models, like the two-good exchange model known as the Edgeworth Box, can be useful aids to thought. But the more goods you add in, and the closer the model comes to an attempt at simulating an economy–the more powerfully it seduces scholars into “calibrating” it with data and trying to measure the model as if it were the economy–the less likely it is that the model is aiding thought as opposed to substituting for it.)