Categories
Antitrust Monopolization

Self-Preferencing and the Level Playing Field

I, too, have been enamored of sports metaphors in antitrust. How can the level playing field not convince?

Two wrestlers meet on the floor. If it is uphill for one and downhill for the other, neither will excel. One will find it too easy to win, and so train little. The other will find it too hard to win, and so train little. So, too, in business. If Amazon stands at the top of the hill, because Amazon owns the floor and has chosen to put itself there, then it will do little to improve itself, for it can too easily win against the third-party sellers that it has placed at the bottom of the hill.

But the level playing field is but the pretense of fairness. A way, only, of highlighting a much greater unfairness that we in fact revere. For when the athletes meet, one wins, and not, we like to think, by chance, but because one is better. And why is that one better? Forsooth, because that one does not compete on a level playing field at all. His muscles are better developed. He has better stamina. He is a quicker thinker. He has the focus of mind required to train more. His intuition is better. He has a better spatial sense. And so on. That is, he has an advantage that he does not share with his opponent.

Let us say it is his muscles. In muscle space the field is not level; he stands at the top. And he self-preferences, for he does not, say, starve himself for a week before the bout in order to waste his muscles a bit and thereby level the playing field in muscle space. No! He seizes his advantage. He uses it to win—inevitably to win—and despite this inevitability he feels that he deserves this win, that it is an expression of who he is and not of the tilt of a field.

Why should he feel that his victory is about him given that it was not earned on a truly level playing field? Because it is not a complete leveling that we really seek in any contest. What we seek is to reveal the character of a field that we value. Once we have isolated that field, we glory in whatever tilt we find to it.

If what interests us is who is the strongest, then we want to level the irrelevant fields, and then watch which way the parties slide on the field of strength. If it is the strength of wrestlers, then we level the floor, so that we can better perceive the tilt in their relative strengths.

We can therefore only really object to self-preferencing if the particular instance of self-preferencing at issue relates to a field that we do not think important. We cannot oppose self-preferencing itself, for to do that is to oppose all tilts of field, which is to say, to oppose excellence. It is to insist that no one win the match, or equivalently, that it only ever be determined by a flip of the coin.

(You ask me why the strong massacre the weak in war and want to celebrate it. What challenge is there in that, you say? I say: what challenge was there in your successes, dear reader, any of them, apart from the anxiety you may have felt over whether you would succeed, an anxiety born of your ignorance regarding where your strengths lay? Do you not massacre your opponents too, and call it achievement?)

We can oppose Amazon’s self-preferencing only if it lies in a space that we think irrelevant. If, instead, it reflects a superiority that we desire—if, in the commercial context, it is a superiority in product space, meaning that the self-preferencing delivers better products to consumers—then we must celebrate Amazon’s blood-letting.

We might legitimately say that in giving priority to ads for its own products, Amazon is tilting a field about which we do not care much—the field of marketing—and that this prevents the tilt of a different field, that of product quality, from determining the outcome of the game, as we would want it to do.

But we might also conclude that on an ecommerce platform rife with unregulated and unsafe products, the field of information about products should be tilted in favor of Amazon, because then, at least, it is easier for consumers to find the products that Amazon actually stands behind: its own, for which Amazon can be sued if the products turn out to be defective.

So I do not see how the sports metaphor ultimately adds anything to antitrust analysis. It certainly does not teach that the heart of antitrust is fairness, the rules of the game. It merely takes us back to the question that is the heart of all antitrust analysis: does the slaying of competitors improve the product entire, including our ability to find it?

Categories
Antitrust

Cheerleading

The headline in the New York Times one day last week: “Antitrust Overhaul Passes Its First Tests. Now, the Hard Parts.” The article notes that “[t]he bills face fierce opposition from technology companies, which have marshaled their considerable lobbying operations.” That would have been a good place to mention the fierce support for the bills coming from the Times itself, although I suppose that would be obvious to anyone who had read the article’s headline.

The Washington Post recently called out OAN reporter Christina Bobb for reporting on the Arizona recount while also raising funds to support it. How about an exposé on the Times for cheering on antitrust reforms that target the paper’s own direct competitors—the Tech Giants—for advertising dollars? If the giants go down, the Times will gain.

But I guess the Washington Post will too.

Categories
Antitrust Monopolization

The Assault on the Printed Page

When the New York Journal cabled Mark Twain in London on June 2, 1897 to inquire whether he was gravely ill, Twain famously replied that the reports of his death were greatly exaggerated.

William Randolph Hearst, the Journal’s publisher, could have saved his scoop by having Twain shot on the spot. Fortunately, he didn’t, and we got another 13 years and “Captain Stormfield’s Visit to Heaven” out of the great humorist.

Publishers of university textbooks wouldn’t have been so patient.

Reports of the demise of the printed page, popular since the dawn of the Internet, have turned out to be greatly exaggerated: sales of print books are surging.

So textbook publishers have decided to kill the printed page themselves.

According to a recent antitrust class action brought by university students, all the big names in textbook publishing have been working together to funnel cash to universities in exchange for commitments to assign online-only textbooks to students instead of print books.

It’s working: more than 1,000 universities have agreed to assign publishers’ online-only editions, millions of students have already been forced to purchase them, and publishers are preparing to phase out print textbooks entirely.

Studies show that students, like most readers, prefer the printed page, and textbook companies have seemingly had no problem jacking prices up to astronomical levels in recent years, with the average textbook for a core undergraduate course like statistics retailing for more than $300. So what do publishers have to gain from their assault on the printed page?

A lot, it turns out.

The Rise and Fall of the Internet Used Textbook Market

Eliminating print allows publishers to wipe out competitors that have depressed sales for years.

Before the Internet, textbook publishers had little to fear from the used book market, apart from an occasional copy with a yellow “Used” sticker on the spine that would make its way onto the shelf of a university bookstore.

The Internet changed that, by creating a national–indeed, international–market for used textbooks. Sales volumes of new textbooks plummeted, as students could now pass books along to each other from semester to semester through the medium of online booksellers.

For years, publishers more than offset their losses by jacking up new book prices, but it turns out that there is a limit even to what students with no-questions-asked access to loans are willing to spend for a new textbook.

Indeed, just as excessive tax increases can reduce tax revenues, excessive textbook price increases reduce profits as students start locating bootleg copies on the Internet or shaming their professors into distributing textbook pdfs in violation of copyright rules.

Publishers tried to stem the tide by accelerating the rate at which they put out new textbook editions, even–and rather humorously–in such timeless subjects as basic physics, in order to drive used books to obsolescence and force students to come back to the market for new books.

It didn’t work, which is not to say that it put the major textbook publishers in jeopardy of closing up shop. Textbooks remain the most profitable books in publishing. But publishers preferred to go back to minting money at the old rate. And that’s where online-only books come in.

The Supreme Court has held as recently as 2013 that publishers cannot prohibit students from reselling their textbooks. But it is a staple of Internet law that online publishers can prohibit users from reselling access codes for online material. By killing the printed page, publishers kill the used book market.

The Antitrust Case against the Publishers

There was just one wrinkle that publishers couldn’t iron out on their own: getting universities to assign online-only books. To achieve that, publishers had to buy off the universities, and violate the antitrust laws.

Paying someone to deny your competitors an essential input is called “exclusive dealing” in antitrust lingo, and it’s illegal if the perpetrators have market power and the denial does not help them improve their own products.

But that’s just what publishers do when they pay universities to assign online-only books.

A university’s textbook choices are an essential input into the used book business. If schools don’t assign print books, used book sellers have no textbooks to resell in competition with publishers’ new books.

With used book sellers frozen out of the market, publishers end up with 100% of the textbook market, far in excess of the market shares generally required by the courts to establish market power.

And students end up blinking into the glare of an inferior product.

So this should be an easy antitrust case. But before an increasingly pro-business judiciary, it is anyone’s guess whether the courts will actually get this one right.

From Bad to Worse in the Information Age

The rise of the Internet used book market twenty years ago was itself a disaster for those who love books.

Back in the 1990s, biblioagnostics–those who were indifferent between studying from a new book and a used one–subsidized the bibliophiles who much preferred new books, because the ‘agnostics had to buy new books they didn’t want for lack of a robust used book market. Sales to ‘agnostics kept prices down, enabling bibliophiles to buy new books they could not otherwise afford.

In freeing ‘agnostics to save their money and buy used books instead, the Internet put an end to that subsidy, forcing bibliophiles who could not afford $300 for a new edition to put up with tattered, highlighter-marred tomes.

But if publishers now succeed at killing the printed page, everyone will suffer, not just bibliophiles. For the other thing publishers have to gain from the move to online is the demise of the university itself.

The Assault on the Printed Page Is an Assault on the University

Publishers sell more than just online textbooks. They sell everything a school needs for online learning, offering tests, quizzes, lecture notes, and PowerPoint slides to go along with the textbooks they peddle. And they hire university faculty members to teach instructors how to teach their materials.

When universities accept cash from the publishers in exchange for moving books online, and allow their faculties to indulge in the teaching aids that publishers offer as a perk to make the switch, schools effectively outsource instruction to publishers.

It is not hard to imagine publishers one day cutting out the middleman by offering courses and degrees directly to students. That would wipe out virtually all academic scholarship, save for the sponsored research common in the hard sciences.

Tuition covers about half the cost of instruction at universities, with the difference coming from subsidies. But faculty spend up to half of their putative instructional time producing scholarship, which means that student tuition dollars mostly pay for research, not teaching.

The publishers, however, won’t get any subsidies if they take over the instructional function, so the fees they will charge students will go entirely to instruction. But unless universities are able to make a strong case for conducting scholarship without teaching, which seems unlikely, the subsidies will dry up, and so there will be no money, either tuition or subsidies, left for scholarship.

Universities have been able to force students to pay for scholarship because university education is an oligopoly: brand loyalty–universities call it reputation–makes entry into the market by startups almost impossible, allowing schools to choose their prices without fear of competition.

But it is a virtuous oligopoly that subsidizes a public service, much the way the local advertising monopolies enjoyed by newspapers for most of the 20th century subsidized investigative reporting that was not strictly necessary to attract readers (tabloid headlines suffice for that).

The Internet has already come for newspapers, which lack the extreme brand loyalty enjoyed by universities, but one day it will come for universities too.

If their complicity in the assault on the printed page is any indication, they won’t know what hit them.

(I thank Chris Bradley for comments on a draft of this post.)

Categories
Antitrust Monopolization

Antitrust’s Long-Lived Japanese Business Paradox

One strand of the new antitrust is the notion that bigness is fragile: concentrate wealth and power in a single firm and you put all your eggs in one basket. If the firm fails, the economy is done. Better to break up your behemoths to give the economy resilience in the face of crises.

It turns out, however, that the lesson of Japan’s millennia-old firms is quite the opposite: the best way to last a thousand years is to cultivate a monopoly position.

At least according to the Times, which reports that “the Japanese companies that have endured the longest have often been defined by . . . an accumulation of large cash reserves,” which economics teaches is only possible for firms that have market power, and consequently the ability to raise prices above costs and stash the difference.

Indeed, according to the Times, many long-lived firms “started during the 200-year period, beginning in the 17th century, when Japan largely sealed itself off from the outside world, providing a stable business environment.” Read: when the government limited competition.

But wait. Aren’t Japan’s old firms small businesses? Today’s antitrust movement is all for giving small businesses their own mini-monopolies. Possibly because when a small business fails the economy won’t come down with it. (But more likely because this isn’t really about antitrust, but about wealth redistribution.)

So doesn’t the longevity of Japan’s businesses actually support the view that small is resilient? It turns out no. Japan counts big companies like Nintendo among its long-lived firms.

But America doesn’t need medieval Japanese business wisdom to understand that it’s competition, and not monopoly, that’s fragile. We have Schumpeter, who made the resilience of the big the centerpiece of his theory of creative destruction.

He argued that in a world that is more like a stormy sea than the water cooler at the Chicago Board of Trade, the apparent excesses of monopoly are in fact mostly examples of redundancy, the mainmast’s apparently unnecessary girth useful when the big storm hits.

Schumpeter writes:

If for instance a war risk is insurable, nobody objects to a firm’s collecting the cost of this insurance from the buyers of its products. But that risk is no less an element in long-run costs, if there are no facilities for insuring against it, in which case a price strategy aiming at the same end will seem to involve unnecessary restriction and to be productive of excess profits. . . . In analyzing such business strategy ex visu of a given point of time, the investigating economist or government agent sees price policies that seem to him predatory and restrictions of output that seem to him synonymous with loss of opportunities to produce. He does not see that restrictions of this type are, in the conditions of the perennial gale [of creative destruction], incidents, often unavoidable incidents, of a long-run process of expansion which they protect rather than impede.

Joseph A. Schumpeter, Capitalism, Socialism and Democracy 88 (Harper & Row 1975).

When the Times writes of Japan’s long-lived firms that “[l]arge enterprises in particular keep substantial reserves to ensure that they can continue issuing paychecks and meet their other financial obligations in the event of an economic downturn or a crisis,” it’s hard not to see these firms’ business philosophy as fundamentally Schumpeterian.

Of course, there’s a limit to this kind of thinking. Business longevity and economic growth are two different things. I really am glad we’re not still using fax machines. But although there are plenty of problems with monopoly, fragility is not one of them.

Categories
Antitrust Meta Philoeconomica

Liu et al. and the Good and Bad in Economics

Liu et al.’s paper trying to connect market concentration to low interest rates reflects everything that’s good and bad about economics.

The Good Is the Story

The good is that the paper tells a plausible story about why the current era’s low interest rates might actually be the cause of the low productivity growth and increasing markups we are observing, as well as the increasing market concentration we might also be observing.

The story is that low interest rates encourage investment in innovation, but they also paradoxically discourage competition against dominant firms, because low rates allow dominant firms to invest more heavily in innovation in order to defend their dominant positions.

The result is fewer challenges to market dominance and therefore less investment in innovation and consequently lower productivity growth, increasing markups, and increasing market concentration.

Plausible does not mean believable, however.

The notion that corporate boards across America are deciding not to invest in innovation because they think dominant firms’ easy access to capital will allow them to win any innovation war is farfetched, to say the least.

“Gosh, it’s too bad rates are so low, otherwise we might have a chance to beat the iPhone,” said one Google Pixel executive to another never.

And it’s a bit too convenient that this monopoly-power-based explanation for two of the major stylized facts of the age–low interest rates and low productivity growth–would come along at just the moment when the news media is splashing antitrust across everyone’s screens for its own private purposes.

But plausibility is at least helpful to the understanding (as I will explain more below), and the gap between it and believability is not the bad part of economics on display in Liu et al.

The Bad Is the General Equilibrium

The bad part is the authors’ general equilibrium model.

They think they need the model to show that the discouragement competitors feel at the thought of dominant firms making large investments in innovation to thwart them outweighs the incentive that lower interest rates give competitors, along with dominant firms, to invest in innovation.

If not, then competitors might put aside their fears and invest anyway, productivity growth would increase, and concentration would fall.

Trouble is, no general equilibrium model can answer this question, because general equilibrium models are not themselves even approximately plausible models of the real world, and economists have known this since the early 1970s.

Intellectually Bankrupt for a While Now

Once upon a time economists thought they could write down a model of the economy entire. The model they came up with was built around the concept of equilibrium, which basically meant that economists would hypothesize the kind of bargains that economic agents would be willing to strike with each other–most famously, that buyers and sellers will trade at a price at which supply equals demand–and then show how resources would be allocated were everyone in the economy in fact to trade according to the hypothesized bargaining principles.

As Frank Ackerman recounts in his aptly-titled assessment of general equilibrium, “Still Dead After All These Years: Interpreting the Failure of General Equilibrium Theory,” trouble came in the form of a 1972 proof, now known as the Sonnenschein-Mantel-Debreu Theorem, that there is never any guarantee that actual economic agents will bargain their way to the bargaining outcomes–the equilibria–that form the foundation of the model.

In order for buyers and sellers of a good to trade at a price that equalizes supply and demand, the quantity of the good bid by buyers must equal the quantity supplied at the bid price. If the price doesn’t start at the level that equalizes supply and demand–and there’s no reason to suppose it should–then the price must move up or down to get to equilibrium.

But every time price moves, it affects the budgets of buyers and sellers, who must then adjust their bids across all the other markets in which they participate, in order to rebalance their budgets. But that in turn means prices in the other markets must change to rebalance supply and demand in those markets.

The proof showed that there is no guarantee that the adjustments won’t just cause prices to move in infinite circles, an increase here triggering a reduction there that triggers another reduction here that triggers an increase back there, and so on, forever.

Thus there is no reason to suppose that prices will ever get to the places that general equilibrium assumes that they will always reach, and so general equilibrium models describe economies that don’t exist.
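
To make the non-convergence point concrete, here is a minimal sketch of a tâtonnement-style price adjustment loop. The excess-demand functions are invented purely for illustration–they come neither from Liu et al. nor from the formal statement of the Sonnenschein-Mantel-Debreu result–but they show how an adjustment process in which prices chase excess demand can circle an equilibrium forever instead of finding it.

```python
# Toy tatonnement (price-adjustment) sketch with made-up excess demands:
# good 1 is in excess demand when good 2 is expensive, and good 2 is in
# excess demand when good 1 is cheap, so correcting one market keeps
# knocking the other market off balance.
def excess_demand(p1, p2):
    z1 = p2 - 5.0          # excess demand for good 1
    z2 = 5.0 - p1          # excess demand for good 2
    return z1, z2

p1, p2 = 6.0, 6.0          # start slightly away from the clearing point (5, 5)
for t in range(41):
    z1, z2 = excess_demand(p1, p2)
    p1 += 0.2 * z1         # raise the price where demand exceeds supply
    p2 += 0.2 * z2         # cut it where supply exceeds demand
    if t % 10 == 0:
        print(f"t={t:2d}  p1={p1:6.3f}  p2={p2:6.3f}")
# The prices circle the market-clearing point (5, 5) and drift farther from
# it each round instead of converging to it.
```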

Liu et al.’s model describes an economy with concentrated markets, so it doesn’t just rely on the supply-equals-demand definition of equilibrium targeted by the Sonnenschein-Mantel-Debreu Theorem, a definition of equilibrium that seeks to model trade in competitive markets. But the flaw in general equilibrium models is actually even greater when the models make assumptions about bargaining in concentrated markets.

We can kind-of see why, in competitive markets, an economic agent would be happy to trade at a price that equalizes supply and demand, because if the agent holds out for a higher price, some other agent waiting in the wings will jump into the market and do the deal at the prevailing price.

But in concentrated markets, in which the number of firms is few, and there is no competitor waiting in the wings to do a deal that an economic agent rejects, holding out for a better price is always a realistic option. And so there’s never even the semblance of a guarantee that whatever price the particular equilibrium definition suggests should be the one at which trade takes place in the model would actually be the price upon which real world parties would agree. Buyer or seller might hold out for a better deal at a different price.

Indeed, in such game theoretic worlds, there is not even a guarantee that any deal at all will be done, much less a deal at the particular price dictated by the particular bargaining model arbitrarily favored by the model’s authors. Bob Cooter called this possibility the Hobbes Theorem–that in a world in which every agent holds out for the best possible deal, one that extracts the most value from others, no deals will ever get done and the economy will be laid to waste.

The bottom line is that all general equilibrium models, including Liu et al.’s, make unjustified assumptions about the prices at which goods trade, not to mention whether trade will take place at all.

But are they at least good as approximations of reality? The answer is no. There’s no reason to suppose that they get prices only a little wrong.

That makes Liu et al.’s attempt to use general equilibrium to prove things about the economy something of a farce. And their attempt to “calibrate” the model by plugging actual numbers from the economy into it in order to have it spit out numbers quantifying the effect of low interest rates on productivity, absurd.

If general equilibrium models are not accurate depictions of the economy, then using them to try to quantify actual economic effects is meaningless. And a reader who doesn’t know better might well come away from the paper with a false impression of the precision with which Liu et al. are able to make their economic arguments about the real world.

So Why Is It Still Used?

But if general equilibrium is a bad description of reality, why do economists still use it?

It Creates a Clear Pecking Order

Partly because solving general equilibrium models is hard, and success is clearly observable, so keeping general equilibrium models in the economic toolkit provides a way of deciding which economists should get ahead and be famous: namely, those who can work the models.

By contrast, lots of economists can tell plausible, even believable, stories about the world, and it can take decades to learn which was actually right, making promotion and tenure decisions based on economic stories a more fraught, and necessarily political, undertaking.

Indeed, it is not without a certain amount of pride that Liu et al. write in their introduction that

[w]e bring a new methodology to this literature by analytically solving for the recursive value functions when the discount rate is small. This new technique enables us to provide sharp, analytical characterizations of the asymptotic equilibrium as discounting tends to zero, even as the ergodic state space becomes infinitely large. The technique should be applicable to other stochastic games of strategic interactions with a large state space and low discounting.

Ernest Liu et al., Low Interest Rates, Market Power, and Productivity Growth 63 (NBER Working Paper, Aug. 2020).

Part of the appeal of the paper to the authors is that they found a new way to solve the particular category of models they employ. The irony is that technical advances of this kind in general equilibrium economics are like the invention of the coaxial escapement for mechanical watches in 1976: a brilliant advance on a useless technology.

It’s an Article of Faith

But there’s another reason why use of general equilibrium persists: wishful thinking. I suspect that somewhere deep down economists who devote their lives to these models believe that an edifice so complex and all-encompassing must be useful, particularly since there are no other totalizing approaches to mathematically modeling the economy on offer.

Surely, think Liu et al., the fact that they can prove that in a general equilibrium model low interest rates drive up concentration and drive down productivity growth must at least marginally increase the likelihood that the same is actually true in the real world.

The sad truth is that, after Sonnenschein-Mantel-Debreu, they simply have no basis for believing that. It is purely a matter of faith.

Numeracy Is Charismatic

Finally, general equilibrium persists because working really complicated models makes economics into a priesthood. The effect is exactly the same as the effect that writing had on an ancient world in which literacy was rare.

In the ancient world, reading and writing were hard and mysterious things that most people couldn’t do, and so they commanded respect. (It’s not an accident that after the invention of writing each world religion chose to idolize a book.) Similarly, economics–and general equilibrium in particular–is something really hard that most literate people, indeed, even most highly-educated people and even most social scientists, cannot do.

And so it commands respect.

I have long savored the way the mathematical economist gives the literary humanist a dose of his own medicine. The readers and writers lorded it over the illiterate for so long, making the common man shut up because he couldn’t read the signs. It seems fitting that the mathematical economists should now lord their numeracy over the merely literate, telling the literate that they now should shut up, because they cannot read the signs.

It is no accident, I think, that one often hears economists go on about the importance of “numeracy,” as if to turn the knife a bit in the poet’s side. Numeracy is, in the end, the literacy of the literate. But schadenfreude shouldn’t stop us from recognizing that general equilibrium has no more purchase on reality than the Bhagavad Gita.

To be sure, economists’ own love affair with general equilibrium has cooled somewhat since the Great Recession, which seems to have accelerated a move from theoretical work in economics (of which general equilibrium modeling is an important part) to empirical work.

But it’s important to note here that economists have in many ways been reconstituting the priesthood in their empirical work.

For economists do not conduct empirics the way you might expect them to, by going out and talking to people and learning about how businesses function. Instead, they prefer to analyze data sets for patterns, a mathematically-intensive task that is conveniently conducive to the sort of technical arms race that economists also pursue in general equilibrium theory.

If once the standard for admission to the cloister was fluency in the latest general equilibrium techniques, now it is fluency in the latest econometric techniques. These too overawe non-economists, leaving them to feel that they have nothing to contribute because they do not speak the language.

Back to the Good

But general equilibrium’s intellectual bankruptcy is not economics’ intellectual bankruptcy, and does not even mean that Liu et al.’s paper is without value.

For economic thinking can be an aid to thought when used properly. That value appears clearly in Liu et al.’s basic and plausible argument that low interest rates can lead to higher concentration and lower productivity growth. Few antitrust scholars have considered the connection between interest rates and market concentration, and the basic story Liu et al. tell gives antitrusters something to think about.

What makes Liu et al.’s story helpful, in contrast to the general equilibrium model they pursue later in the paper, is that it is about tendencies alone, rather than about attempting to reconcile all possible tendencies and fully characterize their net product, as general equilibrium tries to do.

All other branches of knowledge undertake such simple story telling, and indeed limit themselves to it, and so one might say that economics is at its best when it is no more ambitious in its claims than any other part of knowledge.

When a medical doctor advises you to reduce the amount of trace arsenic in your diet, he makes a claim about tendencies, all else held equal. He does not claim to account for the possibility that reducing your arsenic intake will reduce your tolerance for arsenic and therefore leave you unprotected against an intentional poisoning attempt by a colleague.

If the doctor were to try to take all possible effects of a reduction in arsenic intake into account, he would fail to provide you with any useful knowledge, but he would succeed at mimicking a general equilibrium economist.

When Liu et al. move from the story they tell in their introduction to their general equilibrium model, they try to pin down the overall effect of interest rates on the economy, accounting for how every resulting price change in one market influences prices in all other markets. That is, they try in a sense to simulate an economy in a highly stylized way, like a doctor trying to balance the probability that trace arsenic intake will give you cancer against the probability that it will save you from a poisoning attempt. Of course they must fail.

When they are not deriding it as mere “intuition,” economists call the good economics to which I refer “partial equilibrium” economics, because it doesn’t seek to characterize equilibria in all markets, but instead focuses on tendencies. It is the kind of economics that serves as a staple for antitrust analysis.

What will a monopolist’s increase in price do to output? If demand is falling in price–people buy less as price rises–then obviously output will go down. And what will that mean for the value that consumers get from the product? It must fall, because they are paying more, so we can say that consumer welfare falls.
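
A back-of-the-envelope version of that partial-equilibrium tendency, using a hypothetical linear demand curve Q = 100 − P (none of these numbers come from the post), shows the point in figures: raise the price and both output and the consumer-surplus triangle shrink.

```python
# Toy partial-equilibrium example with an assumed linear demand curve.
def output(price):
    return max(0.0, 100 - price)

def consumer_surplus(price):
    # Area of the triangle under the demand curve and above the price paid;
    # with Q = 100 - P the triangle's height equals its base, so CS = q^2 / 2.
    q = output(price)
    return 0.5 * q * q

for p in (40, 60):
    print(f"price={p}  output={output(p):.0f}  consumer_surplus={consumer_surplus(p):.0f}")
# price=40: output 60, surplus 1800; price=60: output 40, surplus 800
```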

Of course, the higher prices might cause consumers to purchase more of another product, and economies of scale in production of that other product might actually cause its price to fall, and the result might then be that consumer welfare is not reduced after all.

But trying to incorporate such knock-on effects abstractly into our thought only serves to reduce our understanding, burying it under a pile of what-ifs, just as concerns about poisoning attempts make it impossible to think clearly about the health effects of drinking contaminated water.

If the knock-on effects predominate, then we must learn that the hard way, by acting first on our analysis of tendencies. And even if we do learn that the knock-on effects are important, we will not respond by trying to take all effects into account general-equilibrium style–for that would gain us nothing but difficulty–but instead we will respond by flipping our emphasis, and taking the knock-on effects to be the principal effects. We will assume that the point of ingesting arsenic is to deter poisoning, and forget about the original set of tendencies that once concerned us, namely, the health benefits of avoiding arsenic.

Our human understanding can do no more. But faith is not really about understanding.

(Could it be that general equilibrium models are themselves just about identifying tendencies, showing, perhaps, that a particular set of tendencies persists even when a whole bunch of counter-effects are thrown at it? In principle, yes. Which is why very small general equilibrium models, like the two-good exchange model known as the Edgeworth Box, can be useful aids to thought. But the more goods you add in, and the closer the model comes to an attempt at simulating an economy–the more powerfully it seduces scholars into “calibrating” it with data and trying to measure the model as if it were the economy–the less likely it is that the model is aiding thought as opposed to substituting for it.)

Categories
Antitrust Regulation

“The Best Are Easily 10 Times Better Than Average,” But Can They Do Anything Else?

Netflix CEO Reed Hastings is celebrating the principle that great software programmers are orders of magnitude more productive than average programmers. The implication is that sky-high salaries for these rock stars are worth it.

Now, it may very well be the case that the best programmers are orders of magnitude better than average programmers. I’ve seen a similar thing on display during examinations for gifted students: inevitably one student finishes the exam in half the time and walks out with a perfect score, while the rest of the gifted struggle on.

Just how many orders of magnitude smarter is that student, relative not just to the other gifted students in the room, but to the average student who is not in the room?

But while the rock-star principle may justify the high willingness of Silicon Valley firms to pay for talent — the more value an employee brings to a firm the more the firm can afford to pay the employee and still end up ahead — that doesn’t mean that as an economic matter a firm must pay rock-star employees higher salaries.

Far from it.

Economic efficiency requires that great programmers be put to use programming, otherwise society loses the benefit of their talents. But the minimum salary that, as an economic matter, a tech firm must pay a rock-star programmer to induce the programmer to program is just a penny more than what the programmer would earn doing the programmer’s next-most productive activity.

If the programmer isn’t good at anything but programming, that number might be $15.01 — the $15 minimum wage Amazon pays its fulfillment center workers plus a penny — or even something lower, as the programmers I know would have a tough time sprinting around a warehouse all day.

A programmer might be worth $100 million as a programmer, for example, because the programmer is capable of delivering that much value to software. But to make sure this person actually delivers that value, the market does not need actually to pay the programmer $100 million, or anything near to that amount. All the market needs to pay the programmer is a penny more than what the programmer would earn by not programming.

And if rock-star programmers tend only to be rock stars at programming, as I suspect is the case, that number might be pretty small, indeed, on the order of what average programmers make — if not $15 an hour, which is a bit of an exaggeration — because the rock-star programmer is likely to be average at programming-adjacent pursuits.

If the most the programmer would make teaching math, playing competitive chess, or just programming for non-tech companies that will never earn the profits needed to pay rock-star salaries, no matter how talented their employees, is a hundred thousand a year, then that plus a penny is all that economics requires that the programmer be paid for doing programming. Not $100 million.
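
In code, with the hypothetical figures used above (the $100 million of value delivered and the $100,000 outside option), the point is a one-liner:

```python
# Made-up numbers from the example above: efficiency requires paying the
# programmer a penny more than the next-best alternative, not the value created.
value_created_at_tech_firm = 100_000_000   # what the programmer is worth to the firm
next_best_alternative      = 100_000       # best pay outside rock-star programming
minimum_efficient_pay = next_best_alternative + 0.01
print(minimum_efficient_pay)               # 100000.01 -- nowhere near 100 million
```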

So why are rock-star programmers earning the big bucks in Silicon Valley? Because tech firms compete for them, bidding up the price of their services.

Tech firms know this, of course, and once tried to put a lid on the bidding war, by entering into no-poach agreements pursuant to which they promised not to try to lure away each other’s programmers by offering them more money.

There is no reason to think that these no-poach agreements were inefficient. Unless you believe that programmers can contribute more to some tech firms than to others, in which case the bidding wars that drive rock-star compensation sky high are allocating programmers to their most productive uses. But that seems unlikely: does making Google better contribute more to America than making Amazon better?

(The agreements also could not have created any deadweight loss, because perfect price discrimination is the norm in hiring programming talent: firms negotiate compensation individually with each programmer.)

All the no-poach agreements did was to change the distribution of wealth: limiting the share of a firm’s revenues that programmers can take for themselves.

Indeed, the no-poach agreements probably contributed a bit to the deconcentration of wealth.

A dollar of revenue paid out to a smart programmer goes in full to the programmer, whereas that same dollar, if not paid to the programmer but instead paid out as profits to shareholders, is divided multiple ways between the firm’s owners. Competitive bidding for rock-star programmer salaries concentrates wealth, and the no-poach agreements spread it — admittedly to shareholders, who tend to be wealthy, but at least the dollar is spread.

The antitrust laws intervened just in time, however, to dissolve these agreements and punish Silicon Valley firms for doing their part to slow the increase in the wealth gap in America.

Today’s antitrust movement has argued that antitrust should break up the tech giants in part to prevent them from artificially depressing the wages they pay the little guy. I’ve argued that would be a mistake, because breakup could damage the companies, reducing the value they deliver to society and harming everyone. Regulating wages directly is a better idea.

But you don’t just make compensation fair by raising low wages. You also have to reduce excessive wages. One way to start is just by allowing the tech firms to conspire against their rock stars.

And once tech firms have finished conspiring against their overpaid programmers, they can start conspiring against another group of employees that is even more grossly overpaid per dollar of value added: their CEOs.

Well, that we might have to do for them.

Categories
Antitrust Monopolization

The Decline in Monopolization Cases in One (More) Graph

[Graph: DOJ, FTC, and Private Cases Filed under Section 2 of the Sherman Act. Image license: CC BY-SA 4.0.]

Observations:

  • The decline in cases brought by the Department of Justice since the 1970s is consistent with the story of Chicago School influence over antitrust. What is perhaps less well known, but clearly reflected in the data, is that the Chicago Revolution took place in the Ford, and especially the Carter, Administrations, not, as is sometimes supposed, in the Reagan Administration, although Reagan supplied the coup de grace.

    Indeed, we have only five monopolization cases filed by DOJ over the course of the entire Carter Administration, as compared with 58 filed during the part of the Nixon Administration and the Ford Administration covered by this data series. This is consistent with the broader influence of the Chicago School over regulation of business. It was also under Ford and Carter, not Reagan, that deregulation got underway, with partial deregulation of railroads (1976), near-complete deregulation of airlines (1978), and partial deregulation of trucking (1980) (more here).

    The timing suggests that the Chicago School’s victories were intellectual, rather than merely partisan. As Przemyslaw Palka has pointed out to me, Milton Friedman consciously pursued a strategy of intellectual, rather than political warfare, because he understood that victory on the intellectual plane is more complete and enduring (a nice discussion of this may be found here on pages 218-221). As these numbers suggest, Chicago prevailed by converting its adversaries, so that even when its adversaries were nominally in political power under Carter, they implemented Chicago’s own agenda.
  • To the extent that the early part of the FTC data series is reliable (more on that below), the story in the FTC case numbers is the six monopolization cases the agency has brought over the past five years, following a twenty-year period during which it brought only three. With the exception of Google, which has just been filed, there has been no corresponding uptick in monopolization cases filed by the Department of Justice.
  • The private litigation data show that in some years (1998 and 2013), private litigation across the entire United States has produced fewer monopolization cases (against unique defendants) than did a single federal enforcer–the DOJ–in 1971. The private litigation numbers for 1997 to 2020 also show that, on average, about twenty defendants face new monopolization actions each year when federal enforcers are filing near-zero complaints. To the extent that the numbers for 1974 to 1983 are reliable (of which more below), they suggest that private cases have also declined markedly since the 1970s, although there was a lag of several years between the two effects, perhaps due to the tendency of private plaintiffs to file follow-on cases to government cases.
  • Altogether, one is left with the impression that corporate America has been awfully well-behaved since about 1975.

Notes on the Data:

  • The cases brought by the Department of Justice (DOJ) come from the Antitrust Division’s own workload statistics, so I assume the numbers are accurate. For DOJ cases investigated, as well as filed, see here.
  • The cases filed by private plaintiffs come from two sources. The first, for the years 1997 to 2020, is a search for Section 2 complaints in federal court dockets via Lexis CourtLink. I must thank Beau Steenken, Instructional Services Librarian & Associate Professor of Legal Research at University of Kentucky Rosenberg College of Law, for figuring out how to search CourtLink for Section 2 cases (no easy task, it turns out).

    These are only cases for which the plaintiff, in filing the complaint, indicated the cause of action as Section 2 of the Sherman Act in the court’s cover sheet. Apart from deleting a few cases in which DOJ was the plaintiff, and a few cases in which the case was filed by mistake (e.g., the case name reads: “error”), I did not examine these cases at all, other than to note that many of the defendants look plausible (e.g., Microsoft comes up a lot in the late 1990s or early 2000s).

    Finally, I counted only unique defendants in any given year. So for example, if there were ten cases filed against Microsoft in 2000, I counted that as only one case. The reason is that multiple consumers or competitors might be harmed by a single piece of anticompetitive conduct undertaken by a monopolist, and so one would expect multiple plaintiffs to sue the monopolist based on the same conduct. For those interested in using case counts to measure enforcement, all of those cases signal the same thing, that a particular anticompetitive practice has been challenged, and so all of the cases together really only represent a single instance of enforcement. I did not, however, check each complaint to make sure that the alleged conduct was the same across all complaints. I just assumed that multiple complaints filed in a given year against a single defendant relate to the same conduct. (I did not, however, count unique defendants across plaintiff types: the Justice Department case against Microsoft was counted toward DOJ cases, and any private cases filed against Microsoft in the same year count as an additional single case in the private case count.) A short code sketch of this counting rule appears just after these notes.

    According to CourtLink, some federal courts adopted online filing later than others, and CourtLink only has electronic dockets. I chose to use 1997 as the start year for this count, because by that year almost all jurisdictions were online and so presumably their dockets are part of the CourtLink database. According to CourtLink, several jurisdictions had not yet moved online by that year, however, and so the counts may be slightly skewed low in the first few years after 1997 because they miss cases filed in the jurisdictions that were still offline during that period. The jurisdictions that went online after January 1, 1997, and the year in which they went online, are District of New Mexico (1998), District of Nevada (1999), and District of Alaska (2004).

    The source of the data for the years 1974 to 1983 is Table 6 in this article. That table gives the yearly percentage of refusal to deal and predatory pricing cases in a sample of 2,357 cases from five courts, Southern District of New York, Northern District of Illinois, Northern District of California, Western District of Missouri, and Northern District of Georgia, as well as the total number of private antitrust cases filed per year. Because I suspect that my CourtLink data represents “pure” Section 2 cases–cases in which the Section 2 claim is the principal claim in the case–I adjusted these percentages using information from Table 1 in the paper about the share of those percentages that represent primary claims. Because the total yearly private cases given in the Article did not appear to be adjusted for multiple cases filed against the same defendant in a given year, as I adjusted the CourtLink data, I therefore further reduced the results in the same proportion as my CourtLink results were reduced when I eliminated multiple cases against the same defendant, a reduction of about 40%.
  • I collected the FTC data by searching for cases labeled “Single-Firm Conduct” in the FTC’s “cases and proceedings” database. The cases and proceedings database goes back to 1996, and so I labeled years for which there were no hits as years of zero cases going back to 1996. However, the FTC website does caution that some older cases are searchable only by name and year, and presumably not by case type, so it is possible that this data fails to count cases from early in the period (e.g., late 1990s). I also paged through the “Annual Antitrust Enforcement Activities Reports” issued by the FTC between 1996 and 2008 and found a couple of cases not returned by the search of the cases and proceedings database. Finally, I included the FTC’s case against Intel, filed in 2009. I counted both administrative complaints filed in the FTC’s own internal adjudication system and complaints filed by the FTC in federal court. The FTC cases are nominally brought under Section 5 of the FTC Act, through which the FTC enforces Section 2 of the Sherman Act.
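
For concreteness, here is a minimal sketch of the unique-defendant counting rule described in the private-litigation note above. The rows and the defendant "Acme Corp" are invented; this is not the actual CourtLink export, just the dedup logic.

```python
# Each complaint is a (year, defendant) pair; multiple complaints against the
# same defendant in the same year count once.
complaints = [
    (2000, "Microsoft"), (2000, "Microsoft"), (2000, "Acme Corp"),
    (2001, "Microsoft"), (2001, "Acme Corp"),
]

unique_per_year = {}
for year, defendant in set(complaints):        # dedupe (year, defendant) pairs
    unique_per_year[year] = unique_per_year.get(year, 0) + 1

print(sorted(unique_per_year.items()))         # [(2000, 2), (2001, 2)]
```
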
Categories
Antitrust Inframarginalism Monopolization

The Smallness of the Bigness Problem

The tendency to ascribe the problem of inequality that ails us to the bigness of firms is the great embarrassment of contemporary American progressivism. The notion that the solution to poverty is cartels for small business and the hammer for big business is so pre-modern, so mercantilist, that one wonders what poverty of intellect could have led American progressives into it.

Indeed, the contemporary progressive’s shame is all the greater because the original American progressives a century ago, whose name the contemporary progressive so freely appropriates, did not make the same mistake. The original progressives were more modern than progressives today, perhaps because the pre-modern age was not quite so distant from them. Robert Hale, the greatest lawyer-economist of the period, wrote that

[e]ven the classical economists realized . . . competition would not keep the price at a level with the cost of all the output, but would result in a price equal to the cost of the marginal portion of the output. Those who produce at lower costs because they own superior [capital] would reap a differential advantage which Ricardo, in his well-known analysis, designated “economic rent.”

Robert L. Hale, Freedom Through Law: Public Control of Private Governing Power 25-26 (1952).

I suspect that this is absolute Greek to the contemporary progressive. I will kindly explain it below.

But first, it should be noted that the American progressive’s failure to appreciate the smallness of the bigness problem is not shared by Piketty, whom American progressives celebrate without actually reading:

Yet pure and perfect competition cannot alter the inequality r > g, which is not the consequence of any market “imperfection.”

Thomas Piketty, Capital in the Twenty-First Century 573 (Arthur Goldhammer trans., 2017). (Italics mine.)

What does Piketty mean here?

He means what Hale meant, which is that the heart of inequality does not come from monopolists charging supracompetitive prices, however obnoxious we may feel that to be, but rather from the fact that the rich own assets that are more productive than the assets owned by the poor, and so they profit more than the poor even at efficient, competitive prices.

In other words, the rich get richer because their costs are lower and their costs are lower because they own all the best stuff.

No matter how competitive the market, prices will never be driven down to the lower costs faced by the rich, because other people own less-productive assets than do the rich and competition drives prices down to the level of the higher costs associated with producing things with less-productive assets.

(Why can’t price just keep going down, and simply drive the more expensive producers out of the market to the end of dissipating the profits of the less expensive producers? Because there is always a less expensive producer! Price can therefore never dissipate the profits of them all, and anyway demand puts a floor on price: consumers are always bidding prices up until supply satisfies demand.)
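
A toy calculation with invented numbers makes Hale’s point concrete: the competitive price settles at the cost of the marginal producer needed to meet demand, and every lower-cost producer pockets the difference as rent, with no monopoly markup anywhere in sight.

```python
# Invented costs: each producer can supply one unit and will do so at any
# price at or above its own cost; buyers want four units at roughly any price
# in this range.
costs = [10, 20, 30, 40, 50]   # per-unit costs, cheapest producer first
units_demanded = 4

# Competition bids the price down to, but not below, the cost of the marginal
# (fourth-cheapest) producer needed to satisfy demand.
competitive_price = sorted(costs)[units_demanded - 1]   # 40

rents = [competitive_price - c for c in sorted(costs)[:units_demanded]]
print(competitive_price)   # 40
print(rents)               # [30, 20, 10, 0] -- the owners of the best assets profit most
```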

Graphically, American progressives have been sweating the “monopoly profit” box without seeming to realize that it’s tiny compared to what remains once you eliminate it, which is the “economic rent” box.

Piketty, the original American progressives, and kindergartners know the difference between big and small. Why don’t we?

Categories
Antitrust

Conspiracy or Incompetence?

Let’s get this straight. The New York Times criticizes The Epoch Times today for running infomercials attacking the Chinese Communist Party’s handling of the coronavirus pandemic while making “no mention of The Epoch Times’s ties to Falun Gong, or its two-decade-long campaign against Chinese communism.”

But last week the Times ran a long piece, titled “Big Tech’s Professional Opponents Strike at Google,” that purported to reveal to readers the forces behind the Google antitrust suit while making no mention of the campaign of the News Media Alliance, of which the Times is a member, for antitrust action against Google, or the threat posed by Google to the Times’ advertising business.

Since the Times seems to think poor little Epoch Times should be disclosing its death struggle with the CCP to readers, I would like to see the Times start disclosing, in each article it writes about Big Tech, its death struggle with those companies over advertising revenues. The paper can also slap a correction to the same effect on each of the hundreds of pieces it has published over the past three years trashing its tech adversaries.

Ben Smith, who knows better, contributed to the Epoch Times piece. Let’s see him show some courage in his next column about media and tech.

So which is it? Maybe both.

Categories
Antitrust Regulation

Antitrust as Price Regulation by Least Efficient Means

Any company that has $100 billion in cash and marketable securities on its books, as Apple does, is charging excessive prices for its products, in the sense of prices higher than necessary to make everyone at Apple ready, willing, and able to continue to do the excellent job that they are doing.

Is that a problem? Unfortunately, yes, for any society that’s supposed to be a thing of the people. It means that Apple is bilking the public: taking more from the people for their iPhones and Macbooks than is strictly necessary to give Apple an incentive to produce iPhones and Macbooks.

You don’t need the money to reward investors. Otherwise you would have paid the money out already.

You don’t need the money to build more factories. Otherwise you would have built the factories already.

You don’t need the money to pay Tim Cook. Otherwise you would have upped his compensation already.

And with an AA+ credit rating, you don’t need the money for an emergency either, since it would cost you almost nothing to borrow cash in a pinch.

You just don’t need those billions, which is why they are what economists call “rents:” earnings in excess of what would be necessary to make the company, and all those who contribute to its success, ready, willing, and able to carry on.

Should government do something about these rents?

Yes. But not with the antitrust laws. Because Apple’s rents are not monopoly rents. Those are the excessive returns that come from making your products stand out by trashing your competitors’ products, rather than improving your own. Antitrust prohibits that sort of behavior.

But does anyone think Apple achieved the ability to charge $1,200 for an iPhone by making Samsung products worse?

Of course not.

Which is why there is no antitrust case against Apple.

Instead, Apple’s rents are Schumpeterian: excessive returns that come from making your products stand out by improving them, rather than by trashing the products of competitors. Antitrust does not prohibit such conduct.

Nor should it, because antitrust is a slayer, breaking up the firms that run afoul of its rules, saddling them with behavioral injunctions, and taxing them with trebled damages.

Those remedies make sense when the target is a firm that has gotten ahead by trashing competitors. That sort of firm doesn’t have a better product to offer, so smashing it is no great loss to society.

That’s not true for firms like Apple that have gotten ahead by being better. Smash Apple and you might well get Apple’s prices down. But you might also end up with poorer-quality products.

Why is it that Samsung keeps churning out gimmicky phones that are just a bit too ahead of their time to work properly, whereas, iteration after iteration, Apple phones continue to please?

Who knows?

By the same token, who knows whether Apple divided two, three, or four ways would still have the same old magic? Organizations are mysterious things, and we should break them only when they are already broken.

That doesn’t mean that something shouldn’t be done about Apple’s prices. As is so often the case, the right approach is the most direct: tell Apple to lower them.

There’s nothing novel about doing that. It’s the way America has often dealt with high-tech firms that get carried away with their own success. It happened with the landline telephone: the states regulated telephone rates for a century, and many retain the statutory authority to do so today. No vast cultural leap would be required to regulate the prices of iPhones or other Apple products.

Regulating prices runs much less of a risk of killing the golden goose, because it’s a scalpel to antitrust’s hammer, ordering prices down without smashing the firms that charge them.

But are prices really all that Apple’s antitrust adversaries care about? I think so.

The antitrust complaint brought by Epic, maker of the videogame Fortnite, is admirably transparent on this score, inveighing against what it calls Apple’s “30% tax” on paid App Store apps.

True, Epic spends a lot of time arguing that Apple should stop vetting the apps that can be installed on iPhones and should also stop requiring apps to accept payments via Apple’s own systems.

But it’s hard to believe Epic really cares whether consumers can run any app they want on the iPhone, or whether consumers can make in-app purchases with PayPal instead of Apple Pay.

The more likely reason Epic targets app vetting and the payments lockdown is that these two Apple policies prevent Epic from doing an end run around Apple’s 30% fee by connecting directly with users.

So to use antitrust to attack Apple’s prices, Epic ends up trying to thrust a stake through the streamlined, curated environment that iPhone users love. Needless to say, we know what a platform on which you can install anything and pay in any manner looks like: it’s called the PC, that bug-ridden, bloatware-filled, hackable free-for-all from which Apple users have been running screaming for decades now.

The beauty of price regulation is that you don’t need to redesign products to get what you want. Under price regulation, Apple would be able to continue to vet apps and manage payments, and thereby maintain the experience its customers love. All the company would need to do is lower its prices.

Epic isn’t the only organization out to exploit the antitrust laws for the sake of a bit of price regulation by least efficient means. Today’s Neo Brandeisians seem to share this goal.

That is the substance of an extraordinary piece by two affiliates of the Open Markets Institute that calls for using antitrust to smash big firms while allowing small firms to form price-fixing cartels. The idea is to redistribute wealth by reducing the prices big firms can charge and increasing the prices the little guy can charge.

That sounds great. But why not just regulate prices directly instead of smashing the country’s patrimony to get there?

Indeed, I’m mystified by the contempt in which this supposedly radical movement seems to hold price regulation. The movement is all for returning to antitrust’s New Deal heyday. But it has nary a word to spare for price regulation, which was a much bigger part of the New Deal and the mid-century economic settlement that followed it, during which fully 25% of the American economy by GDP was price-regulated.

One wonders whether the Neo Brandeisians share the Chicago School’s old concerns about “capture.” Something tells me they might.

Never mind that we learned long ago that the notion that administrative agencies are captured by those they regulate is too simple by half.

And no one has been able to explain to me why the judges who apply the antitrust laws are any less susceptible to capture than are government price regulators.

But I do know that most Americans seem unaware that their gas, electricity, and insurance rates are regulated by government agencies, which says a lot about whether price regulation is the supreme evil that antitrusters of all stripes make it out to be.

The Neo Brandeisians’ mania for competition is really just run-of-the-mill American anti-statism, with a bit of progressive polish. Consider another example of intemperate fervor for competition, one that differs from the Neo Brandeisians’ campaign against big tech only in lacking that campaign’s radical pretensions: the Hatch-Waxman Act.

Rather than follow the rest of the world in regulating prescription drug prices directly, the United States has chosen to use competition from generic drugs to drive down drug prices after patents expire. The Hatch-Waxman Act of 1984 was meant to kickstart the plan by streamlining the generic drug approval process.

It’s important to understand how ridiculous using competition to reduce off-patent drug prices really is. Far and away the greatest virtue of competition is that it leads to innovation: firms must make better products or lose out to competitors.

But when it comes to generic drugs, competition cannot lead to innovation, because generic drugs are by definition copies of old drugs!

If a generic drug company were to innovate in order to get ahead of its competitors, its product would need to go through full-blown clinical trials in order to receive FDA approval and would also likely receive patent protection, instantaneously removing it from the competitive generic drug market and driving up its price. So the innovation rationale for competition just doesn’t exist in the context of generics.

But we decided to promote competition anyway, purely for the purpose of reducing off-patent drug prices.

It kind of worked.

Prices for many off-patent drugs fell. But not for all of them. As the scandals involving Daraprim (of pharma-bro fame) and the EpiPen (the latter a device rather than a drug) show, it turns out that competition does not always come to the rescue once patents expire and regulatory hurdles are lowered.

More importantly, the cost of maintaining the system turned out to be immense. Firms responded by finding ways to prevent their drugs from going off-patent, leading to interminable patent and antitrust litigation. Just google “reverse payment patent settlements,” one of the mechanisms drug makers use to undermine competition, and behold the flood of ink spilt on this avoidable disaster.

Worse, we have learned in recent years that generic drug quality is actually pretty terrible, even dangerous: competition is killing the golden goose.

Not, in this case, because Hatch-Waxman led to the break-up of big firms, but because when competition is just about getting prices down, firms will skimp on production costs. Ruinously low prices are, incidentally, supposed to be another of the great problems with price regulation (the worry being that regulators will dictate prices too low to cover costs), but it turns out that competition is at least as good at undershooting.

So what we could have gotten from a rate regulator in four little words, “lower your damn prices,” Hatch-Waxman accomplished in a patchwork way, at the cost of interminable litigation and sketchy pills.

Which leads me to ask: can Congress please do something about Apple’s $100 billion cash pile? How about putting aside $25 billion (just to make sure Apple has a nice cushion against shocks), and then rebating the other $75 billion to everyone who has ever bought an Apple product, pro rata? You can be sure Apple knows who they are.

And while it’s at it, Congress can take a look at Microsoft and Alphabet, too.

For $100 billion is not actually the largest hoard in Silicon Valley.