Categories
Antitrust Meta Philoeconomica

Liu et al. and the Good and Bad in Economics

Liu et al.’s paper trying to connect market concentration to low interest rates reflects everything that’s good and bad about economics.

The Good Is the Story

The good is that the paper tells a plausible story about why the current era’s low interest rates might actually be the cause of the low productivity growth and increasing markups we are observing, as well as the increasing market concentration we might also be observing.

The story is that low interest rates encourage investment in innovation, but they also paradoxically discourage competition against dominant firms, because low rates allow dominant firms to invest more heavily in innovation in order to defend their dominant positions.

The result is fewer challenges to market dominance and therefore less investment in innovation and consequently lower productivity growth, increasing markups, and increasing market concentration.

Plausible does not mean believable, however.

The notion that corporate boards across America are deciding not to invest in innovation because they think dominant firms’ easy access to capital will allow them to win any innovation war is farfetched, to say the least.

“Gosh, it’s too bad rates are so low, otherwise we might have a chance to beat the iPhone,” said one Google Pixel executive to another never.

And it’s a bit too convenient that this monopoly-power-based explanation for two of the major stylized facts of the age–low interest rates and low productivity growth–would come along at just the moment when the news media is splashing antitrust across everyone’s screens for its own private purposes.

But plausibility is at least helpful to the understanding (as I will explain more below), and the gap between it and believability is not the bad part of economics on display in Liu et al.

The Bad Is the General Equilibrium

The bad part is the authors’ general equilibrium model.

They think they need the model to show that the discouragement competitors feel at the thought of dominant firms making large investments in innovation to thwart them outweighs the incentive that lower interest rates give competitors, along with dominant firms, to invest in innovation.

If not, then competitors might put aside their fears and invest anyway, and productivity growth would then increase anyway, and concentration would fall.

Trouble is, no general equilibrium model can answer this question, because general equilibrium models are not themselves even approximately plausible models of the real world, and economists have known this since the early 1970s.

Intellectually Bankrupt for a While Now

Once upon a time economists thought they could write down a model of the economy entire. The model they came up with was built around the concept of equilibrium, which basically meant that economists would hypothesize the kind of bargains that economic agents would be willing to strike with each other–most famously, that buyers and sellers will trade at a price at which supply equals demand–and then show how resources would be allocated were everyone in the economy in fact to trade according to the hypothesized bargaining principles.

As Frank Ackerman recounts in his aptly-titled assessment of general equilibrium, “Still Dead After All These Years: Interpreting the Failure of General Equilibrium Theory,” trouble came in the form of a 1972 proof, now known as the Sonnenschein-Mantel-Debreu Theorem, that there is never any guarantee that actual economic agents will bargain their way to the bargaining outcomes–the equilibria–that form the foundation of the model.

In order for buyers and sellers of a good to trade at a price that equalizes supply and demand, the quantity of the good bid by buyers must equal the quantity supplied at the bid price. If the price doesn’t start at the level that equalizes supply and demand–and there’s no reason to suppose it should–then the price must move up or down to get to equilibrium.

But every time the price moves, it affects the budgets of buyers and sellers, who must then adjust their bids across all the other markets in which they participate, in order to rebalance their budgets. But that in turn means prices in the other markets must change to rebalance supply and demand in those markets.

The proof showed that there is no guarantee that the adjustments won’t just cause prices to move in infinite circles, an increase here triggering a reduction there that triggers another reduction here that triggers an increase back there, and so on, forever.

Thus there is no reason to suppose that prices will ever get to the places that general equilibrium assumes that they will always reach, and so general equilibrium models describe economies that don’t exist.
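
To see the circling concretely, here is a minimal numerical sketch (my illustration, not anything drawn from Liu et al.) of Scarf’s classic three-good exchange economy, a standard textbook example in which the price-adjustment process orbits the equilibrium instead of converging to it. The preferences, endowments, starting prices, and step size are all illustrative assumptions.

```python
# A minimal sketch of Scarf's (1960) three-good exchange economy, a standard
# illustration that tatonnement price adjustment need not converge. Consumer i
# owns one unit of good i and has Leontief utility min(x_i, x_{i+1 mod 3}), so
# she always demands goods i and i+1 in equal amounts out of her wealth p_i.

def excess_demand(p):
    z = []
    for i in range(3):
        j = (i + 1) % 3                          # the other good consumer i wants
        k = (i - 1) % 3                          # the other consumer who wants good i
        own_demand = p[i] / (p[i] + p[j])        # consumer i's demand for good i
        neighbor_demand = p[k] / (p[k] + p[i])   # consumer k's demand for good i
        z.append(own_demand + neighbor_demand - 1.0)  # demand minus the 1 unit supplied
    return z

# The "Walrasian auctioneer": raise prices of goods in excess demand, cut the rest.
p = [1.2, 1.0, 0.8]                # start away from the equilibrium p = (1, 1, 1)
for _ in range(20000):
    z = excess_demand(p)
    p = [max(1e-9, pi + 0.01 * zi) for pi, zi in zip(p, z)]

print([round(pi, 3) for pi in p])  # prices circle the equilibrium rather than settle on it
```

Run it and the prices keep circling (1, 1, 1); the point is only to illustrate the kind of non-convergence the theorem allows, not to say anything about Liu et al.’s particular model.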

Liu et al.’s model describes an economy with concentrated markets, so it doesn’t just rely on the supply-equals-demand definition of equilibrium targeted by the Sonnenschein-Mantel-Debreu Theorem, a definition of equilibrium that seeks to model trade in competitive markets. But the flaw in general equilibrium models is actually even greater when the models make assumptions about bargaining in concentrated markets.

We can kind-of see why, in competitive markets, an economic agent would be happy to trade at a price that equalizes supply and demand, because if the agent holds out for a higher price, some other agent waiting in the wings will jump into the market and do the deal at the prevailing price.

But in concentrated markets, in which the number of firms is few, and there is no competitor waiting in the wings to do a deal that an economic agent rejects, holding out for a better price is always a realistic option. And so there’s never even the semblance of a guarantee that whatever price the particular equilibrium definition suggests should be the one at which trade takes place in the model would actually be the price upon which real world parties would agree. Buyer or seller might hold out for a better deal at a different price.

Indeed, in such game theoretic worlds, there is not even a guarantee that any deal at all will be done, much less a deal at the particular price dictated by the particular bargaining model arbitrarily favored by the model’s authors. Bob Cooter called this possibility the Hobbes Theorem–that in a world in which every agent holds out for the best possible deal, one that extracts the most value from others, no deals will ever get done and the economy will be laid to waste.

The bottom line is that all general equilibrium models, including Liu et al.’s, make unjustified assumptions about the prices at which goods trade, not to mention whether trade will take place at all.

But are they at least good as approximations of reality? The answer is no. There’s no reason to suppose that they get prices only a little wrong.

That makes Liu et al.’s attempt to use general equilibrium to prove things about the economy something of a farce. And their attempt to “calibrate” the model by plugging actual numbers from the economy into it in order to have it spit out numbers quantifying the effect of low interest rates on productivity, absurd.

If general equilibrium models are not accurate depictions of the economy, then using them to try to quantify actual economic effects is meaningless. And a reader who doesn’t know better might well come away from the paper with a false impression of the precision with which Liu et al. are able to make their economic arguments about the real world.

So Why Is It Still Used?

But if general equilibrium is a bad description of reality, why do economists still use it?

It Creates a Clear Pecking Order

Partly because solving general equilibrium models is hard, and success is clearly observable, so keeping general equilibrium models in the economic toolkit provides a way of deciding which economists should get ahead and be famous: namely, those who can work the models.

By contrast, lots of economists can tell plausible, even believable, stories about the world, and it can take decades to learn which story was actually right, making promotion and tenure decisions based on economic stories a more fraught, and necessarily political, undertaking.

Indeed, it is not without a certain amount of pride that Liu et al. write in their introduction that

[w]e bring a new methodology to this literature by analytically solving for the recursive value functions when the discount rate is small. This new technique enables us to provide sharp, analytical characterizations of the asymptotic equilibrium as discounting tends to zero, even as the ergodic state space becomes infinitely large. The technique should be applicable to other stochastic games of strategic interactions with a large state space and low discounting.

Ernest Liu et al., Low Interest Rates, Market Power, and Productivity Growth 63 (NBER Working Paper, Aug. 2020).

Part of the appeal of the paper to the authors is that they found a new way to solve the particular category of models they employ. The irony is that technical advances of this kind in general equilibrium economics are like the invention of the coaxial escapement for mechanical watches in 1976: a brilliant advance on a useless technology.

It’s an Article of Faith

But there’s another reason why use of general equilibrium persists: wishful thinking. I suspect that somewhere deep down economists who devote their lives to these models believe that an edifice so complex and all-encompassing must be useful, particularly since there are no other totalizing approaches to mathematically modeling the economy on offer.

Surely, think Liu et al., the fact that they can prove that in a general equilibrium model low interest rates drive up concentration and drive down productivity growth must at least marginally increase the likelihood that the same is actually true in the real world.

The sad truth is that, after Sonnenschein-Mantel-Debreu, they simply have no basis for believing that. It is purely a matter of faith.

Numeracy Is Charismatic

Finally, general equilibrium persists because working really complicated models makes economics into a priesthood. The effect is exactly the same as the effect that writing had on an ancient world in which literacy was rare.

In the ancient world, reading and writing were hard and mysterious things that most people couldn’t do, and so they commanded respect. (It’s not an accident that after the invention of writing each world religion chose to idolize a book.) Similarly, economics–and general equilibrium in particular–is something really hard that most literate people, indeed, even most highly-educated people and even most social scientists, cannot do.

And so it commands respect.

I have long savored the way the mathematical economist gives the literary humanist a dose of his own medicine. The readers and writers lorded it over the illiterate for so long, making the common man shut up because he couldn’t read the signs. It seems fitting that the mathematical economists should now lord their numeracy over the merely literate, telling the literate that they now should shut up, because they cannot read the signs.

It is no accident, I think, that one often hears economists go on about the importance of “numeracy,” as if to turn the knife a bit in the poet’s side. Numeracy is, in the end, the literacy of the literate. But schadenfreude shouldn’t stop us from recognizing that general equilibrium has no more purchase on reality than the Bhagavad Gita.

To be sure, economists’ own love affair with general equilibrium has been somewhat reduced since the Great Recession, which seems to have accelerated a move from theoretical work in economics (of which general equilibrium modeling is an important part) to empirical work.

But it’s important to note here that economists have in many ways been reconstituting the priesthood in their empirical work.

For economists do not conduct empirics the way you might expect them to, by going out and talking to people and learning about how businesses function. Instead, they prefer to analyze data sets for patterns, a mathematically-intensive task that is conveniently conducive to the sort of technical arms race that economists also pursue in general equilibrium theory.

If once the standard for admission to the cloister was fluency in the latest general equilibrium techniques, now it is fluency in the latest econometric techniques. These too overawe non-economists, leaving them to feel that they have nothing to contribute because they do not speak the language.

Back to the Good

But general equilibrium’s intellectual bankruptcy is not economics’ intellectual bankruptcy, and does not even mean that Liu et al.’s paper is without value.

For economic thinking can be an aid to thought when used properly. That value appears clearly in Liu et al.’s basic and plausible argument that low interest rates can lead to higher concentration and lower productivity growth. Few antitrust scholars have considered the connection between interest rates and market concentration, and the basic story Liu et al. tell gives antitrusters something to think about.

What makes Liu et al.’s story helpful, in contrast to the general equilibrium model they pursue later in the paper, is that it is about tendencies alone, rather than about attempting to reconcile all possible tendencies and fully characterize their net product, as general equilibrium tries to do.

All other branches of knowledge undertake such simple storytelling, and indeed limit themselves to it, and so one might say that economics is at its best when it is no more ambitious in its claims than any other part of knowledge.

When a medical doctor advises you to reduce the amount of trace arsenic in your diet, he makes a claim about tendencies, all else held equal. He does not claim to account for the possibility that reducing your arsenic intake will reduce your tolerance for arsenic and therefore leave you unprotected against an intentional poisoning attempt by a colleague.

If the doctor were to try to take all possible effects of a reduction in arsenic intake into account, he would fail to provide you with any useful knowledge, but he would succeed at mimicking a general equilibrium economist.

When Liu et al. move from the story they tell in their introduction to their general equilibrium model, they try to pin down the overall effect of interest rates on the economy, accounting for how every resulting price change in one market influences prices in all other markets. That is, they try in a sense to simulate an economy in a highly stylized way, like a doctor trying to balance the probability that trace arsenic intake will give you cancer against the probability that it will save you from a poisoning attempt. Of course they must fail.

When they are not deriding it as mere “intuition,” economists call the good economics to which I refer “partial equilibrium” economics, because it doesn’t seek to characterize equilibria in all markets, but instead focuses on tendencies. It is the kind of economics that serves as a staple for antitrust analysis.

What will a monopolist’s increase in price do to output? If demand is falling in price–people buy less as price rises–then obviously output will go down. And what will that mean for the value that consumers get from the product? It must fall, because they are paying more, so we can say that consumer welfare falls.
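
To put numbers on this simple tendency, here is a minimal sketch with an invented linear demand curve; the demand parameters and the before-and-after prices are assumptions chosen only for illustration.

```python
# A back-of-the-envelope illustration of the partial-equilibrium claim above,
# using an invented linear demand curve Q = a - b*P. Consumer surplus is the
# triangle under the demand curve and above the price: CS = 0.5 * Q * (a/b - P).

a, b = 100.0, 2.0                      # assumed demand parameters: Q = 100 - 2P

def quantity(price):
    return max(0.0, a - b * price)

def consumer_surplus(price):
    q = quantity(price)
    return 0.5 * q * (a / b - price)   # a/b is the choke price where demand hits zero

price_before, price_after = 20.0, 30.0   # assumed prices before and after the increase

for label, p in (("before", price_before), ("after", price_after)):
    print(f"{label}: price {p}, output {quantity(p)}, consumer surplus {consumer_surplus(p)}")

# Output falls from 60 to 40 and consumer surplus from 900 to 400: both move in
# the direction the simple tendency story predicts, all else held equal.
```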

Of course, the higher prices might cause consumers to purchase more of another product, and economies of scale in production of that other product might actually cause its price to fall, and the result might then be that consumer welfare is not reduced after all.

But trying to incorporate such knock-on effects abstractly into our thought only serves to reduce our understanding, burying it under a pile of what-ifs, just as concerns about poisoning attempts make it impossible to think clearly about the health effects of drinking contaminated water.

If the knock-on effects predominate, then we must learn that the hard way, by acting first on our analysis of tendencies. And even if we do learn that the knock-on effects are important, we will not respond by trying to take all effects into account general-equilibrium style–for that would gain us nothing but difficulty–but instead we will respond by flipping our emphasis, and taking the knock-on effects to be the principal effects. We will assume that the point of ingesting arsenic is to deter poisoning, and forget about the original set of tendencies that once concerned us, namely, the health benefits of avoiding arsenic.

Our human understanding can do no more. But faith is not really about understanding.

(Could it be that general equilibrium models are themselves just about identifying tendencies, showing, perhaps, that a particular set of tendencies persists even when a whole bunch of counter-effects are thrown at it? In principle, yes. Which is why very small general equilibrium models, like the two-good exchange model known as the Edgeworth Box, can be useful aids to thought. But the more goods you add in, and the closer the model comes to an attempt at simulating an economy–the more powerfully it seduces scholars into “calibrating” it with data and trying to measure the model as if it were the economy–the less likely it is that the model is aiding thought as opposed to substituting for it.)

Categories
Antitrust Regulation

“The Best Are Easily 10 Times Better Than Average,” But Can They Do Anything Else?

Netflix CEO Reed Hastings is celebrating the principle that great software programmers are orders of magnitude more productive than average programmers. The implication is that sky-high salaries for these rock stars are worth it.

Now, it may very well be the case that the best programmers are orders of magnitude better than average programmers. I’ve seen a similar thing on display during examinations for gifted students: inevitably one student finishes the exam in half the time and walks out with a perfect score, while the rest of the gifted struggle on.

Just how many orders of magnitude smarter is that student, relative not just to the other gifted students in the room, but to the average student who is not in the room?

But while the rock-star principle may justify the high willingness of Silicon Valley firms to pay for talent — the more value an employee brings to a firm the more the firm can afford to pay the employee and still end up ahead — that doesn’t mean that as an economic matter a firm must pay rock-star employees higher salaries.

Far from it.

Economic efficiency requires that great programmers be put to use programming, otherwise society loses the benefit of their talents. But the minimum salary that, as an economic matter, a tech firm must pay a rock-star programmer to induce the programmer to program is just a penny more than what the programmer would earn doing the programmer’s next-most productive activity.

If the programmer isn’t good at anything but programming, that number might be $15.01 — the $15 minimum wage Amazon pays its fulfillment center workers plus a penny — or even something lower, as the programmers I know would have a tough time sprinting around a warehouse all day.

A programmer might be worth $100 million as a programmer, for example, because the programmer is capable of delivering that much value to software. But to make sure this person actually delivers that value, the market does not need actually to pay the programmer $100 million, or anything near to that amount. All the market needs to pay the programmer is a penny more than what the programmer would earn by not programming.

And if rock-star programmers tend only to be rock stars at programming, as I suspect is the case, that number might be pretty small, indeed, on the order of what average programmers make — if not $15 an hour, which is a bit of an exaggeration — because the rock-star programmer is likely to be average at programming-adjacent pursuits.

If the most the programmer would make teaching math, playing competitive chess, or just programming for non-tech companies that will never earn the profits needed to pay rock-star salaries, no matter how talented their employees, is a hundred thousand a year, then that plus a penny is all that economics requires that the programmer be paid for doing programming. Not $100 million.
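
Here is the same arithmetic as a minimal sketch, with the figures from the example above treated as assumptions rather than facts about any actual programmer.

```python
# Illustrative numbers only: the value a rock-star programmer adds at a tech
# firm versus the pay available in the programmer's next-best alternative.
# Efficiency only requires that the programmer end up programming, and the
# minimum pay that guarantees that is the alternative wage plus a penny.

value_added_at_tech_firm = 100_000_000   # assumed value delivered to the firm's software
next_best_alternative_pay = 100_000      # assumed pay teaching math, playing chess, etc.

minimum_efficient_pay = next_best_alternative_pay + 0.01
bargaining_surplus = value_added_at_tech_firm - minimum_efficient_pay

print(f"minimum pay required by efficiency: ${minimum_efficient_pay:,.2f}")
print(f"surplus whose division is purely distributional: ${bargaining_surplus:,.2f}")
# Everything above the alternative wage is up for grabs in bargaining; bidding
# wars among tech firms decide who captures it, not whether the work gets done.
```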

So why are rock-star programmers earning the big bucks in Silicon Valley? Because tech firms compete for them, bidding up the price of their services.

Tech firms know this, of course, and once tried to put a lid on the bidding war, by entering into no-poach agreements pursuant to which they promised not to try to lure away each other’s programmers by offering them more money.

There is no reason to think that these no-poach agreements were inefficient. Unless you believe that programmers can contribute more to some tech firms than to others, in which case the bidding wars that drive rock-star compensation sky high are allocating programmers to their most productive uses. But that seems unlikely: does making Google better contribute more to America than making Amazon better?

(The agreements also could not have created any deadweight loss, because perfect price discrimination is the norm in hiring programming talent: firms negotiate compensation individually with each programmer.)

All the no-poach agreements did was to change the distribution of wealth: limiting the share of a firm’s revenues that programmers can take for themselves.

Indeed, the no-poach agreements probably contributed a bit to the deconcentration of wealth.

A dollar of revenue paid out to a smart programmer goes in full to the programmer, whereas that same dollar, if not paid to the programmer but instead paid out as profits to shareholders, is divided multiple ways among the firm’s owners. Competitive bidding for rock-star programmer salaries concentrates wealth, and the no-poach agreements spread it — admittedly to shareholders, who tend to be wealthy, but at least the dollar is spread.

The antitrust laws intervened just in time, however, to dissolve these agreements and punish Silicon Valley firms for doing their part to slow the increase in the wealth gap in America.

Today’s antitrust movement has argued that antitrust should break up the tech giants in part to prevent them from artificially depressing the wages they pay the little guy. I’ve argued that would be a mistake, because breakup could damage the companies, reducing the value they deliver to society and harming everyone. Regulating wages directly is a better idea.

But you don’t just make compensation fair by raising low wages. You also have to reduce excessive wages. One way to start is just by allowing the tech firms to conspire against their rock stars.

And once tech firms have finished conspiring against their overpaid programmers, they can start conspiring against another group of employees that is even more grossly overpaid per dollar of value added: their CEOs.

Well, that we might have to do for them.

Categories
Antitrust Monopolization

The Decline in Monopolization Cases in One (More) Graph

[Figure: DOJ, FTC, and Private Cases Filed under Section 2 of the Sherman Act. Image license: CC BY-SA 4.0.]

Observations:

  • The decline in cases brought by the Department of Justice since the 1970s is consistent with the story of Chicago School influence over antitrust. What is perhaps less well known, but clearly reflected in the data, is that the Chicago Revolution took place in the Ford, and especially the Carter, Administrations, not, as is sometimes supposed, in the Reagan Administration, although Reagan supplied the coup de grace.

    Indeed, we have only five monopolization cases filed by DOJ over the course of the entire Carter Administration, as compared with 58 filed during the part of the Nixon Administration and the Ford Administration covered by this data series. This is consistent with the broader influence of the Chicago School over regulation of business. It was also under Ford and Carter, not Reagan, that deregulation got underway, with partial deregulation of railroads (1976), near-complete deregulation of airlines (1978), and partial deregulation of trucking (1980) (more here).

    The timing suggests that the Chicago School’s victories were intellectual, rather than merely partisan. As Przemyslaw Palka has pointed out to me, Milton Friedman consciously pursued a strategy of intellectual, rather than political warfare, because he understood that victory on the intellectual plane is more complete and enduring (a nice discussion of this may be found here on pages 218-221). As these numbers suggest, Chicago prevailed by converting its adversaries, so that even when its adversaries were nominally in political power under Carter, they implemented Chicago’s own agenda.
  • To the extent that the early part of the FTC data series is reliable (more on that below), the story in the FTC case numbers is the six monopolization cases brought over the past five years, following a twenty-year period during which the FTC brought only three cases. With the exception of Google, which has just been filed, there has been no corresponding uptick in monopolization cases filed by the Department of Justice.
  • The private litigation data show that in some years (1998 and 2013), private litigation across the entire United States has produced fewer monopolization cases (against unique defendants) than did a single federal enforcer–the DOJ–in 1971. The private litigation numbers for 1997 to 2020 also show that, on average, about twenty defendants face new monopolization actions each year when federal enforcers are filing near-zero complaints. To the extent that the numbers for 1974 to 1983 are reliable (of which more below), they suggest that private cases have also declined markedly since the 1970s, although there was a lag of several years between the two effects, perhaps due to the tendency of private plaintiffs to file follow-on cases to government cases.
  • Altogether, one is left with the impression that corporate America has been awfully well-behaved since about 1975.

Notes on the Data:

  • The cases brought by the Department of Justice (DOJ) come from the Antitrust Division’s own workload statistics, so I assume the numbers are accurate. For DOJ cases investigated, as well as filed, see here.
  • The cases filed by private plaintiffs come from two sources. The first, for the years 1997 to 2020, is a search for Section 2 complaints in federal court dockets via Lexis CourtLink. I must thank Beau Steenken, Instructional Services Librarian & Associate Professor of Legal Research at University of Kentucky Rosenberg College of Law, for figuring out how to search CourtLink for Section 2 cases (no easy task, it turns out).

    These are only cases for which the plaintiff, in filing the complaint, indicated the cause of action as Section 2 of the Sherman Act in the court’s cover sheet. Apart from deleting a few cases in which DOJ was the plaintiff, and a few cases in which the case was filed by mistake (e.g., the case name reads: “error”), I did not examine these cases at all, other than to note that many of the defendants look plausible (e.g., Microsoft comes up a lot in the late 1990s or early 2000s).

    Finally, I counted only unique defendants in any given year. So for example, if there were ten cases filed against Microsoft in 2000, I counted that as only one case. The reason is that multiple consumers or competitors might be harmed by a single piece of anticompetitive conduct undertaken by a monopolist, and so one would expect multiple plaintiffs to sue the monopolist based on the same conduct. For those interested in using case counts to measure enforcement, all of those cases signal the same thing, that a particular anticompetitive practice has been challenged, and so all of the cases together really only represent a single instance of enforcement. I did not, however, check each complaint to make sure that the alleged conduct was the same across all complaints. I just assumed that multiple complaints filed in a given year against a single defendant relate to the same conduct. (Note that I did not count unique defendants across plaintiff types: the Justice Department case against Microsoft was counted toward DOJ cases, and any private cases filed against Microsoft in the same year count as an additional single case in the private case count.) A minimal sketch of this counting rule appears after these notes.

    According to CourtLink, some federal courts adopted online filing later than others, and CourtLink only has electronic dockets. I chose to use 1997 as the start year for this count, because by that year almost all jurisdictions were online and so presumably their dockets are part of the CourtLink database. According to CourtLink, several jurisdictions had not yet moved online by that year, however, and so the counts may be slightly skewed low in the first few years after 1997 because they miss cases filed in the jurisdictions that were still offline during that period. The jurisdictions that went online after January 1, 1997, and the year in which they went online, are District of New Mexico (1998), District of Nevada (1999), and District of Alaska (2004).

    The source of the data for the years 1974 to 1983 is Table 6 in this article. That table gives the yearly percentage of refusal to deal and predatory pricing cases in a sample of 2,357 cases from five courts (the Southern District of New York, Northern District of Illinois, Northern District of California, Western District of Missouri, and Northern District of Georgia), as well as the total number of private antitrust cases filed per year. Because I suspect that my CourtLink data represents “pure” Section 2 cases–cases in which the Section 2 claim is the principal claim in the case–I adjusted these percentages using information from Table 1 in the paper about the share of those percentages that represent primary claims. Because the total yearly private cases given in the article did not appear to be adjusted for multiple cases filed against the same defendant in a given year, as I adjusted the CourtLink data, I further reduced the results in the same proportion as my CourtLink results were reduced when I eliminated multiple cases against the same defendant, a reduction of about 40%.
  • I collected the FTC data by searching for cases labeled “Single-Firm Conduct” in the FTC’s “cases and proceedings” database. The cases and proceedings database goes back to 1996, and so I labeled years for which there were no hits as years of zero cases going back to 1996. However, the FTC website does caution that some older cases are searchable only by name and year, and presumably not by case type, so it is possible that this data fails to count cases from early in the period (e.g., late 1990s). I also paged through the “Annual Antitrust Enforcement Activities Reports” issued by the FTC between 1996 and 2008 and found a couple of cases not returned by the search of the cases and proceedings database. Finally, I included the FTC’s case against Intel, filed in 2009. I counted both administrative complaints filed in the FTC’s own internal adjudication system and complaints filed by the FTC in federal court. The FTC cases are nominally brought under Section 5 of the FTC Act, through which the FTC enforces Section 2 of the Sherman Act.
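
Here is a minimal sketch of the unique-defendant counting rule described in the notes above, using invented sample records rather than the actual CourtLink export.

```python
# A minimal sketch of the counting rule described above: multiple Section 2
# complaints filed against the same defendant in the same year are collapsed
# into a single instance of private enforcement.

from collections import defaultdict

# Invented sample records in (year, defendant) form; the real data come from
# a Lexis CourtLink search for complaints pleading Section 2 on the cover sheet.
filings = [
    (2000, "Microsoft"), (2000, "Microsoft"), (2000, "Microsoft"),
    (2000, "SomeOtherCo"), (2001, "Microsoft"),
]

unique_defendants_per_year = defaultdict(set)
for year, defendant in filings:
    unique_defendants_per_year[year].add(defendant)

counts = {year: len(names) for year, names in sorted(unique_defendants_per_year.items())}
print(counts)  # {2000: 2, 2001: 1} -- three Microsoft complaints in 2000 count once
```
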
Categories
Antitrust Monopolization

The Smallness of the Bigness Problem

The tendency to ascribe the problem of inequality that ails us to the bigness of firms is the great embarrassment of contemporary American progressivism. The notion that the solution to poverty is cartels for small business and the hammer for big business is so pre-modern, so mercantilist, that one wonders what poverty of intellect could have led American progressives into it.

Indeed, the contemporary progressive’s shame is all the greater because the original American progressives a century ago, whose name the contemporary progressive so freely appropriates, did not make the same mistake. The original progressives were more modern than progressives today, perhaps because the pre-modern age was not quite so distant from them. Robert Hale, the greatest lawyer-economist of the period, wrote that

[e]ven the classical economists realized . . . competition would not keep the price at a level with the cost of all the output, but would result in a price equal to the cost of the marginal portion of the output. Those who produce at lower costs because they own superior [capital] would reap a differential advantage which Ricardo, in his well-known analysis, designated “economic rent.”

Robert L. Hale, Freedom Through Law: Public Control of Private Governing Power 25-26 (1952).

I suspect that this is absolute Greek to the contemporary progressive. I will kindly explain it below.

But first, it should be noted that the American progressive’s failure to appreciate the smallness of the bigness problem is not shared by Piketty, whom American progressives celebrate without actually reading:

Yet pure and perfect competition cannot alter the inequality r > g, which is not the consequence of any market “imperfection.”

Thomas Piketty, Capital in the Twenty-First Century 573 (Arthur Goldhammer trans., 2017). (Italics mine.)

What does Piketty mean here?

He means what Hale meant, which is that the heart of inequality does not come from monopolists charging supracompetitive prices, however obnoxious we may feel that to be, but rather from the fact that the rich own assets that are more productive than the assets owned by the poor, and so they profit more than the poor even at efficient, competitive prices.

In other words, the rich get richer because their costs are lower and their costs are lower because they own all the best stuff.

No matter how competitive the market, prices will never be driven down to the lower costs faced by the rich, because other people own less-productive assets than do the rich and competition drives prices down to the level of the higher costs associated with producing things with less-productive assets.

(Why can’t price just keep going down, and simply drive the more expensive producers out of the market to the end of dissipating the profits of the less expensive producers? Because there is always a less expensive producer! Price can therefore never dissipate the profits of them all, and anyway demand puts a floor on price: consumers are always bidding prices up until supply satisfies demand.)
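
Here is a minimal numerical sketch of Hale’s point, with invented costs and demand; the numbers are mine, not Hale’s or Piketty’s.

```python
# Invented example: five producers, one unit of capacity each, with different
# unit costs because they own assets of different quality. Buyers want 4 units.
# Competition drives price down only to the cost of the marginal (4th) producer,
# so the inframarginal producers earn "economic rent" with no monopoly in sight.

costs = [10, 20, 30, 40, 50]      # assumed unit costs, best assets first
demand = 4                        # assumed quantity buyers demand

producers_needed = sorted(costs)[:demand]
competitive_price = producers_needed[-1]          # marginal producer just breaks even

rents = [competitive_price - c for c in producers_needed]
print("competitive price:", competitive_price)    # 40
print("economic rents:", rents)                   # [30, 20, 10, 0]
# The rent accrues to whoever owns the superior assets, which is Hale's and
# Piketty's point: inequality survives perfectly competitive pricing.
```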

Graphically, American progressives have been sweating the “monopoly profit” box without seeming to realize that it’s tiny compared to what remains once you eliminate it, which is the “economic rent” box.

Piketty, the original American progressives, and kindergartners know the difference between big and small. Why don’t we?

Categories
Antitrust

Conspiracy or Incompetence?

Let’s get this straight. The New York Times criticizes The Epoch Times today for running infomercials attacking the Chinese Communist Party’s handling of the coronavirus pandemic while making “no mention of The Epoch Times’s ties to Falun Gong, or its two-decade-long campaign against Chinese communism.”

But last week the Times ran a long piece, titled “Big Tech’s Professional Opponents Strike at Google,” that purported to reveal to readers the forces behind the Google antitrust suit while making no mention of the campaign of the News Media Alliance, of which the Times is a member, for antitrust action against Google, or the threat posed by Google to the Times’ advertising business.

Since the Times seems to think poor little Epoch Times should be disclosing its death struggle with the CCP to readers, I would like to see the Times start disclosing, in each article it writes about Big Tech, its death struggle with those companies over advertising revenues. The paper can also slap a correction to the same effect on each of the hundreds of pieces it has published over the past three years trashing its tech adversaries.

Ben Smith, who knows better, contributed to the Epoch Times piece. Let’s see him show some courage in his next column about media and tech.

So which is it? Maybe both.

Categories
Antitrust Regulation

Antitrust as Price Regulation by Least Efficient Means

Any company that has $100 billion in cash and marketable securities on its books, as Apple does, is charging excessive prices for its products, in the sense of prices higher than necessary to make everyone at Apple ready, willing, and able to continue to do the excellent job that they are doing.

Is that a problem? Unfortunately, yes, for any society that’s supposed to be a thing of the people. It means that Apple is bilking the public: taking more from the people for their iPhones and Macbooks than is strictly necessary to give Apple an incentive to produce iPhones and Macbooks.

You don’t need the money to reward investors. Otherwise you would have paid the money out already.

You don’t need the money to build more factories. Otherwise you would have built the factories already.

You don’t need the money to pay Tim Cook. Otherwise you would have upped his compensation already.

And with an AA+ credit rating, you don’t need the money for an emergency either, since it would cost you almost nothing to borrow cash in a pinch.

You just don’t need those billions, which is why they are what economists call “rents”: earnings in excess of what would be necessary to make the company, and all those who contribute to its success, ready, willing, and able to carry on.

Should government do something about these rents?

Yes. But not with the antitrust laws. Because Apple’s rents are not monopoly rents. Those are the excessive returns that come from making your products stand out by trashing your competitors’ products, rather than improving your own. Antitrust prohibits that sort of behavior.

But does anyone think Apple achieved the ability to charge $1,200 for an iPhone by making Samsung products worse?

Of course not.

Which is why there is no antitrust case against Apple.

Instead, Apple’s rents are Schumpeterian: excessive returns that come from making your products stand out by improving them, rather than by trashing the products of competitors. Antitrust does not prohibit such conduct.

Nor should it, because antitrust is a slayer, breaking up the firms that run afoul of its rules, saddling them with behavioral injunctions, and taxing them with trebled damages.

Those remedies make sense when the target is a firm that has gotten ahead by trashing competitors. That sort of firm doesn’t have a better product to offer, so smashing it is no great loss to society.

That’s not true for firms like Apple that have gotten ahead by being better. Smash Apple and you might well get Apple’s prices down. But you might also end up with poorer-quality products.

Why is it that Samsung keeps churning out gimmicky phones that are just a bit too ahead of their time to work properly, whereas, iteration after iteration, Apple phones continue to please?

Who knows?

By the same token, who knows whether Apple divided two ways, three ways or four ways will still have the same old magic? Organizations are mysterious things and we should break them only when they are already broken.

That doesn’t mean that something shouldn’t be done about Apple’s prices. As is so often the case, the right approach is the most direct: tell Apple to lower them.

There’s nothing novel about doing that. It’s the way America often has dealt with high-tech firms that get carried away with their own success. It happened with the landline telephone: the states regulated telephone rates for a century, and many retain the statutory authority to do so today. No vast cultural leap would be required to regulate the prices of iPhones or other Apple products.

Regulating prices runs much less of a risk of killing the golden goose, because it’s a scalpel to antitrust’s hammer, ordering prices down without smashing the firms that charge them.

But are prices really all that Apple’s antitrust adversaries care about? I think so.

The antitrust complaint brought by Fortnite-videogame-maker Epic is admirably transparent on this score, inveighing against what it calls Apple’s “30% tax” on paid App Store apps.

True, Epic spends a lot of time arguing that Apple should stop vetting the apps that can be installed on iPhones and should also stop requiring apps to accept payments via Apple’s own systems.

But it’s hard to believe Epic really cares whether consumers can run any app they want on the iPhone, or whether consumers can make in-app purchases with Paypal instead of Apple Pay.

The real reason Epic targets app vetting and payment systems lockdown is more likely because these two Apple policies prevent Epic from doing an end run around Apple’s 30% fee by connecting directly with users.

So to use antitrust to attack Apple’s prices, Epic ends up trying to thrust a stake through the streamlined, curated environment that iPhone users love. Needless to say, we know what a platform on which you can install anything and pay in any manner looks like: it’s called the PC, that bug-ridden, bloatware-filled, hackable free-for-all from which Apple users have been running screaming for decades now.

The beauty of price regulation is that you don’t need to redesign products to get what you want. Under price regulation, Apple would be able to continue to vet apps and manage payments, and thereby maintain the experience its customers love. All the company would need to do is lower its prices.

Epic isn’t the only organization out to exploit the antitrust laws for the sake of a bit of price regulation by least efficient means. Today’s Neo Brandeisians seem to share this goal.

That is the substance of an extraordinary piece by two affiliates of the Open Markets Institute that calls for using antitrust to smash big firms, but allowing small firms to form price-fixing cartels. The idea is to redistribute wealth by reducing the prices big firms can charge and increasing the prices that the little guy can charge.

That sounds great. But why not just regulate prices directly instead of smashing the country’s patrimony to get there?

Indeed, I’m mystified by the contempt in which this supposedly-radical movement seems to hold price regulation. The movement is all for returning to antitrust’s New Deal heyday. But it has nary a word to spare for price regulation, which was a much bigger part of the New Deal and the mid-century economic settlement that followed it, during which fully 25% of the American economy by GDP was price regulated.

One wonders whether the Neo Brandeisians share the Chicago School’s old concerns about “capture.” Something tells me they might.

Nevermind that we learned long ago that the notion that administrative agencies are captured by those they regulate is too simple by half.

And no one has been able to explain to me why the judges who apply the antitrust laws are any less susceptible to capture than are government price regulators.

But I do know that most Americans don’t seem to know that their gas, electricity, and insurance rates are regulated by government agencies, which says a lot about whether price regulation is the supreme evil that antitrusters of all stripes make it out to be.

The Neo Brandeisians’ mania for competition is really just run-of-the-mill American anti-statism, with a bit of progressive polish. Consider another example of intemperate fervor for competition, one that differs from the Neo Brandeisians’ campaign against big tech only in lacking that campaign’s radical pretensions: The Hatch-Waxman Act.

Rather than follow the rest of the world in regulating prescription drug prices directly, the United States has chosen to use competition from generic drugs to drive down drug prices after patents expire. The Hatch-Waxman Act of 1984 was meant to kickstart the plan by streamlining the generic drug approval process.

It’s important to understand how ridiculous using competition to reduce off-patent drug prices really is. Far and away the greatest virtue of competition is that it leads to innovation: firms must make better products or lose out to competitors.

But when it comes to generic drugs, competition cannot lead to innovation, because generic drugs are by definition copies of old drugs!

If a generic drug company were to innovate in order to get ahead of its competitors, its product would need to go through full-blown clinical trials in order to receive FDA approval and would also likely receive patent protection, instantaneously removing it from the competitive generic drug market and driving up its price. So the innovation rationale for competition just doesn’t exist in the context of generics.

But we decided to promote competition anyway, purely for the purpose of reducing off-patent drug prices.

It kind of worked.

Prices for many off-patent drugs fell. But not for all off-patent drugs. As scandals involving Daraprim (of pharma bro fame) and the Epipen show (the latter in the device context), it turned out that competition does not always come to the rescue once patents expire and regulatory hurdles are lowered.

More importantly, the cost of maintaining the system turned out to be immense. Firms responded by finding ways to prevent their drugs from going off-patent, leading to interminable patent and antitrust litigation. Just google “reverse payment patent settlements”–one of the mechanisms used by drug makers to undermine competition–and behold the flood of ink spilt on this avoidable disaster.

Worse, we have learned in recent years that generic drug quality is actually pretty terrible, even dangerous: competition is killing the golden goose.

Not, in this case, because Hatch-Waxman led to the break-up of big firms, but because when competition is just about getting prices down, firms will skimp on production costs. Ruinously low prices are, incidentally, supposed to be another of the great problems with price regulation–that regulators will dictate prices that are too low to cover costs–but it turns out that competition is at least as good at undershooting.

So what we could have gotten from a rate regulator in four little words–“lower your damn prices”–Hatch-Waxman accomplished in a patchwork way, at the cost of interminable litigation and sketchy pills.

Which leads me to ask: can Congress please do something about Apple’s $100 billion cash pile? How about putting aside $25 billion (just to make sure Apple has a nice cushion against shocks), and then rebating the other $75 billion to everyone who has ever bought an Apple product, pro rata? You can be sure Apple knows who they are.

And while Congress is at it, they can take a look at Microsoft and Alphabet, too.

For $100 billion is not actually the largest hoard in Silicon Valley.

Categories
Antitrust Monopolization

The Original and Purest Form of Anticompetitive Conduct

Still in those early days trade depended not upon the quality of the goods but upon the military force to control the markets. The Dutch consequently valued the island chiefly on account of its strategical position. From Formosa the Spanish commerce between Manila and China, and the Portuguese commerce between Macau and Japan could by constant attacks be made so precarious that much of it would be thrown into the hands of the Dutch, while the latter’s dealings with China and Japan would be subject to no interruptions.

James W. Davidson, The Island of Formosa, Past and Present (1903).

Here Davidson nicely contrasts monopolies based on product quality with monopolies based on force, capitalism with mercantilism. I do not think it is too much to say that democracy, or at least a genuine republicanism, even if autocratic in administration, is the principal bulwark between the two, and that antitrust, when used properly, is meant to round off any remaining mercantilist edges.

When used improperly, antitrust is of course a gunboat all of its own.

Categories
Antitrust

Forbidden Fruit

As if to remind those who might still be confused about what the antitrust movement against the tech giants is really about, newspapers are now making common cause with app developers to force Apple to delay new privacy protections that would allow app users to opt out of targeted advertising.

That’s right, the same newspapers that have been savaging the tech giants for years as evil privacy foes are fighting to stop Apple from making it harder for app developers to exploit your data.

Why? Because newspapers make money from advertising, of course. They’re the app developers who want to continue to spy on you.

In this light, it’s hard not to see the calls for antitrust action that newspapers have been slinging at the tech giants as coming from the emptiness of their pocketbooks rather than the goodness of their hearts. It is the hackneyed tale of yesterday’s technology trying to use politics–and the antitrust laws–instead of excellence to survive in the market.

Readers think newspapers are in the news business; actually, their business is selling ads. But Google and Facebook do that better, because, as the Times recently noted in relation to Google parent Alphabet:

consumers interact with the company nearly every time they search for information, watch a video, hail a ride, order delivery in an app or see an ad online. Alphabet then improves its products based on the information it gleans from every user interaction, making its technology even more dominant.

Katie Benner & Cecilia Kang, Justice Dept. Plans to File Antitrust Charges Against Google in Coming Week, N.Y. Times, Sept. 3, 2020.

The result has been a catastrophic decline in newspaper revenues.

Rather than do what they should have done all along, which is cut the cord with advertising and build their business around a more wholesome revenue stream–one that doesn’t involve trying to manipulate their readers into buying products they don’t really want to buy–or seek public funding à la the BBC for what is after all a sacred public function, the media industry has appeared to engage in a campaign to scare the tech giants into giving media a share of their advertising revenues.

The “tech-lash” of the past decade? That looks an awful lot like a message from media to big tech: pay up, or we’ll wreck your reputation. Wasn’t that driven instead by concerns about privacy? The media’s opposition to Apple’s privacy safeguards today gives us the answer: not so much.

The drumbeat of articles about the courageous antitrust scholars daring to take on big tech (few of whom actually are antitrust scholars)? That looks an awful lot like a message from media to big tech too: pay up, or we’ll get the law to break you into pieces. Wasn’t that driven instead by concern that there’s too much concentration in America? The News Media Alliance’s multi-year campaign for an antitrust exemption that would allow newspapers to cartelize gives us the same answer: not so much.

The House antitrust investigation into big tech, led by a congressman who has been doing the bidding of the News Media Alliance? That too looks an awful lot like a message.

Oh, and before I forget, that fawning story in the Times about Tim Sweeney, CEO of Epic games, the scrappy maker of Fortnite that is leading an antitrust “crusade” against Apple in search of lower fees? Funny how it doesn’t mention that much of the media industry, including the Times, is publicly supporting Epic, and demanding lower fees for their apps too.

If the pen is mightier than the sword, it is perforce mightier than the microchip. The tech giants have already started to open their pocketbooks. It will be interesting to see how badly they cave.

Of course, there are limits to the amount of sympathy one can feel for Google or Facebook. Those companies may be better at what they do than newspapers, but they are better at doing something antisocial: the spying and manipulation that constitute modern commercial advertising. The newspapers’ fight to get cut in on the spoils is ugly, but one set of rogues deserves another.

Apple is different. The company makes most of its money selling products that genuinely make life easier. And as the company has not tired of reminding us, the fact that its business is not mainly advertising means that its interests are more closely aligned with those of consumers when it comes to privacy than are the interests of any other player in this fight.

Which is why the newspapers’ attacks on Apple are a new low.

For a time, not competing with newspapers for advertising seemed to buy Apple some safety from the media’s antitrust crusade. But when the antitrust shakedown seems to be working against companies that wiped out your old-economy advertising business, why not extend it to one that wants to put the screws on your new-economy advertising business, and see if you can extract lower app store fees while you are at it?

Today’s antitrust movement against big tech may be many things to many people, but one thing it’s not is a progressive movement, even if some of its proponents delight in wrapping themselves in the progressive banner.

That should have been obvious to anyone watching the movement attract Trump Administration backing in assaulting what are probably the most progressive corporations ever. (It’s not normal for corporate employees to block management from accepting lucrative military contracts, and then not get fired.)

But at least now it is completely clear. For “when they tasted of the apple their shame was manifest.”

Categories
Antitrust

You Furnish the Briefs

No court has ever, in 130 years of antitrust practice in the United States, taken the position that dominance in and of itself, absent bad conduct, is illegal. But if you were a reader of The New York Times, you could be forgiven for thinking that as a matter of American law big is bad:

Alphabet was an obvious antitrust target. Through YouTube, Google search, Google Maps and a suite of online advertising products, consumers interact with the company nearly every time they search for information, watch a video, hail a ride, order delivery in an app or see an ad online. Alphabet then improves its products based on the information it gleans from every user interaction, making its technology even more dominant.

Katie Benner & Cecilia Kang, Justice Dept. Plans to File Antitrust Charges Against Google in Coming Week, N.Y. Times, Sept. 3, 2020.

Google is an obvious target for the Times, of course, because Google has eaten its lunch in the competition for advertising dollars. But it’s not an obvious target for anyone who knows something about antitrust, which isn’t in the business of smashing firms that win by being better.

But The New York Journal got its war by whipping Americans into a frenzy against an enemy of its choice. Why shouldn’t The New York Times get its antitrust case against Google?

Unlike in 1898, however, the only Americans who have actually been whipped into a frenzy are the elites: surveys show that Americans still love Google and the other tech giants, at least when they’re not being asked leading questions like: should the government “break up tech companies if they control too much of the economy?” (Actually, the best thing about the surveys is that the tech company Americans like least is the one that elites probably like most: Twitter.)

I suppose that it’s only the elites who matter, however, even those who might pretend not to read the Times. AG Barr is so intent on rushing out a case against Google, presumably because he’s been blinkered into thinking it will clinch a win in November for President Trump, that his line attorneys are in open revolt:

Justice Department officials told lawyers involved in the antitrust inquiry into Alphabet . . . to wrap up their work by the end of September[.] Most of the 40-odd lawyers who had been working on the investigation opposed the deadline. Some said they would not sign the complaint, and several of them left the case this summer.

Katie Benner & Cecilia Kang, Justice Dept. Plans to File Antitrust Charges Against Google in Coming Week, N.Y. Times, Sept. 3, 2020.

As PBS tells it: “Remington, who had been sent to Cuba to cover the insurrection, cabled to Hearst that there was no war to cover.” Hearst replied: “You furnish the pictures. I’ll furnish the war.”

Categories
Antitrust Monopolization Regulation

Antitrust’s Robocall Paradox

Today’s antitrust movement loves to point to the breakup of AT&T as an example of what antitrust enforcers can do when they put their minds to it. The only problem is that the breakup of AT&T was a disaster, and The Wall Street Journal has kindly provided a new example of that today: robocalls.

The breakup of AT&T was a politically-motivated hit, a Nixon-originated project that became the only monopolization case carried through to a conclusion by a Reagan Justice Department that otherwise wanted nothing else to do with antitrust. The target was widely recognized as the standard bearer of progressive managerialism and a leader in progressive labor practices. (Remind you of some other firms that have found themselves in the cross-hairs of an otherwise do-nothing Antitrust Division today?)

The country has little to show for the breakup forty years later. It didn’t eliminate the fundamental bottleneck associated with telephony: the massive last-mile infrastructure required to get calls into consumers’ handsets. That infrastructure is today mostly owned by just three firms, the new AT&T, Verizon, and T-Mobile, because it exhibits great economies of scale.

While the breakup did bring down long-distance rates, that’s a bug, not a feature. The only reason the old AT&T charged high long-distance rates was that the company was engaged in progressive redistribution of wealth, scalping businesses and well-off long-distance power users to the end of providing dirt-cheap local phone access and “universal service” to the masses.

Any economist who knows his Ramsey prices will tell you that’s not the most profitable way to set your rates, because long-distance calling is a luxury, but basic phone access is a necessity (911, anyone?). To get the most profit out of the public, you want to charge high prices for the necessity–because people will pay those prices whatever they may be–and low prices for the luxury. The trouble with that from the perspective of distributive justice is that it’s a regressive rate structure: charging the masses high prices and elites low prices.
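
For the curious, here is a minimal sketch of the Ramsey logic, the inverse-elasticity rule, with invented costs and elasticities; it is meant only to show why a pricer free to maximize profit marks up the inelastic necessity more than the elastic luxury.

```python
# The Ramsey inverse-elasticity rule: the percentage markup on each service is
# inversely proportional to its demand elasticity, (p - mc) / p = k / elasticity.
# Basic local access (a necessity) is price-inelastic; long distance (a luxury)
# is price-elastic. All numbers below are invented for illustration.

k = 0.5                                   # assumed scale set by the revenue to be raised
services = {
    # name: (marginal cost per unit, demand elasticity), both assumed
    "basic local access": (5.0, 0.8),
    "long distance":      (5.0, 4.0),
}

for name, (mc, elasticity) in services.items():
    markup_share = k / elasticity         # (p - mc) / p
    price = mc / (1.0 - markup_share)     # solve the Ramsey condition for p
    print(f"{name}: price = {price:.2f}, markup over cost = {price - mc:.2f}")

# basic local access: price 13.33, markup 8.33; long distance: price 5.71,
# markup 0.71 -- the necessity carries the burden, which is the regressive
# pattern the post describes emerging after the breakup.
```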

Which is just what happened after the breakup of Ma Bell.

The court and later Congress forced the Baby Bells that owned the last-mile infrastructure to connect long-distance carriers’ calls, enabling massive entry into the long-distance market and driving down long-distance rates. But consumers don’t just pay for long distance, they also must pay for basic call connection that allows long-distance calls to reach consumers’ handsets.

The price of that went up, for everyone, not just long-distance callers, because the last mile remained a bottleneck, an infrastructure so expensive that few firms can survive in the market. Which is why the Baby Bells, which controlled that infrastructure, never went away.

Liberated from a dominating headquarters weaned on a New Deal politics that demanded the sacrifice of profits in favor of progressive pricing, the Baby Bells now charged whatever they wanted. At last they could enjoy whatever cream they might be able to skim from a public that needs phone service and has nowhere to go. The fact that they dominated regional markets, but not long-distance, gave them the political cover that hulking monopoly Ma Bell never had.

Free to grow fat, they matured into the tri-opoly we have today, one that has distinguished itself in its adherence to the maxim that the greatest reward of monopoly is a quiet life by supplying America with slower mobile internet service than almost any country in the developed world.

But at least we got competition in long distance, right? Now anyone with $10,000 in working capital and a closet to store some routers can be a long-distance provider. Isn’t that a win for local self-reliance?

More like a win for fraud and robocallers, according to the Journal, in a story about the “dozens of little-known carriers that serve as key conduits in America’s telecom system,” connecting robocalls that “in total bilked U.S. consumers out of at least $38 million in 2019.”

The Journal treads lightly here–after all it’s got as much to gain as any newspaper from the current antitrust campaign against the tech giants that have out-competed the paper for advertising revenue in recent years–but it’s hard to disguise the culprit:

These small carriers took hold in the decades following the 1984 breakup of AT&T’s phone system monopoly, which was designed to lower the costs of long-distance calls. They mushroomed during the introduction of internet-based calling services in the 2000s.

The emergence of these small phone companies was in many ways a positive development for consumers who now pay less for long-distance calls. The downside is that the system wasn’t designed to discern between legitimate and illegitimate calls, which are sometimes mixed together as they are passed along. U.S. regulators generally didn’t require these carriers to block calls and in some cases forbade them from doing so as a way of limiting anticompetitive behavior. Some telecommunications experts say that opened the door for smaller carriers to hustle business from robocallers, or simply turn a blind eye to suspect traffic.

Ryan Tracy & Sarah Krause, Where Robocalls Hide: the House Next Door, Wall Street Journal, August 15, 2020.

Would there have been robocalls if we still had Ma Bell? Unlikely for a company that was so obsessed with control over its network that it famously stamped “BELL SYSTEM PROPERTY – NOT FOR SALE” on every handset in America.

(I do have to admit, however, that another communications monopoly still with us today provides something of a counterexample. The largest category of mail delivered by the U.S. Postal Service is advertising.)