Categories
Antitrust Meta Philoeconomica

Liu et al. and the Good and Bad in Economics

Liu et al.’s paper trying to connect market concentration to low interest rates reflects everything that’s good and bad about economics.

The Good Is the Story

The good is that the paper tells a plausible story about why the current era’s low interest rates might actually be the cause of the low productivity growth and increasing markups we are observing, as well as the increasing market concentration we might also be observing.

The story is that low interest rates encourage investment in innovation, but investment in innovation paradoxically discourages competition against dominant firms, because low rates allow dominant firms to invest more heavily in innovation in order to defend their dominant positions.

The result is fewer challenges to market dominance and therefore less investment in innovation and consequently lower productivity growth, increasing markups, and increasing market concentration.

Plausible does not mean believable, however.

The notion that corporate boards across America are deciding not to invest in innovation because they think dominant firms’ easy access to capital will allow them to win any innovation war is farfetched, to say the least.

“Gosh, it’s too bad rates are so low, otherwise we might have a chance to beat the iPhone,” said one Google Pixel executive to another never.

And it’s a bit too convenient that this monopoly-power-based explanation for two of the major stylized facts of the age–low interest rates and low productivity growth–would come along at just the moment when the news media is splashing antitrust across everyone’s screens for its own private purposes.

But plausibility is at least helpful to the understanding (as I will explain more below), and the gap between it and believability is not the bad part of economics on display in Liu et al.

The Bad Is the General Equilibrium

The bad part is the authors’ general equilibrium model.

They think they need the model to show that the discouragement competitors feel at the thought of dominant firms making large investments in innovation to thwart them outweighs the incentive that lower interest rates give competitors, along with dominant firms, to invest in innovation.

If not, then competitors might put aside their fears and invest anyway, and productivity growth would then increase anyway, and concentration would fall.

Trouble is, no general equilibrium model can answer this question, because general equilibrium models are not themselves even approximately plausible models of the real world, and economists have known this since the early 1970s.

Intellectually Bankrupt for a While Now

Once upon a time economists thought they could write down a model of the economy entire. The model they came up with was built around the concept of equilibrium, which basically meant that economists would hypothesize the kind of bargains that economic agents would be willing to strike with each other–most famously, that buyers and sellers will trade at a price at which supply equals demand–and then show how resources would be allocated were everyone in the economy in fact to trade according to the hypothesized bargaining principles.

As Frank Ackerman recounts in his aptly-titled assessment of general equilibrium, “Still Dead After All These Years: Interpreting the Failure of General Equilibrium Theory,” trouble came in the form of a 1972 proof, now known as the Sonnenschein-Mantel-Debreu Theorem, that there is never any guarantee that actual economic agents will bargain their way to the bargaining outcomes–the equilibria–that form the foundation of the model.

In order for buyers and sellers of a good to trade at a price that equalizes supply and demand, the quantity of the good bid by buyers must equal the quantity supplied at the bid price. If the price doesn’t start at the level that equalizes supply and demand–and there’s no reason to suppose it should–then the price must move up or down to get to equilibrium.

But every time price moves, it affects the budgets of buyers and sellers, who must then adjust their bids across all the other markets in which they participate, in order to rebalance their budgets. But that in turn means prices in the other markets must change to rebalance supply and demand in those markets.

The proof showed that there is no guarantee that the adjustments won’t just cause prices to move in infinite circles, an increase here triggering a reduction there that triggers another reduction here that triggers an increase back there, and so on, forever.
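The cycling is easy to see in a toy simulation. The sketch below is purely illustrative–a hypothetical two-market excess-demand system chosen only to produce the circular chasing described above, not Scarf’s or any published counterexample: each price rises where demand exceeds supply, but because each market’s excess demand depends on the other market’s price, the adjustment process orbits the would-be equilibrium instead of settling into it.

```python
# Toy price-adjustment ("tatonnement") process that never converges.
# The excess-demand functions are hypothetical, chosen so that each
# market's demand pressure depends on the *other* market's price.

def excess_demand(p1, p2):
    # Market 1 has excess demand when market 2's price is high;
    # market 2 has excess supply when market 1's price is high.
    z1 = p2 - 1.0
    z2 = -(p1 - 1.0)
    return z1, z2

def tatonnement(p1, p2, step=0.1, rounds=200):
    path = [(p1, p2)]
    for _ in range(rounds):
        z1, z2 = excess_demand(p1, p2)
        p1 += step * z1   # raise price where demand exceeds supply
        p2 += step * z2   # lower price where supply exceeds demand
        path.append((p1, p2))
    return path

path = tatonnement(1.5, 1.0)
start_gap = abs(path[0][0] - 1.0) + abs(path[0][1] - 1.0)
end_gap = abs(path[-1][0] - 1.0) + abs(path[-1][1] - 1.0)
print(end_gap > start_gap)  # prices end up farther from equilibrium (1, 1)
```

After 200 rounds of adjustment, prices in this toy economy are farther from the supply-equals-demand point than where they started: an increase here triggers a reduction there, forever.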

Thus there is no reason to suppose that prices will ever get to the places that general equilibrium assumes that they will always reach, and so general equilibrium models describe economies that don’t exist.

Liu et al.’s model describes an economy with concentrated markets, so it doesn’t just rely on the supply-equals-demand definition of equilibrium targeted by the Sonnenschein-Mantel-Debreu Theorem, a definition of equilibrium that seeks to model trade in competitive markets. But the flaw in general equilibrium models is actually even greater when the models make assumptions about bargaining in concentrated markets.

We can kind of see why, in competitive markets, an economic agent would be happy to trade at a price that equalizes supply and demand, because if the agent holds out for a higher price, some other agent waiting in the wings will jump into the market and do the deal at the prevailing price.

But in concentrated markets, in which the number of firms is few, and there is no firm waiting in the wings to do a deal that you reject, holding out for a better price is always a realistic option. And so there’s never even the semblance of a guarantee that whatever price the particular equilibrium definition suggests should be the one at which trade takes place in the model would actually be the price upon which real world parties would agree. Buyer or seller might hold out for a better deal at a different price.

Indeed, in such game theoretic worlds, there is not even a guarantee that any deal at all will be done, much less a deal at the particular price dictated by the particular bargaining model arbitrarily favored by the model’s authors. Bob Cooter called this possibility the Hobbes Theorem — that in a world in which every agent holds out for the best possible deal, one that extracts the most value from others, no deals will ever get done and the economy will be laid to waste.

The really important thing about the Hobbes Theorem is that there is nothing in economic theory that establishes that the nightmare it depicts cannot come to pass.

The bottom line is that all general equilibrium models, including Liu et al.’s, make unjustified assumptions about the prices at which goods trade, not to mention whether trade will take place at all.

But are they at least good as approximations of reality? The answer is no. There’s no reason to suppose that they get prices only a little wrong.

That makes Liu et al.’s attempt to use general equilibrium to prove things about the economy something of a farce. And their attempt to “calibrate” the model by plugging actual numbers from the economy into it in order to have it spit out numbers quantifying the effect of low interest rates on productivity, absurd.

If general equilibrium models are not accurate depictions of the economy, then using them to try to quantify actual economic effects is meaningless. And a reader who doesn’t know better might well come away from the paper with a false impression of the precision with which Liu et al. are able to make their economic arguments about the real world.

So Why Is It Still Used?

But if general equilibrium is a bad description of reality, why do economists still use it?

It Creates a Clear Pecking Order

Partly because solving general equilibrium models is hard, and success is clearly observable, so keeping general equilibrium models in the economic toolkit provides a way of deciding which economists should get ahead and be famous: namely, those who can work the models.

By contrast, lots of economists can tell plausible, even believable, stories about the world, and it can take decades to learn which was actually right, making promotion and tenure decisions based on economic stories a more fraught, and necessarily political, undertaking.

Indeed, it is not without a certain amount of pride that Liu et al. write in their introduction that

[w]e bring a new methodology to this literature by analytically solving for the recursive value functions when the discount rate is small. This new technique enables us to provide sharp, analytical characterizations of the asymptotic equilibrium as discounting tends to zero, even as the ergodic state space becomes infinitely large. The technique should be applicable to other stochastic games of strategic interactions with a large state space and low discounting.

Ernest Liu et al., Low Interest Rates, Market Power, and Productivity Growth 63 (NBER Working Paper, Aug. 2020).

Part of the appeal of the paper to the authors is that they found a new way to solve the particular category of models they employ. The irony is that technical advances of this kind in general equilibrium economics are like the invention of the coaxial escapement for mechanical watches in 1976: a brilliant advance on a useless technology.

It’s an Article of Faith

But there’s another reason why use of general equilibrium persists: wishful thinking. I suspect that somewhere deep down economists who devote their lives to these models believe that an edifice so complex and all-encompassing must be useful, particularly since there are no other totalizing approaches to modeling the economy mathematically on offer.

Surely, think Liu et al., the fact that they can prove that in a general equilibrium model low interest rates drive up concentration and drive down productivity growth must at least marginally increase the likelihood that the same is actually true in the real world.

The sad truth is that, after Sonnenschein-Mantel-Debreu, they simply have no basis for believing that. It is purely a matter of faith.

Numeracy Is Charismatic

Finally, general equilibrium persists because working really complicated models makes economics into a priesthood. The effect is exactly the same as the effect that writing had on an ancient world in which literacy was rare.

In the ancient world, reading and writing were hard and mysterious things that most people couldn’t do, and so they commanded respect. (It’s not an accident that after the invention of writing each world religion chose to idolize a book.) Similarly, economics–and general equilibrium in particular–is something really hard that most literate people, indeed, even most highly-educated people and even most social scientists, cannot do.

And so it commands respect.

I have long savored the way the mathematical economist gives the literary humanist a dose of his own medicine. The readers and writers lorded it over the illiterate for so long, making the common man shut up because he couldn’t read the signs. It seems fitting that the mathematical economists should now lord their numeracy over the merely literate, telling the literate that they now should shut up, because they cannot read the signs.

It is no accident, I think, that one often hears economists go on about the importance of “numeracy,” as if to turn the knife a bit in the poet’s side. Numeracy is, in the end, the literacy of the literate.

But schadenfreude shouldn’t stop us from recognizing that general equilibrium has no more purchase on reality than the Bhagavad Gita.

To be sure, economists’ own love affair with general equilibrium has cooled somewhat since the Great Recession, which seems to have accelerated a move from theoretical work in economics (of which general equilibrium modeling is an important part) to empirical work.

But it’s important to note here that economists have in many ways been reconstituting the priesthood in their empirical work.

For economists do not conduct empirics the way you might expect them to, by going out and talking to people and learning about how businesses function. Instead, they prefer to analyze data sets for patterns, and they have recreated the mathematical arms race that once characterized general equilibrium modeling in their development of increasingly sophisticated econometric tools for analyzing data.

If once the standard for admission to the cloister was fluency in the latest general equilibrium techniques, now it is fluency in the latest econometric techniques. These too overawe non-economists, leaving them to feel that they have nothing to contribute because they do not speak the language.

Back to the Good

But general equilibrium’s intellectual bankruptcy is not economics’ intellectual bankruptcy, and does not even mean that Liu et al.’s paper is without value.

For economic thinking can be an aid to thought, when used properly. That value appears clearly in Liu et al.’s basic and plausible argument that low interest rates can lead to higher concentration and lower productivity growth. Few antitrust scholars have considered the connection between interest rates and market concentration, and the basic story Liu et al. tell gives them something to think about.

What makes Liu et al.’s story helpful is that it is about tendencies, rather than an attempt to reconcile all possible tendencies and fully characterize the net outcome of a particular action, as general equilibrium tries to do.

All other branches of knowledge undertake such story telling, and indeed limit themselves to it, and so one might say that economics is at its best when it is no more ambitious in its claims than any other part of knowledge.

When a medical doctor advises you to reduce the amount of trace arsenic in your diet, he makes a claim about tendencies, all else held equal. He does not claim to account for the possibility that reducing your arsenic intake will reduce your tolerance for arsenic and therefore leave you unprotected against an intentional poisoning attempt by a colleague.

If the doctor were to try to take all possible effects of a reduction in arsenic intake into account, he would fail to provide you with any useful knowledge, but he would succeed at mimicking a general equilibrium economist.

Similarly, what good economics does is to make a claim about tendencies, all else held equal. Liu et al. do this in their introduction when they make their basic argument about the connection between interest rates and competition and productivity growth. Economists generally disparage such introductory stories as mere “intuition,” but it is in fact the sole value proposition of economics.

When Liu et al. move from this to their general equilibrium model, they try to pin down the overall effect of interest rates on the economy, accounting for how every resulting price change in one market influences prices in all other markets. That is, they try in a sense to simulate an economy in a highly stylized way, like a doctor trying to balance the probability that trace arsenic intake will give you cancer against the probability that it will save you from a poisoning attempt. Of course they must fail.

Economists call the good economics to which I refer “partial equilibrium” economics, because it doesn’t seek to characterize equilibria in all markets, but instead focuses on tendencies. It is the kind of economics that serves as a staple for antitrust analysis.

What will a monopolist’s increase in price do to output? If demand is falling in price–people buy less as price rises–then obviously output will go down. And what will that mean for the value that consumers get from the product? It must fall, because they are paying more, so we can say that consumer welfare falls.

Of course, the higher prices might cause consumers to purchase more of another product, and economies of scale in production of that other product might actually cause its price to fall, and the result might then be that consumer welfare is not reduced after all.

But trying to incorporate such knock-on effects abstractly into our thought only serves to reduce our understanding, burying it under a pile of what-ifs, just as concerns about poisoning attempts make it impossible to think clearly about the health effects of drinking contaminated water.

If the knock-on effects predominate, then we must learn that the hard way, by acting first on our analysis of tendencies. And even if we do learn that the knock-on effects are important, we will not respond by trying to take all effects into account general-equilibrium style–for that would gain us nothing but difficulty–but instead we will respond by flipping our emphasis, and taking the knock-on effects to be the principal effects. We will assume that the point of ingesting arsenic is to deter poisoning, and forget about the original set of tendencies that once concerned us, namely, the health benefits of avoiding arsenic.

Our human understanding can do no more. But faith is not really about understanding.

(Could it be that general equilibrium models are themselves just about identifying tendencies, showing, perhaps, that a particular set of tendencies persists even when a whole bunch of counter-effects are thrown at it? In principle, yes. Which is why very small general equilibrium models, like the two-good exchange model known as the Edgeworth Box, can be useful aids to thought. But the more goods you add in, and the closer the model comes to an attempt at simulating an economy, the more powerfully it seduces scholars into “calibrating” it with data and trying to measure the model as if it were the economy, the less likely it is that the model is aiding thought as opposed to substituting for it.)

Categories
Miscellany

Two Kinds of Humanism

“American society used to be segregationist before it moved to a multiculturalist model, which is essentially about coexistence of different ethnicities and religions next to one another.”

“Our model is universalist, not multiculturalist,” he said, outlining France’s longstanding insistence that its citizens not be categorized by identity. “In our society, I don’t care whether someone is Black, yellow or white, whether they are Catholic or Muslim, a person is first and foremost a citizen.”

Ben Smith, The President vs. the American Media, N.Y. Times (Nov. 16, 2020).

That is because France still believes in the state. America has never had that problem.

America’s joy today at the success of SpaceX — a private firm — in sending four astronauts on their way to the space station, something the state has been powerless to do for nearly a decade, illustrates this rather nicely.

Categories
Miscellany

Place

“When you are a displaced person, and when you are longing for that place and you cannot visit it, that place becomes more than just a stone or mountain, it becomes like a beloved person. You want to kiss it, and lie down on it and feel the energy from the earth.”

Anton Troianovski & Carlotta Gall, After War Between Armenia and Azerbaijan, Peace Sees Winners and Losers Swap Places, N.Y. Times (Nov. 15, 2020).

I wonder how many Americans have such an attachment to place.

We've been here so little time,
And move so much, 
And it's such a big country,
With so many different soils.

But some do.

Categories
Miscellany

Corporate Law before Capitalism

One forgets that when Blackstone was writing his celebrated Commentaries on the Laws of England in the mid-18th century, business was not the most obvious application of the corporate form.

And so when Blackstone gives a list of types of corporations, he puts the business corporation last:

These artificial persons are called bodies politic, bodies corporate, (corpora corporata) or corporations: of which there is a great variety subsisting, for the advancement of religion, of learning, and of commerce.

1 William Blackstone, Commentaries on the Laws of England 303 (Oxford 2016) (1765).

Blackstone goes on to take, as his primary example of a corporation, not the business corporation, but rather “the case of a college in either of our universities.”

Categories
Antitrust Regulation

“The Best Are Easily 10 Times Better Than Average,” But Can They Do Anything Else?

Netflix CEO Reed Hastings is celebrating the principle that great software programmers are orders of magnitude more productive than average programmers. The implication is that sky-high salaries for these rock stars are worth it.

Now, it may very well be the case that the best programmers are orders of magnitude better than average programmers. I’ve seen a similar thing on display during examinations for gifted students: inevitably one student finishes the exam in half the time and walks out with a perfect score, while the rest of the gifted struggle on.

Just how many orders of magnitude smarter is that student, relative not just to the other gifted students in the room, but to the average student who is not in the room?

But while the rock-star principle may justify the high willingness of Silicon Valley firms to pay for talent — the more value an employee brings to a firm the more the firm can afford to pay the employee and still end up ahead — that doesn’t mean that as an economic matter a firm must pay rock-star employees higher salaries.

Far from it.

Economic efficiency requires that great programmers be put to use programming, otherwise society loses the benefit of their talents. But the minimum salary that, as an economic matter, a tech firm must pay a rock-star programmer to induce the programmer to program is just a penny more than what the programmer would earn doing the programmer’s next-most productive activity.

If the programmer isn’t good at anything but programming, that number might be $15.01 — the $15 minimum wage Amazon pays its fulfillment center workers plus a penny — or even something lower, as the programmers I know would have a tough time sprinting around a warehouse all day.

A programmer might be worth $100 million as a programmer, for example, because the programmer is capable of delivering that much value to software. But to make sure this person actually delivers that value, the market does not need actually to pay the programmer $100 million, or anything near to that amount. All the market needs to pay the programmer is a penny more than what the programmer would earn by not programming.

And if rock-star programmers tend only to be rock stars at programming, as I suspect is the case, that number might be pretty small, indeed, on the order of what average programmers make — if not $15 an hour, which is a bit of an exaggeration — because the rock-star programmer is likely to be average at programming-adjacent pursuits.

If the most the programmer would make teaching math, playing competitive chess, or just programming for non-tech companies that will never earn the profits needed to pay rock-star salaries, no matter how talented their employees, is a hundred thousand a year, then that plus a penny is all that economics requires that the programmer be paid for doing programming. Not $100 million.
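The arithmetic of the argument is trivial but worth making explicit. The numbers below are hypothetical, taken from the examples in the text:

```python
# Hypothetical numbers from the argument above: the minimum pay that
# economic efficiency requires is the outside option plus a penny,
# no matter how much value the programmer creates.

value_created = 100_000_000   # value the programmer can deliver to software
outside_option = 100_000      # best annual pay available outside rock-star tech

minimum_efficient_pay = outside_option + 0.01
surplus = value_created - minimum_efficient_pay  # what bidding wars fight over

print(minimum_efficient_pay)
print(surplus)
```

Everything between the second number and the first is surplus whose division is determined not by efficiency but by competition among the firms doing the hiring.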

So why are rock-star programmers earning the big bucks in Silicon Valley? Because tech firms compete for them, bidding up the price of their services.

Tech firms know this, of course, and once tried to put a lid on the bidding war, by entering into no-poach agreements pursuant to which they promised not to try to lure away each other’s programmers by offering them more money.

There is no reason to think that these no-poach agreements were inefficient. Unless you believe that programmers can contribute more to some tech firms than to others, in which case the bidding wars that drive rock-star compensation sky high are allocating programmers to their most productive uses. But that seems unlikely: does making Google better contribute more to America than making Amazon better?

(The agreements also could not have created any deadweight loss, because perfect price discrimination is the norm in hiring programming talent: firms negotiate compensation individually with each programmer.)

All the no-poach agreements did was to change the distribution of wealth: limiting the share of a firm’s revenues that programmers can take for themselves.

Indeed, the no-poach agreements probably contributed a bit to the deconcentration of wealth.

A dollar of revenue paid out to a smart programmer goes in full to the programmer, whereas that same dollar, if not paid to the programmer but instead paid out as profits to shareholders, is divided multiple ways between the firm’s owners. Competitive bidding for rock-star programmer salaries concentrates wealth, and the no-poach agreements spread it — admittedly to shareholders, who tend to be wealthy, but at least the dollar is spread.

The antitrust laws intervened just in time, however, to dissolve these agreements and punish Silicon Valley firms for doing their part to slow the increase in the wealth gap in America.

Today’s antitrust movement has argued that antitrust should break up the tech giants in part to prevent them from artificially depressing the wages they pay the little guy. I’ve argued that would be a mistake, because breakup could damage the companies, reducing the value they deliver to society and harming everyone. Regulating wages directly is a better idea.

But you don’t just make compensation fair by raising low wages. You also have to reduce excessive wages. One way to start is just by allowing the tech firms to conspire against their rock stars.

And once tech firms have finished conspiring against their overpaid programmers, they can start conspiring against another group of employees that is even more grossly overpaid per dollar of value added: their CEOs.

Well, that we might have to do for them.

Categories
Antitrust Monopolization

The Decline in Monopolization Cases in One (More) Graph

DOJ, FTC, and Private Cases Filed under Section 2 of the Sherman Act
(Image license: CC BY-SA 4.0.)

Observations:

  • The decline in cases brought by the Department of Justice since the 1970s is consistent with the story of Chicago School influence over antitrust. What is perhaps less well known, but clearly reflected in the data, is that the Chicago Revolution took place in the Ford, and especially the Carter, Administrations, not, as is sometimes supposed, in the Reagan Administration, although Reagan supplied the coup de grace.

    Indeed, we have only five monopolization cases filed by DOJ over the course of the entire Carter Administration, as compared with 58 filed during the part of the Nixon Administration and the Ford Administration covered by this data series. This is consistent with the broader influence of the Chicago School over regulation of business. It was also under Ford and Carter, not Reagan, that deregulation got underway, with partial deregulation of railroads (1976), near-complete deregulation of airlines (1978), and partial deregulation of trucking (1980) (more here).

    The timing suggests that the Chicago School’s victories were intellectual, rather than merely partisan. As Przemyslaw Palka has pointed out to me, Milton Friedman consciously pursued a strategy of intellectual, rather than political warfare, because he understood that victory on the intellectual plane is more complete and enduring (a nice discussion of this may be found here on pages 218-221). As these numbers suggest, Chicago prevailed by converting its adversaries, so that even when its adversaries were nominally in political power under Carter, they implemented Chicago’s own agenda.
  • To the extent that the early part of the FTC data series is reliable (more on that below), the story in the FTC case numbers is the six monopolization cases brought over the past five years, following a twenty-year period during which the FTC brought only three cases. With the exception of Google, which has just been filed, there has been no corresponding uptick in monopolization cases filed by the Department of Justice.
  • The private litigation data show that in some years (1998 and 2013), private litigation across the entire United States has produced fewer monopolization cases (against unique defendants) than did a single federal enforcer–the DOJ–in 1971. The private litigation numbers for 1997 to 2020 also show that, on average, about twenty defendants face new monopolization actions each year when federal enforcers are filing near-zero complaints. To the extent that the numbers for 1974 to 1983 are reliable (of which more below), they suggest that private cases have also declined markedly since the 1970s, although there was a lag of several years between the two effects, perhaps due to the tendency of private plaintiffs to file follow-on cases to government cases.
  • Altogether, one is left with the impression that corporate America has been awfully well-behaved since about 1975.

Notes on the Data:

  • The cases brought by the Department of Justice (DOJ) come from the Antitrust Division’s own workload statistics, so I assume the numbers are accurate. For DOJ cases investigated, as well as filed, see here.
  • The cases filed by private plaintiffs come from two sources. The first, for the years 1997 to 2020, is a search for Section 2 complaints in federal court dockets via Lexis CourtLink. I must thank Beau Steenken, Instructional Services Librarian & Associate Professor of Legal Research at University of Kentucky Rosenberg College of Law, for figuring out how to search CourtLink for Section 2 cases (no easy task, it turns out).

    These are only cases for which the plaintiff, in filing the complaint, indicated the cause of action as Section 2 of the Sherman Act in the court’s cover sheet. Apart from deleting a few cases in which DOJ was the plaintiff, and a few cases in which the case was filed by mistake (e.g., the case name reads: “error”), I did not examine these cases at all, other than to note that many of the defendants look plausible (e.g., Microsoft comes up a lot in the late 1990s or early 2000s).

    Finally, I counted only unique defendants in any given year. So for example, if there were ten cases filed against Microsoft in 2000, I counted that as only one case. The reason is that multiple consumers or competitors might be harmed by a single piece of anticompetitive conduct undertaken by a monopolist, and so one would expect multiple plaintiffs to sue the monopolist based on the same conduct. For those interested in using case counts to measure enforcement, all of those cases signal the same thing, that a particular anticompetitive practice has been challenged, and so all of the cases together really only represent a single instance of enforcement. I did not, however, check each complaint to make sure that the alleged conduct was the same across all complaints. I just assumed that multiple complaints filed in a given year against a single defendant relate to the same conduct. (I did not, however, count unique defendants across plaintiff types: the Justice Department case against Microsoft was counted toward DOJ cases, and any private cases filed against Microsoft in the same year counted as a single additional case in the private case count.)

    According to CourtLink, some federal courts adopted online filing later than others, and CourtLink only has electronic dockets. I chose to use 1997 as the start year for this count, because by that year almost all jurisdictions were online and so presumably their dockets are part of the CourtLink database. According to CourtLink, several jurisdictions had not yet moved online by that year, however, and so the counts may be slightly skewed low in the first few years after 1997 because they miss cases filed in the jurisdictions that were still offline during that period. The jurisdictions that went online after January 1, 1997, and the year in which they went online, are District of New Mexico (1998), District of Nevada (1999), and District of Alaska (2004).

    The source of the data for the years 1974 to 1983 is Table 6 in this article. That table gives the yearly percentage of refusal to deal and predatory pricing cases in a sample of 2,357 cases from five courts: the Southern District of New York, the Northern District of Illinois, the Northern District of California, the Western District of Missouri, and the Northern District of Georgia. The table also gives the total number of private antitrust cases filed per year. Because I suspect that my CourtLink data represents “pure” Section 2 cases–cases in which the Section 2 claim is the principal claim in the case–I adjusted these percentages using information from Table 1 in the article about the share of those percentages that represent primary claims. And because the total yearly private case counts given in the article did not appear to be adjusted for multiple cases filed against the same defendant in a given year, as my CourtLink data was, I further reduced the results in the same proportion by which my CourtLink results shrank when I eliminated multiple cases against the same defendant, a reduction of about 40%.
  • I collected the FTC data by searching for cases labeled “Single-Firm Conduct” in the FTC’s “cases and proceedings” database. The cases and proceedings database goes back to 1996, and so I labeled years for which there were no hits as years of zero cases going back to 1996. However, the FTC website does caution that some older cases are searchable only by name and year, and presumably not by case type, so it is possible that this data fails to count cases from early in the period (e.g., late 1990s). I also paged through the “Annual Antitrust Enforcement Activities Reports” issued by the FTC between 1996 and 2008 and found a couple of cases not returned by the search of the cases and proceedings database. Finally, I included the FTC’s case against Intel, filed in 2009. I counted both administrative complaints filed in the FTC’s own internal adjudication system and complaints filed by the FTC in federal court. The FTC cases are nominally brought under Section 5 of the FTC Act, through which the FTC enforces Section 2 of the Sherman Act.
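The deduplication and the 1974–1983 adjustment described above can be sketched in a few lines of Python. Every figure below is hypothetical, chosen only to illustrate the arithmetic; none of it is the actual CourtLink or article data.

```python
from collections import defaultdict

# Hypothetical filings, not actual CourtLink records: (year, plaintiff_type, defendant).
filings = [
    (2000, "private", "Microsoft"),
    (2000, "private", "Microsoft"),  # same defendant, same year: collapses to one case
    (2000, "doj", "Microsoft"),      # different plaintiff type: counted separately
    (2000, "private", "Intel"),
]

# Count unique defendants per (year, plaintiff type).
unique = defaultdict(set)
for year, ptype, defendant in filings:
    unique[(year, ptype)].add(defendant)
deduped = {key: len(defendants) for key, defendants in unique.items()}
# deduped[(2000, "private")] == 2, deduped[(2000, "doj")] == 1

# The 1974-1983 estimate applies three factors to the article's yearly totals
# (all numbers here are made up for illustration):
total_private_cases = 1000   # hypothetical yearly total of private antitrust cases
sec2_share = 0.10            # hypothetical share of refusal-to-deal/predation cases
primary_claim_share = 0.50   # hypothetical share with Section 2 as the primary claim
dedup_factor = 1 - 0.40      # the ~40% reduction observed when deduping CourtLink data

estimated_pure_sec2 = total_private_cases * sec2_share * primary_claim_share * dedup_factor
# roughly 30 "pure" Section 2 cases for the hypothetical year
```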
Categories
Antitrust Monopolization

The Smallness of the Bigness Problem

The tendency to ascribe the problem of inequality that ails us to the bigness of firms is the great embarrassment of contemporary American progressivism. The notion that the solution to poverty is cartels for small business and the hammer for big business is so pre-modern, so mercantilist, that one wonders what poverty of intellect could have led American progressives into it.

Indeed, the contemporary progressive’s shame is all the greater because the original American progressives a century ago, whose name the contemporary progressive so freely appropriates, did not make the same mistake. The original progressives were more modern than progressives today, perhaps because the pre-modern age was not quite so distant from them. Robert Hale, the greatest lawyer-economist of the period, wrote that

[e]ven the classical economists realized . . . competition would not keep the price at a level with the cost of all the output, but would result in a price equal to the cost of the marginal portion of the output. Those who produce at lower costs because they own superior [capital] would reap a differential advantage which Ricardo, in his well-known analysis, designated “economic rent.”

Robert L. Hale, Freedom Through Law: Public Control of Private Governing Power 25-26 (1952).

I suspect that this is absolute Greek to the contemporary progressive. I will kindly explain it below.

But first, it should be noted that the American progressive’s failure to appreciate the smallness of the bigness problem is not shared by Piketty, whom American progressives celebrate without actually reading:

Yet pure and perfect competition cannot alter the inequality r > g, which is not the consequence of any market “imperfection.”

Thomas Piketty, Capital in the Twenty-First Century 573 (Arthur Goldhammer trans., 2017). (Italics mine.)

What does Piketty mean here?

He means what Hale meant, which is that the heart of inequality does not come from monopolists charging supracompetitive prices, however obnoxious we may feel that to be, but rather from the fact that the rich own assets that are more productive than the assets owned by the poor, and so they profit more than the poor even at efficient, competitive prices.

In other words, the rich get richer because their costs are lower and their costs are lower because they own all the best stuff.

No matter how competitive the market, prices will never be driven down to the lower costs faced by the rich, because other people own less-productive assets than do the rich and competition drives prices down to the level of the higher costs associated with producing things with less-productive assets.

(Why can’t price just keep going down, and simply drive the more expensive producers out of the market to the end of dissipating the profits of the less expensive producers? Because there is always a less expensive producer! Price can therefore never dissipate the profits of them all, and anyway demand puts a floor on price: consumers are always bidding prices up until supply satisfies demand.)
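Hale’s point can be put in toy numbers (entirely hypothetical, chosen only to make the arithmetic visible):

```python
# Hypothetical unit costs for three producers; competition drives price down
# only to the cost of the marginal (highest-cost) producer needed to meet demand.
costs = {"rich": 2.0, "middling": 5.0, "marginal": 8.0}
competitive_price = max(costs.values())  # 8.0: the marginal producer's cost

# Even at the fully competitive price, with no monopoly markup at all,
# lower-cost owners earn a Ricardian rent on every unit sold:
rents = {owner: competitive_price - cost for owner, cost in costs.items()}
# rich: 6.0 per unit, middling: 3.0, marginal: 0.0
```

The rent of the rich producer survives perfect competition, which is exactly why antitrust cannot touch it.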

Graphically, American progressives have been sweating the “monopoly profit” box without seeming to realize that it’s tiny compared to what remains once you eliminate it, which is the “economic rent” box.

Piketty, the original American progressives, and kindergartners know the difference between big and small. Why don’t we?

Categories
Antitrust

Conspiracy or Incompetence?

Let’s get this straight. The New York Times criticizes The Epoch Times today for running infomercials attacking the Chinese Communist Party’s handling of the coronavirus pandemic while making “no mention of The Epoch Times’s ties to Falun Gong, or its two-decade-long campaign against Chinese communism.”

But last week the Times ran a long piece, titled “Big Tech’s Professional Opponents Strike at Google,” that purported to reveal to readers the forces behind the Google antitrust suit while making no mention of the campaign of the News Media Alliance, of which the Times is a member, for antitrust action against Google, or the threat posed by Google to the Times’ advertising business.

Since the Times seems to think poor little Epoch Times should be disclosing its death struggle with the CCP to readers, I would like to see the Times start disclosing, in each article it writes about Big Tech, its death struggle with those companies over advertising revenues. The paper can also slap a correction to the same effect on each of the hundreds of pieces it has published over the past three years trashing Big Tech.

Ben Smith, who knows better, contributed to the Epoch Times piece. Let’s see him show some courage in his next column about media and tech.

So which is it? Maybe both.

Categories
Miscellany

Chess as a Warning

Still not like life, but more like it. (Source.)

There is a tendency to view the great lesson of chess as being that math-like reasoning works: the systematic thinkers and pattern recognizers among us excel at the game because they can see seven moves ahead, spot traps, and so on.

But that’s the wrong conclusion.

What chess tells us is that if, even with fixed rules and an eight-by-eight board, math cannot deliver a key to winning every game–and it looks like it can’t–then imagine how much less useful mechanical thinking is on the infinity-by-infinity board with no fixed rules that is life.

Chess has always been an amalgam of systematic behavior that is quite alien to daily life, and strategies and concepts that are much more familiar to us, like daring, care, and luck.

In chess there are the moments when a great systematic mind can see checkmate seven moves ahead. But just as often there are the moments in which several possible moves would open up a near-infinity of possible outcomes and no one can say which is best, both because that depends on what the other player does and because the possible variations are almost too numerous to count.

Sometimes the game lands us on an end branch of the tree of possibilities, where we can see our way to the blossom at the end, but other times it lands us on the trunk, where the brambles are so dense that we can scarcely see the sky.

When we find ourselves on the end branches, systematic thinking looms large, and we tend to take this as chess’s great lesson for life: find the system! Find mate in seven moves!

But when we find ourselves in the brambles, we call upon the same tools we use in daily life. We think in terms of strategy. Dominate the center of the board. Take the initiative by putting the opponent’s King in check. Pin down important pieces. Defend. Attack. Trust your gut. Victory goes to the bold.

There is in human relationships nothing at all even remotely analogous to mate in seven moves. Sometimes we talk about the relations between great powers as “like a game of chess,” but in truth they never are. The statesman who thinks he can mate his opponent is a dangerous fool, because he will sacrifice sound, human strategy for a system that will inevitably fail.

The closest thing we have to mate in life is the law, which purports at times to be a set of fixed rules that govern all human interactions. But any practicing lawyer will tell you that a bit of politics, or an appeal to the heart of a judge, can win a case, even if the letter of the law is against you.

The board of life is so vast, and the pieces so numerous, that we are always, always caught in the brambles, unable to see the sky.

The great success of machine learning in chess represented by AlphaZero, a simple learning algorithm slapped together by Google that went on to beat the best chess computers in the world in dashing style, makes this lesson clear.

The legacy chess computers that AlphaZero beat were systematic thinkers, combining hardcoded programming about the best opening moves with number crunching that would explore possible games emanating from different moves and try to pick the most promising of them.

But AlphaZero is machine learning. Google’s engineers fed it the rules of chess and it played tens of millions of games against itself, creating a map of the best moves in different situations based on whether they ultimately led to a win or a loss.

It takes an approach akin to the approach of the human mind to life: note what seems to work based on experience and then do it when you encounter similar situations in the future. Of course, AlphaZero has a lot more experience to work with, because no one can play forty-four million games with himself in two hours.

The important thing about AlphaZero is that no one, not AlphaZero, not the Google engineers, can identify a winning rule of decision that AlphaZero follows, other than the learning map itself, and that changes as AlphaZero learns. There’s no system in there, other than the learning process, which is really just a method of coping with the richness of experience.
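The learning-map idea can be sketched on a game far simpler than chess. The toy below bears no resemblance to AlphaZero’s actual architecture (no neural network, no tree search); it just plays a trivial Nim variant against itself at random and records which moves tended to end in a win:

```python
import random
from collections import defaultdict

# Toy self-play learner for a trivial game: a pile of stones, each player
# takes 1 or 2, and whoever takes the last stone wins. The "map" is just
# observed win rates for each (pile size, move) pair.
wins = defaultdict(int)
plays = defaultdict(int)

random.seed(0)
for _ in range(20000):
    pile, player, history = 5, 0, []
    while pile > 0:
        move = random.choice([m for m in (1, 2) if m <= pile])
        history.append((pile, move, player))
        pile -= move
        player = 1 - player
    winner = 1 - player  # the player who took the last stone
    for pos, move, mover in history:
        plays[(pos, move)] += 1
        wins[(pos, move)] += (mover == winner)

def best_move(pile):
    """Play the move with the best observed win rate -- no lookahead at all."""
    return max((m for m in (1, 2) if m <= pile),
               key=lambda m: wins[(pile, m)] / plays[(pile, m)])
```

From a pile of five, the learned map picks taking two, leaving the opponent a losing pile of three, which happens to be the game-theoretically correct move, even though nothing in the code ever computes it.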

The thing that astonished chess enthusiasts is that AlphaZero plays in a human fashion, making daring sacrifices to achieve positional advantages. Some say it hearkens back to the age of “romantic chess” in the 19th century, before human players became obsessed with systematic play and made the game boring.

The lesson here is not that we have found a mechanical solution to the game. We haven’t: AlphaZero can lose; it’s just better at strategy than anyone else, so it tends to win more often. The lesson is that most of chess is not finding mate in seven moves–otherwise the brute force chess computers would be unbeatable–but rather being very, very good at the familiar strategies that we use to navigate life: learning from experience, noticing what seems to work.

There’s no doubt that being able to see a few moves ahead helps avoid traps–those mates in seven–and that is what stands out at first about the game to human players.

But it stands out precisely because life is not like that.

Categories
Civilization Despair Miscellany World

God Has Died a Thousand Times, and Once in Philadelphia

In its most extreme form, the state to an American is ‘a bunch of people’, politicians and their officials whom he watches with critical and even distrustful eyes; he sees the state as a powerful instrument that belongs to and is operated by groups of people for their own ends. At the other extreme one finds in Europe the adoration of the state as something majestic, transcendent and even divine (in the tradition of the ‘divine’ emperors of Rome). Nobody expressed this feeling better than the famous philosopher Hegel, who was professor at the Prussian University of Berlin from 1818 to 1831 and wrote: ‘The march of God in the world, that is what the state is. In considering the Idea of the State we must not have our eyes on particular states . . . Instead we must consider the Idea, this actual God, by itself’.

R. C. van Caenegem, An Historical Introduction to Western Constitutional Law 168 (2000).