Categories
Inframarginalism Monopolization Regulation

The Counterproductive Antimonopolism in New York’s Proposed Price Gouging Rules

In the modern age, we have trouble taking ideas seriously. We prefer to think in terms of dumb mechanism. We need oil for energy. It is in limited supply. Therefore we fight over it. Therefore we have conflict in the Middle East, which has a lot of oil. We apply this sort of economic logic to everything.

The view that ideas determine social behavior seems, by contrast, wishy-washy. Does anyone need an idea in the way he needs energy and hence oil to live? Why would two groups that are otherwise well fed and well clothed fight over a figment of the mind?

To the extent that we credit ideas with power, we do so only by seeing them as snare and delusion—weapons in our quest for physical resources. Ideas are spin. They are the Viceroy butterfly’s colors, which mimic those of the bitter-tasting Monarch, warding off predators. Ideas are psyops, nothing more.

The ancients didn’t have this problem. Ideas, for them, were quite obviously everything, which is why people got worked up about religious dogma, as when the Greens and the Blues came to blows over the question whether Jesus was mostly human or mostly god. (We still do occasionally get violent over religion today, but we see that as a shame and a throwback.)

As I have argued before, the irony of our modern disdain for the power of ideas is that one of our greatest modern inventions—the computer—is an object lesson in the importance of ideas relative to physical mechanisms. No one questions the importance of software. No one questions its influence over the behavior of our machines.

And yet we are somehow certain that our own software—ideas—is mere epiphenomenon.

Antimonopolism as Mere Idea

So it is that when I point out to progressives that antimonopolism is bad for the movement because it leads, ultimately, to a vindication of the justice of the free market, I am told not to worry because antimonopolism is just good progressive psyops. Yes, I am told, free markets are themselves engines of inequality, but being an antimonopolist isn’t the same thing as being a free marketer.

Instead, I am told, antimonopolism is a way of affirming that business interests are the enemy. It’s a way of marshaling support for government intervention. And that is all. Once progressives have ridden a wave of antimonopoly sentiment into power, I am told, they will be free to achieve progressive goals however they want, and that may or may not include more markets and more competition.

This view of antimonopoly as psyops has been most on display in progressive calls to use antitrust to fight inflation. So far as I know, a century of progressive economics has never taken the position that inflation is caused by monopolization or that antitrust might be a useful remedy.

Keynes, for example, thought inflation’s flip side—deflation—had little to do with market structure. He thought Roosevelt’s first New Deal, which was about using cartelization of markets to fight deflation, was a mistake. He invented macroeconomics because microeconomics—tinkering with market structure—was a dead end. It stands to reason that, if he thought deflation wasn’t a problem of market structure, he didn’t think inflation was either.

Progressive economists no doubt understand that the link between inflation and monopolization is tenuous at best. And yet here, for example, was Paul Krugman writing a year ago when this debate was flaring:

Give Biden and his people a break on their antitrust crusade. It won’t do any harm. It won’t get in the way of the big stuff, which is mostly outside Biden’s control in any case. At worst, administration officials will be using inflation as an excuse to do things they should be doing in any case. And they might even have a marginal impact on inflation itself.

Paul Krugman, Why Are Progressives Hating on Antitrust?, N.Y. Times (Jan. 18, 2022).

In other words, arguing that inflation is an antitrust problem is good psyops, allowing progressives to leverage concern about inflation to achieve an unrelated agenda.

Well, there are costs to this kind of instrumental use of ideas—costs that arise because, at the end of the day, ideas aren’t just weapons for striking the other side. They are the software that governs the behavior of those who harbor them. If you hold onto ideas when they’re no good, you are going to do the wrong thing.

When you run bad software, the computer does bad things.

The New York Price Gouging Regulations

The peril of harboring bad ideas is reflected in the rather peculiar interpretation of New York’s new price gouging law proposed by New York Attorney General Letitia James.

The law itself is a good one. It prohibits “unconscionably excessive” pricing during any “abnormal disruption” of a market for a good or service that is “vital and necessary for the . . . welfare of consumers.”

The language is capacious enough to allow New York to institute generalized price controls to rein in supply-chain-driven inflation, including today’s inflation. After all, a supply chain disruption is an “abnormal” disruption. And all goods are, by definition, necessary to the “welfare of consumers.”

But only if the Attorney General interprets the law that way. And here is where the power of bad ideas rears its head.

As the Attorney General acknowledges, half a dozen states—including such conservative climes as Georgia, Mississippi, and Louisiana—consider any increase in the price of covered necessities during a time of emergency to be presumptive price gouging. The price of gas can go up by a penny or ten dollars—either way, the burden is on the seller to prove that it is not price gouging.

The New York Attorney General decided, however, to take a different tack. Instead of applying the presumption to any amount of price increase by any firm, the Attorney General decided to apply it only to any amount of price increase by firms that either have a 30% market share or compete in a market with five or fewer “significant competitors.” In all other cases, only a price increase in excess of 10% will trigger the presumption of price gouging.

That’s right, New York’s price gouging presumption is actually going to be narrower than Mississippi’s, because it only applies to big firms.

What gives?

Answer: bad software.

Whether they genuinely believe in antimonopolism, or think it is mere psyops, progressives have antimonopolism on the brain. Every economic problem appears to them to be a problem of monopoly. And every solution appears to them again to be a solution to a monopoly problem.

They do not see a statute that prohibits the charging of high prices as an opportunity to redistribute wealth in areas of economic life that antimonopoly policy cannot touch. Instead, they see it as an invitation to extend antimonopoly ideology into new areas.

In their minds, making such a connection actually broadens the statute, by tying it to what they are sure is the root cause of all economic injustice.

Except it isn’t. And they end up narrowing the statute instead.

So they take a statute that could be interpreted presumptively to ban all above-cost pricing attributable to supply chain disruption and use it instead presumptively to ban only above-cost pricing by big firms.

Price Gouging Is about Scarcity, Not Monopoly (and Yes, Those Are Two Different Things)

The pity of using a market concentration requirement to limit a great price gouging law is that price gouging really has zilch to do with monopoly.

Price gouging is, instead, about scarcity. Or one might say that monopoly is about artificial scarcity whereas price gouging is about the exploitation of natural scarcity.

We fear the monopolist because, in the absence of competition, the monopolist can restrict output and raise price without losing market share.

By contrast, we hate price gouging because it involves taking advantage of an involuntary restriction in supply.

When demand for food spikes before a hurricane, the public knows that supermarkets don’t have the inventory to meet demand. But the public also knows that the supermarkets originally expected to sell the inventory that they do have at normal prices. Those eggs were already on the shelves before the impending hurricane was announced. When the supermarkets raise prices, it is therefore obvious to the public that the surcharge is pure profit. That’s what makes the public mad and gives rise to price gouging laws. The manufacturing of a voluntary shortage plays no role here. No one thinks the supermarket is holding back eggs—or choosing not to order more.

Monopoly is famine while grain rots in silos. Price gouging is your neighbor demanding your house in exchange for a slice of bread—after lightning strikes the silos.

That’s why price gouging statutes kick into gear only during an emergency—or, as in the case of New York’s law, during a period of “abnormal disruption” of markets. A monopolist’s decision voluntarily to restrict output and jack up prices is plenty evil, but one thing it isn’t is the sort of supply chain disruption that triggers a price gouging statute.

Confusing Scarcity with Monopoly

So what is a market concentration requirement doing in regulations implementing a price gouging statute?

The Attorney General relies on a passage in the price gouging law that identifies “an exercise of unfair leverage” as a factor in determining whether a firm has engaged in price gouging. But the phrase “unfair leverage” could just as easily refer to (natural) scarcity power as it could to monopoly power.

The Attorney General’s comments shed more light on her rationale. They explain that “firms in concentrated markets pose a special risk of price gouging because they can use their pricing power in conjunction with an abnormal market disruption to unfairly raise prices.”

This is a category mistake. She has confused scarcity power with monopoly power.

The pricing power upon which price gouging is based is scarcity power. It is the power that arises because an act of god has eliminated part of the supply that would otherwise exist in the market. The pricing power enjoyed by “firms in concentrated markets” is not (natural) scarcity power. It’s monopoly power (artificial scarcity)—the power voluntarily to restrict supply.

A firm in a concentrated market can use its monopoly power whenever it wants, including during an “abnormal market disruption.” But whenever the firm chooses to use it, the firm isn’t using (natural) scarcity power to raise prices. It’s using monopoly power to raise prices.

If, thanks to the abnormal market disruption, the firm is able to raise prices higher than the firm otherwise might, then that extra increment is price gouging due to (natural) scarcity power. But any price increase that the firm would be able to bring about without the aid of the market disruption is due to an artificial restriction in supply and remains an exercise of monopoly power.

So it makes little sense to say that firms with monopoly power pose a “special risk” during periods of market disruption because they can use their monopoly power “in conjunction” with their scarcity power to raise prices. Firms with monopoly power pose the same risk that all firms pose during periods of disruption: the risk that they will use the additional power conferred on them by disruption-triggered scarcity further to raise prices.

If we worry that (natural) scarcity is going to tempt a monopolist to raise prices we should be equally worried that it will tempt a non-monopolist to raise prices: (natural) scarcity gives both firms the exact same kind of power—the power to exploit scarcity to raise prices.

Non-Monopoly Price Gougers Probably Do More Harm

Indeed, one would expect the harm that a firm lacking monopoly power can do by exploiting scarcity generally to be greater than the harm a monopolist can do by exploiting (natural) scarcity. The reason is that, before the disruption, the monopolist will already have artificially restricted output to try to raise prices to the most profitable extent.

If a monopolist has already artificially restricted supply to the most profitable extent, any additional involuntary restriction caused by the disruption may be unprofitable for the monopolist and the monopolist may, therefore, choose not to exploit it by raising prices further.

As some have long suggested, the first increase in price above costs is always the most harmful to consumers, precisely because when price equals cost, output is at a maximum and consumers reap the greatest benefit from production. They therefore have the most to lose. Subsequent price increases play out over progressively lower sales volumes, inflicting smaller and smaller amounts of harm.
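The logic can be sketched with a toy linear demand curve. The numbers here (demand Q = 10 − P, a unit cost of $2) are entirely hypothetical, chosen only to show that each successive dollar of price increase destroys less consumer surplus than the one before:

```python
# Toy linear demand: Q = 10 - P. Consumer surplus is the triangle between
# the demand curve and the price, which for this demand curve is 0.5 * Q^2.
def consumer_surplus(price, choke_price=10.0):
    quantity = max(choke_price - price, 0.0)
    return 0.5 * quantity * quantity

cost = 2.0  # hypothetical unit cost; at price == cost, output and surplus are maximal
surplus_at = [consumer_surplus(float(p)) for p in range(2, 7)]  # prices $2..$6
losses = [a - b for a, b in zip(surplus_at, surplus_at[1:])]
# losses per $1 increase: [7.5, 6.5, 5.5, 4.5] -- the first increase above
# cost is the most harmful; later ones play out over smaller sales volumes.
```

The per-dollar harm equals the quantity still being sold, which shrinks as price rises; that is the whole argument in one line of algebra.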

But what kind of firms are induced by an abnormal market disruption to make a first increase in price above costs?

Answer: non-monopolists.

Firms in hypercompetitive markets start out with prices at or near costs before an abnormal market disruption gives them power to price gouge.

Monopolists facing abnormal disruptions have already raised their prices above costs long ago, when they first acquired their monopoly position. To the extent that they increase prices due to a market disruption, that will be far from the first increase in their prices above costs.

Disruptions Operate at the Level of Markets, Not Individual Firms, So Price Gouging Is Not Worse In Concentrated Markets

The Attorney General seems to think that because a monopolist has a large market share relative to a non-monopolist, any price increase by the monopolist will tend to cause more harm because it will apply to a higher volume of sales. She writes that large firms “have an outsized role in price setting.”

This is the sort of mistake that comes from thinking in terms of firms instead of markets.

A market disruption does not enable price gouging by striking a single firm. If a single firm’s output is restricted but no restriction is placed on the market as a whole, other firms in the market will bring more inventory to market to offset the loss of the firm’s output and no firm will have the opportunity to raise prices.

Instead, a market disruption enables price gouging by striking the entire market. If the output of the market as a whole is restricted, then restrictions on the output of some firms won’t be made up by increased sales by other firms. As a result—and this is key—all firms in the market, and not just the firms that have suffered a restriction in output, will be able to raise prices.

That’s because the higher prices are a rationing mechanism: they allocate the restricted market supply to the consumers who have the highest willingness to pay for it.

If any firm in the market doesn’t raise prices, consumers will all try to buy from that one firm. But because there isn’t enough supply in the market to satisfy them all, that one firm won’t have enough to satisfy them all either. The firm will sell the same volume as the firm would have sold at the higher prices. But the firm will earn less profit. So the firm will prefer to just charge the higher prices.
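A toy calculation, with made-up inventories and prices, makes the holdout’s incentive concrete: when market-wide supply is short, the firm that keeps its price low sells exactly the same volume (its inventory) and simply collects less for it:

```python
# Hypothetical numbers: after a disruption, market supply is fixed, and each
# firm can sell at most its own inventory. Demand at the pre-disruption price
# far exceeds supply, so the firm sells out regardless of which price it posts.
inventory = 40          # the holdout firm's units on hand
normal_price = 5.0      # pre-disruption price
clearing_price = 9.0    # price that rations the shortage

units_sold_either_way = inventory                  # volume is capped by supply
revenue_if_holdout = inventory * normal_price      # 200.0
revenue_if_matching = inventory * clearing_price   # 360.0
# Holding the line on price changes revenue, not volume -- which is why, absent
# a price gouging rule, every firm in the market prefers the higher price.
```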

That’s why only market-level disruptions enable price gouging.

What this means is that a supply disruption that is concentrated in a large firm doesn’t affect more consumers than a supply disruption that hits smaller firms instead. Regardless of where the disruption is felt, all prices, charged by all firms in the market, rise—so long, that is, as the disruption is a market-level event in the sense that other participants in the market are unable instantaneously to make up for the reduction in the firm’s supply.

And, as I pointed out above, in markets with large numbers of small, hypercompetitive firms, those price increases are likely to be more harmful precisely because prices are likely to start out lower than in concentrated markets.

One must, therefore, scratch one’s head at the Attorney General’s further observation that “the profit maximizing choice for a smaller competitor in an industry with [a larger] seller will often be to match the larger company’s price,” as if that establishes that price gouging is more severe in markets that have larger competitors.

When industry supply is restricted, the profit maximizing choice for a smaller competitor will be to raise price to match other small competitors’ price increases, as well. All firms, regardless of size, will find it profit-maximizing to raise price in order to ration the industry’s limited output.

The point of a rule against price gouging is to prevent the market from using high prices to ration access to goods in short supply. The rule effectively requires the market to ration based on the principle of first-come-first-served instead.

Price gouging enforcers target only a small subset of firms in any given market for enforcement. But the goal of a rule against price gouging is not, ultimately, to regulate the conduct of individual firms but rather to get the market price down to cost. Enforcement against individual firms is meant to have a deterrent effect on the pricing behavior of all firms in the market.

While targeting the biggest firms for enforcement might send a stronger warning to the market than targeting a smaller firm, prosecutors do not need a regulation making it easier to bring cases against big firms in order to pursue such a strategy. Indeed, such a regulation makes it harder for them to bring cases in markets in which there are no big firms.

Does Plenty Really Make Firms More Likely to Collude?

The Attorney General’s theory seems to be that market disruptions enhance monopoly power, enabling a monopoly to leverage scarcity to increase prices in response to a market disruption to a greater extent than could a non-monopolist.

The Attorney General seems to have in mind that market disruptions facilitate collusion. “[I]t may be easier for big actors to coordinate price hikes during an inflationary period, even without direct communication,” she writes.

One would, of course, expect that firms in concentrated markets that are prone to tacit collusion would be able to raise prices after a market disruption. The disruption by definition reduces the amount of output in the industry in the short term, as discussed above.

That allows the firms in the market to raise prices. But such price increases are due to the increased scarcity of output, not to the collusion.

In order for the collusion to be responsible for the price increase, output would have to fall further. The firms would need to engage in collusion that enables them voluntarily to restrict supply above and beyond both the involuntary restrictions created by the market disruption and any voluntary restrictions that the firms were capable of imposing absent the disruption.

Presumably the argument is that the impetus to raise prices independently, created by the supply disruption, puts firms in the frame of mind required for them further to restrict supply and raise prices in tacit collusion with other firms.

That’s a pretty slim psychological reed upon which to hang a theory of harm. And one can easily imagine alternative psychologies.

Plenty tends to make us self-involved and egomaniacal. Hardship, if not too great, makes us generous and cooperative. It would seem to follow that the profit opportunities created by a market disruption should undermine cooperation between firms, rather than promote it.

I don’t know if this story is any more likely to be true than the one that the Attorney General seems to favor. The point is that psychological arguments of this sort do not provide a strong basis for carving out special treatment for large firms under a price gouging rule.

More Confusion of Scarcity with Monopoly

The only other argument the Attorney General makes for special treatment reprises the Attorney General’s confusion of scarcity and monopoly power.

The Attorney General argues that

the risk of firms taking advantage of an abnormal disruption may be greater where certain market characteristics reduce the likelihood of new entry—for example, where supply chains are disrupted or key inputs are scarce or where high concentration makes investment less attractive in a particular market. . . . Incumbents are insulated from the credible threat of new competition to discipline prices during abnormal market disruptions.

The Attorney General seems not to understand what a “disruption” is. It is, well, a disruption. Supply is destroyed. Or, equivalently, it is insufficient to meet a surging demand. By definition, there can be no entry. If there were entry by other firms into the market, then supply would not be insufficient anymore!

It follows that the extent to which the market was already protected against entry before the disruption—thanks to the deterrent effect of high concentration—is irrelevant.

If such a deterrent existed before the disruption, and firms took advantage of it, then output would already have been artificially restricted in advance of the disruption. The disruption may destroy additional supply, and firms may raise prices in response, but that destruction won’t be due to market concentration but instead to the disruption.

To be sure, if the market were less concentrated and there were no concomitant entry deterrent, then prices in the market would be lower over the period of the disruption. And, moreover, the extent of the price increase created by the disruption might be different—either greater or lesser depending on the shape of the demand curve.

But that increase would still be attributable to scarcity and not to monopoly. And the ability of firms to enter the market to eliminate the scarcity would be controlled by the nature of the disruption and not any deterrent power wielded by incumbent firms.

The disruption destroys production that already existed notwithstanding the incumbents’ monopoly power. It follows that this output could not otherwise have been precluded through incumbent firms’ deterrent power—otherwise it would not have been there to be destroyed by the disruption.

Anyway, Small Amounts of Harm Are Small Amounts of Harm, Whether the Perpetrator Could Do More Harm or Not

But suppose the Attorney General were right that monopolists cause more harm through price gouging. Would it make sense to treat any price increase by a monopolist as presumptively unlawful but only increases by non-monopolists in excess of 10% as presumptively unlawful?

Of course not.

That’s like saying that it should be battery if a semi bends your fender but it should not be battery if a Prius bends your fender.

Harm is harm whether it’s inflicted by someone who could have done you a lot more harm or by someone who could only have done you a little more harm. A 5% increase in price above cost is a harm to consumers, whether that 5% markup is charged by a firm that could have, under some circumstances, charged you $100 more or only a dollar more.

A Lesson in the Perils of Antimonopolism

Antimonopoly framing may appeal to progressives because they are pushing back against two generations of market fetishism in economics. The framing lets progressives assert that markets aren’t free without having to go to the trouble of rejecting markets in the abstract.

That might feel like a powerful move.

First, it’s true: there’s a lot of monopolization in the economy.

Second, it means progressives don’t need to get into theoretical battles about the virtue of markets in the abstract.

But because antimonopolism sidesteps the theoretical problem of the market, it’s a compromise, not a power play. And a bad play at that.

In order to score points on means, antimonopolists concede ends. To rally support for government intervention in business they concede that the end of intervention should be (truly) free markets.

But progressives have known for more than a century that the free market is the problem, not just in practice but in its abstract, idealized form. There’s no guarantee that really, truly, perfectly competitive markets will distribute wealth fairly. Instead, they arbitrarily distribute wealth to those who happen to own relatively productive resources or who happen to place a relatively high value on what they consume.

As David Ricardo pointed out, if you happen to own land having relatively good soil, you will earn a profit, because the price of agricultural produce needs to be high enough to cover the higher cost of tilling less fertile land. Your costs—including any reward needed to induce you to make your land more fertile—are lower, so you will generate revenues in excess of costs. That excess isn’t necessary to keep you in the market or to fertilize your soil. It’s a pure distribution of wealth based on the arbitrary fact that someone else in the market doesn’t have costs as low as your own.
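Ricardo’s point reduces to a few lines of arithmetic. The numbers are invented for illustration: suppose the marginal (least fertile) plot grows a bushel for $5 while yours grows one for $2. The market price must cover the marginal plot’s cost, so you pocket the difference:

```python
# Ricardian rent with invented per-bushel costs. The price settles at the cost
# of the marginal (highest-cost) plot that the market still needs in production.
cost_per_bushel = {"fertile plot": 2.0, "marginal plot": 5.0}
price = max(cost_per_bushel.values())  # 5.0: just covers the marginal producer

rent = {plot: price - cost for plot, cost in cost_per_bushel.items()}
# rent: {"fertile plot": 3.0, "marginal plot": 0.0}
# The fertile owner's $3 per bushel is a pure distribution of wealth: it is
# not needed to keep the plot in production or to improve its soil.
```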

Indeed, as Thomas Piketty has pointed out, the source of the explosion of inequality in recent decades has nothing to do with “market imperfection[s]” like monopolization. It has to do with markets.

There’s no way to divorce the gains progressives make on the means from the losses they suffer on the ends. If you succeed at convincing Americans that every market is monopolized, then Americans’ response is going to be: deconcentrate markets.

It’s not going to be to use every means available, including tax and transfer and price regulation, to redistribute wealth.

But, more importantly in the context of the New York price gouging law, the habit of proving market concentration in order to appease conservative priors regarding the benefits of markets can take on a life of its own.

It makes progressives forget that market concentration is far from the only source of inequality. And they end up casting aside or hamstringing policies aimed at those other sources.

That’s what may have happened here.

Categories
Antitrust Monopolization Philoeconomica

The Twice-Anti Monopoly Progressive

Keynes was no antimonopolist.

One of the most interesting and unnoticed developments of recent decades has been the tendency of big enterprise to socialise itself. A point arrives in the growth of a big institution – particularly a big railway or big public utility enterprise, but also a big bank or a big insurance company – at which the owners of the capital, i.e. its shareholders, are almost entirely dissociated from the management, with the result that the direct personal interest of the latter in the making of great profit becomes quite secondary. When this stage is reached, the general stability and reputation of the institution are the more considered by the management than the maximum of profit for the shareholders. The shareholders must be satisfied by conventionally adequate dividends; but once this is secured, the direct interest of the management often consists in avoiding criticism from the public and from the customers of the concern. This is particularly the case if their great size or semi-monopolistic position renders them conspicuous in the public eye and vulnerable to public attack. The extreme instance, perhaps, of this tendency in the case of an institution, theoretically the unrestricted property of private persons, is the Bank of England. It is almost true to say that there is no class of persons in the kingdom of whom the Governor of the Bank of England thinks less when he decides on his policy than of his shareholders. Their rights, in excess of their conventional dividend, have already sunk to the neighbourhood of zero. But the same thing is partly true of many other big institutions. They are, as time goes on, socialising themselves.

John Maynard Keynes, The end of laissez-faire (1926).

In Robert Skidelsky’s great three-volume intellectual biography of Keynes, there is but a single reference to antitrust—an entreaty by Felix Frankfurter that Keynes should lend some support to the antitrust project.

Keynes opposed the early New Deal’s state-sponsored cartels because they restricted output when the economy required more investment. But, like many in the early 20th century, Keynes viewed monopoly as an inevitable and possibly salutary adjunct to industrial progress.

Indeed, Skidelsky suggests that Keynes found debates over market structure—including self-righteous antimonopolism—dumb.

Writes Skidelsky:

Keynes used to come away from Manchester with feelings of ‘intense pessimism’, provoked by the short-sighted individualism of the Capulets and Montagues, . . . the sermonising of those who wanted to put the industry through the wringer, the ingrained dislike of any suggestion of monopoly.

Robert Skidelsky, John Maynard Keynes: The Economist as Saviour, 1920-1937 262-63 (1995).

(This post originally appeared as a Twitter thread.)

Categories
Antitrust Monopolization Regulation Tax

Wealth and Happiness

In a new paper, Glick, Lozada, and Bush have done both antimonopolism and the antitrust academy a service by making the first real attempt to put the movement in direct conversation with contemporary antitrust method.

GLB have a simple message: welfare economics long ago stopped using willingness to pay to measure consumer welfare, and antitrust should too.

What is more, welfare economics today pursues an eclectic set of approaches to measuring welfare. Some of them suggest that the dispersal of economic power and the availability of small businesses can make people happy.

It follows, argue GLB, that it is entirely consistent with contemporary welfare economics to take these things into account in evaluating mergers or prosecuting monopolies.

The Social Welfare Function

GLB start with the problem that welfare economists faced at the beginning of the 20th century: how to compare the value that different people—say a producer and a consumer—obtain from a transaction in the absence of some universal measure of value.

If the producer gets a profit of $2 and the consumer pays $5 for a bag of apples, did the transaction confer the same amount of value on the two? Are $2 worth the same to the producer as a-bag-of-apples-for-$5 is worth to the consumer?

If there were some universal measure of happiness—denominated in, say, “utils”—then we could answer that question.

We would look up the consumer’s change in pleasure associated with swapping $5 for apples and compare it to the producer’s change in pleasure associated with making a $2 profit. If the former were 50 utils and the latter 30 utils, then we could say that the transaction did not confer the same benefit on both parties.

Pareto

Economists eventually decided that they would not be able to find a universal metric of happiness. But they hoped that they might be able to glean some information about happiness from the behavior of economic actors.

The first approach that they hit upon was the Pareto criterion. It said: the only bad transactions are those into which the parties do not enter voluntarily, because those must make at least one party worse off (the party who would not voluntarily enter into the transaction).

Any transaction the parties do enter into voluntarily is, in contrast, good, because they wouldn’t be willing to enter into it unless the transaction made neither of them worse off.

It followed that voluntary transactions could be treated as welfare improving—or at least not welfare reducing. The parties were signalling, through their willingness to enter into them, that the transactions were at least not undesirable.

If the producer and consumer voluntarily transact in apples at $5, then welfare could be said not to have been reduced and indeed potentially to have increased. That was the Pareto criterion.

It helped welfare economics a bit. But it also failed to answer an important question: what about people who are affected by a transaction but who are not entering into it themselves?

If, for example, two producers merge, and, as a result of the merger, they are able to charge a higher price, consumers are affected. But consumers have no choice over whether the merger takes place.

The Pareto criterion tells us that the merger does not make the merging parties worse off. But it tells us nothing about whether the merger makes consumers worse off.

Some way of comparing the costs of the transaction to consumers with the benefits to the merging producers is needed, but the Pareto criterion cannot provide it.

Willingness to Pay and Potential Pareto

The solution proposed by some economists in the early 20th century was to use willingness to pay as a measure of happiness.

The idea was that if a consumer would be willing to pay $10 for an apple, then that would be a measure of the pleasure the consumer would get from consuming the apple. By noting that a person should be willing to pay cash for cash on a dollar-for-dollar basis, one could proceed to do with dollars what economists had originally hoped to do with utils.

To return to our example of an apple purchased for $5, if the consumer were in fact willing to pay $10 for the apple, then the value to the consumer of the transaction would be the $10 the consumer would be willing to pay less the $5 price that the consumer actually paid for it.

And the value of the transaction to the producer would be the producer’s $2 profit. It would then follow that the consumer did better than the producer in the transaction because the consumer generated a “surplus” of $5 whereas the producer generated a profit (“producer’s surplus”) of only $2.

This willingness-to-pay approach made it possible to evaluate a merger of producers.

If producers were to merge and drive the price up to $7, then the producers (who, if their costs are as before, would now make a $4 profit) would end up better off than the consumers (who would now enjoy a surplus of $10 less $7, or $3). The merger would reduce the welfare of the consumer by $2.

If antitrust were to adhere to a consumer welfare standard—the rule that mergers that reduce consumer surplus are to be rejected—then this merger would fail the test.

As GLB note, the willingness to pay concept made it possible to consider tradeoffs as well.

The merger might, for example, also reduce the costs of production of the merged firms from $3 to $0.50, thereby increasing the merging firms’ profits on the transaction from $4 to $6.50.

If one were to view the goal of the antitrust laws as the maximization of total welfare—meaning the maximization of the combined surplus of producers and consumers, however that surplus may be distributed between them—this cost reduction would justify the merger. It would expand the sum of producer and consumer surplus from $7 ($2 for the producers and $5 for the consumer) to $9.50 ($6.50 for the producers and $3 for the consumer).

Moreover, the merger might even be said to satisfy the consumer welfare standard if one were to adhere to the peculiar sophistry that any increase in total welfare should count as an increase in consumer welfare because the increase in total welfare could be redistributed to consumers.

Because the merged producers could be forced to pay the $2.50 increase in total welfare to the consumer, leaving the consumer with $5.50, which is more than the $5 he would have without the merger, the deal could, according to this peculiar sophistry, be classified as consumer welfare enhancing.

At least in potential. And if such a transfer were made, then the consumer and the producers alike would welcome the deal (the producers would be left with $4, which is more than the $2 in profit earned without the deal). Hence GLB refer to this as the “potential Pareto criterion”. It is also called the Kaldor-Hicks efficiency criterion.
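The surplus arithmetic in this example can be checked in a few lines of Python. The figures are the text’s illustrative numbers ($10 willingness to pay; a $5 price and $3 cost before the merger; a $7 price and $0.50 cost after), not real data:

```python
# Consumer surplus is willingness to pay less price; producer surplus
# (profit) is price less cost. Numbers are the text's illustrations.
def surpluses(wtp, price, cost):
    consumer = wtp - price
    producer = price - cost
    return consumer, producer, consumer + producer

# Before the merger: price $5, unit cost $3.
pre_cs, pre_ps, pre_total = surpluses(10, 5, 3)        # (5, 2, 7)

# After the merger: price rises to $7, cost falls to $0.50.
post_cs, post_ps, post_total = surpluses(10, 7, 0.50)  # (3, 6.5, 9.5)

# Total surplus rises by $2.50, so the potential Pareto (Kaldor-Hicks)
# criterion is satisfied: the winners could compensate the loser.
gain = post_total - pre_total                          # 2.5

# If the producers actually hand the $2.50 over to the consumer:
consumer_after_transfer = post_cs + gain               # 5.5, up from 5
producer_after_transfer = post_ps - gain               # 4.0, up from 2
```

If the transfer is never made, of course, the consumer is simply left with $3 of surplus instead of $5, which is the nub of the sophistry the text describes.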

Wealth Effects

Economists should have realized from the start, and indeed did, that willingness to pay was a doomed approach, because a person’s willingness to pay changes with his budget.

Between People

Two people who would be willing to pay the same amount for an apple if they had the same wealth would likely be willing to pay vastly different amounts if one were poor and the other rich. The rich person might be willing to pay much more for the apple than would a cash-strapped poor person.

One can avoid this problem by supposing that the poor man is willing to pay less for an apple because he in fact would derive less pleasure from it. He might have to deny his child meat in order to be able to afford the apple, and that might ruin his meal.

But viewing actual pleasure as perfectly consonant with willingness to pay amounts to shoehorning subjective feelings into budget constraints.

It is just as likely that the poor man who did make such a substitution would feel a great deal of guilty pleasure. His rational faculties might enable him to forego that pleasure and give his child meat. But that does not mean that his pleasure centers would not be the worse for it. They would be.

If wealth effects matter, however, then one cannot compare producer and consumer surpluses—or indeed the surpluses generated by any two people.

One cannot say, for example, that a merger that decreases cost by $2.50 is on net a good thing if it results in a price increase of only $2, on the theory that $2.50 is more than $2 and so the total amount of pleasure generated by the economy has gone up. For if the producers are rich but the consumer poor, then the $2 cost to the consumer might inflict more pain on him than the pleasure the producers take in their $2.50 increase in profits.

Redistribution of those $2.50 in benefits to the consumer is now required for efficiency and not just to achieve distributive justice. If efficiency is about increasing the total amount of happiness generated by the economy, and those $2.50 make the consumer happier than the producers, then efficiency requires that the $2.50 go to the consumer.

If the only implication of wealth effects were that redistribution from rich to poor is required for efficiency, then wealth effects would not be particularly problematic for progressives.

But very often a policy change not only creates a benefit and raises a price, as in our merger example, but also inflicts an economic cost in the sense of precluding some production—or aspect thereof—that consumers value.

The merger might, for example, not only reduce apple production costs by $2.50 but also lead to slightly less tasty apples. Perhaps the merger saves on costs by enabling the sale of an orchard that produced particularly tasty apples but was also relatively costly to maintain.

If the consumer’s maximum willingness to pay falls by $2 because the apple is less tasty, then the willingness to pay measure suggests that the merger should go ahead. The benefits in terms of a reduction in costs of $2.50 exceed the costs in terms of a reduction in the value of the apple to consumers of $2. There is a net gain of $0.50.

To be sure, if the price again rises to $7 as a result of the merger, consumers find themselves even worse off than before. Their surplus falls to $1 (a maximum willingness to pay of $8 less a price of $7).

But the merging producers can, at least in theory, make up for this by transferring $2 to the consumer to offset the price increase and by transferring at least $2 of the cost reduction they enjoy as well, ensuring that the consumer ends up with at least the $5.00 in surplus the consumer would have enjoyed without the deal.

And the producers, who initially enjoyed an increase in profits of $4.50 ($2.50 in cost reductions plus $2.00 from the increase in price), end up better off so long as they do not pay more than $4.50 to the consumer.

So all parties can, in theory, end up better off.

That’s because the benefits created by the merger exceed the costs by $0.50. Once one uses transfers to correct for the resulting price increase and to compensate the consumer for his loss, which is smaller than the producers’ gains, there is necessarily some net gain left over that producers and consumer can divide up, leaving them all better off.

The potential Pareto criterion is satisfied and, if the transfers are actually made, so is the consumer welfare standard.
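The tradeoff in this second example can be verified the same way, again using the text’s illustrative numbers (willingness to pay falls from $10 to $8, cost falls from $3 to $0.50, and price rises from $5 to $7):

```python
# Before the merger: price $5, cost $3, willingness to pay $10.
pre_cs = 10 - 5          # 5, consumer surplus
pre_ps = 5 - 3           # 2, producer profit

# After the merger: price $7, cost $0.50, and the less tasty apple
# is worth only $8 to the consumer.
post_cs = 8 - 7          # 1
post_ps = 7 - 0.50       # 6.5

consumer_loss = pre_cs - post_cs   # 4
producer_gain = post_ps - pre_ps   # 4.5

# The producers gain $0.50 more than the consumer loses, so any transfer
# between $4.00 and $4.50 leaves both sides better off than before.
net_gain = producer_gain - consumer_loss   # 0.5
```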

If wealth effects matter, however, then one cannot reliably compare the $2.50 benefit in terms of production cost savings to the $2 loss associated with the reduced tastiness of the apple. If the consumer is poor, then the consumer may place a dollar value on the reduction in tastiness of the apple that is far below the actual loss of pleasure the consumer would suffer in consuming a less tasty apple.

If there were utils and we could compare the value of the production cost savings to the producers to the reduction in the consumer’s happiness associated with the less tasty apple, we might find that the producers’ gain is 100 utils and the consumer’s loss is 1000 utils, resulting in a net reduction in happiness due to the merger.

Wealth effects prevent the consumer from registering his dissatisfaction in terms of willingness to pay, however, and so the merger appears to offer a net gain when in fact it does not.

It follows that the producers will never be able fully to compensate the consumer for the loss without incurring a loss themselves, and so according to the potential Pareto criterion the merger should be blocked.

If we nevertheless treat willingness to pay as a measure of welfare, however, the deal will appear to be welfare increasing and will go through, reducing overall happiness.

Wealth effects cause willingness to pay to lead to bad policymaking.

Within People

Wealth effects also undermine the commensurability of values with respect to the same person.

To see why, let’s go back to the example in which the merger raises prices but doesn’t reduce the tastiness of apples.

If unwinding the merger would reduce the price of an apple from $7 to $5, it is clear that the consumer becomes $2 richer. He saves $2, which he can now spend on other things.

In order for willingness to pay to be a useful proxy for welfare, one would, then, like to be able to say that the consumer is made just as well off by the price reduction as he would have been had he been given $2 in cash in lieu of the price increase.

But if willingness to pay depends on wealth, we cannot say that a $2 cash payment would leave the consumer in the same position as the consumer would be had price fallen by $2.

If a consumer cares more for apples the richer that he is, then the consumer will prefer a $2 cut in the price of apples to a $2 cash payment. Given his stronger preference for apples, the consumer might want to plow the $2 savings on apples into buying more apples, and that money would buy more apples at the lower apple price than would a $2 cash payment used to purchase more apples at the higher price.

It follows that the consumer would require a cash payment in excess of $2 in order to be made as happy as he would be if the price of apples were reduced by $2.

Similarly, we might ask whether taxing away $2 from the consumer when prices are low would leave the consumer just as happy as the consumer would be were he to experience a $2 price increase.

Again the answer would be “no.”

When the price of apples increases, it is clear that the consumer becomes poorer; his wealth buys him less. If the consumer’s taste for apples decreases with poverty, however, then the consumer will prefer a $2 increase in the price of apples to having $2 of cash taxed away from him.

Because he prefers other things to apples as he becomes poorer, the consumer will place a higher value on cash, which he can use to buy things other than apples, than he places on low apple prices.

But if a tax of $2 makes him less happy than he would be under an increase in the price of apples of $2, then a tax of less than $2 is equivalent, from his perspective, to an increase in the price of apples of $2.

So, overall, we have the peculiar result that a $2 price reduction is equivalent to a cash payment of more than $2 but a $2 price increase is equivalent to a cash reduction of less than $2.

Commensurability would, of course, require that all these things be equal.
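The asymmetry can be made concrete with a toy example of my own (not GLB’s): Cobb-Douglas preferences, under which wealth effects are present, with an assumed budget of $100 and the apple price moving between the text’s $5 and $7:

```python
import math

# Cobb-Douglas utility u = sqrt(apples) * sqrt(other spending) yields
# the indirect utility function v(p, w) = w / (2 * sqrt(p)): the
# well-being a consumer with budget w attains when apples cost p.
def v(p, w):
    return w / (2 * math.sqrt(p))

w = 100.0  # assumed budget

# Cash payment, at the old price of $7, equivalent to a price cut to $5:
# solve v(7, w + pay) = v(5, w) for pay.
pay = w * math.sqrt(7 / 5) - w   # about $18.32

# Cash tax, at the low price of $5, equivalent to a price rise to $7:
# solve v(5, w - tax) = v(7, w) for tax.
tax = w - w * math.sqrt(5 / 7)   # about $15.48

# pay exceeds tax: the cash equivalent of the price cut is larger than
# the cash equivalent of the same-sized price rise.
```

The exact figures depend on the assumed preferences and budget; the point is only that the two cash equivalents of the same $2 price change differ, which is the incommensurability the text describes.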

And so we see that wealth effects not only prevent us from saying that a $2 gain to the producers creates the same amount of pleasure as a $2 gain to a consumer, but also that a $2 gain to the consumer via a price reduction creates the same amount of pleasure as a $2 cash payment. And the same can be said of losses.

GLB don’t acknowledge that between- and within-person incommensurability both stem from the same problem of wealth effects. But they do a good job of discussing both.

They also spend considerable time refuting the arguments of mainstream economists that within-person incommensurability is small and can be ignored.

But even if it were small, and indeed, even if wealth effects were not a problem for commensurability between persons either, willingness to pay would remain a highly problematic measure of value.

There is no basis for supposing that, just because two people having the same wealth level are willing to pay the same amount for a particular good, they will get the same level of pleasure from it.

Indeed, it is possible that two people who place the same relative values on all goods, and so are willing to pay the exact same amount for each good, might experience very different levels of pleasure from consuming them.

One person might take almost no pleasure from any good. Another might be sent into fits of ecstasy by the smallest purchase.

So long as the relative pleasure conferred by each good vis-à-vis the other goods is the same for both people, each will be willing to pay the same amount for each good. They will divide their budgets between goods in exactly the same way despite deriving very different levels of pleasure from them.

The Return to the Social Welfare Function

As GLB relate, welfare economists responded to these limitations by giving up on what might be called the overall “revealed value” approach to measuring welfare embodied in the Pareto criterion and the potential Pareto (i.e., willingness-to-pay-based) criterion.

These criteria took a common revealed value approach because they both tried to read value from the actions of economic agents.

Whether a transaction satisfied the Pareto criterion could be determined by checking to see whether the parties entered into it voluntarily. If they did, then it followed that neither party was made worse off.

And if a consumer purchased an apple at $10 but not at $11, one could infer that the maximum the consumer was willing to pay for apples was $10 and use that number to determine how much the consumer would need to be compensated, pursuant to the potential Pareto criterion, for the loss of an apple.

Under both approaches, economic agents were assumed to reveal the pleasure they take in goods via their actions, enabling economists to identify changes in welfare associated with various policies without needing direct access to the pleasure centers in consumers’ brains in order to make those determinations.

With the demise of willingness to pay, welfare economists would no longer try to find a way to read the pleasure and pain of consumers through their economic behavior.

Instead, they would return to the direct approach that they had abandoned more than fifty years before; they would try to measure happiness directly.

They took a variety of approaches to this problem. They would ask people whether they were happy in various situations; they would study health indicators such as longevity and freedom from disease in various situations; they would consult psychologists and neurologists.

Based on the results of these inquiries, they would identify the material circumstances most likely to be conducive to happiness and recommend economic policies (such as antitrust cases) that produce those circumstances.

Medical inquiry might determine, for example, that spinach is good for consumers. Welfare economists would then respond by ranking policy choices that lead to more spinach consumption higher than those that lead to less.

This was a departure from the willingness to pay approach, according to which welfare economists would have given spinach consumption the ranking implied by the dollar value that consumers revealed themselves to be willing to pay for spinach relative to what they would pay for other things.

Now other branches of science, and not revealed preference, determined the ranking.

This takes us up to the present state of welfare economics.

And for GLB, this completes the argument for taking political power and small businesses into account in doing antitrust.

According to GLB, one can no longer argue that, because consumers are manifestly willing to pay high prices charged by dominant firms, consumers like big firms and like the influence they have over politics.

Consumers’ willingness to pay is no reliable measure of the pleasure they get from buying the products of politically influential, small-business-destroying monopolists.

Instead, as already mentioned, GLB point to studies that suggest that consumers are happier in democratic environments free of concentrations of economic power. And that consumers are happier when they have access to small businesses.

It follows, argue GLB, that it is perfectly reasonable, per current practice in welfare economics, to argue that mergers that increase consumer surplus in the willingness-to-pay sense nevertheless make consumers unhappy, and should therefore be targeted for antitrust enforcement.

The Willingness to Pay Measure Is about Choice, Not Happiness

GLB’s paper presents a powerful rejoinder to any antitruster who might have been under the misapprehension that willingness to pay is a good measure of happiness. There are surely some out there.

But I suspect that the paper will not win too many converts, because what attracts people to willingness to pay is not that it is a good measure of happiness, but instead that it is the best way of doing justice to consumer choice that we have.

Welfare economics embodies a tension felt throughout the modern human rights project regarding who decides what happiness means.

Do we study human beings as if they were complex robots, figure out what makes these machines happiest, and impose those conditions on them? Or do we let the machines decide what makes them happiest?

GLB tell the story of welfare economics as if the field has always been interested only in the first option: to figure out what makes people happy and then impose those conditions upon their economic lives.

Under this assumption, GLB’s conclusion follows immediately from the arc of welfare economics. Willingness to pay is not a good measure. Others must be found.

But, as GLB acknowledge, economists have known almost from the inception of the willingness to pay approach in the 1940s that it was unsound. Why hasn’t the field moved on?

GLB chalk it up to “zombie economics.”

The real reason is that many people want to preserve a space in which consumers can vote for what they want through their purchase decisions.

That is, these people don’t view economics as a descriptive science but rather as a democratic project. It is the project of empowering consumers to vote on the character and magnitude of production through their purchase decisions.

The willingness to pay measure is ultimately built upon such a foundation, because willingness to pay is measured by observing the prices at which consumers do and do not buy.

The measure is highly imperfect, even incoherent, but it is the only way economics knows to recommend policy changes that account for the votes consumers have cast in markets. It honors their choices.

Happiness surveys, public health information, and the like are based on consumer input, but they are not based on purchase decisions—they are not based on circumstances in which consumers are forced to put their money where their mouths are.

Of course, the question whether consumers should take direction from experts regarding what to buy, or make those choices themselves, has already been resolved in favor of consumer choice.

Neither GLB nor anyone else will be able to impose purchases on consumers unless consumers vote to elect political leaders who take the GLB approach.

If antitrust enforcers decide to follow GLB’s paper, but consumers don’t like it, consumers can always vote political leaders into office who will sack those enforcers or give them new legislative commands to follow.

The premise of the economic project of enabling consumers to vote through their purchase decisions is, however, that the electoral process is defective.

The assumption is that, at least with respect to industrial production, consumers are better able to choose by voting through purchase decisions than by voting for elected representatives to direct production.

That is the subject of public choice theory. It is the view that, at least with respect to some matters, markets are more democratic than democracy.

People who hold this view won’t be swayed by GLB. In their view, markets are most likely to maximize happiness if they are structured to read it in consumers’ purchase decisions, not if they are structured by consumers’ elected representatives to achieve happiness according to any other measure.

Ultimately, the battle in antitrust over the consumer welfare standard is, like all battles over regulation, a battle over the legitimacy of the electoral process.

And yet progressives have spent remarkably little time contesting the public choice view of the electoral process and government regulation as inherently vulnerable to capture.

I suspect that is in part because many progressives share the public choice intuition.

Indeed, distrust of government seems to be one of the major reasons for which some progressives have focused in recent years on strengthening antitrust instead of pursuing the projects that earlier generations of progressives thought were more likely to be effective, such as price regulation and taxation.

Even an antitrust that imposes an external standard of happiness on markets instead of trying to read a standard from consumer purchase decisions pays a certain amount of respect to those purchase decisions. It is oriented toward preserving markets and empowering consumer choice within them.

In contrast, taxation and price regulation are relatively indifferent to those goals. They represent a pure privileging of choice via the electoral process over choice via markets.

And to many people from both left and right operating in an essentially anti-statist culture, that’s scary.

The irony, then, may be that the worldview required to overturn the consumer welfare standard in antitrust is undermined by progressives’ own attraction to antitrust as a vehicle for progressive change.

Categories
Antitrust Monopolization World

Does It or Doesn’t It?

An important part of the Chicago Revolution in antitrust was the argument that no monopoly is forever. Eventually, someone will innovate and offer a superior product that the monopolist cannot match. And, just like that, the monopolist will be history.

Microsoft’s lock on operating systems looked assured in 1998 when the Justice Department tried to break the company up. But that remedy was never ultimately imposed. And in the end it didn’t matter. For, less than ten years later, smartphones arrived, and now most people do most of their computing on operating systems not made by Microsoft.

It seems to follow that antitrust action is a waste of time.

So interesting, then, to hear all the talk of late about how, despite its best efforts, China won’t be able to catch up with the West in chip production.

Not for decades.

Maybe never.

We are told that chip production relies upon an entire ecosystem of designers and suppliers. That experience matters. And so on.

But if that’s right, then the view that no monopoly is forever must be wrong—or at least not absolutely true in all cases. If the Western chip fabs have a near-permanent lock on the market, then it can’t be the case that we can always rely on markets to erode monopoly power. It can’t both be true that China can never catch up with the West on chips and that no position of market dominance is forever.

So which is it?

I suspect that those who think China can never catch up are wrong.

It may well be the case that the learning curve on chip production is such that a latecomer will never be able to catch up with a first mover absent technology transfer. But the argument about the impermanence of monopoly power has never been that newcomers will one day master the incumbent’s technology. It has always been that newcomers will one day introduce a completely different technology that carries out the same tasks as the old technology, only ten times better.

To this day, Microsoft continues to dominate the market for PC operating systems. What eroded Microsoft’s power was the introduction of a different technology—smartphones—that required a different kind of operating system. Microsoft didn’t start out with a lead in mobile operating systems, and, in the event, Microsoft lost the race.

So the question whether China can overcome her lack of cutting-edge chip supply and find a way to go head to head with the West as computing revolutionizes everything from military equipment to passenger vehicles is really the question whether China can come up with different technologies that do computing better—not just more semiconductors.

I don’t know the answer to that question. But it is perhaps useful to note that while China is not a leader in the design and production of conventional chips, China is a leader in quantum computing—which promises vastly greater processing speeds—and in artificial intelligence.

Indeed, it is worth asking whether TikTok’s success at challenging both Google’s dominance in search and Facebook’s dominance in social media doesn’t contain a lesson. At the same time that at least some Americans were quaking in their boots regarding these American tech giants’ size—and calling for antitrust enforcement—TikTok was quietly applying superior artificial intelligence to revolutionize the core functionality of both companies. TikTok is a Chinese company.

The view that technological advance always ultimately erodes dominant positions is perhaps most closely associated with Joseph Schumpeter, who called this process “creative destruction.”

The question, then, is whether the West should worry that creative destruction will erode its dominant positions.

If the Chicago School of antitrust is right, the answer is “yes.”

Categories
World

The Struggle with Russia

The West has careened from fear to confidence in Ukraine.

The fear may well return.

On February 25, 2022, the day after the present Russian invasion of Ukraine commenced, the West feared Russia.

The West feared escalation.

The West feared nuclear war.

And so the West would commit only to a very limited supply of arms to Ukraine. Antitank small arms. But no tanks. No artillery. No big missiles.

Then Russia’s conventional forces struggled in the field. Russia withdrew her forces from the north and gave up on the reduction of Kyiv.

Now the West’s confidence grew. Russia appeared to be a paper tiger. And now the arms poured in. Artillery. Missile systems. Tanks.

But it was unclear why Russia’s poor showing as a conventional military so stoked Western confidence, because the threat to the West had always been a nuclear threat.

On February 25, 2022, no one thought that Russia might retaliate against the Western arming of Ukraine by undertaking a conventional invasion of, say, Britain.

What the West feared was that Russia might respond by using battlefield nuclear weapons according to her doctrine of escalating to de-escalate. She might drop a nuclear weapon on Kyiv.

And then what?

For the West not to respond by dropping a nuke on Russian forces would be to condone Russia’s use of the weapons. But to respond in that way might draw a nuclear response from Russia.

There would be nuclear war.

The peculiar thing about the resurgence of Western confidence after Russia’s conventional forces struggled is that Russia’s poor performance with conventional arms didn’t change the nuclear calculus.

Unless one wished to infer from Russia’s difficulty hitting Ukrainian targets with her long range missiles, or her inability to achieve air superiority, that Russia was in fact incapable of using her nuclear weapons at all, there was no basis in Russia’s weakness in conventional arms to support the view that Russia was not a threat to the West after all.

Indeed, as we have been reminded by Russia’s renewed threats to use nuclear weapons in response to her loss of the city of Izium, her conventional military failures made her more likely to resort to nuclear war.

If the West feared nuclear war with Russia on February 25, 2022, the West ought to have feared nuclear war with Russia even more on April 2, 2022, once Russia had retreated from Kyiv. If the West thought it unwise to supply Ukraine with weapons on February 25, 2022, then the West should have thought it even less wise to supply Ukraine with weapons in April 2022.

Instead, the West sent more.

I do not mean to say that the West should not have sent more weapons. But I do mean to say that the West ought to have been aware, in sending them, that the West was increasing the risk of nuclear war. The arming of Ukraine ought to have been carried out with a sense of courage—with an awareness that great and increasing risks were being undertaken in the interest of winning a struggle with a nuclear adversary.

Instead, it was undertaken with a sense of relief and confidence. It was undertaken out of the irrational belief that Russia’s conventional military weakness meant that supplying the weapons would not be as dangerous as had at first appeared.

Which is a problem, because it suggests that the West will not be ready when, as is increasingly likely, Russia uses nuclear weapons in Ukraine.

For if Russia were to do that, the West would find itself right back in the scary world of February 25, 2022, a world in which the West’s first instinct was not courage but rather fear and paralysis.

If the West’s basis for standing up to Russia over the past six months has been the belief that Russia is no real threat, then when Russia demonstrates that she remains a threat—indeed an existential threat—the West’s basis for standing up to Russia will disappear.

If it seemed obvious to the Biden Administration in the lead-up to the invasion that drawing a red line and committing to arm Ukraine would be unwise, because it could lead to nuclear war, then it will seem equally obvious to the Biden Administration on the morning after a nuclear attack by Russia on Ukraine that responding in kind would be unwise, because it could lead to nuclear war.

But if the Biden Administration does not respond, then the West will lose.

The problem with the past six months of Western policy toward Russia is that it has been built on the fantasy that Russia’s inability to get tanks to Kyiv means that she is not a nuclear threat.

In fact she remains a grave nuclear threat, and the West has been fighting a proxy war against her. And not just any proxy war, but the supremely dangerous form in which only one side believes that there is a proxy involved. Russia does not think this is a proxy war. She thinks that Ukraine (unlike, say, Afghanistan) is hers.

When the West decided in April to arm Ukraine in earnest, the West decided to escalate a conflict between the West and a nuclear adversary. The West can only win such a nuclear struggle if the West knows that it is in fact a nuclear struggle.

And only if the West is willing to risk the West’s annihilation in order to win.

On February 25, 2022, the West was not prepared to take such a risk.

Is the West ready to take it now?

Categories
Regulation

Second Thoughts about Government as the Origin of Property

It has been a common progressive move for more than a century to argue that property rights are not sacrosanct because the government creates them. Government creates them and government can take them away. I heard this in seminars in law school. And I recall Elizabeth Warren making this argument on the campaign trail in 2020. And Morris Cohen said it in 1927.

But I do not understand why this argument has so much appeal to progressives, because it does nothing more than assert a contrary position to that taken by libertarians. It does not prove anything in favor of government intervention.

The question, after all, is how the government should use its power. Should the government use it to protect and expand property rights? Or should the government use it to dissolve and restrict property rights?

The libertarian asserts that the government should protect property rights because these are in some sense prior or fundamental.

And the progressive, in arguing that the government giveth and so can taketh away, asserts no more than that government action—regulation—is in some sense prior or fundamental.

There is no more substance than that to the government-giveth argument.

It resolves nothing, just as libertarians’ assertion that property rights are prior and fundamental resolves nothing. It merely asserts the opposite of what the libertarians assert.

The fact that government is needed to enforce property rights does not in itself imply that government need not enforce them. It does not imply that property rights are not in some sense prior or fundamental. It may well be that property rights guaranteed by government are prior and fundamental in relation to all things. And that government elimination of property rights is posterior. I do not personally believe this to be so, but I do not see why the fact that government is needed to guarantee property rights implies that property rights need not be fully and absolutely guaranteed.

And that is before we even get to social contractarian arguments that claim that government is no more than a mutual defense pact between prior possessors of things that come to be called property under the terms of the social contract.

Moreover, as a historical matter, it is not clear to me which side has the better of the argument. On the one hand, government enforcement does reduce outside interference with one’s possessions, and one cannot speak of property in the legal sense without assuming the existence of a legal authority committed at least in principle to recording and enforcing such rights.

On the other hand, it is the case that people living in places that have no central government possess things and enjoy them quietly for long periods of time, so long as they are individually powerful enough or in sufficient harmony with their neighbors to protect their possession of the things.

The only thing that progressives’ assertion that the government giveth really does is to demonstrate to progressives that there is an alternative to the view that property is fundamental—namely, that government regulation is fundamental.

But it does not answer the question how much to protect property and it does not even establish that the libertarian view is necessarily wrong.

Categories
Regulation

Democracy as the Heart of Debates about Regulation

There is a tendency among free marketers to say: “if markets are bringing it about, it is necessary.” And the big insight on the left for at least the past fifty years has been to say: “ah, but the market is shaped by the law, so if we pass a law preventing it from happening, then markets won’t bring it about after all.”

So, in the context of the influx of American digital nomads to Mexico City, and the opposition to gentrification they have aroused, the free marketer says: “locals are getting rich selling to the Americans, and the Americans clearly believe they are getting something of value, so this is natural; it’s going to happen; if you try to stop it you might as well try to use your pinky to dam the Nile.” And the left winger replies: “the only reason the Americans can move here is that Mexico permits them to stay for six months without a visa. Change the rule, and this goes away. Mexico has a choice.”

The free marketer seems to espouse market naturalism: society is self-organizing and policymakers have little choice regarding outcomes. The most they can do is create a temporary disequilibrium—a dam that will break. By his reply, the left winger restores the policymaker’s freedom to decide social outcomes.

Both the free marketer and the left winger miss the point.

The proper argument for free markets is not that market outcomes are natural. The left winger has the better of the debate on that score: the state does in fact come first, then the market. That is why free marketers fear Communism. Because the state really can shut down the market—and, by extension, use a lighter touch to shape the social outcomes to which the market leads.

A proper free market argument accepts that policymakers can channel or override market outcomes but holds that they shouldn't, because the people speak most clearly through markets. That is, the only really coherent argument for free markets is that markets are more democratic than the electoral process. When people buy and sell, they vote, and the free market position is that un- or lightly-regulated markets process those votes in a way that is more faithful to the preferences of the voters than are the institutions of representative democracy, which process the votes that people cast in electing the policymakers who would otherwise regulate market outcomes.

So, the argument would go, while activists might not like the fact that Americans are moving to Mexico City, the fact that Mexicans themselves are willing to rent them places to live and sell them tacos is the clearest possible indicator that Mexicans want the Americans to come.

The left winger’s response that the state comes first is not a good rejoinder to this proper form of the free market argument. For the free marketer can argue that the state ought to embrace the system that most faithfully reflects the will of the people, and that markets, in his view, are that system. The only way for the left winger to strike back is to argue that the electoral process, through which people can choose to alter market outcomes, does an even better job of reflecting the will of the people.

That might be true—and I tend to think that it is. But unlike the claim that the law is prior to the market and can influence market outcomes, this claim is easy to contest, as several generations of public choice theorists have done.

In the market, one’s ability to speak is mediated by wealth—more money and more ownership means more votes. But even were it possible to keep money out of politics—and if not, then elections are mediated by money, too—elections would still be prone to distortion by small, highly-organized interest groups. Many voters don’t show up to the polls, and even when they do, their representatives don’t always do what they want.

Indeed, it is difficult even to compare outcomes under these approaches because they represent, in effect, different social welfare functions. Are the sale decisions of landlords and street food vendors a more accurate expression of the abstraction that is the “will of the people” than the decisions of representatives elected by the subset of the population that showed up to vote in the last election?

The debate over regulation of markets is really a debate over norms—and specifically democratic legitimacy—not market naturalism.

It won’t be resolved until both sides start acting that way.

Categories
Despair Meta Miscellany

Cover, Dispersion, and the Defense of Schools in Depth

The principal problem with liberal gun rights policies in the modern age is the same problem that has bedeviled all modern warfare: firepower. What do you do when a single rifleman with enough ammunition can wipe out hundreds of people per minute?

This was, of course, a problem with which militaries were much concerned between 1914 and 1918 in particular. One might have expected that the principles that they developed in response would have been put to use already by school defense planners, especially since those principles govern the way all armies today deal with the same problem of firepower that schoolchildren now face.

But they have not been applied.

To my knowledge, the principal principle employed today by schools is that of concealment. If a shooter enters the building, classroom lights are to be turned off, doors are to be locked and barricaded, and children are to hide.

Concealment is, indeed, one of the methods that World War One tacticians identified as a means of dealing with firepower.

But it’s just one, and far from the most important—especially when the enemy has a rough sense of where you are. If he knows you’re behind a wall, or a door with a few chairs and desks up against it, he doesn’t need to know exactly where you are. With enough firepower, he can shoot up the entire wall or the entire door, and everyone behind. Just so, the modern soldier is taught to distinguish between concealment and cover.

Cover as in armor or concrete: stuff that stops bullets and negates firepower.

Cover is another method that World War One tacticians identified as a means of dealing with firepower. But it, too, is not enough. Cover works, but only if you can prevent the enemy from closing with you and pulling you out of your cover. In war, that is done by marrying cover with firepower of your own. You can close on a tank that has no gun, but not so easily on a tank that has a gun.

But it’s hard to marry a school with firepower of its own. The trouble has to do with the element of surprise. You need to have a lot of guards on duty at any given moment in order to minimize the advantage an attacker gets from surprise. Guards get bored and fail to notice things. They panic. They run. And they get shot before they can reach for the arms that they have carelessly cast aside. You would effectively need a garrison to support an armored school.

Absent such a garrison, you can armor your doors and make desks and chairs from concrete, but all the enemy needs to find is one unlocked classroom door and he’s in—and will have plenty of time to step behind every concrete desk or chair therein.

Cover, too, does not exhaust the principles developed by World War One tacticians.

Another is: dispersion.

Modern weapons can bring astonishing amounts of firepower to bear on discrete areas, but they can’t bring astonishing amounts of firepower to bear on everything at the same time. That is especially true for a lone rifleman.

The more dispersed the targets, the longer it takes to hit all of them.

Which brings us to one of the principal school design flaws from the perspective of modern defense: schools concentrate students. Once the shooter has entered a classroom, the walls of the classroom corral his targets whereas modern tactics demand that targets disperse in order to defend successfully.

But the most important lesson that tacticians learned in World War One was something else: combination.

A successful defense cannot be mounted using any one of these principles alone. Concealment alone won’t do it (the enemy will just shoot all the concealed places). Cover won’t do it (the enemy will just close with you and pull you out). Dispersion won’t do it (given enough time, the enemy will find a bullet for every target).

You have to use them in combination.

If you disperse and conceal yourself behind cover, the effects of the enemy’s firepower are much reduced. It will take him longer to find you, he will have a harder time hitting you, and it will take him longer to hit all of you.

This was the rationale behind the defense in depth developed by the Germans toward the end of the war.

Rather than concentrate thousands of defending troops in a frontline trench against which the allies could bring to bear massive firepower, the Germans created a deep patchwork of trenches, lightly manning each. They took advantage of natural obstacles, like hills, by stationing troops on reverse slopes. And they devolved authority onto commanders of small teams of defenders whose job was to adjust their positions dynamically as the battle evolved to maintain dispersion. This approach soon became a staple of modern tactics.

Modern militaries deal with firepower by deploying cover, concealment, and dispersion in combination. The least schools can do for their students is to deploy same.

The first and most important change that must be made to school defense is to eliminate the corralling effect of classroom walls. As soon as an attacker is known to be inside a school, the walls separating the classrooms from the outside world must disappear. Make them garage doors, say, and program them to spring up at the first sign of trouble. (A more fanciful approach is illustrated below.)

Interior walls should be armored and stay in place, as one doesn’t want temporarily to increase the number of available targets—concealment and cover still matter within the building—but the exterior walls should disappear, allowing students and teachers to disperse as fast as their legs will carry them.

But that, alone, is not good enough.

Rather than disperse into open fields enabling our rifleman to mow down fleeing students like a World War One machine gunner overlooking no-man’s land, students must disperse into concealment and cover.

To achieve this, schools must be ringed by concrete blocks in irregular patterns (irregular to deny the shooter an unobstructed field of fire in any direction). (Even better, they should be great concrete busts of historical figures, so that they both teach and protect.) As soon as the outside walls go up in response to a threat, students must be able to flee into cover and concealment of this kind. The blocks must be spaced closely enough to conceal and cover, but not so closely as to prevent students from continuing to run and run and run; for they must not stop behind these blocks, but weave through them, continuing to disperse (according to arrows conveniently painted throughout this field of cover) until they have arrived behind the cordons set up by first responders.

Here the box-like outside walls of a schoolhouse are tethered to a boom which jerks the walls away at the first sign of trouble. Students are then free to flee to safety in all directions using the cover and concealment provided by a dense assemblage of irregularly-spaced concrete blocks.

In this way, the rifleman’s firepower is almost completely negated. In seconds, his targets disappear behind cover and concealment. He must chase them down on foot, close with them, one by one, and each time he pauses, all the other targets recede further from him. He cannot see them. He cannot shoot them from afar.

A country that gives each person a right to that hallmark of modern warfare—firepower—must give its students the benefit of modern defensive combat tactics. It must give them the defense in depth.

Of course, another approach would be not to honor an individual right to modern firepower in the first place.

Categories
Miscellany

Ressentiment in the Pacific Theater

The Japanese in New Guinea:

I still remember well when we retreated from Buna. It was the night of 20 January 1943. . . . There were many soldiers wounded, or too sick to retreat. Five or six of us were standing around with Captain Kondo when he said to one of the wounded, ‘We are going to leave now. But there is no one who can carry you. It would be a big problem if a soldier like you, who is still clear in his mind, should become a prisoner of war. So, you should kill yourself here.’ The wounded soldier said, ‘Yes sir. I will die here, sir.’ Kondo said, ‘I’ll give you my pistol. Please die now.’ But the soldier didn’t have enough strength to pull the trigger of the pistol. He told Kondo that. The Captain said, ‘Alright. Give it to me. I’ll shoot you.’ The soldier pleaded with Kondo to wait. Kondo said, ‘Now what! Are you scared?’ The soldier asked if he could call out to His Majesty, the Emperor. He shouted ‘Long Live the Emperor,’ then the Captain shot him in the head and killed him. It was the first and last time I saw someone calling out the Emperor’s name before he died. We all knew that it was not his real feeling, because everyone else called out for their mother. To call out ‘long live the Emperor’ was just for show.

Peter Williams, Japan’s Pacific War: Personal Accounts of the Emperor’s Warriors 62-63 (2021).

The Americans on Guadalcanal:

I was passing a line company when I heard the company commander berating a Marine for walking along the top of the ridge. Because of sniper fire it was against regulations. I knew this captain was a Reserve officer, and stopped to watch. The Marine on the skyline did not immediately come down as ordered. The captain proclaimed that he had one minute or the captain would shoot him on the spot for refusing a direct order. He looked at his watch and placed his right hand on his sidearm, a showy, chrome-plated, ivory-handled, Smith and Wesson revolver. A few yards behind, a Marine was cleaning his rifle and seemed to be paying no attention. He replaced the bolt, loaded the magazine, and put a round in the chamber. Then he cradled the rifle in his arms and gazed off into the distance. I noticed that the piece just happened to be pointed right at the captain’s back. The Marine on the ridge ambled down, the captain took his hand off his revolver, the rifleman took the bolt out of his rifle, and I continued on my way.

Eric M. Bergerud, Touched with Fire: The Land War in the South Pacific 437 (1997).

In both of these stories there is abuse. In the Japanese story it is from the top and in the American story it is from the bottom.

I find the American story more frightening, I think.

For there is nothing more frightening than resentment.

All the more so, here, because, fifty years later, the American historian who recounts the tale, Eric Bergerud, remarks that the officer was inadequate—because he tried to exercise authority.

I am struck by how different the American experience was, also, from another army that, like the Japanese army, was authoritarian—that of Germany in World War One. Here is Ernst Junger:

As we hurried on, I called out for directions to an NCO who was standing in a doorway. Instead of giving me an answer, he thrust his hands deeper into his pockets, and shrugged his shoulders. As I couldn’t stand on ceremony in the midst of this bombardment, I sprang over to him, held my pistol under his nose, and got my information out of him that way. It was the first time in the war that I’d come across an example of a man acting up, not out of cowardice, but obviously out of complete indifference. Although such indifference was more commonly seen in the last years of the war, its display in action remained very unusual, as battle brings men together, whereas inactivity separates them.

Ernst Junger, Storm of Steel 194-95 (1961) (M. Hoffman, trans. Penguin Books 2004).

Junger fought almost continually from 1915 to 1918.

Authority means freedom from resentment—you accept your position.

On the other hand, the Americans came out on top in both wars.

Categories
World

What Does the Invasion of Iraq Tell Us about Whether the American Military Would Have Outperformed Russia in Ukraine?

Many have drawn the conclusion from Russia’s failures in Ukraine that the United States would have been more successful—because the United States were more successful in their own most recent military adventure against a functioning state: the invasion of Iraq.

To see whether such a conclusion is sound, it is useful to consider whether the Iraqis took the same steps as the Ukrainians to defend their country—and if they did not, whether those steps would have been effective in Iraq.

If Iraq did take the same steps, or they would not have been effective, then the comparison between Ukraine and Iraq is a good one: if America beat Iraq it could probably have beaten Ukraine as well.

But if Iraq did not take those steps, and they would have been effective, then American success in Iraq may not tell us much about how America would have fared against Ukraine.

A RAND post-mortem on the Iraq War is most helpful in this regard. It suggests that Iraq, unlike Ukraine, failed to mount a basic defense of its territory—one that would have seriously hampered the American invasion—which in turn suggests that American success in Iraq tells us little about whether America would have been more successful than Russia in Ukraine.

The report points out five basic mistakes that Iraq made, which would have slowed the American advance, and which Ukraine has not made.

First, on the eve of the American invasion of Iraq in 2003, much of the Iraqi military was stationed in the north or the east of the country—the opposite of where they needed to be to counter a massive, well-publicized American buildup of troops to the south and west.

No one seems fully to understand why Iraq did this. But it is as if Ukraine had stationed all of its units on its borders with Romania, Slovakia, Hungary, and Poland prior to the Russian invasion.

Ukraine did not, of course, do that; on the eve of the invasion, its troops were stationed in the east of the country, where the Russian invasion force was massed. If Iraq had deployed its forces to the south and west, American forces would have faced a much more numerous and concentrated enemy during the Iraq War.

Second, Iraq failed systematically to prepare to blow up ports, bridges, dams and oil fields to slow the American advance.

This was not because American planes or special operations forces had prevented demolition plans from being carried out. There simply were no plans.

As RAND puts it: “Even though the Iraqi strategy was to impede the U.S. march toward Baghdad, measures that could have slowed the American advance, such as the systematic mining of roads, destruction of bridges, and flooding of choke points, were not part of the Iraqi defense scheme.”

Ukraine has not made this mistake—at least not entirely. While Ukraine failed to scuttle ships in the Port of Mariupol in advance of the Russian invasion, she did mine the port. And she did blow bridges and dams, slowing the Russian advance on Kyiv and forcing Russia to engage in costly bridging operations in the east of the country.

If Iraq had done the same—in particular, if she had blown the Hadithah Dam, flooding the Karbala gap, which was the main bottleneck along the American line of advance—the invasion would have been slowed.

According to RAND, “[t]he gap to the west of Karbala was the only feasible route of advance as the area to the east of Karbala and around the Euphrates River crossing was a ‘nightmare of bogs and obstacles.’ . . . Had the dam been breached, the resulting flood would have made an armored movement through the gap impossible.”

The report continues: “However, there is no evidence that the Iraqis ever intended to breach the Hadithah Dam, as they had ample opportunity to do so before the Rangers seized it.”

Third, Iraq made virtually no effort to defend against the American advance on the most defender-favorable terrain in the country: its cities, which offer excellent concealment from air attack, among other advantages.

RAND reports: “According to [an American researcher], the [Iraqi] Regular Army and Republican Guard commanders his team interviewed [after the war] found the entire concept of city fighting unthinkable. [The researcher] quoted one Iraqi colonel as saying: ‘Why would anyone want to fight in a city? Troops couldn’t defend themselves in cities.’”

Indeed, almost none of the Iraqi army had trained for urban combat and its leaders hardly considered the option. And the sole Republican Guard unit stationed inside Baghdad failed to prepare any serious strong-points in the city. According to RAND:

A survey of Iraqi defenses in Baghdad found no defensive preparations, such as barricades, wall reinforcement, loophole construction to permit firing through walls, or wire entanglements, in the interiors of buildings and few, if any, obstacles, minefields, and barriers on the streets. What prepared fighting positions existed were typically outdoors and exposed. The protection surrounding such positions was often one sandbag deep. As a consequence, the militias and Special Republican Guard units often fought in the open or from easily penetrated defensive positions.

Stephen T. Hosmer, Why the Iraqi Resistance to the Coalition Invasion Was so Weak 51, 71 (2007).

Iraq did plan to use militias armed with small arms to defend cities. But instead of using the urban terrain to their advantage, these units engaged in suicidal frontal assaults on armored vehicles.

Ukraine has not made these mistakes. While it is difficult to judge the preparations made by Ukraine for the defense of downtown Kyiv—which seemed to be ad hoc at best—the urban defense of Mariupol appears to have been well executed. It slowed the Russian advance for three months despite heavy bombardment from both Russian strategic bombers and artillery.

A bit of arithmetic suggests that if Iraq had taken urban defense seriously as well, American casualties would have been very high and the Iraq War—which lasted six weeks—would have been greatly prolonged.

While the ratio of Iraqi to American casualties in the invasion was 40 to 1 (according to calculations based on numbers supplied by Wikipedia), virtually none of that fighting involved urban combat. The following year, however, when the United States spent another six weeks wresting control of a single small Iraqi city—Fallujah—from lightly-armed Iraqi insurgents who used the urban terrain to advantage, the ratio of Iraqi to American casualties fell nearly to parity: 4.45 to 1 (again based on casualty figures in Wikipedia). (The Iraq War, like the earlier Gulf War—which I touch upon below—was an American-led coalition affair; the numbers I report for “American” troop strength and casualties in these wars are coalition-wide numbers.)

If Iraq had concentrated its 1.3 million combatants in its cities, 292,000 American casualties—approximately the size of the entire American invasion force—would have been required to kill or wound them all. (And, if the numbers in Wikipedia are any guide, virtually all of the Iraqi combatants in Fallujah did have to be killed or wounded before the United States were able to declare victory in that battle.)

Even if we reduce the number of casualties required for victory by 75% to make liberal allowance for desertions and the application of measures not employed in Fallujah—such as the carpet bombing of Iraqi cities—America would have needed to sustain about 50,000 casualties to take Iraq via urban combat—roughly the level of casualties that Russia has sustained so far in the invasion of Ukraine (assuming 15,000 Russian dead and a 3-to-1 ratio of wounded to dead).

Moreover, the amount of time required to reduce Iraqi cities would have been great. American forces entered the Battle of Fallujah with the 3-to-1 ratio of attackers to defenders generally thought required for a successful advance. But at the beginning of the invasion of Iraq, Iraqi combatants outnumbered American combatants by more than four to one—1.3 million Iraqis under arms to about 300,000 Americans.

It would not, therefore, have been possible for American forces to reduce every Iraqi urban area at the same time. Instead, America would have had to proceed piecemeal—perhaps even city-by-city—in order to achieve favorable attacker-to-defender ratios, greatly increasing the duration of the campaign.
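The back-of-the-envelope arithmetic above can be checked in a few lines of Python. All inputs are the rough, Wikipedia-derived figures quoted in the text, so the outputs are order-of-magnitude checks, not authoritative estimates.

```python
# Rough inputs quoted in the text (Wikipedia-derived estimates).
iraqi_combatants = 1_300_000   # Iraqi troops under arms in 2003
us_invasion_force = 300_000    # approximate coalition invasion force

# Iraqi-to-American casualty ratio observed in urban combat at Fallujah.
fallujah_ratio = 4.45

# American casualties needed to kill or wound every Iraqi combatant
# at the Fallujah exchange rate -- roughly the size of the invasion force.
us_casualties_full = iraqi_combatants / fallujah_ratio
print(f"{us_casualties_full:,.0f}")  # ≈ 292,000

# Russian losses for comparison: assumed 15,000 dead,
# with three wounded per death.
russian_casualties = 15_000 * (1 + 3)
print(f"{russian_casualties:,}")  # 60,000 -- same order as the ~50,000 figure

# Overall ratio of Iraqi defenders to American attackers at the
# start of the invasion (more than four to one).
print(f"{iraqi_combatants / us_invasion_force:.1f} to 1")  # 4.3 to 1
```

The 75% allowance for desertions and carpet bombing is a loose discount rather than a precise calculation, which is why the text's "about 50,000" figure is only in the same neighborhood as these numbers.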

Fourth, apparently few in the Iraqi military could hit a target. As RAND puts it with endearing understatement: “Coalition forces were also fortunate in that Iraqi shooting accuracy was so poor. This bad marksmanship was apparent in both Iraqi regular military and militia units, and it was frequently commented on by U.S. forces.”

Many Iraqi units had no live-fire training in the year before the invasion, and those that did allocated four or ten bullets per soldier for target practice.

The marksmanship problem extended to elite Iraqi tankers. According to the report:

At Objective Montgomery west of Baghdad, an elite Republican Guard tank battalion fired at least 16 T-72 main gun rounds at ranges of as little as 800-1000 meters at the fully exposed flanks of the U.S. 3-7 Cavalry’s tanks and Bradley fighting vehicles—with zero hits at what amounted to point[-]blank range for weapons of this caliber. In fact, the nearest miss fell 25 meters short of the lead American troop commander’s tank. Similar results are reported from American and British combatants throughout the theater of war, and across all Iraqi weapon types employed in [the war].

Stephen T. Hosmer, Why the Iraqi Resistance to the Coalition Invasion Was so Weak 72-73 (2007) (quoting Congressional testimony).

Ukraine has not had this problem.

One of the success stories of the Ukrainian defense against Russia has been the accuracy of her soldiers’ fire, particularly artillery. This has, of course, been greatly aided by the use of drones. But for coordinates relayed by a drone to be useful, an artillery team must be able actually to hit them. The large number of Russian armored vehicles and soldiers destroyed by Ukraine further testifies to the Ukrainian military’s overall competence at aiming and shooting a variety of different weapons.

If the Iraqi army had actually been able to hit American tanks at point blank range, American losses during the invasion would have been rather higher than they were.

Fifth, the Iraqi army apparently made no attempt to attack lightly-armored American supply lines during the initial invasion itself. (That changed during the insurgency that followed.)

Indeed, Iraqi units purposely bypassed supply vehicles to mount suicidal assaults on armored vehicles instead. This inattention to supply is striking because American supply lines were stretched very thin during the invasion, as commanders chose to bypass numerous cities in their sprint to Baghdad.

As RAND puts it:

The fast-moving Coalition combat forces depended on extended supply lines through areas that had not been fully cleared of enemy forces. However, the Iraqis apparently had no plan and made little or no attempt to interdict those lines of supply by having militia and other forces attack the thin-skinned tankers and other supply vehicles supporting the U.S. advance. Instead, the militia forces were directed to attack U.S. combat elements, particularly the tanks and APCs leading the U.S. advance.

Stephen T. Hosmer, Why the Iraqi Resistance to the Coalition Invasion Was so Weak 55 (2007) (quoting Congressional testimony).

The Ukrainian military has not made this mistake. A successful Ukrainian campaign against Russian supply lines, which, like American supply lines in Iraq, were seriously stretched because Russia bypassed cities on the way to the capital, helped win Ukraine the battle for Kyiv.

Had the Iraqi military targeted American supply lines, the American invasion would have been delayed. Indeed, it is striking that Russia, like America in Iraq, gambled that she could achieve capitulation via a sprint to the capital despite the rear-area vulnerability this creates. America won her gamble. Russia lost hers.

Overall, then, Russia has faced a much different foe from the one the United States faced in Iraq.

Ukraine had forces stationed roughly where she needed them to mount a defense, denied Russia infrastructure, used urban terrain to her advantage, showed competence in the use of her weapons systems, and attacked supply lines. And several of these actions—denial of infrastructure (e.g., bridges), accurate fire, and attacks on supply—have been credited as reasons for Ukraine’s success thus far in the war.

Iraq did none of these things. Indeed, Iraq was incompetent, whereas Ukraine has proven at least minimally competent.

Had Iraq been a minimally competent opponent, the results of the Iraq War might have been very different—and the image of American invincibility might have been tarnished in the way that Russia’s military image has been greatly tarnished by Ukrainian resistance.

(Of course, America’s struggles responding to the subsequent Iraqi insurgency demonstrated the limits of the American military as an occupying force. But my interest here is in the ability of the military to prosecute a successful invasion rather than in its ability to prosecute a successful occupation—if successful occupations are ever possible in the absence of genocide.)

Russia’s failure to achieve air superiority in Ukraine has been striking, and it does seem reasonable to assume that America would have achieved air superiority—and would have carried out vastly more devastating air assaults in Ukraine than has the Russian Air Force.

But it is not clear that air superiority would have won the war for Russia so much as shifted the battlefield from the countryside and the suburbs to Ukraine’s cities.

Indeed, the American experience in the first Gulf War (as opposed to the Iraq War) suggests that air superiority is no silver bullet against a minimally competent defender such as Ukraine even outside of cities.

The Gulf War of 1991 was a contest between thousands of tanks and armored vehicles on both sides undertaken in the deserts of Kuwait, Iraq, and Saudi Arabia. It was preceded by six weeks of continuous air attack conducted by the United States on Iraqi forces, during five of which American planes had air superiority.

Despite this, many Iraqi armored units remained near full strength when the ground invasion began, forcing American ground units to destroy thousands of Iraqi tanks and armored vehicles in combat.

According to a researcher:

The Iraqi armor force that survived the air campaign was still very large by historical standards, and many of these survivors fought back when attacked by Coalition ground forces. It is now known that about 2000 Iraqi tanks and 2100 other armored vehicles survived the air campaign and were potentially available to resist the Coalition ground attack . . . . By contrast with these . . . Iraqi armored vehicles, the entire German army in Normandy had fewer than 500 tanks in July 1944.

Stephen Biddle, Victory Misunderstood: What the Gulf War Tells Us about the Future of Conflict, 21 International Security 139, 149, 152 (1996).

Moreover, simulations conducted by the American military after the Gulf War showed that if Iraqi armored units had been minimally competent—that is, had they bothered to dig trenches for their tanks instead of hiding them behind sand berms and had they posted sentries to alert them when the invasion started—they would have inflicted more casualties on American invasion forces than American forces would have been able to inflict upon them, notwithstanding the inferior range and target acquisition systems of Iraqi tanks.

As the same researcher recounts:

Western armies dig their fighting positions into the earth below grade, and hide the soil removed in excavation. The [elite Iraqi Republican] Guard, on the other hand, simply piled sand into loose berms, or mounds, on the surface of the ground around combat vehicles and infantry positions. This gave away the defenders’ locations from literally thousands of meters away, as the berms were the only distinctive feature of an otherwise flat landscape, without providing any real protection against the fire this inevitably drew. Loose piles of sand cannot stop modern high-velocity tank rounds. In fact, they barely slow them down. U.S. crews in [one battle] reported seeing 120 mm tank rounds pass through Iraqi berms, through the Iraqi armored vehicle behind the berm, and off into the distance. No U.S. tank crew would leave itself so exposed.

Stephen Biddle, Victory Misunderstood: What the Gulf War Tells Us about the Future of Conflict, 21 International Security 139, 158-59 (1996).

He continues:

Iraqi covering forces systematically failed to alert their main defenses of the U.S. approach, allowing even Republican Guard units to be taken completely by surprise. Going back at least as far as World War I, all Western armies have used covering forces—whether observation posts, forward reconnaissance screens, or delaying positions—to provide warning to the main defenses that they are about to be attacked. Ideally, these covering forces serve other functions as well (such as stripping away the opponent’s recon elements, slowing the attacker’s movement, or channeling the assault), but the minimum function they must perform is to notify the main defense of an attacker’s approach. This is not difficult. A one-word radio message is enough to sound the alarm. Even less can work if commanders agree in advance that failure to check in at specified times will be taken as warning of attack. The brevity of the message makes it virtually impossible to jam; the procedural backup of interpreting silence as warning means that even a dead observer can provide an alert. Yet at [the Gulf War battle known as] 73 Easting, for example, the Iraqi main position received no warning of the [American tanks’] approach. A few observation posts were deployed well forward of the main defenses, but these were evidently destroyed without sending any messages, and without the local commander interpreting silence as evidence of attack.

Stephen Biddle, Victory Misunderstood: What the Gulf War Tells Us about the Future of Conflict, 21 International Security 139, 160 (1996).

If, as those simulations showed, a minimally competent Iraqi army not making such basic mistakes and fielding mid-twentieth-century-vintage Soviet armor could have made the Gulf War a costly affair for the United States military, notwithstanding American air superiority over a desert terrain affording nowhere to hide, then a minimally competent Ukrainian military operating in Ukraine’s rather more defender-friendly terrain of farms and forests would likely be able to inflict substantial losses on an American invasion force having air supremacy.

And that’s before the fight would even make it to the cities.

The fact is that, against a minimally competent foe, even one with somewhat inferior technology and no air defenses, attacking is hard.

One gets a hint of this when one considers that the great advances of World War Two were, for the most part, executed only with the aid of staggering numbers of men.

To dislodge the 100,000 German soldiers holding the Seelow Heights on the route to Berlin, for example, the Soviets flung a million men at them and took thousands of casualties.

By contrast, both the American invasion force in the Iraq War and the Russian invasion force in Ukraine were smaller than the forces mustered by the defenders (300,000 against 1.3 million in Iraq and 225,000 against 300,000 in Ukraine according to Wikipedia).

One should expect, then, difficulties for the invader, regardless of how competent the invader may be—or how advanced its weaponry. America was fortunate enough to dodge these difficulties by picking a foe that was not minimally competent.

The lesson of Iraq was not American invincibility but rather that the most important choice to make in planning a military adventure is whom to fight.

The difference between American success and Russian failure in these conflicts may be due to no more than that the United States chose wisely—and Russia poorly.