Self-Determination as Imperial Policy

I am much concerned these days with self-determination as a bad thing in American foreign policy. By self-determination I mean the notion that we should support whatever groups wish to be free: the Kurds in Turkey, the Tibetans in China, and so on.

By a very fortunate coincidence, the more we promote freedom of this kind, the weaker our enemies become; at the extreme, the world entire consists of us plus an infinity of finely divided, completely free groups. It is a commonplace that the larger the number of independent decisionmakers, the harder it is for them to do anything together as a group. The collective action problem. It is for this reason that we do not rely on victims to organize to solve environmental problems (we have the EPA instead), and for this reason, too, that we chose to form the United States and then killed hundreds of thousands of people in order to preserve the union.

We tend to treat the promotion of self-determination as a selfless act of foreign policy, but it is not. Another name for it is the sowing of division, a very old tool of empire. A truly selfless bit of policy would be not only not to encourage self-determination but affirmatively to promote the empires of others. But that would be folly, for we would simply be creating the wolves that would one day turn upon us!

But wait: the definition of a selfless act is one that is against interest. If we do not feel threatened by the promotion of self-determination abroad, then it cannot be selfless. In our bones we know the promotion of self-determination to be a power play, a mode of domination all the sweeter in that it allows us both to dominate and to carry the mantle of selflessness.

Meta Miscellany World


Nine are enough.

Civilization Meta Philoeconomica Quantity

The Illiteracy of the Literate

The complex feelings of lawyers and humanist scholars with respect to quantitative subjects, and particularly the quantification of the social sciences, ought to give them greater empathy for the illiterate and uneducated. The humanist scholar is to the scientist as the illiterate are to the literate.

The illiterate view books with distrust, for books are used to undermine their most heartfelt positions in ways against which they are unable to mount a defense. But this is precisely how the lawyer feels when her nuanced doctrinal argument is demolished by a mathematical model of the economy that shows that regardless of the substance of the legal rule, the same economic outcome will obtain.

“It’s just mathematical mumbo jumbo,” says the lawyer. “These economists don’t know how things work in the real world.” But what the lawyer cannot do is beat the economist at her own game. She can’t show that the economic model cannot withstand close scrutiny; all she can do is try to delegitimize the entire method. But the illiterate level the same charge against the literate: “It’s just book learning,” they say. They cannot defend themselves in writing; but they can try to delegitimize writing itself.

It is particularly bitter for the humanists that they have been socialized to occupy the power position. For millennia, since the invention of writing, they have been the ones who use their learning to lord it over others. But now these merely-literates, these innumerates, must know what it means to be crushed by ideas. A very bitter position indeed.

I do not mean to say that the mathematicians have any better claim on the truth. But if the humanists think the mathematicians don’t, then it should perhaps worry the humanists to think that maybe they don’t either, in relation to the illiterate. Or maybe we are marching forward, after all, from one stage of intellectual progress to the next!


An Intuitively Sufficient Statistic

You have a set of samples and you are interested in learning something about the probability distribution from which they are drawn. That something is the parameter of interest. It might be the mean. If you do something to the samples, add them together, for example, then you might lose some piece of information that they contain regarding the parameter. But you also might not. Whether you lose information or not by manipulating your samples depends on what you do to them.

For example, if you are sampling from a binomial distribution for which success has value 1 and failure value 0, then adding up the results of the samples won’t destroy information about the mean of the distribution (i.e., the probability of success). That’s because the mean is expressed in the number of successes, rather than their order. You know just as much about the mean of the distribution if your first nine samples are successes and your tenth a failure as if your first is a failure and the next nine successes. In other words, when you add up the results, you lose information on the order in which the successes occurred, but the mean does not determine that order, and so you don’t lose any information relevant to determining the mean.
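The order-invariance can be checked directly. Here is a minimal sketch in Python (the helper name `likelihood` is mine), computing the probability of each exact ordered sequence of Bernoulli draws:

```python
from math import isclose, prod

def likelihood(sample, p):
    """Probability of observing this exact ordered sequence of 0/1 draws,
    each draw being 1 with probability p."""
    return prod(p if y == 1 else 1 - p for y in sample)

nine_then_fail = [1] * 9 + [0]   # nine successes, then a failure
fail_then_nine = [0] + [1] * 9   # a failure, then nine successes

# Whatever the true p, the two orderings are equally likely, so the sum
# (here, 9) carries all the information the sequence has about p.
for p in (0.2, 0.5, 0.9):
    assert isclose(likelihood(nine_then_fail, p),
                   likelihood(fail_then_nine, p), rel_tol=1e-12)
```

The same check works for any reordering of the same number of successes, which is exactly why summing discards nothing relevant to the mean.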

When the mean increases, the sample results change because you end up with more successes. So a statistic that counts successes changes too. Both the sample and the statistic change in the same way. That is what happens when a statistic is “sufficient.”

That’s why for a sufficient statistic the probability of drawing a particular sample, conditional on a particular result for the statistic, is independent of the parameter. As the parameter changes, both the sample and the statistic change in the same way. So their relationship to each other remains constant regardless of what happens to the parameter. In a sense, the sufficient statistic transforms the sample, instead of altering it. So any change to the parameter doesn’t change the relationship between the samples and the statistic. Sample and statistic are just different ways of expressing the same thing about the parameter.

The probability of the sample conditional on the statistic is just the ratio of the probability of the sample to the probability of the statistic (the sample determines the value of the statistic). This means that if the statistic is sufficient, the probabilities of the sample and of the statistic must each contain the parameter as a common factor, so that the parameter cancels out and therefore has no effect on this conditional probability.
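The cancellation can be seen concretely for Bernoulli draws. A sketch (function names are mine), assuming n independent draws with success probability p:

```python
from math import comb, prod

def p_sample(sample, p):
    # probability of this exact ordered sequence of Bernoulli(p) draws
    return prod(p if y == 1 else 1 - p for y in sample)

def p_statistic(k, n, p):
    # probability that the sum of n Bernoulli(p) draws equals k
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

sample = [1, 0, 1, 1, 0]
n, k = len(sample), sum(sample)

# P(sample | sum = k) = p^k (1-p)^(n-k) / [C(n,k) p^k (1-p)^(n-k)]
#                     = 1 / C(n,k).
# The factor p^k (1-p)^(n-k) appears in both numerator and denominator
# and cancels, so the conditional probability is the same for every p.
for p in (0.1, 0.5, 0.8):
    cond = p_sample(sample, p) / p_statistic(k, n, p)
    assert abs(cond - 1 / comb(n, k)) < 1e-12
```

Here the conditional probability is 1/C(5, 3) = 0.1 regardless of p, which is the independence from the parameter described above.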


Power and Work

Ease should be your measure of power. You are sovereign over all things that you find easy.

How strange that when we think about power, we often think about things that we find hard to do! It’s hard, for example, to become President. But that tells you something about your power, doesn’t it?

Often we confuse the acquisition of power with power itself. Getting power can be hard. But having it is easy! Power means the doing is quick; you don’t have to think about it; you don’t have to work at it. Just like killing insects.

Are you surprised that the border guard didn’t think twice about executing the refugee? You shouldn’t be. Not needing to think twice is what it means to have power. The act figures very little in your mental map of important things, precisely because it’s easy!

You suffer for what you covet, not for what you have.

Civilization Meta Monopolization Quantity World

Optimal Prediction

When optimization arrives, either others will optimize against you or you will optimize against others. Business against you or you against business. There will be either corporate planning or central planning.


The Trappings of Wealth

Really rich people have more money than they can consume. Why, then, is the extra money valuable? Investment power. The ability to implement “private” policy. Suppose we think it good for the political system to have a group of private parties who can do that. Why do we choose them by the luck of the draw in business? Why not hold a lottery every ten years and give a trillion dollars to ten lucky winners? Or elect ten people every ten years, or thirty years, give them the money, and tell them to spend it? Why not choose them a better way?

How interesting that these thoughts have raised Unger’s rotating capital fund (pages 35-36) out of the ocean trenches of my memory!


Rearranging the Law of Large Numbers

The Law of Large Numbers is completely meaningless to me when it is phrased as “the probability that the sum of the results, divided by the number of independent samples, equals the expected value gets very high as the number of independent samples goes to infinity.” That is, mu = (sum of ys)/n, in which mu is the expected value, the ys are the results of independent samples, and n is the number of samples. Why should anything converge to the expected value?

But it is very meaningful to me when it is rephrased as “the probability that the expected value times the number of independent samples equals the sum of the results gets very high as the number of independent samples goes to infinity.” That is, mu*n = sum of ys.

Yes, as the number of samples gets high, you know better and better exactly what your aggregate results will be.

As you multiply your expected value by larger and larger sample sizes, expected value goes from being totally fictitious and unhelpful to completely real. If I get $100 with a 50% chance and zero otherwise, it is meaningless to tell me that my expected value is $50. I will never have $50. But if you tell me that I will face this chance 1,000 times, then I can tell you with great confidence that I will end up with approximately $50 times 1,000, or $50,000.
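A quick simulation makes the point. This is a sketch in Python (the seed and sample size are my choices, not anything from the text):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

n = 1000
# Face the $100-or-nothing, 50% gamble n times.
total = sum(100 if random.random() < 0.5 else 0 for _ in range(n))

# The expected value per trial, $50, is a fiction for any single trial:
# no trial ever pays $50. But the aggregate lands close to
# 50 * 1000 = $50,000 (within a few thousand dollars here).
assert abs(total - 50_000) <= 5_000
```

No single trial ever yields $50, yet the aggregate is predictable; that is the sense in which mu*n becomes "completely real" as n grows.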


A Model for Macro

So far we have been discussing the properties of matter from the atomic point of view, trying to understand roughly what will happen if we suppose that things are made of atoms obeying certain laws. However, there are a number of relationships among the properties of substances which can be worked out without consideration of the detailed structure of the materials. The determination of the relationships among the various properties of the materials, without knowing their internal structure, is the subject of thermodynamics. Historically, thermodynamics was developed before an understanding of the internal structure of matter was achieved….

We have seen how these two processes, contraction when heated and cooling during relaxation, can be related by the kinetic theory, but it would be a tremendous challenge to determine from the theory the precise relationship between the two. We would have to know how many collisions there were each second and what the chains look like, and we would have to take account of all kinds of other complications. The detailed mechanism is so complex that we cannot, by kinetic theory, really determine exactly what happens; still, a definite relation between the two effects we observe can be worked out without knowing anything about the internal machinery!

1 Richard P. Feynman et al., The Feynman Lectures on Physics 44-1 to 44-2 (1963).



The Field of Thought

The extraordinary thing about algebra is that it provides accurate solutions in advance of intuitive understanding, rather than after it. Normally this happens only with the observation of empirical phenomena: you see that a thing happens, and then you try to explain it. But with algebra, too, sometimes a result pops out of your equations, and then you try to explain it. And yet algebra is pure thought! Herein the facticity of thought.