
Why Do Mechanical Explanations of the Social Deny Software?

They say that a good social theory must throw out some reality in order to have any explanatory power. Thinkers who favor mechanical explanations of the social—the people who claim that it is climate or asteroids or guns, germs, and steel that explain the rise and fall of civilizations—always seem to throw out the part of the mechanism that is the software. Why?

That is, all mechanistic explanations of the social treat people as machines—robots—that have certain operating limits. They need food and water. They need temperatures that are not too high and not too low. They cannot withstand the slash of a steel weapon. They are susceptible to disease. And so on, all of it true. These operating limits do constrain what the robots can do. But that is far from all.

Robots need an instruction set to run; they need, in other words, a behavior. And if the dawn of the age of artificial intelligence should be teaching us anything, it is that behavior matters a lot. There is a very big difference between a car, a car that knows to brake before hitting something on the highway, and a self-driving car. There is a very big difference between a Roomba that moves only in straight lines and one that criss-crosses the room. It would seem to follow that the robots’ software should matter a lot in the rise and fall of civilizations. So why not make social theory by keeping the software and throwing out the robot hardware instead?

Programming in the social is thought, belief, training, worship, prejudice, emotion, philosophy, literature, letters, culture, art. It is the humanities. Humanistic explanations for things—Ruskin’s observation that you can read the decline of a civilization in its art—theorize the social in terms of the human robot’s programming. The humanities throw out the hardware.

(By programming I do not mean that we are necessarily controlled by others. In human beings we are dealing with semi-autonomous, artificially (nay, actually!) intelligent robots. So programming, for us, necessarily means self-programming at both the individual and social levels. Our programs are some peculiar function of inputs from other robots, inputs from the programs of the robots themselves (that is, we use our thought to influence ourselves), and hard-coded inputs (those determined by our genes).)
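To make that parenthetical concrete, here is a toy sketch of the update loop it gestures at. Every detail is a hypothetical illustration of mine, not anything the essay specifies: the dictionary-of-dispositions representation, the next_program name, and the openness weight are all invented. The essay claims only that the next program is some peculiar function of three input streams.

```python
# A toy sketch (hypothetical in every detail) of the self-programming loop
# described above: the next program is some function of (1) inputs from other
# robots, (2) the robot's own current program, and (3) hard-coded genetic inputs.

def next_program(current, social_inputs, genes, openness=0.3):
    """One step of the update: the next program as a function of
    inputs from others, the program itself, and the hard-coded inputs."""
    updated = {}
    for disposition in set(current) | set(social_inputs):
        own = current.get(disposition, 0.0)           # self-programming: thought feeds back
        pushed = social_inputs.get(disposition, 0.0)  # programming by other robots
        updated[disposition] = (1 - openness) * own + openness * pushed
    updated.update(genes)  # hard-coded inputs are not up for revision
    return updated

# Same hardware, same outside pressure, different starting software,
# different resulting behavior:
skeptic = next_program({"obey": 0.2}, {"obey": 1.0}, {"eat": 1.0})
believer = next_program({"obey": 0.9}, {"obey": 1.0}, {"eat": 1.0})
print(round(skeptic["obey"], 2), round(believer["obey"], 2))  # 0.44 0.93
```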

It is a peculiar thing that we should continue to favor mechanical explanations of the social at this moment of all moments: to attribute the fall of Rome to barbarian invasions rather than decadence, or the rise of China to good policy rather than good spirit. For as a technological matter, we are coming to recognize the transformative nature of artificial intelligence in relation to hardware. And thanks to the great financial success of companies like Google and Facebook, which derives entirely from the value of connecting businesses with individual minds, we are coming to appreciate the great difference that influence over minds makes in social outcomes.

When we do consider the software, we tend to ignore the most important parts. We credit the power of propaganda, but not the power of religion, ideas, philosophy, love, or, indeed, art. But these too are a part of the programming, and if you judge by the things you yourself hold most dear, likely the most important part.

So do not tell me that talking won’t work. That writing will never change things. That symbolic protest is weak. Or that the only political power grows out of the barrel of a gun—unless you believe that your computer will behave the same no matter what software it runs.