Digital Shibboleths

I was surprised by the responses to my piece “Is tribalism a natural malfunction,” out this September in Nautilus Magazine. The piece was a meditation on a series of computer experiments we did in the study of the Prisoner’s Dilemma, and a reflection on what these simulations, and more complex arguments from mathematical logic, might tell us about social life. There were some lovely comments, of course, and some of what wasn’t lovely was just the natural rough-and-tumble (and preening, self-promotion, and me-too) displayed by varieties of the species Commentator Internetalis.

What was unexpected was the robust defense of tribalism made by those writing on ostensibly science- and technology-focused sites. (A student in my seminar sent me a link to the Hacker News discussion; a search pulled up similar discussions on Reddit.)

The claim that educated and apparently intelligent people made on these sites was simple: “Tribalism works”. To be clear, this was not said as part of a lament on the human condition. What they meant was quite a bit stranger, and darker: tribalism “works”—in the sense that at least some forms of it are good and desirable.

While the arguments made in favor of these views, and thus (at least implicitly) against my piece, took different forms, it was surprising to see the same fallacies appear again and again. Some of this repetition might have been a contagion effect, or an information cascade, as people imitated other people’s arguments further up the comment stream. For reasons like this, it’s not usually useful to deal with failures to reason on the internet. Yet doing so here is both instructive and diagnostic of a particular failed form of thinking among those who consider themselves reasonable, or rational, thinkers.

It’s particularly useful because my sense is that these readers, who in other domains may be able to reason well enough to write a functional piece of computer code (say), suffer, when thinking about their social lives, from a form of motivated reasoning: there is something they want to believe, and driven by that desire they preferentially select the mental moves necessary to avoid locating contradictions, or evidence against, the belief in question. My further concern is that these readers believe that other intelligent people, at least secretly, share this belief—lending additional motivation to avoiding the arguments, and evidence, that might unsettle it. If Very Intelligent People probably also think this, it’s not worth the effort to consider alternatives.

Now, I don’t think that something—that thing that these readers really want to believe—can be the assertion they make explicitly, that “tribalism works”. I don’t think they are keen to travel to places on the globe where the kinds of violent in-group/out-group dynamics I describe in my article actually obtain—South Sudan, say, or 1990s Rwanda, or the New Jersey depicted in The Sopranos or The Godfather, the gang violence of Chicago, or even the general “are you our kind of man” games that play a mildly villainous role in the biography of any great idea or innovation.

I don’t know what, precisely, it is in every case, nor do I have a taxonomy of it. But in many cases it seemed clear that the underlying belief drew on a fantasy that’s become increasingly popular to express: that the fantastic success of many places on the globe in improving human welfare and personal safety and security (see, for example, Max Roser’s data), as well as more abstract benefits such as science, culture, and the radical increase of more subtle and sophisticated forms of human flourishing beyond nice things like not dying before the age of five—that all of this was driven not by (say) the cosmopolitan values implicit (in very different ways, and often in mutually incompatible formulations) in science, free market systems, and Enlightenment theories of political participation, &c., but rather by (wait for it): racial, cultural, or religious tribalism.

Once someone believes this, a lot of other things become clear. Terrorism no longer appears as a threat to the values that sustain some of our most successful societies. The whole problem, rather, is tamed: it is now an obvious move in a game of racial war (and one you might consider trying yourself, on different targets). Judging the meaning of cultural change, and the complex feelings we have as we watch our past become past while looking forward to an uncertain future where we may or may not have a place, also becomes simple. Change is bad when it comes from outside, and change is good when it intensifies, or returns to, the practices of the tribe.

To be clear, I’m not going to make the argument against tribalism (see, if you’re curious, the last five thousand years of state formation and human development; spoiler alert, it’s the plot of Sophocles’ Antigone, and Antigone lost). Nor am I saying that the cosmopolitan values that sustained the cultural, technological, and political progress of the last few centuries aren’t themselves under continuous (and necessary) challenges that require them to continue their long-term evolution. Philosophy is more important than ever.

What I’ll do, instead, is show exactly how strained the arguments are that people made against the piece—as an illustration of the claim above. Something else is going on when people make arguments this bad.

The fallacies came in three main classes, and I’ll handle them in turn.

• Some commenters accused me and my colleagues of concealing the truth: far from showing that tribalism was a malfunction, we had instead demonstrated its superiority. (On Hacker News, the claim that we had hidden the true lesson of our simulations was explained by one particularly frank commenter as a form of self-loathing on the part of the “Western, Christian, European” tradition.)

This is the easiest to correct; it’s simply a failure of reading comprehension. Both the text and the figure we included show clearly the devastating effects these shibboleth machines have not only on others, but on themselves. After a particular variety of shibboleth machine comes to dominance, the system—thanks to errors induced by neutral drift—enters a period of large-scale instability. Their society (such as it is) collapses as mothers birth daughters who engage in civil war.

This kind of collapse argument, by the way, seems to apply to even more sophisticated error correction mechanisms—unless you switch off cultural evolution itself, the basic strategy of total war against non-copies will be vulnerable to misrecognition. You can see this in a few ways, but a simple one is to imagine the emergence of a less-tolerant subspecies that took less care to avoid killing fellow members of its tribe; it will outcompete its more cautious brothers, driving them to extinction. (There should be a theorem hidden in there somewhere, though I haven’t gone so far as to try to prove it.)
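The invasion argument above can be put in cartoon form. The following is my own illustrative toy model, not the simulation from the article: two variants share a single tribe and play a one-shot Prisoner’s Dilemma; with some small error rate, each misreads a fellow tribe member as an outsider. The cautious variant pays a small cost to double-check and cooperates anyway; the strict variant simply attacks (defects) on any misrecognition. Under standard Prisoner’s Dilemma payoffs, discrete replicator dynamics drive the cautious variant extinct:

```python
# Toy replicator dynamics for strict vs. cautious tribe members.
# All payoffs, rates, and costs below are illustrative assumptions,
# not values from the article's actual simulations.

T, R, P, S = 5.0, 3.0, 1.0, 0.0  # temptation > reward > punishment > sucker
EPS = 0.05                        # chance of misreading a fellow tribe member
COST = 0.1                        # cautious agent's cost to double-check

def payoffs(x):
    """Expected per-encounter payoff for each type, given cautious fraction x."""
    # Cautious agents always end up cooperating with kin (paying COST on
    # recognition errors); strict agents defect whenever recognition fails.
    c_vs_c = R - EPS * COST
    c_vs_s = (1 - EPS) * R + EPS * S - EPS * COST
    s_vs_c = (1 - EPS) * R + EPS * T
    s_vs_s = ((1 - EPS) ** 2 * R
              + EPS * (1 - EPS) * (T + S)
              + EPS ** 2 * P)
    f_cautious = x * c_vs_c + (1 - x) * c_vs_s
    f_strict = x * s_vs_c + (1 - x) * s_vs_s
    return f_cautious, f_strict

def cautious_share(generations, x0=0.99):
    """Run discrete replicator dynamics; return the cautious fraction."""
    x = x0
    for _ in range(generations):
        f_c, f_s = payoffs(x)
        mean_fitness = x * f_c + (1 - x) * f_s
        x = x * f_c / mean_fitness
    return x

if __name__ == "__main__":
    for t in (0, 50, 200, 1000):
        print(f"after {t:4d} generations: cautious share = {cautious_share(t):.4f}")
```

Even starting at 99% cautious, the strict variant’s small per-encounter edge (it collects the temptation payoff on its errors and never pays the verification cost) compounds until caution vanishes. This is only a sketch of the claim under these assumed payoffs, not a proof of the hidden theorem.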

This resembles the narcissism of small differences, dealt with so unforgettably by Monty Python, when the Judean People’s Front battled the People’s Front of Judea. The groups that formed in our simulation were unable to tolerate others—and eventually became unable to tolerate the differences that emerged amongst themselves. In the historical record, you might consider the violence and repeated purges that appear during revolutions. If you’re really clever, you might think that the instability of these self-/other-recognition systems is related to the particularly intractable classes of suffering associated with autoimmune diseases.

(A few readers took the unusual tack of defending tribalism by defending something that usually goes by another name—say, competing corporations—and calling that tribalism, ignoring the Joseph Kony-style tribalism that normally comes to mind in discussions of genocide and killing those who don’t provide the handshake. It’s a variety of the No True Scotsman fallacy, except now there are no Good Scotsmen: “Oh, no, I didn’t mean that kind of bad tribalism, I mean the good kind that we call something else.” This also appeared in a version of the argument that amounted to a reductio ad absurdum: that the entire apparatus of Western democracy was just itself a very complex shibboleth designed for self-/other-recognition—which, among other things, raises the question of why the system would go through the process of teaching and sharing the code.)

If all of this passed the reader by, you might have thought the second half of the article, dealing as it does with work by people at the MIRI conference, and the goal of finding systems that achieved cooperation without the use of self-/other-recognition mechanisms, would have provided a clue. Sadly, no.

• A second argument, equally fallacious, was that these simulations, by the very virtue of their being simulations, can tell us nothing about real society. Taken at face value, of course, this would imply that no model of the world that amounted to a simulation could tell us anything.

It’s a rather bleak view, and one the evidence doesn’t bear out. Simulation is a natural extension of the kind of reasoning scientists and scholars engage in all the time—the kinds of “what if”, sometimes counterfactual, reasoning that we do when we’re trying to understand the causes of things and to build explanations. Simulations do a whole bunch of things that are both a source of power and a source of unease. Most obviously, they simplify. But a good simulation is one that simplifies in the right way, and tells us a great deal about the essence of the phenomena at hand.

We can, of course, debate whether any particular game theoretic, computational, or narrative “what if” account tells us what we think it does. But at the very least (as one game theorist told me at our Game Theory and Information Theory conference years ago) it’s playing “tennis with the net up”: it places enabling constraints on the imagination. And very often, simulations do much more; in this case, they show us the lengths to which even simple evolutionary systems will go towards creating self-/other-recognition even when none has been made explicitly available.

It’s a good question what, exactly, distinguishes a simulation from the other kinds of models we build in science. A simulation certainly has to try to capture some of the underlying causal mechanisms—in contrast, say, to a simple regression model. But there are plenty of causal models that aren’t simulations. In the end, I’d say a simulation is something where we don’t just include causal mechanisms—we actually carry the causal mechanisms over into the workings of the simulation itself. A causal Bayes-net model of the economy is not a simulation, but that giant MONIAC certainly was, even if the mechanisms it proposed (“money as fluid”) didn’t work.

To reject simulation outright, as a number of commenters seemed to do, is to reject a practice of thinking at the heart of scientific progress. A slightly different complaint, that misses the point in a similar fashion, is the idea that because we didn’t hard-code the groups themselves, they can’t be true groups — as if all human aggregations begin with the writing and signing of a social contract, something even Rousseau didn’t believe!

(A second version of this argument was that we didn’t know that these were simulations, but that the commenter in question had detected this fact, otherwise invisible to us. While I tried to write the piece in a vivid fashion, rest assured that I don’t think that these pieces of code are actual, well, people…)

• A third argument was somewhat revealing: that my description of tribalism as a “malfunction” meant I supported the creation of a dystopian, coercive world of forced re-education camps, or even DNA-editing, to breed it out of an unwilling population. In some cases, this dystopian view was linked to a story where a kind of Hunger Games-style world-civilization was doing the reprogramming.

The fallacy is pretty simple: that X is something we wish didn’t happen does not prescribe a particular way to handle it, even if it’s something we really wish didn’t happen.

Yet take a step further, and think about how weird this response is. The Nautilus piece tells a story about a difference-intolerant computer program that exterminates those unlike it and likens its emergence in simulation to reversions to tribalism in the modern era.

A certain kind of reader takes issue with the description of tribalism as a malfunction, asserting, by contrast, that this kind of behavior works (is successful, is desirable). And in doing so, they then suggest that the author of the piece (or other readers, perhaps) supports the creation of… a difference-intolerant state concerned with the extermination of those unlike it.

There’s not much more to say on this final argument, except perhaps that it might reveal rather more about those who make it than they might first think.