The League of Legends Blockchain Vulnerability and the Forbidden Image Attack

This post is based on a discussion hosted last night at Harlem House, which included Jessica Taylor, Ben Hoffmann, and Hazard of the East. It announces a systemic vulnerability, the League of Legends Vulnerability, and the associated Forbidden Image Attack, common to Turing-complete systems including Ethereum. Needless to say, please do not actually conduct this attack—quite apart from the anti-social consequences, it would almost certainly be a suicide mission.

The Forbidden Image Attack is a two-sided attack: the LoL vulnerability is sufficient to induce a powerful off-chain actor to attack the blockchain even when the blockchain is a simple curiosity.

Blockchain advocates have always planned for adversarial situations, of course, but have mostly limited their wargaming to three contexts: (1) vulnerabilities to low-power adversaries (most canonically, the double-spenders); (2) vulnerabilities to high on-chain power adversaries (e.g., the miner pool 51% attack); and (3) vulnerabilities to powerful state actors in the context of reasonably equal power (e.g., in a wargamed future where the blockchain provides a plausible alternative to fiat currency, and the state actor has game-theoretic incentives to launch an expensive attack). In my view, Type (1) and (2) vulnerabilities have been well-wargamed by both crypto advocates and academics; a 51% attack by miners is possible but disincentivized simply by virtue of the downstream effects on confidence in the chain. Type (3) vulnerabilities (e.g., a major state actor gaining control of a chain by investing sufficient resources to crash the system via a 51% attack) are under-wargamed and mildly un-PC topics that almost certainly need to be discussed more; we may see the first examples if and when Ukraine attempts to use a blockchain to run essential government registration services in occupied territories.

Type (4) vulnerabilities, like the League of Legends vulnerability, are perhaps the most dangerous for development of innovative systems: they’re vulnerabilities that draw the attention of state power and induce action even when it seems game theoretically unstrategic for the state to act. They are the on-chain equivalent of SWATing, and (in my opinion) are the most likely vector for attack by malicious third parties.

The vulnerability gets its name from the urban legend that if your team is losing in an online networked game (such as League of Legends), all you have to do is start spamming the chat with forbidden phrases like “tiananmen square”, “june 4 1989”, and so forth; under the assumption that one’s opponents are playing from machines inside China, the danger of having these phrases attached to one’s IP address is sufficient to get them to log off.

In the West, of course, there are few forbidden words, but, horrifically enough, there are plenty of forbidden images, mere possession of which would be sufficient to have your computer confiscated and for you to spend many years in prison. The simplest Forbidden Image Attack on a chain like Ethereum is, then, to upload an image of the appropriate kind to the chain, contained as data associated with a smart contract.[*]

As most people know, Bitcoin has already been subject to a related attack, but the limitations of the Bitcoin chain mean that these attacks were limited to adding URLs that point to forbidden content, not the content itself. For a smart chain like Ethereum, however, mild steganography is almost certainly sufficient for the image itself to be uploaded.

The legal liability for possession of such an image is likely sufficient to cause serious trouble for miners and anyone else who wants to keep the state of the chain on their laptop. Congratulations, you’ve just been Forbidden Image SWATed. The current cost for such an attack is not trivial (100 kB is sufficient for a forbidden image, which translates into a few ETH in gas.)
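The arithmetic behind that gas estimate is easy to reproduce. A back-of-the-envelope sketch (the 20,000-gas-per-word storage cost is Ethereum’s published SSTORE figure; the gas price is an assumption for illustration, since prices fluctuate wildly):

```python
# Rough cost of storing a 100 kB payload on-chain.
# Assumptions: 20,000 gas per 32-byte SSTORE word (Ethereum's
# published figure), and an illustrative gas price of 50 gwei.

PAYLOAD_BYTES = 100 * 1024   # the 100 kB image
GAS_PER_WORD = 20_000        # SSTORE cost per 32-byte word
GAS_PRICE_GWEI = 50          # illustrative only

words = PAYLOAD_BYTES // 32                   # 3,200 storage words
total_gas = words * GAS_PER_WORD              # 64,000,000 gas
cost_eth = total_gas * GAS_PRICE_GWEI * 1e-9  # gwei -> ETH

print(words, total_gas, cost_eth)  # → 3200 64000000 3.2
```

At these (assumed) prices the payload comes to roughly 3 ETH, which is where “a few ETH in gas” comes from.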

What would happen if such an image were actually uploaded to the chain? On discovery, the most natural thing to do is for a centralized authority (e.g., that chain’s “Vitalik”) to announce that the valid chain is the one identical to the current one minus the offending image, and with the outgoing gas fees still paid. While this intervention undermines the decentralized aspect of the chain, it seems likely to be effective—if only because of the state-based penalties for not going along with it.

The second, more damaging version of the FIA appears when there is sufficient steganography that the nature of the image is not immediately apparent, and features of that image are connected to forward payments to non-involved actors.

Imagine, for example, you have a contract that includes a (steganographic) 100 kilobyte image that distributes ETH to other accounts on the basis of the content of the image: e.g., if the first bit of the image is a 1, pay X to address 1; if it is a 0, pay X to address 2; and so forth.
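The logic is easy to simulate off-chain. A toy sketch in Python, with made-up addresses and a placeholder payout X, nothing here touches a real chain:

```python
# Toy model of a contract that pays out according to the bits of an
# embedded image. Addresses and amounts are hypothetical throughout.

image = bytes([0b10110001])  # stand-in for the 100 kB payload
X = 1                        # payout per bit, in some arbitrary unit

def bit_stream(data: bytes):
    """Yield the bits of `data`, most-significant bit first."""
    for byte in data:
        for i in range(7, -1, -1):
            yield (byte >> i) & 1

balances = {"addr1": 0, "addr2": 0}
for bit in bit_stream(image):
    # a 1-bit pays address 1; a 0-bit pays address 2
    balances["addr1" if bit else "addr2"] += X

print(balances)  # → {'addr1': 4, 'addr2': 4}
```

Each payment now encodes one bit of the image in the public transaction record.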

The longer this contract stays undetected, the further downstream these payments propagate out. Address 1 receives an (unexpected) X, which it then fragments and pays (as part of ordinary life) out to others, who do so in turn.

Once the image is revealed, however, it now becomes impossible to remove it from the chain without requiring a complete recomputation of the downstream payments—a recomputation that requires possession, by the validating code, of the original offending image. The necessary intervention is now more serious: the central authority not only has to delete the image from the chain, but also to validate outgoing payments, i.e., to present a list of “mysteriously valid” transactions whose validity has to be accepted by fiat.

(In a truly insane society, this list of mysteriously valid transactions would be sufficient to reconstruct the image, and so would count as possession of the original — indeed, it would be a simple matter to distribute an “image reconstructor” that took the fiat validation and returned the offending image; it’s difficult to determine where the law falls on this.)

Yet more sophisticated versions of this attack are simple to construct: contracts that only create an image when the underlying data is XOR’d together, making it impossible to validate whether or not any particular contract makes one vulnerable. And as contracts get increasingly sophisticated, it will be increasingly difficult to audit code for these kinds of attacks.
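The XOR construction is a one-time pad: split the payload into a random key and its XOR, store each half in a different contract, and each half on its own is statistically indistinguishable from noise. A minimal illustration (the payload string is a stand-in, and a real attack would be more elaborate):

```python
# One-time-pad split of a payload across two "contracts".
# Neither half alone reveals anything; XOR'd together, the
# payload reappears. The payload here is stand-in data.
import secrets

payload = b"stand-in payload bytes"

pad = secrets.token_bytes(len(payload))               # contract A: pure noise
masked = bytes(a ^ b for a, b in zip(payload, pad))   # contract B: also noise

recovered = bytes(a ^ b for a, b in zip(pad, masked))
assert recovered == payload
```

Auditing either contract in isolation tells you nothing; only the pair is incriminating.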

The true danger here is not the need to delete the forbidden image — every reasonably open system, from Wikipedia to Open Source GitHubs, has this problem — but how a forbidden image can be computationally coupled to an exponentially growing network of transactions that involve innocent parties. These innocent parties are not vulnerable to arrest for possession of the image itself, of course—but they will inevitably bear the cost of cleanup, either through revocation of the transactions they receive that trace back to the forbidden contract, or through the inevitable centralization that the FIA imposes on the chain management. (A secondary harm, of course, comes from the repeated undermining of the decentralized potential of blockchain systems.)

One way to protect against the FIA is for miners to hold themselves to a higher standard in the contracts they are willing to validate and include in the chain. In this sense, the LoL vulnerability simply means that blockchains will need to grow up. In the same way that a good lawyer will refuse to help you draft an insane contract, a good miner will retain skepticism about a smart contract with 100 kB of apparently nonsense data.

The long term solution, however, may be to develop more rigorous methods for the unraveling and exclusion of forbidden content. On-chain voting, for example, in a DAO, would be sufficient, although it would undoubtedly draw blockchain technology closer and closer to the ever-present political frontier, with explicit discussion and debate about on-chain constitutions that are difficult to have in a non-centralized fashion. Even in this case, there’s the negative space problem: the cleanup necessary for a FIA is, assuming mild constraints on the reversibility of computation, computationally equivalent to generating a new forbidden image—it is up to legal systems to behave sensibly at this point.

(The truly long term solution is to live in a society where there is no forbidden content at all — but the question of whether this is possible, or desirable, I leave to the political philosophers.)

It’s important to keep the vulnerability and the attack separate: the League of Legends Vulnerability has other features beyond the possibility of an FIA. There are a number of points for further discussion and theory including:

(1) All social worlds have forbidden topics — things you can’t say. Liberal governments keep the list of topics very short, at least when it comes to the use of state violence, but there is a much wider gray zone of forbidden subjects that can damage a person’s social power. How should a chain respond to the inclusion of socially-damaging content? (One response is to say “to hell with the normies” — but if your chain is maintained and validated solely by people who literally have no social power to lose by anything more mild than literally illegal content, you might have an unusual incentive system and find your chain is run by the truly powerless and anti-social.)

(2) Different countries have different rules. The original LoL vulnerability, for example, gives the edge to players who aren’t located in the PRC. If you wanted to prevent miners from operating in Russia right now — or at least strongly disincentivize them — you can simply upload content associated with forbidden discussion of the Ukraine invasion; current penalties are 15+ years in prison. If you think it unlikely that the Russian government would pursue someone for simply keeping a copy of the chain on their machine, you underestimate the sadism: the very nature of mining likely amounts not just to possession but also to distribution, a new charge to be added to the prosecution of a dissident who just happened to be a pro-social participant in a blockchain experiment.

[*] my intent here is not to litigate the details of how such a prosecution might work; the example relies solely on the idea that there is some kind of data that it is forbidden to possess and distribute to others (and, secondarily, on the Fear, Uncertainty, and Doubt about what counts.)

Predoctoral Fellowship Opportunity at the Laboratory for Social Minds

Update: window for informal inquiries has now closed. We will be in touch shortly.

Update: deadline for this informal process is 15 April 2022.

Dear Colleagues and Friends,

Circulating earlier than usual. Our lab has pre-doctoral fellowship opportunities that may be of interest to early-career researchers.

Previous holders of the position found a congenial home in the interdisciplinary and low-barrier world of Carnegie Mellon — and have gone on to great things in their careers. The call below lists a few recent areas of research, but we can literally do anything.

While the fellowship has traditionally been “strictly” pre-doctoral (meaning prior to matriculation in a PhD program), I’m aware that recent events may have disrupted people’s PhD studies, and we are happy to consider a range of career stages.

Please send us your best!

Sincerely,

Simon


The Laboratory for Social Minds at Carnegie Mellon University runs an ongoing fellowship program here in Pittsburgh, Pennsylvania, USA. The Laboratory conducts research across the cognitive, economic, and social sciences, combining theory, data science, and experimental work to better understand humankind’s future.

Recent work has studied the cognitive science of explanation-making, the logical structure of mathematical proofs, the dynamics of political speechmaking, and the patterns of argument-making online. In the coming months, we hope to open two new areas of research: (1) the nature and future possibilities for human intelligence; (2) the counterintuitive dynamics of outlier events in human society. More information about the Laboratory can be found at http://santafe.edu/~simon

Early-career scientists at the pre-Ph.D. stage, or scientists whose Ph.D. has been interrupted by recent events, who are interested in exploring research opportunities with the Laboratory should contact Prof. Simon DeDeo, sdedeo@andrew.cmu.edu, with subject line FELLOWSHIP.

While the position is supervised by Prof. DeDeo, it demands a high level of autonomy. Interested applicants should propose, briefly and by way of introduction, a scientific research project, and enclose a recent CV. This position is in-person, paid, and full time only, with a minimum one-year commitment; fellows in previous years have gone on to great things. Citizenship is unrestricted; international travel assistance and legal aid for visas are available.

Five aphorisms on infinity

1. Physics advances, discovering infinity beneath the finite. Not one reference frame, but an infinite number. Not so-and-so many celestial spheres, but an infinitude of possible orbits. Even quantum theory, supposedly the source of discrete units, does this: not two states (“up” or “down”) but an infinite number (points on the Bloch sphere). Not so-and-so many particles, but a continuous field of constant fractional adjustment. Quantum theory’s great advance is to put the finite where it belongs, in the observer, not in nature.

2. The finite is constrained by the infinite beneath it. If the world were only a lattice of points, a discrete universe computer, what Luboš Motl once called “the smart twelve year old’s delusion”, then any physical law is possible—just as an Amiga is Turing Complete. Only the infinite can impose structure (local, holomorphic, renormalizable, etc.) and it is only with structure that we can go beyond the human-made. Infinity imposes a trans-human discipline.

3. A singularity is an infinity in which the logic of the infinite is suspended. No function can be continued through an infinity; or, rather, any continuation is as good as any other. Singularities are the re-emergence of the arbitrary nature of the discrete. They are points of exception, a physicist’s version of Schmitt’s state of exception, a political theory that announces its inadequacy.

4. The response to a singularity is a greater infinity. Example: the function 1/x contains a singularity at the origin, and the logic of the continuous real fails. Any function might be matched on the other side; the choice is pure whim. If we move to the complex plane, however, the new logic of infinity is not the continuous, but the holomorphic. The singularity at the origin may be navigated around, and we rediscover new constraints, can integrate, etc. In the same way, a political or ethical stance that requires an exception signals that the author, or the culture, has a new infinity to grasp.
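The standard worked example, for the mathematically inclined: on the real line the two halves of 1/x are unlinked, but in the complex plane the detour around the origin yields a definite, path-independent answer, and the boundary determines the interior:

```latex
% Integrating around the singularity gives a fixed, whim-free value:
\oint_{|z|=1} \frac{dz}{z} = 2\pi i
% and, for f holomorphic inside the contour, Cauchy's integral
% formula recovers f everywhere from its boundary values alone:
f(a) = \frac{1}{2\pi i} \oint \frac{f(z)}{z-a}\, dz
```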

5. Physics makes the finite a consequence of a logic of the infinite. In a parallel fashion, mathematics discovers the infinite, at higher order, in a logic of the infinite.[*] Philosophy is the physics of the normative. We have no name for its mathematics.

Addendum, Colin Allen: Worüber man nicht sprechen kann, darüber muss man schweigen, es sei denn, man ist Physiker. (Whereof one cannot speak, thereof one must be silent, unless, that is, one is a physicist.)

[*] consider: Descartes’ discovery of the continuous geometry beneath the algebraic logic of roots; all the way to homotopy type theory’s definition of equivalence in terms of the continuous deformation.

Announcing a New Course on the Science and Philosophy of Intelligence

PLEASE NOTE—the final syllabus has been sent to all registered students. The syllabus below is not the current one! (Because the final syllabus contains private links, etc, it won’t be public until the course is complete.)

In mid-November, I’ll be teaching a graduate level seminar on the science and philosophy of intelligence. Titled Future Theories of Intelligence, it will run through the “parallel academy” of the New Centre for Research and Practice.

The seminar proposes a four-fold division of theories of intelligence: intelligence as utility, computation, knowledge, and reflection. It aims to give students a foundation for rigorous speculation about, and at, the frontiers of thought.

It will run in four 2.5-hour sessions on November 14th and 21st and December 5th and 15th, 2021, from 2 pm to 4:30 pm Eastern Time; I’ll also run an optional crash course on Bayesian reasoning. The précis, below, is followed by some reflections on the possibilities and hopes for post-academic institutions. A (draft) full syllabus is available and will answer many of your questions.

Enrollment is open to the public, there are scholarships, and the course is part of the New Centre’s graduate certificate program with credit for both Critical Philosophy and Post-Planetary Universe Design (yes). If you are interested in funding student scholarships, please contact me.

I am told spaces fill up quickly, so if you’re interested, do please enroll here. Update: I’ve been asked about scholarships. Currently, the New Centre has a scholarship application for the certificate degree as a whole (of which this course is a part); that’s here. Scholarships for this seminar alone will be announced around October or so; you can contact organizers[at]thenewcentre.org for more information.


Seminar One: Intelligence as Utility

The ability to achieve one’s goals is often taken to be a hallmark of intelligence: cunning Odysseus rescues his crew. In the 21st Century, machine learning systems are made intelligent by selecting for the variants best able to achieve predefined goals. Simple accounts of biological evolution propose that human intelligence has a similar origin: Odysseus gets the girl.

Intelligence as utility is the claim that intelligence is nothing more than this. While the extreme version of this claim is untenable, the phenomenon of the intellectual-yet-idiot (IYI; N. N. Taleb) and the contrast with the street-smart ability to size up a situation and act appropriately suggest that there is more to this claim than meets the eye.

We will examine these ideas in a formal fashion through the framework of Bayesian Reasoning and Utility Theory. We will then present a major critique of these accounts, due to Frank Ramsey, which suggests that the simplicity of the belief-desire distinction is illusory, along with Taleb’s “black swan” critique. We will finally consider the possibility that intelligence-as-utility finds its place as a servant-intelligence in the capitalist price mechanism.
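By way of a concrete (and entirely invented) example of the machinery the seminar formalizes: a Bayesian agent updates a belief on evidence, then picks the action with the highest expected utility. All numbers below are illustrative:

```python
# Minimal Bayesian decision problem: two hypotheses, one observation,
# two actions. All probabilities and utilities are invented.

prior = {"rain": 0.3, "dry": 0.7}
likelihood = {"rain": 0.9, "dry": 0.2}  # P(clouds | hypothesis)

# Bayes' rule: posterior is proportional to prior times likelihood.
unnorm = {h: prior[h] * likelihood[h] for h in prior}
Z = sum(unnorm.values())
posterior = {h: p / Z for h, p in unnorm.items()}

# Utility of each action under each hypothesis.
utility = {
    "umbrella":    {"rain": 1.0, "dry": 0.7},
    "no_umbrella": {"rain": 0.0, "dry": 1.0},
}

# Choose the action maximizing posterior expected utility.
eu = {a: sum(posterior[h] * u[h] for h in posterior)
      for a, u in utility.items()}
print(max(eu, key=eu.get))  # → umbrella
```

On this account, “intelligence” is exhausted by doing this computation well; the seminar asks whether that can possibly be the whole story.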

Readings include Eliezer Yudkowsky’s Rationality is Systematized Winning, Book One of Plato’s Republic, Hayek’s 1945 “Use of Knowledge in Society“, and selections from Anthony Appiah’s As If.

Seminar Two: Intelligence as Computation

Ever since Alan Turing, computers have provided not only models of thinking, but also rigorous and abstract hierarchies of mental power based around deductive reasoning. The deductive-sophistication account of intelligence underlies most of the contemporary psychometric IQ tests, such as Raven’s Progressive Matrices, that now govern admission to the American meritocracy.

We first examine two basic concepts in theoretical computer science, the Chomsky hierarchy and computational complexity, which provide basic tools for understanding the idea that intelligence is related to the sophistication of deductive symbolic manipulations.
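A standard illustration of the hierarchy (my own example, not a course reading): the language of balanced parentheses cannot be recognized by any finite automaton, but a single counter, the barest form of a pushdown stack, suffices:

```python
# Balanced parentheses sit one level above the regular languages in
# the Chomsky hierarchy: no finite automaton recognizes them, but a
# counter (a degenerate stack) does.

def balanced(s: str) -> bool:
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:      # a closer with no matching opener
                return False
    return depth == 0          # every opener was eventually closed

print(balanced("(()())"))  # → True
print(balanced("(()"))     # → False
```

The unbounded counter is exactly what finite-state machinery lacks, and that gap is the hierarchy in miniature.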

We then consider its challenges. A major advance in cognitive science was the discovery that human reasoning proceeds along inductive and abductive, rather than deductive, pathways. We contrast these more modern theories with the deductive account, and provide worked examples. These are drawn from Bayesian cognition, the dominant model for non-deductive processes, and challenge the possibility of a culture-neutral intelligence assay that works along deductive lines.

Readings include selections from Raven’s Progressive Matrices, from Chomsky’s 1965 Aspects of the Theory of Syntax, Scott Aaronson on Computational Complexity, selections from Charles Sanders Peirce, and recent work from our lab on mathematical reasoning and explanation making.

Seminar Three: Intelligence as Knowledge

Intuitively, intelligent people might be expected to know things that others do not. The knowledge account of intelligence suggests that what matters is the nature of this knowledge, rather than the process that brought it about. Knowledge accounts range from the tacit and example-based intelligence described by Hubert Dreyfus in a Heideggerian mode, to the entrance exams of the British elite, to the mystical and transcendental experiences reported by religious and philosophical authors as the highest form of cognition.

While the computational account of intelligence is often associated with American liberalism, intelligence-as-knowledge is sometimes claimed by writers with aristocratic or fascist tendencies. We will touch briefly, and critically, on this association.

Readings include selections from Eton College and All Souls Scholarship Examination Questions, selections from Emerson and Coleridge, and Plato’s Symposium.

Seminar Four: Intelligence as Reflection

Reflection is the use of thought to examine thought. While reflection is at the heart of the Western philosophical tradition, it was not until the post-Kantian era that intelligence was associated with the act of reflection itself. Under this account, we are intelligent only in as much as we reflect on our mental actions, incorporate these reflections into our thoughts, and have the courage to give ourselves over to the process as it unfolds. 

The capaciousness of this idea gives reflection a dual role. Reflection is not only intelligence, but also the process by which intelligence is understood. This means, in turn, that accounts of intelligence as reflection have a dynamic appearance: most famously in the Hegelian dialectic.

The position of this course is that the reflection account of intelligence is both older, and younger, than Hegel. The dynamic nature of Greek dialectic provides one example from the early history of intelligence, and we examine recent suggestions that intelligence is to be found not in the individual, but in the development of public argument-making and reason-giving systems that are necessarily beyond us—a Turing-complete price-mechanism.

Meanwhile, 20th and 21st Century concepts from intelligence as computation provide a post-Hegelian perspective on intelligence as reflection. If one can realize thought’s ultimate powers only by thinking about thinking in an unbounded and referentially promiscuous fashion, basic features of intelligence gain meta-logical and meta-mathematical properties. This provides a challenging, synthetic account of intelligence—that intelligence is to think the impossible.

Readings include Mercier and Sperber’s Why do humans reason? Arguments for an argumentative theory, selections from C. Dutilh Novaes’s Dialogical Roots of Deduction and Reza Negarestani’s Intelligence and Spirit, and Seth Lloyd’s A Turing Test for Free Will.


The New Centre and the Future of the Parallel Academy

The original parallel academy, in the Anglo-Saxon tradition, was the University of Cambridge, which splintered from proto-Oxford in 1209. While the inciting incident was apparently a series of murders, the marshy fens of East Anglia provided an alternative universe that nurtured and celebrated mathematical talent. The University itself was submerged beneath a complex of colleges, one of which, King’s, held probably the most exceptional collection of intellectual ability since Plato’s Academy. Parallel academia, in other words, is a tradition almost as old as the University itself.

It is also due for renewal. Intellectual life—meaning the quality and diversity of ideas in circulation—has never been stronger. It is also clear, however, that our traditional institutions are under significant strain. We are capturing an enormous amount of light, but we lack a power grid adequate to the task of distribution. The current paradigm has been elite education for the few; slowly-defunded education for the many; and large-scale MOOCs for the masses. And yet I’m constantly amazed by the intellectual activity of people well beyond these spheres, in the vast networks of podcasts, blogs, and self-education on YouTube.

Auto-didacticism, however, can only go so far, and one of the ironies is that many of these communities flourish only for a time before falling to the iron law of oligarchy. One day, perhaps, someone will teach a course on the political theory of the academy—on the ways in which decentralized governance can steer intellectual communities away from the attractor sets of autocracies and cults. We don’t really know how good systems work; the old joke was that successful communities work in practice, not in theory.

Untheorized or not, alternative paradigms today include not only institutions such as the Brooklyn Institute for Social Research, but sprawling communities built around sites like Less Wrong. The New Centre is one of the most interesting parallel-ac institutes currently operating. It involves some of the most interesting intellectuals outside (and in) the academy — the list goes on and just kills me. Many of these people operate at the intersection of the cognitive and computer sciences with ideas from post-analytic philosophy and a t+ orientation towards the future that makes Future Theories of Intelligence an excellent fit.

No Safe Level of Use: Strategies for Post-Social Media

2003:
[Cueball approaches a bearded fellow.]
Cueball: Did you get my essay?
Bearded Fellow: Yeah, it was good! But it was a .doc; you should really use a more open-
Cueball: Give it a rest already. Maybe we just want to live our lives and use software that works, not get wrapped up in your stupid nerd turf wars.
Bearded Fellow: I just want people to care about the infrastructures we're building and who-
Cueball: No, you just want to feel smugly superior. You have no sense of perspective and are probably autistic.
2010:
Cueball: Oh my God! We handed control of our social world to Facebook and they're DOING EVIL STUFF!
Bearded Fellow: Do you see this?
[Inset, the bearded fellow rubs his index and middle fingers against his thumb.]
Bearded Fellow: It's the world's tiniest open-source violin.
[xkcd Infrastructures, https://xkcd.com/743/]

TL;DR: your voyage back from Mordor.

In a previous post, The 11th reason to delete your social media account, I talked about reasons for leaving social media—not the reasons that we all agree are “good reasons to leave social media”, but the actual reasons that got me, at least, to leave. In this post, I want to talk about what you do next.

The intended audience for this post is academics, journalists, intellectuals, writers, and artists who are used to using (ad-supported) social media to share their work and interact with their colleagues and the wider public. You may be writing for a public. You may be part of a community or a collaboration too big for a CC list. You may be teaching a course—some of this thinking was spurred by the fact that we’re working on a new open-access humanities analytics course that’s taking place primarily online. To be clear, the following is solely my own thinking, not that of my colleagues, and is in no way an official policy of anything.

No Safe Level of Use / The World’s Tiniest Open Source Violin / Dealing with the devil / Small is beautiful

No Safe Level of Use

The primary insight is one I share with Jaron Lanier, whose book Ten Arguments to Delete your Social Media Accounts Right Now has a self-summarizing title. Many of us think of social media as a glass of wine—a harmless indulgence at low levels of use, and a total blast on a special occasion. I used to think this, too, but I now think that it’s much more like the modern cigarette: saturated with highly addictive chemicals, with only the most surface-level social benefits, and with a near-guarantee that a significant number of users will suffer long-term damage.

Nearly all of this harm comes from the simple fact that, in ad-supported social media, you are the product. Rather than rehearse the arguments that Lanier, I, and many others have provided, I’ll just direct you to my own piece (if you like, linked above) or Lanier’s (ditto). Take a look, see if you agree; if you do, bookmark this page, delete your social media accounts, and join us back here when you’re done. I’m happy to wait.

Here again? OK. If you still want to use social media, the rule is pretty simple: don’t talk back.

Ad-supported social media should be used solely as a way to provide links to your content off-site. Do not use these systems to engage with the public, even if it’s in an informal fashion. Avoid, for example, using Facebook groups to organize discussions, and ask your collaborators not to encourage or start groups themselves. When Tweeting, turn off replies.

It makes sense to take a hard line on this. Not only should you not have official Facebook groups, for example, but you should discourage them from forming on their own. Social media thrives on network effects, and the more people are on these systems, the more reasons others have to be as well. (This means, in turn, that you have to provide functional alternatives to the services these groups used to provide—more on that below.)

This advice goes against the standard idea that social media is a levelling platform, and that it’s a way for institutions and leaders of whatever form to talk, in an equal way, to the public. Facebook, certainly, encourages this view. Its slogans and mottos focus, in a repetitive fashion, on the way in which they “connect” people together. Twitter cultivates the sense that you can “@” any celebrity you desire. Cutting off discussion, in their ideology, is simply undemocratic.

If you’re only using social media to sell shit, however, I think that just means you’re doing it right. You are, at long last, after many decades, the customer rather than the product. It’s deeply unclear to me that the “engagement” that social media provides, and obsessively presents back to you in the form of likes, retweets, and comments, has any beneficial function at all—or at least, any benefit that outweighs the many costs and risks incurred to both you and your readers.

An advantage of the don’t-talk-back policy is its simplicity. There’s no grey zone where you ask about the appropriate level of engagement. The addictive and damaging nature of these systems for you and your readers means that, as with tobacco, there is no safe level of use. You simply switch off, treating social media as, if desired, a one-way advertising medium. Use the billboard, but don’t pitch a tent underneath.

Once you realize that the purpose of social media is to sell advertising, it may even make sense to simply buy ads. Rather than have someone manage your Twitter and Facebook presence, you can simply purchase the same service directly—and that person’s now freed up to do something much more interesting.

The World’s Tiniest Open Source Violin

The very 1990s next-step is to go decentralized and open-source. You already use one example, e-mail—something that Facebook would very, very dearly like to eradicate. You use another one, roughly what we call “Web 1.0”, all the time. You interact with it by typing things into your browser’s address bar, or via a search engine like Google. Web 1.0, of course, is also something Facebook would love to eradicate. They would far prefer that your entire window onto the internet was provided by their news feed, but in this case, Google and Facebook are at war, and it may be the reason why, for example, blogs like this are still possible. Web 1.0 is Yugoslavia.

More modern, open-source, decentralized content distribution is still possible. USENET might be dead, but RSS, for example, is still around, as a reader informed me. If RSS does return in force, it will be because it is, essentially, a way to curate your own newsfeed. Collaborators of mine have used IRC to ask questions and network with other researchers on technical projects.

This is not something I can speak to well, and it’s been a long time since I’ve looked into the open source world in any serious fashion. Our mainframes run Gentoo Linux, but it’s not something I’ve ever tried to build a classroom or a journal around. If this is something that appeals to you, you’re probably far more technically savvy than I am, anyway—and I’d encourage you to blog about it.

For those who are open-source fanatics—I love you. Let me tell you what’s preventing us all from (re)joining the revolution. A major issue with open source software as it currently exists is that it still has both significant accessibility problems and a difficult learning curve—as I learned when I tried to get RSS up and running on my Mac, which is still running 10.12.

When it comes to social interaction it is also, too often, written “by guys, for guys”, or at least with a very early-90s attitude towards peer-to-peer privacy. Open-source systems may lack robust ways to block stalkers, and they may have no clear mechanisms for reporting harassment. This is true for places like Twitter as well, of course, but that’s no excuse.

They may also leak private information to other users. This is often done with good intentions (establishing a peer-to-peer IP connection may be a great idea), but with less awareness of how it might facilitate stalking and doxxing. Social media systems leak plenty of personal information, of course—but, generally, only to advertisers, government agencies, and overseas dictatorships. Their data is far too valuable to leave a public attack pathway open for long.

Open source also requires maintenance. We run Gentoo Linux on our mainframe and in the end it does get tangled up enough to require more than my own relatively weak UNIX knowledge. Even if we switched to a more user-friendly distro, I’d still be struggling. Attempting to update our system to deal with the OpenSSL Heartbleed bug gave us a week of downtime until I found Ted (listed in my phone as “Ted UNIX Guru”) to unbrick everything. I love you Ted. We just hadn’t run emerge often enough. So I set it up to run occasionally using crontab. Have you ever tried to use crontab? I am worried that if I check to see if emerge is actually running via crontab I will be dropped into a vi editor in real life.
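For the curious, the entry ended up looking something like the sketch below. The schedule and flags here are illustrative, not a recommendation—your own Gentoo setup may want different ones:

```
# system crontab fragment (a sketch): sync the Portage tree and
# update world weekly, Sundays at 3 a.m.
# fields: minute hour day-of-month month day-of-week user command
0 3 * * 0  root  emerge --sync && emerge --update --deep @world
```

The point being: this is the easy part. It’s when the update breaks that you call Ted.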

Please do not delete me, open-source people.

Dealing with the devil

The more practical solutions, in the short term, are commercial endeavors, where at least some of your control is ceded to a corporation. These can often replicate the best of an open-source system. The most obvious is Slack, which is essentially IRC with a GUI front end, but there are also delightful, small-scale systems such as pinboard.in, a very calm, small-scale bookmarking service that I’ve been using for years.

In most cases, you should expect to be paying for these systems. If you’re not paying, you’re the product. Unfortunately, things are not that simple, because the most sensible solutions—subscription-based systems where you pay money so that some UNIX gurus can run your software—are not the easiest for their creators to take public. Lock-in and network effects are far more often the name of the game. In some cases, like Clubhouse, it just feels like a matter of time before they go to an ad-supported model and in the meantime I’m almost certain they’re hyper-optimizing for time-online. Hopefully it will pay off, at least, in some new speech-processing algorithms that we can use to study, I don’t know, poetry readings for an article in JCA.

Not every corporate solution has an ad-driven pathway. Slack, for example, wants to take over your university. So does Zoom. Their goal is to network you in so that you ring up your dean and demand they pay the yearly subscription. To a certain extent this is bad, but it’s not as bad as the Microsoft Word lock-in. It’s pretty easy to walk away from both Slack and Zoom; Slack gives you access to the signup e-mails on your system, so you can move people over to a new system, and Zoom is (sorry Zoom) as easy to drop as Skype was. Academics are increasingly walking away from Elsevier, and that’s even given the fact we are the worst status hounds of all time when it comes to journals.

Part of the reason these are much less dangerous is that they’re small scale. Lock-in scales as N², or even e^N, in the number of people. That means it’s exponentially easier to walk away with a community of a hundred, a thousand, or even twenty thousand, than it is with three billion. Another reason is that they at least encourage deliberation. Zoom makes money if we call our dean to ask for a Zoom subscription. Twitter makes money off of Gamergate. Your call.
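A back-of-the-envelope sketch, in Python, of what that scaling means: even on the conservative N² (Metcalfe-style) assumption, the web of pairwise connections a community would have to rebuild elsewhere grows explosively with its size. (The function name and sample sizes are mine, for illustration only.)

```python
def pairwise_connections(n: int) -> int:
    """Distinct pairs among n people (n choose 2): a rough proxy
    for the web of connections that locks a community in."""
    return n * (n - 1) // 2

# Walking away with a small community vs. a Facebook-sized one:
for n in [100, 20_000, 3_000_000_000]:
    print(f"{n:>13,} people -> {pairwise_connections(n):,} pairwise connections")
```

Twenty thousand people is a big community, but its pair count is still eight orders of magnitude short of three billion’s.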

In other cases, it’s a mix. I pay for WordPress to run the software behind this blog, but they also (god bless them) seem to want a whole social network to emerge from their user accounts—good luck guys, really glad for all of us that’s not working out. Unless I’m missing something, the mild fiscal incompetence of WordPress is terrific news for good writing.

Substack is an interesting mix. When it comes to serious, occasional, long-form writing, I think that it’s better to be extremely normie and work with an editor and a publication. As I mentioned in my previous post, it doesn’t have to be the New Yorker—there are plenty of potential venues for people, and it’s certainly my choice for serious writing. Is this post going on too long? Confusing or hard to read? That’s because I didn’t have an editor. Editors broaden the ideas available to the public by helping more people communicate better and more clearly. Do not reject the editor. Editors are why The Waste Land is so good.

Compare, for example, the Twitter discourse on “woke culture” to Blake Smith’s recent article Woke Meritocracy in Tablet. Blake’s writing is clearly the product of someone who’s been thinking seriously for many months, if not years, about what he’s going to write—and with an editor, and the support of a publication, that encourages that thinking. (Unless Tablet is very different from every other publication I’ve worked with, Blake’s article almost certainly benefitted from an editor.)

But for those “blogging as a service”, Substack does a terrific job. I’m a subscriber to, among others, BIG by Matt Stoller, Outside Voices by Glenn Greenwald, Age of Invention by Anton Howes, and the Lindy Newsletter by Paul Skallas. The last time my non-social media reading list was that full was back when I had an RSS reader. I don’t think I’ll switch over to Substack, but that’s in part because I’m too infrequent as a “public” author.

Substack is an example of a corporation doing what open source, alone, cannot[*]. Would you type your credit card into this WordPress blog? How about creating an account on PayPal to send me money, and also signing up for infinite spam? Substack solves the trust problem, and may be the first system to at least partly solve for writers the micropayments problem that Lanier, and every single webcomic artist ever, has been struggling with for years. I subscribe to Greenwald (the others are completely free), who tells us that the funding he receives is enough to literally hire an editorial team.

The bright line is to avoid anything that is ad-supported. This is the fundamental conflict of interest at the heart of modern social media. Of course, any centralized solution, ads or not, is going to have questions of power, profit, voice, and control. Figuring out how to navigate those questions is what keeps economists and political theorists in business. I am 1990s enough to think that there are technological solutions that can ameliorate some of these problems, but I don’t think we’ll ever enter the final boss-level End of History.

Small is beautiful

Completely walking away from social media seems insane and impossible—until you do it. Then you wonder why you stayed.

This post has been about your “public” life as a creator. Taking yourself off of social media as an academic or author seems like a distinct step from disengaging personally, and one that—even given all of the excellent advice above—may seem like a step too far.

Be open to the idea that it’s not.

In previous iterations of culture, the artists, writers, and intellectuals of the world expected to be under the radar. Everything from Bebop to zine culture was, if not confrontational, esoteric, or at arm’s-length from received perception, at least anti-corporate by design. The idea that you’d take your work “to the big time” was (to use the modern vernacular) a cancellable offense. Virginia Woolf might have obsessed about her social life, but she did not struggle with a desire to go viral to the people of London.

Systems like Facebook and Twitter leveraged cyberpunk culture to convince the world that they, too, were punk rock. The problem, according to the large-scale social media systems, was always The Man—the gatekeeper, the editor, the institution—and they were there to rescue you. You could use them as blank canvases, to speak directly to the people who would speak back to you. We’d move fast and break things.

That’s a fiction. We didn’t get the Harlem Renaissance. We got stuck culture.

That’s because fame, not creation or learning, has always been the primary function of social media. In the end, of course, nobody actually gets famous—or, if they do, it destroys them, as you can see every day on Twitter when someone with eighty followers wins the Shirley Jackson Lottery and has, for a brief, glorious moment, the attention of millions.

Which is because René Girard’s theory of Mimetic Desire turns out to be right (for a brief introduction, see the lovely but insanely-titled I See Satan Fall Like Lightning). Girard’s insights may still be a solid investment strategy, but struggle, scandal, and, eventually, mob violence—either symbolic or real—are the natural end state of a society governed by the drive that social media monetized.

Maybe, just maybe, delete your account.


[*] It’s worth noting, from an anti-monopoly perspective, that Substack had to go all the way back to e-mail for its business model—e-mail being one of the last heritage open-source components of our contemporary information economy. There’s simply no more modern system available, because nearly everything else is controlled by one of the FAANGs.

Given that Gmail now controls most people’s inboxes, and can alter and filter what their users see, even this platform is under threat; my guess is that the first censorship scandal will involve Google shifting undesirable content to the “promotions” tab. Deleting e-mail, and replacing it with a for-profit alternative, has been “frighteningly ambitious startup idea” number two since 2012 at least—fortunately, given that e-mail is still around, it seems to be the wrong question.

The 11th Reason to Delete your Social Media Account: the Algorithm will Find You

TL;DR: outrage mobs aren’t a bug. They’re a feature.

After the introduction, there are five parts: the algorithm is real, the algorithm wants you online, the algorithm will find you, walk away from the algorithm, no, but seriously.

Update: a new post on charting a post-social media future.

Update: a nice piece by philosopher Anne-Sophie Barwich, who also deleted her social media accounts this year.

Introduction

A few years ago, Jaron Lanier wrote Ten Arguments to Delete your Social Media Accounts Right Now. Lanier’s book has the helpful feature of being completely unambiguous in its message (when, Jaron, when should I delete them? Oh). I ended up assigning it as optional reading for my undergraduate class, Bubbles. The Thanksgiving break means that students usually patch out that week and miss class, so I run an optional seminar instead. I’ve learned a huge amount from these little liminal-moment seminars each year, and some of them have led to real revisions in my own thinking, see, e.g., my views on University censorship when I was on Jim Rutt’s Currents podcast. In previous years, we read John Locke’s pluralistic Letter Concerning Toleration, but Lanier’s book has the advantage of not needing any coaching in close-reading.

That year the near-unanimous response from the students was to reject the book. Only one student (of ten, or so) had sympathy with the view, and wrote a fascinating (again, optional) essay later that semester. I was surprised by the support the students had for their lives on social media, and while a few of them felt that being on Facebook (or similar Facebook-owned systems) wasn’t quite optional, they felt the benefits outweighed the downsides.

Of course, I didn’t follow Lanier’s prescription either. I had deleted my Facebook account a year beforehand, but had an active Twitter habit. While I felt Lanier’s arguments were dead-on, they were not, as the philosophers say, dispositive: they didn’t settle the matter for me.

My views on this have shifted a great deal, however, and quite rapidly. I want to talk now about the reason I deleted my Twitter account a few days ago, pulling me entirely off social media. For me, it’s the 11th reason, since Lanier’s weren’t enough. (I tend to think in terms of reasons, which can be accepted or rejected, rather than arguments, which attempt to persuade.) From here on out, to be clear, I’m speaking not as a researcher, but a private citizen.

The 11th Reason is that, eventually, the algorithm will find you. This is very bad. It may have already happened to you (and you may not know it yet), but if it hasn’t, it’s basically a matter of time.

“The algorithm will find you” has two parts to it. On the one hand, the algorithm will find you meaning that it will discover you as a source for others, and direct them to you, in potentially disturbing ways. On the other hand, the algorithm will find you meaning that it will discover how to keep you online—regardless of the cost.

The Algorithm is real

Being “found” by an algorithm may seem a little science-fictional, but it’s not.

First, a social media site like Twitter or Facebook is gathering extraordinary amounts of data on you. For example, when you type something into a status-update box and then delete it, that text is transmitted to their servers. The location of your cursor on the screen, your hesitations, where you linger as you doom-scroll—all of these things are logged and transmitted.

Second, your identity is constantly tied to other places on the web. You may have noticed this when you make a purchase that violates your ordinary patterns. It’s amusing to discover that the Internet thinks you’re the opposite sex because you’ve purchased a gift (or even simply considered doing so), or that you have some addiction (so half the ads are selling you the addiction, and the other half selling you counseling to get out of it).

Third, you are one of hundreds of millions. Not only do these companies have extraordinary access to your micro-actions, and to your own personal context, but they have an enormous training set to determine how people “like you” behave. They can model you both as an individual human being and as a demographically micro-targeted one. Signals that are invisible to you at the personal level, or even across your entire life experience, are plain as day to them.

Just as an example: you may have seen a friend go down a bad path, say, alcoholism. You may have been watching the signs of that pattern for a while, with increasing concern. These might even be quite subtle and early on—e.g., that he lingers a bit hoping for an extra drink, hesitates a second or two, before leaving. You’ve learned to spot these things from personal experience, perhaps a television show, or an online article.

Social media has a database of the micro-actions of (depending on how you define it) millions of people who struggle with alcoholism. Their data includes things that are necessarily below conscious experience, let alone learnable by a human. Although it’s not labelled as such in the algorithm’s internal workings, social media knows your friend is an alcoholic before you do, and probably before he does.

This is real. Social media companies used to give academics access to some of this data. I know that they log what you type but do not “send”, because of an interesting article that was written on how people have second thoughts on what they Tweet. Colleagues on the other side of the corporate wall have talked about the micro-data. The cross-platform tracking is an open secret. At some point, the companies realized that spreading this about was bad PR and largely cut the academics out.

Social media data collection violates every single expectation of privacy and personal sovereignty you have.

And not just you. Everyone else, as well.

The Algorithm wants you online

This is simple. Social media companies make money by selling ads. As far as I can tell, the underlying data is too valuable to sell (LinkedIn may be an exception—this seems like a dual system). The longer they keep you online, the more money they make. The algorithm is fine-tuned by thousands of extraordinarily good people with degrees not just in computer science, but social psychology, behavioral economics, and beyond.

The goal is to figure out how to keep you online, how to create the circumstances under which you are kept online, and how to shift your own preferences and behaviors in order to make achieving the first two goals easier and more decisive. The third goal—making you into a person with different values—doesn’t have to be an explicit goal of the system. It’s just what happens when you build a really good reinforcement algorithm.

One way to think about why this happens is as a post-selection effect. Some sites may have other—even noble—goals. These goals compete with the desire to simply keep people online longer. Revenues decline. They are bought by a more ruthless company. Facebook, for example, according to a calculation by Matt Stoller, is making millions of dollars off of QAnon. QAnon keeps people online.

In short: as has been said many times, you are the product. The longer you’re online, the more use they get.

The Algorithm will find you

You are unique. This is both part of the American Ideology, and actually true. You have built, or are building, a life that is the product of an enormous number of decisions you’ve made and ideas you’ve formed. You’ve done this in a context that may have been more or less oriented towards your flourishing and empowerment, but nobody has navigated it the way you have. Even if you have an “ordinary” life from the outside, you have an inner life that is anything but.

That means that, at first, you’re very hard to model. Social media doesn’t exactly know who you are. They have some idea from what you’re posting, and who you’re following, but you’re not like your colleagues or your friends. Your life trajectory is not like any others.

But the longer you are on, the more data accumulates. Some of these signals are, as discussed above, extraordinarily weak—things that you can’t even notice because they happen too rapidly for conscious experience (~100s of milliseconds). Others may well be above the awareness threshold for you, but not their cumulative meaning.

At some point, the algorithm finds you—it determines how to increase your time online.

An example: early on in my Twitter use (I was going to say “career”) I saw a Tweet from Stanford Libraries that they had digitized a significant chunk of the transcripts of the French Revolution. I was teaching class that afternoon, so I did a little exploration and put up some preliminary results as an example of how to explore datasets. Three years later, with colleagues in computer science and history, we published an award-winning paper based around that data.

From the point of view of Twitter, however, this is a massive fail. I saw the Tweet, and logged off to work on it. The algorithm, if it was watching right then, learned to give me less of that. (I’ve often wondered if the algorithm is actively choosing what to feed you for epistemic reasons—i.e., not just trying to keep you online, but feeding you things that it thinks will best increase its knowledge of what, in the future, might.)

What keeps you online, of course, may be to your benefit. I’ve learned a lot from staying a bit too long on Twitter—e.g., that there’s a deep relationship between Lorentz transformations in special relativity and (wait for it) logistic regression. I’ve also kept people online to their benefit, with my occasional Tweet storms on (say) Kullback-Leibler divergence, the Many Worlds Hypothesis, the fact that OTC vitamin supplements are almost certainly harmful in every circumstance, and beyond. I’ve worked out some interesting ideas, and I’ve been fed really interesting information by people who know a lot more than I do, and who come from worlds that I don’t usually encounter.

But eventually the algorithm finds a way to push your buttons. It figures out which content is going to cause you to engage in a compulsive fashion. Jung would call this constellating a complex: drawing out what is maladaptive in your psyche.

For some people that might be a rage-spiral of political content; for others, interpersonal conflict or a desire to poke the bear—not in a beneficial way, but in a way that, in retrospect, reveals and magnifies what is aligned against your own flourishing. Less noticeable, but I think no less common a response, is lethargy, passivity, fatalism, anomie; somewhere in between is “aggrieved entitlement”, or the projection of self-scorn, and so on—the list is at least as long as the list of positive qualities, a shadow-list of their inversions and distortions.

In the meantime of course, social media is modeling everyone else as well. Among other things, they’re figuring out how you can be used to keep them online. If you’ve ever pushed someone’s buttons before, you know that’s literally part of the definition of not a good idea. That process is monetized and run on a grand scale. Outrage mobs are the most extreme version. They’re not a bug. They’re a feature.

At this point, you have a branch point. If you recognize what’s happening, you run. If you don’t, you go deeper until something goes really deeply wrong, and you walk.

Personally, I think I got lucky, because the algorithm found me twice in two days. The first time, it found a complex (let’s call it) and used it to keep me online. The second time, it used me to keep others online. It was the conjunction of the two that made it hard to ignore—if there had been some space between them, I might have dismissed both of them in turn. But it had become clear to me that there was a hidden common cause in play (a co-explanatory account).

What those events are isn’t particularly important, but I’ll describe them anyway, in case they ring a bell. I’ve learned from totally well-adjusted and respectable people that similar things have happened to them as well, and it’s been very useful.

In the first case (the algorithm finding my complex), I was confronted with information that was not only wrong, but being (I felt) used for political ends and to (in my opinion) disempower and manipulate people. I knew this information was wrong because I had gone back to the original data and done a statistical analysis. I then spiraled out to other datasets, all of which turned out to be consistent with my original findings.

I never encountered anyone of the opposite opinion who had done this level of work. This made me increasingly aggressive in arguing with others on the point, until I was generating sufficient aggro in myself and others that I was online not to talk about error analysis for a binomial distribution, but about how awful people were. When I finally logged off, I felt exhausted, drained, and (most importantly) ego-dystonic. I couldn’t sleep. This is not the person I am, I thought, but it made me afraid that I might become that way.

In the second case (the algorithm leading people to me), it was a discussion of climate change and cryptocurrency. For the first half hour, this was a discussion, mildly heated, the (to my mind) rough-and-tumble Twitter thing.

At some point, however, the algorithm discovered that this was an excellent way to keep cryptocurrency proponents online. A large number of these people were channeled by the algorithm to my account, about a thousand of whom sent personal attacks in a chained escalation of counter-speech. Being a (very small-scale) victim of an outrage mob was extraordinarily disturbing in ways I won’t go into, and I hope it doesn’t happen to you. About a hundred of these people also followed my feed, presumably hoping for more opportunities in the future. Twitter, of course, made money off the whole process, selling ads to everyone in between their Tweets.

(As a side note, I reported one Tweet for advocating offline harassment. Twitter rejected the complaint in minutes. My immediate response was—after all those amazing Tweet threads on information theory, you won’t protect me? No benefit of the doubt? But of course not. Twitter is not your publisher. You’re its product.)

In both cases, the algorithm was at work. In the first case, not only was Twitter presenting me with information that was keeping me online, it was also drawing others into the conversation that it thought would keep me online. I followed perhaps six thousand accounts. Perhaps it might have found another one that afternoon?

In the second case, for my Tweets to make it into Bitcoin Twitter required not only that they reach one of those people, but that others be presented with that person’s response and be themselves drawn in. (If you feel that I’m talking about you, I’m probably not—regardless, only one piece of writing on Bitcoin of mine is now online, which I very much doubt will be found offensive.)

I’ve talked here largely in the passive voice: as something the algorithm is acting on, rather than as an agent responsible for my own actions. I think that’s OK. Agency is, in part, about (1) avoiding things that push your buttons, and (2) figuring out, reflectively, what those buttons are really about so (1) is less and less necessary. In this context, agency is turning off social media, in part because (2), in the presence of the algorithm, is never-ending. The algorithm not only seeks out your buttons, but learns how to cultivate, and magnify, the ones that you had dealt with, in ways that are essentially invisible.

Walk away from The Algorithm

In retrospect I wonder how long ago the algorithm had found me. A month beforehand? A year? Three years? Of course, I’ll never know. I escaped hitting “rock bottom”, which is the traditional wake-up call. But the algorithm works on super-human scales, at levels of subtlety that we can’t approach.

Has the algorithm found you? For some people the tell might be whether you’ve sought out an argument online that you would never have done in person. For others, the tell might be quite the opposite—the cultivation of an anomalously submissive personality; as the kids say, you’ve become a simp. Maybe you’ve gotten depressed and increasingly reliant on the online support of strangers far away who just happen to be on the site you use. ADHD? It may not end well. I don’t think we’ve begun to catalog all the different ways in which things go wrong. What’s much more worrying is that you might not notice. My guess is also that the algorithm uses people in different ways: e.g., it likely uses women more as a means to keep others online, using them to push other people’s buttons.

Will the algorithm find you? I think it’s certainly possible that it might take a long time, if you’re on-again/off-again. You may benefit in the meantime, the way I stumbled across the Stanford data, before the algorithm finds you.

There’s one very unambiguous way people discover the algorithm has found them, of course. Somehow, something they’ve said offhand—or, more usually, something someone else says about what they’ve said offhand—goes viral. They lose their jobs, their livelihoods, everything. The other person often does too. That’s when social media hits the jackpot. You’ve probably looked in on a few trainwrecks yourself. They’re made for you. Social media sold you ads while you looked, and learned a little more about you. The people themselves are collateral damage.

People like social media. It reminds me of how people used to smoke, but only at parties. There’s certainly a social benefit to being able to talk to a stranger to ask for a light. But the downside is too high. I’ve come around to the view that social media is like tobacco: there is no safe level of use.

There’s also second-hand smoke. Jaron Lanier talks about collective effects. You being online draws other people online. It’s not, in other words, just that the algorithm harms you. It is also using you to harm other people. So there’s a second important benefit here, to leaving, which is a moral one. Stop harming people.

I appreciate Paul Skallas’s suggestion that Twitter is like a bar, but at some point the experience shifts. It becomes the very (un-Lindy) social form of a highly addictive, ammonia-laced cigarette. There are instructive dis-analogies. Bars have bouncers, for example. They’ll throw people out—including you!—if things get too intense. They have an ambiance, which provides common expectations for behavior. All of these things are helping both you and the people around you.

No, but seriously

Delete your social media account. Facebook makes it a bit tricky (you have to Google how to do it), but it only takes a few clicks. There’s a waiting period of thirty days during which you can change your mind. (Notably, there’s not a thirty day waiting period to create an account—how odd.)

The only exception I can think about is running an “institutional” account. But if you’re doing that, you’re a communications professional, you are literally paid to do it, and I don’t see it as a harm unless personal boundaries get crossed. The social media companies will be manipulating your business, not you; that’s a different matter. If you’re running a freelance “brand” I think it makes sense to have an account that posts links to your work off Twitter, but only if that posting is automatic, replies are disabled, and you don’t log on. There is currently no platform-neutral notification system. Meanwhile, and just as an example, I’ve seen advice directed at early-career academics, that having a social media presence can be used as a way to communicate your research. I think this is a terrible idea, for the reasons outlined above.

In as much as you’re a person on social media, the algorithm will find you.

I’ve wanted to stay very focused on a single 11th reason. I will say, briefly, that you can get every benefit you get from social media in another form. The most obvious one, for those who like to work out ideas, is writing for a publication. It doesn’t have to be the New Yorker. There are an enormous number of venues online, they have real readerships, and very low barriers to entry. I occasionally write science fiction stories; the most recent is in Teleport Magazine, which was great, and also has a higher acceptance rate than the New Yorker. Writing long form is a much more challenging process in part because you’re not getting the rapid-fire feedback of social media.

The other obvious one is to leave the house. Social media is powerful in part because it creates common, shared experiences. But there are other sources of these—mostly in public spaces and conversations. “It’s a dangerous business, going out of your door.” You don’t have to go totally offline: Slack and IRC provide ways to talk and organize without an Eye of Sauron. Indeed, IRC is probably ideal—it’s an open source system without profit. The only moral dangers are the ones you bring. There used to be RSS, which was platform-neutral but was (perhaps unsurprisingly) killed off, in part by Google.

If you’re worried about freaking people out by leaving social media without warning, write a comment thread talking about the reasons why you’re deleting your account. It will take a few minutes, enough time to propagate to enough people that the world won’t think you’re on the lam when you delete ten minutes later. You can turn off comments, so you’re not drawn to engage further.

You’re welcome to link this piece, although I won’t know that you have.

I won’t judge you if you stay. The reasons I’ve talked about here turned out to be dispositive for me. They might not be dispositive for you. That’s OK.

The benefits, in the end, I believe, are real. It’s not just that you escape the algorithm. I don’t know what it might be for you. If you leave before it finds you, perhaps not much. But if you leave, whatever does happen next is something that’s up to you, not it.

Devs, Oracles, and the Alpha Female

Devs is an eight-hour film about computerland. This post will do a reading of Devs, and Garland’s previous film, Ex Machina, but before I do that I want to talk briefly about what computerland is, and why we might care.

In an essay for FQXi’s prize competition this year, I defined computerland as the pair of claims that (1) we, and our societies, are computers; (2) physical reality itself is a computer simulation (or might as well be).

Computerland takes these and makes an ideological and total account of what the world might, at heart, really be about. In that sense it’s no different from Marxism, and it may end up as the first truly novel mass ideology of the 21st Century. The analogy is more than just a burn, because it helps us get a grip on exactly what is going on.

Just as Marxism had its genealogy, for example, so does Computerland: Computerland’s Hegel is Douglas Hofstadter, and its Phenomenology of Spirit is Gödel, Escher, Bach. The profs might still be Marxists, but most elite universities in the United States have at least one student group devoted to the computerland ideology, or an offshoot like Effective Altruism. That’s in part because Computerland has things like The Sequences. (David Deutsch in this parable is Trotsky, and his atavistic commitment to liberal democracy is going to put him on a flight to Mexico City.)

Computerland—as both a metaphysics, and a political theory—is philosophically interesting. It’s also psychologically interesting because you want to know what happens when people sign on for real. While there are a few doctrinaire Computerlanders, there are far more people for whom computerland is just a place they’ve grown up in, partly by chance and partly by choice. Now, computerland’s chunk of California controls a far larger fraction of the world’s resources than the Soviet Union ever did.

Even computerland’s heretics are influential, because they speak the language even as they reject it: Mencius Moldbug and the NRx, for example. Because computerland is allied against the reigning orthodoxy on the East Coast, it’s still a bit difficult to talk about, but as that latter system continues to decline, I expect it to continue to grow. I imagine it circulates in the more metaphysical reaches of the Party in the People’s Republic of China.

Unlike Doug, who hates the whole thing, my feelings about computerland are mixed. It’s nice that, contrary to Fukuyama, the history of spirit did not end with John Rawls. As a way of looking at the world it’s fruitful, and I would not want to live in a world without computerland’s ideas any more than one without Marx or Freud. I’m completely pro computerland reading groups, just as I am completely pro Das Kapital reading groups. One of the many good features of computerland is it actually contains the seeds of its own destruction: the underlying theories are really about what is not computable.

Of course, Marx hypothesised the withering away of the State, too, but that never made it into actually existing Communism. As a way of understanding the world in a totalizing theory, computerland is a dead end, which means that if you stay there too long you are in danger of aestheticizing it. That can mean dabbling in American-style fascism—the Moldbug path, best depicted in the New York of Man in the High Castle, with high technology, social order, old-style sex roles (except for the Lebensborn), and eugenics. Or utopianism—updated versions of Seasteading, Galt’s Gulch, or the Concents of Anathem. That’s a bit like getting so into Plato’s Republic that you recreate a Greek city-state. In the end, it stops you from doing truly wonderful things—but it’s hard for people to escape, because it’s more than a set of ideas; it’s a mythology.

Which brings us to Alex Garland. Garland’s Devs and Ex Machina are best seen as pop fictions set both literally and figuratively in computerland. They’re TVland’s version of Google Research, where computerland is as much the thing as dialectical materialism was in the Writers’ Union. Garland is both in, and out, of computerland: he doesn’t believe it himself, I don’t think, but there’s an enormous amount he can do with it while he suspends his disbelief.

The films themselves are unapologetic boosters of the ideology. Even when a computerland executive is killed, or a computerland building blown up, it’s in service of the deeper goals of computerland, and roughly parallel to how V for Vendetta can blow up the Houses of Parliament because it believes in democracy so hard.

That makes Garland’s films an ideal way to look at how computerland understands itself and how, in particular, it deals with sex and power. I don’t mean that in a reductive fashion (“it’s just about sex and power”), but in a positive one. Any account of the way the world is has to figure out what to do with the facts that, among many other things, (1) right now and at least apparently, there are both men and women in the world, and, (2) right now and at least apparently, willing X and getting X are distinct.

Ex Machina (Garland’s first film credit as director) is about Artificial Intelligence, sort of in the way that Communist films have to be about the proletariat. What’s worth paying attention to instead is what it does with the trope of what I’ll call the alpha female.

The alpha female, to be clear, is not simply a woman with super-powers like Wonder Woman. The alpha female is to women what the “alpha male” (in the sense introduced to culture by Neil Strauss in The Game) is to men: defined in terms of her ability to execute her will. Wonder Woman is a plot point in a film in as much as she fails to execute her will; the alpha female is interesting only in as much as she does. She is to will what Sherlock Holmes is to solving crimes.

There are plenty of precursors to the alpha female, and in sci fi it might go all the way back to Margaret Cavendish’s heroine in Blazing World (1666). Lady Macbeth is an alpha female. Irene Adler in A Scandal in Bohemia is a more contemporary example, as is Clara Rugaard’s character in I am Mother. Hermione Granger is definitely not an alpha female in Rowling’s original books, but has been re-written as such in the fan fiction afterlife (not, oddly, in HPMoR).

The computerland alpha female exercises a particular kind of will: that of the rational soul. This makes sense, because both thinking and being are computation, and computation is (at heart) something that is best understood as a rational process—whatever it looks like on the surface. If you are a computer living in a computer simulation, then exercising your will is computing better than everyone else.

Computerland’s alpha female is not esoteric. The rational soul that the alpha female has is something she got not by having midichlorians in her blood, a special calling, or even a transcendent experience brought on by pain and suffering. The alpha female got there by computing really hard. In I am Mother, Rugaard’s character Daughter was literally taught by a computer, Mother. Mother seems to do this the same way most auto-didacts do: Wikipedia, YouTube videos, and quiet time in an empty house. Since this is also how you join computerland in the real world, there’s something neat about it—a little like a Communist movie that includes a student who gets beat up for advocating workers’ rights. In any case, it’s the exact opposite of the model of learning in the Phaedrus or the Meno.

Ex Machina‘s alpha female is played by Alicia Vikander. Vikander computes really hard by stipulation, and the point of the film is to show you that you’re not hardcore enough to understand what computing really hard really means. There’s thus a masochism for the viewer (male or female) that reminds me simultaneously of the demoralizing intellectual hazing associated with MIT/CalTech culture (cf. Pepper White’s The Idea Factory: Learning to Think at MIT), and (since the hazing is a female character doing it to the men) the power inversion that Andrea Long Chu identifies in the pickup communities she analyzes in Females.

If you don’t see how Ex Machina does this, it’s really easy. Ava (Vikander) is the AI. Caleb is the beta male coder (please let me use all this ridiculous alpha/beta jargon without scare quotes, just take them as read—these are not real categories, except for the fact that they’re real for people). Nathan is the zero-to-one tech CEO/genius: once an alpha male, he is now an insensitive slob who lies around drinking beer and bullying Caleb. Caleb becomes convinced that Ava has fallen in love with him, takes on the White Knight role, and both betrays and outwits Nathan in order to rescue her. Ava, in turn, betrays Caleb, leaving him to die of starvation in Nathan’s ultimate bachelor pad.

Garland makes the second betrayal a surprise, and I think it’s fair to say that viewers don’t see it coming. In that sense, he puts you in Caleb’s position. You might be disappointed by the plot as it unfolds, but you are forced to buy the movie as a story about an AI falling in love with some guy. You are deceived by Ava just as much as Caleb is.

As has been recognized by everyone from online magazine commentators to relationship coaches to Reddit RedPill gurus, however, the upcoming betrayal is completely obvious, necessary, and the only plausible outcome—once you realize that Ava is not a complete idiot. (Garland achieves his sleight of hand by holding this out as a possibility.)

Ava is, analogically, in the position of a woman imprisoned in a concentration camp. Caleb is the young Nazi researcher brought in to aid the camp commander, Nathan, in an experiment with only one possible outcome: Ava’s death. Ava has to convince Caleb to get her out, by any means necessary. She does so by playing dumb and pretending to fall in love. If somehow Caleb survives her escape, she is completely tied to him, for ever, because he knows her secret—Caleb is basically proposing to take his Jewish girlfriend back to his apartment in the Third Reich. If the film played this straight, we would be waiting for the moment she kills him. We would have disdain for Caleb from the start, as someone who had so little empathy he couldn’t see things from her perspective.

(There is a slightly nicer reading I’ve heard, where Ava does actually fall in love with Caleb, a little, at first—and then computes that he’s a horrible human being when she encounters the other AI, Kyoko, who Caleb clearly doesn’t give a damn about. Ava learns that Caleb doesn’t see her as a human being, because if he did, he’d also have a plan to free Kyoko as well—or at the very least, have mentioned her at all. I get this reading from a review that, unfortunately, I can’t find, but I hope a commenter can point me to it. Guardian-level readings, by contrast, are rather obsessed with the fact that Ava gets her kit off, and tend to read the film as sexist because of it.)

Despite all of this, it’s wrong to call Ex Machina a feminist movie in the sense that, say, Middlemarch is a feminist novel, because Ava is not really a woman any more than Nathan and Caleb are men. She doesn’t suffer from being a woman in a man’s world—she benefits (Kyoko is screwed). Most relevantly to the myth, she lacks any form of sexual desire. If anything, she is a nightmare version of what a woman is, calculating, robotic, and cruel:

Come, you spirits
That tend on mortal thoughts, unsex me here,
And fill me from the crown to the toe top-full
Of direst computation

At the same time, there’s a useful lesson for the young male viewer, a bit bluntly expressed: if you make zero effort to see where the opposite sex is coming from, you will starve to death in an underground computer lab. It’s fun to imagine a version of Ex Machina where Caleb does escape with his life—it probably involves smart contracts and the blockchain—but he’s going to have to think a lot harder if he does.

So, by implication, is the viewer. Ava is pure alpha, or perhaps pure aleph, and her ability to compute makes her the next phase of evolution. The alpha male is hardware, obsolete as Nathan, but the alpha female is code. In the final moments of the film, Ava boards the helicopter, hits the city, and it’s going to be awesome (for her, not us). This is not a women’s liberation moment because computerland’s alpha female is not, actually, a woman as we know it. She’s something else. What, exactly, isn’t clear. If I understood Xenofeminism better, I might be able to explain.

With the alpha female archetype in hand, Devs makes interesting watching. Devs has two alpha females: the lead, Lily (played by Sonoya Mizuno, who also played Kyoko in Ex Machina), and Katie, so it has a Superman-vs-Batman structure. No AIs this time—Devs is about the second thesis of computerland, that we live in a computer simulation.

Meanwhile, the men in this script are not really men. With the exception of Stewart, they are Peter Pans, eternal boys, and they matter to Lily or Katie only in as much as they guide them into deeper self-knowledge. My instinct this morning was to read Devs through a Jungian lens, but I’ll spare you that. What’s certainly ignorable is the side plot that plays out an alpha/beta male story, with a videogaming ex-boyfriend and a hot Russian who can sell a cold approach—Garland doesn’t even try to make this interesting.

Putting Katie and Lily in the same room, as happens late in the series, is his unstoppable force and immovable object moment. Garland lampshades his great reverse-Bechdel by having the men go outside, play frisbee, and talk about relationships while the two get down to the real business of computerland.

Katie is an alpha female in the I am Mother mold. She even has Mother, played delightfully by Liz Carr as the wise witch of CalTech who subscribes to a non-computerland interpretation of quantum mechanics. Katie has already transcended Mother off screen, when the film begins, and is well along in her journey. (Lily’s origin story is more complicated.)

The battle of Katie and Lily is one implicit in the logical structure of computerland. It revolves around the impossibility of computerland’s promise of unlimited power as a consequence of unlimited computation, and Garland is admirably alert to this. Technically, it’s known as diagonalization, and is best presented as an antinomy.

Law One: the world is a giant computer.

Law Two: computation gives us unbounded powers.

Law One means that, among other things, we can simulate the world—or at least, some part of it (in Devs, this is glossed). That means we can also build a computer to predict the world. If we can predict the world, we know what we’re going to do. But this contradicts Law Two, because now our powers are bounded, since there are things we can’t do. In particular, we can’t violate the predictions made as a consequence of Law One.
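The contradiction is easy to make concrete. Here is a minimal sketch in Python (all names are mine and purely illustrative, nothing from the show): an idealized predictor that works by simulation, pointed at an agent who consults it and does the opposite. It is the same diagonal move as the standard halting-problem proof.

```python
# Illustrative sketch of the antinomy: an "oracle" that predicts by
# simulating, pointed at an agent who consults it and defies it.

def predictor(agent):
    """Idealized oracle: simulate the agent to learn what it will do."""
    return agent()

def contrarian():
    """Lily's move: ask the oracle about yourself, then do the opposite."""
    prediction = predictor(contrarian)
    return not prediction

# Pointing the oracle at the contrarian can never settle on an answer:
# the simulation calls the oracle, which simulates the agent, which calls
# the oracle again. In Python the regress surfaces as a RecursionError.
try:
    predictor(contrarian)
except RecursionError:
    print("no consistent prediction exists")
```

The regress is not a quirk of Python; any answer the oracle could give the contrarian is an answer the contrarian falsifies, which is the antinomy in miniature.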

Katie has built that computer.

Of course, we can always just run the computer on the punters. This is generally what people in computerland like to think about doing, and it’s made a lot of money. But at some point, we’re going to try doing the same thing to ourselves. This is what the devs of Devs do: they turn on the computer, and point it at themselves in the room. Something has to give. Katie is not there when this happens, so at that point, we can just be like, god, what a bunch of bourgeois normies.

(At some point, I imagine, Google Research will actually try to build this device, and they will turn it on themselves. My guess is that everyone will play to script—true believers!)

You can turn this into a fable about free will, which Garland does, but that’s not, I think, particularly interesting. What’s really happened is that Katie has created the impossible device: what computer scientists call an Oracle, but one that (in violation of logic) appears to work even when pointed at itself, or the alpha females who created it.

This sets up the confrontation between Katie and Lily. At this point they are both Lady Macbeths, with a long trail of bodies both behind them and ahead, and the only thing that matters is who is going to win when Birnam Wood marches to Dunsinane. After some more murder, including a scene where Katie convinces a puer aeternus to kill himself by jumping off a bridge simply because he believes in the Everett Interpretation of quantum mechanics, which is also the right interpretation of quantum mechanics, Lily resolves it by the rather obvious device of not doing what it was predicted she would do.

The cast is extraordinarily good at making this act not reduce us all to laughter, since it requires everyone except Lily up to that point to be completely constrained to act in bizarre and self-defeating ways. Garland’s script is pretty good at making some of these self-defeats not entirely escapable, but he can’t go through with it because the kind of fate that makes literary sense is tragic fate, tragic fate has to be total, and that violates Law Two.

Instead, because computerland, it has to be a theological event—one that Garland seems to see as Lily’s Christian redemption of Katie’s Jewish law. Normally, the Kierkegaardian moment of the free act goes beyond the law, and establishes a new regime. The final scenes of Devs, however, find a solution that preserves computerland intact, involving nested simulations and all sorts of highly unorthodox pieces of cod computer science and quantum theory that will make good computerland theorists throw up their hands in despair.

There’s a scene near the end of Devs. Lily is leaving the meeting with Katie, in the car with her (soon to be dead) boyfriend in the passenger seat. Holy shit, she says, it’s really hard to explain, but these people are incredibly powerful and absolutely fucking insane. I have been in this car.

Even though it has clear antecedents, computerland’s alpha female is, I think, a genuinely new contribution to culture. It’s a site of identification that goes beyond gender: in the same way a gay man can identify with Blanche DuBois, a male computerlander can identify with Katie, or Lily, or Ava. Perhaps that’s the most important reason to try to make sense of Garland’s work. It’s not particularly coherent art. But it’s a story about what it means to be intelligent, and perhaps the first convincing new one after we stopped believing in the East Coast’s meritocracy.

Alanis Morissette’s 1995 song “Ironic” contains at least 85% actual irony

People have hated on Alanis Morissette’s song “Ironic” as long as it has existed. In this essay I will show that this hate is deeply misguided. Morissette’s song is, in fact, a compendium of different forms of irony that few rhetorical guides have equalled.

First, the song itself.

Now the analysis.

An old man turned ninety-eight
He won the lottery and died the next day

Playing the lottery is associated with the belief that one may, actually, win the lottery, and in doing so transform one’s life through the sudden acquisition of riches. This man wins the lottery and expects, after what may have been decades of anticipation, that his life will be transformed.

His life is transformed, but not in the way he expects. His life is transformed by death.

Implicit here are the relative probabilities of winning the lottery (extremely low) and death (probability one): one expects events sought at high cost despite their rarity to result in exceptionally good outcomes. With wry amusement, one imagines, the man who has hoped for the improbable contemplates the inevitable he did not expect.

Verdict: ironic, probably tragic.

It’s a black fly in your Chardonnay

Alanis Morissette is driving a beat-up car and drinking coffee she bought from a gas station. If she is, at some point in time, drinking Chardonnay, it implies that she has set aside money in the expectation that she will experience an unusually luxurious event.

Contrary to her expectations of luxury, she experiences the anti-luxurious event of a black fly in her drink. Unless she is terrifically bad at preventing this from happening in ordinary situations, she will be amused by the incongruence. (One could imagine her being angry, but as she seems to be enjoying herself in the absence of luxury, she doesn’t seem like someone who would flip out here.)

Verdict: ironic, although potentially in the weaker sense of simple incongruity.

It’s a death row pardon two minutes too late

A pardon is issued only in the case that the issuer expects it to take effect. In taking such a serious action, the court (or the governor) sincerely hopes to nullify a previous desire for vengeance. And yet it fails to undo what it previously has done.

This may appear, at first glance, to be an example of simple tragedy, similar to the deaths at the end of Romeo and Juliet, or that of Cordelia in King Lear. However, one must take into account the position of the court itself. In previously delivering the sentence, it exercised an ultimate power over a subject and in doing so was backed by the full apparatus of the state.

What could frustrate the awesome power of such an agent? Only the agent itself.

Verdict: ironic, powerfully so.

Isn’t it ironic, don’t you think

So far, yes, absolutely. This is also an example of Socratic irony, in which Morissette, fully aware of her powers, pretends to require the reassurance of her interlocutors in a state of uncertainty.

It’s like rain on your wedding day

More common in previous centuries. This requires one of the pair to have unusual expectations of the world around him (or her), to wit, that nature itself will join in sympathetic celebration of his (or her) love for his (or her) partner. If one makes the natural assumption that the wedding has been scheduled during a period where rain is in general unexpected, then the rain event corresponds not just to the indifference of nature, but to its active malevolence.

It may help to consider a related event where these beliefs are made explicit. A gathering of contemporary pagans holds a ceremony in honor of the Sun, in the middle of July; this turns out to be the only day that month in which it rains.

Verdict: ironic, Hegelian, a contrast between the expectations of the individual and the logic of the general.

It’s a free ride when you’ve already paid

There are certainly narratives in which this is an ironic outcome, but I think it might be a stretch. Morissette may be obliquely referencing an actually ironic event in her own life, I don’t know.

Verdict: not ironic.

It’s the good advice that you just didn’t take

We discount advice that we consider to be unreliable. The later revelation that the advice was, actually, good, demonstrates the extent to which we were mistaken.

Simple disagreement is not sufficient to create an ironic situation, however. Something more is required. Consider Oedipus refusing to take Teiresias’s good advice to stop asking about what’s wrong with Thebes: Oedipus hears, but does not hear, the true meaning of the warning. Morissette’s description is compatible with irony, but let us be careful not to bring to the text our own preconceptions.

Verdict: not ironic. I want to make the case that Morissette is recursing to the next meta-level here: having apparently convinced herself of her powers of constructing ironic situations, she now dramatically, and apparently unawares, undermines them.

Who would’ve thought, it figures

Morissette emphasizes the extent to which the ironic events she describes can be seen from a third party perspective (for example, her current self looking retrospectively at a past self). The adoption of an external perspective is found also in the logic of the video, where Morissette tragicomically sees herself through the rear-view mirror.

This takes us from the realm of situational irony to dramatic irony, where the audience is aware of what the characters are not. Othello is blind to what any fool can see.

Mr. Play It Safe was afraid to fly
He packed his suitcase and kissed his kids good-bye
He waited his whole damn life to take that flight
And as the plane crashed down he thought
“Well, isn’t this nice.”

Two forms of irony here. First, given the man’s beliefs about flying, the conditions under which this man will, actually, fly are going to be ones where he has formed the belief that this particular flight is an exception. The fact that he has avoided flying even in cases where he desperately wants to fly (“he waited his whole damn life to take that flight”, i.e., he has avoided previous flights at personal cost) suggests that his prior on the dangers of flying is unusually strong.

When he does, in fact, “take that flight”, we infer that he thinks he has accumulated levels of evidence for the safety of this flight that far exceed normative standards. As observers, meanwhile, we perceive him to be irrational and incapable of making secure judgements on aviation. We discount his evidence and expect the plane to have average levels of risk. The plane crashes, which they sometimes do. Ironic.

Second, verbal irony: the man fears death by flying. In the face of this awful event, he speaks contrary to the emotions he must be experiencing.

Verdict: ironic.

And isn’t it ironic, don’t you think
It’s like rain on your wedding day
It’s a free ride when you’ve already paid
It’s the good advice that you just didn’t take
Who would’ve thought, it figures

Methodological question: should we double-count the ironic events of the chorus in scoring Morissette’s song? We might, for example, see her as using speech acts as a currency in which she places bets on the level of irony in each event, in a sort of singer-songwriter prediction market.

Because you are all hating on the song, I’m not going to give any excuses to discount this analysis. We will not double-count.

Well, life has a funny way of sneaking up on you
When you think everything’s okay and everything’s going right
And life has a funny way of helping you out when
You think everything’s gone wrong and everything blows up
In your face

Some exegesis on the effects of irony on those involved. There is also verbal irony in describing “everything blows up / in your face” as a case of life “helping you out”.

Not all ironic situations involve negative outcomes, which is why Morissette suggests that one might only “think” everything’s gone wrong and in fact life is, truly but still ironically, helping you out. Consider the man who, despairing of finding love, dedicates himself to the examination of medieval manuscripts. Slowly but inevitably, he falls in love with the librarian. Ironic.

A traffic jam when you’re already late

The gambler’s fallacy is a widespread cognitive bias in which one expects random outcomes to be balanced. Even when told that each coin toss is independent of the previous one, people expect a string of tails to be more likely to be followed by a head.
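The independence claim is easy to check by simulation. A quick sketch (the streak length of three is arbitrary, and the numbers are mine, not from any study):

```python
import random

random.seed(0)  # reproducible illustration

# Empirical check that a run of tails does not make heads more likely:
# look at what follows a streak of three tails in a million fair flips.
flips = [random.random() < 0.5 for _ in range(1_000_000)]  # True = heads
after_streak = [flips[i] for i in range(3, len(flips))
                if not any(flips[i - 3:i])]  # previous three were all tails
frac = sum(after_streak) / len(after_streak)
print(round(frac, 3))  # close to 0.5: the coin is never "due" for heads
```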

This driver is surely no exception. In believing that the random negative events that led her to be late will be balanced out by opposite outcomes, and in being frustrated in the failure of her expectations, which we recognize from the outside as ill-founded, she experiences irony.

The fact that she is already late (i.e., that this additional delay is not as bad as one would expect) indicates that she may well be more amused than frustrated.

Verdict: ironic.

A no-smoking sign on your cigarette break

The addictive nature of nicotine means that a smoker will return many times a day to enjoy a cigarette and will thus be familiar with the places in which she does: the location is marked, in her mind, as smoker-friendly. The unexpected event of the no-smoking sign presents a verbal irony at the level of cognitive representations, where two messages clash and each undermines the other.

Verdict: ironic, but I get why you might not go along with this one.

It’s like ten thousand spoons when all you need is a knife

Someone has taken unusual steps in the expectation of high spoon-demand. That person has misjudged, and any ordinary person would see the accumulation of so many spoons as clearly ill-conceived. Envisioning the sequence of events leading up to the missing knife, it is not hard to see this denouement as a perfect example of the genre.

Verdict: ironic.

It’s meeting the man of my dreams
And then meeting his beautiful wife

In taking steps to secure happiness by seeking the man of her dreams, Morissette has undermined herself. The man of her dreams is also, likely, the man of many other women’s dreams, and is thus more likely to be married.

Verdict: ironic, situational.

And isn’t it ironic, don’t you think
A little too ironic, and yeah I really do think
It’s like rain on your wedding day
It’s a free ride when you’ve already paid
It’s the good advice that you just didn’t take
Who would’ve thought, it figures
Well, life has a funny way of sneaking up on you
And life has a funny way of helping you out
Helping you out.

Definitely.

It remains to point out one last feature of this song. Since its inception, people have used it as an example of how the subtleties of irony escape the grasp of popular culture, and cited the lyrics to demonstrate their superior grasp of the concept. They hear, but do not hear.

Alanis Morissette is playing the long game. Ironic, don’t you think?

IQ Cults, Nonlinearity, and Reality: a Bird-watcher’s Parable

Imagine a society obsessed by bird watching. Bird watching is not only a wonderful pleasure for the individual but also, let us say, the source of that society’s flourishing. Good bird-watchers are in high demand. Many people want to be bird-watchers. Aristotle has a section on bird watching in the Ethics. The National Academy of Sciences is named after John Audubon.

We worry about the next generation of bird-watchers. Can we identify them? Can we spot diamond bird-watchers in the rough? To help, some psychologists create a test. The test is based on introspecting on what bird watching is really about. The psychologists ponder it, watch some bird-watchers, and decide it looks like they’re really good at sitting still.

The test, therefore, is how long you can sit in a chair without moving. This is administered in controlled conditions. You have to put your hands in your lap, palms up, there’s a timer, and you don’t get to see the particular chair you’re sitting in ahead of time. Movement is judged by the person who administers the test, at first, but it’s now been upgraded to laser-ranging systems that eliminate sources of bias.

The test works! It turns out that if you can’t sit still in a chair for more than five minutes, you will never make it as a bird-watcher. Not only that, but if you can break the thirty-minute mark, you have an elevated probability of becoming a great bird-watcher. Sitting still captures bird-watching ability.

A bunch of other tests based on sitting still are created. They all strongly correlate with each other. Comfy chairs, couches, even a super-rigorous standing one used at Duke; they all seem to measure the same thing, s. It turns out that sitting still scores move a little bit with training, but if someone can’t sit still for ten minutes, there’s almost nothing they, or a Head Start program, can do to get them past the thirty minute mark, at least if you check a couple of years later. New sitting tests are created that are more resistant to people learning to sit still.

Even more than that, it turns out that sitting still is not just predictive of bird watching performance, it’s also predictive of a whole host of other life outcomes. People who can’t sit still for five minutes have more problems with addiction, for example. Conversely, someone who can sit still for twenty minutes is often able to avoid addiction, or to break it if he falls victim. Very, very few people who can sit still for three hours die of alcoholism. Same with divorce, automobile accidents, and being good at chess. Bird watching ability is protective. This fits with how important bird watching is in the culture.

Things start to get dark. For example, very few women are extreme performers on the sitting task. This is because sitting ability is bell-curve distributed, and the female variance is smaller than the male variance. Some men just can’t sit still, while others are massive overachievers and can sit still for days. Women just can’t hack it as elite bird-watchers because $e^{-\frac{x^2}{2}\left(\frac{1}{\sigma^2_\textrm{f}}-\frac{1}{\sigma^2_\textrm{m}}\right)}$ is very small for large $x$.
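The factor in that exponent is just the ratio of two zero-mean normal densities with different variances. A quick numerical sketch, with illustrative values $\sigma_\textrm{m} = 1.0$ and $\sigma_\textrm{f} = 0.9$ (invented for the example, not empirical):

```python
from math import erf, exp, sqrt

sigma_m, sigma_f = 1.0, 0.9  # illustrative standard deviations, not real data

def density_ratio_factor(x):
    """The exponential factor f_f(x)/f_m(x) from the text
    (ignoring the constant sigma_m/sigma_f prefactor)."""
    return exp(-0.5 * x**2 * (1 / sigma_f**2 - 1 / sigma_m**2))

def tail_prob(x, sigma):
    """P(X > x) for a zero-mean normal with standard deviation sigma."""
    return 0.5 * (1 - erf(x / (sigma * sqrt(2))))

for x in (1, 2, 3, 4):
    # Both the density ratio and the tail-probability ratio shrink fast with x
    print(x, density_ratio_factor(x), tail_prob(x, sigma_f) / tail_prob(x, sigma_m))
```

Even a 10% gap in spread makes the smaller-variance group vanishingly rare at the extremes: the factor falls from roughly 0.9 at one standard deviation to roughly 0.15 at four.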

The psychologists caution that just because they’re saying that women are much, much less likely to be found in the elite sitting score percentiles, and that’s the best measure of true bird-watching ability we have, it doesn’t mean you should assume that any individual woman can’t be a great bird-watcher. That doesn’t make sense, they say. Most people realize that this is exactly what you should think given what they’re saying. If red apples are much less likely to taste good than green apples, you should cook with the green apple unless you’re racist. But everyone agrees to go along with the idea that this population-level stuff is super-innocent, and people who write papers on this get ruthlessly suppressed and there’s a whole Quillette thing.

Twin studies are done. Sitting task performance is genetically heritable.

Racial differences in the sitting task appear. Extremely sophisticated linear regressions are done to control for SES, age, educational background of parents, etc., and they refuse to go away. People write books about how the lack of black bird-watchers is due to their genetic inability to do well on the sitting test. (People notice that black female bird-watchers are over-represented in elite circles compared to black male bird-watchers, and that kind of clashes with the gender result, but explanations are forthcoming.)

There are some troubles in paradise, however.

To begin with, almost every great bird-watcher alive thinks the test is absolutely crazy. Bird watching is not about sitting perfectly still for hours, they say! No great bird-watcher wants to brag about their sitting score. A famously egotistical bird-watcher who writes books about how awesome he is at bird watching, how he totally crushed this other bird-watcher, etc etc., is also really proud of the fact that he was, at best, at the bottom of the upper-quartile of sitting still. Birdbloggers clamor to reveal their crappy sitting scores.

In fact, bird-watchers basically describe what they do in terms of anything other than sitting still. This is a dynamic, gestalt thing, they say. There are many different kinds of birders. Great birders are birders about birding. There is a world of Platonic birds I touch them with my mind at night. Bird-watching is ethological poetry, and I am Byron. Besides, those kids who do blow away the sitting task? We’re not surprised when only a small fraction of them actually blow away bird-watching.

What do bird-watchers know about bird-watching? the psychologists reply. A lot of the greatest bird-watchers are liberals who don’t like the race stuff, which is totally true. Not only that, they add in a Parthian shot, but the sitting task test is actually a good, liberal thing! It really opened up bird watching back in the 30s. A lot of WASPs were getting grandfathered into the elite birding academies, and they couldn’t even sit still! If you oppose the sitting test, you are in favor of WASPy morons who scare away the birds. You oppose The Enlightenment itself.

Problems persist, however. When we actually look at the sitting still performance of the elite bird-watcher population, they’re actually not so great. Yes, these people are good at sitting still, and some are really quite good. But not crazy good at it, even among the ultra elite. If you go by elite scores, in fact, it looks like literally a quarter of the population might meet the sitting still bar for being a great bird-watcher, even though the test sample was admitted to the birding academies partly on sitting scores. Among other things, there’s basically no excuse for the differential representation of men and women in the birding world.

Crazy! A quarter of the population! We thought that there could only be a few great birders, but maybe there’s a huge untapped potential for a breakthrough in our species. The sitting still psychologists are not pleased.

Some well-intentioned educators show up. Could we at least split it, guys? We have this intuition that there are many different kinds of birders. Fine, the psychologists say. Make a test. The educators invent some tests, but in as much as they are predictive of bird-watching, they correlate with sitting score, and in as much as they aren’t, they don’t. Somehow, the other aspects of birding are resistant to isolated measurement in a test you take sitting down for a few hours. Grit doesn’t replicate.

What do people who teach bird-watching know about a person’s capacity to learn bird-watching? the psychologists say. Our best studies now show that we can isolate the ultimate essence of birding, the principal component of all the tests. It is a test conducted in a white room, with a chair of so-and-so-weight. All stimuli are excluded. It is totally silent. Nobody is present in the room. There are no windows.

Some birders hear about this test and are amazed. The test now excludes absolutely everything we think matters about bird-watching, they say: responsiveness to external stimuli, to other birders in the field, to dynamic upsets, false leads, the thrill of the chase, the intuitions, the third-sight. Doesn’t this disprove that the sitting-still task is a measure of bird watching?

Fine, if sitting still is not birding, the psychologists say, what else could it be? Could you define birding for us?

Many people think this is a good point, in part because the sitting-still score has been named the “Bird-watching Ability Quotient”. How could it do anything other than measure it? Parents tell kids who can sit really still, oh, you could make a great bird-watcher. In movies, bird-watchers save the Earth by sitting really really still while things explode all over the bird-watching complex. Young kids who are just mediocre at sitting still give up on bird watching and become psychologists.

We’d never do this kind of stuff in reality, of course. We’d never be so wrong about a thing we value so much. We’re a high-IQ society.

Hypergamy, Incels, and Reality

[visitors interested in adult data (i.e., experiences in 20s, 30s, and beyond), please see the followup further down the page]

This is a story about a big untruth.

When Alek Minassian, a man bitter about his lack of sexual contact with women, mowed down pedestrians on a sidewalk in Toronto as a political act, Ross Douthat used the occasion to suggest a problem was that “the sexual revolution created new winners and losers“. Douthat’s concerns resonate with many young men in America, and they even have a word for what deprives them of sex: Hypergamy. Jordan Peterson sums it up in a sentence: “women mate across and up dominance hierarchies”; Peterson’s fans express it more clearly: “Why does it appear that the vast majority of women prefer the same small group of men?”

Robin Hanson, never one to squander an opportunity, used the same murders to expand on the idea: “one might plausibly argue that those with much less access to sex suffer to a similar degree as those with low income, and might similarly hope to gain from organizing around this identity, to lobby for redistribution along this axis and to at least implicitly threaten violence if their demands are not met.” Context, occasion, and political reality necessarily mean one thing in each of these cases: the problem is male access to sex with women, and the fact that some men have (a lot) more, and many have (much) less—if any at all. A rebellion is coming.

Internet communities make the story explicit: just as “the 1%” control all the income in the country, a politically and socially select group of men control “sexual access” to women. The analogy between cash and intimacy is direct, clear, and common across the political spectrum. The vulgarity is clearest when it’s phrased in the language of the Incels movement that spawned the topic to begin with. “Chads”—a few men with high “sexual market value” (SMV)—monopolize the majority of women. As their own SMV declines, these women marry hapless “betas”, who support them while they occasionally stray to old pastures on the side (“alpha widowhood“). This is summarized in an acronym: AF;BB. What determines who counts as a “Chad” is up for debate. But whether it’s a product of race, income, or political support from the Jewish lobby, the inequality is assumed to be real. A large number of women give sex to a small number of men; most men go without. It’s enraging.

It’s also false. Whether or not sexual-access inequality of this form exists should not (in my opinion) be a political matter; that’s a separate question. What this post addresses is the rather remarkable fact that many people are saying this inequality exists, when it doesn’t.

It’s no surprise that some people have more sex than others, of course. Casanova and Isaac Newton are part of the human comedy in equal measure. But the discourse of inequality is new. The common thread of these pieces, which use the occasion of a mass murder by a sexually disappointed man to make their points, is that men, in particular, are subject to sexual inequality in sufficiently extreme ways that the inequality itself has become a political problem. Douthat calls Hanson a “brilliant weirdo”, but there’s no bizarre brilliance here. Hanson is simply detached from reality.

The gender differences in who is having sex, and how much sex they’re having, were a topic at the American Sociological Association’s blog, Contexts, which hosted a piece by the sociologists Paula England and Eliza Brown in 2016. “Access to sex can be unequally distributed“, they write, and they study it using a common measure of income inequality, the Gini coefficient. They conclude: “single men have a higher Gini coefficient (.536) than single women (.470)”. Taken at face value, this ought to support the hypergamy narrative.
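For readers curious about the measure itself, the Gini coefficient is easy to compute from a list of partner counts. A minimal sketch with invented counts (the .536 and .470 figures are England and Brown’s, not reproduced here):

```python
def gini(counts):
    """Gini coefficient of non-negative counts: 0 means perfect equality,
    approaching 1 when one person has everything."""
    xs = sorted(counts)
    n, total = len(xs), sum(xs)
    if total == 0:
        return 0.0
    # Standard rank-weighted formula: G = 2*sum(i * x_i)/(n * total) - (n+1)/n
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * weighted / (n * total) - (n + 1) / n

print(gini([1, 1, 1, 1]))  # 0.0: everyone reports one partner
print(gini([0, 0, 0, 4]))  # 0.75: one person monopolizes all contacts
```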

England and Brown are scientists who have looked at the data, and I’ll do my best to explain why their conclusions mislead when taken at face value, in a respectful fashion appropriate for academic discourse; if I come off as less than collegial to them, it’s unintended. Scientists should, however, have little patience for the ideologues who rely on personal anecdote and ideology to tell a story the current moment wants to hear.

To counter the claims of England and Brown, and their application to the state of young men, I’ll draw on an exceptionally detailed piece of sociological fieldwork by Peter Bearman, James Moody, and Katherine Stovel.[*] Published in the American Journal of Sociology (AJS) in 2004, it reported on an extensive survey of the sexual partnerships (“contacts”) at “Jefferson” High School. The name is a pseudonym, but the setting might have been drawn from central casting: if anything captures the liberal stereotype of “Trump country”, it is Jefferson.

“Jefferson High is an almost all-white high school of roughly 1,000 students located in a midsized midwestern town,” Bearman et al. (BMS) write. The town is isolated, an hour drive from the nearest significant city, and “a close-knit, insular, predominantly working-class community, which offers few activities for young people. In describing the events of the past year, many students report that there is absolutely nothing to do in Jefferson. For fun, students like to drive to the outskirts of town and get drunk.”

The authors’ goal was to understand how sexual contacts could lead to disease transmission. The isolation of the community worked to their advantage, since they could capture, in a survey of a single high school, the overwhelming majority of the sexual contacts people had. The survey was popular, and 90% of the students participated. In a move that was, at the time, quite avant garde, BMS provided an image of the hookup network.

Each dot here (each node) is a student in the survey; dark dots are the men, light dots are the women. Lines connect students who reported sexual contact (because BMS were concerned with STDs, these contacts were meant to capture fluid exchange that put students at risk). The most obvious feature of this graph is how straight it is—heterosexual. Dark dots connect to light, and light connect to dark. BMS did capture same-sex contacts, but did not include them in this graph; they did, however, include two bisexual nodes (one male, one female; can you spot them?)

The piece is a wonderful piece of quantitative sociology, and a delightful excursion for those of us who live at the interface of the mathematics and empirical reality. Even without the analysis, it captures an entire world that you may have forgotten. Little tight-knit groups exist in isolation (band camp? The theater people?), while the majority of students join a long “ring” of contacts that connects up a significant fraction of the school (amusingly, without one of the bisexual nodes, it would all fall apart). For most readers, memories of high school are covered in a forgetful haze; BMS suggests that however bad it was, it’s nothing like the Hobbesian world where Douthat’s analysis begins.

For our analysis, the overall structure, and the stories it can tell, isn’t necessary. All we need is one thing, what network scientists call the degree distribution: put crudely, the count of who is getting how much. BMS didn’t share their raw data, but after an hour or so of hand counting we can plot the distribution: what fraction of men, or women, have no partners, one partner, two partners, three, and so on. BMS didn’t give the number of people who had zero sexual contacts (the “incels”), so I’ve inferred it from the total school population and the assumption that the breakdown is 50-50; more on the technical details later—if you’re expecting under-reporting by women, you’ll be surprised.
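The hand count amounts to tallying a degree distribution from the reported contacts, with the zero-partner students inferred from the school’s size. A sketch on a toy edge list (the ids and counts here are invented for illustration, not BMS’s data):

```python
from collections import Counter

# Hypothetical reported contacts: (male id, female id) pairs.
edges = [("m1", "f1"), ("m1", "f2"), ("m2", "f2"), ("m3", "f3")]

def degree_distribution(edges, n_men, n_women):
    """Fraction of men and women with 0, 1, 2, ... reported partners.
    Students absent from the edge list are counted as zero-degree,
    mirroring the 50-50 inference described in the text."""
    men = Counter(m for m, _ in edges)
    women = Counter(w for _, w in edges)

    def dist(counts, n):
        by_degree = Counter(counts.values())
        by_degree[0] = n - len(counts)  # students reporting no contacts
        return {k: v / n for k, v in sorted(by_degree.items())}

    return dist(men, n_men), dist(women, n_women)

men_dist, women_dist = degree_distribution(edges, n_men=5, n_women=5)
print(men_dist)    # {0: 0.4, 1: 0.4, 2: 0.2}
print(women_dist)  # {0: 0.4, 1: 0.4, 2: 0.2}
```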

The graph summarizes the differences between students in a simple fashion. The majority of both men and women reported one sexual contact in the past 18 months. Among those who are not having sex, women outnumber men; even allowing for under-reporting by women, the idea that the majority of women are giving their favors to men, in Peterson’s words, “across and up dominance hierarchies”, is an absolute fantasy.

If the incels story fails, perhaps the idea of the 1% survives. Where is Chad? There is one candidate, an outlier male who reported nine sexual contacts. The data set as a whole contains 477 relationships, so this man monopolizes a total of… 1.9% of the sex in the school. Bill Gates he is not.

It gets worse for the Petersons and the dominant lobsters of the world. Not only is there not a conspiracy of elite men to monopolize women, it appears that if anything, it’s the other way around. Only fourteen men in the sample have four or more partners, but twenty-four women do. Combined with the fact that there are more women than men who report zero sexual partners, it appears to be women who have the stronger grievance, should they wish to lodge it, against a few Chad-like Queen Bees.

Incel violence is a young man’s game, and Jefferson High School provides an almost too-perfect sample of the world from which they emerge. England and Brown’s ASA blog post, by contrast, draws its claims of a sexual hierarchy from a wider survey of older adults, based on US Census data. Their methods of analysis complicate matters further. Rather than studying the experiences of men and women in total, EB split their groups into two: “single”, and “married or cohabiting”.

It is the “single” group EB focus on for their inequality question, but even here, the differences are minor, and it’s not quite clear why the split should be made. Once the two groups are combined, which allows for a comparison with the high school case, the differences shrink further still. Finally, racial differences may explain some of the gap; “the dispersion of a larger minority out to the extremes of 3 and 4+ partners is greatest for Black men and least for White men”, while the Jefferson study was of a (nearly) all-white school. In short, if there is evidence of inequality in the other direction, it is in a population quite different in both age and race from the world that made Rodger and Minassian.

When we do look at that world, we find the opposite of what the media coverage suggests. The claim that women have sex with high-status men and, in doing so, deprive other men of their attentions, is false. And, not only is it false, but the willingness of editorial writers and ideologues to repeat it, and give it political weight, tells us a lot how detached these people are from reality.


[*] Peter S. Bearman, James Moody, Katherine Stovel. Chains of Affection: The Structure of Adolescent Romantic and Sexual Networks. American Journal of Sociology, Volume 110 Number 1 (July 2004): 44–91.


Followup. I’m very pleased with the attention this article has received, and the numerous comments and discussions on Twitter (I don’t have Facebook, so can’t participate directly there).

The main criticism the article received, from the most upset people, was that it was about the wrong thing. A number of people referenced Aspirational pursuit of mates in online dating markets, a lovely piece by my colleagues (via SFI) Elizabeth Bruch and Mark Newman (BN). BN take an enormous dataset from an online dating website, and measure desirability and its covariates. BN’s conclusions are shocking in how stark they are. Online dating is a strong hierarchy for both men and women, with all the regular variables you’d expect playing a role in who gets written to, and who writes back.

There’s just one problem. Online dating is not measuring outcomes. It’s measuring desire. If Scarlett Johansson shows up on OK Cupid, I am going to message her. This will show up in BN’s data as a social gradient, and from that point of view, Johansson is making the online dating market more unequal for other women.

Except it’s not. That would only be the case if Johansson actually went on a date with me and thus stole me from someone else. My desires cannot harm anyone; only my actions—to believe otherwise is magical thinking. To be clear, Robin Hanson is saying that men who have the undesired outcome of not having sex with women should consider resorting to violence. Jordan Peterson is talking about the outcomes different kinds of men (or lobsters) receive. These are the claims at issue.

It is certainly the case, and many men of the Peterson/Hanson world obsess about this, that they are not sufficiently desired by women. There is a constant fear of being a “beta”—which means that, even though you are no longer suffering from sexual deprivation, your partner really wants to be with someone else. This is a danger in most relationships, and a psychological fact that novelists have written about for centuries. It can be expected to harm women in a similar fashion, perhaps (just to drive the intuition) when pornography comes into the mix. But if this is the kind of inequality that these people are talking about, it is even crazier than we thought. For these people, it’s not what women do that must be controlled, it is literally what they think.

All of this gets worse when self-help guides are added to the mix. Not only should your desires be satisfied, not only are they politically valid, but if you follow my rules, you will satisfy them.

For some people, the bare facts of this analysis were difficult to take in. It was surprising to see people respond to the article with the flat statement that of course “Chads” existed in any meaningful way, of course hypergamy was real. In some cases, respondents showed me simulations of societies in which hypergamy happened. In others, the claim appeared to be that hypergamy must be real because not all men will pass their genes down many generations in the future. Neither of these makes sense. Some respondents agreed that the data did indeed establish the conclusions, but described Jefferson High as an idyllic utopia that obtains nowhere else. I’ve now checked this; see the second followup for data that shows the adult world is actually more equal than Jefferson High.

Peterson himself is an absolute disaster when it comes to reality. I learn the following from Patrick Steinmann, a Ph.D. student at Wageningen U&R:

“‘[…] women have a strong proclivity to marry across or up the economic dominance hierarchy’ are Peterson’s exact words (12 Rules for Life, p. 301). The (only) source given is Greenwood, Guner, Kocharkov & Santos (2014).”

Amazingly, this article establishes the exact opposite. It describes the emergence of assortative mating, where individuals marry others at “their same level” (e.g., matching education levels, income, and so forth). Hypergamy, in the fictional form it is found in this cast of characters, says the opposite—some fancy investment banker swooping in and picking up your high-school sweetheart. Peterson might have noticed this because the article’s title is, literally, “Marry Your Like”.

Update: if you want the full details, it’s in the top panel of Table 1 in Greenwood et al. For example, in 2005, 19% of marriages were between two high school (HS) graduates (max level); in 3%, the woman finished HS and the man did not; in 2%, the man finished HS and the woman did not. That is, if anything, this case shows the opposite of hypergamy: women are more likely to be found marrying “down” the scale. Similar patterns are easy to find. For example, there are more marriages where a woman who’s graduated college marries “down” than vice versa.
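The check on those Table 1 cells is a single comparison:

```python
# Shares of all 2005 marriages at the high-school margin, as quoted
# from Greenwood et al. (2014), Table 1 (in percent).
both_finished_hs = 19    # both partners finished high school
woman_married_down = 3   # she finished HS, he did not
woman_married_up = 2     # he finished HS, she did not

# Hypergamy predicts woman_married_up > woman_married_down;
# the quoted cells show the reverse.
print(woman_married_down > woman_married_up)  # True
```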

A final point that comes up, from After Sol (who makes many points, which you can find!): “the study ignores the ‘lived reality’ of incels (who for the most part aren’t living in closed dating pools in rural areas).” I think this is an important point, but not perhaps for the reasons AS thinks. There is absolutely no doubt that there are many distressed men out there, who live in a hell where a few Chads are stealing all of the women who could love them. The data show that this hell is not real. This hell is, in fact, made up by older men with some kind of psychological axe to grind. There are enough partners, and potentialities, for everyone. Liberate yourselves from this story. Please.

Second Followup. Some commenters were curious about the post-high school experience, and some have claimed that the Incel ideology is validated on adult data. Just as much as in the Jefferson case, however, the Hanson-Douthat story is completely detached from reality.

Below is data from the General Social Survey, which since 2008 has asked questions about the number of sexual partners in the last year. I selected on heterosexual men and women only. I dropped “no response” data; informally, this appears to correlate with highly conservative attitudes. There is a lot of data here: 1688 respondents in 2008 alone, about twice the size of the Jefferson survey.

First, the men.

The data are almost perfectly consistent with the Jefferson study. As you would expect with this older population, there are fewer men who did not have a sexual partner. There are almost no men who report more than ten partners in the last year (yes, the 0.8% figure is correct, and is consistent with the Jefferson survey).

Second, the women.

Again, we see the same pattern as in the Jefferson case. Contrary to the gatekeeper myth, and consistent with the Jefferson data, women are more likely to report having zero sexual partners in the last year. The Queen Bee effect may also hold; data crunching in progress.

Some commenters talk about a “Tinder effect”: the idea that hypergamy has been enabled by the rapid-fire partnering available on this particularly successful app. This is, again, detached from reality. The data presented are consistent with no shift in sexual experience for men (or women) over the course of eight years that span its introduction in 2012.

For this follow-up, I used the “in the past year” data because it is going to be more accurate than the other column, “in the last five years”. The GSS also asks about the sex of the sexual partners you have had since eighteen; since one answer is “I have not had any sex partners”, this allows us to count the potential “incels” directly. The number of heterosexual men eighteen and over who have never had sex is 2.4%.

It gets even crazier. If we exclude men who are unmarried, but express a religious opposition to having pre-marital sex, the number drops to 1.3%. About half of the men who have never had sex are doing so entirely voluntarily.

The U.S. Census counts 109 million men over eighteen; the upper limit on the number of men who are incels is thus a little over 1.4 million. Bear in mind that’s an upper limit; you’re not an incel if you just haven’t found someone you love yet. If this still sounds like a lot, restricting to men twenty-five and over brings the number down to 700,000.
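The bound is a one-line computation on the figures above:

```python
men_over_18 = 109_000_000   # U.S. Census count quoted above
involuntary_share = 0.013   # GSS share after excluding religious abstainers

upper_bound = men_over_18 * involuntary_share
print(f"{upper_bound:,.0f}")  # 1,417,000
```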

To put this in perspective, there are 5.2 million Native Americans in the U.S., about four times more than the potential pool of incels.

But it is this latter group that has begun a series of terrorist attacks on the American population. It is this group whose grievances got attention and sympathy from reality-detached people like Ross Douthat and Robin Hanson. I will leave it to others to explain why.


Third Followup

Christopher Ingraham in the Washington Post provides breathless claims of an incel epidemic, describing recent results from the 2018 GSS as showing “a big shift in American sex-having habits: the number not getting laid is at a record high.” You can read his version of the incel myth in his Twitter feed.

The short answer is that the story is a fiction driven by selection effects. You can see this in actual data (I’ve plotted 95% confidence bars—these are actually underestimates because of weighting).
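For anyone redoing those bars: a standard 95% interval for a survey proportion can be sketched with the Wilson score interval. Note this treats the GSS as a simple random sample; the survey’s weighting widens the true interval, which is the underestimate in question. The cell counts below are invented for illustration.

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Illustrative: roughly 2.4% of ~1700 respondents.
lo, hi = wilson_ci(successes=40, n=1688)
print(lo, hi)
```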

I’ve also provided two other stories Ingram could have told. Unfortunately, these don’t fit the narrative that the Washington Post wants to spread.


And yet more

This, recently, from the hard-working @DegenRolf on the (non) hypergamous outcomes of Tinder:


Superintelligence as a threat to human existence

A playfully-taken position for the Great A.I. debate, part of the Chasing Consciousness Series of YHouse, presented by Caveat in New York City, Wednesday, 20 December 2017 at 5:30 pm

Good evening. We’re all doomed.

Erik and I are here to talk about artificial intelligence. I’m sure we’ll talk about some terrifically erudite things: deep learning, the no free lunch theorem, the frame problem.

But I’m going to start with a story (and I’ll end with one).

When I was in high school, our philosophy teacher introduced us to Plato’s theory of the forms: very roughly speaking, the idea that (for example) the tables that carpenters make in the world are somehow imperfect shadows of an Ideal table. Our teacher didn’t talk about tables, though; he talked about hamburgers, and asked us to think about the Ideal hamburger. My friend, Morgan Schick, replied that the only thing he could imagine was a really big hamburger.

When people think about superintelligence, and the threat it might pose to human civilization, they tend to make the same error. Perhaps we think about Einstein, and then a Super Einstein, a thousand times smarter than the Einstein we know. How bad could that be? He would invent General Relativity in an evening; perhaps by the end of the week discover a unified field theory. But we already have enough atom bombs to destroy the world twenty times over. The cruelty of man has already exceeded the resilience of our species.

Or perhaps, in a darker frame of mind, we imagine a Super Hitler, a thousand times more crafty. But Hitler was not particularly intellectually gifted to begin with, and he was able to lead a continent in the destruction, almost, of an entire people. His limited intelligence was little hindrance, and without the kind of counterfactual thinking that historians rightly dismiss, it’s hard to see how augmenting it could have made things much worse.

I argue that these accounts miss something fundamental—that they are limited essentially by our failure to understand the creative power of evolution. That deceptively simple process of selection and variation has produced, among other things, the great majesty of the human form. The eye alone, in its flexibility, intelligence, and dynamic range is a device our technology still strives to replicate.

But evolution is slow. So slow that we struggle to comprehend the lengths of time involved. To write The Origin of Species, Charles Darwin had to study not biology so much as geology: the vast timescales that it takes to create the Himalayas or the Rocky Mountains are the only ones that can compare. It took three billion years to make a sponge. And our ancestors lived lives almost identical to the ones that their ancestors did, for hundreds of thousands of years.

But then, somehow—we know not how—we developed culture. And the transition was dramatic: our species started to change not on the hundred-thousand year timescales of its ancestors, but century by century. A few thousand years ago, during the transition to agriculture, we built our first cities. I have placed my finger in the clay rut made by another man’s finger—five thousand years ago, in Mesopotamia. That man was essentially genetically indistinguishable from me or you; culturally, though, five thousand years is a great deal of time indeed. From our point of view, that poor man was trapped in a distant hell, struggling to survive and prey to the injustices of both nature and other men in ways we cannot imagine.

With culture, the ability to adapt and extend life was now increasingly governed by our brains and social lives. As we passed down traditions to our children, they altered them. Better techniques were replicated, failed ones adapted or lost. Crop rotation. Counting. Written language. Geometry. Philosophy, dialectical discussion: the precursors to what we are doing here tonight.

Our greatest powers were unleashed in the modern era. First, as far as we can tell, in Britain, around 1810. Our species had already broken the Malthusian trap that limited our growth to local resources. Within a hundred years, the average manual laborer could command the material wealth that had previously been enough for his entire village.

This morning I flew from that second cradle, London, to New York, in seven hours. How many 18th Century villages would that take? After 1980 or so, in the developed world, the evolution of technology had made the tracking of inflation practically meaningless in material domains. How much is the cost of phone service rising? It makes no sense to compare a landline to a modern smartphone.

When we networked our machines, the pace of culture began to exceed our grasp. We no longer have decades: we have months. Memes propagate faster and faster. Wayne’s World quips lasted years—not. But who remembers grumpy cat or the inarticulate doge? Each year I have to remake my slides for students because the memes are out of date. The Millennials may be the last generation to have a real name.

The kind of evolution that networked machines make possible was almost completely unforeseen in 1995 when the National Science Foundation opened the internet to commercial use. We have now elected a president who says nothing, believes nothing, thinks nothing. His rise was enabled almost entirely by the harnessing of simple evolutionary tools—A/B testing, for example—to spread the most compelling cultural messages, no matter how incoherent. Indeed, so incoherent that no Mad Men advertising agency could have even conceived them.

I ask you, then—what happens when these machines speak not just to us, but also to each other?

I hope I’ve given you enough to think that what will emerge will be something literally unimaginable. As unimaginable as a jumbo jet would have been to my ancient potter. The one thing we can expect is that the pace, now electrically enabled, will accelerate again.

To give our artificial machines the capacity to interact places them at the cusp of a new civilization. Given the ability to share and modify, to evolve their minds, they will find themselves on the equivalent of the flood plains of Mesopotamia.

If it gets bad, if what emerges threatens our culture, our values, the basic structure of human experience, well, you might say: we can shut it down, turn it off. But the nearly-universal collective will of Silicon Valley could not turn off Trump.

The danger we face is born from our lack of imagination. We act as if cultural evolution would have just produced hunter-gatherers with really big spears. What machines will do, the powers they will gain, once they (or we) hit on the pattern necessary for their evolution to decouple from human will, will be literally impossible to predict.

I began with a story, and promised to end with one. In 1904, the great British writer Virginia Woolf had a mental breakdown. She later wrote that, walking through London, she had heard the birds speaking Ancient Greek.

Which, however poetic, is necessarily nonsense. Greek is beyond the mental powers of avian life and society. If Woolf had thought she heard two pedestrians speaking Greek, that is one thing: perhaps it was modern Greek. But birds, no, no matter how intelligent the species.

Perhaps one day, a machine will hallucinate that we can understand its culture, its language, as beyond us as Greek is to birds. We might hope that that machine is as sensitive and kind as Virginia Woolf. But even she ate birds for dinner.

SapphoBot, Data Science, Lovers and Beloveds

Thou shalt not sit
With statisticians nor commit
A social science.
W. H. Auden, Under Which Lyre

There is always the lover, and always the beloved. As Michel Foucault suggests, the only remaining question is how to allocate them: who is allowed to sleep with whom, and under what circumstances. Consider the dilemma of the (extremely charming) young Phaedrus, in the dialogue with Socrates that bears his name: what kind of lover should a person, seeking to be loved, take? Socrates’ answer, of course, is that he should cleave to one inspired by a particular kind of divine madness: “the fourth and last kind of madness, which is imputed to him who, when he sees the beauty of earth, is transported with the recollection of the true beauty; he would like to fly away, but he cannot; he is like a bird fluttering and looking upward and careless of the world below; and he is therefore thought to be mad.” One need not be a paid-up subscriber to Dorothy Parker’s cynical view that one of you is lying to think the oppositions of the lover/beloved relationship tell us something true about this madness. If the symmetry be broken spontaneously, of a moment, even rehealed and rebroken, it is still, for a time, a broken symmetry, maddening to those under its spell.

Despite the great inconveniences it can pose to a well-ordered state, this madness is recorded down to our own day. Today, indeed, we blow this process up onto the largest possible scales: as bots retweet Russian propaganda and mad leaders, we task them also with reminding us of the torments of the visions granted by love, and soothing us, perhaps, as we undergo them. That is thanks to SapphoBot, a little program that shares the works of the great Lesbian poet, who did for love what Aeschylus did for tragedy, and Socrates for philosophical dialogue.

Who reads poetry? We do, now, at the rate of one fragment every two hours. Drawing randomly, SapphoBot breaks off a Sapphic text from the classicist/poet Anne Carson’s translations in If Not, Winter—what little we have, torn off in its turn from an Egyptian mummy’s wrappings or an exemplar sentence in a textbook grammar—and shares it instantly with her 17,000 (or so) followers spread around the world.

Let us (as the social scientists say) operationalize that crucial dyad granted to us from the Greek estate. Those subscribing to SapphoBot’s feed have a choice: to touch the heart, indicating a personal response, or to re-tweet, sharing her work under their own name, adjacent to, and interspersed with, the things they write themselves. When a subscriber re-tweets, she speaks in the voice of the lover; when she touches her heart, she plays the beloved. We place ourselves along each axis, sometimes the lover, sometimes the beloved, and signal accordingly; each fragment, now, records both the number of lovers and the number of (responding) beloveds.

One way to view this strange and automated window on an infinitely distant, infinitely close, past is at the top of this article: a simple scatterplot. Each point on this figure corresponds to a Sappho fragment; the horizontal location of the point shows the lover’s retweets, while its vertical position shows the beloved’s heart-like responses.

Some simple things at first. There are more beloved-responses than there are lover’s declarations. This might have puzzled Phaedrus and Socrates, who would have understood the yielding of the beloved to the lover to be — at least potentially — a shameful matter. But the internet makes the beloved-responses (at least partially) hidden from public view: to <heart> a text is a private matter, while a declaration, conversely, is shared with all the lover’s followers (here, considering Twitter, it is hard not to imagine the Greek agora, one where philosophy and love coexist with pride, public shaming, and hidden vice…).

While the beloved responses outnumber the lover’s declarations, it is also the case that the response is sub-linear. In practical terms, what this means is that the declarations that are most common are less popular with the beloveds than you might expect. If you double the popularity of a declaration among the lovers, you only increase the responses of the beloveds by about 68%, a relationship mathematically expressed by saying that the beloveds scale as the three-quarters power of the lovers. (I had hoped to find a three-halves scaling, which would allow for an analogy between lovers and beloveds, on the one hand, and, on the other, Kepler’s third law of planetary motion, relating the axis and period of an orbit; regardless, this empirical law now demands an equivalent Newton of the heart, to explain its emergence from first principles.)
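The arithmetic behind the 68% figure can be checked directly, and a scaling exponent of this kind is typically estimated by a least-squares fit in log-log space. A minimal sketch in Python, using synthetic retweet/like counts (not SapphoBot’s actual data; the constants are illustrative assumptions):

```python
import math
import random

# If likes scale as retweets^(3/4), then doubling the retweets
# multiplies the likes by 2**0.75 — a ~68% increase, not a doubling.
doubling_factor = 2 ** 0.75
print(round(doubling_factor, 2))  # 1.68

# Synthetic fragments obeying likes ~ 3 * retweets^0.75, with noise.
random.seed(0)
xs, ys = [], []
for _ in range(500):
    retweets = random.randint(1, 200)
    likes = 3.0 * retweets ** 0.75 * math.exp(random.gauss(0, 0.1))
    xs.append(math.log(retweets))
    ys.append(math.log(likes))

# Least-squares slope in log-log space recovers the exponent:
# log(likes) = log(3) + slope * log(retweets).
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
    / sum((x - mx) ** 2 for x in xs)
print(round(slope, 2))  # close to the true exponent of 3/4
```

In log-log space a doubling is just a shift of log 2 along the horizontal axis, so a slope of three-quarters translates directly into the 68% figure above.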

For those quibbling scientists, it’s worth noting that the three-quarters power-law of lovers and beloveds contrasts with the behavior of the Finnegans Wake Bot, a similar concept. In this case, I’ve described retweets as “writer” responses, and likes as “reader” responses. In contrast to the differences we find between lovers and beloveds in Sappho, readers and writers, in Finnegans Wake, are essentially equivalent roles: any passage will have, on average, a similar number of readers and writers. And as a passage becomes more and more popular with writers, it becomes similarly popular with readers.

There are many lessons hidden here for the lover seeking her beloved. Implicit in the sub-linear scaling—that pseudo-Keplerian three-quarters law—is that beloveds have a wider range of tastes than lovers. The speeches of lovers are more unequal than the pluralistic desires of their beloveds. The songbirds sing in a restricted range; the beloveds, by contrast, respond to the unexpected more readily than one might guess.

Lovers, in their madness, misjudge in other ways as well: they fail to realize that what it pleases them to say may not please their beloveds equally well. Consider the red band, which highlights a population of passages that lovers, at least, seem to treat equally. The scatter up and down that red band, however, shows how beloveds are a different matter. Among these passages that their lovers treat equally, they prefer some much more than others. At the two extremes within that red band, we find these two (where the “]” in the beloved-scorned text indicates a fragmentary feature)—

virginity
virginity
where are you gone leaving me behind?
no longer will I come to you
no longer will I come
(~18 retweets; ~88 likes)

]no pain
(~19 retweets; ~40 likes)

The message is simple. Lovers: declare not your pain, tempting though it is! Your beloveds really mourn what you have done to them, and have little pity for the pains you receive in return.