In the game of Go, bad shape is the term for configurations of stones on the game board that are inefficient in achieving their offensive goal (territory capture) and unlikely to achieve their defensive goal (the state of “life”). You can extend a bad shape in a fruitless attempt to make it better, but you’ll generally be wasting your time.
The idea I keep coming back to is that the big platforms, like Marley, were dead to begin with, and are now something particularly bad: dead on their feet. Not because they’ve been abandoned by users (yet) but because they’re structurally incapable of governing the systems they made, and most of the things they try to do about it introduce more and weirder problems.
While they were still gobbling hundreds of millions of new users each year—and while the old political machines were still catching up—platforms could outrun the problem. Now, though, the number of remaining uncaptured humans dwindles, the politicians and propagandists have adapted to exploit the mass-scale machinery the platforms provide, and the positions platform companies have contorted themselves into, trying to shoehorn governance into ungovernable structures, are increasingly hard to maintain.
Facebook especially is likely to zombie-shamble along for some time, held upright by its deep weave into the coordination of offline life and by the communities for whom it serves as a sole accessible connection to the internet, but the whole apparatus looks increasingly precarious.
(These are very simple points, but it remains a wince-inducing faux pas to say them in a lot of tech-thinking spaces, so I will keep pushing on the obvious.)
The evidence of the past decade and a half argues strongly that platform corporations are structurally incapable of good governance, primarily because most of their central aims (continuous growth, market dominance, profit via extraction) conflict with many basic human and societal needs.
As entities, large social platforms continuously undergo rapid mutations in service of their need to maximize profit and expansion while minimizing the kinds of societal and individual harm that can plausibly cause them regulatory trouble or user disengagement. (The set of things that can cause trouble is also always shifting, as political and cultural spheres influence and are influenced by the platforms.) But platform mutations emerge only within a narrow range of possibilities delineated by the set of decisions considered valid in, roughly speaking, Milton Friedman’s model of corporate purpose.
Within this circumscribed mutation zone, certain goals can be named and earnestly pursued (“stop spam” or “eliminate the distribution of CSAM”), even if they’re never achieved. Other goals (anything to do with hate speech, incitement to violence, or misinformation, for example) can be named and pursued, but only in ways that don’t hinder the workings of the profit-extraction machinery—which mostly means that they come in on the margins and after the fact, as in “after the fact of a genocide that Facebook had years of explicit advance warnings about.” Working on the margins and after the fact still matters—less damage is better than more damage—but it means “trust and safety” is kept well clear of the core.
Again, this is all simple and obvious. A tractor structurally can’t spare a thought for the lives of the fieldmice; shouting at the tractor when it destroys their nests is a category error. Business does business. The production line doesn’t stop just because a few people lose fingers or lives. And what is a modern corporation but a legal spell for turning reasoning beings into temporarily vacant machines? We know this, which is why we have OSHA and the FAA and the FTC, for now.
It’s no surprise that when prodded by entities with cultural or regulatory power, platforms build more semi-effective AI classifiers, hire more underpaid contract moderators, and temporarily stiffen their unevenly enforced community rules, but then immediately slump back toward their natural form, which appears to be a cartoonishly overgrown early-2000s web forum crammed with soft targets and overrun by trolling, spam, and worse.
It’s possible to make the argument that sufficiently strong leadership could make even a tech corporation appear to be capable of holding an ethical line, and maybe even capable of accepting slightly smaller profits in service of socially beneficial goals—and that, conversely, the awful people in charge are the main source of the problems. It’s not a very good argument, though, even when I make it myself.
Yes, X is currently controlled by a bizarrely gibbering billionaire with obvious symptoms of late-stage Mad King disease. Yes, Facebook and Instagram—which control vastly more territory than X—are controlled by a feckless, Tulip-craze-mainlining billionaire with a long history of grudgingly up-regulating governance efforts when under public or governmental pressure and then immediately axing them when the spotlight moves on. But would these platforms inflict less damage if they were led by people who valued the well-being of others? Probably yes, to a degree. Twitter/X has offered a lurid natural experiment, and the changes in X after it moved from Jack Dorsey’s spacey techno-libertarian leadership to Elon Musk’s desperately needy quasi-fascist circus act have been obviously bad. A version of Meta founded and led by someone with a reasonably sharp ethical grounding clearly wouldn’t look much like the real Meta at all.
On the other hand, TikTok’s social function is reasonably close