Statues at the Monument to the Bandeiras wearing headphones on International Noise Awareness Day in Brazil. (Dario Oliveira/Anadolu Agency/Getty Images)

In today’s attention economy, ideas don’t need to be deleted or redacted to be silenced. They can be drowned out privately, screen by screen, by unchecked noise from decoy bots, doxxing campaigns, and filter bubbles.

In WIRED’s Free Speech issue, Zeynep Tufekci describes how so many of the “most noble old ideas about free speech simply don’t compute in the age of social media.”

The most effective forms of censorship today involve meddling with trust and attention, not muzzling speech itself. As a result, they don’t look much like the old forms of censorship at all. They look like viral or coordinated harassment campaigns, which harness the dynamics of viral outrage to impose an unbearable and disproportionate cost on the act of speaking out. They look like epidemics of disinformation, meant to undercut the credibility of valid information sources. They look like bot-fueled campaigns of trolling and distraction, or piecemeal leaks of hacked materials, meant to swamp the attention of traditional media.

These tactics usually don’t break any laws or set off any First Amendment alarm bells. But they all serve the same purpose that the old forms of censorship did: They are the best available tools to stop ideas from spreading and gaining purchase.

John Stuart Mill’s notion that a “marketplace of ideas” will elevate the truth is flatly belied by the virality of fake news. And the famous American saying that “the best cure for bad speech is more speech”—a paraphrase of Supreme Court justice Louis Brandeis—loses all its meaning when speech is at once mass but also nonpublic. How do you respond to what you cannot see? How can you cure the effects of “bad” speech with more speech when you have no means to target the same audience that received the original message?

Freedom of speech continues to be an important democratic value, Tufekci writes, “but it’s not the only one.” The First Amendment isn’t even the only amendment to the Constitution, let alone our only vision for a functioning democracy. Ideally, we’d also have a knowledgeable public, a capacity for informed debate, an atmosphere of honesty and respect, and a transparent system for holding powerful people and institutions accountable to their constituents.

But constituents aren’t users, and today’s giants of search and social are hardly bastions of free speech. Algorithms promote democratic ideals about as often as they safeguard friendships from advertisers. While social media platforms may feel like vibrant public spheres, they’re more like operating theaters. Procedures are expertly monitored in a controlled environment, and the glass only goes one way.

“To be clear, no public sphere has ever fully achieved these ideal conditions,” Tufekci reminds us, “but at least they were ideals to fail from. Today’s engagement algorithms, by contrast, espouse no ideals about a healthy public sphere.”

But we don’t have to be resigned to the status quo. Facebook is only 13 years old, Twitter 11, and even Google is but 19. At this moment in the evolution of the auto industry, there were still no seat belts, airbags, emission controls, or mandatory crumple zones. The rules and incentive structures underlying how attention and surveillance work on the internet need to change.

But in fairness to Facebook and Google and Twitter, while there’s a lot they could do better, the public outcry demanding that they fix all these problems is fundamentally mistaken. There are few solutions to the problems of digital discourse that don’t involve huge trade-offs—and those are not choices for Mark Zuckerberg alone to make. These are deeply political decisions. In the 20th century, the US passed laws that outlawed lead in paint and gasoline, that defined how much privacy a landlord needs to give his tenants, and that determined how much a phone company can surveil its customers. We can decide how we want to handle digital surveillance, attention-channeling, harassment, data collection, and algorithmic decision-making. We just need to start the discussion.

Read the full story at WIRED.