
Cory Doctorow: Blasting through Big Tech’s ‘walled gardens’

Interoperability is key to curbing the power of the social media and tech giants

Science and Technology

Facebook headquarters entrance sign, Menlo Park, California. Photo from Flickr.

The following is an excerpt from The Internet Con: How to Seize the Means of Computation by blogger, journalist, and science fiction author Cory Doctorow. Published by Verso 2023. Copyright © Cory Doctorow 2023. For more information, visit www.versobooks.com.


The history of technology is one long guerrilla fight where the established giants wield network effects against scrappy upstarts whose asymmetrical warfare weapon of choice is low switching costs.

Take the early web. Actually, the early pre-web. Around the same time that Tim Berners-Lee—then a researcher at CERN, the European particle physics laboratory in Switzerland—was creating the World Wide Web, academics at the University of Minnesota launched the first “user-friendly” way to access the internet: a service called gopher.

Before gopher, the internet was a motley, mismatched set of connected servers at universities, research institutions and military and government institutes all around the world. Each internet node had its own servers, offering their own services: one might let you search the catalogs of its library, another might tell you the local weather, a third might have a file-server full of technical manuals.

There was no central directory of these services. That was a feature, not a bug. The internet was designed to be a “network of networks”—a way for anyone to connect any kind of computer and make it accessible to anyone else. To add a new service to the internet, you designed it, built it, and plugged it in (after cajoling your university’s IT administrators for an IP address and a network connection). There was no way to know what was plugged into the internet because there was no way to control what was plugged into the internet.

Just finding out what was on the internet was one challenge, but even when you found a service you wanted to use, you still had to figure out how to use it. These early services were all accessed and controlled via a terminal program (or, indeed, an actual, physical terminal—a kind of brainless printer/keyboard or screen/keyboard device from the computational Paleolithic age). Each one had its own esoteric syntax—abbreviated commands with finicky spellings and variables that had to be appended in just the right order, with just the right separators. To “browse” the internet in those days, you had to master dozens of esoteric computing environments.

Enter gopher. Gopher was created in 1991 by a team at the University of Minnesota’s Microcomputer Center—the group that supported students who wanted to hook their computers up to the internet. This was a significant challenge! The Microcomputer Center staff had to guide students through installing the basic networking software that let their computers send and receive internet protocol messages (in those days, most PCs shipped with operating systems that were not capable of connecting to the internet out of the box).

On top of that, the Microcomputer Center had to install specialized software—an email program; a “newsreader” for reading Usenet, the internet’s message boards; a program for transferring files; and more—for each student and show them how to use it.

But most of all, the Microcomputer Center had to teach students and faculty how to use the finicky command-line services that constituted the bulk of the internet’s servers. Many of these ran homebrewed server programs that were unique to their institutions, reflecting the idiosyncratic views of the systems’ administrators about the best way of using the data they maintained. The hours you put into learning one system wouldn’t transfer over to the next.

That’s where gopher came in. The Microcomputer Center’s programmers created intermediate programs that served as a kind of overlay to each of these one-off systems. These programs would present naïve users with numbered menus listing all the commands the system supported. Instead of remembering that typing “ls” produces a list of files from an FTP server (a kind of early file-server), you just looked down a menu like this:

1. List files
2. Get file
3. Delete file
4. Upload file

It was a lot easier to type “1.” than it was to remember “ls,” and while it took a fair bit of work to create these menuing overlays, the gopher programmers only had to do this once for each service, and then they could train the users they supported to use menus, rather than teaching each user a hundred obscure computing dialects.
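
To make the pattern concrete, here is a minimal sketch in Python of a gopher-style menu overlay wrapped around an ordinary FTP service: the user picks a number, and the program translates it into the underlying protocol’s commands. The host name is a placeholder and the menu is pared down; this is an illustration of the approach, not a reconstruction of the Microcomputer Center’s actual software.

    # A numbered menu in front of FTP: the user picks a number, the
    # overlay translates it into the underlying protocol's commands.
    from ftplib import FTP

    HOST = "ftp.example.org"  # placeholder host, stands in for any FTP service

    def list_files(ftp):
        for name in ftp.nlst():                   # ask the server for a file listing
            print(name)

    def get_file(ftp):
        name = input("File to fetch: ")
        with open(name, "wb") as out:
            ftp.retrbinary("RETR " + name, out.write)   # download in binary mode

    def delete_file(ftp):
        ftp.delete(input("File to delete: "))

    MENU = [
        ("List files", list_files),
        ("Get file", get_file),
        ("Delete file", delete_file),
        ("Quit", None),
    ]

    with FTP(HOST) as ftp:
        ftp.login()                               # anonymous login
        while True:
            for i, (label, _) in enumerate(MENU, start=1):
                print(f"{i}. {label}")            # the menu the user actually sees
            action = MENU[int(input("> ")) - 1][1]
            if action is None:
                break
            action(ftp)                           # turn the choice into FTP commands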

Gopher menus didn’t just let you interact with a service—they also let you hop from one service to another, the way links between web pages do today. Gopher became both a directory of nearly everything connected to the internet and a means of connecting to and controlling all those services.

Gopher was an open protocol. Any programmer who wanted to help other people interact with a service for which there was no menuing system could write their own, and make it available in “gopherspace.”
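
That openness is easier to appreciate when you see how little a gopher client actually has to do: open a TCP connection to port 70, send a selector string, and read back a tab-delimited menu. The sketch below follows the published protocol (RFC 1436); the host name is a placeholder, though a handful of hobbyist gopher servers still answer on port 70 today.

    # Gopher in miniature: one round trip per request, menus as plain text.
    import socket

    def gopher_fetch(host, selector="", port=70):
        # Send the selector (an empty string asks for the root menu), then
        # read until the server closes the connection.
        with socket.create_connection((host, port)) as sock:
            sock.sendall(selector.encode("ascii") + b"\r\n")
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks).decode("utf-8", errors="replace")

    # Each menu line is: item type + display text, selector, host, port,
    # separated by tabs; a lone "." marks the end of the listing.
    for line in gopher_fetch("gopher.example.org").splitlines():  # placeholder host
        if line == ".":
            break
        display = line.split("\t")[0]
        print(display[1:] if display else "")   # drop the one-character item type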

In this way, hundreds of proprietary interfaces designed for highly technical users, many of them the product of the world’s largest technology companies, were commodified, subsumed into a volunteer-managed, globe-spanning interface that was designed to welcome laypeople to the burgeoning internet. Though some service operators objected to these unsolicited improvements, they had few options: they could send angry lawyer letters, re-engineer their systems to break gopher automation tools, or make their public services private, with access gated by passwords.

Most of these services did nothing, apart from grumble. Some sent lawyer letters, but the law was unsettled and confusing, and much of the time, the recipients of these letters simply ignored them (or not—gopher developers built their automation tools because they liked the services they were trying to make accessible, so a letter from the services’ maintainers explaining that they didn’t welcome these volunteer efforts was sometimes enough to stop them).

For a brief moment, gopher was the dominant internet service. The World Wide Web was much smaller than gopher, and, unlike gopher, the web grew primarily through the creation of new websites—services designed for the web. Gopher, by contrast, grew by swallowing existing services.

What happened next? It’s the best part. As the web’s growth took off, web users tired of having to remember which services they accessed via gopher and which ones they accessed via the web. Gopher’s developers tried to solve this by making it possible to load webpages in a gopher browser, but the web’s developers turned the tables on them, making it possible to load gopher pages in web browsers.

Gopher simply became another kind of webpage, which you accessed by typing gopher:// rather than http:// into your location bar. The administrators who ran gopher servers stood up webservers alongside them, accessing the same documents, so you could type either gopher:// or http:// and have an identical experience.

Gopher dwindled and disappeared (try to remember the last time you typed gopher://). But the collapse of gopher wasn’t the end of gopherspace. The files, services and sites that we once accessed with gopher are now part of the web.

The gopher story is remarkable because it’s such a perfect tale of how the intrinsic interoperability of technology meant that cornering the market on digital systems was a technical impossibility. It didn’t matter that the largest corporations in the world created mainframe-based walled gardens; volunteers were able to open them up with hobby projects.

Technology is so flexible that even as gopher was swallowing the web—integrating HTTP and HTML into gopher browsers—the web was swallowing gopher, too. It’s like one of those weird Wikipedia pages like “lakes with islands” that list lakes that have islands.

Again and again in the early days of personal computing and the web, we get stories like this. Take the IBM PC: IBM was a giant, abusive tech monopoly long before this was in vogue. The company was tightly integrated with the US military and government, and this afforded it a measure of security; even though its rivals griped to their members of Congress about how they were being bigfooted by IBM, the company fended off serious regulatory action for decades, thanks to powerful friends in the Pentagon and other parts of the US state apparatus.

Eventually, IBM’s luck ran out. In 1970, the DoJ opened an antitrust case against “Big Blue” (the company’s army of sales reps all wore blue ties). Because IBM was a monopoly, it had a lot of money to spend in the ensuing fight. A lot of money. Over the next twelve years, IBM outspent the entire Department of Justice Antitrust Division, every year, in a war that came to be called “antitrust’s Vietnam.”

IBM won. Sorta. Twelve years later, the Reagan administration decided to drop the enforcement action against Big Blue (they broke up AT&T instead—for the ideologues in Reagan’s orbit, AT&T was an acceptable antitrust target because it was so entwined with the US government that breaking it up was like making the government smaller).

But IBM lost, too. Twelve years of having to produce every memo and the minutes of every meeting took their toll. Running a company where every word committed to paper—let alone uttered in public—had to be vetted by paranoid lawyers locked in a high-stakes battle with the US government changed IBM, blunting its predatory instincts.

The company began to second-guess its commercial plans, steering clear of the kinds of things that the DoJ frowned upon. The DoJ didn’t like it when a big company monopolized the parts for its products, so IBM made a PC that used commodity parts—parts that any manufacturer could buy on the open market.

Nor did the DoJ like it when companies tied their software to their hardware, so IBM decided not to make its own PC operating system. IBM chairman John Opel asked a friend who served with him on the board of the United Way if she knew anyone who could provide an OS for his company’s PC. Her name was Mary Gates, and her son, Bill Gates, had a company that fit the bill: Micro-Soft (they dropped the hyphen later).

Cory Doctorow. Photo by Jonathan Worth.

Once the IBM PC—built from commodity components, running a third-party operating system—hit the market, other manufacturers wanted to follow it. They could buy their operating systems from Microsoft and their parts from IBM’s suppliers—but they still needed a “ROM”: the “read-only memory” chip that held the BIOS, the low-level code that made a PC a PC.

In stepped Phoenix Technologies, a small startup that reverse-engineered the PC ROM and made its own, customizing it as needed for a booming market in “PC clones”—Compaq, Dell, Gateway and even electronics giants like Sony.

Here we see two kinds of interoperability in action. IBM “voluntarily” embraced interoperable components and operating systems—that is, the DoJ didn’t order it to do these things, but twelve years of brutal legal battles convinced IBM not to stir up the DoJ’s hornet’s nest again.

But alongside that voluntary interop, we have adversarial interop, wherein engineers at a scrappy startup pitted their wits against the best minds at the world’s largest and most powerful technology company—and won, cloning the PC ROM and selling it on the open market (something IBM likely tolerated due to the possibility of attracting more DoJ attention if it clobbered a small company that was enabling rivals to launch competing products).

Today, IBM no longer makes PCs. Interoperability in PCs meant that anyone who started on an IBM PC could switch to a Compaq or a Gateway or a Sony. Interoperability also means that anyone who makes that leap can leap again—to a Mac, where iWork will let them open and save all those Microsoft Office documents they created on their IBM PCs, Compaqs and Gateways.

And that’s without even getting into free/open operating systems like GNU/Linux, available in dozens of flavors, all of which can run on Apple hardware or hardware from any of the PC vendors. Free GNU/Linux apps like LibreOffice can open the files created by Microsoft Office and iWork and Google Docs, and exchange them seamlessly with users of rival platforms.

That seamlessness in Office documents isn’t just a matter of diligent reverse engineering. Remember, after the success of Apple’s iWork, Microsoft threw in the towel and ceded control over the Office file formats to an independent standards body, which means that anyone who knows how to write software can download a copy of the standard and the reference code for implementing it, and make their own Office product that will work with everyone else’s.
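
The practical meaning of that handover is easy to demonstrate. The standardized format, Office Open XML (published as ECMA-376 and ISO/IEC 29500), defines a .docx file as a ZIP archive of XML parts, so any program can pull the text out of one without Microsoft’s code anywhere in the loop. A minimal sketch in Python, using only the standard library and a hypothetical file name:

    # A .docx file is a ZIP of XML parts defined by an open standard.
    import zipfile
    import xml.etree.ElementTree as ET

    # XML namespace for WordprocessingML, as given in the standard.
    W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

    with zipfile.ZipFile("report.docx") as archive:        # hypothetical file name
        body = ET.fromstring(archive.read("word/document.xml"))

    for paragraph in body.iter(W + "p"):                   # every paragraph...
        print("".join(t.text or "" for t in paragraph.iter(W + "t")))  # ...and its text runs

A one-screen reader is a long way from a word processor, of course, but it shows why LibreOffice, iWork and Google Docs can all round-trip the same files.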

Walled gardens can only exist when switching costs are high. Tech companies understand that making interoperability-proof computers is a lost cause—like making dry water. Computers work because they are interoperable.

The outcome of a war on general-purpose computers that is fought on a technological battlefield is foreordained. General-purpose computers will win every time.

But would-be tech monopolists (and their investors) still dream of walled gardens and scheme to build them.

In June 2021, a US federal court dealt a severe blow to the FTC’s antitrust complaint against Facebook, rejecting the regulator’s case. But the court left the door open to a new complaint, inviting the FTC to re-file its case.

In August 2021, the FTC did just that, filing an “amended complaint,” guided by the new FTC Chair Lina Khan, who is also the leading theorist of Big Tech and antitrust. The new complaint drew heavily on the documents that Facebook had been forced to cough up in the earlier case, and large parts of the filing were initially blacked out for reasons of commercial confidentiality.

But by October 2021, the FTC had won the right to unseal many of its new documents, and that’s where we learned some of the grimy details of Facebook’s plans to raise switching costs. Over and over again, the FTC found senior Facebook managers and product designers explicitly designing products so that users would suffer if they left Facebook for a rival.

Take this quote from a memo that Facebook’s Mergers and Acquisitions department sent to CEO Mark Zuckerberg, giving the case for acquiring a company that let users upload and share their photos:

imo, photos (along with comprehensive/smart contacts and unified messaging) is perhaps one of the most important ways we can make switching costs very high for users—if we are where all users’ photos reside because the upoading [sic] (mobile and web), editing, organizing, and sharing features are best in class, will be very tough for a user to switch if they can’t take those photos and associated data/comments with them.


Later, a Facebook engineer discusses the plan to selectively reduce interoperability, based on whether a Facebook app developer might help people use rivals to Facebook’s own products:

[S]o we are literally going to group apps into buckets based on how scared we are of them and give them different APIs? How do we ever hope to document this? Put a link at the top of the page that says “Going to be building a messenger app? Click here to filter out the APIs we won’t let you use!” And what if an app adds a feature that moves them from 2 to 1? Shit just breaks? And a messaging app can’t use Facebook login? So the message is, “if you’re going to compete with us at all, make sure you don’t integrate with us at all.”? I am just dumbfounded… [T]hat feels unethical somehow, but I’m having difficulty explaining how. It just makes me feel like a bad person.


Then, a Facebook executive describes how switching costs are preventing Google’s “Google+” service from gaining users:

[P]eople who are big fans of G+ are having a hard time convincing their friends to participate because 1/there isn’t [sic] yet a meaningful differentiator from Facebook and 2/ switching costs would be high due to friend density on Facebook.


These are the machinations of a company that believes that its most profitable user-retention strategy is to lock its users up. They’re the machinations of a company that is thoroughly uninterested in being better than its competitors—rather, they’re dedicated to ensuring that leaving Facebook behind is so punishing and unpleasant that people stay, even if they hate Facebook.

Facebook isn’t alone in realizing that winning user loyalty by providing an excellent experience is harder work than punishing disloyal users, nor are its user-facing services the only place where this strategy is deployed.

Many people have observed that Facebook’s customers aren’t the users who socialize on its platform, but the advertisers who pay to reach those users. “If you’re not paying for the product, you’re the product” is often invoked to explain why Facebook treats its users so badly.

But being Facebook’s customer—an advertiser or even a publisher—doesn’t mean you’ll get better treatment from the company. Time and again, the company has been caught stealing from advertisers, falsifying its records about who saw their ads and for how long. At least half of the ads that companies pay Facebook to show to its users are never actually seen by a human being—but Facebook bills the advertisers for them anyway.

Mark Zuckerberg speaks at Facebook’s annual conference, 2018. Photo by Anthony Quintano/Wikimedia Commons.

The same goes for the publishers whose commercially prepared reporting, opinion and coverage are a major reason that Facebook users are attracted to the platform. In 2015, Facebook decided to use these publishers as part of its bid to dethrone Google’s YouTube service as the leading video platform online.

The result—the notorious “pivot to video”—was a devastating fraud, a mass-extinction event for media companies. Facebook lied to media companies about the popularity of Facebook videos, falsely claiming that Facebook users had all but abandoned reading text in favor of watching videos. They told the same lies to advertisers, whom they fraudulently billed for phantom ads that never ran on videos that were never watched. Media companies around the world fired their print journalists and built out expensive video production divisions.

This was Facebook’s version of “fake it until you make it.” The company wanted to be the number-one internet video platform, so it declared that it already was that platform, and suckered advertisers and media companies into participating in its delusion.

This is just a slightly sleazier version of what other companies had done before—think of Steve Jobs promising media companies that if they invested in making apps for his new iPad they would reap massive profits, tapping into a new movement in which readers were willing to “pay for content.”

Jobs had no idea if Apple users would pay for apps, but he won either way: if media companies filled his App Store with software, then other software developers would follow, and some of them would eventually make apps that his customers valued, which would sell more iPads—even if no one was willing to pay for the news.

And if people were willing to pay for the news, well, Apple would be able to rake off 30 percent of the sale price of the app (and, once companies had completed their “pivot to apps,” Jobs altered the deal to guarantee Apple 30 percent of the app’s sale price—and of all the purchases users made within the app; that is, the entire lifetime revenue of every app-using customer these companies had).

Apple’s bet paid off. Users were willing to pay for apps—not as much as Jobs promised, but there were success stories like the New York Times that Apple could gesture toward when smaller newspapers complained that they’d spent all their available cash flow building an app that no one wanted to pay to use.

But Steve Jobs’s famed “reality-distortion field” did not materialize for Mark Zuckerberg. Despite Facebook’s egregious lies about the popularity of video on its platform, despite the billions media companies poured into video production based on those lies, despite Facebook’s content-recommendation algorithms putting their fists on the scales to ensure that every user’s feed was a wall of Facebook videos, Facebook users just didn’t want video.

When the dust settled, advertisers lost the hundreds of millions they spent on ads that no one ever saw. Media companies had no way to service the debt or satisfy the investors that supplied the capital for their pivot to video. Worse, they had laid off their newsrooms and replaced them with video producers—many lured away from stable jobs with huge cash promises based on Facebook’s fake video viewership numbers.

Even after laying off their video producers, these media companies couldn’t recover. For one thing, they had jettisoned the staff and contract writers they’d need to pivot back to text, and even if they could get the band back together, they had blown through so much money on videos for an imaginary audience that they didn’t have anything left to pay these writers.

Media companies imploded. The industry shed hundreds of jobs—young, promising creators at the start of their careers met with ruin, and many old veterans exited the field in ignominy, unable to find another job.

The idea that Facebook abuses its users because they’re not its customers is just wrong. It treats its customers terribly. It also treats its workers abysmally—think of the traumatized army of content moderators whom Facebook puts to work in the content mines, screening images and videos of torture, sexual abuse, murder and other things they’ll never un-see.

Facebook treats you terribly, but that’s not because you’re not its customer. They treat you terribly because they treat everyone terribly. They’re a monstrous company.

It may seem like I’m picking on Facebook, and in truth, I am. All the tech giants are pretty terrible, but I’d argue that Facebook is uniquely bad.

Picture a two-dimensional grid. The x-axis is “more control-freaky”—how much a company tries to circumscribe what you do and rob you of your technological self-determination. The y-axis is “more surveillant”—how much a company spies on you.

Google occupies the top left corner. The company is comparatively cavalier about exercising control over your behavior, because it spies on you and hems you in on all sides with its products. Google doesn’t have to block its rivals from showing up in your searches. All it has to do is put its own services at the top of the page.

Now, Apple is in the bottom right corner. It doesn’t care to spy on you, but then again, it doesn’t need to: it can control you by depriving you of choice. Buy an iPhone and Apple gets to decide which apps you can use. Not content with burying its rivals on page umpty-million of its App Store results, Apple prohibits those rivals from offering competing apps unless they cough up a 30 percent commission on every dime you spend, and it prevents you from installing apps unless they come from its App Store.

Finally, there’s Facebook, up there in the top right corner: maximum surveillance, maximum control. It’s a company that combines Google’s insatiable appetite for your private data with Apple’s iron-fisted control over how you use its service. The worst of all possible worlds.

What’s more, Facebook has become the template for other tech giants, like TikTok. In fact, our 2x2 surveillance/control grid needs to get a lot bigger to accommodate TikTok, because it’s so far off to the top right that I’ve had to put it on the top right corner of the inside back cover of this book to get the scale right … go ahead and check!

(Attentive readers will notice that there’s a missing quadrant in this grid—the bottom left corner, where there is no surveillance and no control; that’s the quadrant where community-supported, free/open software lives.)

You don’t have to agree that Facebook is worse than Google or Apple to know that something needs to be done about Big Tech.

Digital technology was sold to us as an infinitely customizable, responsive, idiosyncratic new way of living. Networked tools were supposed to give us more control over our lives. Instead, we find ourselves manipulated, controlled, corralled and milked dry.

What is to be done?

See Cory Doctorow speak in Winnipeg on Thursday, May 2 at an event hosted by the Canadian Centre for Policy Alternatives (CCPA-MB). For more information or to purchase tickets, visit this Eventbrite page.

Cory Doctorow is a special adviser to the Electronic Frontier Foundation and a visiting professor of computer science at the Open University. In 2020, he was inducted into the Canadian Science Fiction and Fantasy Hall of Fame.
