Editorial
State of the Internet

Drew Austin

by Graham Johnson

In “To 2018,” our December blog post ushering in the new year, we touched on the state of the web, acknowledging issues like digital advertising and algorithmic filter bubbles while remaining optimistic about the potential for building a more ethical and sustainable Internet. To continue the conversation, we’re hosting a virtual “roundtable” with digital humanities scholars, tech thinkers, archivists, and more, conducted through a series of interviews.

This week, we spoke over email with Drew Austin, on sabbatical after a long tenure as an engineer at Uber. Drew runs Kneeling Bus, a blog about the intersection of urban planning, architecture, and online infrastructures, and completed a writing residency at Ribbonfarm in 2013. Our conversation touched on everything from social engineering and High Modernism to archival practices and the failed dream of a free and cosmopolitan Internet.

In the discourse surrounding filter bubbles, there’s a tension: as you note in your piece “Scaling Bias,” platforms like Facebook are merely showing users what they want to see. People in opposing filter bubbles may call that content fake news, but no one is exempt; everyone self-selects platforms that confirm their existing suspicions. What do you do with this tension? To what extent are tech companies to blame, and to what extent are we our own villains?

It’s tempting to interpret the present technological moment as a complete break from the past, but what we’re calling “filter bubbles” and “fake news” are just newer versions of phenomena that have always existed. The 2016 election and the postmortem that followed sparked a fascinating discussion about this. The surprise, I think, is not that people have found information sources that confirm their biases—again, nothing new—but that the Internet, this supposedly cosmopolitan space that promised to free us from the ignorance that older media like print and television reinforced, might be equally vulnerable to those distortions, and is actually more so, given the efficiency with which audiences can be targeted with specific messages.

To me, the Internet’s crucial difference in this regard is that it’s more decoupled from physical geography than those other media. When newspapers and even television ignited debates, those debates had to continue in other, more nuanced spheres, like one’s own community (partly because both were one-way channels of communication). On the participatory Internet, the discussion continues within the medium itself, which seems to limit its capacity for self-correction.

I certainly don’t think Facebook and their ilk are malicious in what they’ve done; they just haven’t understood their own platform well enough, and it happens to be one of the most powerful forces on the planet. Users don’t understand it well enough either. Regardless of Facebook’s intentions, by trying to curate and target individual streams of information, they are taking a step away from “truth value” and toward whatever biases those individualized feeds embody.

Le Corbusier, Quartiers Modernes Frugès

A lot of your writing looks at online infrastructure through the lens of architecture and urban planning. Current conversations describe online “architects” designing spaces that are “good for us” but perhaps contrary to our natural instincts — a description that evokes High Modernism, Le Corbusier, and spaces which, rather than being designed to accommodate the way we live, attempt to change us. Very few people end up wanting to live in such structures; they inevitably migrate elsewhere, or end up in broken, anti-human systems. Is this analogy simplifying things too much? What are the lessons of the Modernist project?

One of the lessons of Modernism was that prescriptive design that begins with utopian motives easily gives way to worse outcomes for its inhabitants or users. The same could be said of most idealistic master planning. Modernism began with the goal of elevating mankind through a certain spatial logic, but ended up generating merely interesting (and occasionally beautiful) buildings that were subsumed into the prevailing capitalist milieu and thus defanged.

In its worst instances, this idealism yields an infrastructure of control that gains traction via benevolent leadership, but then gets handed over to someone with less altruistic intentions but the same level of control. Facebook could be an example of this today—its creators didn’t set out to build an advertising platform when they were in college, but they ended up with the perfect infrastructure for that. The key to designing systems the right way, in light of this history, is to create the incentives without the control mechanisms that only work when someone “good” is behind the wheel.

Decentralization is a helpful quality for such a system to have, which is probably why people are so excited about blockchain now, but it’s hard to “design” a system for decentralization or distributed agency, since those typically arise organically and design often involves a kind of control. Urbanism offers plenty of useful examples: The ideas of Jane Jacobs or Christopher Alexander start with the assumption that cities are better when individuals have more influence over their local environments. Again, though, much of what they praised had evolved organically over time and is harder to design or engineer, although their suggestions for such design are a great starting point.

You note in “Digital Wastelands, Societies of Control” that we often trade up for infrastructural networks that give us speed and efficiency at the cost of “humanity.” What does “humanity” look like to you? How do we notice it in order to preserve it?

I do use that word “humanity” a lot. In short, I’d define it as having control and influence over one’s immediate environment. [Philosopher] Ivan Illich probably gave the best explanation of how high-speed transportation infrastructure in particular leads to a direct loss of individual and local control, a phenomenon he called “radical monopoly.”

Freeways, for example, isolate parts of cities that were formerly connected to one another, and thus make more “human” forms of transportation, like walking and biking, more difficult and less available to everyone.

In that sense, I suppose, I use “human” more broadly in reference to inherent needs that people have always had, but which technology doesn’t always help to serve, such as community and emotional well-being.

Library of Babel

In your essay “Decay Value,” you make the case that a society that keeps all of its information, perfectly preserved in a Library of Babel, is functionally identical to a “memoryless society that keeps no records.” I’m reminded of Borges’ famous piece on cartography, in which he describes how an endlessly detailed map becomes so loaded with information that it ends up unusable. Where do you think we’re headed in terms of information management? What kinds of solutions are presenting themselves to us?

Since we each have only a finite capacity for absorbing or even recognizing information, I’ve come to realize that information will still decay in a sense despite infinite storage; it will just decay differently—not only via absolute loss, like paper records that finally disintegrate, but in two other ways: piling up digitally and becoming meaningless in its excess, or being wiped out in catastrophic crashes that destroy huge amounts of stored data at once. The former will probably dominate, and in many ways it is where we already are. Twitter is probably the best example of this (and the most like Borges’ Library of Babel): most tweets are still out there, but an average, unmemorable tweet from five years ago has essentially “decayed” by being forgotten, no longer existing anywhere that someone else would ever see it. Over time, such information approaches zero in its relevance and ceases to be “information” at all.

We’ve written elsewhere on this blog that online advertising—and its prerequisite, the mass collection and monetization of user data—is becoming an urgent social problem. Do you agree? Do you think there are other, larger culprits behind the changes we’ve seen in the Internet over the last few years?

I agree that it’s an urgent problem. Personally, I think the most problematic aspect of that trend is the redirection of attention toward screens and away from everything else, along with the corresponding negative effects on non-digital social interaction, but that’s only because the stewards of my personal data are relatively benevolent and not using it for anything truly nefarious. That could certainly change. In general, I think we’re all correct to acknowledge that the current version of the Internet has exchanged some valuable elements of society for inferior replacements, and digital advertising appears to be the most powerful force driving the Internet toward the version it has lately become.

What makes the new era of online advertising different from (and more dangerous than) the old era of newspaper and television ads that we’ve so normalized?

As I alluded to before, the nature of online information (not just ads) is that it’s less situated in a pre-existing community that can rein in its excesses. In many ways, the Internet is the community as well as the source, and undesirable content can move more quickly, reproduce at greater scale, and affect people more deeply before its effects are even known. As for advertising, it seems different in the sense that it’s more solipsistic: if I see an online ad, I can’t even be certain that anyone else I know saw it. That probably informs a totally different relationship with the outside world than the old form of mass advertising did.

Is there any reading material you’d recommend as integral to conversations about the future of the web?

I’ll recommend two of my favorite essays from the past year, which both relate to the discussion above in different ways. First, I loved this Ian Bogost piece about the Internet of Things and how we connect stuff to computers and the Internet, not because it makes them work better, but because we simply like computers and want them more involved in everything. The downside to this is pretty obvious, as Bogost suggests. Second, this James Bridle post about algorithmic videos for children on YouTube, while possibly a bit overwrought, seemed like a pivotal moment in the ongoing discussion of what the advertising-dominated platform web might become. There’s a darker undercurrent to some of these developments that we’re all probably aware of but can ignore because everything’s worked pretty well so far. The interest in documenting evidence of that dark side was a productive outcome of 2017 and this essay exemplified that trend.