In “To 2018,” our December blog post ushering in the new year, we touched on the state of the web, acknowledging issues like digital advertising and algorithmic filter bubbles while remaining optimistic about the potential for a more ethical and sustainable Internet. To continue the conversation, we’re hosting a virtual roundtable with digital humanities scholars, tech thinkers, archivists, and more, conducted through a series of interviews.

This week, we spoke over email with Grant Wythoff, a historian of media technologies and Visiting Fellow with the Center for Humanities and Information at Penn State University. Wythoff has written for Real Life, Cinema Journal, and Grey Room, and recently published a book on the history of science fiction titled The Perversity of Things: Hugo Gernsback on Media, Tinkering, and Scientification. Working notes on his newest project, Gadgetry, can be found online. Our conversation touched on the differences between print- and screen-based reading, how to deal with information overload, and managing digital privacy in an age of mass surveillance.

Taking up Marshall McLuhan’s idea that “the medium is the message,” a lot of your research has looked at how our media and tools can have serious (and often unacknowledged) effects on how we interact with their “content.” Even though we call an eBook an electronic book, and refer to paper books and eBooks interchangeably in some contexts, our reading experiences differ between the two. In what ways might we be failing by uncritically analogizing, for instance, online journal or newspaper articles to physical journal and newspaper articles? What changes between physical and digital media too often get brushed over or ignored?

On the one hand, I feel like the continuities between print- and screen-based reading aren’t discussed often enough. It’s all too easy to associate the image of people sitting in public and looking at their phones with fears of an atomized society, made up of individuals detached and distracted from the world around them. But these people are all reading. We’re reading more than ever today, and I think it’s useful to understand the continuities between print and digital eras in terms of the growth and democratization of literacy from antiquity to the present.

And yet, following that McLuhan aphorism, there are constitutive differences in how print and digital texts are assembled and delivered to us that have very different kinds of social and political consequences. McLuhan might have understood the shift from print to digital in terms of a shift in the “sense ratio” of each medium: the ways our senses are addressed in different combinations and intensities from one platform to the next.

But the most pressing concerns on this front today are best addressed in terms of authorship and authority. When texts are generated to exploit the biases and prejudices of algorithmically filtered publics, how can we teach content and platform literacy at the same time? The public has to become a lot more savvy about the way they read, very quickly, if we’re going to recover any confidence in our civic institutions after the election. People will need the skills to interpret not only the argument or meaning of a given text, but also the technical literacy necessary to understand the materiality of how that text was placed in their feeds in the first place. This is going to take a lot of work, a lot of outreach.

Here, I think we can rely on the work of bibliographical scholars and book historians who, for decades, have shown how the materiality of print should inform our understanding of any given text’s meaning. Textual scholarship that was designed for the print era is just as valuable today for conversations on literacy and the public.

The Analytical Engine, an early concept for a general-purpose computer designed by Charles Babbage in collaboration with Ada Lovelace, 1833-1871.

Obviously, as we adopt new platforms and media, there will be a transition period in which we evolve supplementary tools, techniques, and practices with which to optimize our experience of them. For paper books, some examples might be the commonplace journal, dogears to mark a page, or standardized methods of annotating text (lists of shorthand and symbols for copy). What are some problems with online reading that we might not have adequately dealt with yet? Are there tools, techniques, or practices you recommend to those who spend a lot of time reading online?

Personally, I feel like I defer reading things online way too often. Because there’s so much out there, I end up bookmarking articles I come across more often than actually reading them on the spot. This is a kind of reading practice, I suppose, one I’ve fallen into in order to deal with the glut of material online. But it’s basically just leaving breadcrumbs behind, trying to form a cognitive map of everything that’s out there so that, if necessary, I can circle back around and dive deeper into a particular topic given the signposts I’ve left for myself. Robert Pfaller writes about this a bit in his book Interpassivity: The Aesthetics of Delegated Enjoyment.

Is it at all problematic that the rate of tool improvement is accelerating? Will we have less time to learn how our tools work, how to minimize their downsides while taking advantage of the opportunities they offer?

I don’t know that I’d agree that our tools are improving (by what metric? according to whose values?) but they are certainly becoming more complex. The rate at which technology evolves was a classic problem for science fiction authors after the Golden Age. By the time a novel went to print, the future it had imagined was already outpaced by real-world developments. William Gibson dealt with this problem by deciding to set his science fictions in the present, beginning with Pattern Recognition in 2003.

Today, there are countless repercussions of this acceleration. Devices that require rare earth minerals from almost every continent on the planet, only to be discarded within two years of their original purchase, are having an immense ecological impact. The cumulative “technosphere”—or the collective mass of materials that have been manufactured by humans, from the built environment to underground infrastructure to electronics—is now estimated to weigh 30 trillion tons.

Another way this acceleration plays out is in the number of tools that are simply impossible for their users to repair. Certain design principles put even a base level of technical literacy out of reach. The right to repair movement is doing very interesting work on this front.

A table depicting some of the many rare earth elements, typically mined in developing countries, which are needed to produce computer chips and general electronics.

In the previous installment of this series, State of the Internet, we talked with Drew Austin about the ways online advertising (and its prerequisite, the mass collection and monetization of user data) has become an increasingly urgent social problem. I’d love your thoughts on the magnitude of this problem, and on what makes this age of advertising different from advertising in the past.

I absolutely agree that it’s troubling how data brokers profit off our every behavior online. The vast majority of the information economy runs on advertising revenue. The fact that others profit off the minutiae of our social interactions means that there are very high stakes in maintaining the shape, quality, and frequency of those interactions, if only to protect and grow that revenue. This encourages a very particular kind of relationship between people and ideas, between people and one another, and between individuals and their sense of the public sphere in which they participate.

There’s a kind of paranoia that online ads elicit today. Given the granularity of how ads are targeted to our interests and beliefs, each time we see an ad we wonder: how did they know this about me? Were they listening to my conversation? Even if the ad seems completely off, the fact that it’s dropped in our feed makes us wonder about the categories we’ve mistakenly been slotted into.

You’ve probably heard the story circulating that Facebook accesses device microphones to record user conversations, ultimately in order to better target them with advertisements. People will mention a random craving for a food they would never usually purchase, and the next day an ad for that product shows up in their feeds. This momentary desire could just as easily have been gleaned from patterns in other activities. But whether or not it’s the case that Facebook records our conversations, the fact that these mythologies circulate about who may or may not be listening goes to show that there’s a growing collective unconscious about the capabilities of these technologies, and their ability to identify whole individuals from patterns in the data.

So it seems like advertisements have become one of the few ways in which we get any feedback on the portraits of our lives that are being collected by data brokers and sold by social media companies. We tacitly leave behind traces of our preferences and routines every time we use our devices, but the moment we receive an advertisement it feels like we’re given a glimpse of that version of our lives as it’s being assembled and monetized somewhere out in the ether.

What are some of the best tools or strategies you’ve come across for dealing with issues like filter bubbles, targeted ads, harassment, and attention manipulation?

I’m very interested in the tactic of “self-surveillance.” Different from “quantified self” gadgets, which track calories, steps, and blood pressure in order to see patterns in your daily life and maybe (the idea goes) change behavior for the better, “self-surveillance” involves duplicating for ourselves the kinds of trackers that (as far as we can tell) data brokers and social media companies train on our activities.

So for example, Data Selfie was designed by Hang Do Thi Duc, a designer and media artist based in New York. It’s a browser plugin that displays a ticking clock in the bottom left corner of your Facebook feed, counting the number of seconds you look at each post. Click its icon, and you get a cumulative picture, across several metrics, of your attention to particular topics and people over the course of your use of the plugin: your data selfie. As an art project, I think this is purposefully creepy. The plugin sits there at the top of your browser, an eye icon watching you all the time.
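The core of a tracker like this can be sketched in a few lines. The sketch below is a hypothetical simplification, not Data Selfie’s actual code: it assumes the extension streams events (from something like the browser’s IntersectionObserver) recording which post is currently in view, and simply totals up how long each post held your attention.

```javascript
// Hypothetical sketch of per-post attention tracking, in the spirit of
// Data Selfie (not its real implementation). Each event records which post
// came into view and when; time is attributed to a post for as long as it
// remains the visible one.
function attentionTotals(events) {
  const totals = {};   // postId -> accumulated milliseconds of attention
  let current = null;  // post currently in view (null = none)
  let since = 0;       // timestamp when the current post came into view
  for (const { postId, time } of events) {
    if (current !== null) {
      totals[current] = (totals[current] || 0) + (time - since);
    }
    current = postId;  // a null postId means the reader scrolled past all posts
    since = time;
  }
  return totals;
}
```

A real extension would feed these events from the page’s DOM and then bucket the totals by topic or author to assemble the “selfie.”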

Apply Magic Sauce is a project developed by Cambridge University’s Psychometrics Center, a group founded in 1989 that includes psychologists, statisticians, mathematicians, computer scientists, and linguists. Connect your Facebook or Twitter accounts (or simply input some text you’ve written) and Apply Magic Sauce will show you the kinds of assumptions that can be made about your personality, satisfaction with life, intelligence, age, gender, political views, relationship status, etc.

I have personally found the Data Detox Kit designed by the Tactical Technology Collective to be very useful. So too is FemTechNet’s comprehensive Feminist Guide to Cybersecurity. Also essential: Robin Linus’s social media fingerprint checkup to see if websites can use third party cookies to easily tell that you’re logged in to Facebook, Twitter, Spotify, Dropbox, and many other services.
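The trick behind a login-detection check like Linus’s is worth spelling out, because it shows how little it takes. Many services have a login endpoint that redirects you to a requested resource once you’re authenticated. Point that redirect at an image and load it as an image element, and any third-party page can read your login status off whether the image loads. The sketch below is illustrative only; the endpoint URL shape is a hypothetical stand-in, not any particular service’s real API.

```javascript
// Illustrative sketch of third-party login detection via a login-redirect URL.
// (Hypothetical endpoint shape; not any specific service's real API.)
// If the visitor is logged in, the login endpoint redirects to the requested
// image and onload fires; if not, the server returns an HTML login page,
// which fails to decode as an image and triggers onerror instead.
function detectLogin(redirectUrl, onResult) {
  const img = new Image();
  img.onload = () => onResult(true);   // got an image back: logged in
  img.onerror = () => onResult(false); // got a login page instead: logged out
  img.src = redirectUrl;               // setting src kicks off the request
}

// Hypothetical usage on a third-party page:
// detectLogin("https://social.example/login?next=/favicon.ico",
//             loggedIn => console.log("logged in?", loggedIn));
```

No cookies are read directly; the third-party page only observes a load/error signal, which is exactly why this leak is so hard to notice.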

Finally, there’s PersonalData.io, a group that processes legal requests on your behalf, requesting from companies like Tinder, Uber, and Facebook a copy of all the data they have on you.

Lastly, we’re putting together an Are.na channel with resources related to this roundtable, and would love it if you had recommendations for papers, essays, links, etc. that you feel are integral to conversations around the future of the web. Last month, Drew Austin put a spotlight on Ian Bogost’s Atlantic essay, “You Are Already Living Inside a Computer”, and James Bridle’s viral piece “Something is Wrong on the Internet”.

There are many media theorists writing about contemporary technological issues in ways that often play with the affordances of long-form, print scholarship. I’d recommend:

Wendy Chun, Updating to Remain the Same: Habitual New Media (2016)

Dominic Pettman, Infinite Distraction (2016)

Benjamin Bratton, The Stack: On Software and Sovereignty (2016)

Adam Greenfield, Radical Technologies: The Design of Everyday Life (2017)

Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (2018)

Jean Baudrillard, The Ecstasy of Communication (1987/2012)