In “To 2018,” our December blog post ushering in the new year, we touched on the state of the web, acknowledging issues like digital advertising and algorithmic filter bubbles while remaining optimistic about the potential for a more ethical and sustainable Internet. To continue the conversation, we’re hosting a virtual roundtable with digital humanities scholars, tech thinkers, archivists, and more, conducted through a series of interviews.

This week, we spoke to Jenny L. Davis, sociologist at the Australian National University in the College of Arts and Social Sciences. Davis co-edits the Cyborgology Blog, theorizes about emergent technologies as social structures, and engages in laboratory and ethnographic research to understand social processes of identity, status, and stigma. She has two current projects: one on “technological affordances” and a second on role-taking. We spoke about ethical design, the link between social media and mental health, and to what extent the Internet era represents a “phase shift” versus a continuation of the status quo.


A lot of your work revolves around “affordances,” which you define as the “range of functions and constraints that an object provides”—a sort of intermediary between a “feature” of a platform and the actual outcomes of that feature. Does good design, in your opinion, narrow the range of affordances to minimize negative outcomes and maximize positive outcomes, or does it increase overall user freedom?

I think the standard for “good” design is primarily about harm minimization and positive user experiences more than about user freedom. The degree of flexibility (i.e., user freedom) should serve user-experience ends rather than representing an end in itself. In some cases, tight technical regulations are crucial to minimize harm/maximize positive outcomes. For example, my 89-year-old grandmother has a special mobile phone with large numbers and very few functions. The phone makes calls, stores contact numbers, and connects to a call-center with the touch of a button. The phone won’t support apps or browsing, it can’t store music, and it doesn’t take photographs. The inflexibility of the phone is what makes it so functional for my grandmother, and it was built to serve her demographic. On the other hand, flexibility can be really useful sometimes. For instance, a phone that seamlessly supports complex functions through both text and speech commands can make that product equally accessible to able-bodied persons, those with hearing impairments, and those with vision impairments (along with enabling safer communications while driving).

The idea of flexibility serving user-experience is rooted in a larger point: affordances aren’t uniform across persons and contexts. My grandmother’s phone would be as frustrating for me as mine would be for her. She would get lost in the choices of a mainstream phone just as I would feel stifled by the absence of Google Maps on her Jitterbug. This is why affordance analyses must always ask how features operate, for whom, and under what circumstances.

In public discourse around social media and depression, one major crux of disagreement is over correlation and causation. It’s frequently argued that studies linking time online with depression are actually reversing causal mechanisms—that more depressed people end up going out less and spending more time in front of screens. Have there been studies that confidently show a causal relationship between time on social media and increased depression?

I’m not aware of any clear causal studies. Most studies are cross-sectional and based on self-reports. Causality would be extraordinarily hard to tease out and would require, at the very least, longitudinal data that began before participants had any contact or engagement with social media and then followed those participants over time. If there is a causal relationship, I suspect it is neither clear nor linear. Once depression kicks in, some people will likely withdraw from social media while others may turn to it for support. A complicating factor, of course, is that “social media” is not a monolithic phenomenon but an interconnected and fragmented suite of platforms and tools. Maybe scrolling through a Facebook newsfeed generates envy but engaging in private conversations over Messenger lends support; maybe Twitter amplifies feelings of anxiety with its streams of news and argument but a carefully curated Instagram soothes with images of cute animals and sunsets.

You write in “Designing Emotion: How Facebook Affordances Give Us The Blues” that online social platforms reinforce and perpetuate many pre-existing offline structures and hierarchies. I’m curious about the extent to which you see online platforms like Facebook or Twitter as a break from previous social organization and social infrastructures (versus as a continuation). A lot of discourse surrounding the Internet has focused on the ways traditionally marginalized voices are suddenly made powerful, for example—but perhaps these cases are the exception more than the rule.

Social media platforms are both a break from and a continuation of traditional interaction structures. They do offer new opportunities for “voice” among everyday publics—including those who have traditionally been relegated to the margins (e.g. #ArabSpring, #BLM, #MeToo). At the same time, high-status people maintain more opportunities to stand out from the crowd. Celebrities and politicians have massive follower counts, and their content gets picked up and addressed by mainstream broadcast outlets. Similarly, the voice afforded to marginalized groups can be both an opportunity and a vulnerability. For instance, dissidents in authoritarian regimes have a new avenue to express their voices, but doing so also exposes them to the consequences of state censure. Through social media, there are new venues in which to speak, but fewer places to hide.

What kinds of behaviors seem largely carried over from pre-Internet interaction? What elements of online platforms, and their respective user behaviors, seem distinctly new and need reckoning with?

The status games, gossip, need for connection, and validation all carry over. We are engaging on these platforms socially, and social rules persist. However, networks are potentially much larger, content is less ephemeral, and conversations and connections are continuous rather than finite. Getting away from social interaction has to be more of a conscious choice with social media. The social stimuli are vast and fast-coming. Managing that becomes less about seeking out interaction and information and more about curating content—your own and others’—by drawing boundaries around when, how, and how much you engage.

One way you point out that platforms like Facebook perpetuate pre-existing offline social hierarchies is with algorithmic feeds in which the socially “rich get richer.” Popular content from popular users is given more visibility; unpopular content by unpopular users is given less. And yet from a content consumption perspective, these algorithmic feeds have higher value or “utility” to users because on average, already-upvoted content will likely be more interesting to audiences. How should we think about balancing things like equal representation and visibility with users’ consumptive preferences? Are user preferences a form of de facto silencing, or something more complicated?

I do think upvoting has a strong silencing function. It’s an a priori form of curation that relies on the normative values of a given audience. When “unpopular” content gets pushed down or erased, it not only reinforces the normative voice of the existing user base, but also reinforces who the site/platform serves. Rather than generating a complex conversation that invites new and unexpected audiences to join in, those who go against the grain are pushed out. Would-be participants who diverge from the norm may thus find no material that appeals to them and when they speak, find their voices hushed. This is not in itself a bad thing, but it is a normalizing and stabilizing thing. Vote-based content curation makes cultural change and diversity more difficult to accomplish.

In general it seems there’s a difficult-to-resolve tension between “giving users what they want” and “giving users what’s good for them.” In our interview with Drew Austin last month, we talked about this briefly in terms of High Modernist architecture, but analogies to fast food versus health food make a similar point. Am I falsely perceiving tension between these two things in terms of social design? If not, how should we think about resolving these kinds of tensions?

I think this tension incorrectly assumes that what people want and what is good for people are opposing forces. I think people usually want what is good for them, but what’s “good” and “good for you” can take myriad forms. I think a problem with social media companies is that they aren’t especially concerned with what people want or what’s good for people, but with how they can design products in ways that optimize monetization. Issues of customer satisfaction and social responsibility serve the larger goal of profit maximization. Sometimes profit maximization coincides with desirable user experiences and/or social good, but these only come to matter in relation to buying and selling. Users pay for usage with attention, so social media companies are in the business of attention cultivation. It’s bad for business when people log off or look away, so companies have to consider the user experience and respond to user-publics, but will always do so in ways that support—rather than undermine—profit motives. Hence, Facebook has gone through a flurry of redesign work to manage the problem of “fake news,” yet maintains an advertising payment structure that gives preference to attention-grabbing content, thus rewarding sensationalism.

There’s a fine line in attention economy issues between making platforms less “addicting” and making them less appealing, less enjoyable, or even (as Maya Ganesh advocates in her pieces on the Center for Humane Technology) less visually attractive. How do we meaningfully distinguish between something like “providing positive utility that keeps users engaged” and “hijacking users’ minds” in thinking about platform design?

This may seem like a silly answer, but the addiction element feels more cultural than technical. If we don’t expect each other to respond immediately, then the phone notification may seem less urgent. Interactions are based on norms and rituals. Social media companies are going to design products in ways that keep those products as sticky as possible. As users, it behooves us to be considerate and generous in the norms we develop even as the technical features encourage more demanding forms of engagement.

Do societies inevitably work out countermeasures and practices to “extract the benefits [of new technologies] without the costs,” as Steven Pinker has argued over at Vox? That is, do you view our current conversations around social media and big data as part of a standard historical trajectory? And does this process occur with new social platforms, systems, and contexts as well?

As Pinker points out, people are agentic and use technologies in ways designers and distributors likely hadn’t envisioned, at times circumventing some of the negative consequences. For instance, Facebook users combat “context collapse” through privacy settings, multiple accounts, and savvy disclosure strategies, while Tor browsers connect people to information without giving away their data or risking violations of privacy. With that said, we can’t overlook the shaping effects of technologies and their sometimes vast power. People push back on the effects of technological advances, but technologies themselves can be incredibly powerful social forces. I know we are talking about social media, but I can’t help thinking about guns. Clearly, our capacity to “extract the benefits without the costs” has not panned out with these technologies of violence. This is all to say that technologies are not deterministic and dystopic nihilism is unnecessary and unproductive. However, it is unwise and a bit arrogant to think that people can do with technology what they will and maintain full autonomy in light of technological systems that increasingly shape interaction flows in personal and public life.