A draft write-up of the “Bots & Digital Infrapuncture” workshop

By Karin van Es

Cristina Cochior and Manetta Berends, both researchers/designers operating in the Netherlands, developed the online workshop Bots and Digital Infrapuncture, inspired by Professor Deb Verhoeven’s concept of “digital infrapuncture”. The aim of the workshop was to imagine bots in chat protocols that intervene in the assumptions on which those platforms are designed. The workshop was held in June 2020 and hosted by Datafied Society together with Creative Coding Utrecht. It was preceded by a twenty-minute lecture in which Verhoeven introduced digital infrapuncture, followed by a ten-minute lecture in which Cochior put forward “Bot Logic” as a framework through which to understand the impact of bots (their affective and effective forces). In what follows, I offer a first reflection on the insights generated through the workshop.

Verhoeven (2015) proposed “digital infrapuncture” as a new model for digital humanities infrastructure. She was responding to scholarly conversations on the need to reconsider digital infrastructure in terms of capacity and care (Nowviskie 2015). Verhoeven argues that more can be done than simply recognizing where it hurts: we need to “amend the systemic causes of that pain” by intervening at critical locations. Digital infrapuncture concerns “how small-scale infrastructure interventions can transform larger contexts. It is, as in the practice of acupuncture, aimed at relieving stress”. The concept draws on the work of Manuel de Solà-Morales on urban acupuncture and that of Steven Jackson on rethinking repair. Of particular interest to Verhoeven is incorporating values in design, with regard for ethics and social sensitivity. She proposes agency, impact and power as key conceptual axes along which current systems are rethought and rebuilt. Verhoeven thus directly engages with the politics of artefacts and emphasizes that “the work we do is social in nature; that we are building social and cultural amenities not simply technical platforms or information systems” (Verhoeven 2015).

Digital infrapuncture is similar in aim to reflective design, which argues that “reflection on unconscious values embedded in computing and the practices that it supports can and should be a core principle of technology design” (Sengers et al. 2005, 49). However, digital infrapuncture is not about replacing one system with another, but about creating small interventions that overhaul the notion of structure. Verhoeven explained that the concept is hard to define because digital infrapuncture emerges in practice. This idea connects to the important question raised by Star and Ruhleder (1996), “when is an infrastructure?”, and to the idea of performative materiality. As Drucker (2013) explains, performative materiality suggests that “what something is has to be understood in terms of what it does, how it works within machinic, systemic, and cultural domains”.

The workshop consisted of five activities: (a) brainstorming chat protocols, (b) identifying where it hurts, (c) imagining a bot to puncture and deflate stress, (d) scripting how the bot acts upon the infrastructure and, finally, (e) group discussion. To think through the concept of digital infrapuncture, I reflect on the process and outcomes of two teams participating in the workshop.

My team looked at the Neoliberal University on Twitter. We pondered the Matthew effect of accumulated advantage and related questions of visibility and voice in academia. We briefly contemplated ways that bots could be used to amplify academics with few followers, but settled on another hurt: the hierarchies of visibility that enable political inaction in the Netherlands on issues pertaining to the neoliberalization of universities. To amend this hurt we proposed a bot that would forward tweets with hashtags like #WOinActie (a community of employees and students protesting the neoliberalization of the university) to the Minister of Education, Culture and Science. We realized that such a bot would be immediately muted or blocked. The idea did, however, raise interesting questions about the relation between humans, bots and infrastructure. Could we actually intervene on the platform, or are bots merely a means to augment it? It seemed that what the bot was doing was more akin to the tactics discussed by De Certeau (1984): acting in environments defined by the strategies of the platform. In this discussion we found a recent article by Andreas Hepp (2020) on communicative robots useful. He proposes that these bots are often “media in media”, since they are based on platforms like Facebook and Twitter and “act upon” these platforms (2020, 10). But importantly, Hepp suggests, they can also be infrastructures in themselves.
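The bot my team imagined can be sketched in a few lines. This is a purely illustrative mock-up, not an implementation: the ministry handle, the sample timeline, and the function name are all hypothetical stand-ins, and a real bot would have to call a platform API (and, as we noted, would likely be muted or blocked quickly).

```python
# Hypothetical sketch of the proposed #WOinActie amplification bot.
# No real accounts or APIs are used; the timeline is simulated.

MINISTRY_HANDLE = "@Ministry"  # hypothetical handle, stand-in only
HASHTAG = "#WOinActie"

def forward_to_ministry(posts):
    """Return the messages the bot would send: each post containing
    the hashtag, re-addressed to the ministry account."""
    return [
        f"{MINISTRY_HANDLE} {text}"
        for text in posts
        if HASHTAG.lower() in text.lower()
    ]

# Simulated timeline in place of a live API call.
timeline = [
    "Staff workloads keep rising #WOinActie",
    "Nice weather on campus today",
]
print(forward_to_ministry(timeline))
```

Even this toy version makes the tactical (rather than strategic) character of the bot visible: it can only re-address messages within the platform's existing affordances, not change them.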

Another team picked the open-source decentralized social network Mastodon. They were thinking about how to counter radical voices flooding the ‘timeline’ that is shared across different instances. They proposed a bot that recognizes a topic, compares it with a categorized database, and counters the post with an article on the same topic expressing a different view. News channels and articles could be categorized as ‘partisan’, ‘fake’, ‘inciting’, and ‘toxic’. Presenting their bot, the team was rather critical of their own proposal. The group agreed there were problems and discussed how the bot might give unwarranted attention to unsubstantiated claims. It has been suggested that impartial coverage “requires both an engagement with a range of informed positions and a commitment to drawing on credible evidence as opposed to unsubstantiated claims” (Puttnam Report 2018, 284). Moreover, Andersen and Obelitz Søe argue that fake news items are “communicative actions” rather than statements of fact (2020, 130). As such, they are difficult, if not impossible, to remedy with simple technological fixes. But regardless of whether the bot could work, it was interesting to consider the impact it could have on the larger context. Of course, we have existing examples of social media platforms fact-checking against Wikipedia. What happened here was that the hurt was simply displaced; the problem travels elsewhere. This in particular was a valuable takeaway, because it brings to light that the proposed intervention might be unable to address structural problems and transform the “larger context” (which, we now realized, cannot be neatly demarcated). Perhaps it can be understood as a failure to intervene at the critical location.
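The Mastodon team's proposal can likewise be sketched to show its moving parts: topic recognition, a categorized database, and a counter-response. Everything here is a hypothetical stand-in (the keyword lists, category labels, and example URLs are invented for illustration), and the team themselves doubted such a technological fix could work.

```python
# Hypothetical sketch of the counter-article bot. Topic recognition is
# reduced to keyword matching; the "database" is a hard-coded dict.

# topic -> list of (category, url); categories follow the team's labels
# ('partisan', 'fake', 'inciting', 'toxic') plus 'credible' sources.
ARTICLE_DB = {
    "climate": [
        ("partisan", "https://example.org/opinion"),
        ("credible", "https://example.org/report"),
    ],
}

TOPIC_KEYWORDS = {"climate": {"climate", "warming", "emissions"}}

def counter_article(post):
    """Return a reply linking a credibly sourced article on the post's
    topic, or None if no topic is recognized."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    for topic, keywords in TOPIC_KEYWORDS.items():
        if words & keywords:
            for category, url in ARTICLE_DB.get(topic, []):
                if category == "credible":
                    return f"Another view on {topic}: {url}"
    return None

print(counter_article("The warming trend is a hoax!"))
```

Even as a sketch, it surfaces the problems the group identified: deciding who labels a source ‘credible’ or ‘toxic’ happens outside the code, and every automated reply risks amplifying the very post it means to counter.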

Returning to the concept of digital infrapuncture, the workshop left me with more questions. Fundamentally, it got me thinking about the distribution of agency between users, bots and infrastructures, but also about issues of scale and interconnection. It got me asking who is doing the making, to what ends, and at what benefits and costs to our values. In the end, the workshop highlighted the need to collapse the divide between the producers and users of infrastructures.