New Study: Misinformation Rules!? Could “Group Rules” Reduce Misinformation in Online Personal Messaging?
This is the latest research article from the Everyday Misinformation Project that I’m leading. The project, which is funded by the Leverhulme Trust, began in April 2021 and runs until March 2024.
For this piece, we explored a previously unexamined practice our fieldwork uncovered: users creating “group rules” to prevent misinformation from entering their everyday interactions.
Personal messaging platforms are hugely popular but not well understood
WhatsApp has more than 2 billion users around the world. In the United Kingdom, more than 60% of the adult population use it regularly, which makes it more popular than any of the public social media platforms. Facebook Messenger has 18.2 million UK adult users.
These services are often implicated in the spread of false and misleading content, yet how and why remains poorly understood. This is partly explained by the difficulties of gathering data: messaging platforms mostly encrypt communication, and they lack public search archives. Arguably, this has led to neglect of messaging’s unique patterns of use, which makes it all the more important to develop a systematic, qualitative understanding of how people deal with misinformation on these platforms.
Online personal messaging is hybrid public-interpersonal communication
First, a bit of theory.
It’s useful to understand personal messaging as a hybrid public-interpersonal communication environment.
This differs from public social media, which can blur the boundaries between interpersonal and mass communication because interactions between people online are simultaneously public for mass audiences to observe.
Services such as WhatsApp and Messenger are never fully public in that sense. These platforms are used mainly among strong-tie networks of family, friends, parents, co-workers, and local community members. Experiences are shaped by the iterative, mobile, and socially networked context of smartphone use and its affordance of perpetual, if sometimes ephemeral, everyday connection.
Yet when (mis)information is shared, it has often originated in the more remote and public worlds of news, politics, and entertainment before cascading across one-to-one and group settings. Along the way, it can lose markers of provenance, such as cues about its source, purpose, and temporality.
Information from the public world intervenes, but, in contrast with public social media, there are no fully public audience reception settings. An unbounded audience does potentially exist, afforded by the facility to join many groups or quickly “forward” messages into many other groups, but its size is hazy and impossible to predict.
People often engage in rapid and subtle switching on these platforms—between private, interpersonal, and semi-public contexts and between one-to-ones, small groups, and larger groups.
This hybrid public-interpersonal communication environment enables (mis)information’s easy transition from the public world into relatively private interpersonal communication networks, where different norms of correction and challenge might apply, or where such norms may be absent. Rumours and misunderstandings in one-to-one or small-group interactions can also spread across other, larger messaging groups and acquire a more “public” character. In these larger group contexts, weak norms of correction may also apply, due to social relationship factors such as people’s anxiety about speaking out in larger groups, as well as the technological ease of sharing and the difficulty of determining information’s provenance.
What we did
During 2021 and 2022, as part of the first phase of the broader project, we carried out a 16-month programme of longitudinal qualitative fieldwork. This involved semi-structured interviews with 102 members of the UK public in which we explored in depth how people deal with misinformation that circulates in their personal messaging networks. Our recruitment screening, via Opinium’s national panel, ensured participants roughly reflected the UK population across gender, age, ethnicity, educational attainment, and basic digital literacy. Our data for this article are a subset of interviews with 33 participants who told us about rules or rulemaking.
Between their first and second interviews, participants could also voluntarily donate examples of misinformation to us via a customized smartphone application we asked them to install. This method enabled us to discuss some of these uploaded examples with participants in their second interviews.
Key findings
Some people use group rules to try to soften platform affordances, particularly weak information provenance, minimal verification opportunities, and easy sharing, which they believe cause misinformation to spread, harm a group’s members, provoke conflict, or derail a group’s purpose.
In smaller groups, some personal messaging users turn to rules because they believe social relationships in these contexts are driven by personalized trust that springs from strong emotional bonds of kinship or friendship. Trustworthiness is perceived as inhering in the members of the group due to their close relationships and shared experiences; it is less dependent on whether shared information can be verified or fact-checked outside the group. However, this social structure also makes misinformation more emotionally difficult to challenge, and, as a result, social ties are more likely to be inadvertently exposed to harmful content than in other online communication settings. Group rules are seen as one way to collaboratively reduce this vulnerability.
With larger groups comprising workmates or neighbours, social trust, which is more transactional, may be more important than personalized trust. However, in these contexts there are still individual threshold barriers to speaking out against misinformation. There is a perceived “cost” to crossing the threshold from inaction to action, for example a lack of confidence about marshalling evidence or not wanting to be perceived as undermining cohesion. In groups made up of larger proportions of people with weaker social relationships, rules reduce the need for ongoing confrontation; they serve to “institutionalize” the management of expression. However, in larger group settings, rules, though valuable in priming general vigilance, can lead to delegation effects: the rules come to be perceived as the responsibility of others to maintain and apply. As we show with one particularly vivid example, this can reduce attentiveness to the importance of the rules and therefore the rules’ protective power.
Why all of this matters
Understanding people’s real-world relationships, their emotional responses to platform design, and the structure of group communication could generate new knowledge for reducing the spread of misinformation and online harms. This is especially relevant to personal messaging platforms, which now mediate the production and reception of information for billions of people but are unsuited to the automated moderation and fact-checking typically used on public social media.
Among our participants, rulemaking involved different types of focus and different degrees of formality, and its power differed depending on group size and membership, but the practice could have some previously untapped advantages for combating misinformation.
We also found evidence that rulemaking can involve metacommunication—which means communication about the norms of communication—that inculcates habits of collective reflection and norms of vigilance. Reflection on rules communicates social signals stressing the importance—as an end in itself—of developing desirable norms of discourse. When a new group is first established, rulemaking can constitute a kind of “founding moment” with complex social implications, even if the task of creating a group on messaging platforms is trivial technologically. Rulemaking may also punctuate established groups at important moments, for example, when sharing information stimulates metacommunication about the norms of the group. It engages people in reflection, with varying levels of intensity, on how social relationships and platform affordances can help misinformation spread and, importantly, how these forces can be blunted in routine, everyday interactions by rules. Founding moments of rulemaking can also have a long-term impact.
Fact-checkers, news organizations, platform companies, and educators might further explore the benefits of encouraging group rulemaking in everyday personal messaging.
Boosting people’s capacities to develop group rules that soften these platform affordances could shape whether misinformation is challenged and corrected. Such efforts could scale up and have broad impacts.