Yes: Affirmative Consent as a Theoretical Framework for Reimagining Social Platforms


This is the project website for "Yes: Affirmative Consent as a Theoretical Framework for Understanding and Imagining Social Platforms," a paper accepted to CHI 2021, a top-tier conference in Human-Computer Interaction. A link to the paper is here.

This work builds on Una Lee's wonderful work on consentful technologies: consentfultech.io.

How can we design a social internet where people's consent boundaries are protected? Non-consensual interactions, such as online harassment and revenge porn, are pervasive in online spaces. In this work, we use a theoretical framework of affirmative consent ("Yes means yes!") to understand such problematic phenomena and to generate new design ideas for tackling them. This website highlights 1) the principles of affirmative consent and 2) the design insights those principles generate for building consentful platforms.

1. Principles of affirmative consent

Affirmative consent is the idea that someone must ask for, and earn, enthusiastic approval before interacting with someone else. For decades, feminist activists and scholars have used affirmative consent to theorize about and prevent sexual assault. Here we introduce five principles of affirmative consent, derived from prior work in feminist literature, legal scholarship, and HCI. If you are curious about the prior research that informed these principles, please check out our paper! Our principles also build on Una Lee's wonderful zine on digital consent.
1) Affirmative consent is voluntary.
Consent is an agreement that is 1) freely given and 2) enthusiastic.
2) Affirmative consent is informed.
People can only consent to an interaction after being given correct information about it—in an accessible way.
3) Affirmative consent is revertible.
Consent is an ongoing negotiation and can be revoked at any time.
4) Affirmative consent is specific.
People should be able to consent to a particular action (or a particular person), not to an open-ended series of actions or people.
5) Affirmative consent is unburdensome.
The costs associated with giving consent should not be so high that a person gives in and says "yes" when they would rather say "no."

2. Affirmative consent for generating new system ideas

In this section, we first describe the sociotechnical building blocks generated by the principles above, and then introduce the concrete interaction features grounded in those building blocks. If you click a building block, the corresponding features will be highlighted in the table.


1) Sociotechnical Building Blocks

1. Building blocks for voluntary.
Systems periodically ask the end-user (rather than assuming) whether they want an interaction to take place. For instance, a system asks a person if they want to enter a group chat room they were invited to, instead of automatically adding them.
Systems allow granular levels of visibility of personal information for different friends. While some social platforms provide this, many are limited to differentiating "friends" from "non-friends." For example, users could have agency over their visibility based on tie strength.
Systems permit limits on how far a post can be shared. For instance, a person can allow people to only directly share their post (hops=1), helping the author control the degree of visibility and interaction.
Systems allow users to accept a friend request but isolate it, sending the request's sender to a separate queue. Users can apply customized social rules to the accounts in the queue. This contrasts with current platforms' rigid relationship options (e.g., accept vs. decline), supporting deeper social rules.
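The hop-limit idea above can be sketched as a simple check performed on every share. This is a minimal illustration, not the paper's implementation; the `Post` and `share` names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    max_hops: int   # author-chosen limit; max_hops=1 allows only direct shares
    hops: int = 0   # how many shares away from the original this copy is

def share(post: Post) -> Post:
    """Return a re-shared copy one hop further from the original,
    refusing once the author's hop limit has been reached."""
    if post.hops >= post.max_hops:
        raise PermissionError("author's sharing limit reached")
    return Post(post.author, post.max_hops, post.hops + 1)

original = Post("lucy", max_hops=1)
direct = share(original)   # allowed: the first hop
# share(direct) would raise PermissionError, since hops=1 is the limit
```

Tracking the hop count on each copy keeps the check local: no share needs to consult the full cascade to enforce the author's boundary.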

2. Building blocks for informed.
Using algorithms, systems synthesize account-level behavioral data. Of course, every user needs to be aware this could be happening (otherwise it violates the informed principle). For example, a system could show whether an account a user is about to interact with has consistently used toxic language in the past.
Systems provide feedback as soon as the real audience diverges from the likely imagined audience. For example, a system might notify a user if their post is shared within a new network neighborhood using community detection algorithms.
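The account-level synthesis described above could be as simple as summarizing what fraction of an account's past comments a classifier flags. A minimal sketch, assuming a classifier is available; the keyword matcher below is a hypothetical stand-in for a real toxicity model:

```python
def toxicity_rate(comments, is_toxic):
    """Fraction of an account's past comments flagged by a classifier,
    shown to users before they interact with that account."""
    if not comments:
        return 0.0
    return sum(1 for c in comments if is_toxic(c)) / len(comments)

# Hypothetical stand-in for a real learned toxicity classifier.
def keyword_toxic(text):
    return any(word in text.lower() for word in ("idiot", "trash"))

history = ["great post!", "you idiot", "nice photo"]
rate = toxicity_rate(history, keyword_toxic)   # 1 of 3 comments flagged
```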
3. Building blocks for revertible.
Systems efficiently allow users to completely delete all types of information: tags, posts, comments, friendships, etc. For example, when someone unfriends another person, the platform might ask, "Would you like to remove past tags of this person as well as related posts?"
Systems completely delete past shares/copies if the original data (e.g., a post) is deleted. For example, on a centralized system like Twitter, retweets disappear if the original post is deleted by the poster; on a decentralized system like Mastodon, a protocol could enforce revertibility, with punishments for defections.
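In the centralized case, cascading deletion amounts to tracking which shares derive from each original. A toy sketch under that assumption; `PostStore` is an illustrative name, not from the paper:

```python
class PostStore:
    """Toy centralized store: deleting an original post also removes
    every share derived from it (cascading revert)."""
    def __init__(self):
        self.posts = {}    # post_id -> content
        self.shares = {}   # original post_id -> ids of its shares

    def create(self, post_id, content):
        self.posts[post_id] = content
        self.shares[post_id] = set()

    def share(self, post_id, share_id):
        # A share is stored as its own post that points back to the original.
        self.posts[share_id] = self.posts[post_id]
        self.shares[post_id].add(share_id)

    def delete(self, post_id):
        # Removing the original cascades to all of its shares.
        for share_id in self.shares.pop(post_id, set()):
            self.posts.pop(share_id, None)
        self.posts.pop(post_id, None)

store = PostStore()
store.create("p1", "original post")
store.share("p1", "s1")
store.share("p1", "s2")
store.delete("p1")   # p1, s1, and s2 are all gone
```

On a decentralized platform, no single store holds every copy, which is why the text suggests a protocol with penalties for servers that refuse to honor deletions.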
4. Building blocks for specific.
Using computation on interaction data, systems can scaffold classifying relationships into groups, or “social circles.” This might be accomplished with community detection algorithms, for example.
Using computation over textual and image data, systems can scaffold classifying content into high-level categories.
Once these circles and topics are created with computational scaffolding, systems can let users articulate more specific group-level policies for messaging, content feeds, etc. For example, a user might choose to only allow comments on a post from people who have commented (and not been blocked) before.
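The example policy above (only prior, non-blocked commenters may comment) reduces to a membership check once circles exist. A minimal sketch with hypothetical field names:

```python
def may_comment(commenter, post):
    """Group-level policy: only accounts that have commented on the
    author's posts before, and were never blocked, may comment."""
    return commenter in post["prior_commenters"] and commenter not in post["blocked"]

post = {"prior_commenters": {"amy", "ben"}, "blocked": {"ben"}}
allowed = may_comment("amy", post)   # prior commenter, never blocked
```

The point of the computational scaffolding is that users articulate the policy once ("prior commenters only") rather than approving each commenter by hand.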
5. Building blocks for unburdensome.
Systems can put customized time limits on interactions. While ephemeral content is an example of this, we argue timeboxing can be applied to a wide range of interactions beyond posting (e.g., disallowing sharing after one week).
Using computation, systems learn about consent boundaries. Users can annotate posts/comments to articulate their preferences (e.g., annotate posts on content feed as triggering).
Systems limit volumes of comments, mentions, etc. based on end-users’ preferences. For example, a user may decide to only allow up to five comments to a post that is on a sensitive subject.
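The per-post comment cap could be enforced at write time with a single length check. A sketch, assuming hypothetical `comments`/`limit` fields on a post record:

```python
def accept_comment(post, comment):
    """Reject new comments once the author's per-post limit is hit."""
    if len(post["comments"]) >= post["limit"]:
        return False
    post["comments"].append(comment)
    return True

sensitive = {"comments": [], "limit": 5}   # author allows at most five comments
results = [accept_comment(sensitive, f"comment {i}") for i in range(7)]
# the first five are accepted; the sixth and seventh are rejected
```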

2) Sociotechnical Interaction Features

Using the building blocks above, we next present proposals for new designs based on affirmative consent. We take the five principles of affirmative consent and use them as design axes to generate sociotechnical interaction features. In a sense, these are "primitives": core interaction ideas that could be repurposed across a variety of social platforms in flexible ways. Each entry presents an interaction primitive. We also sketch three of them in more detail in the next subsection.

For each interaction type below, features are organized by principle (Voluntary, Informed, Revertible, Specific, Unburdensome), with the underlying building block in parentheses.

DM + group chat
- Voluntary: Users are asked if they want to join when invited to a group chat. (Periodic checks)
- Informed: The platform visualizes topics discussed in a group chat before a person decides to enter. (Topic inference)
- Revertible: Users can revert a message's read status to unread.
- Specific: Users can set a different online status per group: "would love to chat" for friends; "online, but busy" for others. (Granular visibility; Group-level policies)

Profile
- Voluntary: Users can control profile visibility by audience: e.g., only show selfies to friends and friends' friends. (Granular visibility)
- Informed: The platform shows how many of the people who viewed the profile are strangers. (Audience intel)
- Revertible: Users can query and delete, en masse, tags and comments on their profile related to an account (e.g., an ex-partner). (Efficient expressivity)
- Specific: Some profile fields are only shown to accounts that have been friends for more than t time. (Group-level policies)
- Unburdensome: The platform periodically reminds the user how their profile looks to other people: "This is how your profile looks to Jake." (Periodic checks)

Friend + follow
- Voluntary: Users can accept a friend request but isolate it, sending it to a separate queue (e.g., if acceptance is coerced). (Request isolation)
- Revertible: Requests from people previously unfriended are sent to a queue, ensuring the revert holds. (Request isolation)

Post + comment
- Voluntary: Most platforms already support voluntary posting and commenting.
- Informed: Users receive reports of how many post viewers are strangers. (Audience intel)
- Revertible: Users can query and delete posts and comments at large scale. (Efficient expressivity)
- Specific: Users can apply audience rules to hashtags: e.g., a creator can restrict who can use one. (Group-level policies)
- Unburdensome: Users can rate-limit comments per post. (Individual rate limits)

Feed
- Voluntary: The feed asks what users want to see today (or this week). (Periodic checks)
- Informed: The content feed makes its algorithms visible and salient.
- Revertible: Users can bookmark feed settings to easily revert to prior settings.
- Specific: Users can set different types of content feeds per social circle, similar to Mastodon's local timelines. (Group-level policies)
- Unburdensome: Users can annotate posts in the feed, from which the system learns what posts the person wants to see (or not see). (Annotation for system-learning)

Tag
- Voluntary: By default, the platform always asks a user if they consent to being tagged when another user initiates tagging. (Periodic checks)
- Informed: The platform provides a high-level summary of the audience, outside friends, that sees a tagged post. (Audience intel)
- Revertible: If a user unfriends someone, the system asks if they also want to delete tags of that person. (Efficient expressivity)
- Specific: Users can set tagging rules by content type: e.g., disallow tags in photos of people. (Topic inference)
- Unburdensome: Users can timebox tag frequency: e.g., Jake can only tag once a month. (Timeboxing)

Share + retweet
- Voluntary: Users can limit how many hops shares are allowed to travel. (Sharing hops)
- Informed: Users are notified if a post is shared to a new network "neighborhood." (Audience intel)
- Revertible: When a user deactivates a post's sharing, or deletes the post, existing shares disappear; Twitter partially implements this. (Cascading & normative revert)
- Unburdensome: The platform alerts the user if their post starts being shared rapidly by strangers. (Audience intel)



3) Examples

Here we provide tangible mockups that illustrate three of the examples suggested above. The first and second illustrations were designed by Katherine Mustelier and the third by Jane Im.

1. Voluntary Content Feeds: Feeds that ask what you want to see today/this week/this month
The image shows a content feed on Socious, an imaginary social platform. At the top, the feed asks the user what they want to see this week, with a search bar for searching topics and some recommended topics right below it. Below the search bar and recommendations are topics the user liked last week, so the user can easily re-select them if they want. Among the topics Lucy liked last week (Flowers, News in Korea, Volleyball, Slow Motion, etc.), Lucy selected Flowers, Volleyball, and Slow Motion. At the very bottom, the user can select topics they want to filter out. Self Harm, Alt Right, Race, Anime, etc. are currently shown, and Lucy selected Self Harm, Alt Right, and Race.
The image shows the feed reflecting the preferences of the user. The first post, written by an account with the username equalighte, says "I came to a flower festival today! Everyone should check it out!" which is related to the topic "Flower Trending." The second one, posted by liberati, says "I saw Howl's Moving Castle tonight and it was so beautiful...!" which is related to the topic "Animation." Lastly, the user secretdancer48 posted "It's been a week since I started to learn waltz. It's more difficult than I thought!" which is related to the topic "Dance."
When Lucy opens Socious, they are greeted by the content feed asking what they want to see this week.
Once Lucy selects the topics they want (or do not want) to see, the changes are immediately reflected in the feed.
Current content feeds do not ask what a user wants to see; they typically infer what a user wants from platform data. As a result, many users encounter unwanted posts in their feeds, sometimes even after investing great effort to avoid such posts. A content feed constructed around the voluntary principle of affirmative consent would instead periodically ask what the user wants to see.

Imagine that Lucy logs onto a new platform called Socious, and the platform greets them by asking "What do you want to see this week?" Lucy sees keywords Socious recommends, like "Flower Tending", "Animation", and "Dance", based on topic modeling. Lucy decides they would like to see more flowers, dance, and animation. Lucy also notices they can specify topics they do not want to see, and can select among tags that include well-known triggering topics. Lucy selects "Self Harm", "Alt Right", and "Race" for exclusion from their feed. As Lucy scrolls down the feed, they see the new preferences immediately reflected. After a week, Socious asks Lucy again for topic preferences, though Lucy can change the frequency of these check-ins at any time.
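The feed logic in this scenario amounts to filtering out excluded topics and surfacing selected ones. A minimal sketch with hypothetical post records (Socious is imaginary, so all names here are illustrative):

```python
def build_feed(posts, wanted, excluded):
    """Apply this week's stated preferences: drop posts on excluded
    topics and surface wanted topics ahead of the rest."""
    visible = [p for p in posts if p["topic"] not in excluded]
    # False sorts before True, so wanted topics come first (stable sort).
    return sorted(visible, key=lambda p: p["topic"] not in wanted)

posts = [
    {"id": 1, "topic": "Race"},
    {"id": 2, "topic": "Cooking"},
    {"id": 3, "topic": "Flowers"},
]
feed = build_feed(posts, wanted={"Flowers"}, excluded={"Race"})
# the Race post is dropped; the Flowers post surfaces ahead of Cooking
```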

2. Revertible Profile Pages: Revert posts, comments, and tags efficiently
The image shows Jon's profile page on WebCon. At the top, Jon's profile image and username are shown. It also shows that Jon has posted 579 photos and has 110 followers while following 47 users. Jon's bio says "36 years young, disco lover." Below the bio, six of Jon's uploaded photos are shown. Three of them show Jon with Emily, while the other three are either a photo of Jon or a scenery.
The image shows a dashboard where a WebCon user can query for certain likes, tags, comments, and posts related to a certain account. At the top, Jon has typed in Emily's username in a search box and selected to query for his posts that Emily liked, is tagged in, or left comments on, as well as Emily's posts that he liked, is tagged in, or left comments on. At the bottom, corresponding posts, likes, tags, and comments show up. Jon selected to delete everything that can be deletable.
The image shows the reflected changes on Jon's profile page. All of the photos that included Emily (shown in the first subfigure) are now deleted. Jon also now has 109 followers and follows 46 people, as he stopped following Emily and removed Emily from his follower list.
Jon’s profile page on WebCon.
Jon queries for posts containing tagged photos of Emily or ones that Emily left comments on or liked. Jon decides to delete all of them.
Jon goes back to his profile page and sees the queried posts removed from his profile.
Our social networks constantly change offline: we sometimes distance ourselves from people who were once close friends, go through break-ups, or lose loved ones. However, the rigidity of current platforms makes it hard to reflect these changes. For instance, Facebook's Memories feature resurfaces content that you shared in the past, in some cases showing memories a person may not want to recall, such as photos of a recently deceased family member.

Imagine Jon logged into WebCon, a new social platform. Jon recently went through a break-up, and wants to remove all data related to his ex-partner, Emily. Jon goes to the dashboard and queries for his posts that Emily liked, is tagged in, or left comments on, as well as Emily’s posts that he liked, is tagged in, or left comments on. He decides to delete all of his posts that are related to Emily. He also chooses to remove his likes, comments, and tags in/on Emily’s posts. Jon goes back to his profile page and sees these posts removed from his profile. Jon also deletes all of Emily’s comments in his remaining posts. In contrast, Jon cannot delete Emily’s posts of Jon, as those posts are Emily’s.
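Jon's query is essentially a filter over his data for anything involving one account. A sketch under that reading, with hypothetical field names (WebCon is imaginary):

```python
def related_to(account, posts):
    """Everything in a user's data involving the given account:
    posts it liked, is tagged in, or commented on."""
    return [p for p in posts
            if account in p["likes"] or account in p["tags"] or account in p["comments"]]

posts = [
    {"id": 1, "likes": {"emily"}, "tags": set(),     "comments": set()},
    {"id": 2, "likes": set(),     "tags": {"emily"}, "comments": set()},
    {"id": 3, "likes": set(),     "tags": set(),     "comments": {"amir"}},
]
to_delete = related_to("emily", posts)
remaining = [p for p in posts if p not in to_delete]   # only post 3 survives
```

Note the asymmetry from the scenario: this query covers only Jon's own data; Emily's posts about Jon stay under Emily's control.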

3. Unburdensome Messaging: Leverage network data to control chats
The image shows Sannvi's chats on CoMedia, an imaginary new platform, in a mobile view. It shows her five most recent messages, four of which are messages from men about Sannvi's looks, such as "Hi beautiful." from a user named John Williams and "Damn, looking hot!" from David Jackson.
The image shows CoMedia's control panel, where Sannvi can choose who can message her. Sannvi has selected the two options that only allow accounts her friends have messaged first, or accounts that liked her friends' posts, to message her.
The image shows the setting taking effect immediately: the four messages about Sannvi's looks are no longer visible, having been sent to a separate queue. She sees two new messages from her friends' friends, Sharon and Preeti.
Sannvi sees many unwanted messages when she opens CoMedia.
Sannvi uses network rules to control who can message her.
Sannvi has the majority of her new messages sent to a separate queue. She also sees new messages from friends’ friends, Sharon and Preeti.
On most current platforms, when a person sets their account to public, strangers or spam accounts can DM them with unsolicited content. For instance, about half of American women ages 18 to 29 have received explicit images they never asked for. At internet scale, it becomes very difficult to exercise control over messages; some people abandon platforms altogether for this reason.

Imagine Sannvi has been receiving many unwanted messages on CoMedia. The messages often include compliments about her looks, which she finds uncomfortable. Sannvi decides she does not want to see such messages and goes to “Control Panel,” applying network-centric rules such as: Only allow people that my friends have messaged to message me. Now, if a stranger messages Sannvi on CoMedia, the system first looks up whether the sender has ever interacted with Sannvi or any of her friends on the platform. If not, CoMedia sends the stranger’s message to a separate queue which Sannvi can later review if she wants.
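The network-centric rule in this scenario reduces to a lookup against the recipient's friend-interaction data before delivery. A minimal sketch (CoMedia is imaginary, so the names and fields are illustrative):

```python
def route_message(sender, messaged_by_friends, liked_friends_posts):
    """Deliver a message only when the sender has interacted with the
    recipient's network; otherwise hold it in a review queue."""
    known = sender in messaged_by_friends or sender in liked_friends_posts
    return "inbox" if known else "queue"

messaged_by_friends = {"sharon", "preeti"}   # accounts her friends messaged first
liked_friends_posts = {"preeti"}             # accounts that liked her friends' posts

destination = route_message("sharon", messaged_by_friends, liked_friends_posts)
# a stranger like "john99" would instead be routed to the review queue
```

The key design choice is that unknown senders are queued rather than blocked outright, so Sannvi can still review them on her own terms.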

Research Team

Jane Im, University of Michigan School of Information & Computer Science and Engineering
Jill Dimond, Sassafras Tech Collective
Melody Berton, Sassafras Tech Collective
Una Lee, And Also Too & Consentful Tech Project
Katherine Mustelier, University of Michigan School of Information
Mark Ackerman, University of Michigan School of Information & Department of Electrical Engineering and Computer Science
Eric Gilbert, University of Michigan School of Information & Department of Electrical Engineering and Computer Science

This site is made by Jane Im, code here. Last updated 2/4/2021