UCSD Design Lab publication

Implications for Facilitating Online Feedback Exchange @ CSCW 2020

Type: UX Research Intern
Role: UX research, Co-author
Time: 12 weeks (Jun-Oct 2019)
Team: Me, 2 Researchers, 1 Professor
Tools: Google Suite
Methods: Remote semi-structured user interviews, Affinity diagramming, Literature review, Qualitative coding, Big data analysis, Report writing

Background

The University of California, San Diego (UCSD) ProtoLab is a research group within the UCSD Design Lab that “investigates the foundations of collective intelligence, creativity, feedback exchange, and decision making using human-centered design, data science, qualitative methods, and system prototyping.” During the summer of 2019, I spent four months working in ProtoLab under Professor Steven Dow on a team researching how to understand and support feedback seekers in effectively requesting feedback, and I co-authored a research paper on the findings.

Impact

Our paper was accepted to and published at the ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW 2020), held October 17-21, 2020, where my team virtually attended and presented our work. Feel free to view the ACM publication here, and click the button on the right to read the full paper!

Research process timeline

[Figure: research process timeline]

Context

Today, many creative workers post their work in online communities, such as Reddit and Dribbble, where members respond with feedback. To support this process, previous research has focused on prompting feedback providers and helping creators make sense of the feedback they receive. Our study, instead, aimed to understand and support feedback seekers in effectively requesting feedback in the first place. In this mixed-methods study, our team empirically examined how creators publicly request feedback in an online critique community and how their request strategies affect the feedback they receive. We focused on the r/design_critiques subreddit.

🔎 Challenge

Prior work has revealed limitations in the outcomes of online feedback exchange, but little is known about the solicitation process: how creators communicate with feedback providers about their work and their expectations for feedback. To dive into this, we explored our research questions in two studies:

(Qualitative) Study 1: What makes an effective feedback request?
(Quantitative) Study 2: How do different request strategies affect feedback responses?

💡 Solutions

First, we conducted 12 semi-structured interviews with members of r/design_critiques. Then, through affinity diagramming and thematic analysis, we identified the strategies creators use to request feedback in the community and uncovered their uncertainties about whether and how to include certain details. Next, by qualitatively coding posts and computationally analyzing requests and feedback, we found that posts using specific strategies yielded more effective feedback, yet those strategies were rarely used. With these insights, we offer design implications for future online feedback systems in a research paper that was accepted to and published at CSCW 2020.

About r/design_critiques 

r/design_critiques is an active community dedicated to feedback exchange across a range of design domains. Anyone can publicly share their work through text-based forum posts (including embedded URLs), and members can respond in the thread. We chose this community because it allowed us to observe a range of feedback interactions performed by users with different experience levels across a variety of design genres, and because it is not a membership-based or professional site.

[Screenshot: the r/design_critiques subreddit]

(Qualitative) Study 1: What makes an effective feedback request?

User interviews

We developed our interview protocol to ask the participants to:

  1. Reflect on strategies of asking for feedback by reviewing their previous requests.
  2. Recall previous experiences of providing feedback to others’ requests.
  3. Critique the most recent posts to the community.

We then recruited 12 active users of r/design_critiques through purposeful sampling of users who had posted feedback requests to the community. All 12 had both provided feedback to others and requested feedback from the community before. We conducted semi-structured interviews with them and transcribed the recordings afterward.

[Figure: sample interview]

Affinity diagrams

Using all the transcribed interviews, we collaboratively and iteratively constructed affinity diagrams. From this process, we identified the following common themes across the interviews:

Feedback seekers

  • Often provide design details, but not personal details
  • Want expert feedback, but avoid explicitly requesting it
  • Prompt for specific feedback, yet still want comprehensive critique

Feedback providers

  • Want more contextual information
  • Try to empathize with seekers
  • Prefer specific feedback prompts
  • Avoid lengthy and complicated requests

[Figures: affinity diagram construction and resulting themes]

Results

Through Study 1, we surfaced a set of factors that members deemed important when requesting feedback and responding to requests. However, this analysis also uncovered four key tensions that seekers struggle with when requesting feedback:

  1. How to present the design context
  2. Whether to include personal details
  3. Whether to request specific or general feedback
  4. Whether to explicitly request expert input

These tensions indicate that while community members have a sense of which strategies to use when requesting feedback, they remain uncertain about how and when to apply them. In Study 2, we further explored how prevalent these strategies are and how they impact the community’s critiquing behavior.

(Quantitative) Study 2: How do different request strategies affect feedback responses?

Qualitative coding

We developed a qualitative coding scheme to code for the presence or absence of request strategies. To do so, we selected 900 feedback-request posts from a corpus of 24,867 posts on r/design_critiques by randomly sampling 150 posts from each of the community’s six years of activity. We then manually removed posts that were unrelated to design feedback-seeking or had invalid external links to designs, leaving 879 posts and their 3,632 corresponding feedback comments as the dataset for our qualitative coding.
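
As a rough illustration of the year-stratified sampling step, here is a minimal sketch in Python using pandas. The file name posts.csv and the columns post_id and created_utc are hypothetical stand-ins for the corpus export; this is not the exact script we used.

```python
# Sketch: draw a year-stratified random sample of posts from the corpus.
# `posts.csv`, `post_id`, and `created_utc` are illustrative names only.
import pandas as pd

corpus = pd.read_csv("posts.csv")  # full corpus of subreddit posts
corpus["year"] = pd.to_datetime(corpus["created_utc"], unit="s").dt.year

# Sample 150 posts from each year of activity (seeded for reproducibility).
sample = (
    corpus.groupby("year", group_keys=False)
          .apply(lambda g: g.sample(n=min(150, len(g)), random_state=42))
)

sample.to_csv("sampled_posts.csv", index=False)
print(sample["year"].value_counts().sort_index())
```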

We iteratively developed a coding scheme describing the feedback request strategies in the community until we reached high inter-rater reliability (Cohen’s kappa ≥ 0.85). The final scheme comprised 7 feedback-request strategies, which we binary-coded across all 879 posts.
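
To give a concrete sense of the reliability check, below is a minimal sketch of computing Cohen’s kappa for one binary-coded strategy between two coders, using scikit-learn. The label arrays are made up for illustration; in practice each of the 7 strategies has its own presence/absence codes per coder.

```python
# Sketch: inter-rater reliability for one binary-coded request strategy.
# The label arrays below are hypothetical example codes for 10 posts.
from sklearn.metrics import cohen_kappa_score

coder_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]  # coder A's presence/absence codes
coder_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]  # coder B's codes for the same posts

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa = {kappa:.2f}")  # keep refining the codebook until >= 0.85
```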

[Figure: qualitative coding scheme]

Regression analysis + Computational methods

Next, my team built multiple regression models to investigate how the features of feedback requests influence the resulting feedback. We also calculated two text-based content measures (actionability and justification) using natural language processing and semantic analysis.
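
As a hedged sketch of what such a model could look like, the snippet below fits an ordinary least squares regression of a feedback-quality measure on binary strategy indicators with statsmodels. All column names (actionability, signals_novice, and so on) and the file coded_posts.csv are hypothetical placeholders for the coded variables, not the paper’s exact specification.

```python
# Sketch: regress a feedback-quality measure on binary request-strategy
# indicators. Column and file names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("coded_posts.csv")  # one row per request, with coded strategies

model = smf.ols(
    "actionability ~ signals_novice + self_critique + shows_variants"
    " + gives_context + specific_prompt",
    data=data,
).fit()

print(model.summary())  # coefficients estimate each strategy's effect
```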

[Figures: descriptive statistics and NLP-based content measures]

Results

Through Study 2, we found that a majority of requests (89.0%) presented design context but rarely included reasoning or narratives about the design process (13.4%). While many explicitly prompted for feedback in the request (87.7%), more than half of these included only general prompts without any specific scaffolds for feedback.

These results showed that:

  • Signaling novice status led to better-justified feedback
  • Critiquing one’s own design in the request resulted in more actionable feedback
  • Showing design variants yielded faster and better-justified feedback

[Table: regression results]

(Bolded = independent variables with statistically significant effects and at least a 10% change in the dependent variables)

Implications

However, these specific strategies were each used by only a small portion of the community (6.1%, 21.8%, and 11.2%, respectively). With this in mind, we offer the following design implications for feedback systems:

Support how seekers compose feedback requests.
E.g. Instead of just a free text box, provide hints and reviews that surface key principles as creative workers compose their requests.

Support how seekers reflect on their designs.
E.g. An assistant chatbot that provides step-wise instructions on reflective practices, examples of requests that integrate reflective information, and automatic evaluation mechanisms.

Support how seekers generate and present variations of their design.
E.g. An interface that allows seekers to easily upload, organize, and add explanations for multiple versions of the same design; an AI-powered feature that allows seekers to easily create multiple versions of a visual component (e.g. different color themes).

Initiate private exchanges between novices and experts.
E.g. A private or semi-private channel for novices to share their personal details alongside their designs, or a "matching" system with motivational profiles so that people only see designs that align with what they want to critique.

Personal Takeaways

Through this project, I experienced the holistic and iterative process of turning research questions into a fully developed research paper. It taught me a lot not only about the user research process, but also about writing as storytelling while still conveying data and impact. Along the way, I practiced user research, literature review, and academic writing.

On a personal level, interviewing people of different ages, paths, and careers from all over the world, who were all pursuing something within the field of design, taught me that there isn’t really a “set timeline” or “correct path” for anything. Being introduced to so many different perspectives ignited my love for understanding the 'user' in user experience.

All in all, I’m so grateful for this experience — from having an opportunity to work with such an inspirational and patient team, to learning so much about the field of user research, to experiencing what research looks like in an academic setting, to living in sunny San Diego for a summer, to attending my first (virtual) HCI conference, to co-authoring my first paper, to widening my perspectives, I couldn’t have asked for a better summer 2019!

👋 Thanks for stopping by!