Dodging cognitive biases in decision making

Removing subjectivity and bias in team brainstorming, user testing analysis, and UX processes.

There are habits that I’ve observed in a lot of design meetings and brainstorming sessions that seem difficult to break, and they all stem from hastily made assumptions and subconscious slip-ups.

Cognitive biases are sneaky, but they can be weeded out. Recognizing the patterns from which they emerge in product design settings has helped me to dodge these traps the more I see them coming.

The False Consensus Bias

The false consensus bias is the assumption that your own preferences and experiences as a product user are probably shared by everyone else.

This one seems to pop up more than any other bias, especially during team brainstorming sessions (like figuring out how a new feature should work, or just collecting ideas for new features in general).

To gather as many diverse opinions as possible, I don’t insist on a tremendous amount of preparation (if any) by participants, since it might turn some people off. The barrier to entry is purposely kept low, but that sometimes makes it easier for people to slip into biased thinking by accident.

Somehow, we forget that while the UX and UI of a product usually seem perfectly clear to the person who created it, users won’t necessarily experience it the same way.

I raise a red flag when I hear assumptions such as:

  • “I feel like this is intuitive.”

    • Stop. Consider who this is intuitive for.

    • Validate the assumption: with carefully conducted user testing, where the test subjects are not given any leading prompts, questions, or tasks. The product should speak for itself.

  • “[ Other apps ] do [ this ], it’s the standard.”

    • Stop. Consider our demographic: something that appears to be a standard in one paradigm may be totally unrecognizable in another. How do you know it’ll work for them? This applies more to features and game mechanics than it does to commonly used UI patterns (like pulling down to refresh or tapping outside of a modal to close it), but there’s still a grey area.

    • Validate the assumption: conduct user research and surveys, and A/B test this hypothesis against one that is radically different.

  • “It’s really compelling in [ favourite game ], this mechanic would engage our players, too.”

    • Stop. Consider the player’s motivations and whether or not a mechanic like this would fit their profile.

    • Validate the assumption: I like to use the concept of Value Chains to tackle this one.

Authority Bias

Authority bias is the tendency to attribute more accuracy to the opinion of an authority figure and be more influenced by it than you would otherwise.

This one often surprises me as someone who has spent time managing teams and mentoring new talent. I don’t believe that it’s the job of a manager or director to be an expert on everything. Learning is for life, and expertise has no ceiling.

Experience is a factor in forming useful opinions, but I like to think of my role as a leader in more holistic terms: enabling teams to do their best work. So when someone accepts my suggestions at face value, I take it as a sign that I’m not doing my job correctly and that the rapport needs improvement.

Re-framing conflict as a healthy and constructive part of the design process is essential to making awesome products.

Don’t listen to users, watch them instead

My first rule of user testing is to give participants as little information as possible. No questions, no prompts, and no instructions.

This is how users interact with products anyway. Their stated preferences and beliefs often contradict their actual behaviour, and designing around them can accidentally make for a worse user experience.

Online user testing services are convenient and receiving a narrated, annotated video with highlighted taps and clicks is about as close to in-person testing as you can get, maybe even better in some ways. Either way, we must be careful about the illusion of validity when it comes to user feedback, whether it’s verbal or written.

I’ve seen this first-hand, where users remark things such as:

  • “I wish there was a tutorial, I didn’t know what I was doing.”

    • Reality: We watched them onboard themselves quickly without any problems.

  • “I can’t find [ x ].”

    • Reality: They found it.

The critical action here is to observe their behaviour instead and give it more weight than their running commentary.

Not all feedback is the polar opposite of reality, but we still need to be careful about how to interpret and implement what test subjects are saying.

  • “There should be a button here so that I can [ complete this action ]”

    • Ignore: The proposed solution.

    • Ask: What is the problem they are trying to articulate, and what are some ways to solve it?

  • “I would not pay for this at all.”

    • Ignore: The pearl-clutching threat.

    • Ask: What does the actual purchase data say down the road?

This follows Jakob Nielsen’s First Rule of Usability: Don’t Listen to Users. Observe what people are doing instead of listening to what they’re saying, because evidence has shown that self-reporting is not a reliable source of information. People do not know what they want!
