The Power of Fake Door Tests

An Interview with Clement Kao

The mobile apps that win their markets are the ones that get new, compelling features to market quickly.

But how do you know whether you’ve picked the right feature to ship? In other words, how do you ship quickly without picking the wrong feature?

In this app expert interview, we’ll learn a powerful technique called the Fake Door Test from Clement Kao, Co-Founder of Product Manager HQ. Read on to find out what a fake door test is, how best to execute one, why it can beat an A/B test, and more.

What you’ll learn in this article

  1. Why is speed to market so important for mobile apps?
  2. What’s the downside of shipping the wrong feature?
  3. What’s the problem with A/B tests?
  4. What’s the problem with user interviews?
  5. How can I validate features before building them?
  6. How do I conduct a fake door test?
  7. What best practices should I use for running a fake door test?
  8. What are the key takeaways I should remember about fake door tests?

Why is speed to market so important for mobile apps?

The barrier to entry for mobile apps has been steadily falling over time, and that means that more and more competitors are entering the mobile app space.

Consumer expectations have risen over time, and consumers are less loyal to any given app. So if you want to stand out, you need to deliver valuable features to the market before your competitors do.

So, shipping quickly is crucial for mobile apps if you want to succeed. The earlier you ship, the better you can solve for the needs of your users, and the more live data you can use to drive future iterations and enhancements.

The problem is, shipping quickly will only help you win if you ship the right feature. If you ship the wrong feature quickly, you run into multiple negative downstream impacts.

What’s the downside of shipping the wrong feature?

When you ship the wrong feature, you create the following four problems:

  1. Lost opportunities
  2. Lost potential revenue
  3. Lost customer confidence
  4. Competitor traction

First, when you ship the wrong feature, you’re investing time that could have been spent on building the right feature. That means that you’ve lost precious time that you’ll never be able to get back.

Second, when you ship the wrong feature, you’ve lost out on revenue that you could have earned by shipping the right feature. Every time we miss out on revenue opportunities, we come closer to becoming unprofitable – or worse, becoming insolvent. If we can’t demonstrate a strong track record of shipping the right feature to our investors and our employees, we lose their faith, and that puts the company at risk.

Third, when you ship the wrong feature, you lose the confidence of your customers and users. We’ve all seen it before – a massively successful app is unexpectedly attacked by its own user base because it ships a feature that doesn’t satisfy any of its users’ needs. Losing their confidence means losing virality; when you lose the goodwill of your users, they will stop advocating for you to their friends and colleagues, and you lose out on the flywheel of exponential growth.

And fourth, when you ship the wrong feature, your competitors gain more traction in the market. Competitors aren’t static – they’re in the same race that you are, and every time you stumble, you give them the opportunity to take the lead.

So, it’s crucial that we ship the right feature to the market. Yet, we need to ship quickly, and these two principles appear to be in tension with one another.

If you ship quickly, you run the risk of selecting the wrong feature to build. And if you take the time to select the right feature, competitors might beat you there.

That means that we need to validate the features that we do decide to build. How do we do that?

Traditionally, most experienced app builders leverage A/B tests to validate their features. However, there are some problems with the A/B test approach.

What’s the problem with A/B tests?

The problem with an A/B test is that you have to actually build the feature. After all, an A/B test requires live data from people using the functionality.

If you decide to use an A/B test to validate whether your feature is valuable, you’ve already lost out on the potential to build something else instead.

In other words, you can’t tell whether your functionality delivers value until you’ve built it and put it in front of people to gather the metrics you need to validate it.

Okay, then let’s try going the other direction instead. Let’s not build anything at all. Instead, let’s run user interviews to see what people say they would like us to build. Does that fix the problem?

What’s the problem with user interviews?

Unfortunately, there’s a significant problem that comes up with user interviews. While user interviews are extraordinarily valuable for understanding how people make decisions and gaining user empathy, they’re not very good at identifying how likely a user is to use your feature.

Why is that? It’s due to the likeability problem.

When people are being interviewed, they want to be liked by their interviewer. So, they’re far more likely to say that your idea is great, even though they would never actually use it in real life.

That means that we can’t trust people’s verbal indicators about how likely they are to use our product. Behavioral metrics are far better for assessing people’s actual behavior, because they’re not consciously thinking about how they’re being observed by others.

So we can’t do user interviews and we can’t do A/B tests. What other way do we have to test whether a feature is going to work without having to build it first?

How can I validate features before building them?

That’s where the fake door test comes in! A fake door test is a low-cost experiment that you run in your live mobile app: you build the visual entry point into the feature that you’re thinking about building, but you purposefully don’t build out the rest of the feature.

In other words, the door is functional: users can click into the proposed feature. But, since the functionality itself isn’t live, it’s a “fake door” that doesn’t actually lead into new use cases.

Why is a fake door test so much better than either A/B testing or user interviews? Well, it gives you the best of both worlds!

Similar to A/B testing, you can create a control cell vs. a test cell, and you can measure “conversion” in terms of how many people click on the proposed feature. But, you get none of the downsides of A/B testing, because you didn’t have to build out a fully mature feature – all you had to do was build the visual entry point.

And, similar to user interviews, you can see users “self-identify” how likely they are to use the feature. But, unlike user interviews, you won’t run into the likeability problem, because users don’t feel like they’re being observed by another human being. So, without building out the functionality, you get high-signal “intent to convert” at low cost.
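To make the control-vs-test mechanics concrete, here is a minimal Python sketch of the two pieces you need: deterministically splitting users into a control cell and a test cell, and computing the “conversion” rate of users who click the fake door CTA. The function names and the 50/50 split are illustrative assumptions, not anything prescribed by the interview.

```python
import hashlib

def assign_bucket(user_id: str, test_name: str, test_fraction: float = 0.5) -> str:
    """Deterministically assign a user to 'control' or 'test'.

    Hashing the user ID (rather than picking randomly each session)
    keeps a user in the same cell every time they open the app.
    """
    digest = hashlib.sha256(f"{test_name}:{user_id}".encode()).hexdigest()
    # Map the first 8 hex digits of the hash to a number in [0, 1].
    score = int(digest[:8], 16) / 0xFFFFFFFF
    return "test" if score < test_fraction else "control"

def conversion_rate(views: int, clicks: int) -> float:
    """Fraction of users who clicked the fake door CTA after seeing it."""
    return clicks / views if views else 0.0
```

Because the assignment is a pure function of the user ID, you can recompute a user’s cell anywhere in your stack without storing it.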

How do I conduct a fake door test?

So, now we’re excited about the power of fake door tests! Let’s learn how to actually make one.

First, you have to understand how you want to position your proposed functionality. After all, you’re building the entry point to it, and that entry point is likely a CTA (call to action) that sits within your mobile app. If you can’t convince people to take the call to action, then they’re not going to click on it, and you’re not going to get the results that you want!

Second, you do still need to do some lightweight user interface design and copywriting. You want it to be visually compelling so that users are interested in clicking on the CTA.

Third, you need some lightweight user interaction for what comes after someone clicks on the CTA. You don’t want people to think your app is broken, after all! You can use a pop-up modal, a short notification message, or whatever makes the most sense for the kind of feature that you’re testing.

And finally, you need to implement some tracking. After all, if you can’t measure the rate at which people view the CTA and click through the CTA, you have no way of measuring conversion or user interest in your proposed new feature!
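The tracking step above can be sketched in a few lines. This is a hypothetical in-memory tracker in Python; the class name and the `cta_view` / `cta_click` event names are assumptions for illustration, and in a real app you would send these events to your analytics backend instead of counting them locally.

```python
from collections import Counter

class FakeDoorTracker:
    """Minimal in-memory event tracker for a fake door test (illustrative only)."""

    def __init__(self) -> None:
        self.events: Counter = Counter()

    def track(self, event: str) -> None:
        # e.g. event = "cta_view" when the CTA is rendered,
        #      event = "cta_click" when the user taps it.
        self.events[event] += 1

    def click_through_rate(self) -> float:
        """Share of CTA views that turned into clicks - the core fake door metric."""
        views = self.events["cta_view"]
        return self.events["cta_click"] / views if views else 0.0
```

The key point is that both the view and the click are logged; without the view count you can’t turn raw clicks into a conversion rate.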

So, we know how to conduct a fake door test now. But how do we take it to the next level? What are some best practices that we should keep in mind?

What best practices should I use for running a fake door test?

The key is user empathy. Remember that these are real people clicking on your fake door CTA, and they’re going to feel confused or disappointed if you don’t reset their expectations immediately.

When your user clicks on the fake door CTA, be honest and tell them what’s happening.

Let them know that you’re looking to understand whether people are interested in the functionality or not. Share with them that by clicking on the CTA, they’ve voted in favor of having you build out the feature.

And, be sure to give your user some value. A great way to give them value is to give them the opportunity to sign up on a waitlist, so that they can be notified once your fake door feature turns into a real live feature!

Another thing to keep in mind is that you shouldn’t use fake door tests for every single possible feature. The point of a fake door test is to assess user interest without having to build the rest of the feature – but if the rest of the feature is pretty easy to build, there’s no real gain that you get in using a fake door test.

For example, say that you’re thinking about giving your users the ability to change their usernames. It’s not a good idea to use a fake door test here, because it’s pretty straightforward to build out the full capability and run a legitimate A/B test on it. 

What are the key takeaways I should remember about fake door tests?

To win in the highly competitive mobile app marketplace, we have to ensure that we ship the right feature as quickly as we can.

If we take too long to figure out what the right feature is, then our competition will take the lead and our users will abandon us. But if we move too quickly and pick the wrong feature, then that’s even worse!

The great news is that we can use fake door tests to quickly validate a feature idea at low cost, in a way that neither A/B tests nor user interviews can do for us. We don’t have to build out the functionality entirely, yet we can get high-fidelity signals into user demand.

Try using a fake door test next time – you’ll likely get great insights at high speed and low cost!


Clement Kao is a Co-Founder at Product Manager HQ, a community dedicated to providing career advice for aspiring and experienced product managers alike. He’s written more than 80 best practice articles, and he has been featured by more than 60 different organizations around the world.

Outside of Product Manager HQ, Clement is also a Product Manager at Blend, a San Francisco-based startup that partners with lenders and technology providers to re-imagine consumer finance. In the last four years, Clement has launched seven multi-million dollar products, and he’s actively tackling new problem areas to unlock the next generation of multi-million dollar products.

Jennifer Sansone

