
A Product Manager’s guide to Painted-Door Tests

How your Product Team can learn faster and reduce your cost of experimentation

Curtis Stanier
5 min read · Jul 23, 2019


As Product Managers, one of the many difficult questions we need to answer is, “what should I build?” Most product development approaches involve A/B or multivariate tests as a way of validating that a new variation improves the metric you intended to improve, without adverse side effects. Sometimes, however, even A/B tests have a daunting scope, requiring a large amount of engineering or other investment before you can validate anything. Enter the painted-door test.

What is a Painted-Door Test?

A painted-door test, sometimes called a fake-door or smoke test, is a method of early testing to gauge whether your users would engage with a particular feature; it primarily focuses on tracking the completion of a main call-to-action. It is a way of exposing a partially functional or non-functional feature to your users to test their level of interest. This allows you to validate whether this is something customers truly want in your product, or whether you need to re-evaluate the idea.

The name comes from a technique used in building architecture and design. Fake windows are added to a building to give character to a wall that would otherwise be plain. Fake doors are added to cultivate a sense of balance and symmetry, or to make an entrance appear grander. Unfortunately, neither provides any actual function to the user, just like a painted-door test.

The windows in the centre of this picture are actually painted on, adding some personality to the exterior of an elevator shaft or some other internal structure that does not warrant windows. Image source: Coolstuffinparis.com

Painted-door tests are best suited to validating simple measures such as take rates or conversion metrics. The test is set up in such a way that the feature looks fully functional. Customers who do engage (or buy) are shown an apology explaining that the feature is unavailable and thanking them for their input.

Using Painted-Doors

So what should a painted-door test look like? As close to the real thing as possible. There should be a clearly defined call-to-action, which is what you will use to measure the take (or conversion) rate. The metric you measure should be as close to the end of the funnel you are testing as possible, if not the end itself: the further down the funnel, the more accurate your data will be. The test could be a settings toggle in the account area, a promotional banner with a buy button, or even an out-of-stock product. In these examples, the measures would be the number of users who switch the toggle, click buy, or attempt to add the item to their cart. All are valid methods, but the right approach depends entirely on your product and your initiative.
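
To make the funnel point concrete, here is a minimal sketch of how you might report each stage of a hypothetical out-of-stock test as a share of the top of the funnel. The event names and counts are illustrative assumptions, not data from a real test.

```typescript
// Hypothetical event counts for an out-of-stock painted-door test.
// The further down the funnel a stage sits, the closer its rate is to
// genuine intent: a checkout attempt is a stronger signal than a page view.
const funnel: [stage: string, count: number][] = [
  ["product_page_view", 10_000], // top of funnel: cheap but noisy signal
  ["add_to_cart_click", 1_200],  // mid funnel
  ["checkout_attempt", 310],     // end of funnel: the take rate to trust
];

const [, topCount] = funnel[0];
for (const [stage, count] of funnel) {
  const share = ((count / topCount) * 100).toFixed(1);
  console.log(`${stage}: ${count} users (${share}% of top of funnel)`);
}
```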

Let’s work through an example. Suppose you’re working for a publishing platform that allows authors to post articles on a variety of topics. User research suggests authors have expressed interest in some kind of patronage or tipping system, but no such feature exists. Readers have responded warmly to the concept too; however, humans are notoriously bad at predicting their future behaviour. After discussing the idea with your team, they’ve suggested the requirements could take a couple of months even for a basic MVP (a lot of legacy code slowing things down). You’re reluctant to commit a large amount of time without more validation that this is something that will add value for your users. This is a perfect example of when a painted-door test can expedite learning and minimise cost.

The team brings together a basic design. A new call-to-action will be shown at the end of the article, prompting the reader to tip the author for their work. However, the button is a dummy, triggering nothing but a popup explaining that the feature is not yet available, along with a tracking event for you to monitor. You don’t expose this button to everyone, just to enough traffic that you can reach statistical significance in your sample. The results should give you an indication of the take rate, allowing you to better estimate the potential value.

A hypothetical painted-door test to quantify interest in a “tip the author” feature.
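
As an illustration of how light the engineering for such a test can be, here is a minimal sketch of the dummy button. The track helper, event names, element ID and the 10% exposure rate are all assumptions made for this example; swap in your own analytics client and rollout mechanism.

```typescript
// Stand-in for your real analytics client (Segment, Amplitude, etc.).
function track(event: string, props: Record<string, unknown> = {}): void {
  console.log("track", event, props);
}

const EXPOSURE_RATE = 0.1; // show the dummy CTA to roughly 10% of readers

// Deterministic bucketing: the same user always lands in the same bucket,
// so repeat visits don't flip them in and out of the test.
function inTestBucket(userId: string, rate: number): boolean {
  let hash = 0;
  for (const char of userId) {
    hash = (hash * 31 + char.charCodeAt(0)) >>> 0;
  }
  return hash % 100 < rate * 100;
}

function renderTipButton(userId: string, articleId: string): void {
  if (!inTestBucket(userId, EXPOSURE_RATE)) return;

  const button = document.createElement("button");
  button.textContent = "Tip the author";

  // Impressions form the denominator of the take rate.
  track("tip_button_shown", { articleId });

  button.addEventListener("click", () => {
    // Clicks form the numerator of the take rate.
    track("tip_button_clicked", { articleId });
    // The door is painted: apologise and thank the reader for their interest.
    window.alert(
      "Tipping isn't available just yet. Thanks for letting us know you're interested!"
    );
  });

  document.querySelector("#article-footer")?.appendChild(button);
}
```

Bucketing on a stable user ID, rather than randomising on every page view, keeps the experience consistent for each reader and keeps your impression counts honest.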

Drawbacks

The drawbacks for users are obvious. Those who have expressed an interest are faced with a switcheroo: they are offered something, commit, and are then ultimately told by your organisation that it’s not available. It may feel like the rug has been pulled out from underneath them. Ensure you communicate well after the test. There are different levels of transparency you may want to present, based on the test and on your brand. Use copy to empathise with your customers and apologise for the inconvenience. Let them know they are helping to make the product better and that you’re working on making the feature available. You could even follow up with an email to reinforce the message and offer a token compensation or discount.

Further, painted-door tests are particularly useful for gathering information on the take rate of an initiative, but they will tell you little about why users behaved that way. To answer those questions, be sure to augment them with good user research and quantitative analysis of your product, users and behaviours. Painted-door tests are but one tool at your disposal; great product development requires a mix.

Finally, the conversion and take-rate numbers will tell you little about the retention impact of a feature or longer-term engagement with it. The test won’t answer all your questions, but it should give you an indication of whether to continue investing or to revisit a previous stage.

Four extra tips for running Painted-Door Tests

  • Couple them with heatmap tools (such as Hotjar) to understand whether there is passive interaction with the test. Passive interaction is when a customer hovers around the test but does not engage; perhaps there is further potential.
  • Try basic segmentation to understand which users converted: new or high-value customers? Low or high engagement? Desktop or mobile? It will provide extra insight when deciding how to proceed (see the sketch after this list).
  • Follow up with the customers who demonstrated interest when you eventually release the feature. They often appreciate that someone actually followed up.
  • Don’t overdo them. They can be a frustrating experience for your audience.
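
On the segmentation tip above, here is a minimal sketch of slicing the take rate by a single dimension. The TestEvent shape and segment values are assumptions for the example; in practice this is often a simple GROUP BY in your analytics warehouse.

```typescript
// Hypothetical per-user results exported from your analytics tool.
interface TestEvent {
  device: "desktop" | "mobile";
  tenure: "new" | "existing";
  converted: boolean;
}

// Take rate per segment for any single dimension of the event.
function takeRateBy(events: TestEvent[], key: "device" | "tenure") {
  const buckets = new Map<string, { shown: number; converted: number }>();
  for (const e of events) {
    const bucket = buckets.get(e[key]) ?? { shown: 0, converted: 0 };
    bucket.shown += 1;
    if (e.converted) bucket.converted += 1;
    buckets.set(e[key], bucket);
  }
  return Array.from(buckets, ([segment, { shown, converted }]) => ({
    segment,
    shown,
    takeRate: converted / shown,
  }));
}

// e.g. takeRateBy(events, "device")
// -> [{ segment: "desktop", shown: 4100, takeRate: 0.05 }, ...]
```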

Conclusion

Painted-door tests are a useful method in your experimentation toolbox; however, they don’t work in isolation. They are best placed early in the life of an initiative, acting as a gate to the next stage. They answer, in a binary way, “will my customers use this?” They are a way to reduce the risk of a hypothesis with limited data, particularly one that would require extensive engineering or business investment.

If you have any questions about painted-door tests, feel free to ask in the comments below or get in touch via Twitter!


Written by Curtis Stanier

Director of Product at @DeliveryHeroCom. Formerly @HelloFresh, @BBC, @Atos. Passion for product, business & tech. I like helping people solve problems. Berlin