Before we wrote a line of backend code for netdrift, we put a pricing page on the internet.
netdrift is a payment risk monitor for EU and marketplace SaaS founders. The product does not exist yet. The page exists. There are three pricing tiers with real prices. There are feature descriptions detailed enough to be credible. When someone clicks the “Join the waitlist” button, they hit a form instead of a signup flow, and a notice in the pricing section tells them exactly what is behind it: “These tiers aren’t live yet. When you click, you’re joining a waitlist. We’re testing whether these prices make sense before we build the product.”
That is a fake door test. The name is a little ominous, but the mechanic is simple: show the product as if it already exists, track who tries to use it, and let that data decide whether you build.
Why not just a landing page
Most validation landing pages ask for an email. That is the pattern on almost every pre-launch startup page: hero section, three bullet points, email field, submit button. The problem is that the email field asks for almost nothing. People sign up out of mild curiosity, out of politeness, or because the product sounds vaguely interesting. The email list that results is mostly noise with a few serious buyers somewhere in it.
A fake door test asks for more. The person looking at the page has to see actual pricing, think about whether it fits their situation, and click a button that looks like a purchase decision. That calculation takes a few seconds of real attention. It filters for people who are not mildly curious but actively considering the product.
The difference matters when you get the results. Email capture tells you people found the concept interesting. Click-through on a priced fake door tells you people expected to buy something and acted on that expectation. From there, you know whether to build.
What goes on the page
The netdrift page opens with the problem stated in a specific situation, not a product description: Stripe adjusts your reserve ratio and monitors your chargeback rate against internal thresholds, but does not tell you when either approaches a limit. The first sign something is wrong is usually a short payout, by which point the account has been flagged for days or weeks. Rolling reserves and longer chargeback windows make this worse for EU and marketplace founders than for US subscription businesses.
That specificity does two things. Founders who have been through a fund hold recognize the situation and keep reading. Founders who haven’t still understand the category of problem, and the specificity signals that the product is being built by someone who has actually lived in this space. Vague problem framing loses both groups.
Features follow, described through what they show rather than what they are. “Reserve ratio tracking” is a feature name. “Stripe adjusts your reserve ratio automatically when something shifts in your account, no notification, no email. By the time your payout is short, it’s been climbing for weeks. Netdrift tracks it daily and shows the direction alongside the number, because a 3-point creep over a fortnight has a different cause than an overnight jump” is a feature description that tells someone exactly what they would be looking at. Only people for whom that distinction matters will care about it. That is the right audience.
Then pricing: three tiers, specific prices (€49/month Starter, up through Growth and Enterprise), specific limits and inclusions. Not “starts at” with a CTA to contact sales. Actual numbers so the person looking at the page can make the mental calculation that the test is designed to produce.
The transparency notice lives in the pricing section, before the CTA. Not in a FAQ. Before the button. Some fraction of people who read the disclosure click through anyway. Those are the ones worth talking to.
Traffic
A fake door test only works on cold traffic. This is the thing most tests fail on, and the failure is easy to miss because the results still look good.
Your own audience, newsletter subscribers, Twitter followers, people who remember you from a previous project: all of them give you the benefit of the doubt in a way the market does not. They will sign up at rates your real acquisition channels cannot match. That feels like validation. It is not the same thing.
Cold traffic means people who found you without a prior relationship. Paid ads, posts in communities where you have no history, cold outreach to founders who fit the profile. For netdrift, the right communities are places where EU and marketplace SaaS founders discuss operational pain: threads about Stripe, payment processing, reserves and risk. Hacker News surfaces these conversations regularly. So do Indie Hackers and some fintech-adjacent Slack communities.
The minimum volume question: below 100 cold visitors, a single unusual session moves your conversion rate by a full percentage point or more, and a handful of them can swing it by several. The number is not stable yet. 200 genuine cold visitors is enough to draw a first conclusion. Not a final conclusion, but enough to know whether to keep going.
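How unstable a small-sample conversion rate is can be quantified with a confidence interval. A minimal sketch in Python, standard library only, using the Wilson score interval; the visitor and conversion counts are made up for illustration:

```python
import math

def wilson_interval(conversions: int, visitors: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% Wilson score interval for a conversion rate."""
    if visitors == 0:
        return (0.0, 1.0)
    p = conversions / visitors
    z2 = z * z
    denom = 1 + z2 / visitors
    center = (p + z2 / (2 * visitors)) / denom
    halfwidth = z * math.sqrt(p * (1 - p) / visitors + z2 / (4 * visitors**2)) / denom
    return (center - halfwidth, center + halfwidth)

# Same observed 10% rate at two volumes (illustrative numbers):
lo_small, hi_small = wilson_interval(8, 80)    # roughly 5% to 19%
lo_large, hi_large = wilson_interval(20, 200)  # roughly 7% to 15%
```

At 80 visitors, the plausible range around an observed 10% spans roughly 5% to 19%, straddling any sensible decision threshold. At 200 it narrows enough to say something.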
What to watch in the results
Three numbers. Click-through rate on any pricing CTA, which tier got the clicks, and how many people followed through to submit the waitlist form.
Click-through below 5% on cold traffic means something on the page is not working. Could be the problem framing, could be the features, could be that the price killed it before anyone got there. Hard to tell which without changing one element and rerunning. Below that threshold, the test is not producing usable data.
Tier distribution tells you about price sensitivity. If almost all clicks go to the cheapest option and the higher tiers barely register, there is a ceiling in the audience somewhere around that entry price. If clicks distribute across tiers, the pricing ladder is working, and there may be room to go higher. For netdrift, seeing how EU founders spread across the Starter and Growth tiers is part of what the test is trying to find out.
Confirmation rate on the form is the second filter. Clicking through is cheap. Typing an email address and submitting takes another moment of intention. Someone who does both has signaled interest at two separate points. High form completion after click-through means the clicks were considered. Low form completion means many clicks were impulsive or exploratory, which changes how you interpret the top-line number.
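All three numbers fall out of simple event counting. A minimal sketch, assuming each tracked event is a dict with `visitor_id`, `event`, and `tier` fields; the field names, event vocabulary, and the log itself are hypothetical, not taken from any particular analytics tool:

```python
from collections import Counter

# Illustrative event log: one row per tracked action.
events = [
    {"visitor_id": "v1", "event": "page_view", "tier": None},
    {"visitor_id": "v1", "event": "cta_click", "tier": "starter"},
    {"visitor_id": "v1", "event": "form_submit", "tier": "starter"},
    {"visitor_id": "v2", "event": "page_view", "tier": None},
    {"visitor_id": "v3", "event": "page_view", "tier": None},
    {"visitor_id": "v3", "event": "cta_click", "tier": "growth"},
]

def funnel_metrics(events):
    visitors = {e["visitor_id"] for e in events if e["event"] == "page_view"}
    clickers = {e["visitor_id"] for e in events if e["event"] == "cta_click"}
    submitters = {e["visitor_id"] for e in events if e["event"] == "form_submit"}
    tiers = Counter(e["tier"] for e in events if e["event"] == "cta_click")
    return {
        # Click-through on any pricing CTA, as a share of unique visitors.
        "click_through": len(clickers) / max(len(visitors), 1),
        # Which tier drew the clicks.
        "tier_distribution": dict(tiers),
        # Follow-through from click to submitted waitlist form.
        "form_completion": len(submitters) / max(len(clickers), 1),
    }

metrics = funnel_metrics(events)
```

Counting unique visitors rather than raw events keeps a single person who clicks twice from inflating the top-line rate, which matters at the volumes a fake door test runs on.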
What the results do not produce is a forecast. A 25% click-through is not evidence that 25% of your market will buy. It is evidence that 25% of this audience, on this day, reading this page, chose to act. The result is directional. It says whether to keep going, not how much revenue to project.
The thing most people get wrong about the data
There is a version of fake door test analysis that treats click-through as confirmation. “People clicked, so there’s demand.” The logic sounds reasonable and mostly fails in practice.
What click-through rate tells you is whether your positioning connects with this traffic source at this price. What it does not tell you: whether those people can afford the product long-term, whether the problem occurs frequently enough to justify a monthly subscription, whether your acquisition cost at that price will work. The click answers the first question in the chain. The rest requires building.
The one thing a fake door test can genuinely confirm, when the numbers are strong, is that the price is not the reason people are not buying. If you get 20%+ click-through and 60%+ form completion at €49/month from cold founder traffic, that price did not kill the conversation. That is worth knowing before you build the product. It is not the same as knowing the product will work.
A clarification on the terminology
“Painted door test” means the same thing. Optimizely uses painted door; you see it in product management writing from the B2B SaaS world. The mechanic is identical: a door that looks functional but leads somewhere other than what it promises.
Some practitioners use “fake door test” for the specific version with pricing and a waitlist, and “painted door test” more broadly for any test where a feature is presented before it is built. The distinction is minor in practice. Whatever you call it, the constraint that makes it work is the same: the traffic has to be cold, and the action you are asking for has to carry some weight.
The netdrift page is live. The data from the first few weeks will tell us whether EU and marketplace SaaS founders, reached through cold channels, see a €49/month payment risk monitor as worth clicking on. That question is worth getting an answer to before writing the backend.
For more on where a fake door test fits in the full validation sequence, the 5-Day Startup Validation Sprint maps the steps before and after. If you are at an earlier stage and not ready for the pricing test yet, the smoke test post covers the lighter-weight version.