Most founders who build something nobody wants didn't skip validation. They validated — talked to people, got a landing page up, collected interest. Then they built anyway, and found out six months later that none of it meant what they thought it meant.
The problem isn't skipping validation. It's using weak experiments and treating the results like proof. A friend saying "I'd definitely pay for that" feels like validation. It isn't. Search volume feels like demand proof. It isn't. A waitlist of 200 emails, without friction, is mostly noise.
Some experiments confirm a problem exists. Others confirm people will act on it. Only one confirms they'll actually pay. Founders who don't know the difference collect encouraging data right up until launch, then discover they had the wrong kind.
What follows is five experiments, ranked by how much you can trust the results, with specific thresholds for each.
Keyword Research (weakest)
Search volume tells you whether enough people actively look for answers to the problem you're trying to solve. That's useful. Use any keyword tool, but search the problem your product solves, not the product category. If you're building expense tracking for freelancers, search "freelance expense tracking" and "how to track expenses as a freelancer" — not "expense tracking software." Look at monthly volume and SERP intent.
You want 500+ monthly searches on a niche-specific term with informational or commercial intent. Better: three or four related queries showing people approaching the same problem from different angles — that's a real problem cluster, not a one-off search.
Here's where founders go wrong: they find the search volume and treat it as permission to build. "50,000 searches a month" is not a business case. Those searchers might already use a free tool. They might not have the budget you need. Keyword data confirms the problem exists at scale. It says nothing about whether anyone will pay you to solve it.
Customer Interviews
The rule most people violate first: the people you interview cannot be friends, colleagues, or anyone with a social reason to be encouraging. Cold outreach only — LinkedIn, Slack communities, wherever your target customer actually participates. Ten conversations minimum. The goal isn't to pitch your idea. It's to find out whether the problem is real, how urgently people feel it, and what they're currently doing about it.
Ask about behavior, not hypotheticals: What have you tried? What does this problem cost you — in time, money, or stress? What does your workaround look like right now? Those answers tell you whether you have something. "Would you use a product that..." tells you almost nothing.
The threshold: 7 of 10 describe the same core problem unprompted, with specificity. At least 3 mention a workaround they currently pay money for or spend meaningful time on. If you have to steer people toward the problem, the urgency isn't there.
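To make the threshold concrete, here is a minimal sketch of that pass/fail check as code. The function name, field names, and sample structure are illustrative assumptions, not anything prescribed above; the logic is just the two counts against the two bars.

```python
# Illustrative check for the interview thresholds: at least 10 cold interviews,
# 7+ describing the core problem unprompted, 3+ with a paid or time-costly workaround.
# Field names are invented for this sketch.

def interviews_pass(results, min_interviews=10, min_unprompted=7, min_workarounds=3):
    """results: one dict per cold interview."""
    unprompted = sum(1 for r in results if r["described_problem_unprompted"])
    workarounds = sum(1 for r in results if r["has_costly_workaround"])
    return (
        len(results) >= min_interviews
        and unprompted >= min_unprompted
        and workarounds >= min_workarounds
    )
```

The point of writing it down this way is that the bar is decided before the interviews happen, so encouraging-but-vague conversations can't be graded up after the fact.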
One thing to watch for: polite agreement. "Yeah, that does sound annoying" is not the same as "I spent three hours on this last week and I'm still not happy with the solution I've got." The former is acknowledgment. The latter is pain. You need the latter.
The Smoke Test
A landing page does something interviews can't: it asks for action from people who have no reason to be kind to you. They either act or they don't.
Build a minimal page that describes what the product does for the user — the outcome, not the feature list. Signal a price or waitlist position. Ask for an email. Drive traffic from people who don't know you: paid ads, cold outreach, communities. Not your newsletter. Not a link you texted to a friend. Traffic from your own network produces numbers that feel real and mostly aren't.
The bar: 15–20% email capture from cold, unaffiliated visitors, with at least 20 valid visits. Treat 20 as a floor, not a target; the rate only becomes trustworthy as traffic grows. Below the bar, one of three things is true: the positioning isn't landing, you're reaching the wrong people, or the problem isn't urgent enough to prompt action from someone with no relationship to you.
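The same bar, as a sketch. The function name and defaults are illustrative, not from the article; the minimum-visit guard reflects the idea that a rate computed on too little traffic isn't a reading at all.

```python
# Illustrative smoke-test check: capture rate from cold traffic only,
# with a minimum-sample guard. Thresholds mirror the bar stated above.

def smoke_test_pass(cold_visits, emails, min_visits=20, min_rate=0.15):
    """True only if there is enough cold traffic AND the capture rate clears the bar."""
    if cold_visits < min_visits:
        return False  # too little data to conclude anything either way
    return emails / cold_visits >= min_rate
```

Note that warm traffic (your newsletter, friends) should never be in `cold_visits`; inflating the denominator with people who know you is how the numbers end up feeling real without being real.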
After clearing this step, you have behavioral evidence from people with no reason to be encouraging. That's meaningfully stronger than a keyword count or a warm conversation.
Waitlist with Pricing Revealed
Consider the kind of person who signs up for a waitlist out of vague curiosity — tabs they never go back to, newsletters they forget they subscribed to. That's who populates most early waitlists. Show the price and ask people to confirm they're still in, and that population collapses down to the people who might actually buy.
Same landing page, one addition: a specific price. Either upfront, or as a two-step flow where you reveal pricing after the initial email capture and ask people to confirm they're still in. "$49/month," not "starts at $X."
Passing: 30%+ of email captures confirm after seeing the price. If twenty people join and eight confirm at $49/month, that's 40% and something real. If only four confirm, you had a curiosity list, not a buyer list. The people who stay have made a quick mental calculation and decided the product is worth the price.
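The arithmetic above as a sketch, with the two example outcomes worked through. Names and the 30% default are illustrative of the threshold stated here, nothing more.

```python
# Illustrative price-confirmation check: what fraction of signups stay in
# after seeing a specific price. 30% is the bar described above.

def price_confirm_pass(signups, confirmed, min_rate=0.30):
    """True if enough signups re-confirm after the price reveal."""
    if signups == 0:
        return False
    return confirmed / signups >= min_rate

price_confirm_pass(20, 8)  # 8/20 = 40% -> True, a buyer list
price_confirm_pass(20, 4)  # 4/20 = 20% -> False, a curiosity list
```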
Pre-Payment (strongest)
This is the only experiment where the results genuinely can't be misread.
Search trends, positive interviews, even email captures still leave room for an optimistic interpretation. Pre-payment doesn't. Someone handing over money for a product that doesn't exist requires actual conviction that it will solve a real problem.
Take payment. The format barely matters: a Stripe link, an invoice, a letter of intent. What counts is a stranger, found through a cold channel, paying you voluntarily.
There's no minimum target. A single pre-sale from someone who found you on their own is worth more than 200 casual email signups. To get there: post in communities where your target customer is active, describe what you're building, offer a founding price in exchange for early commitment and input. The people from your interview pool who described an urgent, active workaround are the right ones to approach first — they've already told you the problem costs them real time or money.
Running the experiments
You don't have to reach all five before building. But the order matters. Never validate a solution before confirming the problem is painful enough that someone would pay to solve it. Never write production code before strangers have confirmed that pain.
The minimum: pass the interviews (7 of 10, 3+ with active workarounds) and the smoke test (15–20% from strangers). Higher-stakes bets — significant time, complex build, competitive market — push to the price reveal or pre-payment first.
Pick the experiment that would change your mind if it failed. Most founders choose experiments they expect to pass. The question to ask yourself: if this fails, do I stop? If yes, run it. If no, you're not really validating — you're looking for permission.
Run the experiment that scares you. That's the one with the real answer.