The Design Sprint is a five-day process developed at Google Ventures. It's well-documented and genuinely useful, but built for teams with five or more people, a dedicated facilitator, and a full week blocked off. Most solo founders don't have those things, but that's the lesser problem.
The bigger issue is this: most validation approaches don't have an ending condition. You run some tests. You look at the results. You don't quite feel ready to commit, so you run more tests. The idea never gets killed, and it never gets built. It accumulates evidence that means whatever you need it to mean: the 6% click-through becomes "decent for a cold audience," the interviews where nobody asked for the product become "useful qualitative signal about positioning," the waitlist full of your network becomes proof of demand. The test keeps running because there's no condition under which it tells you to stop.
The thing that fixes this is a standard you write down before the data exists. Not "I'll see how it goes." Specific criteria, defined in advance, that you measure against on a specific day. This sprint builds that in: five days, one set of tests, one decision.
Before anything else (Day 0)
Write down your go/no-go criteria. Two or three specific, measurable signals that will tell you, on Day 5, whether this idea is worth pursuing.
Something like: at least 15% click-through on the landing page from cold traffic, at least 5 signups from people who don't know you personally, or at least 2 of 3 problem interviews where the person has actively tried to solve this before. The specifics will depend on the idea. What they can't be is vague: "people seemed into it" doesn't count.
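One way to make the bar impossible to reinterpret later is to write it down as literal numbers before any traffic runs. A minimal sketch; the names and thresholds below are illustrative placeholders, not prescribed values:

```python
# Day 0: the bar, written down before any data exists.
# Illustrative thresholds only; set your own per idea, then don't touch them.
CRITERIA = {
    "cold_click_through_rate": 0.15,  # fraction of cold visitors who click through
    "stranger_signups": 5,            # signups from people outside your network
    "interviews_with_history": 2,     # interviews (of 3) with a concrete past attempt
}
```

The point of the data-file form is that a number like 0.15 can't quietly become "well, 8% is decent for a cold audience" on Day 5.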
The reason this happens first is that results warp expectations the moment they exist. You see 8% click-through and convince yourself that's reasonable for a cold audience. Maybe it is. But you should have decided that before you saw the number. The failure mode here isn't dishonesty. It's that rationalization happens automatically and fast. You write down your bar in advance so that the version of you staring at disappointing numbers doesn't get to retroactively change what "good" looks like.
Most founders who end up building the wrong thing didn't run bad tests. They ran tests without pre-committing to what a bad result looked like. The sprint is designed so you can't do that.
Get a page live (Day 1)
One page: a headline that says what you're building and who it's for, two or three sentences on the core value, an email capture. That's it. Get it live.
The goal is a real URL by end of day, not a polished design. If you want to move fast (and Day 2 depends on having something to send traffic to), EarlyProof can take your idea description and generate the page. That's useful when speed matters more than aesthetics.
You need strangers, not supporters (Day 2)
Warm traffic breaks validation. Friends click through because they support you. Existing followers click because they follow you. None of that tells you whether a stranger who'd never heard of you would care. The rule is cold traffic only: people who have no prior relationship with you and no social reason to be kind.
Three ways to get there: run a small paid ad on Meta or Reddit targeting the problem profile ($20–50, a few hundred impressions is enough); post about the problem (not the product) in a community where your target audience hangs out, linking to the page as what you're testing; or send a single short message to 30 people on LinkedIn who fit the profile, asking if they'll take a look.
Pick one. Execute it well. By end of day you want real people who don't know you to have landed on the page.
What the interviews are actually testing (Day 3)
While the page runs, have three conversations. Not demos, not pitches. You have nothing to show and nothing to sell. You're trying to find out whether the problem you're building for actually exists in someone's life in a recurring, concrete way.
The question that separates real problems from theoretical ones isn't "is this a pain point?" Almost any problem sounds like a pain point when framed right. It's "tell me about the last time you dealt with this." People who have a real problem can describe a specific instance: they tried something, it didn't work, they found a workaround or gave up. That history (the failed attempt, the awkward fix, the thing they use now even though it's annoying) is the signal you're looking for.
People with a theoretical problem give you agreement without specifics. They recognize the problem when you describe it, they can imagine how it would be frustrating, but they've never actually had to solve it. That recognition isn't worthless, but it's not the same as demand. The gap between the two is the whole interview.
Three conversations isn't a sample size. It's a minimum. Find people through your network, LinkedIn, or anyone who responded to your Day 2 community post.
Numbers only, no story (Day 4)
Write the landing page numbers next to your Day 0 criteria. Distill the interviews to yes/no on concrete problem history. Stop there. Save interpretation for Day 5.
The call (Day 5)
Look at your Day 0 criteria. Look at your Day 4 numbers. Make the call.
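If the Day 0 bar was written down as numbers, Day 5 is a mechanical comparison, which is the point. A hedged sketch, assuming criteria and Day 4 results are simple name-to-number mappings (all names and figures here are hypothetical):

```python
def make_the_call(criteria, results):
    """Go only if every Day 0 threshold is met.

    A missing number counts as zero, and any missed criterion means no-go:
    the mechanical check is what resolves 'on the line' as a no.
    """
    met = all(results.get(name, 0) >= bar for name, bar in criteria.items())
    return "go" if met else "no-go"

criteria = {"cold_click_through_rate": 0.15, "stranger_signups": 5, "interviews_with_history": 2}
day4 = {"cold_click_through_rate": 0.08, "stranger_signups": 7, "interviews_with_history": 2}

print(make_the_call(criteria, day4))  # the 8% click-through misses the 15% bar: no-go
```

Notice that the function never sees a narrative, only numbers against thresholds, so the version of you staring at disappointing results doesn't get a vote.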
Above threshold: keep going, but not by building. More interviews, a different traffic source, a pre-order test. You've established that strangers respond to the pitch and real people have a concrete version of the problem, which is more signal than most ideas get at this stage. What the sprint didn't tell you is everything else: sustainable acquisition cost, whether your solution approach is actually right, whether the audience is large enough. The sprint earned you the right to pursue those questions. It didn't answer them.
Below threshold: the angle didn't work. That's different from the problem being dead. Maybe the pitch was off, maybe the audience was wrong, maybe this is a problem for a different kind of buyer. You can re-run in five more days with a different framing. Extending the current sprint won't move the numbers.
On the line, call it no. Ambiguity at this stage usually means the test gave you a soft no you're not ready to hear.
Hold to what you wrote on Day 0. That version of you set the bar before any results existed.
What this process is actually for
Most validation approaches give you tools without a decision. Here's how to run an interview. Here's a landing page builder. Here's how to analyze feedback. Useful techniques, but none of them tell you when you've learned enough to make a call, and none of them force you to make one.
This sprint does. Five days, real tests, and a specific day when you measure what you actually got against what you said you needed. Most ideas that don't survive it aren't bad ideas. They're ideas without a market, and that's worth finding out in five days rather than six months.