How many teams have you seen celebrate the launch of a product that ultimately nobody used? It's a common tragedy in business: high-performing engineers and designers work with incredible efficiency to build something that fails to move the needle on any metric. To solve this, entrepreneurs are turning to a lean startup kanban to ensure that every task completed translates into real business knowledge.
In a typical startup, the pressure to ship features is relentless. But shipping doesn't equal progress if those features don't lead to a sustainable business model. By adding a "Validated" column to the traditional workflow, you shift the focus from merely doing work to actually learning what your customers want.
This system acts as a speed regulator for your company. It prevents you from flooding the market with useless code and forces you to confront the data before you move on to the next task. It’s the difference between driving fast in the wrong direction and steering a car toward a specific destination.
In his book The Lean Startup, Eric Ries adapts the Japanese concept of Kanban—originally a scheduling system for lean manufacturing—to the world of high-uncertainty business. While a factory uses Kanban to manage physical inventory, a startup uses it to manage the flow of ideas and learning.
Traditional Kanban boards usually track three stages: Backlog, In Progress, and Done. In Ries’s version, "Done" isn't the final destination. A feature isn't finished just because the code is written and the bugs are fixed. It’s only finished when the team has measured its impact on customer behavior.
This matters because startups operate under conditions of extreme uncertainty. You don't know if your pricing page is right or if your onboarding flow is intuitive. The lean startup kanban provides a structured way to test these assumptions scientifically rather than relying on gut feelings or vanity metrics.
Most project management tools are designed to track output, not outcomes. If your board ends at "Done," you’re incentivizing your team to simply crank out more features without checking if those features are valuable. This leads to "achieving failure"—the successful execution of a flawed plan.
When you stop at "Done," you're ignoring the Build-Measure-Learn feedback loop. A team might ship ten features in a month, but if none of those features increased customer retention or revenue, that month was a waste of resources. The system must prioritize the search for a sustainable business over the production of features.
Managing startup tasks requires a capacity constraint. In this framework, each column on your board has a limit. For example, you might decide that the "In Validation" column can only hold three features at a time. This prevents the team from moving on to new projects until they’ve finished measuring the previous ones.
If the validation column is full, the engineers can’t start a new task. They must instead help with the data analysis or customer interviews needed to move the current tasks to the final stage. This creates a "pull" system where learning pulls the work through the pipeline, rather than a "push" system where features are shoved onto the market.
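The pull mechanics described above can be sketched in a few lines of code. This is a minimal illustration under assumed conventions: the column names, the WIP limit of three, and the `Board` class and its `pull` method are all hypothetical, not features of any particular kanban tool.

```python
from dataclasses import dataclass, field

@dataclass
class Board:
    # WIP limits per column; None means unlimited (illustrative values)
    limits: dict = field(default_factory=lambda: {
        "backlog": None, "in_progress": 3, "in_validation": 3, "validated": None})
    columns: dict = field(default_factory=lambda: {
        "backlog": [], "in_progress": [], "in_validation": [], "validated": []})

    def pull(self, item: str, to: str) -> bool:
        """Move an item into a column only if that column has spare capacity."""
        limit = self.limits[to]
        if limit is not None and len(self.columns[to]) >= limit:
            return False  # column full: clear the bottleneck before starting new work
        for items in self.columns.values():  # remove item from its current column
            if item in items:
                items.remove(item)
        self.columns[to].append(item)
        return True

board = Board()
board.columns["backlog"] = ["A", "B", "C", "D"]
for feature in ["A", "B", "C"]:
    board.pull(feature, "in_validation")
blocked = board.pull("D", "in_validation")  # False: validation is at its limit
```

Because `pull` refuses new work when the target column is full, the only way to free capacity for "D" is to finish validating something already in flight — exactly the pull dynamic described above.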
The "Validated" column is the most important part of this setup. To move a feature here, the team must show a change in customer behavior through a split test or other actionable metric. If the experiment fails, the feature is either removed or pivoted.
Ries suggests that this discipline keeps the team honest. It’s easy to tell ourselves a story about why a feature is good, but the data doesn't lie. According to research cited by Ries, even high-performing teams find that many of their ideas have zero impact on customer behavior, making this validation step a critical filter for waste.
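As a rough illustration of what "the data doesn't lie" can mean in practice, a two-proportion z-test is one common way to judge a split test. The function below is a simplified sketch, not a prescription from the book: it makes a single one-sided comparison against a fixed z threshold of 1.96, and all the numbers are illustrative. Real experiments also need sample-size planning and guardrails against peeking at results early.

```python
from math import sqrt

def split_test_validated(conv_a: int, n_a: int, conv_b: int, n_b: int,
                         z_threshold: float = 1.96) -> bool:
    """Return True if variant B's conversion rate beats control A's
    by more than the pooled standard error allows for by chance."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return False  # degenerate sample (0% or 100% conversion everywhere)
    return (p_b - p_a) / se > z_threshold

# 120/1000 control vs 160/1000 treatment: a clear, measurable lift.
# 120/1000 vs 130/1000: the difference could easily be noise.
```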
Grockit, an online education company founded by Farbood Nivi, implemented this system to fix their stagnant growth. They were shipping features constantly—new social tools, chat boxes, and peer-to-peer functions—but their user engagement wasn't moving. They realized they were using vanity metrics to track success.
Once they switched to a Kanban system that required validation for every story, they discovered a shocking truth. Many of the "best practices" they were following, like lazy registration, didn't actually change customer behavior at all. This realization allowed them to stop wasting time on social fluff and focus on the solo-study tools that students actually wanted.
At IMVU, the team wanted to build a high-quality movement system for their 3D avatars, similar to The Sims. This would have taken months of engineering. Instead, they released a crude "teleportation" feature where the avatar simply disappeared and reappeared in a new spot. They felt it was a low-quality compromise.
To their surprise, when they measured the results, customers loved it. It was faster than walking and solved the users' primary problem: getting across the room to talk to a friend. Because they had a system to measure results, they saved months of work that would have gone into a complex walking algorithm that nobody actually needed.
Immediately add a column to the far right of your current workflow labeled "Validated." Update your team's definition of "Done" so that no task is moved to the final stage until it has been tested against real customer data. This ensures that your backlog is constantly being informed by reality rather than speculation.
Assign a maximum number of items allowed in each stage of your workflow, especially the "In Development" and "In Validation" columns. If you hit the limit in validation, the entire team must stop starting new work. This forces everyone to focus on the bottleneck—learning—before they return to building.
For every item in your backlog, write down the specific metric you expect it to change. Don't let a feature enter the "In Progress" column unless you have a plan for how you will measure its impact. If you can't define what success looks like in numbers, you aren't ready to spend engineering time on it.
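This gate can be made mechanical by storing the hypothesis on the backlog item itself. A minimal sketch, assuming hypothetical names (`BacklogItem`, `ready_for_development`) and an invented "expected lift" field:

```python
from dataclasses import dataclass

@dataclass
class BacklogItem:
    name: str
    metric: str           # the actionable metric this feature should move
    expected_lift: float  # minimum improvement that would count as success

def ready_for_development(item: BacklogItem) -> bool:
    """Only items with a measurable definition of success enter 'In Progress'."""
    return bool(item.metric.strip()) and item.expected_lift > 0
```

Under this rule, an item like `BacklogItem("chat widget", "", 0.0)` is rejected until the team writes down what it is supposed to change and by how much.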
Critics often argue that this level of testing slows down the creative process. They worry that by focusing purely on what can be measured, teams will lose their long-term vision or fail to take big risks. There's a fear that the team will become "optimization zombies" who only move buttons around rather than inventing new categories.
Others point out that in the very early stages of a startup, you might not have enough traffic to run statistically significant split tests. In these cases, the feedback loop relies more on qualitative interviews. While Ries acknowledges these limits, he insists that some form of validation is always better than flying blind. The goal isn't to replace vision, but to ensure that the vision is actually working in the real world.
Switching to a lean startup kanban requires a fundamental shift from valuing output to valuing knowledge. This process protects you from the most common cause of startup failure: building something efficiently that nobody wants. Focus on clearing the validation bottleneck today to ensure your team's hard work actually drives business growth.
One concrete action you can take this afternoon is to look at your "Done" column from last week and ask your team for the specific data proving each of those features improved the business.
How does Lean Startup Kanban differ from Scrum?
Scrum usually works in fixed-length sprints, often focusing on completing a specific set of features within two weeks. Lean Startup Kanban is a continuous flow system that focuses on moving ideas through a feedback loop. The primary difference is the goal: Scrum aims to finish tasks on time, while the startup Kanban aims to validate learning as fast as possible.
When is a feature considered validated?
A feature is validated only when its impact on customer behavior has been measured against a specific hypothesis. Usually, this involves a split test (A/B test) where you compare the behavior of users who have the feature versus those who don't. If the data shows a meaningful improvement in an actionable metric, the feature is considered validated.
What happens if a feature fails validation?
If a feature fails to move the needle, it should be removed or reworked. This is a success in terms of learning because it prevents the team from maintaining useless code. The team then uses that insight to pivot their strategy or try a different approach to solving the customer's problem in the next cycle.
Can you validate learning without split tests?
Yes. While split testing is the gold standard, you can validate learning through qualitative methods like customer interviews or 'Concierge' MVPs. The key is that the feature cannot move to 'Done' until some external evidence—beyond the team's opinion—confirms that the feature is solving a real problem for the user.