One of the most persistent myths in product design is that every project should follow the same process. Discovery, wireframes, high-fidelity, testing, launch — repeat. In practice, that mindset often creates more friction than clarity. Over the years, I’ve learned that the most effective design work doesn’t come from rigidly applying a single framework, but from intentionally choosing an approach based on the type of problem you’re solving and the risk you’re managing.
Not all design projects are equal. Some are about reducing ambiguity, others are about managing complexity, and some are simply about shipping something useful under tight constraints. Treating them the same leads to over-design in some cases and under-design in others.
What’s mattered most in my work is being able to quickly answer four core questions at the start of a project. These questions emerged from a pattern I kept seeing across very different projects: when work struggled, it was rarely because teams lacked effort or skill; it was because they were answering the wrong questions too early.

What kind of work is this?

This question exists to prevent false starts. Many projects are mislabeled at kickoff: an iteration treated like discovery, or a system redesign approached like a single flow. Clarifying the nature of the work upfront helps set realistic expectations around scope, speed, and depth, and prevents teams from defaulting to a process that doesn’t fit the problem.

Where is the biggest risk?

Every project carries multiple risks, but not all risks deserve equal attention. This question forces prioritization. Whether the risk is solving the wrong problem, mishandling edge cases, breaking existing mental models, or shipping too slowly, identifying the dominant risk helps focus design energy where it matters most instead of spreading it thin across unnecessary artifacts.

What do we need to learn or validate?

Design decisions are only as strong as the assumptions behind them. This question shifts the team from output-focused thinking to learning-focused thinking. It ensures research, testing, or alignment work is intentional, designed to answer specific unknowns rather than performed because “that’s what the process says.”

What artifacts help us decide?

Not every deliverable is useful in every context. This question protects teams from performative design work: creating artifacts that look complete but don’t move decisions forward. By grounding artifacts in decision-making needs, the work stays lighter, faster, and more aligned with the project’s true constraints.
Taken together, these questions create a consistent decision framework that adapts to the project instead of forcing the project to adapt to a predefined process. They keep teams focused on clarity, risk, and momentum—regardless of the type of design work being done.
Defining Design Projects by Risk, Not Deliverables
Instead of categorizing projects by platform or output, I've found it more useful to define them by what's true about the problem space. The table below outlines common project types, along with their primary focus and primary risk.
| Project Type | Primary Focus | Primary Risk |
| --- | --- | --- |
| Zero-to-One Concept | Validating that you’re solving the right problem before investing in detailed design. | Building the wrong thing or solving a non-problem. |
| Iteration on an Existing Flow | Identifying friction, improving usability, and driving measurable outcome improvements. | Optimizing symptoms instead of addressing the root issue. |
| Rules-Driven / Multi-Use Case System | Establishing clarity around states, logic, and edge cases before visual design can succeed. | Inconsistent behavior, broken edge cases, or logic gaps. |
| MVP / Time-Boxed Release | Prioritizing coherence and restraint over completeness to ship something useful on time. | Shipping something too thin or creating long-term design debt. |
| Design System / Standardization | Defining contracts, consistency, and adoption rather than focusing on individual screens. | Low adoption or components that don't match real needs. |
| Feature Expansion | Protecting existing mental models while introducing new capabilities. | Breaking user expectations or navigation clarity. |
| Service-Oriented Experience | Designing beyond the UI to include operations, communication, and trust-building moments. | Experience breakdowns at handoffs or loss of user trust. |
| Discovery-Focused Project | Reducing uncertainty and creating alignment before committing to a solution. | Analysis paralysis or learning too late to change direction. |
Each of these project types benefits from a different starting point. The mistake isn't choosing the "wrong" framework; it's assuming the same framework should apply to all of them.
A Consistent Start Plan, Configured Per Project
While the approach should vary, I've found it valuable to keep a consistent decision-making structure at the start of every project. Regardless of type, I aim to clarify:
- The primary risk we're managing
- The inputs required to move forward confidently
- The core artifacts that will drive decisions
- How we'll validate progress
- The conditions that need to be true before advancing
This allows teams to move with intention instead of habit. It also creates shared language across product, engineering, and stakeholders — especially when expectations around scope, research depth, or timelines differ.
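To make this concrete, here's one way the start plan could be captured as a lightweight, shareable template. This is just a sketch, assuming a TypeScript-style definition; the field names and the zero-to-one example values are illustrative, not a prescribed schema.

```typescript
// A minimal sketch of a per-project kickoff brief.
// Field names and example values are illustrative only.
interface KickoffBrief {
  projectType: string;          // e.g. "Zero-to-One Concept"
  primaryRisk: string;          // the single risk the team is actively managing
  requiredInputs: string[];     // what we need before moving forward confidently
  decisionArtifacts: string[];  // artifacts that will actually drive decisions
  validationPlan: string;       // how we'll know we're making real progress
  exitConditions: string[];     // what must be true before advancing
}

// The same template, configured for a zero-to-one concept.
const zeroToOneBrief: KickoffBrief = {
  projectType: "Zero-to-One Concept",
  primaryRisk: "Solving the wrong problem",
  requiredInputs: ["Assumption map", "How users currently work around the problem"],
  decisionArtifacts: ["Lightweight journeys", "Concept sketches", "Early feedback notes"],
  validationPlan: "Quick feedback sessions against the riskiest assumptions",
  exitConditions: ["The problem is confirmed to exist", "Current workarounds are understood"],
};
```

The value isn't the format; it's that every project answers the same fields before work begins, even though the answers differ from project to project.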
Example: How the Same Starter Questions Lead to Different Work
Project A: Zero-to-One Concept
- What kind of work is this?
  - A brand-new concept with unclear user needs.
- Where is the biggest risk?
  - Solving the wrong problem.
- What do we need to learn or validate?
  - Whether the problem exists and how users currently address it.
- What artifacts help us decide?
  - Assumption mapping, lightweight journeys, concept sketches, early feedback.
Project B: Rules-Driven, Multi-Use Case System
- What kind of work is this?
  - A complex system supporting multiple scenarios and edge cases.
- Where is the biggest risk?
  - Inconsistent behavior and broken logic across use cases.
- What do we need to learn or validate?
  - How rules, states, and conditions interact across scenarios.
- What artifacts help us decide?
  - Decision tables, state models, edge case inventories, scenario-based testing.
The starter questions don’t change — but the work does. The difference isn’t the process itself, but how intentionally it’s configured to address risk.