The Problem with Customer Interviews
You have probably heard the advice a thousand times: talk to your users. It is the first bullet in every startup playbook, the core message of every accelerator's week one lecture, and the thing that every mentor repeats as if they invented the concept. And they are right. Talking to users is the single most important thing an early-stage founder can do.
But there is a problem nobody talks about. Most customer interviews are terrible. They produce bad data, waste both parties' time, and leave founders with a false sense of validation that sends them sprinting in the wrong direction. The founder walks away feeling great because the person smiled and said "yeah, I'd probably use that." Six months later, nobody signs up.
The issue is that asking people about your idea is fundamentally different from learning about their problems. One of those activities is useful. The other is a trap disguised as research.
Rob Fitzpatrick wrote an entire book about this called The Mom Test. The title comes from a simple observation: even your mom will lie to you about whether your idea is good. She loves you. She does not want to hurt your feelings. And every potential customer you interview has some version of this same instinct. People are polite. They want to be supportive. So when you sit down and say "I'm building a tool that does X, would you use it?" they say yes. Almost always.
That yes is worthless.
Why Founders Ask the Wrong Questions
There are a few reasons founders fall into this pattern, and all of them are understandable.
The first is excitement. You have been thinking about this idea nonstop. You have mapped out features, imagined the UI, and already named the company. When you sit down with a potential user, you want to share the thing you have been working on. So you pitch. And then you ask them what they think. This turns a research conversation into a sales conversation, and it happens without you even noticing.
The second is ego protection. Deep down, you are afraid the idea might not work. Asking vague, leading questions like "wouldn't it be nice if you could do X more easily?" is a way to collect encouraging responses without ever testing the core assumption. You get to go home feeling validated without actually learning anything.
The third is a lack of structure. Most founders have never been trained in qualitative research. They sit down with a user and wing it. They ask whatever comes to mind, follow tangents, and finish the conversation with a pile of notes that do not connect to anything. Two weeks later, they cannot remember what anyone actually said.
Here is the principle from The Mom Test that fixes all of this: talk about their life, not your idea. Ask about the past, not the future. Look for specifics, not generalities. If you follow these rules, even your mom cannot give you a false positive.
Five Questions That Actually Work
You do not need a long interview script. In fact, simpler is better. These five questions form the backbone of a useful customer conversation. You can adapt the wording to fit your tone and context, but the structure is what matters.
- "What is the hardest part about [problem area]?" This is your opening. It is broad enough to let them tell you what they actually care about, rather than what you want them to care about. Their answer reveals whether the problem you are solving even ranks as a priority in their daily work. Sometimes you discover that the thing you thought was their biggest pain is number four on their list.
- "Tell me about the last time that happened." This is where you get concrete. You are asking them to recall a specific event. Specifics are gold. If someone says "it happens all the time" but cannot describe a single recent instance, the problem might not be as acute as they are suggesting. Real pain leaves memories.
- "What did you do to solve it?" This question tells you two things. First, it shows you who your actual competition is. The answer is often not another software tool. It is a spreadsheet, a manual process, an intern, or simply ignoring the problem. Second, it tells you how motivated they are. People who have tried to solve the problem have demonstrated real intent. People who just shrug and say "nothing, I just deal with it" may not be willing to pay for a solution.
- "Why was that not good enough?" Now you are digging into the gap between their current solution and what they actually need. This is where you find your positioning. The answer to this question practically writes your marketing copy for you. They will tell you, in their own words, what is missing. Listen carefully.
- "What would it mean for you if this problem went away?" This question gets at the emotional and financial stakes. Are they losing money? Losing time? Losing sleep? The size of the consequence determines their willingness to pay. If the problem going away would save them two minutes a week, you probably do not have a business. If it would save them ten hours a month or prevent them from losing clients, you have something worth building.
Notice that none of these questions mention your product. You are not pitching. You are investigating. The difference matters more than almost anything else in early-stage work.
How Many Interviews You Actually Need
One of the most common misconceptions is that you need a huge sample size to get reliable signal. Founders either interview three people and call it done, or they set a goal of 100 and burn out after 20 while still feeling like they do not have enough data.
The real answer, supported by decades of qualitative research, is that patterns emerge far faster than you expect. In user research, there is a concept called "saturation," which is the point where new interviews stop producing new information. For most B2B products, this happens between 12 and 20 interviews. After about 8 conversations, you will start hearing the same pain points repeated. By interview 15, you will be able to predict what someone is going to say before they say it.
Here is a useful heuristic: when three or four people in a row tell you essentially the same thing, you have enough data on that dimension to move forward. You do not need statistical significance. You need a clear, recurring pattern.
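If you log the primary pain point from each interview in order, the "three or four in a row" heuristic is easy to check mechanically. A minimal sketch, where the theme labels are hypothetical examples:

```python
def longest_repeat_run(themes):
    """Length of the longest run of consecutive interviews
    that surfaced the same primary theme."""
    if not themes:
        return 0
    best = run = 1
    for prev, cur in zip(themes, themes[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

# Primary pain point named in each interview, in the order conducted.
themes = ["reporting", "reporting", "onboarding",
          "reporting", "reporting", "reporting"]

if longest_repeat_run(themes) >= 3:
    print("Recurring pattern on this dimension: enough to move forward.")
```

The point is not the code itself but the discipline it enforces: one primary theme per interview, recorded in sequence, so the pattern is counted rather than remembered.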
That said, there is a nuance. You need 15 to 20 interviews within a single customer segment. Talking to five marketers, five engineers, five salespeople, and five founders about the same problem will produce chaos, not clarity. Each of those groups has different workflows, different tools, different priorities. Pick one segment and go deep. You can explore other segments later.
"If you don't know who you are building for, talking to users is just networking with extra steps."
Rahul Vohra figured this out at Superhuman. Before launching, he ran a version of Sean Ellis's product-market fit survey with early users. But the insight was in how he segmented the responses. Instead of looking at all users as one group, he identified which type of user was most disappointed by the idea of losing the product. Then he doubled down on building for that specific group. The survey gave him signal, but the segmentation gave him direction. He was not trying to be everything to everyone. He was trying to be indispensable to a narrow group.
Where to Find People to Interview
Finding interview subjects feels hard until you know where to look. Cold outreach to strangers has a low response rate, and it often attracts people who are just being polite rather than people who actually have the problem you are investigating. Here are the places that consistently work well.
Niche online communities: Subreddits, Slack groups, Discord servers, and industry-specific forums are full of people who actively talk about their problems. Search for posts where someone is complaining about a workflow, asking for tool recommendations, or sharing a workaround. These people have self-identified as having the problem. Reach out with a short, specific message. Something like: "I noticed your post about [problem]. I'm doing research on this area. Would you be open to a 15-minute conversation? No pitch, just trying to learn." Keep the time commitment low and emphasize that you are there to listen.
LinkedIn: For B2B products, LinkedIn is still the best way to find people by job title and company size. Send connection requests with a brief note explaining that you are researching a topic in their field. The response rate on LinkedIn is surprisingly high when you lead with curiosity rather than a pitch. Most professionals are happy to share their expertise if you make it clear you value their time.
Review sites: G2, Capterra, and similar platforms are goldmines. Find the products that are adjacent to what you want to build. Read the one-star and three-star reviews. These reviewers have already articulated what they do not like about existing solutions. Some of them include their names or company information. Reach out and ask if they would be willing to share more.
Support forums and Twitter threads: When people get frustrated with a tool, they post about it. Twitter, in particular, has become a place where professionals vent about broken workflows. Search for phrases like "I hate [tool]" or "is there a better way to [task]" and you will find a list of people who are actively experiencing the problem.
Your existing network: Do not overlook the simplest source. If you are building in a domain you have worked in, you already know people who have the problem. Start there. They are more likely to be honest with you and more likely to respond quickly.
Stewart Butterfield, when building what became Slack, did something instructive during the early days. Before Slack was Slack, the team had been building a game called Glitch. The internal communication tool they built for their own team turned out to be more interesting than the game itself. But Butterfield did not just assume the tool was good because his team liked it. He brought in other companies to use it during a preview period and watched carefully. He studied how teams adopted it (or did not), which features they used heavily, and where they got confused. His source of users was personal connections to other small companies. He asked friends to try it with their teams. No advertising. No launch event. Just direct conversations and careful observation. By the time Slack launched publicly, the team had already iterated through the rough edges that would have killed adoption.
What to Do with the Answers
Raw interview notes are not useful by themselves. The value comes from organizing what you heard into patterns. Here is a process that works without any fancy tools or frameworks.
After each interview, spend five minutes writing a summary while the conversation is fresh. Capture three things: the biggest pain point they described, the current solution they use, and any strong emotional language they used (frustration, anxiety, relief). Direct quotes are especially valuable. Write them down verbatim whenever possible.
After every five interviews, sit down and look for overlaps. Create a simple spreadsheet or even a list on paper with one row per interview. The columns are: person, role, biggest problem, current solution, why it fails, and willingness to pay (inferred from how much effort they put into solving it). When you lay it out like this, clusters become obvious.
You are looking for three signals:
- Frequency: How many people mentioned this problem without being prompted? If 12 out of 15 people bring up the same friction point, you have something real.
- Intensity: How strongly do they feel about it? Someone who says "yeah, that is a bit annoying" is different from someone who says "I spent three hours last Tuesday trying to fix this and I wanted to throw my laptop." Intensity predicts willingness to pay.
- Existing spend: Are they already paying money or significant time to address this? If they have a $500/month workaround cobbled together from three different tools, that is a clear signal that a better solution has a market. If they have done nothing about the problem, the pain might not be acute enough to monetize.
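All three signals can be tallied directly from the spreadsheet described above. A minimal sketch, assuming each interview is a row with the hypothetical fields shown (intensity scored 1 to 5, spend in dollars per month):

```python
from collections import Counter

# Hypothetical interview rows, one per conversation.
interviews = [
    {"person": "A", "problem": "manual data transfer", "intensity": 4, "spend_usd": 500},
    {"person": "B", "problem": "manual data transfer", "intensity": 5, "spend_usd": 0},
    {"person": "C", "problem": "waiting for approvals", "intensity": 2, "spend_usd": 0},
    {"person": "D", "problem": "manual data transfer", "intensity": 4, "spend_usd": 200},
]

# Frequency: how often each problem came up unprompted.
frequency = Counter(row["problem"] for row in interviews)
top_problem, mentions = frequency.most_common(1)[0]

# Intensity and existing spend for the most-mentioned problem.
subset = [r for r in interviews if r["problem"] == top_problem]
avg_intensity = sum(r["intensity"] for r in subset) / len(subset)
total_spend = sum(r["spend_usd"] for r in subset)

print(f"{top_problem}: {mentions}/{len(interviews)} mentions, "
      f"avg intensity {avg_intensity:.1f}, ${total_spend}/mo already being spent")
```

With a dozen rows this fits in a spreadsheet pivot table just as easily; the script only makes explicit which columns the three signals come from.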
One mistake to avoid: do not cherry-pick the interviews that support your hypothesis and dismiss the ones that do not. If half of your interviews point in one direction and the other half point somewhere else, you probably have a segmentation problem, not a validation problem. Go back to the data and see if the people who share one pattern have something in common (same role, same company size, same industry). That commonality is your ideal customer profile (ICP).
Pattern Matching Across Interviews
The real skill of customer interviews is not asking great questions. It is hearing what people actually tell you and connecting it across conversations. This sounds straightforward, but most founders struggle with it because they are listening for confirmation rather than listening for patterns.
Here is a concrete approach. After you complete your batch of interviews, go through your notes and highlight every statement that describes a behavior. Not an opinion, not a prediction, not a wish. A behavior. "I export the data to a CSV every Friday" is a behavior. "It would be nice to have real-time dashboards" is a wish. Behaviors are reliable. Wishes are unreliable.
Group the behaviors into categories. You might end up with categories like "manual data transfer," "duplicate entry across systems," or "waiting for approvals." These behavioral categories are the foundation of your product requirements. Each one represents something people actually do today that you might be able to make easier, faster, or unnecessary.
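The behavior-versus-wish filter and the grouping step can be done with highlighters, but the structure is simple enough to sketch in code. Here the statements, tags, and category names are all hypothetical examples:

```python
from collections import defaultdict

# (statement, category) pairs pulled from interview notes.
# A category of None marks a wish or prediction, not a behavior.
notes = [
    ("I export the data to a CSV every Friday", "manual data transfer"),
    ("I retype each order into the billing system", "duplicate entry"),
    ("It would be nice to have real-time dashboards", None),
    ("I email my manager and wait two days for sign-off", "waiting for approvals"),
]

behaviors = defaultdict(list)
for statement, category in notes:
    if category is not None:  # keep behaviors, drop wishes
        behaviors[category].append(statement)

for category, quotes in sorted(behaviors.items()):
    print(f"{category}: {len(quotes)} behavior(s)")
```

The tagging itself stays manual and judgment-based; the code only enforces the rule that wishes never make it into the behavioral categories that feed your requirements.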
Next, look at the workarounds. Every workaround is a product feature waiting to be built. When someone says "I keep a separate spreadsheet to track which invoices have been sent because the system does not show me that," they have just handed you a feature spec. The best product roadmaps are assembled from workarounds, not from brainstorming sessions.
Vohra used this exact principle at Superhuman. He asked users to rate how disappointed they would be if they could no longer use the product, then he studied what the "very disappointed" users valued most. He found that speed was the pattern. Not features, not integrations. Speed. That insight came from pattern matching across survey responses, and it informed every product decision going forward. The team said no to feature requests that would compromise speed, even when those features seemed like obvious additions. The pattern gave them a filter for prioritization.
PostBuild's market analysis tools can speed up part of this work by helping you identify where your potential users gather and what language they use to describe their problems, which gives you a head start before you begin your first interview.
When to Stop Interviewing and Start Building
There is a dangerous comfort zone in customer research. Interviews feel productive. You are learning things. People are telling you their problems. You fill notebook after notebook. And at some point, the interviews become a way to avoid the harder work of actually building something and putting it in front of people.
Here are the signs that you have enough data:
- You can describe your target user's top three problems in their exact words, without hesitating.
- You know what they currently use to solve the problem and specifically why it falls short.
- You can predict, with reasonable accuracy, what a new interviewee in your target segment will say before they say it.
- At least half of your interviewees have expressed a willingness to try a solution, either by saying so directly or by demonstrating high pain through their behavior.
When those conditions are met, stop interviewing and start building a prototype. The prototype does not need to be a full product. It can be a landing page, a Figma mockup, a spreadsheet that simulates the workflow, or a manual service that you deliver by hand. The point is to shift from "do people have this problem?" to "will people engage with this specific solution?"
There is a phrase from the lean startup world that is useful here: "get out of the building." But there should be a companion phrase: "come back inside and build something." The two activities are complementary, and spending too long on either one is a mistake.
Stewart Butterfield's approach with Slack illustrates the balance. The team did not do six months of research before writing code. They built the internal tool first, used it themselves, then brought in outside teams and iterated based on what they observed. The research and building happened simultaneously, with each informing the other. Butterfield later said that if he had waited for perfect data, he would never have launched at all.
Putting It into Practice
Let me make this concrete. Here is what your next two weeks look like if you follow this approach.
Days 1 to 2: Define your target segment. Be specific. "Small business owners" is too broad. "Freelance graphic designers who use Figma and charge between $5K and $20K per project" is a segment you can actually find and interview. Write down your three riskiest assumptions about this group. These are the things that, if wrong, would kill your idea.
Days 3 to 5: Find and schedule 8 to 10 interviews. Use the channels described earlier. Send short, honest outreach messages. Aim for 20-minute conversations. Schedule them back to back if you can. Momentum matters.
Days 6 to 10: Conduct the interviews using the five questions as your backbone. Take notes during each conversation. Write your summary within 30 minutes of finishing. After every three or four interviews, look at your summaries side by side and note recurring themes.
Days 11 to 12: Compile your findings. Build your spreadsheet. Identify the top two or three patterns by frequency and intensity. Write a one-paragraph description of the problem you are going to solve, using language directly from your interviews. This paragraph becomes the foundation of your positioning.
Days 13 to 14: Based on your findings, build the smallest possible version of your solution that lets you test whether people will actually engage. A landing page with a waitlist and a clear value proposition is often enough. PostBuild can help you move quickly from interview insights to a validated market-entry strategy, so you are building on real signal rather than assumptions.
The whole process takes two weeks. Not two months. The founders who move quickly through this cycle have a meaningful advantage because they reach the "build and test" phase while their competitors are still scheduling interviews.
One final note. The goal of customer interviews is not to eliminate risk. You cannot interview your way to certainty. The goal is to reduce the most dangerous assumptions to a level where building is a reasonable bet. There will always be unknowns. There will always be surprises after launch. The interviews give you enough signal to build something worth launching, and that is all you need to get started.
Go schedule five interviews this week. Use the five questions. Write down what you hear. You will learn more in those five conversations than in a month of reading blog posts about customer discovery.