The Yuzu Method

6.2 - Compound

Testimonials, NPS, and the referral question

There is a sequence that, when done right, generates more pipeline than any outbound team and more credibility than any marketing spend, and it is also the sequence most companies do worst, because it lives in the gap between sales, customer success, and marketing, and nobody owns it end to end. The sequence has three steps. Ask satisfied customers a numerical satisfaction question at a specific moment. Use the number to decide what to do next, with three different answers leading to three different actions. Turn the right customers into testimonials, case studies, and referrals.

It sounds simple, but the execution detail is where the leverage lives, which is why most companies execute it badly even when they have all the right pieces in place.

The question gets asked at the delight peak, not at the calendar mark. Most companies send NPS surveys quarterly because the calendar reminds them, and most NPS scores are noisy because the timing is off relative to the customer's actual experience of the product. The right time to ask is two to three months after a customer hits first value, when the work the product is doing for them feels recent and concrete, when the gratitude is still fresh and the answer the customer would give is going to be honest and specific. Before that they do not have enough experience to rate, and after that the value feels routine and the score drifts toward neutral because the customer has stopped consciously thinking about whether the product is doing its job. The window matters, and it matters enough that getting it right is more important than getting the survey design right.

Yuzu fires the NPS trip wire based on the customer's actual usage pattern, when their behavioral signals say they have crossed the threshold from trying the product to using the product as part of how they work, not based on what month of the year the marketing calendar says it is. The trip-wire engine that watches deals is the same engine that watches accounts after close, and the timing of the satisfaction question is just one more behaviorally triggered moment in a long sequence of them.
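As a minimal sketch of what a usage-based trip wire can look like, the field names and thresholds below are hypothetical, not Yuzu's actual engine:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class Account:
    name: str
    first_value_date: Optional[date]  # when the customer first hit value
    weekly_active_days: int           # recent usage signal
    nps_sent: bool = False

def should_send_nps(account: Account, today: date) -> bool:
    """Fire the satisfaction question two to three months after first
    value, and only if usage says the product is part of how they work."""
    if account.nps_sent or account.first_value_date is None:
        return False
    age = today - account.first_value_date
    in_window = timedelta(days=60) <= age <= timedelta(days=90)
    habitual = account.weekly_active_days >= 3  # hypothetical threshold
    return in_window and habitual
```

The point of the sketch is the shape of the condition: a window anchored to the customer's first-value date, gated by a usage signal, instead of a date anchored to the company's calendar.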

The number tells you which path to take, and the three paths are entirely different motions that most companies collapse into one generic thank-you-for-your-feedback response. A nine or a ten on the satisfaction question is a green light to ask for a referral. A seven or an eight is a green light to ask what would have made it a ten. A zero through six is a green light to escalate to whoever has the relationship and find out what is wrong before the customer churns, because the churn signal has already arrived in the form of the score itself.
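The three-way sort is simple enough to express directly. A sketch in Python, with action names invented for illustration:

```python
def route_nps_response(score: int) -> str:
    """Three score bands, three entirely different motions."""
    if not 0 <= score <= 10:
        raise ValueError("NPS scores run 0-10")
    if score >= 9:
        return "ask_for_referral"            # green light to ask
    if score >= 7:
        return "ask_what_would_make_it_a_ten"  # customer research
    return "escalate_to_relationship_owner"  # churn signal has arrived
```

The branch order is the whole design: the generic thank-you-for-your-feedback response is what you get when all three bands collapse into one path.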

The referral question, when it gets asked, has to be specific. A vague would-you-refer-us gets vague responses, because the question itself is too abstract for the customer to do anything with. The version that works is more like this. If working with us has been this valuable, is there anyone you can think of who would get the same value, anyone in your network who is dealing with the same problems we have been solving for you, even one or two names would be incredibly helpful and I do not want to put you on the spot. Two things make this version work. It is framed as the customer doing a favor for someone else, their friend or colleague or peer, rather than for you, which lowers the social cost of saying yes. And it gives the customer permission to come back with one or two names rather than feeling obligated to deliver a long list, which is the thing that usually makes people freeze up when they are asked for referrals in the abstract. Specific, low-stakes asks beat broad, high-stakes ones every time.

This question is reserved for nines and tens specifically, and the reason is not arbitrary. Asking lower scorers for referrals damages the relationship at exactly the wrong moment: a seven who is asked for a referral hears the question as the company not understanding its own product experience, which is exactly the wrong message to send to a seven. Worse, a seven who reluctantly produces a referral often produces a bad one, the friend who looks like an ICP fit on paper but who is going to bounce off the product the same way the seven nearly did, and that referral poisons two relationships at once, the original customer's and the new prospect's.

The seven-and-eight question is also specific, and it is the most underused customer-research question in B2B software. Thanks for the score. We are trying to figure out what would make this a ten for you, and I am curious what the one thing is that we could do differently that would move it. This question produces concrete, actionable feedback from the people most likely to give it, the customers who like you enough to keep using you and who can also see specific gaps that the nines and tens have stopped noticing. The product team's roadmap should be partially fed by these answers, and the customer success team should be tracking which gaps come up most often, because the patterns that emerge are the patterns of how to get the next cohort of sevens and eights to graduate.
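Tracking which gaps come up most often is a small amount of code. A minimal sketch, assuming each seven-or-eight answer has already been tagged with the gap it names (the tags here are invented for illustration):

```python
from collections import Counter

# Hypothetical gap log: one tag per seven-or-eight answer to
# "what would make this a ten for you?"
gap_feedback = [
    "reporting", "slack_integration", "reporting",
    "permissions", "reporting", "slack_integration",
]

gap_counts = Counter(gap_feedback)
top_gaps = gap_counts.most_common(2)
# The recurring gaps are the ones most likely to graduate the
# next cohort of sevens and eights into nines and tens.
```

The counter is trivial; the discipline is feeding it every answer so the patterns actually emerge instead of living in individual CSM inboxes.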

Capturing testimonials happens when the customer says yes, not later. When a satisfied customer agrees to provide a testimonial, the work happens the same week, not next quarter when marketing has bandwidth, because the agreement is a perishable resource and the customer's enthusiasm is at its highest right after they answered the NPS question. It cools fast. Yuzu can fire the testimonial-capture trip wire automatically at this moment, schedule the fifteen-minute interview, send the calendar invite, record the call when the time comes, transcribe the conversation, and produce a draft case study and a sixty-second clip ready for the marketing team to review, all inside the same week the customer said yes. The customer's burden is one short conversation. The output is a real piece of marketing material that came from a real conversation with a real customer, which is the kind of marketing material that actually moves prospects.

Mining the conversations for content is the part most teams miss entirely. A fifteen-minute interview with a customer about their experience produces, in raw form, the most honest social-media content your company will ever publish, because the customer is saying it, in their own words, with their own emotional cadence, in their own framing. A skilled team can break a single interview into a long-form blog post, three short clips for social, two LinkedIn posts, and an email-newsletter feature, and all of those pieces of content will outperform anything the marketing team could have written from scratch. The companies that take customer interviews seriously and treat them as content infrastructure run circles around the companies that treat marketing as a separate, agency-supplied function, because the source material the first kind of company is working with is fundamentally different from what the second kind has access to.

The discipline part is measurement, and this is where most NPS programs quietly fail without anyone noticing. A satisfaction-score program is not valuable because it produces a score, it is valuable because the score predicts retention in your specific business, and most companies never check whether it actually does. Score nines should retain at materially higher rates than score sixes, and if they do not, you are measuring something that does not predict outcomes, and you should change what you are measuring rather than continue treating the meaningless number as truth. Yuzu validates the satisfaction-retention correlation in your data before treating the score as predictive, and if the correlation is real we treat it as a high-signal trip wire, and if it is not we tell you and we look for the actual signals that do predict retention in your customer base. Either way, the metric serves the outcome, not the other way around.
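One way to run that check, sketched with hypothetical data shapes: each record pairs a past customer's score with whether they retained, and the validation asks whether promoters actually retain materially better than detractors. The 15-point margin is an illustrative default, not a standard.

```python
from collections import defaultdict

def retention_by_score(records):
    """records: iterable of (nps_score, retained) pairs.
    Returns the observed retention rate for each score."""
    counts = defaultdict(lambda: [0, 0])  # score -> [retained, total]
    for score, retained in records:
        counts[score][1] += 1
        if retained:
            counts[score][0] += 1
    return {s: kept / total for s, (kept, total) in counts.items()}

def score_predicts_retention(records, margin=0.15):
    """Sanity check: nines and tens should retain materially
    better than zero-through-six before the score is trusted."""
    promoters = [r for s, r in records if s >= 9]
    detractors = [r for s, r in records if s <= 6]
    if not promoters or not detractors:
        return False  # not enough data to validate either way
    gap = sum(promoters) / len(promoters) - sum(detractors) / len(detractors)
    return gap >= margin
```

If the check fails, the right move is the one the text describes: stop treating the number as truth and go find the signals that do predict retention in your base.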

This is what compounding actually looks like, in mechanical detail. Real customers, asked the right question at the right moment, sorted by their answer, treated differently based on the sort, and turned into the testimonials and referrals and content that fills the top of your funnel without any cold outbound at all. It is slow. It is the only thing that works long-term. And almost no one does it well, which is exactly why the companies that do it well end up so far ahead of the ones that do not.