We are in the SaaS business, so the name “software-as-a-service” might seem to imply that the software itself is all that matters for retention and driving customer lifetime value.
Not exactly.
As Dan Steinman argued in his post, product adoption alone does not paint the complete picture of a customer’s health. A number of factors external to the product can play a significant role in a customer’s decision to renew, churn, or deepen the relationship. This is especially true in the early stages of a deployment, when the product may not have taken root beyond simple use cases that could easily be replaced by competitors or alternative solutions. That is why other factors – such as the level of senior sponsorship at deployment, in the case of an enterprise product – are extremely important to understand and manage.
Once the customer relationship and the product take root, customers make an ongoing decision to use and repurchase. In SaaS, this is a conscious action to renew a contract on a monthly or annual basis. Customers have limited upfront investment, and switching costs are low – unless you can create classic “barriers to exit” through sticky product adoption and high perceived utility. So how do you maximize your odds of a product adoption home run?
In my experience, serving clients at McKinsey & Company and now focused on customer retention at Box, I see a pattern in the mutually beneficial relationship between product and customer retention. I believe there are a few No Regrets moves every company – at any stage – can make to tighten this relationship.
No Regrets #1: Measure it. This seems really obvious – if you want to drive product adoption, you first have to measure it. But what isn’t as obvious is which metrics to use – should they be aggregate-, account-, or user-level? Do segment differences matter? What time window is appropriate when the product is changing every day? There is no one right answer, and every business is different. Aligning on a few core metrics (probably 5-8 is plenty) and getting real visibility, cross-functional buy-in, consistency over time, and accountability is far more important than fine-tuning methodology.
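To make “measure it” concrete, here is a minimal sketch of an account-level seat-adoption metric computed from a product event log. The data, field names, and seat counts are entirely hypothetical – the point is only that a useful core metric can be a few lines of logic over data you already have.

```python
from datetime import date

# Hypothetical product event log: (account_id, user_id, event_date)
events = [
    ("acme", "u1", date(2024, 1, 3)),
    ("acme", "u2", date(2024, 1, 5)),
    ("acme", "u1", date(2024, 1, 20)),
    ("globex", "u3", date(2024, 1, 4)),
]

# Seats sold per account (illustrative)
seats = {"acme": 10, "globex": 2}

def active_users(events, start, end):
    """Distinct users per account with any activity in [start, end]."""
    active = {}
    for account, user, day in events:
        if start <= day <= end:
            active.setdefault(account, set()).add(user)
    return active

def seat_adoption(events, seats, start, end):
    """Fraction of purchased seats active in the window, per account."""
    active = active_users(events, start, end)
    return {a: len(active.get(a, set())) / n for a, n in seats.items()}

adoption = seat_adoption(events, seats, date(2024, 1, 1), date(2024, 1, 31))
# acme has 2 of 10 seats active this month; globex has 1 of 2
```

The same function, run with a different window or grouped by segment, answers the “account vs. user level” and “what timing” questions above – which is why consistency matters more than methodology.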
No Regrets #2: Take root as fast as possible. At Box, we have a professional services team (Box Consulting) to help customers get the most out of our product as early as possible. This includes helping customers with technical implementation and helping them drive user adoption across their organizations. We jointly set goals around seat deployment and activity levels and work with customers on an ongoing basis to ensure success against that plan. In product, #2 may mean focusing disproportionately on the first-login or first-30-day experience (with a metric to match). There is rarely a second chance to make a first impression.
No Regrets #3: Identify and prioritize the stickiest features and use cases. This move is equal parts art and science. The art is knowing your users – spending time with them to understand how they use your product, how it integrates with their workflow, and the limitations that send them elsewhere. The science is a volume-based analysis of your product and customer data that tests hypotheses against outcomes. At Box, we do both, and as a result focus on core product features that drive engagement (e.g., we know uploading content is a stickiness driver). We also understand our customers have a universe of use cases, and we partner proactively with third-party developers to build on our platform. Stickiness for all!
No Regrets #4: Experiment to learn. Nobody has all the answers, and at Box we have a strong culture of test-and-learn (in fact, one of our core values is “fail fast”). The key to a good experiment is agreeing on the definition of success and the corresponding metrics upfront. These should cascade from #1, and the tighter the better (e.g., no one experiment is going to “improve adoption,” but it may “increase first-login session time by 20%”). In a start-up world with constrained resources, the other important factor is to time-bound the experiment, so that if it works you operationalize it, and if it doesn’t, you move on to the next hypothesis.
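The “agree on success upfront” discipline above can be sketched in a few lines. Everything here is illustrative – the session times are made-up numbers, and a real evaluation would also check sample size and statistical significance – but it shows the shape: the success threshold is fixed before looking at results, so the outcome is a yes/no answer rather than a debate.

```python
# Hypothetical first-login session times (minutes) for control vs. variant.
control = [4.0, 5.5, 3.2, 6.1, 4.8]
variant = [5.9, 6.4, 4.1, 7.0, 6.2]

def mean(xs):
    return sum(xs) / len(xs)

# Success criterion agreed upfront, cascading from the #1 metrics:
# at least a 20% lift in mean first-login session time.
lift = mean(variant) / mean(control) - 1.0
success = lift >= 0.20
```

If `success` is true at the agreed deadline, operationalize; if not, archive the result and move to the next hypothesis.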
Every move #1-4 requires customer and product data. Interestingly, every one of the dozens of companies I have worked with has complained that the quality and limitations of its data prevent it from accomplishing these No Regrets moves. Yet in every single one of those companies there was plenty to work with – good enough for directional decision-making. My advice is to start hypothesis-back, and let ‘good enough’ data guide you from there.