One thing I like about digital products is that they are never finished. Not only can they continue to change over time; change is often critical to keeping the product relevant as technology and expectations evolve. For example, a web-based ride-sharing application that required people to plan their ride one day in advance could be extremely popular before the arrival of smartphones, and quickly become irrelevant once it’s possible to open a mobile app and send a request for a ride that a nearby driver can instantly accept.
Yet, something I hear often from startup founders in the digital product space is how uncertain and “gut feeling based” their process to decide what to build next is. And even founders of highly successful startups who are still in charge of product decisions, when interviewed by podcasters interested in learning their secrets to growth, typically describe their decision-making process as something imprecise and opinion-based rather than systematic and evidence-based.
The problem here is that, when you don’t have a solid framework to answer the question, What is the next most important thing to spend engineering time on?, sooner or later you’ll end up falling into one of these traps:
- Ignoring customer input and building an idea the CEO fell in love with that turns out nobody is interested in using or paying for.
- Meeting customer requests to the letter and ending up with a “me-too” product, or a bloated product that frustrates users, or a new feature that customers said they would love but now renders their well-established workflows useless.
- Listening to recommendations of “power users” or “early adopters” and creating a product of limited appeal to your larger audience.
In a recent article for LinkedIn, I gave an example of a time when I almost fell into the trap of listening too closely to customer input (I was saved by Eric Sylvia, a colleague from the customer success team who knew better than to accept at face value the feedback received from multiple customers):
I was working as a product manager for a software product that was receiving a significant amount of complaints about response time. Users would often write to customer support to express their dissatisfaction with the time that a page took to finish loading the first time after they opened the application. Some users would also point out that one of our main competitors (which offered the same capability in a free version of their product, making it relatively easy for them to compare) had a much faster load time.
As a product manager with a technical background, my first reaction was to go talk to the engineers to better understand the constraints to make the page load faster. Not Eric! Despite also having a technical background, he didn’t take anything for granted. Eric went through the trouble of setting up the exact same scenario in both our product and the free version of the competitor’s, and timed how long it took for each page to load.
Turns out that our product loaded faster. Upon further investigation, it became clear that our software created the perception of being slower to load because of an animated loading icon that remained on display while the system retrieved the content. The competitor’s product simply showed the static elements of the page, with a blank space where the content was being loaded. This is a fantastic example of sweating the right small stuff. Before jumping to the idea of making the page faster, Eric decided to check whether we had identified the right problem to solve–which in fact we hadn’t. In reality, instead of requiring a high-cost, high-effort solution to try and reduce latency on a page that had already been optimized for performance, the solution was trivial: just remove the loading icon to eliminate the perception of slowness.
Anthony Ulwick, in his article for HBR called Turn Customer Input into Innovation, offers another illustrative example of the threats facing companies who don’t know how to interpret customer feedback. His example is of a physical product, but the consequences are equally seen in digital products:
There are several concrete dangers of listening to customers too closely. One of these is the tendency to make incremental, rather than bold, improvements that leave the field open for competitors. Kawasaki learned this lesson when it introduced its Jet Ski. At the time, the company dominated the market for recreational watercraft. When it asked users what could be done to improve the Jet Ski’s ride, customers requested extra padding on the vehicle’s sides to make the standing position more comfortable. It never occurred to them to request a seated watercraft. The company focused on giving customers what they asked for, while other manufacturers began to develop seated models that since have bumped Kawasaki—famed for its motorcycles, which are never ridden standing—from its leading market position.
This type of disappointment can be easily avoided if you use an approach like the one I described in this other article: When customers ask for a feature or a product enhancement, instead of taking their request at face value, ask them to explain the context in which they realized they needed the feature, and what it is that they will be able to do once they get their request that they can’t do now.
I don’t know what answers Kawasaki would have gotten from asking these questions of the customers who requested extra padding on the sides, but from their initial request I can imagine that the “job” customers were hiring the Jet Ski to perform wasn’t a challenging and rewarding workout (which would be well served by a standing watercraft), but probably something like touring and taking people for a ride. After understanding the problem space (“I want a more comfortable ride”), it would be easier to form a robust problem definition and then evaluate the candidate solutions (extra padding, a seated model, etc.) based on their feasibility, cost, and ability to deliver value.
There are different frameworks that you can use to avoid wasting time, money, and limited engineering resources on product ideas that turn out not to be valued by customers. The ones that I’ve seen work best focus on getting answers to the following questions:
- Who am I trying to serve?
- What set of underserved needs from my target customers do I aspire to meet with my product?
- What criteria do my target customers use to judge how well their needs and expectations are being met?
- What potential solutions–obvious and non-obvious(*)–exist to meet my customer needs and preferences?
- How will my product be better than the others in the market? What unique value will it deliver?
(*) Non-obvious solutions like “removing the loading icon” that a customer would be unlikely to come up with on their own but you can invent after asking customers probing questions to illuminate the problem space.
When your product prioritization process is based on these kinds of questions, it’s much easier to avoid the common traps listed at the top. Instead of talking about features, you’ll be articulating customer needs and desired outcomes. Instead of wondering if a product idea will “fly”, you’ll be describing the value to be delivered to the customer, and objectively measuring the candidate solutions against the benefits they are capable of providing.
It also helps to reframe the question from What should we build next? to What is the next most important thing to spend engineering time on? Sometimes the best opportunity lies not in building a shiny new feature, but in improving response time, removing unnecessary features that are cluttering the user experience, or redesigning a user flow to make tasks easier to complete. It bears repeating what I wrote in the article Stop Prioritizing Features:
The fixation on productivity and feature throughput is as likely to lead to “bloatware”, customer aggravation, and quickly losing relevance in the market, as to produce the expected growth.
Photo credit: Vimal Kumar (Creative Commons)