Measuring success for must-use tools
We have the freedom to choose most of the products and tools we use every day, but not all of them. Some tools are required by our jobs, such as the company website. And some tools provide exclusive capabilities, like my bank’s website.
For those products and services that we choose over others, measuring success is more or less binary: we choose it, or we choose something else. Providers of consumer products can look at sales or, even more directly, at adoption and consumption rates as indicators of success.
However, some websites and services have no competition; their users are a captive audience. In these cases, customers have only one option. Usage is non-negotiable, so adoption and consumption are not indicators of preference or success.
So, how does the owner evaluate the efficacy of the tool?
We think the answer is to measure ease of use. That leads to questions like: Are users able to complete tasks? Are there errors or mistakes along the way? How long does it take to complete a task?
These questions yield metrics like “45% of users completed the task” and “it took most users 5 minutes 30 seconds to complete the task.” But is that good or bad? A comparative study is not possible when there is no competition.
In these cases, the tool’s performance must be measured against itself, over time. That requires first conducting a baseline study, then making adjustments (presumably improvements), and re-testing.
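The baseline-then-retest loop can be sketched as a comparison of the same two metrics across study rounds. This is only an illustration: the function names and every number below are invented, not drawn from our actual studies.

```python
# Minimal sketch (hypothetical data): measuring a must-use tool
# against its own baseline after a round of changes.
from statistics import median

# Task durations in seconds for each study participant; None means
# the user failed to complete the task. All values are invented.
baseline = [330, 410, None, 295, 380, None, 450, 310]
after    = [210, 185, 240, None, 200, 225, 190, 215]

def summarize(durations):
    """Return (task-completion rate, median time of completed tasks)."""
    completed = [d for d in durations if d is not None]
    return len(completed) / len(durations), median(completed)

base_rate, base_time = summarize(baseline)
new_rate, new_time = summarize(after)

print(f"completion rate: {base_rate:.0%} -> {new_rate:.0%}")
print(f"median time: {base_time:.0f}s -> {new_time:.0f}s")
```

The point of the sketch is that neither number means much in isolation; it is the direction of change between rounds that tells you whether the adjustments helped.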
This is exactly how we evaluate our support website, which is a must-have tool for many of our customers and partners.
For instance, we monitor users’ ability to complete a software download and how long the download process takes most people, from start to finish.
Recent updates to the support site have improved the task-completion rate and dramatically reduced the time it takes most users to complete the process. In fact, these improvements likely go hand in hand: task completion is more frequent when users can finish the task in less time. And by reducing the time a task takes, we have also reduced the opportunity for errors.
These improvements stem from a number of changes, including fewer steps, shortcuts for popular downloads, and a variety of other subtle refinements. The cumulative effect is very positive for our customers and partners, who now derive value from their products faster than before.