The axiom that the road to achievement runs through hardship is well preserved in quotes such as these:
“In the middle of difficulty lies opportunity.” – Albert Einstein
“I have not failed, I’ve just found 10,000 ways that won’t work.” – Thomas Edison
“A failure is a man who has blundered, but is not able to cash in the experience.” – Elbert Hubbard
“No pain, no gain.” – Unknown (References back to Sophocles, 5th Century B.C.)
A recent New York Times Magazine article, titled “What if the Secret to Success is Failure?”, connects this principle to the challenges educators face as they prepare and evaluate high school students for college.
The author suggests that a new set of criteria, ‘character’ attributes such as “grit” and “self-control,” will more accurately predict whether a student will be successful than traditional academic tests. Students, he asserts, need to face real challenges in order to learn how to overcome them. Thus, success is linked to failure: students who have experienced failure, and have learned from that experience, may be better equipped to succeed later in life.
We follow similar principles to make design improvements to the support web site.
For instance, we follow an iterative approach when we develop new capabilities and introduce design changes. A successful design is derived from numerous unsuccessful designs that are progressively improved through iterative adjustments, guided by repeated testing. In other words, we try something, take note of the results, make adjustments, and try again. This is repeated until the results are acceptable.
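The try-adjust-repeat loop described above can be sketched in a few lines of Python. To be clear, the function names, the numeric acceptance threshold, and the round limit are all illustrative placeholders, not part of the actual design process:

```python
def iterate_design(design, evaluate, adjust, threshold, max_rounds=10):
    """Repeat test-and-adjust cycles until results are acceptable.

    evaluate(design) returns a numeric result; adjust(design, result)
    returns an improved design. All names and the threshold are
    illustrative assumptions, not an actual design-team API.
    """
    for _ in range(max_rounds):
        result = evaluate(design)
        if result >= threshold:          # results are acceptable: stop
            return design, result
        design = adjust(design, result)  # learn from the attempt, try again
    return design, evaluate(design)      # best we reached within the limit

# Toy usage: the "design" is just a number we nudge toward the threshold.
best, score = iterate_design(
    design=1,
    evaluate=lambda d: d,
    adjust=lambda d, r: d + 1,
    threshold=5,
)
print(best, score)  # 5 5
```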
There’s a great quote in design circles that goes something like this: “Your site will be tested; you get to decide when.”
The point is, we can test a new design with a small, representative set of users before launch or let it be tested by a mass audience post launch. If we test before launch, the failure is private, the stakes are low, and the learning can be applied. If we release new capabilities on our website without testing, our customers are the guinea pigs, and the test is on the big stage with the spotlight shining. The stakes are higher.
Thankfully, that’s not the way aircraft, medical systems, and other tools are designed, and it’s not the way we design the support site.
Our typical tests are focused on task success. Users perform a number of tasks on the new design, and we evaluate performance based on 1) whether they complete each task successfully and 2) how long each task takes. Through a succession of tests and subsequent adjustments to the design, we are able to improve and refine the design.
There are other observations made during the testing that are useful to the design team. However, task completion and time-on-task are the key measures that ultimately provide a numeric score for the task. The score provides a way to compare tasks to each other and track task success over time.
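As an illustration only (the article does not specify how completion and time-on-task are combined), one plausible way to turn the two key measures into a single numeric score is to weight the completion rate by how close the observed time is to a target time. The function below, its name, and the 50/50 weighting are assumptions for the sketch, not the team's actual scoring method:

```python
def task_score(completions, attempts, median_time, target_time):
    """Combine task completion and time-on-task into one 0-100 score.

    completions / attempts gives the completion rate; the time ratio
    (capped at 1.0) discounts the score when users take longer than
    the target time. Both the formula and the equal weighting are
    illustrative assumptions, not the actual benchmark formula.
    """
    completion_rate = completions / attempts
    time_ratio = min(1.0, target_time / median_time)
    return round(100 * (0.5 * completion_rate + 0.5 * time_ratio), 1)

# Example: 8 of 10 users finished; median 90 s against a 60 s target.
print(task_score(8, 10, 90, 60))  # 0.5*0.8 + 0.5*(60/90) -> 73.3
```

A single score like this is what makes it possible to compare unrelated tasks and to track the same task across benchmark studies, as the paragraph above describes.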
We recently conducted a site-wide benchmark study for the support site, and the results were encouraging. Our benchmark studies are conducted at regular intervals and cover a broad collection of tasks that represent the site’s most essential capabilities. The most recent study indicated that overall task completion for the support site had improved about 15% since 2009.
It is gratifying to see the improvements and receive the positive reinforcement from the test scores. But the most recent test also reminds us of areas that still need improvement.
So, it’s back to the drawing board and the user testing lab. Time to put forth some new ideas and start failing again and again. We go through this because we know failure leads to success — as long as we are learning from each attempt.
“I am not discouraged, because every wrong attempt discarded is another step forward.”
– Thomas Edison