Defining User Experience Success
Establishing the metrics by which a design’s efficacy can be determined
It is fair to say that establishing life goals is part of our collective behavior. We do this all the time. Think about it: we make New Year's resolutions, set savings goals when there's something we want to buy, write down ideas we want to execute, and continuously use hard or soft deadlines to judge ourselves. It is part of the human condition. We also do this at work, all the time: quarterly earnings reports, customer acquisition cost, bounce rates, all of which are arguably measurements of success in their own right. As UX designers, though, I think we too often fall short of truly knowing whether our strategy and design execution are meeting our goals. That is, if we create design goals in the first place.
I often talk to founders, product owners, and UX designers to get a sense of how they measure the success of their designs. What I've found is that some don't measure their designs at all (read: dangerous), while others spend most or all of their effort measuring other business areas that are unrelated to design, or at best byproducts of it (read: adventurous). This might be because design is often seen as unmeasurable, untouchable, chaotic, puzzling. We don't know where inspiration comes from (actually, we do) and non-designers wouldn't understand (untrue).
While I was working at HBO, this became an area of focus for me and my team. What I gathered was that we could approach design in a Newtonian way: action and reaction. This means approaching a problem, performing research, executing a design, releasing it, waiting to find out whether it worked for end users, and then reacting to their feedback. This approach is right, for the most part, but it is also incomplete. As such, it can yield paradoxical effects that trigger all kinds of alarms upstairs. We don't want that. Nobody wants that.
To avoid commotion, we would gather feedback from all sides, including product owners, technical managers, potential users, and stakeholders, to create some sort of balance. I think this works better in environments that fully support innovating through design, such as teams that run Design Sprints. However, we found that this perceived balance could either tip toward the people with more pull (generally executives in a corporate environment) or start to feel like too many cooks in the kitchen. And who likes that? In any case, this mechanism was still too coarse; it wasn't granular enough. We needed more direct insight, like a formula that would provide us with some level of predictability. We needed a measuring stick.
Behold, the measuring stick!!!
To us, a successful user experience must succeed in four main areas: Clarity, Accessibility, Continuity, and Motivation. Let's start by defining those, using HBO as an example.
Clarity
Each step, screen, and message within our applications should transparently communicate its value to our user. Information should be distributed in a way that is approachable and commonsensical in nature.
Accessibility
Content is aware of our wide audience in its design and hierarchy. Messaging is helpful, UI patterns are indicative of function, and there is meticulous attention given to typography, contrast, and voice of product.
Continuity
HBO Go doesn't float in a space of its own. It is part of an intricate web of products and functions, including a legacy brand, broadcasting, premium events, marketing, and social media. As such, it should feel like a cohesive part of the whole in its design, interactions, and overall voice of product.
Motivation
Our product should be inviting in nature, user aware, and conducive to one main goal: watching content. Understanding this, every part of our application needed to point toward the play experience and stimulate our audience to press play.
Now that we had arrived at these four pillars of user experience success, we needed a way to measure against them. We divided our designs by product feature and conducted tests to rate each one. This was a revealing part of the process, because it became clear that we also needed a rating system. We could have used a 1–10 range, but what does 1 mean? What does 7 mean? I didn't want to add even more complexity to our workflow, even though that's what we managers often do. Similarly, we could have adopted a five-star rating, but the effect would have been the same. One of my colleagues came up with what I believe was the best possible rating system, the one I use to this day: choose one of three possible answers. Not satisfactory at all, somewhat satisfactory, or very satisfactory. This simple method allowed us to test the design itself and use the averages to identify areas of friction or possible improvements in our UI.
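To make the aggregation step concrete, here is a minimal sketch of how those three-answer ratings could be averaged per feature and pillar to surface friction points. The numeric mapping (0, 1, 2), the threshold, and the feature names are my own illustrative assumptions, not the actual scoring used at HBO.

```python
from statistics import mean

# Assumed numeric mapping for the three possible answers (illustrative only).
SCORES = {
    "not satisfactory at all": 0,
    "somewhat satisfactory": 1,
    "very satisfactory": 2,
}

# Hypothetical test responses: feature -> pillar -> answers from participants.
responses = {
    "search": {
        "Clarity": ["very satisfactory", "somewhat satisfactory"],
        "Motivation": ["not satisfactory at all", "somewhat satisfactory"],
    },
    "playback": {
        "Continuity": ["very satisfactory", "very satisfactory"],
    },
}

def friction_report(responses, threshold=1.0):
    """Average each feature/pillar rating and flag averages below threshold."""
    flagged = {}
    for feature, pillars in responses.items():
        for pillar, answers in pillars.items():
            avg = mean(SCORES[a] for a in answers)
            if avg < threshold:
                flagged[(feature, pillar)] = avg
    return flagged

print(friction_report(responses))
# "search" under Motivation averages 0.5, so it surfaces as a friction point.
```

The choice of threshold is a judgment call: setting it at the midpoint of the scale simply means "features that, on average, rate worse than somewhat satisfactory deserve a second look."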
Today I share this method with you in the hope that it helps you create your own goals. This exercise has become part of my routine, one that I use both with clients in the quest to create amazing products and internally while designing Feeel.ai. It might not work in every instance, but I've found it to be incredibly flexible across many projects, and I plan to keep it as a regular part of my documentation process. The biggest takeaway is that you should measure your designs, but to do that you must first define what success means in your context.