And why you need to audit your digital service or product before going bankrupt

What is a Usability Audit?
A usability audit is a way to pinpoint the less-than-perfect areas of a digital product, revealing which parts of a site or app are causing headaches for users and stymieing conversions.
When to do a usability audit?
There are no hard and fast rules, but heuristic analysis is best performed at a reasonably advanced stage of the design process (doing it too early is rarely productive). With new products, a heuristic analysis is usually run later in the design phase, after wireframing and prototyping but before visual design and UI development begin. Do it too late and making changes becomes costly. Existing products found to have poor usability often get a heuristic analysis before a redesign begins.
There are a few possible approaches to a usability audit:
Heuristics
Nielsen's 10 heuristics
Our own checklist (which can follow the JJ Garrett structure)
Expert Audit
Heuristic evaluation approach advantages:
Uncovers many usability problems and significantly improves a product’s UX
Cheaper and faster than full-blown usability tests that require the recruitment of participants, coordination, equipment, running the test, recording, analyzing, etc.
Heuristics can help the evaluators focus on specific problems (e.g., lack of system feedback, poor discoverability, weak error prevention)
Heuristic evaluation does not carry the ethical and practical issues associated with evaluation methods that involve real users
Evaluating designs using a set of heuristics can help identify usability problems with specific user flows and determine the impact on the overall user experience
It can provide some quick and relatively inexpensive feedback to designers.
You can obtain feedback early in the design process.
Assigning the correct heuristic can help suggest the best corrective measures to designers.
You can use it together with other usability testing methodologies.
You can conduct usability testing to further examine potential issues.
Heuristic evaluation approach disadvantages:
Experienced usability experts are often hard to find and may be expensive
The value of issues uncovered by evaluators is limited by their skill level
At times, a heuristic analysis may set off false alarms: Issues that would not necessarily have a negative effect on the overall UX if left alone are sometimes flagged to be fixed
Unlike cognitive walkthroughs, heuristic evaluation is based on preconceived notions of what makes for "good" usability
If the evaluators are not part of the design or dev team, they may be unaware of any technical limitations on the design
It requires knowledge and experience to apply the heuristics effectively.
You need multiple evaluators and must aggregate their results, which adds time and coordination.
The evaluation may identify more minor issues and fewer major issues.

HOW TO CONDUCT THE USABILITY AUDIT
1. Expert review
In an expert review, the reviewers already know and understand the heuristics, so they do not work from a specific set. As a result, an expert review tends to be less formal, and reviewers are not required to assign a specific heuristic to each potential problem.
2. How to Run an Effective Heuristic Analysis
Preparation is key to running the analysis well. Following an established set of steps ensures that a heuristic analysis will run efficiently and yield maximum results. Here’s a heuristic analysis checklist:
Define the scope.
Know the business requirements and demographic of the end-users.
Decide on which reporting tools and heuristics to use.
Evaluate the experience and identify usability issues.
Analyse, aggregate, and present the results (see the sketch after this list).
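The evaluation and aggregation steps become much easier if every issue is captured in the same structured form from the start. Below is a minimal sketch in Python, purely for illustration: the Finding fields, the example issues, and the 0-4 severity scale are assumptions, not part of any standard audit template.

```python
from dataclasses import dataclass

# Illustrative 0-4 severity scale (cosmetic to catastrophic); adapt it to your own needs.
SEVERITY = {0: "not a problem", 1: "cosmetic", 2: "minor", 3: "major", 4: "catastrophic"}

@dataclass
class Finding:
    """One usability issue recorded during the evaluation step."""
    screen: str               # page or screen where the issue was observed
    heuristic: str            # e.g. "Visibility of system status"
    description: str          # what is wrong and why it hurts the user
    severity: int             # key into SEVERITY
    recommendation: str = ""  # optional: a suggested fix, if one is obvious

# Hypothetical findings, just to show the shape of the data.
findings = [
    Finding("Checkout", "Visibility of system status",
            "No progress indicator after pressing 'Pay'", 3,
            "Show a spinner and disable the button while the request is pending"),
    Finding("Sign-up form", "Error prevention",
            "Password rules are shown only after a failed submission", 2),
]

# Aggregation: the most severe issues go to the top of the report.
for f in sorted(findings, key=lambda f: f.severity, reverse=True):
    print(f"[{SEVERITY[f.severity]}] {f.screen}: {f.description}")
```

Capturing findings in one consistent shape also makes it trivial to merge lists from several evaluators later.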
Nielsen’s Heuristics
Though many groups have developed heuristics, one of the best-known sources is the set Jakob Nielsen refined in 1994 from the original 1990 list he developed with Rolf Molich:
Visibility of system status: The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
Match between system and the real world: The system should speak the users' language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
User control and freedom: Users often choose system functions by mistake and will need a clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
Consistency and standards: Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.
Error prevention: Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.
Recognition rather than recall: Minimise the user's memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.
Flexibility and efficiency of use: Accelerators—unseen by the novice user—may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
Aesthetic and minimalist design: Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
Help users recognize, diagnose, and recover from errors: Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
Help and documentation: Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large.
The Jesse James Garrett model (the five planes of user experience) can also be used as a checklist for a usability audit.

Donald Norman’s design principles for usability
from the book The Design of Everyday Things
Consistency
One of the major ways that people learn is by discovering patterns. New situations become more manageable when existing pattern knowledge can be applied to understanding how things work. Consistency is the key to helping users recognise and apply patterns.
Visibility
Users discover what functions can be performed by visually inspecting the interface and seeing what controls are available. For tasks that involve a series of steps, having clearly-marked controls in a visible location can help the user figure out what to do next.
Affordance
An affordance is a visual attribute of an object or a control that gives the user clues as to how the object or control can be used or operated.
Mapping
Pressing a button or activating a control generally triggers the system to perform some function. There is a relationship, or mapping, between a control and its effects. You should always aim to make these mappings as clear and explicit as possible. You can do this by using descriptive labels or icons on buttons and menu items, and by using controls consistently (again, similar controls should have similar behavior and effects).
Feedback
If you press a button and nothing seems to happen, you're left wondering whether the button press actually registered. Should you try again? Or is there a delay between the press and the expected action? Every action should therefore produce clear, timely feedback so the user knows it was registered and what happened as a result.
Constraints
Interfaces must be designed with restrictions so that the system can never enter into an invalid state. Constraints, or restrictions, prevent invalid data from being entered and prevent invalid actions from being performed.

247 web usability guidelines by David Travis
These web usability guidelines are grouped into nine categories; a sketch of turning per-guideline scores into a category score follows the list.
Home page usability: 20 guidelines to evaluate the usability of home pages.
Task orientation: 44 guidelines to evaluate how well a web site supports the user's tasks.
Navigation and IA: 29 guidelines to evaluate navigation and information architecture.
Forms and data entry: 23 guidelines to evaluate forms and data entry.
Trust and credibility: 13 guidelines to evaluate trust and credibility.
Writing and content quality: 23 guidelines to evaluate writing and content quality.
Page layout and visual design: 38 guidelines to evaluate page layout and visual design.
Search usability: 20 guidelines to evaluate search.
Help, feedback and error tolerance: 37 guidelines to evaluate help, feedback and error tolerance.
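If you audit against a guideline set like this, each category can be turned into a simple compliance score. The sketch below assumes each guideline is scored -1 (fails), 0 (not applicable) or +1 (passes); this scoring scale and the percentage formula are illustrative assumptions, not necessarily the exact method Travis ships with the guidelines.

```python
# Hypothetical per-guideline scores for two of the nine categories:
# +1 = passes, -1 = fails, 0 = not applicable (assumed scale, see above).
audit_scores = {
    "Home page usability":  [1, 1, -1, 0, 1, 1, -1, 1, 0, 1],
    "Forms and data entry": [1, -1, -1, 1, 0, 1, 1, -1, 1, 1],
}

def category_score(scores):
    """Percentage of applicable guidelines that the site passes."""
    rated = [s for s in scores if s != 0]   # ignore not-applicable items
    if not rated:
        return None
    passed = sum(1 for s in rated if s > 0)
    return round(100 * passed / len(rated))

for category, scores in audit_scores.items():
    print(f"{category}: {category_score(scores)}% of applicable guidelines met")
```

A per-category percentage like this is easy for stakeholders to track across redesigns, even though the real value still lies in the individual findings.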
HOW TO PRESENT THE RESULTS?
The analysis results in a list of potential usability issues.
As with other usability tests or inspection methods, the typical deliverable is a consolidated report which not only identifies usability issues but ranks them on a scale from severe to mildly problematic. For the most part, a heuristic evaluation report doesn't include solutions; fortunately, many usability problems have fairly obvious fixes, and once they are identified the design team can start working on them.
In general, the outcome is a long list of things that need to be:
improved
redesigned
revised
added
deleted
... or simply changed
Keeping JJ Garrett's model in mind, problems can sit at different layers of importance: in one case a defect can be fixed easily, but if the core model itself is wrong, that must also be detected. Nobody wants to spend money on a redesign when the product is solving an irrelevant (or non-existent) problem.
Basically, the report can be structured according to the way you ran the audit. For example, it can be organised by:
heuristic, then by page/screen
page/screen, then by heuristic
layer (JJ Garrett)
design principle (Don Norman), then by page/screen
Finally, we can collect all the other problems that were found but are not connected to any specific heuristic, or are simply less important. A minimal sketch of one way to group findings for such a report follows.
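The sketch below illustrates the grouping options above. The findings and field names are hypothetical; in practice they come from whatever form you used to record issues during the evaluation. The same flat list can be pivoted either by heuristic and then by screen, or by screen and then by heuristic.

```python
from collections import defaultdict

# Hypothetical flat list of findings produced by the evaluation.
findings = [
    {"screen": "Checkout", "heuristic": "Visibility of system status",
     "issue": "No progress indicator after pressing 'Pay'", "severity": 3},
    {"screen": "Checkout", "heuristic": "Error prevention",
     "issue": "Card number field accepts letters", "severity": 3},
    {"screen": "Sign-up form", "heuristic": "Error prevention",
     "issue": "Password rules are shown only after a failed submission", "severity": 2},
]

def pivot(findings, primary, secondary):
    """Group findings as primary -> secondary -> [findings]."""
    report = defaultdict(lambda: defaultdict(list))
    for f in findings:
        report[f[primary]][f[secondary]].append(f)
    return report

# Structure A: by heuristic, then by page/screen.
# Structure B: by page/screen, then by heuristic (swap the two arguments).
for heuristic, screens in pivot(findings, "heuristic", "screen").items():
    print(heuristic)
    for screen, issues in screens.items():
        for f in sorted(issues, key=lambda x: x["severity"], reverse=True):
            print(f"  {screen}: {f['issue']} (severity {f['severity']})")
```

Grouping by JJ Garrett's layers or by Norman's principles works the same way: record the layer or principle as another field and pivot on it.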
Be that as it may, an audit is not the solution, and it is not even the problem framing: here we are only collecting the existing misconceptions, mistakes, and gaps. Sometimes we can recommend how to improve things and even provide examples, but the general purpose of a usability audit is the expert eye: telling the customer what is wrong, where, and why.
A usability audit can also be the starting point for further work: polishing the existing solution, redesigning, re-conceptualising, or even rebuilding from scratch using Design Thinking, Design Sprints, JTBD, and other design strategy methods.

OUTCOMES:
Usability audits are a key part of any product or SaaS development process. They help ensure that the product or service is designed with the user in mind, making it easier to use and more enjoyable. This can lead to increased profits and ROI, as well as a better customer experience.
For startups, usability audits are even more important because resources are limited. Without a proper usability audit, they can quickly run out of money and go bankrupt before their product or SaaS even has a chance to succeed. By conducting regular usability audits, startups can identify potential problems early and make changes before it's too late.