

17.04.2026 · 8 mins read
Highlights
- Why a well-executed UX audit is one of the highest-value investments a brand can make
- The six core methodologies that underpin an effective audit
- The top tools we use and recommend across analytics, usability testing, accessibility, and design
- How to structure an audit that produces a prioritised, actionable backlog — not just a long list of issues
- Why automated accessibility tools catch only a fraction of WCAG issues, and what to do about it
- How AI is accelerating the analysis and synthesis phases of modern UX audits
A UX audit is one of the highest-value investments a brand can make. A bold claim? Yes, but one that, after many years working in design, I feel confident making. And given that I recently read that 91% of people feel digital experiences are not great, it's one that feels increasingly relevant.
When done well, an audit can tell you exactly why your website or app is losing users and, with evidence, where to focus your improvement efforts first.
And who could argue with that?
However, I also know that when done badly or with the wrong tools, it produces a long list of issues with no clear prioritisation, no root cause analysis, and no actionable path forward.
Which is why I decided to put together this guide. It covers the tools and methodologies we rely on at All human to run UX audits that are actually useful: ones that combine quantitative data with qualitative insight and translate findings into prioritised, commercially meaningful recommendations.
First of all, let’s set a clear definition of what I mean by a UX audit.
A UX audit is a systematic evaluation of a digital product's user experience.
It will:
- assess how well the product serves its users and meets its business objectives,
- identify the barriers preventing users from completing key tasks,
- produce a prioritised roadmap for improvement.
The best UX audits don't just produce a long list of problem areas: they explain why those problems exist, quantify their business impact, and recommend specific, actionable fixes.
How does it do this?
A thorough audit typically draws on three types of input:
- behavioural data - what users actually do,
- qualitative insight - why they do it,
- expert evaluation - what best practice suggests they should be able to do.
The audit then collates all of the information gathered and generates both insights and a backlog of tasks.

#1 Heuristic evaluation
Heuristic evaluation is an expert review of the product against a set of established usability principles, most commonly Nielsen's 10 usability heuristics. It's fast, relatively low-cost, and surfaces obvious usability problems without requiring user recruitment. Its limitation is that it reflects expert judgement, not observed user behaviour. Used alone, it can miss context-specific problems that only emerge in real use. Used alongside behavioural data, it's an efficient and powerful diagnostic tool.
For more on this, take a look at How design impacts a brand’s digital performance.
#2 Cognitive walkthrough
A cognitive walkthrough is an elaborate way of saying, "let’s walk in the user’s shoes." It's particularly effective at identifying problems in onboarding flows, checkout journeys, and any process that requires a user to learn as they go. It answers the question: does this interface make it obvious what to do next at every step?
For more on this, our blog Having trouble with declining customer engagement offers advice on how to experience your customer journey as a user.
#3 Funnel and analytics analysis
"The answer lies in the data" is my motto. But before you can explore the data, you must collect it. We typically start by looking at user behaviour data, such as drop-off rates at each stage of the conversion funnel, time on task, error rates, bounce rates, and entry/exit patterns, to identify where users are dropping off or not completing the journey. At this point, we don't yet know why, but it does give us a place to start.
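As a minimal illustration of this first pass, here is how step-by-step conversion and drop-off rates can be computed from funnel stage counts. The stage names and numbers below are made up for the example, not real analytics data.

```python
# Hypothetical funnel stage counts, e.g. exported from an analytics tool.
funnel = [
    ("Landing page", 10000),
    ("Product page", 6200),
    ("Add to basket", 1800),
    ("Checkout", 900),
    ("Purchase", 540),
]

def drop_off_rates(stages):
    """Return (transition, step conversion %, drop-off %) for each funnel step."""
    rows = []
    for (prev_name, prev_n), (name, n) in zip(stages, stages[1:]):
        conversion = n / prev_n * 100
        rows.append((f"{prev_name} -> {name}",
                     round(conversion, 1),
                     round(100 - conversion, 1)))
    return rows

for step, conv, drop in drop_off_rates(funnel):
    print(f"{step}: {conv}% continue, {drop}% drop off")
```

In this made-up example, the Product page to Add to basket step loses roughly 71% of users, so that journey would be the first candidate for session recordings and usability testing.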
#4 Usability testing
We do a lot of testing because sometimes it is the only way to truly understand why things are going wrong. We interview groups of people and conduct larger surveys, and we have found that even a small number of well-structured sessions (five to eight participants for a focused scope) can reveal the majority of significant usability issues.
For more on the difference between market research and user research, please check out our blog, What is customer research, and how to use it to get to know your customer better.
#5 Accessibility evaluation
Apart from it being a legal requirement, digital accessibility just makes sense. After all, why would you want to have a site/app that not everyone could use?
A full UX audit should always include accessibility evaluation against recognised international standards such as WCAG 2.1 AA.
We have long been advocates for digital accessibility, so if you are still unsure about this subject, please check out our resources and solutions here.
#6 Voice of the customer research
On-site surveys, exit surveys, NPS data, customer support ticket analysis, and reviews are all excellent sources of feedback directly from the people using your site/app. Sometimes it takes a negative review to highlight an issue that no amount of behavioural observation would reveal.

The tools available for UX audit work have expanded significantly in recent years, with AI-assisted analysis becoming increasingly capable.
Here are the tools we use and recommend, organised by category.
ANALYTICS & BEHAVIOURAL DATA
1. Google Analytics 4 (GA4)
GA4 remains the baseline for quantitative UX audit work. Funnel exploration, user journey reports, engagement metrics, and event-based tracking provide the foundation for understanding where users drop off and which paths lead to conversion. The transition from Universal Analytics has made some established workflows more complex, but GA4's event-based model is ultimately more flexible for custom funnel analysis.
Great for: Establishing the baseline picture of user behaviour across your full digital estate.
2. Hotjar
Hotjar combines heatmaps, session recordings, and on-site surveys in a single platform, making it one of the most widely used tools in UX audit work. Heatmaps visualise where users click, move, and scroll — quickly surfacing engagement patterns and dead zones. Session recordings allow you to watch real user journeys and identify friction points that analytics alone won't reveal. The survey and feedback tools add a lightweight voice-of-customer layer without needing a separate research platform.
Great for: Quick behavioural insight, particularly on landing pages, key conversion flows, and checkout journeys.
3. Microsoft Clarity
Microsoft Clarity offers heatmaps and session recordings at no cost, with a generous data allowance that makes it viable even for high-traffic sites. Its rage click and dead click detection automatically flags interaction frustration signals, which is useful for rapidly identifying obvious friction points. Clarity also integrates natively with GA4, allowing behavioural data to be segmented by Google Analytics dimensions.
Great for: Teams that need session recording and heatmap capability without the budget for a paid platform.
4. FullStory
FullStory offers powerful search and segmentation capabilities that enable the discovery of specific patterns across large volumes of session data. Its DX Data layer translates session behaviour into quantified signals of frustration and struggle, which is particularly useful for larger organisations that need to communicate UX issues in commercial terms.
Great for: Enterprise-scale behavioural analysis where granular segmentation and quantified friction metrics are needed.
USABILITY TESTING
5. Maze
Maze is an unmoderated usability testing platform that allows you to create task-based tests and deploy them to your own participants or the Maze panel. It's fast, quantitative, and well-suited to testing specific journeys or prototypes at scale. Its task success rate, misclick rate, and time-on-task metrics translate qualitative test scenarios into quantifiable UX performance data. Particularly strong for testing Figma prototypes at the design stage, before build.
Great for: Rapid, unmoderated testing of specific flows or prototypes at speed.
6. UserTesting (now Centercode)
UserTesting provides access to a large, screened participant panel and supports both moderated and unmoderated test formats. It's one of the most established platforms in the usability testing space, with robust screener capabilities and well-developed analysis tools, including sentiment analysis and highlight reel creation. Higher cost than many alternatives, but the panel quality and test flexibility justify it for complex research requirements.
Great for: Moderated or unmoderated testing with screened participants, particularly for complex products or regulated sectors.
7. Lookback
Lookback is our preferred platform for moderated remote usability sessions. It supports live moderation, team observation, and note-taking in real time, with session recording and highlight clipping built in. Its participant-facing experience is clean and low-friction, which reduces drop-out rates and makes it suitable for less technically confident user groups.
Great for: Moderated remote user testing, particularly with less technically confident participants or older user groups.
UX RESEARCH & SYNTHESIS
8. Dovetail
Dovetail is a research repository and analysis platform that makes it possible to tag, synthesise, and share findings across qualitative research projects. For UX audit work, it's most useful when you're running multiple research sessions and need to identify patterns across participants efficiently. Its AI-assisted tagging and theme generation has improved significantly and is now genuinely useful for accelerating analysis without compromising rigour.
Great for: Synthesising findings across multiple usability sessions or research streams into a coherent insight set.
9. Optimal Workshop
Optimal Workshop provides a suite of information architecture research tools — card sorting, tree testing, and first-click testing — that are specifically useful for auditing navigation, taxonomy, and site structure. If your audit identifies findability or navigation as a problem area, Optimal Workshop's tools are the most focused way to diagnose and validate solutions.
Great for: Auditing and restructuring information architecture, navigation, and content taxonomy.
ACCESSIBILITY EVALUATION
10. axe DevTools
axe DevTools is the browser extension and API most widely used by accessibility specialists for automated WCAG testing. The free browser extension is sufficient for page-by-page manual review; the Pro and API versions support CI/CD integration and automated scanning across large sites.
Great for: Automated accessibility scanning — the baseline for any accessibility-inclusive UX audit.
11. NVDA / JAWS / VoiceOver
Automated scanning tools typically identify only a fraction of WCAG issues; coverage varies by tool, which is why we recommend combining automated and manual testing. Screen reader testing with NVDA (Windows), JAWS (Windows), and VoiceOver (macOS/iOS) is essential for identifying barriers that automation may miss — particularly in interactive components, dynamic content, form error handling, and complex data tables. Any UX audit that includes accessibility evaluation should include live screen reader testing.
Great for: Identifying accessibility barriers that automated tools cannot detect — essential for WCAG 2.1 AA compliance.
DESIGN & PROTOTYPE
12. Figma
Figma has become the standard for UX design and prototyping work, and its prototyping capabilities are well-suited to audit-related redesign work. In an audit context, Figma is most useful for building annotated wireframes that communicate redesign recommendations, and for creating interactive prototypes that can be tested with users to validate proposed solutions before they're built.
Great for: Communicating redesign recommendations and testing proposed solutions with users before build.
A well-structured audit follows a consistent sequence: establish the baseline with data, diagnose with qualitative research, evaluate against standards, synthesise findings, and prioritise by impact.
Step 1: Define the scope and success criteria
Before starting any audit, agree on what you're auditing, what business goals you're optimising for, and what a successful outcome looks like. An audit without a defined scope tends to produce a comprehensive list of everything that could be better, which is not the same as a prioritised list of what most needs to change.
Step 2: Establish the quantitative baseline
Pull the analytics data first. Funnel drop-off rates, page-level bounce rates, time on task, error rates, and conversion data by segment tell you where the biggest problems are and where to direct qualitative research effort. Don't spend equal time on every page; spend the most time on the pages and journeys where user failure has the highest commercial impact.
Step 3: Add behavioural context
Layer in session recordings and heatmaps for the high-priority pages identified in step 2. You're looking for patterns — rage clicks, repeated hesitation, users scrolling past key CTAs, or consistent drop-off at specific form fields. Behavioural data narrows the hypothesis space significantly before you move to usability testing.
Step 4: Expert evaluation and accessibility review
Run the heuristic evaluation and cognitive walkthroughs alongside the automated accessibility scan and manual WCAG review. Document issues with their severity, the heuristic or criterion they violate, and the specific element and page affected. Standardising this format makes prioritisation and developer handover significantly faster.
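As a lightweight sketch of what such a standardised record might look like, here is one possible shape in code. The field names are our own illustration, not a formal schema, and the example issue is invented.

```python
from dataclasses import dataclass, asdict

@dataclass
class AuditIssue:
    """One audit finding, captured consistently for prioritisation and handover."""
    page: str                # URL or screen where the issue occurs
    element: str             # the specific component affected
    description: str         # what goes wrong for the user
    severity: str            # "Critical" | "Serious" | "Moderate" | "Minor"
    criterion: str           # the heuristic or WCAG success criterion violated
    recommendation: str = "" # the specific fix proposed

# Hypothetical example finding from an accessibility review.
issue = AuditIssue(
    page="/checkout",
    element="promo code input",
    description="The validation error is shown visually but never announced to screen readers.",
    severity="Serious",
    criterion="WCAG 2.1 SC 4.1.3 (Status Messages)",
    recommendation="Expose the error message via an aria-live region.",
)
print(asdict(issue))
```

Because every finding carries the same fields, the register can be sorted, filtered by page or severity, and exported straight into a development backlog.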
Step 5: Usability testing
Run targeted usability sessions to validate or challenge the hypotheses formed from steps 2–4. Focus sessions on the highest-impact journeys. Five to eight participants is typically sufficient to surface the majority of significant usability issues; you don't need large sample sizes for qualitative research to be actionable.
Step 6: Synthesise and prioritise
Synthesise findings across all methods into a single, prioritised issue register. Classify by severity (Critical, Serious, Moderate, Minor) and rank by business impact — which issues, if fixed, would most improve conversion, task completion, or user satisfaction? The output should be a backlog your development team can act on immediately, not a report that requires another workshop to translate.
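One simple way to turn severity and estimated business impact into a ranked backlog is a weighted score. The weights and example issues below are purely illustrative assumptions, not a fixed formula; tune them to your own organisation.

```python
# Illustrative severity weights; adjust to suit your organisation.
SEVERITY_WEIGHT = {"Critical": 4, "Serious": 3, "Moderate": 2, "Minor": 1}

# Hypothetical findings, each with an estimated business impact from 1-5.
issues = [
    {"title": "Checkout error handling unclear", "severity": "Critical", "impact": 5},
    {"title": "Low-contrast footer links", "severity": "Minor", "impact": 2},
    {"title": "Search dead-ends with no guidance", "severity": "Serious", "impact": 4},
]

def prioritise(items):
    """Rank issues by severity weight multiplied by estimated business impact."""
    return sorted(items,
                  key=lambda i: SEVERITY_WEIGHT[i["severity"]] * i["impact"],
                  reverse=True)

for rank, issue in enumerate(prioritise(issues), start=1):
    print(rank, issue["title"])
```

The point of the exercise is not the exact numbers but forcing a conversation about relative impact, so the development team starts with the fixes that matter most commercially.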
As with everything else AI touches, the impact is speed and scalability.
We use AI tooling to accelerate the analytical and synthesis phases of audits: processing larger volumes of session data faster, surfacing patterns across qualitative research at scale, and generating first-pass issue classifications that human reviewers then validate and refine.
If you're planning a UX audit, or if your website or application isn't converting as it should and you want to understand why, we'd be glad to talk through the right approach for your organisation.


