There is something maddeningly paradoxical about achieving accessibility on the web. On the one hand, it’s as simple as adhering to four basic principles: a site must be perceivable, operable, understandable and robust. On the other, it means getting the entire web team, from writers to engineers, to work collaboratively toward compliance with more than 60 distinct success criteria defined in the Web Content Accessibility Guidelines (WCAG).
Automated accessibility tools are a necessary part of this shared effort, and a vast array of helpful tools exist to identify and track accessibility issues. (We’ve described our current favorites in another blog post.) As great as these tools are, however, it’s important to be mindful of how they can subtly influence your team’s process and expectations in ways that might negatively impact your site’s users.
Automated tools are great...
The obvious benefit: achieve better results, faster
Even expert accessibility consultants use automated tools to reduce the burden of testing—for many issues, like color contrast, it would be highly impractical to test any other way. For novices, these tools also provide an approachable entry point for getting up to speed on accessibility criteria. Reading the full WCAG standard from start to finish can be a daunting task, but when a tool flags a specific issue and provides a link to the relevant section of the guidelines, it’s much more digestible.
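To give a sense of why a check like color contrast is such a natural fit for automation, here is a minimal sketch of the contrast-ratio math defined by WCAG, written in TypeScript. The helper names are illustrative, and real tools handle many more edge cases (alpha transparency, inherited backgrounds), but the core arithmetic is just this:

```ts
// A minimal sketch of the WCAG 2.x contrast-ratio calculation. Helper names
// are illustrative; real tools also handle alpha, gradients, and inherited
// backgrounds, which is where the tedium (and the value of automation) lies.

// Parse "#rrggbb" into red, green and blue values in the 0–255 range.
function parseHex(hex: string): [number, number, number] {
  const n = parseInt(hex.replace('#', ''), 16);
  return [(n >> 16) & 0xff, (n >> 8) & 0xff, n & 0xff];
}

// Relative luminance per WCAG: linearize each sRGB channel, then apply
// the standard weightings for red, green and blue.
function relativeLuminance(hex: string): number {
  const [r, g, b] = parseHex(hex).map((channel) => {
    const c = channel / 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Contrast ratio is (lighter + 0.05) / (darker + 0.05), from 1:1 up to 21:1.
function contrastRatio(foreground: string, background: string): number {
  const l1 = relativeLuminance(foreground);
  const l2 = relativeLuminance(background);
  return (Math.max(l1, l2) + 0.05) / (Math.min(l1, l2) + 0.05);
}

// WCAG AA requires at least 4.5:1 for normal-size text and 3:1 for large text.
console.log(contrastRatio('#767676', '#ffffff').toFixed(2)); // ≈ 4.54, just passes AA
```

Multiply that calculation by every text and background combination on every template, and it’s easy to see why nobody should be doing this by hand.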
Positive feedback and motivation
Particularly for teams who are new to accessibility and facing the challenge of making a large site WCAG compliant, accessibility tools help provide evidence that your efforts are actually making a difference. Even the most basic tools can help you track the reduction of issues over time, and sophisticated tools like SiteImprove use gamification to great effect, with charts that measure your accessibility score over time as well as compared against your industry average.
...but they can also lead you astray
Satisfying the tool becomes the goal
Decision-making can be skewed when automated tools are used as the primary way of tracking accessibility progress. Everyone likes to score points and watch charts change from red to yellow to green, so it’s tempting to prioritize fixing the issues the tool can definitively identify over the more subjective tasks that require human judgment. But just because automated tools can’t (yet) judge how logical a tab sequence is, for example, or whether content is meaningfully sequenced, doesn’t mean issues like these should be put on the back burner.
Similarly, a tool that scans an entire site might flag an issue as important because it appears on multiple pages, so fixing it will greatly improve the progress stats on the tool’s dashboard. But if those are low-traffic, low-value pages (for example, those archived press releases from 2005), you might see more significant gains by stepping outside the tool’s warm embrace and tackling the issues on your most-visited pages first.
Automated tools have (significant) limitations
Simply put, if you rely on automated tools to identify all potential barriers, you’re going to miss a lot of issues. GOV.UK published a blog post describing their tests of the “world’s least accessible webpage” across a range of tools. The best-performing tool in their results detected just 50% of the barriers that were deliberately created for the test.
This isn’t particularly surprising; as described above, many criteria require human evaluation. But even taking that into account, tools frequently missed errors you might expect them to catch, such as a focus state that isn’t visible or content inserted via CSS.
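For a sense of what these scanners actually see, here is a rough sketch of a programmatic scan using axe-core, a widely used open-source engine (not necessarily one of the tools GOV.UK tested). The options and result fields shown come from axe-core’s documented API, but treat this as an illustration rather than a recommended setup:

```ts
// Rough sketch of a programmatic scan with axe-core, assumed to run in a
// browser (or jsdom) context where `document` exists.
import axe from 'axe-core';

async function scanPage(): Promise<void> {
  const results = await axe.run(document, {
    // Limit the run to rules tagged against WCAG 2.0/2.1 A and AA.
    runOnly: { type: 'tag', values: ['wcag2a', 'wcag2aa', 'wcag21aa'] },
  });

  // Only barriers the engine has a rule for will ever appear here; anything
  // that requires human judgment is simply absent from the report.
  for (const violation of results.violations) {
    console.log(`${violation.id} (${violation.impact}): ${violation.help}`);
    console.log(`  ${violation.helpUrl}`);
    console.log(`  affected nodes: ${violation.nodes.length}`);
  }
}

scanPage();
```

Note that axe-core also returns a separate incomplete list of checks it couldn’t resolve on its own. An empty violations array means “nothing detectable was found,” not “this page is accessible.”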
In our experience, many tools also generate false positives—and this can be very misleading for those without knowledge of front-end technologies and accessibility. The responsible thing to do is to substantiate why a flagged issue is a false positive and hope that management reads your documentation. The less responsible thing to do would be to needlessly tweak your CSS styles to make the flag go away so the tool will give you your points. (But who would ever do that...)
Tools suggest there is a final state of pure, complete accessibility
If the tools make accessibility into a game, the natural tendency is to assume you can win it, then go back to your regular business of writing, designing, or coding and never have to think about accessibility again. And if you’ve been sharing evidence of your journey toward the perfect score with higher-ups in your organization along the way, they too might think accessibility is a problem that has been solved once and for all.
We strongly recommend that our clients think of accessibility as a journey, and create sustainable processes that treat accessibility as just another facet of good-quality work. In the same way that efforts to improve site performance or visitor engagement never end, there will always be new opportunities to make your site more accessible. Making sure that your user testing process is ongoing and includes people with disabilities will always be the best way to discover real opportunities to improve the site experience for everyone.