The WebAIM study: are 96 percent of websites really not accessible?

WebAIM published a new round of data in March 2024. With big numbers such as 50 million errors found on one million home pages, attention is guaranteed. But my points of criticism remain.

WebAIM's communication is designed for effect

96 percent of the most frequently used websites are not accessible: the news is currently making the rounds again on Twitter and in the relevant accessibility channels. A great story, especially if you only read headlines. Personally, I find the WebAIM collection not meaningful, for numerous reasons, and I think incorrect conclusions are being drawn from it. I would like to explain those reasons in this article.

To briefly explain: in professional circles we don't talk about accessibility, but rather about conformance. Conformance means that a certain standard has been met, for example WCAG 2.1 at level AA. Since the term “accessible” is not firmly defined for websites, this workaround is always necessary.

In general, from my point of view, this is neither an analysis nor a study; the term data collection fits best. WebAIM itself merely releases numbers and interprets relatively little, so the publication lacks analytical depth. A study would require at least a trace of empirical methodology, which cannot be found either. The - mostly wrong - interpretations come from other people.

Since this article has become quite long, I would like to summarize the main points of criticism at the beginning:

  • The errors found (or claimed) should generally not restrict the usability of the websites for disabled people. Any subset of websites is poorly usable or unusable by a subset of disabled people, but WebAIM's collection gives us no new insights in this regard. Usability by disabled people and compliance with accessibility rules are not always the same thing. When some experts claim, based on the WebAIM data, that 96 percent of websites cannot be used by disabled people at all, this can only be described as nonsense. It is extremely rare for an entire website to be completely unusable; far more often, individual parts such as the login or the cookie banner are unusable. But these are completely different things, and you should stick to the facts.
  • The test is a snapshot: at time X, so many errors were found. They may have been fixed a minute after the test, or other errors may have appeared. If you were to enlarge the sample, i.e. not just look at the home pages, probably all websites would have at least one error, and more likely significantly more. What has been gained from this knowledge?
  • WebAIM does not weight how serious the errors are. Whether a page has 1 error or 1,000, for WebAIM it is equally relevant. The communication is not designed for information, but for maximum effect.
  • If I read a study claiming that almost 100 percent of providers were violating the rules, I would conclude that the rules cannot be met. Is that what WebAIM is trying to say, that it is not possible to conform to the WCAG? If not, what is the gain in knowledge? Any expert will tell you today that absolute conformance for a complex site is hardly achievable with reasonable effort, and is not necessary either.
  • Automated tools have limited meaning. You can make statements about a large amount of data that may not be relevant to any individual object. For example, I can claim that people in Germany are on average 1.80314 meters tall and weigh 71.235124 kg; this may be true on average, but for no individual person. The websites probably do have errors, but the WebAIM study cannot show how many there really are or whether they are relevant for actual use.
  • The reverse conclusion is also wrong: just because WebAIM claims to have found no problems on 4 percent of the websites checked, it does not follow that these pages are accessible or easy for disabled people to use, as Knowbility claims on Twitter. As a rule of thumb, around 35 percent of problems can be found automatically. WebAIM's data shows at most that these 4 percent were tested automatically (probably with WebAIM's WAVE) and the reported errors were ironed out. They could still contain a large number of errors that cannot be detected automatically. This clearly shows how uninformative the WebAIM score is.

Methodology

The one million websites were automatically checked using WebAIM's WAVE tool. There is nothing more to say about the methodology. It is simply not possible to qualitatively analyze such a number of websites in a reasonable amount of time.

This is where the first problem begins: the tool checks WCAG criteria at both level A and level AA. But even in the USA, most operators are not committed to accessibility and usually aim for level A at most, if anything. It makes no sense to test organizations against AA when they do not strive for it, because, for example, they do not feel bound by the contrast requirements.

WebAIM is also not transparent about how it handled criterion 4.1.1 Parsing, which was removed in WCAG 2.2. No WCAG 2.2 criterion is to be found among the errors shown, although WebAIM claims to have tested against WCAG 2.2. I also miss a table showing all the errors found, broken down by individual home page. Unfortunately, WebAIM is anything but transparent here.

Automated tools are of limited help, or none at all

While WebAIM's WAVE may well be one of the better tools, the consensus is that such tools catch perhaps 30 to 40 percent of accessibility errors. In my opinion the tools are still rather poor: I have access to Siteimprove and Silktide, and both report a lot of errors that have no relevance. There are many false positives, i.e. errors are claimed that do not stand up to manual testing. Measuring contrast correctly, for example, is a big problem.
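
To illustrate the contrast point: the WCAG formula itself is trivial to compute; what tools routinely get wrong is determining which foreground and background colors are actually rendered (transparency, gradients, text over background images). Here is a minimal sketch of the WCAG 2.x contrast calculation in Python; the colors are chosen purely for illustration:

    def relative_luminance(rgb):
        """WCAG 2.x relative luminance for an 8-bit sRGB color."""
        def linearize(channel):
            c = channel / 255
            return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
        r, g, b = (linearize(c) for c in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    def contrast_ratio(fg, bg):
        """Contrast ratio between two colors, ranging from 1:1 to 21:1."""
        lighter, darker = sorted(
            (relative_luminance(fg), relative_luminance(bg)), reverse=True
        )
        return (lighter + 0.05) / (darker + 0.05)

    # #767676 on white is the classic borderline case: 4.54:1, which just
    # passes the AA threshold of 4.5:1 for normal-sized text.
    print(round(contrast_ratio((0x76, 0x76, 0x76), (0xFF, 0xFF, 0xFF)), 2))

The arithmetic is deterministic; the disputes arise one step earlier, when a scanner has to guess the effective background color behind a piece of text.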

There are things you can measure automatically, such as the presence of certain HTML elements, ARIA attributes, labels, alt text and some contrasts. But the list of things that tools cannot evaluate is longer. It includes the usefulness of alternative texts, the sensible use of ARIA, and the correct labeling of text or form elements.
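
A minimal sketch, using only Python's standard library, of what this kind of check amounts to (the HTML snippets are invented for illustration): it can detect that an alt attribute is absent, but it cannot tell a useful description from a meaningless file name.

    from html.parser import HTMLParser

    class AltChecker(HTMLParser):
        """Collects img elements that lack an alt attribute."""
        def __init__(self):
            super().__init__()
            self.missing = []

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "img" and "alt" not in attrs:
                self.missing.append(attrs.get("src", "<no src>"))

    checker = AltChecker()
    checker.feed("""
        <img src="logo.png" alt="ACME Inc. home page">
        <img src="chart.png" alt="IMG_0042.jpg">  <!-- useless alt text, but it passes -->
        <img src="teaser.png">                    <!-- the only one flagged -->
    """)
    print("Images without alt:", checker.missing)  # ['teaser.png']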

In short: whether WAVE shows errors or not is completely irrelevant. A lazy but clever developer runs the tool over the site, irons out the reported errors, and gets a clean report without having improved accessibility one bit.

On the contrary, the tool creates false incentives, namely optimization for automated testing tools. Why do time-consuming manual testing when WAVE gives the green light with one click?

As WebAIM itself notes, websites are becoming increasingly complex. However, I assume that many websites, especially from the Anglo-American region, have accessibility on their radar. This means that they take care of alternative texts and useful link descriptions. For externally integrated content, however, it is sometimes not possible to take care of these factors.

A large proportion of the errors can be attributed to such embedded content, for example social media content or advertising. If you go by WebAIM, you should probably leave such content out, because you can't make it accessible; this is likely to scare people away from accessibility. Integrated libraries, such as generators for infographics, are a different matter: here accessibility should of course be ensured. But WebAIM's WAVE doesn't check this separately. It would make sense to separate genuine website content from content supplied by external sources such as advertising networks, which would allow a more realistic assessment. I don't know whether this is always technically possible, but without that separation the results are simply not meaningful, because you don't know whether the operator of the website or the advertising network is responsible.

Let's take a closer look at the errors (the numbers refer to an older WebAIM study):

  • 86 percent with contrast errors: as noted above, contrast is not a level A criterion
  • 66 percent of images with missing alternative texts: this is probably largely about externally integrated content over which you don't have much influence; the same applies to links without text.
  • 53 percent with missing form labels: really annoying, but something you can only judge in context. If it concerns the search field and there is only one field, this error isn't that bad (see the sketch after this list).
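
To make the context point concrete, here is a small sketch with invented HTML: a scanner flags every input without a programmatic label, and it flags a lone search field next to a visible “Search” button exactly as hard as an unlabeled field buried in a ten-field checkout form, although the practical impact differs enormously.

    from html.parser import HTMLParser

    NAME_ATTRS = {"aria-label", "aria-labelledby", "title"}

    class LabelChecker(HTMLParser):
        """Flags inputs without an accessible-name attribute.
        (A real check would also match <label for="...">; omitted for brevity.)"""
        def __init__(self):
            super().__init__()
            self.flagged = []

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "input" and attrs.get("type") != "hidden":
                if not NAME_ATTRS & attrs.keys():
                    self.flagged.append(attrs.get("name", "<unnamed>"))

    checker = LabelChecker()
    checker.feed('<input type="search" name="q"> <button>Search</button>')
    print(checker.flagged)  # ['q'] - an error on paper, a minor issue in practice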

No page is without errors

The one million most visited websites are each managed by fairly large teams. It can happen again and again that individual editors make mistakes: be it an incorrectly embedded widget, wrongly nested headings, or a forgotten alternative text. Let whoever is without errors cast the first stone at WebAIM.

This means that even a single mistake by one editor can cause the website to fail WCAG conformance. That may be true, but it is not relevant in practice.

96 percent of all websites have errors? It's probably more like 100 percent. Anyone who has ever evaluated websites knows that you will find errors if you specifically look for them.

In the end, it's not about technical perfection, but about ensuring that people with disabilities can use the website. The WebAIM study actually says nothing about this.

Nobody claims that all websites are perfectly accessible. But the claim that 98 percent of websites cannot be used by disabled people is simply nonsense. WebAIM does not say this explicitly, but suggests it through the entire framing of its communication. Sheri Byrne-Haber, for example, writes “98% of websites are completely inaccessible” on page 33 of her eBook “Giving a damn about accessibility”.

To be clear: it's good to have this amount of data. It would be even better to make the raw data available for research. The nonsense lies in the conclusions that WebAIM suggests and others draw from it.

The problem is that a website is already considered non-compliant if a single error is found. The errors are not weighted, so there is no difference between a missing image description somewhere and the contrast of the entire page being off: both count as one error, even though one usually plays no role and the other has a huge impact. In the same way, a tiny error in the code counts the same as a cookie banner that cannot be dismissed with the keyboard. The former plays no role in practice; the latter prevents a number of people from using the site at all. That cannot be a meaningful benchmark.
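
To make this concrete, here is what a severity weighting could look like in principle; the error types and weights below are invented purely for illustration and exist in no tool I know of:

    # Invented severity weights - purely illustrative, not part of any tool.
    SEVERITY = {
        "keyboard_trap_in_cookie_banner": 50.0,  # locks keyboard users out entirely
        "missing_form_label": 3.0,
        "missing_alt_on_decorative_image": 0.5,  # usually no practical impact
    }

    def weighted_score(errors):
        """errors: mapping of error type to the number of occurrences."""
        return sum(SEVERITY.get(kind, 1.0) * count for kind, count in errors.items())

    page_a = {"missing_alt_on_decorative_image": 40}  # cosmetic noise
    page_b = {"keyboard_trap_in_cookie_banner": 1}    # a genuine blocker

    # Flat counts say page_a is 40 times worse; a weighted view reverses that.
    print(sum(page_a.values()), weighted_score(page_a))  # 40 vs. 20.0
    print(sum(page_b.values()), weighted_score(page_b))  # 1 vs. 50.0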

Motivating or demotivating?

A customer tried to persuade me to mention the study in one of my training courses. I refused, for the reasons stated above, but also because I think the signal it sends is disastrous. Admittedly, the collection could show people that others do no better than they do, and thereby motivate them to do more.

In my opinion, however, it has a demotivating effect. Doesn't it say that WCAG 2.1 AA is essentially unimplementable? And that for websites, some of which probably have six-figure budgets? If giants like Amazon or the New York Times can't make their websites accessible, how is the local self-help association supposed to succeed? In my opinion, such studies promote fatalism because they suggest that there is little progress.

The only benefit I see is actually that a large amount of data is generated here. This allows you to make comparisons and identify developments over time.

Comparing the websites with each other makes no sense in my opinion: websites differ in complexity, and it wouldn't be meaningful to compare a simple media site with an online shop.

Under Site Categories you can track different industries, their average error rates, and how they develop.

In fact, the other statistics are much more interesting: What is the relationship between the system/framework used and the error rate? Do websites with ads have more errors than those without ads?

The raw data from the analysis would be interesting for researchers, but WebAIM does not seem to want to make it available.

What is this collection about?

Basically, I appreciate my colleagues at WebAIM, which makes me all the more surprised that they publish such a collection. What I write here is, so to speak, the basics of accessibility and is of course also known to those responsible.

I basically have only two explanations: either they believe in the quality of their tool so much that they simply ignore the points mentioned above. Or - and this is what I suspect - the study is purely a PR stunt. It makes for a catchy quick message: “96 percent of all websites exclude people with disabilities.” That can be wonderfully packed into a headline. The fact that this study was published by WebAIM says little about the attitude of the WebAIM specialists: it would not be the first time that marketing does something different from what the specialist department would. Digital accessibility is not charity, but a business like any other.

This has little to do with reality. Most text-heavy sites, at least, can be used well even if they have minor shortcomings in terms of accessibility. Any given website might be unusable for some subset of disabled people, but that has relatively little to do with the WCAG score.

And I'm not sure whether all this has done accessibility any favors. I'm surprised that WebAIM thinks it needs this kind of PR. Well-known accessibility specialists also distribute the study uncritically, which does not mean that it is therefore useful. I can only assume that this is about self-marketing, or that they are not in a position to evaluate the quality of such studies. Or - and this is my guess - they didn't read the study at all. The problem with such analyses is often that only headlines or summaries get read. The accessibility professionals share the results because the results appear to prove their own raison d'être.
