Digital Accessibility - Why Conformance Is Overrated

Almost every month a major accessibility analysis of websites is published. WebAIM's is certainly the best known, but not the only one. I have already summarized my criticism of the WebAIM study elsewhere. In short, it is about the limited informative value of these automatic testing tools. They are a relief for people who evaluate a single website or who are responsible for quality management across numerous pages. As an analysis tool for large amounts of data, however, they are inadequate because of their structural weaknesses and lack of analytical depth. It is like the Body Mass Index: helpful for an average value, irrelevant for the individual case.

A simple example: I make my large media site accessible, but I include content from third parties that I cannot influence. Advertising is often embedded via third-party services, and these ad networks will certainly never let you control minimum contrast or blink frequency, let alone alt text. The result: the core of the website is actually usable, but it fails conformance because of content the provider cannot control.
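A minimal sketch of why that content is out of reach, assuming the ads arrive in cross-origin iframes (the usual case): an in-page checking script cannot even read into those frames, because the browser's same-origin policy hides their documents. The loop and logging below are illustrative, not taken from any real testing tool.

```typescript
// Sketch: enumerate embedded frames and show which ones an in-page
// checker can actually inspect. Cross-origin (third-party) frames
// expose contentDocument as null, or throw, so their contrast,
// animation, and alt text are invisible to the host page.
for (const frame of Array.from(document.querySelectorAll("iframe"))) {
  let doc: Document | null = null;
  try {
    doc = frame.contentDocument; // null for cross-origin frames
  } catch {
    doc = null; // some older browsers throw a SecurityError instead
  }
  if (doc === null) {
    console.log(`Third-party frame, cannot be audited in-page: ${frame.src}`);
  }
}
```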


Conformance is not everything

It is correct that these websites are not conformant within the meaning of WCAG, because conformance means that a certain level of the guidelines is met. However, the terms are used improperly here. I'll take this study as an example, but in principle you can use any study that is based on purely quantitative methods.

The equations:

  • Conformance to a level of WCAG = accessible. This is debatable: there is consensus that accessibility is more than conformance with WCAG, but what that "more" actually is can be discussed.
  • From this follows: non-conformance = not accessible. We can accept that for the sake of argument.
  • From that, in turn, it is concluded: non-conformant = not usable for disabled people. This is unfortunately wrong, but it is often suggested.

Difference between Conformance and Usability for Disabled People

You may not be able to use certain parts of a website if you have a specific disability. A simple example of this is minimum contrast. I and other visual testers know the phenomenon that a dark color on a light background is perceived as having worse contrast than the inverted combination, i.e. the background tone used as the font color and the font color as the background. The measured contrast ratio is exactly the same, yet one combination can still be read while the other is barely perceived. And if I use the accessibility features built into Windows, I can improve things further on my end. There are good reasons to say that a web provider should not have to account for such individual factors. What I am getting at is that poor contrast is a problem, but not always an equally serious one.

I am also not interested in whether Instagram or Facebook provides image descriptions everywhere; I am only interested in the pages and people I follow. With such platforms - if you even want to call them websites - everything depends on the tested sample. Some operators pay extreme attention to image descriptions and other factors, others do not. If I take the latter as the sample for my automatic analysis, the question arises whether the data is representative of the entire offering.

Added to this is the challenge of user-generated content on such large portals. Automatic image descriptions are not accepted, yet most people are unwilling to add image descriptions or unaware that they should. Is it right to blame the operators for that? And how many people would write nonsense if captions were mandatory? Automatic checking tools would approve of it anyway; the main thing is that a description exists.
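The contrast point can be made concrete. WCAG defines the contrast ratio purely from the relative luminance of the two colors, so it is symmetric by construction: swapping foreground and background cannot change the number, even though perception may differ. A minimal sketch using the published WCAG 2.x formula (the example colors are arbitrary):

```typescript
// WCAG 2.x contrast ratio, computed from relative luminance.

// Convert an 8-bit sRGB channel to linear light.
function linearize(channel: number): number {
  const c = channel / 255;
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// Relative luminance of an [r, g, b] color.
function luminance([r, g, b]: [number, number, number]): number {
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

// Contrast ratio between two colors; order does not matter.
function contrastRatio(a: [number, number, number], b: [number, number, number]): number {
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

const dark: [number, number, number] = [51, 51, 51];     // #333333
const light: [number, number, number] = [238, 238, 238]; // #eeeeee

// Both print the same ratio (about 10.9:1): the formula cannot
// distinguish dark-on-light from light-on-dark.
console.log(contrastRatio(dark, light));
console.log(contrastRatio(light, dark));
```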

In my opinion, as I wrote above about the WebAIM study, these analyses are basically not meaningful at all. They do justice to neither the providers nor the users. As a side note, the Digital Journal article that references said study put the copyright notice in the alt text. That is how you fall into your own trap.

Wrong Assumptions

The equation non-conformant = non-accessible = not usable for disabled people is simply not correct. But that is what these big numbers imply, which is why I either view such studies critically or not at all.

One thing is clear: these large samples cannot be analyzed qualitatively with reasonable means, at least not in the foreseeable future. In my opinion, however, automatic testing tools at their current quality are wholly inadequate as a substitute.

What they can provide are comparative values, just as the Body Mass Index is well suited to measuring overweight on average but says little about an individual person. With these tools we can make excellent comparisons, say between certain types of websites or between different countries. For qualitative statements, they are unsuitable.

Conformance leads to Micro-Optimizations

The great weakness of the conformance concept is that errors are not weighted. A missing alternative text, an ID assigned twice, a missing form label: from the tool's perspective it is all the same.
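To illustrate what weighting could look like, here is a hypothetical sketch. The issue types, weights, and scoring function are invented for illustration and come from no real checker, which is exactly the point: today's tools effectively count every finding as one.

```typescript
// Hypothetical severity weighting for audit findings. The categories
// and weights below are illustrative assumptions, not a real standard.
type IssueType = "missing-alt" | "duplicate-id" | "missing-label";

interface Finding {
  type: IssueType;
  count: number;
}

// An unweighted tool effectively uses weight 1 everywhere.
const severity: Record<IssueType, number> = {
  "missing-alt": 3,   // content is entirely lost for screen reader users
  "missing-label": 3, // forms become unusable
  "duplicate-id": 1,  // often has no user-visible effect at all
};

function weightedScore(findings: Finding[]): number {
  return findings.reduce((sum, f) => sum + severity[f.type] * f.count, 0);
}

// Two pages with ten findings each, very different real-world impact:
console.log(weightedScore([{ type: "missing-alt", count: 10 }]));  // 30
console.log(weightedScore([{ type: "duplicate-id", count: 10 }])); // 10
```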
