Why Wrong Assumptions Can Harm Digital Accessibility
This article is about untested assumptions – and how they can lead to barriers.
TL;DR – Summary
Assumptions are mental shortcuts
People make assumptions to manage everyday life efficiently.
They are initially useful but become problematic when they are not questioned.
Untested assumptions lead to barriers
They influence decisions regarding design, functions, and texts.
Even accessibility professionals can fall into thought traps, especially when knowledge is adopted from older guidelines.
Typical examples of false assumptions
- Blind people cannot operate computers or smartphones.
- Functional illiterates cannot read at all.
- People with learning disabilities or autistic people do not use certain digital services.
- Older people do not want to use modern technology.
Concrete impacts
- Content is not made accessible because it is subconsciously assumed that the people affected won't use it anyway.
- Accessibility is deprioritized because it is perceived as expendable.
- Functions that would support those affected are omitted.
- Interfaces are not optimized for specific target groups.
Specific challenges in practice
- False assumptions can become entrenched over the years and be passed on subconsciously.
- Old mindsets and historical structures (paternalistic attitudes, lack of diversity) influence decisions.
How to question assumptions
- Check regularly: Are my assumptions still correct? Are they empirically proven?
- Critically examine guidelines to see if certain measures are still current or outdated.
- Seek exchange with people with disabilities; observe and listen.
- Include personas with disabilities in design and development processes.
- Cultivate curiosity and critical thinking – do not become "stuck" in assumptions.
- Ensure diverse team composition.
Central Message
- Assumptions are normal, but they must be checked regularly to avoid unconscious exclusion.
- Those who remain curious, question critically, and seek exchange with those affected make better decisions for inclusive and accessible digital services.
What are Assumptions?
Untested assumptions are beliefs about circumstances that we hold without actually verifying them. They function, in a sense, as shortcuts in our thinking. We assume something, find a certain concept plausible, and take for granted that it must be true. In reality, however, it often turns out that these assumptions do not apply at all. The real problem arises when we fail to question them.
This can happen both to people who deal with accessibility professionally and to those who only encounter the topic peripherally. In both cases, such assumptions can lead to products or content being designed incorrectly or incompletely. That is exactly why I want to look at a few examples today and talk about how these thought traps might be avoided.
First, the question arises: Why do we have assumptions at all?
The answer is actually quite simple. They help us manage our daily lives. Our brain uses them as a kind of mental shortcut because we wouldn't have the capacity to completely analyze every single decision and observation from scratch.
If we had to think through every step of our day in detail, we would probably get stuck just getting out of bed in the morning. We would keep analyzing and weighing options – and in the end, we would hardly take any action. In this sense, assumptions are initially useful and necessary.
The real problem is not that we have them, but how we handle them. People who work more reflectively check their assumptions regularly. They ask themselves: Is this actually true? Are there perhaps situations where my assumption does not apply?
Others do this less often. Then false assumptions persist – and can even intensify. Over time, they affect decisions and designs, often without us even noticing.
Another important factor is time pressure. In most jobs today, we have to handle many tasks simultaneously. Under such conditions, we are naturally more inclined to resort to these mental shortcuts.
A typical example would be the assumption: "No person with a disability will use my application anyway."
When thinking this way, one tends to consider accessibility only minimally – just enough so that it formally fits somehow. Other tasks seem more urgent or important, and accessibility slides down the priority list.
But this is exactly where the problem begins. Because this seemingly small assumption can ultimately lead to people being excluded, even if that was perhaps not intended at all.
Frequently, we also use our own user experience as a benchmark. We take it for granted: I see normally, I hear normally, I have no problems moving my hands and can operate a mouse without difficulty. If everything works for me, it is easy to assume that it works for everyone else as well.
As a result, we often don't even consider that there are people for whom exactly these things are difficult or impossible. Or we simply underestimate the size of this group. One then thinks: Surely these are only very few people. And if the group is so small, it doesn't seem particularly important from a practical standpoint to consider their needs.
Another factor is the lack of diversity in many teams. Often, people work together who are similar in many respects – one could say: birds of a feather flock together. Especially in the IT sector, teams are often relatively young, technically savvy, and very familiar with digital tools, navigating digital environments quite naturally.
When everyone on the team has similar experiences and skills, it naturally becomes more difficult to empathize with other perspectives. It is easy to lose sight of the fact that there are many people who are not tech-savvy, who have difficulties with certain devices or operating concepts, or who are simply not young and physically fit.
Assumptions in Accessibility
Let’s take a look at a few typical assumptions that frequently arise in this context.
A classic assumption is, for example: Blind people cannot use a computer or a smartphone. As readers of an accessibility blog, you naturally know that this isn't true. But let’s imagine someone who has never dealt with the topic of accessibility and has had no contact with blind people. At first glance, this assumption seems quite plausible.
If someone is blind, they cannot see a graphical user interface. They cannot visually follow mouse movements, cannot simply click on visible elements, and cannot read text that is only presented visually. All of that is correct to begin with.
The problem arises, however, when we stop thinking at this point and fail to question this assumption further. We then don't even consider the possibility that blind people might still want to use our applications – and encounter barriers in the process.
Yet, there are actually clues in everyday life. You certainly encounter blind people using smartphones. They aren't just holding them in their hands or using them as paperweights – they are clearly using them. This observation alone should trigger the next question: How does that actually work?
The answer, of course, is that technologies exist – such as screen readers – that allow blind people to access digital devices. But you only discover these solutions if you are willing to pause your own assumptions for a moment and look closer.
This is exactly what I mean by clinging to untested assumptions – and this phenomenon is surprisingly common.
Take functional illiteracy, for example. A person who is functionally illiterate can certainly read. The term does not mean that someone cannot read at all. It simply means that their reading skills are below a certain level. Perhaps a person can read words or short sentences but has difficulty with longer or more complex texts. Or they use assistive technologies, such as tools that read text aloud.
The assumption that functional illiterates fundamentally cannot read is therefore simply wrong. And such misunderstandings exist for many groups.
The same applies to people on the autism spectrum, as well as many other groups of people. It is important to emphasize again and again: it is completely normal to have such assumptions. It only becomes problematic when we don't question them.
This also applies to topics like learning disabilities. Think of people with Down syndrome, for example. If you see someone with Down syndrome, you might prematurely assume that this person likely doesn't use a smartphone or can't handle platforms like TikTok or Instagram.
The reality is much more diverse. Many people with Down syndrome naturally use smartphones, social media, and apps. Whether and how well someone can use a particular tool always depends on the individual person – not solely on a diagnosis.
The crucial task, therefore, is to pause your own assumption and check: Is that really true? Or is my assessment based only on a mental image?
An interesting detail is that such assumptions don't just occur among people without experience in accessibility. Even professionals in the field can fall into these thought traps. I constantly encounter surprising ideas or concepts that have apparently never been truly questioned.
One example is the topic of language changes on websites. I don't want to go into too much detail here because I’ve already written a separate post about it, but the basic principle can be explained quickly.
On websites, you can specify in the code which language the content is written in. This information is used by screen readers, which are programs that read text aloud for blind people. If a page is marked as English, it is read with an English pronunciation. If it’s marked as German, it’s read with a German one.
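In HTML, this document language is declared with the `lang` attribute on the root element – a minimal sketch of a German-language page:

```html
<!DOCTYPE html>
<!-- The lang attribute on <html> tells screen readers which pronunciation to use -->
<html lang="de">
  <head>
    <meta charset="utf-8">
    <title>Beispielseite</title>
  </head>
  <body>
    <p>Dieser Absatz wird mit deutscher Aussprache vorgelesen.</p>
  </body>
</html>
```

If this attribute were mistakenly set to `lang="en"`, a screen reader that follows the document language would read the German text with an English pronunciation.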
So far, so sensible. In practice, however, the importance of this setting is often overestimated, especially by sighted developers. The reason is actually quite simple: people usually set their screen reader to the language they primarily use.
A person who predominantly uses German websites will generally have their screen reader set to German. And someone who mainly uses English-language content will have it set to English accordingly.
Therefore, the language setting of the document plays a much smaller role in many everyday situations than is often assumed. Nevertheless, it is a subject of intense debate in some discussions – likely because many sighted people find it difficult to imagine how screen readers are actually used in daily life.
A particularly interesting example of such untested assumptions is the so-called language change within a website.
Technically, you can specify within a website that individual paragraphs or even individual words are in a different language. Take, for example, a German-language page that contains an English quote. You can then mark this quote block as English in the code. A screen reader would then automatically read this section with an English pronunciation. This even works at the word level – so, theoretically, a single English word in the middle of a German sentence could be pronounced differently.
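Such an inline language change is marked by placing a `lang` attribute on the element in question – a sketch of a German sentence containing an English quote:

```html
<!-- Only the quoted phrase is marked as English; the surrounding text
     inherits the page language (lang="de" on the root element) -->
<p>
  Tim Berners-Lee formulierte es so:
  <q lang="en">The power of the Web is in its universality.</q>
</p>
```

A screen reader that honors this markup switches to an English voice for the quote and back to German afterwards – the mid-sentence switch the following paragraphs discuss.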
There is actually a specific WCAG success criterion for this function: 3.1.2, Language of Parts. In my view, this is a good example of how something may seem technically sensible but can completely miss the mark in practice.
Because what happens in reality? The screen reader is forced to change languages in the middle of a sentence. A word is suddenly read with an English pronunciation, then it continues in German. For many blind users, this is not helpful but rather disruptive.
Especially people who use their screen reader at a higher speed – which many experienced users do – often find such automatic language changes cognitively taxing. You are listening to a German text, mentally adjusted to that language, and suddenly a single word or short phrase is pronounced differently. This pulls you out of the reading flow.
Despite this, many developers invest a surprising amount of time in correctly marking these language changes. They mark individual words, switch the language, repeat this in several places in the text – all with the goal of implementing the criterion as cleanly as possible.
When you then explain that many experienced screen reader users deactivate this automatic language switching in their settings, it sometimes causes surprise. The reason is simple: it’s just annoying. Additionally, it often happens that languages are incorrectly tagged.
An example from my own experience: Many years ago, the website of the "taz" (a daily newspaper) was, for some reason, marked as being in English. This resulted in my screen reader reading German texts with an English accent. And a German text with an English pronunciation is surprisingly difficult to understand.
Back then, I didn't know that you could change such things in the screen reader settings yourself. Screen readers are complex programs with many options. For me at the time, this meant quite practically: I could hardly read the site.
Today, I know where to find this setting. And the first thing I set in a new screen reader is indeed: deactivate automatic language changes.
But by no means all users know this. For them, an incorrectly set language tag can lead to a text becoming difficult to understand or even practically unreadable.
This is exactly why this example is so interesting. Here, people with good intentions created a rule – likely from the perspective of sighted developers. But they didn't fully think through how this function feels in an actual usage context.
The problem is: because it is an official WCAG criterion, it doesn't just disappear. Individuals cannot abolish it. And so, many teams continue to invest time and energy into a function whose practical benefit is, at the very least, highly debatable.
This, too, is ultimately another example of the central theme of this episode: assumptions that were never truly questioned.
Another discussion I'd like to touch on only briefly, because I'm not deep enough into the subject myself, is also very exciting: the debate between "Leichte Sprache" (Easy-to-Read) and "Einfache Sprache" (Plain Language).
For a brief context: Plain Language is generally aimed at the broad population – people who have difficulty understanding complex everyday texts. Easy-to-Read, on the other hand, is a specific concept with very strictly simplified rules and is primarily aimed at people with learning disabilities.
That is at least the common classification.
Increasingly, however, it is argued that this division may be overly simplistic. People with learning disabilities are often lumped together, even though their abilities to understand language can vary greatly.
This means: there are certainly individuals who truly depend on Easy-to-Read language. At the same time, there might be many people who would fare better with well-implemented Plain Language – where "well-implemented" is key here.
One argument in this discussion is also the aspect of stigmatization. Easy-to-Read has a very specific appearance: short sentences, many line breaks, special formatting. Some people therefore perceive this form as conspicuous or even stigmatizing. A text in Plain Language might be more pleasant for them because it seems less "special" and remains closer to ordinary text.
Another point is that Easy-to-Read is often forced to heavily reduce information. While this can increase understandability, content is also lost in the process. Some people might therefore get more out of a clearly formulated Plain Language text.
I find this discussion exciting because it raises an important question:
Have these assumptions actually been empirically verified?
Has there been a systematic investigation into which forms of linguistic simplification actually work best for which groups? And could it be that Plain Language would already be sufficient in many cases – or that both approaches could be sensibly combined?
Because in practice, we quickly run into a completely different problem: resources. Hardly any organization has the capacity to consistently provide content in both Plain Language and Easy-to-Read. This dual system is complex and therefore often not implemented at all.
My suspicion, however, is that this question might partially resolve itself in a few years. If artificial intelligence is increasingly able to automatically simplify texts, content could be dynamically adapted to different needs – for example, through different degrees of simplification or personalized presentation.
Until then, however, we have a very practical problem: there is a lack of both. There aren't enough texts in Plain Language, nor is there sufficient content in Easy-to-Read – and above all, they are missing exactly where they would be particularly important for the respective target groups.
Therefore, from my point of view, it would be very sensible to investigate more closely what people actually need. In other words, not just to talk theoretically about which concepts exist, but to clarify empirically which forms of linguistic preparation are truly helpful for which groups.
Impact of Untested Assumptions
I've already touched on some of these impacts indirectly: implicit – often unconscious – assumptions easily lead to certain topics not being seriously considered in the first place.
If, for example, I think: "This group doesn't need Easy-to-Read or particularly understandable texts because they won't understand them anyway," then I will invest little to no energy in creating such texts. The consequence is simple: you just don't do it.
It’s similar with other groups.
If I assume that blind people don't use computers, then I might consider alternative texts to be expendable. One then thinks: "We can solve that automatically later, maybe with AI – we don't have to be that precise."
If I assume that functional illiterates cannot understand the content anyway, then I might not even think to include functions that read text aloud or allow the display to be customized.
Or if I believe that autistic people won't visit my website anyway, then I see no reason to pay attention to sensory overload. I might then include animations, effects, and visual gimmicks without much thought – after all, "it doesn't affect anyone."
You can basically continue these examples indefinitely. The result is always similar: decisions are made based on false assumptions, often without being aware of it.
Even Accessibility Professionals are Sometimes Wrong
Interestingly, we see a similar problem among accessibility professionals. There, too, assumptions are sometimes taken for granted even though they are long outdated.
A typical phenomenon is: the longer someone works in the field, the more knowledge accumulates – but not all of it is regularly questioned. Especially people who have been active in accessibility for a very long time often have knowledge rooted in the era of earlier guidelines, such as the old BITV or very early versions of the WCAG.
Then certain ideas are simply carried along for years without checking whether they still apply today.
This can have two consequences. On the one hand, measures are implemented that require a lot of effort but have only limited benefit for the affected users. On the other hand, things that are actually important might be left undone.
I mentioned the example with language changes earlier. Another example is the handling of abbreviations in the code. It used to be common to explain abbreviations in detail using certain attributes. Meanwhile, this is largely outdated in the newer WCAG versions – yet this practice keeps appearing because it was learned a long time ago.
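The older practice alluded to here typically meant wrapping each abbreviation in an `abbr` element (or the long-obsolete `acronym` element) with a `title` attribute spelling it out – a sketch of what that markup looked like:

```html
<!-- Older guidance suggested expanding every abbreviation inline
     via the title attribute -->
<p>
  Die <abbr title="Barrierefreie-Informationstechnik-Verordnung">BITV</abbr>
  gilt für öffentliche Stellen in Deutschland.
</p>
```

The markup itself is still valid HTML; what has changed is the expectation that every abbreviation must be annotated this way.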
Such things often arise from assumptions about how people with disabilities use the web or how assistive technologies work. These assumptions might have been correct once, but they don't necessarily hold true today.
And yes – I definitely experience such situations personally. It happens that people with a lot of experience in accessibility explain with great conviction how something allegedly must work, even though affected users themselves bring in a completely different perspective.
Of course, anyone can be wrong. That is completely normal. It only becomes problematic when one clings to such assumptions despite evidence to the contrary and is unwilling to correct them.
Fortunately, this is not the rule. There are many very reflective experts who are open to feedback and actively seek exchange with those affected. But the situations described do keep happening.
What Can Be Done?
The crucial question is: What can specifically be done to question such assumptions?
I believe the first important step is indeed to constantly check your own assumptions. In other words, to regularly ask yourself: Is what I'm thinking still actually true? Was it ever correct – or did I perhaps just learn it that way at some point?
Especially in the area of guidelines, it’s easy to get a little lost in the jungle of requirements. You might be absolutely certain: I read this requirement somewhere in the WCAG. When you then look closely again, you suddenly realize: it’s not there at all.
That’s why it’s worth occasionally verifying such things again. And you are also allowed to question what is written in the guidelines themselves. I’ve already given a few examples.
There are requirements or techniques that are technically outdated today – such as certain notes on the autocomplete attribute, which modern browsers now handle quite well anyway. Such things are sometimes simply carried along for many years. The same applies to old workarounds from times when certain screen readers or browsers had their own specific problems – for example, special adjustments for combinations like JAWS in Internet Explorer.
These are solutions for technical situations that often no longer exist today. You can certainly ask yourself: Does this still make sense – or are we putting effort into a problem that disappeared long ago?
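For context, the `autocomplete` attribute mentioned above identifies the purpose of a form field – a minimal sketch:

```html
<!-- autocomplete tells browsers and assistive technology what the field
     is for; WCAG 2.1 criterion 1.3.5 (Identify Input Purpose) builds on it -->
<label for="email">E-Mail-Adresse</label>
<input id="email" type="email" name="email" autocomplete="email">
```

Modern browsers infer much of this from `type` and `name` on their own, which is exactly why some of the older, more elaborate guidance around it has lost practical relevance.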
A second important point is direct exchange with people with disabilities. This should actually be a matter of course, but it isn't always implemented consistently. It helps enormously to simply ask: How do you actually use this application? What works well? What is annoying? Or even just to watch how someone works with assistive technology.
I would explicitly recommend this to experienced accessibility professionals as well. Especially when you’ve been active in the field for a long time, an outside perspective can be very valuable. It’s best to speak with people who use assistive technologies themselves and also have a good understanding of accessibility – individuals who bring both together: practical experience and technical know-how.
What irritates me personally sometimes is that affected individuals are not always taken seriously in some expert discussions – especially in older structures. This surely has historical reasons. Many of today’s experts started at a time when, for example, the UN Convention on the Rights of Persons with Disabilities didn't even exist yet. Back then, the attitude was often more paternalistic: one wanted to "help the poor disabled people" so they could somehow manage better. Today’s perspective of self-determination and participation only became more prevalent later.
Of course, this doesn't apply to everyone. There are many very reflective experts who actively support this development. But in some, this old way of thinking is still palpable – and that, too, can lead to certain assumptions no longer being questioned.
A practical tool can also be to include personas with disabilities in development or design processes. Although the persona concept is now somewhat controversial in the UX community, it can still help to systematically empathize with different usage situations.
Take older people, for example. One might prematurely claim: older people don't use the internet or modern technology anyway. Of course, that's not true – a great many older people use digital services quite naturally.
If I have this assumption, it can happen that I unconsciously exclude this group. Instead, one could ask a different question: Why do some older people perhaps use certain services less?
And then one might come to a rather interesting realization: perhaps it’s not because they don't want to or can't. Perhaps it’s simply because of how we design our interfaces.
That would at least be a hypothesis worth thinking about. And the easiest way to find out is the same as before: talking to each other. Most of us have older people in our environment – parents, grandparents, neighbors, or acquaintances. You can simply ask them or watch them use digital services once.
Conclusion
In the end, it all comes down to one central point: curiosity.
The willingness to question things and not just take them as given.
Sometimes I have the impression that this curiosity has been somewhat lost by some decision-makers. We settle comfortably into our assumptions – and tend to them almost like a small plant. Instead, it would be sensible to check them occasionally: Is this still true? Do I really have good reasons for it? Or is it perhaps a prejudice that I’ve never properly questioned? And who might I be potentially excluding with this assumption?