Interview with Clara Henning on User Testing with People with Disabilities

This is the transcript of the German-language interview with Clara Henning about user testing with people with disabilities. The language has been edited for readability. Any errors or inaccuracies are my responsibility.

Domingos: Welcome to a new episode of our podcast on digital accessibility. Today, I'm especially pleased to welcome Clara Henning as our guest. She will give us insights into user testing with people with disabilities. Thank you, Clara, for taking the time.

Clara: My pleasure, I'm looking forward to our conversation.

About Clara and Her Path to Accessibility

Domingos: As usual, we'll start with a brief introduction: Please tell us a little about your professional background and your current job.

Clara: Gladly. I'm Clara, and I work as a user researcher at Leefs in Cologne. Originally, however, I come from a completely different field: I initially studied physiotherapy and practiced that profession for some time. I then decided to pursue further studies and switched to information technology – more specifically, to the user-centered development of digital products.

Today, I focus on improving digital applications from a human-centered perspective. At Leefs, we work as a small team to help companies better understand their users and end users. Specifically, this means that I identify the needs, expectations, and problems that digital products should address. The goal is to make the usage context more transparent so that development teams can make informed decisions and keep relevant requirements in mind.

Domingos: In your work, you have focused extensively on digital accessibility and user research, among other things. Can you explain how you came to this topic?

Clara: Of course. The path into this field was indeed extremely exciting for us as a team. That was about two years ago now. At that time, we first became more aware that something was changing in the regulatory environment – particularly in connection with the Accessibility Strengthening Act. In this context, we initially asked ourselves the fundamental question of what specific requirements and implications this would entail.

We then deliberately scheduled a team meeting to assess our current knowledge: What prior experience did we already have? Who had any previous contact with digital accessibility? The honest assessment was rather sobering – our knowledge was still fragmentary. At the same time, it became very clear to us that the topic was an excellent fit with our core values. Our work is consistently user-centered, often described using the approach of Human-Centered Design and User Research. If people are at the center, then accessibility cannot be a peripheral issue.

Against this backdrop, we decided to systematically familiarize ourselves with the topic. We deliberately proceeded in small steps: First, we activated our network to identify existing expertise. For example, I specifically asked about experience in accessible user research on LinkedIn. Through discussions with practitioners, we gained a better understanding of what a meaningful approach looks like and which methodological and organizational aspects need to be considered.

Based on this foundation, our team gradually delved deeper into the topic and examined how we could concretely integrate accessibility into our research processes and collaborations with our clients. This involved not only regulatory requirements but, above all, the qualitative improvement of our work.

For me personally, the establishment of a panel with people with disabilities was particularly impactful. The initial telephone conversations fostered a direct exchange that vividly illustrated the specific challenges of everyday digital life. These direct insights significantly broadened my understanding. Some of the conversations lasted an hour or more because it was important to me to truly grasp the individual perspectives.

From these interactions, we were able to pinpoint precisely where we needed to adapt our research designs, test setups, and communication. This personal engagement was exceptionally valuable – both professionally and personally. After many of these conversations, I was deeply impressed by the openness and expertise shown to us.

Domingos: You already mentioned the panel. Many listeners probably don't yet have a clear picture of how user research actually works. Could you explain what a panel is – and how your panel is structured in the context of digital accessibility?

Clara: Sure. Put very simply, a panel is a structured network of contacts – a curated list of people who meet certain defined criteria and are generally willing to participate in research formats. Panels can be set up for a wide variety of target groups. For example, we also have an agricultural panel made up of farmers whom we can recruit for interviews or user testing.

In the case of our accessibility panel, it consists of people with various disabilities. Initially, the focus was on people with visual impairments, as that's where we started our first activities. However, the panel now also includes people with cognitive, auditory, and motor impairments. So there's no deliberate thematic exclusion, although the initial focus was more on vision.

Methodologically, it's important to understand that user research doesn't primarily consist of surveys – even though this is a common assumption. We conduct qualitative interviews or usability tests far more frequently. In the latter, people test specific digital products: for example, websites, apps, or software solutions. These can either be live or still in the prototype stage. Testing concepts or prototypes before actual development is particularly valuable, as it allows for the early identification of optimization potential. The goal is to adapt products so that they meet the actual needs of users as closely as possible at launch.

A key challenge throughout the entire research process is regularly recruiting suitable test participants. Between the initial research question ("What do we want to find out?") and reliable results lies the operational step of identifying the right people. These individuals must correspond to a company's target group – which can involve very specific criteria depending on the project. This recruitment process is often time-consuming and represents one of the biggest hurdles in user research.

Against this backdrop, we have decided to take a proactive approach to the topic of digital accessibility. Our aim is to make this hurdle easier for companies by leveraging an existing network of people with disabilities who are generally willing to participate. The panel members know what to expect in interviews or tests and have a genuine interest in contributing to the development of digital products.

We deliberately strive for a diverse composition. Our panel includes both highly tech-savvy individuals with extensive digital expertise and people who represent more typical everyday users. This range is essential to reflect different usage patterns and skill levels.

For each project, we then carefully assess which profiles are best suited to the specific questions being addressed. This allows us to recruit quickly and precisely, without having to start from scratch each time. Especially in the field of digital accessibility, this structured preparation is a crucial success factor.

Special Testing Conditions for People with Disabilities

Domingos: What does such a test look like in practice? When involving people with disabilities, specific conditions likely need to be considered. For example, the use of assistive technologies can require additional time. Do you conduct the tests remotely, perhaps via video conference, or do you work with a stationary testing lab and special arrangements?

Clara: Currently, we conduct these tests primarily remotely. We place great importance on adapting to the needs and habits of each individual participant. A key aspect is the choice of video conferencing tool. We often use Zoom, for example, and sometimes Microsoft Teams or other common solutions. However, the decisive factor is not our preference, but rather which tool the person in question is already familiar with. This allows us to avoid additional barriers to entry and ensures that participants are working in a digital environment that is as familiar as possible.

The process of a study typically begins with close consultation with the client – often designers, developers, or product owners. Together, we define the research objectives: What questions should be answered? What materials or prototypes are available? Is it a live website, an app, or a concept in the prototype stage?

We then check which profiles from our panel are suitable for the specific question. We send a structured invitation by email to suitable panel members. This invitation clearly describes the topic, objectives, timeframe, and organizational details.

When testing with people with disabilities, we generally plan more generously – usually between 60 and 90 minutes per individual appointment. The sessions are always one-on-one. Compared to traditional usability tests, we deliberately allocate more time. There are several reasons for this:

  • The technical setup, especially in conjunction with assistive technologies such as screen readers or magnification software, sometimes requires additional coordination.
  • Certain interactions naturally take longer.
  • Furthermore, the cognitive load in intensive testing situations should not be underestimated.

We therefore also take mental fatigue into account and structure the sessions accordingly with clear phases and sufficient time for breaks, if necessary.

Technically, we prepare the session to ensure a smooth process. We usually schedule a time buffer so that the test subject can join about ten minutes before the start. In many cases, we ask participants to share their screen. For example, if a live website is being tested, the person opens it in their own browser, and we observe in real time how they navigate, where they are located, and what interactions take place. This gives us authentic insights into actual usage scenarios – including the use of assistive technologies in their usual, individual setup.

This approach has proven very practical for us, as it ensures both flexibility and realism.

One particularly relevant aspect is testing with screen readers or voice assistant systems. If someone is using a screen reader, we need to ensure at the beginning of the session that the speech output is transmitted correctly so that we, and potentially our client's teams, can follow along. This is not only interesting for us but also particularly instructive for the observers from the client's side: they get direct insight into how the person works with the screen reader, how quickly the output is read aloud, and what challenges might arise. This technical setup is often a minor hurdle at the start of a session, but we actively support participants in resolving it.

During the tests, it's crucial to react flexibly to unforeseen obstacles. As the facilitator, I make sure that the test subject never feels like they are being "tested." On the contrary, they are the expert on their own everyday situation. This understanding of roles ensures that participants feel comfortable and can provide authentic feedback.

After the study is completed, panel members receive appropriate compensation – an aspect we often identified as important in our preliminary telephone discussions, as this is not always standard practice in other contexts, such as university studies. Furthermore, we maintain a feedback mechanism: After a project is finished, we share the key findings with the panel members. This allows them to understand the specific conclusions drawn from the tests.

Furthermore, it's important to us to bring things full circle: After a few weeks or months, we inform the panel members about the implementation of the results with our clients. This makes it transparent that their time and input had a real impact and that participation in the tests actually led to improvements. This approach is based on our experiences from the initial phone calls with people with disabilities and has since become an integral part of our process – both out of respect for the participants and to make the impact of the research tangible.

Raising Client Awareness

Domingos: That's a classic: Clients are often allowed to observe usability tests. How relevant is this specifically in the context of digital accessibility?

Clara: It's incredibly valuable. An example from our last study: We tested how accessible PDF documents are when sent via email, for example. In this project, two employees of our client had already made adjustments to the documents beforehand, in accordance with the applicable guidelines. We then invited these two individuals to the test session as silent observers.

"Silent observers" means that they follow what's happening but don't ask any questions during the test. This is important because too many voices during testing can be confusing – both for the person being tested and for the research team. Observers can thus directly experience how users actually interact with the product, what difficulties arise, and what their expectations are.

This was a real eye-opener, especially for our client's employees. They were often surprised by certain behaviors or problems they hadn't anticipated. This direct experience greatly enhanced the learning effect: They were later much better able to recall specific situations, the test subjects' reactions, and discuss the insights gained within the team.

When I subsequently summarize the main findings, include quotes from the tests, and derive recommendations, the effect is much greater for the observers because they experienced the situations firsthand. The added value is therefore significantly higher than if they were to receive only a written summary.

However, we don't invite 50 people at once, but only a small, select group, so that the test subject feels comfortable and the process remains smooth. Of course, we inform the observers about their role beforehand, and at the end of the test, they have the opportunity to ask questions or provide feedback. Especially in our field of digital accessibility, involving observers provides real, practical added value for everyone involved.

Domingos: Were there any findings that surprised you personally? For example, functions that seem perfectly useful to us, but are either completely unusable or very cumbersome for people with disabilities?

Clara: Yes, there were definitely some "aha" moments. I noticed something particularly with the PDF documents: For us, it's often very easy to quickly scan a document, grasp information, and understand the connections. However, for users with screen readers, it was a completely different story. Some parts of the document were simply inaccessible, and you only realize during the test that this content can't be read aloud at all.

This means screen reader users have to do much more cognitive work: "Is something missing? Have I grasped the information correctly?" In some cases, the test subjects had to open the document again in a different way to access the missing content. This made me realize how significant the additional cognitive load is – you're constantly thinking and checking whether the information is complete. This wasn't something I had grasped before, and it really surprised me.

Another example concerns classic website layouts: When content opens gradually, such as modal windows or dynamic selection fields, it might seem clear to experienced users. However, for people who rely on specific focus controls, this can be very confusing, even if the technology is implemented correctly.

Such observations show how different user experiences can be, depending on assistive technologies and individual needs. This is precisely what makes user research in this area so exciting: You see functions from a completely new perspective and recognize optimization potential that you hadn't previously considered.

Domingos: How challenging is it for you when someone uses assistive technologies like screen readers or screen zoom? Sometimes these systems work very quickly or with high magnification – it's often difficult for sighted people to understand exactly what's happening.

Clara: Yes, that's true, it is indeed a particular challenge. I have to concentrate much more in such cases. It's not impossible for me to follow the processes, but it requires me to pay very close attention. As the moderator, I have to observe closely what's happening: I pay attention to screen activity, the test subject's verbal cues, and often facial expressions when the camera is on. A slight frown or gesture can already indicate where the problem lies, and I make a note to ask specific questions later.

So it's mentally very demanding – a lot is constantly happening in my head while I'm simultaneously following the interaction. At the same time, I find it incredibly exciting and educational. It's part of my role to ask clarifying questions if needed: "Can you show me that again?" or "I didn't quite follow along, how exactly did you do that?" – that's perfectly fine. Some participants offer to slow down the screen reader's speech output, but I usually let them use their technology as they're used to in everyday life.

If I really don't understand something, I ask: "What did the screen reader just say?" – and that way I can still understand how they're using it. After such sessions, I'm often a bit exhausted because the concentration required is very high, but that's part of the process.

For some participants, it can be even more challenging, especially if the screen reader reads at high speed or the screen is greatly magnified. It works well for me because, in my role as moderator, I actively ask questions and observe. With increasing experience and several tests, you get used to it – and it's precisely this intensive observation that makes the work so interesting and insightful.

Domingos: Yes, thank you very much for the insight. Where's the best place to follow you?

Clara: You can reach me via LinkedIn or by email. The company is leefs CX GmbH.

Domingos: Thank you so much for this insight, and it's great that you're addressing the topic of accessibility and UX; it's still too rare these days. Wishing you continued success.

Clara: Thank you very much.

More Talks with Accessibility Specialists