Usability Testing with Disabled Persons
As I briefly touched on in the last post, there are points of contact between usability and accessibility. However, the two terms are anything but congruent. A site that is wonderful to use for a blind person may be unusable for a sighted person. A Flash site, on the other hand, is useless to a blind person and mostly anathema to sighted users, even though they can probably operate it.
Article Content
- Feasibility
- Usability issues
- Testing on the weakest
- Demands on the test leader
- Conclusion
- Preparing a Test
- The first reading
- Segmentation
- Definition of the core questions
- Selection of people
- The test guide
Feasibility
In principle, there is nothing wrong with conducting usability tests with people with disabilities. I would focus on people with low or medium Internet affinity, because the highly Internet-savvy can cope even in difficult situations. Conversely, if even the Internet-savvy cannot work with the site, the site can be considered unusable for the whole group.
Accessibility places a strong emphasis on making use possible at all rather than on good usability. All test procedures, whether automated or carried out by human testers, therefore check against formal criteria. If these criteria are met, the level of accessibility can be assessed. Aspects of user-friendliness could also be included in these tests, but they play no role, at least in the test procedures I am familiar with.
Usability issues
Let's take a form as an example. In addition to its label, a form field can also have a default value. A screen reader user may then hear what is to be entered three times: once from the text in front of the field, once from the label, and once from the default value. I have even seen a form where the form elements were given alternative text instead of a label. Is that even covered by the HTML 4 specification?
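A minimal sketch of this triple-announcement pattern; the field names and texts are made up:

```html
<!-- Hypothetical example: a screen reader may announce the paragraph
     text, the label, and the default value in turn, so the user hears
     essentially the same prompt three times. -->
<p>Please enter your name:</p>
<label for="name">Your name</label>
<input type="text" id="name" name="name" value="Your name">
```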
It's not a barrier, but it is annoying. Another example is jump anchors, which are supposed to let you skip directly to individual areas of the website; some people overdo it, though, and give every pixel on the page its own jump anchor. I will add more examples later.
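In the meantime, a sketch of the restrained version, kept to a single targeted anchor; the id and link text are illustrative:

```html
<!-- One skip link at the top of the page is usually enough. -->
<a href="#content">Skip to main content</a>
<nav><!-- a long navigation would follow here --></nav>
<main id="content">
  <h1>Page title</h1>
</main>
```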
Testing on the weakest
A problem with usability tests is the varying technical skill of the users. Some don't even know how to set a bookmark in their browser. In addition, people with disabilities have very different levels of knowledge of their assistive technologies and what those can do. So we have two dimensions of problem areas instead of just one.
People with low technical skills are therefore ideally suited for usability tests, because for them accessibility is truly indispensable. I differentiate here between usability and accessibility: a page can be usable even if it consists of unstructured HTML, and yet still not be accessible. Many do not understand the difference.
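As a sketch of what I mean, a page like the following can read perfectly well on screen while offering a screen reader user no structure to navigate by; the content is invented:

```html
<!-- Bold text and line breaks instead of headings and lists:
     readable on screen, but structurally empty for a screen reader. -->
<b>Opening hours</b><br>
Monday to Friday, 9 am to 5 pm<br><br>
<b>Contact</b><br>
Phone, email, address
```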
The entire range of different disabilities, assistive technologies, and technical skills can probably not be represented in a single test group. The number of different types of visual impairment alone can hardly be covered with acceptable effort.
Demands on the test leader
But extended skills are also expected of the test leader. On the one hand, they must of course be able to communicate with the test persons, including those who are deaf or have learning difficulties. On the other hand, they must also understand the test persons' particular ways of working with the computer, so that they can spot problems without asking or intervening. For example, it is difficult to follow a visually impaired person working with high magnification, at least if that person is experienced. The combination of screen reader and magnification is a curious one: you can work in two areas at the same time, reading text in the magnified area visible on the screen while writing text or executing commands wherever the screen reader focus happens to be. In addition, the advantages of keyboard control are combined with the advantages of mouse control.
I doubt that anyone other than the visually impaired person understands what they are doing, but that's what the think-aloud method is for.
Take a ministry website as an example; I think it makes the difference between accessibility and usability clear. The site may be accessible, but it is not user-friendly: it is practically impossible to reach the content area in a reasonable amount of time. Apparently every department wanted its place on the home page. It is also possible that screen reader users hear more here than sighted people see, because individual areas of the page are hidden via CSS; I was too lazy to check the source code.
But the other ministries are just as bad.
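To illustrate the CSS point from above: the two common hiding techniques behave quite differently for screen reader users. The class names here are made up:

```html
<style>
  /* Removed from the accessibility tree: neither seen nor read aloud. */
  .hidden-everywhere { display: none; }
  /* Moved off screen: invisible to sighted users, but still read aloud,
     which is how a screen reader user can hear more than a sighted
     person sees. */
  .visually-hidden { position: absolute; left: -9999px; }
</style>
<nav class="visually-hidden">A long list of department links</nav>
```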
Conclusion
Hopefully it has become clear that a website, even if it is accessible, can deter users if it is user-unfriendly from a disabled person's point of view. In principle, accessibility tests are sufficient to check a set of established criteria. In the end, however, users with disabilities have to test the offering themselves and point out any difficulties in using it. It doesn't matter whether you call that a practical test, a usability check, or something else.
Preparing a Test
In human-machine interaction, people have long been concerned with software ergonomics, the old, uncool name for usability. Jakob Nielsen and Steve Krug wrote the classics on it. A variation on the theme is user experience, or joy of use. You probably have to invent a new term from time to time so that the topic retains its sex appeal.
I prepared for a usability test a few months ago and would like to share my experiences here. The formulation of concrete test questions is really a task for a usability expert; I will only deal with the preparation on the client's side.
First of all, a note: I know that my practice reports are very long and thus violate a golden rule of writing for the web. I accept that: on the one hand, I write texts that I would like to read. On the other hand, I have looked for such instructions myself and in most cases have not found them. I therefore hope that these detailed reports will also benefit other seekers.
The first reading
As always with a new task, I read up on the topic. However, I still haven't read the classics by Nielsen and Krug. Some time ago I read Thomas Wirth's "Missing Links", and that was enough for me. Do you learn anything from bad or good examples? I doubt it, at least not unless you've built websites yourself. The best book on the subject, in my opinion, is Jakob Nielsen's Web Usability.
Segmentation
My first step was to split the site into six areas:
- the navigation
- the information area, consisting of editorial texts that rarely or never change, but represent the core of the offer
- a research database, which represents the second pillar of the offer
- the internal search engine
- the hands-on areas consisting of a small blogging platform and a forum
- the news section
This segmentation was necessary in order to determine which areas of the website were used particularly heavily and therefore had a particular need for optimization. It was also clear that the basic structure, the information architecture, and the pillars of the website should be tested more intensively. The rest was either very simply structured, or it was questionable whether it would be continued at all.
Definition of the core questions
It was clear from the start that the testing agency would design the test guide, while I would only define the guiding questions. So I defined the key questions for the core areas to be tested: navigation, search, texts, and database. To do this, I took a close look at these areas and considered which typical requirements a less tech-savvy user would have. The well-known and serious sources of error are incomprehensible navigation, misleading information architecture, and a lack of consistency in the structure of the website; see also an older post of mine.
Many major tasks in interaction design are solved with patterns. This means that a feature on site X is very likely to behave like the same feature on site Y. If it doesn't, it becomes a usability problem because an action doesn't behave as expected.
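As a sketch of such a pattern, here is a site search in its conventional form; the URL and field names are invented:

```html
<!-- A search form following the established pattern: a labeled field
     with a submit button, placed where users expect it. -->
<form action="/search" method="get" role="search">
  <label for="q">Search</label>
  <input type="search" id="q" name="q">
  <button type="submit">Search</button>
</form>
```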
Selection of people
Usually 8 to 10 test subjects are used. We decided on an average cross-section of society. On the one hand, the typical social worker, who should be familiar with research on the Internet, should get a chance, but so should a rather inexperienced user who wants to find out more about a certain topic. In general, we rated the technical affinity of our users as rather low.
I have to say that we know relatively little about the people who visit our site. There are many indications that they are middle-aged, that is, 40 and older, that they are not very tech-savvy, and that they use the Internet primarily for information purposes. This is based on our assumptions about who typically uses offerings like ours. A lot can also be derived from the web analytics data, but that always remains speculative.
The test guide
The agency developed a test guide based on our requirements, and we then simply signed it off. I had actually expected the agency to wrap the whole process in a single scenario, something like: "Imagine you were researching the topic XY for a friend and came across this site...". In that case, the individual areas could have been worked through step by step with tasks embedded in the scenario. However, the agency worked with individual tasks instead. The whole package with eye tracking, camera recording, and so on was of course included as well.
These tests usually use the think-aloud method: the test person says out loud what is going through their mind while solving a task.
When it comes to formulating tasks, the agency's experience matters. As laypeople, we would naturally ask very direct, closed questions: Can you handle the navigation? Are you having trouble with the search results? Questions like these yield no useful answers. Instead, the agency has to formulate tasks and derive the answers from the results, from the observations made with the technical equipment, and from the test person's statements.
More on Testing & Evaluation
- Evaluating concepts for digital Accessibility
- Writing effective Accessibility Bug Reports
- Quality Management for digital Accessibility
- Why Conformance is overrated
- Is it accessibility or is the problem the disabled person?
- Testing Concepts
- Quick Checks for Web Accessibility
- The Fails of the German BITV Test