Evaluating concepts for digital accessibility

The test procedure should be uniform. To do justice to different kinds of submissions, the test can also have a modular structure, with individual modules applying specifically to websites, apps, or concepts.


Qualitative Evaluation

The test should be qualitative rather than quantitative: instead of applying a relatively rigid set of rules, it examines practical usability. The starting point is the consideration that an application that cannot be used, or is difficult to use, is not accessible either. Tests such as the BITV test focus more on the technical underpinnings, whereas users care mainly about the interface and the content.

The evaluation has two phases. In phase one, the editorial team evaluates the submissions. For this purpose, exclusion criteria must be defined. For example, a submission should be excluded if it is obvious that the submitter made no effort of their own to ensure accessibility, for instance because they simply relied on the already accessible standard layout of a content management system. Content that has clearly not been maintained for a long time, or that foreseeably will not be developed further, should also be excluded. A minimum requirement for submission is a testable prototype.

Submissions related to disability or accessibility will be considered.

In the second phase, the concepts submitted are tested in practice by those affected.

First Phase

The submissions should be evaluated by a small editorial team; a single person would judge too subjectively. The checklist is defined in advance and, if necessary, adjusted if the criteria turn out to be too strict; a minimal sketch of such a checklist follows the two lists below.

The editorial team can rate a submission positively if:

  • it is available on several platforms (Android, iOS) or was built as a universal web solution
  • the product is available cheaply or free of charge (costs as such should not count negatively; developers should also be able to earn money, and app prices are usually not disproportionate)
  • persons with disabilities were involved in the development, for example in the form of user evaluations
  • the solution is made available as open source
  • innovative approaches are used (open data, gamification, crowdsourcing)

However, a submission should be rated negatively if it is:

  • a special solution for persons with disabilities (special solutions are undesirable, as there is a high probability that they will eventually be discontinued or neglected in favor of the conventional solution)
  • an app offered as an alternative to an otherwise inaccessible website (the app serves as an excuse: the website is not accessible, so you are supposed to buy a mobile phone to access the information). An exception can be made if the functions of the website can only be made accessible with difficulty (as with Facebook) or the application provides additional functions that cannot easily be implemented on the web.
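One way to make this first-phase review consistent across editors is to record the checklist as structured data. The following is a minimal sketch in Python, assuming a simple yes/no judgment per criterion; the criterion names and the recommendation rule are illustrative assumptions, not part of the procedure described above.

```python
from dataclasses import dataclass, field

# Illustrative criteria, mirroring the two lists above
POSITIVE_CRITERIA = [
    "multi-platform or universal web solution",
    "cheap or free of charge",
    "persons with disabilities involved in development",
    "open source",
    "innovative approach (open data, gamification, crowdsourcing)",
]
NEGATIVE_CRITERIA = [
    "special solution only for persons with disabilities",
    "app substitutes for an otherwise inaccessible website",
]

@dataclass
class EditorialReview:
    submission: str
    excluded: bool = False          # e.g. no own accessibility effort, unmaintained, no prototype
    positives: list = field(default_factory=list)
    negatives: list = field(default_factory=list)

    def recommendation(self) -> str:
        """Hypothetical rule of thumb: excluded items drop out; otherwise more
        positives than negatives suggest advancing to the practical test."""
        if self.excluded:
            return "exclude"
        return "practical test" if len(self.positives) > len(self.negatives) else "discuss in team"

# Usage example
review = EditorialReview(
    submission="Example app",
    positives=["open source", "persons with disabilities involved in development"],
    negatives=["special solution only for persons with disabilities"],
)
print(review.recommendation())  # -> "practical test"
```

Whether a simple majority of positives should be enough to advance a submission is of course a decision for the editorial team; the point of the sketch is only that the same checklist is applied to every submission.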

The editorial team meets at the end of the first phase and discusses which concept goes into the practical test and which group tests the submissions.

Second phase: practical test by those affected

The second phase is the practical test, in which each submission is checked by at least two persons from the target group. If there is no specific target group, the editorial team decides who, and how many persons, will test the application. The method described here is based on heuristic evaluation, a method from the usability field.

There are two conceivable approaches: the test person is either questioned according to a fixed pattern after the test, or observed during the test itself.

Usability testing offers the think-aloud method: users say out loud what they want to do, why they are doing it, and what difficulties they are currently encountering. A mixture of questionnaire/checklist and observation of the test person seems sensible.

The examiner logs the session and notes, for example, how quickly the test person found their way around or where they ran into problems.

The test person is then asked about their subjective impressions using a fixed checklist/questionnaire.
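To keep the combination of observation and questionnaire comparable between test persons, the examiner's log could be captured in a simple structure. This is a sketch under the assumption that free-text observation notes and fixed questionnaire answers are stored together; the field names are illustrative.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Observation:
    """A single note taken while watching or listening to the test person."""
    timestamp: datetime
    note: str                      # e.g. a think-aloud remark or an observed problem
    is_problem: bool = False

@dataclass
class SessionLog:
    submission: str
    tester: str                    # pseudonym, not a real name
    observations: list = field(default_factory=list)
    questionnaire: dict = field(default_factory=dict)   # fixed question -> answer

    def problems(self) -> list:
        return [o.note for o in self.observations if o.is_problem]

# Usage example
log = SessionLog(submission="Example app", tester="P1")
log.observations.append(Observation(datetime.now(), "Could not find the search field", is_problem=True))
log.questionnaire["Did you find the application easy to use?"] = "Mostly, but the search was hard to find."
print(log.problems())
```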

Scenario

The editorial team determines the main purpose of each application. Based on this purpose, a scenario or task is defined for that application, to be completed as part of the evaluation. Without a scenario the impression becomes too subjective: the user merely clicks through the application without actually using it.

Possible criteria are:

  • Learnability
  • Usability
  • Robustness (the application should work with the assistive software used and, for example, not crash constantly)

For example, the test administrator can observe how well the user gets along with the application. The observations are logged and included in the evaluation.
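The scenario and the criteria listed above could likewise be written down in a fixed form, so that every tester works on the same task and every rating uses the same scale. The rating scale and the summary rule below are assumptions made only for illustration.

```python
from dataclasses import dataclass, field

CRITERIA = ["learnability", "usability", "robustness"]

@dataclass
class Scenario:
    application: str
    main_purpose: str
    task: str                       # the concrete task the tester should complete

@dataclass
class ScenarioResult:
    scenario: Scenario
    ratings: dict = field(default_factory=dict)   # criterion -> 1 (poor) .. 5 (good), assumed scale
    task_completed: bool = False

    def summary(self) -> str:
        avg = sum(self.ratings.values()) / len(self.ratings) if self.ratings else 0
        return f"{self.scenario.application}: task completed={self.task_completed}, average rating={avg:.1f}"

# Usage example
scenario = Scenario("Example app", "find accessible places nearby",
                    "Find a wheelchair-accessible cafe within 1 km")
result = ScenarioResult(scenario, ratings={"learnability": 4, "usability": 3, "robustness": 5},
                        task_completed=True)
print(result.summary())
```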

Possible questions for the user after the test:

  • Did you find the application easy to use?
  • Did you notice any problems?
  • Is the intended purpose achieved with the application?

To get realistic results, the test subjects should not be technical experts. They should be familiar with computers and with the assistive software or tools, but neither too proficient nor too inexperienced. This keeps them representative of the target group.

Set the right incentives

It is important that the evaluation process sets the right incentives. Accessibility should be anchored as a general topic, and developers should be encouraged to accept suggestions for improvement from the community. Ideally, developers and users get together at the end to improve the application further, creating a stronger impact: a win-win situation.

Basic Assumptions

Accessibility is often treated as the dogged working-through of rules. People see no reason to engage with the specific needs or difficulties of persons with disabilities and instead content themselves with working through catalogs of requirements. These guidelines have been created with great dedication by smart people, but in the end it is always the users who actively or passively decide whether or not to use an application. That decision does not necessarily depend on whether the application fully, partially, or not at all meets the accessibility guidelines. Websites of public institutions in particular often meet the accessibility guidelines yet are considered difficult to use by persons with disabilities.

It is often claimed that an application is not accessible because criterion XY is not met, regardless of whether that criterion is even relevant in the specific context. An application aimed solely at persons with learning disabilities does not have to be primarily accessible to blind users.

The various guidelines such as BITV, WCAG, and so on are to be understood as guidance: they are a means, not the goal. The decisive question is whether the target group can work with the product.

What matters here is that the target group can work with a product, not whether certain WCAG criteria have been met. The focus thus shifts away from the criteria and onto user evaluation and usability, which increases the incentive for developers to involve disabled users.
