The Cognitive Model of Blind People for Graphical User Interfaces

A cognitive model describes how something is perceived. To process complex objects, we need a representation of how they are constructed and how they behave, especially when interacting with them.

The cognitive model of sighted people

Sighted people perceive the GUI through their visual senses. Visual elements such as colors, shapes, symbols and text are recognized and interpreted.

People recognize and categorize visual elements using Gestalt principles (e.g. proximity, similarity, continuity) that help to perceive related information as belonging together.

Colors and contrasts play a crucial role. High contrasts make it easier to distinguish elements and draw attention to important areas.

GUIs are often complex, with many possible interaction points. Users direct their attention to relevant areas, often based on visual cues and personal relevance.

Selective attention helps to filter out irrelevant elements and focus on important interaction points. The placement and size of buttons and icons can direct attention in a targeted manner.

Working memory is crucial: users keep a limited amount of information in mind to perform the next steps in a GUI. GUIs that present too much information at once can overload working memory.

Long-term memory is engaged when users recognize familiar icons and interactions. Icons such as the "house" for the home page or the "trash can" for deleting are examples of symbols that are anchored in long-term memory and are immediately understood.

Consistency in design helps to leverage long-term memory and ensure recurring usability.

Users weigh options in a GUI and choose paths that best fit their goals. Decisions often depend on factors such as efficiency, previous experience, and confidence in navigation.

Conformity to expectations is an important factor: GUIs that match users' expectations reduce cognitive load and make decisions more intuitive.

After making a decision, the user performs an action (e.g. clicking, scrolling). These actions are goal-directed, creating a clear visual link between the user's intentions and the GUI elements.

Motor control, such as positioning the mouse and pressing buttons, is part of translating the cognitive model into physical actions.

Fitts' law also plays a role here: large targets close to the pointer can be acquired faster than small, distant ones, so users favor buttons and interactions that are easy to reach.
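Fitts' law can be stated as a formula. In the Shannon formulation widely used in HCI, the predicted movement time MT to a target of width W at distance D is (with a and b as empirically fitted, device- and user-specific constants):

```latex
MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)
```

The logarithmic term shrinks as targets get larger or closer, which is why generous click targets placed near the pointer's likely position are faster to hit.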

The cognitive model of blind people

The cognitive model of blind people when using digital user interfaces (UIs) differs fundamentally from that of sighted people, as they have to rely on other forms of perception. Most Gestalt principles, such as proximity, size, white space, or graphical highlighting, are purely visual and therefore do not work for blind people.

Auditory information processing

Blind users use screen reader software that converts the text on the screen into speech or Braille, so information processing is primarily auditory (or tactile in the case of Braille). Several cognitive processes play a role here:

Sequential navigation: Unlike sighted users, who visually scan a page and take in information in parallel, blind users take in information sequentially. They listen their way through menus, paragraphs and lists and have to mentally construct the structure of the page.

Mental mapping: Blind people create a mental representation of the website structure, similar to how sighted people create visual maps. This mental map helps them orient themselves on the page, even though they cannot take in the page at a glance. Navigation elements such as headings, lists or links are often used as landmarks.

Focus on hierarchy and semantics: Semantic markers (e.g. heading levels or lists) are particularly important for blind users to understand the structure of a website. They use these clues to set priorities and navigate through content.
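The heading levels and lists mentioned above are ordinary HTML semantics. A minimal, hypothetical page sketch shows how they form the landmarks a screen reader user navigates by:

```html
<!-- A consistent heading hierarchy lets screen reader users
     list all headings and jump directly to a section. -->
<h1>Online shop</h1>
<h2>Search results</h2>
<h3>Filter by price</h3>
<h2>Shopping cart</h2>

<!-- Lists announce their length (e.g. "list, 3 items"),
     so users know in advance how much content to expect. -->
<ul>
  <li>Keyboard</li>
  <li>Mouse</li>
  <li>Headset</li>
</ul>
```

Skipping a level (e.g. jumping from h1 to h3) breaks this mental outline, which is why a consistent hierarchy matters more to blind users than the visual styling of the headings.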

Tactile information processing (Braille)

Some blind users use Braille displays that convert the text of the user interface into tactile Braille characters. Tactile information processing requires different cognitive strategies than auditory information processing.

Slower information intake: Braille displays typically present only a single line of text at a time, which reduces reading speed. Users must adapt to the limited space and the sequential nature of reading.

Higher working memory load: Since only a few characters can be perceived at once, blind users must hold more information in working memory as they move through the content.

Interaction paradigms

Blind users use different input methods, including keyboard shortcuts, voice control or haptic input devices. These input methods require special cognitive adaptations.

Keyboard and shortcut-based navigation: Since visual orientation is eliminated, blind people navigate through the user interface using keyboard shortcuts or gestures. They have to remember a large number of key combinations and interaction patterns in order to work efficiently.

Cognitive load due to context switching: Switching between different tasks (e.g. reading text, navigating to a form, entering commands) can be cognitively demanding, as orientation and retaining context are particularly important.

Attention management: Blind users have to search for and process information in a targeted manner, which requires effective attention management:

Relevance filtering: Blind people need to be able to filter out irrelevant or redundant information, as they often have to listen to long passages to find relevant details. It helps if websites are semantically well structured and accessible.

Cognitive flexibility: Since many digital contents and websites are not fully accessible, blind users need a high level of cognitive flexibility to find alternative ways of extracting information, e.g. by working around poorly labeled navigation elements.

Increased dependence on memory: Without visual cues, blind users rely heavily on their memory to find their way around digital interfaces.

Long-term memory for layouts and structures: Frequent use of certain websites or applications leads to the storage of structural information in long-term memory. Blind users can find their way around more quickly when they revisit familiar pages thanks to these stored mental models.

Working memory: Processing information simultaneously is often more difficult because information is received linearly and auditorily. This leads to an increased load on working memory, as relevant information must be temporarily stored while the user navigates through the page.

Problem-solving strategies and error management

Blind users develop their own strategies to deal with malfunctions, inadequately accessible websites or a lack of information. These include:

Trial-and-error approach: When navigation elements are not well described or accessible, blind users often rely on trial-and-error strategies to figure out how to interact with the page.

Seek specific feedback: Blind users need accurate, immediate feedback about their actions because they cannot perceive visual indicators that confirm interaction (e.g. color changes, pop-ups).

It is particularly important for blind people to be able to select relevant information and filter out what is irrelevant. This also applies to sighted people, of course, but for blind people the process takes significantly longer because they have to take in far more information; they cannot simply skip a block of information visually. At the same time, they must be able to find specific information in a targeted manner. Suppose a user completes an online purchase: on the final page, they have to search deliberately for the message that says whether the purchase succeeded or an error occurred. Sighted people simply fixate on the relevant spot, while blind people have to locate it step by step.

Important principles of digital accessibility for the blind

There are several principles that reduce these challenges for blind users:

Relevance and relationships: On GUIs, hierarchies and relationships are conveyed through differences in element size and through proximity and distance. In code, the same relationships are expressed through container structures: related elements are grouped in containers that describe their function. Another option is fieldsets in forms.
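As a small sketch of the fieldset option mentioned above, grouping related form fields makes the relationship explicit in code; the legend is announced together with the fields inside the group:

```html
<form>
  <fieldset>
    <!-- The legend gives the whole group a name, so a screen
         reader announces e.g. "Shipping address, Street, edit". -->
    <legend>Shipping address</legend>
    <label for="street">Street</label>
    <input id="street" name="street">
    <label for="city">City</label>
    <input id="city" name="city">
  </fieldset>
</form>
```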

Labels: The purpose of elements must be communicated via the code. Since elements usually convey their purpose through their position or appearance, this information would otherwise be inaccessible to blind people.
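Two common ways of providing such labels in HTML, shown as a minimal sketch:

```html
<!-- A visible label programmatically tied to its input
     via the for/id pair. -->
<label for="email">Email address</label>
<input id="email" type="email">

<!-- An icon-only button: without an accessible name the
     screen reader would only announce "button". -->
<button aria-label="Delete message">🗑</button>
```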

Communication of changes: Changes in the GUI must be communicated to the assistive technology. Since they happen visually, a blind person would otherwise not notice them.
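One standard mechanism for this is an ARIA live region. The following sketch (with a hypothetical status element) shows the idea: when the region's text changes, the screen reader announces the new content without moving focus:

```html
<!-- The live region itself; "polite" waits until the screen
     reader has finished speaking before announcing. -->
<div aria-live="polite" id="status"></div>

<script>
  // e.g. after the user adds an item to the cart:
  document.getElementById("status").textContent =
    "Item added to cart.";
</script>
```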

Order: Since blind people cannot perceive the visual arrangement (except by exploring a touchscreen), the correct implementation of the reading order is important. It affects the output of the screen reader as well as the keyboard focus order. Elements can also derive their meaning from their order: imagine a two-column form with the first name in the left column and the last name in the right. The screen reader would probably read out the entire left column first, then the right column, causing confusion.
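The two-column form above can be implemented safely by keeping the logical order in the source and letting CSS create the columns; a minimal sketch:

```html
<!-- Source order: first name before last name. CSS Grid places
     the fields side by side without changing the order in which
     the screen reader and the Tab key traverse them. -->
<form style="display: grid; grid-template-columns: 1fr 1fr; gap: 0.5rem;">
  <p>
    <label for="first">First name</label>
    <input id="first" autocomplete="given-name">
  </p>
  <p>
    <label for="last">Last name</label>
    <input id="last" autocomplete="family-name">
  </p>
</form>
```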

Keyboard usability is primarily provided by the assistive technology itself, which offers shortcuts or touch gestures to jump directly to elements or to skip them.
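Pages can complement these built-in shortcuts with a "skip link", a common pattern for letting keyboard users bypass repeated navigation; a minimal sketch:

```html
<!-- The first focusable element on the page jumps
     straight past the navigation to the main content. -->
<a href="#main" class="skip-link">Skip to main content</a>
<nav>
  <a href="/">Home</a>
  <a href="/shop">Shop</a>
</nav>
<main id="main">
  <h1>Article</h1>
</main>
```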

Read more

  • Blindness
  • The difference between blind-friendly and digitally accessible
  • User experience for blind users
  • What is blindness?
  • Everyday life of blind people
  • How blind people use books, TV, and the Internet
  • How blind people orient themselves
  • How blind people use computers and smartphones
  • The relationship between blind and sighted people
  • Blindness in science and research