Artificial Intelligence and Digital Accessibility

In recent years, artificial intelligence research has enjoyed a major revival and probably its definitive entry into the mainstream. In this article you can find out why this is also good news for persons with disabilities and for digital accessibility.

Artificial intelligence - and I can decide for myself

Accessibility, consistently implemented, often increases the costs considerably. Anyone who has ever wanted to have a medium-sized website translated into plain language will know what I mean. Let's not even talk about sign language.

However, this also restricts disabled persons' freedom of choice. Anyone who depends on sign language has to cope with the few existing offerings or hire a sign language interpreter. Yet it is an imperative of democracy and inclusion to support such groups as well.

On the other hand, translation programs are becoming more and more powerful. While the quality of these programs used to depend primarily on computing power and pattern recognition, AI is increasingly coming into play. It can recognize connections and patterns much better than purely statistical algorithms. Neural networks can be trained and can learn; they only get better over time. Anyone who speaks of AI today usually means machine learning, which is a part of AI, but of course not the only one.

There are already tools that generate automatic image descriptions. Facebook, for example, uses such tools to add image descriptions automatically. Similar features are reportedly available in current versions of MS Office, and the Google Chrome browser is able to describe images.

What these tools currently produce is sometimes useful and sometimes not. But the algorithms are constantly getting better.

While individual images can still be described quite well by humans, it becomes difficult when thousands of images are involved. This is interesting for companies that want many images described, for instance in e-commerce, or for image databases. Getting decent image descriptions at a reasonable price is difficult; a sufficiently well-trained algorithm will do this in no time.

The advantage for blind and partially sighted persons is that they could receive image descriptions at very different levels of detail. For some, “black sneaker” is perfectly sufficient. Others may want to know what patterns are present, what shade of black, and so on.
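The idea of user-selectable levels of detail can be sketched in a few lines. This is purely illustrative: the attribute names and the structure of the recognized data are invented, not taken from any real image-recognition API.

```python
# Hypothetical sketch: rendering one machine-generated image analysis
# at different levels of detail, chosen by the user.

def describe(attributes: dict, detail: str = "brief") -> str:
    """Build an image description from recognized attributes."""
    base = f"{attributes['color']} {attributes['object']}"
    if detail == "brief":
        return base
    extras = attributes.get("details", [])
    return base + (", " + ", ".join(extras) if extras else "")

# Invented example output of an image-recognition step
analysis = {
    "object": "sneaker",
    "color": "black",
    "details": ["white sole", "mesh upper", "logo on the side"],
}

print(describe(analysis))               # the short version
print(describe(analysis, "detailed"))   # the full version
```

The point is that a single machine analysis could serve both groups: those who want “black sneaker” and those who want every recognizable detail.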

Speech recognition and automatic descriptions

One of Apple's Achilles' heels is its comparatively poor speech recognition. Try dictating something in English, such as a song title. AI could bring significant progress in this area as well.

That would mean, for example, that audio and video content could be transcribed into text much faster and more cheaply. Even subtitles for the deaf are conceivable, if the recognition of sounds develops far enough.

I could even imagine automatic audio descriptions at some point, though there is certainly still a long way to go. However, algorithms that describe scenes already exist.

Automatic tagging of documents

There are millions of inaccessible documents on the Internet, especially PDF files. Making existing and new documents accessible, as required by the EU directive and other laws, is not feasible in terms of personnel or finances. The effort is too great, and even if it could be financed, there are not enough qualified persons for this complex task.

One solution would be to make documents accessible automatically. Pattern recognition of text elements is no longer a major challenge today, and the automatic description of images and graphics is already feasible with current technology. If the documents within an organization follow a certain visual structure, an appropriately trained algorithm could do this with an acceptable error rate.
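How such pattern-based tagging might work can be illustrated with a deliberately simple rule set. The font sizes and thresholds below are invented; real PDF tagging involves far more features (position, spacing, reading order), but the principle of mapping visual patterns to structural roles is the same.

```python
# A minimal sketch of pattern-based document tagging: text blocks with
# invented visual features (font size, bold) are mapped to structural
# roles such as headings and paragraphs.

def classify_block(font_size: float, bold: bool) -> str:
    """Guess the structural role of a text block from its appearance."""
    if font_size >= 18:
        return "H1"
    if font_size >= 14 and bold:
        return "H2"
    return "P"

# Invented example blocks: (font size, bold, text)
blocks = [
    (24.0, True, "Annual Report"),
    (14.0, True, "Introduction"),
    (11.0, False, "This year we focused on..."),
]

tagged = [(classify_block(size, bold), text) for size, bold, text in blocks]
for tag, text in tagged:
    print(f"<{tag}> {text}")
```

A trained model would replace the hand-written thresholds, but the output, a structural tag per block, is exactly what accessible PDFs need.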

More control for users is necessary

Such examples could be strung together endlessly. Microsoft's Seeing AI, for example, can recognize and describe objects or environments. We cannot yet imagine the possibilities that could exist in the not-too-distant future.

One problem, however, is that these programs are all in the hands of the big players. I have not yet heard of open-source alternatives in this area.

This means these tools can only be used in closed environments. If I want alternative text for an image, I must first upload it to Facebook; for subtitles, I upload the video to YouTube. Apart from data-protection concerns and the lack of convenience, this drastically limits self-determination.

AI becomes interesting when its core functions are available independently of a specific platform. For example, I would like to translate any page into easy-to-understand language, or have alternative text created for any image, without copy-pasting or wrangling with a data collector. Only then can AI unfold its full effect for us.

Accessibility Overlays = AI Trash

Accessibility overlays are tools that promise to create accessibility automatically. Let me state up front: these tools are absolutely inadequate when it comes to accessibility. In the USA, several companies that used such overlays have already been sued. The overlay providers themselves cannot be sued, since their false promises are apparently not illegal.

The challenge is to make a complex website accessible. This is partly possible automatically for image descriptions, subtitles, or the hiding of distracting content. It does not work automatically for forms or other complex elements. It will very likely become possible in the foreseeable future, but probably not through the borderline-fraudulent companies on the market today.

Testing

While automatic accessibility beyond the areas mentioned is still a long way off, AI can bring improvements in testing. Complex widgets and custom elements in particular pose greater challenges for developers. If automatic test routines improve, the quality of applications could rise significantly in the future.
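Even without AI, simple accessibility checks can already be automated; smarter test routines would extend this approach to complex widgets. As a flavor of what automated testing looks like, here is a tiny check for images without alternative text, using only Python's standard library. Real test suites such as axe-core cover far more rules.

```python
# A small sketch of an automated accessibility check: find <img>
# elements that lack an alt attribute.
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collects the src of every img tag without an alt attribute."""

    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "img" and "alt" not in attributes:
            self.missing_alt.append(attributes.get("src", "unknown"))

checker = AltTextChecker()
checker.feed('<p><img src="a.png" alt="Logo"><img src="b.png"></p>')
print(checker.missing_alt)  # the images that still need a description
```

Rule-based checks like this catch the easy cases; the hope is that AI-supported testing can also judge whether a custom widget is actually usable with a keyboard or screen reader.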

Computer-aided creation of understandable language

Will it ever be possible to automatically convert texts into more understandable versions, such as easy-to-understand or plain language? Not in the foreseeable future, at least.

What works well today is translation from one language to another. For understandable language, however, other heuristics are needed. Translation programs recognize grammatical patterns in the source text and can map them onto the other language. When translating into understandable language, by contrast, it is necessary first to extract the facts from the source text: you have to recognize what the author is trying to say and what is relevant. This must then be rendered in simple language and supplemented with pictures. Here we enter the area of natural language processing.

What already works well, and should keep improving, is the reduction of time-consuming tasks such as finding and automatically paraphrasing long words or phrases. Extracting facts from a text and removing filler words could also be improved significantly.
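One of these small, automatable steps, removing filler words, can be sketched directly. The word list below is invented for illustration; real plain-language work needs far deeper understanding of the text, but mechanical steps like this one are exactly where tools can save time today.

```python
# An illustrative sketch of one automatable step toward plain language:
# stripping common filler words from a sentence. The filler list is a
# made-up example, not a linguistically validated resource.
FILLERS = {"basically", "actually", "really"}

def remove_fillers(sentence: str) -> str:
    """Drop words from FILLERS, keeping the rest of the sentence intact."""
    words = sentence.split()
    return " ".join(w for w in words if w.lower().strip(",.") not in FILLERS)

print(remove_fillers("You basically just need to really fill out the form."))
```

A human editor would still have to check the result, but the tedious first pass is done by the machine.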

Why criticism of AI often makes no sense

There is an interesting phenomenon: everyone thinks that AI could make many jobs obsolete, but never their own. Software cannot do what I can. And that is partly true.

Nevertheless, criticism of AI is often based on false conclusions. One example claims that AI cannot describe an image because it does not know the image's intention. That is correct, but an uninvolved third party does not know the intention either; I would have to communicate it to them one way or the other.

It is often implicitly assumed that if a task is not done by software, a human would do it. Unfortunately, that is wrong: in many cases, either the AI does it or nobody does. Then the PDF is not tagged, the video is not subtitled, and the text is not made understandable. In my opinion, an image description that is 80 percent adequate is better than none at all.

Further Reading