Artificial Intelligence as Support for Technologically Inexperienced People

This article addresses artificial intelligence as an opportunity for people with little technical experience to access digital technologies. It is about technologies that are either already available or could become available in the foreseeable future – so this is not a technological utopia.

Summary - TLDR

Many people – for example older or visually impaired people – are overwhelmed by graphical user interfaces. Even seemingly simple apps or websites are often too complex.

Assistive technologies such as screen readers could help, but they are difficult to learn, often not robust, or expensive. This leaves a large group mostly excluded from digitalization – precisely those who actually need it most.

Current approaches to accessibility, such as those based on the WCAG guidelines or in the area of usability, barely reach this group.

Therefore, the focus is on new AI-based solutions.

One approach is adaptive user interfaces that automatically adapt to the needs of the users. An assistance tool that allows individual requirements—such as font sizes, colors, contrast, or the deactivation of animations—to be defined once and then applied across systems seems particularly promising.

Furthermore, multimodal input systems, especially voice control, could break down barriers.

If an AI responds and is controlled using voice, there is no need to operate complex graphical interfaces.

The automatic simplification of texts also offers great potential—especially if AI presents content in simple, understandable language.

Another important point is the emotional support provided by AI systems.

Especially for people with mental health problems, an empathetic AI that recognizes frustration and responds with encouragement or assistance can be a great help.

For such technologies to be truly inclusive, however, they must meet certain requirements:

  • Accuracy and reliability of information
  • Data protection-compliant processing, ideally through models running on the user's device
  • Availability and accessibility for those who need it

Despite open questions, such as data protection or technical implementation, a clear trend is emerging: With the progressive integration of AI into operating systems – for example, by Microsoft, Google, or Apple – the chances are growing that digital technologies will become significantly more accessible in the near future.

The Problem

We know that many people use digital devices and graphical user interfaces as a matter of course – they write emails, surf the internet, or use complex apps.

But there is also a large group for whom all of this is hardly possible. Not because they lack interest, but because the technology is too complicated. Many of them – not all, but a large proportion – live with a disability.

I am particularly familiar with this from the area of blindness and visual impairment. Older blind people, in particular, often don't use computers or smartphones at all because they lack the basic skills needed to operate them or to work with screen readers.

Others, on the other hand, use their devices only to a very limited extent – they might send voice messages via WhatsApp, sometimes open an app like the weather app, and that's it. However, as soon as it comes to surfing the internet, filling out forms, or using more complex applications like YouTube, many quickly reach their limits.

The problem is: With current approaches to digital accessibility, such as the WCAG guidelines or classic user experience concepts, we barely reach this very group.

Of course, the experts in these areas are doing great work – improving usability and simplifying use. This primarily benefits people who are already relatively comfortable with technology, or those in the "middle range": those who sometimes struggle but generally manage.

Those who are already overwhelmed by graphical user interfaces benefit the least. And this is precisely where I see a key problem: With current digital accessibility methods, we simply aren't reaching these people. Those who most urgently need digital support – people who can't leave their homes or who otherwise rely on assistance – often have the least chance of navigating digital interfaces.

They can't place online orders, make appointments with government agencies, order medication, or have groceries delivered to their homes. All things that many of us have taken for granted.

This is primarily because many of these applications are simply too complex. Almost everything is digital these days – even package tracking. You can see in real time where a package is, or prevent it from being rerouted to a pickup store. But using this requires technical knowledge, which many people don't have.

I've already mentioned the two decisive factors above:

  • First, graphical user interfaces are too complex for many people – even a seemingly simple app can be overwhelming.
  • And second, assistive technologies, i.e. tools like screen readers, are not easy to use. For people who grew up with them or use them regularly, this is of course routine. But for those with little or no experience with them, it's an almost insurmountable hurdle.

Regarding the first point: There is certainly still potential for optimization that has not yet been fully exploited. Regarding the second point, little has actually changed since the introduction of mobile screen readers: screen readers remain complex to use and to learn, and I haven't seen any approaches that would change that.

So, if you're not familiar with either graphical user interfaces or assistive technologies, I honestly don't see any way to overcome this with today's technology. You have to master one of the two – otherwise, access to the digital world is practically blocked.

Let's now look at some approaches that are already being discussed.

Adaptive User Interfaces

One example is the concept of AI-based adaptive user interfaces, which Jakob Nielsen proposed a few years ago. Graphical interfaces are supposed to automatically adapt to the user's needs.

That still sounds like a distant prospect today – and it's not foreseeable that something like that will be widely available next year or the year after. But the prediction comes from someone highly respected in UX and usability research. The Nielsen Norman Group is one of the leading companies in this field; they conduct intensive research and regularly publish studies on the subject. So, when Nielsen says that adaptive interfaces are coming, it's definitely something to be taken seriously.

The idea behind it: User interfaces could either automatically adapt to individual needs – for example, through AI that analyzes usage behavior – or a digital assistant could guide you through a setup where you define your needs once, and these settings are then applied to all devices and applications.

However, I consider the scenario of manually adapting interfaces using an assistance tool to be much more relevant – at least when we're talking about people with disabilities.

Disabilities such as neurodiversity or visual impairment differ so significantly from one another that an AI could hardly learn what an individual needs simply from user behavior. What is needed is a way to specifically determine which adaptations are appropriate – and these should ideally then be applied to all graphical user interfaces. The needs are simply too diverse to be mapped with a uniform system.

In addition, we are talking about people who are generally not digitally savvy and therefore largely avoid digital technologies. This means that there wouldn't be sufficient usage data from which an AI could learn anything. In this respect, an assistant that allows you to set your own needs once would probably be the more practical solution.

And what could you specify there? For example:

  • No animations or flashing effects – neither automatically nor during interactions.
  • No self-starting videos.
  • Exclusion of certain colors, such as green or red, if they trigger a user.

For visually impaired users, the settings could include:

  • a specific, legible font,
  • adjusted contrast values,
  • a defined color palette that works best for each individual.

What is currently sometimes cumbersome to do using a screen magnifier or high-contrast mode – and often works unreliably in these modes – could thus be automated and implemented across systems.

The application would thus adapt without limiting functionality.
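To make this more concrete, here is a minimal sketch of what such a once-defined profile could look like and how an assistance tool might apply it to a single web page. The AccessibilityProfile type, the example values, and the applyProfile function are purely illustrative assumptions, not an existing API; a real assistant would apply the profile across all applications, not just one page. Browsers already expose related signals such as the prefers-reduced-motion media query – the idea here is essentially to generalize that to all settings.

```typescript
// Hypothetical, once-defined preference profile (illustrative only).
interface AccessibilityProfile {
  disableAnimations: boolean;  // no animations or flashing effects
  blockAutoplayVideo: boolean; // no self-starting videos
  excludedColors: string[];    // colors to avoid, e.g. triggers (not applied in this sketch)
  fontFamily: string;          // a specific, legible font
  minFontSizePx: number;       // minimum readable font size
  highContrast: boolean;       // adjusted contrast values
}

// Example profile for a visually impaired user (values are assumptions).
const profile: AccessibilityProfile = {
  disableAnimations: true,
  blockAutoplayVideo: true,
  excludedColors: ["red", "green"],
  fontFamily: "Verdana, sans-serif",
  minFontSizePx: 20,
  highContrast: true,
};

// Apply the profile to the current page by injecting a style sheet.
function applyProfile(p: AccessibilityProfile): void {
  const style = document.createElement("style");
  style.textContent = `
    * {
      font-family: ${p.fontFamily} !important;
      font-size: max(1em, ${p.minFontSizePx}px) !important;
      ${p.disableAnimations ? "animation: none !important; transition: none !important;" : ""}
    }
    ${p.highContrast ? "html { filter: contrast(1.4); }" : ""}
  `;
  document.head.appendChild(style);

  // Stop videos that would otherwise start playing on their own.
  if (p.blockAutoplayVideo) {
    document.querySelectorAll<HTMLVideoElement>("video[autoplay]").forEach((video) => {
      video.autoplay = false;
      video.pause();
    });
  }
}

applyProfile(profile);
```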

Simplification of interfaces

Another important point is the simplification of user interfaces. I envision a kind of "reduction to the essentials." For example, when filling out a form, only the form should be visible – without all the navigation and distractions surrounding it.

Perhaps the AI will even display brief explanations if a field or term is unclear. A desirable scenario would be for the AI to fill out the form largely automatically.

Another idea that's important to me in this context concerns the structure of websites and web applications. I think it would make sense to focus more on tasks in the future rather than on traditional web pages. When I use online banking, for example, I see hundreds of functions – loans, portfolios, insurance – even though I really just want to manage my account or make a transfer. This abundance overwhelms many people. People generally don't want to use a web portal or app; they want to complete a task. No one would seriously claim that the way we do things today is efficient.

Therefore, I could well imagine user interfaces being task-based in the future. This means: I simply say what I want to do – for example, "I want to make a transfer" or "I want to see my bank statements" – and the interface guides me step by step through this task.

Without having to click through countless menus and pages beforehand.

That would be a more elegant and inclusive form of interaction – and of course, this system would also have to function reliably, even if it is individually adapted to user habits or preferences.
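As a rough illustration of what such a task-based flow might look like under the hood, here is a small sketch. The task names, the steps, and the ask callback are assumptions made for illustration; a real system would hand the collected answers to the actual banking backend and would, of course, need far more safeguards.

```typescript
// Hypothetical task definitions: each task is a short, linear list of steps –
// no menus, no navigation, just the questions needed to complete the task.
type Step =
  | { kind: "ask"; field: string; prompt: string }
  | { kind: "confirm"; prompt: (answers: Record<string, string>) => string };

const tasks: Record<string, Step[]> = {
  "make a transfer": [
    { kind: "ask", field: "recipient", prompt: "Who should receive the money?" },
    { kind: "ask", field: "amount", prompt: "How much would you like to transfer?" },
    {
      kind: "confirm",
      prompt: (a) => `Transfer ${a.amount} to ${a.recipient}. Is that correct?`,
    },
  ],
};

// Guides the user through one task, step by step. The `ask` callback could be
// spoken dialogue, a single on-screen question, or both.
async function runTask(
  name: string,
  ask: (prompt: string) => Promise<string>
): Promise<Record<string, string>> {
  const answers: Record<string, string> = {};
  for (const step of tasks[name] ?? []) {
    if (step.kind === "ask") {
      answers[step.field] = await ask(step.prompt);
    } else {
      const reply = await ask(step.prompt(answers));
      if (!/^(yes|ok)/i.test(reply.trim())) {
        throw new Error("Task cancelled by the user");
      }
    }
  }
  return answers; // a real system would now submit these to the banking backend
}
```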

Multimodal input

And finally, the type of input also plays a major role.

Today, we essentially use a keyboard, mouse, and touch. Voice control should have become more important long ago – especially through systems like Alexa.

But Amazon has barely pushed forward with further development in this area – it wasn't until the advent of generative AI that voice interaction experienced a new boost.

The next step could indeed be multimodal input – a combination of speech, gestures, and other forms of interaction. This means: The AI responds with speech, and I also control it with speech.

It could then ask questions like: "Do you really want to transfer this amount to person X?" – with built-in security prompts to avoid errors or manipulation.
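Browsers already offer building blocks for this kind of spoken dialogue. The following sketch uses the Web Speech API for such a spoken security prompt; browser support varies (Chrome exposes recognition with a webkit prefix), and the set of accepted answers here is just an illustrative assumption.

```typescript
// Speak a prompt aloud using the browser's speech synthesis.
function speak(text: string): void {
  window.speechSynthesis.speak(new SpeechSynthesisUtterance(text));
}

// Listen for a single spoken answer. SpeechRecognition is not available in
// every browser; Chrome exposes it as webkitSpeechRecognition.
function listenOnce(): Promise<string> {
  return new Promise((resolve, reject) => {
    const Recognition =
      (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;
    if (!Recognition) {
      reject(new Error("Speech recognition is not supported in this browser"));
      return;
    }
    const recognizer = new Recognition();
    recognizer.lang = "en-US";
    recognizer.onresult = (event: any) => resolve(event.results[0][0].transcript);
    recognizer.onerror = (event: any) => reject(event.error);
    recognizer.start();
  });
}

// e.g. "Do you really want to transfer this amount to person X?"
async function confirmBySpeech(question: string): Promise<boolean> {
  speak(question);
  const answer = await listenOnce();
  return /^(yes|yeah|ok)/i.test(answer.trim()); // accepted answers are an assumption
}
```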

I could very well imagine this form of control being particularly useful for older people. Of course, you have to get used to talking to a computer, but I think that's a solvable problem.

And if the AI responds verbally, the real obstacle – namely the complicated graphical user interfaces – disappears.

Linguistic Simplification

Another exciting area is linguistic simplification. This is already technically possible today.

I know that there are also critical voices in the "Easy Language" and "Simple Language" community – for good reasons.

But in my opinion, automatic simplification works surprisingly well, at least when existing texts are simply summarized or simplified.

Of course, AI systems have the well-known problem of "hallucinations," i.e., false or fictitious content. But with pure text simplification, the risk is significantly lower. And this is where I see enormous potential:

Of course, it would be ideal if all content were available in simple or easy-to-read language – but that's almost impossible in practice.

There simply aren't enough resources to create all of this manually.

So if AI-supported systems can support this, it would be a real step forward in digital inclusion. So far, this feature hasn't really caught on yet, but I'm sure it will – probably integrated directly into browsers or operating systems. I know Apple is already working on such solutions. I don't know for sure whether Google is already that far along with ChromeOS or Android, but I think the topic will soon become relevant there as well.

Once the models can run locally on the devices, i.e., without a constant cloud connection, the whole thing will become much more practical – faster, more energy-efficient, and more privacy-friendly.
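To give an idea of how little is needed on the application side, here is a minimal sketch of automatic text simplification. The local endpoint URL and the request and response shapes are hypothetical placeholders for whatever on-device model a browser or operating system might eventually provide; the important part is the instruction, which tells the model to stay close to the source text and add nothing, precisely to keep the hallucination risk low.

```typescript
// Instruction for the model: simplify, but do not add new information.
const SIMPLIFY_PROMPT =
  "Rewrite the following text in plain language. Use short sentences, " +
  "explain technical terms, and do not add anything that is not in the text.";

// Hypothetical local inference endpoint – stands in for an on-device model
// provided by the browser or operating system.
const LOCAL_MODEL_URL = "http://localhost:8080/simplify";

async function simplify(text: string): Promise<string> {
  const response = await fetch(LOCAL_MODEL_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt: `${SIMPLIFY_PROMPT}\n\n${text}` }),
  });
  if (!response.ok) {
    throw new Error(`Simplification failed: ${response.status}`);
  }
  const result = (await response.json()) as { text: string };
  return result.text; // simplified version, shown alongside or instead of the original
}
```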

Help with mental health challenges

Another interesting field is supporting people with mental health disabilities. We know that frustration tolerance when dealing with technology is often somewhat lower here – entirely understandable, as digital systems can be frustrating.

If an AI were able to recognize frustration in the future – for example, based on the voice or tone of speech – it could respond empathetically, provide calming influences, or offer assistance.

That would be a huge step toward truly empathetic technology that adapts not only to the cognitive but also to the emotional needs of its users.

This is already happening to some extent today. For example, with many AI systems – let's take ChatGPT – the standard response to a question is often: "That's an excellent question!"

So you're constantly praised, as if you've just accomplished something truly special.

While this sometimes seems a bit exaggerated – almost as if you were a small child – it's effective.

And that's precisely where the potential lies: For some people, this kind of positive feedback can actually be a relief and motivation.

Requirements

To conclude, I would like to mention a few prerequisites that, in my view, are crucial for such systems to work successfully and without barriers.

The aspect of correctness is central. When graphical user interfaces are automatically adapted, it must be ensured that no information is lost or displayed incorrectly.

Otherwise, there is a risk that users will make incorrect decisions because the AI incorrectly implements content.

How exactly these adaptations will work in the future is still open – even Jakob Nielsen doesn't seem entirely sure. He apparently assumes that interfaces in the future will consist of modular design components that can be automatically adapted.

If that were the case, AI would probably no longer be needed for this pure adaptation logic – but that remains to be seen. Regardless of this, however the technology ultimately implements it – it must function reliably and correctly.

The same applies to text summaries or simplifications: They must be correct.

And all automated functions offered by an AI must remain stable and understandable.

Of course, the whole thing must also comply with data protection regulations – and that's exactly where I see the biggest problem at the moment. Many of the current AI models come from the US or China.

And honestly, even though there's a lot to discuss about data protection, nobody wants personal data to simply flow to those countries.

That's why I very much hope that future systems will run locally – directly in the browser or operating system, without data having to be constantly sent to external servers. This is also important for latency.

And I think we're on the right track there. The models are becoming smaller and more efficient, so they will soon be able to run directly on the device.

Apple is already working hard on this – and it would of course be desirable for other manufacturers like Google or Microsoft to follow this path.

The most important thing is that these technologies must be accessible to everyone. But I'm quite optimistic about this.

At the moment, we're seeing a real race: Microsoft with Copilot, Google with Gemini – both are integrating AI functions deeply into the operating system.

Apple has traditionally been a bit more reserved in this regard, but I don't think they can afford to be left out in the long run. We'll certainly see new solutions from them by next year at the latest.

The models should ideally run locally on the device, partly because of latency. If they run in the cloud, they should respect data protection. However, in my view, it would also be acceptable to use anonymized data, with the consent of those affected, to improve the models for everyone. A major problem in this specific area is the lack of training data.