Can software reduce discrimination?
Unconscious bias has more or less entered the consciousness of many decision-makers today. Bias means that discrimination takes place mostly unconsciously. At its core, we unconsciously prefer people who are similar to us in age, education, behavior and appearance, and knowing this is not enough to act against it. We will always prefer the person who is more sympathetic to us and explain afterwards, in seemingly factual terms, why it had to be this person. "That person fits into the team" is a killer argument that can justify any decision. In my view, this is a basic constant of human behavior that is difficult, if not impossible, to discard.
We need mechanisms independent of individuals that eliminate this bias. One way to do this is through anonymous applications. Social inclusion can also help, because when we grow up with "the others," we no longer experience them as so different from ourselves.
This is more or less old hat - mainstream in the diversity approach, so to speak. What is new, in my perception, is the data-based approach. Examples of this are the recent books by Iris Bohnet "What Works" or Caroline Criado-Perez "Invisible Women."
It's clear that data already plays a major role today and will become increasingly important with AI-based processes. This is associated with the hope that application processes, for example, will become less discriminatory.
Unfortunately, an algorithm is not automatically neutral; it all depends on the programming and the data.
This also means that we can't build software once, like a machine, and expect it to keep working. Rather, we need to continuously incorporate current knowledge.
The approach to a solution is surprisingly simple; it is the implementation that fails. When data is based on a perceived majority society, the resulting decisions discriminate against diverse minorities. An algorithm can discriminate just as much as a human if it simply replicates human behavior.
If a car's braking distance is designed for a six-foot-tall 35-year-old man, the algorithm won't work as well for a five-year-old child or an 80-year-old man. Or an application process will sort out anyone who went to school in a certain part of town or has gaps in their résumé.
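To make the application-screening example concrete, here is a minimal sketch of how such a rule replicates bias, and what an anonymized alternative could look like. All field names, postal codes and thresholds are invented for illustration; this is not a real screening system.

```python
# Hypothetical sketch: a naive screening rule that replicates human bias.
# Field names, postal codes and thresholds are invented for illustration.

def naive_screen(applicant: dict) -> bool:
    """Reject anyone from certain postal codes or with CV gaps --
    exactly the discriminatory shortcut described in the text."""
    if applicant["postal_code"] in {"12043", "12045"}:  # "wrong" part of town
        return False
    if applicant["cv_gap_months"] > 6:                  # gaps in the résumé
        return False
    return True

def anonymized_screen(applicant: dict) -> bool:
    """Decide only on a job-relevant criterion; address and biography
    stay hidden from the decision."""
    return applicant["skill_score"] >= 70

applicant = {"postal_code": "12043", "cv_gap_months": 12, "skill_score": 85}
print(naive_screen(applicant))       # rejected despite strong skills
print(anonymized_screen(applicant))  # accepted on the relevant criterion
```

The point of the sketch: the algorithm itself is neutral code; the discrimination comes entirely from which inputs it is allowed to look at.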
We all discriminate - I do too - whether consciously or unconsciously; let's leave it at that. A data-based anti-discrimination approach therefore seems to me one way to address this problem. The problem with many approaches to date, such as anti-bias training, is that they tend to cost a lot of money and often make you feel good, but cannot prove that they are successful. This means that the success of data-based approaches, too, can and must be proven with data.
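One simple way to "prove success with data" is to compare selection rates across groups (often called demographic parity). The sketch below uses invented toy data; the group labels and decisions are hypothetical and only illustrate the measurement idea.

```python
# Hypothetical sketch: checking a decision process with data.
# We compare selection rates per group; the data is invented toy data.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, accepted) pairs -> acceptance rate per group."""
    totals, accepted = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        accepted[group] += int(ok)
    return {g: accepted[g] / totals[g] for g in totals}

decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]

rates = selection_rates(decisions)
print(rates)  # {'A': 0.75, 'B': 0.25}
# A large gap between groups is evidence that the process discriminates,
# regardless of whether the decisions were made by humans or software.
```

Measured before and after an intervention (anonymous applications, for example), such a metric is exactly the kind of evidence that anti-bias training typically cannot provide.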