Why bias is unethical

The problem of bias

Only the responsible use of AI earns user acceptance. It is particularly important to avoid the so-called bias trap, meaning that all AI applications must be designed to be free of prejudice and discrimination.

People may be biased, but that does not mean AI has to be. Algorithms learn their behavior from the data they are trained on. If that data is insufficient or carries traces of bias, the AI model will learn to act on them as well. This so-called bias problem, i.e. algorithmic bias, can only be avoided if data scientists consistently pay attention to the fair design of their AI models right from the start.

In the past year, several high-profile incidents have highlighted the risks of inadvertent bias in AI applications and the damage it can do to businesses. Discrimination always has its price: lost sales, loss of trust among customers, employees and other stakeholders, fines and legal consequences.

The problems with the data

Some types of data carry an inherently higher risk of being used to discriminate against certain groups, for example information on nationality, gender or religion. But apparently "safe" data such as a person's postcode can also distort the decisions an AI application makes. If, for example, a bank has historically granted few loans to people in a neighborhood with a large ethnic minority, the AI could learn not to offer loans to anyone in that postcode, introducing a racist bias into the AI model through the back door. So even if ethnicity plays no role in the programming, the artificial intelligence can still find a way to discriminate without the bank ever noticing.
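
To make this back-door effect concrete, here is a minimal sketch in Python with entirely synthetic data: ethnicity is never handed to the model, yet because the postcode correlates with it in the biased historical approvals, the learned loan decisions still differ sharply between the two groups. All numbers are illustrative, not drawn from any real bank.

```python
# Illustrative sketch with made-up data: ethnicity is never given to the
# model, but the postcode it does see is correlated with ethnicity in the
# biased historical approvals, so the learned decisions still differ by group.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic population: postcode 1 has a high minority share, and historical
# approvals in that postcode were systematically lower.
minority = rng.random(n) < 0.5
postcode = np.where(minority,
                    rng.choice([1, 2], size=n, p=[0.8, 0.2]),
                    rng.choice([1, 2], size=n, p=[0.2, 0.8]))
income = rng.normal(50, 10, n)                      # income in thousands
approved = ((income > 45) & (postcode == 2)) | (rng.random(n) < 0.05)

# The protected attribute is deliberately left out of the features.
X = pd.DataFrame({"postcode": postcode, "income": income})
model = LogisticRegression(max_iter=1000).fit(X, approved)

# Yet the approval rates the model produces still split along group lines.
pred = model.predict(X)
print("approval rate, minority group:    ", round(pred[minority].mean(), 2))
print("approval rate, non-minority group:", round(pred[~minority].mean(), 2))
```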

As a first step, companies must therefore carefully check the data their AI uses. If this step is skipped, the AI can end up treating people unfairly, as in the example above, where certain population groups face an unjustified and ultimately unethical restriction of loans. To keep bias from being built into AI, organizations must always start from clean data sources when creating their models. In particular, they must bear in mind that characteristics such as educational level, creditworthiness, occupation, employment status, mother tongue, marital status or number of followers can lead to flawed AI decisions in certain situations. Without technology specifically designed for this kind of analysis, it is difficult for companies to identify such potential problems.
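
What such a data check could look like is sketched below, assuming a table of applications with a known protected attribute is available for the audit; the function name, the mutual-information score and the threshold are illustrative choices, not a prescribed standard.

```python
# Hypothetical pre-modelling check: flag candidate features that are strongly
# associated with a protected attribute before any training starts. The
# function name and the threshold are illustrative choices only.
import pandas as pd
from sklearn.feature_selection import mutual_info_classif

def flag_proxy_features(df: pd.DataFrame, protected: str,
                        threshold: float = 0.05) -> pd.Series:
    """Return features whose mutual information with the protected
    attribute exceeds a deliberately conservative threshold."""
    features = df.drop(columns=[protected])
    # Categorical columns are label-encoded purely for this association test.
    encoded = features.apply(
        lambda col: col.astype("category").cat.codes if col.dtype == object else col)
    mi = mutual_info_classif(encoded, df[protected], random_state=0)
    scores = pd.Series(mi, index=features.columns).sort_values(ascending=False)
    return scores[scores > threshold]

# Usage with a hypothetical table of loan applications:
# suspects = flag_proxy_features(applications, protected="ethnicity")
# print(suspects)   # e.g. postcode, mother tongue, ...
```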

Identifying and eliminating bias

It is therefore the responsibility of companies to build bias detection into all of their AI models, especially in regulated industries such as financial services or insurance, where violating compliance requirements is a serious matter. Checking for bias should not happen only monthly or even just quarterly. Rather, companies and organizations must monitor their self-learning AI models continuously, around the clock, in order to identify discriminatory behavior proactively and eliminate it in good time.
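
In its simplest form, such continuous monitoring could boil down to recomputing a fairness metric over each fresh batch of decisions, as in the following sketch; the column names and the 0.8 alert threshold are assumptions chosen for illustration.

```python
# Minimal monitoring sketch (assumed setup): compare approval rates across
# groups for a recent window of automated decisions and alert when the ratio
# falls below the widely cited four-fifths (0.8) threshold.
import pandas as pd

def disparate_impact(decisions: pd.DataFrame,
                     group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest to the highest group approval rate (1.0 = parity)."""
    rates = decisions.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

def check_recent_decisions(decisions: pd.DataFrame, threshold: float = 0.8) -> None:
    # 'decisions' is assumed to hold one row per automated decision, with a
    # group label and a binary 'approved' outcome; column names are examples.
    ratio = disparate_impact(decisions, group_col="group", outcome_col="approved")
    if ratio < threshold:
        # In a real deployment this would page the responsible team and could
        # pause the self-learning model instead of just printing.
        print(f"ALERT: disparate impact ratio {ratio:.2f} below {threshold}")
    else:
        print(f"OK: disparate impact ratio {ratio:.2f}")
```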

Evaluating AI training data and simulating real scenarios before artificial intelligence goes live helps identify potential biases before they can cause damage. This is essential for today's high-performance machine learning applications in particular, because their algorithms are often opaque and can therefore easily conceal built-in bias. In any case, every AI user must understand that bias cannot be detected reliably by hand; there is no way around using adequate technology for this analysis. And this is the path companies must take, because suppressing prejudice in AI decisions is an absolute must, especially in light of the current discussions about the ethical implications of AI and the avoidance of social injustice.
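
One way to simulate such a scenario before go-live, sketched here under the assumption that a trained model and a held-out set of applications are at hand, is a counterfactual test: swap the suspected proxy attribute and count how often the decision changes.

```python
# One possible pre-deployment simulation (assumed setup): replay held-out
# applications with the suspected proxy attribute swapped and count how often
# the decision flips. Frequent flips suggest the proxy, not the applicant's
# merits, drives the outcome. Names below are illustrative.
import pandas as pd

def proxy_flip_rate(model, applications: pd.DataFrame,
                    proxy_col: str, swap: dict) -> float:
    original = model.predict(applications)
    counterfactual = applications.copy()
    counterfactual[proxy_col] = counterfactual[proxy_col].map(swap)
    flipped = model.predict(counterfactual) != original
    return float(flipped.mean())

# Usage with the hypothetical loan model sketched earlier:
# rate = proxy_flip_rate(model, X_test, proxy_col="postcode", swap={1: 2, 2: 1})
# print(f"{rate:.1%} of decisions change when only the postcode changes")
```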

Rob Walker is Vice President of Decision Management & Analytics at Pegasystems
