More and more systems today use Machine Learning (ML) and Artificial Intelligence (AI): algorithms that learn on their own to make decisions. For example, when a person applies for a loan, a computer analyzes the application, advises which checks should take place, and decides whether or not the loan can be granted. Many of these algorithms work like a “black box”: people don’t know exactly how they do their job on the inside. This becomes a problem in many sectors when these algorithms start to discriminate on the basis of, for example, origin, sexual orientation, or gender.

An example of this was the automated credit assessment of the Apple Credit Card. In one married couple, the husband was offered 20 times more credit than his wife [1], even though both filled in exactly the same financial data. Apple’s black-box algorithm thus “discriminated” purely on the basis of gender. Closer to home, the Dutch Tax and Customs Administration (the childcare benefits affair) also came under fire because a citizen’s nationality was used as an indicator of fraud [2].

As AI plays an increasingly prominent role in our society, understanding and combating discrimination by AI will become increasingly important. CodeNext21 specializes in “clear box” machine learning: custom algorithms that not only make a decision but also explain how that decision was made. Through clear visualizations, for example, the algorithms’ decisions and the factors that played a role are made transparent.

In the example of the Tax and Customs Administration, this discrimination could have been caught early if an explanation had accompanied the result, such as “this application is rejected because you have dual nationality”. That way it is always clear to users of these algorithms why a decision was made, and one can intervene quickly and purposefully when decisions are made for the wrong reasons: this prevents discrimination by algorithms.
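The idea of a decision that carries its own explanation can be sketched with a toy transparent model. All feature names, weights, and the scoring rule below are invented for illustration; this is not CodeNext21’s actual algorithm, only a minimal example of the “clear box” principle:

```python
# Toy "clear box" model: a linear score where every feature's
# contribution is visible, so the explanation falls out of the model
# itself. Weights and features are hypothetical.
WEIGHTS = {"income": 0.5, "debt": -0.8, "nationality_flag": -2.0}
THRESHOLD = 0.0

def decide_and_explain(applicant):
    # Per-feature contribution to the final score.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    decision = "approved" if score >= THRESHOLD else "rejected"
    # Rank features by absolute impact so the dominant reason comes first.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    reason = ", ".join(f"{f}: {c:+.1f}" for f, c in ranked)
    return decision, reason

decision, reason = decide_and_explain(
    {"income": 3.0, "debt": 1.0, "nationality_flag": 1.0}
)
print(decision)  # rejected
print(reason)    # nationality_flag: -2.0, income: +1.5, debt: -0.8
```

Because the explanation lists `nationality_flag` as the dominant factor, a reviewer can immediately see that a protected attribute drove the rejection and intervene, exactly the kind of early detection described above.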



About the Author: MetMuller