Beating the bias in AI
Originally published on Foundry4
Why are solutions to address AI bias so important for business and society?
As AI becomes a dominant trend for businesses, the predictive algorithms currently being implemented from the front to back of organisations will come under greater scrutiny.
Take the case of Amazon, for example. It detected considerable gender bias in its recruitment algorithms, raising major questions about how reliable and ethical such algorithms are.
Then there’s Apple. In November last year, a US financial regulator launched an inquiry into its new credit card over claims that it gave different credit limits to women and men. The issue came to light when a prominent tech entrepreneur tweeted that he had been offered a credit limit 20 times higher than his wife’s, despite her having the better credit rating.
If even the giants of tech, with all their experts and resources, are finding damaging biases in their AI algorithms, what does this mean for smaller organisations with less advanced tech?
As companies rush to implement machine learning use cases without being ready to deal with inadequately trained models, it is crucial that business and society understand the implications of the bad decisions those models’ outputs can produce.
The impact of algorithmic bias in criminal justice
Already, we have seen some of the dangers of relying blindly on algorithmic outputs in the impact of the predictive algorithms used by some US police departments in the form of criminal risk assessment algorithms.
Historical crime statistics have been fed into the tool to find patterns in defendants’ profiles. Correlations have then been interpreted as causation and distilled into a single recidivism score – a numerical estimate of a defendant’s likelihood to reoffend. Judges have used these scores in sentencing to make decisions such as the type of rehabilitation a defendant received, whether they were granted bail, and the length of their sentence. High scores lead to severity, low scores to leniency.
Far from reducing the human biases that judges might hold, this AI model perpetuated and amplified existing discrimination and inequality in society. It released dangerous offenders from more privileged backgrounds while keeping low-risk defendants from low-income and minority backgrounds – who had been assigned high scores precisely because of those backgrounds – in prison for longer than they should have been.
Populations already disproportionately targeted by law enforcement had this targeting embedded into the system. This led to a vicious cycle: not only amplifying inequality but also failing the system’s stated objective of allocating resources efficiently and reducing prison populations without a rise in crime.
In fact, the score proved overwhelmingly unreliable at forecasting violent crime: only 20 per cent of the people predicted to commit violent crimes actually went on to do so.
The impact of algorithmic bias in recruiting
It’s not just police departments and the criminal justice system where AI bias can shape and destroy lives. Biased algorithms in the private sector also affect individuals, society, and the performance of the companies implementing them.
One field of particular risk is recruitment, where qualified and skilled candidates can be lost in the preliminary round because of the keyword identification criteria programmed into online screening and assessment. Candidates who don’t fit the pattern of the historical hires analysed by the algorithm (perhaps because they come from more diverse backgrounds) are often never considered by a human, let alone interviewed.
Individual jobs are an important factor in shaping future careers and standards of living. Candidates from minority backgrounds who are screened out by AI bias could be missing out on benefits that also affect their families and communities. Being rejected on the basis of any of the protected characteristics under the Equality Act (age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, and sexual orientation) can have a knock-on effect when seeking finance, housing and further education. In turn, this reduces opportunities in life, causing a spiral of discrimination.
The bias also hurts the recruiting companies themselves, which miss out on the talent crucial to building the kind of purposeful organisation increasingly seen as necessary to succeed in modern society.
The impact of algorithmic bias in financial services
Another field where biased predictive algorithms carry a serious risk of negative impact is financial services – particularly financial risk assessment, where a company decides how much of a loan a customer can access and what premiums will apply.
Even with regulation, caution, and the increasingly sophisticated methods now used in credit scoring, rich data and machine learning can still produce inaccurate risk scores. Retail investment can also be affected when a financial institution recommends which investments a customer should make and how they should manage their savings based on personal information.
If bias is introduced by unrepresentative data or skewed modelling, it will clearly lead to inadequate money management for certain customers, who end up paying higher fees or carrying more risk than they should.
A lesson in decision making
When humans make a decision, we might be aware of many of the details, but we don’t necessarily use all of them consciously: we approximate the conclusion, inferring the image from its contour.
With complex algorithms, far more details are used to reach an outcome, and there is no ethical judgement about which ones should be included. As the examples above show, using all this information is what makes these algorithms so powerful – but also what can make them vulnerable to unfair discrimination.
How to prevent and correct bias – algorithmically
We believe it is possible to stop the bias, unfairness, and bad business decisions currently associated with AI algorithms.
To do so, we created our AI bias scanning tool to show that it is possible to solve some of the issues around algorithmic bias that have a huge impact on society – and are also fundamentally bad for business.
We start by identifying the groups of people in protected categories that AI algorithms get wrong over and over again. Then we obfuscate the data points or details associated with those people that cause bias in the algorithm. If the algorithm is not exposed to these details in its training data, it is less likely to use them when it actually makes a decision. We hide this data from the machine, but we share it with AI analysts through our insights platform.
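A minimal sketch of this idea, assuming tabular training data in a pandas DataFrame (all column names here are hypothetical): protected attributes are dropped before the model ever sees the data, but retained separately so analysts can still audit outcomes by group.

```python
import pandas as pd

# Hypothetical applicant data; column names are illustrative only.
df = pd.DataFrame({
    "income":  [30_000, 52_000, 41_000, 75_000],
    "tenure":  [2, 8, 5, 12],
    "sex":     ["F", "M", "F", "M"],  # protected attribute
    "outcome": [0, 1, 0, 1],          # historical decision
})

PROTECTED = ["sex"]

# Features the model is allowed to see: protected details are hidden
# from training so the algorithm cannot learn to use them directly.
X_train = df.drop(columns=PROTECTED + ["outcome"])
y_train = df["outcome"]

# The hidden attributes are kept to one side for human analysts,
# so historical outcomes can still be compared per group.
analyst_view = df[PROTECTED + ["outcome"]]

print(X_train.columns.tolist())                      # ['income', 'tenure']
print(analyst_view.groupby("sex")["outcome"].mean())
```

One caveat worth noting: dropping protected columns does not remove proxy variables (such as postcode) that correlate with them, which is why auditing the model’s outputs remains essential.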
Garbage in, garbage out
We do this in the initial stage of our diagnostic, before a model has been built: we look at the raw data and help companies check what potential biases a predictive model built on it might have.
Bias in the initial data tends to be transferred to the algorithmic model (the garbage in, garbage out principle), so ensuring that only quality data is used for training provides greater assurance.
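As an illustration of the kind of pre-modelling check this stage involves (the data and the warning threshold below are purely hypothetical), one simple red flag is a large gap in positive-outcome rates between groups in the raw training data:

```python
import pandas as pd

# Hypothetical historical hiring data; names are illustrative.
history = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "B"],
    "hired": [1, 0, 0, 1, 1, 1, 0, 1],
})

# Positive-outcome rate per group in the raw training data.
rates = history.groupby("group")["hired"].mean()
print(rates)

# A simple pre-modelling red flag: a large gap between groups means
# any model trained on this data is likely to inherit that gap.
gap = rates.max() - rates.min()
if gap > 0.2:  # threshold chosen arbitrarily for illustration
    print(f"Warning: outcome rates differ by {gap:.0%} between groups")
```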
The second stage takes place once the company has built the algorithm, started testing, and put it into production. We help analyse the output and audit what it is recommending, identifying any biased predictions through a set of metrics and statistics that guide the company on which factors or features need addressing – for example, whether the outputs stack up against the standards they should meet.
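As a sketch of what such an output audit might compute (the data and column names are hypothetical), two standard fairness statistics are the selection rate and the true positive rate per protected group:

```python
import pandas as pd

# Hypothetical model outputs joined back with protected attributes.
audit = pd.DataFrame({
    "sex":        ["F", "F", "F", "M", "M", "M"],
    "prediction": [1, 0, 0, 1, 1, 0],
    "actual":     [1, 0, 1, 1, 1, 0],
})

# Selection rate per group: the share of positive predictions
# (the basis of demographic parity comparisons).
selection = audit.groupby("sex")["prediction"].mean()

# True positive rate per group: of those who actually had a positive
# outcome, how many the model predicted positively (equal opportunity).
tpr = audit[audit["actual"] == 1].groupby("sex")["prediction"].mean()

report = pd.DataFrame({"selection_rate": selection, "tpr": tpr})
print(report)
```

Large gaps between groups in either column are the kind of signal that would prompt a closer look at the underlying factors or features.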
Understanding customers and opportunities
The lessons we’ve learned from helping financial services companies tackle AI bias are applicable for any business that is using machine learning algorithms.
Immediate actions can start with questioning the structure of the teams involved in briefing and building models. For instance, are they aware of the protected groups, and are they diverse enough to understand how factors surrounding those groups might skew the data being used?
Is there advice they can seek, and how are data scientists testing and auditing outputs? It is good to put ethical AI strategies in place at a corporate level, as many companies are now doing, but these should not stand in the way of immediate practical action that can make a huge difference to business and the world we live in.
Crucially for commercial businesses, failing to address bias could mean whole customer segments are missed and excluded. Getting visibility of these segments is just the first step: companies then need to take a hard look at why those customers were misidentified and excluded. This is a very positive opportunity to get closer to customers, and it can lead to new approaches.
Beating the bias
When it comes to creating a business that is inclusive to all, do you just need more data, or is there something about your marketing or proposition which doesn’t work for particular groups? Knowing these groups’ true potential can be a sizeable commercial opportunity – and not something most companies can afford to miss out on.
We know from experience that AI bias negatively impacts business and the rights of individuals in our society. It’s important we act now to stop reinforcing the prejudice against certain groups that exists today. Ask yourself: what will the future look like if we don’t?