It started with a single tweet in November 2019. David Heinemeier Hansson, a high-profile tech entrepreneur, lashed out at Apple’s newly launched credit card, calling it “sexist” for offering his wife a credit limit one-twentieth the size of his own.

The allegations spread like wildfire, with Hansson stressing that artificial intelligence – now widely used to make lending decisions – was to blame. “It does not matter what the intent of individual Apple reps are, it matters what THE ALGORITHM they’ve placed their complete faith in does. And what it does is discriminate. This is fucked up.”

While Apple and its underwriter, Goldman Sachs, were ultimately cleared by US regulators of violating fair lending rules last year, the episode rekindled a wider debate around the use of AI across public and private industries.

Politicians in the European Union are now planning to introduce the first comprehensive global template for regulating AI, as institutions increasingly automate routine tasks in an attempt to boost efficiency and ultimately cut costs.

That legislation, known as the Artificial Intelligence Act, will have consequences beyond EU borders, and like the EU’s General Data Protection Regulation, will apply to any institution, including UK banks, that serves EU customers. “The impact of the act, once adopted, cannot be overstated,” said Alexandru Circiumaru, European public policy lead at the Ada Lovelace Institute.

Depending on the EU’s final list of “high risk” uses, the act would introduce strict rules around how AI is used to filter job, university or welfare applications, or – in the case of lenders – to assess the creditworthiness of potential borrowers.

EU officials hope that with extra oversight and restrictions on the type of AI models that can be used, the rules will curb the kind of machine-based discrimination that could influence life-altering decisions such as whether you can afford a home or a student loan.

“AI can be used to analyse your entire financial health including spending, saving, other debt, to arrive at a more holistic picture,” said Sarah Kocianski, an independent financial technology consultant. “If designed correctly, such systems can provide wider access to affordable credit.”

But one of the biggest dangers is unintentional bias, in which algorithms end up denying loans or accounts to certain groups including women, migrants or people of colour.


Part of the problem is that most AI models can only learn from the historical data they have been fed, meaning they will learn which kinds of customer have previously been lent to and which have been marked as unreliable. “There is a danger that they will be biased in terms of what a ‘good’ borrower looks like,” Kocianski said. “Notably, gender and ethnicity are often found to play a part in the AI’s decision-making processes based on the data it has been taught on: factors that are in no way relevant to a person’s ability to repay a loan.”
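To make that concrete, here is a minimal sketch in Python on entirely synthetic data (the group penalty, income figures and approval threshold are all invented for illustration): a classifier fitted to historical approvals that penalised one group reproduces the penalty, even though repayment ability is distributed identically across both groups.

```python
# Synthetic sketch only: a model trained on biased historical lending
# decisions inherits the bias. All numbers are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)    # 0 = historically favoured group, 1 = not
income = rng.normal(50, 12, n)   # income in £000s; same distribution in both groups

# Historical approvals were driven by income, plus an arbitrary penalty on group 1.
approved = (income - 8 * group + rng.normal(0, 5, n)) > 45

X = np.column_stack([income, group])
model = LogisticRegression().fit(X, approved)

# The model has learned the historical penalty and replays it into the future.
for g in (0, 1):
    print(f"predicted approval rate, group {g}: {model.predict(X[group == g]).mean():.1%}")
```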

Furthermore, some models are designed to be blind to so-called protected characteristics, meaning they are not meant to consider the influence of gender, race, ethnicity or disability. But those AI models can still discriminate as a result of analysing other data points such as postcodes, which may correlate with historically disadvantaged groups that have never previously applied for, secured, or repaid loans or mortgages.
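A companion sketch, again on invented synthetic data, shows the proxy effect: the model below is never shown the protected attribute, but a correlated “postcode” feature leaks it back in, and the approval gap survives.

```python
# Synthetic sketch of proxy discrimination: the protected attribute is
# withheld from the model, but a correlated feature stands in for it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)               # protected characteristic (hidden from model)
postcode = group + rng.normal(0, 0.3, n)    # proxy feature, strongly correlated with group
income = rng.normal(50, 12, n)              # in £000s; identical across groups
approved = (income - 8 * group + rng.normal(0, 5, n)) > 45  # biased history, as before

# The model is "blind": it is fitted on income and postcode only, never on group.
X_blind = np.column_stack([income, postcode])
model = LogisticRegression().fit(X_blind, approved)

# Approval rates still diverge, because postcode stands in for the hidden attribute.
for g in (0, 1):
    print(f"predicted approval rate, group {g}: {model.predict(X_blind[group == g]).mean():.1%}")
```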


And in most cases, when an algorithm makes a decision, it is difficult for anyone to understand how it came to that conclusion, resulting in what is commonly referred to as “black-box” syndrome. It means that banks, for example, might struggle to explain what an applicant could have done differently to qualify for a loan or credit card, or whether changing an applicant’s gender from male to female might result in a different outcome.

Circiumaru said the AI act, which could come into effect in late 2024, would benefit tech companies that managed to develop what he called “trustworthy AI” models that are compliant with the new EU rules.

Darko Matovski, the chief executive and co-founder of London-headquartered AI startup causaLens, believes his firm is among them.

The startup, which launched in January 2021, has already licensed its technology to the likes of the asset manager Aviva and the quant trading firm Tibra, and says a number of retail banks are in the process of signing deals with the firm before the EU rules come into force.


The entrepreneur said causaLens offers a more advanced form of AI that avoids potential bias by accounting for and controlling for discriminatory correlations in the data. “Correlation-based models are learning the injustices from the past and they’re just replaying it into the future,” Matovski said.
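What “controlling for” a discriminatory correlation might look like in its simplest form can be sketched as below. This is plain regression adjustment on synthetic data, not causaLens’s actual method, and real causal models go considerably further; but it shows the idea that once group membership is included as a covariate, the postcode proxy stops absorbing the historical penalty.

```python
# Generic illustration of adjusting for a discriminatory correlation
# (not causaLens's method). Synthetic data, invented numbers.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 10_000
group = rng.integers(0, 2, n)
postcode = group + rng.normal(0, 0.3, n)   # proxy correlated with group
income = rng.normal(50, 12, n)
approved = (income - 8 * group + rng.normal(0, 5, n)) > 45

# Naive model: the postcode proxy absorbs the historical group penalty.
naive = LogisticRegression().fit(np.column_stack([income, postcode]), approved)

# Adjusted model: with group included as a covariate, the penalty is attributed
# to group itself, so the proxy's weight shrinks towards zero.
adjusted = LogisticRegression().fit(np.column_stack([income, postcode, group]), approved)

print(f"postcode weight, naive:    {naive.coef_[0][1]:+.2f}")
print(f"postcode weight, adjusted: {adjusted.coef_[0][1]:+.2f}")
```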

He believes the proliferation of so-called causal AI models like his own will lead to better outcomes for marginalised groups who may have missed out on educational and financial opportunities.

“It is really hard to understand the scale of the damage already caused, because we cannot really inspect this model,” he said. “We don’t know how many people haven’t gone to university because of a haywire algorithm. We don’t know how many people weren’t able to get their mortgage because of algorithm biases. We just don’t know.”

Matovski said the only way to protect against potential discrimination was to use protected characteristics such as disability, gender or race as inputs, but to guarantee that the decision did not change regardless of those specific inputs.
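One way to picture that guarantee is the “flip test” sketched below: the protected attribute is an explicit input, and the check passes only if flipping it for every applicant, with everything else held fixed, never changes a single decision. The function name and column index here are hypothetical, standing in for whatever pipeline a lender actually runs.

```python
# Hypothetical invariance check: does any decision change when the protected
# attribute is flipped and all other inputs are held fixed?
import numpy as np

def passes_flip_test(model, X, protected_col):
    """True only if no applicant's decision changes when the binary
    protected attribute in column `protected_col` is flipped."""
    X_flipped = X.copy()
    X_flipped[:, protected_col] = 1 - X_flipped[:, protected_col]
    return bool((model.predict(X) == model.predict(X_flipped)).all())

# Run against the earlier synthetic sketches, e.g.
# passes_flip_test(model, X, protected_col=1), this would return False:
# those models do change their decisions when the attribute flips.
```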

He said it was a matter of ensuring AI models reflected our current social values and avoided perpetuating any racist, ableist or misogynistic decision-making from the past. “Society thinks that we should treat everybody equal, no matter what gender, what their postcode is, what race they are. So then the algorithms must not only try to do it, but they must guarantee it,” he said.


While the EU’s new rules are likely to be a big step in curbing machine-based bias, some experts, including those at the Ada Lovelace Institute, are pushing for consumers to have the right to complain and seek redress if they think they have been put at a disadvantage.

“The risks posed by AI, especially when applied in certain specific circumstances, are real, significant and already present,” Circiumaru said.

“AI regulation should ensure that individuals will be appropriately protected from harm by approving or not approving uses of AI and have remedies available where approved AI systems malfunction or result in harms. We cannot pretend approved AI systems will always function perfectly and fail to prepare for the instances when they won’t.”
