Britain Launches £400,000 Fairness Innovation Challenge To Tackle AI Discrimination
With the rise of ChatGPT and other Artificial Intelligence (AI) platforms, AI-linked discrimination and bias have become common concerns. The UK government is trying to address this by launching a new innovation challenge.
This comes at a time when a Mesh-AI report suggested that 90 per cent of businesses consider AI essential for them, but very few know how to use it.
On Monday, October 16, the Department for Science, Innovation and Technology, along with the Equality and Human Rights Commission and the Centre for Data Ethics and Innovation, launched the ‘Fairness Innovation Challenge’, through which the Rishi Sunak government is investing £400,000 in UK companies that develop innovative solutions to AI discrimination.
As part of the programme, UK companies can submit proposals for reducing AI discrimination, especially in the healthcare sector, through the Innovate UK portal. The UK government is also organising an in-person event on October 19 and a virtual briefing on October 24 to explain the strategy.
Through this programme, the UK government will select three ground-breaking homegrown solutions that address AI discrimination and bias, and each successful bid stands to receive up to £130,000 in funding.
This comes at a time when Britain is gearing up to host the first AI Safety Summit, where industry experts will discuss strategies to minimise AI-related risks and explore long-term opportunities in the sector.
In the first round of submissions, the Centre for Data Ethics and Innovation will select UK companies that ensure “fairness underpins the development of AI models”.
The Sunak government has underlined that participating UK companies must suggest strategies that can be applied across “a wider social context”.
The UK government has made fairness in AI systems a core principle of its AI Regulation White Paper.
According to the government, the UK cannot fully utilise the opportunities presented by Artificial Intelligence, such as the NHS using AI to screen for breast cancer or tackling climate change with AI, unless AI discrimination and bias are weeded out.
As part of the Fairness Innovation Challenge, King’s College London will select UK AI companies that can detect and address potential bias in the generative AI model developed by Health Data Research UK in support of the NHS AI Lab.
UK AI researchers built this model by scanning over 10 million anonymised patient records to predict health outcomes.
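To give a sense of what “detecting bias” in such a model can involve in practice, below is a minimal, hypothetical Python sketch of one widely used fairness check, the demographic parity gap, which compares a model’s positive-prediction rates across demographic groups. The function, data, and group labels are illustrative assumptions only and do not reflect the actual Health Data Research UK model or any NHS data.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest gap in positive-prediction rate between
    demographic groups, along with the per-group rates."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred  # pred is 0 or 1
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy data: a screening model that flags patients for follow-up (1 = flagged).
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_gap(preds, groups)
print(rates)               # {'A': 0.6, 'B': 0.4}
print(f"gap = {gap:.2f}")  # 0.20; a large gap can flag disparate treatment
```

Real bias audits go well beyond a single metric, but a gap like this between patient groups is the kind of signal challenge entrants would be expected to detect and then mitigate.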
In the second strand, UK companies will be selected based on new solutions for tackling AI discrimination that can be applied to a wide range of “open use cases”, including AI fraud, law enforcement, and fairness in recruitment software, among others.
Applications for the Fairness Innovation Challenge are open until December 13, and the selected UK companies will be announced on January 30, 2024.
Fairness Innovation Challenge to align AI systems with UK laws
Speaking about the Fairness Innovation Challenge, the UK Minister for AI, Viscount Camrose, said: “The UK government is putting British talent at the forefront of making AI safer, fairer, and trustworthy through this programme.”
“The opportunities presented by AI are enormous, but to fully realise its benefits we need to tackle its risks,” underlined the Minister.
Viscount Camrose further explained why the UK needs its own system to tackle AI discrimination and bias, as most “AI technical bias and audit tools are developed in the US” and are unfit for “UK laws and regulations”.
“The challenge will promote a new UK-led approach which puts the social and cultural context at the heart of how AI systems are developed, alongside wider technical considerations,” said the Minister.
Currently, UK companies face several hurdles in tackling AI bias and discrimination, including shortages of demographic and ethnicity data.
The Chair of the Equality and Human Rights Commission, Baroness Kishwer Falkner, echoed this when she said: “Without careful design and proper regulation, AI systems have the potential to disadvantage protected groups, such as people from ethnic minority backgrounds and disabled people.”
“Tech developers and suppliers have a responsibility to ensure that AI systems do not discriminate. Public authorities also have a legal obligation under the Public Sector Equality Duty to understand the risk of discrimination with AI, as well as its capacity for mitigating bias and its potential to support people with protected characteristics,” Baroness Falkner added.