
Fairness Innovation Challenge: Up to £400,000 for AI Bias Solutions


Summary

The UK Department for Science, Innovation and Technology, through the Centre for Data Ethics and Innovation, has launched the Fairness Innovation Challenge, offering up to £400,000 in government investment to UK companies. The competition will fund up to three solutions, with individual awards of up to £130,000 each, focusing on innovative approaches to tackling bias and discrimination in AI systems, with an initial focus on healthcare and other real-world use cases. Submissions close on 13 December 2023.

What changed

The UK government has announced a new Fairness Innovation Challenge offering up to £400,000 in total funding to support innovative solutions addressing bias and discrimination in AI systems. The competition, administered by the Centre for Data Ethics and Innovation, will select up to three winning proposals, with individual awards of up to £130,000 each. The challenge specifically seeks UK-led approaches that incorporate social and cultural context into AI development, in contrast to existing US-developed bias audit tools that may not align with UK laws and regulations.

UK companies developing AI solutions, particularly in healthcare applications, should consider this funding opportunity to develop fairness-focused technologies. Successful applicants will help establish UK leadership in responsible AI development and contribute to ensuring AI systems do not perpetuate societal biases. The initiative complements the government's AI Regulation White Paper principles and precedes the UK's AI Safety Summit.

What to do next

  1. UK companies may submit applications for the Fairness Innovation Challenge by the 13 December 2023 deadline
  2. Participants should develop solutions that embed social and cultural context alongside technical considerations for AI fairness
  3. Bidders should align proposals with UK laws and regulations rather than relying on US-developed bias audit tools

Archived snapshot

Apr 15, 2026

GovPing captured this document from the original source. If the source has since changed or been removed, this is the text as it existed at that time.

Press release

New innovation challenge launched to tackle bias in AI systems

UK companies can apply for up to £400,000 in government investment to fund innovative new solutions which tackle bias and discrimination in AI systems.

From: Department for Science, Innovation and Technology; Centre for Data Ethics and Innovation; Information Commissioner's Office; Equality and Human Rights Commission; and Viscount Camrose

Published 16 October 2023
This was published under the 2022 to 2024 Sunak Conservative government

  • up to £400,000 in investment up for grabs as Fairness Innovation Challenge opens for submissions
  • new scheme funds innovative solutions to tackle bias and discrimination in AI
  • scheme to focus on healthcare and other real-world use cases

UK companies can apply for up to £400,000 in government investment from today to fund innovative new solutions which tackle bias and discrimination in AI systems. The competition will look to support up to three ground-breaking homegrown solutions, with successful bids securing a funding boost of up to £130,000 each.

It comes ahead of the UK hosting the world’s first major AI Safety Summit to consider how to best manage the risks posed by AI while harnessing the opportunities in the best long-term interest of the British people.

The first round of submissions to the Department for Science, Innovation and Technology’s Fairness Innovation Challenge, delivered through the Centre for Data Ethics and Innovation, will nurture the development of new approaches to ensure fairness underpins the development of AI models.

The challenge will tackle the threats of bias and discrimination by encouraging new approaches which will see participants building a wider social context into the development of their models from the off.

Fairness in AI systems is one of the government’s key principles for AI, as set out in the AI Regulation White Paper. AI is a powerful tool for good, presenting near limitless opportunities to grow the global economy and deliver better public services.

In the UK, the NHS is already trialling AI to help clinicians identify cases of breast cancer, and the technology offers enormous potential to develop new drugs and treatments, and help us tackle pressing global challenges like climate change. These opportunities though cannot be realised without first addressing risks, in this instance tackling bias and discrimination.

Minister for AI, Viscount Camrose, said:

The opportunities presented by AI are enormous, but to fully realise its benefits we need to tackle its risks.

This funding puts British talent at the forefront of making AI safer, fairer, and trustworthy. By making sure AI models do not reflect bias found in the world, we can not only make AI less potentially harmful, but ensure the AI developments of tomorrow reflect the diversity of the communities they will help to serve.

While there are a number of technical bias audit tools on the market, many of these are developed in the United States, and although companies can use these tools to check for potential biases in their systems, they often fail to fit alongside UK laws and regulations. The challenge will promote a new UK-led approach which puts the social and cultural context at the heart of how AI systems are developed, alongside wider technical considerations.

The Challenge will focus on two areas. First, a new partnership with King’s College London will offer participants from across the UK’s AI sector the chance to work on potential bias in their generative AI model. The model, developed with Health Data Research UK with the support of NHS AI Lab, is trained on the anonymised records of more than 1 million patients to predict possible health outcomes.

Second is a call for ‘open use cases’. Applicants can propose new solutions which tackle discrimination in their own unique models and areas of focus, including tackling fraud, building new law enforcement AI tools, or helping employers build fairer systems which will help analyse and shortlist candidates during recruitment.

Companies currently face a range of challenges in tackling AI bias, including insufficient access to data on demographics, and ensuring potential solutions meet legal requirements. The CDEI are working in close partnership with the Information Commissioner’s Office (ICO) and the Equality and Human Rights Commission (EHRC) to deliver this Challenge. This partnership allows participants to tap into the expertise of regulators to ensure their solutions marry up with data protection and equality legislation.

Stephen Almond, Executive Director of Technology, Innovation and Enterprise at the ICO, said:

The ICO is committed to realising the potential of AI for the whole of society, ensuring that organisations develop AI systems without unwanted bias.

We’re looking forward to supporting the organisations involved in the Fairness Challenge with the aim of mitigating the risks of discrimination in AI development and use.

The challenge will also offer companies guidance on how assurance techniques can be applied in practice to AI systems to achieve fairer outcomes. Assurance techniques are methods and processes which are used to verify and ensure systems and solutions meet certain standards, including those related to fairness.

Baroness Kishwer Falkner, Chairwoman of the Equality and Human Rights Commission, said:

Without careful design and proper regulation, AI systems have the potential to disadvantage protected groups, such as people from ethnic minority backgrounds and disabled people.

Tech developers and suppliers have a responsibility to ensure that the AI systems do not discriminate.

Public authorities also have a legal obligation under the Public Sector Equality Duty to understand the risk of discrimination with AI, as well as its capacity for mitigating bias and its potential to support people with protected characteristics.

The Fairness Innovation Challenge will be instrumental in supporting the development of solutions to mitigate bias and discrimination in AI, ensuring that the technology of the future is used for the good of all. I wish all participants the best of luck in the challenge.

The Fairness Innovation Challenge closes for submissions at 11am on Wednesday 13 December 2023, with successful applicants notified of their selection on 30 January 2024.


Updates to this page

Published 16 October 2023

Named provisions

Fairness Innovation Challenge
AI Regulation White Paper


About this page

What is GovPing?

Every important government, regulator, and court update from around the world. One place. Real-time. Free.

What's from the agency?

Source document text, dates, docket IDs, and authority are extracted directly from CDEI.

What's AI-generated?

The summary, classification, recommended actions, deadlines, and penalty information are AI-generated from the original text and may contain errors. Always verify against the source document.

Classification

Agency
CDEI
Published
October 16th, 2023
Compliance deadline
December 13th, 2023
Instrument
Notice
Legal weight
Non-binding
Stage
Final
Change scope
Minor

Who this affects

Applies to
Technology companies, manufacturers, healthcare providers
Industry sector
5112 Software & Technology
Activity scope
Innovation funding, AI fairness research, bias detection solutions
Geographic scope
United Kingdom (GB)

Taxonomy

Primary area
Artificial Intelligence
Operational domain
Procurement
Topics
Data Privacy, Healthcare, Product Safety
