The UK Government has set out plans to regulate the future use of AI technology, in a departure from the EU's rival AI Act proposal, which was launched in April 2021.

The proposal focuses on establishing a regulatory environment around the use of AI technology that supports innovation and maintains a high standard of data protection to earn public trust. The EU's regulations mirror these aims; however, the two proposals differ, with the UK planning a less centralised approach to overseeing data use.

The rules will apply to all industries that digital data serves, but regulators may apply them as they see fit, in accordance with the ways AI is used in individual sectors. These regulators include Ofcom and the Competition and Markets Authority, which will apply six principles to manage AI across a range of use cases.

The new strategy's timing coincides with the Data Protection and Digital Information Bill, introduced to Parliament, which includes measures to facilitate the appropriate use of AI while minimising compliance burdens on companies to boost the economy.

The Biometrics Institute recently published the findings of its annual industry survey which indicated a growing trend in user mistrust around the use of AI and biometric technologies to collect personal data.

It revealed a consensus amongst biometric stakeholders that some areas should be off-limits in relation to biometrics. 57% expressed deep concern that industries are not ensuring user privacy and data protection, which could hamper the further growth of those sectors.

Expanding on the Government’s aims, Digital Minister Damian Collins said:

“We want to make sure the UK has the right rules to empower businesses and protect people as AI and the use of data keeps changing the ways we live and work”.

“It is vital that our rules offer clarity to businesses, confidence to investors and boost public trust. Our flexible approach will help us shape the future of AI and cement our global position as a science and tech superpower”.

The six core principles require developers and users to:

  • Ensure that AI is used safely
  • Ensure that AI is technically secure and functions as designed
  • Make sure that AI is appropriately transparent and explainable
  • Consider fairness
  • Identify a legal person to be responsible for AI
  • Clarify routes to redress or contestability

At the same time, the government is publishing its AI action plan to continue unlocking the potential of AI across industry and society, building on the more than £2.3 billion it has invested in AI since 2014.

In a 2022 update, the action plan detailed the government’s objective to increase the number of AI applications and the frequency and scale of AI developments, building upon commitments by the DCMS to enlarge available databases.

AI technology can be divided into five capabilities: machine learning, natural language processing and generation, computer vision and image processing/generation, data management and analysis, and hardware.

Around 15% of all businesses have adopted at least one use of AI technology, equating to 432,000 companies.
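As a rough cross-check of these figures (assuming the 15% share and the 432,000 companies refer to the same underlying business population, which the source does not state explicitly), the implied total number of businesses can be computed directly:

```python
# Rough cross-check of the adoption figures quoted above.
# Assumption: the ~15% share and the 432,000 adopters describe
# the same business population.
adopters = 432_000        # businesses using at least one AI technology
adoption_rate = 0.15      # ~15% of all businesses

implied_total = adopters / adoption_rate
print(f"Implied total business population: {implied_total:,.0f}")
# → Implied total business population: 2,880,000
```

The figures are consistent with a UK business population of roughly 2.9 million, on the order of official business-count statistics.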