ICO fines facial recognition database company Clearview AI

The Information Commissioner’s Office (ICO) has fined Clearview AI Inc £7,552,800 for using images of people in the UK, and elsewhere, that were collected from the web and social media to create a global online database that could be used for facial recognition.

The ICO has also issued an enforcement notice, ordering the company to stop obtaining and using the personal data of UK residents that is publicly available on the internet, and to delete the data of UK residents from its systems.

The ICO enforcement action comes after a joint investigation with the Office of the Australian Information Commissioner (OAIC), which focused on Clearview AI Inc’s use of people’s images, data scraping from the internet and the use of biometric data for facial recognition.

Clearview AI Inc has collected more than 20 billion images of people’s faces and data from publicly available information on the internet and social media platforms all over the world to create an online database. People were not informed that their images were being collected or used in this way.

The company provides a service that allows customers, including the police, to upload an image of a person to the company’s app, which is then checked for a match against all the images in the database.

The app then provides a list of images with characteristics similar to the photo supplied by the customer, together with links to the websites from which those images came.
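
The matching step described here is, at its core, a nearest-neighbour search over face embeddings. A minimal sketch of the idea in Python, using invented embedding vectors and example URLs (Clearview's actual models and data formats are not public):

```python
import math

# Hypothetical index: precomputed face embeddings mapped to source URLs.
INDEX = {
    "https://example.com/photo-a": [0.9, 0.1, 0.2],
    "https://example.com/photo-b": [0.1, 0.8, 0.3],
    "https://example.com/photo-c": [0.85, 0.15, 0.25],
}

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(query_embedding, top_k=2):
    """Return the top_k most similar indexed faces with their source URLs."""
    scored = sorted(INDEX.items(),
                    key=lambda kv: cosine(query_embedding, kv[1]),
                    reverse=True)
    return [(url, cosine(query_embedding, emb)) for url, emb in scored[:top_k]]
```

A real system would use learned embeddings from a face recognition model and an approximate nearest-neighbour index to scale to billions of images; the principle of ranking by similarity and returning source links is the same.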

Given the high number of UK internet and social media users, Clearview AI Inc’s database is likely to include a substantial amount of data from UK residents, which has been gathered without their knowledge.

Although Clearview AI Inc no longer offers its services to UK organisations, the company has customers in other countries, so the company is still using personal data of UK residents.

John Edwards, UK Information Commissioner, said:

“Clearview AI Inc has collected multiple images of people all over the world, including in the UK, from a variety of websites and social media platforms, creating a database with more than 20 billion images. The company not only enables identification of those people, but effectively monitors their behaviour and offers it as a commercial service. That is unacceptable. That is why we have acted to protect people in the UK by both fining the company and issuing an enforcement notice.

“People expect that their personal information will be respected, regardless of where in the world their data is being used. That is why global companies need international enforcement. Working with colleagues around the world helped us take this action and protect people from such intrusive activity.

“This international cooperation is essential to protect people’s privacy rights in 2022. That means working with regulators in other countries, as we did in this case with our Australian colleagues. And it means working with regulators in Europe, which is why I am meeting them in Brussels this week so we can collaborate to tackle global privacy harms.”

Details of the contraventions
The ICO found that Clearview AI Inc breached UK data protection laws by:

  • failing to use the information of people in the UK in a way that is fair and transparent, given that individuals were not made aware, and would not reasonably expect, that their personal data would be used in this way;
  • failing to have a lawful reason for collecting people’s information;
  • failing to have a process in place to stop the data being retained indefinitely;
  • failing to meet the higher data protection standards required for biometric data (classed as ‘special category data’ under the GDPR and UK GDPR);
  • asking for additional personal information, including photos, when members of the public asked whether they were on the database. This may have acted as a disincentive to individuals wishing to object to their data being collected and used.
The joint investigation was conducted in accordance with the Australian Privacy Act and the UK Data Protection Act 2018. It was also conducted under the Global Privacy Assembly’s Global Cross Border Enforcement Cooperation Arrangement and the MOU between the ICO and the OAIC.

UK govt challenged on cameras deal

The Biometrics and Surveillance Camera Commissioner Fraser Sampson has challenged central and local government ministers to clarify their positions on buying surveillance cameras from controversial tech company Hikvision, which is part-owned by the Chinese state.

The move comes after media reports quoted an unnamed Whitehall source saying that the Health Secretary, the Rt Hon Sajid Javid, had banned Hikvision from competing for new business in the Department of Health after a procurement review revealed “ethical concerns” about the company.

Professor Sampson, the independent Biometrics and Surveillance Camera Commissioner, has repeatedly challenged Hikvision to come clean about the extent of its involvement in the Chinese state’s human rights abuses in Xinjiang. However, Hikvision has failed for more than eight months to answer the questions put to it.

Professor Sampson has raised the matter in a formal letter sent to the Rt Hon Michael Ellis QC MP, Minister for the Cabinet Office and Paymaster General, and to the Rt Hon Michael Gove MP, the Secretary of State for Levelling Up, Housing and Communities, who is the minister responsible for local government. Mr Javid was copied into the correspondence.

Hikvision cameras and facial recognition technology have been implicated in systematic human rights abuses against the Uyghur people and other minorities in the Xinjiang province of China. The British Government has already formally recognised the widespread, systematic persecution in Xinjiang, including the extra-judicial detention of over a million Uyghur Muslims and other minorities. This persecution is known to rely heavily on surveillance technology, including facial recognition software designed to detect racial characteristics.

Professor Sampson said: “If it is the case that the Department of Health has ruled out Hikvision from future procurement exercises, then that is a step in the right direction as far as I am concerned.

“There are serious unanswered questions about Hikvision’s involvement in appalling human rights abuses in China. The company seems unwilling or unable to provide assurances about the ethics of some of its operations and about security concerns associated with its equipment.

“If companies won’t provide the information needed to do proper due diligence in relation to ethics and security, then they clearly should not be allowed to bid for contracts within government, or anywhere else in the public sector for that matter. If Mr Javid has banned Hikvision for those reasons, then he should be congratulated.

“If the decision as reported is true, the same considerations would apply equally to all branches of government, and, arguably, the whole of the public sector. If other areas of national and local government have carried out due diligence in relation to their human rights obligations, I’d be interested to see the information they used; if they haven’t then I’d be interested to understand how the risks are being properly addressed.”

In his letters, Mr Sampson also raises concerns about potential security implications of the use of surveillance equipment with dormant functions, such as voice recording and facial recognition, which can be switched on remotely.

UK invites tenders for mobile biometric enrolment devices

The UK government is offering a £450,000 (US$588,000) contract for the purchase of multifunctional Mobile Biometric Enrolment (MBE) devices from a single supplier.

A statement from the Home Office said the aim of the project is to reduce the need for wet ink capture and enable a solution which can connect to the Authority’s biometrics database to digitally enrol fingerprint and other biographic data.

The solution will consist of a fingerprint scanner which includes MRZ functionality, and can communicate with an Android smartphone with an electronic notebook application. The tendered contract will also include a requirement for maintenance and support of the devices for a minimum 3-year period.

The contract is for the purchase of 150 Mobile Biometric Enrolment devices with a minimum 12-month warranty and the ability to extend the warranty for the life of the contract. Devices will need to be mobile (hand-held) and able to enrol biometric and biographic data, rather than serve verification purposes only. The contract will run for three years, with the option to extend for two further 12-month periods to cover ongoing support services. A software development kit will be required to integrate the device with the Home Office’s Android mobile phones, and additional support will be expected during this time to ensure successful integration. Full details of the requirement can be found in the ITT documentation.

The bidder’s proposed device will be physically tested as part of the ITT evaluation criteria, to ensure it meets the need for both identity verification and enrolment onto the Authority’s biometrics database in the environments in which Immigration Officers are expected to use it (detailed in the procurement documents). Suppliers should note that three product/prototype devices must be with the Authority for testing by the time the ITT phase of the Restricted Procedure closes.

If a Supplier fails to submit a device by this date and time, they may be deemed to have not submitted a compliant bid and that may lead to exclusion from the competition at the Authority’s absolute discretion.

UK legislation set to make digital identities more trustworthy and secure

The UK has said organisations will need to gain a new trustmark to show they can handle people’s identity data in a safe and consistent way.

The new Office for Digital Identities and Attributes has been established to oversee strong security and privacy standards for digital IDs.

People will be able to easily and quickly prove their identity using digital methods instead of having to rely on traditional physical documents, under new plans unveiled by the government today.

Following a public consultation, the government has announced it will introduce legislation to make digital identities as trusted and secure as official documents such as passports and driving licences.

Digital identities, which are a virtual form of ID, reduce the time, effort and expense that sharing physical documents can take when people need to provide legal proof of who they are, for example when buying a home or starting a new job.

A new Office for Digital Identities and Attributes (ODIA) will be set up in the Department for Digital, Culture, Media and Sport as an interim governing body for digital identities.

Digital identity solutions can be accessed in a number of ways such as via a phone app or website and can be used in-person or online to verify a person’s identity. It will be for people and businesses to decide what digital identity technology works for them to prove their identity, should they choose to create a digital identity at all.

For example, if a person wants to prove they are over 18 to buy age-restricted products, they could create a digital identity with a trusted organisation by sharing personal information such as their name and date of birth. This digital identity could then be used to prove to a retailer that they are over 18 without revealing the underlying personal information, boosting users’ privacy; physical documents, by contrast, may disclose a person’s name, address and date of birth.
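
The over-18 example works because a trusted issuer attests to a derived attribute rather than handing over the raw data. A toy sketch of the idea, using an HMAC with a shared demo key for brevity (a real scheme would use public-key signatures from certified identity providers, and the cutoff date is illustrative):

```python
import hmac
import hashlib
import json

ISSUER_KEY = b"demo-issuer-secret"  # stand-in for the issuer's signing key

def issue_over18_credential(name, date_of_birth):
    """The issuer checks the birth date privately, then signs only the
    derived claim; the name and date of birth never leave the issuer."""
    # Illustrative cutoff: born on or before this date means over 18.
    claim = json.dumps({"over_18": date_of_birth <= "2004-05-01"})
    sig = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def retailer_verifies(credential):
    """The retailer learns only the over-18 claim, never the underlying data."""
    expected = hmac.new(ISSUER_KEY, credential["claim"].encode(),
                        hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, credential["sig"])
            and json.loads(credential["claim"])["over_18"])
```

The design choice being illustrated is data minimisation: the credential carries a single boolean attribute, so even a compromised retailer database would not expose dates of birth.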

The ODIA will have the power to issue an easily recognised trustmark to certified digital identity organisations, to prove they meet the security and privacy standards needed to handle people’s data in a safe and consistent way.

The ODIA will ensure trust-marked organisations adhere to the highest standards of security and privacy.

Digital identities can also help tackle fraud, which hit record highs with an estimated 5 million cases in the year ending September 2021, by reducing the amount of personal data shared online and making it harder for fraudsters to obtain and use stolen identities.

The government intends to bring forward the necessary legislation when parliamentary time allows to:

  • Establish a robust and secure accreditation and certification process and trustmark so organisations can clearly prove they are meeting the highest security and privacy standards needed to use digital identities.
  • Create a legal gateway to allow trusted organisations to carry out verification checks against official data held by public bodies to help validate a person’s identity.
  • Confirm that digital forms of identification carry the same legal validity as physical forms of identification, such as physical passports.

The government is committed to ensuring digital identities are not compulsory, and people will still be able to use available paper documentation.

Data Minister Julia Lopez said:

This government is committed to unlocking the power of data to benefit people across the UK.

The legislation we’re proposing will ensure that there are trusted and secure ways for people and organisations to use digital identities, should they choose to.

Heather Wheeler MP, Parliamentary Secretary to the Cabinet Office, said:

The government is delivering a number of ambitious and interlinked policy initiatives to prepare the UK for the digital world, and to improve the lives of businesses and citizens.

These initiatives, alongside enabling legislation, will help ensure the UK is able to take full advantage of the opportunities that digital identities and the wider digital economy have to offer.

I would like to thank everyone who participated in the consultation exercise. By working together, and sharing knowledge, experience and expertise, we will continue to deliver transformative digital policies.

In advance of the proposed legislation, landlords, letting agents and employers will be able to use certified new technology to carry out right to work and right to rent checks online from 6 April 2022, allowing people to prove their eligibility to work or rent more easily.

Sue Daley, Director for Technology and Innovation, techUK said:

Today’s announcements are a positive step forward in the UK’s implementation of digital identity. techUK has welcomed DCMS’s efforts in working with industry to get us to where we are today.

Given the next steps now being taken, continued cooperation between industry and government remains the best chance for a successful implementation of a digital identity ecosystem in the UK. However, we must also ensure we bring citizens on this journey with us: building public trust and confidence in Digital ID must be a key priority as we move forward.

Guest Post: Veridas on the UK’s new customer authentication rules

By Veridas

The Financial Conduct Authority has made it official: as of 14 March 2022, firms must comply with requirements for Strong Customer Authentication (SCA) for online commerce.

The new regulation means that banks and other payment service providers need to check that the person requesting access to an account or trying to make a payment is who they claim to be. It aims to enhance the security of payments and limit fraud during the authentication process.

Before SCA, e-commerce security was based on a single static password requested from customers. However, the number of interactions we make digitally is increasing exponentially, and fraud has become a significant problem, with criminals stealing more than £750m in the first half of 2021, according to Jana Mackintosh, managing director of payments at UK Finance. SCA emerges as a solution to this problem: it is a European requirement introduced to make online payments more secure and reduce the risk of fraud.

Ensure your business is compliant with SCA

Why are passwords not enough?

Passwords cannot be considered a secure authentication procedure since they are:

  • not secure: we end up re-using the same or very similar simple credentials in order to remember them, which dramatically reduces security.
  • uncomfortable for the user: among other things, security policies require frequent rotation and the use of complex, non-repeated passwords, which makes them difficult to remember.
  • ineffective: passwords are easy to breach and sometimes easy to steal, through MITM attacks, data breaches of sites storing passwords in the clear, direct attacks on passwords, and so on.

Strong authentication: The 2-factor authentication (2FA)

SCA opts for new levels of authentication, asking customers for two of the following three: something they know, something they own, and something they are. That is, an authentication that requires at least two factors (2FA) chosen from among these three groups:

  1. Knowledge: something the user knows, such as a password or PIN. Disadvantage: the security of passwords is deficient, as discussed above.
  2. Possession: something the user possesses, such as a debit card or a message to a cell phone.
    Disadvantage of OTP-SMS: nowadays, it is not difficult to find cases of identity theft, cloning of user SIMs or messages intercepted by Trojans, among others.
  3. Inherence: something inherent to the customer, such as their face, voice or fingerprint.
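
The two-of-three rule above is simple to express in code. A sketch (the factor names and grouping are illustrative):

```python
# Map each presented factor to one of SCA's three groups.
FACTOR_GROUPS = {
    "pin": "knowledge", "password": "knowledge",
    "debit_card": "possession", "otp_sms": "possession",
    "fingerprint": "inherence", "face": "inherence", "voice": "inherence",
}

def meets_sca(presented_factors):
    """SCA requires factors drawn from at least two distinct groups:
    two factors from the same group (e.g. password + PIN) do not qualify."""
    groups = {FACTOR_GROUPS[f] for f in presented_factors if f in FACTOR_GROUPS}
    return len(groups) >= 2
```

For example, a password plus a fingerprint qualifies (knowledge + inherence), while a password plus a PIN does not, since both are knowledge factors.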

Depending on the operation to be performed, the factors must be combined, including more factors if the risk is higher. In addition, its use is encouraged by Royal Decree-Law 19/2018, which approved the transposition of Directive (EU) 2015/2366 that made it mandatory to use strong authentication no later than January 1st, 2021.

The role of Biometrics

Why not identify ourselves as we do in the physical world? Why is the Internet based on a system of users and passwords and not on real identities and real people?

Forget about passwords

Biometrics allows us to just be us and forget about everything else… In less than 1 minute, from anywhere, needing just a mobile device or a computer, Artificial Intelligence engines hosted in the cloud verify the identity of the person with an accuracy of 99%. This is how biometrics allows people to be identified by their natural attributes.

Once a customer registers or completes an onboarding process, he or she can carry out an infinite number of procedures with a simple selfie or by speaking for 3 seconds, without SMS codes or passwords. This not only offers a lighter and more seamless user experience but also greatly reduces the costs associated with manual verification processes.

Increases security and reduces identity fraud

Authentication factors such as passwords or SMS codes are tied to the device rather than the person, so when that device is stolen or hacked, all of that person’s personal information is exposed. Your biometrics, however, cannot be used by anyone other than you; identity verification systems are key to avoiding this type of fraud and to giving confidence to both entities and users. The advantages of biometrics include:

  • Privacy: it belongs to you and no one else. It cannot be spoofed, cloned, or intercepted.
  • Security: it allows us to move from presumption to certainty. We are sure that the user is who he/she says he/she is, taking into account the previous advantage.
  • Voluntary: it is the user who has the decision to make use of it.

Porn sites to face age requirement

UK authorities have said porn sites will be legally required to verify the age of their users under new internet safety laws.

Announcing the age verification plans, Digital Economy Minister Chris Philp said: “Parents deserve peace of mind that their children are protected online from seeing things no child should see.”

Philp said the Online Safety Bill will be significantly strengthened with a new legal duty requiring all sites that publish pornography to put robust checks in place to ensure their users are 18 years old or over.

This could include adults using secure age verification technology to verify that they possess a credit card and are over 18 or having a third-party service confirm their age against government data.

If sites fail to act, the independent regulator Ofcom will be able to fine them up to 10 per cent of their annual worldwide turnover or block them from being accessible in the UK. Bosses of these websites could also be held criminally liable if they fail to cooperate with Ofcom.

A large amount of pornography is available online with little or no protections to ensure that those accessing it are old enough to do so. There are widespread concerns this is impacting the way young people understand healthy relationships, sex and consent. Half of parents worry that online pornography is giving their kids an unrealistic view of sex and more than half of mums fear it gives their kids a poor portrayal of women.

Age verification controls are one of the technologies websites may use to prove to Ofcom that they can fulfil their duty of care and prevent children accessing pornography.

Digital Minister Chris Philp said:

It is too easy for children to access pornography online. Parents deserve peace of mind that their children are protected online from seeing things no child should see.

We are now strengthening the Online Safety Bill so it applies to all porn sites to ensure we achieve our aim of making the internet a safer place for children.

Many sites where children are likely to be exposed to pornography are already in scope of the draft Online Safety Bill, including the most popular pornography sites as well as social media, video-sharing platforms and search engines. But as drafted, only commercial porn sites that allow user-generated content – such as videos uploaded by users – are in scope of the bill.

The new standalone provision ministers are adding to the proposed legislation will require providers who publish or place pornographic content on their services to prevent children from accessing that content. This will capture commercial providers of pornography as well as the sites that allow user-generated content. Any companies which run such a pornography site which is accessible to people in the UK will be subject to the same strict enforcement measures as other in-scope services.

The Online Safety Bill will deliver more comprehensive protections for children online than the Digital Economy Act by going further and protecting children from a broader range of harmful content on a wider range of services. The Digital Economy Act did not cover social media companies, where a considerable quantity of pornographic material is accessible, and which research suggests children use to access pornography.

The government is working closely with Ofcom to ensure that online services’ new duties come into force as soon as possible following the short implementation period that will be necessary after the bill’s passage.

The onus will be on the companies themselves to decide how to comply with their new legal duty. Ofcom may recommend the use of a growing range of age verification technologies available for companies to use that minimise the handling of users’ data. The bill does not mandate the use of specific solutions as it is vital that it is flexible to allow for innovation and the development and use of more effective technology in the future.

Age verification technologies do not require a full identity check. Users may need to verify their age using identity documents but the measures companies put in place should not process or store data that is irrelevant to the purpose of checking age. Solutions that are currently available include checking a user’s age against details that their mobile provider holds, verifying via a credit card check, and other database checks including government held data such as passport data.
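
A provider performing such a database check can be built to return only a pass/fail answer, in line with the data-minimisation point above. A minimal sketch, with invented records and a fixed reference date standing in for a real data source such as mobile-operator or passport records:

```python
from datetime import date

# Hypothetical records a verification provider might hold or query.
RECORDS = {
    "user-123": date(1995, 4, 2),
    "user-456": date(2010, 9, 9),
}

def verify_age(user_id, minimum_age=18, today=date(2022, 5, 1)):
    """Return only a boolean result; the birth date itself is never
    disclosed to the requesting site."""
    dob = RECORDS.get(user_id)
    if dob is None:
        return False  # unknown users fail closed
    # Subtract one year if this year's birthday has not yet passed.
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    return age >= minimum_age
```

The point of the design is that the requesting site receives a yes/no answer, not the underlying record, so it stores no data irrelevant to the purpose of checking age.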

Any age verification technologies used must be secure, effective and privacy-preserving. All companies that use or build this technology will be required to adhere to the UK’s strong data protection regulations or face enforcement action from the Information Commissioner’s Office.

Online age verification is increasingly common practice in other online sectors, including online gambling and age-restricted sales. In addition, the government is working with industry to develop robust standards for companies to follow when using age assurance tech, which it expects Ofcom to use to oversee the online safety regime.

UK sets out best practice for using e-signatures

The UK government’s expert Industry Working Group on Electronic Execution of Documents has today (1 February 2022) published its interim report, which: sets out its analysis of the current situation in England and Wales; identifies simple best practice guidance based on existing technology, including for vulnerable individuals; and makes recommendations for future analysis and reform. The report can be found on GOV.UK. You can watch Lord Justice Birss, Mr Justice Fraser and Professor Sarah Green discussing the report and its findings below:

Electronic Execution of Documents – Industry Working Group interim report

The Industry Working Group was convened following a recommendation by the Law Commission, which the Lord Chancellor welcomed and implemented. More details, including the Group’s membership and full Terms of Reference, can be found on GOV.UK.

The Ministry of Justice welcomes the interim report and is grateful for the Group’s work, which will assist in informing the future use and uptake of e-signatures by government and others. The Group’s work is also central to ensuring that the UK remains a centre for legal excellence and that the English and Welsh jurisdiction continues to lead the way in enabling the adoption of emerging technologies and in supporting and facilitating digital trade and commerce.

Lord David Wolfson, the Parliamentary Under Secretary of State, said:

I would like to thank the Industry Working Group for this important report on electronic signatures. We in Government are excited about the potential benefits of new, digital ways of working and I welcome in particular the best practice guidance put forward by the group, which will help increase confidence in and encourage uptake of electronic signatures. I am committed to ensuring the UK jurisdiction remains at the forefront of adapting to digital innovation, so that we can best capture the opportunities this offers for our businesses and citizens.

In the next phase of its work, the Group will focus on its remaining Terms of Reference, namely to consider the challenges arising from the use of electronic signatures in cross-border transactions and how to address them, and how best to use electronic signatures so as to optimise their benefits when set against the risk of fraud.

AnyVision comments on UK biometrics

The British government has opened a public consultation on revisions to the Surveillance Camera Code of Practice. The code is part of the Protection of Freedoms Act and provides guidance on the appropriate use of CCTV by local authorities and the police. This is the first revision to the code since its introduction in June 2013.

AnyVision has responded to the Biometrics and Surveillance Camera Commissioner, Professor Fraser Sampson in an open letter entitled: “Facial Recognition Apps Should Be Provided to the Police with an Empty Database.”

Given AnyVision’s expertise in ethical facial recognition and commercial experience identifying persons of interest, including shoplifters, felons, and security threats, the company wanted to lend its perspective to the discussion and share several best practices on the application of ethical facial recognition to law enforcement settings.

AnyVision’s CEO Avi Golan wrote: “The ethical use of facial recognition is a thorny one and requires a nuanced discussion. Part of that discussion must explain how the underlying facial recognition system works, but, just as important, the discussion must also involve how the technology is being used by police departments and what checks and balances are built into their processes. We welcome an honest and objective dialogue involving all stakeholders to draft fair and balanced regulation.”

In recent years, face-based and object recognition systems have been adopted broadly before methods of due diligence have been fully thought through. The company agrees that the use of facial recognition or other biometric-based recognition systems needs to be clearly justified, proportionate in meeting the intended purpose, and appropriately validated.

First, it is important to highlight the unique characteristics and risk factors specific to police use of facial recognition technology. The most common use case for video surveillance is when police and other law enforcement agencies get a picture of a suspect from a crime scene and want to find out: “Who is the person in the picture?” That often requires an extensive database — one that could potentially include every human on planet earth.

This is very different from commercial use cases of facial recognition (e.g., within supermarkets, casinos, or stadiums) which are fundamentally asking a different question: “Is the person in the video a known security threat?” To answer this question doesn’t require a comprehensive database of all people, but rather a defined list of specific people who represent security threats.

In the company’s view, the path to fair and ethical use of facial recognition by police agencies is through adherence to three principles:

  1. Empty Database: We recommend that police agencies build their watchlists from the ground up based on known felons, persons of interest, and missing persons. Some facial recognition solution providers have scraped billions of photos and identities of people from social networks, usually without their consent. Unfortunately, this method of facial recognition has justifiably angered privacy groups and data protection agencies around the globe and damaged public trust in the accuracy and reliability of facial recognition systems. We believe that lists of suspects should be limited and justified. In this way, unjustified invasion of citizens’ privacy can be prevented, false arrests can be reduced and public confidence in the technology can be increased.
  2. Safeguarding Data & Privacy: Many privacy advocates are justifiably concerned about how video surveillance systems capture and store data of innocent bystanders. At AnyVision, we don’t capture photographic images of people. The watchlists that comprise the reference data for our facial recognition algorithms are created and uploaded by our commercial customers – that is, they are created from scratch and specific to the security needs of that organization. The data that we capture is rendered using mathematical vectors that act as secure cryptography, preventing identity hacking even if data is stolen.

    AnyVision goes a step further in safeguarding the privacy of non-watchlist individuals. We offer our customers the ability to activate “GDPR-mode” which effectively blurs all faces of people not explicitly listed on an organization’s watchlist. When this feature is activated, only individuals identified on the watchlist are visible — all other people in the camera’s field of view are blurred. Privacy Mode goes even further as it discards all detections of non-enrolled individuals. This means that police agencies cannot capture any metadata from non-watchlist detections which further protects the identities of bystanders. These advanced privacy features are designed to help organizations capture and collect data on individuals that is strictly necessary for the purposes of video surveillance (i.e., data minimization).

  3. Lack of Operational Due Diligence: Police acknowledge that facial recognition technology has been instrumental in helping crack some tough cases, but in the last year there have also been claims of wrongful arrests. In many of these cases, the wrongful arrests were the result of a poor investigative process rather than shortcomings of the facial recognition software. Facial recognition is more than just the technology: it requires specific rules that help the software process potential face-based matches. These rules must operate within established boundaries that protect an individual’s privacy and comply with applicable law.

    Facial recognition software is designed to identify a handful of likely suspects based on potential matches to a reference database. However, a potential match does not mean that the police department is absolved from performing a proper investigation. It’s critical that the police use the technology responsibly and determine whether any of the potential matches should be investigated further based on appropriate due diligence procedures and following established protocols. When police take shortcuts and wrongfully arrest innocent people based on a supposed match without the necessary due diligence, it reflects poorly on the underlying facial recognition technology. It’s imperative to highlight the importance of human review and investigation when applying this powerful technology.
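The workflow outlined above, in which a probe image is reduced to a mathematical vector, compared against watchlist embeddings, and narrowed to a shortlist of potential matches for human review, can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not AnyVision's implementation: the cosine-similarity measure, the 0.8 threshold, and the toy 3-dimensional embeddings are all assumptions for demonstration; production systems use embeddings with hundreds of dimensions.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two face-embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def shortlist_candidates(probe, watchlist, threshold=0.8, max_results=5):
    # Rank enrolled identities by similarity to the probe embedding and
    # return only those above the confidence threshold: a shortlist for
    # human review, never an automatic identification.
    scored = [(name, cosine_similarity(probe, emb))
              for name, emb in watchlist.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [(name, round(score, 3))
            for name, score in scored if score >= threshold][:max_results]

# Hypothetical 3-dimensional embeddings for two enrolled identities.
watchlist = {
    "subject-A": [0.9, 0.1, 0.4],
    "subject-B": [0.1, 0.8, 0.2],
}
probe = [0.88, 0.15, 0.38]
print(shortlist_candidates(probe, watchlist))  # → [('subject-A', 0.998)]
```

Note that the function deliberately returns a ranked shortlist rather than a single "answer": the due-diligence step, deciding whether any candidate merits further investigation, stays with a human investigator.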

“AnyVision is willing to share its industry insights and best practices from our vast research experience with leading global players, including name-brand retailers, global hospitality, financial services and law enforcement agencies,” said AnyVision’s CEO, Avi Golan. “If the regulations set forth by the Surveillance Camera Code of Practice are committed to the principles outlined above, then law enforcement agencies can strike the right balance between solving crime and protecting the privacy of innocent citizens.”

UK plans to govern use of digital identities revealed

The UK government has today published the second version of its digital identity trust framework, part of plans to make it faster and easier for people to verify themselves using modern technology, through a process as trusted as showing a passport or driving licence.

These latest draft rules of the road for governing the future use of digital identities follow the publication of the first version of the trust framework in February 2021 and the consultation last month. The framework incorporates extensive feedback from an online survey and government engagement sessions with a range of external organisations.

The framework shows how organisations can be certified to provide secure digital identity services; to be certified, they will have to go through an assessment process with an independent certification body. It also sets out how data can be shared between organisations and announces that the government will start testing the framework in partnership with service providers.

Applications have opened for organisations interested in taking part in the testing process, which will involve organisations assessing where their service meets the proposed trust framework rules and providing feedback to the government. This process will prepare organisations for full certification in the future, as well as help the government to refine trust framework rules so they work for both people and organisations.

Once finalised, the government plans to bring the framework into law and make it easier and safer for people to use digital services to prove who they are or verify something about themselves. The updated framework published today includes:

  • Details on how organisations will become certified against the trust framework in the future, including how the independent assessment will take place. The process will involve bodies accredited by the United Kingdom Accreditation Service (UKAS) completing service audits to assess eligibility.
  • New guidance on how organisations can work together to create a consistent approach, which delivers a better user experience and reduces the need for burdensome and repetitive verification processes. It outlines how organisations should describe data in a common format so that other organisations know which method of identity verification was used.
  • Clearer definitions for the trust framework’s roles so organisations can better understand which applies to their specific service, depending on how they are managing data.
  • Refined rules on areas such as how digital identity accounts should be managed.
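The "common format" idea in the guidance above can be pictured with a small sketch. The schema below is purely hypothetical, invented for illustration: the field names, the assurance label, and the provider identifier are assumptions, not taken from the trust framework itself. The point it demonstrates is that each shared attribute carries its verification method alongside its value, so a receiving organisation knows how the check was performed without repeating it.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class VerifiedAttribute:
    # Hypothetical shared format for a single identity attribute.
    name: str                 # e.g. "date_of_birth"
    value: str
    verification_method: str  # how the attribute was checked
    assurance_level: str      # illustrative label, e.g. "medium"
    verified_by: str          # the provider that performed the check

attribute = VerifiedAttribute(
    name="date_of_birth",
    value="1990-01-31",
    verification_method="passport_chip_read",
    assurance_level="medium",
    verified_by="example-idsp",
)

# Serialise to JSON so another organisation can consume it in a
# predictable shape, including the verification method used.
print(json.dumps(asdict(attribute), indent=2))
```

Because every participant emits the same fields, a relying organisation can decide whether to accept an attribute based on its stated method and assurance level, which is what removes the "burdensome and repetitive verification" the guidance describes.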

Digital Infrastructure Minister Matt Warman said:

“Whether someone wants to prove who they are when starting a job, moving house or shopping online, they ought to have the tools to do so quickly and securely.

“We are developing a new digital identity framework so people can confidently verify themselves using modern technology and organisations have the clarity they need to provide these services.

“This will make life easier and safer for people right across the country and lay the building blocks of our future digital economy.”
