ICO fines facial recognition database company Clearview AI

The Information Commissioner’s Office (ICO) has fined Clearview AI Inc £7,552,800 for using images of people in the UK, and elsewhere, that were collected from the web and social media to create a global online database that could be used for facial recognition.

The ICO has also issued an enforcement notice, ordering the company to stop obtaining and using the personal data of UK residents that is publicly available on the internet, and to delete the data of UK residents from its systems.

The ICO enforcement action comes after a joint investigation with the Office of the Australian Information Commissioner (OAIC), which focused on Clearview AI Inc’s use of people’s images, data scraping from the internet and the use of biometric data for facial recognition.

Clearview AI Inc has collected more than 20 billion images of people’s faces and data from publicly available information on the internet and social media platforms all over the world to create an online database. People were not informed that their images were being collected or used in this way.

The company provides a service that allows customers, including the police, to upload an image of a person to the company’s app, which is then checked for a match against all the images in the database.

The app then provides a list of images with similar characteristics to the photo provided by the customer, along with links to the websites those images came from.
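As described above, matching works by comparing a probe image against every entry in the database and returning the closest results with links to their sources. A toy sketch of that similarity-search step is below; the database, URLs, feature vectors, and threshold are all invented for illustration (real systems compute face embeddings with deep neural networks, and nothing here reflects Clearview AI's actual implementation):

```python
import math

# Hypothetical database mapping source URLs to face "embeddings"
# (short made-up feature vectors, purely for illustration).
DATABASE = {
    "https://example.com/profile/alice": [0.90, 0.10, 0.30],
    "https://example.com/profile/bob":   [0.20, 0.80, 0.50],
    "https://example.com/photo/carol":   [0.85, 0.15, 0.35],
}

def cosine_similarity(a, b):
    """Similarity between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def find_matches(query_embedding, threshold=0.95):
    """Return (url, score) pairs above the threshold, most similar first --
    analogous to the app returning candidate images with source links."""
    results = [(url, cosine_similarity(query_embedding, vec))
               for url, vec in DATABASE.items()]
    results = [(url, score) for url, score in results if score >= threshold]
    results.sort(key=lambda pair: pair[1], reverse=True)
    return results

# A probe vector close to "alice" and "carol", far from "bob".
matches = find_matches([0.88, 0.12, 0.32])
```

The key design point is that the probe is compared against every stored vector, which is why the size of the database (20 billion images, in Clearview's case) directly determines how many people the system can identify.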

Given the high number of UK internet and social media users, Clearview AI Inc’s database is likely to include a substantial amount of data from UK residents, which has been gathered without their knowledge.

Although Clearview AI Inc no longer offers its services to UK organisations, the company has customers in other countries, so the company is still using personal data of UK residents.

John Edwards, UK Information Commissioner, said:

“Clearview AI Inc has collected multiple images of people all over the world, including in the UK, from a variety of websites and social media platforms, creating a database with more than 20 billion images. The company not only enables identification of those people, but effectively monitors their behaviour and offers it as a commercial service. That is unacceptable. That is why we have acted to protect people in the UK by both fining the company and issuing an enforcement notice.

“People expect that their personal information will be respected, regardless of where in the world their data is being used. That is why global companies need international enforcement. Working with colleagues around the world helped us take this action and protect people from such intrusive activity.

“This international cooperation is essential to protect people’s privacy rights in 2022. That means working with regulators in other countries, as we did in this case with our Australian colleagues. And it means working with regulators in Europe, which is why I am meeting them in Brussels this week so we can collaborate to tackle global privacy harms.”

Details of the contraventions
The ICO found that Clearview AI Inc breached UK data protection laws by:

- failing to use the information of people in the UK in a way that is fair and transparent, given that individuals were not made aware of, and would not reasonably expect, their personal data being used in this way;
- failing to have a lawful reason for collecting people’s information;
- failing to have a process in place to stop the data being retained indefinitely;
- failing to meet the higher data protection standards required for biometric data (classed as ‘special category data’ under the GDPR and UK GDPR);
- asking for additional personal information, including photos, when asked by members of the public if they are on its database. This may have acted as a disincentive to individuals wishing to object to their data being collected and used.
The joint investigation was conducted in accordance with the Australian Privacy Act and the UK Data Protection Act 2018. It was also conducted under the Global Privacy Assembly’s Global Cross Border Enforcement Cooperation Arrangement and the MOU between the ICO and the OAIC.

Biometrics Institute releases updated Privacy Awareness Checklist

The Biometrics Institute has released its updated Privacy Awareness Checklist, to help members of the Institute work through critical privacy issues right from the start of their biometric journey, and to remind them to treat privacy as a key issue in their organisation.

The Privacy Awareness Checklist (PAC) was first published in 2013. As with all its guidance material, the Institute conducts regular reviews to ensure it stays relevant and reflects any global changes in technology or legislation. The checklist is the result of extensive consultation by the Institute’s Privacy and Policy Expert Group (PEG), which engaged with other groups and key stakeholders to ensure that it covers a broad range of issues across different countries, jurisdictions, and sectors.

“Biometric technology continues to develop at pace, affecting a growing number of organisations worldwide,” says Isabelle Moeller, Chief Executive of the Biometrics Institute. “To aid members, the checklist is designed to be a simple and concise resource to raise awareness of privacy concerns whilst being universally useable. It encourages organisations to discuss their Personal Information processing, assess risks and threats, consider privacy awareness and training, and maintain a strong privacy and data protection environment.”

The membership organisation is launching its updated Privacy Awareness Checklist during Privacy Awareness Week (2-8 May 2022), whose theme this year is ‘Privacy: The foundation of trust’.

“Trust is at the centre of everything we do with biometrics,” adds The Hon Terry Aulich, Head of the PEG, “and it is an organisation’s responsibility to treat biometric data responsibly and ethically. The PAC will be an extremely useful tool to work through key considerations for biometrics and privacy.”

The Institute provides a range of good practice material to guide its members; the updated checklist references, and is aligned with, its Good Practice Framework and Privacy Guidelines.

The Biometrics Institute’s Privacy and Policy Expert Group comprises members from many countries and sectors and includes government privacy authorities, academics, social media organisations and legal experts.

Senators demand detail on Amazon’s biometric data collection

U.S. Senators Bill Cassidy, M.D. (R-LA), Amy Klobuchar (D-MN), and Jon Ossoff (D-GA) requested information about Amazon’s data collection practices involving biometrics in a letter to Amazon CEO Andy Jassy. The senators expressed concerns about the company’s use of data gathered by Amazon One, the company’s palm-print recognition and payment system.

The letter follows reports of Amazon offering credits to consumers to share their biometric data with Amazon One. Amazon has also announced that it is planning to expand the program, including potentially selling Amazon One technology to third-party stores.

“Amazon’s expansion of biometric data collection through Amazon One raises serious questions about Amazon’s plans for this data and its respect for user privacy, including about how Amazon may use the data for advertising and tracking purposes,” the senators wrote.

“Amazon One users may experience harms if their data is not kept secure. In contrast with biometric systems like Apple’s Face ID and Touch ID or Samsung Pass, which store biometric information on a user’s device, Amazon One reportedly uploads biometric information to the cloud, raising unique security risks…Data security is particularly important when it comes to immutable customer data, like palm prints,” the senators continued.

Read the full letter here or below:

Dear Mr. Jassy:

We write regarding concerns about Amazon’s recent expansion and promotion of Amazon One, a palm print recognition system, and to request information about the actions Amazon is taking to protect user data privacy and security.

Amazon One appears to be a biometric data recognition system that allows consumers to pay for their purchases in grocery stores, book stores, and other retail settings using their palm print. Consumers can enroll in the program at any location with an Amazon One device by scanning one or both palms and entering their phone and credit card information. Amazon One devices are currently in use in more than 50 retail locations throughout the United States, including in Minnesota. Locations with the technology currently include Amazon Go stores, Whole Foods locations, and other Amazon stores.

Recent reports indicate that Amazon is incentivizing consumers to share their biometric information with Amazon One by offering a $10 promotional credit for Amazon.com products. Amazon has also announced that they have plans to expand Amazon One, which may include introducing the technology in other Amazon stores as well as selling it to third-party stores. Amazon’s expansion of biometric data collection through Amazon One raises serious questions about Amazon’s plans for this data and its respect for user privacy, including about how Amazon may use the data for advertising and tracking purposes.

Offering products from home devices to health services, Amazon possesses a tremendous amount of user data on the activities of hundreds of millions of Americans. Our concerns about user privacy are heightened by evidence that Amazon shared voice data with third-party contractors and allegations that Amazon has violated biometric privacy laws. We are also concerned that Amazon may use data from Amazon One, including data from third-party customers that may purchase and use Amazon One devices, to further cement its competitive power and suppress competition across various markets.

Amazon One users may experience harms if their data is not kept secure. In contrast with biometric systems like Apple’s Face ID and Touch ID or Samsung Pass, which store biometric information on a user’s device, Amazon One reportedly uploads biometric information to the cloud, raising unique security risks. Like many companies, Amazon has been affected by hacks and vulnerabilities that have exposed sensitive information, such as user emails. Amazon’s various home device systems have leaked information or been hacked, as highlighted in a recent letter to the Federal Trade Commission (FTC) from 48 advocacy organizations. Company whistleblowers earlier this year also raised concerns about Amazon’s security practices. Data security is particularly important when it comes to immutable customer data, like palm prints.

In light of these issues, we respectfully ask that you provide written answers to the following questions by August 26, 2021:

  1. Does Amazon have plans to expand Amazon One to additional Whole Foods, Amazon Go, and other Amazon store locations, and if so, on what timetable?
  2. How many third-party customers has Amazon sold (or licensed) Amazon One to? What privacy protections are in place for those third parties and their customers?
  3. How many users have signed up for Amazon One?
  4. Please describe all the ways you use data collected through Amazon One, including from third-party customers. Do you plan to use data collected through Amazon One devices to personalize advertisements, offers, or product recommendations to users?
  5. Is Amazon One user data, including the Amazon One ID, ever paired with biometric data from facial recognition systems?
  6. What information do you provide to consumers about how their data is being used? How will you ensure users understand and consent to Amazon One’s data collection, storage, and use practices when they link their Amazon One and Amazon account information?
  7. What actions have you taken to ensure the security of user data collected through Amazon One?


Ensuring the security of user data and protecting consumer privacy are of the utmost concern. We look forward to your prompt responses.

Shufti Pro CEO Victor Fredung speaks at US Congress hearing

A virtual congressional hearing was held on 16 July 2021 by the US House Financial Services Committee’s Task Force on Artificial Intelligence, examining key issues related to digital privacy and identity verification.

Speakers at the hearing included experts in the field of digital technology and security. Victor Fredung, the CEO of Shufti Pro, was invited to give his testimony at the hearing and emphasized how Shufti Pro is making the financial structure secure through the use of AI technology for identity verification.

The hearing, entitled “I Am Who I Say I Am: Verifying Identity while Preserving Privacy in the Digital Age”, discussed a variety of issues, including the future of digital identity, how best to protect data and digital privacy, how to make AI more inclusive and diverse, and the use of technologies such as blockchain.

At the hearing, Victor Fredung mentioned his support for the “Improving Digital Identity Act of 2020” and explained how a unified framework is needed in the United States due to vast variations in ID document types and regulatory requirements across the country.

“We strongly suggest the pursuit of a universal framework that each state (in the US) needs to follow when it comes to the selection of ID documents and a unified requirement when it comes to what information needs to be verified and how verification should be performed in all states,” stated Fredung.

The hearing also highlighted the emerging threat of identity theft in the US, with Rep. Bill Foster noting that the FTC received over 1.3 million identity theft complaints from US consumers in 2020.

Responding to this, Fredung added that Shufti Pro has been combating identity theft since 2017 by applying AI and ML models in its identity verification solutions. Using document authentication, anti-spoof checks, liveness detection, and optical character recognition (OCR), it detects fraudulent documents and identity thieves with an accuracy rate of almost 99%.
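The checks Fredung lists (document authentication, anti-spoofing, liveness detection, OCR matching) can be thought of as a pipeline in which a submission must pass every stage to be verified. The sketch below illustrates that pattern only; the function names, data structure, and pass/fail logic are invented for illustration and do not reflect Shufti Pro's actual implementation:

```python
# Each stage inspects a submission (a dict of hypothetical signals)
# and returns True if the check passes.

def authenticate_document(submission):
    # e.g. verify security features of the ID document
    return submission.get("document_valid", False)

def anti_spoof_check(submission):
    # e.g. detect printed photos or screen replays;
    # a missing signal is treated as a failure (fail closed)
    return not submission.get("spoof_detected", True)

def liveness_check(submission):
    # e.g. confirm a live subject is present at the camera
    return submission.get("live_subject", False)

def ocr_matches_claimed_identity(submission):
    # e.g. compare OCR-extracted text against the claimed identity
    return submission.get("ocr_text", "") == submission.get("claimed_name")

CHECKS = [authenticate_document, anti_spoof_check,
          liveness_check, ocr_matches_claimed_identity]

def verify(submission):
    """Run every check; return ('verified'|'rejected', list of failed checks)."""
    failed = [check.__name__ for check in CHECKS if not check(submission)]
    return ("verified" if not failed else "rejected", failed)

verdict, failed = verify({
    "document_valid": True,
    "spoof_detected": False,
    "live_subject": True,
    "ocr_text": "Jane Doe",
    "claimed_name": "Jane Doe",
})
```

Failing closed on missing signals, and reporting which stage failed, are the two properties that make this kind of layered verification auditable.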

Fredung further added that while the use of blockchain for data sharing and storage is in its early stages at the moment, “it is definitely something to look out for in the future”.

DiRoma Eck & Co. LLP, the Washington-based advisory and government relations firm that helped secure Shufti Pro’s opportunity to speak at the hearing, commended Fredung’s testimony. “Shufti Pro’s CEO, Victor Fredung, provided the Financial Services Committee with valuable and insightful testimony from the perspective of the AI-powered digital identity verification industry. The U.S. Congress will benefit from Shufti Pro’s testimony as it continues to legislate and conduct oversight of the banking and financial technology industries,” said Michael DiRoma, Co-founder and Partner, DiRoma Eck & Co. LLP.