Meta has confirmed it will suspend plans to train AI systems using data from EU citizens and will not launch Meta AI in the region. Outside the EU and UK, the company will continue to train AI on its users’ Facebook and Instagram posts.

The decision came after complaints from data protection authorities including the Irish Data Protection Commission (DPC), which regulates Meta in the EU.

Meta had defended the prospect of using data from EU citizens, citing its “legitimate interests” in advancing AI innovation and following in the footsteps of Google and OpenAI.

Dr Ilia Kolochenko, CEO at ImmuniWeb and Adjunct Professor of Cybersecurity at Capital Technology University, highlighted the regulatory compliance implications for AI that are hindering Meta’s proposals.

Commenting on the news, he said compliance is becoming “gradually costlier, time-consuming and otherwise complicated”, tightening the rules around companies’ deployment of AI.

This makes it harder for companies to evolve AI to suit their operational business needs, with the EU AI Act and EU Data Act extending regulation beyond the purely personal-data-focused scope of the GDPR.

“AI vendors will have to address such technically complex issues as explainability of their AI models, fairness in decision making, transparency and conformity with copyright laws when collecting training data, just to mention a few”. To implement these legal requirements, “current business models of large AI vendors will have to be undermined”.

In a statement, Meta said it was “disappointed” to stall European innovation by delaying the training of its large language models (LLMs), which are already in use across other areas of the business.