Enforcement of the UK’s Online Safety Act has arrived, with its age-check requirements taking effect on 25 July. Global tech firms are now expected to comply with one of the strictest online safety laws in the world, which shields children from harmful digital content.

This weekend, companies including X, formerly Twitter, TikTok, Reddit, Meta, and YouTube must comply with the Act, passed in 2023, which mandates robust age-verification systems to block under-18s from accessing harmful material, including pornography and content related to suicide, eating disorders, and other sensitive subjects.

Alongside social media sites, the largely unregulated dark web also poses threats to children.

If big tech companies fail to comply now, after repeated warnings ahead of the regulation, they face substantial fines of up to £18 million or 10% of global annual turnover, whichever is greater.

The Act requires tech platforms either to remove such content entirely, to create separate safe zones for minors, or to implement “highly effective” age-assurance tools. These tools, recommended by media regulator Ofcom, include credit card checks, photo ID scans, phone or email verification, and facial recognition software.

Whilst there has been a rush to adopt such systems, the regulator will assess the effectiveness of these tools through to September before taking enforcement action if standards aren’t met.

Platforms such as Reddit have already begun using facial recognition technology to verify users’ ages, while TikTok and X have rolled out enhanced age-checking tools in time for the deadline. Meta plans to expand its AI-based age estimation systems, previously tested in the US, to UK users this autumn. YouTube is using a similar AI model that assesses user behaviour to infer age.

However, many of these AI-based systems are not formally approved by Ofcom, which has acknowledged that its list of acceptable technologies is not exhaustive. The agency said it will start evaluating the performance of these AI models next week.