ICO & Ofcom statement on age assurance

Posted by Online Responsibility Network

Last week, the UK’s data protection and online safety regulators (the ICO and Ofcom, respectively) issued a joint statement on age assurance. The statement is targeted at services likely to be accessed by children and in scope of the Online Safety Act (OSA) and UK data protection law. It clarifies how existing obligations under these regimes apply in practice and signals how compliance will be assessed.

There are a number of key takeaways. First, age assurance is framed in terms of effectiveness and not just proportionality. The statement makes clear that services must adopt methods that are “necessary, proportionate to your risks, and comply with data protection legislation,” but also that these must be capable of being “highly effective” where required. Crucially, it is explicit that “self-declaration alone… is not an effective means” of determining age or preventing underage access. The regulators also identify categories of approaches capable of meeting this standard.

Second, compliance is structured around a risk-based model with clear consequences. The regulators emphasise a “flexible, tech-neutral approach,” but one that is anchored in risk: organisations must select methods appropriate to their risks, as identified in the children’s risk assessment required by the OSA. The regulators note that the OSA does not require services to set a minimum age of access but, if one is set, it must be explained in the terms of service and consistently applied. Similarly, the ICO and Ofcom highlight that while services are not required to implement highly effective age assurance, those that do not must assume that underage children are present on the service and reflect this in their risk assessments. This, in turn, requires safeguards appropriate for child users across the whole service, rather than reliance on age thresholds alone — in effect, age uncertainty becomes a trigger for baseline child protections.

Third, age assurance is treated as part of system design, not a standalone control. The statement emphasises that methods must address “risks of circumvention” and ensure “accuracy and robustness.” This means selecting a means of age assurance that sufficiently guards against fake input and binds the proof of age to the user presenting for the age check. In practice, this reflects a concern that age checks can be easily transferred or reused: regulators are not only asking whether age is verified, but whether that verification is meaningfully tied to the individual accessing the service.

Taken together, the statement clarifies that compliance will be judged less by the presence of age assurance measures than by their performance. The critical test will be whether those measures deliver reliable outcomes under real-world conditions and withstand scrutiny across both the online safety and data protection frameworks.