Closing the Generative AI Gap in the Online Safety Act

Posted by Online Responsibility Network

On Monday the 16th of February, the UK Government announced new measures to protect children online. Among them is a measure to crack down on AI-generated illegal content by closing a “legal loophole and force all AI chatbot providers to abide by illegal content duties in the Online Safety Act”. The ‘loophole’ concerns the scope of the Online Safety Act (OSA) as it relates to AI image generation tools and, specifically, AI-enabled nudification tools.

The OSA does regulate generative AI services that operate as search engines, delivering live results from more than one website or database, including tools that modify search results. Services also fall within the scope of the Act if they include a generative AI chatbot that allows users to share content with other users; this makes them a user-to-user service. AI-generated content shared on a user-to-user service is therefore regulated in the same way as user-generated content.

However, the challenge – and the ‘loophole’ – arises in relation to standalone generative AI apps or sites. These may neither host content nor enable sharing between users: they may simply allow a user to upload images and deliver generated outputs back to that user. Because of this, they may not meet the OSA’s definitions of in-scope services, and so are not strictly subject to the OSA’s illegal content duties, which include proactive risk assessment and mitigation obligations.

The proposed amendment would therefore extend the OSA’s illegal content duties to cover AI providers whose tools are capable of generating such abusive imagery. In taking this step, the Government is closing a gap in the regulatory perimeter of the OSA and preventing generative AI services from arguing that they’re merely ‘tools’ and therefore outside the scope of safety duties. The creation and sharing of non-consensual intimate images, including sexually explicit deepfakes, is already criminalised under UK law through amendments to the Sexual Offences Act 2003 and related legislation. The issue addressed by the proposed reform is therefore not the absence of a criminal offence, but a gap in the regulatory framework.

It’s also important to stress that this reform is not confined to image-generation or nudification tools. The announcement refers to ‘AI chatbot providers’ more generally: the proposed amendment is aimed at standalone conversational AI services that don’t clearly fall within the OSA’s scope, not just image-based tools. Moreover, the illegal content duties under the OSA are systemic in nature. As mentioned above, they require services to assess and mitigate the risks of illegal content being generated, encountered, or disseminated. Extending such duties to a broader swathe of generative AI services therefore pushes responsibility upstream towards the design, deployment and risk management of AI systems themselves, rather than focusing only on the environments where their outputs appear.

The Government has also announced a related enforcement measure requiring tech companies to take down non-consensual intimate images within 48 hours of being flagged. The measure is being made law by way of an amendment to the Crime and Policing Bill, with non-compliance penalties of fines of up to 10% of qualifying worldwide revenue or services being blocked in the UK. Under this proposed amendment, Ofcom would have powers to enforce swift removal across platforms, and victims would only need to report an image once to trigger cross-platform action. The Government has described this as treating the spread of such material with the same seriousness as other priority illegal content, and as part of a broader strategy to shift the burden of action from victims to platforms and intermediaries.

By pairing (i) the existing criminalisation of the creation and sharing of non-consensual intimate images, (ii) a broadened regulatory duty on generative AI services at the point of production, and (iii) an accelerated mandatory takedown regime for distribution environments, the reforms operate as a layered enforcement architecture. Together, they form a kind of legal pincer movement that targets the conduct itself, the systems capable of generating the material, and the platforms through which it spreads – addressing upstream creation and downstream proliferation in parallel.