UK to make tech firms remove abusive images within 48 hours
The United Kingdom has moved to impose legally binding obligations on technology companies to take down abusive or non-consensual intimate images within 48 hours of them being flagged, aiming to strengthen protections against online harms that disproportionately affect women and girls. Under amendments to the Crime and Policing Bill, firms that do not comply could face fines of up to 10 per cent of their global revenue or have their services blocked in the UK, in what the government describes as a major step in holding platforms accountable for content moderation failures. This change is part of a broader effort to tighten online safety laws and enforce digital content standards through the media regulator Ofcom.
Prime Minister Sir Keir Starmer has characterised the surge in non-consensual intimate imagery, including AI-generated “deepfake” content, as a national emergency that demands urgent legislative action. The new requirement obliges platforms to remove such images within 48 hours of a report being made, and centralises reporting to minimise the burden on victims who would otherwise have to request takedowns across multiple sites. Ofcom is expected to have enforcement powers by mid-year, with the authority to impose fines or block access to services that fail to meet the deadline.
Officials emphasise that this initiative builds on the existing framework established by the Online Safety Act, which already obliges technology companies to combat a wide range of online harms, including child sexual abuse material and terrorism-related content. The forthcoming rules seek to elevate non-consensual intimate images to the status of a “priority offence” under that Act, aligning enforcement mechanisms and expectations across illegal content categories. Victims will need to report abusive material only once for companies to be legally required to remove it across all regulated platforms, streamlining redress.
Campaigners and survivor-advocates have welcomed the government’s announcement as a “win” for victims of image-based abuse, noting that existing practices have often left survivors chasing takedowns across hundreds of reposts. Groups including the End Violence Against Women Coalition and the Revenge Porn Helpline have long lobbied for mandatory timeframes and harsher penalties to deter platforms from treating non-consensual imagery as a low-priority violation. They argue that shifting responsibility from victims to companies is essential to meaningful harm reduction.
Technology firms, from global social media operators to smaller service providers, are bracing for the impact of the rules. Major platforms already employ technological tools such as hash-matching to identify and remove known illegal content, but experts warn that enforcement will be more challenging on encrypted services and with content that mutates to evade detection. The government’s proposal includes exploring digital watermarks or other automated detection methods to help track and block abusive images as they reappear online.
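Hash-matching of this kind works by computing a compact fingerprint of each previously confirmed abusive image and comparing new uploads against a database of those fingerprints; perceptual hashes tolerate minor edits such as resizing or re-compression. The following is a minimal illustrative sketch only, assuming Python with the Pillow and ImageHash libraries; the threshold, hash values, and function names are hypothetical and do not represent any platform's actual moderation system.

```python
# Illustrative sketch of hash-matching against a set of known abusive images.
# Assumes the Pillow and ImageHash libraries; all names and values here are
# hypothetical, not any platform's real system.
import imagehash
from PIL import Image

# Hypothetical store of perceptual hashes for previously confirmed images.
KNOWN_ABUSIVE_HASHES = {
    imagehash.hex_to_hash("d1c4a0b2e8f01357"),
}

MATCH_THRESHOLD = 8  # maximum Hamming distance treated as a match (tunable)

def is_known_abusive(image_path: str) -> bool:
    """Return True if the uploaded image matches a known abusive image.

    A perceptual hash (pHash) is robust to small edits such as resizing or
    re-compression, unlike an exact cryptographic hash.
    """
    upload_hash = imagehash.phash(Image.open(image_path))
    return any(upload_hash - known <= MATCH_THRESHOLD
               for known in KNOWN_ABUSIVE_HASHES)

if __name__ == "__main__":
    # Hypothetical use at upload time: block or escalate on a match.
    if is_known_abusive("incoming_upload.jpg"):
        print("Match against known image: block and escalate for review")
    else:
        print("No match: proceed with normal checks")
```

Systems of this sort struggle, as the experts quoted above note, when content is encrypted end to end or deliberately altered to defeat fingerprinting, which is why the government is also exploring watermarking and other automated detection methods.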
Industry response has been mixed. Some companies have publicly reaffirmed commitments to tackling abusive content, while others express concern about the technical feasibility and potential overreach of enforcement, especially where encryption or privacy features limit visibility into user-generated content. The government insists the draft law does not require platforms to independently identify every abusive image but focuses on prompt removal once it is reported.
The legislative push has been partly catalysed by controversies earlier this year around AI tools capable of generating non-consensual explicit images, particularly allegations involving a high-profile chatbot that was manipulated to create doctored imagery. Ofcom has previously opened formal investigations into platforms over failures to protect users under extant online safety duties, underscoring regulators’ capacity to pursue enforcement when platforms fall short.
Ministers are also consulting on complementary measures, such as restricting social media access for users under a certain age, drawing parallels with policies adopted in other jurisdictions like Australia and the EU. These discussions reflect a broader international trend toward tightening digital safety standards and addressing the harms posed by rapid developments in artificial intelligence and user-to-user services.
Civil liberties groups caution that expansive enforcement could pose risks to free expression and privacy, a recurring theme since the introduction of the Online Safety Act in 2023. Critics argue that sweeping moderation requirements and automated scanning tools may inadvertently suppress legitimate content or extend surveillance beyond the law's original intent. Government supporters counter that robust safeguards are necessary to protect vulnerable users and that the legal framework includes provisions to respect journalistic and democratically important speech.