Vasudev Devadasan writes: The conflict between free speech and consent

Source: This post is based on the article “Vasudev Devadasan writes: The conflict between free speech and consent” published in The Indian Express on 5th Jul 22.

Syllabus: GS2 – Government policies and interventions

Relevance: Sharing of non-consensual intimate images

News: The Delhi High Court in Mrs. X v Union of India is confronted with a familiar problem. A woman whose nude photos were shared online without her consent approached the Court to block this content.

The case highlights the need for courts, law enforcement, and technology platforms to have a coordinated response to the sharing of non-consensual intimate images (NCII) online.

What are the legal provisions with respect to NCII in India?

Publishing NCII is a criminal offence under the Information Technology Act 2000.

The Intermediary Guidelines 2021 provide a partial solution. They empower victims to complain directly to any website that has allowed the uploading of non-consensual images or videos of a person in a state of nudity or engaging in a sexual act.

This includes content that has been digitally altered to depict the person as such. The website must remove the content within 24 hours of receiving a complaint, or risk facing criminal charges.

What are the associated issues?

Issue with intermediary guidelines: The approach in these guidelines relies on victims identifying and reporting every URL that hosts their intimate images.

Further, the same images may be re-uploaded at different locations or by different user accounts in the future.

While the Intermediary Guidelines do encourage large social media platforms to proactively remove certain types of content, the focus is on child pornography and rape videos.

Victims of NCII abuse have few options other than lodging complaints every time their content surfaces, forcing them to approach courts.

Efforts by tech companies to tackle spread of NCII

Meta recently built a tool to curtail the spread of NCII.

The tool relies on a “hashing” technology to match known NCII against future uploads. The victim’s private images stay with them, with only the hash being added to a database to guard against future uploads.

Similar technology is already used against child-sex abuse material (CSAM) with promising results.
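The hash-matching approach described above can be illustrated with a minimal sketch. This is a simplified, hypothetical model: it uses an exact-match cryptographic hash, whereas real systems such as PhotoDNA use perceptual hashes that survive resizing and re-encoding. The key property it demonstrates is that only the hash leaves the victim's device, never the image itself.

```python
import hashlib

def compute_hash(image_bytes: bytes) -> str:
    # Simplified: an exact-match cryptographic hash. Real NCII tools
    # use perceptual hashing, which matches visually similar images
    # even after compression or cropping; this sketch matches only
    # byte-identical files.
    return hashlib.sha256(image_bytes).hexdigest()

class HashDatabase:
    """Hypothetical shared database: stores only hashes, never images."""

    def __init__(self):
        self._known: set[str] = set()

    def register(self, image_bytes: bytes) -> str:
        # The victim hashes the image locally and submits only the hash.
        h = compute_hash(image_bytes)
        self._known.add(h)
        return h

    def matches(self, upload_bytes: bytes) -> bool:
        # Participating platforms check each new upload against the hash set.
        return compute_hash(upload_bytes) in self._known

db = HashDatabase()
db.register(b"victim-private-image")  # hash added; image stays with the victim
print(db.matches(b"victim-private-image"))  # re-upload of known NCII is flagged
print(db.matches(b"unrelated-image"))       # other content passes through
```

Because the database holds only hashes, a platform consulting it learns nothing about the underlying images; it can only test whether an upload matches known NCII.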

What are the concerns associated with use of image matching tech?

Image-matching technology could be used for surveillance or to simply remove unpopular (but not illegal) content from the internet.

The CBI has already reportedly asked Microsoft to deploy its “PhotoDNA” tool (an image-matching software built to identify CSAM) for investigatory purposes.

The use of automated tools also raises free speech concerns that lawful content may accidentally be taken down. Automatic filters often ignore context. Content that may be illegal in one context may not be illegal in another.

Way forward

If the database is well designed and administered, other websites could eventually use Meta’s NCII hash database to identify illegal content they may be unwittingly hosting.

Victims could report NCII abuse at a centralised location and have it taken down across a range of websites.

The government can also play a role in facilitating a redressal mechanism.

For example, Australia has appointed an “eSafety Commissioner”, who receives complaints about NCII and coordinates between complainants, websites, and the individuals who posted the content. The Commissioner is empowered to issue “removal notices” against illegal content.

The government’s reported overhaul of the IT Act is an opportunity to develop a coordinated response to NCII abuse that provides victims meaningful redress without restricting online speech.

Going forward, courts may consider tasking a state functionary or independent body with verifying the URLs and coordinating with online platforms and internet service providers.

If courts direct platforms to take down NCII, they should do so only where the content would be illegal in every foreseeable context.
