Centre notifies IT Rules amendment: Mandatory AI labels, 3-hour takedown window


Hyderabad: The Central Government has notified amendments tightening the regulatory framework governing synthetically generated and Artificial Intelligence (AI) based content on digital platforms.

These amendments include mandating the platforms to clearly label synthetically generated information and requiring swift action in taking down such content.

The amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, will come into effect on February 20.

What is synthetic content and what is not?

The Central law now defines synthetically generated information (SGI) as audio, visual or audio-visual information that is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that makes a person or event appear genuine.

There is an exception for educational and training materials, PDFs, research papers and presentations. Routine editing, such as colour correction, noise reduction, transcription and translation, is also exempted, as long as it does not distort or misrepresent the original meaning.

Mandatory labelling and metadata embedding

Platforms are mandated to label all synthetically generated information so that users can identify it instantly.

They must also embed persistent metadata and unique identifiers so the content can be traced to its origin. Once these labels are in place, platforms must ensure they cannot be modified, suppressed or stripped away.

Visual content is required to carry clear on-screen labels, while audio content must include prefixed audio disclosures.

Users asked to self-declare

Platforms are now mandated to ask users to self-declare whether the content they upload is synthetically generated. However, the onus remains on the platforms to deploy appropriate technical tools to verify the source and nature of user-uploaded content.

If the content is found to be SGI, it must be clearly flagged using notices and labels.

If a platform fails to label SGI appropriately, it will be considered a failure to exercise due diligence.

Platform safeguards to avoid generation and sharing of unlawful content

The rules mandate platforms to deploy automated and technical safeguards to prevent the generation or dissemination of SGI that violates existing laws.

This includes child sexual abuse material, non-consensual intimate imagery, and content that is obscene, pornographic, paedophilic or invasive of others' privacy.

Generation or modification of false documents or records, and SGI related to explosives, arms or ammunition, is also prohibited.

Platforms are required to warn their users against misusing these tools to break the law. They must also inform users every three months about the penalties for misuse of SGI.

Swift takedown

If any SGI breaks the law, platforms can remove it or block access to it immediately. Users who create or share such illegal content can face fines, punishment or legal action, and platforms are required to report such incidents to the authorities.

Grievance redressal timelines have also been shortened: the period for initial acknowledgement has been reduced from 15 days to seven days.

Overall, the amendment to the IT Rules requires clearly noticeable and traceable labelling of content that is synthetically generated or AI-generated, swift takedown of unlawful content, and measures to prevent its dissemination.

South Check
southcheck.in