Jurisdictions the world over are grappling with the issue of regulating user content on social media. India, too, is in the process of evolving its approach.
Unlike their initial avatars, social media platforms today actively moderate and curate content that they host. They do so by removing any offending speech, restricting access to such speech in a particular jurisdiction, and suspending or terminating a user account.
The exercise of these powers in the case of high-profile accounts such as former US President Donald Trump or celebrities like Kangana Ranaut has routinely made headlines. But lay users, too, face the consequences of such powers, and their cases go unnoticed or unheard.
According to their respective transparency reports, Facebook and Instagram removed 3.20 crore posts, while Google removed around 60,000 URLs suo motu. As ‘arbiters of speech’, therefore, they are in a position to violate a person’s freedom of speech and expression.
To protect users from incorrect takedowns and account suspensions by social media platforms, the need was felt to institute effective grievance redressal mechanisms (GRM).
Public consultations on the proposed amendments to the Information Technology Rules, 2021 held recently underlined the need for a grievance redressal mechanism to resolve user complaints against actions taken by social media platforms.
In India, before May 2021, the GRMs of social media platforms, if any, were designed as per the concerned platform’s terms of service. There was no standardisation in the design of these GRMs, whether in terms of resolution or timelines.
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules (or IT Rules), 2021 streamlined this by bringing in uniformity.
Social media platforms now have to appoint a “grievance officer” before whom a user may file a complaint.
The grievance officer is required to acknowledge the complaint within 24 hours and resolve it within 15 days. If unsatisfied with the officer’s order, the user may approach the High Court or the Supreme Court.
However, invoking the writ jurisdiction of the courts can be a time- and cost-intensive process, and it was found necessary to create an appellate forum that is less resource-intensive to engage with.
According to the Government, it created this tier because “currently there is no appellate mechanism provided by intermediaries nor is there any credible self-regulatory mechanism in place”.
During the public consultation hearings, it clarified that the proposed “grievance appellate committee” was only a “mezzanine measure” which it had to reluctantly take because of the failure of the social media platforms to design something themselves.
On the face of it, this insistence on self-regulation seems like a progressive approach. However, letting social media platforms control the regulation process is not in the best interests of users.
Speech, by nature, is contextual. What may offend one person would seem legitimate to another. Because the determination is so subjective, the process must be objective to ensure fairness.
A self-regulatory model takes away from such objectivity for several reasons. Social media platforms have not been paragons of objectivity in deciding which content they want to host or take down.
Their political biases have become visible through their decisions to either amplify or restrict certain kinds of content.
For example, while Twitter is commonly understood to be more partial to liberal/Leftist views, Facebook has been alleged to be partial to Rightist stances.
Moreover, a self-regulatory approach to adjudicating on speech is likely to be riddled with trust issues.
The Government has often repeated that the Information Technology Act, 2000 is long overdue for an overhaul and that it will herald the ‘Digital India Act’.
Perhaps that is the right place to provide for a robust design of these appellate mechanisms, rather than keeping one foot in and one foot out.