CROSS JURISDICTIONAL ANALYSIS OF OBLIGATIONS ON SOCIAL MEDIA PROVIDERS
This post is authored by Jotsaroop Singh and Bhumija Upadhyay, Junior Editors at RGNUL Financial and Mercantile Law Review (RFMLR).
Social media is the new playground of the 21st century, and the law is yet to catch up. The regulation of social media spaces has been a major concern since their rapid proliferation across the globe, especially given the effects these platforms have on the general populace, both collectively and individually. The influence social media exerts has proven demonstrably prone to misuse, with the disinformation campaign undertaken by Russia to influence the 2016 U.S. election being a prime example.
The paradigm of obligations on social media providers is therefore shifting across the world, with each jurisdiction introducing unique changes that affect users and providers alike in multifarious ways. This article analyzes various contemporary developments in this area, particularly with regard to intermediary liability and censorship concerns in various countries.
Changing Obligations for Social Network Providers:
An instructive case study for understanding this changing paradigm is Turkey, where a semi-authoritarian regime's new social media law has come under heavy scrutiny, given the country's record of curbing dissent via crackdowns on websites such as Wikipedia.
The law was passed on 29 July 2020 by the Turkish Parliament and notified in the Official Gazette. The Turkish Information and Communication Technologies Authority ("ICTA") subsequently released a detailed regulatory framework for Social Network Providers ("SNPs"), titled "Procedures and Principles for Social Network Providers" ("Principles"), in the Official Gazette dated 2 October 2020. The law operates principally in two areas: new obligations imposed on social media providers, and content removal.
The new obligations begin with the definition of a social media provider as one with a presence of one million subscribers within the country. Such companies must appoint a representative in the country to help resolve grievances, a move intended to provide better resolution of the complaints and concerns raised by citizens.
Under the content removal provisions, if a piece of content constitutes a crime or otherwise violates a person's rights, a court can order the provider to remove that content. This updates the previous methodology, under which access to the entire site was restricted. Combined with the new obligations, the law can aid the speedy removal of harmful content through the mere threat of a court order. Additionally, rather than approaching a third party for relief, individuals may themselves file a suit for the removal of the harmful content.
The problems that arise from these provisions concern the privatization of censorship, and Turkey is only one of the newer jurisdictions to introduce them. In the early days of the Internet, policymakers felt that the commercial and social growth of the medium would be best achieved by reducing the direct liability burdens on its service providers. This was mainly realized by classifying social media providers as "distributors" rather than "publishers," a distinction established via Section 230 of the Communications Decency Act, 1996 ("CDA") of the United States of America, where most of these operators are based, and one that entails a lower burden in the management of content. Recently highlighted problems, however, have caused many governments to demand that companies take more responsibility for controlling harmful content.
While most jurisdictions cover the most basic areas, such as hate speech and child abuse imagery, localized differences in laws can run counter to internationally established standards, or to the standards in the home country of most of these providers (i.e., the USA). Compliance with such obligations can be inadequate owing to the contextual nature of these laws. From the conservative Thai government requesting takedowns of risqué photos of the King, to controversial partnerships with governments of countries such as Israel to control content, these measures have come under fire.
All these measures have essentially shifted the burden of content regulation onto the social media providers, with enforcement ensured through measures such as escalating fines, throttling of access and website shutdowns. The higher burden envisioned under the Turkish law, and under similar laws in other jurisdictions such as the EU and South Korea, can lead social media providers to slow operations in certain jurisdictions due to an inability to comply adequately. The same impetus may be behind a shift towards more cost-effective AI content moderation systems, which come with their own problems, such as recorded instances of Digital Millennium Copyright Act ("DMCA") controls being used against political targets.
Increasing the responsibility of social media providers for ensuring that harmful messages are not spread on their platforms is certainly an admirable endeavour, and many providers have already taken steps, such as Facebook's Oversight Board (often dubbed its "Supreme Court") and Twitter's fact-check warnings on harmful tweets by U.S. President Donald J. Trump. Nevertheless, there are many ethical concerns about shifting the power to control content on the internet from representative bodies to private corporations. In recent hearings before the U.S. Senate, the C.E.O.s of Facebook, Twitter and Google reiterated that a careful approach needs to be taken to implement checks and balances in this regard.
Intermediary Liability of Social Network Providers:
Social media intermediaries exert a direct influence on an individual's right to freedom of expression and information. Jurisdictions across the globe follow three broad governance models for the intermediary liability of social media websites. First, there are jurisdictions such as China, where strict liability is imposed and network service providers are held jointly and severally liable under Article 36 of the Tort Law of China. On the other end of the spectrum is the 'broad immunity model' practised by the USA, where intermediaries are largely self-regulated: they enjoy conditional immunity from content liability under Section 230 of the CDA and a safe harbour under Section 512 of the DMCA. Third is the 'notice-and-takedown model', under which a conditional safe harbour is provided; for instance, the European Union's Directive 2000/31 grants immunity to intermediaries provided they remove or disable access to unlawful content upon obtaining knowledge of it. The Principles adopted by the ICTA follow a similar notice-and-takedown model, requiring social media websites to respond to requests to delete material within 48 hours, a broad power that enables the authorities to restrict access to anything they may deem illegal and thus poses a threat to civil liberties. The Principles further constrain social media intermediaries by reducing their bandwidth by up to 90% in the case of non-compliance, in effect making it impossible for users to access these websites.
Parallels can be drawn between India and Turkey: the Ministry of Electronics and Information Technology released the Information Technology [Intermediaries Guidelines (Amendment) Rules], 2018, stating requirements similar to those adopted by the ICTA. The rules require intermediaries to use automated content moderation tools and allow user-generated content on their platforms to be monitored. Contrary to the Turkish regime, however, intermediaries are exempted from liability provided certain due diligence is followed. The liabilities of intermediaries in India are expected to undergo significant changes with the proposed Personal Data Protection Bill, 2019, as Section 93(d) of the Bill mandates that social media intermediaries provide methods of voluntary verification to identify users. The penalties for non-compliance with the Principles under Turkish law range from advertising bans and exorbitant monetary penalties to access-blocking mechanisms. Unlike in Turkey, intermediaries in India are provided immunities under Section 79 of the Information Technology Act, and the onus of determining the legality of content does not lie solely on intermediaries.
The stringent Principles devised by the ICTA can be considered hostile towards intermediaries. To establish best practices within its notice-and-takedown regime, Turkey should enact a regulatory framework that involves multiple stakeholders, such as the government, the legal community and tech companies, to ensure the transparency of digital platforms without curbing free speech. Lessons can be drawn from the Manila Principles, which prescribe that content restriction orders must comply with the test of proportionality. In conclusion, proactive monitoring of content should be encouraged, but not at the cost of the right to freedom of speech and expression on social media.