Content policy


AI and content policy

By automating and streamlining online content moderation procedures, AI has the potential to improve the enforcement of content policies. However, difficulties and ethical concerns must be addressed, including algorithmic biases that can unintentionally lead to the unjust targeting or exclusion of particular groups, as well as questions of algorithmic transparency and accountability. In addition, the proliferation of AI-generated content adds a new dimension to the debate.

AI in content moderation

AI is widely employed to help identify and remove prohibited or harmful content more efficiently. AI-powered systems can process large volumes of content, reducing the burden on human moderators and improving response times. AI algorithms can also increase the accuracy of content moderation, identifying potentially harmful content by examining patterns, context, and other criteria. Another area where AI is employed is content filtering, i.e. categorising content based on user preferences and community guidelines.

For instance, Facebook relies, to a large extent, on AI in its content review process. Machine learning models are used to detect and remove, or reduce the visibility of, content violating community standards, even before anyone reports it. In other cases, AI may send content to human reviewers to double-check and decide on it, while the technology learns and improves from each decision.

However, AI-powered content filtering raises many questions and ethical considerations, often related to transparency, accuracy, and bias. AI-based content moderation often comes with a lack of transparency and an inability to explain how decisions are made. AI tools might not grasp the nuances and contextual variations of human speech, and may be less accurate when analysing non-English or translated texts. Finally, AI may also reinforce existing biases, further marginalising and censoring at-risk groups.
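To make the human-in-the-loop arrangement described above concrete, the following minimal Python sketch scores each post with a stand-in classifier and, depending on confidence thresholds, removes it, routes it to a human reviewer, or lets it through. The score_toxicity function, the flagged terms, and the thresholds are illustrative assumptions, not any platform's actual moderation pipeline.

```python
from dataclasses import dataclass

# Hypothetical terms standing in for a trained classifier's signal;
# real systems use large multilingual models, not a keyword list.
FLAGGED_TERMS = {"spam-link", "buy-followers"}

@dataclass
class Decision:
    action: str      # "remove", "human_review", or "allow"
    score: float

def score_toxicity(text: str) -> float:
    """Toy stand-in for a machine learning model's violation probability."""
    hits = sum(term in text.lower() for term in FLAGGED_TERMS)
    return min(1.0, 0.45 * hits)

def moderate(text: str, remove_above: float = 0.9, review_above: float = 0.5) -> Decision:
    """Auto-remove high-confidence violations; route borderline cases to humans."""
    score = score_toxicity(text)
    if score >= remove_above:
        return Decision("remove", score)
    if score >= review_above:
        return Decision("human_review", score)
    return Decision("allow", score)

if __name__ == "__main__":
    for post in ["hello world", "buy-followers now with this spam-link"]:
        print(post, "->", moderate(post))
```

The two thresholds capture the trade-off discussed above: automatic action only when the model is confident, and human review for borderline cases where context matters.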

AI in content creation and dissemination

AI has been widely utilised to create different types of content, from emails to news articles and research papers. AI is also used to generate images and even compose music. For instance, natural language processing (NLP) models such as ChatGPT can suggest ideas, generate drafts, and converse with the user. AI-powered editing tools can automate specific tasks, such as image enhancement or video editing, speeding up content creation. The ease with which anyone can generate any type of content raises concerns about how easily misinformation can be produced and spread. There are already several examples of journalists using ChatGPT to produce articles, leading to backlash from both the people affected by the content and the broader readership. As AI becomes better at simulating reality, the problem of deepfakes and other AI-generated visual and audio content becomes all the more serious. Learn more on AI Governance.
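As an illustration of how such drafting assistance works in practice, the sketch below uses the Hugging Face transformers text-generation pipeline with the small, openly available gpt2 model as a stand-in for larger commercial systems. It assumes the transformers library and a backend such as PyTorch are installed; the prompt and generation settings are illustrative.

```python
# Minimal text-generation sketch using the Hugging Face `transformers` pipeline.
# `gpt2` is a small, freely available stand-in for larger commercial models.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

draft = generator(
    "Draft a short opening paragraph for a newsletter about internet governance:",
    max_new_tokens=60,   # keep the generated draft short
    do_sample=True,      # sample rather than greedy decoding, for variety
    temperature=0.8,
)[0]["generated_text"]

print(draft)
```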
One of the main sociocultural issues is content policy, often addressed from the standpoints of human rights (freedom of expression and the right to communicate), government (content control), and technology (tools for content control). Discussions usually focus on three groups of content:
• Content that has a global consensus for its control. Included here are child pornography, justification of genocide, and incitement to or organisation of terrorist acts.
• Content that is sensitive for particular countries, regions, or ethnic groups due to their particular religious and cultural values. Globalised online communication poses challenges for local, cultural, and religious values in many societies. Most content control in Middle Eastern and Asian countries, for example, is officially justified by the protection of specific cultural values. This often means that access to pornographic and gambling websites is blocked.
• Political censorship on the Internet, often to silence political dissent and usually under the claim of protecting national security and stability.

Governmental filtering of content

Governments that filter access to content usually create an index of websites blocked for citizen access. Technically speaking, this is done with the help of router-based IP blocking, proxy servers, and DNS redirection. Content filtering occurs in a growing number of countries (see opennet.net).
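One client-side symptom of DNS redirection is that a blocked domain resolves to a government ‘block page’ rather than to its real servers. The minimal sketch below shows how that symptom could be checked; the block-page addresses and the test domain are purely illustrative placeholders, with no claim about any real network.

```python
# Sketch: detect whether a domain resolves to a known "block page" address,
# one common symptom of DNS-based filtering. The addresses below are
# hypothetical placeholders, not real filtering infrastructure.
import socket

SUSPECTED_BLOCKPAGE_IPS = {"10.10.34.34", "10.10.34.35"}  # hypothetical addresses

def resolves_to_blockpage(domain: str) -> bool:
    try:
        addresses = {
            info[4][0]
            for info in socket.getaddrinfo(domain, 80, proto=socket.IPPROTO_TCP)
        }
    except socket.gaierror:
        # A resolution failure can itself indicate DNS-level blocking.
        return False
    return bool(addresses & SUSPECTED_BLOCKPAGE_IPS)

if __name__ == "__main__":
    print(resolves_to_blockpage("example.com"))
```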

Private rating and filtering systems

Faced with the potential risk of the disintegration of the Internet through the development of various national barriers (filtering systems), the W3C and other like-minded institutions proactively proposed the implementation of user-controlled rating and filtering systems. In these systems, filtering mechanisms can be implemented by software on personal computers or at the level of the servers controlling Internet access.

This method allows users to implement their own filtering systems without national intervention. It remains to be seen, however, whether governments will sufficiently trust their citizens to create their own filters.
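A minimal sketch of such a user-controlled filter is shown below: pages carry category labels (self-declared or supplied by a rating service) and each user decides locally which categories to block. The categories, labels, and URLs are illustrative placeholders, not an actual W3C rating vocabulary.

```python
# Sketch of a user-controlled, label-based filter: content carries category
# labels and each user decides locally which categories to block.
USER_BLOCKED_CATEGORIES = {"gambling", "adult"}   # set by the user, not a government

PAGE_LABELS = {
    "https://example.org/news":   {"news"},
    "https://example.org/casino": {"gambling", "advertising"},
}

def allowed(url: str, blocked=USER_BLOCKED_CATEGORIES) -> bool:
    """Allow a page unless one of its declared labels is on the user's blocklist."""
    labels = PAGE_LABELS.get(url, set())   # unlabelled pages pass through here
    return not (labels & blocked)

for url in PAGE_LABELS:
    print(url, "->", "allowed" if allowed(url) else "filtered")
```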

Content filtering based on geographical location

Another technical solution related to content is geo-location software, which filters access to particular web content according to the geographic or national origin of users. The Yahoo! case was important in this respect, since the group of experts involved, including Vint Cerf, indicated that in 70-90% of cases Yahoo! could determine whether sections of one of its websites hosting Nazi memorabilia were accessed from France.

This assessment helped the court come to a final decision, which requested Yahoo! to filter access from France to Nazi memorabilia. Since the 2000 Yahoo! case, the precision of geo-location has increased further through the development of highly sophisticated geo-location software.
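The sketch below illustrates the underlying decision in a geo-location filter of this kind, assuming a hypothetical country_of() lookup that would, in practice, be backed by a commercial geo-IP database. The listings, addresses, and country codes are illustrative only.

```python
# Sketch of geo-location-based filtering: a toy country lookup decides whether
# a request may reach a restricted listing. All data below is illustrative;
# the IP addresses are reserved documentation ranges.
RESTRICTED_LISTINGS = {"/auctions/nazi-memorabilia": {"FR"}}  # path -> blocked countries

GEOIP_TABLE = {"203.0.113.7": "FR", "198.51.100.9": "US"}     # toy lookup table

def country_of(ip: str) -> str:
    return GEOIP_TABLE.get(ip, "??")   # "??" when the origin cannot be determined

def may_serve(path: str, client_ip: str) -> bool:
    blocked_countries = RESTRICTED_LISTINGS.get(path, set())
    return country_of(client_ip) not in blocked_countries

print(may_serve("/auctions/nazi-memorabilia", "203.0.113.7"))   # False: French visitor
print(may_serve("/auctions/nazi-memorabilia", "198.51.100.9"))  # True:  US visitor
```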

Content control through search engines

The bridge between the end user and Web content is usually a search engine, and filtering search results is therefore often used as a tool to prevent access to specific content. The risk of search results being filtered, however, does not come only from the governmental sphere; commercial interests may interfere as well, more or less visibly and pervasively. Commentators have started to question the role of search engines (particularly Google, considering its dominant position among users) in mediating access to information, and to warn about their power to influence users’ knowledge and preferences.

This issue is increasingly attracting the attention of governments, which call for increased transparency from Internet companies regarding the algorithms they employ in their search engines. German Chancellor Angela Merkel spoke out about this risk, claiming that: ‘Algorithms, when they are not transparent, can lead to a distortion of our perception, they can shrink our expanse of information.’

Automated content control

For Internet companies, it is often difficult to identify illegal content among the millions of content inputs on their platforms. One possible solution can be found in artificial intelligence mechanisms that detect hate speech, verbal abuse, or online harassment. However, relying on machine learning to decide what constitutes hate speech raises many questions, such as whether such systems can differentiate between hate speech and irony or sarcasm.
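The difficulty with irony, sarcasm, and quoted speech can be illustrated with a deliberately naive detector: both posts below contain the same flagged phrase, yet only one is abusive, and a system that looks only at surface features cannot tell them apart. The phrase list is a neutral placeholder standing in for real hate-speech lexicons.

```python
# Sketch of why surface-level matching struggles with context: both posts
# contain the same flagged phrase, but only one is abusive.
FLAGGED_PHRASES = ["you people are worthless"]

def naive_flag(text: str) -> bool:
    """Flag any post containing a listed phrase, with no notion of context."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in FLAGGED_PHRASES)

posts = [
    "You people are worthless and should leave.",                      # abusive
    'He messaged me "you people are worthless" - is this reportable?',  # victim quoting abuse
]

for post in posts:
    print(naive_flag(post), "->", post)   # both are flagged True
```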

With the development of social media and Web 2.0 platforms – blogs, document‑sharing websites, forums, and virtual worlds – the difference between the user and the creator has blurred. Internet users can create large portions of web content, such as blog posts, videos, and photo galleries. Identifying, filtering, and labelling ‘improper’ websites is becoming a complex activity. While automatic filtering techniques for texts are well developed, automatic recognition, filtering, and labelling of visual content are still in the early development phase.

One approach, sometimes taken by governments in an attempt to manage user-generated content that they deem objectionable, is to completely block access to platforms such as YouTube and Twitter throughout the country, or even to cut Internet access completely, hindering all communication on social network platforms (as was the case, for example, during some of the Arab Spring events). However, this can seriously infringe on the right to free speech, and violates the potential of the Internet in other areas (e.g. as an educational resource).

As the debate of what can and cannot be published online is becoming increasingly mature, social media platforms themselves have started to formalise their policies of where they draw the border between content that should or should not be tolerated. For example, Facebook’s Statement of Rights and Responsibilities specifies:

‘We can remove any content or information you post on Facebook if we believe that it violates this statement or our policies.’

Yet, the implementation of such policies sometimes leads to unintended consequences, with platforms removing legitimate content.

The national legal framework

The legal vacuum in the field of content policy provides governments with high levels of discretion in deciding what content should be blocked. Since content policy is a sensitive issue for every society, the adoption of legal instruments is vital. National regulation in the field of content policy could bring a more predictable legal situation beneficial for the business sector, ensure a better protection of human rights for citizens, and reduce the level of discretion that governments currently enjoy.

However, as the border between justified content control and censorship is delicate and difficult to enshrine in legislation, this tension is increasingly being resolved in the courtroom, for example regarding the role of social media outlets in terrorist activities.

International initiatives

In response to the increased sophistication with which terrorists manage their activities and promote their ideologies online, multilateral forums have started addressing ways to limit harmful content (e.g. the G7, the UN Security Council, and the United Nations Office on Drugs and Crime). At the regional level, the main initiatives have arisen in European countries with strong legislation in the field of hate speech, including anti-racism and anti-Semitism. European regional institutions have attempted to impose these rules on cyberspace.

The primary legal instrument addressing the issue of content is the Council of Europe Additional Protocol to the Convention on Cybercrime (2003), concerning the criminalisation of acts of racist and xenophobic nature committed through computer systems. On a more practical level, the EU adopted the European Strategy to Make the Internet a Better Place for Children in 2012.

The Organization for Security and Co-operation in Europe (OSCE) is also active in this field. Since 2003, it has organised a number of conferences and meetings with a particular focus on freedom of expression and the potential misuses of the Internet (e.g. racist, xenophobic, and anti-Semitic propaganda, and content related to violent extremism and radicalisation).

The role and responsibility of intermediaries

The private sector is playing an increasingly important role in content policy. Internet Service Providers, as Internet gateways, are often held responsible for the implementation of content filtering. In addition, Internet companies (such as Facebook, Google, and Twitter) are becoming de facto content regulators. Google, for example, has had to decide on more than half a million requests for the removal of links from search results, based on the right to be forgotten. These companies are also increasingly involved in cooperative efforts with public authorities in an attempt to combat illegal online content.

In 2023, most initiatives on content governance will try to find a balance between the legal status of social media platforms and their social roles. Legally speaking, these are private companies with very little legal responsibility for the content they publish. Societally speaking, these companies are public information utilities that impact people’s perception of society and politics. Twitter’s founder Jack Dorsey described Twitter as ‘the public conversation layer of the internet’.

Currently in the USA, tech platforms are not responsible for the content they host (as per Section 230 of the US Communications Decency Act). Although there are calls from both parties in the US Congress to revisit this arrangement, content governance in the USA is still in the hands of tech companies.

The most important development in the coming year will be the outcome of Musk’s policy experiment with Twitter. If he is successful, he may show that a self-regulation model for content governance is possible. If he fails, it will be a sign that the US Congress has to step in with public regulation, most likely by revising Section 230.

In the EU, content governance has shifted towards public regulation. The Digital Services Act (DSA) introduced new, stricter rules that social media companies will have to follow. Their implementation will start in 2023.

Similar to GDPR and data regulation, many countries are likely to take inspiration from the EU’s DSA approach to content governance.

On a multilateral level, UNESCO will host the conference Internet for Trust: Regulating Digital Platforms for Information as a Public Good in February 2023 as the next step in developing content governance around its Guidance for regulating digital platforms: A multistakeholder approach.

