When American Companies Moderate Global Content

America exports many things, but its content moderation standards may be the most important. Activists and concerned citizens the world over use social media as a microphone to connect, mobilize and push for change – which makes the content rules that online platforms set and enforce enormously influential in shaping the global discourse.

However, because these rules are often designed based on U.S. sensibilities and priorities, they can have an adverse impact on globally marginalized communities. Platforms can also be subject to pressure from governments, including those with poor human rights records, to crack down on dissent and opposition.

In this third of our five-part series on Global Digital Rights Challenges, Promise Institute Assistant Director Jess Peake talks with Marwa Fatafta, MENA Policy Manager at Access Now, about the effect that the decisions of American platforms have on the censorship of activists in the Middle East and North Africa, and how such actions impact fundamental human rights, censor dissent, and erase history.

Special thanks to our partners in this series, UCLA Law’s Institute for Technology, Law and Policy.

Transcript

Marwa Fatafta: Who decides that my speech is not permissible? When you’re a part of a community and you have a certain political lexicon or social lexicon, and that is accepted among that particular community, why should a US company based in Silicon Valley decide that, “Actually, no, your speech is incitement to violence, your speech is hate speech and therefore it should be deleted and removed”?

Natalie Monsanto: This is the Promise Institute Podcast, and we’re very excited to welcome you to the third of our five-part special series on Global Digital Rights Challenges. This episode discusses what it means when American companies moderate global content.

This series was produced in partnership with UCLA Law’s Institute for Technology, Law and Policy and features striking conversations about the way the relationship between technology and human rights is playing out around the world. As digital considerations entwine with and blur more of the core functions in our lives, discussions like these are becoming all the more imperative. Stay tuned after the episode to hear about ways to support work like this, and follow us on social. As we start off, the voice you’ll hear first is ITLP Executive Director Michael Karanicolas introducing our speakers.

Michael Karanicolas: We are very excited today to welcome Marwa Fatafta, the MENA Policy Manager for Access Now, who leads their work on digital rights in the Middle East and North Africa region. She is an Advisory Board Member of the Palestinian digital rights organization 7amleh, and a Policy Analyst at Al-Shabaka, The Palestinian Policy Network, working on issues of political leadership, governance, and accountability.

Before joining Access Now, Marwa worked as the MENA Regional Advisor at the Transparency International Secretariat in Berlin, where she coordinated regional and global efforts to mainstream gender into anti-corruption policies and norms. She previously served as a Communications Manager at the British Consulate General in Jerusalem, was a Fulbright Scholar to the US, and holds an MA in International Relations from Syracuse University and an MA in Development and Governance from the University of Duisburg-Essen. She will be speaking today with Jess Peake, the Assistant Director of the Promise Institute for Human Rights and Director of the International and Comparative Law Program at UCLA School of Law. Jess has taught classes on International Humanitarian Law, International Criminal Law, and Methods and Theories of International and Comparative Law. In December, she received a grant from the UC Multicampus Research Programs and Initiatives to launch a Human Rights Digital Investigations Lab at the Promise Institute, in collaboration with colleagues at UC Berkeley and UC Santa Cruz. The lab launched successfully this January and has already trained 25 students in cutting-edge open source investigation techniques, and is supervising three full-time student fellows who are working on a report examining the role that online disinformation played in the fall 2020 conflict in Nagorno-Karabakh between Armenia and Azerbaijan. Jess, I’ll turn it over to you.

Jess Peake: Thank you very much, Michael. Marwa, I thought we’d maybe start off with kind of a landscape-setting question. Can you describe how social media has become an important tool for activists and change makers in the Middle East and North Africa region?

Marwa Fatafta: Thank you, Jess, and thank you, Michael, for the introductions. It’s a great question. The MENA region holds some of the world’s worst records on freedom of expression. The information ecosystem has been traditionally controlled by the states. So, it’s been really hard for journalists, activists, human rights defenders, civil society at large, or even ordinary citizens to really express their opinions, to share information, to communicate freely and safely with each other.

Let alone, of course, when it comes to political organizing and mobilization, which has been extremely difficult. With the advent of social media, and here I want to go back to a historical moment: this year, we commemorated the 10th anniversary of the Arab Spring, or the Arab Uprisings, back in 2011, when there was this moment of celebration of so-called liberation tech, the idea that social media platforms had finally liberated that heavily controlled information system and had allowed people to communicate with each other freely, and also to expose corruption and human rights violations by oppressive regimes in the region, which had been not only controlled but also suppressed and hidden from being disseminated.

Ten years forward, of course, platforms still play that crucial role where, as a citizen, you can still express yourself and mobilize. There are many social movements and political movements. The Me Too movement, for example, has made its way to the MENA region, where women, to this day, are exposing sexual assault and sexual harassment and supporting each other on social media.

But, unfortunately, that space is also becoming policed and controlled, by the states on the one hand, but also by these content moderation policies from social media companies. That’s why I’m extremely excited to be speaking with you today about how those policies and actions play out in a region like the MENA region.

Jess Peake: Thank you. So, are we talking about all social media platforms, or are there particularly prominent ones that stand out as being very heavily utilized in the region? Also, we were just talking earlier about social media really being the internet. I was hoping you could also talk a little bit about what that means for people in the region.

Marwa Fatafta: Yeah. So, in terms of the most popular platforms, it really depends on where you are in the MENA region. For countries like Jordan, Palestine, and the majority of North African countries, I think Facebook is the most popular platform used by internet users. In the Gulf region, for activists from Saudi Arabia, the UAE, and different parts of the Gulf, you have Twitter being the top platform as opposed to Facebook. But then, something we’ve seen in recent years, Instagram is being utilized more and more for political campaigning. YouTube as well, especially for human rights documentation. There’s a lot of video content, for example, coming from Syria documenting war crimes and human rights violations.

So, it really depends on what you’re using the platform for. But the fact is, if you are a user relying on social media, if you’re relying on Facebook for your livelihood, for your job, for your work, if you’re an artist or a musician, or if you’re an activist or a human rights defender, for those people, as you said, sometimes social media platforms are the internet, because that’s the only space where you can express yourself. In terms of newcomers, TikTok is becoming the new star now. Especially during the COVID-19 period in countries like Egypt, we see that the app is being used extremely widely, but that also brings new challenges for users.

Last year, there were at least nine young women, TikTok influencers, who unfortunately were detained and prosecuted by the Egyptian public prosecutor for violating family values. They were just sharing benign content of lip-syncing, singing, makeup tutorials and whatnot. But, according to the public prosecutor, that was in violation of the cybercrime law. This is one example of how new platforms that come up and are utilized by users in different ways, both political and nonpolitical, can quickly be cracked down on by governments in the region.

Jess Peake: Thank you. So, we’re here today to talk about content moderation and particularly how American companies regulating content can be problematic. But, before we get into the meat of that, can you explain to us what we mean by content moderation, who is doing it, and how it is being regulated? What sort of activities are being captured by these policies?

Marwa Fatafta: I must admit that I never thought about this question, so I don’t have a smart definition of what content moderation is. But in simple terms, it’s the practice by social media companies of looking at user-generated content to determine whether that content is in line with their terms of service, rules, or community standards. In a nutshell, it’s social media companies deciding what is permissible and what is not on their platforms. Obviously, this is a source of contention, because who gets to decide and develop those rules of what is legitimate speech and what is not, or what is legitimate content and what is not?

So far, I think social media companies have enjoyed a form of self-regulation in terms of what we at Access Now call content governance, because it’s not only about looking at the content and deciding whether you want to keep it or remove it, but also about boosting and promoting that content or demoting it, among other issues.

In that content governance model, platforms themselves develop their own policies, their own rules, and their own terms of service to decide what is permissible and what is not. In many cases that is not necessarily in line with international human rights standards and results in violating users’ rights, despite public commitments by social media companies to adhere to and respect human rights.

Again, it’s sometimes also not in line with national laws, which is in fact a good thing, because if you, as a social media company, want to develop your own rules based on the laws of China or the laws of Turkey or the laws of Saudi Arabia, that means a lot of censorship. But they do comply with US laws, and I understand that has implications for particular sorts of content, especially when it comes to extremism and violent content, and for the impact of US-based companies moderating content in the Global South, or in the rest of the world, where US laws can definitely be problematic for legitimate speech.

Jess Peake : So, it sounds like these platforms have an enormous amount of power to make decisions about what speech is legitimate versus what speech is not legitimate. You mentioned that the companies are sort of developing their own internal policies around this, not really guided by any particular external framework.

You mentioned, obviously, just now that many of the large companies that we’re talking about are US-based and so they are at least in some respects operating within the US legal framework. But what about human rights? How do some of these platform content moderation policies impact human rights? What are some of the concerns that are raised, and how should platforms be thinking about human rights as a more useful framework that they should really be seeking to incorporate in their policies?

Marwa Fatafta: Yeah, it’s a really good question and also a difficult one, because there have been a lot of debates around how to incorporate international human rights law and standards into content moderation policies, the entire governance of speech, and how those companies make decisions. So there are two parts to this question; the first, obviously, is the impact of content moderation policies on human rights.

I think in the MENA region, at least, we have been documenting for a long time, especially since the Arab Spring, a lot of discrimination and arbitrary, non-transparent decision-making when it comes to content coming from the region at large, whether in Arabic or English. If you’re a journalist in Egypt right now, it’s impossible to publish an article in state-run media, or in any media, as a matter of fact, that is affiliated with the regime. For those who are still running independent media outlets, their websites are blocked and they are prosecuted, detained, et cetera. So you’re facing a lot of intimidation, you’re facing a lot of harassment, and the only place for you to share information, to publish articles, to talk to people, to document, is social media.

Therefore, when you wake up one day and your accounts are suspended, and that has happened many times, that’s a violation of your right to freedom of expression. Take Twitter, for example. Around a particularly important political moment back in late 2019, when there were calls for protests against Sisi, the president of Egypt, Twitter decided that a particular expression of political dissent was against the rules and was deleting and suspending accounts of prominent Egyptian activists. That’s a violation of their freedom of expression and also of the right to association and assembly online. Unfortunately, in those cases, there’s no transparency.

So, you don’t know what rules or terms of service you have violated as a user. Most importantly, you don’t have access to remedy and you don’t know how to challenge that decision. This is often where organizations like Access Now come in. We run a helpline that provides digital security support and can communicate and escalate cases to social media companies for them to take quick action.

But unfortunately, not everyone has access to an organization like Access Now. As a result, you’re being censored, you’re being silenced, and you can’t express yourself, especially around political moments. We see this in Syria. We saw this most recently in Palestine, where activists were organizing politically and Instagram and Facebook were censoring their content on a mass scale; thousands of pieces of content were removed. To borrow the words of one activist on the ground, if it wasn’t for their live-streaming on Instagram, they would have been attacked by the police and dragged out of their homes. Sometimes your ability to go on those platforms, either through livestreaming or through sharing content, can create a buffer zone or a form of protection for you, especially when you are on the front lines.

This is where social media companies have failed quite miserably, I have to say, in not protecting their users and their users’ rights. If your account is suspended at this one particular important moment, it raises, of course, a lot of questions about who makes those decisions and how, and obviously questions around content moderation at scale, about the algorithms those platforms use to moderate content, and so on and so forth, not to mention, of course, how certain policies are developed.

That leads me to the second part of your question. Things like hate speech, incitement to violence, and terrorism, as in the Twitter example I shared with you, where one particular expression of political dissent was treated as hate speech, are categories of speech that are broad but don’t have any universal definitions, and international human rights standards also don’t provide any specific definitions. Yet we see that social media companies are developing their own definitions of what constitutes hate speech, what constitutes incitement to violence, and what constitutes terrorism. In some particular cases, those decisions are heavily politicized. We can see that they’re taking cues from governments, or they’re being pressured by governments to moderate or over-moderate certain content. This is where we always try to bring the conversation back to the importance of human rights standards, to set at least certain limitations and restrictions on censorship and on the removal of content, and to require a higher threshold, especially for decisions that are disproportionate and unnecessary and that most often lack legitimacy under a human rights framework. Personally, I’m still trying to crack that formula.

How exactly can we incorporate human rights standards into content moderation policies? For now, as a start, we can say that we just need to learn what these content moderation policies are, because in my part of the world, the policies that govern the speech of users are in many cases opaque. There are policies that we learn about either through trial and error, through experience, when you think certain content is absolutely over-moderated, or through private consultations with the companies, where we learn, okay, so you do have a rule for this, but you don’t share it with your users and you don’t share it on your platforms. So, transparency is extremely important at this point.

Jess Peake: There’s a tremendous amount to unpack in that. I know that Access Now, about a year ago, adopted a set of recommendations for content governance that encourage respect for human rights.

So, could you maybe talk us through some of those recommendations? Obviously, we don’t have time to go through all 26, but what are some immediate or incremental changes that platforms could make now that would move in the direction of greater protection of users’ human rights?

Marwa Fatafta: Yeah. So, the 26 recommendations you mentioned, we developed them in a report on content governance where we outlined three systems.

One is self-regulation, which is the current system we have with social media companies: they develop their own rules and implement them, often without transparency and without accountability. Then we have state regulation, where states, through their own national legislation and frameworks, decide what constitutes illegal content and what does not.

Obviously, in repressive or authoritarian contexts, that has severe ramifications for human rights. Then there is a combined co-regulation system, where companies have the ability to regulate their own content, but governments also provide their own regulation. In terms of recommendations and what needs to change now, we’ve been working quite actively on a recent case in Palestine.

That, to me, summarizes the problems we’ve been facing on social media platforms in the MENA region for over a decade. There was a political moment where people were actively sharing documentation of human rights violations, organizing, sharing content, etc. We saw this massive censorship without users knowing what exactly they had violated or why their accounts were being suspended.

We are making demands right now of Facebook, but also of other social media companies, to provide transparency on the policies that govern speech in the MENA region, because we know there are special policies that exist, but users and the public at large don’t know what these policies are.

We also need transparency on how social media platforms use algorithms to moderate content. That’s another issue on its own: because of the huge volume of content, social media platforms have been increasingly relying on automated decision-making and algorithms to detect, flag, and remove content automatically. With COVID-19, and many human content moderators basically having to work from home, social media companies are now relying more and more on those algorithms.

Speech is very complicated and not straightforward, especially when it comes to political speech, and the Arabic language is already difficult and complex because you have spoken Arabic in different dialects across 22 countries. In order for Facebook or any other social media company to flag, understand, or moderate content, they really need to understand the intention of the speaker and the political, social, cultural, and linguistic context of that particular piece in order to determine whether it’s really hate speech or not, whether you are praising terrorism or you are a journalist reporting on a terrorist organization. Those algorithms are absolutely blind; they don’t understand that nuance, they don’t understand the complexity of speech, and as a result many users are being affected.

So, when it comes to algorithms, we are also demanding to know what they are, to have some public audits, for example, to understand whether they’re working or not, and to have a clear path and process for remedy. By the way, much of the content we escalate to social media companies was removed in error, flagged as a false positive, and then reinstated; content or accounts are being suspended by the hundreds, by the thousands. The answer we receive from social media companies is, “Oops, sorry, it was a technical error, we’ll do better,” and then it’s the same until the next scandal, until the next case. It’s been like that for the last decade.

So when that happens, we’re asking for a second review by a human, by a person who can understand context, especially a person who is from that region or is a specialist on the region and can understand whether this content is indeed in violation of the terms of service or not. I think at this point there need to be structural changes within those companies in how they make those decisions and in how they design their algorithms as well.

Last but not least, when you develop content moderation policies, you need to co-design them with the communities that are affected by such policies. As I said, right now we really learn about content moderation policies in such an arbitrary manner. Again, we’re privileged; not everybody is a digital rights organization with access to those big platforms.

So, sometimes we’re called in for consultation after the policy has been implemented, when there are some grievances and maybe they want to add to it or change some bits of it, and sometimes before they embark on developing that policy. We’re pulled in at different points in the policy cycle.

What we’re asking for is this: okay, you want to govern the speech of communities and populations around the world. Sit down with them, talk to them, listen to them, understand their needs, understand how they express themselves, because maybe what is hate speech for one community is, for another, legitimate political speech.

In order for you to get it right as a company, you have to speak with civil society organizations, with activists, with human rights organizations that can provide not only context, but also cases of how the existing policies cause harm and human rights violations, so there’s a wealth of information and experience.

What is lacking, I would say, is the political will of platforms to do that and to dedicate their resources, technical, financial, and human, to these kinds of consultations.

Jess Peake: Thank you, Marwa. Obviously the importance of understanding context really cannot be overstated here, and the need for affected communities in particular to be integrated into conversations about how policy should be developed by these platforms really seems to be a crucial next step in improving transparency, access, and accountability on these platforms. I know that in December your organization, Access Now, along with about 25 other organizations, sent an open letter to Facebook, Twitter, and I think YouTube, calling on them to stop silencing and erasing critical voices from marginalized and oppressed communities across the MENA region. So I was wondering if you could tell us a little bit about some of the circumstances that led to that letter, and also some of the specifics about the particular harms that marginalized communities face from the platforms’ content moderation decisions.

Marwa Fatafta: Yeah, we sent that letter on the 17th of December, the date on which the Tunisian street vendor [Mohamed] Bouazizi set himself on fire, sparking the Arab Uprisings. The Arab Spring was that moment when we were all celebrating the power of technology and the power of social media. The Arab Uprisings provided a brilliant PR moment for companies like Facebook and Twitter to promote themselves as the platforms giving a voice to all people, where users can express themselves freely and safely and so on.

Even though companies claim that their terms of service and their rules treat their users around the world equally, our experience working with human rights defenders and activists on the ground speaks differently. We’ve come across, again and again, the same grievance from activists: those companies really don’t care about us.

You know, I can give you many examples. In Syria, for example, tens of thousands of accounts of anti-regime activists and journalists have been suspended without any explanation. In Palestine, journalists’ and activists’ accounts were suspended without any transparency or notification. In Tunisia last year, hundreds of people tried to log in one morning to their accounts and were told they were not eligible to use the platform. Back then, we wrote to Facebook immediately: what is happening here? Can you give us more information? They didn’t; they just said, “Sorry, it was a technical mistake, so we’re going to reinstate this.” By the way, to this day, there are still accounts that have not been reinstated.

Then, a week later, we read in the news that Facebook had disrupted a network of what they call coordinated inauthentic behavior. It was a Tunisian digital company called UReputation that was running a sophisticated information campaign to influence the 2019 presidential and general elections in Tunisia, as well as elections in other African countries.

They detected it and removed the entire network and what they call its assets, including accounts and pages and so on. As a result, there was collateral damage: people who had no link whatsoever to that particular network or company. Obviously, those individuals did not have any way to challenge these decisions and say, “Sorry, I have no connection whatsoever with that company.”

But ironically, a year before that, we as civil society organizations in Tunisia had asked Facebook to provide transparency ahead of the elections, specifically because we were afraid of those disinformation campaigns. We heard zero answers. A year later, there it was indeed, and ordinary users were affected.

The examples are really plentiful. We thought, okay, so this is not just a one-off incident. We can’t afford, as civil society organizations with limited resources, to continue to play this game where there is a fire, we need to put it out and communicate with those companies, and then the next month or the year after we have the same exact issue, but in a different country and in a slightly different context.

So, we tried to analyze the patterns in these content moderation decisions. In that letter, we demanded transparency on policies, transparency on actions, remedy for users, and special policies that recognize and understand context and can differentiate documentation of human rights abuses from incitement to violence.

In that case, of course, you can’t rely on algorithms to do this job for you. Again, the same demands. I know that I’m repeating myself, but that’s the fault of the companies, because none of the demands have been answered yet. So that’s it in a nutshell. Last month, we released similar demands to Facebook again, so I hope that there will be some response, or a more positive response, from them soon.

Jess Peake: Thank you. Yeah, you keep coming back to this point of transparency, but it really is the crux of this issue: in order to be able to judge content moderation policies and to challenge them, we really need to understand what they are.

It just seems like there is no real ability to understand the basis on which these platforms are making decisions. There is actually a really interesting question from Nick Lampson, who is a rising 3L at UCLA Law School. He asks whether the goal is a catch-all, uniform content moderation policy, or more of a dynamic, fine-grained collage of policies that take into account particular geopolitical contexts and regions.

Marwa Fatafta: I mean, the idea here is that the policies that govern content are obviously dynamic, because language and expression are by their very nature also dynamic. I don’t think a catch-all, one-size-fits-all policy works.

That is what is currently being done, and when we see how it’s being enforced in our region, there’s either over-enforcement and over-moderation or a lack of enforcement and a lack of moderation. Anything related to extremism, violence, or incitement to violence is heavily over-moderated, resulting in censoring people who should not be censored, especially activists and journalists.

But then, when we talk about gender-based violence online on those platforms, or hate speech targeting gender minorities, there are crickets, literally very slow responses from those platforms, unfortunately in some cases relating to real-world harm, where LGBT+ community members are being outed on those platforms and their pictures circulated.

As a result of that, they’re being beaten on the streets or kicked out by their families, because the family found out that their son or daughter is gay, for example. There are many heartbreaking stories of women activists being intimidated, harassed, and threatened on those platforms, and the responses from social media companies are below satisfactory. This model, I think, doesn’t work.

But what we aspire to have is a more decentralized form of content moderation, where users have more control and agency in deciding what the platform should look like and how they should express themselves. Because at the end of the day, I hear time and time and time again, “Who decides that my speech is not permissible?”

When you’re a part of a community and you have a certain political lexicon or social lexicon, and that is accepted among that particular community, why should a US company based in Silicon Valley decide, “Actually, no, your speech is incitement to violence, your speech is hate speech, and therefore it should be deleted and removed”? This is why I keep stressing this point of co-design, because this one-size-fits-all approach has failed.

Jess Peake: Do we see any receptiveness from platforms to that idea of co-design? Another question from my student, Nick, who wonders if there’s a difference between how more well-established platforms are approaching this question versus startup platforms. Are companies that are new to this space thinking about these issues in a different way?

Marwa Fatafta: Good question. To your first question, Jess: one new social media app is Clubhouse, and we don’t hear much about content moderation in relation to that application. It’s also becoming extremely popular, especially among communities where freedom of expression is not allowed. So, we see a lot of interesting taboo topics being discussed, around sexuality, feminism, anti-regime political discussions, and so on. I am not sure if those applications, for example, are learning the lessons from the bigger ones, but that’s an interesting one. So far, I have not seen Clubhouse, for example, thinking about moderating content differently, and obviously the medium of that platform is audio, which is different from Facebook, so I guess it’s also a different issue. But TikTok, for example: new platform, same problems, arbitrary takedowns, arbitrary suspensions, no transparency, no remedy. In terms of how receptive they are to the idea, here’s the thing: I don’t want to be bashing those platforms; they do reach out and they try to speak to communities on the ground. For sure, they have been present, especially Facebook and Twitter. But then again, what happens after those conversations? How do those conversations translate into structural changes, into actual policies? That is the missing piece, and we’re still hoping for companies to amend that, because consultations do happen, but they rarely revisit their decisions.

Yeah, there’s a lot left to be desired on that front.

Jess Peake: So, you’ve mentioned several times, Marwa, the lack of access to remedy. I wanted to spend a few minutes talking about the Facebook Oversight Board and other entities that have been put in place to provide some type of remedy, at least in a limited range of cases. What sort of scope do you see bodies like the Oversight Board having, and what type of impact might they be able to have on content moderation decisions?

Marwa Fatafta: Yeah. So, when the Oversight Board was announced, we were a bit skeptical of the whole idea of a Facebook supreme court. In my opinion, it’s another way for Facebook to keep the self-regulation going; it’s very convenient for them. They think, okay, there’s a lot of grievance with our content moderation, so let us solve it through the Oversight Board. We also raised concerns about regional representation. These are US-based companies, but I think up to 90% of their users are from outside the US, and when we looked at the first makeup and the first members of the Oversight Board, there was a big gap there. But to be fair, I think the Oversight Board, especially with the cases it has taken, has pushed quite strongly for some of the things we were asking for, such as providing transparency on dangerous individuals and organizations. For example, many of the political organizations in the region, or some of them, are on that dangerous individuals and organizations list, but that list isn’t public. The Oversight Board has been calling for making it public, and they’ve been calling for positive reforms, so that’s a good thing. I know that there’s a new case from the MENA region that the Oversight Board has just announced.

I’m really looking forward to that one, because I think it’s about time for the board to tackle issues from the MENA region. In a nutshell, the work that they’re doing is great, and they have great experts, especially international human rights lawyers. Nevertheless, the Oversight Board is not the answer.

You know, the issues are far bigger than what the Oversight Board can do and accomplish. It’s not only about content moderation; the entire business model of those companies is problematic, and that’s something that is obviously outside the mandate of the Oversight Board.

Jess Peake: Chuka just added a question about a tweet made by the Nigerian government that Twitter very recently removed on the grounds that it incited violence; the immediate reaction, obviously, was the banning of Twitter in Nigeria. Do you think that social media companies should be able to regulate speech by governments?

Marwa Fatafta: I think so, absolutely. I mean, we’ve seen it in the case of the former US president, Trump, and the results of his speech, and with many other officials and politicians who have a huge following on those platforms and an audience that listens to them and is influenced by their speech, disseminating hate speech and all sorts of harmful content.

That’s not in relation to the Nigerian government’s tweet specifically, but to the question of whether social media companies should moderate or regulate speech by governments. Now, to the ban itself: unfortunately, I think that’s a new trend. We’ve seen it in India, in Nigeria, and in Turkey with their social media bill as well, where they require companies to hire a local representative and to respond to their requests for content removal, among other things. Governments are weaponizing their own legislation and their own power to literally just shut down a platform when it makes decisions disliked by those governments. It’s really worrying to me. So far in the MENA region, we haven’t seen governments shutting down particular platforms, but we know that when it comes to internet regulation, there’s a lot of copy-and-paste between different jurisdictions. So, if it happens in Nigeria or in India, it could happen in, I don’t know, Tunisia or Egypt or Saudi Arabia or any other country for that matter.

Jess Peake: You just mentioned the Turkey social media bill, which sets a number of the conditions on social media companies that you just described. Are there other examples of states in the MENA region that have imposed obligations under local law for companies to take down specific types of content?

Marwa Fatafta: Yeah. So far in the region, when it comes to internet regulation, there is this cybercrime legislation that prosecutes users. These laws are very repressive; they’re ambiguous and overbroad, and many of the provisions basically allow the government to interpret speech however they like. They don’t necessarily obligate social media companies to take down content, in the sense of laws where you have a certain timeframe and you’re obligated to take down content if it’s considered illegal in that country. We saw attempts last year by Tunisia; they introduced a so-called fake news law, which would moderate content on social media platforms. Same with Morocco, which used the German law as an inspiration; they called it an international best practice. Luckily, civil society was quick to react and the draft was tabled. So, I would say there might be a resurfacing of these kinds of laws soon, but so far, luckily, we haven’t seen any new proposals being discussed by governments.

Jess Peake: A couple of other really great questions, one from Faraz Ansari: do you think that the future of these internet platforms, after different regional regulations, for example the GDPR, will be more fractured and insular, and will that help or hurt these content moderation issues?

Marwa Fatafta: That’s a difficult question. So, here’s my humble opinion. Surely, the era of self-regulation is coming to an end soon. The question is what the new regulation will look like.

There’s a lot of debate right now in the US, for example, on Section 230. You know, I’m not a US law expert on this issue, but those debates are happening and lots of proposals are popping up right, left, and center. Similarly in the EU, with the Digital Services Act and the Digital Markets Act. I think those are indications of attempts to find solutions to these huge platforms that so far have enjoyed a great deal of unquestioned power. So, it really depends at the end of the day on what those regulations look like. From a regional perspective, those regulations, meaning the regulations taking place at the EU level, at a certain European national level, or in the US, are often drafted, discussed, and enacted without considering that whatever legislation is passed will have implications for the Global South, either because the platforms themselves are based in the US and are governed by US law, so whatever is decided there will have direct impacts on users who are outside the US but use those platforms, or because governments, as I said, copy-paste some of those legislations, and when it comes from democracies like the US and European countries, it’s often seen as an international best practice or an international standard.

Some regulation really works well in a democratic setting, but if you export it to an authoritarian country, it really does not work. So, when we talk about co-regulation or government regulation of content, it might work in a country like Germany; there was a lot of public outcry that it could be detrimental to human rights and freedom of expression, but to a certain degree it works well, because there are checks and balances and the chance of those kinds of laws being abused is slim. But if you have a similar law in Saudi Arabia, where certain content is flagged as illegal, first of all, who decides? In those kinds of authoritarian contexts it can be legitimate speech, but it’s illegal in that particular country, so companies have to comply, and that would be really, really catastrophic. So the question here is, of course, how to find the right regulation, but thinking from a global perspective is really, really important in those conversations, as governments and lawmakers try to figure out ways forward.

Jess Peake: Thank you, Marwa. I think we’ll have to leave our conversation now, but thank you so much for sharing your insights and expertise with us. This was really, really interesting. Thank you so much. 

Marwa Fatafta: Thank you, Jess, thank you so much.

Natalie Monsanto: Thanks to our co-sponsors in this series, UCLA Law’s Institute for Technology, Law and Policy, @UCLAtech on Twitter. Find Marwa Fatafta on Twitter as well; she’s @MarwaSF. Of course, Access Now is @accessnow. To follow us on social, look for @promiseinstUCLA. Please, if this episode was valuable, support our work: visit law.ucla.edu/supportpromise to make a donation at any level and help future conversations like these come to be. Subscribe to the show to make sure you catch the rest of the series as it’s released. Until next time, take care.