
Many people in Latvia believe so, but Re:Baltica found no evidence that Facebook’s content moderators deliberately delete posts showing support for Ukraine. There is, however, another problem: the platform’s content policies are designed in a way that makes it practically impossible to express negative emotion when speaking about the war. Only the Kremlin benefits from this.

In mid-January, a professional disinformer published a “leaked” list of Facebook moderators who supposedly block content that shows support for Ukraine. The list, shared on Twitter, consisted of surnames found on LinkedIn of people who said they work for an international company in Riga that Facebook has hired to moderate its content. The only “proof” that these people block accounts supporting Ukraine was their Slavic names.

Soon afterwards it turned out that these names had already been published on Facebook, before the Twitter post, by the Latvian ex-politician Sarmīte Pīka. “Facebook users who take a stand against Russia’s war in Ukraine are being blocked. Why has this been going on for so long with no consequences?” she wrote on her profile, also posting the address of the moderators’ workplace in Riga.

In correspondence with Re:Baltica, Pīka says that publishing the list of surnames was a cry for help. She has spent time in Facebook’s “prison” more than once, meaning her profile was blocked, for publishing posts in support of Ukraine, and according to her, a number of other public figures have had the same experience. Pīka has been “punished” twice for violating Facebook’s content guidelines. Once, a post with a caricature of the president of Russia was deleted. Another time, her account was blocked for five days after she published the poem A Dedication to Latvia’s Russians by the famous Latvian writer Imants Ziedonis. Because of her own experience, and seeing that other Facebook users had found themselves in a similar situation, Pīka concludes that “either there are coordinated groups abusing the FB content moderation system, or there is deliberate censorship overseen by the management.”

The State Security Service of Latvia (VDD) reacted to these claims and announced that it is looking into the matter. In its announcement, the VDD pointed out that it is also following events in Bulgaria, where in January disgruntled citizens protested outside the office building of Facebook’s moderators.

The question has also been raised at the political level right next door, in Lithuania. Re:Baltica therefore tried to find out whether Facebook is secretly supporting Russia’s foreign policy in our region.

Interviews declined

Re:Baltica has been writing about disinformation and the Kremlin’s propaganda for years, and as a result we have been in touch with Facebook’s representatives on many occasions. They are usually quick to respond, but their answers to our questions have rarely been meaningful.

This time was no exception. First, we got in touch with CCC Riga Digital Services, the Riga-based unit of the Canadian company TELUS, to which Facebook outsources its content moderation. CCC Riga Digital Services was registered in Latvia in 2018, and its annual reports show that the number of employees has grown from an initial 63 to 375. In 2021 it had a turnover of 10.3 million euros and a profit of 1 million.

TELUS declined an interview. In a written response, a company representative called the claims “simply false” and said that, because the moderators work according to guidelines developed by Facebook, all questions should be addressed to the social network giant itself.

Facebook also declined an interview. In a written response, its representative Vitnija Saldava denied the claims that Facebook is secretly supporting Russia’s interests. She pointed out that moderators work according to very precise guidelines, which makes it practically impossible for them to make subjective decisions. Nor do the moderators choose the posts they check; posts are assigned to them automatically. Additionally, a weekly internal audit inspects the quality of the moderators’ work. Asked for the precise number of moderators overseeing Latvian-language content, Saldava declined to answer. Re:Baltica has put this question to Facebook representatives for a long time, but their answers have always been evasive.

Having expected this kind of response from both companies, Re:Baltica spoke to two former Facebook content moderators who left the job relatively recently. They can be regarded as two independent sources because the interviews took place separately and the individuals didn’t know about each other. 



When they left the job, the moderators signed non-disclosure agreements; to protect them, we won’t reveal their names. To corroborate what the two sources told us, Re:Baltica analysed previous research by journalists internationally, and it indirectly confirmed their accounts.

Why are there so many Slavic surnames? 

During one of our message exchanges, Sarmīte Pīka sent a screenshot of a 2019 job ad in Russian from the leading news site Rus.Delfi: “Facebook in Riga is actively looking for employees to fight the manipulation of public opinion.”

“Why was [the ad] only in Russian? Also a coincidence?” Pīka asks rhetorically.

The ad was in Russian because the TELUS office in Riga moderates content in the national languages of the three Baltic states and in Russian. The Riga office also moderates Russian-language content from other former Soviet republics. This means that a moderator sitting in Riga will look through posts written in Russian by people in Latvia as well as, for example, by people in Armenia. That partly explains why TELUS has so many employees with Slavic surnames.

“I saw the post by this woman, Pīka, that the poem [by Ziedonis] was deleted. That was done by moderators who are ethnic Latvians. And it was justified, because there was a violation,” one of the interviewed moderators told Re:Baltica.

Re:Baltica did not get a clear answer on how many moderators oversee content in the national languages of the Baltic states. One source said that, until relatively recently, there were up to ten moderators each for Lithuanian, Latvian and Estonian content. Another source, who quit the job more recently, gave much higher numbers: up to 25 people for Lithuanian content and around 40 for Latvian. The company has been actively recruiting Lithuanian and Estonian employees, which raises the question of whether the number of local content moderators has grown because of the recent public scandals.

To get the job, a prospective moderator has to pass an interview and a test that includes questions on the politics and culture of the country they will oversee. This means that a content moderator for Latvia has to understand the local context. Once hired, employees go through several weeks of training. The average salary is 1,500 euros before taxes, but the rates for overtime, holidays and night shifts are generous.

Research by journalists internationally shows that moderators often perform poorly because of the heavy workload: in certain cases, they have only 15 seconds to evaluate a post. The moderators from Latvia say that may be the case in bigger countries, but here moderators spend 40 seconds to a minute on a post, which is enough. That is the average amount of time. One source says the posts are queued up for review: an image might take 15 seconds to evaluate, while a post with a lot of text might take up to 10 minutes, because you have to read it and then assess the context. “If the internal auditor later checks why I spent 10 minutes on a post, they will see that it was needed in that case,” an ex-moderator says.

While working on this article, Re:Baltica experimented with reporting aggressive comments. In some cases we received an answer within a couple of hours, in others only after several days. One of the interviewed sources explains that the response time varies depending on the gravity of the potential violation and the total number of reports. Priority is given to reports of aggression against minors and of suicide attempts (on the assumption that they can still be prevented). Other reports are handled in the order they come up in the queue. If an event has prompted many people to write comments, it can take the moderators several days to work through them.

Overseeing the work of the moderators

The information provided by both ex-moderators and the response from Facebook show that the moderators’ performance is evaluated by internal auditors. Additionally, some posts are given to several moderators simultaneously. If the majority of them come to the same conclusion but one employee diverges from it, the auditor evaluates why and might decide on extra training for the moderator in question.

The auditor also gets involved in what are considered “ambiguous” cases, a label that signals a difficult case where a more experienced colleague may need to weigh in.

For example, Re:Baltica reported a comment in which a man called the MP Jānis Dombrava an “f***g nazi” and asked: “What do you think are your chances of survival if something suddenly changes in the geopolitical situation?” Facebook didn’t delete the comment, stating that there was no violation.



Both ex-moderators, independently of each other, said that the comment shows signs of bullying, but that it would more likely be deleted if Dombrava reported it himself. In their reading there is no threat: even though an outsider might see one, in this example it is too vague. “This might be a case where the moderators’ opinions split,” one of the sources concluded, adding: “I would delete this comment.”

Both former moderators point out that there is always a chance that an employee will delete a post or block a profile out of personal disagreement with its content, but that it would be impossible to keep doing so for long: the internal audit system would catch it quickly enough.

What kind of content is being checked?

Another reason the moderators’ performance may fall short is that the content guidelines are constantly changing. Information on new changes is posted in the internal chat roughly every two weeks, and employees who don’t follow it carefully can easily make mistakes.

This might have been the case with the Facebook user Aigars Cunskis. Since last October, his profile picture has shown him in a T-shirt that says “Azov”. Before the war, one branch of the Azov battalion was considered radical. Facebook had classified it as a “dangerous organisation” and was blocking posts mentioning it. After continued pressure from Ukrainians, Facebook lifted the restriction.

Credit: Aigars Cunskis, personal photo

Cunskis’ profile picture was deleted and his profile was blocked for 30 days even though the restriction had already been revoked. “A moderator probably didn’t notice that the ban was lifted,” the moderator interviewed by Re:Baltica concludes.* 

But Cunskis’ case points to another potentially dangerous trend: Facebook deleted his profile picture after he posted a comment under Re:Baltica’s invitation for people to submit examples of deleted or blocked posts, which most likely means that someone reported it. The example also shows that the Facebook algorithm doesn’t catch all violations and that user reports matter.

As the interviewed sources say, it is a myth that a post needs to be reported by a mass of people to get the moderators’ attention. A single report is enough for Facebook to react and take down content if it believes the content violates its guidelines.

This also means that if, for example, a country unfriendly to Latvia organised an army of trolls who understand how Facebook works and had them report profiles supporting Ukraine en masse, it would have a way to “legally” silence the voices it disagrees with.

Viktoras Daukšas, the head of the Lithuanian disinformation research centre Debunk.org, says they have seen this happen. Last summer, a popular Facebook page with 200,000 followers, which had raised about a million euros in support of Ukraine, was made invisible by the social network. The reason was violations found in posts that were several months old. Daukšas believes someone deliberately went through old posts searching for violations and then reported them to Facebook.

Jillian C. York, Director for International Freedom of Expression at the Electronic Frontier Foundation, a nonprofit advocacy group for digital rights, believes this is not out of the realm of possibility. York told Re:Baltica that researchers came across a similar phenomenon a couple of years ago in Israel and Palestine, and more recently in Colombia during protests against the government. “We can’t say that they were supported by governments. More likely, the initiative came from political activists. All kinds of combinations are possible,” the Berlin-based researcher concludes.     

What are the criteria for the removal of content on Facebook?

Overall, three kinds of content reach Facebook moderators. The first consists of posts, images and comments that the algorithm catches based on keywords. The second is content that has been reported by individuals. The third comes from users appealing the removal of content they had published (this reportedly makes up the smallest share of cases).

From conversations with people whose accounts have been blocked, Re:Baltica concludes that what angers them most is the unclear criteria for what kind of content gets deleted. Most had received only a broad explanation, “hate speech”, “bullying”, or simply that the post “goes against our Community Standards”, as the reason for its removal.

In response to Re:Baltica‘s invitation for people to submit deleted posts, dozens of screenshots were sent in. After analysing them together with the ex-moderators, Re:Baltica came to several conclusions.


What kind of content is being deleted by Facebook in Latvia?

Re:Baltica’s conclusions 

It’s not allowed to speak of one nation as superior to others.

It is not allowed to generalise about members of a specific nation by calling them “zombies”, “orcs” or “trolls”, for example, “Russian orcs”. This is considered dehumanising, a denial of their human traits. Using the term “vatnyk”, however, is allowed.

It is not allowed to use slurs against people from specific nations. For example, “zhyd” (a pejorative term for a Jewish person—ed.), “hoholi” (a Russian slur referring to a Ukrainian person—ed.), “rusnya”, “moskals”, “kacaps”, “ukras”. 

It is not allowed to use slurs aimed specifically at women, such as “shmara”, “shlura”, “mauka” (slut) or “krople” (cripple), when referring to individuals or to women as a whole. These terms cannot be used against public figures either, because Facebook sees them as especially disrespectful.

It is not allowed to call a regular Facebook user a “bitch”, but you can use the term to describe a public figure.

It is allowed to use the terms “nazi”, “homophobe” and “xenophobe”, because Facebook regards them as ideological designations rather than pejoratives. Under this rule it is allowed to publish caricatures that, for example, make fun of the president of Russia, but often it is the accompanying text that creates the problem: if it contains a derogatory or disdainful term, Facebook deletes the entire post.


For example, a user can’t say that one nation is worse than others. Several of the submitted examples describe posts about a person of Russian descent acting aggressively, with a broad conclusion at the end: “This is what the Russians are like.”

“Facebook wants to be a happy place where all the nations feel good and people can debate freely,” a former moderator explains. Condemning the war and blaming the Kremlin, Putin or Russia’s ruling elite is allowed if done right. For example, it is permitted to say “Killing local inhabitants is cynical and inhumane”, but it is not acceptable to write angrily about “Russian murderers”.

At the beginning of the war in Ukraine, Facebook did make an exception and allowed more aggressive language when referring to Russian soldiers. This was also confirmed by leaked internal correspondence published by the news agency Reuters. But the policy was revoked after Russians started complaining, an ex-moderator reveals.

At the moment, Facebook’s content policy includes both “Russian soldiers” and “Ukrainian soldiers” in a special protected-group list (a status usually given to religious groups and minorities). This means you can’t use abusive language when speaking of either Russian or Ukrainian soldiers.

Not a real happy place

In trying to create a happy place for all its users, Facebook has instead built a fake world: it is practically impossible to speak emotionally about the war in Ukraine while abiding by all of the social network’s rules.

This was the conclusion of the Chancellor of the Government of Lithuania after meeting Facebook representatives in January this year. “Such sterilization of language favors Russia, it normalizes the war, washes away responsibility and becomes a silent support for the false narrative spread by the aggressor,” Giedrė Balčytytė told the press.



“Facebook acts like there is no war,” Daukšas from Debunk.org agrees. Debunk.org recently published research analysing screenshots of deleted or blocked content submitted by around 200 people. Its conclusions: Facebook doesn’t take local language and cultural specifics into account, and its policies can be abused through organised reporting of content and the paralysing of accounts with large follower bases. The moderators’ work is often inconsistent: there are cases where content isn’t removed even though the violation is the same as in something that was deleted.

This was also Re:Baltica‘s conclusion. The examples that were submitted show comments with ethnic slurs like “gansik” referring to Latvians or “ukras” for Ukrainians that were not removed even though they should have been. At the same time, comments with the ethnic slurs “moskals” or “kacaps” referring to Russians were deleted. 

There is also no consistency in the use of swear words. In one example, a comment calling the MP Glorija Grevcova a “shlura” (a slur used against women in Latvian—ed.) was deleted, but a comment calling the journalist who wrote this article a “kuce” (“bitch” in English—ed.) was not. The sources we interviewed say it probably was not deleted because the journalist is a public figure. But this is contradictory, because so is Grevcova.

As mentioned before, the State Security Service of Latvia has announced that it will look into the work of Facebook’s content moderators. In Lithuania the question was raised at the highest political level some time ago: government officials have met with representatives of Meta, Facebook’s parent company, on several occasions, most recently at the beginning of February in Vilnius. Daukšas says it has been a somewhat constructive process. After consulting Lithuanian language experts, they agreed that Facebook won’t treat the terms “moskal” (derogatory for “Russian”—ed.) and “trolls” as disdainful and consequently won’t delete content containing them, but the word “kacap” (a historic slur referring to Russians—ed.) still won’t be allowed.

Meta has also offered to organise training for activists and opinion leaders during which they would explain in more detail what is and what isn’t allowed when posting. 

But the Lithuanians believe that the fight for free expression on Facebook is not over. Debunk.org plans to research whether posts expressing support for Russia are also being taken down. The Bulgarians believe their protests have been fruitful: local activists and journalists have learned that the TELUS office in Bulgaria is being closed and will move to another country, though which one is not yet known.

According to Re:Baltica’s source, part of the work might be transferred to the Riga office, including content moderation in Ukrainian. Meanwhile, some of the employees moderating Russian-language content in Riga are being laid off. Asked for comment, the TELUS Riga office replied: “We recently informed a group of team members in Riga that the client account they were supporting has made the business decision to reduce its services in response to changing client volume demands. Due to the legally binding non-disclosure agreements in place with our clients, TELUS International cannot provide further comment on this matter. TELUS International’s sites in Bulgaria – Sofia and Plovdiv – remain open.”

*After the story was published in Latvian, Re:Baltica received a clarification from META regarding the photo containing the Azov logo that was taken down. “Meta has determined that the Azov regiment no longer meets our strict criteria for designation as a dangerous organization. This means that they are now allowed to have accounts on our platform and content posted by others concerning Azov Regiment will no longer be removed – providing it doesn’t violate any of our other policies. However, hate speech, hate symbols, calls for violence, and any other content which violates our Community Standards will be removed if we find it. This means that we are still removing content that includes the old Azov logo, which was the case with the photo you referred to, so the action of the content reviewer was correct. The new Azov logo is not violating.”


 
