Femicide Case in Gradačac Points to the Ineffectiveness of Content Moderation
24/08/2023
Video of murder removed from Instagram after police intervention.
Photo: Pixabay
The femicide committed in Gradačac on 11 August exposed the ineffectiveness of content moderation on social networks, especially in cases of live broadcasting. Nermin Sulejmanović from Gradačac publicly broadcast the murder in a live stream on his Instagram profile. Even after the live broadcast ended, the recording of the murder remained available to users for approximately three hours before it was removed. During that period, the video was liked, shared and commented on by hundreds, perhaps even thousands of users.
Users called for the video to be reported en masse so it would be removed, but it was not taken down until a human content moderator intervened. Automated content moderation was clearly not enough.
Saša Petrović, the cybercrime inspector of the Federal Police Administration (FUP), was on vacation when he received information that there was a recording that needed to be removed.
"Around noon, I can't remember the minute, we received information that it was necessary to remove the video from Instagram. Twenty minutes after making contact with the Meta administrator, the recording was removed. We could not do that until we received verbal approval from the prosecutor of the Prosecutor's Office of Tuzla Canton," Petrović told Mediacentar Sarajevo.
We sent questions to the Ministry of Internal Affairs (MUP) of Tuzla Canton and the Cantonal Prosecutor's Office of Tuzla Canton about the actions they took and why the prosecutor's approval took so long, but had not received answers by the time this article was published. According to the information we have, the Ministry of Internal Affairs of Tuzla Canton also sent requests for the removal of the content, but the moderators at Meta, which owns Facebook and Instagram, did not respond to them.
"We were in communication with the TK Ministry, they sent an inquiry to Meta, but there was no response. I could call Meta directly by phone, so I did. It was around 7:30 a.m. there and the man who works there had a look in the system for a few minutes, took the data that is needed and that we have to deliver to Meta – why we need it, why it is important. And in about 20 minutes, the video was removed," Petrović told Media.ba.
All of this shows that the mechanisms of Meta, as well as those of other networks, are not good enough and that it takes too long for content to be removed.
Petrović adds that the FUP has established excellent cooperation with Meta, which is faced with a large number of urgent content removal requests on a daily basis.
"All police agencies in Bosnia and Herzegovina (BiH), the region and the world can access Meta's part of the platform that refers to police agencies, where the regular procedure and methods of response are clearly prescribed, even in emergency cases like this one," adds Petrović.
After receiving the prosecutor's approval, the procedure within the FUP, he adds, did not last longer than ten minutes.
Feđa Kulenović, an information expert and senior assistant at the Department of Information Sciences at the Faculty of Philosophy, University of Sarajevo, tells Media.ba that mechanisms for removing such content are only as effective as the companies make them, given the ratio of employees to users, particularly in content moderation.
"The number of members on each network is by no means good and does not meet the needs for moderation. As for the specific shocking case you are referring to, I would say that three hours is too little considering all the problems and the fact that Meta alone has laid off 20,000 employees recently. This is proof that the mechanisms are not efficient enough and that nothing has changed since 2019 when we had promises that things would improve after the case of the live stream of mass racially motivated murders in Christchurch, New Zealand," says Kulenović.
Underdeveloped algorithms for recognising violent content
The fact that the femicide video was not automatically removed shows that automated content removal is not sufficiently developed or effective.
Tijana Cvjetićanin, research coordinator at the "Zašto ne" Association, says that when removing harmful or violent content, it is essential to have human contacts who speak the local language of the country in which the platform operates, i.e. in which the video was published. Speed matters, but so does a human approach that can assess what is happening.
"My impression is that such dynamics are not possible with algorithms and automated processes. Although the operation of algorithms is a trade secret of online platforms and, as such, very non-transparent, so it is not known how certain content is automatically removed or marked as disturbing, it is quite clear that the algorithm was not able to judge that what was broadcast on Friday morning was a terrible, shocking and disturbing video of a criminal act of murder that was happening live," says Cjetićanin to Media.ba.
One of the signals for recognising such content should be the number of user reports. That was the case with the video from Gradačac, as users called for the content to be reported, yet it remained available until FUP inspectors requested a human response.
"Three hours is a terribly long time for such a recording to be available to everyone. This means that children, minors, and vulnerable groups could watch it all that time. Three hours is too much," says Cvjetićanin.
The FUP inspector explains that Meta has its own algorithms that recognise harmful content, but those algorithms are better suited for sexually explicit content.
"…than violent content such as this. They have their own moderators who deal with requests and manual searches. The question is how many users Meta has and how much of that can be tracked. We have mechanisms for the protection of children and mechanisms for dealing with these cases, but we don't have algorithms or software that could recognise this kind of content. It is also very difficult to develop, to write a program for such content. It is possible for sexually explicit content, where most of the image shows a naked body and so on, but for violent content it is very difficult to automate that process. It's more manual than automatic," says Petrović.
Kulenović, on the other hand, believes that the automation of these processes is overestimated and proves impossible given the whole series of exceptions.
"In situations where we are talking about the depiction of violence in art or historical events that caused protests due to censorship in these, it is important to emphasise, privately owned spaces," he said.
When asked if social networks have enough employees who would react in cases of contacts, reports and similar actions that are not automated, Kulenović said:
"To put it bluntly – no. Meta manages probably the most users on its three networks, four with Threads. For billions of users, there are approximately 60,000 employees in total, of which very few are engaged in moderation. Attempting to automate these processes causes many problems that Meta sometimes solves quickly and sometimes more slowly, but all these algorithms are very 'sensitive' and react to small situations involving written text (which is much easier to moderate automatically), while they do not have a big impact on live videos unless someone in the title adds a word that will trigger the system", Kulenović says.
He further says that automatic removal of such content works in situations of mass reporting, but given that mass reporting can also be used to stifle freedom of expression, it becomes more than obvious that these are not effective solutions.
"Nevertheless, I have to say that these are situations that cannot be effectively resolved without seriously affecting privacy and creating completely wrong solutions. Any solution would surely do more harm than good. It seems to me that the only solution is to start reconsidering our relationship with these networks and their private spaces," believes Kulenović.
TikTok and Telegram the most irresponsible
Violent and disturbing content can traumatise thousands of users and can also encourage individuals to commit similar acts.
An additional problem is the speed with which such content spreads. During the three hours that the video was available on social networks, users downloaded it and reposted it online, either on Meta's platforms, where, after the initial reaction, copies were removed more quickly, or on networks like TikTok and Telegram.
"These are platforms that have generally worse mechanisms for reacting to such things, which we could see in the recent example of the mass murder at the Vladislav Ribnikar school. At that time, very irresponsibly, numerous media outlets in the region published the full name, pictures and videos showing the face of the juvenile perpetrator of multiple murders. Such materials were then used, for example, on TikTok, where a large number of mostly teenagers and adolescents – not only from this region but also from various distant parts of the world – created profiles that looked literally like fan clubs of someone who committed multiple murders," says Tijana Cvjetićanin.
Parts of another live video posted by Sulejmanović, in which he talks about the crime he committed, are still available on TikTok, and the video of the murder is available on Telegram. Almost 80,000 people watched the uncensored video of the murder in the Telegram group "LEVIJATAN – Bez cenzure".
Telegram and TikTok are networks with which cooperation is more difficult even for police agencies.
"That collaboration is more difficult than with Meta, but it works. It works with TikTok, but it’s much more difficult than with Meta. It is very difficult with Telegram," inspector Saša Petrović told Media.ba.
Kulenović says that mass online communication apps such as Telegram, WhatsApp and Viber are very bad at content moderation. WhatsApp, he says, is better because it's owned by Meta, but that doesn't mean it's great.
"They simply had to react to some situations in the world where WhatsApp was at the centre of spreading disinformation and incitement to violence. Telegram is bad. First of all, their reporting process is almost non-existent in my opinion. When you report a post in some obscure group, you don't get any feedback, and the material remains available. TikTok is just as bad in this regard," says Kulenović.
He adds that companies do not invest enough resources in moderation, and that in certain cases effective content moderation is simply impossible without seriously restricting freedom of speech, which companies do not want to do, not out of concern for freedom of speech, but because they would lose users and, consequently, money.
Translated by Tijana Dmitrović