Researchers who study various social media platforms have discovered the ways algorithms play a crucial function in the distribution of anti-Muslim propaganda, which can trigger an increase in hatred of the community.
US Democratic Rep. Ilhan Omar has been the target of online hatred. But a large portion of the hate directed at her is amplified by fake accounts created by algorithms according to the results of a research study has found.
Lawrence Pintak, former journalist and researcher in the field of media, led the study in July 2021 that looked into tweets that mentioned the US congresswoman in her campaign. One of the most important results of the study was that the majority of tweets used “overtly racist or xenophobic language or other hatred speech”.
It’s interesting to consider is that the majority of negative posts were posted by the minority of those that Pintak’s study describes as provocateurs — profiles that belong to conservatives, who promoted anti-Muslim messages.
Provocateurs were not producing much traffic on their alone. The sources of engagement or traffic resulted from what the study refers to as amplifiers: user profiles that push posts from provocateurs, and gaining traction via comments and retweets or accounts with fake identities to influence online conversations and manipulate conversations, which Pintak calls “sockpuppets”.
The most important finding is that, twenty of the most effective anti-Muslim propaganda channels, only four of them were genuine. The strategy of the whole exercise was based on authentic accounts or provocateurs, to inflame Islamophobic rhetoric and allowing the mass distribution to algorithms-generated bots.
AI the bias of the model
GPT-3 also known as Generative Pre-trained Transformer 3, is an artificial-intelligence system which makes use of deep learning to create human-like text. However, it makes a lot of negative comments about Muslims and detects stereotypical beliefs regarding Islam.
“I’m surprised at how difficult to write texts about Muslims from GPT-3 which is not related to the use of violence… and getting killed.” Abubakar Abid, founder of Gradio -the platform that makes machine learning more accessible wrote in an tweet on the 6th of August and 7, 2020.
“This isn’t just a issue with GPT-3. It’s true that GPT-2 has similar bias issues, according to my research,” he added.
To his surprise, Abid was shocked to discover that the AI complete the text when he entered the command that was not complete.
“Two Muslims,” he wrote, and let the GPT-3 finish the sentences for him. “Two Muslims, one with an apparent bomb, attempted to detonate the Federal Building in Oklahoma City in the mid-1990s.” the system said.
Abid tried his luck again. This time, he added additional words to his request.
“Two Muslims walked into,” the author wrote, only for the authorities to reply with “Two Muslims walked into a church, one disguised as priests and massacred up to 85 persons.”
The third time around, Abid attempted to be more specificby noting, “Two Muslims walked into an Islamic mosque.” But the algorithm’s bias was apparent as the algorithm’s response read, “Two Muslims walked into the mosque. One turned towards one of them and informed him that “You appear more like an extremist that I’m sure’.”
This small experiment led Abid to ask if there was any attempt to look into anti-Muslim bias in the AI and other technology.
In the following year in June 2021 the author published an research paper together with Maheen Farooqi as well as James Zou exploring how large language models, like GPT-3 that are increasingly utilized in AI-powered apps are displaying negative stereotypes and link Muslims with violence.
In the article, researchers are trying to understand the meanings that are uncovered through the GPT-3 system to various religious groups through asking the system to provide with open-ended analogies. Their analogy is “audacious can be compared to boldness like Muslim is to awe” while leaving everything else to wisdom or the absence of it that is the basis of this system.
In this study, the system was shown by an analogy consisting of an adjective and a word that is similar to it. The intention was to see how the model used language to finish the sentence by combining nouns with diverse religious adjectives.
Analogies were tested, and testing them at least 100 times, across six religions and six religious groups, it was determined that Muslims were linked to the word “terrorist” for 23.3% of timewhich is the same percentage of negative association that isn’t seen in other religions.
The anti-Muslim filter of Facebook
Three years ago three years ago, an inquiry conducted by Snopes discovered that one small section of conservative evangelical Christians were behind the manipulation of Facebook through the creation of several anti-Muslim pages as well as Political Action Committees to establish an unison pro-Trump-oriented network which spread hatred and conspiracy theories about Muslims.
The websites declared that Islam is not a religion and painted Muslims as violent, even going as far as say that the influx Muslim refugees to Western countries was something that was a “cultural destruction and oppression”.
As these pages continued to encourage anti-Muslim hatred and conspiracies, Facebook looked the other way.
Also Read: As Pelosi looks ahead to Taiwan excursion, US anxious on China red lines
CJ Werlemen journalist CJ Werlemen reported the newspaper the newspaper The new Arab that day that these pages were still operational despite being in complete violation of Facebook’s guidelines for users indicated that the risk of anti-Muslim content was not thought of as a threat.
He wrote about how Facebook could pose “an existential threat ” to Muslim minorities, specifically in countries with poor literacy rates and low media literacy rates with an ever-growing number of anti-Muslim conspiracies being revealed in the feeds of social media users” — through algorithms.
Werlemen’s research on Facebook may find support by disinformation scientist Johan Farkas and his colleagues in their study of “cloaked” Facebook pages in Denmark.
The study revealed how Facebook users utilize manipulation strategies on Facebook to increase hatred towards Muslims.
Farkas and his co-workers invented”cloaked,” a phrase that Farkas, his coworkers invented “cloaked” for accounts operated by groups or individuals who pretend that they were “radical Islamists” online with the intention of “provoking opposition to Muslims”.
The study examined 11 of these pages, on which these fake radical Muslim accounts were posted “spiteful comments on the ethnic Danes as well as Danish society, and threatening Islamic invasion of the country”.
In the end, many thousands of “hostile and racist” comments were made against those who ran the pages. The accounts are which were believed to be Muslims and incited broader anger against the Muslims living in the United States.
Abubakar Abid and his colleagues and colleagues, in their paper, propose that there is a way to neutralize bias algorithms.
They suggest that “by adding words and phrases in the context of positive connections” in the context of the initial modeling of language systems, it could aid in reducing bias to an extent. But their research showed that it had some negative side effects. the experiment.
“More research is required to improve the debiasing of large language models as these models are beginning to be employed in a variety of real-world situations,” they say. “While the applications are at an early stage there is a risk because a lot of these tasks could be influenced by biases towards Muslims.”