Facebook and Meta’s Failure to Control Harmful Content
Facebook and its parent company, Meta, have been under fire for years for their failure to control harmful content on their platforms. This includes fake news, hate speech, disinformation, and the activity of online criminals.
In recent years, there have been numerous reports of how Facebook has been used to spread misinformation and hate speech. For example, in the lead-up to the 2020 US presidential election, Facebook was flooded with fake news and disinformation about both candidates, and many experts believe this significantly distorted public debate around the vote.
Facebook has also been used to spread hate speech and promote violence. For example, in 2018, it was revealed that Facebook had been used to incite the violence against the Rohingya people in Myanmar that the United Nations has described as genocide.
In addition to fake news, hate speech, and disinformation, Facebook has also been used by criminals to commit crimes. For example, in 2019, it was revealed that Facebook had been used in the trafficking of children.
Facebook has repeatedly promised to do better at controlling harmful content, but it has so far failed to make significant progress. In fact, some experts (including SCARS) believe that the problem is getting worse.
There are a number of reasons why Facebook has been unable to control harmful content. One reason is that the company is simply too big. Facebook has over 2 billion active users, which makes it impossible for the company to manually review all of the content that is posted on the platform.
Another reason is that Facebook’s algorithms are not sophisticated enough to properly identify harmful content most of the time. The algorithms are designed to prioritize content that is likely to be engaging, but this often means that they promote harmful content as well.
Finally, Facebook has been reluctant to take action against harmful content for fear of being accused of censorship. The company has argued that it is important to allow users to express themselves freely, even if they say things that are offensive or harmful.
Facebook’s failure to control harmful content has had a number of negative consequences. It has damaged the company’s reputation, invited government regulation, and contributed to the spread of misinformation and hate speech. Not to mention the ever-present plague of online criminals victimizing ordinary users of its platforms.
Facebook needs to do more to address the problem of harmful content. The company should invest in more sophisticated algorithms and hire more human reviewers. It should also be more willing to take action against harmful content, even if it means being accused of censorship. But more than that, it needs to work with organizations that understand the criminality occurring on its platforms – not just how to spot a fake profile, but the deeper psychological methods used to victimize tens of millions. SCARS stands ready to work with Meta (if they are interested); we have insights that have not yet been considered.
Only by taking these steps can Facebook hope to regain the trust of its users and prevent its platform from being used to spread harm.
According to Guy Rosen, Chief Information Security Officer at Meta:
Taking Down Two of the Largest Known Covert Influence Operations:
China: We recently took down thousands of accounts and Pages that were part of the largest known cross-platform covert influence operation in the world. It targeted more than 50 apps, including Facebook, Instagram, X (formerly Twitter), YouTube, TikTok, Reddit, Pinterest, Medium, Blogspot, LiveJournal, VKontakte, Vimeo, and dozens of smaller platforms and forums. For the first time, we were able to tie this activity together to confirm it was part of one operation known in the security community as Spamouflage and link it to individuals associated with Chinese law enforcement. See details in our Q2 Adversarial Threat report.
Russia: We also blocked thousands of malicious website domains as well as attempts to run fake accounts and Pages on our platforms connected to the Russian operation known as Doppelganger that we first disrupted a year ago. This operation was focused on mimicking websites of mainstream news outlets and government entities to post fake articles aimed at weakening support for Ukraine. It has now expanded beyond initially targeting France, Germany, and Ukraine to also include the US and Israel. This is the largest and the most aggressively persistent Russian-origin operation we’ve taken down since 2017. In addition to new threat research, we’re also publishing our enforcement and policy recommendations for addressing the abuse of the global domain name registration system.
What The World Is Saying
According to Yahoo News
Meta on Tuesday said it purged thousands of Facebook accounts that were part of a widespread online Chinese spam operation trying to covertly boost China and criticize the West.
The campaign, which became known as “Spamouflage”, was active across more than 50 platforms and forums including Facebook, Instagram, TikTok, YouTube and X, formerly known as Twitter, according to a Meta threat report.
“We assess that it’s the largest, though unsuccessful, and most prolific covert influence operation that we know of in the world today,” said Meta Global Threat Intelligence Lead Ben Nimmo.
“And we’ve been able to link Spamouflage to individuals associated with Chinese law enforcement.”
More than 7,700 Facebook accounts, along with 15 Instagram accounts, were jettisoned in what Meta described as the biggest-ever single takedown action on the tech giant’s platforms.
“For the first time we’ve been able to tie these many clusters together to confirm that they all go to one operation,” Nimmo said.
The operation originated in China and its targets included Taiwan, the United States, Australia, Britain, Japan, and global Chinese-speaking audiences.
Facebook or Instagram accounts or pages identified as part of the “large and prolific covert influence operation” were taken down for violating Meta rules against coordinated deceptive behavior on its platforms.
Meta’s team said the network seemed to garner scant engagement, with viewer comments tending to point out bogus claims.
Clusters of fake accounts were run from various parts of China, with the cadence of activity strongly suggesting groups working from an office with daily job schedules, according to Meta.
See below for the source article.
Also according to Yahoo News
Security researchers at Meta have identified and exposed a four-year-long propaganda campaign that they have linked to the Chinese government.
Removing the “Spamouflage” campaign: Meta removed 7,704 Facebook accounts, 954 Facebook pages, 15 Facebook groups and 15 Instagram accounts linked to the campaign, which researchers have named “Spamouflage” due to spam-like messages sent by the accounts, reported The New York Times.
“This is the biggest single takedown of a single network we have ever conducted,” Ben Nimmo, who heads Meta’s security team, told the Times. “When you put it together with all the activity we took down across the internet, we concluded it is the largest covert campaign that we know of today.”
About the campaign: Researchers revealed that the campaign was organized by Chinese law enforcement in 2019 to advance China’s interests while discrediting adversaries like the U.S. It involved spreading propaganda about Hong Kong’s pro-democracy protests, disinformation about the origins of COVID-19, attacks on dissidents and critics abroad and attempts to sow division during the 2022 midterm elections.
The campaign appears to have been run by geographically dispersed operators across China who were centrally provisioned with internet access and content directions. The propaganda included posts on various platforms, including Reddit, Medium, Tumblr, Facebook, TikTok and YouTube, and involved translations of articles in multiple languages.
Low effectiveness: Despite its size across social media platforms, the campaign struggled to attract attention due to poor grammar, spelling errors and incongruent content.
“It was as if they copied them from a numbered list and forgot to proofread them before they posted,” Nimmo said.
While we commend Meta on taking these actions, they are too little, too late!
Meta needs to engage in a platform-wide war on the evil that permeates its platforms. From disinformation to outright criminality, Facebook and the other Meta platforms are out of control. It does not matter how many billions of profiles they delete if 50% of their users are still there to commit crimes.
With the advent of the European Union’s Digital Services Act (and, hopefully, soon similar laws elsewhere), we trust that more attention will be paid to cleaning up social media, beginning with the Meta products and services.
However, we need your help too! Share this if you can. Help us place even more pressure on Meta!