
Facebook approved adverts making death threats against US election workers


Facebook did not block content that threatens serious violence against US election workers, researchers have found.

Researchers deliberately submitted adverts containing death threats against election workers around the time of this year's midterm elections.

They found that many of the adverts were approved by the company's automated moderation systems.


Facebook approved 15 of the 20 ads with violent content submitted by researchers.

The tests were carried out and published by Global Witness, a watchdog group, and New York University's Cybersecurity for Democracy.

The researchers deleted the approved adverts before they could be published, so none of them ever ran.

Ten of the test advertisements were written in Spanish.


Six of those adverts were approved by Facebook, compared with nine of the 10 ads in English.

The researchers said TikTok and YouTube rejected all the adverts and removed the accounts that tried to submit them.

The tests underline the challenges social media platforms face in moderating increasingly partisan and violent election-related content, particularly as poll workers face growing risks from election deniers and far-right extremists.


The Justice Department and other federal agencies warned before this year's midterm elections that threats against poll workers had increased.

A representative for Meta, Facebook's parent company, said in a statement that the firm later took down many of the accounts that had submitted the ads.

The company also said that ads continue to be reviewed after they go live.


The spokesperson said: “This is a small sample of ads that are not representative of what people see on our platforms.

“Content that incites violence against election workers or anyone else has no place on our apps, and recent reporting has made clear that Meta’s ability to deal with these issues effectively exceeds that of other platforms.”

The study builds on previous tests carried out by the same groups, including one earlier this year in which they submitted 20 adverts containing political misinformation.


In that test, Facebook approved only two of the misleading English-language ads submitted from a US account.

However, TikTok approved almost 90 percent of them.

YouTube blocked the account that attempted to submit the ads.

This time, the researchers used more explicit language to test Facebook's largely automated moderation system.


The adverts included outright threats of violence based on actual statements made by election deniers and other far-right extremists in the US.

One Facebook-approved ad read: “We are going to kill you all.” 

Another advert, which threatened to assault children, was also approved.

The researchers said they want to see social networks such as Facebook improve their content moderation and provide greater transparency about the moderation actions they take.

Source: The New York Times
