
Facebook Reveals Alarming Stats On Terror Groups Online

More than 200 white supremacist groups have been banned from Facebook, as the social media giant works harder to address hate and extremism online.

Facebook is still working to respond to scathing criticism of its connection to the Christchurch terror attack, in which an Australian-born gunman live-streamed video of his rampage that left 51 people dead at two New Zealand mosques. Countless copies of the video later circulated on social media.

The platform on Wednesday gave new updates on its plan to combat hate and extremism online, revealing how it has expanded its initial focus on Middle Eastern terrorism to a wider range of "dangerous organisations".

More than 26 million pieces of content related to groups like ISIS and al-Qaeda have been removed from Facebook in just the last two years, and some 200 white supremacist groups have been banned from the platform in that time.


Facebook also detailed other programs that use artificial intelligence to identify and remove extremist content, as well as initiatives encouraging supporters of dangerous groups to de-radicalise.

READ MORE: Morrison Wants Facebook, Google To Block Terror Videos

READ MORE: The Near-Untrackable Far-Right Threat The World Is Facing

READ MORE: Blocking 8Chan Just 'Baby Steps' In Fighting Racist Extremism, Experts Say

"Some of these changes predate the tragic terrorist attack in Christchurch, New Zealand, but that attack, and the global response to it in the form of the Christchurch Call to Action, has strongly influenced the recent updates to our policies and their enforcement," Facebook said in a statement.

The social media company, which has more than two billion monthly users, uses a variety of automated tools to keep extremists off its platform: AI content classifiers, image-matching to check whether newly uploaded photos or videos come from the same source as previously banned content, and detection of people who create new accounts after being banned.
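Image-matching of this kind generally compares compact "fingerprints" of media rather than the raw files. Facebook has not published its implementation, so the following is only a minimal sketch of the general technique, assuming a difference-hash ("dHash") approach and the Pillow imaging library:

```python
from PIL import Image  # Pillow: pip install Pillow

HASH_SIZE = 8  # 8x8 comparison grid -> 64-bit fingerprint

def dhash(image_path: str) -> int:
    """Compute a difference hash: a compact fingerprint that survives
    re-encoding, resizing and small edits to an image."""
    img = Image.open(image_path).convert("L")     # greyscale
    img = img.resize((HASH_SIZE + 1, HASH_SIZE))  # shrink to 9x8 pixels
    pixels = list(img.getdata())                  # row-major brightness values
    bits = 0
    for row in range(HASH_SIZE):
        for col in range(HASH_SIZE):
            left = pixels[row * (HASH_SIZE + 1) + col]
            right = pixels[row * (HASH_SIZE + 1) + col + 1]
            bits = (bits << 1) | int(left > right)  # 1 where brightness falls
    return bits

def is_match(hash_a: int, hash_b: int, max_distance: int = 10) -> bool:
    """Two images 'match' if their fingerprints differ in only a few bits
    (the Hamming distance), even when the files themselves differ."""
    return bin(hash_a ^ hash_b).count("1") <= max_distance

# A new upload would be hashed and compared against a database of
# fingerprints taken from previously banned content.
```

Because near-duplicates differ by only a few bits, a re-encoded or lightly edited copy of a banned video can still be flagged. Industry groups such as the Global Internet Forum to Counter Terrorism share databases of such fingerprints, so content banned on one platform can be caught on another.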

NZ Prime Minister Jacinda Ardern hugs a mosque-goer at the Kilbirnie Mosque after the Christchurch attack in March. Photo: Getty

But following the global outcry after the March mosque attack, and the 'Christchurch Call To Action' demanding tech companies do more, Facebook has now offered a further look under the hood of its policies.

"We’ve since expanded the use of these techniques to a wider range of dangerous organisations, including both terrorist groups and hate organisations," the company said.

"We’ve banned more than 200 white supremacist organisations from our platform, based on our definitions of terrorist organisations and hate organisations, and we use a combination of AI and human expertise to remove content praising or supporting these organisations."

READ MORE: Optus, Telstra, Vodafone Block Sites Hosting Christchurch Videos

READ MORE: Australia's 'World-First' Crackdown On Facebook And Google

Facebook was criticised for not doing more to stop the spread of the Christchurch videos on its platform, but defended its actions by saying the live-streamed attack was nearly unprecedented, and that its AI software had not been trained to detect footage like it at the time.

"The video of the attack in Christchurch did not prompt our automatic detection systems because we did not have enough content depicting first-person footage of violent events to effectively train our machine learning technology," Facebook said.

It is now working with law enforcement authorities to feed its AI systems video from firearms training programs, so the technology can learn what first-person footage of violent events looks like.
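In broad terms, that means adding labelled examples to a supervised training set. The snippet below is a generic, heavily simplified sketch of such a training loop written with PyTorch, not Facebook's actual system; the tiny model, the random stand-in data and the binary labels are all placeholders:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder data: video frames as tensors, with binary labels where
# 1 = first-person footage of a violent event, 0 = everything else.
frames = torch.randn(256, 3, 224, 224)        # stand-in for real training footage
labels = torch.randint(0, 2, (256,)).float()  # stand-in for human-applied labels
loader = DataLoader(TensorDataset(frames, labels), batch_size=32, shuffle=True)

# A tiny stand-in classifier; a production system would use a large video model.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 1),
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

for batch, target in loader:
    optimiser.zero_grad()
    loss = loss_fn(model(batch).squeeze(1), target)
    loss.backward()   # the model gradually learns what such footage looks like
    optimiser.step()
```

The key point the training-footage partnership addresses is the first comment above: a classifier can only recognise what its labelled examples cover, which is why Facebook says the Christchurch video slipped past its systems.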

Australian PM Scott Morrison has been among world leaders calling on Facebook to do more. Photo: AAP

Facebook said it removed millions of versions of the video, including 1.5 million in the first 24 hours alone, with 1.2 million of those blocked before they could even be uploaded. But the footage soon spread to other websites, and the Australian government announced it would work to block sites that have hosted versions of the video.

READ MORE: Instagram To Combat 'Horrific' Kids' Abuse With Safety Guide For Parents

READ MORE: French Muslim Group Sues Facebook Over Christchurch Terror Live Stream 

"We’ll need to continue to iterate on our tactics because we know bad actors will continue to change theirs," Facebook said.

In an attempt to divert people from engaging with extremist ideologies, Facebook also directs users to information about organisations that help them "leave behind hate". After Christchurch, people searching for terms connected to white supremacy were linked to deradicalisation groups, and that program is now being expanded.

More work is being done to address hate on social media. Photo: Getty

Locally, people will be directed to EXIT Australia, a not-for-profit which says it helps "support those wanting to leave groups that are of high demand or using coercive control against their members."

"We plan to continue expanding this initiative and we’re consulting partners to further build this program in Australia and explore potential collaborations in New Zealand," Facebook said.

"We’ll continue to seek out partners in countries around the world where local experts are working to disengage vulnerable audiences from hate-based organisations."