Social media giants join hands to root out terrorist content. Will they succeed?

Facebook, Twitter and YouTube have come together to rid social media of content that promotes terrorism and violence, using digital fingerprints to identify such content.

Images and videos of beheadings and executions spread fast on social media, and the joint action seeks to eradicate such content.

What's significant about the move is that the three platforms have the largest user bases in the world and are constantly used by terrorist groups to spread images, videos and messages promoting hatred and violence. Through a coordinated approach, the companies seek to stop their services being used as platforms for promoting terrorism, violence or hate, which has often inspired lone-wolf and other violent attacks in several countries.


"There is no place for content that promotes terrorism on our hosted consumer services," said a joint statement from Facebook, Twitter and Google-run YouTube. The three social media giants will add digital fingerprints or 'hashes' to content like images and videos which can then be spotted and contained if they are shared or promoted by users on any of these platforms as well as other websites.

Since Facebook, Twitter and YouTube have billions of users, it would be difficult for their relatively small teams of moderators to notice and flag every piece of inappropriate content; through joint coordination, the chances of detecting such content are higher. For example, if Twitter moderators flag a piece of content by adding a digital fingerprint to it, moderators at Facebook will be able to spot the same content as soon as any user posts it on Facebook. Even before joining hands with Facebook and YouTube, Twitter had suspended as many as 360,000 accounts suspected of promoting terrorism since the middle of last year.
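The hash-sharing scheme described above can be sketched in a few lines of Python. This is a simplified illustration, not any platform's actual implementation: the shared set, function names and use of SHA-256 are all assumptions for clarity (real systems rely on perceptual hashes that survive re-encoding and minor edits, and on an industry-maintained shared database rather than an in-memory set).

```python
import hashlib

# Hypothetical shared store of fingerprints; in practice the platforms
# would sync flagged hashes through a common industry database.
shared_hashes = set()

def fingerprint(content: bytes) -> str:
    """Compute a digital fingerprint ('hash') of a piece of content.

    SHA-256 over raw bytes is used here only as a sketch; it matches
    exact copies, whereas production systems use perceptual hashing.
    """
    return hashlib.sha256(content).hexdigest()

def flag_content(content: bytes) -> None:
    """A moderator on any participating platform flags content,
    adding its fingerprint to the shared store."""
    shared_hashes.add(fingerprint(content))

def is_flagged(content: bytes) -> bool:
    """Any other platform can now detect the same content on upload."""
    return fingerprint(content) in shared_hashes

# Example: Twitter flags a video; Facebook detects an identical upload.
video = b"raw bytes of an extremist video"
flag_content(video)
print(is_flagged(video))  # True: the identical copy is caught instantly
```

The key design point is that only fingerprints are shared, not the content itself, so platforms can match known material without redistributing it.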


The announcement of joint action comes not long after a number of MPs accused these social media giants of 'passing the buck' and serving as 'recruiting platforms' for terrorist groups by not taking stringent measures against users who promoted terrorism and violence. Earlier this year, Google removed a two-day-old app from the Play Store after stories circulated in the media that the app was part of the Taliban's digital campaign to spread its reach.

The controversial app gave users access to the Taliban's Pashto-language website, which can be read in five languages including English and Arabic. The alarming fact was that the app made it onto, and remained on, Google's Play Store even after Google had introduced pre-moderation of apps to block offending ones before they could reach users.

“While we don’t comment on specific apps, we can confirm that we remove apps from Google Play that violate our policies. Our policies are designed to provide a great experience for users and developers. That’s why we remove apps from Google Play that violate those policies,” said Google to the Guardian.
