As violent propaganda continues to spread across the Internet, Google and Facebook are starting to deploy automated systems aimed at blocking or taking down terrorism-related videos. Two sources familiar with the initiatives told Reuters that YouTube and Facebook want to fight extremism with the same technology designed to detect and remove copyright-protected content on their platforms.
Government pressure has also influenced the decision to adapt that system in response to Islamic State’s efforts to recruit people over the Internet. Recent major attacks have led governments around the world to demand decisive action from companies whose services are widely used to spread hateful messages.
The technology relies on “hashes,” unique digital fingerprints computed automatically from specific video content, so that any material matching those fingerprints can be removed quickly, according to the Reuters report.
The idea is to identify attempts to repost videos that have already been classified as unacceptable, rather than to automatically ban content that has never been posted before.
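As a rough illustration of the approach Reuters describes, the sketch below shows hash-based matching in Python. It is a minimal sketch under stated assumptions: the function names are made up for this example, and a plain SHA-256 digest stands in for the platforms’ undisclosed fingerprinting, so it would only catch byte-identical re-uploads, whereas a production system would presumably use fingerprints robust to re-encoding.

```python
import hashlib

def fingerprint(video_bytes: bytes) -> str:
    """Compute a hex digest that serves as the video's fingerprint.
    (A real system would likely use a perceptual hash instead.)"""
    return hashlib.sha256(video_bytes).hexdigest()

# Hypothetical blocklist seeded with fingerprints of videos
# that moderators have already classified as unacceptable.
blocked_hashes = {fingerprint(b"previously removed extremist video")}

def should_block(upload: bytes) -> bool:
    """Flag an upload only if it matches already-blocked content."""
    return fingerprint(upload) in blocked_hashes
```

Note the key property this captures: content that has never been seen before always passes, and only re-uploads of material already judged unacceptable are flagged.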
Internet companies discuss the difficult task of blocking violent content
Pressure from U.S. President Barack Obama and other American and European leaders led to a call in late April among internet giants including Alphabet Inc.’s YouTube, Facebook Inc., Twitter Inc., and CloudFlare. The companies discussed ways to fight online radicalization, including a system developed by the private Counter Extremism Project, according to a source involved in the call and three others briefed on the discussions, Reuters reported.
Led by Facebook’s Monika Bickert, who oversees global policy management, the tech leaders discussed the role they play in drawing the fine line between terrorism and free speech, and between corporate and government authority. They are concerned that outside parties might end up dictating their policies.
The Counter Extremism Project urged the internet giants to adopt its content-blocking system, which it publicly announced last week for the first time. However, Reuters reported that none of the firms has fully embraced the system, largely because they want to avoid outside intervention.
Each of these companies has its own method for blocking unacceptable content. Seamus Hughes, deputy director of George Washington University’s Program on Extremism, told Reuters that terror-related content differs from material that is clearly illegal, such as copyright infringement or child pornography.
The companies have not publicly discussed the method, but sources told Reuters that posted videos could be checked against a database of blocked content to detect new postings of violent or hateful material.
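Continuing the earlier sketch, an upload-time check against such a database might look like the following. The `handle_upload` function and the in-memory set are hypothetical stand-ins for whatever datastore and workflow the companies actually use, which they have not described.

```python
import hashlib

# Hypothetical database of fingerprints from already-blocked videos;
# an in-memory set stands in for a real datastore.
blocked_hashes: set = set()

def handle_upload(video_bytes: bytes) -> str:
    """Check a new upload against the blocklist before publishing.
    Matches are taken down (or, plausibly, routed to human review,
    whose exact role the companies have not explained)."""
    digest = hashlib.sha256(video_bytes).hexdigest()
    if digest in blocked_hashes:
        return "removed: matches previously blocked content"
    return "published"
```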
The role human reviewers will play in the process remains unclear, and the companies have not explained how the initial videos were identified as extremist. The content-blocking technology will most likely evolve as the companies continue their discussions.
In a statement, Facebook’s Bickert said only that the company was “exploring with others in industry ways we can collaboratively work to remove content that violates our policies against terrorism,” according to the Reuters report. A Twitter spokeswoman said the company had “not yet taken a position.”
Google and Facebook are using automation to remove extremist content https://t.co/BvL7mJI9fs pic.twitter.com/cC1IjEm5ew
— Business Insider (@businessinsider) June 25, 2016
Source: Reuters