In the ongoing battle to purge Twitter of content promoting terrorism, the social media company has closed hundreds of thousands of accounts in recent months.
In its latest transparency report covering the period from July 1, 2016 through December 31, 2016, Twitter said it shuttered a total of 376,890 accounts “for violations related to promotion of terrorism,” bringing the total number of closures for terror-related content to a colossal 636,248 accounts from August 1, 2015 through December 31, 2016.
Faced with such a massive task, Twitter has had to develop proprietary tools designed to automatically identify accounts to take down. The software, which is supported by a team of human investigators, accounted for 74 percent of the most recent batch of reported account closures, the company said.
Twitter, like Facebook and other online giants, has been accused in the past of not doing enough to combat extremist activity on its service. Criticism over the last few years prompted the company to implement more robust procedures, such as expanding the teams that respond to reports and taking any necessary action more quickly.
In addition, the company made efforts to start checking more accounts similar to those reported, while it continues to develop algorithms to automatically surface potentially violating accounts for review.
Twitter said it’s also worked harder to prevent those whose accounts are shuttered from quickly returning to the service, though it hasn’t revealed exactly how it does this.
A turning point in the way online companies deal with terror-related activity came at the start of last year, when leading executives from Twitter, Facebook, Microsoft, Google, and others met officials for talks not only in the U.S., but also in France, a country that has suffered multiple terror attacks in recent years.
In December 2016, the same companies announced they would begin contributing to a shared database holding information on “violent terrorist” material found on the different platforms to help each other remove extremist content more quickly.
Removing such material from online services quickly and efficiently — and keeping it offline — is an ongoing challenge, though Twitter, for one, feels it is making progress, saying last year, “We have already seen results, including an increase in account suspensions and this type of activity shifting off of Twitter.”