The Amazing Ways How Wikipedia Uses Artificial Intelligence

Collaboration with Wikimedia Foundation and Jigsaw to Stop Abusive Comments

In one effort to stop the trolls, the Wikimedia Foundation partnered with Jigsaw (the tech incubator formerly known as Google Ideas) on a research project called Detox, which uses machine learning to flag comments that might be personal attacks. The project is part of Jigsaw's initiative to build open-source AI tools that help combat harassment on social media platforms and web forums.

The first step was to train the machine learning algorithms on 100,000 toxic comments from Wikipedia Talk pages. These had been identified by a 4,000-person human team, with every comment reviewed by ten different people. The resulting annotated dataset was one of the largest ever created for studying online abuse. It included not only direct personal attacks but also third-party and indirect ones ("You are horrible." "Bob is horrible." "Sally said Bob is horrible."). After training, the machines could determine whether a comment was a personal attack as well as a team of three human moderators.

The project team then had the algorithm review 63 million English Wikipedia comments posted over a 14-year period, from 2001 to 2015, to find patterns in the abusive comments.
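The pipeline described above, training a text classifier on human-labeled comments and then using it to score new ones, can be sketched in miniature. This is not the Detox model (which was trained on 100,000 annotated comments); it is a toy naive Bayes classifier over a handful of hypothetical labeled examples, just to illustrate the train-then-score workflow:

```python
from collections import Counter
import math

# Hypothetical miniature stand-in for the annotated training data:
# comments labeled 1 (personal attack) or 0 (benign).
train = [
    ("you are horrible", 1),
    ("bob is horrible", 1),
    ("sally said bob is horrible", 1),
    ("thanks for the helpful edit", 0),
    ("great source added to the article", 0),
    ("please see the talk page guidelines", 0),
]

def fit(data):
    """Count word frequencies per class for a naive Bayes model."""
    counts = {0: Counter(), 1: Counter()}
    priors = Counter()
    for text, label in data:
        priors[label] += 1
        counts[label].update(text.lower().split())
    return counts, priors

def classify(text, counts, priors):
    """Return the more likely class (add-one smoothing on word counts)."""
    vocab = {w for c in counts.values() for w in c}
    best, best_lp = None, -math.inf
    for label in (0, 1):
        lp = math.log(priors[label] / sum(priors.values()))
        denom = sum(counts[label].values()) + len(vocab)
        for w in text.lower().split():
            lp += math.log((counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

counts, priors = fit(train)
print(classify("you are horrible", counts, priors))    # 1: flagged as attack
print(classify("thanks for the edit", counts, priors)) # 0: benign
```

A production system like Detox differs in scale and model choice, but the shape is the same: humans annotate a corpus, a model is fit to those labels, and the trained model is then run across millions of unlabeled comments.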
