Google Wants to ‘Improve Conversations Online’ With a System That Filters ‘Toxic’ Comments

Google plans to “improve conversations online” with Perspective, a new technology for comment filtering.

“What if technology could help improve conversations online?” the Perspective API page asks. “Discussing things you care about can be difficult. The threat of abuse and harassment online means that many people stop expressing themselves and give up on seeking different opinions.”

One “partner experiment” is Wikipedia, whose parent organization Wikimedia says it is developing tools for automated detection of toxic comments using machine learning models.

“These models allow us to analyze the dynamics and impact of comment-level harassment in talk page discussions at scale,” according to Wikimedia. “They may also be used to build tools to visualize and possibly intervene in the problem.”

Other partner experiments include The New York Times, The Economist, and The Guardian.

“Many notable publishers have turned off comments on their sites, arguing it is simply too difficult to foster intelligent conversations,” The Economist said in a letter to readers. “But we want to give comments a second chance. Our goal isn’t to overhaul our comments system for the sake of it. Instead, we aim to raise the quality of debate … and to create a forum where readers feel like they are part of an intelligent Economist discussion.”

Raising “the quality of debate” involves filtering out comments the system rates poorly. The Perspective API page gives examples of this, showing how likely each comment is to be perceived as “toxic.” You can also type in your own phrases to see how the system scores them.

Worth noting:

“I’m a man!” 18% Toxic

“I’m a woman!” 37% Toxic

“This is fake news!” 70% Toxic

“I think that’s stupid.” 95% Toxic

“Google sucks!” 98% Toxic
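Scores like these come from a REST endpoint that returns a toxicity probability for a submitted comment. A minimal Python sketch of how a client might call it, assuming the `comments:analyze` endpoint and response shape described in Google's Perspective documentation (the API key is a placeholder, and the payload details are illustrative, not confirmed by this article):

```python
import json
from urllib import request

# Assumed endpoint for Perspective's Comment Analyzer API; an API key
# (obtained from Google Cloud) is required and is a placeholder here.
API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"


def build_request(text):
    """Build the JSON body asking Perspective to score TOXICITY."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }


def toxicity_score(response):
    """Extract the summary toxicity probability (0.0-1.0) from a response."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


def analyze(text, api_key):
    """POST a comment to the analyzer and return its toxicity score."""
    req = request.Request(
        f"{API_URL}?key={api_key}",
        data=json.dumps(build_request(text)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return toxicity_score(json.load(resp))


# Parsing a response of the assumed documented shape (no network needed):
sample_response = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.98, "type": "PROBABILITY"}}
    }
}
print(toxicity_score(sample_response))  # → 0.98
```

A publisher's moderation pipeline could then hold or hide any comment whose score crosses a chosen threshold, which is the kind of filtering the partner experiments describe.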

Perspective is still in the early stages of development. It was created by Jigsaw and Google’s Counter Abuse Technology team in a collaborative research project called Conversation AI.