Client-side AI to combat online toxicity

December 10, 2024

Hate speech, harassment, and online abuse have become a pervasive problem online. Toxic comments silence important voices and drive users and customers away. Toxicity detection protects your users and creates a safer online environment. In this two-part series, we explore how to use AI to detect and mitigate toxicity at its source: users’ keyboards. In this first part, we discuss the use cases and benefits of this approach.

Source: Part 1: Client-side AI to combat online toxicity

There are no simple answers to toxic behaviour online. A lot of it comes from bad actors, but some comes from people acting in the moment, and nudges and prompts can help someone reflect on what they are about to post and make better choices.

Maud Nalpas explores using in-browser AI to provide such prompting.
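To make the idea concrete, here is a minimal sketch of what that kind of pre-post nudge could look like with an in-browser classifier. It assumes the TensorFlow.js toxicity model (@tensorflow-models/toxicity); the threshold, label handling, and confirm() dialog are illustrative choices, not necessarily the approach taken in the article.

```typescript
// A minimal sketch of client-side nudging, assuming the TensorFlow.js
// toxicity model (@tensorflow-models/toxicity). The threshold and the
// confirm() dialog are illustrative, not the article's implementation.
import '@tensorflow/tfjs';
import * as toxicity from '@tensorflow-models/toxicity';

const THRESHOLD = 0.85; // minimum confidence for a label to count as a match

export async function nudgeBeforePosting(comment: string): Promise<boolean> {
  // Load the model in the browser; the draft text never leaves the device.
  const model = await toxicity.load(THRESHOLD, []); // [] = all toxicity labels
  const predictions = await model.classify([comment]);

  // Collect the labels the model flagged for this draft comment.
  const flagged = predictions
    .filter((p) => p.results[0].match === true)
    .map((p) => p.label);

  if (flagged.length === 0) {
    return true; // nothing flagged: post as usual
  }

  // Nudge rather than block: ask the user to reconsider their wording.
  return window.confirm(
    `Your comment may come across as ${flagged.join(', ')}. Post it anyway?`
  );
}
```

The important design choice is that everything happens before the comment is sent anywhere: the model evaluates the draft on the user’s device, and the user stays in control of whether to post, rewrite, or discard it.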