A reminder that (AI-Driven) Content Moderation Is Hard
2023-10-26 | tags: AI

(Photo by Kai Pilger on Unsplash)

This is your periodic reminder that (AI-Driven) Content Moderation Is Hard.

Anyway – and longtime readers will already know where I'm going here – this is not unique to content moderation. Sure, content moderation makes the problem more visible because everyone on a platform can see the models' performance in real time. But if we zoom out, we're also reminded of challenges common to all AI models:

1/ Every AI model represents the collective decisions, actions, and inactions of those who built it.

2/ We need to reframe #1 as "attempts to represent" because the model can and will be wrong from time to time.

3/ Because of #2, it's very rare that you'll have a "set it and forget it" kind of model. Do yourself a favor: make sure you have tools in place to monitor the model's performance, plus the staffing to adjust, override, or retrain the model as needed. (For a sense of what that monitoring can look like, see the sketch below.)
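To make #3 a bit more concrete, here's a minimal sketch, in Python, of the kind of monitoring check I have in mind: score the model's recent decisions against a human-reviewed audit sample, per segment, and alert a human team when precision or recall drops. Everything here is a hypothetical stand-in – the `AuditExample` shape, the threshold values, and the `check_model_health` helper are illustrative, not anyone's production system.

```python
from dataclasses import dataclass


@dataclass
class AuditExample:
    """One moderation decision, later reviewed by a human (hypothetical shape)."""
    model_flagged: bool  # did the model flag the content as violating?
    human_flagged: bool  # did the human reviewer agree it violates policy?


def precision_recall(sample: list[AuditExample]) -> tuple[float, float]:
    """Score the model's decisions against human review labels."""
    tp = sum(1 for e in sample if e.model_flagged and e.human_flagged)
    fp = sum(1 for e in sample if e.model_flagged and not e.human_flagged)
    fn = sum(1 for e in sample if not e.model_flagged and e.human_flagged)
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 1.0
    return precision, recall


# Hypothetical thresholds -- tune to your platform's risk tolerance.
MIN_PRECISION = 0.90  # below this, the model is over-removing benign content
MIN_RECALL = 0.80     # below this, violating content is slipping through


def check_model_health(segment: str, sample: list[AuditExample]) -> list[str]:
    """Return alerts for a human team to act on; an empty list means healthy."""
    precision, recall = precision_recall(sample)
    alerts = []
    if precision < MIN_PRECISION:
        alerts.append(f"[{segment}] precision {precision:.2f} < {MIN_PRECISION}: check for over-enforcement")
    if recall < MIN_RECALL:
        alerts.append(f"[{segment}] recall {recall:.2f} < {MIN_RECALL}: check for under-enforcement")
    return alerts


if __name__ == "__main__":
    # Toy audit sample standing in for a day's human-reviewed decisions.
    sample = [
        AuditExample(model_flagged=True, human_flagged=True),
        AuditExample(model_flagged=True, human_flagged=False),
        AuditExample(model_flagged=False, human_flagged=True),
        AuditExample(model_flagged=False, human_flagged=False),
    ]
    for alert in check_model_health("en-US", sample):
        print(alert)
```

One design note: a check like this only helps if you run it per language and region. A global average can look healthy while a particular segment degrades badly – which is exactly the failure mode in the article below.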

I'm thinking about all of this in the context of a recent WSJ article on Meta's content moderation challenges:

"Inside Meta, Debate Over What’s Fair in Suppressing Comments in the Palestinian Territories" (WSJ)

Of special note:

Meta has long had trouble building an automated system to enforce its rules outside of English and a handful of languages spoken in large, wealthy countries. The human moderation staff is generally thinner overseas as well.
