This House would prioritize content moderation by human reviewers over algorithmic moderation.
Meta's human-reviewer staffing has dropped by half since 2022. Trust-and-safety teams across the major platforms have been gutted. The work moved to AI; the failure modes shifted accordingly.
Background
Meta laid off roughly 21,000 employees in 2022-2023, with trust-and-safety teams disproportionately hit. X cut over 50% of its T&S workforce after the Musk acquisition. The AI-moderation systems now handling that volume have measurable strengths (extremism flagging) and measurable weaknesses (context-dependent harassment, sarcasm, breaking-news adjudication). The human-moderation literature also documents real harm: contractors for Sama and Genpact in Kenya and the Philippines have reported PTSD-level trauma from reviewing graphic content for $1.50 to $3 per hour. Each moderation mode carries costs the other does not.
Argue it. Against the AI.
Pick a side. Three minutes per speech. The AI debates back in your chosen format. Judge ballot at the end.
Start this motion →