While the full memo isn’t publicly available, Bosworth posted a blog entry alluding to it later in the day. The post, titled “Keeping people safe in VR and beyond,” references several of Meta’s existing VR moderation tools. These include letting people block other users in VR, as well as an extensive Horizon surveillance system for monitoring and reporting bad behavior. Meta has also pledged $50 million for research into the practical and ethical issues surrounding its metaverse plans.
As FT notes, Meta’s older platforms like Facebook and Instagram have been castigated for serious moderation failures, including slow and inadequate responses to content that promoted hate and incited violence. The company’s recent rebranding offers a potential fresh start, but as the memo acknowledges, VR and virtual worlds will likely face an entirely new set of problems layered on top of the existing ones.