But Long-Known Best Practices Were Not Prioritized
If you’ve been following the news about Facebook/Meta’s metaverse project lately, you’ll recall the slew of bad press when a female user was sexually assaulted in Horizon Worlds, leading the company to hastily add an avatar “boundary” system.
And if you’ve been following virtual world/metaverse development for any substantial amount of time, you’ve probably been wondering why Meta allowed this to happen at all. Understanding and preparing for avatar-to-avatar harassment, especially directed at female avatars, is a fundamental challenge. How did a company spending billions of dollars on making a metaverse platform of its own somehow miss lesson #1 from Metaverse 101?
As it turns out, Meta was warned about this many times by a well-known virtual world veteran who was a senior member of the Oculus team. But somehow, his warnings, recommendations, and best-practice summaries were not heeded. And definitely not put into place.
“I was literally banging the drum at Oculus Connect two years in a row,” Jim Purbrick tells me, with evident frustration, even sending along the talk he gave on the subject at Facebook’s own conference back in 2016. (Watch below.) “I also told every new Oculus employee I met to read My Tiny Life in addition to Ready Player One, but the message didn’t reach every part of the organization, sadly.”
My Tiny Life, of course, is Julian Dibbell’s classic account of virtual world sexual assault… from the 1990s. Yes, the problem has been well-known and documented for that long.
Purbrick, as regular readers know, was an early developer at Linden Lab, going on to consult with CCP, the developers of Eve Online, before joining the Oculus team. He also documents virtual world/metaverse best practices on his blog.
And when he joined Facebook’s XR team, Purbrick took pains to carry over the wisdom learned from Second Life and from the knowledge base of virtual world development in general:
“I talked to [founding Linden executive] Robin Harper when I was working on this at Oculus to make sure I learned the lessons from her experience at Linden,” Jim tells me, “as well as Raph [Koster] and Daniel James: the best practices have been known for a long time.” (James is a fellow virtual world veteran who also worked at Facebook, until 2017.)
Purbrick left Oculus/Facebook in 2020, but not before advising the company on a system for minimizing avatar harassment:
“When I was last working on avatars I was proposing fading out avatars when they got close to avoid creepy and disturbing intersecting geometry,” he tells me.
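To make the fade idea concrete, here is a minimal sketch in Python of how a proximity fade like the one Purbrick describes could work. The threshold values and function names are illustrative assumptions for this post, not details of any actual Oculus/Meta implementation:

```python
# Sketch of proximity-based avatar fading (illustrative assumptions only;
# FADE_START, FADE_END and the function shape are not Meta's actual values).

FADE_START = 1.0  # meters: begin fading when another avatar is this close
FADE_END = 0.3    # meters: fully invisible at or inside this distance

def avatar_opacity(distance_m: float) -> float:
    """Opacity for rendering another avatar at distance_m.

    Fully opaque beyond FADE_START, fully transparent inside FADE_END,
    and linearly interpolated in between, so two avatars fade out
    before their geometry can visibly intersect.
    """
    if distance_m >= FADE_START:
        return 1.0
    if distance_m <= FADE_END:
        return 0.0
    return (distance_m - FADE_END) / (FADE_START - FADE_END)
```

The property worth noticing: a fade touches only rendering, never movement, so it can't be turned against the person it is meant to protect.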
By contrast, Purbrick isn’t convinced Meta’s barrier solution is a good one:
“I don’t know the details of the personal boundary plan,” as he puts it, “but it has historically been a bad idea as it allows bad actors to blockade avatars and stop free movement.” (I can confirm that as well. Again, this is also Metaverse 101.) “I think we did a pretty good job with Oculus Venues, where we had the ability to implement a good set of tools and policies,” he adds.
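The blockading concern is easier to see in code. Here is a hedged sketch, again purely illustrative and not Meta's implementation, of a hard personal boundary treated as a movement constraint. The BOUNDARY_RADIUS value and the single-pass resolution are assumptions for this example:

```python
import math

BOUNDARY_RADIUS = 1.2  # meters: assumed exclusion radius, not Meta's value

def resolve_move(x: float, y: float,
                 others: list[tuple[float, float]]) -> tuple[float, float]:
    """Push a proposed position out of every other avatar's boundary.

    Because the boundary acts as a hard movement constraint, avatars
    whose centers sit less than 2 * BOUNDARY_RADIUS apart leave no gap
    a mover can pass through: the blockading problem Purbrick describes.
    """
    for ox, oy in others:
        dx, dy = x - ox, y - oy
        dist = math.hypot(dx, dy)
        if 0.0 < dist < BOUNDARY_RADIUS:
            # Slide the mover to the edge of this exclusion circle.
            scale = BOUNDARY_RADIUS / dist
            x, y = ox + dx * scale, oy + dy * scale
    return x, y
```

In other words, a few hostile avatars standing in a ring can turn everyone's "protection" into a cage, which is exactly why veterans flag hard boundaries as a historically bad idea.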
As he departed the company, Purbrick spoke directly about the topic with developers of Meta’s consumer metaverse platform:
“I was talking to the Horizon team when I left Facebook and at least some of the team were aware of the issues and best practices, but the work clearly didn’t get prioritized,” as he puts it to me with classic British understatement.
It is truly mind-boggling, and affirms what I’ve heard elsewhere, that Meta’s Horizon project is beset by a lack of design direction.
As for what this says about Meta, I'm thinking about the company's CTO, who only last November was saying that bad metaverse moderation could pose an “existential threat”. But if Meta really believes that, why did it ignore best practices around virtual world moderation that have been known for literal decades, even after it was paying someone to relate them to the team?
Have a great week from all of us at Zoha Islands and Fruit Islands