Bullying has a long history in digital spaces, but being immersed in virtual reality can make it feel far more real than it does on a flat screen. And the companies behind many of the most popular social VR apps don’t want to talk about it: Meta, as well as VRChat and Rec Room, two of the most popular social VR apps, declined interview requests from CNN Business about how they combat harassment in virtual reality.
But the problem is sure to become more common as cheaper, more powerful headsets entice more people to shell out for the tech: you can currently pick up the Quest 2 for $299, making it cheaper (and easier to find) than a Sony PlayStation 5.
“I think [harassment] is an issue we need to take seriously in virtual reality, especially if we want it to be a welcoming online space, a diverse online space,” said Daniel Castro, vice president of the Information Technology & Innovation Foundation. “Even though you see really bad behavior happening in the real world, I think it can get worse online.”
Bubbles, blocking and muting
Virtual reality didn’t become accessible to the masses overnight: for Meta, it started with the company’s purchase of Oculus VR in 2014, and in the years since, the company has rolled out a series of increasingly powerful, affordable and portable headsets. That work is paying off, as Meta’s Quest headsets accounted for about 80% of VR headsets shipped last year, according to Jitesh Ubrani, research director at technology market researcher IDC.
In hopes of stopping and preventing bad behavior, social VR apps tend to offer a number of common tools. These range from the ability to set up an invisible bubble of personal space that prevents other avatars from getting too close, to muting people you don’t want to hear, to blocking them out entirely so that they can’t see or hear you, and vice versa.
Reporting tools and moderation practices in virtual reality can be similar to those in online games. Users can sometimes vote to kick someone out of a VR space – I experienced this recently when I was asked to vote on whether to eject someone from a space in Meta’s Horizon Worlds after they repeatedly approached me and other users saying, “By the way, I’m single.” (This user got the boot.) Human moderators also respond to complaints of bad behavior, and apps can suspend or ban users if their behavior is egregious enough.
“These are steps in the right direction,” Castro said, though he acknowledged that different apps and platforms – as well as public VR spaces where anyone can drop in, as opposed to private, invitation-only spaces – will come with different content moderation challenges.
These tools will also evolve over time as more and more people use virtual reality. In a statement, Bill Stillwell, Product Manager for VR Integrity at Meta, said, “We will continue to make improvements as we learn more about how people interact in these spaces.”
A burden for the victims
While some of today’s tools can be used proactively, many of them only come into play after a person has already been harassed, pointed out Guo Freeman, assistant professor of human-centered computing at Clemson University, who studies games and social virtual reality. Because of this, she feels they place a burden on the victims.
It makes sense that app makers are grappling with the moderation challenges that come with scaling and wondering whether new kinds of automation might help: the VR market is still tiny compared to that of console video games, but it is growing rapidly. IDC estimates nearly 11 million VR headsets were shipped in 2021, a 96% jump from the 5.6 million shipped a year earlier, Ubrani said. In both years, Meta’s Quest headsets made up the majority of those shipments.
In some ways, ToxMod, Modulate’s moderation tool, works similarly to how a number of social media companies already moderate their platforms: with a combination of humans and AI. But the sense of acute presence that users tend to experience in VR — and the fact that it relies so heavily on spoken rather than written communication — might make some people feel like they’re being spied on. (Modulate said users are notified when they enter a virtual space where ToxMod may be in use, and when a new app or game starts using ToxMod, Modulate’s community manager will generally communicate with users online – such as through a game’s Discord channel – to answer any questions about how it works.)
“It’s definitely something we spend a lot of time thinking about,” said Modulate CEO Mike Pappas.
There are no set standards
A primary challenge in addressing harassment in virtual reality is the lack of agreement on what even counts as harassment in a virtual space versus a physical space. In part, that’s because while virtual reality itself isn’t new — it’s been around in different incarnations for decades — it’s new as a mass medium, and it’s changing accordingly all the time.
This novelty means there are no set standards, which can make it difficult for anyone behind a headset to figure out what’s right or wrong when interacting with other people in VR. A growing number of children are also entering virtual spaces and, as Freeman pointed out, what a child may consider playing (such as running and acting wild) an adult may consider bullying.
“Often in our research, participants feel very confused about whether or not this is playful or harassing behavior,” Freeman said.
Harassment in virtual reality can also take new forms that don’t exist offline. Kelly Guillory, a comic book illustrator and editor of an online virtual reality magazine, had this experience last year after blocking an old friend in VRChat who had started acting controlling and having emotional outbursts.
Once she blocked him, she could no longer see or hear him in VRChat. But Guillory was, strangely, still able to sense his presence nearby. On several occasions, while she was chatting with friends on the app, her harasser’s avatar would approach the group. She thinks he suspected her avatar was there, as her friends often said her name out loud. He would join the conversation, talking to the other people she was interacting with. But since Guillory couldn’t see or hear his avatar, it seemed as if her friends were having a one-sided conversation. For Guillory, it was as though her harasser was trying to get around her block and impose his virtual presence on her.
“The first two times it happened, it was annoying,” she said. “But then it just kept happening.”
It can feel real
Such virtual reality experiences can feel extremely real. Freeman said that in her research, people reported that having their avatar grabbed by another person’s avatar felt realistic, especially if they used full-body tracking to replicate their limb movements. One woman reported that another virtual reality user approached her face as if to kiss her – an action that frightened her, she told Freeman, because it felt like someone doing the same thing in the offline world.
“Because of the immersive, embodied nature of social virtual reality, these behaviors kind of feel realistic, which means they can feel damaging – physical, threatening,” Freeman said.
That was the case for Guillory: She developed anxiety about it and lost trust in people online, she said. She eventually spoke out on Twitter about the harassment, which helped.
“I still like it here, but I want people to do better,” she said.