The concept of a metaverse has been around for some time, and with the backing of major tech companies such as Microsoft and Meta (formerly Facebook), interest in the idea has grown rapidly.
A metaverse can be defined as a 3D virtual space tied to a perceived virtual universe that runs parallel to the physical world. The virtual reality (VR) space has already attracted many enthusiasts, and many believe it may be the next big thing in tech after social media.
However, the innovation is already attracting toxic behavior, with victims reporting sexual harassment, racial insults, and homophobic behavior on certain VR platforms.
Not long ago, a woman, Nina Jane Patel, reported having been sexually harassed on Horizon Worlds, Meta's VR social platform. In a blog post on Medium describing her ordeal, Nina wrote that within 60 seconds of joining she was verbally and sexually harassed by three to four male avatars, who virtually gang-raped her avatar. She added that the avatars took photos of her and taunted her with comments telling her to stop pretending she hadn't enjoyed the incident.
When the story spread on social media, Nina reports receiving comments dismissing her post as a pathetic cry for attention and telling her that next time she should not use a female avatar.
Others also raised concerns about the possibility of getting hurt in a virtual world.
In response, the company says that safety remains a priority in a metaverse where strangers interact daily. Meta's Vice President of Horizon, Vivek Sharma, said that Nina should have used the 'Safe Zone' feature when she felt threatened. He added that it was an 'unfortunate incident' that will help the company improve the performance and accessibility of the 'block-user' feature.
Similar cases have been reported before: back in 2016, users of the VR game QuiVr claimed to have been sexually assaulted in front of their friends and family. With safety features such as 'Safe Zone,' users are shielded from others' triggering actions until they feel safe enough to deactivate the feature.
However, this does not prevent sexual harassment from occurring in the first place. To address that, Meta may consider disabling character actions or features that could be abused to make users feel unsafe.
With toxic behaviors such as sexual harassment becoming a significant problem in the evolving VR space, Meta is already taking preventive measures to make the platform safe and accessible for its users.
In February 2022, the company announced Personal Boundary, a safety feature under which avatars cannot come within two feet of each other, creating personal space and preventing unwanted interactions. If someone tries to enter another person's personal space, the system halts their forward movement.
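The mechanic Meta describes can be sketched roughly as a distance check that halts a move at the edge of another avatar's boundary. This is an illustrative assumption, not Meta's implementation; the radius value and function names below are invented for the sketch.

```python
import math

# Hypothetical "personal boundary" sketch (not Meta's actual code).
# Each avatar has a protected radius; a move that would bring one avatar
# inside another's boundary is stopped at the boundary's edge.

BOUNDARY_RADIUS = 0.6  # meters per avatar; illustrative value only


def clamp_move(mover, target_pos, others):
    """Return where `mover` may actually move, halting at any boundary."""
    min_dist = 2 * BOUNDARY_RADIUS  # both avatars' radii combined
    for other in others:
        dx = target_pos[0] - other[0]
        dy = target_pos[1] - other[1]
        dist = math.hypot(dx, dy)
        if dist < min_dist:
            if dist == 0:
                return mover  # degenerate case: stay where we are
            # Push the requested position back out to the boundary edge.
            scale = min_dist / dist
            return (other[0] + dx * scale, other[1] + dy * scale)
    return target_pos


# An avatar at the origin tries to step right up to another at (2.0, 0.0);
# the move is halted at the boundary edge instead of the requested point.
halted = clamp_move((0.0, 0.0), (1.5, 0.0), [(2.0, 0.0)])
```

In this sketch the system never teleports or blocks users outright; it simply refuses the portion of the movement that would cross the boundary, which matches the "halt their forward movement" behavior described above.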
Meta could also follow in the footsteps of World of Warcraft, where there are certain limits on who avatars can approach and interact with.
Software developers are already aware of what is happening, and they should anticipate these kinds of problems when designing interfaces. When people are behind a screen where their behavior goes unmonitored, they are more likely to act in ways they never would in person.
It is also important to note that toxic behavior in VR spaces can be as harmful as in the physical world, leaving users with significant psychological effects.
Given how quickly the technology is evolving, the metaverse could well become the next major advancement in everyday use. Moderating content will be challenging, and the platform will inevitably attract greater threats. As it advances, Meta should cover as many secure bases as possible to make users feel comfortable and safe.
In addition, the company should have policies that address all these concerns and ensure that rights such as privacy, safety, and dignity are upheld.