As Facebook users around the world are coming to understand, some of their favorite technologies can be used against them. It’s not just the scandal over psychological profiling firm Cambridge Analytica getting access to data from tens of millions of Facebook profiles. People’s filter bubbles are filled with carefully tailored information — and misinformation — altering their behavior and thinking, and even their votes.
Individually and as a society, people are wrestling with how their news feeds turned against them, coming to realize just how carefully controlled those feeds are and how precisely the ads within them are tailored. That set of problems, though, pales in comparison to those posed by the next technological revolution, which is already underway: virtual reality.
On one hand, virtual worlds hold almost limitless potential. VR games can treat drug addiction and maybe help solve the opioid epidemic. Prison inmates can use VR simulations to prepare for life after their release. People are racing to enter these immersive experiences, which have the potential to be more psychologically powerful than any other technology to date: The first modern equipment offering the opportunity sold out in 14 minutes.
In these new worlds, every leaf, every stone on the virtual ground and every conversation is carefully constructed. In our research into the emerging definition of ethics in virtual reality, my colleagues and I interviewed the developers and early users of virtual reality to understand what risks are coming and how we can reduce them.
Intensity is going to level up
“VR is a very personal, intimate situation. When you wear a VR headset . . . you really believe it, it’s really immersive,” says one of the developers with whom we spoke. If someone harms you in VR, you’re going to feel it, and if someone manipulates you into believing something, it’s going to stick.
This immersion is what users want: “VR is really about being immersed . . . As opposed to a TV where I can constantly be distracted,” one user told us. That immersiveness is what gives VR unprecedented power: “really, what VR is trying to do here is duplicate reality where it tricks your mind.”
These tricks can be enjoyable — allowing people to fly helicopters or journey back to ancient Egypt. They can be helpful, offering pain management or treatment for psychological conditions.
But they can also be malicious. Even a common prank that friends play on each other online — logging in and posting as each other — can take on a whole new dimension. One VR user explains, “Someone can put on a VR head unit and go into a virtual world assuming your identity. I think that identity theft, if VR becomes mainstream, will become rampant.”
Data will be even more personal
VR will be able to collect data on a whole new level. Seemingly innocuous infrared sensors designed to help with motion sickness and alignment can capture near-perfect representations of users’ real-world surroundings.
Further, the data and interactions that give VR the power to treat and diagnose physical and mental health conditions can be used to hyper-personalize experiences and information to the precise vulnerabilities of individual users.
Combined, the intensity of virtual reality experiences and the even more personal data they collect raise the specter of fake news far more powerful than text articles and memes. Immersive, personalized experiences may thoroughly convince people of entirely alternate realities, crafted to exploit their particular vulnerabilities. Such immersive VR advertisements are on the horizon as early as this year.
Building a virtual future
A person who uses virtual reality is, often willingly, being controlled to a far greater extent than was ever possible before. Everything a person sees and hears — and perhaps even feels or smells — is created entirely by another person. That surrender brings both promise and peril. Perhaps in carefully constructed virtual worlds, people can solve problems that have eluded us in reality. But these virtual worlds will be built inside a real world that can’t be ignored.
While technologists and users are cleaning up the malicious, manipulative past, they’ll need to go far beyond making social media healthier. As carefully as developers are building virtual worlds themselves, society as a whole must intentionally and painstakingly construct the culture in which these technologies exist.
In many cases, developers are the first allies in this fight. Our research found that VR developers were more concerned about their users’ well-being than the users themselves. Yet, one developer admits that “the fact of the matter is . . . I can count on my fingers the number of experienced developers I’ve actually met.” Even experts have only begun to explore ethics, security and privacy in virtual reality scenarios.
The developers we spoke with expressed a desire for guidelines on where to draw the boundaries, and how to prevent dangerous misuses of their platforms. As an initial step, we invited VR developers and users from nine online communities to work with us to create a set of guidelines for VR ethics. They made suggestions about inclusivity, protecting users from manipulative attackers and limits on data collection.
As the debacle with Facebook and Cambridge Analytica shows, though, people don’t always follow guidelines, or even platforms’ rules and policies — and the effects could be all the worse in this new VR world. But, our initial success reaching agreement on VR guidelines serves as a reminder that people can go beyond reckoning with the technologies others create: We can work together to create beneficial technologies we want.
Elissa Redmiles, Ph.D. Student in Computer Science, University of Maryland