Ever since science fiction emerged as a genre, the people who imagine various versions of the future have wowed us with their ideas of what humanity might be doing at some distant point. Rarely, however, do readers and viewers get a glimpse of how things might unfold along the way.
Getting to tomorrow involves far more than imagining what sorts of future technologies we or our descendants will utilize. It also means considering the cascade of political, personal and religious issues that will emerge while those technologies are being developed.
Society is just beginning to catch a glimpse of these issues as the technologies of artificial intelligence and augmented reality are edging toward being used by almost everyone, rather than just by a handful of wealthy technology obsessives. This can have many positive implications, but it has negative ones as well, as the line between reality and imagination becomes increasingly blurred. The powerful software that can bring us the delights of Pokémon Go or help us find just the right web page can also be used toward far more nefarious ends.
Maybe it's significant that a web search for the originator of the concept that perception is reality yields no clear results. The two people who seem most likely to have said it first were Albert Einstein and Lee Atwater, the late Republican political consultant who pioneered the art of bending campaign reality to suit his clients' needs.
“Reality is merely an illusion, albeit a very persistent one,” Einstein is said to have quipped. Fittingly, there appears to be no verifiable source confirming that he actually said it.
Atwater, however, routinely told clients and colleagues that in the political realm, perception was reality.
That has no doubt been true since before Socrates was convicted on trumped-up charges of impiety and corrupting the youth of Athens 2,400 years ago. What’s different now is how much easier technology has made it to manufacture false perceptions.
One news cycle after another shows how easily a small team of Russian trolls, utilizing techniques and software concepts invented by American marketing firms, can shape the opinions of people thousands of miles away. Their task was made much easier thanks to a group of young far-right activists who started out as 4chan libertarian trolls and transferred their obsessive loyalty to a rookie Republican presidential candidate named Donald Trump during the summer of 2015. The fusion of Russian propagandists and white nationalist s**t-posters soon gave birth to what the emergent “alt-right” began describing as “meme magic,” the half-joking concept that any idea could be made real through enough trolling and online deceit.
Despite the hype, meme magic has proven to have only limited power. It hasn’t stopped Trump from pursuing essentially the same policies as former president George W. Bush, albeit through crude and imbecilic means. Alt-right trolls also seem not to realize that the real reason Trump won a victory even he didn’t expect was that he declined to campaign on the financial austerity platform used by every other major Republican candidate since 1980.
Nonetheless, it’s true that the combination of alt-right bot operators and Russian trolls has been able to shape public opinion, aided significantly by established conservative media organizations’ desperation for clicks and boosted viewership -- and by Trump’s endless appetite for praise. The troll operators and their president have convinced most Republicans that the real Russian collusion in 2016 was carried out on behalf of Hillary Clinton.
They have also succeeded at inflating a nearly meaningless memorandum written by Rep. Devin Nunes, R-Calif., accusing the FBI of corruption, into weeks of breathless headlines. Many Republicans still believe that document was a bombshell, long after it proved to be a complete waste of time. Among other inflammatory ends, Russian trolls apparently even used Pokémon Go in an effort to stoke anger among black Americans over controversial police killings of suspects.
While online manipulators and their AI software have yet to prove they can drastically shift reality, they have proven they can bend it ever so slightly, as if into a distorted Möbius strip that reads as both true and false, depending upon one’s perspective. Never before has the mainstream press been tasked with debunking widespread, nonsensical conspiracy theories while simultaneously being dismissed as “fake news” by a president who lies constantly.
Even outside the political realm -- beyond the numerous ways that internet giants and governments hoard information about web users, and the many problems caused by “artificial stupidity” -- there are plenty of other ways AI can be used for harmful purposes. Even the littlest among us have not been immune. Journalist James Bridle touched off an internet-wide debate last November about thousands of disturbing videos that YouTube had allowed to be uploaded to its service and then shown to young children via its YouTube Kids mobile app. With no human intervention involved in approving these videos, strange clips featuring popular cartoon characters committing suicide, killing each other or being buried alive have been offered up to kids whose parents assumed they were using a child-friendly app.
“There have been times when a child is brought to my office between [the ages of] 8 and 10 and they’re found doing sexual things: oral sex, kissing and getting naked and acting out sexual poses," child therapist Natasha Daniels told NBC. "This usually indicates some sort of sexual abuse. In the past, whenever I did some investigating, I would find a child who has been molested himself or that an adult has been grooming the child for abuse. However, in the last five years, when I follow the trail all the way back, it's YouTube and that's where it ends."
In most cases, these videos have been automatically generated from popular search keywords and then uploaded by computer programs. After much opprobrium, YouTube’s parent company Google has vowed to crack down on the clips, but given how numerous they are, the web giant is having to rely upon user reports. Google has also said it will hire more than 10,000 people to accelerate its content review processes. With more than 400 hours of video uploaded to YouTube every minute of every day, however, comprehensive review seems almost impossible.
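That scale is possible because the production pipeline is trivially cheap. As a purely illustrative toy, here is a sketch of the kind of keyword recombination Bridle described, in which bot-run channels splice trending search terms into endless title permutations. Every keyword list and function name below is a hypothetical stand-in, not code recovered from any actual bot.

```python
# Toy illustration of keyword-recombination title generation,
# the tactic Bridle described. All keyword lists are hypothetical.
import itertools
import random

CHARACTERS = ["Spiderman", "Elsa", "Peppa Pig"]   # high-traffic kids' terms
FORMATS = ["Finger Family", "Learn Colors", "Surprise Eggs"]
HOOKS = ["for Kids", "Nursery Rhymes", "Funny Video"]

def generate_titles(count=5, seed=42):
    """Recombine trending keywords into video titles at scale."""
    rng = random.Random(seed)
    combos = list(itertools.product(CHARACTERS, FORMATS, HOOKS))
    rng.shuffle(combos)  # vary output so uploads don't look identical
    return [" ".join(parts) for parts in combos[:count]]

if __name__ == "__main__":
    for title in generate_titles():
        print(title)  # each a mashup like "Elsa Surprise Eggs for Kids"
```

Multiply a few dozen keywords across thousands of automated channels, and the 400-hours-a-minute firehose becomes easy to understand.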
Abusers of AI are also targeting adults, through a new and disturbing form of pornography in which users meld images and videos of celebrities with real sex footage to create entirely new clips. Dubbed “deepfakes” after the pseudonymous popularizer of the form, they are flooding into porn and image-sharing websites, whose owners have been quick to ban them as non-consensual. The battle against deepfakes now pits AI against AI, as sex video sites like PornHub employ facial recognition algorithms to detect fake celebrity sex tapes. The software can do little, however, to detect videos generated using the faces of non-famous people.
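The celebrity-matching half of that AI-versus-AI arms race is, at its core, face comparison. Below is a minimal sketch, assuming the open-source face_recognition Python library rather than any site’s actual moderation system, and with hypothetical file names, of checking whether a suspect video frame contains a known face.

```python
# Minimal sketch of face matching for deepfake flagging, using the
# open-source face_recognition library (an assumption -- this is not
# any site's actual system). File paths are hypothetical placeholders.
import face_recognition

# Encode the celebrity's face once from a reference photo.
ref_image = face_recognition.load_image_file("celebrity_reference.jpg")
ref_encoding = face_recognition.face_encodings(ref_image)[0]

def frame_contains_celebrity(frame_path, tolerance=0.6):
    """Return True if any face in the frame matches the reference face."""
    frame = face_recognition.load_image_file(frame_path)
    for encoding in face_recognition.face_encodings(frame):
        # compare_faces measures distance between 128-d face embeddings
        if face_recognition.compare_faces([ref_encoding], encoding,
                                          tolerance=tolerance)[0]:
            return True
    return False

if __name__ == "__main__":
    print(frame_contains_celebrity("suspect_video_frame.jpg"))
```

The limitation noted above is visible right in the code: detection requires a reference photo, so the approach offers little protection to people whose faces aren’t already catalogued.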
“Pitting AI-driven moderators against AI-generated videos sounds like a harbinger of the fake news apocalypse,” Motherboard’s Samantha Cole noted last week. “Robots make damaging videos, and other robots chase them down to nuke them off the internet. But as machine learning research becomes more democratized, it’s an inevitable battle — and one that researchers are already entrenched in.”
Whether human society ends up in some awful dystopia or a magical World of Tomorrow, we’re going to go through some extremely weird territory beforehand. The future will be more screwed up than the wildest imaginings of Aldous Huxley and Walt Disney combined.