About half of the information you encounter online today is fake. By 2022, you may well come across fake content more often than real content.
Businesses and governments are rushing to invest in technology that separates real information from fake. But our focus on technology is misplaced. Yes, machine learning and other automated tools are crucial for taking down fake content. To truly fight fake news, though, we also need better intellectual tools, and those tools predate the internet.
Losing the Battle Against Fake Content
On the internet, reality has become more of a suggestion than a rule. In the first quarter of this year, Facebook removed 2.2 billion fake accounts. Half of the web traffic on YouTube is fake. Russian cybercriminals have built a shadow internet populated by fake people with fake cookies, clicks, and social media accounts.
Fake content has even begun to warp public policy. In 2017, the Federal Communications Commission (FCC) invited the public to weigh in on net neutrality through an online forum. That in itself wasn’t unusual: government agencies often solicit public input before introducing policy. But something unusual did happen: people began noticing anti-net-neutrality comments posted on the FCC’s forum under their names, even though they had never commented at all. And the fraud wasn’t one-sided: about 20% of the pro-net-neutrality posts were fake too.
To push back against content fraud, actors as diverse as Facebook, Google, the New York Times and the European Union are investing in cutting-edge technology like machine learning. But it’s not so simple. As methods to combat fake content evolve, so does fake content itself. Even the most advanced machine learning models can’t keep up with deepfake videos. Bad actors use bots to commit fraud at a dizzying rate. Criminals steal technology and share it freely, whereas anti-fraud companies are straitjacketed by nondisclosure agreements. In this arms race, only one side is playing by the rules — and it’s the losing one.
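To see why the defenders keep losing ground, it helps to look at how crude the baseline defenses really are. Below is a minimal sketch of a text-based fake-content classifier, assuming Python with scikit-learn and a tiny invented dataset (both are hypothetical stand-ins, not any company’s production system). It catches the surface patterns it was trained on and nothing more:

```python
# A minimal sketch of a text-based "fake content" classifier, using
# scikit-learn (pip install scikit-learn). The tiny labeled dataset below
# is invented for illustration; real systems train on millions of examples
# and far richer signals (account metadata, click patterns, and so on).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training examples: 1 = fake/fraudulent, 0 = genuine.
texts = [
    "SHOCKING cure doctors don't want you to know",
    "You won't BELIEVE what this celebrity just did",
    "Click here to claim your free prize now",
    "City council approves budget for road repairs",
    "Researchers publish peer-reviewed study on sleep",
    "Local library extends weekend opening hours",
]
labels = [1, 1, 1, 0, 0, 0]

# Bag-of-words features plus a linear classifier: the workhorse baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# The model flags surface patterns it has already seen...
print(model.predict(["SHOCKING prize doctors don't want you to claim"]))  # likely [1]

# ...but the same lie rewritten in neutral language slips straight past it.
print(model.predict(["New report describes a promising cure for sleep loss"]))  # likely [0]
```

Production detectors are vastly more sophisticated, but they share the same structural weakness: they learn yesterday’s patterns, while fraudsters rewrite their lies overnight.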
This Section Is Fake News
Earlier this year, the Department of Homeland Security confirmed that Russia carried out a misinformation campaign throughout 2016 to confuse, frighten and demoralize voters. In the lead-up to the 2020 election, news outlets and government agencies are fighting to reclaim the narrative.
While we’ve bolstered our technological tools, we’ve neglected the intellectual ones. We can spend years training machine learning models to find fake news, but without a nuanced understanding of misinformation, we will forever debate the meaning of both “fake” and “news.” To beat fraudulent content, we must leave Silicon Valley and step onto the battlefields of World War II.
When the Germans occupied the Netherlands in 1940, they turned the Haarlemsche Courant into a Nazi propaganda newspaper. Years later, as D-Day approached, the Nazis scrambled to distribute an issue of the Courant that downplayed the Allies’ impending arrival. At the same time, a group of ordinary citizens put together a “fake” version of the Courant that more accurately described the Allied invasion. So two versions of the Courant came out on D-Day: the “true” Nazi version and the “fake” subversive version.
The Belgian underground executed a similar scheme. In 1943, ragtag resistance fighters risked everything to produce a “fake” version of Le Soir, a Nazi-controlled propaganda newspaper, bringing to light embarrassing truths the Nazis would’ve preferred to hide.
Why do these cases matter? Today, we associate fake news almost exclusively with the bad guys. And to be clear, it is often the “bad guys” who are producing it. But during World War II, ordinary people were able to reclaim the narrative — and win tactical victories — because they were armed with a nuanced understanding of (mis)information.
This Section Isn’t Fake News
At the beginning of World War II, the U.S. government authorized a group of researchers to study misinformation. One of the researchers, Robert Knapp, later published his recommendations for “rumor control.” We would do well to heed two of them: instilling faith in the media and ensuring that quality information remains accessible. Neither goal is feasible, however, if people lack the intellectual tools to tell real content from fake.
Facebook has taken a positive first step: it now places an information icon next to news stories so readers can easily access context about the article and its publisher. Other social media sites should follow suit. News organizations themselves should consider publishing “codebooks” that document fact-checkers’ tools and methods.
But if people can’t interpret this context, it’s a waste of resources. That’s why information-literacy programs should become a mandatory part of elementary and high school curricula. Kids should learn how to spot fake content while they’re learning how to cite sources. Accessibility isn’t just a matter of putting an information icon next to a news article; it’s about giving people the tools they need to interpret that information.
As 2020 approaches, the war on fake news will intensify, and democracy might become collateral damage. To win, we’ll need a combination of 21st-century technology and 20th-century insight. Reality itself hangs in the balance.