The Autocrat’s New Tool Kit
THE SATURDAY ESSAY
By Richard Fontaine and Kara Frederick
March 15, 2019 11:10 a.m. ET
Chinese
authorities are now using the tools of big data to detect departures from
“normal” behavior among Muslims in the country’s Xinjiang region—and then to
identify each supposed deviant for further state attention. The Egyptian
government plans to relocate from Cairo later this year to a still-unnamed new
capital that will have, as the project’s spokesman put it, “cameras and sensors
everywhere,” with “a command center to control the entire city.” Moscow already
has some 5,000 cameras installed with facial-recognition technology, and it can
match faces of interest to the Russian state to photos from passport databases,
police files and even VK, the country’s most popular social media platform.
As dystopian
and repressive as these efforts sound, just wait. They may soon look like the
quaint tactics of yesteryear. A sophisticated new set of technological
tools—some of them now maturing, others poised to emerge over the coming
decade—seems destined to wind up in the hands of autocrats around the world. They
will allow strongmen and police states to bolster their internal grip,
undermine basic rights and spread illiberal practices beyond their own borders.
China and Russia are poised to take advantage of this new suite of products and
capabilities, but these tools will soon be available for export, so that even
second-tier tyrannies will be able to better monitor and mislead their
populations.
Many of
these advances will give autocrats new ways to spread propaganda, both
internally and externally. One key technology is automated microtargeting.
Today’s microtargeting relies on personality assessments to tailor content to
segments of a population, based on their psychological, demographic or
behavioral characteristics. Russia’s Internet Research Agency reportedly conducted
this kind of research during the 2016 U.S. presidential race, harvesting data
from Facebook to craft specific messages for individual voters based in part on
race, ethnicity and identity. The more powerful microtargeting is, the easier
it will be for autocracies to influence speech and thought.
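To see how little machinery this requires, consider a minimal sketch of the segmentation step: users are grouped by demographic and behavioral features, and each group is matched to a tailored message. The features, values and messages below are hypothetical illustrations, not drawn from any real campaign.

```python
# A minimal sketch of audience segmentation for microtargeting.
# Features and messages are hypothetical; real systems draw on far
# richer data (demographics, psychometrics, browsing behavior).
import numpy as np
from sklearn.cluster import KMeans

# Each row is one user: [age, daily_hours_online, political_engagement]
users = np.array([
    [22, 6.0, 0.9],
    [67, 1.5, 0.2],
    [35, 3.0, 0.7],
    [71, 2.0, 0.8],
    [19, 7.5, 0.3],
    [45, 4.0, 0.6],
])

# Partition the audience into segments with similar characteristics.
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(users)

# Map each segment to a message crafted for its profile.
messages = {0: "Message variant A", 1: "Message variant B", 2: "Message variant C"}
for user_id, segment in enumerate(segments):
    print(f"user {user_id} -> segment {segment}: {messages[segment]}")
```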
Until now,
such efforts have been mostly limited to the commercial world and have focused
on precision advertising: Facebook itself conducts microtargeting, for
instance, and Google labeled users “left-leaning” or “right-leaning” for
political advertisers in the 2016 election. But private firms are developing
artificial intelligence that can automate this customization for whole
populations, and government interest is sure to follow. In an October 2018
discussion at the Council on Foreign Relations, Jason Matheny, the former
director of the U.S. government’s Intelligence Advanced Research Projects
Activity, cited this kind of “industrialization of propaganda” as one reason to
beware of the “exuberance in China and Russia towards AI.”
AI-driven
applications will soon allow authoritarians to analyze patterns in a
population’s online activity, identify those most susceptible to a particular message and target them more precisely with
propaganda. In a widely viewed TED Talk in 2017, techno-sociologist Zeynep Tufekci described a world where “people in power [use]
these algorithms to quietly watch us, to judge us and to nudge us, to predict
and identify the troublemakers and the rebels.” The result, she suggests, may
be an authoritarianism that transforms our private screens into “persuasion
architectures at scale…to manipulate individuals one by one, using their
personal, individual weaknesses and vulnerabilities.” This is likely to mean
far more effective “influence campaigns,” aimed at either citizens of
authoritarian countries or those of democracies abroad.
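A rough sketch of that susceptibility ranking, on invented data: a simple classifier trained on past engagement with similar content scores new users by predicted receptiveness, and the campaign works down the list.

```python
# Sketch: rank users by predicted susceptibility to a message, using
# a classifier trained on (hypothetical) past engagement data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Training data: per-user activity features and whether the user
# engaged with similar content before (1 = engaged, 0 = did not).
X_train = np.array([
    [0.9, 12, 1], [0.1, 2, 0], [0.7, 8, 1],
    [0.3, 3, 0], [0.8, 10, 1], [0.2, 1, 0],
])
y_train = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

# Score a new batch of users; target the highest-probability ones first.
X_new = np.array([[0.6, 7, 1], [0.2, 2, 0], [0.95, 14, 1]])
susceptibility = model.predict_proba(X_new)[:, 1]
ranked = np.argsort(susceptibility)[::-1]
print("target order:", ranked, "scores:", susceptibility[ranked])
```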
Emerging
technologies will also change the ways that autocrats deliver propaganda.
State-controlled online “bots” (automated accounts) already plague social
media. During Russia’s 2014 invasion of Crimea and in the months afterward, for
example, researchers at New York University found that fully half of the tweets
from accounts that focused on Russian politics were bot-generated. The October
2018 murder of Washington Post columnist Jamal Khashoggi prompted a surge in
messaging from pro-regime Saudi bots.
But bots
will soon be indistinguishable from humans online—capable of denouncing
antiregime activists, attacking rivals and amplifying state messaging in
alarmingly lifelike ways. Lisa-Maria Neudert, a
researcher with Oxford’s Computational Propaganda Project, has warned that “the
next generation of bots is preparing for attack. This time around, political
bots will leave repetitive, automated tasks behind and instead become
intelligent.” The kind of tech advances that fuel Amazon’s Alexa and Apple’s
Siri, she told the International Forum for Democratic Studies last October, are
also teaching propaganda bots how to talk.
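To illustrate why such bots are getting harder to spot, the sketch below uses the open-source Hugging Face transformers library, with a small public model (GPT-2) standing in for the far larger systems a state actor could field; the prompt is invented.

```python
# Sketch: generating human-sounding reply text with an off-the-shelf
# language model (GPT-2 here as a small public stand-in).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Replying to a post about the new policy:"
outputs = generator(prompt, max_length=60, num_return_sequences=3,
                    do_sample=True)  # sample to get varied replies
for out in outputs:
    print(out["generated_text"])
```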
For years,
the Chinese government has employed what’s known as the “50 Cent
Army”—thousands of fake, paid commenters—to post online messages favorable to
Beijing and to distract online critics. In the future, bots will do the work of
the current legions of regime-paid desk workers.
These
increasingly insidious bots will work together with other new tools to let
dictatorships spread disinformation, including “deep fakes”—digital forgeries
impossible to distinguish from authentic audio, video or images. Audio fakeries
are already getting good enough to fool many listeners: Speech-synthesis
systems made by companies such as Lyrebird (which says it creates “the most
realistic artificial voices in the world”) require as little as one minute of
original voice recording to generate seemingly authentic audio of the target
speaker.
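The few-shot voice-cloning systems the authors describe typically work in stages: a speaker encoder distills a short recording into a voice “fingerprint,” a synthesizer generates speech features in that voice from arbitrary text, and a vocoder renders audio. The sketch below is purely schematic; every function named is a hypothetical placeholder for one of those stages, not a real API.

```python
# Schematic sketch of a few-shot voice-cloning pipeline. All functions
# below are hypothetical placeholders for the stages real systems
# (e.g., speaker-embedding text-to-speech) implement.

def encode_speaker(reference_audio):
    """Distill ~1 minute of audio into a fixed-size voice embedding."""
    ...

def synthesize(text, voice_embedding):
    """Generate speech features for `text` in the embedded voice."""
    ...

def vocode(speech_features):
    """Render the speech features into a playable waveform."""
    ...

# One short recording is enough to parameterize the whole pipeline.
embedding = encode_speaker("one_minute_sample.wav")
waveform = vocode(synthesize("Words the speaker never said.", embedding))
```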
Video is
soon to follow. On YouTube, one can already see an unnerving mashup of actors
Steve Buscemi and Jennifer Lawrence and a far-from-perfect video made by the
Chinese company iFlytek showing both Donald Trump and
Barack Obama “speaking” in fluent Mandarin. Soon, such fakes will be chillingly
convincing. That will leave those playing defense “outgunned,” according to
Dartmouth computer science professor Hany Farid. There are probably 100 to
1,000 times “more people developing the technology to manipulate content than
there is to detect [it],” he told Pew in January. “Suddenly there’ll be the
ability to claim that anything is fake. And how are we going to believe
anything?”
New tools
will also make it possible for dictators to conduct surveillance as never
before, both online and in the real world. Humans are training computers to
identify and interpret emotional context within blocks of text using natural
language processing (an application of machine learning). Facebook now uses
similar techniques to examine linguistic nuances in posts that might flag users
who are contemplating suicide. Smaller companies are working to score
individual social-media posts based on attitude, emotion and intent.
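That kind of post-scoring can be approximated with standard open-source tools. Here is a minimal sketch using NLTK’s VADER sentiment analyzer, which rates short texts for positive, negative and neutral attitude; the example posts are invented.

```python
# Sketch: scoring individual social-media posts for attitude,
# using NLTK's off-the-shelf VADER sentiment analyzer.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
analyzer = SentimentIntensityAnalyzer()

posts = [
    "I love what the city council is doing!",
    "This policy is a disaster and everyone knows it.",
]
for post in posts:
    scores = analyzer.polarity_scores(post)  # neg / neu / pos / compound
    print(f"{scores['compound']:+.2f}  {post}")
```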
The California-based
AI startup Predictim scoured the text of Twitter, Facebook and Instagram to develop risk ratings for
(of all things) would-be babysitters. Based solely on the language in potential
babysitters’ social-media postings, the app provided automated assessments of
their propensity to bully, be disrespectful or use drugs. The startup’s efforts
triggered a swift backlash last year, but China, Russia and other autocracies
won’t share such scruples. Jack Clark, who directs policy for the research firm
OpenAI, warns that “we currently aren’t—at a national
or international level—assessing or measuring the rate of progress of AI
capabilities and the ease with which given capabilities can be modified for
malicious purposes.” This, he adds, “is equivalent to flying blind into a
tornado—eventually, something’s going to hit you.”
The next
generation of natural language processing tools will become more sophisticated
as advances in machine learning accelerate. Applied by the wrong regime, they
can be combined with other data to assess an individual’s trustworthiness,
patriotism and likelihood of dissenting.
Such
applications do not yet exist, but an early move in that direction can be seen
in China’s public statements. As The Wall Street
Journal has reported, “By 2020, the government hopes to implement a national
‘social credit’ system that would assign every citizen a rating based on how
they behave at work, in public venues and in their financial dealings.” Local
governments across China are already keeping digital records of citizens’
behavior and docking them for jaywalking, breaking family-planning rules or
paying bills late. Those who end up on the blacklist lose out, unable to buy
high-speed train tickets, obtain government subsidies, purchase real estate or
even get hired. According to a plan issued by Beijing’s municipal government,
by 2021, the capital’s blacklisted citizens will be “unable to move even a
single step.”
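As reported, the mechanics reduce to a running score, deductions per infraction and a blacklist threshold. A toy sketch of that bookkeeping, with all point values and the threshold invented for illustration:

```python
# Toy sketch of social-credit bookkeeping: a running score,
# per-infraction deductions and a blacklist threshold.
# All point values here are invented for illustration.

PENALTIES = {
    "jaywalking": 5,
    "late_bill_payment": 10,
    "family_planning_violation": 50,
}
STARTING_SCORE = 1000
BLACKLIST_THRESHOLD = 900

def apply_infractions(score, infractions):
    for infraction in infractions:
        score -= PENALTIES.get(infraction, 0)
    return score

citizen_score = apply_infractions(
    STARTING_SCORE, ["jaywalking", "late_bill_payment", "late_bill_payment"]
)
blacklisted = citizen_score < BLACKLIST_THRESHOLD
print(citizen_score, "blacklisted" if blacklisted else "in good standing")
```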
Venezuela
has introduced its own “carnet de la patria” (fatherland card), a
smart-chip-based piece of identification that citizens need to get access to
government services such as health care and subsidized food. Human Rights Watch
reports that the card may capture voting history as well. The data that this
system generates is stored by the Chinese company ZTE, which has also
reportedly deployed a team of experts within Venezuela’s state-run
telecommunications company Cantv to help run the
program, according to a 2018 investigation by Reuters.
Yoshua Bengio, a computer scientist known as one of the three
“godfathers” of deep learning in AI, recently described to Bloomberg his
concerns about the growing use of technology for political control. “This is
the 1984 Big Brother scenario,” he said. “I think it’s becoming more and more
scary.”
Autocrats’
ability to spy on their citizens will be further enhanced by advances in
artificial intelligence that make sense of enormous data sets. In both the U.S.
and China, companies are optimizing new chips to support neural networks—an
algorithmic approach loosely inspired by human brain function. China’s Ministry
of Industry and Information Technology recently said that it hoped to
mass-produce neural-network-optimized chips by 2020. These chips will allow oppressive
regimes to more efficiently collect information on their population’s speech
and behavior, sift through massive data sets and quickly exploit the
information.
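For readers unfamiliar with the term, a neural network is layered arithmetic: inputs are multiplied by learned weights, passed through simple nonlinear functions and combined into an output. A minimal forward pass in plain NumPy, with random weights standing in for trained values; the specialized chips described above accelerate exactly these matrix multiplications.

```python
# Minimal neural-network forward pass: two layers of weighted sums
# and nonlinearities. Weights are random stand-ins, not trained.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)  # layer 1: 4 inputs -> 8 units
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)  # layer 2: 8 units -> 1 output

def forward(x):
    hidden = np.maximum(0, x @ W1 + b1)            # ReLU nonlinearity
    return 1 / (1 + np.exp(-(hidden @ W2 + b2)))   # sigmoid output in (0, 1)

print(forward(np.array([0.2, 0.5, 0.1, 0.9])))
```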
One particular application of AI—facial recognition—could be as
ubiquitous in a decade as smartphone cameras are today. The technology has been
used by the U.S. Department of Homeland Security, San Diego’s Police Department
and others to enhance security at large events like the Super Bowl. In the
hands of autocrats, however, the technology has great potential for repressive
use. Chinese police deployed facial-recognition glasses in early 2018, and
Beijing-based LLVision Technology Co. sells basic
versions to countries in Africa and Europe. Such glasses can be used to help
identify criminals like thieves and drug dealers—or to hunt human-rights
activists and pro-democracy protesters.
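The matching step behind such glasses is conceptually simple: reduce each face to a numerical embedding and compare distances against a watchlist. A minimal sketch using the open-source face_recognition library; the image filenames are placeholders.

```python
# Sketch: match a captured face against a watchlist by comparing
# face embeddings, using the open-source face_recognition library.
import face_recognition

# Build the watchlist: one embedding per known face (filenames are placeholders).
known_image = face_recognition.load_image_file("watchlist_person.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Embed each face found in a captured camera frame and compare.
frame = face_recognition.load_image_file("camera_frame.jpg")
for candidate in face_recognition.face_encodings(frame):
    match = face_recognition.compare_faces([known_encoding], candidate,
                                           tolerance=0.6)[0]
    if match:
        print("watchlist hit")
```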
A political
dissident in Harare may soon have as much to fear as a heroin smuggler in
Zhengzhou: The Chinese AI firm CloudWalk Technology
has sold Zimbabwe’s government a mass facial-recognition system. It will send
facial data on millions of Zimbabweans back to the company in China, allowing
it to refine its algorithms and perfect the system for further export. Business
is also booming for other companies. The global client list of the Chinese
surveillance firm Tiandy, a CCTV camera manufacturer
and “smart security solution provider,” includes more than 60 countries.
The rise of
new “smart cities” around the world could also mean trouble. Autocratic regimes
will be able to weave diverse data streams into a grid of social control. China
plans to build more smart cities like Yinchuan, where commuters can board a
bus with a facial-recognition match, or Hangzhou, where facial data can be used
to buy a meal at KFC. Planned megacities like Xiong’an
New Area, a development southwest of Beijing, suggest the shape of future
panopticons. These cities of the future could use centralized systems of
control across financial, criminal and government records, drawing on websites,
visual imagery, phone applications and sensors—all of it propelled by 5G data
transmission.
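Weaving diverse data streams into a grid of social control is, at bottom, a database join: records from separate systems keyed to a single identity. A minimal sketch with pandas, on invented records:

```python
# Sketch: fusing separate data streams on a single citizen ID,
# the basic join behind centralized "smart city" records.
import pandas as pd

financial = pd.DataFrame({"citizen_id": [1, 2], "late_payments": [0, 3]})
transit   = pd.DataFrame({"citizen_id": [1, 2], "bus_face_ids":  [52, 8]})
police    = pd.DataFrame({"citizen_id": [1, 2], "citations":     [0, 1]})

# One join per data stream yields a unified per-citizen profile.
profile = financial.merge(transit, on="citizen_id").merge(police, on="citizen_id")
print(profile)
```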
Until quite
recently, it was easy to see the digital revolution as a great liberalizer, a
way to transmit ideas faster than any would-be censor could react. The reality
is turning out to be far more complicated.
The internet
dispersed data, but new technological advances can concentrate its power in the
hands of a few. With more than 30 billion devices expected to be connected to
the internet by 2020, each one generating new data, those who can control, process
and exploit the information rush will have a major advantage. A regime bent on
stability may feel virtually compelled to do so.
But we
shouldn’t assume that the benefits will accrue only to repressive governments.
When dictatorships sought in recent years to monitor their citizens’ online
communications, the U.S. State Department and others sponsored encryption tools
that allowed would-be dissenters to safely communicate. When regimes censored
information and blocked access to key websites, circumvention tools cropped up
to allow unfettered access.
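The same principle still applies. A minimal sketch of the kind of tool those programs sponsored: symmetric encryption that leaves anyone monitoring the wire with only ciphertext, here using the open-source Python cryptography library.

```python
# Sketch: encrypting a message so an eavesdropper on the network sees
# only ciphertext, using the open-source `cryptography` library.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # shared secretly between the two parties
cipher = Fernet(key)

token = cipher.encrypt(b"meet at the usual place")  # what a censor would see
print(token)
print(cipher.decrypt(token))  # only a key holder recovers the message
```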
That is the
right idea. Open societies will need to marshal an array of responses in the
contest ahead. Democracies will need to slap sanctions on the individuals and
groups using new tools for repressive ends, inflict higher costs on technology
companies complicit in gross human-rights abuses, invest in countermeasures and
harden their own systems against external intrusions. Free governments will
also have to differentiate between using new technologies for legitimate
purposes (such as traditional law enforcement) and using them to solidify
single-party control, curtail basic rights and meddle in democracies abroad.
Dictators
from Caracas to Pyongyang will seek to exploit the enormous potential for political
misuse inherent in the emerging technologies, just as they have over the
decades with radio, television and the internet itself. Democracies will need
to be ready to fight back.
Mr. Fontaine is the CEO of the Center
for a New American Security in Washington, D.C. Ms. Frederick is an associate
fellow in the center’s technology and security program and worked previously
for Facebook, the U.S. Naval Special Warfare Command and the Department of
Defense.
Appeared in the March 16, 2019, print edition as 'The Autocrat’s New Tool Kit: High-Tech Tools for Suppressing Dissent.'