How “Manipulation Armies” Are Undermining Democracies
A new report on the closed loops of disinformation stoking chaos and confusion.
Posted November 18, 2017
We used to know them as the “hidden persuaders”—Vance Packard’s now almost quaint phrase from the 1950s for the droves of lobbyists, marketing specialists, and opinion-makers who pulled every trick in the book to get us to buy their products and services or to vote for their candidate. Today, powered by secret algorithms, psychographic profiling, offshore troll farms, and unscrupulous political actors, they are better known as “manipulation armies,” and their low-cost, high-impact gaslighting is undermining democracies the world over.
That is the somber conclusion of Manipulating Social Media to Undermine Democracy, a timely, well-documented report published earlier this week by the U.S. nonprofit and watchdog Freedom House. Funded in part by the U.S. State Department’s Bureau of Democracy, Human Rights, and Labor, as well as by Google, Yahoo, and several other partners, the report is an annual update on the state of freedom on the net, itself a window on the health and vulnerability of political systems in every world region.
“Governments around the world have dramatically increased their efforts to manipulate information on social media over the past year,” the report notes, with a sharp uptick in at least half of the 65 nations studied. Of the world’s 3.4 billion internet users, 42% “live in countries where the government employs armies of ‘opinion shapers’ to spread government views and counter critics on social media.” And 63%, remarkably, live in countries where users of social media “were arrested or imprisoned for posting content on political, social, and religious issues.” The sharpest declines in internet freedom occurred in Ukraine, Egypt, and Turkey, but the U.S., UK, France, and Germany all registered modest-to-significant drops. For the third consecutive year, the Chinese government was judged the worst abuser of internet freedom.
Online manipulation and disinformation tactics were found to play “a significant role in elections in the United States and at least 17 other countries,” including Venezuela, Turkey, and the Philippines. The report’s verdict on last year’s U.S. election—bolstered by a range of well-documented studies on the apparently unwitting but no less consequential role of Facebook, Twitter, Google, YouTube, and Instagram in disseminating fake and polarizing news items—may come as less of a surprise at this stage of the Trump-Russia probe. Nonetheless, it will be invaluable to citizens and institutions responsible for upholding electoral integrity, because it documents the falsehoods and strategies of deceit objectively and with the necessary precision:
The use of “fake news,” automated “bot” accounts, and other manipulation methods gained particular attention in the United States. While the country’s online environment remained generally free, it was troubled by a proliferation of fabricated news articles, divisive partisan vitriol, and aggressive harassment of many journalists, both during and after the presidential election campaign.
We are reminded, to take just two examples, of the role played by “smearing individuals’ public images” during and after the election and, even more specifically and chillingly, that in March 2017 U.S. Customs and Border Protection agents “asked Twitter to reveal the owner of an account that objected to [the president’s] immigration policy, and backed off only after the company fought the request in court.”
As these examples help underline, the report’s concern about state-sponsored gaslighting extends far beyond the need for fair elections, which are also threatened by related factors such as extreme gerrymandering for partisan gain and voter suppression efforts in heavily minority districts. One of the report’s key takeaways is that governments—especially ones favoring autocratic rule—use online manipulation and disinformation for domestic ends, to advance their agenda while limiting dissent, deflecting controversy, and thwarting opponents and opposition more generally. “Over the past few years,” the report determines, “state-sponsored efforts to control online discussion have become significantly more widespread and technically sophisticated, with bots, propaganda producers, and fake news outlets exploiting social media and search algorithms to ensure high visibility and seamless integration with trusted content.”
Sometimes the mechanism can be as low-tech as “hashtag poisoning,” favored in Mexico, where automated bots “flood antigovernment hashtags with irrelevant posts in order to bury any useful information.” In other countries, such as the Philippines, “keyboard armies”—whose members earn up to $10 per day “operating fake social media accounts”—bombard users with fake support for the president while attacking his critics. In still other nations, such as Saudi Arabia, Qatar, and Bangladesh, the same mechanisms have been used to drum up sectarian strife and to target, and in some cases incite violence against, different religious and ethnic groups, as well as atheists, agnostics, and others.
In such scenarios, a growing feature of everyday life in many parts of the world, not only is information crowd-sourced and weaponized, but trust in legitimate news agencies and social institutions is also massively eroded. Relatedly, technical attacks against “news outlets, opposition, and rights defenders” rose markedly last year, and cyberattacks “became more common due in part to the increased availability of relevant technology, which is sold in a weakly regulated market, and in part to inadequate security practices among many of the targeted groups or individuals.”
Against such tactics, efforts by Facebook and Twitter to counter targeted disinformation, including by deleting sites and accounts sponsored by state and foreign actors and by introducing fact-checking mechanisms and alerts, have so far proven weak and ineffectual—“way too little, way too late,” according to critics in countries facing the lasting consequences.
Regulation and tighter security can help, but when governments themselves adopt these strategies to maintain their own nefarious forms of control, workable remedies are in short supply. Disinformation efforts are not only relatively low cost but also difficult to detect and, consequently, to source and counter. Knowing that, in the cases of Bahrain, Azerbaijan, Mexico, and China, for instance, “independent forensic analysts concluded that the government was behind” orchestrated attacks on opposition politicians and human rights defenders does little to protect those individuals from harm, much less to counter misperception and build tolerance for difference and dissent, twin pillars of any democracy. The damage such disinformation campaigns can wreak is incalculable and, if current trends continue, looks set to worsen in the coming years.
“In the absence of a comprehensive campaign to deal with this threat,” the report concludes, “manipulation and disinformation techniques could enable modern authoritarian regimes to expand their power and influence while permanently eroding user confidence in online media and the internet as a whole” (also see Sunstein and Tufekci).
Absent the political resolve and commercial will, the prospects for reform are grim. Division, discord, and hyper-partisanship are easy to stoke in times of heightened distrust, and all three favor autocratic rule. The days when we could leisurely debate whether the internet would usher in a new age of egalitarianism are now sadly far behind us. The onus is on Big Tech, but also on citizens and governments everywhere, to help protect what remains of democracy.
References
Freedom House. Nov. 2017. Freedom on the Net 2017: Manipulating Social Media to Undermine Democracy. [Link]
Office of the Director of National Intelligence, National Intelligence Council. Jan. 2017. Background to “Assessing Russian Activities and Intentions in Recent US Elections”: The Analytic Process and Cyber Incident Attribution. [Link]
Packard, Vance. 1957. The Hidden Persuaders. New York: David McKay.
Sunstein, Cass. 2017. #Republic: Divided Democracy in the Age of Social Media. Princeton: Princeton UP.
Tufekci, Zeynep. 2017. Twitter and Tear Gas: The Power and Fragility of Networked Protest. New Haven: Yale UP.