Tuesday Q&A: Social media, the 2016 election, and how Russia used disinformation to divide the electorate

In this edition of Tuesday Q&A, Western Today talked with WWU Assistant Professor of Computer Science Michael Tsikerdekis about Russia's cyber attacks on this country during the 2016 election and afterwards, how the attacks were carried out and eventually traced, and how the nation's voracious appetite for social media and willingness to spread disinformation contributed to the attack's success.

U.S. intelligence agencies, as well as outside analysts, have verified that Russia engaged – and is still engaging – in protracted efforts to interfere with the 2016 election by dividing Americans along the lines of race, religion and ideology. How is such a strategy planned, and what makes this country so open to an attack like this?

MT: The simple way to explain the process is that it involves creating fake accounts and pages and then elevating the status of those accounts, either by using other fake accounts to like and follow them (referred to as a "Sybil attack") or by simply purchasing ads that promote the content of these accounts or pages. The content these accounts generate is designed to be polarizing, and it is often proofread by Americans for linguistic and cultural context. They are not doing anything that others cannot do; in fact, many individuals in many places, including within the U.S., do this to a degree.
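
To make the "Sybil attack" pattern concrete, here is a minimal sketch in Python. The account names and the naive popularity metric are invented for illustration; real platforms use far more complex ranking signals.

```python
# Minimal sketch of a Sybil attack on a follower-based popularity score.
# Account names and the scoring rule are hypothetical.

followers = {}  # account -> set of accounts that follow it

def follow(follower, target):
    """Record that `follower` follows `target`."""
    followers.setdefault(target, set()).add(follower)

def popularity(account):
    """Naive popularity signal: raw follower count."""
    return len(followers.get(account, set()))

# The attacker spins up a cluster of coordinated fake accounts ("Sybils")...
sybils = [f"fake_user_{i}" for i in range(100)]

# ...and points them all at the account or page being promoted.
for s in sybils:
    follow(s, "promoted_page")

# Any metric that simply counts followers now sees a "popular" page.
print(popularity("promoted_page"))  # 100, all of it manufactured
```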

What is dangerous is that with Russia we have a confirmed state actor that is willing to commit large resources to this process. The degree of social media adoption in the United States makes such strategies particularly effective in influencing public opinion.

Social media by its very nature involves spreading thoughts and ideas that a user supports. How much are U.S. citizens to blame for metaphorically nurturing a crop that the Russians planted?

MT: I think some blame should be attributed to anyone who distributes unverified information to a set of peers. In real life, we are used to taking news we hear from someone with a grain of salt. On social media, by contrast, most people are bombarded with news from their peers and assume that the material being distributed is important. They do not ask themselves: if someone read this material aloud to them on the street, would their reaction be the same? I think there must have been domestic bad actors who took advantage of this and helped, so to speak, nurture the bad crop.

How was Russia able to keep this effort a secret for so long, and how did we eventually figure out who was to blame?

MT: Fake accounts are a daily occurrence on social platforms, and their source often cannot be discerned. For example, a Russian bot farm may decide to tunnel its traffic through India and then create fake accounts and content. To most people, these accounts will look and sound American, and few would follow through on investigating the source. Even if you do investigate, in this example you will find that the country of origin appears to be India rather than Russia. There is a long paper trail that needs to be followed in order to trace a fake account back to Russia, let alone to a state-sponsored bot farm.
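
As a rough, hypothetical illustration of why the visible trail stops at the tunnel's exit, consider this Python sketch. The IP addresses (drawn from documentation ranges) and the geolocation table are invented for illustration.

```python
# Hypothetical sketch: geolocating the visible IP only reveals the tunnel exit.

# Invented geolocation table (documentation-range addresses).
GEO_DB = {
    "203.0.113.5": "Russia",    # true origin of the traffic
    "198.51.100.7": "India",    # relay / tunnel exit
}

def exit_address(origin_ip, hops):
    """Tunneled traffic arrives bearing the address of the last hop."""
    return hops[-1] if hops else origin_ip

# The bot farm routes its traffic through a relay in another country.
visible_ip = exit_address("203.0.113.5", hops=["198.51.100.7"])

# A naive investigation stops at the address it can see.
print(GEO_DB[visible_ip])  # "India", even though the campaign originated elsewhere
```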

What defenses can social media providers put in place to stop “fake news”?

MT: The operating model of social media platforms is at odds with protecting individuals from malicious sources of information. The platforms have a responsibility to their investors to increase revenue, and in social media that requires users and ads; increase either one and you increase revenue. Most providers therefore have no real incentive to restrict user registration or to impose rigorous verification criteria, which would be a first step in the right direction.

Furthermore, providers talk about using fake account detection approaches. As the creator of a few such approaches, I can tell you that they only catch the most trivial of accounts, and they are expensive, since humans also have to evaluate the results. Again, it is a step in the right direction, but unless the equation between users, ads, and revenue changes, it will be difficult to see any radical change.
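
As a hedged illustration of why such detectors only catch the crudest accounts, here is a minimal rule-based sketch. The features and thresholds are invented for illustration and are not drawn from any real system.

```python
# Hypothetical rule-based fake-account detector.
# Features and thresholds are illustrative, not from any real platform.

def looks_fake(account_age_days, followers, following, posts_per_day):
    """Flag only the crudest bot-like patterns."""
    signals = 0
    if account_age_days < 7:                 # brand-new account
        signals += 1
    if following > 10 * max(followers, 1):   # follows far more than it is followed
        signals += 1
    if posts_per_day > 50:                   # inhuman posting rate
        signals += 1
    return signals >= 2

# A crude bot trips the rules; a careful fake with an aged account,
# bought followers, and a human posting cadence sails right through.
print(looks_fake(account_age_days=2, followers=3, following=500, posts_per_day=120))    # True
print(looks_fake(account_age_days=400, followers=800, following=750, posts_per_day=4))  # False
```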

Lastly, how can we, as consumers of social media, be agents of change to stop the spread of planted disinformation meant to do harm to our country?

MT: The best approach is to critically evaluate information: follow a news item back to its source, and identify that source's process for evaluating information as well as any biases it may have.

I think there are two further ways that people can help prevent the spread of disinformation. First, use the filtering features provided by social media platforms to restrict what appears on your timeline. You have some control over removing individuals from your circles who have a bad habit of sharing unverified or unverifiable information.

The second way that someone can help themselves, and in turn others, is by understanding the nature of information as it traverses social media. Social platforms are not sources of journalism; they operate more like the telephone game we used to play as kids. You expect that your local station or newspaper has vetting procedures and wouldn't let just anyone distribute information, but on social media that is exactly how information is shared: sources, many of them anonymous (or pseudonymous), convey information to one another and may even alter it as it passes through. What you receive is just that: a news item that is the product of an online telephone game.

If people realize this, they can then be more critical of the content.
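
To make the telephone-game analogy concrete, here is a toy Python simulation. The chain length, alteration probability, and sample headline are invented purely for illustration.

```python
import random

# Toy "online telephone game": each re-sharer may alter the item in transit.

def relay(item, alter_probability=0.3):
    """One person re-shares the item, possibly adding their own spin."""
    if random.random() < alter_probability:
        return item + " [+ this sharer's spin]"
    return item

def share_chain(item, hops=6):
    """Pass the item through a chain of re-sharers with no vetting step."""
    for _ in range(hops):
        item = relay(item)
    return item

random.seed(0)
original = "City council votes on budget"
# What arrives may carry layers of alteration the original never had.
print(share_chain(original))
```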


Michael Tsikerdekis has taught at Western since 2017 and received his doctorate in Informatics from Masaryk University in the Czech Republic. His research interests include cybersecurity and online deception.