While looking for something else I fell into a nest of automated accounts pretending to be Andrew Yang supporters.

How It Started: The MAGA Bots

For the past month or so I have been building software to find, track and visualize networks of fake accounts and their activities. This is something I have wanted to work on for a long time, ever since I read an article about how bot networks were being used to cultivate leads for scammers. Social media bots are used as bait, blanketing the platforms they’re on with messages, hoping for an interaction so that they can pass you along to a scam artist.

The idea that different fake accounts are created for different purposes and coordinate their efforts fascinated me. It scratched that anthropological itch. I wanted to know more about the economy of the fake internet.

Good software is built on an iterative cycle of experiments. In order to figure out what I should build, I needed to feed the first prototype data as soon as possible. Well… what better place to find fake accounts than the Twitter feed of the current President of the United States? A mere 100 retweets from a Donald Trump tweet uncovered four bots, each with thousands of followers that seemed sure to be bots themselves. It was terribly exciting to watch immature software draw beautiful snowflake networks from connections between bots.

But the problem was there are literally millions of Twitter bots posing as MAGA supporters. No really, SparkToro estimates that 30 million of Donald Trump’s followers (60% of all his followers) are in fact fake. I was finding interesting things, but the connections were loose, almost incidental. It’s hard to find any kind of network when you’re looking at a few hundred points on a network of millions.

But MAGA bots are supposed to be troll bots, right? The theory goes that they’re supposed to cultivate division and distrust in the American political system. So I thought maybe I could find more interesting nodes in the network if I looked at the most obvious targets of MAGA trolling.

In other words, Democratic candidates.

I didn’t really think to focus on any one Democratic hopeful. I just grabbed a few tweets here and there as I saw them. All of them had bot activity of some kind. Given enough time to collect the data, bot networks would eventually appear for all of them. A tweet about Bernie Sanders yielded the largest bot haul, but they were on the lower range of activity and were a hodgepodge of issue-focused accounts instead of “Bernie Bros”. They weren’t connected to my existing pool of MAGA bots and most of them seemed pretty innocuous.

And yet, off to the side on the graph there were a handful of bots that were closely networked together. They all seemed to follow or interact with one another. They had similar constructed handles and profiles. Intrigued, I told the software to trace this network and see where it went. Like the MAGA bots, each round of search produced more bots, but unlike the MAGA bots each round of new bots had connections to at least two or three existing bots. As I continued to trace the network, more and more bots appeared, each connected to more and more nodes on the network until finally I had a pattern so dense with crisscrossing connections in the center it looked like a collapsing star.
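The tracing loop described above can be sketched as a simple iterative expansion: start from a handful of seed accounts and, each round, admit any account connected to at least two members of the existing network. This is a minimal illustration over a toy follow graph; the `trace_network` helper, the account names, and the threshold are all hypothetical, and the real tool pulls its edges from the Twitter API rather than a dictionary.

```python
def trace_network(seed_bots, follows, min_links=2, max_rounds=10):
    """Expand a suspected bot network round by round.

    `follows` maps account -> set of accounts it follows or interacts with.
    Each round admits accounts linked (in either direction) to at least
    `min_links` accounts already in the network. Hypothetical sketch; the
    real tracker sources these edges from the Twitter API.
    """
    network = set(seed_bots)
    for _ in range(max_rounds):
        candidates = set()
        for acct, edges in follows.items():
            if acct in network:
                continue
            # Count connections to known network members, in either direction.
            links = sum(
                1 for member in network
                if member in edges or acct in follows.get(member, set())
            )
            if links >= min_links:
                candidates.add(acct)
        if not candidates:
            break  # the network stopped growing
        network |= candidates
    return network
```

The `min_links` threshold is what distinguishes a densely networked cluster like this one from the loose, incidental connections among the MAGA bots: raising it prunes accounts that merely brush against the network.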

I had found the bots of the Yang Gang.

An Ethnographic Study of Twitter Bots

Fake accounts are not such a straightforward issue. That’s what makes them so interesting. An adversary does not have to build a fully functioning AI in order to masquerade as a legitimate account, largely because there are many legitimate accounts that behave like bots. As far as social networks go, Twitter is a highly programmable platform, but the question of what it is okay to program Twitter accounts to do is still up for debate. For example, my bot tracker focuses on bots that post on average 70 times a day or more. While a human being can certainly post to Twitter a hundred times in a day, it’s unlikely that a real person would post a hundred times a day, every day, without some kind of automation helping out.
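As a rough illustration, that 70-posts-a-day threshold amounts to a lifetime-average check on an account. The function names and fixed threshold here are a hypothetical sketch, not the tracker’s actual code:

```python
from datetime import datetime, timezone

def avg_posts_per_day(tweet_count, created_at, as_of):
    # Lifetime average; clamp age to one day so brand-new accounts
    # don't divide by zero.
    age_days = max((as_of - created_at).days, 1)
    return tweet_count / age_days

def looks_automated(tweet_count, created_at, as_of, threshold=70):
    # 70+ posts/day flags probable automation -- but note this also
    # catches legitimate high-volume accounts like news outlets.
    return avg_posts_per_day(tweet_count, created_at, as_of) >= threshold
```

As the next paragraph notes, tripping this heuristic doesn’t prove an account is fake; it only means the posting volume is implausible without automation.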

But that doesn’t necessarily mean the account is fake. Virtually all the official accounts for major news outlets fit this criterion. They tweet and retweet current stories as content becomes available, and the people who follow them want them to behave that way. They follow them in order to receive that content, and if a story were only tweeted once they would probably miss it.

Similarly, not every political bot is malicious. The bot tracker also found this Elizabeth Warren for WV! account which in less than a month had close to 4,000 tweets. And yet it’s hard to argue that such an account is part of a plot to manipulate anyone. It doesn’t present itself as a real person and is clear about its political objectives. One could argue that such accounts lower the quality of Twitter content overall, but from the perspective of manipulating public opinion they seem fairly innocent.

Some of the bots in the Yang Gang network are very much like the Elizabeth Warren for WV! bot. They are campaign bots and don’t pretend to be otherwise. But others are quite clearly attempting to pass themselves off as real people. Occasionally their profiles will say something like “just here for the Yang stuff” as if to justify/explain why their accounts are new and do nothing but retweet Yang Gang content a hundred times a day.

These accounts share the same basic template as their MAGA bot cousins. They obsessively retweet. They may have one real tweet pinned so that the first thing you see on their timeline makes it look like they occasionally post original content. They load their bios up with emoji and hashtags. If the account produces original tweets, it will regularly retweet those tweets to further the illusion of real activity.

It can be difficult to understand what the value is in a fake account retweeting content to followers that consist almost entirely of other fake accounts. But this type of activity has three different groups that make up its audience: the bot’s followers, people following the hashtag, and the account that originated the tweet being retweeted.

Retweets and likes can be a method of radicalization. Think of the little surge of endorphins you get when someone responds to your content, and how that feedback shapes what you post next. It encourages you to post more and become more set in your beliefs. Having millions of bots that look like they could be real people liking and retweeting content doesn’t just spread propaganda, it actively radicalizes people. So-called “affective networks” psychologically reward people for indulging in emotions like anger, outrage, and paranoia.

“…people enjoy the circulation of affect that presents itself as contemporary communication” (Dean, 2010, p. 21, emphasis in the original). Subsequently, the enjoyment of participating in an affective network is a “binding technique” that intensifies through reflexive communication: adding comments, links, and interconnecting myriad platforms, people, and devices (Dean, 2010, p. 21). Rather than finding accurate news meaningful, Facebook users find the affective pleasure of connectivity addictive, whether or not the information they share is factual, and that is how communicative capitalism captivates subjects as it holds them captive.

from The Self-Radicalization of White Men: “Fake News” and the Affective Networking of Paranoia

That sort of tactic is most effective when you focus on interacting with real accounts and content that have modest like/RT numbers. People who get hundreds or thousands of each probably have notifications turned off on their accounts. People who only get a handful of each are more likely to notice and therefore experience the binding effect.

Here’s what the averages for content retweeted by Yang Gang bots look like:

|         Type         |  Avg | Median |
| :------------------: | :--: | :----: |
| Likes on RT content | 2107 | 202 |
| RTs total on content | 551 | 59 |
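The gap between average and median in that table is what you’d expect when a handful of viral tweets sit in an otherwise modest sample: a few outliers drag the mean far above the median. Here’s a minimal sketch of how those summary numbers might be computed; the `engagement_profile` helper and the sample data are made up for illustration:

```python
from statistics import mean, median

def engagement_profile(tweets):
    # Summarize like/RT counts on the content a bot amplifies.
    # A few viral tweets drag the mean well above the median, so the
    # median better describes the typical target of the binding tactic.
    likes = [t["likes"] for t in tweets]
    rts = [t["retweets"] for t in tweets]
    return {
        "avg_likes": mean(likes),
        "median_likes": median(likes),
        "avg_rts": mean(rts),
        "median_rts": median(rts),
    }
```

A median of 202 likes against an average of 2,107 says the typical retweeted post has modest engagement, which is exactly the profile the binding tactic described above favors.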

And the top users the Yang Gang bots RT from are known Yang surrogates — staff, regional campaign leaders, delegates, etc. In other words, this bot net is just here to astroturf. They’re trying to keep the activity numbers up so that Andrew Yang’s passionate fan following continues to be a story.

That this is going on has not escaped the notice of legitimate Yang Gang members.

This is not exclusively an Andrew Yang issue. As far back as 2010, during the special election for Ted Kennedy’s Senate seat, bots have been used by campaigns to influence public perception. Newt Gingrich allegedly bought fake followers in the run-up to his 2012 presidential bid (Update: Only 92% of Newt Gingrich’s Twitter Followers Are Fake). The Institutional Revolutionary Party in Mexico has been known to astroturf in order to get its issues trending. Indian Prime Minister Narendra Modi has been accused of using fake accounts. From The New Statesman:

…late last year, when Time started monitoring Twitter for its Person of the Year award, local media soon spotted a pattern. Thousands of Modi’s followers were tweeting “I think Narendra Modi should be #TIMEPOY” at regular intervals, 24 hours a day — while a rival army of bots was tweeting the opposite.

This problem did not start in 2016 and was not invented by the Russians. There’s plenty of evidence that Americans have been in the game from the very beginning. Every campaign for 2020 has some kind of fake account or bot activity working around it.

Fake or Foot Soldier?

I’m old enough to remember Anonymous’s campaigns using the High Orbit Ion Cannon, where people would voluntarily turn their computers into zombies for a DDoS attack. What’s missing from the conversation around the role bots play in manipulating public perception is the reality that some accounts are both real and fake. Often called cyborgs, these accounts automate activity for a wide variety of reasons. A few are clearly attempting to build a brand or sell a product. While going through the Yang Gang I found a few writers promoting books, some YouTube hopefuls, and one up-and-coming hip hop boy band.

It’s possible that some of these accounts belong to actual Yang Gang members who have volunteered them to be automated in order to promote their chosen candidate. Certainly I’m not suggesting that the Yang Gang doesn’t exist or that it is not a powerful political/campaigning force.

It’s also not certain that fraudulent activity benefiting a candidate is something that candidate initiates, encourages or even supports. The Yang Gang has already proven itself quite willing to cheat on the candidate’s behalf without taking direction from his campaign. It’s natural to assume astroturfing for Andrew Yang must come from Andrew Yang, but there’s really no evidence yet that it does. Fake accounts can be acquired for pennies. Anyone can do it.

Contrary to popular belief, bot networks are accumulated over time, not built all at once by a single entity. They are often repurposed, sold, borrowed, lent, stolen from attack to attack. The Mirai bot network was originally built to attack rival Minecraft servers. Because of this open market, a single network can cover a broad range of fake account types, each with a different agenda.

Take for example, Mike.

Mike’s account is interesting because it’s connected to both Yang Gang bots and MAGA bots but it doesn’t really post political content from either. Instead most of Mike’s 350 tweets a day (that’s one every 4 minutes) are devoted to funny videos, mostly TikToks. A lot of silly things involving cats.

Mike is a teenager. He works at Taco Bell. He wants to be internet famous and sets modest goals for himself about building his YouTube channel. He donates to Andrew Yang’s campaign, which is why he’s connected to some Yang Gang bots, but he’s also very passionate about Second Amendment rights, which is why he’s connected to the MAGA bots. When he does retweet political content it comes off as very sincere. He’s not shilling for political overlords, he’s just posting his beliefs on the internet the way millions of other people do.

At first I thought Mike was a fake account with a stolen identity, because when I looked at his YouTube channel all the videos (45 in total) had been posted the week before. But the content itself looked real. Sometimes Mike mentions the date and chats about his YouTube stats and the long climb to monetization. If this were a case of someone else’s content being stolen to give a fake account a more realistic backstory, I couldn’t find evidence of it.

I noticed Mike’s account originally because he’s one of a few links between the MAGA bots and the Yang Gang bots. I kept coming back to Mike’s account because it made me sad. He’s just a kid who wants a fanbase and doesn’t realize he’s sitting alone on the internet clinging to affirmation from bots. I couldn’t help wondering what the gentle encouragement of hundreds of fake accounts was doing to him. He seemed lonely. Eager to please. In his last video blog he announced he wasn’t going to college after all…

Gloria Law’s Lasagna

It’s not difficult to create a fake account on social media, but as I dug through the Yang Gang bot network it became clear that not all these accounts had been created for their current purpose. A large number of them were born sometime in 2019, but there were older accounts too. Why would Yang Gang bots exist before the Yang Gang did?

The answer came when the bot hunter found Gloria Law… both of her.

Best I can tell, the story of Gloria Law is true: she sued the city of Cambridge in 2014 for discrimination and retaliation after she filed a complaint with the Massachusetts Commission Against Discrimination and was fired. But whether any of the Gloria Law accounts are actually run by Gloria Law is anybody’s guess. There’s this one that has already been suspended by Twitter. Then there’s this one, which behaves in an aggressively bot-like manner. But the one that popped up first in my tracing of the Yang Gang bot network was this one, which caught my attention because she turned Yang Gang quite recently. Only a few weeks ago this account tweeted exclusively about lasagna. Hundreds and hundreds of retweets about lasagna. Pictures of lasagna, recipes for lasagna, tweets from around the internet praising lasagna. I have never seen an account so single-minded in its promotion of one food. At this rate the Yang Gang content will have pushed these older tweets completely off her timeline by the time you read this, but I had the foresight to take screenshots.

I don’t know whether the real Gloria Law likes lasagna or not but it seems likely that this account had a different screen name and profile pic in the beginning of January. Someone repurposed it to pretend to be a Yang supporter. It’s a fair bet that at least some portion of this bot network has been built by buying “vintage” fake accounts that can be transformed into Yang Gang members.

The Impact of Everyday Influencers

Rounding out this adventure is one last bit of luck. As I was digging through the Yang Gang bots I came across a bot account that was being followed by a person I knew. Not a celebrity, a person I knew in person. This was weird, but there are lots of reasons why legitimate Twitter users might follow bots. If you’re in the Yang Gang, the obsessive retweeting of Andrew Yang-related news provides real utility, the same way following a conventional news source might. I didn’t think anything of it, but then I came across another account this friend was following, and another and another.

Sure, these accounts retweeted hundreds of Andrew Yang news articles a day, but if one wanted to keep up with the Yang Gang, why choose accounts fraudulently presenting themselves as everyday American citizens over official campaign accounts automated in largely the same way?

Is it possible that these bots were real people my friend actually knew? I had the bot hunter look into what bots were connected to his account:

Oh boy, he wasn’t just following and being followed by a lot of Yang Gang bots. He was connected to a large number of bots that were now serving overt political purposes, including some MAGA bots. My friend is not very political, but a quick glance at his Twitter feed makes it clear that he’s not donning a red Make America Great Again cap any time soon.

When I asked my friend about this, he admitted to having used an auto-follower in the past. He still occasionally follows people in hopes of building his fanbase, but the benefits of automation ultimately didn’t seem worth the risk. He stopped, but he’ll still follow back when a user follows him. That means when he makes the occasional statement about current events, and some political bots follow him as a result, he follows back. By doing so he strengthens the legitimacy of the network and makes it easier for the bots to trick users into thinking they’re real. He’s trying to build his audience; they’re trying to influence an election.

Author of Kill It with Fire: Manage Aging Computer Systems (and Future-Proof Modern Ones)
