Are Pro-Russian Bot Swarms Making A Comeback?
An inauthentic pro-Russian Twitter network is pairing old tricks with new tech to push Putin’s rebuttal to the West
On Wednesday, January 26, identical automated tweets from botlike accounts flooded the Twitter replies of various news accounts referencing recent evacuations of personnel from the U.S. embassy in Kyiv, Ukraine. The automated tweets from recently created accounts responded with one of four distinct reply templates, each pointing to an earlier Russian authorization to evacuate its embassy staff and families in Estonia, Latvia, and Lithuania.
Noticing several suspicious signs in these tweets and the accounts posting them, our team collected tweets through Twitter’s Search API, querying on four common strings of text found in the network’s highly formulaic posts. The data we gathered showed a mere 28 accounts had produced more than 2,017 tweets containing identical text. We then dove deeper into the data to learn more about this botnet.
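The duplicate-detection step can be sketched in a few lines. This is a minimal Python illustration, not the actual tooling we used: it assumes the tweets have already been collected (e.g., from the Search API) as (author, text) pairs, and the threshold is an arbitrary stand-in.

```python
from collections import Counter, defaultdict

def find_copypasta(tweets, min_repeats=10):
    """Group tweets by normalized text and flag strings repeated many
    times -- the telltale signature of a "copypasta" botnet.

    `tweets` is an iterable of (author_id, text) pairs. Returns
    {normalized_text: (repeat_count, distinct_author_count)} for every
    text posted at least `min_repeats` times.
    """
    text_counts = Counter()
    authors_by_text = defaultdict(set)
    for author_id, text in tweets:
        # Collapse whitespace and case so trivial variations still match.
        norm = " ".join(text.split()).lower()
        text_counts[norm] += 1
        authors_by_text[norm].add(author_id)
    return {
        text: (count, len(authors_by_text[text]))
        for text, count in text_counts.items()
        if count >= min_repeats
    }
```

A pass like this over the collected data is what surfaces the pattern described below: a handful of near-identical strings repeated thousands of times by a few dozen accounts.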
Beyond the spammy “copypasta” replies they posted, our data showed these Twitter accounts were strikingly similar across a number of elements:
- All of the accounts had Russian display names
- All accounts were created after January 17, 2022
- All featured profile pictures generated by GANs (generative adversarial networks), in other words, faces of individuals who do not exist
- Some account names did not appear to match the gender of their profile photos
- Many used GAN photos of children
- Every tweet was sent from the Twitter Web Client [1]
- Almost every account also listed a unique location inside Russia [2]
The bots also appeared to specifically respond, in a clearly automated way, to tweets referencing keywords related to the current situation in Ukraine. These included “evacuation,” “embassy,” and “Ukraine.” Additionally, some accounts tagged the Twitter account of American news network ABC.
One of the first Twitter bots to participate was “Daria Makarova” (@DMakarova76), an account created on January 17, 2022, whose bio reads: “Leading New York investigative, drug and alcohol law firm. #Filibuster.” The account’s reported location was Elektrostal, a city 35 miles outside of Moscow. The account presented itself as a U.S.-based law firm concerned about Senate procedure while tweeting from a small town outside Moscow. Makes complete sense.
After firing off one trial tweet shortly after its creation about Russian Baltic embassy evacuations, tagging Canadian actor Ryan Reynolds, Samsung Canada, the Toronto Maple Leafs hockey team, and a Toronto-based children’s charity, @DMakarova76 then proceeded to send five separate tweet replies within four seconds. New bot accounts began dutifully tweeting identical replies on the same threads at a similarly torrid pace of roughly one per second. “Daria Makarova,” who began tweeting at 17:14:33 GMT, wrapped up her tweet rampage 10 minutes and four seconds later, having lodged 110 replies in that span. Those must be some tired fingers.
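Burst patterns like this one, with replies landing seconds apart for minutes on end, are a classic automation signal. A minimal sketch of that heuristic, again illustrative rather than the method we actually applied, with an assumed five-second threshold:

```python
from datetime import datetime, timedelta

def looks_automated(timestamps, max_median_gap=5.0):
    """Heuristic: flag an account whose median gap between consecutive
    tweets is at most `max_median_gap` seconds. Human users rarely
    sustain a reply every few seconds for minutes at a time.

    `timestamps` is a list of datetime objects, in any order.
    """
    if len(timestamps) < 3:
        return False  # too few tweets to judge
    ts = sorted(timestamps)
    gaps = sorted((b - a).total_seconds() for a, b in zip(ts, ts[1:]))
    median_gap = gaps[len(gaps) // 2]
    return median_gap <= max_median_gap
```

An account posting 110 replies in roughly ten minutes, as @DMakarova76 did, sails well under any plausible threshold.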
The replies contained largely identical strings of text, at times customizing the tweets to refer to Estonia, Latvia, Lithuania—or even the misspelled “Baltic countiries” (sic). This exact typo was replicated a total of 410 times by new Twitter users. Subtle, these accounts were not.
This behavior was repeated by 28 other recently created accounts, each using an inauthentic profile photo. In total, four tweets with near-identical wording were repeated more than 2,030 times. Nearly every tweet originated from Twitter’s Web Client, and only eight tweets were retweeted, a sign the network spread its message through sheer posting volume rather than any organic engagement.
The sloppiness and crude methods employed by this inauthentic Twitter network are reminiscent of some of the less sophisticated information operations conducted by Russia in recent years. Following the assassination of Russian dissident politician Boris Nemtsov in 2015, the Internet Research Agency (also known as the “troll farm”) launched a botnet repeatedly alleging the Kremlin was not involved with the assassination. These bots repeated identical Russian-language comments, such as: “Ukrainians killed him...he was stealing one of their girlfriends.”
In the context of past Russian information operations, Wednesday’s tweetstorm is neither impressive nor novel. But it does come on the heels of months of power projection by pro-Kremlin actors in the information space and Russia's own military posturing along Ukraine's borders. In this 2022 swarm, the "flood the zone" tactic is the same, albeit with a modern update of computer-generated profile photos.
While this campaign attempts to create the impression of Russians facing hostile conditions in the NATO Baltic states, and is clearly designed to appear as if it originates from Russia, there is not enough evidence to definitively attribute it to a specific state-affiliated actor.
As Russia has mobilized over 127,000 troops along Ukraine’s borders, malign actors affiliated with Belarus and Russia have demonstrated their ability and intent to sow confusion and distract attention. Wednesday’s bot operation, regardless of the actors responsible, represents a low-cost way of corrupting the information space, in this case targeting Ukrainian allies in the Baltics. Over the last month, Ukraine’s law enforcement agencies have reported more than 600 bomb threats nationwide, a similarly cheap method of generating localized panic and confusion. Cyberattacks targeting Ukrainian government websites two weeks ago were designed to appear Polish in origin, but were attributed by Kyiv to Belarusian intelligence. Russia itself has also spoofed cyberattacks on nations such as Estonia in the past.
Though all reported locations of this latest bot swarm point back to Russia, these are easily spoofed and not necessarily indicative of a Russia-backed information operation. However, this activity targeting Baltic states aligns both with Russia’s present tensions toward states providing weapons to Ukraine and the NATO alliance more broadly.
These digital manipulation tactics are designed to be noticed. Just as videos of Russian military equipment posted to TikTok have been amplified by Russian-aligned actors and outlets in recent weeks, these bot swarms aim to loudly echo Russian interests. In 2014, prior to Russia’s annexation of Crimea and invasion of Eastern Ukraine, pro-Kremlin social media manipulation likewise increased in the midst of a high-stakes geopolitical crisis; this bot swarm appears to represent another effort with similar goals. We can almost certainly expect online manipulation, targeting Ukraine and its Western allies alike, to increase commensurate with offline tensions.
[1] A tweet’s “client” refers to the application or device a tweet was sent from. Clients include browsers (e.g., “Twitter Web Client”), mobile apps (e.g., “Twitter for iPhone” and “Twitter for Android”), third-party social media management tools (e.g., TweetDeck, Sprout Social), and custom-built bot software. Many tweets are sent from the Twitter Web Client, but the fact that all 2,017 tweets in this set came from a single client is highly irregular and, in tandem with the other suspicious signs from these accounts, likely indicative of automation.
[2] Reported locations on Twitter run on the honor system; they are different from geocoded tweets, which cannot be (easily) spoofed. The only reported location that appeared more than once within these profiles was Grozny, Chechnya’s capital.
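The client-uniformity signal described in the first footnote is easy to check mechanically. A minimal sketch, assuming each tweet's client string (Twitter exposes this as the tweet's source field) has been extracted into a list:

```python
from collections import Counter

def client_uniformity(sources):
    """Return (top_client, share), where `share` is the fraction of
    tweets sent from the single most common client. A share of 1.0
    across a large sample is a red flag for automation; organic
    audiences mix web, iPhone, and Android clients.
    """
    counts = Counter(sources)
    top_client, n = counts.most_common(1)[0]
    return top_client, n / len(sources)
```

For the dataset described above, every one of the 2,017 tweets reported the same client, i.e., a share of exactly 1.0.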