
‘Bot-like’ accounts pumped divisive content during Democratic debates

The accounts spread hashtags like #DemDebateSoWhite and #KamalaHarrisDestroyed.


Credit: Scott Olson / Staff

Key Takeaways
  • Hundreds of accounts with “bot-like” characteristics were seen spreading divisive content on social media, according to the data company Storyful.
  • It’s unclear whether these accounts were actually bots, and if so, how effective they were in shaping public discourse.
  • In July, former special counsel Robert Mueller told Congress that Russia was attempting to manipulate public opinion “as we sit here.”

The 2020 U.S. presidential election is more than a year away, but some experts are already reporting signs of bots injecting divisive rhetoric into online political discussions.

During the Democratic debates on Tuesday and Wednesday, hundreds of social-media accounts with bot-like traits were seen spreading misinformation and racially charged messages, according to analytics company Storyful. One trending hashtag was #DemDebateSoWhite, which, as the Wall Street Journal reports, originated from a Twitter account under the name Susannah Faulkner and was later shared by conservative activist Ali Alexander.

According to Storyful:

“The bot-like activity on #DemDebateSoWhite follows a high degree of automation in discussion around Kamala Harris’s ethnicity during the first Democratic debate, indicating the issue of race as a possible target for bot and bot-like accounts during the election campaign.”

Another trending hashtag was #KamalaHarrisDestroyed, first tweeted by conservative actor and comedian Terrence K. Williams. The tweet was retweeted more than 12,000 times, but Storyful said many of the accounts that interacted with it had “bot-like” characteristics. To determine which social-media accounts might be bots, Storyful analyzes criteria that include “accounts with usernames that share almost identical patterns of characters and numbers, repetitive text in multiple posts, and a high volume of activity during a certain period,” the Wall Street Journal reports.
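Storyful hasn’t published its detection pipeline, but the three criteria quoted above (shared username patterns, repetitive text, bursts of activity) map onto simple heuristics. The Python sketch below is a hypothetical illustration of that kind of check, not Storyful’s actual method: the (timestamp, text) post format, the pattern-matching rule, and every threshold are assumptions made for the example.

```python
import re
from collections import Counter, defaultdict
from datetime import datetime, timedelta

def username_pattern(username):
    """Collapse runs of digits so that handles such as 'newsfan82731945' and
    'newsfan11048273' both map to the same pattern, 'newsfan#'."""
    return re.sub(r"\d+", "#", username.lower())

def shared_pattern_groups(usernames, min_group=10):
    """Return username patterns shared by at least `min_group` accounts,
    a rough proxy for 'usernames with almost identical patterns of
    characters and numbers'."""
    groups = defaultdict(list)
    for name in usernames:
        groups[username_pattern(name)].append(name)
    return {pattern: names for pattern, names in groups.items()
            if len(names) >= min_group}

def repetitive_text_ratio(posts):
    """Fraction of an account's posts whose text duplicates another of its
    posts. `posts` is a list of (timestamp, text) tuples."""
    texts = [text for _, text in posts]
    if not texts:
        return 0.0
    counts = Counter(texts)
    return sum(c for c in counts.values() if c > 1) / len(texts)

def max_posts_per_hour(posts):
    """Largest number of posts made inside any one-hour window."""
    times = sorted(ts for ts, _ in posts)
    best = 0
    for i, start in enumerate(times):
        end = start + timedelta(hours=1)
        best = max(best, sum(1 for t in times[i:] if t <= end))
    return best

def looks_bot_like(posts, dup_threshold=0.5, burst_threshold=60):
    """Flag an account that mostly repeats itself or posts in extreme bursts.
    The thresholds are illustrative guesses, not values reported by Storyful."""
    return (repetitive_text_ratio(posts) >= dup_threshold
            or max_posts_per_hour(posts) >= burst_threshold)

# Example: 80 identical tweets inside a single hour trip both the repetition
# and burst checks, and 15 handles sharing one letters-plus-digits pattern
# surface in the username grouping.
posts = [(datetime(2019, 7, 31, 21, 0), "#KamalaHarrisDestroyed")] * 80
print(looks_bot_like(posts))                                       # True
print(shared_pattern_groups([f"debatefan{n}" for n in range(1000, 1015)]))
```

Real systems weigh many more signals, and, as the next paragraph notes, tripping checks like these is not proof that an account is automated.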

Still, there’s no hard evidence showing bots pumped up these tweets, and Twitter itself told Recode it didn’t find any substantial evidence of bot manipulation. But what does seem clear is that, compared to just a couple of years ago, the public is more skeptical of, and perhaps more paranoid about, political content on social media.

“There’s an increased awareness that social media manipulation exists and is a thing, but that also has the effect of making people look for it everywhere,” Renee DiResta, a 2019 Mozilla fellow in Media, Misinformation, and Trust and an expert in social media manipulation, told Recode.

It’s hard to quantify how effective bots really are at changing opinions or dividing the public, but their effect might be small. In Finland, for example, a 2019 study found that even though bots attempted to influence two of the nation’s most recent elections, the public didn’t really notice.

“On Finnish Twitter, the tweets observed with our research methodology did not spread particularly wide, as they failed to garner any substantial attention from ordinary users,” the researchers wrote.

Effective or not, it seems safe to say that social media manipulation will become increasingly common in online political discussions unless social media companies can find a way to identify and eradicate it. As former special counsel Robert Mueller told Congress in July, regarding the threat of Russian interference in U.S. elections, “They’re doing it as we sit here, and they expect to do it during the next campaign.”
