A US-led international operation recently dismantled a large Russian bot farm that used AI to spread disinformation on X (formerly Twitter). The disinformation reportedly took the form of fabricated personas posing as ordinary U.S. citizens, created with the Meliorator AI software, which enabled the operators behind the facility to generate fake social media accounts.
Many of the AI-generated bots, complete with profile pictures of smiling, clean-cut people and names such as “Sue Williamson” and “Ricardo Abbott,” purported to be ordinary people taking to X to extol Putin’s generosity and virtues.
This takedown is a significant blow to Russia’s AI-powered disinformation efforts, but it also raises a question: could similar tactics be deployed on other platforms?
Seeking to answer this question is Jason Kent, Hacker in Residence at Cequence, who explains the background to the technology for Digital Journal.
Kent says that the approaching U.S. election season makes this news from law enforcement even more important, noting: “As we approach election season, we can expect more and more of this. I find that these AI-driven bots are powerful: they are just as capable of learning the algorithm that drives a post’s views, and they can pull the reverse card on the opposite sentiment.”
AI bots employ a variety of AI technologies, from machine learning (algorithms, features, and data sets that optimize responses over time) to natural language processing (NLP) and natural language understanding (NLU).
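To make the sentiment-analysis piece concrete, here is a minimal, self-contained sketch of a lexicon-based sentiment scorer of the kind such systems build on. Everything in it (the lexicon, the weights, the function name) is an illustrative assumption, not the software used in this operation or any platform’s actual API.

```python
# Toy lexicon-based sentiment scorer. The lexicon, weights, and names
# are illustrative assumptions, not any real platform's implementation.
POSITIVE = {"generous": 1.0, "virtuous": 1.0, "great": 0.8, "strong": 0.5}
NEGATIVE = {"corrupt": -1.0, "scandal": -1.0, "failed": -0.8, "weak": -0.5}

def sentiment_score(text: str) -> float:
    """Average the lexicon weights of the sentiment-bearing words in text."""
    hits = [POSITIVE.get(w, 0.0) + NEGATIVE.get(w, 0.0)
            for w in text.lower().split()]
    scored = [h for h in hits if h != 0.0]
    return sum(scored) / len(scored) if scored else 0.0

print(sentiment_score("a generous and virtuous leader"))  # 1.0 (positive)
print(sentiment_score("a corrupt and failed scandal"))    # ~-0.93 (negative)
```

Production systems replace the hand-built lexicon with trained NLP/NLU models, but the shape of the signal is the same (text in, score out), and it is that signal the bots Kent describes learn to game.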
Kent discusses how stopping these bots presents a major challenge, both because there are so many of them and because they are relatively easy to create.
Kent observes: “Taking these types of bot networks down is an extremely difficult and important task, one that usually results in the head of the hydra growing back as two heads. The more detection mechanisms that are known, the harder it is to take the next botnet down.”
Citing an example of how AI could be used to disrupt elections, Kent explains: “Let’s say a politician wants to squash all of another politician’s posts. One way to do that is to have a botnet repost the content alongside known ‘bad’ information; the post suddenly stops registering with the sentiment-detection portions of the platform’s AI, and suddenly that post isn’t important and isn’t being shared. The same rules apply to pushing a post up and making it trend.”
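The dynamic Kent describes can be sketched numerically. Assume, purely for illustration, that a platform ranks a post by the average sentiment of recent interactions with it (real ranking algorithms are far more complex and not public):

```python
# Hypothetical suppression tactic: if a ranking signal averages the
# sentiment of interactions on a post, a botnet can flood the post with
# negative interactions and drag the score down. The averaging rule is
# an assumption for illustration, not any platform's real algorithm.
def rank_signal(interaction_scores: list[float]) -> float:
    return sum(interaction_scores) / len(interaction_scores)

organic = [0.8, 0.9, 0.7, 0.85]          # genuine, positive engagement
print(rank_signal(organic))              # ~0.81: the post looks healthy

bot_flood = [-1.0] * 40                  # botnet reposts with 'bad' content
print(rank_signal(organic + bot_flood))  # ~-0.84: the post gets buried
```

Run in reverse, with a botnet posting uniformly positive interactions, the same mechanism pushes a post up and makes it trend.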
Kent’s final advice runs: “Everyone just needs to stop and think, ‘Did I just read a post that caused an emotion?’ If the post is on social media, the next thought needs to be, ‘I don’t trust this content.’”