US indictments reveal Russia’s use of AI in disinformation campaign targeting 2024 election

Experts warn that the Kremlin may deploy artificial intelligence (AI) in its attempts to sway the November presidential election through various influence schemes.

The U.S. Department of Justice recently unveiled indictments as part of an ongoing investigation into alleged Russian government schemes aimed at manipulating American voters through disinformation campaigns. 

U.S. Attorney General Merrick Garland announced a major crackdown on influence operations conducted through state-run media and online platforms – a campaign dubbed “Doppelganger.” While the focus initially centered on employees of the Russian state-controlled media outlet RT, subsequent indictments revealed a broader scope and complexity to Russia’s initiatives.

The U.S. also seized over two dozen internet domains associated with the operation and established an Election Threats Task Force, comprising FBI Director Christopher Wray and top Justice Department officials. 

“This is critically serious, and we will treat it accordingly,” Garland stated alongside Wray on Wednesday.

The indictments describe the use of AI to create social media profiles “masquerading as U.S. (or other non-Russian) citizens” and to generate the appearance of “a legitimate news media outlet’s website.”

“Among the methods Doppelganger employed to drive viewership to the cybersquatted and unique media domains was the deployment of ‘influencers’ worldwide, paid social media advertisements (in some cases created using artificial intelligence tools), and the creation of social media profiles posing as U.S. (or other non-Russian) citizens to post comments on social media platforms with links to the cybersquatted domains,” the indictment stated.

The U.S. Department of the Treasury announced that its Office of Foreign Assets Control (OFAC) had designated 10 individuals and two entities, a move paired with visa restrictions and a Rewards for Justice reward of up to $10 million related to such operations.

The Treasury reported that Russian state-sponsored actors have leveraged generative AI deepfakes and disinformation “to undermine confidence in the United States’ election process and institutions.”

The Treasury identified the Russian nonprofits Autonomous Non-Profit Organization (ANO) Dialog and ANO Dialog Regions as using such tools “to develop Russian disinformation campaigns,” including “fake online posts on popular social media accounts … that would be composed of counterfeit documents, among other material, in order to elicit an emotional response from audiences.”

ANO Dialog allegedly “identified U.S., U.K. and other figures as potential targets for deepfake projects” in late 2023. The “War on Fakes” website served as a key outlet for disseminating this fabricated information, and the operation also employed bot accounts that targeted voting locations in the 2024 U.S. election.

In an interview, Bulgarian investigative journalist Christo Grozev disclosed that complaints regarding the “global propaganda effort by Russia” – which the Kremlin was “losing to the West” in the early stages of the invasion of Ukraine – spurred the decision to utilize AI and “all kind of new methods to make it indistinguishable from the regular flow of information.”

“They plan to do insertion of advertising, which is in fact hidden as news, and in this way bombard the target population with things that may be misconstrued as news, but are in fact advertising content,” Grozev explained. 

“They plan to disguise that advertising content on a person-to-person level as if it is content from their favorite news sites,” he cautioned. “Now, we haven’t seen that in action, but it’s an intent, and they claim they have developed the technology to do that.”

“They’re very explicit that they’re not going to use Russia-related platforms or even separate platforms,” he added. “They’re going to infiltrate the platform that the target already uses. And that is what sounds scary.”
