AI-Generated Fakes on TikTok: Examples Relevant in the Election Context

Every day, Moldovan TikTok is “invaded” by manipulative or fake videos partially or entirely generated with artificial intelligence tools. Their topics largely coincide with the disinformation narratives spread during the same period by politicians, influencers, and anonymous accounts associated with the Russian Federation or Ilan Sor.

  • During our previous monitoring, which used a pre-determined list of hashtags to identify disinformation narratives in the election context, we detected dozens of fake AI-generated videos promoting such “classic” narratives as “drawing Moldova into a war” or “France’s involvement in the elections in the Republic of Moldova.” More recently, however, these videos have been tagged with common hashtags such as #moldova or #chișinău; in this way, the messages infiltrate the hundreds of videos published by real users every day.
  • Many of the accounts involved in spreading artificially generated fakes show none of the typical characteristics of established accounts: they have neither a significant follower base nor a long history of activity. Even so, taken individually, such videos manage to reach dozens, hundreds, or even thousands of users.


Familiar but Artificial Voices

Most often, according to our observations, manipulative narratives are shared via fast-paced, news-style videos with AI-generated voiceovers. In some cases, the voiceover imitates a human voice so accurately that viewers can easily mistake it for a real person’s. In others, the artificial origin is obvious from the robotic tonality, pronunciation errors, or unnatural pauses. Either way, such videos achieve their purpose: they make an impact and convey the intended messages to viewers.

An example of this approach is a fake news item Mediacritica previously reported on, claiming that 800 NATO soldiers would arrive in the Republic of Moldova before the elections.

In an AI-“voiced” video, this fabricated information is presented as interference in the parliamentary elections scheduled for September 28. Curiously, the AI “played a trick” on its authors: instead of “soldiers,” it announces that “800 Spartans” are about to arrive in the Republic of Moldova. This fake news item fuels several narratives persistently spread by Russian propaganda in recent months: arming Moldova and dragging it into a war, foreign control, and intentions to falsify the election results.

The page that published this video, along with many other AI-generated posts, has been producing content since the 2024 presidential elections, and some of its videos have over 100,000 views.

Another page using AI-generated content published a video about “dividing the country by means of religion.” The message is narrated by an artificial voice and contains images borrowed from another source. In other words, the images look real, but the voice is artificially generated.

This approach was most probably used to create the impression that the message belongs to a real person.

The same page mainly publishes materials favorable to Victoria Furtuna, presenting her in a positive light. Another video, for instance, covers the politician’s campaign travel; in this case, both the voice and the figure of the person “speaking” in the corner of the screen are AI-generated.

Non-Existent “Elderly Ladies”

In August, several sources and propaganda accounts covered the sentencing of Evghenia Gutul, the head of the Gagauzia region, to seven years of imprisonment, to be served in custody, in the case concerning the illegal financing of the former Sor Party. The topic was covered far more insistently than other current events, both by bloggers affiliated with Ilan Sor and in AI-generated videos. An eloquent example is the “elderly ladies” handcuffed by police officers for “coming to defend” Evghenia Gutul.

The videos share essentially the same plot: an elderly woman shouts, “Freedom for Gutul!” and asks, “Why am I being detained?” In addition to creating an illusion of large-scale support for the head of the region, the message is intended to convey the idea that police officers are abusing their power and that people in the Republic of Moldova are detained or persecuted for political reasons.

One of these obviously AI-generated videos has almost 90,000 views. In the comments, several real users either condemn or support the “police officers’” actions.

More recently, in addition to Ilan Sor, the “elderly ladies” have also started actively speaking in favor of Vladimir Plahotniuc, who was detained in Greece. In one AI-generated video, the protagonist claims she would be happy if the ex-DP leader returned to politics, because she is “fed up with all this yellow nonsense from the PAS.”

In another video, with over 200,000 views, three elderly persons sing and call on Plahotniuc to come home and “overthrow Maia.”

Batman “Invited” by the PSRM “Arrives” in the Republic of Moldova

Artificial intelligence is not used on TikTok merely to create the illusion that the people and events shown are real. In some cases, it serves as a blunt manipulation tool, promoting fake narratives through obviously generated images.

For instance, in a video claiming that “Maia Sandu controls justice,” the image of the president is placed next to LGBTQ+ community symbols, resorting to homophobia to demonize her.

In other cases, accounts publishing content in favor of Ilan Sor “feature” the head of state in extreme contexts: next to a demon or in the likeness of a mythological creature. These images, accompanied by manipulative texts, suggest that Maia Sandu intends to destroy Moldova. Their target audience is Russian-language speakers.

We have also identified videos in which the ruling party is presented as corrupt through images intended to stir up viewers’ emotions. People dressed in the colors and symbols of the Action and Solidarity Party are depicted stealing money from a mother with three children or from elderly people; these pictures are intended to convey the idea that the government robs the most vulnerable.

Some narratives are presented using more creative methods. One TikTok account, for instance, has published several AI-generated videos in which the character Batman “arrives” in the Republic of Moldova. Some of these videos carry a political subtext: the superhero is said to have been invited by “Igor Nicolaevici” [Dodon – author’s note] to confront Maia Sandu.

In other episodes, Moldova is depicted as a state in crisis – with deteriorated roads, LGBTQ+ parades, and a government that “consists of thieves,” according to the description. The narrative culminates with images of Batman leading citizens carrying crosses and tridents towards Chisinau.

These videos do not conform to the usual patterns of coordinated propaganda and may have been created by individual users on the basis of their own political views. That, however, does not change the fact that they spread harmful narratives and even incite hatred.

Artificial Panic in the Comments

Along with visual and audio elements, the comments section is often used as an additional tool for manipulating the audience. For instance, a fake video about “Moldova’s involvement in the war in Ukraine” is accompanied by comments in Romanian written by users from Myanmar, Bangladesh, or Benin. One writes that “the world does not understand the essential things,” another considers that “it is Moldova that is looking for war,” and a third describes the situation in our country as “shocking.”

Fake comments are generally used to amplify a piece of content’s visibility and artificial credibility. By generating apparent popularity, they prompt the platform’s algorithms to favor sharing such materials, which explains why one video we analyzed exceeds the threshold of 100,000 views. At the same time, such comments activate the social conformity effect (“herd behavior”), which leads users to adopt the opinions of the apparent majority and rarely to question the real identity of those behind them.


CONCLUSIONS

A year ago, approximately 34 million photos were being generated daily with the help of artificial intelligence. The volume of AI-generated content keeps growing rapidly and steadily; it has already become an element of information and hybrid wars, and the Republic of Moldova is no exception. In the days before elections and during election campaigns, fake videos generated and published on TikTok are a dangerous manipulation tool.

By using credible artificial voices, fabricated images, and fake comments that create a false impression of social legitimacy, these materials distort public perception of reality and shatter citizens’ trust in the election process, the state authorities, and democracy in general.

In a fragile electoral context, such practices increase the risk of polarization and radicalization, turning social networks into a space vulnerable from the perspective of the Republic of Moldova’s information security.

The monitoring was carried out within the project “Resilient Media, Informed Voters: Safeguarding Moldova’s Elections from Disinformation”, funded by the Embassy of the Kingdom of the Netherlands in Moldova. The views expressed are those of the authors and do not necessarily reflect the position of the donor.

Access to the TikTok monitoring tool was provided as part of the project “ProElect – Promote accountability in Electoral processes in EaP through increased participation and capacity of civic actors” implemented by the Center for Research and Advocacy in European Affairs with the financial support of the European Union and Equal Rights & Independent Media (ERIM).
