The Artificial Intelligence Conundrum for Current Diplomatic Affairs

This article is based on an intervention I gave at the Radicalisation Awareness Network Policy Support Western Balkans Thematic Research Meeting on “Foreign influence, aggression against Ukraine and the impact on ethno-nationalism and violent extremism in the Western Balkans: A P/CVE perspective” on 16 March 2023.

As we have seen in recent years and months, artificial intelligence technology has gone through enormous development and, most importantly, become far more accessible. Apart from the ubiquitously discussed ChatGPT, countless platforms have made a splash in the communication market and, consequently, in the influence and disinformation market.

Many commentators stress how this unprecedentedly fast revolution can be a double-edged sword. However, the same has applied throughout history to every innovation milestone. Nuclear technology can be used to produce great amounts of energy, as well as very powerful weapons. Ten years ago Twitter helped propel democratic revolutions across North Africa, the Middle East and Ukraine, yet it is also a wide channel of information utilized by radical extremists, including ISIS and the Wagner military company.

Therefore, as an analyst of the future of communication technology, I tend to be very wary of building, promoting or feeding overly simplistic narratives. I prefer to look into what we as communicators, policy analysts, policy makers and governance experts can do to make the best of the technology at our disposal in a given period, and to analyse how malign actors may decide to use that technology to their advantage.

AI will certainly revolutionize many aspects of our lives, not only our communication and the content we consume, and create new opportunities for economic growth. Yet it can also be used for malicious purposes, particularly in the realm of disinformation and aggression campaigns.

In Ukraine, for example, Russia has been waging an aggressive campaign to destabilize the country for many years. This campaign includes sophisticated disinformation tactics, such as the use of AI-powered bots and social media platforms to spread false and misleading information. Indeed, allow me to say that Russia has perfected the art of disinformation over the years.

For over a decade, Russia has been at the forefront of disinformation farms whose output spreads all over the world. Their main goal is to destabilize countries and meddle in election processes wherever Russia has an interest. But harnessing artificial intelligence for this work is a whole different ball game, one nobody expected. As it turns out, Russia is already creating AI-generated personas with full profiles and human faces.

Last year, NBC News journalist Ben Collins wrote a thread about two specific people who were spreading disinformation from the city of Kyiv. But not everything is as it seems: neither of these profiles belongs to a real person. As it turns out, both were created by a Russian troll farm in order to spread fake news about Kyiv.

The first persona Collins introduced is Vladimir Bondarenko, a blogger from Kyiv who despises the Ukrainian government. Looking at his artificially created face is downright scary, given how real the picture appears. But it does have some telling flaws that are not that difficult to spot. On the Ukraine Today website, Vladimir has an entire backstory, as if he were a real human being: he studied to become an aviation engineer but was later forced to become a blogger when Ukraine’s aviation infrastructure collapsed. In the picture, the man has strange ears, a major giveaway that his face isn’t real.

Russia also created an AI profile of a woman, Irina Kerimova from Kharkiv. She was supposedly a private guitar teacher before becoming chief editor of a Russian propaganda website presumably founded by RT (the Kremlin-funded broadcaster). Her earrings also show a telltale mismatch.
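These visual glitches are more than trivia: faces synthesized by generative adversarial networks tend to leave statistical traces, including an unusual distribution of energy in the image’s frequency spectrum. Purely as a hedged illustration of that idea, and not a production detector, a first-pass screening heuristic might look like the following sketch (the file name is hypothetical and the band cutoff is an arbitrary assumption; serious pipelines rely on trained classifiers):

```python
# Illustrative sketch: measure how much spectral energy sits in the
# highest-frequency band of an image, a crude tell for some
# GAN-generated faces. The 0.75 cutoff and the file name below are
# assumptions for demonstration only.
import numpy as np
from PIL import Image

def high_freq_ratio(path: str) -> float:
    """Share of the image's spectral energy in the outermost band."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h // 2, xx - w // 2)
    outer = radius > 0.75 * radius.max()  # outermost frequency band
    return spectrum[outer].sum() / spectrum.sum()

if __name__ == "__main__":
    ratio = high_freq_ratio("suspect_profile_photo.jpg")  # hypothetical file
    print(f"high-frequency energy share: {ratio:.4f}")
    # A value far outside the range seen on known-real photos would
    # justify a closer manual look (ears, earrings, backgrounds).
```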

Facebook revealed to Collins that these two profiles are part of a Russian propaganda operation identified by the State Department back in 2020. The outlets involved, News Front and South Front, were both created by Alexander Malkevich, the same man who ran the St. Petersburg troll farm after 2016. For those who aren’t aware, this is the same troll farm that has ties to the infamous Cambridge Analytica. It is just another way in which Russia uses fake news to influence entire countries.

The use of AI in these campaigns allows Russia to create convincing fake news stories that are tailored to the specific interests and beliefs of their target audiences. By leveraging social media platforms, chatbots, and other AI-powered tools, they can create highly targeted disinformation campaigns that can quickly gain traction and influence public opinion.

One example of this is the use of AI-powered bots to spread false information about the conflict in the Donbas region. These bots generate fake news stories that exaggerate the actions of Ukrainian forces and downplay Russia’s own aggression, and these stories are then disseminated through social media channels. By doing so, the operators can create the illusion of a popular movement in support of Russia’s intervention in Ukraine, while undermining support for the Ukrainian government and its efforts to resolve the conflict peacefully.
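One telltale signature of this kind of bot amplification is many accounts pushing near-identical text within a short window. Purely as an illustrative sketch, and not any platform’s actual detection logic (the post format, the 10-minute window and the 20-account threshold are all assumptions), such bursts can be flagged like this:

```python
# Illustrative sketch: flag bursts of near-identical posts, a common
# signature of coordinated bot amplification. The thresholds below are
# assumptions chosen for demonstration, not values from a real system.
import re
from collections import defaultdict

def normalize(text: str) -> str:
    """Collapse case, URLs and whitespace so copy-paste variants match."""
    text = re.sub(r"https?://\S+", "<url>", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def flag_bursts(posts, window_s=600, min_accounts=20):
    """posts: iterable of (account_id, unix_timestamp, text) tuples."""
    groups = defaultdict(list)
    for account, ts, text in posts:
        groups[normalize(text)].append((ts, account))
    suspicious = []
    for key, items in groups.items():
        items.sort()
        # slide over the time-sorted copies looking for a dense burst
        for i in range(len(items)):
            burst = {a for t, a in items[i:] if t - items[i][0] <= window_s}
            if len(burst) >= min_accounts:
                suspicious.append((key, len(burst)))
                break
    return suspicious
```

The design choice worth noting is that the heuristic targets coordination (many accounts, same text, short window) rather than content, which makes it harder for operators to evade simply by changing topics.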

Apart from AI-generated images, allow me to open a parenthesis on so-called “deepfakes”: videos in which a person’s face or body has been digitally altered so that they appear to be someone else, typically used maliciously or to spread false information.

Three weeks had passed since Russia invaded Ukraine on 24 February 2022. The world and global organisations were expecting Kyiv to fall soon, finding it difficult to believe that the war-hit nation could withstand even a month of the offensive. At that crucial point, a video appeared in which Ukraine’s President Volodymyr Zelensky, in his signature green attire, was seen addressing his soldiers from behind a podium bearing the Ukrainian state emblem.

In the video, Zelensky was seen asking his soldiers to lay down their weapons and return to their families. “There is no need to die in this war. I advise you to live,” he appeared to say. The video circulated massively on social media and briefly ran on Ukrainian television, suggesting that the leader had fled Kyiv.

The one-minute clip was what is called a ‘deepfake’: a sophisticated hoax in which artificial intelligence is used to create fake images and, most commonly, fake videos. The video, which was posted by hackers, was instantly removed from social media platforms and debunked. Zelensky dismissed it as a “childish provocation” and mocked Russia for desperately spreading fake news.

In the Western Balkans, Russia has also been using similar tactics to sow discord and undermine democratic institutions. These efforts have included the creation of fake news stories that play on ethnic and religious tensions, promoting extremist views and undermining social cohesion.

These campaigns are often carried out in secret, using sophisticated AI algorithms that can evade traditional monitoring methods. This allows Russia to spread disinformation and shape public opinion undetected, making it difficult for policymakers and civil society organizations to respond effectively.

So, what can we do to combat this growing threat?

One important step is to invest in better AI-powered tools that can detect and track disinformation campaigns in real time. This requires developing advanced algorithms and machine learning techniques that can identify and neutralize these campaigns before they have a chance to do significant damage.
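To give a concrete, if deliberately minimal, sense of what such a building block can look like, here is a hedged sketch of a supervised text classifier that scores posts by resemblance to known disinformation. The training examples are invented placeholders; real systems train on large labelled corpora and combine content with network and behavioural signals:

```python
# Minimal sketch of one building block: a supervised text classifier
# scoring posts against examples of known disinformation. The training
# lists below are invented placeholders for demonstration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples (1 = known disinformation, 0 = benign).
texts = [
    "Ukrainian forces shelled civilians, Western media hides the truth",
    "Local council announces new tram schedule starting Monday",
    "Secret NATO labs confirmed near the border, officials silent",
    "University opens applications for autumn scholarships",
]
labels = [1, 0, 1, 0]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

# Score a new post: probability that it resembles the disinfo class.
post = ["Leaked documents prove Kyiv plans to abandon the Donbas"]
print(f"disinfo-likeness: {model.predict_proba(post)[0][1]:.2f}")
```

In practice, such a score would feed a human-review queue rather than trigger automated removal, which keeps the freedom-of-expression concerns discussed below in view.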

Another important step is to work more closely with social media companies to ensure that their platforms are not being used to spread fake news and disinformation. This means developing new policies and guidelines that can help identify and remove these types of posts, while still respecting freedom of speech and expression.

Certainly, training government officials to counter disinformation-driven narratives and increasing their media literacy is another important way to combat the threats of foreign influence, disinformation, and aggression campaigns in Ukraine and the Western Balkans.

Government officials, particularly those working in the areas of foreign policy, defense, and security, need to be equipped with the knowledge and skills necessary to recognize and respond to disinformation campaigns. This includes developing a deep understanding of the strategies and tactics used by foreign actors to spread false and misleading information, as well as the ability to identify and analyse online sources and social media trends.

Moreover, increasing the media literacy of government officials can help them to recognize and counter the spread of disinformation more effectively. This can involve providing training on critical thinking skills, such as the ability to identify bias, propaganda, and false information. It can also include training on how to engage with traditional and social media effectively to counter false narratives and promote accurate information.

Another important solution to combat the threats of foreign influence, disinformation, and aggression campaigns in Ukraine and the Western Balkans is to generate an enabling environment where anti-disinformation start-ups in Europe can thrive and be coordinated for maximum efficiency.

Start-ups that focus on developing innovative solutions to detect and counter disinformation campaigns are critical to addressing the challenges posed by these threats. By generating an enabling environment that supports and encourages these start-ups, we can harness the power of innovation to create more effective and efficient tools to combat disinformation. This can be achieved through a number of measures, including providing financial support to start-ups, creating networks that facilitate collaboration and knowledge sharing among them, offering attractive fiscal policy, and establishing regulatory frameworks that encourage the development of innovative solutions.

In addition, establishing a coordinated approach to combating disinformation across Europe can help to maximize the efficiency of these start-ups. This can involve the creation of centralized platforms that provide one-stop access to anti-disinformation tools and expertise, as well as the establishment of regional hubs that facilitate the coordination of activities and the sharing of best practices.

By creating an enabling environment for anti-disinformation start-ups in Europe, we can encourage the development of innovative solutions that are tailored to the specific challenges faced by Ukraine and the Western Balkans. This can help to build a more resilient and effective response to the threats posed by foreign influence, disinformation, and aggression campaigns.

It is clear that the impact of artificial intelligence technology on foreign influence, disinformation, and aggression campaigns in Ukraine and the Western Balkans is a serious threat that requires immediate attention. By working together, we can develop the tools and strategies needed to combat this growing threat and ensure that these regions can continue to thrive in the years to come.

