Technologies and disinformation in the Russia – Ukraine War
Fake news thrives in the Russia-Ukraine war. The conflict is being fought in both the physical and the virtual world, and on a global scale. Whether it is manipulated photos, false videos, state propaganda or fake AI-generated interviews, technological disinformation has become a major part of the Russia-Ukraine war. Digital technologies now touch every aspect of the conflict, and authenticity has become blurred as a result. Social media platforms are major players in this battle for truth, and the traditional control of information is no longer effective. Discover how technologies and disinformation have changed the way the war is being fought in both Ukraine and Russia, and how that spread of falsehoods is being countered.
Social media in times of war
Social media platforms have been at the forefront of spreading misleading and false information. Whether shared intentionally or unintentionally, it has become difficult to trust what circulates online. That is why we urge companies to protect themselves against fake news.
TikTok
The world’s first TikTok war. It may sound unbelievable (and, let’s be honest, rather bittersweet), but the social media platform TikTok is adding fuel to the fire. What is TikTok, you ask? TikTok is a short-form video-sharing application that lets users share 15-second videos with their followers on any topic they like. The application is available in over 150 countries and has over 1 billion users; in the United States alone, it has been downloaded over 200 million times. It is fair to say TikTok has a powerful impact on this large audience. During the Russia-Ukraine war, TikTok emerged as one of the biggest platforms for spreading false information in the form of short, snappy videos. The majority of the platform’s users are under the age of 30, an audience that finds its news online rather than through established news sources. A single fake post can make a user question what they believe. That is dangerous, because the truth is becoming indistinguishable from fake news.
Fake Livestreams
From the beginning of the war, fake live streams on TikTok drew some of the highest view counts the platform had ever seen. How are they made? Quite simply: the user locates old footage of conflict, war or some form of military activity, adds distressing audio of explosions or intense fighting, and then starts the live stream. Once the stream has gained enough traction, the user asks the audience for sizeable donations, fooling viewers into believing the events are happening now when, in fact, the video is a complete fake. For example, one TikTok user drew up to 30 million views by posting old videos of Ukrainian military drills and training from 2017.
Facebook and Twitter
Apart from TikTok, Facebook and Twitter are also guilty of spreading fake news and misinformation. During the ongoing Russia-Ukraine war, both platforms have had to take down AI-generated human profiles. One was called Vladimir Bondarenko, a ‘person’ claiming to be from Kyiv who was sharing anti-Ukrainian content. Another, named Irina, claimed to be a teacher in Kharkiv and the editor-in-chief of the news source Ukraine Today. Without close examination, it is extremely difficult to tell whether such profiles are fake. Even more worrying, many of these AI-generated profiles and deepfakes can be created without any knowledge of web coding, which means that anyone with a computer and access to the internet is capable of producing such content.
Artificial Intelligence as a tool to manipulate opinion
AI, short for artificial intelligence, is the simulation of human intelligence by machines, mostly computer systems. In many situations, AI can perform as well as or better than humans. Using AI and trained machine learning models as sources of disinformation is a new and dangerous development, and the conflict between Ukraine and Russia has shown how significant a role this technology can play in modern war. It is an extremely sensitive issue: when AI is combined with military systems to gain an advantage, it can have significant effects on the battlefield. AI systems that speed up the analysis of battlefield data have been used by both sides of the Russia-Ukraine war.
Deepfakes and the spread of misinformation
In mid-March 2022, three weeks into the Russia-Ukraine war, a disturbing video circulated on social media and was even shown on Ukraine 24, a news television channel. The video showed Ukrainian President Volodymyr Zelenskyy, oddly stiff and motionless, asking the citizens of Ukraine to stop fighting Russian soldiers and to surrender their weapons, and it claimed that the President had fled Kyiv. These were not the President’s actual words. The video was a deepfake, constructed using artificial intelligence.
Another deepfake incident, probably courtesy of Russian hackers, even tricked the mayors of the European capitals Berlin, Madrid and Vienna into participating in video calls with a fake Vitali Klitschko, the renowned Ukrainian heavyweight boxer turned mayor of Kyiv.
Deepfake videos are an evolving application of artificial intelligence: their creators program computers to impersonate real people so convincingly that the public accepts the footage as authentic. The technology is designed to manipulate the media. When the Zelenskyy video hit the headlines, it was swiftly removed from social media platforms and denounced as fake by the President himself. Even though it was taken down shortly after it was uploaded, deepfake videos like this still have a deeply concerning effect on society, as they pose a real challenge to what media consumers believe to be true.
What is the purpose of a deepfake?
The purpose of a deepfake is to manipulate its viewers: the creator wants people to believe something that never happened. Deepfakes are used for many purposes, including political manipulation, fraud, blackmail and hoaxes, and they are becoming increasingly prominent online.
How can you spot a deepfake?
Videos posted online are becoming more sophisticated, making it harder to spot a deepfake. However, there are signs that can help to decipher the truth.
- Unnatural facial movements: If something looks a bit off in the face, this could be a red flag. This happens when one image has been placed on top of another in a video sequence or frame.
- A lack of emotion: If the face is lacking any kind of emotion, this could signal an untrustworthy video.
- Unnatural body movements and shapes: Just like facial expressions, if the body movements and shapes look jerky or strange, you should be wary of the video you are watching. A big red flag.
- Unnatural colouring: Abnormal skin colouring, blurry backgrounds, and unnatural lighting all signify that a video could be a fake.
- Teeth and hair that do not look real: Poorly rendered teeth and blurred hair outlines are again indications that the video has been manipulated.
- When slowed down, the images look unnatural: Slow the video down, play it back, and zoom in to examine the individual frames that make it up. This can help reveal whether lip-syncing has been used; a short script like the one sketched below can pull out stills for exactly this kind of inspection.
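As a minimal illustration of that last tip, the following Python sketch (assuming the OpenCV package is installed, and using a hypothetical file name) extracts every fifth frame of a clip as a still image, so the frames can be zoomed into and compared for unnatural lip movement, lighting or blurring.

```python
# Sketch: dump stills from a suspect clip for frame-by-frame inspection.
# Assumes OpenCV (pip install opencv-python); "suspect_clip.mp4" is a
# hypothetical example file, not a real source.
import cv2

VIDEO_PATH = "suspect_clip.mp4"

cap = cv2.VideoCapture(VIDEO_PATH)
frame_index = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of the video
    # Save every 5th frame as a PNG so it can be examined closely
    # for mismatched lighting, blurry edges or odd lip positions.
    if frame_index % 5 == 0:
        cv2.imwrite(f"frame_{frame_index:05d}.png", frame)
    frame_index += 1

cap.release()
```

The saved stills make the per-frame artifacts described in the list above much easier to spot than watching the clip at normal speed.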
An online battle for truth
The pursuit of authenticity online is at an all-time high, and the fight to stop fake news and disinformation has never been more important. The Russia-Ukraine war has exemplified how technologies can be used to spread misinformation. There are, however, ways to slow this spread of disinformation by understanding the most common techniques that are used.
Fact-Checking
Fact-checking resources are a great way of distinguishing real news from fake news. Many European organizations have started to compile lists of sites where users can go to verify information, and some fact-checking databases can even be queried programmatically, as sketched below.
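For instance, the following Python sketch queries the publicly documented claims:search endpoint of the Google Fact Check Tools API. It is only an illustration: it assumes you have obtained your own API key, and the claim text used in the query is a made-up example.

```python
# Sketch: look up published fact-checks for a claim via the
# Google Fact Check Tools API (claims:search endpoint).
import requests

API_KEY = "YOUR_API_KEY"  # placeholder; supply your own key
ENDPOINT = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

params = {
    "query": "Zelenskyy surrender video",  # example claim to check
    "languageCode": "en",
    "key": API_KEY,
}

response = requests.get(ENDPOINT, params=params, timeout=10)
response.raise_for_status()

# Each returned claim carries one or more reviews from fact-checking outlets.
for claim in response.json().get("claims", []):
    for review in claim.get("claimReview", []):
        publisher = review.get("publisher", {}).get("name", "unknown")
        print(f"{publisher}: {review.get('textualRating')} - {review.get('url')}")
```

A query like this surfaces existing verdicts from established fact-checkers before you decide whether to trust or share a piece of content.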
Journalists and news sources
Journalists and fact-checkers are working hard to make sure that every piece of content being put online is verified. Well-resourced news outlets such as the BBC are also making it a point to call out fake news. Some outlets have even hired staff purely to stop the spread of disinformation online and to flag it whenever necessary.
Ask yourself some simple questions
When reviewing information online, it is important to ask yourself some basic questions. Who created this information? Is it a credible news source? Why do you believe this piece of content was created? Essentially, if you are unsure of what you are seeing or reading, it is probably not a good idea to share or repeat the information. The best solution is to fact-check by looking at credible news sources. By doing this, you can help to stop the spread of disinformation in the Russia-Ukraine war.
Cyberattacks and online war
Hackers are now soldiers. Reliable communications have always been a mandatory part of military tactics: a silent army is a dead army. Cyberattacks are one of the most efficient ways to weaken an enemy. The Russian “soft war” strategy for the invasion relies on three pillars: cyberattacks on Ukrainian targets, cyber influence operations aimed at a global audience (which we mentioned earlier), and network penetration and espionage outside Ukraine, specifically against the country’s allies and partners.
We are witnessing an entirely new kind of armed conflict, one that extends beyond the tangible boundaries of the frontlines.
If you would like to dig deeper into the topic of the Russia-Ukraine war, you will find interesting, fact-checked information here.