New “nuclear bomb”: How information warfare is reshaping global politics

Review

Since the 1980s, the rise of the Internet has gradually influenced the global geopolitical climate. This phenomenon gave birth to a new term in the lexicon of conflict – information warfare. By the 2000s, with the proliferation of social networks, rapid information exchange, and the empowerment of individuals to act as information hubs, the online world evolved into a potent political and military tool. In the 21st century, manipulating entire societies, conducting disinformation campaigns, and using propaganda to achieve hostile objectives has become easier than ever. Traditional media, television, and digital platforms have been fully exploited in this regard.

What is alarming is that this process continues unabated, with every sign of further development ahead. Examples include Russia’s support for subversive TV channels in Ukraine, such as those linked to Viktor Medvedchuk, Kremlin-backed chauvinistic media, and the United States’ dominance of the digital world, with its past and likely future strategies. A notable case is the Pentagon’s anti-vaccine operation targeting China’s COVID-19 vaccine during the global crisis. The emergence of AI-powered deep fakes, indistinguishable from real people, adds another layer of threat. This article examines these political projects and the potential dangers facing nations in the future.

US "anti-vaccine operation" in Central Asia

Amid the economic, political, and military rivalry between the world’s two superpowers, China and the United States, information warfare has emerged as a key battleground. This conflict intensified during Donald Trump’s presidency, reaching a peak in March 2020 when Chinese officials suggested that the coronavirus might have been brought to Wuhan by a US soldier during the 2019 Military World Games. Chinese authorities speculated, without evidence, that the virus could have originated from a US Army lab in Fort Detrick, Maryland.

In response, the US Department of Justice alleged that Chinese intelligence operatives created fake social media accounts to disseminate the Fort Detrick conspiracy. This marked the beginning of an escalating information war.

Four years later, in 2024, a Reuters investigation shed light on the repercussions of this exchange, revealing how far the US went to undermine confidence in Chinese vaccines and their distribution. According to the report, the US decided to retaliate against China’s disinformation efforts: in spring 2020, then-Secretary of Defense Mark Esper authorized Jonathan Braga, deputy commander of US Army Pacific (now head of Army Special Operations), to launch a counter-operation.

Operating from MacDill Air Force Base in Tampa, military personnel and contractors created hundreds of fake accounts on social media platforms like Twitter and Facebook. These anonymous profiles spread misinformation about the safety and efficacy of China’s first COVID-19 vaccine, which was in the testing phase. The goal was to undermine global confidence in Chinese vaccines and protective equipment, countering China’s growing influence through its "COVID-19 diplomacy."

By this time, Beijing had begun distributing humanitarian aid worldwide, from masks to medical equipment, reinforcing its international presence. The US operation was not merely retaliatory but aimed at curbing China's expanding reach during the pandemic.

The Pentagon's anti-vaccine operation began almost simultaneously with the first clinical trials of the Chinese vaccine CoronaVac, developed by Sinovac, in March-April 2020. The US military targeted three regions during this mission: Southeast Asia, the Middle East, and Central Asia. Its approach was tailored to local conditions and values, which made it all the more effective.

The initial focus was Southeast Asia, particularly the Philippines. Fake accounts created by the Pentagon exploited a sensitive issue—the long-standing territorial disputes between China and the Philippines in the South China Sea. This made it easy to stir anti-China sentiment among Filipinos. The disinformation campaign centered on sowing distrust in Chinese vaccines, playing on fears such as, "The virus came from China, and so does the vaccine?" Similar false narratives about the ineffectiveness of Chinese masks and other protective equipment were also circulated.

US officials feared that China’s COVID diplomacy might draw Southeast Asian nations, including Cambodia and Malaysia, closer to Beijing. However, the primary target was the Philippines, where relations with the US had deteriorated under President Rodrigo Duterte, who criticized American influence and threatened to revoke US legal jurisdiction over troops stationed there. The pandemic heightened US concerns that Duterte might pivot towards China.

The Pentagon's disinformation campaign succeeded in amplifying distrust of Chinese vaccines. Following US propaganda, Duterte voiced alarm over low vaccination rates and even threatened jail time for those refusing vaccination during a national address in June 2021. At the time, only 2 million of the Philippines' 114 million population were fully vaccinated. By mid-2021, the country had recorded over 1.3 million COVID-19 cases and 24,000 deaths. The primary vaccines available were Chinese, and public hesitancy contributed to the Philippines' high mortality rate compared to neighboring countries. Philippine health experts and former officials interviewed by Reuters confirmed that US social media campaigns had influenced public perception.

The disinformation project extended beyond the Philippines. By summer 2020, the operation expanded to Central Asia and the Middle East. This phase introduced a religious angle, targeting Muslim populations by spreading rumors that Chinese vaccines contained pork gelatin. Fake social media accounts disseminated messages in Arabic for Middle Eastern audiences and in Russian for Central Asia. One widely shared meme depicted a Chinese flag as a curtain separating Muslim women in headscarves from pigs holding vaccine syringes. Reuters reported that this propaganda targeted Kazakhstan, Kyrgyzstan, and Uzbekistan. Hundreds of fake accounts simultaneously spread similar narratives and imagery. Tweets posted on the former Twitter platform exemplify the psychological tactics used to influence public opinion in these regions.

“Can we trust China, which is trying to hide the presence of pork gelatin in its vaccine and distributing it in Central Asia and other Muslim countries where many consider such drugs to be haram?”

“Muslim scholars at the Raza Academy in Mumbai reported that the Chinese coronavirus vaccine contains gelatin from pork and recommended avoiding vaccination with the haram vaccine. China is hiding what exactly the drug is made of, which causes distrust among Muslims.”

The US military’s anti-vaccine operation spanned from spring 2020 to mid-2021. Interestingly, the Pentagon’s clandestine distribution of messages on Facebook and Instagram drew the attention of Meta, the platforms' parent company. Facebook executives first contacted the Pentagon in the summer of 2020, noting that their employees had easily identified fake military accounts, which violated the company’s policies. However, US military officials requested that Facebook refrain from removing the posts, claiming the accounts were part of counter-terrorism operations. This project, initiated during Donald Trump’s presidency, extended into the early months of Joe Biden’s administration in 2021. The campaign ultimately concluded after Biden signed an executive order in spring 2021 prohibiting efforts to undermine vaccines developed by rival nations.

By 2024, a Reuters investigation uncovered significant evidence of the Pentagon’s disinformation campaign targeting China on social media. The investigation revealed that 300 fake accounts were created on Twitter to influence users in the Philippines, while approximately 150 accounts targeted other regions. Similar accounts appeared on Facebook and Instagram, with the majority launched during the spring and summer of 2020. According to Reuters, these fake accounts, managed by Pentagon contractors, amassed tens of thousands of followers throughout the operation. General Dynamics, the IT contractor providing services to US Central Command, was identified as a key player. The investigation indicated that errors by General Dynamics allowed social media platforms to trace the origin of the fake accounts.

How can you be “made to dance”?

The events described above show that the era of "psychological warfare" special operations like the Pentagon's, and of confrontation in the information space, is at its peak. Powerful external forces can carry out such "psychological aggression" at any time, and not only the United States: China and Russia also have well-established experience in this area. The method the US military used during the COVID-19 pandemic, a large-scale disinformation campaign against Chinese vaccines run through fake accounts on social networks, is the simplest and most common of them. In an information war built on messages far removed from the truth, "fighting" with fake accounts may even look outdated.

Yet it would be premature to write this simple method off as ineffective. In societies where Internet users have not developed the habit of fact-checking, where state information policy leans heavily on censorship, and where, as a result, few independent media outlets work around the clock, a Pentagon-style operation could well be used to sow instability. As the saying goes, logic will get you from point A to point B, but imagination will get you anywhere, so let us imagine how such a plan might unfold.

Organizing political instability and unrest of this kind takes several years. Those years can be spent studying the internal and external political environment of the target country and launching new accounts gradually, at different times and in different years, so that they do not look suspicious on platforms such as Facebook, the former Twitter (now X), and Instagram. These fake accounts pose as residents of the target country, and each can reach an audience of several hundred to a thousand people, or even more. All of them are controlled from a single base, such as the Pentagon. For long stretches they behave normally, posting daily stories or the occasional post, all to hide the fact that the account is fake and to give the impression that an ordinary person is behind it.

Then, to cause chaos among the population, the campaign targets the country's internal problems, the points where people are hurting most. Imagine, for example, a winter in which the country is on the verge of an energy crisis, the heating system is in disarray, and the government cannot supply either industry or households; that is when fake accounts launched long ago can begin their real work.

Against the background of such posts, an ordinary scene can become the first spark of a large fire: car owners who have formed kilometers-long queues at filling stations because of the fuel shortage on the domestic market, or angry fathers gathering outside the local power utility office because their babies are shivering from the cold. External forces may be interested in mobilizing the population in exactly this way and provoking a political crisis. The aim is to discredit the government, which will inevitably try to suppress the protests, and to expose it to sanctions from the international community, above all the West. Ultimately, those behind the operation offer themselves as the only way out for a country now isolated by the democratic world. That is when the unfair contracts, unreasonable obligations, pressure, and other traps begin.
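
The scenario above turns on one tell-tale pattern: many seemingly unrelated accounts pushing near-identical messages within minutes of one another. As a minimal sketch of how that synchrony can betray itself, the fragment below (hypothetical data, field names, and thresholds; not any platform's actual detection system) groups posts by normalized text and flags clusters that appear across several accounts within a short time window.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical sample of (account, timestamp, text); in reality this
# would come from a platform's own logs.
posts = [
    ("user_101", "2025-01-12 18:02", "The heating crisis is the government's fault!"),
    ("user_287", "2025-01-12 18:04", "The heating crisis is the government's fault"),
    ("user_552", "2025-01-12 18:05", "the heating crisis is the governments fault!!"),
    ("user_903", "2025-01-14 09:30", "Cold again today, stay warm everyone."),
]

def normalize(text: str) -> str:
    """Lowercase and drop punctuation so trivial edits don't hide duplication."""
    return "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace()).strip()

WINDOW = timedelta(minutes=10)  # how tightly clustered in time posts must be
MIN_ACCOUNTS = 3                # how many distinct accounts make a cluster suspicious

groups = defaultdict(list)
for account, ts, text in posts:
    groups[normalize(text)].append((datetime.strptime(ts, "%Y-%m-%d %H:%M"), account))

for text, entries in groups.items():
    entries.sort()
    accounts = {account for _, account in entries}
    span = entries[-1][0] - entries[0][0]
    if len(accounts) >= MIN_ACCOUNTS and span <= WINDOW:
        print(f"Possible coordination: {len(accounts)} accounts within {span}: {text!r}")
```

Real platforms weigh many more signals, such as account age, follower graphs, and device fingerprints, but the underlying logic of looking for improbable coordination is the same; it is also why long dormancy and "ordinary" daily posting are part of the attacker's tradecraft.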

In fact, such tensions do not choose a time or a place. It has therefore become everyone's duty to be prepared for attacks in the information field, not to be deceived by them, and not to dance to the tune of external forces. The heaviest responsibility, of course, falls on the structures and officials in charge of state information policy. The time has long come for press secretaries of state bodies and other authorized agencies to stay constantly vigilant and to keep feeding independent mass media the information they are empowered to release, because official information must circulate among social network users faster than disinformation does. Only then can extremely destructive information attacks be withstood.

Artificial intelligence intervention further complicates the situation

For powerful external forces, plain fake accounts are already yesterday's tool, and they have most likely begun switching to more advanced technology. Recent reports indicate that the players who stand apart in military and technological terms, above all the United States and China, are turning to artificial intelligence in the "psychological warfare" of the information field.

In particular, China has invested heavily in artificial intelligence over the past few years, reportedly aiming to become the world leader in the field by 2030 and to apply it actively for military purposes. Tsai Ming-yen, Director-General of Taiwan’s National Security Bureau, has already warned that China is increasingly likely to build a huge database of AI-generated “deep fakes” and use it to destabilize the island.

A “deep fake” is synthetic media in which a person’s face or voice is swapped for someone else’s. Once used mainly for purposes such as grafting celebrities’ faces onto obscene images, “deep fakes” are now expected to be widely used to spread fake news and misinformation on social media. There are, for example, allegations that China has been collecting voice samples in Indian border areas: local residents were reportedly hired to read out set words, phrases, and conversations, and the recordings were allegedly transferred to servers in China. The exact purpose of this collection is unknown, but observers point to possible “deep fake” disinformation campaigns, and many argue that the Indian government should stay vigilant and be prepared for any such threat.

The first clear impact of “deep fakes” was seen in the Russia-Ukraine war. In March 2022, a fake video of Ukrainian President Volodymyr Zelensky telling his troops to surrender went viral; because of its crude quality, it was not difficult to identify as a “deep fake.” A similar fake video of Russian President Vladimir Putin urging his troops to lay down their arms and go home was widely shared on X. In the absence of rapid and reliable news, such “deep fakes” could easily have caused unrest among citizens on both sides and deepened confusion and uncertainty about the fighting.

The “deep fake” videos of Putin and Zelensky are themselves a call for prudence. If the parties reached for such a subtle weapon while one country was invading another, the same move could be tried again at any time, anywhere in the world. With conflicts in the Middle East, Eastern Europe, the Korean Peninsula, and the Taiwan Strait, and the still-unresolved “Taliban” issue in Central Asia, these regions could become a testing ground for “deep fakes.” In an era when artificial intelligence makes “deep fakes” ever more convincing, the right response to a fake video of any figure in world politics that suddenly spreads like a virus across the networks is to wait for an explanation from official sources rather than react immediately.
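
One low-tech habit supports that prudence: before sharing a clip, check whether it actually matches anything the official source published. The sketch below is only an illustration and rests on an assumption that is far from universal practice, namely that the original publisher posts a checksum alongside its official video; the file name and reference digest here are invented.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute a file's SHA-256 digest, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical reference value taken from the publisher's official channel;
# a real check would use the full 64-character digest they publish.
OFFICIAL_DIGEST = "0000...placeholder...0000"

local_digest = sha256_of("downloaded_statement.mp4")  # hypothetical file name
if local_digest == OFFICIAL_DIGEST:
    print("Byte-identical to the official release.")
else:
    print("Does not match the published checksum; wait for official confirmation.")
```

A matching digest only proves the file is byte-identical to the official release; a mismatch proves nothing by itself, but it is a reason to wait for confirmation rather than amplify the clip.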

Meanwhile, reports have begun to appear that the Pentagon, with its extensive experience in such covert operations, is taking practical steps to upgrade its capabilities. A document obtained by *The Intercept* reveals that the secretive US Special Operations Command is looking for companies to help it create “deep fake” Internet users: fake online identities indistinguishable from real people. According to the document, the US defense establishment hopes to build up “deep fake” users whose artificiality can be detected by neither humans nor computers. The 76-page plan from the US Joint Special Operations Command outlines the advanced technologies needed for the country’s most elite covert military operations and calls for fake online “users” that look like unique, living individuals yet do not exist in the real world. In stark contrast to the fake accounts of the COVID-era “psychological warfare” against Chinese vaccines, this would allow the US military to wage information warfare not with dry written posts but with fabricated people who have faces and voices.

According to *The Intercept*, the command hopes these fake people will even be able to produce “selfie videos.” The potential partner that could deliver such a capability is not hard to guess: General Dynamics IT, the contractor that worked on the anti-vaccine operation. According to the Reuters investigation, General Dynamics IT received a new contract worth $493 million in February of this year and may well continue to provide covert services to the US military.

