Spilnota Detector Media
Detector Media collects and documents a real-time chronicle of Kremlin propaganda about the Russian invasion. Ukraine has suffered from Kremlin propaganda for decades. Here we document all the narratives, messages, and tactics that Russia has been using since February 17, 2022. Reminder: on February 17, 2022, militants escalated shelling and fighting on the territory of Ukraine. Russian propaganda blames Ukraine for these actions.

On February 21, the 1458th day of the full-scale war, our editorial office recorded:

  • Fake: 2732
  • Manipulation: 816
  • Message: 775
  • Disclosure: 559
Russian fake, go to ***!

How Pravda promotes Russian propaganda in Spain

The Russian disinformation network Pravda is steadily expanding its presence in the global information space, particularly through its Spanish-language segment. This poses a new challenge not only for Europe but also for technological systems that are widely assumed to be neutral. This is emphasized in the latest study by the ATHENA project, which focuses on how pro-Kremlin narratives spread in Spain and influence artificial intelligence (AI) systems.

The Pravda network, first identified in April 2022, has become, according to researchers from the ATHENA project, not merely a propaganda tool but a full-fledged disinformation machine. As noted in our article from March 12, 2025, the network comprised more than 150 domains across roughly 49 countries. According to NewsGuard, most of the network’s websites do not produce original content; instead, they aggregate, republish, or translate pro-Kremlin messages from Russian state media, Telegram channels, or official sources.

In the ATHENA report, researchers note that the Spanish-language Pravda integrates into the Spanish-speaking information space the same messages that Russia systematically promotes globally: claims of the “fascization of Ukraine”, the “decline of the EU”, and justifications for the war. All of these texts follow a common structural pattern – they present Russian propaganda materials as “expert opinions” or an “alternative point of view”, while portraying the site itself as a local outlet supposedly “fighting censorship”. This disguise, combined with its targeting of Spanish audiences, makes the network a dangerous instrument of influence.

The ATHENA project also found that, in addition to a general Spanish-language section, versions in Catalan, Basque, and Galician have appeared. All of them systematically disseminate content adapted to local audiences, largely based on Russian state media as well as anonymous Telegram channels. The simultaneous emergence of several language branches was not accidental: the first articles in the Catalan, Basque, and Galician sections were published at the same time and were virtually identical translations. This points to a centralized content factory that automates and scales disinformation for different audiences. The system operates as a multi-level chain: at its core are thousands of Telegram channels that generate the initial pool of materials, followed by aggregators and localized “Pravda” sites that instantly turn these signals into publications. Monitoring by Maldita.es, cited by ATHENA, shows that a large share of the network’s materials consists of reposted Telegram messages.

This disinformation operation has several strategic objectives. First, it aims to expand influence within Spain itself and across the entire Spanish-speaking world, since content published in Spain can easily spread to Latin America. Second, the media network seeks to build up a “propaganda footprint” online. NewsGuard found that one third of the responses produced by leading chatbots echoed narratives promoted by the Pravda network, and in many cases the models directly cited materials from this network, creating a risk that fake sources may be legitimized in AI-generated answers.

The practical consequences of this approach are also visible in the timing of the network’s activity spikes: Pravda synchronously increased its publishing volume during moments of crisis, when public attention was at its peak – for example, during the large-scale power outage that affected Spain and Portugal on April 28, 2025. On those days, the group published hundreds of posts, often hinting at cyberattacks or making outright false claims about the scale of the disruption, thereby amplifying panic or undermining confidence in the ability of state institutions to respond to the crisis.

ATHENA and Maldita.es traced how messages from Telegram appeared on the network’s websites within minutes, making these platforms an effective tool for the rapid spread of narratives during periods of uncertainty. A significant share of Pravda’s sources are outlets linked to Russian state propaganda – RT, Sputnik, RIA Novosti, and others. Around 40% of the network’s Spanish-language publications directly cited such sources; the rest mostly originated from Telegram channels that themselves frequently republish or rework material from these outlets.

For Ukraine and for European media, such activity can undermine trust in institutions, sow doubts about the effectiveness of policies and decisions, and function as a technological attack on how people access information through modern tools.

Foreign Affairs: Artificial intelligence is intensifying the disinformation war, and the United States is unprepared to defend itself

An article published in Foreign Affairs on November 19, 2025, reports that foreign governments are using artificial intelligence to mass-produce propaganda, while U.S. institutions are not prepared to respond effectively to this threat.

Artificial intelligence is becoming a powerful tool of information warfare. According to the authors in Foreign Affairs, foreign governments – primarily China and Russia – are already using generative models to create personalized disinformation campaigns. At the same time, the United States is showing a weak systemic response: agencies responsible for countering foreign influence are being weakened, and mechanisms for oversight and regulation of AI models are still not in place.

As one of the most striking illustrations of this threat, the authors cite an example of a deepfake on the Signal platform, where a message allegedly sent by the U.S. Secretary of State was accompanied by an audio recording that sounded entirely convincing – but was artificially generated. Such technology can undermine diplomatic relations, destabilize political messaging, and create risks of leaking classified information. “Had the lie not been exposed, this trick could have sown discord, compromised U.S. diplomacy, or extracted confidential intelligence from Washington’s foreign partners,” note the article’s authors, James P. Rubin and Darian Vuicza.

Another dangerous trend is the use of AI to create personalized influence campaigns. According to the authors, the Chinese company GoLaxy used AI tools to build “digital profiles” of hundreds of U.S. lawmakers and public figures. With this data, it was then able to generate tailored content that closely matched the psychological preferences and political inclinations of each individual recipient – and to do so at the scale of millions of messages.

Rubin and Vuicza emphasize that while disinformation itself is nothing new, AI has made it far more scalable and effective. They argue that aggressor states no longer need to invest hundreds of millions of dollars in propaganda: generative AI allows influence campaigns to be launched with far fewer resources.

Most troubling, however, is the response of the United States. As the authors note, instead of strengthening institutions capable of countering foreign influence, the U.S. government is weakening its defensive mechanisms. In particular, they criticize decisions by the administration to dismantle or reduce the authority of bodies responsible for combating disinformation. A genuine counterstrategy, in their view, must include both technological and institutional solutions: new structures to monitor the impact of AI-generated messaging, audits of the artificial intelligence models used to produce information content, and the restoration of institutions tasked with countering disinformation.

Fake: The bill provides for the arrival of 10 million immigrants to Ukraine by 2030

A number of news outlets and social media accounts are spreading false information claiming that Ukraine allegedly plans to bring in 10 million immigrants by 2030, including “Arabs and Asians”, in order to “massively replace Ukrainians”. This disinformation narrative, which refers to a bill submitted to the Verkhovna Rada titled “On Amendments to Certain Laws of Ukraine Regarding the Employment of Foreigners and Stateless Persons”, was debunked by StopFake fact-checkers.

In reality, claims about bringing in millions of migrants to “replace” Ukrainians are yet another distorted propaganda narrative.

Bill No. 14211, which is currently under consideration in the committees of the Verkhovna Rada, contains no plans or quotas to attract 10 million migrants and does not aim to replace the population.

Its main purpose is to simplify legal procedures for foreign nationals and stateless persons who have already expressed a desire to work in Ukraine, in particular by streamlining the issuance of temporary residence permits and work permits. These changes are intended to align Ukraine’s migration legislation with European Union standards, an important step on the path to European integration.

The figure of 10 million cited by propagandists is not part of the bill and is not a state plan.

This number was previously mentioned by individual experts and politicians, including former Minister of Economy of Ukraine Tymofii Mylovanov, solely as a personal, approximate estimate. It reflects how many labor migrants Ukraine might need after the war to stabilize the economy and offset the sharp decline in the working-age population caused by wartime demographic losses and high levels of emigration.

Thus, the figure of “10 million migrants” represents individual estimates highlighting current demographic challenges, not an approved government plan or a legislative commitment. The bill merely legalizes and simplifies existing procedures for those willing to work, without establishing any quotas.

Clickbait: Former Prime Minister of Ukraine Volodymyr Groysman Has Allegedly Died

False information about the alleged death of former Prime Minister of Ukraine Volodymyr Groysman is being actively spread on social media and some news websites. The posts carry a sensational, emotional headline and often a black-and-white photo of the politician, creating the impression of tragic news. The clickbait fake was debunked by VoxCheck experts.

However, this is a fake. In reality, it was Volodymyr Groysman’s father, Borys Isaakovych Groysman, who died. A deputy of the Vinnytsia City Council, he passed away on January 6, 2025, at the age of 78. Volodymyr Groysman announced this on his official Instagram page, which later became a pretext for creating manipulative content. Although Borys Groysman died back in January 2025, purveyors of fakes have now revived the story and used it as a pretext for clickbait.

How the clickbait works

The message about the “death of the former prime minister” is a classic example of clickbait aimed at artificially increasing traffic and reach.

  • The authors of such posts (including those linking to the Timezones website) deliberately use a shocking headline (“Former Prime Minister of Ukraine Volodymyr Groysman has died”) so that readers, seeing the name of a well-known politician, immediately click the link to learn the details.
  • Only after carefully reading the article does it become clear that it is about the death of his father, not the former prime minister himself.
  • Spreading such content is a deliberate manipulation of readers’ attention to boost a site’s visitor metrics.

Check information sources and read the full text carefully so you don’t become a victim of clickbait.

Andrii Pylypenko, Lesia Bidochko, Oleksandr Siedin, Kostiantyn Zadyraka, and Oleksiy Pivtorak are collaborating on this chronicle. Ksenia Ilyuk is the author of the project.