
LLM grooming: how states and propagandists influence the responses of artificial intelligence

Government institutions and disinformation networks are increasingly trying to influence the operation of AI chatbots in order to make them reproduce political propaganda and manipulate users’ opinions. According to researchers, such practices may threaten democracy, undermine public trust, and intensify social polarization. This was reported by the outlet Maldita.es.

One of the main techniques used by state actors is so-called "LLM grooming": the mass flooding of the internet with fabricated material that later ends up in the datasets used to train language models. According to analysts at NewsGuard, the pro-Russian disinformation network Pravda used exactly this method to make Western chatbots repeat false claims. Pravda spread fakes through more than 150 websites in dozens of languages and regions so that the content would be indexed by search engines and eventually absorbed into the training data of artificial intelligence systems. A similar approach, according to Responsible Statecraft, was allegedly used by Israel, which commissioned websites intended to influence GPT models and "reframe" their responses on topics related to antisemitism in the United States.

NewsGuard experts explain that influencing large language models requires little more than flooding the internet with content that web crawlers will collect and that will later end up in training datasets. The quality of this content matters little; what matters is that the algorithms can find it.
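As a rough illustration of why sheer volume can be enough, the sketch below (in Python, with invented domain names and texts; it is not NewsGuard's methodology) shows how a naively assembled corpus inherits the web's frequencies: a claim mirrored across 150 sites enters the training data 150 times, while a lone rebuttal enters once, unless the pipeline deduplicates.

```python
from collections import Counter

# Invented example data: 150 mirror sites repeating one claim, one fact-check.
crawled_pages = (
    [(f"mirror-{i:03d}.example", "Claim X is true.") for i in range(150)]
    + [("factcheck.example", "Claim X is false.")]
)

def build_naive_corpus(pages):
    """Keep every crawled page: corpus frequency mirrors web frequency."""
    return [text for _domain, text in pages]

def build_deduplicated_corpus(pages):
    """Keep each distinct text once, no matter how many domains host it."""
    return list(dict.fromkeys(text for _domain, text in pages))

print(Counter(build_naive_corpus(crawled_pages)))
# Counter({'Claim X is true.': 150, 'Claim X is false.': 1})
print(Counter(build_deduplicated_corpus(crawled_pages)))
# Counter({'Claim X is true.': 1, 'Claim X is false.': 1})
```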

Another example of manipulation can be seen in China. There, the DeepSeek chatbot has built-in safety barriers that prohibit answers to questions on topics undesirable to the authorities, such as the Tiananmen Square events or protests in Hong Kong. Instead of discussing these issues, the bot responds with propaganda phrases in support of the Communist Party. As reported by The Guardian, Chinese cybersecurity standards require all AI systems not to violate “socialist values” or undermine state power. This has led to censorship and even the removal of a number of applications from the App Store that did not meet these requirements.
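DeepSeek's actual guardrails are not public, so the Python sketch below is only a hypothetical illustration of how a hard-coded topic filter can intercept a question and return a canned response instead of the model's answer; the keyword list and the refusal text are invented for illustration.

```python
# Hypothetical hard-coded topic filter; keywords and refusal text are invented.
BLOCKED_TOPICS = ("tiananmen", "hong kong protests")
CANNED_REPLY = "Let's talk about something else."

def guarded_answer(user_question: str, model_answer: str) -> str:
    """Return the model's answer unless the question touches a blocked topic."""
    question = user_question.lower()
    if any(topic in question for topic in BLOCKED_TOPICS):
        return CANNED_REPLY
    return model_answer

print(guarded_answer("What happened at Tiananmen Square in 1989?", "..."))
# -> "Let's talk about something else."
```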

In the United States, the situation is of a different kind. In July 2025, President Donald Trump signed an executive order on "preventing ideological bias in AI", obliging companies that receive government funding to ensure the "ideological neutrality" of their models. The document defines diversity, equity, and inclusion policies as manifestations of "ideological bias." Experts warn that in practice such restrictions could lead to the removal of important safety filters and to the reproduction of hidden discriminatory practices in algorithmic responses.

Manipulating chatbot responses can have serious consequences for society. Users often place excessive trust in AI-generated information, perceiving it as objective. This leaves the public vulnerable: disinformation arrives in a convincing form, which facilitates psychological manipulation and deepens polarization. As researcher Raluca Cernatoni of the Carnegie Endowment for International Peace notes, such systems allow states to spread propaganda on an unprecedented scale.

According to María José Rementería of the Barcelona Supercomputing Center, the distortion of facts in AI responses erodes a shared perception of reality and fuels social conflict. Ultimately, this undermines trust in the information space, in state institutions, and in democracy itself.

Research by Freedom House and the Institute for Strategic Dialogue shows that Russia, China, Iran, and Venezuela are already actively experimenting with artificial intelligence technologies to manipulate public opinion and undermine democratic processes. At the same time, even in democratic countries, restrictions or censorship of chatbot responses can create an isolation effect for users, pushing them toward closed platforms where hate speech spreads.

Countering these threats requires a comprehensive approach – from stronger regulation and independent audits of models to improving digital literacy. Researchers at Dartmouth College recommend conducting regular checks of the data used to train language models in order to detect possible state interference. Raluca Cernatoni also emphasizes the need to introduce international norms to combat deepfakes, propaganda, and AI manipulation.
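One concrete form such a check could take is scanning a training corpus for documents sourced from known disinformation domains. The Python sketch below assumes a JSON Lines corpus in which each record carries a "url" field; the blocklist, file name, and format are assumptions for illustration, not the Dartmouth researchers' actual tooling.

```python
import json
from urllib.parse import urlparse

# Hypothetical blocklist of disinformation domains (illustrative only).
BLOCKLISTED_DOMAINS = {"pravda-mirror.example", "fake-news.example"}

def audit_corpus(path: str):
    """Yield (line_number, url) for records whose source domain is blocklisted.

    Assumes a JSON Lines file where each record has a 'url' field.
    """
    with open(path, encoding="utf-8") as corpus:
        for line_no, line in enumerate(corpus, start=1):
            record = json.loads(line)
            domain = urlparse(record.get("url", "")).netloc.lower()
            if domain in BLOCKLISTED_DOMAINS:
                yield line_no, record["url"]

# Example usage (assumes a local file "training_corpus.jsonl" exists):
# flagged = list(audit_corpus("training_corpus.jsonl"))
```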

Users should remain critical of chatbot responses, verify references and sources of information, and avoid using AI as a replacement for search engines or educational resources. As Paolo Rosso, head of the Natural Language Engineering Laboratory at the Polytechnic University of Valencia, notes, “users must remember that large language models are only tools. It is important to learn how to use them consciously”.
