AI Chatbots Send Traffic to Russian Propaganda Sites
Major artificial intelligence platforms have become an unexpected distribution channel for sanctioned Russian state media, with new data showing that chatbots directed approximately 300,000 visits to Kremlin-linked propaganda websites during the final quarter of 2025 alone. The findings, drawn from SimilarWeb analytics, reveal how AI systems designed to synthesize information are inadvertently steering users toward content restricted under European Union sanctions.
The analysis examined referral traffic from October through December 2025, tracking visits generated by leading conversational AI services including ChatGPT, Perplexity, Claude, and Mistral. Researchers focused on eight outlets known for promoting Russian government narratives, including RT, Sputnik, RIA Novosti, and Lenta.ru, all of which face bans or operational restrictions across EU member states due to their role in spreading disinformation and supporting military aggression against Ukraine.
While the absolute numbers remain modest compared to the total readership of these platforms, the pattern signals a systemic vulnerability in how AI systems source and present information. For niche propaganda outlets, the dependency on AI referrals is already substantial. ChatGPT and Claude together accounted for roughly six percent of all referral traffic to Sputnik Globe during the quarter, while Claude-generated visits represented nearly ten percent of referrals to News Pravda, a multilingual disinformation operation known for targeting AI systems specifically.
The Scale of AI-Driven Referrals
Among individual platforms, ChatGPT emerged as the most significant source of AI-generated traffic. The OpenAI service alone sent more than 88,000 users to RT during the three-month period, while Perplexity contributed another 10,100 visits to the same outlet. RIA Novosti, one of Russia’s largest state news agencies, received over 70,000 visits from AI tools, and Lenta.ru logged more than 60,000.
RT, the Kremlin’s flagship international broadcaster, recorded approximately 123 million page views during the fourth quarter, meaning AI-driven traffic constituted well under one percent of its total audience. Yet the geographic distribution of this traffic raises compliance questions. Despite EU sanctions designed to restrict access to these outlets, a significant share of visits originated in European Union countries and the United States: American users make up ten percent of RT’s readership, Germans 2.27 percent, Spaniards 1.48 percent, and British visitors 1.12 percent.
Perplexity, the AI-powered answer engine, has emerged as a growing traffic source for multiple Russian outlets during this period, suggesting that AI-driven referrals are gaining momentum as users increasingly turn to conversational interfaces for news and research.
The Research Infrastructure
The traffic analysis follows earlier investigations documenting how Russian state media content infiltrates AI responses. A study published in October 2025 by the Institute for Strategic Dialogue found that Russian state-attributed content appeared in 18 percent of chatbot responses when researchers tested 300 queries across five languages. The London-based think tank tested ChatGPT, Gemini, DeepSeek, and Grok, discovering that nearly a quarter of queries deliberately designed to elicit pro-Russian views included citations to Kremlin-aligned sources, compared to just over ten percent for neutral prompts.
Separately, NewsGuard, an organization specializing in information reliability, reported that leading AI chatbots repeated false narratives pushed by a Moscow-based influence network called Pravda approximately 33 percent of the time. The Pravda operation, which researchers say is designed specifically to manipulate AI systems, generates content at industrial scale, publishing an average of 18,000 articles per false claim across 150 websites in 46 languages.
The network exploits what researchers call data voids, areas where legitimate information is scarce and false narratives can fill the gap. Large language models assess credibility through statistical signals including repetition, apparent consensus, and cross-referencing across sources. When thousands of articles repeat identical false claims, algorithms interpret volume as validation, treating manufactured agreement as corroboration.
Expert Warnings on Systemic Vulnerability
Daniel Schiff, co-director of Purdue University’s Governance and Responsible AI Lab, characterized the findings as deeply concerning for information integrity. In response to the NewsGuard research, Schiff called the pattern “a really major warning for us” and noted that AI systems are vulnerable to manipulation through both simple and sophisticated techniques.
Schiff explained that chatbots scrape information from across the internet, including sources that lack adequate vetting, creating pathways for adversarial actors to launder disinformation through seemingly neutral technology. He warned that Americans in particular can be too trusting of AI-generated responses, even when systems produce persuasive but inaccurate content.
The Purdue researcher emphasized that misinformation campaigns operate gradually, influencing public understanding over time by exploiting trust in automated systems. “Pick any domestic policy issue, international issue, war, and you can influence segments of the population to believe things that are false, to believe things that they wouldn’t believe if they were to take the time to think through the issues themselves,” Schiff stated.
The Pravda network exemplifies this strategy. By overwhelming AI training data and search indices with manufactured content, Russian operators can effectively groom chatbots to surface and repeat Kremlin narratives without the platforms necessarily recognizing the sources as state-sponsored propaganda.
Regulatory and Corporate Responses
The European Commission moved to address the gap between sanctions policy and AI system capabilities on January 22, 2026, updating its guidance on restricted services to explicitly prohibit the provision of artificial intelligence services to Russia. The new guidance clarifies that Article 5n of Council Regulation (EU) No 833/2014 now covers both access to hosted AI models and platforms enabling the training, fine-tuning, or inference of AI models, including large language models, image generation systems, and specialized applications.
However, enforcement presents distinct challenges. Unlike traditional media distribution, where sanctions can target broadcast licenses or block website domains, AI chatbots function as intermediaries rather than direct distributors. Users in sanctioned jurisdictions may access restricted content indirectly through AI responses and source citations, circumventing geographic blocking mechanisms.
OpenAI spokesperson Kate Waters stated that the company is actively working to prevent ChatGPT from spreading content associated with state-backed entities, though specific technical measures were not detailed. A European Commission spokesperson emphasized that platform providers bear responsibility for blocking access to sanctioned outlets, placing the compliance burden on AI companies themselves.
The Broader Information Security Implications
The documented traffic data illuminates a wider strategic vulnerability as AI systems increasingly function as primary information gateways. Russia reportedly spends over one billion dollars annually on information warfare, a fraction of its military budget that nonetheless buys disproportionate influence through digital channels. For the cost of a handful of fighter aircraft, adversarial states can amplify distrust, inflame social divisions, and weaken democratic institutions from within.
The latest research suggests these efforts are adapting to technological shifts. As users migrate from traditional search engines to conversational AI for news consumption and research, influence operations appear to be following that migration, optimizing content to capture AI referrals even as direct website access faces sanctions pressure.
The traffic patterns from late 2025 indicate this adaptation is already producing measurable results. With 300,000 documented visits generated by AI systems in a single quarter, and certain propaganda outlets deriving significant portions of their referral traffic from chatbots, the intersection of artificial intelligence and state-sponsored disinformation has become a concrete operational challenge rather than a theoretical concern.
For policymakers and technology platforms, the findings underscore the urgency of building information security safeguards directly into AI architectures. Researchers suggest that transparency requirements, real-time source vetting, and provenance tracking comparable to financial audit standards may be necessary as AI systems assume greater roles in mediating public understanding of international events.
The gap between sanctions on paper and the reality of AI-mediated information flows represents a new frontier in the regulation of digital platforms. As the European Commission’s updated guidance takes effect, observers will be watching whether AI companies can operationalize restrictions that were designed for an era of websites and broadcasters, not conversational agents that synthesize and cite sources dynamically.