Bots are rapidly taking over social media. They are more sophisticated than ever, and their operators have figured out something most voters have not: with enough engagement, social media algorithms can make any issue appear publicly legitimate.
Bots on platforms like X have been used to manufacture grassroots political legitimacy out of thin air. America has seen this recently with far-right commentator Nick Fuentes, who just one year ago was a fringe figure. Over the past year, however, his reach has been dramatically boosted: he has amassed more engagement than accounts with ten or even 100 times his follower count.
It shouldn’t surprise you to know that this is not organic. It’s bots – and they can manufacture perceived legitimacy.
In December, the Network Contagion Research Institute (NCRI) released a postmortem of algorithmic manipulation boosting Fuentes on X. Their findings should be a wake-up call globally on how social media can be leveraged to create artificial legitimacy and shift perceptions.
The core finding: within the first 30 minutes after Fuentes posts, 61% of his retweets came from accounts that repeatedly amplified the same posts within that same 30-minute window. That is not how normal people engage online; it is most likely a coordinated bot network. These accounts were fully anonymous: faceless, with no names, no location data, no identifying markers.
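As a loose illustration (not NCRI's actual methodology), the repeat-amplifier signal described above can be sketched in a few lines of code. The input format — a list of (post, account, seconds-after-post) retweet records — and the repeat threshold are assumptions for the sake of the example:

```python
from collections import Counter

def repeat_amplifier_share(retweets, window_secs=30 * 60, min_repeats=3):
    """Estimate what share of early retweets come from accounts that
    repeatedly show up in the early window across many posts.

    `retweets` is a hypothetical list of (post_id, account_id,
    seconds_after_post) tuples; real platform data would differ.
    """
    # Keep only retweets that arrived inside the early window.
    early = [(post, acct) for post, acct, t in retweets if t <= window_secs]
    if not early:
        return 0.0
    # Count how many distinct posts each account boosted early.
    posts_per_account = Counter()
    for post, acct in set(early):
        posts_per_account[acct] += 1
    repeaters = {a for a, n in posts_per_account.items() if n >= min_repeats}
    # Fraction of early retweets attributable to repeat amplifiers.
    return sum(1 for _, acct in early if acct in repeaters) / len(early)
```

A figure like the 61% in the NCRI report would correspond to this kind of ratio: if it approaches zero, early engagement is diffuse and organic-looking; if it is high, a small, recurring cluster of accounts is doing most of the early boosting.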
This artificial amplification created the appearance of relevance. The mainstream media was covering him. Smaller outlets amplified coverage. By the time this report was released, he was already considered part of mainstream discourse.
And this isn’t just a problem of manipulatively amplified extremism in the United States. It’s a global phenomenon, often driven by state actors seeking not to support any particular worldview, extreme or moderate, but to advance their geopolitical goals.
Canada has seen this firsthand. In fact, we saw this play out during the most recent federal election.
A detailed investigation by the Canadian Association of Academic Developers uncovered hundreds of bot accounts activated during the election, all bearing unmistakable signatures: high-volume posting patterns, rapid response times, and no verifiable identity.
And it wasn’t just an issue of partisan propaganda. X saw bot campaigns attacking both Mark Carney and Pierre Poilievre over the course of the 2025 election.
In March of that year, Canada’s intelligence agencies detected a Chinese government-linked WeChat news account, Youli-Youmian, conducting a coordinated campaign to promote Mark Carney. The CCP-linked account posted content praising Carney as a “rock-star economist,” and a small group of 30 WeChat accounts amplified those posts to millions of views.
The goal was to weaponize social media algorithms and control the information environment. The same Chinese network had previously targeted Conservative MP Michael Chong in 2023.
Foreign interference from bots is nothing new in Canada, but these campaigns show just how fragmented our digital landscape has become. To understand why they work, you need to understand how X’s algorithm functions alongside that fragmentation.
Algorithms track every interaction – clicks, likes, shares, watch time, search history – to tailor content to your existing preferences. Over months of exposure, users end up seeing only information that aligns with their existing beliefs, while opposing viewpoints are filtered out.
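That feedback loop can be sketched as a toy model – this is not any platform’s real ranking code, and the topic weights and learning rate are invented for illustration:

```python
def update_preferences(prefs, interactions, learning_rate=0.1):
    """Nudge a user's topic weights toward whatever they engaged with,
    then renormalize so the weights stay comparable."""
    for topic in interactions:
        prefs[topic] = prefs.get(topic, 0.0) + learning_rate
    total = sum(prefs.values())
    return {topic: w / total for topic, w in prefs.items()}

def rank_feed(posts, prefs):
    """Order candidate posts by the user's learned topic weight,
    so already-preferred topics rise to the top of the feed."""
    return sorted(posts, key=lambda p: prefs.get(p["topic"], 0.0), reverse=True)
```

Run the loop a few dozen times with a user who only clicks political content, and the political weight crowds out everything else – the closed environment the article describes, emerging from nothing more sinister than optimizing for engagement.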
Algorithms have sorted Canadians into ideological silos, creating closed digital environments that reinforce beliefs. Canadians who are algorithmically fed left-wing content see anti-Poilievre messaging. They see it everywhere. They see it amplified.
It creates the perception of legitimacy and importance within their closed environment – manufactured legitimacy.
The TENET Media operation reveals exactly how this infiltration works. In September 2024, U.S. federal prosecutors unsealed an indictment alleging that TENET Media, a Tennessee-based company founded by Canadian commentator Lauren Chen and her husband Liam Donovan, had been covertly funded with nearly $10 million from Russian state broadcaster RT.
What made this operation so effective wasn’t sophisticated hacking or overt propaganda; it was mimicry. TENET Media didn’t try to change the conversation in right-wing circles; it infiltrated the conversation and became part of it.
This wasn’t a bot farm; it was a professional operation. But it exploited the same vulnerability in our digital landscape.
By creating content that mimics the style, culture, and politics of these ideological silos, foreign actors can easily infiltrate them and spread their own messages under the pretense that they’re someone who’s “in the know” within that bubble.
The content from TENET Media focused on cultural flashpoints of the American right-wing ideological silo: transgender issues, race relations, immigration, and claims of anti-white racism.
This is how figures like Nick Fuentes have forced their way into mainstream discourse. By manufacturing their own legitimacy, they come to be perceived as mainstream.
Manufactured legitimacy is the new machinery of digital politics, and online ideological silos are what make it possible. Identify an ideological silo, master its language, and then you can inject whatever you want under the cover of being perceived as legitimate.
Bots create the appearance of grassroots support, the algorithms ensure the message reaches the right audiences, and the resulting manufactured legitimacy does the rest.
Canada needs to pay closer attention to the growing problem of foreign interference and bots shaping the conversation – before our elections are decided not by voters, but by whoever has the most convincing bots and the deepest understanding of our ideological fault lines.
Jeff Ballingall is the founder of Mobilize Media Group.
Ryan Comeau is a contributor to TrendingPolitics.ca.


