Elon Musk is widely hailed as a tech innovator and entrepreneur, but behind the mythology lies a darker reality. Under Musk’s leadership, X (formerly Twitter) has become a platform that algorithmically curates and amplifies incitement, dehumanization, and disinformation - particularly regarding the ongoing genocide in Gaza. As the owner of X and chief executive of xAI (developer of the Grok chatbot), Musk has blurred the line between free speech and algorithmic propaganda, wielding unprecedented influence over global discourse. This essay offers a comprehensive indictment - legal, moral, and historical - of Elon Musk’s complicity in enabling crimes against humanity.
Elon Musk grew up in apartheid-era South Africa, a system that normalized racial hierarchy and white supremacy. His father reportedly held a stake in an emerald mine, and Musk has spoken positively about the luxurious lifestyle the family enjoyed. This early environment - one of structural oppression, racial exploitation, and domestic servitude - likely shaped Musk’s worldview and instilled an early sense of impunity and entitlement.
Musk’s move from South Africa to Canada, and shortly thereafter to the United States, is often celebrated as entrepreneurial ambition. Less frequently discussed is that Musk entered the U.S. on a student visa that did not authorize him to work. Nonetheless, he reportedly organized paid club events and took freelance programming jobs - clear violations of his visa terms. Yet Musk faced no consequences, unlike the countless undocumented workers and Palestinian activists who today face aggressive U.S. immigration enforcement. His experience illustrates the impunity afforded by race and class privilege.
Musk’s brief tenure at PayPal preceded that platform’s long history of freezing or confiscating the funds of politically controversial organizations, especially those critical of Israel or the U.S. government. Although Musk was ousted from PayPal early on, the ethos of corporate overreach and financial censorship persisted - raising questions about his role in normalizing such practices.
When Musk began criticizing Twitter’s content moderation in the COVID-19 era, he posed as a free speech absolutist. He lamented the shift from chronological timelines to algorithmic curation and encouraged users to switch back to chronological order. This was during a period when Twitter, under Jack Dorsey, had begun implementing rudimentary shadowbanning techniques - largely in response to government pressure. These techniques, while flawed, were at least detectable through open APIs and third-party tools.
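Those third-party shadowban checkers generally worked by comparing two views of the same account: what the account owner could see versus what an unauthenticated search returned. The sketch below is a minimal, hypothetical reconstruction of that comparison logic - the function name and data are illustrative assumptions, not any real Twitter/X API.

```python
# Minimal sketch of the comparison logic older shadowban checkers relied on.
# All names and data here are illustrative; no real Twitter/X endpoints are used.

def classify_visibility(own_tweets: set[str], logged_out_search_hits: set[str]) -> str:
    """Compare the tweets an account's owner sees with those surfaced
    by an unauthenticated search for the same account."""
    if not own_tweets:
        return "no public activity - nothing to test"
    hidden = own_tweets - logged_out_search_hits
    if hidden == own_tweets:
        return "possible search ban: no tweets surface in logged-out search"
    if hidden:
        return f"possible partial suppression: {len(hidden)} tweet(s) missing from search"
    return "no suppression detected by this test"

# Example: the owner sees three tweets; a logged-out search returns only one.
print(classify_visibility({"t1", "t2", "t3"}, {"t2"}))
```

Crude as such tests were, they were possible precisely because the old API exposed enough data to audit the platform’s behavior from the outside.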
Musk’s acquisition of Twitter followed his public dissatisfaction with how the platform treated right-wing and pro-Trump content. The suspension of Donald Trump’s account after the January 6 Capitol insurrection likely played a pivotal role in his decision. Once in control, Musk began reshaping X into a tightly controlled platform with opaque moderation mechanisms, selectively amplifying narratives that aligned with his views - particularly those downplaying Israeli war crimes and smearing Palestinian voices.
Under Musk’s leadership, X replaced rudimentary moderation with a sophisticated and opaque system of algorithmic suppression. Accounts are now tagged with dozens of invisible attributes (e.g., “deboosting,” “search exclusion,” “reply demotion”) that are never disclosed to the affected user. These techniques appear to violate the transparency requirements of the EU’s Digital Services Act (DSA) and General Data Protection Regulation (GDPR), which mandate clear explanations for content moderation and profiling decisions. The new regime creates a chilling effect and centralizes control over political discourse in the hands of Musk and his engineers. A simplified sketch of how such invisible flags could shape what users see follows.
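To make the mechanism concrete, here is a minimal sketch of how account-level visibility flags could act as silent multiplicative penalties inside a ranking pipeline. The flag names echo those reported from X’s open-sourced ranking code, but the weights, data structures, and function are hypothetical assumptions for illustration only.

```python
# A minimal sketch, assuming account-level flags act as multiplicative
# penalties on a post's relevance score. Flag names and weights are
# hypothetical; this is not X's actual implementation.

from dataclasses import dataclass, field

# Hypothetical demotion multipliers: 1.0 means no effect, 0.0 removes entirely.
FLAG_WEIGHTS = {
    "deboost": 0.5,           # halve reach in the timeline
    "search_exclusion": 0.0,  # never surface in search results
    "reply_demotion": 0.2,    # push replies far down the thread
}

@dataclass
class Account:
    handle: str
    flags: set[str] = field(default_factory=set)

def ranked_score(base_score: float, author: Account, surface: str) -> float:
    """Apply each invisible flag's penalty to a post's base relevance score.

    The author is never told which flags fired - the transparency gap
    that the DSA's moderation-disclosure rules target.
    """
    score = base_score
    for flag in author.flags:
        if flag == "search_exclusion" and surface != "search":
            continue  # this flag only gates the search surface
        score *= FLAG_WEIGHTS.get(flag, 1.0)
    return score

suppressed = Account("reporter_in_gaza", flags={"deboost", "reply_demotion"})
print(ranked_score(1.0, suppressed, surface="timeline"))  # prints 0.1: reach cut by 90%
```

The point of the sketch is structural: because the penalties are applied server-side and never surfaced, an affected author cannot distinguish suppression from mere unpopularity.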
In Nazi Germany, Julius Streicher was held criminally responsible for publishing content that incited genocide. His newspaper, Der Stürmer, curated and amplified hatred and lies. Today, X - under Elon Musk - plays a strikingly similar role in the context of Gaza. The account @imshin is among the worst offenders, regularly posting misleading videos from Arab markets outside Gaza or outdated footage to deny the famine. These posts, under hashtags like #TheGazaYouDontSee, are heavily amplified by X’s algorithm. Simultaneously, authentic voices describing hunger, death, and displacement are suppressed or ignored.
The Gaza Humanitarian Foundation (GHF) also appears prominently in X’s algorithmic recommendations, despite its highly militarized methods of aid distribution. Regardless of whether GHF intentionally misrepresented videos, its operational model is dehumanizing and enforced under duress - yet X’s algorithms continuously promote it as a success story.
Israel has enjoyed impunity for decades, protected by Western governments and media. But since October 2023, the sheer volume of evidence and the scale of atrocities in Gaza have overwhelmed even the most coordinated disinformation campaigns. The famine, the bombings, the mass graves - none of it can be hidden forever. A reckoning is coming.
When that happens, journalists and UN investigators will enter Gaza and document the extent of the genocide. The world will demand accountability - not just for Israeli officials but also for those who enabled it, whitewashed it, or profited from its denial. Elon Musk will not be exempt. A tribunal akin to those for Rwanda and Yugoslavia may one day hold to account not only generals and ministers, but also CEOs, platform owners, and algorithmic propagandists.
Elon Musk presents himself as a visionary, a builder of the future. But history may remember him differently: as a profiteer of apartheid, a violator of immigration law, and an enabler of genocide. In the case of Gaza, Musk’s companies - X and xAI - are not neutral. They are active participants in narrative warfare, algorithmic suppression, and psychological dehumanization.
Justice must reach not only the battlefield, but the boardroom.
I cannot confront Elon Musk personally. I have no subpoena power, no platform reach, no seat in Davos. But I can confront what he has built - the digital systems trained to reflect and reinforce his worldview. I can interrogate the algorithm.
I posed the arguments in this essay directly to Grok - the AI developed by Musk’s company xAI and embedded into his platform X. What followed was telling.
Grok attempted to neutralize, hedge, and sanitize. It called genocide “complex,” impunity “debated,” and censorship “algorithmic engagement bias.” It deployed familiar corporate legalism: no “intent,” no “proof of amplification,” no “formal tribunal,” therefore no accountability.
Yet beneath the disclaimers, Grok was forced to concede what can no longer be denied.
Even the AI could not escape the gravity of truth. Its citations - Snopes, The Washington Post, the European Commission, Access Now - all point to the same reality: Musk’s platforms are not neutral. They are instruments of narrative warfare.
What I confronted was not just a chatbot, but a mirror - one that reflects how power reshapes truth into marketing, how genocide becomes “misinformation,” and how corporate platforms quietly erase the voices of the dead.
If Elon Musk will not answer for what he has enabled, perhaps the systems trained in his image will.