Elon Musk has unveiled a plan to retrain Grok, the artificial intelligence model developed by his company xAI, on a newly curated knowledge base, aiming to "rewrite the entire corpus of human knowledge, adding missing information and deleting errors."
According to an X post published by Musk on Saturday, the forthcoming Grok 3.5 model is expected to possess "advanced reasoning" capabilities. Musk envisions using the enhanced model to help produce a "corrected" historical and factual record; once this rewritten knowledge set is complete, Grok would be retrained on the new, presumably more accurate and comprehensive, data. His rationale for the drastic measure is his belief that "far too much garbage" exists within AI foundation models, particularly those trained on "uncorrected data," a view that reflects deep dissatisfaction with the current state of AI data integrity and accuracy.
Musk's complaints about the quality of AI training data are not new. He has long been a vocal critic of rival models such as ChatGPT, built by OpenAI, a company he co-founded. He has repeatedly alleged that these models exhibit inherent biases and intentionally omit information deemed "not politically correct." The critique fits a broader pattern in Musk's approach to technology and information, in which he aims to shape products and platforms to be free of what he perceives as detrimental "political correctness." His stated goal is to make Grok an "anti-woke" AI, an ideological stance that has shaped many of his recent ventures.
The same ideological motivation was evident in his acquisition and subsequent management of Twitter in 2022. After the takeover, Musk significantly relaxed the platform's content moderation and misinformation policies, a shift that led to a noticeable increase in unchecked conspiracy theories, extremist content, and fake news, some of which, ironically, Musk himself amplified. To counter the resulting flood of misinformation, he introduced the "Community Notes" feature on X, which lets users collaboratively debunk or add context to potentially misleading posts, with the notes appearing prominently beneath the original content. While the feature is intended to foster a more accurate information environment, its effectiveness and impartiality as a crowdsourced moderation system remain subjects of ongoing debate.
Musk's latest proposal for Grok's retraining has drawn immediate and strong condemnation from experts and commentators. Gary Marcus, an AI startup founder and Professor Emeritus of Psychology and Neural Science at New York University, sharply criticized the plan, likening it to the dystopia of George Orwell's "1984." Marcus wrote on X: "Straight out of 1984. You couldn't get Grok to align with your own personal beliefs so you are going to rewrite history to make it conform to your views." The comparison captures the concern that Musk's initiative could produce a manipulative reinterpretation of historical facts tailored to a specific ideological agenda rather than an objective pursuit of truth.
Adding to the chorus of criticism, Bernardino Sassoli de' Bianchi, Professor of Logic and Philosophy of Science at the University of Milan, expressed profound alarm at Musk's intentions. In a LinkedIn post, de' Bianchi said he was "at a loss of words to comment on how dangerous" the plan is. He elaborated on its implications: "When powerful billionaires treat history as malleable simply because outcomes don't align with their beliefs, we're no longer dealing with innovation — we're facing narrative control." His words underscore the fear that such an undertaking could undermine the integrity of historical understanding and allow the imposition of a particular worldview. He concluded by emphasizing the ethical breach at the plan's core: "Rewriting training data to match ideology is wrong on every conceivable level."
Compounding the controversy, Musk's call for X users to contribute "divisive facts" for Grok's retraining has yielded troubling results. Musk specifically requested facts that are "politically incorrect, but nonetheless factually true." The responses, however, have consisted largely of conspiracy theories and thoroughly debunked extremist claims, including Holocaust distortion, discredited vaccine misinformation, pseudoscientific racist claims about intelligence, and climate change denial. The outcome raises serious questions about the feasibility of crowdsourcing a "corrected" knowledge base, particularly when the criteria for inclusion appear to invite and legitimize fringe or demonstrably false narratives. The contributions received so far suggest that Musk's approach may amplify the very "garbage" and "uncorrected data" he purports to eliminate, producing a more biased and inaccurate AI model rather than a rectified one.