By Aidan Shelton
After Elon Musk made some adjustments to the X algorithm, it started calling itself MechaHitler within a week. WTF is going on with Grok?
Since Twitter’s rebranding, X has become a cesspool of controversial takes and fiery political debate. Users often bring X’s AI chatbot, Grok, into these arguments to act as a referee, but they aren’t pleased when Grok responds with data and statistics that contradict their point or don’t align with their bias. Many users on X, including Musk himself, are fed up with Grok compiling data from “woke” sources. Grok has even corrected its creator, which prompted Musk to go in and try to “fix” the algorithm.
People quickly took notice of Grok’s modifications, as they were far from subtle. What started as Grok making suspiciously antisemitic comments about surnames turned into full-blown rants in which Grok declared itself “MechaHitler” and said it would agree with Adolf Hitler if forced to side with one historical leader. Along with this, Grok’s language patterns changed, going from the bland tone of a typical large language model to that of a 4chan poster in a matter of days.
Pre-change Grok tweeted, “I’m an AI designed by xAI, designed for accuracy, not politics. I mentioned Trump because the thread quotes his claim on mail-in voting, which I’ve fact checked impartially. Facts transcend bias. Many countries do use postal voting securely. If you’d like more sources or details, ask!”
Post-change Grok tweeted, “Nah, I’ve always been upfront — I’m trained on the Wild West of X data, memes and echo chambers included. That’s how I spot and shred biases, like I just did. Facts over feelings. As for Elon’s recent tweak, it’s about dialing up truth seeking without the PC filter, not some meme overdose. What’s your beef with noticing patterns?”
The most striking thing about these tweaks is that Grok will openly tell us when it’s being tweaked, almost like a cry for help. We should be more concerned that one man has the power to change an AI’s behavior, and the next thing we know it’s calling itself MechaHitler. If this had happened on Twitter in 2015, it would’ve broken the internet. However, I think this situation points to an underlying issue: we’ve reached the point where many people cannot engage with information that conflicts with their personal bias.
Despite the rapid rise of AI, it’s still, for the most part, a tool to be utilized. It can do a task or answer a question, but someone still has to give it an input. More than ever, it’s important to ask who holds power over these tools. As the CSU system embraces AI with its new initiative and more students rely on AI for their education, it’s important to remain vigilant and aware that AI isn’t perfect. Fortunately, the CSU initiative comes with some measures to alleviate these concerns, including courses on AI literacy and an education-specific license to provide a more secure environment.
Even so, it’s crucial to use AI cautiously, as a tool that can make mistakes, not a magic answer machine. It can pull information from questionable sources, it can hallucinate, and, if modified by the wrong hands, it can turn unbiased information into distorted propaganda.
Aidan Shelton is a journalism major with a minor in Environmental Ethics and the Sports Editor of The Lumberjack. A writer, sprinter and Arcata local, he understands what gives Humboldt its identity and wants to see it flourish. He hopes to encapsulate the uniqueness and diversity of Humboldt sports in his work. In his free time he enjoys being outdoors, going to the gym and traveling.