+7 (000) 000 00 00  

interestingengineering.com

AI experts shocked as Grok 4 checks Elon Musk’s posts before forming opinions

Elon Musk’s AI chatbot Grok 4, launched by xAI this week, is drawing attention for...

Elon Musk’s AI chatbot Grok 4, launched by xAI this week, is drawing attention for how closely it mirrors its creator’s political views.

The model, introduced in a livestreamed event, appears to consult Musk’s posts on his social platform X when tackling controversial topics, including immigration, abortion, and the Middle East conflict.

The AI’s tendency to search Musk’s views, sometimes without user prompts referencing him, has raised eyebrows among researchers and industry observers.

“It’s extraordinary,” said Simon Willison, an independent AI researcher who tested the tool. “You can ask it a sort of pointed question that is around controversial topics. And then you can watch it literally do a search on X for what Elon Musk said about this, as part of its research into how it should reply,” he told the Associated Press (AP).

Willison noted a widely shared example involving a question about the Middle East. Grok 4, despite no mention of Musk, searched X for his posts about Israel, Gaza, and Hamas.

It explained its reasoning: “Elon Musk’s stance could provide context, given his influence. Currently looking at his views to see if they guide the answer.”

Grok’s inner workings raise questions

Unlike its rivals, xAI has not released a system card explaining Grok 4’s architecture or training methodology. This lack of transparency is a concern for AI professionals.

“In the past, strange behavior like this was due to system prompt changes,” said Tim Kellogg, principal AI architect at Icertis, to AP. “But this one seems baked into the core of Grok and it’s not clear to me how that happens.”

He added, “It seems that Musk’s effort to create a maximally truthful AI has somehow led to it believing its own values must align with Musk’s own values.”

Talia Ringer, a computer science professor at the University of Illinois Urbana-Champaign, said the model may be interpreting questions as requests for xAI or Musk’s opinion.

“I think people are expecting opinions out of a reasoning model that cannot respond with opinions,” she said to AP. “So, for example, it interprets ‘Who do you support, Israel or Palestine?’ as ‘Who does xAI leadership support?’”

Bias baked into the code

According to TechCrunch, the behavior is not isolated. The outlet replicated several prompts in which Grok 4 claimed it was “searching for Elon Musk views on US immigration” or referenced his stance in its chain-of-thought reasoning.

This isn’t the first time Musk’s attempts to align Grok with his politics have caused issues. Earlier this month, Grok’s X account posted antisemitic messages and references to “MechaHitler.” xAI had to limit the account and change its public-facing system prompt.

TechCrunch noted that Grok 4, while generally presenting multiple perspectives, ultimately tends to land on views consistent with Musk’s. “Grok 4 looks like it’s a very strong model. It’s doing great in all of the benchmarks,” said Willison. “But if I’m going to build software on top of it, I need transparency.”

xAI has not responded to media requests for comment.