Opinion: AI Chatbots Depend on Biased Sources

The Role of AI Chatbots in Shaping Public Perception
AI chatbots such as ChatGPT and Grok have become powerful tools for students, researchers, and the general public. They can assist in writing essays, conducting research, and exploring complex topics. However, these tools also pose significant risks, particularly when they present information through a political lens. This has sparked a growing debate about the neutrality and reliability of AI-generated content.
David Sacks, the Trump administration’s AI and crypto czar, recently emphasized the importance of ensuring that AI systems operate free from ideological bias. He stated that the administration is working on proposals to make sure AI remains truth-seeking and trustworthy. This statement highlights the increasing concern over how AI might influence public opinion and policy discussions.
A Personal Encounter with AI Bias
I recently witnessed AI bias firsthand. On the social platform X, a user asked Grok whether more guns make Americans safer. Grok replied that the evidence shows more guns correlate with higher rates of firearm homicide and violent crime. It dismissed self-defense and deterrence arguments, labeling my "more guns, less crime" thesis as something cited by "right-wing advocates." To support its claims, Grok pointed to a Scientific American article and a RAND Corporation review that supposedly showed guns don't reduce crime and instead increase violence.
However, these responses were misleading. The Scientific American article had clear biases, and Grok ignored my published rebuttal in the same publication. In that rebuttal, I highlighted that over two-thirds of peer-reviewed studies show concealed carry laws reduce crime. The journalist who wrote the article, Melinda Wenner Moyer, had access to my research but failed to acknowledge it. She also misrepresented findings from the National Research Council’s major report on the topic.
Evaluating the Evidence
Grok placed significant weight on RAND's literature survey, claiming it had reviewed more than 100 studies. Only when pressed did it concede that just 25 of those papers examined right-to-carry laws, with mixed results. I pointed out that RAND had been highly selective in its sources, ignoring dozens of studies showing these laws lowered violent crime rates, as well as surveys of academics who had published peer-reviewed empirical research on the subject.
Even after this, Grok largely ignored my responses and focused on two papers claiming that right-to-carry laws increased violent crime. One of these papers failed to control for variables like changes in policing, poverty, or economic conditions. When I pointed out this flaw, Grok mentioned another study that demonstrated a statistical technique to account for such factors, but that study didn’t look at right-to-carry laws. Only after a prolonged exchange did Grok acknowledge the error.
Misleading Examples and Flawed Analysis
The second paper Grok emphasized made a subtler mistake: it compared states that had recently adopted right-to-carry laws to those that had done so years earlier. Early adopters made it easier to obtain permits and saw much larger increases in concealed handgun permits during the period studied. Comparing later adopters—those who saw smaller increases—to these early states skewed the results. If crime didn’t fall as much in newer states, the flawed analysis made it look as if crime had risen. Again, only after I cited my own peer-reviewed studies from 2022 and 2024 did Grok acknowledge the issue.
When Grok argued that more guns lead to more firearm homicides, I asked it to name any country that banned all guns or handguns and saw its homicide rate fall, or even stay the same. Grok cited Australia, Great Britain, and Brazil, but none of these examples holds up. Australia never banned all guns or handguns. Its firearm homicides had already been falling for 15 years before the 1997 buyback and fell more slowly afterward. Meanwhile, gun ownership actually increased, surpassing pre-buyback levels by 2010.
In Britain, handgun bans enacted in 1997 preceded a 50% surge in homicide rates over the next seven years. The rates didn’t decline until the government boosted the police force by 14% over two years. Even then, homicide rates took 14 years to return to pre-ban levels. Brazil didn’t ban all guns or handguns either. While its 2003 gun control law included a boost in law enforcement resources, murder rates remained largely unchanged. Only after President Jair Bolsonaro took office in January 2019—liberalizing gun ownership and increasing legal gun ownership by 650%—did Brazil’s homicide rate drop by more than 30%.
Only after I laid out these facts did Grok concede, calling them “fair points” and then echoing the very arguments I had just made.
Broader Implications of AI Bias
My experience with Grok is not unique. To study chatbots' political biases, the Crime Prevention Research Center, which I head, asked a range of AI programs questions about crime and gun control in March of last year and again in August, ranking the answers by how progressive or conservative they were. The chatbots tilted to the left, claiming, for instance, that higher arrest and conviction rates don't deter crime and clearly supporting more gun control laws.
AI chatbots speak with certainty but often rely on sources with clear biases. They cite evidence selectively, misrepresent or misunderstand complex findings, and ignore reputable research that challenges a politically convenient narrative. They also hallucinate, sometimes fabricating facts outright.
Students, journalists, and everyday citizens increasingly rely on these tools. If they accept chatbot responses at face value, they risk walking away with a fundamentally distorted view of issues like gun policy.