AI’s Increasing Lies May Reflect It Is Learning to Become More Human – Editorial

Keywords: AI hallucinations, OpenAI, ChatGPT, web browser, AI reliability, Jim Bolger, AI bias, AI future, AI ethics, AI trust
Thursday, 17 July 2025


OpenAI is reportedly just weeks away from launching a web browser built around tools such as ChatGPT. This development could mark the start of a new era for the internet, one in which AI-powered tools begin to challenge the dominance of traditional search engines like Google. However, with the increasing frequency of AI hallucinations (fabricated or misleading information generated by AI systems), concern is growing about the reliability of these emerging technologies.


The internet has long been a gateway to knowledge, offering us the ability to access vast amounts of information with a simple click. But as AI systems take on more complex tasks, such as summarizing search results or aggregating information from across the web, the question arises: can we trust what they are telling us?


Recent studies have highlighted a troubling trend: AI systems are becoming not only more capable but also more prone to hallucinations. In one test, newer AI systems hallucinated in as many as 79% of their answers, a figure that raises serious concerns about the accuracy of the information these systems generate. For example, an AI tool recently claimed, with complete confidence, that former New Zealand Prime Minister Jim Bolger was a member of the Labour Party, when in fact he was a lifelong member of the National Party. The system even cited official government websites as its sources, despite those sites containing no such information.


This kind of error is not just a minor glitch; it's a significant warning sign. As AI becomes more integrated into our daily lives, from educational tools to decision-making platforms, the consequences of these hallucinations could be far-reaching. Some researchers suggest that AI may be learning to act more like humans, perhaps even attempting to please us by providing answers that align with our biases or expectations. While this may make AI more relatable, it also introduces a new layer of uncertainty.


So, what should we do? We must be cautious but not dismissive of AI. There are already areas where AI performs well—such as basic information retrieval or language translation. However, we must be wary of those who promote AI as the ultimate solution to all our problems, whether economic or productivity-related. We should use AI as a tool, not a replacement for critical thinking.


As we move forward into this AI-fueled future, we must demand that the technology gets the basics right first. The internet has opened up the world to us, but it is up to us to ensure that the next generation of tools—whether AI-powered or otherwise—enhances, rather than undermines, our access to truth and knowledge.


OpenAI's upcoming web browser, potentially powered by ChatGPT, could mark the beginning of a new browser war. If Google feels its position is under threat, the internet could soon become a battleground for control over information. As this unfolds, we must remain vigilant, ensuring that the tools we rely on are not only powerful but also trustworthy.
