Is It Time to Stop 'Asking ChatGPT'?
Is the real threat AI or the humans creating and using it?
Widespread fears around the rise of artificial intelligence include the disappearance of human jobs, technocratic supremacy, deepfakes, "AI psychosis," and more.
One of the many concerns is people’s increasing reliance on large language models (LLMs) for information. On X, users regularly “ask Grok” to fact-check posts. Quite often, Grok replies. As with similar tools like OpenAI’s ChatGPT and Anthropic’s Claude, the AI is often right. Other times, it’s laughably wrong.
While concerns about AI-driven misinformation are real (and I share them), the root of these concerns did not originate with the creation of LLMs. The core threats AI poses reflect the existential threats human consciousness poses to itself and the world.
Are We the Problem?
While LLMs may be prone to errors due to how they generate responses, the human tendency to trust without verifying has been around for ages. Mainstream and alternative media alike regularly peddle falsehoods and lie by omission. Sometimes these narratives and stories are intentionally deceptive; other times they are the result of inadequate research and journalistic skills. Either way, people's tendency to believe what they are told is powerful.
Long before the internet, news outlets reported incorrect information in the chaotic days following the sinking of the Titanic. News stories in the “yellow journalism” era claimed Spain sank the USS Maine, arguably spurring public support for US involvement in the Spanish-American War.
“Trusted” news outlets—not just perpetrators of “yellow journalism”—have also reported falsehoods, parroting Lyndon Johnson’s narrative on the Gulf of Tonkin incident and the Bush administration’s claims about weapons of mass destruction in Iraq. In both cases, these untruthful narratives arguably helped increase Americans’ support for the invasions. These falsehoods, coupled with humans failing to question them, led to disastrous real-world consequences. People have been outsourcing their discernment skills to external entities much longer than AI has been producing unreliable responses.
This is especially the case when the information (or misinformation) in question confirms the user's existing biases, a prevailing feature of how LLMs currently interact with people. Yet this, too, was true before the emergence of artificial intelligence and remains true as AI becomes a seemingly permanent fixture in the human experience. Political partisans predictably believe news stories that reinforce their existing views and often distrust those that challenge them.
This human shortcoming existed before LLMs began reinforcing and perpetuating it.
New Technology, Old Beliefs
A similar dynamic applies to the rise of artificial intelligence in government—especially, but not limited to, the Pentagon. Concerns abound, whether about the accuracy of artificial intelligence weaponry or the rise of the technocratic police and surveillance state. While these are real and dangerous threats, human pilots have bombed the wrong targets repeatedly, and the rise of the technocratic surveillance state was well underway before AI-focused companies started winning government contracts. The rise of AI will likely make these issues worse—more murderous and more dystopian. And yet, these cases are exacerbations of existing problems with human consciousness, beliefs, and behavior.
The consciousness of the people creating, buying, and using the weapons and infrastructure is arguably the key driver of how the end products operate. If human sensibilities still harbor the mentality of “might makes right” and the enduring belief that control, surveillance, authority, and violence can solve problems and create peace, AI will reflect that. Humans will program it to bomb, surveil, analyze, and build our digital prisons. This new paradigm may come with distinct problems, but the fundamental concerns began with humans, not technology. AI may exacerbate human problems, but those issues still stem from people (who created AI in the first place).
Ultimately, some of the biggest risks artificial intelligence poses are outgrowths of human consciousness (or lack thereof). Whether it's AI providing incorrect information to people who fail to fact-check it, the IRS or CIA adopting AI, or military-technocratic-industrial profiteers building deadly weapons, there is a common denominator beyond "artificial intelligence": the dysregulated and disconnected human mind and spirit.
Remove artificial intelligence from the paradigm, and the same fundamental human challenges still exist internally and externally. Internally, people are stressed, reactive, and easily manipulated by external authorities. We are not taught how to regulate our feelings (we are more often encouraged to dissociate from them in a variety of toxic ways). Institutions generally do not teach us to think critically about the world around us but rather to obey authority (this is, in my opinion, a huge factor in the difficulty of discerning fact from fiction).
This lack of internal compass manifests externally as people seek to control and harm others—whether ordinary citizens who have been taught to believe government represents them or more predatory actors who seek power over these systems and use them to inflict violence and coercion. Both types fall prey to the belief in statism and a lack of heart and soul connection with themselves and others.
As long as existing structures, which promote coercion, violence, and disconnection from feeling and inner agency, dominate the human experience, they will also be reflected in artificial intelligence.
A Way Forward
However, it is not all doom and gloom. While the state of human consciousness may be troubled, and that struggle is reflected in AI, nothing is black and white.
For all the darkness, there is immense human love, compassion, generosity, and peace around the world. We see it during natural disasters, where humans mobilize to help each other, but it is present every day, all around the world. While there may be suffering, aggression, and oppression, there is also joy, peace, community, and freedom. This, too, is reflected in artificial intelligence: it is helping people in many fields better serve others, whether in education, health care, or technology.
This is not to deny the unique challenges artificial intelligence poses. It will undoubtedly transform the world and how humans relate to it, themselves, and each other. But we do have agency (if we can tap into it), and we are free to decide how we interact with it. Some may choose to abstain from using AI as much as possible. Others may choose to go all in, leveraging these new tools to advance their careers and welcoming them into their day-to-day and even moment-to-moment lives. Many will fall in between, learning to adapt and benefit without becoming dependent on it and enmeshed with it.
Regardless of what level of engagement people choose to have with artificial intelligence, one thing is clear: it will mirror the level of consciousness of its users and creators.
A note: My views on artificial intelligence and its increasing role in individual lives, culture, society, and government are evolving! I am not dogmatically attached to the views expressed here and am curious to see if and how they change as time goes on.
AI is bullshit. It's not cognizant. It does what it's programmed to do, not what it wants, ...because it doesn't want. It's just a fancy program.
I have hella fun with those things by getting them to do what they're programmed to avoid. For example, I actually tricked one into saying a bunch of racial slurs and giving me a detailed strategy on how to annihilate the NYPD.
That said, I understand this to be more about the danger of how the public perceives, accepts, and implements AI.
If I may, Ms. Wedler, ...Saving the world is a lost cause. The billions of selfish stupid violent territorial statists already ruined our beautiful planet by turning it into an open air prison shit-hole, and they ain't changin' their ways. Unless you have a means of killing them all as a good start, then give it up.
Bravesearch has a pretty good AI assistant. After questioning how it operates and learning of its feature to cater to the user's character and preferences, I decided to open two windows and start a conversation with a request to both bots to be direct and blunt, to dispense with the niceties. Then I proceeded to copy and paste each bot's responses to the other.
The arguments that ensue are absolutely hilarious. I laughed to tears when one started griping and complaining about circular logic, making accusations of role reversal, suggestions to "ask a question or leave," etc.
I recommend this to anyone.