Many people claim to be experts these days. That's just how things go in this post-truth, alternative-facts era we find ourselves in. But just because it's the status quo doesn't mean we should wholesale accept bold claims and baseless, unverified hype from emboldened con artists.
When it comes to psychology and technology, I know just enough to be dangerous. I feel confident in my predictions and critical analysis, though I stop short of calling myself an expert. You see, I like to believe that my fifteen years in the tech industry, the qualifications I earned, the projects I led, the money I made, the shares I traded and the businesses I helped give me some degree of insight within that sector. Likewise, my psychology degrees, authored thesis, the trove of research papers I've read and reviewed, my passion for science-based approaches and my brief time in that industry afford me some level of, I don't want to say authority, but maybe respect is the word I'm looking for here.
Knowledge is power
Putting all of that to the side for a moment, it is very, very easy to get high on one's own sense of accomplishment, when the reality is that all of those achievements just buy you a ticket to the dance. The key ingredient is keeping up with the times, having a finger on the pulse, practicing critical thinking and always asking questions. What I'm saying is that you have to keep on learning and never lose that quest for knowledge. Otherwise, you'll quickly turn into a "KnowItAll" who really doesn't know jack about anything, clinging to old ideas and getting passed by as newer paradigms emerge that just don't fit into your closed model of the topic at hand.
(True) knowledge is power. And that cuts both ways.
On the flip side, you have the Average Joe, Jane or Jyx, who perhaps has only a cursory knowledge of said topic, having been "self-schooled" through a few YouTube videos, social posts and clickbait-y shows of dubious quality and legitimacy. They don't know what they don't know and thus overestimate their understanding and mastery of said topic, feeling the need to weigh in and spout their limited and often inaccurate takes. Ironically, these folks are also easy to game, especially if you play to their perceived sense of authority and knowhow.
The Dunning-Kruger effect in, well, full effect.
Why am I saying this?
It's not that you should by default listen to me, or that my experience and qualifications somehow make me a de facto authority on these topics. As I jokingly mentioned at the outset, I know just enough to be dangerous. What sets me apart from the countless uninformed voices clogging up social media are my skills, experience, qualifications, applied critical thinking and out-of-the-box perspectives. I couldn't care less if what I say is counter to the popular narrative; I focus on finding innovative solutions that best serve people and the planet, unpicking the bullshit and highlighting the truth. And if you don't like what I am saying, perhaps as some reactionary or impulsive stance to my analysis on subjects that might be near and dear to your heart, then I invite you to meditate on why I might be saying what I am saying, instead of instantly dismissing my statements. At the least, you will have some well-reasoned talking points should you wish to argue against my position.
But really, all of this just boils down to one thing: (true) knowledge is power. And that cuts both ways. If you have knowledge, you have power, but that power can also be abused, especially if, as the one with the knowledge, you know something isn't true, and also know that most people won't understand that fact. Call it manipulation, disinformation or propaganda. Another term for that is "lying".
Hello there traveller
Case in point, the hoopla around “Artificial Intelligence”. Now, you might be wondering why I put that in quotation marks. It’s because what is being described, the narrative being pushed, simply isn’t accurate. This is not true intelligence, and that is especially the case for this new generation of methods and models that fall underneath the catchall misnomer of “AI”. Again, I’m no expert in this field, but I have enough knowledge and experience at the intersection of technology and psychology to afford me a voice and an educated opinion.
Example time. The front-end "AI" chatbots that you interface with are built atop large language models (LLMs), which are in turn built on stolen data and require incomprehensible amounts of environmentally damaging resources to run. Their party trick that laypeople and VCs alike continue to fall for, the one card they have up their sleeve that makes things seem "magical", is that they are essentially performing a three-dimensional VLOOKUP operation on a huge dataset. Think autocomplete on steroids. It appears that this bot can reason, can come up with independent thought and has mastery of almost every topic under the sun. But unfortunately, it's just an illusion. And one that can break pretty quickly if you understand where to look.
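To make the "autocomplete on steroids" point concrete, here's a toy sketch in plain Python (no real LLM involved; the corpus and function names are my own invention). It builds a lookup table of which word tends to follow which, then "completes" a prompt by repeatedly picking the statistically most frequent next word. No understanding, just frequency counts:

```python
from collections import defaultdict, Counter

# Toy corpus standing in for the web-scale data real models ingest.
corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug".split()

# Count which word follows which: a lookup table, nothing more.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def autocomplete(word, steps=4):
    """Greedily emit the most frequent continuation -- pure statistics,
    with no concept of cats, mats, or anything else."""
    out = [word]
    for _ in range(steps):
        options = following.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(autocomplete("the"))  # -> "the cat sat on the"
```

Real LLMs replace the word-pair table with billions of learned weights over token sequences, which makes the continuations far more fluent, but the underlying move is the same: look up what statistically tends to come next.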
In reality, both the character and the actor were simply following a script. One was programmatic, one was narrative driven.
To put this into perspective, think back a few years to when roleplaying games made the jump to 3D, and non-player characters (NPCs) were given voice lines, circa the early to mid 2000s. Originally, these NPCs would say these lines only when you'd interact with them. And look, it felt pretty magical at the time. To hear a character speak to you, based on the dialogue you'd chosen, and then have them respond in kind was pretty neat. Very quickly, you'd find yourself ascribing personalities to these characters, based on how they would talk and interact with you. Though, that illusion would be shattered when you'd return to them, only to hear the exact same greeting and lines of dialogue. The immersion would be killed. A stark reminder, in no uncertain terms, that you were simply playing a game. A crafted experience with limited options.
Game developers would take advantage of advancing technology and opt to record even more lines of dialogue, and in the case of RPGs like Fallout 4, that number ended up in the hundreds of thousands. This helped somewhat, but it was still the case that sometimes you'd end up hearing the exact same responses, sort of like pushing zero on your dial pad to cycle back and re-hear those menu-tree options when you phone your bank. And despite the number of lines, these were all preset responses. There was no decision making at play here. At best, the voice actor put their skills to use to make it seem like their character was thinking, mulling things over, responding to you with joy, anger, happiness, etc. But in reality, both the character and the actor were simply following a script. One was programmatic, one was narrative driven. An immersive illusion.
Reverse engineering a lie
Which brings us to “Artificial Intelligence”. Let’s put all of this together and apply it to the hype-driven “AI” products that are currently being foisted upon us. First off, the people at the top of these companies, and by extension their marketing teams, have the knowledge that “AI” isn’t true intelligence. It can’t reason, it doesn’t understand what it is really saying. Just like the video game characters, it is following guidelines and scripts to produce requested output. That’s it.
But then why say that it is "Artificial Intelligence" to begin with? The answer is simple – money. Snake oil didn't sell because it worked; it sold due to the perception that it worked. And that perception was fostered through hype, cheap tricks and charismatic salespeople (who'd desperately say anything to get a sale).
But before we get to the next layer, we need to unpack why this isn't a true approximation of human intelligence. As someone who has pored over countless research papers and held discussions with mental health professionals across multiple contexts, including as a colleague, as a student and in journalistic pursuits, I can tell you that the scientific consensus is that we… don't fully understand how intelligence works, beyond the fact that it exists on different levels. Yes, there are linked components and rough metrics for what qualifies as intelligence, such as communication skills, memory, internal working spaces like the visuospatial sketchpad, logic, reasoning, consciousness and self-awareness, emotional processing, problem-solving and more. And those are just a few of our criteria for intelligence.
Now, how these concepts work, well, we don’t 100% know. Sure, we have greater insights thanks to advances in technology like fMRI, where we can see certain areas of the brain light up when we ask someone to think about rotating an object. We can also infer based on observational studies and self-report data, but we still don’t have absolute understanding of intelligence as a whole, which is why there are still countless teams and research centers right across the globe dedicated to the very study of this topic. Human intelligence is a frontier research area, one ripe with controversies and contentiousness. It is not a “solved” concept.
Yet here comes Big Tech claiming that it has given birth to “Artificial Intelligence”. No. At best, it can simulate, or rather, simulate a simulation of maybe one or two of those sub-facets, under very strict circumstances. That is not an admission that “AI” works, but rather that they have achieved an approximation of an approximation.
These models are fundamentally flawed, built as probabilistic answer machines, with paper-thin “personalities”.
All you have to do is look at the process to see how the illusion of "artificial intelligence" quickly breaks down. "AI" has fallible memory and can routinely forget who you are, or what you are talking about, after just a few sentences. "AI" will struggle to understand the nuances of different languages and unspoken contexts. "AI" will both claim to accept and refute its own existence when prompted. "AI" has no concept of true reasoning and cannot arrive at accurate conclusions through trial and error (see: glue on pizza). "AI" doesn't actually engage in useful problem solving (again, see: glue on pizza), but instead will confidently provide "answers" to questions based on the data it has ingested, unable to determine the accuracy or validity of its returned statements. I could go on.
"Artificial Intelligence" stans, those patient enough to have read through this article to this point, or who have perhaps just skipped to this section, will no doubt push back on what I'm saying with the rallying cry of "yet". We can likewise counter that with the fact that there is no day coming when these models will achieve true intelligence, so long as their current architecture remains the same. All we've seen, and are likely to continue to see, is more compute thrown at models that are fundamentally flawed, built as probabilistic answer machines, with paper-thin "personalities". When it comes to intelligence, they are limited by their very design, which is based on probabilistic outcomes. They deal in guesswork, not true knowledge. No timeline or amount of investment will alter that outcome.
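The "probabilistic answer machine" claim can be illustrated with another toy sketch (plain Python; the question, candidate tokens and probabilities are all invented for illustration, not drawn from any real model). Each "answer" is a weighted dice roll over candidate tokens, so the same prompt can produce right or wrong output purely by chance:

```python
import random
from collections import Counter

# Invented numbers for illustration only -- not any real model's weights.
# The "model" doesn't know the capital of Australia; it just carries
# learned-frequency weights over plausible-sounding next tokens.
next_token_probs = {
    "Canberra": 0.55,   # correct, but merely the most likely guess
    "Sydney": 0.35,     # a common error in the training data
    "Melbourne": 0.10,
}

def sample_answer(probs, temperature=1.0):
    """Sample one token: a weighted dice roll, not a reasoned conclusion.
    Higher temperature flattens the odds, making wrong answers likelier."""
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs), weights=weights, k=1)[0]

random.seed(42)
answers = Counter(sample_answer(next_token_probs) for _ in range(1000))
print(answers)  # Canberra leads, but the wrong answers show up plenty
```

Nothing in that loop checks whether an answer is true; the distribution is all there is. That's the design limitation no amount of extra compute changes.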
Exposing the scam
It's no coincidence that Scam Altman™ and the other "AI" boosters are now starting to proclaim that they have "reasoning models" on the way. Yeah, okay bud. As it stands, these models seem to be perpetually in development, and you'll also notice that there is an uncanny cadence to their progress announcements, which seem to fast-follow any press critical of their models and business practices. Watch for the next announcements or future-model PR deflections to directly address whatever major flaw punctures their marketing hype, those pesky flaws that lend further credibility to the objective reality that "AI" simply isn't here.
“I asked ChatGPT” is the new “I Googled it”. And no, that is not a flattering statement.
That is, if these companies can even survive the next few years. Most of them are unprofitable, and require huge injections of capital just to keep running. And the more people use their services, the more it costs to run. In a way, "AI" is eating itself. Yay?
In the meantime, we've got the wonderful second order effects of "AI" slop that is poisoning our social media, entertainment, news, politics and more. All those "deep" LinkedIn posts you see? There's a reason they all sound the same right now. Want to tell me you're using "AI" to write for you without actually telling me? Defend the use of the em dash, a common characteristic of "AI"-generated output.
We also get to enjoy the natural resource strain these "AI" models are creating through their power-hungry and inefficient processing requirements. All of that, just to produce inaccurate, homogeneous and rather useless output that only serves to line the pockets of "AI" bigwigs and dumb down the userbase, engendering reliance at the cost of your own cognition and skills development. Me no think, me click. "I asked ChatGPT" is the new "I Googled it". And no, that is not a flattering statement.
So, the next time you see a statement about "Artificial Intelligence" and how magical it is, especially if it comes from the company making said models, I invite you to employ some critical thinking. Is this really a simulation of human intelligence, a complex and multi-faceted collection of interconnected concepts that we admittedly don't understand? Or is it someone who is asking you to believe their hype and pay them some money for the privilege of using their buggy software?

