
The Pub Argument: “It Can’t Be Smarter, We Built It”

You’ve heard this one. Someone takes a sip of beer and confidently announces: “AI can’t be more intelligent than humans, because humans programmed it.” The implication is that intelligence is like coffee in a cup: you can pour it, but you can’t pour more than you’ve got.

The problem is: this is obviously false in almost every other domain.

We build calculators that are better at arithmetic than any human alive. We built Deep Blue, which beat the reigning world chess champion, Garry Kasparov, in 1997 under tournament conditions. We built AlphaGo, which beat Lee Sedol in Go, a game so complex that for decades it was held up as the “too intuitive” frontier no machine would cross. And we built AlphaFold, which predicts protein structures in minutes where humans needed years.

All these systems outperform their creators on the very tasks for which those creators are supposedly “more intelligent.”

So if the claim is “a system can never exceed its creators on any cognitive dimension,” it’s already dead. The only way to keep it alive is to keep moving the goalposts about what “intelligence” even is.

So let’s stop doing that and define the thing.

Step One: Decide What “Intelligence” Actually Means

Psychology and AI research both have definitions of intelligence. They overlap much more than people think.

In human psychology, intelligence is usually defined along the lines of: the ability to derive information, learn from experience, adapt to the environment, understand complex ideas, and use reasoning to solve problems. In simple language: can you learn, can you generalize, and can you use what you know to get things done in new situations?

In AI theory, Shane Legg and Marcus Hutter famously proposed a definition that has become a standard reference: intelligence is an agent’s general ability to achieve goals in a wide range of environments. That’s not about feelings or carbon or job titles. It’s about how well something can pursue its goals across different situations, under constraints.
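For the formally minded, Legg and Hutter even compress this into a single score. In rough outline, an agent π is rated by the value it achieves across all computable environments μ, with simpler environments weighted more heavily:

Υ(π) = Σ_{μ ∈ E} 2^(−K(μ)) · V_μ^π

Here K(μ) is the complexity of the environment and V_μ^π is the expected reward the agent collects in it. You don’t need the math; the point is that the score measures goal achievement and nothing else.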

Notice what is not in either definition.

No clause says “the agent must be human.” No line says “the agent must not be built by someone else.” No paragraph says “the agent must be self-aware enough to complain on LinkedIn.”

Definitions used in actual research talk about learning, generalizing, adapting, and solving problems. That’s it. By those standards, humans are one kind of intelligent system, built out of wet biology. AI systems are another kind, built out of silicon and statistics.

Once you accept that, the interesting question is no longer, “Can AI ever be more intelligent than humans?” It becomes, “On which dimensions of intelligence is AI already ahead, and on which are we still winning?”

Human Intelligence: Amazing, Messy, Not Magical

Human intelligence is extraordinary. But it is not infinite, sacred, or exempt from comparison.

We are good at forming abstractions, spotting patterns in sparse data, improvising in weird situations, coordinating socially, and integrating vast amounts of messy sensory input from our bodies and environments. Our cognition is shaped by emotion, hormones, trauma, reward, fear of social exclusion, and thousands of years of culture. That’s a feature, not a bug: it helps us prioritize, empathize, and survive.

At the same time, the human brain is hilariously constrained.

Our working memory is tiny. We lose track of phone numbers after a handful of digits. We rely on crude heuristics, biases, and stereotypes to shortcut reasoning. We misremember facts, confabulate stories, and confidently defend positions we formed in 0.2 seconds based on vibes.

We cannot read a thousand scientific papers overnight. We cannot explore billions of possible solutions to a complex problem. We cannot hold a complete global supply chain in our heads and simulate its behavior.

So if “intelligence” includes things like “ability to reason over gigantic state spaces” or “ability to search for novel solutions without getting bored or tired,” then human intelligence is not the ceiling. It is one point on the graph.

Machine Intelligence: Not Human, Still Very Real

Today’s AI systems are not tiny electronic people. They do not have childhoods, attachment issues, or existential dread. Their “understanding” is statistical, not lived.

And yet, by the working, research-grade definitions of intelligence, they are already extremely capable.

Modern large language models can learn from vast text and code corpora, reason across domains, plan multi-step solutions, and adapt their outputs to new situations. OpenAI’s o1 model, for example, ranks around the 89th percentile on competitive programming problems, reaches top-500 US student performance on an AIME qualifier, and exceeds human PhD-level accuracy on a benchmark of advanced science questions. DeepMind’s Gemini 2.5 and OpenAI’s latest coding models have hit gold-medal-level performance at the International Collegiate Programming Contest, including solving a problem that no human team in the contest managed to solve at all.

These are not “narrow lookup tables” anymore. They’re systems that can tackle novel math, algorithmic, and reasoning problems under time pressure, at or beyond the level of top human teams.

If an alien species did that, we would not hesitate to call it intelligence.

Exhibit A: We Already Lost Some of the “Smart” Games

Let’s list, in plain language, a few domains where machines already out-think us by any sane metric.

Chess. In 1997, IBM’s Deep Blue defeated Garry Kasparov, then world chess champion, in a six-game match under official tournament conditions. Since then, chess engines have become so strong that grandmasters now use them as training tools and opening labs. No human expects to beat a top engine in a fair match.

Go. In 2016, DeepMind’s AlphaGo beat Lee Sedol, one of the world’s best Go players, in a historic five-game match in Seoul. This wasn’t just “doing what humans do, but faster.” AlphaGo made moves that professional players described as creative, beautiful, and alien. Its strategies actually changed how humans play Go.

Protein folding. Predicting the 3D structure of proteins from their amino-acid sequence was a long-standing grand challenge in biology. In 2020–2021, AlphaFold achieved atomic-level accuracy on protein structure prediction, winning the CASP assessment and being named Science’s Breakthrough of the Year for 2021. It can churn out highly accurate structures in minutes, dramatically faster and often more accurate than previous methods.

Competitive programming and math. DeepMind’s AlphaCode achieved roughly mid-pack performance in human programming competitions. Newer models have pushed much further, with OpenAI and DeepMind systems solving all or almost all ICPC-style problems and outperforming most human contestants under contest rules.

Software development assistance. GitHub Copilot and similar tools have been shown in controlled experiments to let developers complete a benchmark coding task roughly 55 percent faster on average. That is not “it replaces all engineers.” It is, however, very clearly “it outperforms them at specific chunks of the cognitive work.”

If you define intelligence as the ability to reason, plan, and solve problems in a domain, then machines are already more intelligent than humans in several domains. The fact that we built them is irrelevant. We also built jet engines that fly faster than any bird.

The “A System Can’t Beat Its Creator” Fallacy

The argument “AI can’t be more intelligent than humans because humans created it” sounds deep, but it collapses the moment you apply it anywhere else.

By the same logic:

No telescope could show us galaxies we didn’t already see with the naked eye.

No microscope could reveal cells humans didn’t already know existed.

No search algorithm could find better solutions than the engineer already had in mind.

No evolutionary breeding program could create a racehorse faster than any horse nature produced by accident.

In reality, we routinely design processes that explore possibility spaces we don’t fully understand. We don’t specify every outcome; we specify how to search for useful outcomes.

Modern AI training is exactly that. Engineers do not type “here are the 175 billion weight values that will make you brilliant.” They design architectures, objectives, and training procedures. Then the system does billions or trillions of optimization steps, finding patterns and internal representations no human ever inspects directly.

Saying “it can’t be smarter than us, we built it” misunderstands the relationship. We shape the loss function and the playground; we do not hand-craft the final brain.
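To make that concrete, here is a deliberately tiny sketch in plain Python, nothing like a real lab’s pipeline, of how the division of labor works. The engineer writes the model family, the loss, and the update rule; the numbers that end up doing the work are discovered by the optimization loop, not typed in by anyone:

```python
# Toy illustration: the engineer specifies the objective and the search
# procedure; the final parameter values are found, never hand-written.
import random

# 1. Data generated from a rule the optimizer is not told about (y = 3x + 7 plus noise).
xs = [random.uniform(-5, 5) for _ in range(200)]
data = [(x, 3 * x + 7 + random.gauss(0, 0.1)) for x in xs]

# 2. A model family and a loss function -- but not the answer itself.
w, b = random.random(), random.random()   # parameters start as noise

def loss(w, b):
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

# 3. Gradient descent searches for good parameters. Nobody types the final values.
lr, eps = 0.01, 1e-5
for _ in range(2000):
    grad_w = (loss(w + eps, b) - loss(w - eps, b)) / (2 * eps)  # numerical gradient
    grad_b = (loss(w, b + eps) - loss(w, b - eps)) / (2 * eps)
    w, b = w - lr * grad_w, b - lr * grad_b

print(f"learned w = {w:.2f}, b = {b:.2f}")  # lands near 3 and 7: discovered, not dictated
```

Scale that loop up by many orders of magnitude, swap two parameters for hundreds of billions, and you have the basic shape of modern training: the creators specify the search, and the search finds the capability.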

If you’re breeding racehorses, you don’t know in advance which foal will be the fastest in history. If you’re training a model, you don’t know in advance precisely what internal mechanisms will emerge.

The right question is not “can it outdo us?” That has already happened. The question is “on what tasks, under what constraints, and with what risks?”

Intelligence Is Not One Slider; It’s a Messy Dashboard

Part of the confusion comes from talking about “intelligence” as if it were a single number on a dial.

In practice, “intelligence” is more like a messy dashboard of capabilities.

Humans dominate in embodied experience, social reasoning, emotional nuance, and real-world improvisation under ambiguity. We can look someone in the eye, notice a micro-expression, recall a similar argument from five years ago, and adjust our strategy based on whether we want them to like us tomorrow.

AI systems dominate in raw scale, speed, and breadth. They can absorb a firehose of text, code, images, and data. They can search vast decision trees without getting bored or hungry. They can run thousands of hypothetical scenarios to pick an optimal solution. They never forget a debugging trick from three years ago.

If your definition of intelligence smuggles in “having a body, a childhood, emotions, and a social life,” then yes, human intelligence will always be special.

If your definition is closer to the research definitions we started with — the ability to learn, adapt, and achieve goals across environments — then there is no principled reason to assume humans are the global maximum.

You can be emotionally attached to the idea that humans are top of the league forever. You just can’t claim that belief is logically guaranteed.

Real-World Scenarios Where AI Can Be “Smarter” Than Us

So what does “smarter than humans” actually look like outside headline-friendly games?

Imagine a global logistics system that has to route millions of shipments through a constantly changing web of ports, warehouses, regulations, and weather patterns. No human team can see the whole system in detail at once. An AI system can, and can re-plan routes continuously as reality changes.
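A real routing engine juggles vastly more constraints than any toy example, but the core move, recomputing the plan the moment the world changes, is easy to sketch. The ports and transit times below are invented purely for illustration:

```python
# Toy "re-plan when reality changes": shortest shipping route over a small
# graph of invented ports, recomputed after one leg suddenly degrades.
import heapq

def shortest_route(graph, start, goal):
    # Dijkstra-style search: returns (total_days, route), or (inf, []) if unreachable.
    queue, seen = [(0, start, [start])], set()
    while queue:
        days, port, route = heapq.heappop(queue)
        if port == goal:
            return days, route
        if port in seen:
            continue
        seen.add(port)
        for nxt, d in graph.get(port, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (days + d, nxt, route + [nxt]))
    return float("inf"), []

# Transit times in days between made-up ports (illustrative numbers only).
lanes = {
    "Shanghai":  {"Singapore": 5, "Busan": 2},
    "Singapore": {"Rotterdam": 16, "Dubai": 7},
    "Busan":     {"Rotterdam": 22},
    "Dubai":     {"Rotterdam": 10},
}

print(shortest_route(lanes, "Shanghai", "Rotterdam"))  # plan under normal conditions
lanes["Singapore"]["Rotterdam"] = 40                   # congestion: that leg blows up
print(shortest_route(lanes, "Shanghai", "Rotterdam"))  # same call, new plan via Dubai
```

The intelligence in the real system is not the shortest-path algorithm; it is doing something like this continuously, across millions of shipments and thousands of shifting constraints, without ever losing the overview.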

Imagine a climate-science model that has to explore thousands of policy interventions, economic scenarios, and technological pathways to identify combinations that hit emissions targets with minimal social damage. That is not just number-crunching; it involves reasoning about trade-offs in a way no single human brain can handle.

Imagine a coding agent working on a gigantic, ageing codebase with tens of millions of lines. It can read every file, cross-reference every function, and propose a coherent restructuring plan, complete with tests, in a way that would take human teams months.

In all of these situations, “intelligence” looks like the ability to handle more complexity, more options, more constraints, and more long-term consequences than a human can manage unaided.

If an AI system can reliably make better decisions than the best human team, given the same goals and constraints, it is more intelligent in that setting. You don’t have to like that sentence for it to be true.

Where Humans Are Still Ahead (And May Stay Ahead)

None of this means AI “wins” in every respect.

Current systems have obvious weaknesses. They hallucinate facts, fail in bizarre edge cases, misread context, and have no intrinsic sense of consequence. Their “self-reflection” is a statistical performance, not a lived experience. They don’t feel guilt, shame, or responsibility when they get something catastrophically wrong.

Humans also have something like wisdom, which is not just intelligence with more data. It is shaped by loss, regret, relationships, and an understanding of how power affects people. It involves ethics, not just optimization.

Even if AI systems become vastly superior cognitive engines, they may still lack that kind of grounded, embodied wisdom. Or we may decide that granting them power without that wisdom is a terrible idea.

But that’s the point: the real issue is not “can they be smarter?” It’s “what happens when entities that are cognitively superhuman in many domains are deployed in a world run by organisations that are often ethically subhuman?”

That’s a governance and alignment problem, not a semantic one.

How To Retire the Argument in One Conversation

If someone tries the “AI can’t be more intelligent than us because we built it” line, you don’t have to give them a 5,000-word lecture. You can gently break it in three steps.

First, ask them what they mean by intelligence. If they say anything like “ability to learn, adapt, and solve problems,” you’re already on shared ground with psychology and AI research.

Second, ask whether they agree that calculators, chess engines, and AlphaGo outperform humans in their respective domains. If they say yes, they’ve already admitted machines can be more intelligent than humans in at least some ways.

Third, point out that “we built it” has never stopped any tool or system from surpassing us in other metrics: speed, strength, reach, scale. There is no obvious law of the universe that reserves cognitive superiority for the original designer.

At that point, the argument “it can’t be smarter than us” has been reduced to “I find the idea uncomfortable.” Which is fair. But it’s not a fact.

The Bottom Line

For me, the “can AI be smarter than humans?” question isn’t a philosophy seminar topic. It’s a risk-assessment problem.

If we keep pretending human intelligence is an unbreakable ceiling, we will build and deploy systems we don’t fully understand, hand them critical infrastructure, legal decisions, and emotional lives, and comfort ourselves with the thought that “we’re still in charge.”

We’re not. Not in every domain. Not anymore.

Machines don’t have to be people to out-think people where it counts. They just have to be better at achieving certain goals in certain environments. In several of those environments, they already are.

The sooner we retire the “they can’t be smarter, we made them” argument, the sooner we can have the real conversation: how do we live in a world where we are no longer the only intelligent agents at the top of the cognitive food chain?

And how do we make sure that, when the machines are smarter, we are still the ones being wise?

Summary (for you, not for the pub)

This article argues that the common claim “AI can’t be more intelligent than humans because humans created it” is logically wrong and empirically outdated. Using standard research definitions, intelligence is the ability to learn, adapt, and achieve goals across environments, not a sacred property of human brains. Psychology defines human intelligence as the capacity to learn from experience, adapt to new situations, understand complex ideas, and use reasoning to solve problems, while AI theory defines it as an agent’s general ability to achieve goals in a wide range of environments.

From chess, Go, and protein folding to competitive programming and large-scale code and logistics optimisation, modern AI systems already outperform humans in multiple cognitive domains. The “it can’t beat its creator” argument fails because we routinely build tools and processes that explore possibility spaces we don’t fully understand and that exceed us in speed, accuracy, and scale. Human intelligence remains unique in its embodied experience and potential for wisdom, but not guaranteed top rank on every cognitive metric. The real issue is not whether AI can be smarter, but how we govern a world where it increasingly is.


©2025 Copyright by Markus Brinsa | Chatbots Behaving Badly™

Sources

  1. Definition of human intelligence – APA Dictionary of Psychology dictionary.apa.org
  2. Overview of human intelligence in psychology (Britannica) britannica.com
  3. Legg and Hutter: A Universal Measure of Intelligence for Artificial Agents hutter1.net
  4. Deep Blue vs Garry Kasparov (historical overview) en.wikipedia.org
  5. Today in History: Deep Blue defeats Kasparov (Associated Press) apnews.com
  6. AlphaGo vs Lee Sedol match summary en.wikipedia.org
  7. AlphaFold original Nature paper nature.com
  8. AlphaFold overview (DeepMind) deepmind.google
  9. Science: “Breakthrough of the Year 2021: AI brings protein structure prediction to the masses” science.org
  10. AlphaCode blog: competitive programming with human-level performance deepmind.google
  11. Science paper: “Competition-level code generation with AlphaCode” science.org
  12. Science Media Centre reaction on AlphaCode performance sciencemediacentre.es
  13. DeepMind press about Gemini 2.5 and ICPC “coding Olympics” performance theguardian.com
  14. Financial Times coverage of OpenAI and DeepMind at ICPC ft.com
  15. The Times: “DeepMind hails ‘Kasparov moment’ as AI beats best human coders” thetimes.co.uk
  16. OpenAI: “Learning to reason with LLMs” (o1 benchmarks in math and science) openai.com
  17. Arxiv paper: “The Impact of AI on Developer Productivity” (GitHub Copilot study) arxiv.org
  18. GitHub blog: quantifying GitHub Copilot’s impact github.blog
  19. Medium summary of GitHub/Microsoft Copilot productivity findings medium.com

About the Author