
AI Claims of Human-Level Intelligence Unveiled: Reliance on Massive Rules Over True Reasoning

This article critiques AI giants' claims of nearing human-level intelligence. Recent research shows that current models rely on massive rule memorization rather than genuine reasoning, leaving them brittle in navigation, mathematics, and adaptation to novelty. Understanding these limitations, the researchers argue, is essential for future progress.

Cognitive Technology Team

AI giants claim to be approaching human-level intelligence, but research reveals the underlying reality: these systems depend on vast stores of rules rather than genuine reasoning, and remain far from flexible world models. Real innovation still awaits a breakthrough.

The leading AI companies—OpenAI, Anthropic, and Google—continue to proclaim that AI is on the brink of achieving human-level intelligence. However, skepticism is growing. Researchers have found that AI's "thinking" differs fundamentally from human cognition and may have inherent limitations. Melanie Mitchell, a professor at the Santa Fe Institute who studies AI, notes that current model architectures appear intrinsically constrained: they do not build an understanding of the world as humans do, but instead learn massive collections of experience-based rules and mechanically respond to information.

This approach contrasts sharply with how humans and even animals reason about the world and predict the future. Biological beings construct world models that incorporate causal relationships. A person who sees an obstacle, for example, will immediately adjust their route, whereas AI may be helpless. Keyon Vafa, an AI researcher at Harvard, shared an illustrative example. His team trained an AI on navigation data from Manhattan streets. The resulting "map" was chaotic: routes jumped randomly across Central Park or cut through several blocks, clearly unrealistic. Yet surprisingly, the system still provided navigation guidance from start to destination with 99% accuracy. It had not understood Manhattan's layout; instead, it had memorized a specific set of rules for each possible start-end pair. Vafa described this disordered problem-solving as brute force: the AI's massive "brain" and enormous compute let it pull off feats that humans find difficult.

The flaw in this "thinking" became evident when Vafa's team blocked 1% of the virtual Manhattan roads, forcing the AI to detour: its performance collapsed dramatically. This reveals a huge gap between AI and humans. Humans may not be able to recite every New York street, but they far exceed AI in flexibility when faced with unexpected situations. Vafa lamented that AI appears intelligent but is essentially a patched-together "Rube Goldberg machine" full of temporary fixes.
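The contrast between a memorized route list and a genuine model of the streets can be sketched in a toy grid world. This is an illustrative analogy, not the researchers' actual setup: a fixed lookup table of routes (built when all roads were open) stands in for the AI's per-pair rules, while a graph search that consults the current map stands in for a world model that can replan around closures.

```python
from collections import deque

GRID = 4  # a toy 4x4 "city": nodes are (row, col), edges connect neighbors

def neighbors(node, blocked):
    """Yield adjacent intersections reachable over non-blocked road segments."""
    r, c = node
    for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
        if 0 <= nr < GRID and 0 <= nc < GRID:
            if frozenset({node, (nr, nc)}) not in blocked:
                yield (nr, nc)

def bfs_route(start, goal, blocked=frozenset()):
    """World-model-style planner: searches the current graph, so it adapts."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1], blocked):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no route exists

start, goal = (0, 0), (3, 3)

# "Memorized" navigator: one stored route per (start, goal) pair,
# computed while every road was open -- analogous to the AI's rule list.
memorized = {(start, goal): bfs_route(start, goal)}

# Now close one road segment (the analogue of blocking 1% of streets).
blocked = frozenset({frozenset({(0, 0), (1, 0)})})

stored = memorized[(start, goal)]
# The stored rule blindly reuses a now-blocked segment...
uses_blocked = any(frozenset({a, b}) in blocked
                   for a, b in zip(stored, stored[1:]))
# ...while replanning against the current map finds a valid detour.
replanned = bfs_route(start, goal, blocked)
```

The lookup table answers instantly but has no way to notice that the world changed; the planner pays the cost of a search on every query and in exchange stays correct when roads close—roughly the trade-off the Manhattan experiment exposed.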

AI's mathematical abilities expose similar problems. Studies have shown that large language models handle numbers extremely inefficiently. They learn completely independent rules for different numeric ranges—for instance, multiplication of numbers from 200 to 210—much like a student memorizing the answer to each problem instead of mastering the underlying operation. Mitchell argues in a series of articles that AI seems to be building a gigantic "bag of heuristics" rather than forming concise mental models. Heuristics are shortcuts for problem solving, she explains, but AI accumulates so many of them that they stop being efficient shortcuts at all.
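The "bag of heuristics" idea can be made concrete with a deliberately simplistic sketch (my own illustration, not code from the cited studies): one function memorizes a separate answer for every pair in a narrow range, the way range-specific LLM rules do, while the other applies a single compact procedure that covers all inputs.

```python
# Bag-of-heuristics style: a separate memorized answer for every pair in a
# narrow range (here 200-210). Covering all numbers this way would require
# an enormous number of such tables -- one "rule" per case.
heuristic_table = {(a, b): a * b
                   for a in range(200, 211) for b in range(200, 211)}

def multiply_by_lookup(a, b):
    """Returns the memorized product, or None outside the memorized range."""
    return heuristic_table.get((a, b))

def multiply_by_algorithm(a, b):
    """One compact procedure (schoolbook long multiplication, nonnegative
    integers) handles every input -- the 'concise mental model'."""
    result = 0
    for shift, digit in enumerate(reversed(str(b))):
        result += a * int(digit) * 10 ** shift
    return result

# Inside the memorized range both agree; outside it, only the algorithm works.
in_range = multiply_by_lookup(205, 207)      # memorized rule applies
out_of_range = multiply_by_lookup(312, 498)  # no rule memorized -> None
general = multiply_by_algorithm(312, 498)    # the procedure still works
```

The lookup table mirrors Mitchell's point: each entry is a shortcut that works only for the exact case it was memorized for, so the table must grow without bound, whereas the algorithm compresses the whole task into a few lines.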

In the past, ChatGPT and its rivals were mysterious black boxes. They generated results through training rather than explicit programming, encoding information in massive parameter networks in ways that are difficult for humans to interpret. Yet emerging research on mechanistic interpretability is pulling back the curtain. Scientists are developing new tools to peek into how AI processes mathematics, plays games, or navigates. These findings cause many to doubt claims that AI is approaching artificial general intelligence (AGI). Mitchell points out that anthropomorphic language—such as describing AI as “reasoning” or “understanding”—can mislead the public.

Why do AI systems require such enormous models and data? Humans can learn new skills after only a few attempts, whereas AI must repeatedly “see” countless combinations of text, images, or board states to distill rules. This also explains why AI from different companies exhibit convergent performance and may have already hit a performance ceiling. Vafa’s research shows that AI’s “knowledge” cannot be compressed into a compact model like human understanding; it relies on lengthy rule lists.

These limitations are not a new topic. In 1970, Marvin Minsky at MIT predicted that computers would achieve ordinary human intelligence within three to eight years. Last year, Elon Musk claimed AI would surpass humans by 2026. Sam Altman also wrote in a blog that the dawn of AGI is near and history is turning a new page. Anthropic’s chief safety officer warned that virtual employees would enter U.S. companies within a year. Yet history shows that optimistic AI forecasts often fall short.

Nevertheless, AI’s potential is undeniable. Software developers are exploring how to harness these impressive systems to boost productivity. Even if AI’s “intelligence” may be nearing its ceiling, optimization continues. Jacob Andreas of MIT proposed in a paper that understanding language‑model limitations can inspire new training methods, making AI more accurate, trustworthy, and controllable. For example, researchers are trying to make AI simulate a human scientist’s collaboration, generating hypotheses and evaluating them, similar to Google’s “AI co‑scientist” system.

The future of AI may lie not in replacing human thought but in becoming a powerful assistant. Its “thinking” may be merely a stack of memory and rules, yet that is sufficient to solve many problems. As Vafa notes, AI’s success stems from brute force rather than elegance. Recognizing its constraints may be the first step toward making it better.

Artificial Intelligence · machine learning · large language models · model interpretability · AI limitations