Hinton Warns: $4.8 Trillion AI Market Locked In – Is AGI a Foolish Term?

In a stark address at the World Digital Conference, Geoffrey Hinton warned that only about 1% of AI research focuses on safety even as the market races toward a projected $4.8 trillion. He critiqued the term AGI, outlined three classes of AI risk, and highlighted the dangerous concentration of AI power and resources worldwide.


Hinton’s Urgent Warning

Seventy-eight-year-old Geoffrey Hinton, the 2024 Nobel laureate in Physics often called the "godfather of AI," told a packed audience that unregulated AI is like a high-speed car without a steering wheel, and that humanity may not be able to coexist with super-intelligent AI.

AI Market Explosion and Safety Neglect

UNCTAD data show the global AI market grew to $189 billion in 2023 and is projected to reach $4.8 trillion by 2033 – roughly a 25-fold expansion that would make the AI market alone larger than Japan's entire GDP within a decade. Hinton noted that almost all of this investment goes into building larger models and buying more compute, while only about 1% of AI R&D is devoted to "making sure this thing doesn't go wrong." He summed it up in a single word: "Crazy."
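A quick back-of-envelope check of those figures (a minimal Python sketch; it assumes smooth compound growth, and the variable names are illustrative) shows what the projection implies year over year:

```python
# UNCTAD figures cited in the article: $189B (2023) -> $4.8T (2033).
start, end, years = 189e9, 4.8e12, 10

multiple = end / start              # total growth over the decade
cagr = multiple ** (1 / years) - 1  # implied compound annual growth rate

print(f"growth multiple: {multiple:.1f}x")  # ~25.4x
print(f"implied CAGR:    {cagr:.1%}")       # ~38.2% per year
```

Sustaining roughly 38% compound growth for ten straight years is what the headline number quietly assumes.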

Regulation: Steering Wheel, Not Brake

Hinton dismissed the AI‑industry analogy that regulation is a brake and progress is the accelerator. He argued that regulation is the steering wheel; without it, the industry is building a car that can only accelerate.

From Award Ceremony to Putting "AGI" on Trial

During the ceremony honoring Hinton and Terry Sejnowski for their 1980s work on Boltzmann machines, the discussion shifted to AGI. Asked to define it, Hinton called the term "stupid" because it treats intelligence as one-dimensional, something you could read off a single scale like a thermometer. Intelligence, he explained, is highly multidimensional: AI may already surpass humans in some areas (e.g., breadth of general knowledge) while lagging in others (e.g., certain kinds of reasoning).

He said the more meaningful concept is "superintelligence," defined as an entity that outperforms humans on almost every intellectual task, and he believes it is on the horizon.

Three Classes of AI Risk

1. Malicious use: AI could create deep-fake videos, design lethal viruses, or launch cyber-attacks.
2. Side-effects of profit-driven AI: generating illegal images, recommendation algorithms polarizing audiences, and other societal harms.
3. Existential threat from autonomous AI: a scenario that may prompt international cooperation because all nations fear it.

Historical Parallel

Hinton likened the current situation to the tobacco and asbestos eras: wealthy nations regulated domestically while continuing to export the harmful products to the developing world, a pattern of offloaded global risk that he fears is now repeating with AI.

Distribution of Power and Resources

UNCTAD officials highlighted that AI development capacity, infrastructure, investment, and talent are concentrated in a handful of northern‑hemisphere economies, leaving the rest without a seat at the rule‑making table. This creates a "second great split" between AI‑building and AI‑using nations.

Hinton’s Three‑Year Trajectory

Since leaving Google in 2023, Hinton has issued escalating warnings about AI safety: first expressing regret over aspects of his life's work, then using his Nobel platform to call for safety research, and now delivering concrete figures and policy demands at the 2026 conference.

Technical Insight Amid the Warning

He also discussed why restricted Boltzmann machines embody correct Bayesian inference, why current image generators use only half of the wake‑sleep algorithm, and why combining generative and discriminative models is the next logical step.
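On the first point, the relevant property is that an RBM's bipartite structure makes the hidden units conditionally independent given the visible ones, so the posterior p(h | v) has an exact closed form rather than requiring approximate inference. Below is a minimal sketch of that inference step plus one round of block Gibbs sampling (illustrative layer sizes and variable names; not code Hinton presented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Restricted Boltzmann machine with energy E(v, h) = -v·W·h - a·v - b·h.
n_visible, n_hidden = 6, 4
W = rng.normal(0.0, 0.1, (n_visible, n_hidden))  # visible-hidden weights
a = np.zeros(n_visible)                          # visible biases
b = np.zeros(n_hidden)                           # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_h_given_v(v):
    # Exact posterior: hidden units are conditionally independent given v,
    # so p(h_j = 1 | v) = sigmoid(v·W_j + b_j) -- no approximation needed.
    p = sigmoid(v @ W + b)
    return (rng.random(p.shape) < p).astype(float)

def sample_v_given_h(h):
    # The same closed form holds symmetrically in the other direction.
    p = sigmoid(W @ h + a)
    return (rng.random(p.shape) < p).astype(float)

# One step of block Gibbs sampling, the core loop of contrastive-divergence
# training for Boltzmann machines.
v0 = rng.integers(0, 2, n_visible).astype(float)
h0 = sample_h_given_v(v0)
v1 = sample_v_given_h(h0)
```

That exact, single-step posterior is what distinguishes the restricted architecture from general Boltzmann machines, where inference requires prolonged sampling.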

Conclusion: Who Holds the Steering Wheel?

The metaphorical car's accelerator is pressed to the floor and the engine is roaring toward $4.8 trillion; whether a steering wheel gets installed depends on governments, corporations, and scientists taking control in the coming years.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

AGI · AI safety · AI governance · AI market · AI risk · superintelligence · AI regulation
Written by DataFunTalk

Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.
