Cerebras Unveils the World’s Largest AI Chip – Nvidia’s Newest Rival

Cerebras, a California chip maker founded in 2015, announced a wafer‑scale engine with 1.2 trillion transistors on a single 12‑inch wafer, secured over $10 billion in orders from OpenAI and AWS, and launched an IPO priced at $115–$125 per share that raised $3.5 billion, positioning itself as a formidable challenger to Nvidia in massive AI‑training workloads.

Architects' Tech Alliance

In the AI‑chip arena, Cerebras has emerged as a bold contender. Founded in 2015 in California, the company takes a radically different approach by fabricating a single 12‑inch wafer into one chip, embedding roughly 1.2 trillion transistors. This wafer‑scale engine (WSE) eliminates the traditional practice of cutting a wafer into hundreds of smaller chips.

The design targets ultra‑large AI models, offering lower latency and higher throughput than Nvidia can easily match in this niche. According to the article, OpenAI and AWS placed a combined order exceeding $10 billion for 750 MW of compute power, effectively reserving an entire super‑scale AI compute center for Cerebras.
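To put the order in perspective, a rough sanity check of the quoted figures can be sketched as follows (this assumes, purely for illustration, that the >$10 billion order maps directly onto the 750 MW of contracted capacity; the article does not break the deal down this way):

```python
# Back-of-the-envelope check of the figures quoted in the article.
# Assumption: the full >$10B order corresponds to the 750 MW of compute.
order_usd = 10e9      # combined OpenAI/AWS order (lower bound, USD)
capacity_mw = 750     # contracted compute power in megawatts

usd_per_mw = order_usd / capacity_mw
print(f"Implied price: ${usd_per_mw / 1e6:.1f}M per MW")  # ≈ $13.3M per MW
```

At roughly $13 million per megawatt, the figure illustrates why a single deal of this size can anchor an entire compute center.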

Cerebras announced its IPO with a price range of $115–$125 per share, issuing 28 million shares to raise $3.5 billion and achieving a valuation of $26.6 billion. Although the company posted a loss in 2024, it projected a turnaround in 2025 with revenue of $5.1 billion—a 76% year‑over‑year increase—and earnings of $1.38 per share, marking its first profitable year.
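The IPO figures above are internally consistent, which a few lines of share‑count arithmetic make explicit (all inputs are the article's numbers; the calculation itself is the only thing added here):

```python
# Cross-check of the IPO and revenue figures reported in the article.
shares = 28_000_000
price_low, price_high = 115, 125  # per-share price range in USD

proceeds_low = shares * price_low    # gross proceeds at the bottom of the range
proceeds_high = shares * price_high  # gross proceeds at the top of the range
print(f"Gross proceeds: ${proceeds_low / 1e9:.2f}B - ${proceeds_high / 1e9:.2f}B")

# The quoted $3.5B raise corresponds to pricing at the top of the range.
revenue_2025 = 5.1e9  # projected 2025 revenue
growth = 0.76         # 76% year-over-year increase
implied_2024 = revenue_2025 / (1 + growth)
print(f"Implied 2024 revenue: ${implied_2024 / 1e9:.2f}B")
```

The $3.5 billion raise lines up with the top of the price range, and the projected 2025 revenue implies roughly $2.9 billion of revenue in 2024.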

The core advantage lies in the WSE’s wafer‑level integration: no cutting, no splicing, a single chip that processes massive workloads more efficiently. In large‑model training and extreme‑scale compute scenarios, this architecture delivers superior performance, though Nvidia still dominates the broader training and inference market.

The article likens the industry’s typical chip‑making to “cutting a watermelon,” whereas Cerebras “eats the whole watermelon.” This metaphor underscores its strategy of leveraging a gigantic, monolithic chip to carve out a niche in the AI‑hardware landscape.

Overall, Cerebras’ massive chip and aggressive IPO signal the start of a new competitive dynamic in AI hardware, positioning it as perhaps the most audacious “rebellion” against Nvidia’s dominance.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Nvidia competition · AI chips · IPO · Large-scale AI training · Cerebras · Wafer-scale engine
Written by

Architects' Tech Alliance

Sharing project experiences, insights into cutting-edge architectures, focusing on cloud computing, microservices, big data, hyper-convergence, storage, data protection, artificial intelligence, industry practices and solutions.
