Gemini 3 vs Claude Code: Which AI Generates a Better 3D Billiards Game?

This article introduces Google's Gemini 3 series and four free access channels, walks through using Google AI Studio, Antigravity IDE, and Gemini CLI, then conducts a hands‑on benchmark comparing Gemini 3 and Claude Code on generating a 3D HTML billiards game, analyzing speed, code quality, and execution results.

Java Architecture Diary

Google Gemini 3 release and free access channels

Google DeepMind launched the Gemini 3 AI model series and opened four free access channels for developers:

Google AI Studio – browser‑based platform for conversational interaction and prompt templates.

Google Antigravity IDE – VS Code‑based AI code editor supporting multiple models.

Gemini Web – official Gemini website.

Gemini CLI – command‑line interface for terminal‑oriented usage.

Google AI Studio

AI Studio is a no‑install web UI (https://aistudio.google.com). After signing in with a Google account, users can start a chat or choose from preset prompt templates. High traffic may cause congestion; for stable access, use the Gemini API or a paid subscription.
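For programmatic access via the Gemini API, the request is a plain JSON POST. The sketch below builds the URL and body for a single‑turn text prompt; the `v1beta` endpoint shape is based on the public Generative Language API, and the model name and helper are illustrative assumptions, not part of the article's test setup.

```javascript
// Sketch of a direct REST call to the Gemini API (assumption: v1beta
// generateContent endpoint shape; model name and helper are illustrative).
const API_BASE = "https://generativelanguage.googleapis.com/v1beta";

// Build the request URL and JSON body for a single-turn text prompt.
function buildGenerateContentRequest(model, prompt, apiKey) {
  return {
    url: `${API_BASE}/models/${model}:generateContent?key=${apiKey}`,
    body: {
      contents: [{ role: "user", parts: [{ text: prompt }] }],
    },
  };
}

// Usage (requires a real API key and network access):
// const { url, body } = buildGenerateContentRequest(
//   "gemini-3-pro-preview", "Hello", process.env.GEMINI_API_KEY);
// const res = await fetch(url, {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(body),
// });
```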

Google Antigravity IDE

Antigravity is an AI‑enhanced code editor built on VS Code. It is available for macOS, Windows and Linux (download from https://antigravity.google.com). During the public beta the following models are free:

Gemini 3 Pro (Google)

Claude Sonnet 4.5 (Anthropic)

GPT‑OSS (OpenAI)

Users can switch models per task. For example, use Claude Sonnet 4.5 for code generation (82.0 % on SWE‑Bench Verified) and Gemini 3 Deep Think for complex reasoning.

Installation

Visit the download page, install the package for your platform, and launch the editor. Antigravity is a standalone IDE built on VS Code, not a VS Code extension.

Gemini CLI

Installation

npm install -g @google/gemini-cli

Requires Node.js 20 or newer.

Practical benchmark: Gemini 3 vs Claude Code

Test setup

Gemini CLI using the Gemini 3 Pro Preview model.

Claude Code using Claude Sonnet 4.5.

Task: generate an HTML version of a 3D billiards game.

Prompt

帮我创建一个 HTML 版本 3D 桌球小游戏。 ("Help me create an HTML version of a 3D billiards mini‑game.")

The prompt is intentionally minimal to evaluate intent understanding.

Results – Claude Code

Generation speed: ~1 minute for complete HTML + CSS + JavaScript.

Code quality: uses Three.js for 3D rendering, includes a physics engine for realistic collisions, and provides smooth mouse‑controlled interaction.

Execution: runs correctly; interaction logic works; visual effects appear natural.
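Realistic collisions are the core of billiards physics. The sketch below shows the standard resolution step for two equal‑mass balls in 2D: the velocity components along the line of centers are exchanged, while the tangential components are kept. The function and field names are illustrative, not taken from the generated game.

```javascript
// Minimal elastic-collision step for two equal-mass billiard balls,
// the kind of physics a correct implementation has to get right.
// Names are illustrative; a ball is { x, y, vx, vy }.
function resolveCollision(a, b) {
  const dx = b.x - a.x, dy = b.y - a.y;
  const dist = Math.hypot(dx, dy);
  if (dist === 0) return;                 // coincident centers: skip
  const nx = dx / dist, ny = dy / dist;   // unit normal (line of centers)
  const va = a.vx * nx + a.vy * ny;       // normal velocity components
  const vb = b.vx * nx + b.vy * ny;
  if (va - vb <= 0) return;               // balls separating: no impulse
  const impulse = va - vb;                // equal masses: swap normal parts
  a.vx -= impulse * nx; a.vy -= impulse * ny;
  b.vx += impulse * nx; b.vy += impulse * ny;
}
```

For a head‑on shot (cue ball moving toward a stationary ball), this transfers the full normal velocity to the struck ball and stops the cue ball, which matches the familiar stop‑shot behavior.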

Results – Gemini 3

Generation speed: ~3 minutes.

Code quality: only a basic HTML skeleton, attempted Canvas 2D drawing instead of 3D, and physics simulation was inaccurate.

Execution: fails to run; interaction logic broken; no 3D effect.

Analysis

Gemini 3 struggled with task understanding, misinterpreting “3D billiards” as a 2D implementation and producing buggy physics. Claude Code correctly identified the requirement, selected Three.js, and generated ready‑to‑run code.
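The gap between the two attempts comes down to projection: rendering "3D" on a 2D canvas requires mapping every 3D point to screen coordinates, which Three.js's PerspectiveCamera handles automatically and a hand‑rolled Canvas 2D approach must implement itself. A minimal pinhole‑camera sketch of that mapping, with an assumed focal‑length convention and illustrative names:

```javascript
// Pinhole-model perspective projection: camera at the origin looking
// along +z, screen y growing downward. Illustrative sketch only.
function project(point, focalLength, width, height) {
  const scale = focalLength / point.z;     // farther points shrink
  return {
    x: width / 2 + point.x * scale,
    y: height / 2 - point.y * scale,       // flip y for screen coordinates
  };
}
```

Without this step (or a library that performs it), drawing balls as flat circles at their raw x/y positions yields exactly the "no 3D effect" result seen in the Gemini 3 output.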

Benchmark numbers (official Gemini 3 blog)

SWE‑Bench Verified: Gemini 3 Pro 76.2 % vs Claude Sonnet 4.5 82.0 %.

MMMU‑Pro: Gemini 3 Pro 81.0 % vs Claude Sonnet 77.8 %.

τ²‑bench (tool use): Gemini 3 Pro 85.4 % vs Claude Sonnet 4.5 84.7 %.

Conclusion

Claude Sonnet 4.5 remains the leading “coding expert,” maintaining the highest SWE‑Bench score (82.0 %).

Tags: AI code generation, model comparison, Gemini CLI, Claude Code, Google AI Studio, Gemini 3, Antigravity IDE
Written by

Java Architecture Diary

Committed to sharing original, high‑quality technical articles; no fluff or promotional content.
