
DeepSeek‑R1‑0528: How the New Open‑Source LLM Outperforms Gemini and Claude

DeepSeek‑R1‑0528, the latest open‑source 660B‑parameter LLM, dramatically improves coding, reasoning, and long‑context performance, matching or surpassing top models such as Gemini 2.5 Pro and Claude 4 in benchmarks and real‑world tests while delivering faster, more stable, and fully executable outputs.

Data Thinking Notes

DeepSeek‑R1‑0528 Open‑Source Release

The DeepSeek‑R1‑0528 model weights have been uploaded to Hugging Face. Built on DeepSeek‑V3‑0324 and totaling 660B parameters, the model shows a dramatic leap in coding ability and supports much longer reasoning runs.

Benchmark results: On LiveCodeBench, DeepSeek‑R1‑0528 performs on par with o3‑mini (High) and o4‑mini (Medium), surpassing Gemini 2.5 Flash.

Key highlights (summarized by the community):

Deep reasoning comparable to Google models

Improved text generation – more natural and better formatted

Unique inference style – fast yet thorough

Extended thinking time – single‑task processing up to 30‑60 minutes

Long‑thinking capability is a major discussion point; some users observed reasoning times exceeding 25 minutes.
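When the open R1 weights are run locally, the chain of thought that accounts for these long runs is emitted inside `<think>…</think>` tags ahead of the final answer. A minimal sketch for separating the two (the helper name `split_reasoning` is my own, not part of any DeepSeek tooling):

```python
import re

def split_reasoning(text):
    """Split an R1-style completion into (reasoning, answer).

    Assumes the chain of thought is wrapped in <think>...</think>
    before the final answer, as the open R1 weights emit it.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:           # no reasoning block found
        return "", text.strip()
    return match.group(1).strip(), text[match.end():].strip()

demo = "<think>2 + 2 = 4, so the sum is 4.</think>The answer is 4."
print(split_reasoning(demo))  # ('2 + 2 = 4, so the sum is 4.', 'The answer is 4.')
```

Timing the gap between the opening and closing tag is one simple way to measure how long the model "thinks" on a given task.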

Programming performance: Users report that DeepSeek‑R1‑0528 can generate correct code on the first try, outperforming Claude 4 and Gemini 2.5 Pro in front‑end and back‑end tasks.
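Readers who want to reproduce these first‑try code‑generation tests can do so through DeepSeek's OpenAI‑compatible chat endpoint. A minimal sketch of the request body, assuming the `deepseek-reasoner` model name and endpoint path from DeepSeek's API documentation (verify both before use):

```python
import json

# Endpoint and model name follow DeepSeek's published, OpenAI-compatible
# API docs; confirm them against the current documentation before use.
API_URL = "https://api.deepseek.com/chat/completions"

def build_codegen_request(prompt, temperature=0.0):
    """Return the JSON body for a single-shot code-generation call."""
    return {
        "model": "deepseek-reasoner",
        "messages": [
            {"role": "system",
             "content": "You are a careful programmer. Reply with runnable code only."},
            {"role": "user", "content": prompt},
        ],
        "temperature": temperature,
    }

body = build_codegen_request("Write a Python function that reverses a string.")
print(json.dumps(body, indent=2))  # POST this to API_URL with an API key
```

A temperature of 0.0 keeps outputs reproducible, which matters when judging whether code runs correctly on the first attempt.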

Comparisons with Gemini 2.5 Pro on research prompts, live source searches, and AI SaaS strategy show mixed results, often a tie.

Additional tests demonstrate stable, fast reasoning for tasks such as 3D animation generation, website design, physics simulations, and classic puzzles like the farmer‑fox‑goose‑beans problem.
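For reference on that last test: the farmer‑fox‑goose‑beans puzzle is a small search problem that classic breadth‑first search solves in seven crossings. A sketch of a solver (my own illustration of the puzzle, not DeepSeek's output):

```python
from collections import deque

# The farmer ferries at most one item per crossing; fox+goose and
# goose+beans must never be left on a bank without the farmer.
ITEMS = ("fox", "goose", "beans")
UNSAFE = [{"fox", "goose"}, {"goose", "beans"}]

def safe(bank):
    """A bank without the farmer must not contain an unsafe pair."""
    return not any(pair <= bank for pair in UNSAFE)

def solve():
    """BFS over (items on left bank, farmer side); returns the shortest
    crossing plan, each entry being the item carried (None = alone)."""
    start = (frozenset(ITEMS), "left")
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (left, farmer), path = queue.popleft()
        if not left and farmer == "right":      # everything across
            return path
        here = left if farmer == "left" else frozenset(ITEMS) - left
        for cargo in [None, *here]:
            new_left = set(left)
            if cargo is not None:
                if farmer == "left":
                    new_left.remove(cargo)
                else:
                    new_left.add(cargo)
            new_farmer = "right" if farmer == "left" else "left"
            behind = (frozenset(new_left) if new_farmer == "right"
                      else frozenset(ITEMS) - frozenset(new_left))
            if not safe(behind):                # leaves an unsafe pair
                continue
            state = (frozenset(new_left), new_farmer)
            if state not in seen:
                seen.add(state)
                queue.append((state, path + [cargo]))

plan = solve()
print(len(plan), plan)  # 7 crossings; the goose goes first and last
```

An LLM has to discover the same non‑obvious step a solver finds mechanically: ferrying the goose back across at one point, which is what makes the puzzle a decent reasoning probe.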

The model also handled complex relational queries and lengthy reasoning chains without interruption, suggesting that serving capacity has improved since release.

Overall, DeepSeek‑R1‑0528 represents a significant step forward for open‑source LLMs, delivering strong coding, reasoning, and multi‑step problem‑solving capabilities.

Tags: code generation, DeepSeek, Large Language Model, reasoning, AI benchmarking
Written by

Data Thinking Notes

Sharing insights on data architecture, governance, and middle platforms, exploring AI in data, and linking data with business scenarios.
