Testing Gemini 2.5 Pro’s Programming Skills with Cursor

The author evaluates Gemini 2.5 Pro’s coding capabilities inside the Cursor IDE, detailing setup steps, regional API‑key limitations, hands‑on attempts to generate a front‑end project, a comparison with Augment Code’s Sonnet 3.5 model, and overall impressions of AI‑driven code generation.

Preparation

Download the latest Cursor version that lists the Gemini 2.5 Pro model.

Obtain a Google AI API key with Gemini 2.5 Pro access.

Visit Google AI Studio and sign in.

Create a new API key, ensuring Gemini 2.5 Pro permission.

In Cursor, open Settings → “API Keys”.

Enter the key under the “Google AI (Gemini)” section.

Select “Gemini 2.5 Pro with API Key” on the Models tab.

Adjust parameters such as temperature and context window.

Save the configuration.

Note: The API key cannot be configured in mainland China due to regional restrictions.
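Before wiring the key into Cursor, it can help to sanity-check it directly against the Gemini REST API. The sketch below is illustrative only: it assumes the public `generateContent` endpoint, and the model string and `GOOGLE_API_KEY` environment variable are placeholders you may need to adjust.

```python
# Minimal sketch for sanity-checking a Google AI API key outside Cursor.
# Assumption: the public v1beta generateContent REST endpoint; the model
# name and environment variable below are illustrative.
import json
import os
import urllib.request

API_KEY = os.environ.get("GOOGLE_API_KEY", "YOUR_API_KEY")
MODEL = "gemini-2.5-pro"
URL = (
    "https://generativelanguage.googleapis.com/v1beta/"
    f"models/{MODEL}:generateContent?key={API_KEY}"
)

# generateContent expects a "contents" list of role/parts messages.
payload = {
    "contents": [
        {"role": "user", "parts": [{"text": "Reply with OK if you can hear me."}]}
    ]
}

request = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment to actually send the request (requires a valid key and
# network access from a supported region):
# with urllib.request.urlopen(request) as response:
#     print(json.load(response)["candidates"][0]["content"]["parts"][0]["text"])

print(request.full_url)  # confirm the endpoint that will be called
```

If the (commented-out) request returns an authorization or region error rather than a candidate response, the same failure will surface inside Cursor, which is why the note above about mainland China matters.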

Usage

The author fed a random web-page screenshot to Gemini 2.5 Pro and asked it to implement the page as a front-end project. The model operated only in "ask" mode, answering questions without automatically creating files or writing code; the user had to click "run" for each step.

Since the model could not start a project from scratch, the test switched to feeding it a bug issue from an existing open-source project.

After several dialogue rounds, Gemini 2.5 Pro still missed the key problem points, leading to an overall impression of average performance.

Comparison with Augment Code

When the same bug issue was given to Augment Code, which uses the Sonnet 3.5 model, it clearly identified the problem and offered concrete solutions.

Augment Code also provides an “Agent auto” mode that can control the terminal, create files, write code, manage dependencies, and run the project automatically, contrasting with Cursor’s manual “ask” workflow.

Recommendation: augmentcode.com works well for open‑source projects; caution is advised when using it on proprietary code.

Conclusion

Gemini 2.5 Pro showed no standout strengths for programming tasks; its performance was comparable to earlier AI assistants and lagged behind Augment Code's Sonnet 3.5-driven automation. The test reflects a broader shift from assistant-style tools like GitHub Copilot toward fully autonomous AI agents capable of end-to-end code generation and execution.

[1] Google AI Studio: https://aistudio.google.com/
Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: AI code generation, Cursor IDE, Programming AI, Augment Code, Gemini 2.5 Pro
Written by

Infra Learning Club

Infra Learning Club shares study notes, cutting-edge technology, and career discussions.