
Comparison and Experience Sharing of AI Coding Tools

In this detailed recap of a live session, the author shares personal experiences, compares six AI-powered coding assistants—including Trae, Cursor, and Augment—evaluates their speed, features, costs, and MCP support, and offers practical advice on tool selection, project workflow, and productivity techniques for developers.

Continuous Delivery 2.0

Introduction

Hello everyone, I am Qiao Liang. In this session I share the insights I gained while using various AI code editors, offering practical tips and experiences.

AI Coding Tool Comparison

I have tried many AI code editors. Below is a brief timeline of the tools I used:

Usage History

Tools such as Tongyi Lingma, MarsCode, and GitHub Copilot were tried last year; they made coding slightly faster but the overall productivity boost was modest.

ByteDance’s Trae was tried early this year; the free tier was attractive but strict request limits caused frequent queuing, leading me to abandon it.

Cursor: I used it for a while. The two‑week free trial caps requests; during peak times the quota ran out quickly, and the model’s limited memory meant I had to restate context in prompts frequently.

Augment: After a month‑long gap, I revisited Augment. This time it performed much better, aligning with Cursor’s speed, adding memory capabilities, and even surpassing Cursor in some aspects.

Key comparison points:

Response speed: Trae is the slowest; Cursor and Augment are comparable, though Augment may lag at night and return 503 errors.

Feature details: Cursor shows code modifications directly in the chat (agent auto‑run mode), which I love; Augment requires a manual click to view changes and feels slower.

Cost: Trae is currently free without a time limit. Both Cursor and Augment offer two‑week free trials; Cursor’s free quota is smaller, while Augment’s token limit matches the backend model’s maximum.

MCP support: Trae does not yet support MCP Server, whereas Cursor and Augment both do. My MCP Server set includes Fetch, Tavily, Sequential‑thinking, and Software‑planning‑tool.

In Cursor I often use the Sequential‑thinking and Software‑planning MCPs together to generate detailed work plans; Augment can achieve similar results without these MCPs.
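For reference, an MCP server set like the one above is typically wired up through a JSON configuration file (in Cursor, an `mcp.json` file). The sketch below is illustrative only — the package names and launch commands are assumptions, not details from the talk:

```json
{
  "mcpServers": {
    "sequential-thinking": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-sequential-thinking"]
    },
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    }
  }
}
```

Each entry names a server and the command the editor runs to start it; the editor then exposes the server’s tools to the model during a chat session.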

Scenario Selection

1. Individual or “personal company” users: The performance gap between tools is minor because coding is only a small part of their work. AI assistants enable them to build simple websites or information‑aggregation apps without hiring a developer.

2. Professional software engineers: Daily efficiency is critical; both Cursor and Augment are viable, with Augment having a slight edge.

Conclusion: For solo entrepreneurs, AI coding can be a “hero”; for seasoned engineers, it remains an intelligent but inexperienced “intern”.

Project Practical Experience

I have been working on a small Python project that manages my Markdown documents and publishes them to various platforms.

Production code: 8,000+ lines (2,000 lines of comments).

Test code: 6,000+ lines (2,000 lines of comments), 277 test cases, only 5 integration tests (WeChat and OpenRouter Service).

Test coverage: 86%.

External integrations: WeChat public account API and OpenRouter Service (calls DeepSeek large model).

All comments were generated automatically by AI.
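To make the project’s shape concrete, here is a minimal, hypothetical sketch of what a “manage Markdown documents and publish them to platforms” core might look like. All names (`MarkdownDoc`, `Publisher`, `ConsolePublisher`) are illustrative, not the author’s actual code:

```python
# Hypothetical sketch: a Markdown document plus a pluggable publisher
# interface. Real targets (e.g. a WeChat public-account client) would
# subclass Publisher.
from dataclasses import dataclass, field

@dataclass
class MarkdownDoc:
    title: str
    body: str
    published_to: list = field(default_factory=list)

def parse_markdown(text: str) -> MarkdownDoc:
    """Treat the first '# ' heading as the title; the rest is the body."""
    lines = text.strip().splitlines()
    if lines and lines[0].startswith("#"):
        title = lines[0].lstrip("# ").strip()
        body = "\n".join(lines[1:]).strip()
    else:
        title, body = "Untitled", text.strip()
    return MarkdownDoc(title=title, body=body)

class Publisher:
    """Base class for publishing targets."""
    name = "base"
    def publish(self, doc: MarkdownDoc) -> None:
        raise NotImplementedError

class ConsolePublisher(Publisher):
    """Trivial target that just prints, useful for local testing."""
    name = "console"
    def publish(self, doc: MarkdownDoc) -> None:
        print(f"[{self.name}] {doc.title} ({len(doc.body)} chars)")
        doc.published_to.append(self.name)

doc = parse_markdown("# Hello\n\nSome content.")
ConsolePublisher().publish(doc)
```

A design like this keeps the external integrations (WeChat, OpenRouter) behind a small interface, which is also what makes the 277 mostly‑unit test suite feasible: only 5 tests need to touch the real services.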

Project Workflow Tips

Use quick‑key snippets: On macOS I configure Alfred with common prompt snippets (e.g., run all tests, break down tasks, commit code) to speed up interaction with the AI editor.

Below are screenshots of my ten most‑used keywords and prompt examples.

Example prompts:

1. Before continuing, update the work plan in instructions/process.md, marking the completion time if finished or the start time if just begun.

2. (1) Read Project_management.md to understand basic work requirements; (2) Review Instruction.md, Documentation.md, and Progress.md to grasp system design and current progress; (3) Compare production code with requirements; (4) Check for conflicts between requirements and test cases; (5) If issues arise, raise them for discussion.

Documentation and Progress Tracking

In a previous article I described how I use four Markdown documents to track work progress; the above prompts illustrate their purposes.

Using TDD with AI

AI can hallucinate. To mitigate this, I adopt a Test‑Driven Development (TDD) approach, treating AI suggestions as a “frame” that I review for correctness. When AI modifies production code to satisfy a test, I intervene if it breaks other tests, guiding it step‑by‑step.
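The loop above can be illustrated with a toy example (the function and its behavior are hypothetical): the test is written first and pins down the expected behavior, so any AI‑proposed change to the implementation that breaks it is caught immediately.

```python
def slugify(title: str) -> str:
    """Turn a document title into a URL-friendly slug.
    (Implementation written, or AI-generated, only after the tests below.)"""
    return "-".join(title.lower().split())

# Tests written FIRST, before the implementation. If an AI edit to
# slugify() breaks either assertion, we intervene and guide it back.
def test_spaces_become_hyphens():
    assert slugify("My First Post") == "my-first-post"

def test_already_lowercase_unchanged():
    assert slugify("hello") == "hello"

test_spaces_become_hyphens()
test_already_lowercase_unchanged()
```

In practice the tests run on every AI edit (e.g. via the "run all tests" snippet mentioned earlier), so hallucinated changes fail fast instead of accumulating.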

Voice Input

Typing can be slow; on a MacBook you can use voice input to dictate commands to the AI.

Key Takeaways

AI code editor experience: Tested Tongyi Lingma, MarsCode, Copilot, Trae, Cursor, Augment; early tools were mediocre, recent Augment experience improved significantly.

Tool comparison: Trae has the worst response speed; Cursor and Augment are similar, though Augment may lag at night.

Feature details: Cursor shows code changes inline; Augment requires manual view.

MCP usage: Cursor often uses Sequential‑thinking and Software‑planning; Fetch is common in documentation.

Tool limitations: Augment’s free trial has no usage caps but can become sluggish; Cursor allows command execution; Augment’s auto‑run mode is convenient.

Scenario recommendations: Solo users can choose Cursor or Augment; one‑person companies may stick with free tiers; professional engineers should pick paid, reliable tools and treat AI as an intern, employing TDD.

Additional notes: Different tools suit different audiences; macOS voice input can replace manual typing; the speaker is a management consultant.

Tags: MCP, software development, productivity, tool comparison, TDD, AI coding tools, code assistants
Written by

Continuous Delivery 2.0

Tech and case studies on organizational management, team management, and engineering efficiency
