Automate Sonar Issue Fixes with AI: A Six‑Step Prompt Generation Framework
This article explains how to reduce token consumption and improve AI‑driven code repairs by converting Sonar scan results into structured prompts using the Model Context Protocol (MCP) and a six‑step intelligent prompt generation pipeline integrated with Cursor.
Pain Points
Developers often waste tokens because AI models receive overly large or vague prompts, leading to excessive input and output token usage, repeated iterations, and inaccurate fixes.
Token consumption grows with input size and task complexity.
Ambiguous prompts add irrelevant content and force the model to generate longer responses.
Repeated prompt tweaking creates inefficient interaction loops.
Solution Overview
The core challenge is turning Sonar scan results into concise, structured prompts that AI can understand efficiently. The author built an MCP‑based system for Cursor that generates intelligent prompts automatically.
What is MCP?
Model Context Protocol (MCP) is an open protocol supported by Cursor, allowing custom tools written in Python (or other languages) to be invoked directly from the AI assistant. It essentially gives the AI the ability to call user‑defined functions.
Six‑Step Prompt Generation Method
1. Issue type identification: map Sonar rule IDs to high‑level issue categories (e.g., NullPointer, ResourceLeak, SQLInjection).
2. AST code structure parsing: use javalang to build an abstract syntax tree (AST) of each Java file and locate the exact method containing the reported line.
3. Intelligent context extraction: extract the full method or surrounding code based on issue type, optionally including imports and class declarations, while capping total lines to control token usage.
4. Layered prompt template engine: combine a role description, coding standards (the Alibaba Java guide), and issue‑specific requirements into a hierarchical template.
5. Few‑shot example repository: provide before‑and‑after code examples for each issue type to guide the model via few‑shot learning.
6. Integration into an MCP service: assemble all components into an MCP tool that processes Sonar issues in batches, generates prompts, estimates token cost, and returns structured metadata.
Key Components
IssueTypeClassifier maps Sonar rule keys to issue categories:
```python
class IssueTypeClassifier:
    """Maps Sonar rule keys to high-level issue categories."""

    RULE_MAPPING = {
        "java:S2259": "NullPointer",
        "java:S2637": "NullPointer",
        "java:S2095": "ResourceLeak",
        "java:S2093": "ResourceLeak",
        "java:S3649": "SQLInjection",
        "java:S2076": "CommandInjection",
        "java:S117": "NamingConvention",
        "java:S1192": "DuplicateString",
    }

    @staticmethod
    def classify(rule_key: str) -> str:
        # Fall back to "General" for rules without a dedicated category
        return IssueTypeClassifier.RULE_MAPPING.get(rule_key, "General")
```

JavaCodeParser uses javalang to locate the method surrounding a given line number:
```python
import javalang
from typing import Dict, Optional

def calculate_method_end(node) -> int:
    # javalang records only start positions, so approximate the method's
    # end as the maximum line seen among its child nodes (a sketch).
    end = node.position.line
    for _, child in node.filter(javalang.tree.Node):
        if getattr(child, 'position', None):
            end = max(end, child.position.line)
    return end

def extract_class_name(path):
    # Walk the AST path outward to find the enclosing class (a sketch).
    for ancestor in reversed(path):
        if isinstance(ancestor, javalang.tree.ClassDeclaration):
            return ancestor.name
    return None

class JavaCodeParser:
    """Java code structure parser using AST"""

    @staticmethod
    def extract_method_context(file_path: str, line_number: int) -> Optional[Dict]:
        with open(file_path, 'r', encoding='utf-8') as f:
            code = f.read()
        tree = javalang.parse.parse(code)
        for path, node in tree.filter(javalang.tree.MethodDeclaration):
            method_start = node.position.line
            method_end = calculate_method_end(node)
            if method_start <= line_number <= method_end:
                return {
                    'class_name': extract_class_name(path),
                    'method_name': node.name,
                    'start_line': method_start,
                    'end_line': method_end,
                    'modifiers': node.modifiers,
                    # return_type is None for void methods
                    'return_type': node.return_type.name if node.return_type else 'void',
                    'parameters': [p.name for p in node.parameters],
                }
        return None
```

ContextExtractor applies extraction rules per issue type to produce concise code snippets, falling back to the surrounding lines when AST parsing fails.
```python
class ContextExtractor:
    """Extracts a concise code snippet around the reported line, per issue type."""

    EXTRACTION_RULES = {
        "NullPointer":      {"range": "full_method", "max_lines": 150, "include_imports": True,  "include_class_declaration": True},
        "ResourceLeak":     {"range": "full_method", "max_lines": 100, "include_imports": True,  "include_class_declaration": True},
        "SQLInjection":     {"range": "full_method", "max_lines": 120, "include_imports": False, "include_class_declaration": True},
        "NamingConvention": {"range": "surrounding", "max_lines": 30,  "include_imports": False, "include_class_declaration": False},
        "General":          {"range": "surrounding", "max_lines": 50,  "include_imports": False, "include_class_declaration": False},
    }

    @staticmethod
    def extract(file_path: str, line_number: int, issue_type: str) -> str:
        # Implementation omitted for brevity – follows the rules above
        ...
```

PromptTemplateEngine builds the final prompt by injecting coding standards, the task description, fix requirements, and optional few-shot examples:
```python
class PromptTemplateEngine:
    """Layered prompt template engine"""

    JAVA_STANDARDS = """
Please follow these Java coding standards:
1. Alibaba Java Development Manual
2. Defensive null checks
3. try-with-resources for resources
4. Proper exception handling
5. Keep method complexity low
6. Clear camelCase naming
"""

    ISSUE_TEMPLATES = {
        "NullPointer": {
            "task": "Fix the null-pointer risk in the code",
            "requirements": """
1. Add null checks at method start
2. Throw IllegalArgumentException with clear message
3. Preserve existing business logic
4. Do not change method signature
""",
        },
        # Other issue templates omitted for brevity
    }

    @staticmethod
    def generate(issue_type: str, issue_data: dict, code_context: str, examples: str = "") -> str:
        template = PromptTemplateEngine.ISSUE_TEMPLATES.get(
            issue_type,
            {"task": "Fix the code issue", "requirements": "Follow project coding style"},
        )
        prompt = (
            f"You are a senior Java engineer.\n"
            f"Task: {template['task']}\n"
            f"Coding standards:\n{PromptTemplateEngine.JAVA_STANDARDS.strip()}\n"
        )
        if examples:
            prompt += f"\nExamples:\n{examples}\n"
        prompt += (
            f"\nIssue details:\n"
            f"- Rule ID: {issue_data.get('rule', 'Unknown')}\n"
            f"- Issue type: {issue_type}\n"
            f"- Severity: {issue_data.get('severity', 'MAJOR')}\n"
            f"- Message: {issue_data.get('message', '')}\n"
            f"- Line: {issue_data.get('line', 0)}\n"
            f"Code context:\n```java\n{code_context}\n```\n"
            f"{template['requirements'].strip()}\n"
            f"Output format:\n"
            f"- Only the repaired method code wrapped in ```java```\n"
            f"- No extra explanations\n"
        )
        return prompt
```

ExampleRepository stores before/after snippets for few-shot learning, e.g., null-pointer checks and resource-leak fixes.
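The repository's code is not listed in the article; here is a minimal sketch, assuming the `get_examples(issue_type)` interface that IntelligentPromptGenerator calls later. The Java snippet is illustrative, not the author's actual example set:

```python
class ExampleRepository:
    """Few-shot before/after snippets keyed by issue type (illustrative content)."""

    EXAMPLES = {
        "NullPointer": (
            "Before:\n"
            "public String getName(User user) {\n"
            "    return user.getName();\n"
            "}\n"
            "After:\n"
            "public String getName(User user) {\n"
            "    if (user == null) {\n"
            "        throw new IllegalArgumentException(\"user must not be null\");\n"
            "    }\n"
            "    return user.getName();\n"
            "}\n"
        ),
    }

    @staticmethod
    def get_examples(issue_type: str) -> str:
        # Issue types without curated examples fall back to zero-shot prompting
        return ExampleRepository.EXAMPLES.get(issue_type, "")
```

Returning an empty string for unknown types lets the template engine skip the Examples section cleanly.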
IntelligentPromptGenerator orchestrates the workflow: extracts issue info, classifies type, obtains context, fetches examples, generates the prompt, and estimates token usage.
```python
import os

class IntelligentPromptGenerator:
    """Orchestrates classification, context extraction, and prompt generation."""

    @staticmethod
    def generate_for_issue(issue: dict, base_dir: str) -> dict:
        # Compute these before the try block so they are defined in the
        # except handler as well. Sonar components look like "projectKey:path/to/File.java".
        component = issue.get('component', '')
        file_path = component.split(':', 1)[1] if ':' in component else component
        line_number = issue.get('line', 0)
        try:
            rule_key = issue.get('rule', '')
            full_path = os.path.join(base_dir, file_path)
            issue_type = IssueTypeClassifier.classify(rule_key)
            code_context = ContextExtractor.extract(full_path, line_number, issue_type)
            examples = ExampleRepository.get_examples(issue_type)
            issue_data = {
                'rule': rule_key,
                'severity': issue.get('severity', 'MAJOR'),
                'message': issue.get('message', ''),
                'line': line_number,
            }
            prompt = PromptTemplateEngine.generate(issue_type, issue_data, code_context, examples)
            # Rough heuristic: roughly four characters per token for English text
            token_estimate = len(prompt) // 4
            return {'success': True, 'prompt': prompt, 'metadata': {
                'file': file_path,
                'line': line_number,
                'issue_type': issue_type,
                'rule': rule_key,
                'severity': issue_data['severity'],
                'token_estimate': token_estimate,
            }}
        except Exception as e:
            return {'success': False, 'error': str(e), 'file': file_path, 'line': line_number}
```

The MCP tool, registered as `@mcp.tool("auto-fix-sonar-issues")`, fetches Sonar issues via httpx, processes each one with the generator, and returns a markdown report containing the prompts and metadata.
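The fetching code itself is not reproduced in the article. The sketch below shows one way the retrieval step could look, using Python's standard urllib in place of httpx to stay dependency-free, against SonarQube's standard `api/issues/search` endpoint; the function names and the cookie-based authentication are assumptions:

```python
import json
import urllib.parse
import urllib.request

def build_issues_url(base_url: str, project_key: str, page_size: int = 100) -> str:
    """Build a SonarQube issue-search URL for unresolved issues in one project."""
    params = urllib.parse.urlencode({
        "componentKeys": project_key,  # the project to query
        "resolved": "false",           # only unresolved issues
        "ps": page_size,               # page size
    })
    return f"{base_url.rstrip('/')}/api/issues/search?{params}"

def fetch_sonar_issues(base_url: str, project_key: str, cookie: str = "") -> list:
    """Fetch unresolved issues and return the 'issues' list from the JSON response."""
    request = urllib.request.Request(build_issues_url(base_url, project_key))
    if cookie:
        # Session-cookie auth, matching the tool's `cookie` parameter
        request.add_header("Cookie", cookie)
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read().decode("utf-8")).get("issues", [])
```

Each dict in the returned list carries the `rule`, `component`, `line`, `severity`, and `message` fields that `generate_for_issue` reads above.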
Integration and Deployment
Install the dependencies with `pip install -r requirements.txt`, configure ~/.cursor/mcp.json to point to the Python script, and restart Cursor. The tool can then be invoked from the chat window with parameters such as project_name, cookie, and base_dir.
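The article does not show the configuration file itself; a minimal sketch of ~/.cursor/mcp.json in Cursor's `mcpServers` format, where the server name and script path are placeholders:

```json
{
  "mcpServers": {
    "sonar-prompt-generator": {
      "command": "python",
      "args": ["/path/to/sonar_mcp_server.py"]
    }
  }
}
```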
Usage Experience
After configuration, a single command in Cursor triggers the entire pipeline: Sonar API retrieval, context extraction, prompt generation, and formatted output. Developers can review and apply the suggested fixes directly.
Results and Improvements
Token cost reduced by ~70% thanks to precise context extraction.
Fix accuracy improved by ~30% due to structured prompts and few‑shot examples.
Productivity increased threefold with fully automated processing.
Code quality remains controllable via layered templates enforcing team standards.
Extensible – adding new issue types only requires updating the template configuration.
Conclusion
Effective AI‑assisted development requires precise problem identification, intelligent context extraction, standards injection, and systematic example learning. The presented MCP‑based framework demonstrates how to transform unstructured Sonar findings into actionable AI prompts, and the approach can be adapted to other static analysis tools such as ESLint or Checkstyle.
Author: Liu Yabin, Java Engineer at XianKeHui.
This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact us and we will review it promptly.
Zhuanzhuan Tech
A platform for Zhuanzhuan R&D and industry peers to learn and exchange technology, regularly sharing frontline experience and cutting‑edge topics. We welcome practical discussions and sharing; contact waterystone with any questions.