Boost Ad Monitoring Development with Cursor AI: Rules, Scenarios, and Automation

This article explains how the AI‑enhanced Cursor code editor can streamline ad‑monitoring backend development. It covers defining global and scenario‑specific rules, automating repetitive tasks, and improving code quality across multiple media platforms, with concrete examples, directory structures, and detailed job‑task guidelines.

37 Interactive Technology Team

Background

Cursor is an AI‑powered code editor that inherits VS Code’s capabilities and adds natural‑language code generation, intelligent completion, and project‑wide optimization suggestions.

Business Background

Our ad‑monitoring system must handle three main scenarios: multimedia data querying (e.g., Toutiao, Kuaishou, Tencent), metric management and calculation, and scheduled or resident tasks.

These scenarios share common pain points:

- Code patterns are repetitive.
- Strict development standards must be followed.
- Human errors are common.

Implementation

By leveraging Cursor, we can accelerate development for the above scenarios.

Global Rules

Define project context and working paths so generated code aligns with the overall project structure.

# Project Context
This document provides AI tools with project context to generate relevant and accurate code suggestions.
## Project Overview
- **Framework**: a goframe‑based framework called tcf
## Directory Structure
```
- api          # API definitions (e.g., api/v1/...)
  - v1
- cmd          # Service entry points (web, daemon, job)
  - daemon    # go run cmd/daemon/main.go
  - job       # go run cmd/job/main.go
  - web       # go run cmd/web/main.go
- data
  - kuaishou  # Kuaishou test data
  - tengxun   # Tencent test data
  - toutiao   # Toutiao test data
- deployments_sygn  # TKE config files
- hack        # Development tools and scripts
- internal    # Business logic (hidden via Go internal)
  - cmd
    - daemon
    - job
    - web
  - consts    # Constants
  - controller
  - daemon
  - dao       # Data access objects (CRUD)
  - job
  - logic     # Complex business logic
  - model     # Data structures
  - service   # Interface definitions
- manifest    # Build, deploy, run configs
  - config
  - docker    # Docker files
- resource    # Static assets
- go.mod      # Go module dependencies
```
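Given this layout, each binary under cmd/ is a thin entry point that dispatches into internal/cmd. As a rough illustration only (the real project uses the tcf/goframe command framework, and the job names here are taken from the scenarios below), a job dispatcher might look like:

```go
// Hypothetical cmd/job/main.go: a thin entry point that dispatches a
// job name from the command line to a registry of job functions.
package main

import (
	"fmt"
	"os"
)

// jobs maps a job name (e.g. jobTargetExcelToJson) to its implementation,
// which would normally live under internal/job/.
var jobs = map[string]func(args []string) error{
	"jobTargetExcelToJson": func(args []string) error {
		fmt.Println("converting target.xlsx to target.json ...")
		return nil
	},
}

// run looks up argv[1] in the registry and invokes it with the
// remaining arguments, returning an error for unknown jobs.
func run(argv []string) error {
	if len(argv) < 2 {
		return fmt.Errorf("usage: %s <jobName> [options]", argv[0])
	}
	job, ok := jobs[argv[1]]
	if !ok {
		return fmt.Errorf("unknown job %q", argv[1])
	}
	return job(argv[2:])
}

func main() {
	if err := run(os.Args); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

This mirrors the `go run cmd/job/main.go jobTargetExcelToJson ...` invocations shown later in the scenario rules.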

Scheduled‑Task Rules

Tasks are independent; a dedicated rule set ensures consistency and reusability.

# Development Guidelines
## 1. Scheduled‑Task Development
- Directory: internal/job/
- Example: job_target_holo_import.go
## Development Process
1. Requirement analysis
   - Define goal and schedule
   - Identify input parameters and data sources
   - Specify output format
2. Code implementation
   - Use parser.GetOpt for arguments
   - Follow existing naming conventions
   - Implement core logic, logging, and error handling
3. Testing
   - Run scripts to verify functionality
   - Ensure compatibility with existing features
   - Validate error handling and logs
## Code Quality
- Modular design, reuse existing code
- Use COALESCE for null handling, string_agg for aggregation
- Apply round() for numeric precision
## Error Handling
- Comprehensive error capture and logging
- Consistent error return values
## Data Processing
- Numeric metrics: control precision
- String metrics: handle empty and multi‑value aggregation
- Time metrics: unify timezone and format
## Documentation
- Add comments for key parameters and return values
- Record special handling logic
- Update related docs
## Development Notes
- Review requirements before coding
- Understand existing implementations
- Plan development steps
- Follow coding standards, optimize performance, keep code concise
## Post‑Development
- Full functional testing
- Code review
- Documentation update
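The guidelines above can be sketched as a minimal job skeleton. Note the hedges: the standard flag package stands in for the project's parser.GetOpt, log stands in for the framework logger, and the job name and option names are illustrative only:

```go
// A minimal scheduled-task skeleton in the spirit of internal/job/:
// parse options, do the work, and funnel everything through a single
// error return for consistent handling.
package main

import (
	"flag"
	"fmt"
	"log"
	"time"
)

// runTargetImport is a sketch of a job body with the guideline
// structure: validate inputs, log start/end, return one error value.
func runTargetImport(mediaID int, fileName string) error {
	if fileName == "" {
		return fmt.Errorf("jobTargetHoloImport: file_name is required")
	}
	// Time metrics guideline: unify timezone and format before output.
	now := time.Now().UTC().Format("2006-01-02 15:04:05")
	log.Printf("jobTargetHoloImport start media_id=%d file=%s at=%s", mediaID, fileName, now)
	// ... core import logic would go here ...
	return nil
}

func main() {
	mediaID := flag.Int("media_id", 0, "media identifier, e.g. 1 for toutiao")
	fileName := flag.String("file_name", "", "source file base name, e.g. target")
	flag.Parse()
	if err := runTargetImport(*mediaID, *fileName); err != nil {
		log.Fatal(err) // comprehensive capture: log and exit non-zero
	}
}
```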

Scenario Rules

Scenario 1: Adding New Media Attributes

When a media (e.g., Toutiao, Kuaishou, Tencent) gains new attribute metrics, insert the corresponding code in the appropriate layer without altering existing logic. Use COALESCE for nulls and string_agg for aggregation; apply round() for numeric precision.
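As an illustration of these conventions, a helper in the spirit of getPropertyCommonColumn might render each new attribute column as below. The helper, column names, and aliases are hypothetical; only the COALESCE/string_agg/round pattern comes from the rules:

```go
// Sketch: render one SELECT column for a property metric, wrapping
// the raw column with COALESCE for nulls, string_agg for multi-value
// strings, and round() for numeric precision.
package main

import "fmt"

// propertyColumn builds a SQL column expression; kind is "string"
// or "numeric".
func propertyColumn(col, alias, kind string) string {
	switch kind {
	case "string":
		// Aggregate multiple values into one comma-joined string,
		// treating NULL as the empty string.
		return fmt.Sprintf("string_agg(COALESCE(%s, ''), ',') AS %s", col, alias)
	case "numeric":
		// Default NULL to 0 and control precision to two decimals.
		return fmt.Sprintf("round(COALESCE(%s, 0), 2) AS %s", col, alias)
	default:
		return fmt.Sprintf("%s AS %s", col, alias)
	}
}

func main() {
	fmt.Println(propertyColumn("material_name", "material_names", "string"))
	fmt.Println(propertyColumn("cost", "cost_val", "numeric"))
}
```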

Scenario 2: Importing Media Metrics

Convert source Excel (target.xlsx) to JSON, transform JSON according to filtering and classification rules (sum, ratio, cost), validate fields, and convert back to Excel for import using existing scripts.
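The filter-and-classify step can be sketched as follows. The field names mirror the description above, but the classification heuristic (formulas containing "/" are ratios, names containing "成本"/"cost" are costs) is an assumption for illustration; the real rules live in the project's reference files:

```go
// Sketch of the JSON transform step: keep only rows whose first-level
// category is "媒体数据指标" (media data metric), then bucket each
// metric as sum, ratio, or cost.
package main

import (
	"fmt"
	"strings"
)

// Metric mirrors one row of target.json.
type Metric struct {
	Name       string // metric display name
	Key        string // unique metric key
	FirstLevel string // value of the "归属第一层级" column
	Formula    string // e.g. "click_cnt/show_cnt" for ratio metrics
}

// classify filters media-data metrics and buckets them by class.
func classify(rows []Metric) map[string][]Metric {
	out := map[string][]Metric{}
	for _, m := range rows {
		if m.FirstLevel != "媒体数据指标" { // filter step
			continue
		}
		switch {
		case strings.Contains(m.Formula, "/"): // numerator/denominator pair
			out["ratio"] = append(out["ratio"], m)
		case strings.Contains(m.Name, "成本"): // "cost" metrics
			out["cost"] = append(out["cost"], m)
		default:
			out["sum"] = append(out["sum"], m)
		}
	}
	return out
}

func main() {
	rows := []Metric{
		{Name: "展示数", Key: "show_cnt", FirstLevel: "媒体数据指标"},
		{Name: "点击率", Key: "ctr", FirstLevel: "媒体数据指标", Formula: "click_cnt/show_cnt"},
	}
	fmt.Println(len(classify(rows)["ratio"]))
}
```

Step 4's count validation then amounts to comparing the bucket sizes against the filtered source rows.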

# Development Scenario
## Scenario 1: New Media Attribute Metrics
- Supported media: toutiao (id=1), kuaishou (id=5), tengxun (id=-4)
- Code directory: internal/logic/mq_sm_data_holo (e.g., mq_sm_data_holo_toutiao.go)
- Add attributes via buildPropertyDimension or getPropertyCommonColumn
- Use COALESCE, string_agg, round() as needed
## Scenario 2: Media Metric Import
### Prerequisites
- Media type confirmed
- Files present: target.xlsx, target_holo_import.xlsx
### Steps
1. Pre‑process: remove intermediate files if any
2. Convert target.xlsx → target.json (`go run cmd/job/main.go jobTargetExcelToJson --media_id={id} --file_name=target`)
3. Transform target.json → target_holo_import.json with rules:
   - Filter: "归属第一层级" == "媒体数据指标"
   - Classify into sum, ratio, cost
   - Validate required fields per class
   - Parse formulas into numerator/denominator keys
4. Validate counts between source and transformed JSON
5. Convert target_holo_import.json → target_holo_import.xlsx (`go run cmd/job/main.go jobTargetJsonToExcel ...`)
6. Import Excel (`go run cmd/job/main.go jobTargetHoloImport ...`)
### Notes
- Ensure unique metric keys
- Verify formula correctness
- Keep JSON structure consistent with reference files
## Scenario 3: New Media Integration
- Define new media constants (name, id, pinyin) in internal/consts/mq_monitor.go
- Add media ID constant (e.g., MqMediaIdWithXxx)
- Optionally add filter‑mapping in MqMonitorFilterPropertyKeyMap
- Create data class file in internal/logic/mq_sm_data_holo (reference kuaishou implementation)
- Implement GetData, buildSql, buildWhere, buildPropertyDimension
- Update cronjobs for new media tasks
## Scenario 4: Generate sm_target Import Script
- Source: target.xlsx (columns: name, key, first‑level, second‑level, type)
- Target table: sm_target (detailed schema provided)
- Map first‑level enum values to attribute, media data, statistical data
- Use parent‑id lookup for second‑level hierarchy
- Handle numeric vs percentage types
- Follow standard coding and error‑handling practices
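The two mapping steps in Scenario 4 can be sketched as below: translating the first-level column into a category code, and resolving parent ids for the second-level hierarchy. The enum codes and every value except "媒体数据指标" are placeholders, not the real sm_target schema:

```go
// Sketch of the sm_target import mapping helpers.
package main

import "fmt"

// categoryOf maps the spreadsheet's first-level text to an assumed
// category code (attribute / media data / statistical data).
func categoryOf(firstLevel string) (int, error) {
	m := map[string]int{
		"属性":     1, // attribute (placeholder value)
		"媒体数据指标": 2, // media data
		"统计数据指标": 3, // statistical data (placeholder value)
	}
	if c, ok := m[firstLevel]; ok {
		return c, nil
	}
	return 0, fmt.Errorf("unknown first-level value %q", firstLevel)
}

// resolveParentID returns the id for a second-level name, assigning a
// new id the first time the name is seen (the parent-id lookup).
func resolveParentID(parents map[string]int, nextID *int, name string) int {
	if id, ok := parents[name]; ok {
		return id
	}
	*nextID++
	parents[name] = *nextID
	return *nextID
}

func main() {
	parents := map[string]int{}
	next := 0
	fmt.Println(resolveParentID(parents, &next, "转化数据")) // first sight: new id
	fmt.Println(resolveParentID(parents, &next, "转化数据")) // repeat: same id
}
```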

Summary

By judiciously applying Cursor, development efficiency for ad‑monitoring scenarios improves dramatically while maintaining code quality and consistency. Benefits include reduced repetitive coding, fewer human errors, standardized workflows, unified style, automated error handling, comprehensive documentation, clear rule files, reusable templates, and continuous process optimization.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.
