Mastering Code Review: Goals, Process, Checklist, and Best Practices
This guide outlines the objectives, workflow, key review items, and practical methods for effective code review, providing detailed checkpoints and a checklist to help teams improve code quality, maintainability, and collaborative learning.
1. Code Review Goals
Identify issues early in the project.
Spread knowledge and help junior developers learn from senior developers.
Foster mutual assistance and backup within the team.
Improve code readability and maintainability.
Discover "minefield" (high-risk) areas.
2. Code Review Process
Prerequisite: All code changes must undergo code review.
The code to be reviewed must compile, pass static analysis tools (e.g., FindBugs), and unit tests, and be self‑tested.
Projects or small tasks should follow the code review flow in AONE or use GitLab merge requests.
Form:
Pair review: mandatory for all projects and tasks.
Group review: performed as needed; sample code randomly, and require it for critical sections.
Suggested mode: incremental review during daily development.
Scope: Both product code and unit‑test code must be reviewed; critical code requires group review.
Personnel:
Reviewers must be familiar with the code and project and have participated in design reviews.
Critical or high‑risk changes must involve the module owner or architect.
QA team members are encouraged to participate.
3. Code Review Content
Code conventions (use Sonar tools).
Error and exception handling, including interface return‑value checks.
Business logic and loops.
Thread safety.
Resource handling: closing handles, memory leaks, and cleanup (e.g., on the Jenkins platform).
Framework and performance issues.
Unit‑test cases and their effectiveness.
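Several of the review items above, namely return-value checks, exception handling, and resource cleanup, can be illustrated in one small Java sketch. The method name `readFirstLine` and the error-handling strategy are illustrative assumptions, not prescriptions from the guide:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ReviewExamples {

    // Checks the interface return value instead of assuming success, and uses
    // try-with-resources so the reader is closed on every exit path.
    static String readFirstLine(String path) {
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            String line = reader.readLine();
            // readLine() returns null at end of stream -- check it explicitly.
            return (line != null) ? line : "";
        } catch (IOException e) {
            // Handle the failure explicitly rather than swallowing it.
            throw new IllegalStateException("Failed to read " + path, e);
        }
    }
}
```

A reviewer would flag the opposite pattern: an unclosed reader, an ignored null return, or an empty catch block.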
4. Code Review Methods
The developer introduces the code, covering design ideas, data structures, and code architecture.
Both parties discuss and exchange ideas.
The reviewer conducts a detailed review and records findings using a unified tool (e.g., Tala).
The reviewer and developer discuss the results, propose improvements, and refactor together.
Reusable review experiences are documented as coding standards or added to a "minefield" repository for team reuse.
Suggested mode: incremental review throughout development to avoid long‑tail effects.
Note: Reviewers and developers are partners, not adversaries. The goal is collaborative learning, knowledge sharing, and delivering better products.
Appendix
1. Code Review Checkpoints
Code style and comments.
Copyright statements and code signatures.
Adherence to formatting standards.
No wildcard imports or unnecessary imports.
Proper naming conventions for variables, methods, and classes.
Public variables have appropriate descriptions.
Public methods and classes have clear documentation.
Complex algorithms include adequate explanations.
No usage of deprecated code.
Reasonable use of public/private/static/final modifiers.
Static constants accessed via class/interface names.
Serializable classes define serialVersionUID.
Meaningful naming using full words and proper case.
Configuration values (URLs, file paths) externalized.
Non‑English literals replaced with resource bundles.
Avoid System.out, System.err, and printStackTrace() in production.
Log low‑level messages only after checking isXXEnabled.
Limit DAO calls, especially inside loops.
Prevent excessive debug logging in production.
Ensure thread‑safe usage of shared objects.
Avoid deadlocks and race conditions.
Optimize loops and prevent infinite loops.
Limit recursion depth to stay within stack limits.
Use appropriate I/O classes and buffering for performance.
Assess and mitigate high‑concurrency impacts.
Prefer interfaces (e.g., List) over concrete implementations.
Close streams, files, and connections promptly.
Maintain well‑structured XML configuration files.
Use constant naming conventions for non‑descriptive strings and numbers.
Avoid hard‑coded character encodings.
Compile and limit the use of large regular expressions.
Prefer StringBuilder for large string concatenations.
Limit method parameters to five or fewer; encapsulate related data in beans.
Keep method length under 50 lines; extract logic into smaller methods.
Avoid more than four levels of nested if‑else statements; use design patterns such as Strategy or Command.
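Two of the checkpoints above, guarding low-level log output with an isXXEnabled-style check and preferring StringBuilder for large concatenations, can be sketched together. This uses `java.util.logging` as a stand-in for whatever logging framework a team actually uses; the class and method names are hypothetical:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class CheckpointSketch {
    private static final Logger LOG = Logger.getLogger(CheckpointSketch.class.getName());

    // Builds a large string with StringBuilder instead of repeated '+' in a loop.
    static String joinIds(int[] ids) {
        StringBuilder sb = new StringBuilder();
        for (int id : ids) {
            if (sb.length() > 0) sb.append(',');
            sb.append(id);
        }
        return sb.toString();
    }

    static void process(int[] ids) {
        // Guard low-level log output: the message string is only assembled
        // when FINE logging is actually enabled (the isXXEnabled pattern).
        if (LOG.isLoggable(Level.FINE)) {
            LOG.fine("processing ids: " + joinIds(ids));
        }
        // ... business logic ...
    }
}
```

Without the guard, the concatenation cost is paid on every call even when the log level suppresses the message.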
2. Checklist
Level 3 – Verify code follows formatting standards.
Level 2 – Ensure no unnecessary wildcard imports.
Level 2 – Detect unused fields.
Level 2 – Detect unused local variables.
Level 2 – Detect unused private methods.
Level 2 – Identify redundant or duplicate logic that can be refactored.
Level 2 – Check appropriateness of public/private/static/final modifiers.
Level 2 – Ensure static constants are accessed via class/interface names.
Level 2 – Verify all Serializable classes define serialVersionUID.
Level 3 – Enforce meaningful naming conventions.
Level 3 – Ensure all control blocks use braces.
Level 3 – Avoid overly complex statements.
Level 3 – Externalize URLs, file paths, and similar literals.
Level 3 – Externalize Chinese literals using resource bundles.
Level 3 – Use constants for non‑descriptive strings.
Level 3 – Use constants for numeric literals where appropriate.
Level 3 – Provide detailed comments for constants.
Level 2 – Avoid System.out, System.err, and printStackTrace() in production code.
Level 1 – Ensure streams, files, and connections are closed properly.
Level 1 – Guard low‑level log output with isXXEnabled checks.
Level 1 – Simplify DAO access in the business layer.
Level 1 – Prevent excessive debug logging in production.
Level 1 – Ensure thread‑safe usage of objects.
Level 1 – Use StringBuilder for large string assemblies.
Level 1 – Limit recursion depth and avoid deep recursion.
Level 1 – Avoid memory leaks with ThreadLocal (must be static).
Level 1 – Handle exceptions properly and release resources.
Level 1 – Prohibit hard‑coded character set conversions (e.g., GBK↔ISO‑8859‑1).
Level 1 – Compile large regular expressions and avoid performance‑heavy patterns.
Level 1 – Avoid large‑scale use of String.indexOf().
Level 1 – Limit method parameters to five or fewer.
Level 1 – Keep method bodies under 50 lines.
Level 1 – Limit nested if‑else to four levels; use design patterns where suitable.
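Two Level 1/2 items from the checklist, declaring serialVersionUID on Serializable classes and making ThreadLocal fields static to avoid leaks, can be shown in a minimal sketch (class and method names are hypothetical):

```java
import java.io.Serializable;
import java.text.SimpleDateFormat;

public class ChecklistSketch implements Serializable {
    // Serializable classes declare serialVersionUID explicitly so that
    // deserialization does not break on unrelated recompiles.
    private static final long serialVersionUID = 1L;

    // ThreadLocal fields should be static: each thread then holds one value
    // per class rather than one per instance -- a common memory-leak source.
    private static final ThreadLocal<SimpleDateFormat> FORMAT =
        ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));

    static String formatDate(java.util.Date d) {
        // SimpleDateFormat is not thread-safe; the per-thread copy makes
        // this method safe to call from multiple threads.
        return FORMAT.get().format(d);
    }
}
```

In long-lived thread pools, pairing such a ThreadLocal with an explicit `remove()` at the end of each task is also worth checking in review.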
Software Development Quality
Discussions on software development quality, R&D efficiency, high availability, technical quality, quality systems, assurance, architecture design, tool platforms, test development, continuous delivery, continuous testing, etc. Contact me with any article questions.