Comprehensive Software Testing Process Guide
This guide outlines a complete software testing workflow covering smoke, functional, security, interface, compatibility, installation, performance, regression, and advanced testing stages. For each stage it details the steps, responsibilities, and reporting requirements for developers, testers, and product teams.
1. Basic Testing (Smoke Test)
Conducted jointly by the development, testing, and product teams. Covers main-flow and core-functionality verification, pre-test execution of core cases, automated regression of stable main flows, and delivery of a smoke-test conclusion report.
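The smoke-test stage above can be sketched as a minimal check runner: each core-flow check is a named callable, and the runner rolls the results up into a pass/fail conclusion report. The check names (login, home feed, search) are illustrative assumptions, not flows from the source.

```python
def run_smoke_suite(checks):
    """Run each (name, check_fn) pair; return a smoke-test conclusion report."""
    results = {name: bool(fn()) for name, fn in checks}
    passed = sum(results.values())
    return {
        "results": results,
        "passed": passed,
        "failed": len(results) - passed,
        # Smoke testing gates the build: any core-flow failure means FAIL.
        "conclusion": "PASS" if all(results.values()) else "FAIL",
    }

if __name__ == "__main__":
    checks = [
        ("login_main_flow", lambda: True),       # hypothetical core checks
        ("home_feed_loads", lambda: True),
        ("search_returns_results", lambda: True),
    ]
    print(run_smoke_suite(checks)["conclusion"])
```

In practice each lambda would drive the app (e.g. via a UI-automation client) rather than return a constant; the report dict maps directly onto the conclusion report this stage delivers.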
2. Functional Testing
Test designers create test cases once the requirements and detailed design are complete. Coverage includes new-feature validation, impact-scope analysis, business scenarios (network switching, lock/unlock, offline browsing, etc.), stability via monkey testing, and supplementary checks such as timeouts, permissions, and configuration files.
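The business scenarios listed above lend themselves to table-driven cases: one row per scenario, each with inputs and an expected outcome. The feature under test here, `fetch_page`, is a hypothetical stub standing in for the real app behavior.

```python
def fetch_page(network="wifi", cache=None):
    """Hypothetical feature: serve cached content when offline, else live."""
    if network == "offline":
        return cache if cache is not None else "error: no connection"
    return "live content"

# One row per business scenario: (case name, inputs, expected result).
CASES = [
    ("online_wifi", {"network": "wifi"}, "live content"),
    ("offline_with_cache", {"network": "offline", "cache": "cached"}, "cached"),
    ("offline_no_cache", {"network": "offline"}, "error: no connection"),
]

def run_cases():
    """Execute every scenario row; map case name -> pass/fail."""
    return {name: fetch_page(**kwargs) == expected
            for name, kwargs, expected in CASES}
```

Adding a scenario (say, a network-switch mid-request case) is then a one-line table edit rather than a new test function, which keeps the case list reviewable alongside the requirements.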
3. Security Testing
Performed by the security team, focusing on SQL and OS command injection, XSS, security misconfigurations, permission bypass, and DNS hijacking, plus app-side protections such as DEX obfuscation and log suppression.
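A SQL-injection check of the kind named above can be sketched as a regression test: a parameterized query must treat the classic `' OR '1'='1` payload as plain data and return no rows. The in-memory SQLite schema is an illustrative assumption.

```python
import sqlite3

def make_db():
    """Illustrative in-memory user table."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])
    return conn

def find_user(conn, username):
    # Parameterized query: the driver binds `username` as data, not as SQL,
    # so injection payloads cannot alter the WHERE clause.
    cur = conn.execute("SELECT name FROM users WHERE name = ?", (username,))
    return [row[0] for row in cur.fetchall()]
```

The same pattern generalizes: feed each known payload through every input path and assert it never widens the result set or alters state.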
4. Interface Testing
Testers verify interface correctness, required and missing parameters, parameter-type handling, and synchronous/asynchronous request behavior.
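The required-parameter and type-handling cases above can be sketched as a schema validator that a test suite probes with good and bad payloads. The `user_id`/`amount` schema is a hypothetical example interface.

```python
# Hypothetical interface schema: required parameter name -> expected type.
SCHEMA = {"user_id": int, "amount": float}

def validate(params, schema=SCHEMA):
    """Return a list of interface errors: missing params and wrong types."""
    errors = []
    for name, expected_type in schema.items():
        if name not in params:
            errors.append(f"missing: {name}")
        elif not isinstance(params[name], expected_type):
            errors.append(f"bad type: {name}")
    return errors
```

Interface test cases then become assertions: a complete, well-typed request yields no errors; dropping a required field or sending a string where a number is expected yields the corresponding error.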
5. Compatibility/Adaptation Testing
Covers system version, device model, browser, platform, and resolution compatibility, including major Android and iOS browsers, various screen sizes, and split-screen, half-screen, and landscape scenarios.
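The compatibility dimensions above multiply quickly, so a common approach is to generate the test matrix from the dimension lists. The concrete OS versions and resolutions below are illustrative assumptions, not the source's matrix.

```python
from itertools import product

def build_matrix(dimensions):
    """Expand {dimension: [values]} into one dict per test combination."""
    keys = list(dimensions)
    return [dict(zip(keys, combo)) for combo in product(*dimensions.values())]

# Illustrative dimension values; real projects take these from device-market data.
DIMENSIONS = {
    "os": ["Android 13", "Android 14", "iOS 17"],
    "resolution": ["1080x1920", "1440x3200"],
    "orientation": ["portrait", "landscape", "split-screen"],
}
```

Here 3 x 2 x 3 dimensions already yield 18 combinations, which is why teams usually prune the full product (e.g. pairwise coverage) once the matrix grows past what a device lab can run.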
6. Installation/Uninstallation Testing
Checks normal and abnormal install/uninstall flows, package-size consistency, clean removal of files, and proper handling of edge cases such as insufficient storage or power loss mid-install.
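The clean-removal check above can be sketched with a throwaway directory: install writes files under an app directory, and uninstall must leave nothing behind. `install` and `uninstall` here are hypothetical stand-ins for the platform's real package-manager hooks.

```python
import os
import shutil
import tempfile

def install(root):
    """Simulated install: create an app dir with a config file; return its path."""
    app_dir = os.path.join(root, "myapp")
    os.makedirs(app_dir, exist_ok=True)
    open(os.path.join(app_dir, "config.ini"), "w").close()
    return app_dir

def uninstall(app_dir):
    """Simulated uninstall: remove the app dir and everything under it."""
    shutil.rmtree(app_dir, ignore_errors=True)

def leftover_files(root):
    """Anything still under root after uninstall is a residue defect."""
    return os.listdir(root)
```

On a real device the residue scan would also cover shared/external storage and registry or preference stores, not just the install root.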
7. Basic Performance Testing (when performance requirements exist)
Measures client startup time, memory and CPU usage, power consumption, network traffic, response time, hot/cold launch times, memory leaks, over-rendering limits, and frame rate (>60 fps).
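Two of the metrics above, launch time and memory, can be probed with stdlib tools: wall-clock timing via `time.perf_counter` and peak allocation via `tracemalloc`. The `cold_launch` function is a hypothetical placeholder for the real startup path, and real mobile measurements would come from platform profilers instead.

```python
import time
import tracemalloc

def cold_launch():
    """Placeholder startup work standing in for the real launch path."""
    return [i * i for i in range(10_000)]

def measure(fn):
    """Return wall-clock seconds and peak traced allocation for one call."""
    tracemalloc.start()
    start = time.perf_counter()
    fn()
    elapsed = time.perf_counter() - start
    _, peak_bytes = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return {"seconds": elapsed, "peak_bytes": peak_bytes}
```

A performance gate then asserts each metric against its budget (e.g. cold launch under a fixed number of seconds) so regressions fail the build instead of surfacing in production.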
8. Online (Backup) Regression Testing
Requires product-team participation; both the product and testing teams validate new features and main flows before the official launch report is released.
Advanced Testing
• Comprehensive performance testing covering activity memory/CPU, power, traffic, business response time, launch times, memory leaks, over‑rendering, and frame rate.
• Stability testing ensuring a monkey-test failure probability below 0.07% under normal, weak, and no-network conditions, an uninterrupted 8-hour background run without issues, and sustained CPU usage above 80% for 5 hours without crashes.
• Static code inspection with jointly defined coding standards, tool‑based scanning, and manual reviews.
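The stability thresholds in the list above translate directly into checkable metrics: the monkey-test failure probability against the 0.07% ceiling, and the background-run and CPU-soak durations against their 8-hour and 5-hour floors. The function names below are illustrative.

```python
def monkey_failure_rate(failures, events):
    """Failure probability over a monkey-test run of `events` injected events."""
    return failures / events

def stability_ok(failures, events, background_hours, soak_hours):
    """True only if all three stability thresholds from the guide are met."""
    return (monkey_failure_rate(failures, events) < 0.0007  # 0.07% ceiling
            and background_hours >= 8                        # background run
            and soak_hours >= 5)                             # >80% CPU soak
```

Encoding the thresholds this way makes the release gate mechanical: the stability report feeds in raw counts and hours, and the verdict is reproducible rather than judged per release.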
High‑Level Testing
1. Promote unit test completeness: at least one positive and one negative case per function/interface, aiming for >80% code coverage.
2. Conduct code walkthroughs and peer reviews.
3. Implement online automated monitoring and fault alerting tools.
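The "one positive and one negative case per function" rule in item 1 can be sketched with a tiny function under test and its paired cases; coverage tooling for the >80% target (e.g. coverage.py) is assumed rather than shown, and `parse_port` is a hypothetical example.

```python
def parse_port(value):
    """Return an int port in 1..65535, or raise ValueError."""
    port = int(value)
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

def test_parse_port_positive():
    # Positive case: a valid port string parses to the expected int.
    return parse_port("8080") == 8080

def test_parse_port_negative():
    # Negative case: an out-of-range port must raise, not return.
    try:
        parse_port("70000")
    except ValueError:
        return True
    return False
```

The pairing matters: the positive case exercises the happy path, while the negative case pins down the error contract that callers depend on.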
Qtest, the professional testing team of 360, leads the Web Platform Department’s testing automation and efficiency initiatives.
360 Quality & Efficiency
360 Quality & Efficiency focuses on integrating quality and efficiency throughout R&D, sharing 360's internal best practices with industry peers to foster collaboration among Chinese enterprises and deliver greater efficiency gains.