Deepening Functional Testing: Controlling Product, Development Quality, and Environment Configuration
This article expands on functional testing by exploring three key control areas—product knowledge, development quality through version control and test case integration, and environment configuration—offering practical insights and best practices for testers to enhance test effectiveness and drive project quality.
In a previous article, "Thinking about Functional Testing," I discussed the basics of functional testing. Today I will look at how to broaden and deepen functional testing to further improve testing skills, exploring three major areas based on my experience:
1. Controlling the product
2. Controlling development quality
3. Controlling environment configuration
We often joke that to developers, testers play the product role, and to the product team, testers play the development role; during a project, testing therefore often ends up driving the whole process.
Controlling the "product" involves both mastering the project's product background knowledge and understanding the product's requirement documents.
When testing a project, you must understand the industry background and related knowledge, know who uses the functionality, and understand what users truly want and will accept. This enables better business understanding, better comprehension of requirement documents, and testing from the user's perspective. For example, if you test commercial advertising without knowing the background, you would not even know what CPT (cost-per-time) and CPC (cost-per-click) billing are, making effective testing impossible.
On the other hand, while the product team focuses on functionality and UI design, the requirements and their implementation by developers may be unreasonable, incomplete, or have better alternatives. For example, developers sometimes cannot interpret overly terse requirements. Another scenario: after most of the requirements have been tested, you discover that the current implementation cannot meet the functional needs, or a bug turns out to be a dead end the current design cannot fix, forcing a complete redesign. Therefore, testing should combine functional understanding with knowledge of the implementation and proactively point out unreasonable designs.
First: obtain read access to the code and, ideally, the ability to view diffs. Also understand the development team's version-control workflow and, if it is unreasonable, push for changes. Have you ever seen a feature that tested fine stop working after a branch switch? That is usually a symptom of poor version control.
Regarding SVN version control:
Trunk, branches, and tags: iterative development happens on trunk, and when a release is complete, a tag is created from trunk. For hotfixes or urgent requirements, branch from the tag, develop and release on that branch, tag the result, and merge the branch back into trunk, followed by regression testing of the affected trunk functionality to catch merge problems.
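The tag/branch/merge-back cycle above can be sketched end-to-end against a throwaway local SVN repository (all paths, file names, and version numbers below are illustrative, not a real project layout):

```shell
#!/bin/sh
# Sketch of the trunk/branches/tags hotfix flow, using a local file:// repo.
set -e
WORK=$(mktemp -d) && cd "$WORK"
svnadmin create repo
URL="file://$WORK/repo"
svn mkdir -q -m "standard layout" "$URL/trunk" "$URL/branches" "$URL/tags"

svn checkout -q "$URL/trunk" wc && cd wc
echo "v1" > feature.txt
svn add -q feature.txt
svn commit -q -m "iterative development on trunk"

# Release: tag trunk.
svn copy -q -m "tag 1.0" "$URL/trunk" "$URL/tags/1.0"

# Hotfix: branch from the tag, fix, re-tag the fixed state.
svn copy -q -m "hotfix branch" "$URL/tags/1.0" "$URL/branches/hotfix-1.0.1"
svn switch -q "$URL/branches/hotfix-1.0.1"
echo "v1 + fix" > feature.txt
svn commit -q -m "urgent fix"
svn copy -q -m "tag 1.0.1" "$URL/branches/hotfix-1.0.1" "$URL/tags/1.0.1"

# Merge the hotfix back into trunk; regression-test trunk after this step.
svn switch -q "$URL/trunk"
svn merge -q "$URL/branches/hotfix-1.0.1"
svn commit -q -m "merge hotfix back into trunk"
cat feature.txt
```

Because every release is a tag and every hotfix is a branch off a tag, testers can always tell exactly which code state a build came from.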
This approach has worked well for me: testing controlled every branch and tag, so the state of each release was always clear. Feedback is welcome.
Regarding Git, the version control we currently use:
We use a stripped-down gitflow without hotfix and release branches, keeping only master, develop, and feature branches. Feature branches are created from develop for feature development and testing; after testing, a feature branch is merged into develop for regression testing and release; after release, develop is merged into master, as shown in the diagram.
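The stripped-down flow can be sketched with plain git commands in a throwaway repository (branch and file names below are illustrative):

```shell
#!/bin/sh
# Minimal sketch of the feature -> develop -> master flow described above.
set -e
WORK=$(mktemp -d) && cd "$WORK"
git init -q repo && cd repo
git config user.email "tester@example.com"   # illustrative identity
git config user.name  "tester"
git commit -q --allow-empty -m "initial commit"
git branch -M master                          # ensure the mainline is 'master'
git checkout -q -b develop

# Feature branches are cut from develop; development and feature testing
# happen here.
git checkout -q -b feature/login develop
echo "login page" > login.txt
git add login.txt && git commit -q -m "implement login"

# After feature testing passes, merge into develop for regression testing.
git checkout -q develop
git merge -q --no-ff -m "merge feature/login" feature/login

# After the release from develop, merge develop into master.
git checkout -q master
git merge -q --no-ff -m "release: merge develop" develop
cat login.txt   # prints: login page
```

The `--no-ff` merges keep each feature and each release visible as a distinct merge commit, which makes the history easier to audit during regression testing.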
For more details see the gitflow article: http://nvie.com/posts/a-successful-git-branching-model/
Second: I insist that test cases be integrated with development design documents (see my earlier article on test-case design). What development hands over for testing should include the database schema, configuration files, and the code's implementation logic (design diagrams or detailed specs). Without understanding the implementation logic, test cases can rely only on UI interactions, making underlying issues hard to detect and leaving gaps in test coverage.
Regarding databases, table design should be reasonable. For example, while testing an IoT cloud integration, device info and user info were stored in separate tables, but at some point cookie info was dropped into the device table. This looked unreasonable, yet the developers ignored the feedback. It later made user-binding bugs harder to fix, and correcting the design required extensive interface-code changes, which is costly.
Configuration files must be understood for testing; without knowledge of project config files, controlling the test environment is impossible. Configuration files typically include databases (MySQL, Redis), qbus (custom message queue), qconf (custom distributed config), constant definitions, third‑party parameters, etc. Hard‑coding these should be avoided; if developers embed them directly in code, test and production environments may differ, causing issues.
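To illustrate why hard-coding bites, here is a sketch of a script that loads its endpoints from a per-environment config file; the file names, keys, and host names below are all made up for illustration:

```shell
#!/bin/sh
# Sketch: load settings from a per-environment config file instead of
# hard-coding them. File names, keys, and hosts are illustrative.
set -e
WORK=$(mktemp -d) && cd "$WORK"

# Two environments with different endpoints.
cat > test.conf <<'EOF'
MYSQL_HOST=mysql.test.internal
REDIS_HOST=redis.test.internal
EOF
cat > prod.conf <<'EOF'
MYSQL_HOST=mysql.prod.internal
REDIS_HOST=redis.prod.internal
EOF

# Selecting the environment selects the config; the code itself never
# embeds a host name, so test and production cannot silently diverge.
ENV="${APP_ENV:-test}"
. "./$ENV.conf"
echo "connecting to mysql at $MYSQL_HOST"
```

Run with `APP_ENV=prod` the same script targets the production hosts; a tester who knows where these files live can point the whole service at the test environment without touching code.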
Another important but often overlooked point: logs. You must know where logs are, how the project outputs them, and their categories (DEBUG, ERROR, etc.). This helps you quickly understand functionality and locate bugs. My habit when I hit a bug is to check the logs first; if they are sufficient, great; otherwise I add log statements to the code to aid debugging (in PHP projects I can do this directly, while for compiled languages such as Java or Go I ask developers to add them).
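Log triage by level can be as simple as the sketch below; the log format, messages, and path are invented for illustration:

```shell
#!/bin/sh
# Sketch: slice a log file by level to localize a bug quickly.
# Log lines and location below are made up for illustration.
set -e
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
2024-01-01 10:00:01 DEBUG binding device d-42 to user u-7
2024-01-01 10:00:02 ERROR bind failed: device d-42 already bound
2024-01-01 10:00:03 DEBUG retrying bind for device d-42
2024-01-01 10:00:04 ERROR bind failed: device d-42 already bound
EOF

# Count the errors, then show each one with the DEBUG line before it,
# which usually reveals what the code was attempting when it failed.
grep -c ' ERROR ' "$LOG"     # prints: 2
grep -B1 ' ERROR ' "$LOG"
```

Reading the DEBUG line immediately before each ERROR is often enough to tell whether the bug is in the request, the data, or the code path.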
Third: due to length, environment-configuration control will be covered next time; feedback is welcome.
360 Quality & Efficiency
360 Quality & Efficiency focuses on seamlessly integrating quality and efficiency in R&D, sharing 360’s internal best practices with industry peers to foster collaboration among Chinese enterprises and drive greater efficiency value.