Advanced Software Testing Guide: Automation, Performance, Security & DevOps
A comprehensive, step‑by‑step guide covering advanced automation testing techniques, API and performance testing strategies, security testing best practices, CI/CD pipeline configuration, Linux system analysis, database testing, and cloud‑native considerations, with practical code examples, actionable checklists, troubleshooting tips, and real‑world scenarios for modern software quality assurance.
Automation Testing – Advanced Practices
When a UI test fails, follow a systematic troubleshooting workflow:
Check error logs for stack traces.
Reproduce the issue locally.
Verify the test environment (browser version, driver, OS).
Validate test data (fixtures, database state).
Inspect the test code (locators, waits, assertions).
Typical failure causes and remedies:
Element locator failure – NoSuchElementException. Review the locator strategy and add explicit waits.
Timeouts – TimeoutException. Increase wait time or reduce network latency.
Environment changes – Configuration errors. Update environment configuration and test data.
Data issues – Assertion failures. Clean the environment and ensure correct test data.
Code bugs – Script errors. Debug the script and fix logic.
Useful debugging helpers:
# Capture a screenshot
driver.save_screenshot('error.png')
# Log the current URL
import logging
logging.info(f"Current page: {driver.current_url}")
# Breakpoint debugging
import pdb; pdb.set_trace()
Dynamic element location strategies:
# XPath examples
//button[contains(text(),'Submit')]
//input[contains(@id,'username')]
# CSS selector examples
input[id*='user']
input[id^='username']
input[id$='input']
# Explicit waits (Selenium WebDriver)
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
wait = WebDriverWait(driver, 10)
wait.until(EC.presence_of_element_located((By.ID, "username")))
wait.until(EC.element_to_be_clickable((By.ID, "submit")))
Data‑driven testing with pytest:
# CSV fixture
import csv, pytest

@pytest.fixture(params=csv.reader(open('data.csv')))
def login_data(request):
    return request.param

def test_login(login_data):
    username, password, expected = login_data
    login_page.login(username, password)
    assert login_page.get_result() == expected
# Excel helper
import openpyxl

def load_excel_data():
    wb = openpyxl.load_workbook('data.xlsx')
    sheet = wb.active
    return [row for row in sheet.iter_rows(min_row=2, values_only=True)]
# JSON parametrisation
import json

with open('data.json') as f:
    test_data = json.load(f)

@pytest.mark.parametrize('case', test_data)
def test_api(case):
    resp = request_api(case['url'], case['method'])
    assert resp.status_code == case['expected_code']
Layered automation framework layout (test case → business logic → utilities → configuration → data):
# Directory tree
automation/
├── config/ # configuration files
├── common/ # shared utilities
├── pages/ # page objects / actions
├── test_cases/ # test scripts
├── data/ # test data files
├── reports/ # HTML / XML reports
├── logs/ # log files
├── conftest.py # pytest fixtures
└── requirements.txt # Python dependencies
Stability best practices (expressed as a checklist):
Use stable locators; avoid dynamic IDs.
Prefer explicit waits; avoid hard‑coded sleep.
Isolate test data; each test should have its own dataset.
Run tests in isolated environments (containers, virtual machines).
Implement retry logic and capture screenshots on failure (see the conftest sketch after this list).
Design tests to be independent – no hidden dependencies.
Produce detailed logs for faster root‑cause analysis.
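A minimal conftest.py sketch for the screenshot‑on‑failure item above, assuming a Selenium WebDriver fixture named driver (the fixture name and output path are illustrative):
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # Wrap report creation so we can inspect the outcome of the test call phase
    outcome = yield
    report = outcome.get_result()
    if report.when == 'call' and report.failed:
        # 'driver' is an assumed fixture exposing a Selenium WebDriver
        driver = item.funcargs.get('driver')
        if driver is not None:
            driver.save_screenshot(f'screenshots/{item.name}.png')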
Retry mechanism using the flaky library:
from flaky import flaky

@flaky(max_runs=3, min_passes=1)
def test_unstable_feature():
    assert do_something() == expected
API Testing – Advanced Techniques
Common authentication schemes (stars indicate relative security strength):
Basic Auth – Base64 encoded username/password (⭐⭐).
Token – JWT or custom token (⭐⭐⭐⭐).
OAuth 2.0 – Third‑party authorization (⭐⭐⭐⭐⭐).
API Key – Static key (⭐⭐⭐).
Session – Cookie‑based session (⭐⭐⭐).
HMAC – Signed requests (⭐⭐⭐⭐⭐); a signing sketch follows this list.
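A sketch of HMAC request signing; the header names, message format and key handling are assumptions for illustration, not a specific provider's scheme:
import hashlib, hmac, time
import requests

def signed_get(url, access_key, secret_key):
    # Sign method + URL + timestamp with the shared secret (illustrative message format)
    timestamp = str(int(time.time()))
    message = f'GET{url}{timestamp}'.encode()
    signature = hmac.new(secret_key.encode(), message, hashlib.sha256).hexdigest()
    headers = {'X-Access-Key': access_key,
               'X-Timestamp': timestamp,
               'X-Signature': signature}
    return requests.get(url, headers=headers)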
Example: obtain a JWT token and use it in subsequent requests:
import requests

def get_token():
    # URLs are relative for brevity; prepend your base URL in practice
    resp = requests.post('/login', json={'username': 'test', 'password': '123456'})
    return resp.json()['token']

headers = {'Authorization': f'Bearer {get_token()}'}
requests.get('/api/data', headers=headers)
Idempotency test – sending the same request twice should create a single resource:
def test_create_order_idempotent():
    order_id = generate_order_id()
    resp1 = create_order(order_id)  # first call
    resp2 = create_order(order_id)  # second call
    assert resp1['order_id'] == resp2['order_id']
    assert get_order_count() == 1
Versioning strategies (URL path, request header, query parameter) with a simple test case:
# Path versioning
GET /api/v1/users
GET /api/v2/users
# Header versioning
Accept: application/vnd.api.v1+json
# Query‑parameter versioning
GET /api/users?version=1
def test_api_version():
    r1 = requests.get('/api/v1/users')
    assert r1.status_code == 200
    r2 = requests.get('/api/v2/users')
    assert r2.status_code == 200
    assert 'new_field' in r2.json()['data'][0]
Performance Testing – Advanced Strategies
Typical load‑testing scenarios (a minimal load‑script sketch follows this list):
Baseline – 1‑10 users, 5 min, establish performance baseline.
Load – Target concurrency for 30 min, validate expected load.
Stress – Gradually increase load until failure, find bottlenecks.
Stability – Target load for 24 h+, verify long‑run stability.
Peak – Sudden high concurrency for 5 min, test burst handling.
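As a concrete starting point, a minimal Locust script for the baseline and load scenarios; the endpoint, host and user counts are assumptions:
from locust import HttpUser, task, between

class ApiUser(HttpUser):
    wait_time = between(1, 3)  # think time between requests

    @task
    def get_data(self):
        self.client.get('/api/data')
# Baseline run: locust -f locustfile.py --host https://example.com --headless --users 10 --spawn-rate 1 --run-time 5m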
Key monitoring metrics (grouped by layer; a percentile‑calculation sketch follows this list):
Application : QPS/TPS, response time (avg / P95 / P99), error rate, success rate.
System : CPU %, memory %, disk I/O, network bandwidth.
Middleware : DB connection count, cache hit ratio, message‑queue backlog, JVM GC activity.
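Average and P95/P99 response times can be derived from raw samples with the standard library; the timings below are made‑up sample data:
import statistics

response_times = [120, 135, 180, 95, 240, 150, 300, 110]  # milliseconds (sample data)
avg = statistics.mean(response_times)
# quantiles(n=100) returns 99 cut points; index 94 is P95, index 98 is P99
p95 = statistics.quantiles(response_times, n=100)[94]
p99 = statistics.quantiles(response_times, n=100)[98]
print(f'avg={avg:.1f}ms  P95={p95:.1f}ms  P99={p99:.1f}ms')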
Report generation commands (compatible with Jenkins):
# HTML report (self‑contained)
pytest --html=report.html --self-contained-html
# JUnit XML (for CI)
pytest --junitxml=report.xml
# Allure results
pytest --alluredir=./allure-results
allure serve ./allure-results
Sample Jenkins pipeline that runs static analysis, unit, API and UI tests:
pipeline {
    agent any
    stages {
        stage('Prepare') {
            steps { sh 'python -V'; sh 'pip install -r requirements.txt' }
        }
        stage('Static Analysis') {
            steps { sh 'flake8 src/'; sh 'pylint src/' }
        }
        stage('Unit Tests') {
            steps { sh 'pytest tests/unit/ --cov=src' }
        }
        stage('API Tests') {
            steps { sh 'pytest tests/api/' }
        }
        stage('UI Tests') {
            steps { sh 'pytest tests/ui/ --browser=chrome' }
        }
    }
    post { success { echo 'Tests passed!' } failure { echo 'Tests failed!' } }
}
Security Testing – Advanced Topics
File‑upload vulnerability matrix (test cases):
Upload executable files (e.g., .php) to bypass type checks.
Change file extensions to bypass validation.
Upload oversized files to test size limits.
Use path traversal payloads (e.g., ../../../etc/passwd).
Concurrent uploads to expose race conditions.
Sample upload test:
def test_file_upload():
    upload_file('test.jpg', 'image/jpeg')   # normal file
    upload_file('test.php', 'image/jpeg')   # extension bypass
    upload_file('large.zip', size='1GB')    # oversized
    upload_file('../../../etc/passwd')      # path traversal
Authentication security checklist (sample test):
import time

def test_auth_security():
    # Brute‑force lockout: expect the account to lock after 10 failed attempts
    for i in range(10):
        login('admin', f'password{i}')
    resp = login('admin', 'password10')
    assert resp['locked'] is True
    # Session timeout: wait out the session TTL, then expect 401
    time.sleep(3600)
    resp = get_protected_resource()
    assert resp.status_code == 401
API security test snippets:
# Unauthorized access
resp = requests.get('/api/users')
assert resp.status_code == 401
# SQL injection
resp = requests.get('/api/users?id=1 OR 1=1')
assert resp.status_code == 400
# Sensitive data leakage
resp = requests.get('/api/users/1')
assert 'password' not in resp.json()
Database lock testing (row‑lock contention example):
-- Session 1 – acquire a row lock
START TRANSACTION;
UPDATE users SET balance = balance - 100 WHERE id = 1;
-- do not commit
-- Session 2 – conflicting update (should block)
START TRANSACTION;
UPDATE users SET balance = balance + 100 WHERE id = 1;
-- Inspect lock tables (MySQL 5.7; in MySQL 8.0 use performance_schema.data_locks)
SELECT * FROM information_schema.innodb_locks;
SELECT * FROM information_schema.innodb_lock_waits;
CI/CD – Advanced Practices
Parameterized Jenkins builds (string, choice, boolean):
pipeline {
    agent any
    parameters {
        string(name: 'VERSION', defaultValue: '1.0', description: 'Version number')
        choice(name: 'ENV', choices: ['dev', 'test', 'prod'], description: 'Target environment')
        booleanParam(name: 'DEPLOY', defaultValue: true, description: 'Deploy after build')
    }
    stages {
        stage('Echo') {
            steps {
                sh "echo Version: ${params.VERSION}"
                sh "echo Environment: ${params.ENV}"
                script { if (params.DEPLOY) { sh './deploy.sh' } }
            }
        }
    }
}
Git branching strategies (visualised as text):
Git Flow : main (production) → develop → feature/*, release/*, hotfix/*.
GitHub Flow : main ← feature/* → merge back to main.
GitLab Flow : main → pre‑production → production.
Quality‑gate stage example (fail build if thresholds are not met):
stage('Quality Gate') {
    steps {
        script {
            def testResult = junit allowEmptyResults: true, testResults: 'reports/*.xml'
            if (testResult.failCount > 0) { error 'Tests failed' }
            // Extract the total percentage from the last line of `coverage report`
            def coverage = sh(script: "coverage report | tail -1 | awk '{print \$NF}' | tr -d '%'", returnStdout: true).trim()
            if (coverage.toInteger() < 80) { error 'Coverage below 80%' }
        }
    }
}
Linux – Advanced Operations
Essential performance‑analysis commands:
CPU: top, htop, mpstat 1
Memory: free -h, vmstat 1
Disk I/O: iostat -x 1, df -h, du -sh /* | sort -hr | head -10
Network: netstat -an, ss -s, iftop
Processes: ps aux --sort=-%cpu | head -10
System load: uptime, w
Sample Bash monitoring script that alerts when CPU, memory or disk usage exceeds 80 % and restarts a stopped service:
#!/bin/bash
# User CPU from top's summary line (field layout varies slightly by distro)
cpu=$(top -bn1 | grep "Cpu(s)" | awk '{print $2}')
if (( $(echo "$cpu > 80" | bc -l) )); then
    echo "High CPU: $cpu%" | mail -s "Alert" [email protected]
fi
mem=$(free | grep Mem | awk '{printf("%.2f", $3/$2 * 100)}')
if (( $(echo "$mem > 80" | bc -l) )); then
    echo "High Memory: $mem%" | mail -s "Alert" [email protected]
fi
disk=$(df -h / | awk 'NR==2 {gsub(/%/,"",$5); print $5}')
if [ "$disk" -gt 80 ]; then
    echo "Disk usage $disk%" | mail -s "Alert" [email protected]
fi
if ! systemctl is-active --quiet nginx; then
    echo "Nginx down" | mail -s "Alert" [email protected]
    systemctl restart nginx
fi
Scenario‑Based Advanced Practices
Distributed system testing – verify service discovery, load balancing, circuit breaking, distributed transactions, data consistency and tracing. Example failure scenarios include node crash, network partition, high concurrency, timeout and data sync issues; the partition scenario is sketched below.
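One way to express the network‑partition scenario as a test; partition_network, heal_network, service_client and cluster_size are hypothetical helpers for your environment:
def test_network_partition_failover():
    partition_network('node-2')   # hypothetical fault‑injection helper
    try:
        # While node-2 is unreachable, traffic should be routed to healthy nodes
        for _ in range(10):
            resp = service_client.get('/api/health')
            assert resp.status_code == 200
    finally:
        heal_network('node-2')    # always remove the injected fault
    # After healing, node-2 should rejoin the cluster
    assert cluster_size() == 3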
Message‑queue testing – send/receive, persistence, ordering, duplication, backlog handling and dead‑letter queues. Example:
def test_message_queue():
    producer.send('topic', {'data': 'test'})
    msg = consumer.receive(timeout=5)
    assert msg['data'] == 'test'
    # Persistence after broker restart
    restart_broker()
    msg = consumer.receive(timeout=5)
    assert msg is not None
    # Backlog test
    for i in range(1000):
        producer.send('topic', {'id': i})
    assert consume_all() == 1000
Microservice testing layers – unit, contract, integration and end‑to‑end. Emphasise contract verification, service registration, configuration centre, tracing, circuit breaking and distributed transaction validation; a lightweight contract check is sketched below.
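A sketch under the assumption that a simple JSON‑schema check suffices (consumer‑driven contract tools such as Pact go further); the endpoint and schema are illustrative:
import jsonschema
import requests

user_schema = {
    'type': 'object',
    'required': ['id', 'name'],
    'properties': {'id': {'type': 'integer'}, 'name': {'type': 'string'}},
}

def test_user_contract():
    resp = requests.get('/api/users/1')
    jsonschema.validate(instance=resp.json(), schema=user_schema)  # raises on violation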
Cloud‑native testing – containerisation, Kubernetes deployment, service mesh, auto‑scaling, secret management. Recommended tools: Testcontainers, Kind/Minikube, Istio, Helm; a Testcontainers sketch follows.
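A minimal Testcontainers sketch, assuming Docker is available and the testcontainers and sqlalchemy Python packages are installed:
import sqlalchemy
from testcontainers.postgres import PostgresContainer

def test_with_throwaway_postgres():
    # Spin up a disposable PostgreSQL container for the duration of the test
    with PostgresContainer('postgres:15') as pg:
        engine = sqlalchemy.create_engine(pg.get_connection_url())
        with engine.connect() as conn:
            assert conn.execute(sqlalchemy.text('SELECT 1')).scalar() == 1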
Big‑data platform testing – validate data ingestion, ETL pipelines, storage, computation, visualization and data‑quality checks. Sample pipeline validation:
def test_data_pipeline():
    assert source_data_count > 0
    etl_job.run()
    assert etl_job.status == 'SUCCESS'
    assert data_quality_score > 0.95
    assert source_count == target_count
    assert processing_time < threshold
Programming Ability – Python Examples
Singleton via decorator:
def singleton(cls):
    instances = {}
    def wrapper(*args, **kwargs):
        if cls not in instances:
            instances[cls] = cls(*args, **kwargs)
        return instances[cls]
    return wrapper

@singleton
class Database:
    pass
Thread pool with ThreadPoolExecutor:
from concurrent.futures import ThreadPoolExecutor
executor = ThreadPoolExecutor(max_workers=5)
future = executor.submit(task_func, arg1, arg2)
result = future.result()
executor.shutdown()
Exception handling pattern:
try:
    result = 10 / 0
except ZeroDivisionError as e:
    print(f"Division error: {e}")
except Exception as e:
    print(f"Other error: {e}")
else:
    print("No exception")
finally:
    print("Always executed")
Decorator example (timer and retry with parameters):
# Simple timer decorator
import time, functools

def timer(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)
        end = time.time()
        print(f"Elapsed: {end - start}s")
        return result
    return wrapper

@timer
def test():
    time.sleep(1)

# Retry decorator with configurable attempts
def retry(max_times=3):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for i in range(max_times):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if i == max_times - 1:
                        raise  # out of attempts – re‑raise the last error
        return wrapper
    return decorator

@retry(max_times=3)
def unstable():
    # flaky logic
    pass
Common data‑structure operations:
# List
lst = [1, 2, 3]
lst.append(4)
lst.extend([5, 6])
lst.insert(0, 0)
lst.remove(3)
lst.pop()
lst.sort()
lst.reverse()
# Dictionary
cfg = {'a': 1, 'b': 2}
cfg['c'] = 3
value = cfg.get('a', 0)
keys = cfg.keys()
values = cfg.values()
items = cfg.items()
# Set
s = {1, 2, 3}
s.add(4)
s.remove(1)
intersection = s & {3, 4, 5}
union = s | {3, 4, 5}
difference = s - {3, 4, 5}
# Tuple unpacking
t = (1, 2, 3)
a, b, c = t
Comprehensive Practice – Test Planning
Typical test‑plan outline:
Project overview and scope.
Testing strategy (manual, automated, performance, security).
Environment requirements (hardware, software, network).
Resources (team members, tools, licences).
Schedule and milestones.
Risk assessment and mitigation.
Deliverables (test cases, reports, metrics).
Test‑pyramid distribution (approximate percentages):
Unit tests – 60 %.
API tests – 30 %.
End‑to‑end UI tests – 10 %.
Key quality metrics (a quick calculation sketch follows this list):
Defect density (defects per KLOC).
Test coverage (code, requirements, risk).
Escape rate (defects found post‑release).
Test efficiency (tests per day, automation ROI).
Release quality (pass rate, critical defect count).
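A sketch of how defect density and escape rate fall out of the raw counts; the figures are made‑up sample data:
defects_internal = 45        # found before release
defects_post_release = 5     # found after release
kloc = 25                    # thousand lines of code

defect_density = (defects_internal + defects_post_release) / kloc  # defects per KLOC
escape_rate = defects_post_release / (defects_internal + defects_post_release)
print(f'Defect density: {defect_density:.1f}/KLOC, escape rate: {escape_rate:.1%}')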
Adopt left‑shift quality practices: involve testers in requirement reviews, enforce code reviews, integrate automated tests early in CI, and continuously monitor quality gates.