
Ace QA Interviews: 100+ Must‑Know Questions & Expert Answers for Test Engineers

This guide compiles over a hundred high‑frequency interview questions covering functional testing, API automation, performance testing, Linux commands, Docker, Kubernetes, and test leadership, each paired with concise answer points to help quality engineers prepare effectively and secure their next offer.

Test Development Learning Exchange

Functional Testing (10+ questions)

Q: What is the difference between black‑box and white‑box testing?

A: Black‑box focuses on input/output (functionality); white‑box examines code logic (path/branch coverage).

Q: How would you design test cases for a login feature?

A: Cover positive (correct credentials) and negative scenarios (empty, wrong, special characters), security (XSS/SQL injection), performance (concurrent logins), and compatibility (multiple devices).

Q: Give an example of equivalence partitioning and boundary‑value analysis.

A: For an age field (1‑120): valid partition [1,120]; invalid <1 or >120; boundary values 0,1,2,119,120,121.
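
The partitions and boundary values above can be sketched as a small check against a hypothetical age validator (the function name and rule are illustrative, not from any real system):

```python
# Hypothetical validator for an age field that accepts 1-120.
def is_valid_age(age: int) -> bool:
    """Return True when age falls inside the valid partition [1, 120]."""
    return 1 <= age <= 120

# Boundary-value analysis: 0 and 121 sit just outside the range,
# 1 and 120 sit exactly on its edges, 2 and 119 just inside.
cases = {0: False, 1: True, 2: True, 119: True, 120: True, 121: False}
for age, expected in cases.items():
    assert is_valid_age(age) == expected
```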

Q: A bug is rejected by a developer – what do you do?

A: Provide reproducible steps, logs, and requirement reference; involve product for confirmation; escalate to technical lead if needed.

Q: What is smoke testing and when is it performed?

A: Quick verification of core flows to ensure a build is testable, typically after a build is delivered and before regression.

Q: How to test a requirement without documentation?

A: Infer expectations from prototypes, competitor products, or user stories; hold a three-way review with product, development, and testing; supplement with exploratory testing.

Q: What is regression testing and how do you define its scope?

A: Verify that changes haven’t introduced new defects; scope includes the modified module, its dependent modules, and critical end‑to‑end paths.

Q: How to ensure test coverage?

A: Break requirements into items, map test cases, and generate coverage reports (e.g., requirement coverage ≥ 95%).
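
The requirement-to-case mapping can be sketched as a tiny coverage calculation (requirement and case IDs are invented for illustration):

```python
# Map requirement items to the test cases that cover them, then compute
# requirement coverage. All IDs here are hypothetical.
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}
case_mapping = {
    "TC-01": {"REQ-1"},
    "TC-02": {"REQ-2", "REQ-3"},
}

covered = set().union(*case_mapping.values())
coverage = len(covered & requirements) / len(requirements)
assert coverage == 0.75  # below the 95% target: REQ-4 still needs a case
```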

Q: What should a production bug‑post‑mortem contain?

A: Root‑cause analysis (5 Whys), missed test reasons, and improvement actions such as new test cases, monitoring, or process changes.

Q: Core abilities of a test engineer?

A: Risk identification, user‑centric thinking, communication & collaboration, quality advocacy, and technical enablement.

Q: How to test a payment function?

A: Validate success, failure, timeout, duplicate payments, amount boundaries, reconciliation consistency, and refund flow.

Q: What is exploratory testing and when is it suitable?

A: Unscripted, experience‑driven testing; ideal for vague requirements, tight schedules, or innovative features.

API Automation (12 questions)

Q: Why automate APIs and what value does it bring?

A: Fast regression, protection of critical paths, CI/CD gatekeeping, and freeing human effort for higher‑level testing.

Q: Difference between GET and POST parameter passing?

A: GET passes parameters in the URL query string; POST sends them in the request body, typically as JSON (json= in the requests library) or form data (data=).
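
The difference can be seen by building (without sending) requests with the requests library and inspecting where the parameters end up; the URL is a placeholder:

```python
import requests

# GET: params= is appended to the URL as a query string.
get_req = requests.Request("GET", "https://api.example.com/items",
                           params={"q": "phone", "page": 1}).prepare()
assert get_req.url == "https://api.example.com/items?q=phone&page=1"

# POST with json=: the dict is serialized and Content-Type is set for us.
post_json = requests.Request("POST", "https://api.example.com/items",
                             json={"name": "alice"}).prepare()
assert post_json.headers["Content-Type"] == "application/json"

# POST with data=: the dict is URL-encoded as form fields in the body.
post_form = requests.Request("POST", "https://api.example.com/items",
                             data={"name": "alice"}).prepare()
assert post_form.body == "name=alice"
```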

Q: How to handle cookies or token authentication?

A: Use requests.Session() for automatic cookie management; manually extract token and add it to request headers.
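
A minimal sketch of both approaches (the token value and login endpoint are hypothetical):

```python
import requests

# A Session persists cookies automatically: Set-Cookie headers from one
# response are replayed on every later request in the same session, e.g.:
#   session.post("https://api.example.com/login", json={...})
session = requests.Session()

# Token auth: extract the token once (normally from the login response
# body), then attach it as a default header on the session.
token = "abc123"
session.headers.update({"Authorization": f"Bearer {token}"})
assert session.headers["Authorization"] == "Bearer abc123"
```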

Q: Does HTTP 200 always mean success?

A: No – also verify business fields, e.g., {"code": "9999", "msg": "failure"}.
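
The two layers can be asserted separately, using the payload above (the "0000" success code is a common convention, assumed here for illustration):

```python
import json

# Transport layer said OK...
status_code = 200
raw = '{"code": "9999", "msg": "failure", "data": null}'
assert status_code == 200

# ...but the business layer reports a failure: always assert on the
# business code as well, not just the HTTP status.
body = json.loads(raw)
assert body["code"] != "0000"   # "0000" = hypothetical success code
assert body["msg"] == "failure"
```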

Q: How to process API responses?

A: Assert status_code, parse json(), and check relevant headers such as Content‑Type.

Q: How to retry on network timeout?

A: Use the tenacity library with exponential back‑off, retrying only Timeout or ConnectionError.

Q: How to refresh an expired token automatically?

A: Intercept 401 responses, call a refresh endpoint or re‑login, then retry the original request.
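
The intercept-and-retry pattern can be sketched independently of any HTTP library; `send` issues the request and `refresh` re-logins, both stubbed here:

```python
class FakeResponse:
    """Stand-in for an HTTP response, carrying only a status code."""
    def __init__(self, status_code):
        self.status_code = status_code

def with_token_refresh(send, refresh):
    """Wrap a request callable: on 401, refresh credentials, retry once."""
    def wrapper(*args, **kwargs):
        resp = send(*args, **kwargs)
        if resp.status_code == 401:
            refresh()
            resp = send(*args, **kwargs)
        return resp
    return wrapper

# Stub server: first call returns 401 (expired token), retry returns 200.
responses = [FakeResponse(401), FakeResponse(200)]
refreshed = []
send = lambda: responses.pop(0)
refresh = lambda: refreshed.append(True)

resp = with_token_refresh(send, refresh)()
assert resp.status_code == 200
assert refreshed == [True]  # refresh ran exactly once
```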

Q: Design a maintainable automation framework?

A: Four layers – configuration, utilities, test cases, reports; abstract services; separate data from logic.

Q: How to handle API dependencies?

A: Encapsulate prerequisite steps or use a test‑data factory (direct DB insert) to keep cases independent.
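
The test-data-factory idea can be sketched with an in-memory SQLite database; the table and column names are invented for illustration:

```python
import sqlite3

# Insert the prerequisite record directly into the database so the API
# test does not depend on a "create user" API call succeeding first.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

def make_user(name: str) -> int:
    """Factory: create a user row directly and return its id."""
    cur = conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    conn.commit()
    return cur.lastrowid

user_id = make_user("order_flow_precondition")
row = conn.execute("SELECT name FROM users WHERE id = ?",
                   (user_id,)).fetchone()
assert row == ("order_flow_precondition",)
```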

Q: How to validate data correctness?

A: Verify protocol layer (status code), structure layer (JSON Schema), and business layer (key field values).
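
The three layers applied to a hypothetical order-query response (a library such as jsonschema would do the structure layer declaratively; it is written by hand here to stay self-contained):

```python
import json

raw = '{"code": "0000", "data": {"order_id": "A1001", "amount": 99.5}}'

# 1. Protocol layer: transport-level status code.
status_code = 200
assert status_code == 200

# 2. Structure layer: required keys and their types.
body = json.loads(raw)
assert isinstance(body.get("data"), dict)
assert isinstance(body["data"].get("order_id"), str)

# 3. Business layer: key field values match expectations.
assert body["code"] == "0000"
assert body["data"]["amount"] == 99.5
```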

Q: Integrate API tests into CI/CD?

A: Trigger smoke tests on commit; block pipeline on failures; generate Allure reports.

Q: When is automation not suitable?

A: Frequently changing requirements, one‑off verification, or results that are subjective (e.g., UI aesthetics).

Performance Testing (10 questions)

Q: Types of performance tests?

A: Load, stress, stability, concurrency, and capacity testing.

Q: Key performance indicators?

A: Response time (RT), throughput (TPS/QPS), error rate, CPU/memory usage, and connection count.

Q: How to design performance test scenarios?

A: Model production traffic (e.g., peak QPS = 1000) and simulate realistic user behavior.

Q: Parameterize tests in JMeter?

A: Use CSV Data Set Config, user-defined variables, or built-in functions such as ${__Random(1,1000)}.

Q: How to monitor server resources?

A: Deploy Prometheus + Grafana for CPU/memory/disk I/O; use Arthas for Java applications.

Q: Why might TPS not increase?

A: Database slow queries, thread blocking, frequent GC, network bandwidth limits, or code lock contention.

Q: What is a performance “knee point” and how to find it?

A: The load level beyond which response time rises sharply or the error rate spikes, i.e. the maximum the system can sustain; locate it by increasing load step by step and watching the metrics.
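
The step-up search can be sketched as a simple scan over (load, p95 response time) pairs; the numbers are illustrative, not real measurements:

```python
# Each step: (offered load in QPS, observed p95 response time in ms).
steps = [(100, 80), (200, 85), (400, 95), (800, 110), (1600, 420)]

# Flag the knee where p95 more than doubles between consecutive steps;
# the knee is the last load level the system sustained.
knee = None
for (prev_load, prev_rt), (load, rt) in zip(steps, steps[1:]):
    if rt > prev_rt * 2:
        knee = prev_load
        break
assert knee == 800
```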

Q: Steps for performance tuning?

A: First monitor to locate bottlenecks (DB, cache, code), then optimize SQL, add indexes, or adjust JVM parameters.

Q: How to isolate load‑test data?

A: Use a dedicated test database and tag records with unique identifiers (e.g., user_test_001).

Q: What belongs in a performance test report?

A: Scenario description, monitoring charts, bottleneck analysis, optimization suggestions, and a pass/fail conclusion.

Linux Commands (10 questions)

Q: How to view running processes?

A: ps -ef | grep java, top, or htop.

Q: How to check which port is in use?

A: netstat -tunlp | grep 8080 or lsof -i :8080.

Q: How to tail logs in real time?

A: tail -f app.log; filter the live stream with tail -f app.log | grep "ERROR".

Q: How to find files?

A: find / -name "*.log" or locate filename (after updatedb).

Q: How to view disk usage?

A: df -h for filesystem space; du -sh /var/log for directory size.

Q: How to compress/decompress archives?

A: tar -czvf archive.tar.gz dir/ and tar -xzvf archive.tar.gz.

Q: How to see system load averages?

A: uptime (shows 1/5/15‑minute loads) or cat /proc/loadavg.

Q: How to kill a process?

A: kill -9 PID or pkill -f "java".

Q: How to view network connections?

A: ss -tuln (modern replacement for netstat) and use ping / telnet for connectivity tests.

Q: How to modify permissions?

A: chmod 755 file and chown user:group file.

Docker (10 questions)

Q: Difference between Docker and a virtual machine?

A: Docker shares the host kernel, is lightweight and starts in seconds; VMs run a full OS, incurring higher resource overhead.

Q: Common Docker commands?

A: docker build -t image ., docker run -d -p 8080:8080 image, docker logs CONTAINER.

Q: Key Dockerfile instructions?

A: FROM, COPY, RUN, EXPOSE, CMD.

Q: How to enter a running container?

A: docker exec -it CONTAINER_ID /bin/bash.

Q: How does image layering work?

A: Each Dockerfile instruction creates a read‑only layer; the container adds a writable top layer.

Q: Persistent storage options?

A: Named volumes managed by Docker (-v myvolume:/container/path) or bind mounts of host directories (-v /host/path:/container/path).

Q: How to clean unused images?

A: docker system prune -a.

Q: What is Docker Compose?

A: A YAML‑based tool to define multi‑container applications (e.g., web + DB + Redis).

Q: How to view container resource usage?

A: docker stats.

Q: How to push an image to a registry?

A: docker push registry/image:tag; private registries can be Harbor or Nexus.

Kubernetes (10 questions)

Q: What is a Pod?

A: The smallest schedulable unit, containing one or more containers that share network and storage.

Q: Purpose of a Deployment?

A: Manages replica count, performs rolling updates, and enables rollbacks.

Q: Purpose of a Service?

A: Provides a stable access point to Pods (ClusterIP, NodePort, LoadBalancer).

Q: How to view Pod logs?

A: kubectl logs POD_NAME (add -f for follow).

Q: How to exec into a Pod?

A: kubectl exec -it POD_NAME -- /bin/sh.

Q: Difference between ConfigMap and Secret?

A: ConfigMap stores non‑sensitive configuration; Secret stores sensitive data (base64‑encoded).
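
Note that base64 is an encoding, not encryption, and is trivially reversible, which is why Secrets still need RBAC and encryption at rest:

```python
import base64

# Kubernetes stores Secret values base64-encoded; anyone who can read
# the Secret object can decode them.
encoded = base64.b64encode(b"s3cret-password").decode()
assert base64.b64decode(encoded) == b"s3cret-password"
```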

Q: What is Ingress?

A: Layer-7 HTTP/HTTPS routing rules that direct external traffic to Services by host and path, avoiding a separate NodePort per service.

Q: How to scale a deployment?

A: kubectl scale deployment/app --replicas=5.

Q: What does Helm do?

A: Kubernetes package manager; uses Charts to templatize complex deployments.

Q: How to troubleshoot a failing Pod?

A: kubectl describe pod to view events; kubectl logs for container logs.

Test Management / Quality Leadership (12 questions)

Q: How to build a quality assurance system?

A: Shift-left (requirement reviews, static scans) + in-process quality activities (automation, performance testing) + shift-right (production monitoring, retrospectives).

Q: How to measure team effectiveness?

A: Delivery cycle time, production defect density, automation coverage, and smoke‑test pass rate.

Q: How to encourage developers to self‑test?

A: Provide unit‑test templates, enforce CI gate checks, and make quality metrics visible.

Q: How to conduct a quality retrospective?

A: Focus on root causes (people, process, tools), define actionable items, and track closure.

Q: Managing outsourced staff or interns?

A: Clearly define task boundaries, hold daily stand‑ups, and perform code/test case reviews.

Q: How to secure resources (headcount, budget)?

A: Use data‑driven ROI calculations and risk quantification (e.g., “no automation adds 3 days/month delay”).

Q: How to raise the technical capability of the team?

A: Organize tech talks, internal certifications, and encourage contributions to open‑source or community projects.

Q: How to handle an emergency production release?

A: Run a quick smoke test, verify core flows, rely on monitoring as a safety net, then back‑fill testing after release.

Q: Define “good quality”.

A: Users experience no visible failures, requirements are delivered correctly the first time, and releases are stable and efficient.

Q: How to promote a quality‑first culture?

A: Make quality everyone’s responsibility, recognize “quality stars”, and celebrate successes publicly.

Q: How to implement testing left‑shift?

A: Participate in requirement reviews, write acceptance criteria using Gherkin (Given‑When‑Then), and drive contract testing.

Q: Three‑year roadmap for a quality team?

A: Move toward automation & AI, embed quality early in development, and integrate experience, security, and performance into a unified strategy.
