Common Reasons Why End-to-End Automation Testing Fails and How to Avoid Them
The article outlines why end-to-end test automation often fails—such as hiring the wrong people, neglecting code quality, underestimating long-term effort, misunderstanding automation scope, limited test coverage, and poor visibility—and offers practical guidance to improve automation success.
Automated end-to-end testing aims to take over part of the manual tester's workload by programmatically exercising the front end, back-end APIs, and performance. Not everything a manual tester does can be automated; UX and usability testing, for example, are hard to automate, but most repetitive tests can be.
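As a minimal sketch of the kind of repetitive check that is cheap to automate, the snippet below verifies the shape and invariants of an API response. The `fetch_user_profile` helper and its fields are hypothetical stand-ins; in a real suite it would call the service under test over HTTP.

```python
import unittest

# Hypothetical helper standing in for a real HTTP client call;
# stubbed locally so the sketch is self-contained.
def fetch_user_profile(user_id: int) -> dict:
    return {"id": user_id, "name": "alice", "active": True}

class UserProfileApiTest(unittest.TestCase):
    """Repetitive checks worth automating: response shape and invariants."""

    def test_profile_has_required_fields(self):
        profile = fetch_user_profile(42)
        for field in ("id", "name", "active"):
            self.assertIn(field, profile)

    def test_profile_id_round_trips(self):
        self.assertEqual(fetch_user_profile(7)["id"], 7)

# Run the suite programmatically instead of unittest.main(),
# so the script can be embedded in a larger runner.
suite = unittest.TestLoader().loadTestsFromTestCase(UserProfileApiTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

Once a check like this exists, it costs nothing to run it on every build, which is exactly where automation pays off over manual repetition.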
Automation brings many benefits: it saves time and makes far more tests feasible, enabling release cycles shorter than exhaustive manual testing would allow. Even so, many projects fail to meet their initial expectations.
A previous article, "Web端自动化测试失败原因汇总" (a round-up of web automation testing failure causes), listed several causes of automation failure.
Wrong People Working on Automation
Automation testing involves a great deal of programming and scripting and therefore requires testers with genuine development skills; in practice it is a software development role.
Some managers believe that good manual testers can be quickly trained on tools and scripts to become automation engineers, a view that can be disastrous.
Another cause is the belief that simple tools, open-source software, or machine learning can generate test cases; so far these approaches produce only demos or trivial cases, and a robust test suite still requires hand-written code and solid development skills.
Lack of Code‑Quality Mindset
Automation code should be high‑quality, maintainable, and extensible to ensure lasting effectiveness.
Often this is not the case because the people involved lack a clear sense of what code quality means; solid foundations in computer science, algorithms, and data structures, together with hands-on experience, help developers appreciate why quality matters.
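One concrete habit that separates maintainable automation from throwaway scripts is the Page Object pattern: locators and interactions live in one class, so a UI change touches one file instead of every test. The sketch below is illustrative only; `LoginPage`, its selectors, and the `FakeDriver` recorder are hypothetical stand-ins for a real WebDriver API.

```python
class FakeDriver:
    """Stands in for a real browser driver; records actions for the demo."""
    def __init__(self):
        self.actions = []

    def type(self, locator, text):
        self.actions.append(("type", locator, text))

    def click(self, locator):
        self.actions.append(("click", locator))

class LoginPage:
    # All locators for this page live here; a redesign changes only this class.
    USERNAME = "#username"
    PASSWORD = "#password"
    SUBMIT = "button[type=submit]"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.type(self.USERNAME, user)
        self.driver.type(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)

driver = FakeDriver()
LoginPage(driver).login("alice", "secret")
print(len(driver.actions))  # 3
```

Tests then read as intent ("log in as alice") rather than as a brittle list of selectors, which is what keeps a suite alive as the product changes.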
Low quality may also stem from time constraints or a lack of awareness that automation requires upfront investment; management should allocate sufficient time in planning.
Not Recognizing Automation as a Long‑Term Project
End-to-end automation must be updated and maintained for as long as the product or service under test keeps evolving, potentially across the entire lifecycle of the company's offerings.
Hiring short‑term contractors can lead to maintenance gaps when they leave; the automation must be treated as a long‑term effort alongside the product.
Misunderstanding the Composition of Automation
Leadership without a technical background may misunderstand what developing automation entails, thinking it’s a simple add‑on rather than a software development project.
Automation should start early and run in parallel with feature and product development; presenting automation as a ready‑made solution is a fundamental misconception.
Abandoning Automatable Tasks Too Quickly
While some tasks are harder to automate, many can be; avoid giving up prematurely on automation opportunities.
With the right attitude, automation becomes simpler; any procedural task can, in principle, be scripted, and each one automated increases the value of the suite.
Limited Test Scope
Successful automation projects involve more than test executors and tools; relying solely on tools limits the return on investment, and integration with build systems and continuous integration (CI) is essential.
End‑to‑end tests should be scheduled (e.g., daily) rather than run ad‑hoc; for front‑end testing on web and mobile, a device matrix or cloud real‑device farm is needed, while back‑end API performance tests can be deployed on cloud instances in various regions.
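A scheduled run typically expands a device or region matrix into individual jobs, one per combination. The sketch below shows that expansion; the browser and region names are hypothetical, and in practice the CI system's cron trigger would launch each job.

```python
import itertools

# Hypothetical matrix for a nightly scheduled run.
BROWSERS = ["chrome", "firefox"]
REGIONS = ["us-east", "eu-west", "ap-south"]

def build_job_matrix(browsers, regions):
    """Expand the cross product into individual job descriptors."""
    return [{"browser": b, "region": r}
            for b, r in itertools.product(browsers, regions)]

jobs = build_job_matrix(BROWSERS, REGIONS)
print(len(jobs))  # 2 browsers x 3 regions = 6 scheduled jobs
```

Keeping the matrix in data rather than in copy-pasted job definitions makes it trivial to add a device or region without touching the test code.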
Lack of Visibility, Traceability, and Reporting
When automated tests run on schedule or on‑demand, results and data are generated; mechanisms to collect, report, and analyze this data are crucial.
Without proper reporting the project loses much of its value; a good report acts as a hub for results, showing per-case details, run frequency, and failure reasons, and linking failures to bugs so they can be acted on.
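As a sketch of collecting results into such a report, the snippet below parses JUnit-style XML, the de facto interchange format most runners emit, and pulls out the failures. The embedded XML sample is hypothetical; a real pipeline would read the runner's output files and push the summary to a dashboard or bug tracker.

```python
import xml.etree.ElementTree as ET

# Hypothetical JUnit-style result file for illustration.
SAMPLE = """
<testsuite name="checkout" tests="3" failures="1">
  <testcase name="add_to_cart"/>
  <testcase name="apply_coupon">
    <failure message="coupon rejected"/>
  </testcase>
  <testcase name="pay"/>
</testsuite>
"""

def summarize(junit_xml: str) -> dict:
    """Reduce a JUnit-style suite to the numbers and failures a report needs."""
    suite = ET.fromstring(junit_xml)
    failures = [
        (case.get("name"), case.find("failure").get("message"))
        for case in suite.iter("testcase")
        if case.find("failure") is not None
    ]
    return {
        "suite": suite.get("name"),
        "total": int(suite.get("tests")),
        "failed": failures,
    }

report = summarize(SAMPLE)
print(report["failed"])  # [('apply_coupon', 'coupon rejected')]
```

Aggregating summaries like this across scheduled runs is what turns raw pass/fail noise into the traceable history the section above calls for.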
Disclaimer: Article originally published on the "FunTester" public account; unauthorized reproduction (except Tencent Cloud) is prohibited.
Technical Article Highlights
Linux Performance Monitoring Tool netdata Chinese Version
Performance Testing Framework Third Edition
How to Enjoy Performance Testing on Linux CLI
HTTP Mind Map Illustrated
Graphical Output of Performance Test Data
Measuring Asynchronous Write Interface Latency in Load Tests
Quantitative Performance Testing for Multiple Login Methods
JMeter Throughput Error Analysis
No‑Code Article Highlights
Programming Mindset for Everyone
JSON Basics
2020 Tester Self‑Improvement
Automation Pitfalls for Beginners (Part 1)
Automation Pitfalls for Beginners (Part 2)
How to Become a Full‑Stack Automation Engineer
Left‑Shift Testing
Choosing Manual vs. Automated Testing?