Why ATMs Didn't Kill Bank Tellers—and AI Won't Erase Test Engineers
The article debunks the myth that automation eliminates jobs by tracing the ATM era, three waves of software testing evolution, and current AI trends, showing that repetitive tasks disappear while roles requiring judgment and context shift and expand.
Automation vs. Job Elimination
Task automation does not automatically eliminate entire occupations. Historical examples show that new technologies often change job roles rather than erase them.
ATM Case Study
Economist James Bessen documents that ATMs were introduced in the United States in the 1970s and that, by the mid‑1990s, more than 400,000 units were deployed. Teller counts at a typical branch fell from 20 in 1988 to 13 in 2004—a 35% reduction. However, banks used the lower operating costs to open more branches, increasing urban locations by 43%, so the total number of tellers rose. The role shifted from cash handling to relationship banking, focusing on sales, complex service, and trust‑based interactions that ATMs cannot perform. CCIA identifies conflating "task automation" with "job elimination" as a common logical error.
Three Waves in Software Testing
QTP era (early 2000s): Mercury Interactive (later acquired by Hewlett‑Packard) released QuickTest Professional, enabling non‑technical users to record and replay tests. The prediction was that manual testers would become obsolete. In practice, manual execution declined while demand grew for engineers who could write and maintain VBScript, turning testers into script developers.
Selenium era (2010s): Open‑source Selenium lowered entry barriers, prompting the belief that testing would become a low‑skill activity. Selenium requires Java or Python proficiency, and the complexity of framework setup, data management, and environment configuration shifted testers toward becoming framework architects.
DevOps wave (post‑2015): Continuous integration and delivery broke the notion of testing as the final gate. The forecast was that developers would absorb testing duties and QA teams would shrink. Instead, the proportion of large QA teams rose from 17% in 2023 to an expected 30% by 2025, as DevOps expanded the definition of quality to include developers, product owners, and dedicated quality engineers.
Each wave eliminated a narrow, repetitive role and created a broader, judgment‑oriented position.
Current AI Signals
McKinsey’s 2025 “State of AI” report presents two seemingly contradictory figures: 32% of organizations expect AI to reduce their workforce next year, while 28% of software‑engineering executives anticipate AI will increase headcount. The divergence reflects different adoption stages across companies.
Tesla’s QA data provides a concrete example: from 2020 to 2025 manual testers dropped 75%, while AI‑testing specialists grew 850%. The team did not shrink; it transformed—repetitive execution vanished, and responsibilities for AI system design, training, validation, and maintenance surged.
Gartner revised earlier forecasts, estimating that 60%–70% of routine testing tasks will be automated by 2030, yet demand for technical QA professionals will rise by 25%.
AI Hiring Bias Case
Amazon built an AI recruiting system to screen engineering candidates. After deployment, the system systematically lowered scores for female applicants because the training data—decade‑old hiring records—were male‑dominated. Amazon abandoned the system. The episode illustrates that insufficiently tested AI can fail in ways that remain invisible until real‑world harm occurs.
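Failures like Amazon's are exactly what bias and fairness testing is designed to surface before deployment. As an illustration only (the function names and outcome data below are hypothetical, not Amazon's system), a minimal sketch of one common heuristic, the "four‑fifths rule," which flags any group whose selection rate falls below 80% of the best‑performing group's rate:

```python
from collections import Counter

def selection_rates(decisions):
    """Compute the selection (hire) rate per demographic group.

    decisions: list of (group, selected) tuples, selected is a bool.
    """
    totals = Counter(group for group, _ in decisions)
    selected = Counter(group for group, ok in decisions if ok)
    return {group: selected[group] / totals[group] for group in totals}

def four_fifths_check(rates):
    """Return True per group if its rate is at least 80% of the
    highest group's rate (the four-fifths rule heuristic)."""
    best = max(rates.values())
    return {group: rate / best >= 0.8 for group, rate in rates.items()}

# Hypothetical screening outcomes, purely for illustration.
outcomes = ([("A", True)] * 40 + [("A", False)] * 60
            + [("B", True)] * 20 + [("B", False)] * 80)

rates = selection_rates(outcomes)     # A: 0.40, B: 0.20
print(four_fifths_check(rates))       # B fails the 80% threshold
```

A check like this would have made Amazon's skew visible as a test failure rather than a post‑deployment discovery; the four‑fifths rule is only a coarse screen, and serious fairness audits use richer metrics and statistical tests.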
New Test Types in the AI Era
Bias and fairness testing, hallucination detection, and model‑drift testing have emerged as distinct AI‑era testing requirements. They supplement, rather than replace, traditional testing. The EU AI Act codifies such testing mandates, creating a clear compliance pathway for test engineers.
Structural Changes at Scale
The World Economic Forum predicts automation could eliminate 85 million jobs while creating up to 97 million new ones. The net effect is positive, but the distribution is uneven: disappearing roles demand different skills, timelines, and geographic contexts than the emerging ones.
Implications for Test Engineers
GitLab’s survey shows 75% of critical defects are still discovered manually, highlighting the limits of AI in exploratory testing, business‑context judgment, and user empathy. James Bach notes that when a human‑executed test is automated, it ceases to be the same test—it becomes merely an output check.
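Bach's distinction can be made concrete with a toy example (the function and values below are hypothetical). An automated check verifies exactly the outputs it was scripted to verify, and nothing else:

```python
def format_price(amount_cents, currency="USD"):
    """Hypothetical function under test."""
    return f"{currency} {amount_cents / 100:.2f}"

def check_format_price():
    # The automated "check": confirms predefined expected outputs.
    assert format_price(1999) == "USD 19.99"
    assert format_price(0) == "USD 0.00"
    return "pass"

print(check_format_price())   # the check passes
print(format_price(-500))     # "USD -5.00": no check fails, but should
                              # a negative price exist at all? Only a
                              # human asking questions notices.
```

The check is valuable—it catches regressions cheaply and forever—but it cannot notice anything it was not told to look for, which is precisely the exploratory, judgment‑driven territory where GitLab's data shows humans still find most critical defects.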
For testing professionals, transformation is mandatory and time‑bound. While 75% of organizations list AI testing as a strategic priority, only 16% have operationalized it—a wide gap between ambition and execution.
Conclusion
Automation consistently removes highly repetitive, rule‑based tasks, leaving work that requires contextual understanding, judgment, and handling of edge cases. AI currently targets the former. Roles such as manual regression testers are being replaced, but those who evolve into quality engineers, AI‑testing specialists, or relationship‑focused professionals will thrive, while those who do not adapt risk obsolescence.