
How Vivo Boosted Internet R&D Efficiency: A Deep Dive into Platform Design and Automation

This article examines Vivo's internet R&D efficiency platform, outlining the rapid growth challenges, the 1‑2‑3 framework of dual‑loop delivery, demand and development standardization, and three key automation scenarios that together reduced delivery time, increased test coverage, and saved hundreds of person‑days.

vivo Internet Technology

Background and Challenges

Rapid growth of Vivo's internet business has expanded the project count five‑fold, service instances more than 3.7×, server count 2.8×, and the R&D team 2.7×, increasing collaboration overhead and eroding efficiency.

The development process now spans more than ten stages and involves more than ten roles, which drags down delivery efficiency. Meanwhile, infrastructure has evolved from physical servers through virtualization (OpenStack) to cloud hosts, adding platform complexity.

1‑2‑3 Framework for R&D Efficiency

Vivo's R&D efficiency team proposes a "1‑2‑3" framework: a dual‑loop delivery model, two standardizations (demand and development), and three dedicated pipelines for business, data, and model/algorithm delivery.

Dual‑loop model: the left loop defines direction and goals through a question → anchor → co‑create → refine sequence; the right loop focuses on rapid execution and value realization.

Demand standardization: end‑to‑end demand lifecycle management, from proposal to experimentation.

Development standardization: an automated, standardized process from branch creation to production deployment.

Three pipelines: dedicated support for business, data, and AI/algorithm workloads, turning siloed tools into an integrated toolchain.

Key Technical Scenarios

Based on the two standardizations, three critical scenarios are addressed:

Demand automation: automatic branch creation and merging, test‑environment provisioning, and pipeline triggering.

Standardized pipelines: shared pipeline configurations, parallel execution, and rule‑based checks.

Test automation: automated test‑environment updates and test‑case execution for functional and performance testing.

Demand automation flow: event trigger → rule matching → rule executor → log recorder. Detailed steps include configuring trigger rules, queuing events, matching templates, executing actions, and recording results.
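The trigger → match → execute → record flow can be sketched as a tiny rule engine. This is a minimal Python illustration, not Vivo's implementation; the event name demand_approved, the project field, and the branch‑creation action are all hypothetical:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    event_type: str                      # which event this rule matches
    condition: Callable[[dict], bool]    # rule-matching predicate
    action: Callable[[dict], str]        # rule executor

class RuleEngine:
    """Event trigger -> rule matching -> rule executor -> log recorder."""

    def __init__(self) -> None:
        self.rules: list[Rule] = []
        self.log: list[str] = []         # log recorder

    def register(self, rule: Rule) -> None:
        self.rules.append(rule)

    def dispatch(self, event: dict) -> None:
        # Rule matching: run every rule whose type and condition match.
        for rule in self.rules:
            if rule.event_type == event["type"] and rule.condition(event):
                result = rule.action(event)          # rule executor
                self.log.append(f"{event['type']}: {result}")

engine = RuleEngine()
engine.register(Rule(
    event_type="demand_approved",        # hypothetical event name
    condition=lambda e: e.get("project") == "app-store",
    action=lambda e: f"created branch feature/{e['demand_id']}",
))
engine.dispatch({"type": "demand_approved",
                 "project": "app-store",
                 "demand_id": "D-1024"})
```

Queuing events and template configuration would sit in front of dispatch in a real system; the sketch only shows the matching and execution core.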

Standardized Pipeline Implementation

Three components are required: custom process definition, parallel execution support, and rule‑based validation. Parallel pipelines remove queuing limits, isolate resources per run, generate tasks dynamically, and guarantee cleanup.
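The parallel‑execution properties (dynamic task generation, per‑run resource isolation, guaranteed cleanup) can be illustrated with a small Python sketch; the stage names and the temporary‑workspace scheme are assumptions, not Vivo's actual mechanism:

```python
import shutil
import tempfile
from concurrent.futures import ThreadPoolExecutor

def run_stage(task: str) -> str:
    """Run one dynamically generated pipeline task in an isolated,
    throwaway workspace, and clean it up no matter what happens."""
    workspace = tempfile.mkdtemp(prefix=f"{task}-")   # resource isolation
    try:
        # Real work (checkout, build, test) would happen here.
        return f"{task} ran in {workspace}"
    finally:
        shutil.rmtree(workspace, ignore_errors=True)  # guaranteed cleanup

# Dynamic task generation: e.g. one task per changed module.
tasks = [f"build-{i}" for i in range(4)]

# Parallel execution with no fixed queue limit per pipeline run.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(run_stage, tasks))
```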

Rule checks use bitwise AND (&) and XOR (^) operations to verify configuration consistency and pinpoint mismatches.
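As an illustration of the bitwise check (the stage names and bit assignments below are hypothetical), each configuration item can occupy one bit: AND confirms every required item is present, and XOR isolates exactly the items where two configurations disagree:

```python
# One bit per pipeline configuration item (hypothetical stage names).
STAGES = {"compile": 1 << 0, "unit_test": 1 << 1,
          "scan": 1 << 2, "deploy": 1 << 3}

def check_config(required: int, actual: int) -> tuple[bool, list[str]]:
    """required & actual keeps items present in both masks, so equality
    with required means nothing required is missing; required ^ actual
    keeps only the bits where the two masks disagree."""
    consistent = (required & actual) == required
    diff = required ^ actual
    mismatches = [name for name, bit in STAGES.items() if diff & bit]
    return consistent, mismatches

required = STAGES["compile"] | STAGES["unit_test"] | STAGES["deploy"]
actual   = STAGES["compile"] | STAGES["unit_test"] | STAGES["scan"]
ok, mismatched = check_config(required, actual)
# ok is False; "scan" is extra and "deploy" is missing.
```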

Testing Automation

Testing is split into process‑control automation (covering 80% of projects) and test‑execution automation for both server‑side (interfaces, performance) and client‑side (stability, functionality, analytics). Server‑side automation achieves >70% coverage; client‑side automation covers >90% of APKs.

Interface automation consists of four steps: interface recording (agent‑based Java call capture), interface management (manual or recorded registration), test‑case generation (parameter parsing), and report generation (batch suite creation, trigger options, notification).
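The test‑case‑generation step can be sketched as turning one recorded invocation into a replayable case. The record schema and field names here are invented for illustration, not the platform's actual format:

```python
import json

# A recorded call as an agent might capture it (hypothetical schema).
recorded = {
    "interface": "com.example.OrderService#getOrder",
    "request": {"orderId": "A-100", "withItems": True},
    "response": {"status": 0, "orderId": "A-100"},
}

def generate_case(record: dict) -> dict:
    """Parse the captured parameters into a test case that replays the
    request and asserts on the recorded response."""
    return {
        "name": f"auto_{record['interface'].split('#')[-1]}",
        "invoke": record["interface"],
        "params": record["request"],
        "expect": record["response"],
    }

case = generate_case(recorded)
print(json.dumps(case, indent=2))
```

Report generation would then batch such cases into suites and attach trigger and notification options.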

Performance testing uses a custom traffic‑recording and replay platform (MoonBox) to collect live traffic, store it in Elasticsearch, and replay it in a scalable container‑based test cluster, producing detailed performance reports.
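The replay‑and‑compare idea reduces to a few lines. This is a Python sketch only; the real platform records at the traffic layer and replays into a container cluster, and the request/response shapes below are invented:

```python
def replay_traffic(recorded_calls: list[dict], handler) -> list[dict]:
    """Replay recorded requests against a candidate handler and diff each
    response against the recorded one; an empty result means the
    candidate reproduces recorded production behavior."""
    diffs = []
    for call in recorded_calls:
        actual = handler(call["request"])
        if actual != call["response"]:
            diffs.append({"request": call["request"],
                          "expected": call["response"],
                          "got": actual})
    return diffs

# Recorded production traffic (hypothetical payloads).
recorded = [
    {"request": {"op": "square", "x": 3}, "response": 9},
    {"request": {"op": "square", "x": 5}, "response": 25},
]

# Candidate build under test, represented by a stand-in handler.
diffs = replay_traffic(recorded, lambda req: req["x"] ** 2)
```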

Project Practice and Results

In the “App Store” project, the team applied demand layering, automated pipelines, and test automation, reducing average demand‑to‑delivery time to 17 days and saving over 270 person‑days in 2023.

Automation increased test‑activity penetration by 156%, improved test‑execution efficiency by 35%, and raised the release success rate to 97.27%.

Future Roadmap

The team outlines a five‑stage evolution: full tool automation, tool integration, comprehensive R&D efficiency data linkage, platform‑driven scenario automation, and finally focusing on demand‑level value delivery.

Three north‑star metrics are defined: code change lead time < 1 hour, demand delivery < 2 weeks, and demand‑to‑result closure < 3 weeks.

Tags: platform engineering, automation, DevOps, continuous delivery, R&D efficiency
Written by vivo Internet Technology

Sharing practical vivo Internet technology insights and salon events, plus the latest industry news and conferences.
