
Crowdsourced Testing Platform for Bilibili: Background, Challenges, Risks, and Management

Bilibili launched a crowdsourced testing platform that mobilizes its engaged head‑users to run product tests across diverse devices, addressing limited professional resources and scenario mismatches. Information leakage, security, and compliance risks are mitigated through confidentiality agreements, whitelist access, and an intelligent management mini‑program that tracks recruitment, feedback quality, and incentives.

Bilibili Tech

Background

Bilibili has a strong community atmosphere, and its highly active, sticky head‑users are valuable assets. To leverage these users’ product knowledge and fragmented spare time, a crowdsourced testing ("众测") model was explored. The goal is to let head‑users test products in user‑centric environments and use the results to improve product quality.

Challenges

Limited testing resources

Professional testers must spend roughly 80% of their time on core features, leaving many lower‑priority ("non‑important") test items uncovered.

Diverse terminal environments

Variations in network, OS, device brands, etc., make it impossible for any app to achieve full coverage; bugs in a specific environment affect all users of that environment.

Mismatch between tester and user scenarios

Testers try to mimic user behavior, but performance tests still miss issues that users hit after release, because lab data cannot fully simulate real‑world usage.

Pre‑testing research

Other crowdsourced testing platforms

Open platforms (often from phone manufacturers) have low entry barriers but suffer from low feedback response rates and leakage risks. Platforms with entry barriers target users with coding ability, but the need to download separate packages for each task raises costs and reduces participation.

User willingness survey

Questionnaires showed that 98% of respondents are willing to sign a confidentiality agreement. Users request timely follow‑up displays, a dedicated platform/tool, and minimal form filling.

Main difficulties, risks and countermeasures

Requirement understanding difficulty

Participants have varied technical backgrounds, leading to misunderstandings of task descriptions. Clear, example‑driven instructions and plain language are essential.

Communication coordination difficulty

Large‑scale recruitment and feedback collection consume massive manpower. An intelligent management system (mini‑program) was built to integrate registration, download, and feedback, reducing manual effort.

Quality assurance difficulty

Feedback quality varies. A modular approach splits tasks into free testing and customized recruitment of users with certain technical abilities. Effectiveness is measured by recruitment vs. actual sign‑ups, total feedback vs. valid feedback, and post‑release bug detection.
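
The three effectiveness measures listed above can be expressed as simple ratios. A minimal sketch, assuming invented function names and example numbers (nothing here is Bilibili's actual schema or data):

```python
def signup_rate(recruited: int, signed_up: int) -> float:
    """Share of recruited slots actually filled by sign-ups."""
    return signed_up / recruited if recruited else 0.0

def valid_feedback_rate(total: int, valid: int) -> float:
    """Share of submitted feedback judged actionable after triage."""
    return valid / total if total else 0.0

def escaped_bug_rate(found_in_test: int, found_after_release: int) -> float:
    """Share of all known bugs that escaped to production."""
    all_bugs = found_in_test + found_after_release
    return found_after_release / all_bugs if all_bugs else 0.0

# Illustrative numbers: 120 recruited, 90 signed up; 300 feedback items,
# 180 valid; 45 bugs caught in crowdtesting, 5 found after release.
print(signup_rate(120, 90))           # → 0.75
print(valid_feedback_rate(300, 180))  # → 0.6
print(escaped_bug_rate(45, 5))        # → 0.1
```

Tracking all three together matters: a high sign-up rate with a low valid-feedback rate points to a recruitment-targeting problem rather than a participation problem.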

Incentive assessment difficulty

Motivation differs among participants. An incentive system awards points based on issue severity, allowing users to redeem rewards, and public recognition is given for high‑quality contributions.
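
A severity-based points scheme like the one described could be sketched as follows; the severity tiers and point values are invented for illustration, not the platform's real scale:

```python
# Hypothetical point values per issue severity (assumption, not real data).
SEVERITY_POINTS = {"critical": 50, "major": 20, "minor": 5, "suggestion": 1}

def award_points(issues: list[tuple[str, bool]]) -> int:
    """Sum points over (severity, is_valid) feedback items.

    Invalid reports earn nothing, which keeps the incentive aligned
    with feedback quality rather than raw volume.
    """
    return sum(SEVERITY_POINTS.get(sev, 0) for sev, valid in issues if valid)

print(award_points([("critical", True), ("minor", True), ("major", False)]))  # → 55
```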

Management risks and countermeasures

Information leakage risk

Confidential pre‑release information is protected by requiring users to sign a confidentiality agreement before accessing task details. Manual review is required before showing download links.

System security risk

Unreleased features may expose security vulnerabilities. A whitelist of authorized users limits access to sensitive tasks.
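
The two safeguards described (a signed confidentiality agreement and a per-task whitelist) compose naturally into a single download gate. A minimal sketch, with all data structures and identifiers invented for illustration:

```python
# Assumed stand-ins for the real user and task records.
signed_nda = {"uid_1001", "uid_1002"}          # users who signed the agreement
task_whitelist = {"task_42": {"uid_1001"}}     # per-task authorized users

def can_download(uid: str, task_id: str) -> bool:
    """Show the download link only to whitelisted users who signed the NDA."""
    return uid in signed_nda and uid in task_whitelist.get(task_id, set())

print(can_download("uid_1001", "task_42"))  # → True
print(can_download("uid_1002", "task_42"))  # → False (signed, but not whitelisted)
```

Requiring both conditions means a leaked link alone is useless: access still fails the whitelist check.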

Policy compliance risk

According to relevant regulations, a privacy‑authorization step explains how personal data is collected, used, and protected. Users can decline and still use the platform without providing personal data.

Platformization

Standardization and platformization

The crowdsourced testing platform follows a typical three‑tier web architecture: UI (user and admin interfaces), routing layer (request validation), business layer (task, reward, feedback, user operations), and data layer (database and file services). Logging, permission control, and asynchronous messaging are integrated.
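
The request flow through those tiers can be sketched in a few lines. This is a toy illustration of the layering only; the route path, function names, and in-memory "database" are all assumptions:

```python
DB: dict[str, dict] = {}  # data-layer stand-in for the database/file services

def data_save_task(task_id: str, payload: dict) -> None:
    """Data layer: persist the task record."""
    DB[task_id] = payload

def business_create_task(task_id: str, payload: dict) -> dict:
    """Business layer: task/reward/feedback/user operations live here."""
    data_save_task(task_id, payload)
    return {"status": "ok", "task_id": task_id}

def route_request(path: str, body: dict) -> dict:
    """Routing layer: validate the request before dispatching."""
    if path != "/tasks" or "task_id" not in body:
        return {"status": "error", "reason": "invalid request"}
    return business_create_task(body["task_id"], body)

print(route_request("/tasks", {"task_id": "t1", "title": "Test player"}))
# → {'status': 'ok', 'task_id': 't1'}
```

Keeping validation in the routing layer means the business layer can assume well-formed input, which simplifies the task, reward, and feedback operations behind it.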

Management platform

Admins can create tasks (title, description, version, schedule, upload packages), review registrations, export results, and handle feedback. Feedback items link directly to the corresponding task ID.
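
The admin objects just listed suggest a simple data model in which each feedback item carries the ID of its task. A hedged sketch, with field names inferred from the description rather than taken from the real schema:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    task_id: str
    title: str
    description: str
    version: str
    schedule: str                              # e.g. a start/end date range
    packages: list[str] = field(default_factory=list)  # uploaded test builds

@dataclass
class Feedback:
    task_id: str      # links the feedback directly back to its task
    uid: str
    description: str

task = Task("t1", "Player test", "Exercise the new player UI", "7.1.0",
            "2023-06-01 to 2023-06-07", packages=["player-7.1.0-beta.apk"])
fb = Feedback(task.task_id, "uid_1001", "Crash when seeking in a live replay")
```

Because `Feedback.task_id` is set from the task itself, exporting all feedback for a task is a single filter on that field.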

QQ Mini‑Program (domestic)

Users register with their UID and QQ number, download the test package, and submit feedback (device info, description, screenshots/video).

Overseas game crowdsourced testing

Implemented on Discord for authentication, supporting 12 languages. Tasks can be targeted to specific language groups (e.g., Thai only).
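
Targeting a task to one language group is a filter over registered testers. An illustrative sketch, assuming invented user records keyed by language code (e.g. `"th"` for Thai):

```python
# Assumed tester records; in the real system these would come from
# Discord-authenticated profiles.
users = [
    {"discord_id": "a", "language": "th"},
    {"discord_id": "b", "language": "en"},
    {"discord_id": "c", "language": "th"},
]

def eligible_testers(users: list[dict], target_langs: set[str]) -> list[dict]:
    """Return testers matching the task's target languages.

    An empty target set means the task is open to all language groups.
    """
    if not target_langs:
        return list(users)
    return [u for u in users if u["language"] in target_langs]

print([u["discord_id"] for u in eligible_testers(users, {"th"})])  # → ['a', 'c']
```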

Operational data (QQ Mini‑Program)

Since launch in June 2023, page views have risen steadily, and the platform has attracted over 10,000 unique visitors. Nearly 200 testing tasks have been published, with up to 10+ running concurrently. Weekly active users and feedback volumes are tracked as ongoing health metrics.

Conclusion

A small anecdote illustrates the human side of crowdsourced testing: a dedicated user repeatedly exchanged points for pink rabbit‑shaped gifts to donate to children's welfare homes, embodying the platform's spirit that "a single spark can start a prairie fire."

Tags: risk management, platform architecture, product management, user research, crowdsourced testing
Written by Bilibili Tech

Provides introductions and tutorials on Bilibili-related technologies.
