
Multi‑Tenant Account System Architecture and Migration Strategy at Bilibili

Bilibili redesigned its fragmented account system into a unified, multi-tenant architecture. Guided by DDD and a four-layer design, the new system consolidates all business lines into one codebase with configurable data isolation, business logic, and external dependencies; it is split into four micro-services and was migrated safely via gray release and bidirectional data synchronization.

Bilibili Tech
This article presents the redesign of Bilibili's account system from a backend engineering perspective. The existing system suffers from high maintenance cost due to many independent code branches for different business lines (domestic Bilibili, international Bilibili, overseas games, etc.), a fragmented micro‑service landscape (over 20 services), and difficulty in onboarding new tenants.

The new architecture adopts a multi‑tenant approach, consolidating all business lines onto a single code base while allowing tenant‑specific differences through configuration and interface abstraction. The design is guided by Domain‑Driven Design (DDD) concepts and a four‑layer architecture (interface, application, domain, infrastructure).

Key capabilities of the account system include registration, login (various credential types), password management, phone/email binding, SNS management, and token‑based authentication. Supporting services such as behavior captcha, phone blacklist, and email/SMS sending are also described.

The system is split into four micro‑services: User Service, Login Service, Auth Service, and Account Support Service. Each service owns a clear domain and shares common modules where appropriate.

The multi-tenant solution addresses three dimensions of tenant difference:

Data isolation (database‑level or table‑level)

Business‑logic variation (different flow nodes per tenant)

External‑dependency variation (different KV stores, MySQL instances, etc.)

Differences are expressed by abstracting the varying parts into interfaces and providing tenant‑specific implementations selected via configuration.
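A minimal sketch of this pattern in Go: a domain interface with per-tenant implementations, resolved through a registry keyed by the names that appear in the tenant configuration. The type and implementation names (`TokenService`, `TokenServiceMain`, `TokenServiceBstar`) mirror the configuration keys but are otherwise illustrative, not Bilibili's actual code.

```go
package main

import "fmt"

// TokenService is an assumed domain interface whose implementation
// varies per tenant.
type TokenService interface {
	Issue(mid int64) string
}

// TokenServiceMain is the implementation selected for the "main" tenant.
type TokenServiceMain struct{}

func (TokenServiceMain) Issue(mid int64) string {
	return fmt.Sprintf("main-token-%d", mid)
}

// TokenServiceBstar is the implementation selected for the "intl" tenant.
type TokenServiceBstar struct{}

func (TokenServiceBstar) Issue(mid int64) string {
	return fmt.Sprintf("bstar-token-%d", mid)
}

// registry maps implementation names, as referenced in the tenant
// configuration, to concrete instances.
var registry = map[string]TokenService{
	"TokenServiceMain":  TokenServiceMain{},
	"TokenServiceBstar": TokenServiceBstar{},
}

// tenantImpl mirrors the [tenant.configs.<key>.domainService] section;
// in production it would be loaded from the config file.
var tenantImpl = map[string]string{
	"main": "TokenServiceMain",
	"intl": "TokenServiceBstar",
}

// ServiceFor resolves the TokenService implementation for a tenant key.
func ServiceFor(tenant string) TokenService {
	return registry[tenantImpl[tenant]]
}
```

Adding a tenant then means registering an implementation (or reusing an existing one) and adding a configuration entry; no call site changes.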

Configuration example (TOML):

# DB data source configuration
[db]
    [db.intl]
        addr = "172.0.0.1:5805"
        dsn = "bstar:xxxxxxxxxxxx@tcp(172.0.0.1:5805)/intl"
        active = 10
    [db.main]
        addr = "172.0.0.1:5062"
        dsn = "main:xxxxxxxxxxxx@tcp(172.0.0.1:5062)/main"
        active = 10
# Redis data source configuration
[redis]
    [redis.intl]
        addr = "172.0.0.1:7101"
    [redis.main]
        addr = "172.0.0.1:7102"
# Tenant configuration
[tenant]
    defaultKey = "main"
    [tenant.configs.main.domainService]
        "TokenService" = "TokenServiceMain" # select the concrete TokenService implementation
    [tenant.configs.main.daoService]
        "TokenPersistence" = "KvToken"  # select the storage implementation
    [tenant.configs.main.dao]
        db = "main"  # select the database; enables database-level data isolation
        table = ""  # select a table-name suffix; enables table-level data isolation
        redis = "main" # select the Redis resource used by this tenant
    [tenant.configs.intl.domainService]
        "TokenService" = "TokenServiceBstar"
    [tenant.configs.intl.daoService]
        "TokenPersistence" = "DbToken"
    [tenant.configs.intl.dao]
        db = "intl"
        table = "intl"
        redis = "intl"
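The `dao` section drives data isolation: `db` picks a database for database-level isolation, while `table` supplies a table-name suffix for table-level isolation. A minimal sketch of the suffix logic, assuming the suffix is appended with an underscore (the actual join convention is not stated in the article):

```go
package main

// tableName returns the physical table for a tenant, appending the
// configured suffix when one is set (table-level isolation). An empty
// suffix, as in the "main" tenant config, leaves the base name unchanged.
func tableName(base, suffix string) string {
	if suffix == "" {
		return base
	}
	return base + "_" + suffix
}
```

With the example config, the "main" tenant reads `user_token` while "intl" reads `user_token_intl` (base table name illustrative).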

Two deployment modes are supported:

Independent deployment – dedicated resources for high‑QPS, high‑availability tenants.

Shared deployment – multiple tenants share the same resources, with differences driven solely by configuration.

The migration to the new system follows a gray‑release strategy that ensures safety, controllable scope, and observability. Traffic is routed through the old services, which forward requests to the new services based on whitelist or percentage rules.
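The routing decision can be sketched as a whitelist check plus a stable hash bucket; the struct fields, hash choice, and bucket size below are assumptions for illustration, not the production rule.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// grayConfig is an illustrative gray-release rule: an explicit user
// whitelist plus a percentage of remaining traffic.
type grayConfig struct {
	Whitelist map[int64]bool
	Percent   uint32 // 0..100, share of traffic forwarded to the new service
}

// routeToNew decides whether the old service forwards a user's request
// to the new service. Hashing the user ID makes the decision stable:
// the same user always lands on the same side at a given percentage.
func routeToNew(cfg grayConfig, mid int64) bool {
	if cfg.Whitelist[mid] {
		return true
	}
	h := fnv.New32a()
	fmt.Fprintf(h, "%d", mid)
	return h.Sum32()%100 < cfg.Percent
}
```

Raising `Percent` step by step (1% → 10% → 100%) keeps the blast radius controllable, and because the old service fronts all traffic, rollback is a config change.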

Bidirectional data synchronization between old and new databases is achieved via Canal binlog replication. The article discusses loop problems (update‑loop, insert/delete‑loop, soft‑delete‑hard‑delete loop) and presents solutions such as adding a sync_time column and using Redis markers to deduplicate processed binlogs.
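One way to break such loops is to mark rows the sync job itself wrote, so the reverse-direction consumer can recognize and skip its own changes. The sketch below uses an in-memory map standing in for Redis (a production version would use Redis `SET NX` with a TTL); the key format is illustrative.

```go
package main

// loopGuard deduplicates binlog events in bidirectional sync: before
// applying a change, the writer marks it; when the same change flows
// back through the other direction's binlog, the consumer skips it.
type loopGuard struct {
	seen map[string]bool // stand-in for Redis SET NX keys
}

func newLoopGuard() *loopGuard {
	return &loopGuard{seen: map[string]bool{}}
}

// markApplied records that a change originated from the sync job,
// keyed by something like table + primary key + sync_time.
func (g *loopGuard) markApplied(key string) {
	g.seen[key] = true
}

// shouldSkip reports whether a binlog event was produced by our own
// writer; applying it again would loop the update back. The marker is
// consumed so later genuine changes to the same row still sync.
func (g *loopGuard) shouldSkip(key string) bool {
	if g.seen[key] {
		delete(g.seen, key)
		return true
	}
	return false
}
```

The `sync_time` column serves a similar purpose at the row level: it distinguishes writes made by the sync pipeline from writes made by the application, so the consumer has a second signal when the marker is missing.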

Data consistency checks are performed both incrementally (binlog‑based) and via dual‑query of critical APIs, with alerts for mismatches and manual remediation.
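A minimal sketch of the dual-query comparison: read the same record from both systems and report field-level mismatches for alerting. The `Account` fields and diff output are illustrative, not the real schema.

```go
package main

// Account is an illustrative subset of the fields a dual-query check
// might compare between the old and new systems.
type Account struct {
	Mid   int64
	Phone string
	Email string
}

// diffAccounts returns the names of fields that disagree between the
// old and new reads; a non-empty result would trigger an alert and
// manual remediation.
func diffAccounts(oldA, newA Account) []string {
	var diffs []string
	if oldA.Phone != newA.Phone {
		diffs = append(diffs, "phone")
	}
	if oldA.Email != newA.Email {
		diffs = append(diffs, "email")
	}
	return diffs
}
```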

Future work includes moving the synchronization to a managed DTS service to simplify the pipeline.

Conclusion: The multi-tenant architecture enables a seamless, low-risk replacement of the legacy account system, reduces operational overhead, and provides a scalable foundation for onboarding new business lines beyond the existing domestic, international, and overseas-games services.
