Design and Implementation of the Seras Serverless BFF/FAAS Platform for Front‑End Efficiency
This article describes the background, design decisions, architecture, and core concepts (Serverless, BFF, FAAS) behind Seras, an internally built platform that lets front‑end teams write cloud functions without managing servers. It also covers implementation details, performance results, and future plans, showing how the platform improves code reuse, traceability, and deployment speed.
The author, Li Zhiyong, joined Qunar Travel in April 2019 and led the construction of a low‑code platform that integrated more than 60 back‑office systems, handling millions of lines of code and hundreds of pages, and now focuses on component‑based low‑code, Serverless platforms, cross‑end rendering, and service reliability.
In the preface, the article explains the team’s pain points—large numbers of front‑end projects, limited manpower, and fragmented UI logic—and introduces the Seras platform (named after a League of Legends character) as a Serverless‑driven solution that builds a BFF layer and a FAAS service platform to let front‑end developers write cloud functions without worrying about servers.
Key terminology is defined: Serverless means developers focus on business logic without managing servers; BFF (Backend‑For‑Frontend) is a service layer that isolates front‑end UI concerns from back‑end data processing; FAAS (Function‑as‑a‑Service) allows developers to expose functions as APIs.
The core idea of Seras is to combine Serverless, BFF, and FAAS so that front‑end engineers can develop business features by writing cloud functions, achieving code reuse across projects, automated online case verification, and zero‑setup development.
The background section details the massive scale of Qunar’s front‑end codebase (200+ Node projects, 300+ RN/mini‑program projects) and the resulting efficiency bottlenecks, such as UI‑logic disputes, multi‑API data integration conflicts, duplicated processing code across repositories, and redundant Node services.
To address these issues, the team evaluated commercial solutions (Alibaba Cloud, AWS Lambda) and concluded that a self‑built platform would better meet internal DevOps, security, and cost requirements.
The solution emphasizes full‑chain traceability, code reuse across services, automated regression testing with real online cases, high performance, and high availability, while preserving existing DevOps processes.
Architecture diagrams (omitted here) illustrate a two‑branch system: one branch handles data that the back‑end considers ready to serve but the front‑end still needs to reshape for the UI, and the other uses the BFF layer to aggregate multiple APIs. The BFF layer is built with the internal low‑code platform for UI configuration, containerized deployment with auto‑scaling, and MySQL for persistence.
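The multi‑API aggregation branch can be sketched as follows. `fetchPrice` and `fetchStock` are hypothetical stand‑ins for back‑end services, and the shape of the BFF endpoint is an assumption for illustration, not the Seras implementation:

```javascript
// Stand-ins for two back-end APIs (in practice these would be HTTP calls).
async function fetchPrice(id) {
  return { id, price: 199 };
}
async function fetchStock(id) {
  return { id, stock: 3 };
}

// The BFF endpoint: fan out to several back-end APIs in parallel,
// then trim and reshape the combined result to exactly what the page needs.
async function productCard(id) {
  const [price, stock] = await Promise.all([fetchPrice(id), fetchStock(id)]);
  return {
    id,
    displayPrice: `¥${price.price}`,
    soldOut: stock.stock === 0,
  };
}
```

This is the essence of the BFF pattern: the reshaping logic that used to be duplicated across front‑end repositories lives in one aggregation layer instead.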
The platform provides a configuration system for editing single functions and orchestrations, with syntax checking via Babel, and offers six common capabilities: function invocation, QConfig retrieval, Redis operations, HTTP requests, logging, and orchestration break.
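One plausible way to expose the six capabilities is through a context object injected into every cloud function. Everything below (`makeContext`, `invoke`, `abort`, the in‑memory Redis stand‑in) is invented for this sketch; the real Seras API may look quite different:

```javascript
// Hypothetical context carrying the six built-in capabilities.
function makeContext() {
  const store = new Map(); // in-memory stand-in for Redis
  return {
    invoke: async (fnName, args) => ({ fnName, args }), // call another cloud function
    config: async (key) => process.env[key] ?? null,    // QConfig retrieval stand-in
    redis: {
      get: async (k) => store.get(k) ?? null,
      set: async (k, v) => { store.set(k, v); },
    },
    http: async (url) => ({ url, status: 200 }),        // outbound HTTP stand-in
    log: (...args) => console.log('[seras]', ...args),  // logging
    abort: (reason) => { throw new Error(`orchestration break: ${reason}`); },
  };
}

// A cloud function receives the context plus its input event.
async function demoFn(ctx, event) {
  await ctx.redis.set('last', event.id);
  return { cached: await ctx.redis.get('last') };
}
```

Bundling capabilities into one injected object keeps individual functions free of infrastructure imports, which is what makes them portable across orchestrations.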
Version management includes beta, simulation, and production environments, supporting Git‑like collaborative workflows where each developer can create isolated beta versions before merging.
Usability features include full‑link tracing via traceId, online debugging with mock data, searchable code, jump‑to‑definition, server‑side capability abstraction, and visual flowcharts for orchestration.
Security and high‑availability measures comprise online code review, multi‑environment validation, automated case regression testing, release approvals, rate‑limiting, dual‑environment hot‑switching, auto‑scaling, and global monitoring/alerting.
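Among the availability measures above, rate‑limiting is the most self‑contained to illustrate. A token bucket is one common implementation; this is a sketch of the technique in general, not necessarily what Seras uses:

```javascript
// Token bucket: requests spend tokens; tokens refill at a steady rate.
class TokenBucket {
  constructor(capacity, refillPerSec) {
    this.capacity = capacity;
    this.tokens = capacity;
    this.refillPerSec = refillPerSec;
    this.last = Date.now();
  }

  tryAcquire() {
    const now = Date.now();
    // Refill proportionally to elapsed time, capped at capacity.
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((now - this.last) / 1000) * this.refillPerSec
    );
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true; // request admitted
    }
    return false; // over the limit: reject, queue, or shed load
  }
}
```

The capacity bounds burst size while the refill rate bounds sustained throughput, which is why token buckets pair well with the auto‑scaling and hot‑switching measures the article lists.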
Landing results show that single‑function and orchestration latencies (P90) are under 10 ms, with major integration scenarios in marketing activities, ticketing flows, and hotel tag systems. Code size for ticketing pages dropped from 20 k lines to 6.5 k lines, iteration time reduced from one day to minutes, and performance improvements of up to 72 % (P99) and 75 % (P50) were achieved.
Future plans include cache pre‑warming to reduce cold‑start latency, broader business adoption, and physical cluster isolation to prevent single‑service failures from affecting the whole system.
Qunar Tech Salon
Qunar Tech Salon is a learning and exchange platform for Qunar engineers and industry peers. We share cutting-edge technology trends and topics, providing a free platform for mid-to-senior technical professionals to exchange and learn.