Engineering the Bifrost WebSocket Gateway at Postman: Architecture, Scaling, and Lessons Learned

The article recounts how Postman's Service Foundation team identified the limitations of the monolithic Sync service, designed and built the Bifrost WebSocket gateway using Fastify, AWS ElastiCache for Redis, and a private API, and scaled it through horizontal expansion and custom load‑factor metrics while sharing practical engineering insights.


Postman's Service Foundation team created the Bifrost WebSocket gateway to replace the overloaded Sync monolith, drawing an analogy to the mythic Bifrost bridge that instantly connects realms.

The company’s development teams operate as cross‑functional feature squads, with the Service Foundation team providing shared tools and infrastructure for the entire engineering organization.

Sync, a core service handling client‑side activity synchronization via WebSockets and a publish‑subscribe model, grew too large, causing cascading failures, degraded user experience, and high maintenance costs.

To address this, the team followed a six‑step process: securing organizational support, identifying blind spots, building the Bifrost gateway, testing it, migrating clients, and scaling the new service.

Bifrost consists of a public gateway built with the Fastify framework and Amazon ElastiCache for Redis as a central message broker, and a private API that proxies traffic to internal Postman services, also using Fastify.

Testing relied on manual verification because automated WebSocket testing tooling was lacking; the team nonetheless validated a working proof of concept by January 2020.

Migration involved redirecting traffic from the legacy Godserver initialization flow to Bifrost without requiring client code changes, and the team documented migration steps for all dependent squads.

Scaling strategies include horizontal scaling of WebSocket connections across many small EC2 instances, and a custom multi‑dimensional load‑factor metric that balances CPU and memory usage for different throughput levels.
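The article does not publish the exact load-factor formula, so the sketch below is an assumption for illustration: it blends CPU and memory utilization with weights, but also takes the max against each raw dimension so that a single saturated resource still marks an instance as loaded. The function names, weights, and 0.8 threshold are all hypothetical.

```javascript
// Hypothetical multi-dimensional load factor: combines CPU and memory
// utilization (each a ratio in [0, 1]) into a single [0, 1] score.
function loadFactor({ cpu, memory }, weights = { cpu: 0.6, memory: 0.4 }) {
  const weighted = weights.cpu * cpu + weights.memory * memory;
  // Guard against one dimension saturating while the blend looks healthy.
  return Math.max(weighted, cpu, memory);
}

// An instance stops accepting new WebSocket connections above a threshold.
const shouldShedConnections = (metrics) => loadFactor(metrics) > 0.8;

console.log(loadFactor({ cpu: 0.9, memory: 0.2 })); // 0.9: CPU-bound
console.log(shouldShedConnections({ cpu: 0.5, memory: 0.5 })); // false
```

A composite score like this is useful because WebSocket workloads stress CPU and memory differently at different message throughputs, so neither metric alone reliably signals that an instance is full.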

The article concludes with future work such as adding redundancy for the Redis broker, increasing bandwidth to handle ten‑fold traffic growth, and further decoupling monolithic services.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

backend architecture · microservices · scalability · Redis · WebSocket · AWS · Fastify
Written by

Laravel Tech Community

Specializing in Laravel development, we continuously publish fresh content and grow alongside the elegant, stable Laravel framework.
