Designing a Friend Status Broadcast System: From Database Queries to Asynchronous Delivery
This article outlines the step‑by‑step design of a friend‑status broadcast service, starting with simple MongoDB queries, moving to message‑driven delivery, adding asynchronous processing with a job queue, and finally handling inactive users through caching and expiration strategies.
The author revisits a discontinued website that featured a friend‑status broadcast similar to Douban's or Twitter's, summarizing various sources (InfoQ, Twitter scaling articles) and presenting a personal, incremental design.
What Is a Friend Status?
Friend status streams appear in many social platforms: a user follows dozens to hundreds of others, and each status is a small message fragment whose delivery can tolerate delay or occasional loss.
Phase 1: Database‑Query Timeline
Using MongoDB, a user’s timeline can be built by querying statuses whose user_id is in the follower’s following_ids array.

```javascript
db.statuses.findOne()
{
  _id: ObjectId(...),
  user_id: ObjectId(...),
  content: "Hi, I'm Rei.",
  ...
}

db.statuses.find({ user_id: { $in: user.following_ids } })
```

This approach works for small loads but does not scale for large‑user systems.
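To make the scaling problem concrete, here is a plain‑Ruby sketch of the same pull model (the collection, field names, and `pull_timeline` helper are stand‑ins for illustration, not the original code): every page load filters the whole statuses collection against the viewer's following list, so the cost grows with total status volume rather than with the viewer's timeline.

```ruby
# Simulated pull-model timeline: the equivalent of the $in query above,
# run against an in-memory array standing in for the statuses collection.
def pull_timeline(statuses, following_ids, limit = 20)
  statuses
    .select { |s| following_ids.include?(s[:user_id]) }  # the $in filter
    .sort_by { |s| -s[:created_at] }                     # newest first
    .first(limit)
end

# usage: viewer follows user 1 only
statuses = [
  { user_id: 1, created_at: 1 },
  { user_id: 2, created_at: 2 },
  { user_id: 1, created_at: 3 }
]
pull_timeline(statuses, [1])
```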
Phase 2: Message Delivery Replaces Database Queries
Instead of pulling data on each request, status messages are pushed to followers’ timelines. Only the status ID is stored per user.
```ruby
# after a status is created, push its ID to each follower's timeline
after_create :deliver_status

def deliver_status
  self.user.followers.each do |follower|
    if self.access_allow?(follower)
      follower.home_timelines << self.id  # store only the status ID
    end
  end
end
```

Followers later retrieve full messages by ID from the database or cache.
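The read path that goes with this push model can be sketched as follows. This is a hypothetical illustration (the `STATUS_DB`/`STATUS_CACHE` stores and helper names are assumptions, with hashes standing in for MongoDB and Memcached): the timeline holds only IDs, so rendering it means resolving each ID to a full document, cache first, database second.

```ruby
STATUS_DB    = {}  # stand-in for the statuses collection
STATUS_CACHE = {}  # stand-in for a cache such as Memcached

# Resolve one status ID: try the cache, fall through to the database.
def fetch_status(id)
  STATUS_CACHE[id] ||= STATUS_DB[id]
end

# A home timeline is a list of status IDs; map them to full documents,
# dropping IDs whose status no longer exists.
def render_timeline(status_ids)
  status_ids.map { |id| fetch_status(id) }.compact
end

# usage
STATUS_DB[1] = { user_id: 10, content: "Hi, I'm Rei." }
STATUS_DB[2] = { user_id: 11, content: "Hello!" }
render_timeline([2, 1])
```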
Phase 2 (Improved): Asynchronous Delivery
To avoid blocking the posting user, delivery is off‑loaded to a background worker queue (e.g., Resque).
```ruby
# after a status is created, hand delivery off to a background job
after_create :enqueue_status

def enqueue_status
  Resque.enqueue(StatusDeliver, self.id)  # enqueue delivery job
end

def deliver_status
  ...
end
```

The worker processes the queue independently, keeping the front‑end responsive.
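The worker side of the `StatusDeliver` job can be sketched like this. Resque's convention is a class with a `@queue` name and a class‑level `perform` method; everything else here (the minimal `Status` stand‑in and its delivery bookkeeping) is an assumption so the sketch runs on its own without Resque or a database.

```ruby
# Minimal stand-in for the Status model, so the worker sketch is
# self-contained; the real deliver_status loops over followers as in
# Phase 2.
class Status
  STORE = {}

  def self.find(id)
    STORE[id]
  end

  attr_reader :id, :delivered_to

  def initialize(id)
    @id = id
    @delivered_to = []
    STORE[id] = self
  end

  def deliver_status
    @delivered_to << :all_followers  # placeholder for the follower loop
  end
end

# The Resque worker: picks (StatusDeliver, status_id) jobs off the
# :status_deliver queue and calls perform with the job's arguments.
class StatusDeliver
  @queue = :status_deliver

  def self.perform(status_id)
    Status.find(status_id).deliver_status
  end
end

# a worker process would run something like: QUEUE=status_deliver rake resque:work
Status.new(42)
StatusDeliver.perform(42)  # what the worker invokes for the job
```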
Phase 3: Ignoring Inactive Users
For users with massive follower counts, delivering to inactive followers wastes resources. The solution is to cache each user’s timeline (e.g., in Memcached) with an expiration tied to activity; inactive users are skipped during delivery.
When an inactive user returns, they can manually refresh their timeline, which triggers a partial query to repopulate the cache.
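The inactivity optimization can be sketched in plain Ruby as follows. This is an assumed illustration, not the original code: the `TimelineCache` class simulates Memcached‑style expiration with a stored expiry time, delivery skips followers whose cache entry has lapsed, and a returning user's `touch` (driven by their refresh and repopulating query) resets the TTL.

```ruby
# A follower's timeline lives in a cache entry that expires if they
# stay away; delivery only pushes to followers whose entry is live.
class TimelineCache
  def initialize(ttl)
    @ttl = ttl
    @entries = {}  # user_id => [expires_at, [status ids]]
  end

  def active?(user_id)
    entry = @entries[user_id]
    !entry.nil? && entry[0] > Time.now
  end

  # called on user activity (page view, manual refresh): resets the TTL
  def touch(user_id, ids = [])
    @entries[user_id] = [Time.now + @ttl, ids]
  end

  def push(user_id, status_id)
    @entries[user_id][1] << status_id if active?(user_id)
  end

  def ids_for(user_id)
    entry = @entries[user_id]
    entry ? entry[1] : []
  end
end

# delivery skips inactive followers entirely
def deliver(status_id, follower_ids, cache)
  follower_ids.each do |fid|
    cache.push(fid, status_id) if cache.active?(fid)
  end
end

# usage: user 1 is active, user 2 has no (or an expired) cache entry
cache = TimelineCache.new(3600)
cache.touch(1)
deliver(99, [1, 2], cache)  # user 2 is skipped
```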
Conclusion
Real‑world broadcast systems also involve cross‑data‑center deployment, load balancing, and sharding, but the incremental approach described—starting with simple queries, moving to push‑based delivery, adding async processing, and finally optimizing for inactivity—provides a practical roadmap for building scalable friend‑status services.