
Optimizing Group Chat Performance in an Instant Messaging Backend

This article analyzes the challenges of scaling group chat in an instant messaging system and presents architectural optimizations—including shared message storage, periodic conversation list updates, offline count handling, and version‑based incremental sync—to reduce write and read amplification while improving overall performance.

58 Tech

Background

In the previous article on the evolution of the 58 instant messaging backend architecture, the overall system and business model were introduced. IM messages are divided into single‑chat and group‑chat; group messages must be delivered to all members, causing significant load as group size grows.

Group Message Sending Flow

The single‑chat sending process involves the client calling the access layer, the access layer storing the message and pushing it to a queue, and asynchronous tasks delivering it via long‑connection or push notification. Applying this flow directly to group chat leads to write amplification.
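The single‑chat path above can be sketched as follows. This is a minimal illustration, not the production design: the store, the queue, and the function names are all stand‑ins.

```python
from collections import deque

message_store = {}        # msg_id -> message record (stands in for persistent storage)
delivery_queue = deque()  # stands in for the message queue

def send_single_chat(msg_id, sender, receiver, body):
    """Access layer: store the message, then enqueue asynchronous delivery."""
    message_store[msg_id] = {"from": sender, "to": receiver, "body": body}
    delivery_queue.append(msg_id)

def delivery_worker(online_users):
    """Async task: deliver via long-connection if online, else push notification."""
    delivered = []
    while delivery_queue:
        msg_id = delivery_queue.popleft()
        msg = message_store[msg_id]
        channel = "long-connection" if msg["to"] in online_users else "push"
        delivered.append((msg_id, channel))
    return delivered
```

Repeating this per-recipient store for every member of a large group is exactly the write amplification the following sections address.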

1. Message Storage

Storing a copy of each group message for every member would cause massive write amplification. Instead, a single message list per group is stored, identified by the group ID, and members share this data. Each member maintains a join position and read position to support history retrieval and unread count.
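The shared-storage model can be sketched like this: one message list per group, with each member holding only a join position and a read position. Data structures and names are illustrative.

```python
group_messages = {}   # group_id -> shared message list for the group
member_state = {}     # (group_id, user_id) -> {"join_pos": int, "read_pos": int}

def join_group(group_id, user_id):
    # a new member starts reading from the current end of the list
    pos = len(group_messages.setdefault(group_id, []))
    member_state[(group_id, user_id)] = {"join_pos": pos, "read_pos": pos}

def store_group_message(group_id, body):
    group_messages[group_id].append(body)   # one write, regardless of member count

def unread_count(group_id, user_id):
    state = member_state[(group_id, user_id)]
    return len(group_messages[group_id]) - state["read_pos"]

def fetch_history(group_id, user_id):
    state = member_state[(group_id, user_id)]
    # a member can only read back to their own join position
    return group_messages[group_id][state["join_pos"]:]
```

The key property is that `store_group_message` performs a single write no matter how many members the group has; per-member state stays small and fixed.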

2. Conversation List Updates

Each user’s conversation list is ordered by the latest message timestamp. Updating the list for every group member on each message would cause write amplification. The solution is to periodically update conversation lists: when a message is stored, if the scheduled update time has passed, an event is queued to update all members’ lists asynchronously, reducing the frequency of writes.
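The throttling check can be sketched as follows; the interval value and names are illustrative, and a list stands in for the async event queue.

```python
import time

UPDATE_INTERVAL = 10.0   # seconds between conversation-list updates (illustrative)
next_update_at = {}      # group_id -> earliest time the next update may run
update_queue = []        # stands in for the async event queue

def maybe_schedule_update(group_id, now=None):
    """On each stored message, enqueue a list update only if the interval has passed."""
    now = time.time() if now is None else now
    if now >= next_update_at.get(group_id, 0.0):
        update_queue.append(group_id)   # async workers update all members' lists
        next_update_at[group_id] = now + UPDATE_INTERVAL
        return True
    return False                        # too soon; skip this round of writes
```

In a busy group this collapses many per-message updates into one update per interval, which is where the write-amplification savings come from.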

3. Offline List and Count

For group chat, the offline message list is removed to avoid write amplification; instead, clients pull history on demand. Offline counts are stored in a Redis cluster without persistence and are cleared on login, minimizing storage impact.
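A minimal sketch of the offline-count handling, with an in-memory dict standing in for the non-persistent Redis cluster; the names are illustrative.

```python
offline_counts = {}   # (user_id, group_id) -> unread count accumulated while offline

def incr_offline_count(user_id, group_id):
    key = (user_id, group_id)
    offline_counts[key] = offline_counts.get(key, 0) + 1

def on_login(user_id):
    """Return and clear the user's offline counts; message history is pulled on demand."""
    mine = {gid: n for (uid, gid), n in offline_counts.items() if uid == user_id}
    for gid in mine:
        del offline_counts[(user_id, gid)]
    return mine
```

Because the counts are cleared on login and never persisted, losing them costs at most one badge number, not any message data.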

Optimized Group Message Sending Process

1. The client calls the access layer to send a group message.

2. The access layer stores a single group message, checks whether a conversation‑list update is due, and enqueues the delivery task.

3. Asynchronous workers fetch all group members, deliver the message via long‑connection to online users, update offline counts for offline users, and perform the periodic conversation‑list update when due.
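The steps above can be sketched end to end. Group membership, online status, and the queue are stand-ins, and all names are illustrative.

```python
group_log = {}        # group_id -> shared message list (one write per message)
members = {}          # group_id -> set of member user ids
online = set()        # currently connected users
offline_counts = {}   # user_id -> pending offline count
task_queue = []       # stands in for the message queue

def send_group_message(group_id, body):
    """Access layer: one write to the shared log, then enqueue async fan-out."""
    group_log.setdefault(group_id, []).append(body)
    task_queue.append(group_id)

def worker_step():
    """Async worker: push to online members, bump counts for offline members."""
    group_id = task_queue.pop(0)
    delivered = []
    for user in members[group_id]:
        if user in online:
            delivered.append(user)   # deliver via long-connection
        else:
            offline_counts[user] = offline_counts.get(user, 0) + 1
    return sorted(delivered)
```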

Group Information Synchronization

Group info (name, avatar, announcements, member data) is synchronized using incremental updates. Each member and each group maintain version numbers. When a client logs in, it sends its local version; the server pushes only the delta if the versions differ. To avoid read amplification, each user also stores the maximum version among their groups, so the server can tell at a glance whether any of a user's groups has updates without scanning them all.
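The version comparison can be sketched as follows, assuming a globally increasing version counter so that comparing maximum versions is sound. All names are illustrative.

```python
import itertools

_version_counter = itertools.count(1)   # globally increasing version numbers

group_version = {}      # group_id -> current group-info version
user_groups = {}        # user_id -> set of group ids the user belongs to
user_max_version = {}   # user_id -> max version among the user's groups (server side)

def update_group_info(group_id):
    """A modification bumps the group's version and each member's stored max."""
    v = next(_version_counter)
    group_version[group_id] = v
    for uid, gids in user_groups.items():
        if group_id in gids:
            user_max_version[uid] = v
    return v

def sync_on_login(user_id, client_versions):
    """client_versions: {group_id: version the client last saw}. Returns deltas."""
    client_max = max(client_versions.values(), default=0)
    # fast path: the stored max version proves no group changed since last sync
    if client_max >= user_max_version.get(user_id, 0):
        return {}
    # slow path: find exactly which groups are newer than the client's copies
    return {g: group_version[g] for g in user_groups[user_id]
            if group_version.get(g, 0) > client_versions.get(g, 0)}
```

The fast path is the read-amplification fix: one integer comparison replaces a scan over every group the user belongs to.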

Full group‑info sync flow:

1. A member modifies group data → the group's version increments.

2. The server updates all members' group‑list version numbers.

3. Online members receive the incremental data via long‑connection; offline members receive it on their next login.
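The flow above, sketched minimally: a modification bumps the version, online members get the delta pushed, and an offline member picks it up by comparing versions on the next login. Field names are illustrative.

```python
group_info = {"version": 0, "name": "dev-team"}   # illustrative group record
pushed = []                                       # deltas sent over long-connections

def modify_group(field, value, online_members):
    group_info["version"] += 1
    group_info[field] = value
    delta = {"version": group_info["version"], field: value}
    for user in sorted(online_members):
        pushed.append((user, delta))   # push the delta to connected members
    return delta

def login_sync(client_version):
    """Offline member on next login: sync only if the server version is newer."""
    if client_version < group_info["version"]:
        return dict(group_info)
    return None
```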

Summary

Group chat introduces far greater complexity and performance pressure than single chat. By consolidating message storage, employing periodic conversation‑list updates, eliminating per‑member offline lists, and using version‑based incremental synchronization, the system mitigates both write and read amplification, improves scalability, and provides a balanced solution for large‑scale instant messaging services.

Tags: scalable architecture, Message Queue, backend optimization, offline sync, Group Chat, Write Amplification
Written by 58 Tech

Official tech channel of 58, a platform for tech innovation, sharing, and communication.