
Capacity-Constrained Influence Maximization: Algorithms and Applications

The paper introduces Capacity‑Constrained Influence Maximization (CIM), a framework that selects up to k neighbors per active user to maximize spread under node capacity limits. It proposes the MG‑Greedy and RR‑Greedy algorithms, both with an approximation guarantee of at least ½, and demonstrates that the near‑linear RR‑OPIM+ method achieves superior accuracy and speed on large social networks and in a Tencent game recommendation system.

Tencent Cloud Developer

Background: In modern social networks, information and influence spread ubiquitously. Influence Maximization (IM) aims to select a small set of highly influential users to maximize the spread of information. Traditional IM seeks a seed set of size s that influences the maximum number of nodes, but real-world applications face capacity constraints on nodes (e.g., limited user attention in online platforms or games).

Problem Definition: Given a directed graph G = (V, E) with node set V and edge set E, a diffusion model M, a set of active participants A, and a capacity constraint k, the Capacity-Constrained Influence Maximization (CIM) problem seeks, for each active participant u, a neighbor set S_u (|S_u| ≤ k) such that the overall influence spread is maximized while every participant respects the capacity constraint.
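To make the constraint concrete, here is a minimal sketch of what a CIM instance and a feasible solution look like. The names (`feasible`, the adjacency dict) are illustrative, not the paper's API: each active participant u may pick at most k of its out-neighbors, and the union of the picked neighbors forms the seed set.

```python
def feasible(assignment, graph, active, k):
    """Check the CIM capacity constraint: `assignment` must map each
    active user u to at most k of u's out-neighbors."""
    for u, S_u in assignment.items():
        if u not in active:
            return False          # only active participants may select seeds
        if len(S_u) > k:
            return False          # per-node capacity exceeded
        if not S_u <= set(graph.get(u, [])):
            return False          # selected nodes must be out-neighbors of u
    return True

graph = {0: [1, 2, 3], 4: [2, 5]}   # adjacency list: u -> out-neighbors
active = {0, 4}                      # active participants A
k = 2                                # per-node capacity

print(feasible({0: {1, 2}, 4: {5}}, graph, active, k))   # True
print(feasible({0: {1, 2, 3}}, graph, active, k))        # False: |S_0| > k
```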

Proposed Framework (CIM): The authors introduce a new IM framework called Capacity-Constrained Influence Maximization (CIM). Two greedy strategies are designed:

MG‑Greedy: iteratively selects the edge with the largest marginal gain and assigns it to a connected active participant.

RR‑Greedy: uses a round‑robin scheme to assign the globally best marginal‑gain seed to each active participant.

Both algorithms guarantee approximation ratios of at least 1/2 of the optimal solution. To handle large‑scale networks, a Reverse Reachable (RR) set based method is employed, leading to the RR‑OPIM+ algorithm, which runs in near‑linear time and achieves a solution within (1/2 − ε) of the optimum. This work was accepted at KDD 2023.
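The round-robin idea behind RR‑Greedy can be sketched as follows. This is a toy illustration, not the paper's implementation: `sigma` stands in for a spread oracle (which the paper estimates with RR sets rather than evaluating exactly), and all names are hypothetical.

```python
def rr_greedy(graph, active, k, sigma):
    """Round-robin greedy: in each of k rounds, every active participant
    in turn adds the out-neighbor with the largest marginal spread gain."""
    chosen = {u: set() for u in active}
    seeds = set()
    for _ in range(k):                         # k rounds of selection
        for u in sorted(active):               # round-robin over participants
            best, best_gain = None, 0.0
            for v in graph.get(u, []):
                if v in seeds:
                    continue                   # already seeded by someone
                gain = sigma(seeds | {v}) - sigma(seeds)
                if gain > best_gain:
                    best, best_gain = v, gain
            if best is not None:
                chosen[u].add(best)
                seeds.add(best)
    return chosen, seeds

# Toy oracle: spread = number of distinct seeds (monotone and submodular).
sigma = lambda S: float(len(S))
chosen, seeds = rr_greedy({0: [1, 2, 3], 4: [2, 5]}, {0, 4}, 2, sigma)
print(chosen, seeds)
```

With a monotone submodular sigma, this interleaved selection is what underlies the ½-approximation guarantee stated above.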

Scalable Implementation: The influence spread σ is estimated via RR sets, avoiding costly Monte‑Carlo simulations. Building on the OPIM framework, the authors develop MG‑OPIM, RR‑OPIM, and the enhanced RR‑OPIM+ algorithm, which maximizes the coverage of RR sets with near‑linear complexity.
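The RR-set estimator the OPIM family builds on can be sketched like this, assuming the independent cascade model with per-edge probabilities `p[(u, v)]` (a standard reverse-influence-sampling construction, not the paper's exact OPIM+ code). The fraction of sampled RR sets that a seed set intersects, scaled by |V|, estimates its influence spread.

```python
import random

def sample_rr_set(nodes, in_edges, p, rng):
    """Walk backwards from a uniformly random target node, keeping each
    incoming edge (u, v) independently with probability p[(u, v)]."""
    target = rng.choice(nodes)
    rr, frontier = {target}, [target]
    while frontier:
        v = frontier.pop()
        for u in in_edges.get(v, []):
            if u not in rr and rng.random() < p[(u, v)]:
                rr.add(u)
                frontier.append(u)
    return rr

def estimate_spread(seeds, rr_sets, n):
    """Spread estimate: n times the fraction of RR sets that `seeds` covers."""
    covered = sum(1 for rr in rr_sets if rr & seeds)
    return n * covered / len(rr_sets)

rng = random.Random(7)
nodes = [0, 1, 2, 3]
in_edges = {2: [0, 1], 3: [2]}                 # v -> its in-neighbors
p = {(0, 2): 0.5, (1, 2): 0.5, (2, 3): 0.5}
rr_sets = [sample_rr_set(nodes, in_edges, p, rng) for _ in range(2000)]
print(estimate_spread({0}, rr_sets, len(nodes)))   # close to 1.75 = 1 + 0.5*(1 + 0.5)
```

Because each sample touches only the edges it traverses, this estimator avoids full Monte-Carlo cascade simulation, which is what makes the near-linear running time of RR‑OPIM+ possible.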

Experimental Evaluation: Experiments on public network datasets (e.g., DNC, Orkut, Twitch) show that the proposed greedy algorithms significantly outperform traditional baselines. For k = 10, RR‑Greedy improves spread by 27% over the Degree baseline on DNC, and RR‑OPIM+ improves by 39% on Orkut. In terms of runtime, RR‑OPIM+ reduces execution time by two orders of magnitude compared to MG‑OPIM and RR‑OPIM on Twitch.

Real‑World Application: The RR‑OPIM+ algorithm was applied to a friend‑recommendation scenario in a Tencent shooting game. By combining influence scores from RR‑OPIM+ with existing intimacy metrics, the system achieved a 6.5% increase in the number of users reached by propagation and improved user active time.

Team Information: The Tencent Interactive Entertainment Social Algorithm team focuses on efficient social‑network algorithms, serving over 20 Tencent products and publishing more than 20 papers in top conferences.

Reference: Zhang, S., Huang, Y., Sun, J., Lin, W., Xiao, X., & Tang, B. (2023). Capacity constrained influence maximization in social networks. In Proceedings of KDD 2023, pp. 3376‑3385.

Tags: Big Data · Capacity Constraint · Greedy Algorithms · Influence Maximization · KDD 2023 · Reverse Reachable Set · Social Networks
Written by

Tencent Cloud Developer

Official Tencent Cloud community account that brings together developers, shares practical tech insights, and fosters an influential tech exchange community.
