
How Inceptor’s New Scheduler Tackles Multi‑Tenant Resource Challenges

This article explains how Inceptor 5.0’s enhanced Scheduler resolves multi‑tenant resource contention by introducing finer‑grained scheduling modes, ACL and SLA configuration, and a tree‑structured queue system that balances priority, quotas, and user permissions for more reliable big‑data job execution.

StarRing Big Data Open Lab

Challenges in Multi‑Tenant Scheduling

In multi‑tenant environments, Inceptor's job scheduling must account for task priority, resource usage, user/group/role permissions, and quota limits. Without this, common problems arise: large batch jobs monopolize cluster resources, a single user can over‑consume resources, and Stage‑level scheduling is too coarse to enforce per‑tenant policies.

Traditional Scheduler Algorithm

The legacy Inceptor Scheduler (TDH 4.x) creates a SparkContext, uses the DAGScheduler to split the execution plan into Stages, and submits each Stage as a TaskSet to the TaskScheduler. Because scheduling operates only at the TaskSet or Task level, it has no visibility into sessions and cannot allocate resources according to tenant permissions or priorities, which limits its flexibility in enterprise production.

Innovation 1: Finer‑Grained Scheduling Modes

Inceptor Scheduler now supports three modes: FIFO, FAIR, and FURION.

FIFO: Simple first‑in‑first‑out ordering, scheduling TaskSets sequentially.

FAIR: Introduces queues (resource pools) with configurable weights; TaskSets can be submitted to different queues, and scheduling across queues follows a fair‑share policy.

FURION: A tree‑structured queue model not available in open‑source Spark. It schedules based on CPU count, weight, and running task count, and its scheduling unit is the individual Task.
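To make the fair‑share idea above concrete, here is a minimal Python sketch (not Inceptor's actual implementation; queue names and fields are illustrative): among the candidate queues, pick the one whose running‑task count is smallest relative to its configured weight.

```python
# Hypothetical weighted fair-share pick: choose the queue with the
# lowest running/weight ratio, so under-served heavy-weight queues
# are scheduled first. Not Inceptor's code; field names are assumed.
def pick_queue(queues):
    """queues: list of dicts with 'name', 'weight', 'running'."""
    return min(queues, key=lambda q: q["running"] / q["weight"])

queues = [
    {"name": "batch", "weight": 1, "running": 4},        # ratio 4.0
    {"name": "interactive", "weight": 3, "running": 6},  # ratio 2.0
]
print(pick_queue(queues)["name"])  # interactive is furthest below its share
```

A queue with a higher weight tolerates more running tasks before it stops being preferred, which is exactly the fair‑share behavior the weights are meant to express.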

Comparison of FURION and FAIR

Similarities: both implement weighted fair‑share scheduling strategies.

Differences: FURION builds queue relationships as a tree structure, while FAIR uses a flat parallel model; FURION schedules at Task granularity, whereas FAIR schedules at TaskSet granularity.
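The tree‑versus‑flat distinction can be sketched as follows. This is an illustrative Python model, not FURION's implementation: scheduling descends the queue tree level by level, at each node picking the child with the lowest running/weight ratio, until it reaches a leaf queue from which a Task is taken.

```python
# Hypothetical tree-structured queue pick in the style of FURION:
# descend from the root, choosing the fairest child at each level.
# Class and field names are assumptions for illustration only.
class Queue:
    def __init__(self, name, weight=1, children=None):
        self.name = name
        self.weight = weight
        self.children = children or []
        self.running = 0  # running tasks charged to this queue

    def ratio(self):
        return self.running / self.weight

def pick_leaf(queue):
    """Walk down the tree to the leaf queue that should run next."""
    while queue.children:
        queue = min(queue.children, key=Queue.ratio)
    return queue

root = Queue("root", children=[Queue("etl", weight=1),
                               Queue("adhoc", weight=2)])
root.children[0].running = 2  # etl ratio 2.0
root.children[1].running = 2  # adhoc ratio 1.0
print(pick_leaf(root).name)   # adhoc wins: lower running/weight ratio
```

A flat FAIR model is the special case where the tree has a single level; the tree form lets an organization nest departmental queues under tenant queues and apply fairness at every level.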

Innovation 2: Graphical ACL and SLA Configuration

Scheduler settings can be edited via configuration files or the Guardian UI; the UI offers a more intuitive approach for ordinary users.

After installing the Guardian plugin, navigate to “Permissions → INCEPTOR” to configure the Inceptor service.

Queue Permissions

Admins can assign GLOBAL permissions and per‑queue permissions (SUBMIT, ADMIN, ACCESS) to users, groups, or roles.

SUBMIT: Allows submitting jobs to the queue.

ADMIN: Grants the ability to delegate queue operations to other users.

ACCESS: Controls access to the Inceptor service; set only in the global configuration.
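The permission model above can be sketched as a simple two‑level check (a hypothetical illustration; the user names and data layout are assumptions, not Guardian's storage format): a user needs the global ACCESS grant to reach the service at all, plus a per‑queue SUBMIT grant to run jobs in a given queue.

```python
# Illustrative ACL check combining a global ACCESS grant with
# per-queue SUBMIT/ADMIN grants, as described above. All names
# and the dict layout are hypothetical.
GLOBAL_ACL = {"alice": {"ACCESS"}, "bob": {"ACCESS"}}
QUEUE_ACL = {
    "batch": {"alice": {"SUBMIT"}, "bob": {"SUBMIT", "ADMIN"}},
}

def can_submit(user, queue):
    """True only if the user may access the service AND submit to the queue."""
    has_access = "ACCESS" in GLOBAL_ACL.get(user, set())
    has_submit = "SUBMIT" in QUEUE_ACL.get(queue, {}).get(user, set())
    return has_access and has_submit

print(can_submit("alice", "batch"))  # True
print(can_submit("carol", "batch"))  # False: no global ACCESS grant
```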

Compute Quotas

In the Compute Quota page, admins set queue parameters such as weight, reserved CPU cores/percentage, maximum CPU cores/percentage, and scheduling policy. Since FURION encompasses FAIR and FIFO capabilities, only FURION parameters are exposed in Guardian.
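As a rough illustration of how a maximum‑cores quota bounds a queue (a sketch under assumed semantics, not Guardian's implementation; parameter names are invented), an admission check might cap each core request at the queue's remaining headroom:

```python
# Hypothetical quota admission: a queue may grow only up to its
# configured maximum CPU cores. Parameter names are assumptions.
def grant_cores(requested, queue_used, queue_max):
    """Cap a core request at the queue's remaining maximum quota."""
    return max(0, min(requested, queue_max - queue_used))

print(grant_cores(8, queue_used=6, queue_max=10))   # only 4 cores remain
print(grant_cores(2, queue_used=10, queue_max=10))  # queue is at its cap
```

Reserved cores work in the opposite direction: rather than capping a queue, they guarantee it a minimum share that other queues cannot borrow away.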

Guardian also allows configuring the maximum number of concurrent SQL statements per queue, per connection, or per user; users without an explicit setting inherit the default configuration.
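The fall‑back‑to‑default behavior is straightforward to picture (a hypothetical sketch; the names and values are illustrative, not Guardian's configuration format):

```python
# Illustrative per-user concurrency limit with a default for users
# that have no explicit setting. Values here are made up.
USER_LIMITS = {"alice": 5}   # explicitly configured users
DEFAULT_LIMIT = 2            # applied to everyone else

def max_concurrent_sql(user):
    """Return the user's SQL concurrency limit, falling back to the default."""
    return USER_LIMITS.get(user, DEFAULT_LIMIT)

print(max_concurrent_sql("alice"))  # 5: explicit setting
print(max_concurrent_sql("bob"))    # 2: inherits the default
```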

Demonstration of Scheduler Effects

The video showcases scenarios such as isolated task monitoring for regular users, immediate effect of ACL and quota changes, priority‑driven execution, reserved resource constraints, and maximum resource quota enforcement, illustrating how the Scheduler maintains differentiated resource allocation across tenants.

Conclusion

Inceptor Scheduler enables user‑ and statement‑based job scheduling, making Inceptor suitable for multi‑tenant deployments. By considering permissions, priorities, and quotas beyond TaskSet/Task granularity, it delivers more balanced resource distribution, enhanced security through task isolation, and clearer visibility of job status for each user.

Tags: scheduler, Inceptor
Written by StarRing Big Data Open Lab

Focused on big data technology research, exploring the Big Data era | [email protected]