
Updates to DCS_FunTester Distributed Load‑Testing Framework and Gradle Multi‑Module Integration

This article details the finalization of the DCS_FunTester distributed load‑testing framework, shares practical lessons from converting a two‑project setup into a Gradle multi‑module build, and explains the implementation of result collection, task distribution, health‑check, and registration mechanisms using Java, Maven, and Spring APIs.


After two rounds of updates, the core features of the DCS_FunTester framework are essentially complete, and no further functional updates are planned for now.

Gradle Multi‑Module

The original implementation used separate master and slave projects, which proved inconvenient to maintain, so the code was migrated into a single Gradle multi‑module project. The migration exposed several pitfalls.

Dependency on Other Modules

A working solution was found after many attempts:

dependencies {
    implementation project(':slave')
}

Typical tutorials did not help; the configuration only worked once it was placed in the root build.gradle and the sub‑module dependency declarations were removed.

Switching to Maven for the new version of DCS_FunTester has shown better results, especially when combined with IntelliJ.
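For reference, the equivalent Maven layout uses a parent POM with `pom` packaging that aggregates the two modules. This is a minimal sketch; the groupId and version are illustrative placeholders, not the project's actual coordinates:

```xml
<!-- Parent pom.xml: aggregates the master and slave modules.
     groupId/version here are placeholders for illustration. -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.funtester</groupId>
  <artifactId>dcs_funtester</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>pom</packaging>
  <modules>
    <module>slave</module>
    <module>master</module>
  </modules>
</project>
```

The master module can then depend on the slave module with an ordinary `<dependency>` element referencing the slave's coordinates, which IntelliJ resolves without extra configuration.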

Sub‑module Dependencies

Attempts to declare shared dependencies in the parent module’s build.gradle using a subprojects { dependencies { … } } block failed, most likely because the deprecated compile configuration and local JAR dependencies were involved.
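For comparison, the pattern that generally works in recent Gradle versions declares shared dependencies with `implementation` inside a `subprojects` block; whether it applies to a given build depends on the plugin setup, so treat this as a sketch:

```groovy
// Root build.gradle: shared configuration for all sub-modules.
// Each sub-module must apply the 'java' plugin before the
// 'implementation' configuration exists.
subprojects {
    apply plugin: 'java'
    repositories { mavenCentral() }
    dependencies {
        // 'implementation' replaces the deprecated 'compile' configuration
        implementation 'org.slf4j:slf4j-api:1.7.36'
    }
}
```

Local JAR dependencies, by contrast, are better declared per-module with `files(...)` so the path resolves relative to the module that owns the JAR.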

settings.gradle

The simple configuration is:

rootProject.name = 'dcs_funtester'
include 'slave'
include 'master'

Result Collection

Test results are aggregated on the master node; each slave pushes its results to the master after a test case finishes.

Example of handling a single request on the slave side:

@Async
@Override
public void runRequest(HttpRequest request) {
    BaseRequest r = request.getRequest();
    // Rebuild an executable HttpRequestBase from the serialized request
    HttpRequestBase re = FunRequest.initFromJson(r.toJson()).getRequest();
    Integer times = request.getTimes();
    String mode = request.getMode();
    Integer thread = request.getThread();
    Integer runup = request.getRunup();
    String desc = request.getDesc();
    // "ftt" mode: each thread executes a fixed number of requests
    if (mode.equalsIgnoreCase("ftt")) {
        Constant.RUNUP_TIME = runup;
        RequestThreadTimes task = new RequestThreadTimes(re, times);
        Concurrent concurrent = new Concurrent(task, thread, desc);
        PerformanceResultBean resultBean = concurrent.start();
        // Push the aggregated result back to the master, keyed by the task mark
        SlaveManager.updateResult(resultBean, request.getMark());
    }
}

Corresponding controller on the master side:

@ApiOperation(value = "Update test result")
@ApiImplicitParam(name = "bean", value = "Test result object", dataTypeClass = PerformanceResultBean.class)
@PostMapping(value = "/upresult/{mark}")
public Result updateResult(@PathVariable(value = "mark") int mark, @RequestBody PerformanceResultBean bean) {
    NodeData.addResult(mark, bean);
    return Result.success();
}

Result storage is kept in memory within the JVM without persistence, as the feature may be replaced by server‑side statistics later.
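An in-memory store along these lines can be as simple as a concurrent map keyed by the task mark. The class and method names below are hypothetical illustrations, not the framework's actual NodeData internals:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

public class ResultStore {

    // mark -> results reported by each slave node for that task
    private static final Map<Integer, List<String>> RESULTS = new ConcurrentHashMap<>();

    // Called once per reporting slave; thread-safe for concurrent reports.
    public static void addResult(int mark, String resultJson) {
        RESULTS.computeIfAbsent(mark, k -> new CopyOnWriteArrayList<>()).add(resultJson);
    }

    // Aggregated view for one task; empty if nothing has been reported yet.
    public static List<String> getResults(int mark) {
        return RESULTS.getOrDefault(mark, List.of());
    }
}
```

Because everything lives in the JVM heap, results vanish on restart, which is acceptable here given that server-side statistics may replace this feature.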

Task Distribution

When the master node receives a task, it checks node requirements against available nodes, dispatches the task to each selected node, and rolls back if any node fails.

Demo of a single‑request execution:

@Override
int runRequest(HttpRequest request) {
    def num = request.getMark();
    // Pick the slave hosts that satisfy the task's node requirement
    def hosts = NodeData.getRunHost(num);
    // Generate a fresh mark to identify this distributed run
    def mark = SourceCode.getMark();
    request.setMark(mark);
    try {
        hosts.each {
            def re = MasterManager.runRequest(it, request);
            if (!re) FailException.fail();
            NodeData.addTask(it, mark);
        }
    } catch (FailException e) {
        // Roll back: stop the task on every node if any dispatch failed
        hosts.each { f -> MasterManager.stop(f) };
        FailException.fail("Multi-node execution failed!");
    }
    return mark;
}

Health‑Check Interface

A temporary health‑check endpoint was added to the slave node so the master can periodically verify liveness, mitigating failures caused by frequent slave node churn. This will later be replaced by Nacos service discovery.
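The periodic sweep on the master side could be sketched as follows. This is a hypothetical helper, not the framework's actual code; the liveness probe is injected as a predicate (e.g. a reference to the alive utility shown below) so the logic can be exercised without real HTTP calls:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Predicate;

public class NodeSweeper {

    private final List<String> hosts = new CopyOnWriteArrayList<>();
    private final Predicate<String> alive; // e.g. MasterManager::alive

    public NodeSweeper(Predicate<String> alive) {
        this.alive = alive;
    }

    public void register(String host) {
        hosts.add(host);
    }

    // One sweep: drop every host whose liveness probe fails.
    public void sweep() {
        hosts.removeIf(h -> !alive.test(h));
    }

    public List<String> liveHosts() {
        return List.copyOf(hosts);
    }
}
```

In production this sweep would run on a fixed interval, for instance via a ScheduledExecutorService, until Nacos service discovery takes over node liveness tracking.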

Slave node endpoint:

@ApiOperation(value = "Node status: alive or not")
@GetMapping(value = "/alive")
public Result alive() {
    return Result.success();
}

Master node utility method:

static boolean alive(String host) {
    try {
        String url = SlaveApi.ALIVE;
        return isRight(getGetResponse(host, url, null));
    } catch (Exception e) {
        logger.warn("Node: {} liveness probe failed!", host);
        return false;
    }
}

Registration Optimization

To prevent the master from registering unreachable slave nodes, the registration process now includes a health‑check step before adding the slave to the node list.

Controller code for registration:

@ApiOperation(value = "Registration endpoint")
@PostMapping(value = "/register")
public Result register(@Valid @RequestBody RegisterBean bean) {
    def url = bean.getUrl();
    // Probe the slave before accepting it into the node list
    def alive = MasterManager.alive(url);
    if (!alive) FailException.fail("Registration failed!");
    NodeData.register(url, false);
    return Result.success();
}

Overall, the DCS_FunTester framework now has a stable feature set, and future work will focus on integrating management components such as Nacos and extending business‑specific functionalities.

Tags: Java, Microservices, Backend Development, Gradle, Distributed Testing, DCS_FunTester
Written by FunTester