
Integrating Chinese Open‑Source AI Platforms with Java SDK and Prompt Engineering

This article introduces several Chinese open‑source AI platforms, shows how to import their Java SDKs, obtain API keys, run test demos, encapsulate a reusable AI module with Spring Boot configuration, and apply prompt‑engineering techniques to generate AI‑driven questionnaire content.


Chinese open-source AI platforms that support SDK calls include ZhipuAI (offering chat, vision, and code-generation models), Baidu PaddlePaddle, Tencent AI Lab, Alibaba Cloud PAI, and Huawei ModelArts. The examples below use ZhipuAI.

Import Dependency

<dependency>
    <groupId>cn.bigmodel.openapi</groupId>
    <artifactId>oapi-java-sdk</artifactId>
    <version>release-V4-2.3.0</version>
</dependency>
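For Gradle builds, the same artifact can be declared with the coordinates above (a Kotlin DSL sketch; adapt to your build script dialect):

```kotlin
dependencies {
    implementation("cn.bigmodel.openapi:oapi-java-sdk:release-V4-2.3.0")
}
```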

Obtain API Key – Retrieve the key from the personal center of the platform’s website.
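Hardcoding the key is acceptable for a quick demo, but for anything beyond that it is safer to read it from the environment. A minimal sketch, assuming an arbitrarily named ZHIPU_API_KEY variable (the variable name is not part of the SDK):

```java
public final class ApiKeyResolver {
    private ApiKeyResolver() {}

    /**
     * Returns the first non-blank candidate, so an environment variable
     * can override a value from configuration.
     */
    public static String resolve(String envValue, String configuredValue) {
        if (envValue != null && !envValue.isBlank()) {
            return envValue;
        }
        if (configuredValue != null && !configuredValue.isBlank()) {
            return configuredValue;
        }
        throw new IllegalStateException("No API key provided");
    }

    public static void main(String[] args) {
        // Prefer the environment variable; fall back to a configured value.
        String apiKey = resolve(System.getenv("ZHIPU_API_KEY"), "your-apikey");
        System.out.println(apiKey);
    }
}
```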

Test Demo – A simple @SpringBootTest verifies that the AI call works (replace the placeholder with your own key).

@SpringBootTest
public class ZhiPuAiTest {
    @Test
    public void test() {
        String apiKey = "your-apikey";
        // create client
        ClientV4 client = new ClientV4.Builder(apiKey).build();
        // build request
        List<ChatMessage> messages = new ArrayList<>();
        ChatMessage chatMessage = new ChatMessage(ChatMessageRole.USER.value(), "As a marketing expert, please write an attractive slogan for the Zhipu open platform");
        messages.add(chatMessage);
        String requestId = String.valueOf(System.currentTimeMillis());
        ChatCompletionRequest chatCompletionRequest = ChatCompletionRequest.builder()
                .model(Constants.ModelChatGLM4)
                .stream(Boolean.FALSE)
                .invokeMethod(Constants.invokeMethod)
                .messages(messages)
                .requestId(requestId)
                .build();
        // invoke
        ModelApiResponse resp = client.invokeModelApi(chatCompletionRequest);
        System.out.println("model output: " + resp.getData().getChoices().get(0).getMessage().getContent());
    }
}

Encapsulate a Generic AI Module

The configuration file (application.yml) stores the API key; in production, prefer referencing an environment variable instead, e.g. api-key: ${AI_API_KEY}:

ai:
  api-key: your-key

Configuration class creates a ClientV4 bean:

@Configuration
@ConfigurationProperties(prefix = "ai")
@Data
public class AiConfig {

    /** API key, bound from the "ai.api-key" property */
    private String apiKey;

    @Bean
    public ClientV4 clientV4() {
        return new ClientV4.Builder(apiKey).build();
    }
}

The AiManager component wraps synchronous and streaming requests, with a stable preset (temperature 0.05) and an unstable preset (temperature 0.99), plus utility methods that accept either a system/user message pair or a full message list.

@Component
public class AiManager {
    @Resource
    private ClientV4 clientV4;
    private static final float STABLE_TEMPERATURE = 0.05f;
    private static final float UNSTABLE_TEMPERATURE = 0.99f;
    public String doSyncUnstableRequest(String systemMessage, String userMessage) {
        return doRequest(systemMessage, userMessage, Boolean.FALSE, UNSTABLE_TEMPERATURE);
    }
    public String doSyncStableRequest(String systemMessage, String userMessage) {
        return doRequest(systemMessage, userMessage, Boolean.FALSE, STABLE_TEMPERATURE);
    }
    public String doSyncRequest(String systemMessage, String userMessage, Float temperature) {
        return doRequest(systemMessage, userMessage, Boolean.FALSE, temperature);
    }
    public String doRequest(String systemMessage, String userMessage, Boolean stream, Float temperature) {
        List<ChatMessage> chatMessageList = new ArrayList<>();
        chatMessageList.add(new ChatMessage(ChatMessageRole.SYSTEM.value(), systemMessage));
        chatMessageList.add(new ChatMessage(ChatMessageRole.USER.value(), userMessage));
        return doRequest(chatMessageList, stream, temperature);
    }
    public String doRequest(List<ChatMessage> messages, Boolean stream, Float temperature) {
        ChatCompletionRequest req = ChatCompletionRequest.builder()
                .model(Constants.ModelChatGLM4)
                .stream(stream)
                .temperature(temperature)
                .invokeMethod(Constants.invokeMethod)
                .messages(messages)
                .build();
        try {
            ModelApiResponse resp = clientV4.invokeModelApi(req);
            // return the model's text content rather than the Choice object's toString()
            return resp.getData().getChoices().get(0).getMessage().getContent().toString();
        } catch (Exception e) {
            // wrap SDK failures in the project's business exception
            throw new BusinessException(ErrorCode.SYSTEM_ERROR, e.getMessage());
        }
    }
    public Flowable<ModelData> doStreamRequest(String systemMessage, String userMessage, Float temperature) {
        List<ChatMessage> list = new ArrayList<>();
        list.add(new ChatMessage(ChatMessageRole.SYSTEM.value(), systemMessage));
        list.add(new ChatMessage(ChatMessageRole.USER.value(), userMessage));
        return doStreamRequest(list, temperature);
    }
    public Flowable<ModelData> doStreamRequest(List<ChatMessage> messages, Float temperature) {
        ChatCompletionRequest req = ChatCompletionRequest.builder()
                .model(Constants.ModelChatGLM4)
                .stream(Boolean.TRUE)
                .temperature(temperature)
                .invokeMethod(Constants.invokeMethod)
                .messages(messages)
                .build();
        try {
            ModelApiResponse resp = clientV4.invokeModelApi(req);
            return resp.getFlowable();
        } catch (Exception e) {
            // wrap SDK failures in the project's business exception
            throw new BusinessException(ErrorCode.SYSTEM_ERROR, e.getMessage());
        }
    }
}

Prompt Design Techniques – Six practical tips are presented: define a system prompt, assign a role, provide detailed requirements, use separators, apply chain-of-thought prompting, and supply few-shot examples.
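Several of these tips compose naturally in one prompt. A minimal sketch combining role-playing, separators, and a few-shot example (the wording is illustrative, not from the SDK or the article):

```java
public class PromptBuilder {
    /**
     * Builds a system prompt that assigns a role, fences user-provided
     * content with separators, and supplies one worked example (few-shot).
     */
    public static String buildSystemPrompt(String role, String task, String example) {
        return "You are " + role + ".\n"
                + "Task: " + task + "\n"
                + "The input will be wrapped in triple backticks:\n"
                + "```\n{input}\n```\n"
                + "Example of the expected output:\n"
                + example;
    }

    public static void main(String[] args) {
        String prompt = buildSystemPrompt(
                "a rigorous quiz-question expert",
                "generate multiple-choice questions as a JSON array",
                "[{\"title\":\"...\",\"options\":[{\"key\":\"A\",\"value\":\"...\"}]}]");
        System.out.println(prompt);
    }
}
```

The separators keep untrusted input clearly delimited from the instructions, and the worked example anchors the output format better than a prose description alone.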

Project‑Level AI Invocation

Define a constant containing the prompt template for questionnaire generation:

private static final String GENERATE_QUESTION_SYSTEM_MESSAGE = "You are a rigorous quiz-question expert. I will give you the following information:\n" +
        "```\n" +
        "application name,\n" +
        "【【【application description】】】,\n" +
        "application category,\n" +
        "number of questions to generate,\n" +
        "number of options per question\n" +
        "```\n" +
        "Based on this information, generate the questions as follows:\n" +
        "1. Requirements: keep questions and options as short as possible; questions must not contain numbering; the number of options per question must match what I provide; questions must not repeat\n" +
        "2. Output the questions and options strictly in the JSON format below\n" +
        "```\n" +
        "[{\"options\":[{\"value\":\"option content\",\"key\":\"A\"},{\"value\":\"\",\"key\":\"B\"}],\"title\":\"question title\"}]\n" +
        "```\n" +
        "title is the question, options are the choices; each option's key follows alphabetical order (A, B, C, D, and so on) and value is the option content\n" +
        "3. Check whether a question contains numbering; if it does, remove the numbering\n" +
        "4. The returned question list must be a JSON array";
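The controller below relies on a getGenerateQuestionUserMessage helper that the article does not show. A plausible sketch, assuming the App entity exposes name, description, and category accessors (the record here is a hypothetical stand-in) and that the user message mirrors the field order declared in the system prompt:

```java
public class QuestionMessageBuilder {
    // Hypothetical minimal stand-in for the project's App entity.
    public record App(String appName, String appDesc, String appType) {}

    /** Assembles the user message in the order the system prompt declares. */
    public static String getGenerateQuestionUserMessage(App app, int questionNumber, int optionNumber) {
        StringBuilder sb = new StringBuilder();
        sb.append(app.appName()).append("\n");
        // The 【【【 】】】 separators mark the free-form description,
        // matching the system prompt above.
        sb.append("【【【").append(app.appDesc()).append("】】】").append("\n");
        sb.append(app.appType()).append("\n");
        sb.append(questionNumber).append("\n");
        sb.append(optionNumber);
        return sb.toString();
    }

    public static void main(String[] args) {
        App app = new App("MBTI test", "A quick personality quiz", "personality");
        System.out.println(getGenerateQuestionUserMessage(app, 10, 4));
    }
}
```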

A controller endpoint then calls the AI to generate questionnaire items, extracts the JSON portion of the response, parses it into DTOs, and returns them:

@PostMapping("/ai_generate")
public BaseResponse<List<QuestionContentDTO>> aiGenerateQuestion(@RequestBody AiGenerateQuestionRequest req) {
    ThrowUtils.throwIf(req == null, ErrorCode.PARAMS_ERROR);
    Long appId = req.getAppId();
    int questionNumber = req.getQuestionNumber();
    int optionNumber = req.getOptionNumber();
    App app = appService.getById(appId);
    ThrowUtils.throwIf(app == null, ErrorCode.NOT_FOUND_ERROR);
    String userMessage = getGenerateQuestionUserMessage(app, questionNumber, optionNumber);
    String result = aiManager.doSyncRequest(GENERATE_QUESTION_SYSTEM_MESSAGE, userMessage, null);
    // extract the JSON array portion of the model output
    int start = result.indexOf("[");
    int end = result.lastIndexOf("]");
    String json = result.substring(start, end + 1);
    List<QuestionContentDTO> list = JSONUtil.toList(json, QuestionContentDTO.class);
    return ResultUtils.success(list);
}
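The indexOf/lastIndexOf extraction above throws StringIndexOutOfBoundsException when the model returns no JSON array at all. A defensive variant (a sketch, not the article's code) that fails with a clear error instead:

```java
public final class JsonArrayExtractor {
    private JsonArrayExtractor() {}

    /**
     * Extracts the outermost JSON array from an LLM response that may wrap
     * it in prose or Markdown fences. Throws if no array is present.
     */
    public static String extractJsonArray(String result) {
        int start = result.indexOf('[');
        int end = result.lastIndexOf(']');
        if (start < 0 || end < start) {
            throw new IllegalArgumentException("AI response contains no JSON array: " + result);
        }
        return result.substring(start, end + 1);
    }

    public static void main(String[] args) {
        String raw = "Here you go:\n```json\n[{\"title\":\"Q1\"}]\n```";
        System.out.println(extractJsonArray(raw)); // prints [{"title":"Q1"}]
    }
}
```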

Importing the SDK, wiring the client through Spring Boot configuration, encapsulating a reusable AiManager, and applying prompt engineering together provide a complete recipe for integrating AI SDKs into a Java backend project.

Tags: backend, Java, Artificial Intelligence, prompt engineering, Spring Boot, AI SDK
Written by Selected Java Interview Questions, a professional Java tech channel sharing common knowledge to help developers fill gaps.