Integrating DeepSeek Large Model with SpringAI in Java Applications
This article provides a concise guide on using SpringAI to connect Java applications with the domestic large‑language model DeepSeek, covering design philosophy, configuration, code examples for chat, streaming, structured output, security hardening, performance tuning, and production best practices.
Hello everyone, I am Chuck1sn, a developer dedicated to promoting modern JVM technology stacks.
When Java Meets a Domestic Large Model
SpringAI, the AI integration framework in the Spring ecosystem, now officially supports direct connection to the domestic large model DeepSeek, allowing Java projects to call the model using a standardized approach.
This quick‑reference article focuses on four main points:
Understanding SpringAI's abstract design philosophy
Configuring a direct DeepSeek connection
Implementing complete chat and streaming responses
Production‑grade best practices
1. SpringAI Design Philosophy
1.1 Unified API Abstraction
The core value of SpringAI is to unify the differing APIs of various AI providers (OpenAI, Azure, DeepSeek) behind a single ChatClient interface:
```java
public interface ChatClient {

    ChatResponse call(Prompt prompt);

    Flux<ChatResponse> stream(Prompt prompt);
}
```

This aligns with the Hexagonal Architecture: AI capability becomes a pluggable port, decoupled from business logic.
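The port-and-adapter idea can be shown without any framework at all. The sketch below is plain Java; `ChatPort`, `DeepSeekAdapter`, and `SummaryService` are illustrative names, not SpringAI classes. The point is that business code compiles against an interface only, so the provider behind it can be swapped freely:

```java
public class PortAdapterDemo {

    // The "port": all the business layer knows about AI.
    interface ChatPort {
        String call(String prompt);
    }

    // One adapter per provider; a real one would wrap the vendor client.
    static class DeepSeekAdapter implements ChatPort {
        public String call(String prompt) {
            return "[deepseek] answer to: " + prompt;
        }
    }

    // Business logic never references a concrete provider.
    static class SummaryService {
        private final ChatPort chat;
        SummaryService(ChatPort chat) { this.chat = chat; }

        String summarize(String text) {
            return chat.call("Summarize: " + text);
        }
    }

    public static void main(String[] args) {
        SummaryService service = new SummaryService(new DeepSeekAdapter());
        System.out.println(service.summarize("hexagonal architecture"));
    }
}
```

Replacing `DeepSeekAdapter` with an adapter for any other provider leaves `SummaryService` untouched, which is exactly what the unified `ChatClient` abstraction buys you.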
1.2 Configuration‑Based Connection
By editing application.yml, you can switch providers easily:
```yaml
spring:
  ai:
    provider: deepseek # change this value to switch providers
    deepseek:
      base-url: https://api.deepseek.com/v1
      api-key: ${DEEPSEEK_API_KEY}
```

This configuration style integrates naturally with Spring Security and Spring Cloud configuration centers, supporting dynamic switching of model providers.
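What "switching providers by configuration" boils down to can be sketched in plain Java: the provider key selects an adapter, and calling code is untouched when the value changes. All names below are illustrative, not SpringAI internals:

```java
import java.util.Map;

public class ProviderSwitchDemo {

    interface ChatPort { String call(String prompt); }

    // One registered adapter per provider (stubbed here).
    static final Map<String, ChatPort> ADAPTERS = Map.of(
        "deepseek", prompt -> "[deepseek] " + prompt,
        "openai",   prompt -> "[openai] " + prompt
    );

    // In Spring this selection happens during auto-configuration,
    // driven by the configured provider key; here it is a plain lookup.
    static ChatPort resolve(String provider) {
        ChatPort port = ADAPTERS.get(provider);
        if (port == null) {
            throw new IllegalArgumentException("unknown provider: " + provider);
        }
        return port;
    }

    public static void main(String[] args) {
        System.out.println(resolve("deepseek").call("hello"));
    }
}
```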
2. Quick Integration of DeepSeek
2.1 Add Dependency
Include the DeepSeek module in your pom.xml:
```xml
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-deepseek</artifactId>
    <version>0.8.1</version>
</dependency>
```

2.2 Configure Connection Parameters
Provide DeepSeek credentials in application.yml:
```yaml
spring:
  ai:
    deepseek:
      base-url: https://api.deepseek.com/v1
      api-key: ${DEEPSEEK_API_KEY} # avoid hardcoding keys in source control
      chat:
        options:
          model: deepseek-chat
          temperature: 0.7
```

The configuration inheritance mechanism lets global settings be overridden by the more specific chat options.
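That inheritance is essentially a layered merge in which the more specific chat options shadow the global defaults. A minimal illustration with plain maps (the keys mirror the YAML above; this is not SpringAI's actual merge code):

```java
import java.util.HashMap;
import java.util.Map;

public class OptionMergeDemo {

    static Map<String, Object> effectiveOptions(Map<String, Object> globals,
                                                Map<String, Object> overrides) {
        Map<String, Object> merged = new HashMap<>(globals);
        merged.putAll(overrides); // specific options shadow global ones
        return merged;
    }

    public static void main(String[] args) {
        Map<String, Object> globals = Map.of("model", "deepseek-chat", "temperature", 0.7);
        Map<String, Object> perCall = Map.of("temperature", 0.2);
        // temperature comes from the per-call options, model from the globals
        System.out.println(effectiveOptions(globals, perCall));
    }
}
```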
2.3 Implement Basic Conversation
Create a service component:
```java
@Service
@RequiredArgsConstructor
public class DeepSeekService {

    private final DeepSeekChatClient chatClient;

    public String generateContent(String prompt) {
        Prompt request = new Prompt(new UserMessage(prompt));
        return chatClient.call(request).getResult().getOutput().getContent();
    }
}
```

Expose an API in a controller:
```java
@RestController
@RequestMapping("/api/ai")
@RequiredArgsConstructor
public class AIController {

    private final DeepSeekService deepSeekService;

    @PostMapping("/ask")
    public ResponseEntity<String> askQuestion(@RequestBody String question) {
        String answer = deepSeekService.generateContent(question);
        return ResponseEntity.ok(answer);
    }
}
```

3. Advanced Features
3.1 Streaming Response
Use Server‑Sent Events (SSE) for real‑time feedback:
```java
@GetMapping(value = "/stream", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<String> streamResponse(@RequestParam String prompt) {
    return chatClient.stream(new Prompt(prompt))
            .map(response -> response.getResult().getOutput().getContent());
}
```

Frontend JavaScript example:
```javascript
const eventSource = new EventSource('/api/ai/stream?prompt=How to design a distributed system');
eventSource.onmessage = (e) => {
  console.log(e.data); // receive incremental chunks
};
```

3.2 Structured Output
Define a prompt template that forces JSON‑formatted results:
```java
@Bean
public PromptTemplate userPromptTemplate() {
    return new PromptTemplate("""
            Please classify the following user feedback:
            {feedback}
            Return in JSON format:
            {
              "category": "bug|feature|compliment",
              "severity": 1-5
            }
            """);
}

public AnalysisResult analyzeFeedback(String feedback) throws JsonProcessingException {
    Prompt prompt = userPromptTemplate().create(Map.of("feedback", feedback));
    String json = chatClient.call(prompt).getResult().getOutput().getContent();
    return objectMapper.readValue(json, AnalysisResult.class);
}
```

4. Production‑Grade Best Practices
4.1 Security Hardening
Protect AI endpoints with Spring Security:
```java
@Bean
SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception {
    http.authorizeHttpRequests(auth -> auth
            .requestMatchers("/api/ai/**").hasRole("AI_USER")
            .anyRequest().authenticated())
        .oauth2ResourceServer(oauth2 -> oauth2.jwt(Customizer.withDefaults()));
    return http.build();
}
```

4.2 Performance Tuning
Adjust client timeouts and connection pool size:
```yaml
spring:
  ai:
    deepseek:
      client:
        connect-timeout: 5s
        read-timeout: 30s
        max-connections: 50
```

4.3 Monitoring and Alerts
Integrate Micrometer metrics with a common tag for the AI provider:
```java
@Bean
MeterRegistryCustomizer<MeterRegistry> metricsCommonTags() {
    return registry -> registry.config()
            .commonTags("ai.provider", "deepseek");
}
```

5. Architectural Considerations: Where AI Fits
In a typical domain‑driven design, place the AI service between the application layer and the domain layer:
UI Layer
  ↓
Application Service Layer → AI Service Proxy (prompt engineering)
  ↓
Domain Model Layer
  ↓
Infrastructure Layer (SpringAI implementation)

This ensures the domain model remains independent of specific AI implementations: the application service orchestrates AI context, while the infrastructure layer handles the technical details.
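This layering can be sketched in plain Java: the application layer calls an AI service proxy that owns prompt engineering, and only the infrastructure adapter knows the vendor. `ModelGateway` and `FeedbackClassifier` are hypothetical names, and the gateway is stubbed rather than backed by a real SpringAI client:

```java
public class LayeringDemo {

    // Infrastructure port (would be backed by the SpringAI/DeepSeek adapter).
    interface ModelGateway {
        String complete(String prompt);
    }

    // AI service proxy: builds prompts and shields the domain from the model.
    static class FeedbackClassifier {
        private final ModelGateway gateway;
        FeedbackClassifier(ModelGateway gateway) { this.gateway = gateway; }

        String classify(String feedback) {
            String prompt = "Classify the following user feedback as bug, feature or compliment:\n"
                    + feedback;
            return gateway.complete(prompt);
        }
    }

    public static void main(String[] args) {
        // Stubbed gateway; in production this is the only layer that changes.
        FeedbackClassifier classifier = new FeedbackClassifier(prompt -> "bug");
        System.out.println(classifier.classify("The app crashes on login"));
    }
}
```

Because prompt construction lives in the proxy, neither the domain model nor the application layer ever sees provider-specific details.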
Conclusion
Integrating DeepSeek via SpringAI brings large‑model capabilities into Java applications while keeping the architecture sustainable: encapsulate AI calls behind domain services, add audit logs for critical AI operations, and regularly evaluate model output for business consistency.
Rare Earth Juejin Tech Community
Juejin, a tech community that helps developers grow.