How to Optimize Performance and Deploy a Production‑Ready Blog System

This article walks through a complete performance‑optimization and deployment pipeline for a Spring Boot blog, covering multi‑level caching with Caffeine and Redis, database indexing and cursor pagination, read‑write splitting, asynchronous processing, rate limiting, Docker multi‑stage builds, Nginx reverse‑proxy setup, Actuator monitoring, custom metrics, health checks, alerting, JMeter load testing, and JVM tuning.

Coder Trainee

Overall Performance Flow

Requests flow: CDN → Nginx → local cache (Caffeine) → distributed cache (Redis) → MySQL.

Multi‑Level Cache Architecture

Cache tiers

L1: Caffeine (local) – hot data, TTL 1‑5 min, max 10 000 entries.

L2: Redis – user sessions, article details, TTL 10‑30 min.

L3: MySQL – persistent storage.

Caffeine configuration

@Configuration
public class CacheConfig {
    @Bean
    public Cache<String, Object> localCache() {
        return Caffeine.newBuilder()
                .maximumSize(10000)
                .expireAfterWrite(5, TimeUnit.MINUTES)
                .recordStats()
                .build();
    }

    /** Hot article cache (prevent cache breakdown) */
    @Bean
    public Cache<Long, ArticleDetailVO> hotArticleCache() {
        return Caffeine.newBuilder()
                .maximumSize(100)
                .expireAfterWrite(10, TimeUnit.MINUTES)
                .build();
    }
}

Multi‑level cache service

@Service
@Slf4j
public class MultiLevelCacheService {
    @Autowired
    private Cache<String, Object> localCache;
    @Autowired
    private RedisTemplate<String, Object> redisTemplate;
    @Autowired
    private ArticleMapper articleMapper;
    private static final String ARTICLE_CACHE_KEY = "article:detail:";

    /** Multi‑level cache query */
    public ArticleDetailVO getArticleWithCache(Long articleId) {
        String localKey = "article:" + articleId;
        // 1. Local cache
        ArticleDetailVO cached = (ArticleDetailVO) localCache.getIfPresent(localKey);
        if (cached != null) {
            log.debug("Local cache hit: {}", articleId);
            return cached;
        }
        // 2. Redis cache
        String redisKey = ARTICLE_CACHE_KEY + articleId;
        ArticleDetailVO vo = (ArticleDetailVO) redisTemplate.opsForValue().get(redisKey);
        if (vo != null) {
            log.debug("Redis cache hit: {}", articleId);
            localCache.put(localKey, vo);
            return vo;
        }
        // 3. DB (synchronized to avoid cache breakdown; note this only serializes
        //    within one JVM, so use a distributed lock across multiple instances)
        synchronized (this) {
            vo = (ArticleDetailVO) redisTemplate.opsForValue().get(redisKey);
            if (vo != null) {
                return vo;
            }
            vo = articleMapper.selectArticleDetail(articleId);
            if (vo != null) {
                redisTemplate.opsForValue().set(redisKey, vo, 30, TimeUnit.MINUTES);
                localCache.put(localKey, vo);
            }
        }
        return vo;
    }

    /** Cache eviction when article updates */
    public void evictArticleCache(Long articleId) {
        String localKey = "article:" + articleId;
        String redisKey = ARTICLE_CACHE_KEY + articleId;
        localCache.invalidate(localKey);
        redisTemplate.delete(redisKey);
    }
}

Bloom filter to prevent cache penetration

@Component
public class BloomFilterService {
    @Autowired
    private ArticleMapper articleMapper;

    private BloomFilter<String> bloomFilter;

    @PostConstruct
    public void init() {
        // Expect 1,000,000 items, 1% false‑positive rate
        bloomFilter = BloomFilter.create(Funnels.stringFunnel(StandardCharsets.UTF_8), 1_000_000, 0.01);
        // Load existing article IDs
        List<Long> articleIds = articleMapper.selectAllIds();
        for (Long id : articleIds) {
            bloomFilter.put("article:" + id);
        }
    }

    /** Check whether an article ID possibly exists */
    public boolean mightExist(Long articleId) {
        return bloomFilter.mightContain("article:" + articleId);
    }
}
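Guava's `BloomFilter` hides the bit array and the hashing behind its API; to make the mechanics concrete, here is a toy stdlib sketch of the same idea (class and parameter names are illustrative, not from the original project):

```java
import java.util.BitSet;

/** Toy Bloom filter: k hash probes over an m-bit array; no false negatives, tunable false positives. */
class ToyBloomFilter {
    private final BitSet bits;
    private final int size;
    private final int hashes;

    ToyBloomFilter(int size, int hashes) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashes = hashes;
    }

    // Derive k probe positions from two base hashes (double-hashing scheme)
    private int probe(String key, int i) {
        int h1 = key.hashCode();
        int h2 = (h1 >>> 16) | 1;                 // force an odd second hash
        return Math.floorMod(h1 + i * h2, size);
    }

    void put(String key) {
        for (int i = 0; i < hashes; i++) bits.set(probe(key, i));
    }

    /** May return true for keys never added (false positive); never false for added keys. */
    boolean mightContain(String key) {
        for (int i = 0; i < hashes; i++) {
            if (!bits.get(probe(key, i))) return false;
        }
        return true;
    }
}
```

The "no false negatives" property is what makes the filter safe as a gate in front of the cache: a `false` answer lets the request be rejected without touching Redis or MySQL.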

Database Optimization

Index tuning

-- Article table composite indexes
CREATE INDEX idx_status_published ON article(status, published_time DESC);
CREATE INDEX idx_user_status ON article(user_id, status);
CREATE INDEX idx_category_status ON article(category_id, status);

-- Comment table indexes
CREATE INDEX idx_article_parent ON comment(article_id, parent_id);
CREATE INDEX idx_user ON comment(user_id);

-- Tag association table index
CREATE INDEX idx_article_tag ON article_tag(article_id, tag_id);

Cursor pagination (replace OFFSET)

<!-- Cursor pagination query -->
<select id="selectPageByCursor" resultType="Article">
    SELECT * FROM article
    WHERE status = 1 AND deleted = 0
    <if test="lastId != null">
        AND id &lt; #{lastId}
    </if>
    ORDER BY id DESC
    LIMIT #{pageSize}
</select>

public PageResult<Article> pageByCursor(Long lastId, int pageSize) {
    List<Article> list = articleMapper.selectPageByCursor(lastId, pageSize);
    Long nextCursor = list.isEmpty() ? null : list.get(list.size() - 1).getId();
    return PageResult.of(list, nextCursor);
}
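The cursor logic itself is independent of MyBatis; a self-contained sketch over an in-memory, id-descending list (names are illustrative) shows why each page picks up exactly where the previous one ended:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

class CursorPager {
    /** Return up to pageSize ids strictly below lastId (null lastId = first page), descending. */
    static List<Long> page(List<Long> ids, Long lastId, int pageSize) {
        List<Long> sorted = new ArrayList<>(ids);
        sorted.sort(Comparator.reverseOrder());           // mirrors ORDER BY id DESC
        List<Long> out = new ArrayList<>();
        for (Long id : sorted) {
            if (lastId != null && id >= lastId) continue; // mirrors AND id < #{lastId}
            if (out.size() == pageSize) break;            // mirrors LIMIT #{pageSize}
            out.add(id);
        }
        return out;
    }

    /** Next cursor is the last id of the page, or null when the page is empty. */
    static Long nextCursor(List<Long> page) {
        return page.isEmpty() ? null : page.get(page.size() - 1);
    }
}
```

Unlike `OFFSET`, which scans and discards all skipped rows, the cursor query seeks directly to `lastId` via the primary-key index, so the cost per page stays constant however deep the reader paginates.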

Read‑write splitting

# application.yml
spring:
  datasource:
    master:
      jdbc-url: jdbc:mysql://master:3306/blog
      username: root
      password: xxx
    slave:
      jdbc-url: jdbc:mysql://slave:3306/blog
      username: root
      password: xxx

@Configuration
public class DataSourceConfig {
    @Bean
    @Primary
    @ConfigurationProperties("spring.datasource.master")
    public DataSource masterDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    @ConfigurationProperties("spring.datasource.slave")
    public DataSource slaveDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    public DataSource routingDataSource() {
        Map<Object, Object> targetDataSources = new HashMap<>();
        targetDataSources.put("master", masterDataSource());
        targetDataSources.put("slave", slaveDataSource());
        RoutingDataSource routingDataSource = new RoutingDataSource();
        routingDataSource.setDefaultTargetDataSource(masterDataSource());
        routingDataSource.setTargetDataSources(targetDataSources);
        return routingDataSource;
    }
}

@Target({ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
public @interface DataSource {
    String value() default "master";
}

@Aspect
@Component
public class DataSourceAspect {
    @Before("@annotation(dataSource)")
    public void before(JoinPoint point, DataSource dataSource) {
        DataSourceContextHolder.setDataSource(dataSource.value());
    }

    @After("@annotation(dataSource)")
    public void after(DataSource dataSource) {
        DataSourceContextHolder.clear();
    }
}
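The aspect delegates to a `DataSourceContextHolder` that the article does not show; a minimal ThreadLocal-based sketch (a common pattern, assumed here rather than taken from the original project):

```java
/** Per-thread routing key storage; the RoutingDataSource reads it when picking a connection. */
class DataSourceContextHolder {
    private static final ThreadLocal<String> HOLDER = new ThreadLocal<>();

    static void setDataSource(String key) {
        HOLDER.set(key);
    }

    static String getDataSource() {
        return HOLDER.get();
    }

    /** Always clear after the method returns to avoid leaking keys across pooled threads. */
    static void clear() {
        HOLDER.remove();
    }
}
```

The matching `RoutingDataSource` would extend Spring's `AbstractRoutingDataSource` and return `DataSourceContextHolder.getDataSource()` from `determineCurrentLookupKey()`; annotating a read-only service method with `@DataSource("slave")` then routes its queries to the replica.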

API Performance Enhancements

Asynchronous processing

@Component
public class AsyncService {
    @Autowired
    private RedisTemplate<String, Object> redisTemplate;
    @Autowired
    private NotificationService notificationService;

    /** Record daily unique views per article via HyperLogLog (requires @EnableAsync on a configuration class) */
    @Async
    public void recordViewCount(Long articleId, String ip) {
        String today = DateUtil.formatDate(new Date());
        redisTemplate.opsForHyperLogLog().add("uv:" + today + ":" + articleId, ip);
    }

    @Async
    public void sendNotification(Long userId, String message) {
        notificationService.send(userId, message);
    }
}

Batch fetching article tags (avoid N+1)

public Map<Long, List<Tag>> batchGetTags(List<Long> articleIds) {
    List<ArticleTag> relations = articleTagMapper.selectByArticleIds(articleIds);
    List<Long> tagIds = relations.stream()
            .map(ArticleTag::getTagId)
            .distinct()
            .collect(Collectors.toList());
    List<Tag> tags = tagMapper.selectBatchIds(tagIds);
    Map<Long, Tag> tagMap = tags.stream()
            .collect(Collectors.toMap(Tag::getId, Function.identity()));
    Map<Long, List<Tag>> result = new HashMap<>();
    for (ArticleTag relation : relations) {
        result.computeIfAbsent(relation.getArticleId(), k -> new ArrayList<>())
              .add(tagMap.get(relation.getTagId()));
    }
    return result;
}
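The same two-bulk-query-plus-in-memory-join pattern can be exercised in isolation; a stdlib sketch with stand-in `ArticleTag`/`Tag` records (illustrative, not the project's entities):

```java
import java.util.*;
import java.util.function.Function;
import java.util.stream.Collectors;

class TagBatcher {
    record ArticleTag(Long articleId, Long tagId) {}
    record Tag(Long id, String name) {}

    /** Two bulk lookups plus an in-memory group-by replace one query per article (the N+1 pattern). */
    static Map<Long, List<Tag>> batchGetTags(List<ArticleTag> relations, List<Tag> tags) {
        Map<Long, Tag> tagMap = tags.stream()
                .collect(Collectors.toMap(Tag::id, Function.identity()));
        Map<Long, List<Tag>> result = new HashMap<>();
        for (ArticleTag relation : relations) {
            result.computeIfAbsent(relation.articleId(), k -> new ArrayList<>())
                  .add(tagMap.get(relation.tagId()));
        }
        return result;
    }
}
```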

Sliding‑window rate limiting

@Component
public class RateLimiterService {
    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    /** Sliding-window rate limit (the check-then-add below is not atomic; use a Lua script for strict guarantees) */
    public boolean allowRequest(String key, int maxCount, int windowSeconds) {
        long now = System.currentTimeMillis();
        String windowKey = "rate:limit:" + key;
        // Remove entries that have fallen out of the window
        redisTemplate.opsForZSet().removeRangeByScore(windowKey, 0, now - windowSeconds * 1000L);
        Long count = redisTemplate.opsForZSet().zCard(windowKey);
        if (count != null && count >= maxCount) {
            return false;
        }
        // Unique member so concurrent requests in the same millisecond are counted separately
        redisTemplate.opsForZSet().add(windowKey, now + ":" + UUID.randomUUID(), now);
        redisTemplate.expire(windowKey, windowSeconds, TimeUnit.SECONDS);
        return true;
    }
}

@GetMapping("/api/article/list")
public Result list(@RequestParam Long userId) {
    if (!rateLimiter.allowRequest("user:" + userId, 10, 60)) {
        return Result.error(429, "Too many requests");
    }
    // ...
}
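For a single-instance analogue of the Redis ZSET approach (useful for unit-testing the window semantics; the injected clock is an assumption for testability, not part of the original code):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.LongSupplier;

/** Sliding-window limiter over an in-memory timestamp queue; same semantics as the ZSET version. */
class LocalSlidingWindow {
    private final int maxCount;
    private final long windowMillis;
    private final LongSupplier clock;            // injected so tests can control time
    private final Deque<Long> hits = new ArrayDeque<>();

    LocalSlidingWindow(int maxCount, int windowSeconds, LongSupplier clock) {
        this.maxCount = maxCount;
        this.windowMillis = windowSeconds * 1000L;
        this.clock = clock;
    }

    synchronized boolean allowRequest() {
        long now = clock.getAsLong();
        // Drop timestamps that fell out of the window (ZREMRANGEBYSCORE equivalent)
        while (!hits.isEmpty() && hits.peekFirst() <= now - windowMillis) {
            hits.pollFirst();
        }
        if (hits.size() >= maxCount) {           // ZCARD check
            return false;
        }
        hits.addLast(now);                       // ZADD
        return true;
    }
}
```

Because the window slides with every request rather than resetting on a fixed boundary, a burst straddling two fixed windows cannot double the allowed rate, which is the main advantage over a simple counter-per-minute scheme.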

Docker Deployment

Multi‑stage Dockerfile

# Multi‑stage build
FROM maven:3.8-openjdk-11 AS builder
WORKDIR /app
COPY pom.xml .
RUN mvn dependency:go-offline
COPY src ./src
RUN mvn clean package -DskipTests

FROM openjdk:11-jre-slim
WORKDIR /app
COPY --from=builder /app/target/blog-*.jar app.jar
COPY --from=builder /app/src/main/resources/application-docker.yml ./config/
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "-Dspring.profiles.active=docker", "app.jar"]

docker‑compose.yml

version: '3.8'
services:
  mysql:
    image: mysql:8.0
    container_name: blog-mysql
    environment:
      MYSQL_ROOT_PASSWORD: root123
      MYSQL_DATABASE: blog
    ports:
      - "3306:3306"
    volumes:
      - mysql-data:/var/lib/mysql
    networks:
      - blog-network

  redis:
    image: redis:7-alpine
    container_name: blog-redis
    ports:
      - "6379:6379"
    networks:
      - blog-network

  elasticsearch:
    image: elasticsearch:8.5.0
    container_name: blog-es
    environment:
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ports:
      - "9200:9200"
    networks:
      - blog-network

  backend:
    build: ./backend
    container_name: blog-backend
    depends_on:
      - mysql
      - redis
      - elasticsearch
    ports:
      - "8080:8080"
    networks:
      - blog-network

  nginx:
    image: nginx:alpine
    container_name: blog-nginx
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./dist:/usr/share/nginx/html
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - backend
    networks:
      - blog-network

networks:
  blog-network:
    driver: bridge

volumes:
  mysql-data:

Nginx reverse‑proxy configuration

# nginx.conf
upstream blog_backend {
    server backend:8080 weight=3;
    keepalive 32;
}

server {
    listen 80;
    server_name blog.example.com;

    # Gzip compression
    gzip on;
    gzip_types text/plain text/css application/json application/javascript text/xml;
    gzip_min_length 1024;

    # Static resource cache
    location ~* \.(css|js|png|jpg|jpeg|gif|ico|svg)$ {
        expires 30d;
        add_header Cache-Control "public, immutable";
    }

    # API proxy
    location /api/ {
        proxy_pass http://blog_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # Frontend pages
    location / {
        root /usr/share/nginx/html;
        try_files $uri $uri/ /index.html;
    }
}

Monitoring and Alerting

Spring Boot Actuator

management:
  endpoints:
    web:
      exposure:
        include: health,metrics,info,prometheus
  metrics:
    export:
      prometheus:
        enabled: true

Custom metrics with Micrometer

@Component
public class CustomMetrics {
    @Autowired
    private MeterRegistry meterRegistry;
    private Counter requestCounter;
    private Timer requestTimer;

    @PostConstruct
    public void init() {
        requestCounter = Counter.builder("api.requests.total")
                .description("Total request count")
                .register(meterRegistry);
        requestTimer = Timer.builder("api.request.duration")
                .description("Request latency")
                .register(meterRegistry);
    }

    public void recordRequest(String api, long duration, boolean success) {
        requestCounter.increment();
        requestTimer.record(duration, TimeUnit.MILLISECONDS);
        meterRegistry.counter("api.requests", "api", api, "success", String.valueOf(success)).increment();
    }
}

Health‑check endpoint

@RestController
public class HealthChecker {
    @Autowired
    private RedisTemplate redisTemplate;
    @Autowired
    private JdbcTemplate jdbcTemplate;
    @Autowired
    private RestTemplate restTemplate;

    @GetMapping("/actuator/health/custom")
    public Map<String, Object> customHealth() {
        Map<String, Object> health = new HashMap<>();
        health.put("status", "UP");
        // Redis check
        try {
            String ping = (String) redisTemplate.execute((RedisCallback<String>) connection -> connection.ping());
            health.put("redis", "PONG".equals(ping) ? "UP" : "DOWN");
        } catch (Exception e) {
            health.put("redis", "DOWN");
            health.put("status", "DOWN");
        }
        // MySQL check
        try {
            jdbcTemplate.queryForObject("SELECT 1", Integer.class);
            health.put("mysql", "UP");
        } catch (Exception e) {
            health.put("mysql", "DOWN");
            health.put("status", "DOWN");
        }
        return health;
    }
}

Alert via DingTalk robot

@Component
public class AlertService {
    @Value("${dingtalk.webhook}")
    private String webhook;
    @Autowired
    private RestTemplate restTemplate;

    public void sendAlert(String title, String content, String level) {
        Map<String, Object> message = new HashMap<>();
        message.put("msgtype", "markdown");
        Map<String, String> markdown = new HashMap<>();
        markdown.put("title", title);
        markdown.put("text", String.format(
                "## %s%n%n**Level:** %s%n%n**Content:** %s%n%n**Time:** %s",
                title, level, content, new Date()));
        message.put("markdown", markdown);
        restTemplate.postForObject(webhook, message, String.class);
    }
}

Stress Testing and JVM Tuning

JMeter load‑test recommendations

// Test configuration suggestions
// Environment: 4‑core CPU, 8 GB RAM, single machine
// Goal: QPS 1000, response time < 200 ms

// Thread group
Threads: 200
Ramp‑up: 10 seconds
Loops: 100

// Sample results
// Before optimization: QPS 300, avg response 500 ms
// After optimization: QPS 1200, avg response 80 ms

Production JVM parameters

# JVM options for production (flags must precede -jar; the legacy PrintGC* flags
# were removed in JDK 11, so unified logging via -Xlog is used instead)
java \
  -Xms2g -Xmx2g \
  -XX:+UseG1GC \
  -XX:MaxGCPauseMillis=200 \
  -Xlog:gc*:file=/var/log/gc.log:time \
  -Duser.timezone=Asia/Shanghai \
  -jar app.jar

Repository

Full source code: https://github.com/xxx/blog-system

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.
