Step-by-Step Guide to Deploying a Small Web Application to Alibaba Cloud with Frontend Packaging and Backend Setup
This article is a hands‑on tutorial covering front‑end resource bundling, server configuration, building and uploading the Spring Boot back‑end modules, writing a deployment script, managing environment‑specific properties, and implementing a Baidu Tieba hot‑search crawler, for a complete end‑to‑end cloud deployment.
1. Introduction
The guide walks through preparing a small website for cloud deployment, covering front‑end resource packaging, back‑end build, environment configuration, and a Baidu Tieba hot‑search crawler.
2. Frontend Resource Packaging
2.1 Modify Configuration
Update apiService.js to set baseURL to your server IP and port (80 for HTTP, 443 for HTTPS).
import axios from "axios";
// Create axios instance with base URL
const apiClient = axios.create({
baseURL: "http://ip:80/api",
headers: { "Content-Type": "application/json" }
});
export default {
// Wrapper for GET requests
get(fetchUrl) { return apiClient.get(fetchUrl); }
};
Replace the vue.config.js content to disable source maps and filename hashing:
const { defineConfig } = require('@vue/cli-service')
module.exports = defineConfig({
transpileDependencies: true,
// Remove .map files for production
productionSourceMap: false,
// Disable filename hashing
filenameHashing: false
})
2.2 Execute Build Command
Run the following command in the terminal:
npm run build
The build creates a dist folder containing the HTML, JS, and CSS assets.
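If the site is served by the Spring Boot application itself (the spring.thymeleaf.prefix=classpath:/static/ setting in section 3.3 points there), the dist output has to land in the back end's static resources before the JAR is packaged. A minimal sketch, assuming hypothetical directory names web/ and summo-sbmy/; the first lines merely stand in for the folder that npm run build produces:

```shell
# Stand-in for the dist folder produced by `npm run build`
# (assumed layout: front end in ./web, back-end module in ./summo-sbmy)
mkdir -p web/dist/css web/dist/js
echo '<html></html>' > web/dist/index.html

# Copy the packaged assets into the back end's classpath:/static/ directory,
# matching the spring.thymeleaf.prefix=classpath:/static/ setting used later
mkdir -p summo-sbmy/src/main/resources/static
cp -r web/dist/. summo-sbmy/src/main/resources/static/
```

After this copy, a subsequent mvn clean package bundles the front end into the JAR, so a single process serves both the pages and the API.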
3. Backend Application Packaging
3.1 Install Alibaba Cloud Toolkit
In IntelliJ IDEA, go to File → Settings → Plugins, search for Alibaba Cloud Toolkit, and install it. The plugin provides direct access to Alibaba Cloud ECS instances.
3.2 Deploy xxl‑job Application
(1) Package
Run the Maven package goal to generate the JAR.
(2) Upload
Use the Cloud Toolkit to upload the JAR to /home/admin/xxl-job on the ECS server.
(3) Run with Shell Script
Create start.sh to manage the application lifecycle:
#!/bin/bash
APP_NAME=xxx.jar
usage() { echo "Usage: sh script.sh [start|stop|restart|status]"; exit 1; }
is_exist(){ pid=`ps -ef|grep $APP_NAME|grep -v grep|awk '{print $2}'`; if [ -z "${pid}" ]; then return 1; else return 0; fi; }
start(){ is_exist; if [ $? -eq 0 ]; then echo "${APP_NAME} is already running. pid=${pid}."; else nohup java -jar /home/admin/$APP_NAME > /dev/null 2>&1 & echo "${APP_NAME} start success"; fi; }
stop(){ is_exist; if [ $? -eq 0 ]; then kill -9 $pid; else echo "${APP_NAME} is not running"; fi; }
status(){ is_exist; if [ $? -eq 0 ]; then echo "${APP_NAME} is running. Pid is ${pid}"; else echo "${APP_NAME} is NOT running."; fi; }
restart(){ stop; start; }
case "$1" in
"start") start ;;
"stop") stop ;;
"status") status ;;
"restart") restart ;;
*) usage ;;
esac
Make the script executable with chmod 744 start.sh and use ./start.sh start to launch the service.
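A side note on the is_exist helper above: the ps -ef | grep | grep -v grep chain works but is easy to get subtly wrong, and pgrep -f does the same lookup in one call. A hedged sketch of that alternative (the status_of function and the jar placeholder are illustrative, not part of the guide's script):

```shell
#!/bin/bash
# Sketch of an alternative PID lookup: pgrep -f matches against the full
# command line, so the classic `grep -v grep` self-exclusion is unnecessary.
# pgrep exits non-zero when nothing matches, hence the `|| true`.
status_of() {
  local name="$1"
  local pid
  pid=$(pgrep -f "$name" || true)
  if [ -z "$pid" ]; then
    echo "$name is NOT running."
  else
    echo "$name is running. Pid is $pid"
  fi
}

status_of "xxx.jar"
```

One caveat either way: pattern-based matching will also hit any other process whose command line contains the jar name, so keeping jar file names unique per service is worthwhile.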
3.3 Environment‑Specific Configuration
Add Maven profiles for the daily (development) and publish (production) environments, and reference the active one in application.properties via spring.profiles.active=@environment@ :
# Activate environment
spring.profiles.active=@environment@
# Basic settings
spring.application.name=summo-sbmy
server.port=80
spring.thymeleaf.prefix=classpath:/static/
spring.thymeleaf.suffix=.html
spring.thymeleaf.mode=HTML
# MyBatis settings
mybatis.configuration.auto-mapping-behavior=full
mybatis.configuration.map-underscore-to-camel-case=true
mybatis-plus.mapper-locations=classpath*:/mybatis/mapper/*.xml
Build with the appropriate profile:
mvn clean package -Dmaven.test.skip=true -Pdaily   # Development
mvn clean package -Dmaven.test.skip=true -Ppublish # Production
4. Baidu Tieba Hot‑Search Crawler (Optional Extension)
The crawler fetches hot topics from Baidu Tieba using Jsoup and stores them in the database.
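Since the crawler relies on Jsoup for fetching and parsing the page, the library must be declared in the module's pom.xml. A minimal sketch; the version shown is an assumption, substitute a current release:

```xml
<!-- HTML fetching and CSS-selector parsing used by the crawler job -->
<dependency>
    <groupId>org.jsoup</groupId>
    <artifactId>jsoup</artifactId>
    <version>1.15.3</version>
</dependency>
```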
package com.summo.sbmy.job.tieba;
import java.io.IOException;
import java.net.URI;
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;
import java.util.*;
import java.util.stream.Collectors;
import javax.annotation.PostConstruct;
import com.google.common.collect.Lists;
import com.summo.sbmy.common.model.dto.HotSearchDetailDTO;
import com.summo.sbmy.dao.entity.SbmyHotSearchDO;
import com.summo.sbmy.service.SbmyHotSearchService;
import com.summo.sbmy.service.convert.HotSearchConvert;
import com.xxl.job.core.biz.model.ReturnT;
import com.xxl.job.core.handler.annotation.XxlJob;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.collections4.CollectionUtils;
import org.apache.commons.lang3.StringUtils;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.select.Elements;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import static com.summo.sbmy.common.cache.SbmyHotSearchCache.CACHE_MAP;
import static com.summo.sbmy.common.enums.HotSearchEnum.TIEBA;
/**
* Tieba hot‑search Java crawler
*/
@Component
@Slf4j
public class TiebaHotSearchJob {
@Autowired
private SbmyHotSearchService sbmyHotSearchService;
@PostConstruct
public void init() {
try { hotSearch(null); } catch (IOException e) { log.error("Init error", e); }
}
@XxlJob("tiebaHotSearchJob")
public ReturnT<String> hotSearch(String param) throws IOException {
log.info("Tieba hot‑search job start");
String url = "https://tieba.baidu.com/hottopic/browse/topicList?res_type=1";
List<SbmyHotSearchDO> list = Lists.newArrayList();
Document doc = Jsoup.connect(url).get();
Elements titles = doc.select(".topic-top-item-desc");
Elements urls = doc.select(".topic-text");
Elements levels = doc.select(".topic-num");
for (int i = 0; i < levels.size(); i++) {
SbmyHotSearchDO d = SbmyHotSearchDO.builder().hotSearchResource(TIEBA.getCode()).build();
d.setHotSearchTitle(titles.get(i).text().trim());
d.setHotSearchUrl(urls.get(i).attr("href"));
d.setHotSearchId(getValueFromUrl(d.getHotSearchUrl(), "topic_id"));
d.setHotSearchHeat(levels.get(i).text().trim().replace("W实时讨论", "") + "万");
d.setHotSearchOrder(i + 1);
list.add(d);
}
if (CollectionUtils.isEmpty(list)) return ReturnT.SUCCESS;
CACHE_MAP.put(TIEBA.getCode(), HotSearchDetailDTO.builder()
.hotSearchDTOList(list.stream().map(HotSearchConvert::toDTOWhenQuery).collect(Collectors.toList()))
.updateTime(Calendar.getInstance().getTime()).build());
sbmyHotSearchService.saveCache2DB(list);
log.info("Tieba hot‑search job end");
return ReturnT.SUCCESS;
}
public static String getValueFromUrl(String url, String param) {
if (StringUtils.isAnyBlank(url, param)) throw new RuntimeException("URL or param empty");
try {
URI uri = new URI(url);
String query = uri.getQuery();
Map<String, String> map = new HashMap<>();
for (String pair : query.split("&")) {
int idx = pair.indexOf("=");
String key = URLDecoder.decode(pair.substring(0, idx), StandardCharsets.UTF_8.name());
String value = URLDecoder.decode(pair.substring(idx + 1), StandardCharsets.UTF_8.name());
map.put(key, value);
}
return map.get(param);
} catch (Exception e) {
log.error("Parameter extraction error", e);
throw new RuntimeException("Failed to get param from URL");
}
}
}
Running the crawler periodically populates the hot‑search cache and persists the data to the database.
5. Conclusion
By following the steps above, you can package the front‑end assets, configure and build the Spring Boot back‑end modules, deploy them to an Alibaba Cloud ECS instance with a custom start.sh script, and optionally extend the site with a Baidu Tieba hot‑search crawler.