Step‑by‑Step Guide to Deploy UCloud’s Free USDP Big Data Platform on CentOS
This article walks you through the complete installation and configuration of UCloud's free USDP (UCloud Data Platform) on a three‑node CentOS 7.2‑7.6 cluster, covering environment preparation, package download, repair scripts, MySQL setup, service startup, web UI activation, monitoring, and a quick Hive query example.
Background
After Cloudera and Hortonworks merged, the free community editions of CDH and HDP were discontinued. UCloud has released USDP (UCloud Data Platform), a free, one‑stop big data platform that supports HDFS, Kudu, Elasticsearch, and other components.
Environment Preparation
USDP requires a Manager Node with a MySQL instance and Worker Nodes running the Agent. A minimum of three CentOS 7.2‑7.6 nodes (8 CPU, 32 GB RAM, 500 GB data disk) is needed.
Download and Extract USDP
Download the free package (≈43 GB) from https://s3-cn-bj.ufileos.com/jungle111111/usdp-1.0.0.0/install/usdp-free-1.0.0.tar.gz , then extract it to /opt/usdp-srv/ on the repair node.
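Concretely, the step above can be done with wget and tar on the repair node. This is a sketch using the URL and target path given in this guide; `wget -c` lets the roughly 43 GB download resume after an interruption:

```shell
mkdir -p /opt/usdp-srv
cd /opt
# -c resumes a partial download - worthwhile for a ~43 GB package.
wget -c https://s3-cn-bj.ufileos.com/jungle111111/usdp-1.0.0.0/install/usdp-free-1.0.0.tar.gz
tar -xzf usdp-free-1.0.0.tar.gz -C /opt/usdp-srv/
```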
[root@node1 usdp-1.0.0.0]# ll
Directory Structure
agent : USDP distributed client
bin : start/stop scripts
config : configuration files
jmx_exporter : monitoring exporter
recommend : service templates
repair : initialization scripts and packages
repository : service resource packages
scripts : auxiliary scripts
server : USDP manager
sql : metadata initialization SQL
templated : service configuration templates
verify : certificate storage
versions : package version info
Repair Module Configuration
Edit repair.properties to set YUM source, NMAP, NTP, MySQL hosts, passwords, and node counts. Edit repair-host-info.properties to list each node’s IP, SSH port, password, and hostname.
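The article does not reproduce repair-host-info.properties. Purely to illustrate the shape of its per-node entries (IP, SSH port, SSH password, hostname), here is a hypothetical sketch; the actual key names may differ, so copy the template shipped in the repair directory rather than these:

```properties
# Hypothetical field names - follow the template shipped with USDP.
# One block per node: IP, SSH port, SSH password, hostname.
usdp.ip.1=10.23.110.136
usdp.ssh.port.1=22
usdp.ssh.password.1=abcd123456
usdp.hostname.1=node1
```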
# Set the YUM source host IP
yum.repo.host.ip=10.23.110.136
# NMAP host
nmap.server.ip=10.23.110.136
nmap.server.port=22
nmap.server.password=abcd123456
# MySQL host
mysql.ip=10.23.110.136
mysql.host.ssh.port=22
mysql.host.ssh.password=abcd123456
mysql.password=abc123456
repair.host.num=3
repair.log.dir=./logs
Initialize the Cluster
Run the one‑click repair script on the repair node:
cd /opt/usdp-srv/usdp/repair/sbin
bash repair.sh initAll
source /etc/profile
The script installs required packages (JDK, Python, MySQL), distributes configuration files, and reports SUCCESS for each component.
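If initAll fails partway through, a common culprit is a missing key in repair.properties. The keys can be sanity-checked with a small self-contained script of my own (not part of USDP); the here-doc below inlines the values from the listing above, so on a real repair node point CONF at the actual file instead:

```shell
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
yum.repo.host.ip=10.23.110.136
nmap.server.ip=10.23.110.136
nmap.server.port=22
mysql.ip=10.23.110.136
mysql.password=abc123456
repair.host.num=3
repair.log.dir=./logs
EOF
# Verify every key the repair script reads is present.
missing=0
for key in yum.repo.host.ip nmap.server.ip mysql.ip mysql.password repair.host.num; do
  grep -q "^${key}=" "$CONF" || { echo "missing: ${key}"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "repair.properties looks complete"
rm -f "$CONF"
```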
Configure MySQL for USDP
Update /opt/usdp-srv/usdp/config/application-server.yml with the datasource URL, driver, username, and password. Note that the driver is P6Spy (com.p6spy.engine.spy.P6SpyDriver), a wrapper around the underlying MySQL JDBC driver that logs SQL statements, which is why the URL uses the jdbc:p6spy:mysql:// prefix.
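Before editing, it can help to confirm that the MySQL instance installed by the repair step actually answers. A sketch using the host and root password from this walkthrough:

```shell
# Should print a "1" column if MySQL is up and the credentials match.
mysql -h node1 -P 3306 -uroot -p'abc123456' -e 'SELECT 1'
```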
datasource:
  type: com.zaxxer.hikari.HikariDataSource
  driver-class-name: com.p6spy.engine.spy.P6SpyDriver
  url: jdbc:p6spy:mysql://node1:3306/db_udp?useUnicode=true&characterEncoding=utf-8&useSSL=false
  username: root
  password: abc123456
Start USDP Manager Service
On the manager node, execute:
cd /opt/usdp-srv/usdp/
bin/start-udp-server.sh
A successful start is indicated by the message "UDP Server is running with: 10691".
Web UI and Cluster Creation
Access http://10.23.110.136, set an admin password, import the free license (generated from the hardware ID), and use the wizard to create a cluster. Choose at least three nodes, select a recommended component set (e.g., Recommendation B), configure HDFS and Hive defaults, and start deployment. Progress reaches 100 % when finished.
Monitoring and Alerts
USDP aggregates JMX, HTTP, and custom metrics into Prometheus and provides Grafana dashboards. Pre‑defined alert templates can notify via WeChat, DingTalk, email, or webhook, and users can create custom alerts.
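Under the hood this follows the standard Prometheus pull model. For readers unfamiliar with it, a generic scrape job for a JMX-exporter-style endpoint looks like the sketch below; USDP generates and manages its own Prometheus configuration, so the job name and port here are assumptions for illustration, not USDP values:

```yaml
scrape_configs:
  - job_name: usdp-jmx            # name is illustrative
    static_configs:
      - targets: ['node1:9404']   # port is an assumption; check the USDP-managed config
```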
Cluster Usage Example
Log in as the hadoop user and launch Hive:
su hadoop
/srv/udp/1.0.0.0/hive/bin/hive
Create a table and insert a row:
create table iteblog_test_usdp_hive (id int, name string, age int);
insert into iteblog_test_usdp_hive values (1,'iteblog',100);
select * from iteblog_test_usdp_hive;
The default execution engine is Tez, which can be changed via the USDP UI.
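Besides the UI, the engine can also be switched for a single session from the Hive CLI via the standard `hive.execution.engine` property (commonly `mr`, `tez`, or `spark`, depending on what the cluster has installed):

```sql
-- Session-level override; the cluster-wide default stays whatever USDP configured.
set hive.execution.engine=mr;
select * from iteblog_test_usdp_hive;
```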
Conclusion
USDP offers a fully automated, free alternative to CDH for deploying a production‑grade Hadoop ecosystem, dramatically reducing manual effort and error risk.
UCloud Tech
UCloud is a leading neutral cloud provider in China, developing its own IaaS, PaaS, AI service platform, and big data exchange platform, and delivering comprehensive industry solutions for public, private, hybrid, and dedicated clouds.