How to Deploy Ceph Offline on ZTE NewStart Linux: Step‑by‑Step Guide
This tutorial walks through creating an offline Ceph RPM repository on a ZTE NewStart Linux machine, configuring custom yum sources, installing Ceph, and setting up monitor, OSD, and manager daemons across multiple nodes for a fully functional storage cluster.
About ZTE NewStart System
ZTE NewStart operating system is based on a stable Linux kernel and includes three variants: embedded (NewStart CGEL), server (NewStart CGSL), and desktop (NewStart NSDL). After nearly ten years of development, it offers security hardening, autonomy, and easy management, and is used by telecom operators, large state enterprises, and e‑government solutions.
Background
Amid national digital transformation and the push for domestic substitution, especially in government projects, Chinese-made CPUs and operating systems are preferred for security and independence. This article demonstrates offline Ceph deployment on ZTE NewStart CGSL running on a Hygon (HaiGuang) CPU; the method also applies to Anolis OS and CentOS 8.
Creating an Offline Installation Package
Use a network‑connected ZTE NewStart machine to build an offline RPM repository for Ceph. Choose a minimal installation to avoid dependency conflicts with default components such as libvirt and qemu.
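As a quick sanity check before installing, you can scan the installed package list for components likely to collide. The sketch below is illustrative (the package names and helper function are assumptions, not an exhaustive list); it takes the package list as an argument so it can be tried without root:

```shell
# Illustrative pre-flight check: flag installed packages that commonly
# pull in dependencies conflicting with Ceph's (libvirt/qemu).
check_conflicts() {
    # $1: newline-separated package names, e.g. from: rpm -qa --qf '%{NAME}\n'
    echo "$1" | grep -E '^(libvirt|qemu-kvm|qemu-img)$' || true
}

# On a real system, feed it the actual package list:
installed=$(rpm -qa --qf '%{NAME}\n' 2>/dev/null || true)
check_conflicts "$installed"
```

Any package the check prints is a candidate for removal before building the repository.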
Yum Repository Configuration
ZTE NewStart does not provide an official online yum source; use AnolisOS and EPEL mirrors instead, placing the following .repo files under /etc/yum.repos.d/.
<code>AnolisOS.repo
[AppStream]
name=AnolisOS-8.6 - AppStream
baseurl=http://mirrors.openanolis.cn/anolis/8.6/AppStream/x86_64/os
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CGSL-V6
[BaseOS]
name=AnolisOS-8.6 - BaseOS
baseurl=http://mirrors.openanolis.cn/anolis/8.6/BaseOS/x86_64/os
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CGSL-V6
... (additional repo sections omitted for brevity) ...
</code> <code>epel.repo
[epel]
name=Extra Packages for Enterprise Linux 8 - $basearch
# It is much more secure to use the metalink, but if you wish to use a local mirror
# place its address here.
baseurl=https://mirrors.aliyun.com/epel/8/Everything/$basearch
#metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-8&arch=$basearch&infra=$infra&content=$contentdir
enabled=1
gpgcheck=1
countme=1
gpgkey=file:///etc/yum.repos.d/RPM-GPG-KEY-EPEL-8
</code> <code>ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://download.ceph.com/rpm-pacific/el8/$basearch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-pacific/el8/noarch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[ceph-source]
name=Ceph source packages
baseurl=http://download.ceph.com/rpm-pacific/el8/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
</code>Configure yum to keep downloaded packages by setting a cache directory and keepcache=1 in /etc/yum.conf:
<code>[main]
gpgcheck=1
installonly_limit=3
clean_requirements_on_remove=True
best=True
skip_if_unavailable=False
cachedir=/data/yum
keepcache=1
</code>Install Ceph
<code>yum install ceph -y
</code>Create Offline Repository
<code>find /data/yum -name "*.rpm" -exec cp {} /mnt \;   # /data/yum is the cachedir set above
createrepo /mnt
cd / && tar -zcvf offline.tar.gz mnt/
</code>Install Ceph from Offline Package
Extract and install the RPMs without dependency checks:
<code>tar -zxvf offline.tar.gz
cd mnt
rpm -ivh *.rpm --nodeps --force
</code>Deploy Monitor Nodes
Each Ceph cluster needs at least one monitor (mon). The guide creates three monitors on node1, node2, and node3.
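Three monitors are the usual minimum for fault tolerance: monitors form a majority quorum, so the cluster only stays available while a strict majority of them is up. A small sketch of the arithmetic:

```shell
# N monitors tolerate floor((N-1)/2) failures, which is why a single
# monitor gives no redundancy while three tolerate one node loss.
tolerated_failures() {
    echo $(( ($1 - 1) / 2 ))
}

tolerated_failures 1   # → 0
tolerated_failures 3   # → 1
tolerated_failures 5   # → 2
```

This is also why monitor counts are kept odd: a fourth monitor raises the quorum size without tolerating any additional failures.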
Add Monitor on node1
Generate a unique FSID and create the initial configuration:
<code>uuidgen
</code>Edit /etc/ceph/ceph.conf to include the FSID, monitor members, network settings, and authentication options.
<code>vim /etc/ceph/ceph.conf
[global]
fsid=9c079a1f-6fc2-4c59-bd4d-e8bc232d33a4
mon initial members = node1
mon host = 192.168.2.16
public network = 192.168.2.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
osd pool default size = 3
osd pool default min size = 2
osd pool default pg num = 8
osd pool default pgp num = 8
osd crush chooseleaf type = 1
</code>Create the monitor, admin, and bootstrap-osd keyrings, then import the admin key into the monitor keyring:
<code>ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
</code>Generate the monitor map and initialize the monitor daemon:
<code>monmaptool --create --add `hostname` 192.168.2.16 --fsid 9c079a1f-6fc2-4c59-bd4d-e8bc232d33a4 /tmp/monmap
sudo -u ceph mkdir /var/lib/ceph/mon/ceph-`hostname`
sudo -u ceph ceph-mon --mkfs -i `hostname` --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
systemctl start ceph-mon@`hostname` && systemctl enable ceph-mon@`hostname`
</code>Copy the keyring and configuration to the other monitor nodes and adjust ownership:
<code>scp /tmp/ceph.mon.keyring node2:/tmp/ceph.mon.keyring
scp /etc/ceph/* root@node2:/etc/ceph/
scp /var/lib/ceph/bootstrap-osd/ceph.keyring root@node2:/var/lib/ceph/bootstrap-osd/
# repeat for node3
chown ceph:ceph /tmp/ceph.mon.keyring   # run on each receiving node
</code>Start the monitors on all nodes and update /etc/ceph/ceph.conf with all monitor hosts:
<code>vim /etc/ceph/ceph.conf
# in the [global] section:
mon initial members = node1,node2,node3
mon host = 192.168.2.16,192.168.2.17,192.168.2.18
systemctl restart ceph-mon@`hostname`
</code>Remove a Monitor
<code>ceph mon remove {mon-id}
</code>Add OSDs
Use the ceph-volume utility to prepare and activate OSDs on each node.
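The commands below operate on a single disk; when several data disks are involved, one cautious pattern is to generate the ceph-volume invocations first and review them before running anything destructive. A hedged sketch (the helper name and device list are assumptions):

```shell
# Build, but do not run, one ceph-volume command per data disk.
# Review the echoed commands, then pipe them to sh when satisfied.
osd_create_cmds() {
    for dev in "$@"; do
        echo "ceph-volume lvm create --data $dev"
    done
}

osd_create_cmds /dev/sdb /dev/sdc
# When the output looks right:
# osd_create_cmds /dev/sdb /dev/sdc | sh
```

This dry-run pattern costs little and avoids wiping the wrong device with a mistyped path.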
Create OSD
<code>ceph-volume lvm create --data /dev/sdb
</code>The process consists of preparation and activation stages:
<code>ceph-volume lvm prepare --data /dev/sdb
ceph-volume lvm list # view OSD FSID
ceph-volume lvm activate {ID} {FSID}
</code>Start OSD daemons on each node:
<code># node1
systemctl restart ceph-osd@0
systemctl enable ceph-osd@0
# node2
systemctl restart ceph-osd@1
systemctl enable ceph-osd@1
# node3
systemctl restart ceph-osd@2
systemctl enable ceph-osd@2
</code>Create MGR Daemon
Each monitor node should also run a manager daemon.
Create Key Directory
<code>sudo -u ceph mkdir /var/lib/ceph/mgr/ceph-`hostname -s`
cd /var/lib/ceph/mgr/ceph-`hostname -s`
</code>Create Authentication Key
<code>ceph auth get-or-create mgr.`hostname -s` mon 'allow profile mgr' osd 'allow *' mds 'allow *' > keyring
chown ceph:ceph /var/lib/ceph/mgr/ceph-`hostname -s`/keyring
</code>Start MGR Daemon
<code>systemctl enable ceph-mgr@`hostname -s` && systemctl start ceph-mgr@`hostname -s`
# or run in the foreground:
ceph-mgr -i `hostname -s`
</code>Cluster Status
After completing the above steps, verify the Ceph cluster status with ceph -s; the output should report the monitors, managers, and OSDs created above.
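For scripted checks, ceph health prints a status token (HEALTH_OK, HEALTH_WARN, or HEALTH_ERR) that is easy to branch on. A small sketch, suitable for cron or monitoring hooks (the helper and messages are my own, not part of Ceph):

```shell
# Map the token printed by `ceph health` to a message and exit status.
check_health() {
    case "$1" in
        HEALTH_OK)   echo "cluster healthy";  return 0 ;;
        HEALTH_WARN) echo "cluster degraded"; return 1 ;;
        *)           echo "cluster error";    return 2 ;;
    esac
}

check_health "HEALTH_OK"   # → cluster healthy
# Real usage on a node with the admin keyring:
# check_health "$(ceph health | awk '{print $1}')"
```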
Ops Development Stories
Maintained by a like‑minded team, covering both operations and development. Topics span Linux ops, DevOps toolchain, Kubernetes containerization, monitoring, log collection, network security, and Python or Go development. Team members: Qiao Ke, wanger, Dong Ge, Su Xin, Hua Zai, Zheng Ge, Teacher Xia.