
Deploy Ceph Offline on ZTE NewStart CGSL: Step‑by‑Step Guide

This article is a step-by-step tutorial for installing Ceph on ZTE NewStart CGSL from an offline RPM repository. It covers yum repository setup, Ceph package installation, and configuration of the monitor, OSD, and MGR nodes, with complete command examples.


About ZTE NewStart System

The ZTE NewStart operating system is based on a stable Linux kernel and comes in three editions: Embedded OS (NewStart CGEL), Server OS (NewStart CGSL), and Desktop OS (NewStart NSDL). After nearly ten years of development it offers security hardening, independent control, and easy management, and is used by telecom operators, large enterprises, and e-government projects.

Background

In the context of national digital transformation and domestic substitution, government projects in particular use domestic CPUs and operating systems to achieve independent innovation and secure reliability. This guide demonstrates offline Ceph deployment on ZTE NewStart CGSL running on a Hygon (Haiguang) CPU; the steps also apply to Anolis OS (Longxi) and CentOS 8.

Creating Offline Installation Package

First, use a network‑connected ZTE NewStart machine to create an offline RPM repository for Ceph. Since the system installs components like libvirt and qemu by default, choose a minimal installation to avoid dependency conflicts.
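If you are unsure what the base image ships, a quick check for the usual suspects helps before deciding on a reinstall. A minimal sketch; the package names below are illustrative, so adjust them to what your image actually carries:

```shell
# Record which potentially conflicting virtualization packages are present.
# Package names are illustrative; adjust for your image.
conflicts=""
if command -v rpm >/dev/null 2>&1; then
    for pkg in libvirt qemu-kvm qemu-img; do
        rpm -q "$pkg" >/dev/null 2>&1 && conflicts="$conflicts $pkg"
    done
fi
echo "potentially conflicting packages:${conflicts:- (none found)}"
```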

Yum Repository Configuration

ZTE NewStart does not provide an official public yum repository. Configure yum with the following AnolisOS and EPEL repository definitions instead.

<code>## AnolisOS.repo
[AppStream]
name=AnolisOS-8.6 - AppStream
baseurl=http://mirrors.openanolis.cn/anolis/8.6/AppStream/x86_64/os
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CGSL-V6

[BaseOS]
name=AnolisOS-8.6 - BaseOS
baseurl=http://mirrors.openanolis.cn/anolis/8.6/BaseOS/x86_64/os
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CGSL-V6

[...additional repos omitted for brevity...]
</code>
<code># epel.repo
[epel]
name=Extra Packages for Enterprise Linux 8 - $basearch
baseurl=https://mirrors.aliyun.com/epel/8/Everything/$basearch
#metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-8&arch=$basearch&infra=$infra&content=$contentdir
enabled=1
gpgcheck=1
countme=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-8
</code>
<code># ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://download.ceph.com/rpm-pacific/el8/$basearch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-pacific/el8/noarch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://download.ceph.com/rpm-pacific/el8/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
</code>

Configure yum to keep downloaded packages by editing /etc/yum.conf (cachedir sets where the RPMs are cached; keepcache=1 prevents them from being deleted after installation):

<code>[main]
gpgcheck=1
installonly_limit=3
clean_requirements_on_remove=True
best=True
skip_if_unavailable=False
cachedir=/data/yum
keepcache=1
</code>

Install Ceph

<code>yum install ceph -y
</code>

Create Offline Repository

<code>find /data/yum -name "*.rpm" -exec cp {} /mnt \;
createrepo /mnt
cd / && tar -zcvf offline.tar.gz mnt/
</code>

Install Ceph from Offline Package

Extract the offline tarball and install the RPMs without dependency checks.

<code>tar -zxvf offline.tar.gz
cd mnt
rpm -ivh *.rpm --nodeps --force
</code>
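A quick sanity check that the binaries landed on PATH; the snippet is guarded so it degrades gracefully on a machine where Ceph is absent:

```shell
# Confirm the offline install succeeded; Pacific reports a 16.2.x version.
if command -v ceph >/dev/null 2>&1; then
    ceph_version=$(ceph --version)
else
    ceph_version="ceph CLI not found"
fi
echo "$ceph_version"
```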

Deploy Monitor Nodes

Each Ceph cluster requires at least one monitor (mon). The first monitor is created on node1, and additional monitors are then added on node2 and node3.

Add monitor on node1

Generate a unique FSID for the cluster:

<code>uuidgen
</code>
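To avoid copy-paste mistakes, the fsid can be captured in a variable and sanity-checked before it goes into ceph.conf. A sketch, assuming uuidgen (util-linux) or a Linux kernel uuid source is available:

```shell
# Generate the cluster fsid once; fall back to the kernel uuid source
# if uuidgen is unavailable.
if command -v uuidgen >/dev/null 2>&1; then
    fsid=$(uuidgen)
else
    fsid=$(cat /proc/sys/kernel/random/uuid)
fi
echo "fsid = $fsid"
# Sanity-check the 8-4-4-4-12 hex layout before writing it anywhere.
echo "$fsid" | grep -Eiq '^[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$' && echo "fsid looks valid"
```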

Edit /etc/ceph/ceph.conf and insert the FSID and network settings:

<code>vim /etc/ceph/ceph.conf
[global]
fsid=9c079a1f-6fc2-4c59-bd4d-e8bc232d33a4
mon initial members = node1
mon host = 192.168.2.16
public network = 192.168.2.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
osd pool default size = 3
osd pool default min size = 2
osd pool default pg num = 8
osd pool default pgp num = 8
osd crush chooseleaf type = 1
</code>

Create the monitor keyring:

<code>ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
</code>

Create the admin keyring:

<code>ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin \
  --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
</code>

Generate the bootstrap‑OSD keyring:

<code>ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd \
  --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
</code>

Import the admin and bootstrap keyrings into the monitor keyring:

<code>ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
</code>

Set ownership of the monitor keyring:

<code>chown ceph:ceph /tmp/ceph.mon.keyring
</code>

Create the monitor map:

<code>monmaptool --create --add `hostname` 192.168.2.16 --fsid 9c079a1f-6fc2-4c59-bd4d-e8bc232d33a4 /tmp/monmap
</code>
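It is worth printing the monmap back to confirm the fsid and monitor address were recorded correctly; guarded here in case the monitor tools are not installed on the machine running it:

```shell
# Print the monmap that was just created to verify its contents.
if command -v monmaptool >/dev/null 2>&1; then
    monmap_out=$(monmaptool --print /tmp/monmap 2>&1)
else
    monmap_out="monmaptool not found"
fi
echo "$monmap_out"
```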

Create the monitor data directory:

<code>sudo -u ceph mkdir /var/lib/ceph/mon/ceph-`hostname`
</code>

Initialize the monitor on node1:

<code>sudo -u ceph ceph-mon --mkfs -i `hostname` --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
systemctl start ceph-mon@`hostname` && systemctl enable ceph-mon@`hostname`
</code>

Install monitors on the other two nodes

Copy the keyring and configuration files to node2 and node3, then adjust ownership:

<code>scp /tmp/ceph.mon.keyring root@node2:/tmp/ceph.mon.keyring
scp /etc/ceph/* root@node2:/etc/ceph/
scp /var/lib/ceph/bootstrap-osd/ceph.keyring root@node2:/var/lib/ceph/bootstrap-osd/
# repeat for node3
chown ceph:ceph /tmp/ceph.mon.keyring   # on each node
</code>
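The per-node copy steps can also be scripted. The sketch below only prints the commands so they can be reviewed first; pipe its output to sh to actually execute them (node names are the ones used in this example):

```shell
# Emit the copy commands for each peer monitor node instead of running
# them directly; review the output, then pipe it to sh to execute.
copy_cmds=$(
    for node in node2 node3; do
        echo "scp /tmp/ceph.mon.keyring root@$node:/tmp/ceph.mon.keyring"
        echo "scp /etc/ceph/* root@$node:/etc/ceph/"
        echo "scp /var/lib/ceph/bootstrap-osd/ceph.keyring root@$node:/var/lib/ceph/bootstrap-osd/"
    done
)
echo "$copy_cmds"
```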

Retrieve the monmap on each node:

<code>ceph mon getmap -o /tmp/ceph.mon.map
</code>

Initialize the monitor on node2 and node3:

<code>sudo -u ceph ceph-mon --mkfs -i `hostname` --monmap /tmp/ceph.mon.map --keyring /tmp/ceph.mon.keyring
systemctl start ceph-mon@`hostname` && systemctl enable ceph-mon@`hostname`
</code>

Update /etc/ceph/ceph.conf on all nodes to list all monitors, then restart the monitor services:

<code>vim /etc/ceph/ceph.conf
mon initial members = node1,node2,node3
mon host = 192.168.2.16,192.168.2.17,192.168.2.18
systemctl restart ceph-mon@`hostname`
</code>

Remove a monitor

<code>ceph mon remove {mon-id}
</code>

Add OSDs

Ceph provides the ceph-volume utility to prepare disks for OSDs.

Create OSD

On node1, create an OSD on /dev/sdb:

<code>ceph-volume lvm create --data /dev/sdb
</code>

The create command combines two stages, preparation and activation, which can also be run separately:

<code>ceph-volume lvm prepare --data /dev/sdb
ceph-volume lvm list   # view OSD FSID
ceph-volume lvm activate {ID} {FSID}
</code>

Start the OSD services on each node:

<code># node1
systemctl restart ceph-osd@0
systemctl enable ceph-osd@0

# node2
systemctl restart ceph-osd@1
systemctl enable ceph-osd@1

# node3
systemctl restart ceph-osd@2
systemctl enable ceph-osd@2
</code>
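After the services start, the OSD topology can be verified. A guarded sketch, assuming a running cluster with the ceph CLI on PATH; expect one OSD per node, all up and in:

```shell
# Show the CRUSH tree; each host should appear with its OSD underneath.
if command -v ceph >/dev/null 2>&1; then
    osd_tree=$(ceph osd tree 2>&1)
else
    osd_tree="ceph CLI not found"
fi
echo "$osd_tree"
```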

Create MGR

Each node running a monitor should also run a manager daemon.

Create key directory

<code>sudo -u ceph mkdir /var/lib/ceph/mgr/ceph-`hostname -s`
cd /var/lib/ceph/mgr/ceph-`hostname -s`
</code>

Create authentication key

<code>ceph auth get-or-create mgr.`hostname -s` mon 'allow profile mgr' osd 'allow *' mds 'allow *' > keyring
chown ceph:ceph /var/lib/ceph/mgr/ceph-`hostname -s`/keyring
</code>

Start MGR daemon

<code>systemctl enable ceph-mgr@`hostname -s` && systemctl start ceph-mgr@`hostname -s`
# or
ceph-mgr -i `hostname -s`
</code>

Finally, check the Ceph cluster status (in the original example, two OSDs had been added at this point).
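A guarded final check, assuming the ceph CLI is on PATH:

```shell
# Overall cluster health; with the monitors, managers, and OSDs up this
# reports HEALTH_OK (or HEALTH_WARN until enough OSDs have joined).
if command -v ceph >/dev/null 2>&1; then
    status=$(ceph -s 2>&1)
else
    status="ceph CLI not found"
fi
echo "$status"
```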

Tags: Linux, storage, Ceph, offline deployment, ZTE NewStart
Written by Ops Development Stories

Maintained by a like‑minded team, covering both operations and development. Topics span Linux ops, DevOps toolchain, Kubernetes containerization, monitoring, log collection, network security, and Python or Go development. Team members: Qiao Ke, wanger, Dong Ge, Su Xin, Hua Zai, Zheng Ge, Teacher Xia.
