
How to Perform Offline Ceph Octopus Deployment with cephadm on Ubuntu

This guide walks through creating an offline installation package, caching required Debian packages and Docker images, installing Docker and cephadm, bootstrapping a Ceph cluster, and deploying OSD, MDS, and RGW services on Ubuntu nodes without internet access.


Creating Offline Installation Package

First cache required Debian packages and Docker images in an online environment.

Install docker-ce

<code>curl -sSL https://get.daocloud.io/docker | sh
systemctl daemon-reload
systemctl restart docker
systemctl enable docker
</code>

Download cephadm

Use curl to fetch the latest standalone cephadm script; if the network connection is unreliable, download the file directly from the GitHub repository instead.

Edit /etc/resolv.conf and set the nameserver to 114.114.114.114.

<code>curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
chmod +x cephadm
</code>

Install cephadm

<code>./cephadm add-repo --release octopus
./cephadm install
</code>

Bootstrap new cluster

Create the /etc/ceph directory and run:

<code>mkdir -p /etc/ceph
./cephadm bootstrap --mon-ip 192.168.10.2
</code>

Enable ceph CLI

<code>cephadm add-repo --release octopus
cephadm install ceph-common
</code>

Deploy OSD

A device is considered available if it meets all of the following conditions: no partitions, no LVM state, not a system disk, no filesystem, not an existing Ceph BlueStore OSD, and larger than 5 GB. Leftover partitions or LVM metadata can be wiped with ceph orch device zap.

Create OSD on specific host/device

<code>ceph orch daemon add osd node1:/dev/sdb
</code>

Export Docker Images

Save required images to tar files for the offline package:

<code>docker save -o ceph.tar quay.io/ceph/ceph:v15
docker save -o prometheus.tar quay.io/prometheus/prometheus:v2.18.1
docker save -o grafana.tar quay.io/ceph/ceph-grafana:6.7.4
docker save -o alertmanager.tar quay.io/prometheus/alertmanager:v0.20.0
docker save -o node-exporter.tar quay.io/prometheus/node-exporter:v0.18.1
</code>
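When the image list changes between releases, the five save commands can be generated with a small loop. This sketch derives each tar name from the image reference and only prints the commands so they can be reviewed before execution:

```shell
# tar_name strips the tag and registry path from an image reference,
# e.g. quay.io/ceph/ceph:v15 -> ceph
tar_name() { basename "${1%:*}"; }

# Image tags as listed above for the Octopus release.
for img in \
    quay.io/ceph/ceph:v15 \
    quay.io/prometheus/prometheus:v2.18.1 \
    quay.io/ceph/ceph-grafana:6.7.4 \
    quay.io/prometheus/alertmanager:v0.20.0 \
    quay.io/prometheus/node-exporter:v0.18.1
do
    echo "docker save -o $(tar_name "$img").tar $img"
done
```

Once the printed commands look right, pipe the loop's output to sh to run them.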

Export Deb Packages

Copy the cached deb files from /var/cache/apt/archives to a new folder and create a Packages.gz index.

<code>apt-get install dpkg-dev -y
mkdir /offlinePackage
cp -r /var/cache/apt/archives /offlinePackage
dpkg-scanpackages /offlinePackage/ /dev/null | gzip > /offlinePackage/Packages.gz
tar zcvf offlinePackage.tar.gz /offlinePackage/
</code>
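Before shipping the archive, it is worth sanity-checking the index: every cached .deb should appear as one Package: stanza in Packages.gz. A small helper (a sketch; the path is the one created above):

```shell
# count_stanzas prints the number of Package: stanzas in a Packages.gz index.
# After building the index, compare
#   count_stanzas /offlinePackage/Packages.gz
# with the number of .deb files under /offlinePackage/archives.
count_stanzas() { zcat "$1" | grep -c '^Package:'; }
```

If the two counts differ, re-run dpkg-scanpackages before packing the tarball.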

Modify cephadm Script

Change the pull command to use local images instead of pulling from the internet.
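The exact edit depends on the cephadm version. In the Octopus-era script the pull happens in a function named _pull_image, so one common approach is to insert an early return there, since the images were already loaded with docker load. A sketch (verify the function name in your copy of the script first):

```shell
# patch_cephadm backs up the script, then inserts an early `return` into
# _pull_image so cephadm skips `docker pull` and uses local images.
# (_pull_image is the Octopus-era function name; confirm it in your copy.)
patch_cephadm() {
    cp "$1" "$1.bak"
    sed -i 's/^\(def _pull_image.*\)$/\1\n    return  # offline deployment: image already loaded/' "$1"
}
# usage: patch_cephadm ./cephadm
```

Keep the .bak copy so the original behavior can be restored for online clusters.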

Start Offline Deployment

Prerequisites

cephadm supports Octopus v15.2.0 and later. It requires a container runtime (Podman or Docker), Python 3, and time synchronization (for example chrony or NTP).

Basic Configuration

Run the following steps on all three Ubuntu 20.04 nodes.

Configure hosts resolution

<code>cat >> /etc/hosts <<EOF
192.168.10.2 node1
192.168.10.3 node2
192.168.10.4 node3
EOF
</code>

Set hostnames

<code>hostnamectl set-hostname node1   # run on node1
hostnamectl set-hostname node2   # run on node2
hostnamectl set-hostname node3   # run on node3
</code>

Configure local apt source

<code>tar zxvf offlinePackage.tar.gz -C /
mv /etc/apt/sources.list /etc/apt/sources.list.bak
echo 'deb file:/// offlinePackage/' > /etc/apt/sources.list
apt update
</code>

Install Docker from offline deb packages

<code>cd /offlinePackage/archives
dpkg -i containerd.io_1.4.11-1_amd64.deb
dpkg -i docker-ce-cli_5...amd64.deb
dpkg -i docker-ce-rootless-extras_5...amd64.deb
dpkg -i docker-ce_5...amd64.deb
systemctl start docker
systemctl enable docker
</code>

Load Docker images

<code>docker load -i node-exporter.tar
docker load -i alertmanager.tar
docker load -i prometheus.tar
docker load -i ceph.tar
docker load -i grafana.tar
</code>

Install cephadm

<code>chmod +x cephadm
cp cephadm /usr/bin/
apt install cephadm --allow-unauthenticated
# if errors, run apt --fix-broken install
</code>

Bootstrap new cluster

<code>cephadm bootstrap --mon-ip 192.168.10.2
</code>

This creates the monitor and manager daemons, generates the cluster SSH key pair, and writes ceph.conf, the admin keyring, and the public key to /etc/ceph.

Add Hosts to Cluster

<code>ssh-copy-id -f -i /etc/ceph/ceph.pub node2
ssh-copy-id -f -i /etc/ceph/ceph.pub node3
ceph orch host add node2
ceph orch host add node3
</code>
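With more than a couple of nodes, the key distribution and host registration can be scripted. This sketch only prints the commands so they can be reviewed first (node names are the ones used in this guide):

```shell
# Print the ssh-copy-id / host-add commands for each additional node.
add_host_cmds() {
    for h in "$@"; do
        printf 'ssh-copy-id -f -i /etc/ceph/ceph.pub %s\n' "$h"
        printf 'ceph orch host add %s\n' "$h"
    done
}
add_host_cmds node2 node3
```

To execute instead of print, pipe the output to sh: add_host_cmds node2 node3 | sh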

Adding hosts automatically expands monitor and manager services.

Deploy OSD

List storage devices:

<code>ceph orch device ls
</code>

Add OSDs on each node:

<code>ceph orch daemon add osd node1:/dev/sdb
ceph orch daemon add osd node1:/dev/sdc
# repeat for the remaining devices and nodes
</code>
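When every node carries the same disks, the per-device commands can be generated with a nested loop. The host names and device paths below are this guide's examples; adjust them to your hardware before running:

```shell
# Print one `ceph orch daemon add osd` command per node/device pair.
osd_cmds() {
    for node in node1 node2 node3; do
        for dev in /dev/sdb /dev/sdc; do
            printf 'ceph orch daemon add osd %s:%s\n' "$node" "$dev"
        done
    done
}
osd_cmds
```

Review the printed commands, then execute them with: osd_cmds | sh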

Deploy MDS

<code>ceph orch apply mds cephfs --placement="3 node1 node2 node3"
</code>

Create pools for CephFS and then the filesystem:

<code>ceph osd pool create cephfs_data 64 64
ceph osd pool create cephfs_metadata 64 64
ceph fs new cephfs cephfs_metadata cephfs_data
</code>
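The choice of 64 placement groups is not arbitrary. A common rule of thumb targets roughly 100 PGs per OSD across all pools, divided by the replica count and the number of pools, then rounded down to a power of two. A sketch assuming six OSDs (two disks on each of the three nodes above), three replicas, and the two CephFS pools:

```shell
# Approximate per-pool PG count:
#   (osds * 100) / (replicas * pools), rounded down to a power of two.
osds=6; replicas=3; pools=2
raw=$(( osds * 100 / (replicas * pools) ))   # 100
pg=1
while [ $(( pg * 2 )) -le "$raw" ]; do pg=$(( pg * 2 )); done
echo "$pg"   # 64
```

This matches the 64 used in the pool-create commands above; recompute it if your OSD count differs.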

Deploy RGW

<code>ceph orch apply rgw myorg cn-east-1 --placement="3 node1 node2 node3"
# or use radosgw-admin to create realm, zonegroup, zone, and period
</code>
Written by

Ops Development Stories

Maintained by a like‑minded team, covering both operations and development. Topics span Linux ops, DevOps toolchain, Kubernetes containerization, monitoring, log collection, network security, and Python or Go development. Team members: Qiao Ke, wanger, Dong Ge, Su Xin, Hua Zai, Zheng Ge, Teacher Xia.
