
Step-by-Step Manual OpenStack Deployment on Rocky: From Architecture to Services

This guide walks through a full manual installation of OpenStack Rocky on a two‑node (controller + compute) environment, covering architecture diagrams, network design choices, core services configuration (DNS, NTP, MySQL, RabbitMQ, Memcached, Etcd), and detailed setup of Keystone, Glance, Nova, Neutron, Horizon and Cinder with all required commands, config files and verification steps.



Table of Contents

OpenStack Architecture

Conceptual architecture

Logical architecture

Network Selection

Networking Option 1: Provider networks

Networking Option 2: Self‑service networks

Two‑Node Deployment Topology

Basic Services

DNS (hosts file)

NTP

YUM repository

MySQL

RabbitMQ

Memcached

Etcd

OpenStack Projects

Keystone (Identity)

Glance (Image)

Nova (Compute)

Neutron (Networking)

Horizon (Dashboard)

Cinder (Block Storage)

Preface

OpenStack automated deployment tools such as Kolla and TripleO are widely used, but this article records a fully manual deployment to deepen understanding of OpenStack’s software architecture.

The Rocky release is chosen for its stability.

A two‑node (controller + compute) architecture is used for easier management.

OpenStack Architecture

Official documentation: https://docs.openstack.org/install-guide/

Conceptual Architecture

[Figure: OpenStack conceptual architecture]

Logical Architecture

[Figure: OpenStack logical architecture]

Network Selection

Networking Option 1: Provider Networks

[Figure: Provider networks]

Provider networks bridge virtual networks directly onto the physical provider network (L2/L3 switches and routers). The model is simpler and performs better, but Neutron does not run its L3 routing services, so features that depend on them, such as LBaaS and FWaaS, are unavailable.

Networking Option 2: Self‑Service Networks

[Figure: Self-service networks]

Self‑service networks provide a complete L2/L3 virtualized solution; users can create virtual networks without knowing the underlying physical topology. Neutron offers multi‑tenant isolation and multi‑plane networking.

[Figure: Self-service network details]

Two‑Node Deployment Topology

[Figure: Two-node deployment topology]

Controller

ens160: 172.18.22.231/24

ens192: 10.0.0.1/24

ens224: provider network NIC (added to br-provider)

sda: system disk

sdb: Cinder storage disk

Compute

ens160: 172.18.22.232/24

ens192: 10.0.0.2/24

sda: system disk

NOTE: All passwords in the guide are replaced with the placeholder fanguiju.

Basic Services

DNS (hosts file)

Using /etc/hosts instead of a DNS server.

# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.18.22.231 controller
172.18.22.232 compute

NTP Time Synchronization

Controller configuration:

# cat /etc/chrony.conf | grep -v ^# | grep -v ^$
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 172.18.22.0/24
logdir /var/log/chrony

# systemctl enable chronyd.service
# systemctl start chronyd.service
# chronyc sources

Compute configuration (uses controller as NTP source):

# cat /etc/chrony.conf | grep -v ^# | grep -v ^$
server controller iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony

# systemctl enable chronyd.service
# systemctl start chronyd.service
# chronyc sources

YUM Repository

$ yum install centos-release-openstack-rocky -y
$ yum upgrade -y
$ yum install python-openstackclient -y
$ yum install openstack-selinux -y

MySQL (MariaDB)

$ yum install mariadb mariadb-server python2-PyMySQL -y
# cat /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 172.18.22.231
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

# systemctl enable mariadb.service
# systemctl start mariadb.service
# mysql_secure_installation

Problem: OpenStack services respond slowly and MariaDB reports “Too many connections”.

Solution: Raise max_connections and shorten wait_timeout so idle connections are reclaimed quickly.

# cat /etc/my.cnf | grep -v ^$ | grep -v ^#
[client-server]
[mysqld]
symbolic-links=0
max_connections=1000
wait_timeout=5
#interactive_timeout = 600
!includedir /etc/my.cnf.d
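
To confirm the new limits are in effect and see how close the services come to them, the live values can be checked from the MariaDB client (a quick check, assuming the root credentials set during mysql_secure_installation):

# mysql -u root -p
MariaDB [(none)]> SHOW VARIABLES LIKE 'max_connections';
MariaDB [(none)]> SHOW VARIABLES LIKE 'wait_timeout';
MariaDB [(none)]> SHOW STATUS LIKE 'Threads_connected';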

RabbitMQ Message Queue

$ yum install rabbitmq-server -y
# systemctl enable rabbitmq-server.service
# systemctl start rabbitmq-server.service
# rabbitmqctl add_user openstack fanguiju
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"

Problem: "Unable to connect to node rabbit@localhost: nodedown" due to hostname mismatch.

Solution: Ensure hostnames resolve identically on both nodes and restart the OS to refresh the Erlang cookie.
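
A quick way to confirm that the node name RabbitMQ uses matches what the hostname resolves to (a minimal sketch; the exact status output differs slightly between RabbitMQ versions):

# hostname                  # should print the short name used in /etc/hosts, e.g. controller
# getent hosts controller   # should resolve to 172.18.22.231, not a loopback address
# rabbitmqctl status        # the first lines report the node name, e.g. rabbit@controller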

Memcached

The Identity service caches tokens in Memcached. For production, enable firewalling, authentication and encryption.
$ yum install memcached python-memcached -y
# cat /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
#OPTIONS="-l 127.0.0.1,::1"
OPTIONS="-l 127.0.0.1,::1,controller"
# systemctl enable memcached.service
# systemctl start memcached.service
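
To verify that Memcached really listens on the controller address (and not only on loopback), check the listening sockets and query the stats interface; a minimal sketch using memcached-tool, which ships with the memcached package:

# ss -tnlp | grep 11211
# memcached-tool controller:11211 stats | head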

Etcd

Etcd provides a reliable distributed key‑value store for OpenStack services.
$ yum install etcd -y
# cat /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://172.18.22.231:2380"
ETCD_LISTEN_CLIENT_URLS="http://172.18.22.231:2379"
ETCD_NAME="controller"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.18.22.231:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://172.18.22.231:2379"
ETCD_INITIAL_CLUSTER="controller=http://172.18.22.231:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
# systemctl enable etcd
# systemctl start etcd
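
A quick health check of the single-member cluster (a sketch; the etcd packaged for Rocky defaults to the v2 etcdctl API, hence cluster-health):

# etcdctl --endpoints=http://172.18.22.231:2379 cluster-health
# etcdctl --endpoints=http://172.18.22.231:2379 member list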

OpenStack Projects

Keystone (Controller)

$ yum install openstack-keystone httpd mod_wsgi -y
# /etc/keystone/keystone.conf
[database]
connection = mysql+pymysql://keystone:fanguiju@controller/keystone

[token]
provider = fernet
# CREATE DATABASE keystone;
# GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'fanguiju';
# GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'fanguiju';
# su -s /bin/sh -c "keystone-manage db_sync" keystone
# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
# keystone-manage bootstrap --bootstrap-password fanguiju \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne

Configure Apache to serve Keystone:

# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
# cat /usr/share/keystone/wsgi-keystone.conf (excerpt)
Listen 5000
<VirtualHost *:5000>
  WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
  WSGIProcessGroup keystone-public
  WSGIScriptAlias / /usr/bin/keystone-wsgi-public
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
  ErrorLog /var/log/httpd/keystone.log
  CustomLog /var/log/httpd/keystone_access.log combined
</VirtualHost>
# systemctl enable httpd.service
# systemctl start httpd.service
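
The openstack CLI commands below need admin credentials matching the keystone-manage bootstrap call above. A minimal admin-openrc sketch (the file name is only a convention; source it before running the commands):

# cat admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=fanguiju
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
# source admin-openrc
# openstack token issue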

Create a demo project and user (optional):

# openstack project create --domain default --description "Service Project" service
# openstack project create --domain default --description "Demo Project" myproject
# openstack user create --domain default --password-prompt myuser
# openstack role create myrole
# openstack role add --project myproject --user myuser myrole

Glance (Controller)

$ yum install openstack-glance -y
# /etc/glance/glance-api.conf (excerpt)
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

[database]
connection = mysql+pymysql://glance:fanguiju@controller/glance

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = fanguiju
# CREATE DATABASE glance;
# GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'fanguiju';
# GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'fanguiju';
# su -s /bin/sh -c "glance-manage db_sync" glance
# systemctl enable openstack-glance-api.service openstack-glance-registry.service
# systemctl start openstack-glance-api.service openstack-glance-registry.service
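
The glance service user, service entry, and image endpoints referenced in keystone_authtoken must exist in Keystone before glance-api can authenticate requests; if they were not created earlier, a sketch of the standard registration commands from the install guide:

$ openstack user create --domain default --password-prompt glance
$ openstack role add --project service --user glance admin
$ openstack service create --name glance --description "OpenStack Image" image
$ openstack endpoint create --region RegionOne image public http://controller:9292
$ openstack endpoint create --region RegionOne image internal http://controller:9292
$ openstack endpoint create --region RegionOne image admin http://controller:9292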

Verify image upload:

$ wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
$ openstack image create "cirros" \
  --file cirros-0.4.0-x86_64-disk.img \
  --disk-format qcow2 --container-format bare --public
$ openstack image list

Nova (Controller)

$ yum install openstack-nova-api openstack-nova-conductor \
  openstack-nova-console openstack-nova-novncproxy \
  openstack-nova-scheduler openstack-nova-placement-api -y
# /etc/nova/nova.conf (excerpt)
[DEFAULT]
my_ip = 172.18.22.231
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:fanguiju@controller
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api_database]
connection = mysql+pymysql://nova:fanguiju@controller/nova_api

[database]
connection = mysql+pymysql://nova:fanguiju@controller/nova

[placement_database]
connection = mysql+pymysql://placement:fanguiju@controller/placement

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = fanguiju

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = fanguiju
# CREATE DATABASE nova_api;
# CREATE DATABASE nova;
# CREATE DATABASE nova_cell0;
# CREATE DATABASE placement;
# GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'fanguiju';
# GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'fanguiju';
# GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'fanguiju';
# GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'fanguiju';
# GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'fanguiju';
# GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'fanguiju';
# GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'fanguiju';
# GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'fanguiju';
# su -s /bin/sh -c "nova-manage api_db sync" nova
# su -s /bin/sh -c "nova-manage db sync" nova
# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
# su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
# systemctl enable openstack-nova-api.service \
  openstack-nova-consoleauth openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service \
  openstack-nova-consoleauth openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
# openstack compute service list
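
The nova and placement service users and endpoints referenced in the configuration are assumed to already exist in Keystone; if not, a sketch of the standard registrations (compute API on 8774, placement API on 8778, as in the official guide):

$ openstack user create --domain default --password-prompt nova
$ openstack role add --project service --user nova admin
$ openstack service create --name nova --description "OpenStack Compute" compute
$ openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
$ openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
$ openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
$ openstack user create --domain default --password-prompt placement
$ openstack role add --project service --user placement admin
$ openstack service create --name placement --description "Placement API" placement
$ openstack endpoint create --region RegionOne placement public http://controller:8778
$ openstack endpoint create --region RegionOne placement internal http://controller:8778
$ openstack endpoint create --region RegionOne placement admin http://controller:8778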

Nova (Compute)

$ yum install openstack-nova-compute -y
# /etc/nova/nova.conf (excerpt for compute)
[DEFAULT]
my_ip = 172.18.22.232
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:fanguiju@controller
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver = libvirt.LibvirtDriver
instances_path = /var/lib/nova/instances

[api_database]
connection = mysql+pymysql://nova:fanguiju@controller/nova_api

[database]
connection = mysql+pymysql://nova:fanguiju@controller/nova

[placement_database]
connection = mysql+pymysql://placement:fanguiju@controller/placement

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = fanguiju

[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = fanguiju

[libvirt]
virt_type = qemu
# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service
# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

Problem: The compute node cannot reach RabbitMQ on the controller (port 5672) because of the controller's firewall.

Solution: Open the RabbitMQ-related ports on the controller (or, in a lab environment, simply disable firewalld as done below):

# firewall-cmd --zone=public --permanent --add-port=4369/tcp && \
  firewall-cmd --zone=public --permanent --add-port=25672/tcp && \
  firewall-cmd --zone=public --permanent --add-port=5671-5672/tcp && \
  firewall-cmd --zone=public --permanent --add-port=15672/tcp && \
  firewall-cmd --zone=public --permanent --add-port=61613-61614/tcp && \
  firewall-cmd --zone=public --permanent --add-port=1883/tcp && \
  firewall-cmd --zone=public --permanent --add-port=8883/tcp
# firewall-cmd --reload
# systemctl stop firewalld && systemctl disable firewalld

Neutron Open vSwitch (Controller)

$ yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch -y
# /etc/neutron/neutron.conf (excerpt)
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:fanguiju@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[database]
connection = mysql+pymysql://neutron:fanguiju@controller/neutron

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = fanguiju

[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = fanguiju

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
# /etc/neutron/plugins/ml2/ml2_conf.ini (excerpt)
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
extension_drivers = port_security
mechanism_drivers = openvswitch,l2population

[securitygroup]
enable_ipset = true

[ml2_type_vxlan]
vni_ranges = 1:1000
# /etc/neutron/plugins/ml2/openvswitch_agent.ini (excerpt)
[ovs]
bridge_mappings = provider:br-provider
local_ip = 10.0.0.1

[agent]
tunnel_types = vxlan
l2_population = True

[securitygroup]
firewall_driver = iptables_hybrid
# /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = openvswitch
external_network_bridge =
# /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = openvswitch
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
# /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = fanguiju
# /etc/nova/nova.conf (add Neutron section)
[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = fanguiju
service_metadata_proxy = true
metadata_proxy_shared_secret = fanguiju
# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
# systemctl enable openvswitch && systemctl start openvswitch
# ovs-vsctl add-br br-provider && ovs-vsctl add-port br-provider ens224
# CREATE DATABASE neutron;
# GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'fanguiju';
# GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'fanguiju';
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
# systemctl enable neutron-server.service neutron-openvswitch-agent.service \
  neutron-dhcp-agent.service neutron-metadata-agent.service
# systemctl start neutron-server.service neutron-openvswitch-agent.service \
  neutron-dhcp-agent.service neutron-metadata-agent.service
# systemctl enable neutron-l3-agent.service && systemctl start neutron-l3-agent.service
# openstack network agent list
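
As with the other services, the neutron user and the network endpoints (port 9696) referenced in the configuration are assumed to be registered in Keystone; a sketch of the standard commands if they are missing:

$ openstack user create --domain default --password-prompt neutron
$ openstack role add --project service --user neutron admin
$ openstack service create --name neutron --description "OpenStack Networking" network
$ openstack endpoint create --region RegionOne network public http://controller:9696
$ openstack endpoint create --region RegionOne network internal http://controller:9696
$ openstack endpoint create --region RegionOne network admin http://controller:9696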

Horizon (Controller)

$ yum install openstack-dashboard -y
# /etc/openstack-dashboard/local_settings (excerpt)
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*']
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {"identity": 3, "image": 2, "volume": 2}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': True,
    'enable_quotas': True,
    'enable_ipv6': True,
    'enable_distributed_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_ha_router': False,
    'enable_fip_topology_check': True,
    'supported_vnic_types': ['*'],
    'physical_networks': [],
}
# /etc/httpd/conf.d/openstack-dashboard.conf (excerpt)
WSGIDaemonProcess dashboard
WSGIProcessGroup dashboard
WSGISocketPrefix run/wsgi
WSGIApplicationGroup %{GLOBAL}
WSGIScriptAlias /dashboard /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
Alias /dashboard/static /usr/share/openstack-dashboard/static
<Directory /usr/share/openstack-dashboard/openstack_dashboard/wsgi>
  Options All
  AllowOverride All
  Require all granted
</Directory>
<Directory /usr/share/openstack-dashboard/static>
  Options All
  AllowOverride All
  Require all granted
</Directory>
# systemctl restart httpd.service memcached.service
# Access http://controller/dashboard in a browser.

Cinder (Controller)

# Prepare LVM backend
$ yum install lvm2 device-mapper-persistent-data -y
# /etc/lvm/lvm.conf (filter)
filter = [ "a/sdb/", "r/.*/" ]
# systemctl enable lvm2-lvmetad.service && systemctl start lvm2-lvmetad.service
# pvcreate /dev/sdb && vgcreate cinder-volumes /dev/sdb
$ openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
$ openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
$ openstack user create --domain default --password-prompt cinder
$ openstack role add --project service --user cinder admin
$ openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
$ openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
$ openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
$ openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
$ openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
$ openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
$ yum install openstack-cinder targetcli python-keystone -y
# /etc/cinder/cinder.conf (excerpt)
[DEFAULT]
my_ip = 172.18.22.231
enabled_backends = lvm
auth_strategy = keystone
transport_url = rabbit://openstack:fanguiju@controller
glance_api_servers = http://controller:9292

[database]
connection = mysql+pymysql://cinder:fanguiju@controller/cinder

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = fanguiju

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
# CREATE DATABASE cinder;
# GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'fanguiju';
# GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'fanguiju';
# su -s /bin/sh -c "cinder-manage db sync" cinder
# systemctl restart openstack-nova-api.service
# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
# systemctl enable openstack-cinder-volume.service target.service
# systemctl start openstack-cinder-volume.service target.service
# openstack volume service list
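
The restart of openstack-nova-api above normally goes together with pointing Nova at the block storage region; a hedged sketch of that addition and a quick end-to-end test:

# /etc/nova/nova.conf (add on the controller)
[cinder]
os_region_name = RegionOne

$ openstack volume create --size 1 test-volume
$ openstack volume list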

Completion

The minimal manual OpenStack Rocky deployment is now complete. You can create a boot‑from‑image instance, attach volumes, and explore the dashboard.

Tags: Linux, Networking, OpenStack, Manual Deployment, Rocky
Written by AI Cyberspace: AI, big data, cloud computing, and networking.