
Master PXE and Kickstart: Automate Mass CentOS Installations

This guide explains the PXE boot process, its required DHCP and TFTP services, and how to combine PXE with Kickstart to perform unattended, large‑scale CentOS installations, including detailed configuration steps for DHCP, HTTP, TFTP, boot files, and a complete Kickstart script.


PXE Overview

PXE (Preboot Execution Environment) is a technology developed by Intel that allows a client workstation to boot an operating system over the network. It works in a client/server model: the client uses DHCP to obtain an IP address and TFTP (or MTFTP) to download a bootloader, kernel, and initial ramdisk into memory, after which the OS installation proceeds. PXE requires a NIC whose firmware supports PXE booting.

PXE Workflow

1. The PXE client broadcasts a DHCP request for an IP address.

2. The DHCP server replies with an IP address plus the location of the PXE boot file: the TFTP server address (next-server) and the boot file name (filename).

3. The client downloads pxelinux.0 from the TFTP server and executes it.

4. pxelinux.0 fetches its configuration from the pxelinux.cfg/ directory over TFTP, trying per-host file names first and falling back to "default".

5. Based on that configuration, the client downloads the kernel (vmlinuz) and the initial ramdisk (initrd.img) via TFTP and boots them.

6. The installer starts, and the installation source can be HTTP, FTP, or NFS.
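While a client walks through these steps, the exchange can be watched from the server side. A minimal sketch, assuming the server's NIC is named ens33 (adjust to your interface):

```shell
# Watch DHCP (UDP 67/68) and TFTP (UDP 69) traffic during a client PXE boot:
tcpdump -n -i ens33 port 67 or port 68 or port 69
```

You should see the client's DHCPDISCOVER/OFFER exchange followed by TFTP read requests for pxelinux.0 and the pxelinux.cfg/ files.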

Kickstart Overview

Kickstart provides an unattended installation method for CentOS. It records all required installation parameters in a ks.cfg file. During installation the installer first looks for this file; if found, it uses the predefined values, otherwise manual input is required. A complete ks.cfg can automate hostname configuration, user creation, package selection, and post‑installation steps.
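A syntactically broken ks.cfg silently drops the installer back to interactive prompts, so it is worth validating the file before publishing it. A quick check, assuming pykickstart is available in your repositories:

```shell
# pykickstart ships ksvalidator, which parses a Kickstart file and reports errors:
yum install -y pykickstart
ksvalidator /var/www/html/ks74.cfg
```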

Requirements for PXE/Kickstart

Network card that supports PXE boot.

DHCP server that supplies IP address and points to the TFTP server.

TFTP server that provides the bootloader, kernel, and initrd.

Installation source (NFS/FTP/HTTP) containing the OS image.

Environment

OS: CentOS Linux release 7.4.1708 (Core)

Server IP: 192.168.221.129

Installation ISO: CentOS-7-x86_64-DVD-1708.iso

Packages: dhcp, tftp-server, tftp, httpd, syslinux

Preparation

<code># cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)
# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
# setenforce 0
# getenforce
Permissive
# systemctl stop firewalld.service
# systemctl disable firewalld.service
# yum install -y httpd dhcp tftp-server tftp syslinux
</code>
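The steps below copy boot files from /media/cdrom and bind-mount that directory into the web root, so the installation ISO must be mounted there first. A minimal sketch, assuming the ISO was copied to /root (adjust the path to wherever yours lives):

```shell
# Mount the CentOS 7.4 installation ISO read-only at /media/cdrom:
mkdir -p /media/cdrom
mount -o loop,ro /root/CentOS-7-x86_64-DVD-1708.iso /media/cdrom
```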

Configuration

DHCP configuration

<code># vim /etc/dhcp/dhcpd.conf
option domain-name "pxe.com";
option domain-name-servers ns1.pxe.com, pxe.com;
default-lease-time 600;
max-lease-time 7200;
log-facility local7;
subnet 192.168.221.0 netmask 255.255.255.0 {
    option routers 192.168.221.2;
    option subnet-mask 255.255.255.0;
    option domain-name-servers 192.168.221.129;
    range dynamic-bootp 192.168.221.130 192.168.221.230;
    default-lease-time 21600;
    max-lease-time 43200;
    next-server 192.168.221.129;
    filename "pxelinux.0";
}
# systemctl start dhcpd.service
# systemctl enable dhcpd.service
# netstat -tunpl | grep dhcpd
</code>
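dhcpd refuses to start on a malformed config (for example, a range whose end address falls outside the declared subnet), so it pays to syntax-check before restarting:

```shell
# Test the DHCP configuration without starting the daemon:
dhcpd -t -cf /etc/dhcp/dhcpd.conf
```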

Httpd service configuration

<code># mkdir -pv /var/www/html/centos7
# mount --bind /media/cdrom/ /var/www/html/centos7/
# systemctl start httpd.service
# systemctl enable httpd.service
# netstat -tunpl | grep httpd
</code>
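A quick sanity check that Apache is answering on the server address before pointing installers at it (the default welcome page may answer with 403 rather than 200; either confirms the service is up):

```shell
# Confirm httpd responds on the server IP:
curl -sI http://192.168.221.129/ | head -n 1
```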

TFTP service configuration

On CentOS 7 the tftp-server package is socket-activated through systemd (tftp.socket); the xinetd file below only takes effect if xinetd itself is installed, but it is shown here for completeness.

<code># vim /etc/xinetd.d/tftp
service tftp
{
    socket_type = dgram
    protocol = udp
    wait = yes
    user = root
    server = /usr/sbin/in.tftpd
    server_args = -s /var/lib/tftpboot
    disable = no
    per_source = 11
    cps = 100 2
    flags = IPv4
}
# systemctl start tftp.socket
# systemctl enable tftp.socket
# systemctl start tftp.service
# systemctl enable tftp.service
# netstat -tunpl | grep 69
</code>

Copy boot files

<code># cp /usr/share/syslinux/pxelinux.0 /var/lib/tftpboot/
# mkdir /var/lib/tftpboot/centos7.4
# cp /media/cdrom/images/pxeboot/{vmlinuz,initrd.img} /var/lib/tftpboot/centos7.4/
# cp /media/cdrom/isolinux/{vesamenu.c32,boot.msg,splash.png} /var/lib/tftpboot/
# cp /usr/share/syslinux/{chain.c32,mboot.c32,menu.c32,memdisk} /var/lib/tftpboot/
# mkdir /var/lib/tftpboot/pxelinux.cfg/
# cp /media/cdrom/isolinux/isolinux.cfg /var/lib/tftpboot/pxelinux.cfg/default
</code>
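With the boot files in place, the TFTP service can be exercised end to end from any host with a tftp client, pulling the same file a PXE client would request first:

```shell
# Fetch pxelinux.0 through TFTP to confirm the service answers on UDP/69:
cd /tmp
tftp 192.168.221.129 -c get pxelinux.0
ls -l /tmp/pxelinux.0
```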

Edit pxelinux.cfg/default

<code># vim /var/lib/tftpboot/pxelinux.cfg/default
default vesamenu.c32
timeout 60
display boot.msg
menu clear
menu background splash.png
menu title HQHY PXE Boot Menu: Install CentOS 7.4
menu vshift 8
menu rows 18
menu margin 8
label linux1
    menu label ^Install CentOS 7.4
    menu default
    kernel centos7.4/vmlinuz net.ifnames=0 biosdevname=0
    append initrd=centos7.4/initrd.img inst.repo=http://192.168.221.129/centos7.4 inst.ks=http://192.168.221.129/ks74.cfg
</code>
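pxelinux does not only read "default": before falling back to it, the loader searches pxelinux.cfg/ for per-host files named after the client's UUID, then its MAC address (prefixed with the hardware type "01-" and dash-separated), then successively shorter prefixes of its hex IP. This allows a different menu per machine. A hypothetical example for a client with MAC 00:0c:29:aa:bb:cc:

```shell
# Give one specific client its own boot menu (MAC address is an example):
cp /var/lib/tftpboot/pxelinux.cfg/default \
   /var/lib/tftpboot/pxelinux.cfg/01-00-0c-29-aa-bb-cc
```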

Create Kickstart file ks74.cfg

<code># vim /var/www/html/ks74.cfg
install
url --url="http://192.168.221.129/centos7.4"
lang en_US.UTF-8
keyboard us
skipx
text
reboot
rootpw --iscrypted $1$PChLniYo$DG6raNbcIO5AIPRiP/js20
user --name=qihoo --groups=wheel --password=$6$IK2rvMJD$TGIckBiHXQk8U9vAlfKy4KnxEgB0uQf5TR2aQVM4PhdmivzyhxWXSNxU2wo0wTdnFUOtGq6b59tjQmrNcglnm0 --iscrypted --uid=1111
timezone Asia/Shanghai
firewall --enabled --port=22:tcp
authconfig --enableshadow --enablemd5
selinux --permissive
clearpart --all --initlabel
%include /tmp/part
%packages
@core
@base
@mail-server
-... (unwanted packages omitted)
%end
%pre
#!/bin/bash
... (script content)
%end
</code>
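The %pre script body is omitted above; as a hypothetical illustration of why it pairs with "%include /tmp/part", a %pre script can size partitions at install time and write the result to /tmp/part, which the installer then includes. The disk name sda and the size thresholds below are assumptions:

```shell
#!/bin/bash
# Hypothetical %pre script: choose a partition layout based on disk size
# and write it to /tmp/part for the "%include /tmp/part" line in ks74.cfg.
disk=sda
# /sys/block/<disk>/size counts 512-byte sectors; fall back to 0 if absent.
sectors=$(cat /sys/block/$disk/size 2>/dev/null || echo 0)
size_gb=$(( sectors * 512 / 1024 / 1024 / 1024 ))
if [ "$size_gb" -ge 200 ]; then
    cat > /tmp/part <<EOF
part /boot --fstype=xfs --size=1024
part swap --size=8192
part / --fstype=xfs --size=1 --grow
EOF
else
    cat > /tmp/part <<EOF
part /boot --fstype=xfs --size=512
part swap --size=2048
part / --fstype=xfs --size=1 --grow
EOF
fi
```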
Written by 360 Zhihui Cloud Developer

360 Zhihui Cloud is an enterprise open service platform that aims to "aggregate data value and empower an intelligent future," leveraging 360's extensive product and technology resources to deliver platform services to customers.