~ / distros / kickstart-rocky-linux-at-scale
Distros

How to Configure Kickstart for Automated Rocky Linux Installations at Scale

A production-grade walkthrough of Kickstart automation for Rocky Linux 9 and 10: from Anaconda internals and LVM partition layouts to %pre scripting, %post hardening, PXE delivery, scalable deployment patterns, and what changes when you move to Rocky Linux 10.

date March 2026
read 45 min

A single character typo in a clearpart directive once wiped the wrong disk on every machine in a 60-node rack before anyone caught it. The install script ran perfectly — it just ran on the wrong device, because a device name assumption that held on the test VM did not hold on the production hardware. The machines finished installing. They came up clean. The data that had been on those disks was gone.

That is the world Kickstart lives in: powerful enough to provision thousands of servers without human input, precise enough that a single misplaced flag causes silent, irreversible damage at scale. Done correctly, it is one of the most elegant tools in the enterprise Linux administrator's kit. Done carelessly, it is a force multiplier for mistakes.

This guide is about doing it correctly. By the end, you will understand not just what each directive does but why each decision is made the way it is — so that when your environment differs from the examples here, you can reason about the right choice rather than copy a template and hope.

Rocky Linux, the community-driven, binary-compatible downstream rebuild of Red Hat Enterprise Linux created in the wake of CentOS Stream's repositioning, inherits the full Kickstart ecosystem. As of 2026, Rocky Linux 9.7 ("Blue Onyx") is the current Rocky Linux 9 release, supported through May 2032, and Rocky Linux 10.1 ("Red Quartz") is the current Rocky Linux 10 release, supported through May 2035. Kickstart files work not only on Rocky Linux but also on CentOS Stream, Fedora, and many other distributions — the operational knowledge here transfers across the entire RHEL family.

Version Note

The examples and directives in this guide target Rocky Linux 9, which remains the primary enterprise deployment target. Rocky Linux 10 introduces important Kickstart-relevant changes — including root account behavior, architecture requirements, and password hashing defaults — covered in the dedicated Rocky Linux 10 Migration Notes section below.

Read This First: What You Are Building Toward

Every decision explained in this guide corresponds to a line in the production skeleton at the end. If you want to jump straight to the complete file, go to Quick Reference: The Minimal Production Kickstart Skeleton. Then come back here to understand why each line is written the way it is. The skeleton without the reasoning is a template you copy; the reasoning without the skeleton is theory you forget. Together, they give you a file you can own, defend, and adapt.

What Kickstart Actually Is Under the Hood

Most documentation introduces Kickstart as a "configuration file." That is technically true but intellectually incomplete. To really understand it, you need to understand Anaconda.

Anaconda is the Python-based installer used by Rocky Linux, RHEL, Fedora, and their kin. When a machine boots from Rocky Linux installation media, the kernel and an initial RAM disk (initrd) load into memory. That initrd contains a minimal runtime environment with just enough userspace tooling to locate and launch Anaconda. Anaconda itself is a large Python application that handles storage detection and manipulation through a library called blivet, package resolution and installation through DNF, network configuration, and all the other installation tasks.

When Anaconda starts, one of the first things it does is check whether a Kickstart file has been specified via the kernel command line parameter inst.ks=. If so, it fetches the file over the network (HTTP, HTTPS, FTP, NFS), from a local disk path, or from a CD/USB device, then parses it using pykickstart — a Python library that reads and validates the Kickstart syntax. The normal behavior is silent and automatic: every directive is consumed, and if the file is complete and valid, the machine installs itself without any human input.

Note

The pykickstart library is the canonical reference implementation. It is open source, maintained by Chris Lumens and the Anaconda team, and its documentation at pykickstart.readthedocs.io is the authoritative source for every directive, its options, and the versions in which features were added or removed. When something behaves unexpectedly, that documentation is where you go, not forum posts. After every Rocky Linux installation, a file named anaconda-ks.cfg is written to /root/ containing the Kickstart equivalent of all choices made during that install — this is an excellent starting point for building your own files.

The Installation Lifecycle: When Each Section Runs

Before writing a single directive, you need a mental map of the machine's lifecycle. Every section of a Kickstart file corresponds to a specific phase. Understanding what environment is available at each phase determines what you can and cannot do — and where bugs hide when things go wrong.

01
BIOS/UEFI → PXE or Media
Hardware initializes. Network boot (PXE/TFTP) or local media delivers the kernel and initrd. The inst.ks= parameter on the kernel command line tells Anaconda where to find the Kickstart file.
02
%pre — initrd environment
Runs before any disk partitioning or package installation. Only minimal initrd tools are available — no DNF, no full userspace. Use for dynamic disk detection, hardware probing, or writing %include files. Failures here abort the install before anything touches the disk.
03
Global Directives + Storage
Anaconda processes all top-level directives: locale, network, storage layout, bootloader, SELinux mode. Partitions are created and formatted here. Any %include files generated in %pre are merged at this point.
04
%packages — DNF installation
DNF resolves and installs all specified packages into the newly formatted filesystem. Package groups, environment groups, and explicit exclusions are processed here. The system is not yet running — packages are laid down into /mnt/sysimage.
05
%addon — OpenSCAP / add-ons
Anaconda add-ons run after packages but before %post. The org_fedora_oscap add-on applies SCAP profiles here — CIS, DISA STIG, PCI-DSS — before the system ever boots. This is why install-time SCAP remediation is more complete than post-boot remediation: configurations like partition mount options and bootloader parameters are set correctly from the start.
06
%post — chrooted into new system
Runs after all packages are installed, chrooted into the new system by default. Full package environment is available. Use for hardening, SSH key deployment, sysctl configuration, AIDE initialization, auditd rules, and bootstrapping configuration management agents. This is where the system's ongoing personality is established.
OK
Reboot → First Boot
Anaconda unmounts everything and reboots (or powers off if poweroff is specified). The machine boots into the freshly installed system. Configuration management takes over from here.
ERR
%onerror — failure handler
Fires on any fatal Anaconda error. Use to broadcast failure notifications, log to a remote syslog server, or power off the machine. Without this section, a failed install silently waits at the Anaconda error screen until someone checks the console.

The Anatomy of a Kickstart File

A Kickstart file is divided into sections. Each section has a defined purpose, and the ordering is partially significant. The major sections are: global directives at the top (configuration without a section header), %packages, %pre, %post, and %onerror.

Pro Tip

The fastest way to get a valid starting point for your own files is the anaconda-ks.cfg file that, as noted above, Anaconda writes to /root/ after every install. The #version=RHEL9 comment at the top of a Kickstart file is a hint to pykickstart about which version of the directive syntax to validate against — it does not affect Anaconda at runtime, but it ensures ksvalidator applies the correct rules.

Global Directives: The Installation Blueprint

These are the directives that appear before any % section marker. They define the fundamental parameters of the installation: where to get packages, how to configure the keyboard and locale, how to handle the bootloader, and how to partition storage.

Installation source and method

Two directives that belong at the top of every production Kickstart file but are frequently omitted are eula --agreed and firstboot --disabled. Without eula --agreed, Anaconda may pause for license acceptance even in text mode. Without firstboot --disabled, the Initial Setup wizard launches on first boot and waits for input — exactly the kind of prompt that silently stalls a provisioned machine.

kickstart
text
skipx
eula --agreed
firstboot --disabled
kickstart
url --url="https://dl.rockylinux.org/pub/rocky/9/BaseOS/x86_64/os/"
repo --name="AppStream" --baseurl="https://dl.rockylinux.org/pub/rocky/9/AppStream/x86_64/os/"

The url directive tells Anaconda where to pull the BaseOS packages from. In a large-scale environment, you almost never want this pointing at the upstream Rocky Linux mirrors. You want it pointing at internal infrastructure: either a caching proxy such as Squid in front of the upstream mirror, or a full local mirror created with reposync. This reduces installation time dramatically and eliminates dependency on external network availability.
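With an internal mirror in place, only the hostnames change. A sketch, where mirror.internal.example.com and its path layout are hypothetical stand-ins for your environment:

```kickstart
url --url="http://mirror.internal.example.com/rocky/9/BaseOS/x86_64/os/"
repo --name="AppStream" --baseurl="http://mirror.internal.example.com/rocky/9/AppStream/x86_64/os/"
```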

Locale, keyboard, and timezone

kickstart
lang en_US.UTF-8
keyboard --xlayouts='us'
timezone America/Chicago --utc

The --utc flag on timezone is critical and commonly forgotten. It forces the hardware clock to be treated as UTC, which is correct for servers. Systems that store local time in the hardware clock suffer subtle, painful bugs around daylight saving time transitions that are notoriously difficult to diagnose. Rocky Linux 9 also supports the timesource directive for NTP server specification:

kickstart
timezone America/Chicago --utc
timesource --ntp-server=0.pool.ntp.org
timesource --ntp-server=1.pool.ntp.org

In environments with an internal NTP infrastructure, replace the pool addresses with your internal NTP server addresses. Having Anaconda configure chrony during install means time synchronization is active from the first boot rather than requiring a post-install configuration step.

Network configuration

kickstart
network --bootproto=dhcp --device=ens3 --activate --onboot=yes
network --hostname=node01.internal.example.com

For scale deployments, you frequently cannot hardcode IP addresses per machine. DHCP is the common approach during installation, with subsequent configuration management (Ansible, Salt, Puppet) handling static addressing post-install. If you do need static IPs during installation — for example when installing over a network that has no DHCP — the directive becomes:

kickstart
network --bootproto=static --device=ens3 --ip=10.0.10.50 --netmask=255.255.255.0 \
  --gateway=10.0.10.1 --nameserver=10.0.0.53 --activate

Authentication and credentials

Always use the --iscrypted option for the root password so that the password is stored as a hash rather than in plain text. Generate the correct SHA-512 crypt hash with:

terminal
$ openssl passwd -6 'YourPasswordHere'

The output begins with $6$ indicating SHA-512. That entire string goes directly into the Kickstart file:

kickstart
rootpw --iscrypted $6$rounds=5000$somerandomsalt$longhashstring...
Caution

Never store an unencrypted password in a Kickstart file that lives in version control, in a web server directory, or anywhere else unauthorized parties might read it. Best practice is to rotate credentials post-install via your configuration management system and to avoid committing any credentials to repositories at all.
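Because the hash is only a string, per-host Kickstart files can be stamped out from a template. A minimal sketch: the @ROOTPW@ placeholder, the template path, and the host names are all hypothetical.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Generate a fresh SHA-512 crypt hash (random salt each run)
HASH=$(openssl passwd -6 'ChangeMeAfterFirstBoot')

# Hypothetical template: a Kickstart skeleton with a @ROOTPW@ placeholder
cat > /tmp/ks-template.cfg << 'EOF'
rootpw --iscrypted @ROOTPW@
EOF

# Render one file per host; crypt hashes contain $ and / but never |,
# so | is a safe sed delimiter
for host in node01 node02; do
  sed "s|@ROOTPW@|${HASH}|" /tmp/ks-template.cfg > "/tmp/ks-${host}.cfg"
done

grep 'rootpw' /tmp/ks-node01.cfg
```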

SELinux and services

kickstart
selinux --enforcing
services --enabled="sshd,chronyd,firewalld"
services --disabled="kdump,postfix"
Warning

A common temptation is to set selinux --disabled to avoid "SELinux problems." Resist this strongly in production environments. SELinux enforcing mode is a mandatory access control layer that meaningfully reduces the blast radius of application-level vulnerabilities. If an application misbehaves with SELinux enabled, that is a signal to fix the application or write a proper policy, not to disable the entire security subsystem. Set it enforcing during installation and leave it enforcing.

Storage Configuration: Where Administrators Live or Die

Storage directives are where the majority of Kickstart disasters happen. Getting these wrong can wipe data silently, create non-bootable systems, or produce layouts that cause problems years later under CIS or STIG compliance audits.

The Blast Radius Problem

Storage directives are the one place in a Kickstart file where a mistake is not just a failed install — it is irreversible data destruction at whatever scale you are deploying. A clearpart --all directive that runs against the wrong disk on 50 machines in parallel destroys 50 disks in parallel. Read this section carefully before touching production deployments. Test every storage layout change in a virtual machine first.

The disk selection problem

Device names in the sdX format are not guaranteed to be consistent across reboots. Wherever a directive calls for a device node name, you can instead use any of the stable names under /dev/disk. On a physical server with multiple storage controllers and disk types, sda during installation might not be sda after the first reboot. Referencing disks by their /dev/disk/by-id/ or /dev/disk/by-path/ names is far more reliable in complex environments.
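The mapping from a persistent name to a kernel name is just a symlink, which you can inspect with readlink. A sketch against a mock directory so it runs anywhere; on real hardware, point it at /dev/disk/by-id:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Mock of /dev/disk/by-id so this runs anywhere; a real entry links to a block device
mock=/tmp/mock-by-id
mkdir -p "$mock"
: > /tmp/sda    # stand-in for the kernel device node
ln -sf /tmp/sda "$mock/wwn-0xdeadbeef00000001"

# A by-id name resolves to whatever kernel name the disk received this boot
for link in "$mock"/*; do
  printf '%s -> %s\n' "${link##*/}" "$(readlink -f "$link")"
done
```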

The ignoredisk directive controls which disks Anaconda will even consider:

kickstart
ignoredisk --only-use=sda

This tells the installer: "Only use sda for installation; ignore all other disks." In environments where servers have data disks, SAN connections, or shared storage, this directive is not optional. Failing to specify it means Anaconda may try to interact with disks you do not want it to touch.
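Combined with the stable-naming advice above, the directive can also reference a persistent identifier rather than a kernel name. A sketch with a made-up WWN; check the pykickstart documentation for the device specification forms your Anaconda version accepts:

```kickstart
ignoredisk --only-use=disk/by-id/wwn-0x5000c500a1b2c3d4
```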

The clearpart and zerombr interaction

kickstart
zerombr
clearpart --all --initlabel --disklabel=gpt

zerombr initializes any disk whose partition table is invalid or unreadable. Without this directive, the installer may pause and ask for confirmation when it encounters a partition table it does not recognize. In unattended deployments, a prompt that waits for human input is a deployment that never finishes. zerombr ensures forward progress.

The --disklabel=gpt flag forces GPT partition tables rather than the legacy MBR format. GPT is correct for modern systems: it supports disks larger than 2TB, allows more than four primary partitions, stores multiple copies of partition table data for resilience, and is required for UEFI boot. Always use GPT unless you have a specific legacy constraint.

A production-grade LVM layout

The following is a full, security-conscious partition layout that passes CIS Benchmark Level 2 requirements by placing security-sensitive directories on their own mount points with restrictive mount options:

kickstart
part biosboot  --fstype="biosboot"  --size=2      --ondisk=sda
part /boot/efi --fstype="efi"       --size=1024   --ondisk=sda --fsoptions="umask=0077,shortname=winnt"
part /boot     --fstype="xfs"       --size=2048   --ondisk=sda
part pv.01     --fstype="lvmpv"     --size=1      --grow --ondisk=sda

volgroup vg_os pv.01 --pesize=4096

logvol /              --vgname=vg_os --size=10240 --fstype=xfs  --name=lv_root
logvol /home          --vgname=vg_os --size=4096  --fstype=xfs  --name=lv_home  --fsoptions="nodev,nosuid"
logvol /tmp           --vgname=vg_os --size=2048  --fstype=xfs  --name=lv_tmp   --fsoptions="nodev,nosuid,noexec"
logvol /var           --vgname=vg_os --size=8192  --fstype=xfs  --name=lv_var
logvol /var/log       --vgname=vg_os --size=4096  --fstype=xfs  --name=lv_varlog
logvol /var/log/audit --vgname=vg_os --size=2048  --fstype=xfs  --name=lv_audit
logvol /var/tmp       --vgname=vg_os --size=2048  --fstype=xfs  --name=lv_vartmp --fsoptions="nodev,nosuid,noexec"
logvol swap           --vgname=vg_os --size=4096  --name=lv_swap
Note

UEFI vs. BIOS: why this layout includes both a biosboot and an EFI partition. On systems that may boot in either mode, carrying both partition types in the Kickstart layout is the safest choice. The biosboot partition (2 MiB, no filesystem, GPT type BIOS Boot) is used by GRUB 2 on BIOS/CSM systems when the partition table is GPT — GRUB cannot use the legacy MBR gap on GPT disks, so it stages itself here instead. The /boot/efi partition (FAT32, at least 200 MiB recommended, 1 GiB for update headroom) is used on UEFI systems. A machine booting in UEFI mode ignores the biosboot partition entirely; a machine booting via legacy BIOS ignores the EFI partition entirely. If you are deploying exclusively to hardware or hypervisors where boot mode is known and fixed, you can omit the irrelevant partition — but in heterogeneous environments, carrying both costs only 2 MiB and eliminates a class of install-time failures.

Decision: which boot partitions to include?
Include both biosboot + /boot/efi if...
  • Fleet is heterogeneous (mixed BIOS/UEFI hardware)
  • You cannot guarantee boot mode is fixed
  • You are deploying to bare metal you do not fully control
  • Default recommendation: costs 2 MiB, eliminates a failure class
Omit biosboot if...
  • Fleet is exclusively UEFI (confirmed, not assumed)
  • All hardware is Haswell-era or newer cloud/hypervisor
  • You have verified every target boots in UEFI mode
Omit /boot/efi if...
  • Fleet is exclusively legacy BIOS with GPT disks
  • Older hardware where UEFI is not supported at all
  • You have verified no UEFI boot paths exist in the environment

Why separate mount points? Consider what happens when a web application writes unbounded amounts of data to /var/log. Without a separate partition, that growth can fill the root filesystem, crash system services, and potentially make the system unrecoverable without console access. With separate mount points, the impact is isolated. The noexec flag on /tmp and /var/tmp prevents execution of binaries placed there — a classic attacker technique. The nosuid flag prevents SUID bit exploitation on those paths.
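These options are cheap to audit after first boot. A sketch of a verification loop, run here against a sample file so it works anywhere; on a real machine, point FSTAB at /etc/fstab:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sample fstab standing in for /etc/fstab on an installed machine
FSTAB=/tmp/fstab.sample
cat > "$FSTAB" << 'EOF'
/dev/mapper/vg_os-lv_root   /        xfs defaults              0 0
/dev/mapper/vg_os-lv_tmp    /tmp     xfs nodev,nosuid,noexec   0 0
/dev/mapper/vg_os-lv_vartmp /var/tmp xfs nodev,nosuid,noexec   0 0
EOF

# Fail if any hardened mount point is missing one of its required options
status=0
for mp in /tmp /var/tmp; do
  opts=$(awk -v mp="$mp" '$2 == mp { print $4 }' "$FSTAB")
  for want in nodev nosuid noexec; do
    case ",$opts," in
      *",$want,"*) ;;
      *) echo "FAIL: $mp missing $want"; status=1 ;;
    esac
  done
done
[ "$status" -eq 0 ] && echo "mount options OK"
```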

Full-disk encryption with LUKS

For environments requiring encryption at rest, LUKS can be configured directly in Kickstart:

kickstart
part pv.01 --fstype="lvmpv" --ondisk=sda --encrypted --luks-type=luks2 \
  --cipher=aes-xts-plain64 --pbkdf-time=5000 --passphrase=InstallTimePassphrase

There is an important operational consideration: the passphrase hardcoded in the Kickstart file is a shared secret across every machine installed from it, and it remains valid until you change it during the %post phase or later. A proper production workflow changes the LUKS passphrase post-install, stores the key in a secrets management system (HashiCorp Vault, Tang/Clevis for network-bound disk encryption), and removes the installation passphrase. The Clevis/Tang combination enables NBDE (Network Bound Disk Encryption), where the disk can only be decrypted when the machine has network access to authorized Tang servers — a compelling pattern for datacenter servers.
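A sketch of what that rotation might look like in %post. The device path, the secrets-store URL, and the key handling are all placeholders; real key delivery must come from your secrets infrastructure, not the Kickstart file itself:

```kickstart
%post --log=/root/ks-post-luks.log
#!/bin/bash
set -euo pipefail

# Hypothetical: fetch a per-machine key from a secrets store (delivery is site-specific)
NEW_KEY=$(curl -sf "https://vault.internal.example.com/v1/secret/luks/$(hostname)")

# Authorize with the install-time passphrase, add the new key, then retire the old one
printf '%s' 'InstallTimePassphrase' > /tmp/oldkey
printf '%s' "$NEW_KEY" | cryptsetup luksAddKey --key-file /tmp/oldkey /dev/sda4
printf '%s' 'InstallTimePassphrase' | cryptsetup luksRemoveKey /dev/sda4
shred -u /tmp/oldkey

# Alternative: bind to Tang for NBDE (requires clevis, clevis-luks, clevis-dracut)
# clevis luks bind -d /dev/sda4 tang '{"url":"http://tang.internal.example.com"}'
%end
```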

Note

When encrypting one or more partitions, Anaconda attempts to gather 256 bits of entropy. Gathering entropy can take some time — the process will stop after a maximum of 10 minutes. In a virtual machine, you can attach a virtio-rng device to the guest to speed up entropy gathering.

The %packages Section: Defining the System's Purpose

The %packages section tells DNF which packages and package groups to install. It is more nuanced than it appears.

kickstart
%packages --ignoremissing --exclude-weakdeps
@^minimal-environment
@base
@core
openssh-server
vim-enhanced
chrony
firewalld
audit
aide
-telnet
-rsh
-ypserv
-ypbind
-tftp
-tftp-server
%end

The @^minimal-environment notation installs an environment group. The @base and @core notations install component groups. Package exclusions are prefixed with -. The --exclude-weakdeps flag skips weak dependencies (Recommends and Suggests from RPM metadata), producing a leaner install. This matters at scale: over thousands of machines, unnecessary packages multiply into gigabytes of consumed space, expanded attack surface, and longer package update cycles.

The security-relevant exclusions above (-telnet, -rsh, -ypserv, etc.) are standard CIS Benchmark recommendations. These legacy protocols transmit data in plaintext and have no place on modern systems.

Installing Packages from Private or Authenticated Repositories

In environments managed by Red Hat Satellite, Foreman/Katello, Pulp, or a private Artifactory proxy, the package source in the Kickstart file points to an internal URL rather than a public mirror. This works identically to the public case — the only difference is the URL and, in some configurations, authentication headers.

For Satellite-managed environments, machines are typically subscribed to the correct content views via the %post section after install. During the install itself, an activation key can be passed to the Satellite host so that content is delivered from the correct environment:

kickstart %post (Satellite registration)
%post --log=/root/ks-post-satellite.log
#!/bin/bash
set -euo pipefail

# Install the Satellite CA consumer package first
curl -ko /tmp/katello-ca-consumer-latest.noarch.rpm \
  https://satellite.internal.example.com/pub/katello-ca-consumer-latest.noarch.rpm
rpm -Uvh /tmp/katello-ca-consumer-latest.noarch.rpm

# Register and attach to the correct content view
subscription-manager register \
  --org="YourOrg" \
  --activationkey="rl9-base-servers" \
  --serverurl=https://satellite.internal.example.com \
  --force

%end

For a simpler internal mirror (reposync behind nginx, for example), no authentication is needed — just point the url and repo directives at the internal hostname. The value of running an internal mirror at scale is significant: installation time drops from 10–15 minutes against upstream mirrors over commodity internet to under 3 minutes against a local 10 GbE mirror, and you are insulated from upstream availability and CDN consistency issues during a large parallel deployment run.

The %pre Section: Before the Disk Is Touched

The %pre section runs as a shell script inside the installation environment before Anaconda processes any storage directives or installs any packages. This is a powerful and dangerous capability. The system at this point is running from the initrd, with a minimal set of tools available — no DNF, no full userspace, no network utilities beyond what the initrd includes.

The key thing to understand: %pre runs in the installer environment, not in the system being installed. Files written to the filesystem here are written to the ramdisk, not to the target disk. The only way to pass data from %pre to the main installation is through the %include mechanism — writing a partial Kickstart file to a temp path and including it from the global directives section.

Conditional disk selection

In heterogeneous hardware environments, different server models may present storage under different device names. A %pre script can detect the available disks and write dynamic storage configuration that Anaconda then consumes:

kickstart %pre
%pre --interpreter=/bin/bash --log=/tmp/pre-install.log
#!/bin/bash

# Detect the first available block device from a list of common names
DISK=""
for d in sda vda nvme0n1 xvda; do
  if [ -b "/dev/$d" ]; then
    DISK=$d
    break
  fi
done

if [ -z "$DISK" ]; then
  echo "ERROR: No suitable disk found" >&2
  exit 1
fi

# Write a dynamic storage snippet that will be included
cat > /tmp/storage-config.ks << EOF
ignoredisk --only-use=$DISK
zerombr
clearpart --drives=$DISK --initlabel --disklabel=gpt
bootloader --boot-drive=$DISK
part biosboot  --fstype=biosboot   --size=2    --ondisk=$DISK
part /boot/efi --fstype=efi        --size=1024 --ondisk=$DISK
part /boot     --fstype=xfs        --size=2048 --ondisk=$DISK
part pv.01     --fstype=lvmpv      --size=1    --grow --ondisk=$DISK
EOF
%end

The dynamically written file at /tmp/storage-config.ks can then be sourced using the %include /tmp/storage-config.ks directive in the global section of the Kickstart file. This is a powerful composition pattern.
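The handoff is also easy to rehearse on a workstation before spending an install cycle on it. A sketch with DISK pinned rather than detected:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-in for the %pre detection result; on a real install this comes from probing /dev
DISK=vda

# The same heredoc the %pre script writes, rendered to a scratch path
cat > /tmp/storage-config.ks << EOF
ignoredisk --only-use=$DISK
zerombr
clearpart --drives=$DISK --initlabel --disklabel=gpt
bootloader --boot-drive=$DISK
EOF

# Sanity checks a CI job could run before the file ever reaches a deploy server
grep -q "only-use=vda" /tmp/storage-config.ks
grep -q "boot-drive=vda" /tmp/storage-config.ks
echo "storage snippet OK"
```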

Disk wiping for multi-disk servers

kickstart %pre
%pre --interpreter=/bin/bash
#!/bin/bash
# Remove LVM metadata
pvs --noheadings -o pv_name 2>/dev/null | grep sda | while read pv; do
  vg=$(pvs --noheadings -o vg_name "$pv" 2>/dev/null | tr -d ' ')
  [ -n "$vg" ] && vgremove -ff "$vg" 2>/dev/null
  pvremove -ff "$pv" 2>/dev/null
done

# Clear partition signatures
wipefs -af /dev/sda
dd if=/dev/zero of=/dev/sda bs=1M count=10 2>/dev/null
%end

The %post Section: Hardening and Configuration After Install

The %post section runs after all packages have been installed, inside the newly installed system's filesystem (chrooted by default). This is where the real post-install hardening, agent deployment, and configuration management bootstrapping happens.

kickstart %post
%post --erroronfail --log=/root/ks-post.log
#!/bin/bash
set -euo pipefail

# Log everything to a file, and mirror the log to the console for real-time visibility
exec 1>/root/ks-post.log 2>&1
tail -f /root/ks-post.log > /dev/console &

# System hardening: kernel parameters
cat >> /etc/sysctl.d/99-hardening.conf << 'EOF'
net.ipv4.ip_forward = 0
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.all.accept_redirects = 0
net.ipv6.conf.all.accept_redirects = 0
net.ipv4.conf.all.log_martians = 1
net.ipv4.conf.all.accept_source_route = 0
EOF

# Configure SSH hardening
cat >> /etc/ssh/sshd_config.d/99-hardening.conf << 'EOF'
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys
# Protocol 2 is the only protocol supported since OpenSSH 7.4; this line is redundant but harmless
Protocol 2
MaxAuthTries 3
ClientAliveInterval 300
ClientAliveCountMax 2
X11Forwarding no
AllowAgentForwarding no
EOF

# Deploy authorized SSH key for initial access
mkdir -p /home/sysadmin/.ssh
chmod 700 /home/sysadmin/.ssh
cat > /home/sysadmin/.ssh/authorized_keys << 'EOF'
ssh-ed25519 AAAAC3Nza... your-deployment-key-here
EOF
chmod 600 /home/sysadmin/.ssh/authorized_keys
chown -R sysadmin:sysadmin /home/sysadmin/.ssh

# Configure firewalld
systemctl enable firewalld
firewall-offline-cmd --set-default-zone=drop
firewall-offline-cmd --zone=drop --add-service=ssh

# AIDE initialization (file integrity database)
aide --init
mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz

# Configure auditd rules for compliance
cat >> /etc/audit/rules.d/99-hardening.rules << 'EOF'
-w /var/log/lastlog -p wa -k logins
-w /var/run/faillock/ -p wa -k logins
-w /etc/sudoers -p wa -k scope
-w /etc/sudoers.d/ -p wa -k scope
-w /etc/group -p wa -k identity
-w /etc/passwd -p wa -k identity
-w /etc/shadow -p wa -k identity
EOF

# Bootstrap configuration management agent
dnf install -y ansible-core python3-pip
# ansible-pull -U https://git.internal.example.com/ansible/base-config.git \
#   -i localhost, site.yml

%end

The --log=/root/ks-post.log argument is critically important. When %post scripts fail silently, having a log is often the only way to diagnose the problem. The combination of set -euo pipefail and redirecting output to the console means you can watch the post-install script run in real time, seeing each step as it executes.

Warning

What happens when a %post script fails? By default, Anaconda logs the non-zero exit code and continues the installation — the machine finishes installing with its hardening incomplete and nothing obvious to show for it. Add --erroronfail to the %post header to make a script failure halt the install in an error state, which is the correct behavior when paired with set -euo pipefail, since you want the install to stop if hardening steps did not complete. If you have steps that are advisory rather than mandatory (for example, a best-effort AIDE initialization that may take too long on slow disks), put them in a separate secondary %post section without --erroronfail, or wrap those specific commands so their failure does not abort the script. Never omit --erroronfail on your primary hardening block — that masks real failures. When debugging a failed install, the post log at /root/ks-post.log is the first place to look; mount the installed disk from a live environment if the machine won't boot far enough to read it.

Diagnosing a Failed %post Script

When a %post script fails and the machine either halts or boots into a broken state, the diagnostic workflow is:

  1. Check /root/ks-post.log — if the machine boots at all, this is the first place to look. The set -euo pipefail directive means the script stopped at the first failing command; the log shows exactly where.
  2. If the machine will not boot, boot from rescue media, mount the installed filesystem (mount /dev/vg_os/lv_root /mnt/sysimage), and read /mnt/sysimage/root/ks-post.log.
  3. Check for SELinux AVC denials — a common %post failure mode is a curl or dnf call blocked by SELinux policy in the chrooted environment. Look for AVC entries in /mnt/sysimage/var/log/audit/audit.log.
  4. Network availability — %post runs chrooted, but network connectivity depends on the installer environment. If a %post script tries to reach an external host (configuration management server, Vault instance, Satellite) and the network is not fully up yet, the call will fail. Add a brief connectivity check before any network-dependent steps.
  5. OpenSCAP remediation log — if you use the %addon approach, the SCAP remediation log is at /root/openscap_data/. Review it after first boot to confirm the profile was applied successfully and to see which rules were not met.

The %onerror Section: Graceful Failure Handling

Almost no one includes this section, which is a mistake. In large-scale deployments, installations will fail. Hardware faults, network blips, misconfigured DHCP, corrupted ISO files — all of these can abort an installation midway. The %onerror section defines what happens when an error occurs:

kickstart %onerror
%onerror
#!/bin/bash
echo "CRITICAL: Installation failed on $(hostname -I 2>/dev/null || echo 'unknown')" | \
  wall

# Log failure to syslog server
logger -n 10.0.0.10 -P 514 -t kickstart "Installation FAILED on $(cat /etc/hostname 2>/dev/null)"

# Optional: Write failure indicator to a monitoring endpoint
curl -sf --max-time 5 \
  "http://deploy.internal.example.com/api/failure?host=$(hostname)" || true

poweroff
%end

At scale, knowing immediately when a machine fails to install — without someone having to check a console — is the difference between a smooth deployment run and discovering at 6 a.m. that half your rack never came up.

The %addon Section: Applying Security Profiles at Install Time

The %addon section is the least-known part of Kickstart and one of the most operationally valuable for compliance-driven environments. It allows Anaconda add-ons to receive configuration at install time — and the most practically relevant add-on is org_fedora_oscap, which invokes OpenSCAP during installation to apply a security profile (CIS, DISA STIG, PCI-DSS, or others) while the system is still being built, before it ever boots into production.

Why does timing matter? Applying a SCAP profile post-install via a remediation script works, but it runs on a live, running system. That means some remediations (partition mount options, certain kernel parameters, bootloader hardening) require reboots to take full effect. Applying the profile at install time through the %addon section applies remediations during the initial build, before the first boot, so the system comes up already compliant.

The package required in the installation environment is openscap-anaconda-addon, which must be included in the %packages section:

kickstart %packages (OpenSCAP)
%packages --ignoremissing --exclude-weakdeps
@^minimal-environment
@core
openscap
openscap-scanner
scap-security-guide
openscap-anaconda-addon
...
%end

The %addon block then specifies the profile to apply. The following applies CIS Benchmark Level 2 for Rocky Linux 9:

kickstart %addon
%addon org_fedora_oscap
  content-type = scap-security-guide
  profile = xccdf_org.ssgproject.content_profile_cis_server_l2
%end

For DISA STIG, replace the profile ID with xccdf_org.ssgproject.content_profile_stig. To find the exact profile ID for your target content, run oscap info /usr/share/xml/scap/ssg/content/ssg-rl9-xccdf.xml on a system with the scap-security-guide package installed — it lists every available profile and its ID.

Warning

The SCAP remediation that runs during installation can override or conflict with hardening steps you have written manually in your %post section. If you use the %addon approach, audit for duplicate or conflicting configurations and remove the redundant %post steps. Running two hardening passes that fight each other over the same configuration files produces unpredictable results. The remediation log is written to /root/openscap_data/ and is worth reviewing after first boot to confirm the profile application succeeded.

Delivery Mechanisms: Getting the Kickstart File to the Machine

You can specify the Kickstart file location in several ways. Each has different implications at scale.

Method 1: Kernel command line (interactive or PXE)

At the GRUB boot menu, pressing e lets you edit the kernel parameters and append inst.ks=http://deploy.internal.example.com/ks/rocky9-base.cfg to the linux line. At scale this is impractical, but it is the fastest way to test a new Kickstart file before wiring it into automation.
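For reference, the edited UEFI GRUB entry ends up looking roughly like this — a sketch using this guide's example hostnames:

```text
menuentry 'Install Rocky Linux 9' {
    linux /images/pxeboot/vmlinuz inst.repo=http://mirror.internal.example.com/rocky/9 inst.ks=http://deploy.internal.example.com/ks/rocky9-base.cfg
    initrd /images/pxeboot/initrd.img
}
```

On a BIOS/isolinux boot the same parameters are appended at the boot prompt after pressing Tab.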

Method 2: PXE + TFTP + DHCP

This is the standard enterprise approach. A PXE server (using PXELINUX or GRUB over TFTP) serves the kernel and initrd to booting machines. The PXE configuration automatically specifies the Kickstart URL:

pxelinux.cfg/default
label rocky9-auto
  menu label Rocky Linux 9 Automated Install
  kernel /networkboot/Rocky9/vmlinuz
  append initrd=/networkboot/Rocky9/initrd.img \
    inst.repo=http://mirror.internal.example.com/rocky/9 \
    inst.ks=http://deploy.internal.example.com/ks/rocky9-base.cfg \
    console=tty0 console=ttyS0,115200n8

The machine boots, gets an IP from DHCP, downloads the kernel via TFTP, fetches the Kickstart file over HTTP, and installs itself with zero interaction. For machines that need different configurations, DHCP can return different PXE filenames based on MAC address or vendor class, pointing different machines to different Kickstart files.
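A sketch of the per-MAC dispatch in ISC dhcpd — addresses, MACs, and hostnames are illustrative:

```text
# dhcpd.conf fragment: defaults plus a per-machine override
next-server 10.0.0.20;       # TFTP server
filename "pxelinux.0";       # default network boot program

host db-01 {
  hardware ethernet 52:54:00:aa:bb:01;
  fixed-address 10.0.1.51;
  # PXELINUX also checks pxelinux.cfg/01-52-54-00-aa-bb-01 automatically,
  # so a per-MAC file there can point this machine at a different Kickstart URL
}
```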

Method 3: Embedded in a custom ISO

Using mkksiso (part of the lorax package), a Kickstart file is copied into a Rocky Linux ISO and the boot configuration is updated to reference it. The volume label of the rebuilt ISO must exactly match the original — Anaconda uses this label to locate the stage2 image, and a mismatch produces a "cannot load stage2 from cdrom" error even when the files are present. This approach is ideal for air-gapped environments, lab setups, or edge deployments where network access to a Kickstart server is not guaranteed. The ISO becomes entirely self-contained.
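A typical invocation looks like the following — ISO filenames are illustrative, and note that older lorax releases took the Kickstart file as a positional argument rather than via --ks:

```text
$ mkksiso --ks rocky9-base.cfg Rocky-9-x86_64-dvd.iso Rocky-9-x86_64-auto.iso
```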

Method 4: Via Cobbler

Cobbler is a Linux installation server that integrates PXE, TFTP, DHCP, DNS, and Kickstart management into a unified system. Rather than manually managing flat PXE config files and a directory of Kickstart templates, Cobbler introduces an object model: distros (a kernel + initrd pair), profiles (a distro + Kickstart template combination), and systems (specific machines, identified by MAC address, each linked to a profile). When a machine PXE-boots, Cobbler dynamically generates the appropriate PXE menu entry and serves the correct Kickstart file based on which system object matches the requesting MAC.
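The object model maps directly onto the CLI. A sketch with illustrative names, MACs, and paths, using Cobbler 3.x syntax (where Kickstart templates are called autoinstall files):

```text
$ cobbler distro add  --name=rocky9 \
    --kernel=/var/www/cobbler/distro_mirror/rocky9/images/pxeboot/vmlinuz \
    --initrd=/var/www/cobbler/distro_mirror/rocky9/images/pxeboot/initrd.img
$ cobbler profile add --name=rocky9-web --distro=rocky9 --autoinstall=web-server.ks
$ cobbler system add  --name=web-01 --profile=rocky9-web --mac=52:54:00:aa:bb:01
$ cobbler sync
```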

This model scales naturally. Adding a new server role means creating a new profile with its own Kickstart template and assigning machines to it — you do not edit PXELINUX configuration files by hand. Cobbler also handles reposync to mirror upstream repositories locally, reducing installation time and eliminating external network dependencies. Its XML-RPC API, easily driven from Python, allows programmatic management: you can write a script that reads a CMDB, creates system objects for every machine being provisioned, assigns profiles, and then triggers PXE installations — all without any manual Cobbler web UI interaction. Integration with Ansible, Salt, and similar tools is straightforward through the API.

The practical limitation is operational overhead: Cobbler requires a dedicated host (or VM), regular updates, and care around its DHCP management — if Cobbler manages DHCP, it owns that service entirely, which can conflict with existing DHCP infrastructure. For environments already running a separate DHCP server, Cobbler can be configured to not manage DHCP and instead rely on external DHCP to direct machines to the correct TFTP server. This is the more common production configuration.

Method 5: Via Foreman with Katello

Foreman with the Katello plugin is a heavier-weight solution that covers the full provisioning and lifecycle management stack: PXE boot, Kickstart delivery, post-install configuration management via Puppet or Ansible, software content lifecycle management (controlling which package versions are available to which hosts), subscription management for RHEL systems, and a web UI and API for managing all of it. Where Cobbler handles provisioning and stops there, Foreman+Katello handles provisioning and continues managing the host through its entire lifecycle.

The provisioning path works similarly at the network level — DHCP directs booting machines to Foreman's TFTP server, which serves a dynamically generated PXE menu that includes the correct Kickstart URL. But the Kickstart template is rendered by Foreman using ERB (Embedded Ruby) templating rather than Jinja2, and it is stored in Foreman's database rather than a file system. Foreman's host parameters system allows per-host and per-hostgroup variable overrides, so the same base template can produce customized Kickstart files for hundreds of distinct host configurations without duplicating the template itself.
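A minimal ERB fragment illustrating the difference — @host.name is a real Foreman template variable and host_param a real template helper, but the parameter name here is an assumption:

```erb
network --hostname=<%= @host.name %>
rootpw --iscrypted <%= host_param('root_password_hash') %>
```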

The tradeoff is complexity. Foreman+Katello is a substantial installation with significant resource requirements and a steep initial configuration curve. For organizations already invested in Red Hat Satellite (which is the downstream commercial product built on Foreman+Katello), this is the natural choice. For teams that only need provisioning automation without the content lifecycle features, Cobbler or a simpler PXE+HTTPS approach is a better fit.

Method 6: Via HashiCorp Packer

Packer occupies a different niche from Cobbler and Foreman. Rather than managing ongoing bare-metal provisioning, Packer is used to build standardized machine images — QCOW2 files for KVM, OVA templates for VMware, AMIs for AWS, Azure managed images — from a Kickstart file. The resulting image is a pre-installed, pre-hardened operating system that can be deployed across a fleet without running the installer at all. Machines provisioned from an image are faster to bring up (no install phase), and the image itself is an auditable artifact in a way that a live install process is not.

Packer's qemu builder boots a Rocky Linux ISO in a QEMU VM, injects the Kickstart file via boot_command (sending keystrokes to the GRUB boot menu to append inst.ks=), waits for the installation to complete, and then runs any additional provisioners (shell scripts, Ansible playbooks) before shutting down and exporting the image. The resulting image can be stored in an image registry and deployed via cloud-init, Terraform, or direct QEMU instantiation.
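A trimmed HCL sketch of that flow. The ISO URL, credentials, and boot keystrokes are illustrative — the exact boot_command depends on the ISO's boot menu and firmware, and a real build should pin a sha256 checksum rather than "none":

```hcl
source "qemu" "rocky9" {
  iso_url          = "https://mirror.internal.example.com/isos/Rocky-9-x86_64-dvd.iso"
  iso_checksum     = "none"   # illustrative only -- pin a real sha256 in practice
  http_directory   = "ks"     # Packer serves this directory; put rocky9-base.cfg inside
  boot_command     = ["<up><tab> inst.ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/rocky9-base.cfg<enter>"]
  disk_size        = "40G"
  format           = "qcow2"
  ssh_username     = "sysadmin"
  ssh_password     = "install-only-password"
  ssh_timeout      = "45m"
  shutdown_command = "sudo poweroff"
}

build {
  sources = ["source.qemu.rocky9"]
}
```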

In practice, teams often use both Kickstart and Packer: Kickstart for bare-metal provisioning where you are installing onto physical hardware, and Packer for building VM template images that get deployed to hypervisors and cloud platforms. The Kickstart file is the shared source of truth for both paths — the same hardening logic, the same partition layout reasoning, applied through different deployment mechanisms.

Method 7: Ephemeral API-driven Kickstart service

At large scale, static Kickstart files served from a directory are a security and flexibility problem at the same time. An ephemeral Kickstart service flips this model: instead of serving files, a lightweight HTTP API generates a Kickstart file per request, based on the requesting machine's identity. When a machine PXE-boots, it hits the API at inst.ks=https://ks.internal.example.com/api/kickstart?mac=${net0/mac} (using iPXE variable substitution to pass its MAC address). The API looks up the MAC against the CMDB, determines the correct role, renders the appropriate Jinja2 template with host-specific variables — hostname, management IP, disk device, role-specific packages, environment — returns the rendered file, and then invalidates the one-time token associated with that request. The file is never written to disk at all.

This approach solves several problems at once. Credentials embedded in the Kickstart file are generated on-demand, not stored in a file that persists on a server indefinitely. The API can enforce that each MAC address gets exactly one installation Kickstart before the token expires, preventing replay. The rendered output is logged against the request (MAC, timestamp, rendered content hash) for audit purposes without persisting the actual secrets. Secrets beyond the initial credential — Vault tokens, enrollment certificates, CM agent keys — can be retrieved by the %post script at install time using short-lived tokens generated by the API at render time and injected into the template. Python with Flask or FastAPI is sufficient for this; the complexity is in the secrets management plumbing, not the web layer itself.

The operational requirement this adds is high availability for the Kickstart API — if it is down during a provisioning window, machines cannot install. Running two instances behind an internal load balancer with a shared backing store (PostgreSQL or Redis for token state) is the standard mitigation. This is a meaningful investment compared to serving flat files, and it pays off only when you have a secrets management pipeline already in place and the provisioning volume justifies the operational surface area.

Method 8: Image streaming with kiwi-ng and OCI registries

Packer builds images and stores them as files. A more composable approach — particularly for environments already running container infrastructure — is to use kiwi-ng to build a Rocky Linux root filesystem image as an OCI container artifact, push it to an internal container registry, and stream it onto bare metal at provision time using a lightweight installer stub that runs from PXE-delivered initrd. The machine boots a minimal environment, pulls the image layers from the registry, writes them to disk, configures bootloader and host identity, and reboots into a production system without ever running Anaconda or consuming a Kickstart file at all.

This model has genuine advantages: image provenance is enforced by the registry (digest verification), updates to the base image flow through the same pipeline as container image updates, and the "install" is more accurately described as an image write — deterministic, fast, and with a complete audit trail. The Kickstart file is still used to build the kiwi image definition, but the per-machine provisioning path no longer involves it. The tradeoff is that this requires more mature infrastructure (an OCI registry reachable from the management network, a kiwi build pipeline, a custom PXE installer stub) and is more operationally complex to debug when something goes wrong. Teams reaching for this approach are typically those running Kubernetes infrastructure at scale who want to apply container-native supply chain practices to their base OS layer.

Scaling Considerations and Operational Patterns

Kickstart Templating

The moment you manage more than two or three distinct server roles, raw Kickstart files become unmaintainable if duplicated. The better approach is a templating system. Python's Jinja2 or even simple Bash-based sed substitution can parameterize a base template. More sophisticated teams use Cobbler's built-in template engine (Cheetah-based), Foreman's ERB templates with per-hostgroup parameter overrides, or HashiCorp Packer with its qemu builder for image-based deployments. Each tool occupies a different point on the complexity-vs-capability tradeoff: raw templates are simple and portable; Cobbler adds a provisioning object model; Foreman adds full lifecycle management; Packer shifts the output from a provisioning process to a deployable image artifact.

A Kickstart template might look like:

kickstart.j2
network --hostname={{ inventory_hostname }}.{{ domain }}
rootpw --iscrypted {{ root_password_hash }}

Generated at deploy time from an inventory system, each machine gets a customized Kickstart file served dynamically.
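A minimal sketch of the Bash/sed variant of that rendering step — token names, hostname, and hash value are all illustrative; Jinja2 implements the same idea with {{ }} placeholders:

```shell
#!/bin/bash
# render.sh -- minimal sed-based templating sketch (illustrative values)
set -euo pipefail

HOSTNAME_FQDN="web-01.internal.example.com"
ROOT_HASH='$6$examplesalt$examplehashvalue'

template='network --hostname=@HOSTNAME@
rootpw --iscrypted @ROOT_HASH@'

# sed treats $ literally in the replacement, so pre-hashed $6$... strings
# pass through intact; | is used as the delimiter to avoid clashes
rendered=$(printf '%s\n' "$template" \
  | sed -e "s|@HOSTNAME@|${HOSTNAME_FQDN}|" \
        -e "s|@ROOT_HASH@|${ROOT_HASH}|")

printf '%s\n' "$rendered"
```

The same two-command pipeline, pointed at a role template and fed values from the inventory, is enough for small fleets; Jinja2 earns its keep once templates need conditionals and loops.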

A pattern worth implementing explicitly: separate the concerns of the base template (storage layout, package selection, hardening) from the identity layer (hostname, management IP, environment variables, secrets). The base template is stable and changes infrequently. The identity layer changes with every deployment and should be composed at render time from the CMDB rather than stored in the template itself. This separation makes the base template independently testable — you can validate it against a set of synthetic identity fixtures — and keeps CMDB-derived values out of version control entirely.

Version Control and Testing as a Real CI/CD Pipeline

The conclusion section of this article makes the philosophical case for treating infrastructure as code. Here is what that looks like in practice for a Kickstart repository.

The repository layout that works well for teams managing multiple roles and multiple OS versions:

repository layout
kickstart-repo/
├── templates/
│   ├── base.ks.j2           # shared base template
│   ├── web-server.ks.j2     # role-specific overlay
│   └── database.ks.j2
├── vars/
│   ├── rocky9.yml           # version-specific variables
│   └── rocky10.yml
├── tests/
│   └── validate.sh          # ksvalidator wrapper
└── Makefile                 # render + validate targets

A minimal CI pipeline for this repository does three things on every commit: renders all templates using the variable files, runs ksvalidator against every rendered output, and optionally fires a virt-install test deployment in a dedicated test environment. The last step is expensive — a full VM install takes 5–15 minutes — but it catches classes of errors that ksvalidator cannot, including %post script failures and storage layout incompatibilities.

tests/validate.sh
#!/bin/bash
set -euo pipefail

RENDERED_DIR="./rendered"
FAILED=0

for ks in "$RENDERED_DIR"/*.cfg; do
  echo "Validating: $ks"
  if ! ksvalidator -v RHEL9 "$ks"; then
    echo "FAIL: $ks"
    FAILED=1
  fi
done

exit $FAILED

Treat Kickstart merge requests with the same review standards as application code. Storage layout changes in particular deserve careful review — a single character typo in a clearpart directive on a production deployment can wipe the wrong disk on every machine in a rack before anyone notices.

Two additional pipeline stages that the standard ksvalidator + virt-install loop misses: compliance scanning and drift detection. After a virt-install test deployment completes, run oscap against the installed VM disk image to verify that the applied SCAP profile passes at the expected score before the image is promoted. This catches regressions where a package addition or %post script change unintentionally breaks a CIS or STIG control — finding them in CI is far preferable to finding them in a post-production compliance scan. For drift detection, periodically run your Kickstart CI pipeline against images that have been live in production for 30 or 90 days, comparing the SCAP baseline against the current state. This tells you whether configuration management is holding the intended state or whether configuration drift has accumulated. Kickstart and configuration management together should produce a system that passes its compliance baseline at install time and holds it indefinitely.

Always validate Kickstart files before using them in production. The ksvalidator utility, included in the pykickstart package, checks syntax:

$ dnf install pykickstart && ksvalidator rocky9-base.cfg

This catches syntax errors before they cause a production deployment to fail at 2 a.m. Note that ksvalidator confirms syntax and flags deprecated options, but it does not validate the content of %pre, %post, or %packages sections. It cannot guarantee a successful install — only that the file is syntactically correct.

The pykickstart package also ships ksverdiff, which is invaluable when migrating files between RHEL versions:

$ ksverdiff --from RHEL9 --to RHEL10

This lists every directive that was added, removed, or changed between the two versions, giving you a precise checklist for migration rather than discovering incompatibilities mid-deployment.

For a full end-to-end test without touching real hardware, deploy to a virtual machine using virt-install:

terminal
$ virt-install \
  --name test-rocky9 \
  --ram 2048 \
  --vcpus 2 \
  --disk path=/var/lib/libvirt/images/test-rocky9.qcow2,size=40 \
  --os-variant rocky9 \
  --network bridge=virbr0 \
  --location /data/isos/Rocky-9-x86_64-dvd.iso \
  --initrd-inject=/etc/kickstart/rocky9-base.cfg \
  --extra-args="inst.ks=file:/rocky9-base.cfg console=ttyS0,115200" \
  --nographics \
  --noreboot

The --noreboot flag is particularly valuable during testing: the machine installs and shuts down, allowing you to inspect it before committing to a full boot. The --initrd-inject flag injects the Kickstart file directly into the initrd so no external HTTP server is required for the test run.
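With the VM halted, libguestfs tools (the libguestfs-tools package, assumed installed on the hypervisor) can read the installed disk directly, using the domain name from the virt-install command above:

```text
$ virt-cat -d test-rocky9 /root/ks-post.log
$ virt-cat -d test-rocky9 /etc/ssh/sshd_config | grep -i permitrootlogin
$ virt-ls -d test-rocky9 /var/log/
```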

Security: The Kickstart File Itself Is a Target

Before the mitigations: understand the threat. An attacker with access to your management network VLAN during a provisioning window can do two things. First, they can passively watch TFTP and HTTP traffic and harvest the Kickstart file as it is delivered to booting machines — getting a hashed root password and any SSH key material embedded in the file. A SHA-512 hash from a weak password is crackable offline in hours with modern GPU hardware. Second, they can actively inject a modified Kickstart file by poisoning your PXE response or DNS — delivering a backdoored %post script that installs at the moment the machine is most vulnerable, before any endpoint security tooling is running.

This is not a theoretical attack. Management networks are frequently trusted implicitly and monitored poorly. The provisioning window — when a machine is booting and fetching its Kickstart file — is a brief but real exposure. The mitigations below address each part of that threat surface.

Practical protections: serve Kickstart files over HTTPS with a certificate that clients validate; restrict access to the Kickstart server to the management network; consider rotating credentials immediately post-install so that even if the Kickstart file is compromised, the installed credential is already obsolete; use a secrets management system like HashiCorp Vault and have the %post script retrieve secrets at install time using a one-time token rather than embedding them in the file.
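A sketch of the Vault retrieval pattern inside %post — the server URL, secret field name, and destination path are all assumptions, and the single-use response-wrapping token is injected by the rendering layer:

```text
%post --log=/root/ks-vault.log
#!/bin/bash
set -euo pipefail
# WRAP_TOKEN is a single-use Vault response-wrapping token injected at render time
WRAP_TOKEN="s.INJECTED-AT-RENDER-TIME"
VAULT_ADDR="https://vault.internal.example.com:8200"

# Unwrapping consumes the token, so a captured Kickstart file cannot be replayed
secret_json=$(curl -sf -X POST -H "X-Vault-Token: ${WRAP_TOKEN}" \
  "${VAULT_ADDR}/v1/sys/wrapping/unwrap")

# Extract the CM enrollment key (field name illustrative; jq must be in %packages)
mkdir -p /etc/cm-agent
install -m 0600 /dev/null /etc/cm-agent/enroll.key
printf '%s' "$secret_json" | jq -r '.data.enroll_key' > /etc/cm-agent/enroll.key
%end
```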

Pro Tip

For highly sensitive environments, the Kickstart file itself should be generated on-demand per machine, served once, and then invalidated. This ephemeral Kickstart approach means no persistent store of installation credentials exists.

Infrastructure as Code, Starting at Bare Metal

Kickstart represents a philosophy that is worth internalizing: infrastructure should be defined in code, version-controlled, tested, and repeatable. A machine built by running through a graphical installer is essentially undocumented — the decisions made during that session exist only in the resulting filesystem, not in any artifact you can inspect, diff, or reproduce.

A Kickstart file is a specification. It says exactly what a system should be at the moment of birth. Combined with configuration management that maintains ongoing state, you have a complete, auditable description of your infrastructure from initial boot to the running state. When a machine needs to be rebuilt, you rebuild it from the specification. When the specification needs to change, you change it in version control, test in staging, and promote to production.

The question worth addressing directly: at what scale does this investment pay off? The honest answer is sooner than most people expect. Even at five machines, an install that demands 45 minutes of an administrator's attention through the graphical wizard drops to a few minutes of hands-off setup with Kickstart — and the Kickstart result is identical every time. At 20 machines, the difference is not just time but correctness: manual installs accumulate subtle differences that cause environment-specific bugs that consume debugging time for months. At 100 machines, unautomated provisioning is simply not viable.

The break-even point for writing a production Kickstart file is roughly one deployment run. After that, every subsequent deployment is essentially free — consistent, documented, and recoverable. The storage layout decisions you make once are applied identically across every machine in the fleet. The security hardening you write once is enforced everywhere, automatically.

The teams that build systems that way — that treat bare metal provisioning as a software engineering problem — consistently outperform teams that treat it as a manual craft. They recover faster from failures, scale with less operational overhead, and spend more time building capabilities than fighting fires.

This reflects a foundational principle running through Site Reliability Engineering: repetitive manual work is an operational liability, not a neutral cost. When automation is available and the work is clearly defined, engineering time spent doing it by hand is engineering time not spent improving the system. The Google SRE Book (Beyer et al., O'Reilly, 2016) identifies this pattern as toil — work that scales with service growth but yields no lasting improvement — and treats eliminating it as a first-class engineering responsibility. Gene Kim's The Phoenix Project reaches the same conclusion from a DevOps perspective: unautomated operations bottlenecks are where delivery pipelines collapse. Bare metal provisioning is not a one-time craft task; it is an operational process that should be specified, tested, and repeatable.

Rocky Linux 10: What Changes in Your Kickstart Files

Rocky Linux 10.0 ("Red Quartz") shipped on June 11, 2025, followed by Rocky Linux 10.1 in late 2025. Both remain actively supported. Several changes directly affect Kickstart files written for Rocky Linux 9.

Root account disabled by default

In Rocky Linux 10, Anaconda disables the root account by default, so installation now requires creating an administrative user with full sudo privileges. If you explicitly want root login over SSH, you must also add --allow-ssh to the rootpw directive — a flag that did not exist in the Rocky Linux 8 era. Without either an explicit user directive creating a wheel-group member or an explicitly enabled root account, an unattended Rocky Linux 10 install produces a system with no accessible login path.

A Rocky Linux 10 compatible credential block looks like:

kickstart (Rocky Linux 10)
# Root account with explicit SSH access enabled
rootpw --iscrypted --allow-ssh $6$rounds=5000$somerandomsalt$longhashstring...

# Required: a wheel-group admin user for sudo access
user --groups=wheel --name=sysadmin --iscrypted --password=$6$...insert_hash_here...

Password hashing algorithm change

Rocky Linux 10.1 changes the default password hashing algorithm for new users from SHA-512 (the $6$ prefix) to yescrypt (the $y$ prefix). For automated deployments where the hash is generated externally and placed in the Kickstart file, you can still supply a SHA-512 hash with --iscrypted — it remains valid. But if you are generating hashes inline or via scripts, be aware that openssl passwd -6 continues to produce SHA-512, while newer shadow-utils on RL10 systems will default to yescrypt for interactively created passwords. Use openssl passwd -6 for cross-version compatibility in Kickstart files.
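To generate that hash reproducibly in a pipeline (the fixed salt here is for illustration only — omit -salt in practice so openssl picks a random one, and never commit real passphrases):

```shell
# Generate a Kickstart-ready SHA-512 hash on any build host.
HASH=$(openssl passwd -6 -salt examplesalt 'S0me-Str0ng-Passphrase')
echo "$HASH"   # begins with $6$examplesalt$ and drops into rootpw/user --iscrypted as-is
```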

Architecture baseline raised

Rocky Linux 10 no longer supports the x86-64-v2 microarchitecture. The new baseline is x86-64-v3, which corresponds roughly to Intel Haswell (2013) and AMD Excavator (2015) generation CPUs. Pre-Haswell Intel processors — including many older Xeon E5 v1 and E5 v2 era servers — will not run Rocky Linux 10. Before planning a migration, audit your hardware fleet against the x86-64-v3 feature requirements. Rocky Linux 10 also removes 32-bit compatibility for x86_64 entirely.

VNC replaced by RDP for remote graphical installation

Rocky Linux 10 replaces VNC with RDP (Remote Desktop Protocol) for graphical remote access during installation. This affects kernel boot options: inst.vnc is gone; use inst.rdp instead. For fully unattended Kickstart deployments this change is irrelevant, but if your deployment workflow includes any graphical installer interaction (for example, inspecting the installer UI during a test run), update your tooling accordingly.

Third-party repository GUI support removed

Anaconda's graphical interface on Rocky Linux 10 no longer supports adding third-party repositories during initial installation through the GUI. The correct path for any additional repo configuration is now either the inst.addrepo kernel boot option or Kickstart repo directives — exactly what production Kickstart files already do. This change reinforces the Kickstart-first approach described throughout this guide.

Migration Warning

Rocky Linux does not support in-place upgrades between major versions. Moving from Rocky Linux 9 to Rocky Linux 10 requires a fresh installation. Use ksverdiff --from RHEL9 --to RHEL10 to get an exact list of directive changes before attempting to adapt existing Kickstart files.

RL9 to RL10 Migration Checklist

Ordered by impact and likelihood of breaking an unattended install. Address these in sequence before running any RL10 deployments.

Critical — install will fail without this
Add explicit user --groups=wheel --name=sysadmin --iscrypted --password=... directive. Without it, an unattended RL10 install produces a system with no accessible login path.
Critical — install will fail without this
If root SSH access is required, add --allow-ssh to the rootpw directive. The flag already exists in RL9 but is easy to omit there — make it explicit in every RL10 file.
High — hardware may be incompatible
Audit hardware fleet against x86-64-v3 requirements. Pre-Haswell Intel (Xeon E5 v1, E5 v2) and pre-Excavator AMD processors will not run RL10. Run grep -ow -E 'avx2|bmi2|fma' /proc/cpuinfo | sort -u on candidate hardware — all three flags must appear in the output.
Medium — affects hash generation scripts
If you generate password hashes programmatically (scripts, CI pipelines), verify they still use openssl passwd -6 to produce SHA-512. SHA-512 hashes remain valid in RL10, but RL10 systems generating new passwords interactively will default to yescrypt. Avoid mixing hash types across your fleet.
Medium — affects remote install workflows
Replace inst.vnc kernel boot options with inst.rdp. Update any tooling that connects to the installer graphically. Fully unattended Kickstart installs are unaffected.
Low — verify directive compatibility
Run ksverdiff --from RHEL9 --to RHEL10 against your existing Kickstart files. Review every changed or removed directive. Some directives deprecated in RL9 are removed entirely in RL10.
Low — test, do not assume
Run a full virt-install test of each adapted Kickstart file against a RL10 ISO before any production deployment. RL10 Anaconda changes may surface installation behaviors not caught by ksvalidator.
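The hardware audit step above can be wrapped in a small report script — avx2, bmi2, and fma are representative x86-64-v3 flags, not the complete feature set (which also includes avx, bmi1, f16c, movbe, and others):

```shell
#!/bin/bash
# x86-64-v3 spot check: report each representative flag separately,
# since a single combined grep would pass if any one flag matched
for f in avx2 bmi2 fma; do
  if grep -qw "$f" /proc/cpuinfo; then
    echo "$f: present"
  else
    echo "$f: MISSING"
  fi
done
```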

Quick Reference: The Minimal Production Kickstart Skeleton

rocky9-base.cfg
# Rocky Linux 9 Base Installation Kickstart
# Maintained in: git.internal.example.com/infra/kickstart
# Version: 1.0.0

#version=RHEL9
text
skipx
eula --agreed
firstboot --disabled

# Installation source
url --url="http://mirror.internal.example.com/rocky/9/BaseOS/x86_64/os/"
repo --name="AppStream" --baseurl="http://mirror.internal.example.com/rocky/9/AppStream/x86_64/os/"

# Locale
lang en_US.UTF-8
keyboard --xlayouts='us'
timezone America/Chicago --utc

# Network
network --bootproto=dhcp --device=ens3 --activate --onboot=yes
network --hostname=newhost.internal.example.com

# Authentication
rootpw --iscrypted $6$...insert_hash_here...
user --groups=wheel --name=sysadmin --iscrypted --password=$6$...insert_hash_here...

# Security
selinux --enforcing
firewall --enabled --service=ssh

# Storage
ignoredisk --only-use=sda
zerombr
clearpart --all --initlabel --disklabel=gpt
bootloader --location=mbr --boot-drive=sda --append="crashkernel=auto"

part biosboot  --fstype=biosboot  --size=2    --ondisk=sda
part /boot/efi --fstype=efi       --size=1024 --ondisk=sda
part /boot     --fstype=xfs       --size=2048 --ondisk=sda
part pv.01     --fstype=lvmpv     --size=1    --grow --ondisk=sda

volgroup vg_os pv.01

logvol /     --vgname=vg_os --size=10240 --fstype=xfs --name=lv_root
logvol /home --vgname=vg_os --size=4096  --fstype=xfs --name=lv_home --fsoptions="nodev,nosuid"
logvol /tmp  --vgname=vg_os --size=2048  --fstype=xfs --name=lv_tmp  --fsoptions="nodev,nosuid,noexec"
logvol /var  --vgname=vg_os --size=8192  --fstype=xfs --name=lv_var
logvol swap  --vgname=vg_os --size=4096  --name=lv_swap

# Reboot automatically after install
reboot --eject

%packages --ignoremissing --exclude-weakdeps
@^minimal-environment
@core
openssh-server
vim-enhanced
chrony
firewalld
audit
aide
-telnet
-rsh
-ypserv
%end

%pre --interpreter=/bin/bash --log=/tmp/ks-pre.log
# Pre-install disk validation or dynamic config
%end

%post --log=/root/ks-post.log
#!/bin/bash
set -euo pipefail
# Hardening, agent bootstrap, etc.
%end

%onerror
#!/bin/bash
logger -t kickstart "FAILED: $(hostname -I)"
poweroff
%end


Steps to Deploy Rocky Linux with Kickstart

Step 1: Write global directives and define storage layout

Configure the installation source, locale, keyboard, timezone with UTC hardware clock, network settings, and hashed credentials. Define a GPT-based LVM layout with separate mount points for /home, /tmp, /var, /var/log, and /var/log/audit, applying nodev, nosuid, and noexec mount options where required by CIS Benchmark Level 2.

Step 2: Define packages, %pre logic, and %post hardening

Use the %packages section with --exclude-weakdeps and explicit exclusions for legacy protocols. Add a %pre script for dynamic disk detection or LVM cleanup if needed. Write a %post script with set -euo pipefail that applies kernel hardening via sysctl, hardens SSH configuration, deploys authorized keys, configures firewalld, initializes AIDE, adds auditd rules, and bootstraps the configuration management agent.

Step 3: Validate the file and deliver it via PXE or embedded ISO

Run ksvalidator against the Kickstart file using the pykickstart package to catch syntax errors before deployment. Use ksverdiff to compare directive changes between RHEL versions. Test a full install using virt-install with the --noreboot flag. For production delivery, serve the file over HTTPS from a PXE infrastructure that passes inst.ks= on the kernel command line, or embed it in a custom ISO for air-gapped environments using mkksiso from the lorax package.

Step 4: Adapt for Rocky Linux 10 if migrating

If targeting Rocky Linux 10, update your Kickstart file to add an explicit user directive with wheel group membership since the root account is disabled by default. Add --allow-ssh to rootpw if root SSH access is required. Verify hardware meets the x86-64-v3 microarchitecture baseline. Run ksverdiff --from RHEL9 --to RHEL10 for a full list of directive changes.
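A minimal sketch of the Rocky Linux 10 additions, using a hypothetical deploy user; the hash values are placeholders, not working credentials:

```
# Rocky Linux 10: root is locked by default, so an admin user is mandatory
user --name=deploy --groups=wheel --iscrypted --password=$6$REPLACE_WITH_HASH

# Only if root SSH access is genuinely required:
rootpw --iscrypted --allow-ssh $6$REPLACE_WITH_HASH
```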

Frequently Asked Questions

What is the difference between %pre and %post in a Kickstart file?

The %pre section runs inside the installation environment before Anaconda processes any storage directives or installs packages. It has access only to the minimal tools in the initrd. The %post section runs after all packages are installed, chrooted into the newly installed system by default. Use %pre for dynamic storage configuration or disk wiping; use %post for hardening, SSH key deployment, and bootstrapping configuration management agents.
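The standard pattern for dynamic storage is to have %pre write a directive fragment that the main file pulls in with %include. A sketch, assuming the goal is to install to the smallest attached disk:

```
%pre --interpreter=/bin/bash --log=/tmp/ks-pre.log
# Find the smallest disk and emit storage directives targeting it
TARGET=$(lsblk -dnb -o NAME,SIZE,TYPE | awk '$3=="disk"' | sort -k2 -n | head -1 | awk '{print $1}')
echo "ignoredisk --only-use=${TARGET}" > /tmp/part-include
echo "clearpart --all --initlabel --drives=${TARGET}" >> /tmp/part-include
%end

# Later, in the main body of the Kickstart file:
%include /tmp/part-include
```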

Is it safe to store passwords in a Kickstart file?

Passwords in Kickstart files should always be stored as hashed values using the --iscrypted option, never in plain text. Generate a SHA-512 hash with openssl passwd -6 or mkpasswd -m sha-512 and place the resulting $6$... string in the file. Even so, Kickstart files hosted on web servers should be served over HTTPS, restricted to the management network, and ideally generated on-demand and invalidated after a single use. Best practice is to rotate credentials immediately post-install via configuration management so that the installed credential does not match what is in the file.
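For example — the salt and password here are illustrative, and openssl prints the resulting crypt string to paste into the --iscrypted field:

```shell
# Generate a SHA-512 crypt hash for rootpw/user --iscrypted
HASH=$(openssl passwd -6 -salt 'Wq3r9XkL' 'example-password')
echo "$HASH"
```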

Why should SELinux be set to enforcing in a Kickstart file?

SELinux in enforcing mode is a mandatory access control layer that limits what processes can do even after an attacker gains a foothold through an application-level vulnerability. Disabling it to avoid AVC denials removes a meaningful layer of defense. Any application that misbehaves under SELinux enforcing mode is signaling a policy gap that should be fixed, not a reason to disable the entire security subsystem. Set selinux --enforcing in Kickstart and leave it enforcing in production.

What changed in Kickstart files for Rocky Linux 10?

Rocky Linux 10 introduces several changes that require Kickstart file updates. The root account is disabled by default in the installer, making an explicit user directive with wheel group membership mandatory for unattended installs. If root SSH login is needed, add --allow-ssh to the rootpw directive. The default password hashing algorithm for new users changes from SHA-512 ($6$) to yescrypt ($y$) in Rocky Linux 10.1, though SHA-512 hashes remain valid in iscrypted fields. The architecture baseline is raised to x86-64-v3, dropping support for pre-Haswell Intel and pre-Excavator AMD processors. VNC for remote graphical installation is replaced by RDP. Use ksverdiff --from RHEL9 --to RHEL10 to get a complete list of directive-level changes before migrating files.

Can a Kickstart file apply a CIS or DISA STIG security profile automatically?

Yes. The %addon org_fedora_oscap section integrates OpenSCAP into the Anaconda installer and applies a SCAP profile — CIS Level 1 or 2, DISA STIG, PCI-DSS, or others — during the installation itself, before the first boot. This is preferable to post-install remediation for configurations that affect partition mount options or bootloader parameters, since those require a reboot to take effect and applying them at install time means the machine boots compliant from the start. Add openscap, openscap-scanner, scap-security-guide, and openscap-anaconda-addon to the %packages section, then configure the profile ID in the %addon block. Find available profile IDs by running oscap info against the SCAP content file for your target OS version.
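A sketch of the addon block; the profile ID shown is the CIS profile as commonly shipped in scap-security-guide, but confirm the exact ID with oscap info against your content version:

```
%packages
openscap
openscap-scanner
scap-security-guide
openscap-anaconda-addon
%end

%addon org_fedora_oscap
    content-type = scap-security-guide
    profile = xccdf_org.ssgproject.content_profile_cis
%end
```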

How should I handle the UEFI versus BIOS boot partition question in a Kickstart storage layout?

Include both a biosboot partition (2 MiB, no filesystem) and a /boot/efi partition (FAT32, 1 GiB recommended) in your storage layout when deploying to heterogeneous hardware. The biosboot partition is used by GRUB 2 on BIOS/CSM systems with GPT disks — GPT eliminates the legacy MBR gap that GRUB previously staged itself into, so a dedicated biosboot partition takes its place. The /boot/efi partition is used on UEFI systems. Each boot mode uses the relevant partition and ignores the other. Carrying both costs 2 MiB and eliminates a class of installer failures in mixed-mode environments. If your hardware fleet is exclusively UEFI, you can omit biosboot; if it is exclusively legacy BIOS with GPT, you can omit /boot/efi — but confirm the boot mode is truly fixed before removing either.
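The dual-mode portion of the layout can be sketched as follows, with sizes in MiB per the recommendations above:

```
# BIOS/CSM + GPT: GRUB 2 core image lives here; ignored on UEFI
part biosboot  --fstype=biosboot --size=2
# UEFI: EFI System Partition; ignored on legacy BIOS
part /boot/efi --fstype=efi      --size=1024
# Shared by both boot modes
part /boot     --fstype=xfs      --size=1024
```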