Every modern Linux server you interact with -- whether it hosts a web application, routes network traffic, or runs a Kubernetes node -- boots and operates under the supervision of systemd. It is, without exaggeration, the most consequential piece of Linux userspace software written in the past two decades. And yet, many administrators only scratch its surface, treating it as a black box that somehow makes nginx start at boot.
This guide will change that. We are going to work through systemd from the ground up: how unit files are structured, how the dependency graph resolves, how to build your own services and timers, and how to wield journalctl like the diagnostic scalpel it was designed to be.
A Brief History: Why systemd Exists
Before systemd, Linux systems booted using SysVinit, a design dating back to 1983 in its original UNIX System V form and ported to Linux in the early 1990s. SysVinit executed shell scripts sequentially, one after another, during boot. It was simple and predictable -- but painfully slow and brittle on modern hardware where dozens of services need to start concurrently.
Canonical's Upstart (2006-2015) attempted to fix this with an event-driven model, but systemd creator Lennart Poettering argued that Upstart still placed too much burden on the administrator. As he explained at FOSDEM 2025, the core philosophical difference was this: with Upstart, the administrator told the computer which trigger should fire which action in order to build a full tree of actions reaching some goal. With systemd, you specify the goal, and the computer figures out the rest.
Poettering and co-developer Kay Sievers, both working at Red Hat at the time, began hashing out the basic ideas for what was initially called "Babykit" on a flight back from the Linux Plumbers Conference in 2009. Poettering's April 2010 blog post "Rethinking PID 1" introduced the project publicly, with the first release following shortly after. They designed it as a transactional system where the administrator declares a desired end state and systemd calculates the dependency graph to reach it. By 2015, every major Linux distribution had adopted systemd as its default init system.
Today, systemd is far more than an init system. It is a suite of basic building blocks for a Linux OS, providing a system and service manager that runs as PID 1 and starts the rest of the system. That suite includes logging (journald), network configuration (networkd), DNS resolution (resolved), temporary file management (tmpfiles), login session tracking (logind), and scheduled task execution (timers), among other components.
Unit Files: The Building Blocks
Everything systemd manages is called a unit. Units are defined in configuration files called unit files, and they come in several types identified by their file extension: .service for daemons and processes, .timer for scheduled tasks, .socket for socket-based activation, .target for grouping units into synchronization points, .mount for filesystem mount points, .device for kernel device nodes, and .path for filesystem path monitoring.
Unit files live in three directories, listed here in order of increasing priority:
/usr/lib/systemd/system/ -- Package-installed units (lowest priority)
/run/systemd/system/ -- Runtime transient units
/etc/systemd/system/ -- Administrator-created units and overrides (highest priority)
If you need to customize a package-provided unit, never edit the file in /usr/lib/systemd/system/ directly. Instead, create an override using systemctl edit, which places a drop-in file in /etc/systemd/system/<unit>.d/override.conf.
Anatomy of a Service Unit File
Unit files use an INI-style format organized into sections. A service unit file contains three primary sections: [Unit], [Service], and [Install]. Let's examine a typical SSH daemon unit file:
[Unit]
Description=OpenSSH server daemon
Documentation=man:sshd(8) man:sshd_config(5)
After=network.target sshlog.target auditd.service
Wants=sshlog.target

[Service]
Type=notify
EnvironmentFile=-/etc/sysconfig/sshd
ExecStart=/usr/sbin/sshd -D $OPTIONS
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
RestartSec=42s

[Install]
WantedBy=multi-user.target
The [Unit] section provides metadata and dependency information. Description gives a human-readable name used by systemd in status messages. After controls ordering -- it tells systemd that this unit must start after the listed units (network.target, sshlog.target, and auditd.service) have been activated, but it does not create a hard dependency on any of them. Wants creates a soft dependency: systemd will try to start sshlog.target when sshd starts, but sshd will not fail if that target is unavailable.
The [Service] section defines the runtime behavior. The Type directive is one of the most important decisions you'll make when writing a unit file. systemd supports several service types: simple (the default) means the main process is specified in ExecStart and systemd considers the service started immediately. forking is used for traditional daemons that fork a child process and exit the parent. notify means the service will send a readiness notification via sd_notify() when it has finished initializing. oneshot indicates a short-lived process that systemd should wait for before continuing to start dependent units.
The Restart directive controls automatic restart behavior. Setting it to on-failure means systemd will restart the service if it exits with a non-zero exit code, is terminated by a signal, times out, or triggers a watchdog failure. Other options include always (restart regardless of exit status), on-abnormal (restart only on signal or timeout), and no (never restart). The RestartSec value provides a delay between the service stopping and being restarted, which prevents tight restart loops from consuming system resources.
The [Install] section defines how the unit integrates into the boot sequence. WantedBy=multi-user.target means that when you run systemctl enable sshd, systemd creates a symlink in the multi-user.target.wants/ directory. This effectively makes multi-user.target pull in sshd as a dependency during boot, similar to how the old SysVinit runlevel 3 would start network services.
Targets: Organizing the Boot Sequence
Targets are systemd's replacement for SysVinit runlevels. They act as synchronization points -- named groups of units that represent a specific system state. The boot process moves through a chain of targets, each building on the previous one.
sysinit.target -- Early system initialization (filesystem mounts, swap, random seed)
basic.target -- Basic system setup (timers, paths, sockets)
multi-user.target -- Full multi-user system with networking (equivalent to runlevel 3)
graphical.target -- Multi-user plus graphical login (equivalent to runlevel 5)
Targets are inclusive. When graphical.target activates, it also pulls in multi-user.target, which pulls in basic.target, and so on. You can inspect this chain with:
To see the default target your system boots into:
To change the default target:
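Assuming a systemd-based distribution with systemctl on the PATH, the three operations look like this (graphical.target is illustrative; headless servers usually default to multi-user.target):

```shell
# Inspect the target chain (the output is a tree and can be long)
$ systemctl list-dependencies graphical.target

# Show the default boot target
$ systemctl get-default

# Change the default boot target
$ sudo systemctl set-default multi-user.target
```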
You can switch between targets at runtime using systemctl isolate, which stops all units not required by the specified target. Be careful: running systemctl isolate rescue.target on a remote server will drop your SSH connection. Always ensure you have out-of-band console access (IPMI, KVM, cloud console) before isolating to single-user targets on remote machines.
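For example, to move a running machine into the non-graphical multi-user state:

```shell
# Stops every unit not required by multi-user.target
$ sudo systemctl isolate multi-user.target
```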
Hands-On: Creating a Custom Service
Let's build a production-grade service unit for a custom Node.js application. This example includes security hardening directives that are considered best practice for any service exposed to a network.
First, create the unit file:
[Unit]
Description=My Node.js Application
Documentation=https://internal.example.com/docs/myapp
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=myapp
Group=myapp
WorkingDirectory=/opt/myapp
Environment=NODE_ENV=production
Environment=PORT=3000
ExecStart=/usr/bin/node app.js
ExecReload=/bin/kill -HUP $MAINPID
Restart=always
RestartSec=10
TimeoutStopSec=30

# Security hardening
NoNewPrivileges=yes
PrivateTmp=yes
ProtectSystem=strict
ProtectHome=yes
ReadWritePaths=/opt/myapp/data
ProtectKernelTunables=yes
ProtectControlGroups=yes

# Logging
StandardOutput=journal
StandardError=journal
SyslogIdentifier=myapp

[Install]
WantedBy=multi-user.target
Now activate it:
$ sudo systemctl daemon-reload           # Reload unit files
$ sudo systemctl enable myapp.service    # Enable at boot
$ sudo systemctl start myapp.service     # Start now
$ sudo systemctl status myapp.service    # Verify
The security directives deserve attention. ProtectSystem=strict mounts the entire filesystem hierarchy as read-only for the service, except for paths explicitly listed in ReadWritePaths. PrivateTmp=yes gives the service its own private /tmp and /var/tmp directories. NoNewPrivileges=yes ensures that the service and its child processes cannot gain new privileges through setuid, setgid, or filesystem capabilities.
You can audit the security posture of any service with systemd's built-in analysis tool:
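Using the myapp.service unit from above as the subject:

```shell
# Prints a per-directive assessment and an overall exposure score
$ systemd-analyze security myapp.service
```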
This produces a detailed scorecard evaluating each sandboxing directive, giving you a clear roadmap for tightening your configuration.
Understanding Dependencies: Wants, Requires, After, and Before
Dependency management in systemd operates on two orthogonal axes: requirement dependencies (what must be running) and ordering dependencies (in what sequence things start).
Wants= creates a soft dependency. If unit A has Wants=B, systemd will attempt to start B when A is activated, but A will not fail if B fails to start. Requires= creates a hard dependency -- if B fails to start or is later deactivated, A will be stopped as well.
After= and Before= control ordering only. They do not create dependencies by themselves. This is a common point of confusion. If service A has After=B, systemd will ensure B finishes starting before A begins -- but only if both A and B are being started in the same transaction. If B is not being started at all, the After directive has no effect.
In practice, you usually combine both: Wants=B plus After=B means "try to start B, and if B is being started, wait for it to finish before starting me."
To visualize the full dependency tree of any unit:
To see reverse dependencies (what depends on a given unit):
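Both directions use systemctl list-dependencies; sshd.service here is just an example unit:

```shell
# Forward: everything sshd.service pulls in or orders after
$ systemctl list-dependencies sshd.service

# Reverse: everything that pulls in sshd.service
$ systemctl list-dependencies --reverse sshd.service
```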
Systemd Timers: The Modern Cron
Systemd timers offer a more capable alternative to cron for scheduling recurring tasks. They provide dependency management, centralized logging via the journal, randomized delays to prevent thundering herd problems, persistent scheduling that catches up on missed runs, and second-level precision (cron is limited to minute granularity).
A timer consists of two files: a .timer unit that defines the schedule, and a .service unit that defines the work. By convention, they share the same base name.
Example: Automated Daily Backup
Create the service unit:
[Unit]
Description=Daily Database Backup

[Service]
Type=oneshot
User=backup
ExecStart=/usr/local/bin/backup.sh
StandardOutput=journal
StandardError=journal
Create the timer unit:
[Unit]
Description=Run backup daily at 2 AM

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true
RandomizedDelaySec=15min

[Install]
WantedBy=timers.target
Enable and start it:
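Assuming the two files above were saved as backup.service and backup.timer under /etc/systemd/system/, you enable the timer, not the service:

```shell
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now backup.timer
```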
The OnCalendar directive uses systemd's calendar event syntax, which follows the format DayOfWeek Year-Month-Day Hour:Minute:Second. Some practical examples include Mon..Fri *-*-* 09:00:00 for weekdays at 9 AM, *-*-01 00:00:00 for the first of every month at midnight, and shorthand forms like daily, weekly, and hourly.
Persistent=true tells systemd to record when the timer last fired. If the system was powered off during a scheduled run, systemd will trigger the service immediately upon next boot. RandomizedDelaySec=15min adds up to 15 minutes of random delay before execution, preventing multiple systems from hitting a backup server or database simultaneously -- the "thundering herd" problem.
You can validate calendar expressions using systemd-analyze:
$ systemd-analyze calendar "Mon..Fri *-*-* 09:00:00"
$ systemd-analyze calendar --iterations=5 "daily"
To list all active timers and see when they last fired and when they will fire next:
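The listing shows the NEXT and LAST trigger times alongside the unit each timer activates:

```shell
$ systemctl list-timers
$ systemctl list-timers --all    # Include inactive timers as well
```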
Monotonic Timers
Beyond calendar-based scheduling, systemd supports monotonic timers that fire relative to a specific event:
[Timer]
# 15 minutes after boot
OnBootSec=15min
# 1 hour after the service was last activated
OnUnitActiveSec=1h
# 30 seconds after systemd itself started
OnStartupSec=30s
These are useful for tasks like initial health checks after boot or recurring polling intervals that should not be tied to wall-clock time.
Journal Logging with journalctl
The systemd journal, managed by systemd-journald, replaces the traditional syslog approach with a structured, indexed binary log format. Every message carries rich metadata: timestamps, PIDs, UIDs, systemd unit names, boot IDs, and more. This metadata makes filtering and correlation far more powerful than grepping through flat text files.
Essential journalctl Commands
# View all logs from the current boot
$ journalctl -b

# View logs from the previous boot (useful for diagnosing crash causes)
$ journalctl -b -1

# Follow logs in real time for a specific service
$ journalctl -u nginx.service -f

# Filter by time range
$ journalctl --since "2026-02-14 08:00" --until "2026-02-14 12:00"
$ journalctl --since "1 hour ago"

# Filter by priority level (0=emergency through 7=debug)
$ journalctl -p err          # Show errors and above
$ journalctl -p warning -b   # Warnings and above, current boot
Interleaving Multiple Services
One of the journal's most powerful capabilities is correlating logs across related services. If you are debugging an issue between a web server and its backend processor:
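Passing -u more than once selects multiple units at once (the unit names here are illustrative):

```shell
# Merge both units' logs into one chronological stream
$ journalctl -u nginx.service -u myapp.service --since "30 min ago"
```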
This produces a single, chronologically interleaved stream of logs from both services, making it far easier to trace cause and effect across process boundaries.
Structured Output
The journal stores metadata as structured key-value fields. You can output logs as JSON for integration with external tools:
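For example, to dump the last few entries of a unit as formatted JSON (nginx.service is illustrative):

```shell
$ journalctl -u nginx.service -o json-pretty -n 10
```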
You can also filter on any metadata field. To see all log entries from a specific executable path, regardless of which service invoked it:
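Field matches are written as FIELD=value directly on the command line; the path below is illustrative:

```shell
# Every entry whose originating process was this binary
$ journalctl _EXE=/usr/bin/node
```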
To see all available fields in the journal:
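Two related commands cover this:

```shell
$ journalctl --fields            # List every field name present in the journal
$ journalctl -F _SYSTEMD_UNIT    # List all values recorded for one field
```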
Controlling Journal Size
Persistent journal storage is configured in /etc/systemd/journald.conf. The default behavior varies by distribution, but persistent journals are typically capped at 10% of the filesystem holding /var/log/journal/, up to a maximum of 4 GiB.
[Journal]
# persistent, volatile, auto, or none
Storage=persistent
# Maximum disk space for journal files
SystemMaxUse=2G
# Minimum free space to maintain
SystemKeepFree=4G
# Maximum size per individual journal file
SystemMaxFileSize=128M
# Maximum time to retain entries
MaxRetentionSec=1month
# Rate limiting window
RateLimitIntervalSec=30s
# Maximum messages per interval
RateLimitBurst=10000
After modifying this file, restart the journal daemon:
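The daemon runs as a regular unit of its own:

```shell
$ sudo systemctl restart systemd-journald
```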
You can also manage journal disk usage directly:
$ journalctl --disk-usage                 # Show current usage
$ sudo journalctl --vacuum-size=500M      # Shrink to 500 MB
$ sudo journalctl --vacuum-time=2weeks    # Remove entries older than 2 weeks
Advanced Patterns for Production
Drop-in Overrides
Never modify vendor-provided unit files directly. Use drop-in overrides:
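The standard workflow starts with systemctl edit:

```shell
$ sudo systemctl edit nginx.service
```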
This opens an editor for /etc/systemd/system/nginx.service.d/override.conf. Directives here are merged with the original unit file. For example, to increase the stop timeout and add memory limits:
[Service]
TimeoutStopSec=60
MemoryMax=1G
CPUQuota=80%
Additive directives such as ExecStart accumulate across drop-ins rather than replacing the original value. To override ExecStart, you must first reset the directive with an empty assignment: write ExecStart= (empty) followed by ExecStart=/new/command. Without the empty reset, the new value is appended rather than replacing the original -- and for any service type other than oneshot, multiple ExecStart lines are an error. Dependency directives such as After= and Wants= cannot be reset to an empty list at all; from a drop-in they can only be extended. Note that the [Install] section is not processed from drop-in files -- changes to WantedBy require a full unit override via systemctl edit --full.
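As a sketch, a drop-in that swaps in a hypothetical wrapper script would look like this:

```ini
[Service]
# Empty assignment clears the vendor-supplied ExecStart...
ExecStart=
# ...so this replacement (a hypothetical path) takes effect
# instead of being appended
ExecStart=/opt/myapp/bin/start-wrapped
```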
Use systemctl cat nginx.service to view the effective configuration of any unit, including all active drop-in overrides merged in order. This is invaluable for debugging when overrides do not behave as expected. If you need to completely prevent a unit from being started -- even as a dependency of another unit -- use systemctl mask, which symlinks the unit file to /dev/null.
Template Units
Template units allow you to create parameterized service definitions. They use the @ symbol in their filename, and the text after @ becomes available as %i inside the unit file.
[Unit]
Description=Worker Instance %i
After=network.target

[Service]
Type=simple
User=worker
ExecStart=/opt/worker/run --instance %i --config /etc/worker/%i.conf
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
Enable multiple instances:
$ sudo systemctl enable --now [email protected]
$ sudo systemctl enable --now [email protected]
$ sudo systemctl enable --now [email protected]
Each instance runs independently with its own logs, PID tracking, and restart policy.
Boot Performance Analysis
systemd provides excellent tools for diagnosing slow boots:
$ systemd-analyze                    # Total boot time
$ systemd-analyze blame              # Time per unit, sorted
$ systemd-analyze critical-chain     # Critical path through the dependency tree
$ systemd-analyze plot > boot.svg    # Visual timeline as SVG
The critical-chain command is particularly useful because it shows the actual sequence of units on the critical path that determined your total boot time, including the time each unit spent waiting for its dependencies.
Conclusion
systemd is not just an init system -- it is the operational substrate of modern Linux. Understanding its unit files, dependency model, timers, and journal gives you direct control over how your systems boot, run, recover from failure, and report on their own health. The declarative, goal-oriented approach that Poettering and his collaborators introduced in 2010 has proven itself at scale across millions of production systems worldwide.
The most effective way to internalize these concepts is to practice: write a unit file for a service you currently manage, convert a cron job to a systemd timer, or audit the security posture of your running services with systemd-analyze security. The systemd documentation at freedesktop.org is authoritative and exhaustive, and the Arch Wiki systemd pages provide some of the best community-maintained practical guidance available.
Sources: freedesktop.org systemd documentation, DigitalOcean systemd tutorials, Arch Wiki systemd/Journal and systemd/Timers pages, Red Hat Enterprise Linux 9 systemd documentation, FOSDEM 2025 "14 Years of systemd" presentation by Lennart Poettering, ADMIN Magazine interview with Lennart Poettering (2022), SUSE Linux Enterprise Server documentation.