Most administrators interact with systemd-journald through journalctl and leave the daemon itself at its compiled defaults. That works -- until it doesn't. A high-traffic service floods the journal. A reboot wipes three days of debugging context. A compromised system has its logs quietly rewritten. Each of these outcomes is preventable through configuration, and none of them requires exotic tools. All of it lives in /etc/systemd/journald.conf and a handful of journalctl invocations.

This article works through the configuration surface of systemd-journald methodically -- storage, disk quotas, rate limiting, compression, Forward Secure Sealing, journal namespaces, and forwarding strategies. Where the official documentation describes what an option does, this article explains when and why to use it, and what goes wrong if you don't.

Operational surprises this article addresses
  • Your system may be silently losing logs on every reboot. Whether it does depends on a compile-time default that varies by distribution -- not just your journald.conf setting.
  • journald always starts in volatile mode, even with Storage=persistent configured. A crash before systemd-journal-flush.service completes loses those early-boot messages regardless of your configuration.
  • Corrupted journal files from unclean shutdowns are renamed to .journal~ and silently quarantined. They accumulate against your disk quota and are never cleaned up automatically.
  • Forward Secure Sealing is on by default -- and completely inert until you run journalctl --setup-keys. The directive in journald.conf enables the feature; only key generation activates it.
  • The RateLimitBurst= ceiling is not a fixed constant. It scales dynamically with available free disk space using a base-2 logarithm, so your actual burst capacity may exceed the configured value on a well-provisioned system.
  • A verification key sitting on the same disk it is meant to protect is security theater, not a control. It must leave the machine to have any forensic value.
Before you configure anything -- run this first

Three commands that tell you what your system is actually doing right now, independent of what you think you configured:

$ systemd-analyze cat-config systemd/journald.conf

Shows the full merged effective configuration from all drop-in files. Check the commented Storage= line -- the comment reflects the compile-time default your distribution shipped, which may be auto rather than the upstream persistent.

$ journalctl --disk-usage

Shows current journal disk consumption. If you have never set explicit quotas, this tells you how close you are to the default 4 GiB cap.

$ journalctl --list-boots

Shows available boot sessions. If you see only one entry regardless of how many reboots have occurred, your journal is volatile -- logs are not surviving reboots.

How journald Actually Works

systemd-journald is a single-process daemon -- not multi-threaded -- that receives log messages from several sources simultaneously: kernel messages via /dev/kmsg, native journal protocol messages over /run/systemd/journal/socket, stdout and stderr from all systemd-managed services piped through stream sockets, and syslog-compatible messages via /dev/log (a symlink to /run/systemd/journal/dev-log). Every message is stored as a structured, binary journal entry, not a line of text.

That binary format is central to why journald is both powerful and occasionally frustrating. It carries rich metadata per entry -- service unit name, PID, UID, GID, boot ID, monotonic timestamp, wall clock timestamp, and more -- enabling precise filtering that plain text logs cannot match. The tradeoff is that the files are not human-readable without journalctl, and the systemd documentation itself notes that the daemon is implemented as a conventional single-process daemon, which means a blocked output target (such as a hung console) will block the entire daemon.

Note

The configuration file is /etc/systemd/journald.conf. The recommended approach is not to edit this file directly. Instead, create drop-in files in /etc/systemd/journald.conf.d/ with a .conf extension. Drop-in files are parsed after the main file in lexicographic order, and later values override earlier ones. This keeps your changes isolated from vendor defaults and easier to audit.

All settings live in a single [Journal] section. When you open the default journald.conf, every line is commented out -- those commented values represent compile-time defaults. Uncommenting a line and setting it is how you express intent; leaving it commented means "use the default." Restart systemd-journald.service after any change for it to take effect.

Two less-discussed directives worth knowing about from the start: LineMax= controls the maximum line length permitted when converting stream logs (stdout/stderr from services) into journal records. Lines longer than this value are split into multiple records. The default is 48K, which the man page notes is large enough to avoid truncating common log output while still fitting within network datagram limits if forwarding is in use. The other is the journal.storage system credential -- in recent systemd versions, Storage= can be configured at boot via a credential, with values in the configuration file taking precedence over the credential. This matters in credential-based deployment tooling and provisioned image environments.
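As a sketch of the drop-in mechanism applied to LineMax= -- the filename below is our choice, not a convention; any .conf name under the directory works:

```ini
# /etc/systemd/journald.conf.d/10-linemax.conf (hypothetical filename)
[Journal]
# Allow stream (stdout/stderr) lines up to 64K before splitting into
# multiple records; the compiled default is 48K
LineMax=64K
```

As with any journald.conf change, restart systemd-journald.service afterwards for the new limit to take effect.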

Storage Modes: Where Logs Actually Go

The Storage= directive controls where journal data lands and whether it survives a reboot. It accepts four values: volatile, persistent, auto, and none.

  • volatile -- writes to /run/log/journal/ (tmpfs); does not survive reboot. Use for ephemeral containers, diskless systems, or wherever a log aggregator captures everything before shutdown.
  • persistent -- writes to /var/log/journal/ (created if absent); survives reboot. Use on any system where log history matters across reboots, including multi-user systems using per-user journals.
  • auto -- persistent if /var/log/journal/ exists, volatile otherwise. The compile-time default on many distributions -- do not rely on it without checking what your distro shipped.
  • none -- messages are dropped immediately; forwarding still works. Use in pure forwarding-relay environments where local storage is explicitly undesired.

volatile writes only to /run/log/journal/, which is a tmpfs. Logs are fast to write, consume no spinning disk I/O, and are completely wiped on reboot. This is appropriate for ephemeral containers, diskless systems, or environments where a centralized log aggregator captures everything before shutdown.

persistent writes to /var/log/journal/, creating the directory if it does not exist. Logs survive across reboots, and earlier boot sessions remain queryable via journalctl -b -1 (previous boot), -b -2, and so on.

auto behaves like persistent only if /var/log/journal/ already exists, and like volatile otherwise. The existence of the directory is the control mechanism.

A note on defaults: the current upstream systemd man page (as of systemd 260~devel, sourced from the project Git repository dated 2026-01-16 at man7.org) states that Storage= defaults to persistent in the default journal namespace, with the value determined at compile time. In practice, many downstream distributions -- including older Debian-based and Red Hat-based releases -- ship with auto compiled in as the effective default, meaning logs are volatile on any system where /var/log/journal/ does not already exist. To verify what your system compiled in, run systemd-analyze cat-config systemd/journald.conf and inspect the commented default for Storage=. Do not assume; check.

failure mode A production server configured with Storage=auto and no /var/log/journal/ directory silently runs in volatile mode. Every reboot wipes all log history. journalctl --list-boots returns a single entry no matter how many reboots have occurred. The fix is one command: sudo mkdir -p /var/log/journal, then restart the daemon.

The official upstream journald.conf(5) man page (freedesktop.org) states that the auto storage mode writes persistently only when /var/log/journal/ is already present on the filesystem -- the directory's existence is the sole control mechanism. If the directory does not exist, the daemon falls back to volatile behavior silently, with no warning logged and no indication in service status output.

To enable persistent logging, create the directory and restart the daemon:

$ sudo mkdir -p /var/log/journal && sudo systemctl restart systemd-journald

none discards all log data. Messages still reach the daemon but are immediately dropped. Forwarding to syslog, the console, or kmsg still functions, which means none is useful in environments where journald is purely a forwarding relay and local storage is explicitly undesired.

Warning

Per-user journal files -- which enable journalctl --user -- are only available when Storage=persistent is effectively active. If you depend on per-user journals for a multi-user system, set Storage=persistent explicitly rather than relying on auto.

Early Boot and the Journal Flush Point

There is a gap between "journald is running" and "journald is writing to persistent storage" that the configuration documentation does not surface clearly. Even when Storage=persistent is set and /var/log/journal/ exists, journald always starts by writing to volatile storage at /run/log/journal/. It switches to persistent storage only after receiving a flush signal -- and that signal comes from systemd-journal-flush.service, which runs after /var is mounted and writable.

The practical consequence: log messages from early boot, from services that start before systemd-journal-flush.service completes, are written to /run/log/journal/ first and then migrated. If the system crashes or loses power between early boot and the flush point, those messages are lost even on a system configured for persistent storage. This is by design, not a bug -- /var may be a separate filesystem that has not yet been checked or mounted at that point in the boot sequence.

failure mode A service fails during early boot, leaving no useful logs. journalctl -b shows nothing from before the flush point. The system was configured Storage=persistent and had /var/log/journal/ -- the configuration was correct. The crash happened in the gap before systemd-journal-flush.service ran. This window is unavoidable; the practical mitigation is to configure ForwardToConsole=yes temporarily during active debugging of early-boot failures only, then remove it.

To trigger the flush manually outside of boot -- for example, after switching Storage= from volatile to persistent mid-session without restarting the daemon:

$ journalctl --flush

This is equivalent to sending SIGUSR1 to the journald process. After flushing, volatile data at /run/log/journal/ is moved to /var/log/journal/. Note that changing Storage= to volatile does not automatically remove existing persistent data -- the flip only applies going forward.

Warning

If you switch a running system from Storage=persistent to Storage=volatile, historical journal data remains in /var/log/journal/ and continues to consume disk space. It will not be cleaned up by journald automatically. To remove it intentionally: journalctl --vacuum-size=1 after switching, or remove the directory contents manually. Verify disk usage with journalctl --disk-usage before and after.

Runtime Control Without Restarting the Daemon

Restarting systemd-journald.service is the blunt way to apply configuration changes, and it should be avoided during active incidents or on systems where log continuity matters. journald responds to several signals and journalctl subcommands that allow runtime control without a full restart.

Reload configuration with SIGHUP -- equivalent to systemctl reload systemd-journald. This re-reads journald.conf and all drop-ins and applies the new settings. If ReadKMsg= has changed, the kernel log buffer is flushed and re-opened. The active journal file continues in use; only the configuration changes.

Rotate journal files -- closes the currently active journal file, renames it to an archived file, and starts a new active file. Useful before vacuuming, since vacuum operations only touch archived files:

$ journalctl --rotate

Synchronize to disk -- forces all buffered journal data to be written to disk immediately, without waiting for the SyncIntervalSec= timer. Equivalent to SIGRTMIN+1:

$ journalctl --sync

The combination of rotate-then-sync is the correct pre-backup sequence: rotate closes active files so they can be safely copied, and sync ensures nothing is buffered in memory. A tight pre-snapshot hook in an automated backup script might look like:

pre-snapshot hook
# Close active journal files and flush buffers before snapshot
$ journalctl --rotate && journalctl --sync
Note

Configuration changes that affect the storage location -- changing Storage=, adding or removing drop-in files that define different quota limits -- require a full service restart to take effect. SIGHUP handles most runtime tunables but does not migrate active journal files to a new location.

Querying the Journal Effectively

Configuration only pays off if you can extract signal from what you've stored. journalctl has a query surface that most administrators use at about 20% of its capacity. The full filtering model is worth understanding, especially on systems where you've invested in structured logging or journal namespaces.

Filtering by unit, time, and priority

The most common filters -- and the ones that compose cleanly -- are by unit, time range, and syslog priority:

common journalctl filters
# All entries from a specific unit, newest first
$ journalctl -u nginx.service -r

# Entries between two timestamps (ISO 8601 or natural language)
$ journalctl --since "2026-03-01 08:00" --until "2026-03-01 09:00"
$ journalctl --since "1 hour ago"

# Combine unit and time range
$ journalctl -u sshd.service --since yesterday

# Filter by syslog priority: emerg(0) through debug(7)
# -p err shows err, crit, alert, and emerg -- it is a ceiling, not exact
$ journalctl -p err
$ journalctl -p warning..err

# Follow live output (like tail -f)
$ journalctl -f

# Show last N lines
$ journalctl -n 50
Note

The -p flag with a single priority name is a ceiling filter, not an exact match. journalctl -p err returns everything at err severity and above -- meaning err, crit, alert, and emerg. To filter a specific range, use the FROM..TO syntax: -p info..warning returns only info and warning.

Field-based filtering

Every journal entry carries structured fields. You can filter on any of them by passing FIELD=value as a positional argument. Multiple conditions on different fields are ANDed; multiple values for the same field are ORed:

field-based filtering
# Entries from a specific executable path
$ journalctl _EXE=/usr/sbin/sshd

# Entries logged by a specific PID
$ journalctl _PID=1234

# Entries from a specific UID
$ journalctl _UID=1000

# Entries from either of two systemd units (OR logic, same field)
$ journalctl _SYSTEMD_UNIT=nginx.service _SYSTEMD_UNIT=php-fpm.service

# Entries where the syslog identifier matches (useful for non-unit processes)
$ journalctl SYSLOG_IDENTIFIER=myapp

# List all unique values present in a field (useful for discovery)
$ journalctl -F _SYSTEMD_UNIT
$ journalctl -F SYSLOG_IDENTIFIER

The -F FIELD flag (long form: --field=) enumerates every unique value stored in the journal for that field. It is useful for discovering what service identifiers or executable paths are actually present before writing a targeted filter.

Output formats

The default one-line output format discards most of the structured metadata. When you need the full entry -- or when piping to another tool -- the output format flag is essential:

output format options
# JSON output -- one object per line, all fields included
$ journalctl -o json -u nginx.service | jq '.MESSAGE'

# Verbose output -- shows every field for each entry
$ journalctl -o verbose -n 5

# Short ISO format -- machine-parseable timestamps
$ journalctl -o short-iso --since "1 hour ago"

# Export format (for piping to systemd-journal-remote or backup tools)
$ journalctl -o export --since "2026-03-01" > march-logs.export

# Cat format: message text only, no metadata
$ journalctl -o cat -u myapp.service

The json format is particularly useful when feeding journal output to log aggregators or analysis pipelines. The export format is the Journal Export Format -- a mostly text-based serialization with embedded binary payloads where needed -- which can be re-imported by systemd-journal-remote, which in turn writes native journal files readable with journalctl --file.

Querying specific boot sessions and namespaces

boot and namespace queries
# List all available boot sessions
$ journalctl --list-boots

# Show only entries from the previous boot
$ journalctl -b -1

# Combine boot filter with unit filter
$ journalctl -b -1 -u networking.service

# Query a specific journal namespace
$ journalctl --namespace=databases -p err --since "6 hours ago"

# Query across all namespaces simultaneously
$ journalctl --namespace=* -f
Pro Tip

When chasing an incident across multiple services, build up your query incrementally. Start with journalctl -b --since "N hours ago" -p err to get the error landscape for the current boot, then add -u servicename to narrow it. Use -F _SYSTEMD_UNIT first if you are not sure which units were active during the incident window.

Writing to the Journal: systemd-cat and Structured Fields

Configuring the journal as a storage and forwarding layer is half the picture. The other half is sending data into it intentionally -- from shell scripts, cron jobs, backup tasks, or any process that is not already managed by systemd as a service unit.

systemd-cat

systemd-cat connects a command's stdout and stderr to the journal, adding the same structured metadata that a native service unit would provide. It is the straightforward way to make any script's output queryable and retention-managed through the journal:

systemd-cat usage
# Run a command and send its output to the journal
$ systemd-cat /usr/local/bin/backup.sh

# Set a custom identifier and priority for the log entries
$ systemd-cat -t "nightly-backup" -p info /usr/local/bin/backup.sh

# Pipe output from any command
$ echo "Deployment complete: v2.4.1" | systemd-cat -t "deploy" -p notice

# Use in a cron job or timer -- the identifier makes it filterable
$ systemd-cat -t "db-vacuum" psql -U postgres -c "VACUUM ANALYZE;"

The -t flag sets SYSLOG_IDENTIFIER, which is what appears in the default log output and what you use to filter with journalctl SYSLOG_IDENTIFIER=nightly-backup. The -p flag sets the syslog priority for the command's output; if you want stderr tagged at a different level than stdout, systemd-cat also accepts a separate --stderr-priority= flag.

Note

When a service is managed as a systemd unit, its stdout and stderr are already piped to the journal automatically -- you do not need systemd-cat for those. systemd-cat is specifically for processes outside the systemd service supervision tree: cron jobs, standalone scripts, one-off commands, and anything invoked outside of a .service unit.

The JOURNAL_STREAM environment variable

Applications that want to behave correctly whether or not they are running under systemd can check the JOURNAL_STREAM environment variable. When a service's stdout or stderr is connected to journald through a stream socket, systemd sets JOURNAL_STREAM to the device and inode numbers of that stream -- in decimal, separated by a colon -- in the service environment. Applications that compare this value against an fstat() of their own output stream know they are writing to journald and can omit their own timestamps and PID prefixes -- since journald adds that metadata itself -- avoiding duplicated fields in the output.

This is particularly relevant for applications that add their own log timestamps by default (many do). A log line that starts with 2026-03-19 08:00:00 INFO connection accepted stored in a journal entry that already carries a precise monotonic and wall-clock timestamp is redundant. Applications aware of JOURNAL_STREAM can strip their own timestamp prefix and let the journal carry it, which produces cleaner output in journalctl and avoids timestamp format inconsistencies when exporting to aggregators.
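A minimal detection sketch in Python, assuming the documented device:inode format of the variable (the helper name is ours; the variable itself is documented by systemd):

```python
import os
import sys

def connected_to_journal(stream=None):
    """Return True if `stream` (default: stderr) is the stream that systemd
    advertised in $JOURNAL_STREAM as "device:inode" (decimal, colon-separated)."""
    if stream is None:
        stream = sys.stderr
    value = os.environ.get("JOURNAL_STREAM", "")
    try:
        dev, ino = (int(part) for part in value.split(":"))
    except ValueError:
        return False  # unset or malformed: assume not connected to journald
    st = os.fstat(stream.fileno())
    return (st.st_dev, st.st_ino) == (dev, ino)
```

An application might use this to decide whether to emit its own timestamp prefix -- choosing a bare message format when the function returns True and letting the journal carry the timestamp.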

Writing structured fields with the native journal protocol

Scripts and applications that want to attach custom structured fields to journal entries -- beyond what systemd-cat provides -- can write to the native journal socket directly. In Python, the systemd.journal bindings (part of the python3-systemd package on Debian/Ubuntu) expose this cleanly:

native journal protocol (Python)
from systemd import journal

# Log a message with custom structured fields
journal.send(
    "Deployment completed",
    PRIORITY=journal.LOG_INFO,
    SYSLOG_IDENTIFIER="deploy",
    DEPLOY_VERSION="v2.4.1",
    DEPLOY_ENVIRONMENT="production",
    DEPLOY_COMMIT="a3f9c2d"
)

# These custom fields are then filterable:
# journalctl DEPLOY_ENVIRONMENT=production

User-defined fields sent this way must start with an uppercase letter and contain only uppercase letters, digits, and underscores. The journal silently drops fields that do not conform to this convention. Custom fields are stored as regular journal fields and are fully queryable with journalctl FIELDNAME=value -- including in the -F field enumeration mode. This is the mechanism that makes journald genuinely useful as a structured log store rather than a line-oriented text dump.

Corrupted Journal Files: What the Tilde Means

When journald stops uncleanly -- a kernel panic, a hard power cut, an OOM kill of the journald process itself -- any journal file that was actively being written is left in an inconsistent state. On the next start, journald detects this, renames the damaged file with a .journal~ suffix, and opens a fresh active file. The tilde files are quarantined: they are not written to further, but they are not automatically deleted.

These files will accumulate silently and count toward disk usage, but journald does not rotate or vacuum them automatically the way it does clean archived files. You can check for them directly:

$ ls /var/log/journal/$(cat /etc/machine-id)/*.journal~

failure mode A system with frequent unclean shutdowns (embedded hardware, no UPS, aggressive OOM killer) accumulates .journal~ files over weeks without any visible warning. journalctl --disk-usage reports consumption approaching the quota ceiling, but active log history is shorter than expected -- the tilde files are silently consuming the budget. Listing the directory as above reveals the culprits. Remove them after recovering any needed data.

To attempt recovery of data from a tilde file, journalctl can still read them -- they are structurally valid up to the point of the crash. To verify how much of a tilde file is intact:

journal integrity verification
# Verify all journal files -- reports which are consistent
$ journalctl --verify

# Read entries from a specific tilde file directly
$ journalctl --file=/var/log/journal/$(cat /etc/machine-id)/system@XXXX.journal~

# Remove tilde files once you have recovered or discarded their data
$ rm /var/log/journal/$(cat /etc/machine-id)/*.journal~

The journalctl --verify command serves double duty: it checks structural integrity of all journal files, and if FSS is enabled, it validates the cryptographic seals. A file can be structurally intact but fail FSS verification if it was tampered with after sealing. These are distinct failure modes and the command output distinguishes them.

Going further: automated detection and prevention

Manual checks are not a solution on systems that shut down uncleanly with any regularity. The approaches below go beyond the standard advice of "run journalctl --verify periodically."

Run journalctl --verify as a systemd timer with alerting on failure. A oneshot service that runs the verification command, pipes output through grep FAIL, and feeds any matches to systemd-cat -t journald-integrity-check creates a queryable audit trail under a dedicated syslog identifier. A downstream alerting tool -- or a simple journalctl -t journald-integrity-check -p err cron check -- can then catch accumulation before it silently consumes disk quota. This is more reliable than manual checks because the timer fires even after reboots that introduced new tilde files.
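That verify-and-alert pattern can be sketched as a unit pair. Unit names, the daily schedule, and the grep pipeline are our illustrative choices, not a systemd convention:

```ini
# /etc/systemd/system/journald-integrity-check.service (hypothetical name)
[Unit]
Description=Verify journal file integrity

[Service]
Type=oneshot
# Only failing files are re-logged, tagged for later querying
ExecStart=/bin/sh -c 'journalctl --verify 2>&1 | grep FAIL | systemd-cat -t journald-integrity-check -p err'
```

```ini
# /etc/systemd/system/journald-integrity-check.timer (hypothetical name)
[Unit]
Description=Daily journal integrity check

[Timer]
OnCalendar=daily
# Catch up after the very reboots that create tilde files
Persistent=true

[Install]
WantedBy=timers.target
```

Enable with systemctl enable --now journald-integrity-check.timer, then watch journalctl -t journald-integrity-check for any output.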

Address the source, not just the symptom. Tilde files are a symptom of unclean shutdowns. On systems where OOM kills are the trigger, the root cause is memory pressure -- not a logging problem. Adjusting vm.swappiness, raising the OOM score adjustment for less critical services, or enabling MemoryMax= in service units to kill the offending service before it triggers a system-wide OOM event will reduce tilde file accumulation more effectively than any journal configuration change. On embedded hardware without a UPS, the analogous intervention is filesystem journaling mode: mounting /var with data=journal rather than data=ordered increases the consistency guarantee at the cost of write performance, which may be acceptable if log integrity is a hard requirement.

Use SyncIntervalSec= as a data-loss window control, not just a performance knob. The default 5-minute flush interval is the actual data-loss exposure window on a hard crash. On systems prone to unclean shutdowns, reducing SyncIntervalSec= to 30s or even 10s does not prevent tilde files from being created -- that is determined by whether the active file was in the middle of a write -- but it significantly reduces the number of log entries that were buffered in memory and never made it to disk. The tilde file may still appear, but it will be smaller and contain less missing history.

Quarantine tilde files to a separate directory for forensic review before deletion. On systems where those files represent the only record of a crash sequence -- an embedded device, an intermittently failing server -- deleting them immediately discards potentially recoverable data. A safer pattern is to move tilde files to a staging directory, extract their contents to a plain text file with journalctl --file=, and then delete them once the output has been reviewed or archived. This is a one-time manual step on most systems, but on high-recurrence crash environments it is worth scripting as part of the systemd timer workflow above.

Consider Storage=persistent with MaxFileSec=1week on crash-prone systems rather than the default 1month. Shorter per-file rotation intervals mean that when a crash occurs mid-write, the damaged active file represents at most a week of data rather than a month. The volume of entries potentially lost in the tilde file is bounded by the rotation interval, not the full retention window. This does not reduce the frequency of tilde files, but it limits their size and makes targeted recovery more tractable.
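The two knobs above -- a tighter sync interval and weekly rotation -- might land together in a drop-in like this. The values are illustrative starting points for crash-prone systems, not universal recommendations:

```ini
# /etc/systemd/journald.conf.d/crash-prone.conf (hypothetical filename)
[Journal]
Storage=persistent
# Shrink the in-memory data-loss window from the default 5 minutes
SyncIntervalSec=30s
# Rotate weekly so a damaged active file spans at most a week of history
MaxFileSec=1week
```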

Pro Tip

On systems where unclean shutdowns are common -- embedded hardware, systems without a UPS, aggressive OOM environments -- consider scheduling periodic journalctl --verify runs via a systemd timer to catch accumulating tilde files before they become a silent disk consumption issue. Pair with an alert on any line containing FAIL in the output.

Containers and Ephemeral Environments

journald's behavior inside containers deserves explicit treatment because the defaults that make sense for a physical host are actively counterproductive in many container configurations.

In a container that runs a full systemd init (systemd as PID 1), journald runs normally and the same configuration surface applies. Storage=volatile is the correct default for containers that are rebuilt frequently, and the journal acts as an in-memory buffer that feeds whatever forwarding target the host provides.

In a container that does not run systemd as PID 1 -- the common case for Docker, Podman, or Kubernetes workloads -- journald is not present at all. Services in these containers write to stdout and stderr, which the container runtime captures. The question of journald configuration then shifts to the host: a runtime configured with a journald logging backend (Docker's --log-driver=journald, for example) forwards container output to the host's journal, and it is the host's journald.conf that controls how that data is stored, rate-limited, and retained.

One detail that matters when integrating non-systemd containers with a systemd host: the JOURNAL_STREAM environment variable. When a service's stdout or stderr is connected to journald via a stream socket, systemd sets JOURNAL_STREAM to the stream's device and inode numbers, separated by a colon, in the service's environment. Applications that detect this variable can switch from adding their own timestamps and PID prefixes to emitting clean log lines -- since journald adds that metadata itself -- avoiding duplication. Container runtimes that forward to journald may or may not set this variable; verify with your specific runtime if log formatting is inconsistent.

Note

On a host that runs many containers, the host-level RateLimitBurst= and RateLimitIntervalSec= settings apply per-service-unit, which means per container (assuming each container maps to a service unit). A fleet of containers each logging at the default burst ceiling can produce significant total journal write load. Journal namespaces are worth evaluating in this context: assigning container workloads to a dedicated namespace with its own quota and rate limits keeps container log volume from interfering with host system logging.
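A sketch of that separation, assuming a namespace named containers. Per the journald namespace mechanism, a namespace reads its own /etc/systemd/journald@NAMESPACE.conf, and a unit opts in with LogNamespace=; the quota and rate-limit values below are illustrative:

```ini
# /etc/systemd/journald@containers.conf -- settings for the "containers"
# namespace only; the host's default journal is unaffected
[Journal]
Storage=persistent
SystemMaxUse=1G
RateLimitIntervalSec=30s
RateLimitBurst=5000
```

In each container-wrapping service unit, set LogNamespace=containers under [Service]; query the namespace with journalctl --namespace=containers.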

Disk Quotas: Preventing the Journal from Eating Your Filesystem

This is the configuration area that causes the most production incidents. The journal's defaults are percentage-based, which means they scale with your filesystem size -- but only up to a hard cap. Understanding both dimensions is essential.

The official freedesktop.org documentation states that SystemMaxUse= and RuntimeMaxUse= default to 10% of the filesystem size, while SystemKeepFree= and RuntimeKeepFree= default to 15%. Both pairs are capped at 4 GiB. journald respects both constraints and uses whichever limit is more restrictive. A 200 GiB /var filesystem has a theoretical 10% journal allowance of 20 GiB, but the 4 GiB cap applies instead. On a 20 GiB partition, the cap is 2 GiB.
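The interaction of the percentage and the hard cap is easy to misjudge in your head; a quick sketch of the arithmetic described above (the function name is ours):

```python
GIB = 1024**3

def default_journal_cap(fs_size_bytes, fraction=0.10, cap_bytes=4 * GIB):
    """Effective default SystemMaxUse=: 10% of the filesystem size,
    hard-capped at 4 GiB -- whichever is smaller applies."""
    return min(fs_size_bytes * fraction, cap_bytes)

# 200 GiB /var: the 10% allowance (20 GiB) loses to the 4 GiB cap
print(default_journal_cap(200 * GIB) / GIB)  # 4.0
# 20 GiB partition: 10% = 2 GiB, under the cap
print(default_journal_cap(20 * GIB) / GIB)   # 2.0
```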

Four directives control the quotas. The System-prefixed options, SystemMaxUse= and SystemKeepFree=, apply to persistent storage under /var/log/journal/; the Runtime-prefixed options, RuntimeMaxUse= and RuntimeKeepFree=, apply to volatile storage under /run/log/journal/.

When both SystemMaxUse= and SystemKeepFree= are in play, the stricter of the two wins. If the filesystem is nearly full when journald starts and either constraint is already violated, journald raises the limit to match what is actually free -- it will not delete existing files to recover space at that point. Deletion of archived journal files only happens when writing new data would breach the configured limits.

failure mode The journal silently stops recording new entries. No error, no alert -- just a quiet ceiling hit. The 4 GiB default cap on a 200 GiB filesystem fills up unnoticed over months because no one set an explicit SystemMaxUse=. By the time it matters during an incident, weeks of log history have been rotating out to stay under the cap. journalctl --disk-usage shows the ceiling; without an explicit quota drop-in, the cap is determined by the percentage-and-4G-cap formula and is unlikely to match your actual intentions.
/etc/systemd/journald.conf.d/disk-quota.conf
[Journal]
# Hard cap: journal may use at most 2G of persistent storage
SystemMaxUse=2G

# Keep at least 1G free on /var filesystem at all times
SystemKeepFree=1G

# Rotate individual files at 200M to keep them manageable
SystemMaxFileSize=200M

# Retain at most 10 archived files before deleting oldest
SystemMaxFiles=10

# Volatile (in-memory) limits for early boot / containers
RuntimeMaxUse=128M
RuntimeKeepFree=64M

To inspect current journal disk usage without guessing:

$ journalctl --disk-usage

You can also vacuum journal files manually without restarting the daemon. These operations remove archived (rotated-out) files that meet the criteria -- active journal files are not touched:

journal vacuum commands
# Remove archived files until total usage drops below 500M
$ journalctl --vacuum-size=500M

# Remove entries older than 30 days
$ journalctl --vacuum-time=30d

# Rotate first (flush active files to archived), then vacuum
$ journalctl --rotate --vacuum-time=7d

The MaxRetentionSec= directive in journald.conf sets an age-based ceiling equivalent to the --vacuum-time flag but applied automatically as the daemon runs. MaxFileSec= (default 1month) controls how long a single journal file spans before it is rotated regardless of size. On high-volume systems, setting MaxFileSec=1week keeps individual files smaller and makes targeted deletions faster.
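As a drop-in, the age-based controls look like this (path and values illustrative):

```ini
# /etc/systemd/journald.conf.d/retention.conf
[Journal]
# Discard entries older than 30 days even if the disk quotas are not hit
MaxRetentionSec=30days

# Rotate each file weekly so age-based deletion works at finer granularity
MaxFileSec=1week
```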

Rate Limiting: Protecting Signal from Noise

A misconfigured or misbehaving service can emit millions of log lines per minute and make the journal unusable for everything else. journald has a built-in rate limiter controlled by two directives: RateLimitIntervalSec= (default 30s) and RateLimitBurst= (default 10000).

When a service exceeds the burst limit within the interval, journald drops further messages from that service and logs a suppression notice. The limits are per-service, not system-wide, so one noisy service cannot starve others.

One behavior that is not obvious from the configuration file itself: the effective burst limit is automatically scaled upward by a factor derived from available free disk space, using a base-2 logarithm. The official man page includes a table of example modifications -- a service gets roughly 10,000 messages per 30s at a baseline, but may receive a higher effective limit if the journal filesystem has substantial free space. This dynamic scaling means a system with plenty of disk room is less likely to suppress legitimate high-volume logging during incidents. Set RateLimitBurst=0 or RateLimitIntervalSec=0 to disable rate limiting entirely, though this is generally inadvisable system-wide.

Rate limit scaling -- worked examples

With RateLimitBurst=10000 and RateLimitIntervalSec=30s, the effective burst ceiling scales with free disk space on the journal filesystem. The multiplier is derived from the base-2 logarithm of the available space, so tightly provisioned systems get no boost while well-provisioned systems get significantly more headroom during incidents. The journald.conf(5) man page tabulates example multipliers:

Available disk space    Burst multiplier    Effective burst ceiling
<= 1 MiB                x1                  10,000 msgs / 30s
16 MiB                  x2                  20,000 msgs / 30s
256 MiB                 x3                  30,000 msgs / 30s
4 GiB                   x4                  40,000 msgs / 30s
64 GiB                  x5                  50,000 msgs / 30s
1 TiB                   x6                  60,000 msgs / 30s

The exact formula and table appear in the official journald.conf(5) man page. The practical takeaway: on a tightly provisioned system, your configured burst ceiling is your real ceiling. On a generously provisioned one, the daemon gives you more headroom automatically.
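The scaling rule can be sketched in a few lines of Python. This is a simplified illustration matching the multiplier examples in journald.conf(5); burst_modulate mirrors the routine of that name in journald's source, but it is not the exact implementation:

```python
import math

MiB, GiB, TiB = 1 << 20, 1 << 30, 1 << 40

def burst_modulate(burst, avail_bytes):
    """Scale the configured burst with free disk space:
    k = floor(log2(available bytes)); no boost at or below 1 MiB."""
    if avail_bytes <= 0:
        return burst
    k = int(math.log2(avail_bytes))
    if k <= 20:               # 2^20 bytes = 1 MiB
        return burst
    return burst * (k - 16) // 4

for space in (1 * MiB, 256 * MiB, 4 * GiB, 64 * GiB, 1 * TiB):
    print(f"{space:>14} bytes free -> burst {burst_modulate(10_000, space)}")
# 1 MiB -> 10000, 256 MiB -> 30000, 4 GiB -> 40000,
# 64 GiB -> 50000, 1 TiB -> 60000
```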

Failure mode: A service starts emitting thousands of structured error entries during an active incident. You need those entries to diagnose the problem. Instead, journald drops them and logs a single suppression notice. The rate limiter protected the journal -- but eliminated the evidence. Per-unit overrides using LogRateLimitBurst=0 in the [Service] stanza of security-critical or incident-relevant services prevent this from happening to the entries that matter most.
/etc/systemd/journald.conf.d/rate-limit.conf
[Journal]
# Tighter defaults for production: 1000 messages per 30s per service
RateLimitIntervalSec=30s
RateLimitBurst=1000

You can also override rate limits per individual service unit using the LogRateLimitIntervalSec= and LogRateLimitBurst= directives in the [Service] section of the unit file. Setting LogRateLimitBurst=0 in a unit disables rate limiting for that service entirely -- useful for security-critical daemons like sshd or audit services where dropped messages are unacceptable.
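A per-unit override is a drop-in in the unit's .d directory. The path and unit name below are illustrative:

```ini
# /etc/systemd/system/sshd.service.d/no-rate-limit.conf
[Service]
# Never drop messages from this unit, regardless of journald's global limits
LogRateLimitBurst=0
```

Apply with systemctl daemon-reload followed by a restart of the unit.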

Pro Tip

If you are seeing the message systemd-journald[PID]: XYZ service: NNN messages in the last 30s, dropped M messages, that service is triggering the rate limiter. Before raising the system-wide burst limit, consider whether the verbosity is a symptom of a misconfiguration in the service itself. Treating the suppression notice as a lead on an application bug is usually more productive than raising limits and silently absorbing unlimited messages.

Compression and Sync Behavior

Two settings that are on by default but worth understanding explicitly are Compress= and SyncIntervalSec=.

Compress=yes (the default) compresses journal data objects larger than 512 bytes before writing to disk; the algorithm (XZ, LZ4, or zstd) depends on how systemd was built. This can be set to a numeric byte threshold instead of a boolean -- for example, Compress=4K would only compress entries larger than 4 KiB, reducing compression overhead for typical short log lines while still compressing verbose payloads. On most systems the default is a reasonable choice; on very high-throughput systems where journald CPU usage becomes measurable, raising the threshold or disabling compression entirely is worth benchmarking.

SyncIntervalSec=5m (the default) controls how frequently journald flushes buffered journal data to disk. Entries logged with priority CRIT, ALERT, or EMERG are always flushed synchronously and are not affected by this setting. Lower-priority messages are batched and written on this interval. Reducing this value (e.g., to 30s) increases durability at the cost of more frequent disk writes. On systems with battery-backed storage controllers or SSDs with power-loss protection, the default 5-minute interval is generally safe. On spinning disk without a UPS, you may want a shorter interval to reduce the window of data that could be lost on a hard crash.

Failure mode: A hard crash occurs 4 minutes and 50 seconds into the 5-minute sync window. Everything logged since the last flush -- up to nearly 5 minutes of operational data -- is gone. The events existed in journald's memory buffer and were never written to disk. On spinning disk without a UPS, SyncIntervalSec=30s reduces the maximum data loss window from 5 minutes to 30 seconds.
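A durability-oriented drop-in combining both settings might look like this (path and thresholds illustrative; benchmark before adopting):

```ini
# /etc/systemd/journald.conf.d/durability.conf
[Journal]
# Only compress payloads larger than 4 KiB; short lines skip compression
Compress=4K

# Flush buffered entries every 30s instead of the default 5min
SyncIntervalSec=30s
```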

Forward Secure Sealing: Cryptographic Log Integrity

Forward Secure Sealing (FSS) is one of journald's most underused security features. When enabled, the journal periodically applies a cryptographic seal to its contents, making any unnoticed tampering with historical log data detectable.

The underlying mechanism was announced by systemd developer Lennart Poettering in August 2012, as covered by LWN.net (lwn.net/Articles/512895). The academic foundation is the paper "Seekable Sequential Key Generators" by G. A. Marson and B. Poettering (doi:10.1007/978-3-642-40203-6_7) -- note that the academic author is Bertram Poettering, Lennart's brother, whose post-doctoral research on Forward Secure Pseudo Random Generators (FSPRG) underpins the implementation. The official journald.conf(5) man page cites this paper directly. The practical result is that an attacker who compromises a system and attempts to alter or delete log entries will leave evidence -- the seal verification will fail.

FSS generates two keys: a sealing key that stays on the system and is used to seal new journal data, and a verification key that must be stored off-system. The sealing key is continuously rotated forward in a non-reversible process; after each rotation the old sealing key is securely deleted from disk (using filesystem-level secure deletion attributes where supported). The verification key can regenerate any past sealing key to verify historical entries, but an attacker with only the current sealing key cannot rewrite or fabricate past sealed entries without detection.

The Seal=yes directive in journald.conf enables FSS (it is on by default), but the feature is inactive until you generate keys:

FSS key generation and verification
# Generate FSS key pair. --interval sets how often the sealing key rotates.
# Default interval is 15 minutes; shorter = more tamper-evidence, more CPU.
$ sudo journalctl --setup-keys --interval=15min

# Output includes the verification key -- copy it off-system immediately.
# The sealing key is stored in /var/log/journal/<machine-id>/

# Later, verify journal integrity using the stored verification key
$ sudo journalctl --verify --verify-key=<verification-key-string>
Caution

The verification key must be stored outside the system being protected. Keeping it only on the same machine defeats the purpose -- an attacker with root access can read the verification key and fabricate a consistent log history. Store it in a secrets manager, a separate system, or print it and keep it physically secured. The sealing key should remain on the host; do not copy it off-system.

FSS does not prevent an attacker from deleting the journal files entirely, which would be apparent from the absence of logs. It specifically addresses the scenario of unnoticed selective alteration -- the kind of cleanup an attacker performs after an intrusion to hide their activities.

What verification output actually looks like
PASS /var/log/journal/abc123/system.journal
PASS /var/log/journal/abc123/system@<archive-id-1>.journal
FAIL /var/log/journal/abc123/system@<archive-id-2>.journal
File corruption detected: 3 entries after seal timestamp 2026-03-10 02:14:07 failed verification
A PASS line means the file is structurally intact and FSS seals verified. A FAIL line means either structural damage (unclean shutdown) or tampered entries after sealing -- the output distinguishes them. A file can PASS structural checks and FAIL FSS verification simultaneously.

Journal Namespaces: Isolating Log Streams

Journal namespaces, added in systemd 246, allow specific services to write their logs to a completely separate journal instance with its own storage, quota, and forwarding configuration. This is useful when a particular workload has dramatically different logging requirements from the rest of the system -- a high-volume database, a containerized application, or a service subject to separate audit retention policies.

To assign a service to a namespace, add LogNamespace=mynamespace to its [Service] section. This automatically starts a systemd-journald@mynamespace.service instance, configured by /etc/systemd/journald@mynamespace.conf.

/etc/systemd/system/mydb.service (excerpt)
[Service]
ExecStart=/usr/bin/mydatabase --config /etc/mydb/mydb.conf
# Assign this service to the "databases" journal namespace
LogNamespace=databases
/etc/systemd/journald@databases.conf
[Journal]
# Separate storage with its own quota
Storage=persistent
SystemMaxUse=10G
SystemKeepFree=2G
# Higher rate limit for a verbose database workload
RateLimitBurst=50000
# Longer retention for compliance
MaxRetentionSec=90days

Query a namespace with:

$ journalctl --namespace=databases -f

To query all namespaces simultaneously, use --namespace=*. Note from the official man7.org documentation that the default journal namespace uses Storage=auto by default, while all non-default namespaces use Storage=persistent by default.

Forwarding: Integrating with the Broader Logging Stack

journald does not have to be the final destination for logs. Several forwarding mechanisms allow it to act as a structured intake layer that feeds other systems.

ForwardToSyslog

Setting ForwardToSyslog=yes causes journald to forward all messages to the socket at /run/systemd/journal/syslog, where a traditional syslog daemon (rsyslog, syslog-ng) can consume them. The default is no because modern rsyslog and syslog-ng installations read journal files directly using the native journal protocol, which is more efficient and preserves structured metadata. The socket forwarding approach is retained for compatibility with older syslog implementations that only listen on the socket.
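For comparison, the direct-read path is configured on the syslog daemon's side, not in journald.conf. A minimal sketch, assuming rsyslog with its imjournal input module installed (file path illustrative):

```
# /etc/rsyslog.d/10-imjournal.conf
# Read the journal natively instead of the /run/systemd/journal/syslog socket
module(load="imjournal"
       StateFile="imjournal.state")  # cursor file so reads resume after restart
```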

ForwardToConsole

This setting is a production footgun that is worth calling out explicitly. The freedesktop.org journald.conf(5) documentation warns that forwarding to the console is performed synchronously, and that in cloud environments the console is often a slow, virtual serial port. A hung console will block journald entirely, which in turn blocks any service that logs synchronously. For production use, the official man page recommends running a journalctl --follow style service redirected to the console rather than enabling ForwardToConsole=yes -- specifically to avoid blocking the daemon on a slow console during normal operations.

Failure mode: ForwardToConsole=yes is left enabled on a cloud VM after a debugging session. The virtual serial console is slow. Under log volume, journald blocks waiting for the console to drain. All services that log synchronously start hanging. The system becomes unresponsive without any obvious cause -- the hung journald daemon is not surfaced as an error in most monitoring. Always remove ForwardToConsole=yes when done debugging.
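The non-blocking alternative the man page suggests can be sketched as a small unit that tails the journal to the console asynchronously. The unit name and option choices here are illustrative:

```ini
# /etc/systemd/system/console-log.service (hypothetical unit)
[Unit]
Description=Mirror the journal to the console without blocking journald
After=systemd-journald.service

[Service]
# journalctl follows the journal; a stalled console blocks only this unit
ExecStart=/usr/bin/journalctl -f
StandardOutput=tty
TTYPath=/dev/console
Restart=always

[Install]
WantedBy=multi-user.target
```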

ForwardToSocket

Added in recent systemd versions, ForwardToSocket= sends journal entries over a configurable socket in Journal Export Format. This is the foundation for integrating with systemd-journal-remote for centralized logging over a network. The documentation again notes that socket forwarding is synchronous, so slow or high-latency network links should be avoided. Use a local Unix socket or a low-latency loopback address for any production forwarding pipeline.

systemd-journal-remote and systemd-journal-upload

The ForwardToSocket= directive feeds the Journal Export Format wire protocol, but the dedicated tool that implements centralized logging end-to-end is the systemd-journal-remote package. It ships two components: systemd-journal-remote, which runs on the receiving server, and systemd-journal-upload, which runs on each client and pushes entries to that server over HTTPS.

This is the correct answer to "how do I aggregate journal data from multiple hosts" -- not syslog forwarding, not rsyslog, not a log shipper reading text files. The transport preserves full structured journal entries, including all metadata fields, and the server stores them as native journal files that journalctl --file= can query directly.

The setup requires the package on both sides. On Debian/Ubuntu: apt install systemd-journal-remote. On RHEL/Fedora: dnf install systemd-journal-remote.

On the receiving server: enable and start the two units, configure TLS certificates in /etc/systemd/journal-remote.conf, and ensure port 19532 is reachable from your clients:

server: /etc/systemd/journal-remote.conf
[Remote]
# TLS certificates -- required for HTTPS transport
ServerKeyFile=/etc/ssl/private/journal-remote.key
ServerCertificateFile=/etc/ssl/certs/journal-remote.pem
TrustedCertificateFile=/etc/ssl/certs/ca.pem

# Split received journals by source hostname (the alternative is "none")
SplitMode=host

# Disk quota for received journal data
MaxUse=20G
KeepFree=5G

# Optional: FSS-seal received journal files (off here; requires key setup)
Seal=false
server: enable and start
# Create the output directory with correct ownership
$ sudo mkdir -p /var/log/journal/remote
$ sudo chown systemd-journal-remote:systemd-journal-remote /var/log/journal/remote

# Enable the socket and the service
$ sudo systemctl enable --now systemd-journal-remote.socket
$ sudo systemctl enable systemd-journal-remote.service

On each client: install the same package, configure /etc/systemd/journal-upload.conf with the server URL and TLS paths, then enable the upload service:

client: /etc/systemd/journal-upload.conf
[Upload]
# Replace with your log server hostname or IP
URL=https://logserver.example.com:19532

# Client TLS certificates for mutual authentication
ServerKeyFile=/etc/ssl/private/journal-upload.key
ServerCertificateFile=/etc/ssl/certs/journal-upload.pem
TrustedCertificateFile=/etc/ssl/certs/ca.pem
client: enable upload service
$ sudo systemctl enable --now systemd-journal-upload.service

systemd-journal-upload tracks its cursor position (the last entry it successfully sent) in a state file so that it resumes from where it left off after a restart or network interruption. If the server is unreachable, the upload service will exit after the configurable NetworkTimeoutSec= timeout; configure a Restart=on-failure override on the client unit if you want automatic reconnection:

/etc/systemd/system/systemd-journal-upload.service.d/restart.conf
[Service]
Restart=on-failure
RestartSec=30s

On the server, received journal files land in /var/log/journal/remote/ with filenames derived from the client's TLS certificate CN. Query them directly:

querying received journals on the server
# View what has been received
$ ls /var/log/journal/remote/

# Read a specific client's journal
$ journalctl --file=/var/log/journal/remote/remote-client.example.com.journal -f

# Query across all received journals simultaneously
$ journalctl --directory=/var/log/journal/remote/ -p err --since "1 hour ago"
Warning

Do not use plain HTTP for systemd-journal-upload in production. Journal entries contain sensitive operational data -- service names, PIDs, environment variables, and potentially credentials that leak into log output. The transport also has no authentication mechanism in HTTP mode, meaning any client that can reach port 19532 can inject arbitrary log entries into your central server. Always use HTTPS with mutual TLS in production environments.

MaxLevel directives

Local storage and each forwarding destination have a corresponding MaxLevel directive that gates which priority messages are stored or forwarded. The options are MaxLevelStore= (what is written to disk), MaxLevelSyslog=, MaxLevelConsole=, MaxLevelKMsg=, and MaxLevelWall=. Each accepts a syslog priority name (emerg, alert, crit, err, warning, notice, info, debug) or a numeric value 0--7. Filtering at the journald level reduces downstream processing load, though note that what is not stored cannot be retrieved later.
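A drop-in exercising these directives might look like this (path and level choices illustrative):

```ini
# /etc/systemd/journald.conf.d/levels.conf
[Journal]
# Keep everything locally, down to debug
MaxLevelStore=debug

# Only forward warnings and above to syslog
MaxLevelSyslog=warning

# Only broadcast emergencies to logged-in users
MaxLevelWall=emerg
```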

SplitMode and Access Control

The SplitMode= directive controls whether journal files are split per user. The two meaningful values are uid and none.

When set to uid, every regular user (UIDs outside the system user and dynamic service user ranges) receives their own journal file, and the journal daemon assigns read access to that user's files. This means users can query their own logs with journalctl --user without needing elevated privileges. System service users log to the system journal. The official documentation notes that this is the primary use case: access control on Linux is per-file, so splitting journals is the mechanism that makes unprivileged per-user log access possible.

When set to none, all messages go into the system journal and ordinary users have no privileged access to it. This is appropriate in hardened server environments where user activity logging is an administrative function and unprivileged log access is a potential information disclosure risk.

To grant a specific user read access to the system journal without changing SplitMode=, add them to the systemd-journal group. Members of adm and wheel are also given read access on many distributions.

Audit Integration

The Audit= directive controls whether systemd-journald activates kernel auditing at startup. In the default journal namespace this defaults to yes, which means journald tells the kernel to generate audit records and then collects them. In non-default namespaces it defaults to keep (do not change the existing audit state).

This is separate from the question of whether journald collects audit records -- even if another tool enables auditing, journald will collect the resulting messages as long as the systemd-journald-audit.socket unit is enabled. To prevent journald from collecting audit messages entirely (for example, because a dedicated auditd is handling them), disable that socket unit:

$ sudo systemctl disable --now systemd-journald-audit.socket

On hardened systems running auditd alongside journald, this prevents duplicate collection and potential conflicts between the two audit pipelines.

A Production-Ready Configuration

The following drop-in represents a reasonable starting point for a persistent, disk-quota-constrained, rate-limited production server. Adjust the numeric values to match your filesystem size, retention requirements, and service workload.

/etc/systemd/journald.conf.d/production.conf
[Journal]
# Persistent storage -- create /var/log/journal/ before applying
Storage=persistent

# Disk quotas: 2G hard cap, keep 1G free, 200M per file, 15 files max
SystemMaxUse=2G
SystemKeepFree=1G
SystemMaxFileSize=200M
SystemMaxFiles=15

# Retention: discard entries older than 60 days
MaxRetentionSec=60days

# Rotate journal files monthly
MaxFileSec=1month

# Rate limiting: 2000 messages per 30s per service
RateLimitIntervalSec=30s
RateLimitBurst=2000

# Compression enabled (default is yes, explicit for clarity)
Compress=yes

# FSS: enabled when keys are generated with journalctl --setup-keys
Seal=yes

# Split journal files by UID for per-user access
SplitMode=uid

# No console forwarding in production -- avoid blocking journald
ForwardToConsole=no

# Let rsyslog/syslog-ng read the journal directly instead of via socket
ForwardToSyslog=no

# Sync to disk every 60 seconds (reduce for higher durability)
SyncIntervalSec=60s

After writing this file, apply it with:

$ sudo systemctl restart systemd-journald

Verify the running configuration by inspecting the journald service status and cross-checking disk usage against your configured limits:

verification commands
# Check journald daemon status and active config
$ systemctl status systemd-journald

# Confirm disk usage against limits
$ journalctl --disk-usage

# Inspect the effective (merged) configuration
$ systemd-analyze cat-config systemd/journald.conf

# List all boot sessions available in the journal
$ journalctl --list-boots

The systemd-analyze cat-config command deserves particular attention. It merges and displays the full effective configuration from all drop-in files in priority order, so you can see exactly what the running daemon is using without manually tracking which drop-in overrides what.

Wrapping Up

systemd-journald is a capable, production-ready logging daemon that ships with conservative, broadly compatible defaults. Those defaults are not production-optimized -- they may silently lose logs across reboots depending on whether /var/log/journal/ exists on your distribution, they will allow the journal to consume up to 4 GiB of disk space without warning, and Forward Secure Sealing is configured but entirely inert until you generate keys with journalctl --setup-keys.

Several behaviors that create operational surprises rarely appear together in documentation. journald always starts in volatile mode, even with Storage=persistent set -- the flip to persistent storage happens when systemd-journal-flush.service signals it, meaning a crash before that point loses early-boot messages regardless of your configuration. Corrupted journal files from unclean shutdowns are renamed to .journal~ and quarantined silently -- they accumulate against your disk quota and do not get cleaned up automatically. The RateLimitBurst= ceiling is not a hard constant -- it scales dynamically with free disk space using a base-2 logarithm, which means your burst capacity on a well-provisioned system may exceed the configured value. And Storage= has a compile-time default that varies by distribution; the upstream project now defaults to persistent, but your specific system may differ.

Configuration only matters if you can extract what you store. The journalctl query model -- unit filters, field-based filtering with FIELD=value, the -F enumeration flag, and structured output formats like -o json -- is the interface through which everything in journald.conf pays off in practice. For processes outside systemd service supervision, systemd-cat and the native journal protocol give scripts and cron jobs first-class, structured, queryable log entries rather than text dumped into syslog.

For multi-host environments, systemd-journal-remote and systemd-journal-upload provide centralized logging entirely within the systemd toolchain, preserving full structured journal entries -- including all metadata fields -- at the receiving server. The transport requires HTTPS with mutual TLS in production; plain HTTP offers no authentication and no protection for the operational data in transit.

In container and ephemeral workloads, the questions shift: whether journald is running at all, how the container runtime forwards to the host journal, and whether per-container log volumes warrant journal namespaces to prevent container workloads from interfering with host system logging. In all of these environments, journalctl --rotate, --sync, and --flush let you exercise fine-grained control at runtime without touching the service lifecycle.

Effective configuration is largely a matter of being explicit: explicit about storage mode, explicit about disk bounds, explicit about what gets forwarded and to where, and explicit about integrity requirements. A handful of drop-in files -- one for disk quotas, one for rate limits, one for forwarding policy -- is enough to move from defaults to a posture you have deliberately chosen. The FSS verification key must leave the machine to have any forensic value; a verification key sitting on the same disk it is meant to protect is a security theater artifact, not a control.

The authoritative references for every option covered here are the freedesktop.org journald.conf(5) man page, the man7.org journald.conf(5) Linux manual page, and for the journal query interface, journalctl(1). The systemd-journal-remote toolchain is documented at systemd-journal-remote.service(8) and systemd-journal-upload.service(8). The FSS academic foundation is the paper "Seekable Sequential Key Generators" by G. A. Marson and B. Poettering (doi:10.1007/978-3-642-40203-6_7), cited directly in the official man page. The 2012 LWN.net article on FSS (lwn.net/Articles/512895) remains the best non-academic narrative of how the key derivation scheme works.