There is a persistent myth that overclocking is a Windows activity. Walk into almost any hardware forum and the assumption is that you boot Windows to push your CPU multiplier, flash your GPU BIOS through a vendor tool, and then go back to Linux when the "real work" begins. That framing is wrong, and it has been wrong for years. Linux exposes hardware tuning surfaces that Windows frequently hides behind proprietary drivers and locked UEFI firmware -- and the kernel's own subsystems give you a level of visibility into what your hardware is actually doing that no closed ecosystem can match.

This article covers overclocking CPUs, GPUs, and RAM on Linux in practical, verifiable terms. Every command shown here targets real interfaces documented in the kernel source, the ArchWiki, official driver repositories, or vendor documentation. Where risks exist -- and they are real -- they are stated plainly. The goal is not to hype overclocking as a magic performance multiplier. The goal is to give you an accurate map of what the Linux platform actually offers and how to use it without guessing.

Before You Start

Overclocking can permanently damage hardware, void warranties, cause data corruption, and reduce the lifespan of components. Always back up critical data before beginning. Proceed only if you accept these risks. Nothing in this article constitutes a warranty or guarantee of hardware survival.

How Overclocking Actually Works on Linux

Before touching a single tool, it helps to understand what "overclocking" actually does at the hardware layer. The operating system -- any operating system -- does not control clock speeds in isolation. The CPU reads its target frequency from a register. On x86 hardware, those registers are called Model Specific Registers (MSRs). They are the hardware's configuration table.

By writing to that MSR, you change the clock speed. The MSR is where the processor looks to see "how fast they should be going."

-- Overclock.net community documentation on Linux CPU overclocking

Linux exposes the MSR interface through /dev/cpu/[N]/msr device files, where N is the CPU core number. The kernel's msr module must be loaded for this interface to exist. Write access to the MSR device files can be restricted by the msr.allow_writes=off boot parameter or when the kernel runs in lockdown mode -- a security posture common on Secure Boot-enabled systems.

bash -- verify MSR module and interface
# Load the msr module
$ sudo modprobe msr

# Verify the device files exist (one per logical CPU)
$ ls /dev/cpu/
0  1  2  3  4  5  6  7  ...

# Confirm the msr device exists for core 0
$ ls -l /dev/cpu/0/msr

# Check if lockdown mode is blocking writes
$ cat /sys/kernel/security/lockdown

There is an important nuance that trips up newcomers: the BIOS is not running while Linux is active. It sets initial register values during POST and then, as Linus Torvalds famously put it, its job is to load the OS and get out. That means a BIOS that "locks" overclocking is enforcing a default register state, not physically preventing writes. The operating system has equal standing to write those same registers. This is why tools like cpupower can change CPU frequency governors, and why direct MSR manipulation works even on systems with "locked" UEFI firmware -- provided the kernel allows writes.

The practical boundary is not BIOS locks. It is three things: whether the kernel module is loaded, whether your kernel is running in lockdown mode, and whether your specific CPU generation supports the control pathway being used. Those distinctions matter throughout everything that follows.
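For spot-checking registers from the command line, the msr-tools package provides rdmsr and wrmsr on top of the same /dev/cpu/[N]/msr interface. The register shown below, IA32_PERF_STATUS (0x198), is an Intel-documented example; AMD uses different MSR numbers.

bash -- spot-check an MSR with msr-tools
# Install msr-tools (package name is the same on Debian, Arch, and Fedora)
$ sudo apt install msr-tools

# Read IA32_PERF_STATUS (0x198) on core 0 -- Intel CPUs only
$ sudo rdmsr -p 0 0x198

# wrmsr writes a register; only do this with a value documented for
# your exact CPU model -- a bad MSR write can hang the machine instantly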

CPU Overclocking on Linux

Governors and the cpufreq Subsystem

Linux manages CPU frequency through the cpufreq subsystem, which has been part of the kernel for decades. The subsystem exposes scaling governors that determine how the kernel selects operating frequency. The cpupower utility, shipped as part of linux-tools, is the standard userspace interface to this subsystem.

Install it on Debian and Ubuntu-based systems:

bash
$ sudo apt install linux-tools-common linux-tools-`uname -r`

On Arch Linux: sudo pacman -S cpupower. On Fedora: sudo dnf install kernel-tools.

The available governors depend on which driver is active. Modern Intel CPUs (Sandy Bridge and newer with HWP support) and modern AMD CPUs with CPPC support use hardware P-state drivers -- intel_pstate and amd_pstate respectively. When these drivers are in "active" mode, only two software-visible governors appear: powersave and performance. These labels are misleading: they do not work like their traditional counterparts. Instead, they translate into Energy Performance Preference (EPP) hints that the CPU's internal governor interprets. The CPU makes the actual frequency decision; the OS provides a performance-versus-efficiency hint.

There is considerable confusion about this online, and it is worth addressing directly: many forum posts and guides treat the performance governor as if it pins the CPU to maximum frequency on all drivers. That understanding is correct for the older acpi-cpufreq driver with traditional governors like ondemand, schedutil, and performance. It is not correct when intel_pstate or amd_pstate runs in active mode, because those drivers delegate actual frequency selection to the CPU's own firmware.

The distinction affects real-world overclocking behavior: if you set the performance governor under amd_pstate active and then observe your CPU idling at 400 MHz, the system is working as designed -- the firmware is choosing to clock down at idle regardless of the governor label. The kernel documentation for both intel_pstate and amd-pstate confirms this behavior.
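Before interpreting governor behavior, check which driver is actually in charge. Both files below are standard cpufreq sysfs entries; the EPP file only exists under the hardware P-state drivers in active mode:

bash -- identify the active cpufreq driver and EPP hint
# Which driver manages frequency: intel_pstate, amd-pstate-epp, acpi-cpufreq, ...
$ cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver

# EPP hint -- present only under intel_pstate / amd_pstate in active mode
$ cat /sys/devices/system/cpu/cpu0/cpufreq/energy_performance_preference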

To set all cores to the performance governor:

bash
$ sudo cpupower frequency-set -g performance

# Verify the active governor on each core
$ cpupower frequency-info

To persist the governor setting across reboots, enable the cpupower systemd service and configure /etc/default/cpupower (distribution-specific path):

/etc/default/cpupower
governor='performance'
bash
$ sudo systemctl enable --now cpupower

Intel CPUs: Turbo Boost and Frequency Limits

Intel's turbo boost technology allows cores to exceed their base frequency for short periods, constrained by thermal and power limits (PL1 and PL2). On Linux, you can observe and adjust these limits through the intel_pstate sysfs interface. To check the current minimum and maximum scaling frequencies:

bash
# Check min/max frequency for core 0
$ cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq
$ cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq

# Set max frequency ceiling to 4.2 GHz
$ sudo cpupower frequency-set -u 4200MHz

True multiplier overclocking on Intel -- going beyond the advertised maximum turbo frequency -- requires a motherboard with an unlocked BIOS and a "K" or "KF" series CPU. These changes are made in UEFI, and Linux will then observe the higher frequencies as normal operating behavior. What Linux's own tools control is the software-visible policy envelope within which the CPU operates. Setting an absolute frequency ceiling or floor is the primary lever available.

Intel HWP and EPP

On Skylake and newer CPUs with Hardware P-State (HWP) support, x86_energy_perf_policy lets you set the Energy Performance Preference hint directly via MSR. The range is 0 (maximum performance) to 255 (maximum efficiency). This is distinct from the governor and has a more direct influence on how aggressively the CPU boosts. The msr kernel module must be loaded: sudo modprobe msr.
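A sketch of both pathways, assuming a recent x86_energy_perf_policy build (the --hwp-epp flag exists in current kernel-tools versions; older builds only accept the named policy arguments):

bash -- set the HWP Energy Performance Preference
$ sudo modprobe msr

# Dump current HWP settings (no arguments = read-only)
$ sudo x86_energy_perf_policy

# Bias fully toward performance (EPP 0) on recent builds
$ sudo x86_energy_perf_policy --hwp-epp 0

# The same hint is also writable per core through sysfs
$ echo performance | sudo tee \
  /sys/devices/system/cpu/cpu0/cpufreq/energy_performance_preference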

AMD CPUs: amd_pstate and Platform Profiles

AMD's amd_pstate driver uses the ACPI Collaborative Processor Performance Control (CPPC) interface. This is not simply a frequency table lookup; it exposes hundreds of abstract performance levels rather than the 16-entry P-state table that older ACPI CPUFreq drivers used. The driver operates in one of three modes: active (firmware manages frequency autonomously based on an EPP hint), passive (the OS specifies the desired performance level directly), and guided (the OS sets a minimum and maximum; firmware selects within that range). Starting with Linux kernel 6.5, amd_pstate in active mode became the default on Zen 2 and newer systems that expose ACPI CPPC support. On earlier kernels (6.3 and 6.4), the driver was available but had to be explicitly activated with the amd_pstate=active kernel parameter.

amd_pstate UEFI Prerequisite

The driver requires that your motherboard's UEFI exposes ACPI CPPC. If your system falls back to acpi-cpufreq on boot, look for a setting labeled "CPPC" or "Collaborative Power and Performance Control" in the AMD CBS section of your UEFI (path varies by board vendor). Set it from Auto to Enabled. Some older Zen 2 laptop boards never received this UEFI update and cannot use amd_pstate.
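You can confirm from a running system whether the firmware already exposes CPPC before going back into UEFI; both locations are standard sysfs paths:

bash -- verify CPPC exposure from Linux
# If this prints acpi-cpufreq, the kernel fell back and CPPC is likely disabled
$ cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver

# The acpi_cppc directory exists only when UEFI exposes CPPC
$ ls /sys/devices/system/cpu/cpu0/acpi_cppc/
highest_perf  lowest_perf  nominal_perf  reference_perf  ...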

You can check which mode is active and switch modes at runtime without rebooting:

bash -- amd_pstate mode management
# Check active driver mode
$ cat /sys/devices/system/cpu/amd_pstate/status
active

# Switch to passive mode at runtime (no reboot needed)
$ echo passive | sudo tee /sys/devices/system/cpu/amd_pstate/status

# Switch back to active (EPP) mode
$ echo active | sudo tee /sys/devices/system/cpu/amd_pstate/status

# View current EPP hint on all cores
$ cat /sys/devices/system/cpu/cpu*/cpufreq/energy_performance_preference

# Set EPP to performance on all cores
$ echo performance | sudo tee \
  /sys/devices/system/cpu/cpu*/cpufreq/energy_performance_preference

For Ryzen CPUs, traditional multiplier overclocking remains a UEFI affair; the primary Linux-side tuning surface is power limit (TDP) adjustment via Precision Boost Overdrive (PBO). AMD's SMU (System Management Unit) exposes power limits through the ryzenadj utility. Note that ryzenadj targets Ryzen mobile (APU) processors by design; on desktop Ryzen CPUs it may return "unsupported model." For desktop SMU access, the ryzen_smu kernel module (gitlab.com/leogx9r/ryzen_smu) provides a lower-level interface, though it carries additional risk. PBO and Curve Optimizer settings configured in UEFI are the more reliable path for desktop platforms.
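As an illustration on a supported Ryzen mobile APU, ryzenadj adjusts the SMU power limits in milliwatts. The limit values below are examples, not recommendations -- check your platform's cooling headroom first:

bash -- ryzenadj power limits (Ryzen mobile APUs)
# Show current SMU limits and readings
$ sudo ryzenadj --info

# Raise sustained (STAPM), short-boost, and long-boost limits (values in mW)
$ sudo ryzenadj --stapm-limit=45000 --fast-limit=55000 --slow-limit=50000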

AMD Ryzen X3D CPUs: What Changes and What Doesn't

The X3D generation of Ryzen CPUs requires specific knowledge because overclocking behavior changed significantly between generations:

Ryzen 5000X3D (Zen 3, e.g., 5800X3D): No manual CPU overclocking. The 3D V-Cache stacking process was first-generation and AMD locked multiplier and voltage controls to protect the TSV interconnects. PBO and Curve Optimizer are disabled. Only XMP/EXPO memory profiles and power limit relaxation in UEFI are available.

Ryzen 7000X3D (Zen 4, e.g., 7800X3D, 7950X3D): No manual CPU overclocking. Same restriction as the 5000X3D generation. PBO is available with caveats; manual voltage and multiplier control remain locked.

Ryzen 9000X3D (Zen 5, e.g., 9800X3D, 9950X3D): Full overclocking supported. AMD redesigned the V-Cache stacking architecture for Zen 5, removing the overclocking lock that existed on previous X3D generations. Manual multiplier and voltage control are available via UEFI on 9000X3D CPUs.

A note on what "full overclocking supported" means in practice, since there is active debate about this in the overclocking community: AMD has officially unlocked the multiplier and voltage controls on 9000X3D, and Newegg's own product listing for the 9800X3D describes it as "unlocked for overclocking." That is the factual status. However, the overclocking community has found that the 3D V-Cache silicon is significantly more voltage-sensitive than standard Zen 5. Overclock.net and SkatterBencher testing shows that the 9800X3D can be pushed to ~5.7-5.9 GHz, but voltages above approximately 1.25-1.3V under all-core load carry real degradation risk, and some users have reported chip death at higher static voltages without extreme cooling. The practical takeaway: the multiplier is unlocked and PBO/Curve Optimizer tuning works well, but manual voltage overclocking demands more caution than on non-X3D Zen 5 parts. This guide lists it as "full overclocking supported" because that is what AMD officially provides; treat the voltage ceiling with more respect than you would on a standard 9700X or 9900X.

AMD 3D V-Cache Optimizer Driver (Linux 6.13+)

For multi-CCD X3D processors, Linux 6.13 merged the AMD 3D V-Cache Optimizer driver, which gives Linux-exclusive runtime control over how the kernel's task scheduler prioritizes CPU cores. This is a tuning surface with no standard equivalent on Windows.

Which CPUs this applies to requires care. The driver only has meaningful impact when a processor has an asymmetric CCD configuration -- one CCD with 3D V-Cache stacked and one without. That describes the 7950X3D and 7900X3D from the Ryzen 7000 generation. The Ryzen 7 9800X3D is a single-CCD chip -- all its cores are on the same die -- so the optimizer driver has no scheduling decision to make and provides no benefit. The Ryzen 9 9950X3D and 9900X3D (Zen 5) use a different design where only one of two CCDs carries the 3D V-Cache (the other CCD runs at higher frequencies), making them candidates for the optimizer. Source: IDC review of the Ryzen 9 9950X3D, Phoronix: AMD 3D V-Cache Optimizer Driver To Be Merged For Linux 6.13.

The driver exposes a sysfs interface that biases initial task placement toward either the 3D V-Cache CCD (larger L3, better for cache-sensitive games) or the higher-frequency CCD (better for compute workloads and applications that benefit from raw clock speed). Two prerequisites apply: first, your kernel must be built with CONFIG_AMD_3D_VCACHE enabled -- most major distribution kernels (Arch linux-6.13.arch1-1, Ubuntu, Fedora, CachyOS) enable this, but verify with zcat /proc/config.gz | grep AMD_3D_VCACHE or check your distro's kernel config. Second, in AMD CBS in your UEFI, set "CPPC Dynamic Preferred Cores" to "Driver" -- without this, the sysfs write has no effect.

bash -- AMD 3D V-Cache Optimizer driver (kernel 6.13+)
# Check current mode (default is frequency on most CPUs)
$ cat /sys/bus/platform/drivers/amd_x3d_vcache/AMDI0101\:00/amd_x3d_mode
frequency

# Bias toward 3D V-Cache CCD (gaming workloads)
$ echo cache | sudo tee \
  /sys/bus/platform/drivers/amd_x3d_vcache/AMDI0101\:00/amd_x3d_mode

# Bias toward high-frequency CCD (compute / render workloads)
$ echo frequency | sudo tee \
  /sys/bus/platform/drivers/amd_x3d_vcache/AMDI0101\:00/amd_x3d_mode

# Identify which physical cores are on the X3D CCD
# (in frequency mode, X3D cores show the lowest prefcore rankings)
$ grep -r '' \
  /sys/devices/system/cpu/cpu*/cpufreq/amd_pstate_prefcore_ranking

Phoronix benchmarked the optimizer driver on the Ryzen 9 9950X3D in March 2025 using the PyPerformance Python benchmark suite. Switching to cache mode produced approximately a 50% improvement on that workload compared to frequency mode defaults. The impact varies sharply by application type: compute-bound workloads that stress all threads may perform better in frequency mode, while gaming and interpreted-language runtimes that access memory intensively tend to benefit from cache mode. Notably, some workloads showed a performance regression with cache mode enabled -- FLAC audio encoding in the same test suite was approximately 36% faster without the optimizer active. Source: phoronix.com/review/amd-3d-vcache-optimizer-9950x3d.

One detail that rarely appears in overclocking guides: on multi-CCD X3D CPUs, lm-sensors and k10temp report per-CCD temperatures separately. Under Linux, the k10temp kernel module exposes Tccd1 and Tccd2 values for two-CCD chips. The 3D V-Cache CCD typically runs cooler than the standard CCD at the same workload because it has additional die mass above it (the stacked cache itself acts as a thermal buffer). Watching Tccd1 vs Tccd2 diverge in real time is a way to verify that the scheduler is actually respecting your amd_x3d_mode setting -- if it is, the CCD assigned cache mode will show elevated utilization during cache-sensitive workloads:

bash -- per-CCD temperature monitoring on X3D CPUs
# Monitor both CCD temperatures independently (requires k10temp module)
$ watch -n 0.5 "sensors | grep -E 'Tccd|Tctl|Tdie'"

# Example output on a 9950X3D:
# Tccd1:         +52.8°C  (V-Cache CCD -- cache mode bias)
# Tccd2:         +71.2°C  (Standard CCD -- bearing full compute load)
# Tctl/Tdie:     +72.4°C  (package max)

# Confirm k10temp is loaded
$ lsmod | grep k10temp

GPU Overclocking on Linux

GPU overclocking on Linux has historically been the area where the platform lagged behind Windows. That gap has narrowed considerably. The situation today is different for AMD and Nvidia, so they are covered separately.

AMD GPUs: The amdgpu Driver and ppfeaturemask

AMD's open-source amdgpu kernel driver is built into the mainline Linux kernel and supports overclocking through its "overdrive" feature. Overdrive is disabled by default as a safety measure. Enabling it requires setting a kernel parameter: amdgpu.ppfeaturemask=0xffffffff. That is the value you will find in nearly every guide online, and it works. However, there is an important distinction that these guides rarely make, and it is worth understanding before copying the parameter into your bootloader.

Setting all 32 bits of ppfeaturemask enables every driver feature, including some that are experimental and can cause instability (such as GFXOFF issues on certain Radeon RX generations). The 0xffffffff value has become a convention because it is simple and it works for the majority of users, but it is not the most precise approach.

A safer alternative is to enable only the overdrive bit (bit 14, value 0x4000) while keeping other features at their defaults. This guide recommends the minimal mask approach because the risk from the experimental features is real -- users on the ArchWiki and in AMD driver bug reports have documented suspend/resume failures, screen flickering, and GFXOFF-related hangs on specific GPU families when all bits are set. If you have been running 0xffffffff without issues, you do not need to change anything. If you are setting up a new system or have experienced unexplained driver instability, the minimal mask is the more defensible choice:

bash -- compute the minimal ppfeaturemask
# Calculate current mask with overdrive bit enabled
$ printf 'amdgpu.ppfeaturemask=0x%x\n' \
  "$(($(cat /sys/module/amdgpu/parameters/ppfeaturemask) | 0x4000))"

# Output example:
amdgpu.ppfeaturemask=0xffff7fff

Add the output value as a kernel parameter in your bootloader. On GRUB-based systems, edit /etc/default/grub and append the parameter to GRUB_CMDLINE_LINUX_DEFAULT, then run sudo update-grub. On systemd-boot, edit the appropriate loader entry. After rebooting, the sysfs interface at /sys/class/drm/card0/device/pp_od_clk_voltage becomes writable.
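On a GRUB system, the result looks like the following; the mask value and the existing "quiet splash" parameters are illustrative -- keep your own parameters and use the mask computed for your kernel:

/etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash amdgpu.ppfeaturemask=0xffff7fff"
bash
$ sudo update-grub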

Direct sysfs manipulation uses a defined format for setting P-state clock and voltage pairs:

bash -- direct sysfs clock manipulation
# View current OD clock voltage table
$ cat /sys/class/drm/card0/device/pp_od_clk_voltage

# Format: s/m [P-state] [clock MHz] [voltage mV]
# Set P-state 7 (highest) to 2000 MHz at 1050 mV (core clock)
$ echo "s 7 2000 1050" | sudo tee \
  /sys/class/drm/card0/device/pp_od_clk_voltage

# Commit the change
$ echo "c" | sudo tee \
  /sys/class/drm/card0/device/pp_od_clk_voltage

Sysfs Path Stability

Paths like /sys/class/drm/card0/ are symlinks that may change between reboots if you have multiple GPUs. For scripts that need to persist, use the full PCI path found under /sys/devices/pci0000:00/. LACT and CoreCtrl handle this automatically.
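Resolving the symlink gives you the persistent path; the PCI address in the sample output is illustrative and will differ per system:

bash -- resolve a stable PCI path for scripting
$ readlink -f /sys/class/drm/card0/device
/sys/devices/pci0000:00/0000:00:03.1/0000:03:00.0

# Use the absolute path in persistent scripts instead of the cardN symlink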

LACT: The Modern AMD GPU Control Tool

Manual sysfs manipulation is powerful but fragile. LACT (Linux AMDGPU Controller) is the recommended GUI tool for AMD GPU overclocking on Linux. As of its 0.8.4 release (January 2026), LACT supports AMD, Nvidia, and Intel GPUs. It handles the ppfeaturemask configuration automatically through its "Enable Overclocking" button, which creates an entry in /etc/modprobe.d/ and regenerates the initramfs. The recent release history, for which the LACT GitHub releases page is the authoritative source (github.com/ilya-zlobintsev/LACT/releases):

0.7.2 (March 2025): introduced RDNA4 support (RX 9070 series) with kernel 6.13 workarounds.

0.8.0 (June 2025): dropped those workarounds and required kernel 6.14+ for RDNA4; added advanced profile management with multiple trigger conditions and power-profiles-daemon integration.

0.8.1 (August 2025): added full voltage and frequency point control for older AMD GPUs (RX Vega and earlier), profile hooks for running arbitrary scripts on profile activation, and localization support.

0.8.2 (October 2025): introduced OpenTelemetry metrics export and improved multi-GPU handling.

0.8.4 (January 2026): brought UI refinements, Docker image availability for headless server deployments, and additional bugfixes.

LACT is available in official repositories for Arch Linux (pacman -S lact), via COPR on Fedora, as a .deb on Debian 12+ and Ubuntu 22.04+, as a Flatpak on Flathub for immutable distributions like Bazzite, as an RPM for OpenSUSE Tumbleweed, and in nixpkgs for NixOS:

bash -- LACT installation by distro
# Arch Linux
$ sudo pacman -S lact
$ sudo systemctl enable --now lactd

# Fedora (COPR)
$ sudo dnf copr enable ilyaz/LACT
$ sudo dnf install lact
$ sudo systemctl enable --now lactd

# Flatpak (universal, including Bazzite/SteamOS/Fedora Atomic)
$ flatpak install flathub io.github.lact-linux

# OpenSUSE Tumbleweed -- download RPM from GitHub releases
# NixOS -- available in nixpkgs: nix-env -iA nixpkgs.lact

LACT runs a system daemon (lactd) that holds root privileges, while the GUI connects to it over a Unix socket. This design means overclocking changes survive session boundaries and apply on system resume from sleep. One known conflict: power-profiles-daemon (installed by default on many GNOME-based distros) may override LACT's amdgpu performance level. LACT 0.8.0 and newer automatically coordinates with power-profiles-daemon 0.30 and newer to disable the conflicting action, without touching any other functionality ppd manages.

RDNA4 Kernel Requirement

LACT added RDNA4 support (RX 9070 series) in version 0.7.2 (March 2025). Full overclocking on RDNA4 requires kernel 6.14 or newer; LACT 0.7.2 included workarounds for kernel 6.13, but those workarounds were dropped in 0.8.0 (June 2025), making kernel 6.14 a hard requirement for RDNA4 overclocking on current LACT versions. Earlier kernels had a broken clockspeed control interface in the driver. If you are running an RX 9070 or 9070 XT, update your kernel before expecting functional overclock controls. Source: LACT release notes.

An alternative to LACT is CoreCtrl, which uses a Qt5 UI and supports per-application performance profiles in addition to global overclocking. CoreCtrl also provides CPU configuration alongside GPU tuning, making it a single-pane tool for users who want both in one interface. It is available in official repositories for Ubuntu 24.04+, Fedora 39+, Arch Linux, Debian Sid, and Gentoo.

A third option worth knowing about is Tuxclocker, a Qt5/QML application developed specifically for Linux GPU tuning. Unlike LACT and CoreCtrl, Tuxclocker exposes a more granular clock/voltage curve editor and supports both AMD and Nvidia GPUs. It uses the same amdgpu sysfs interface under the hood, but its curve editor provides a visual plotting surface that makes it easier to set precise voltage-frequency pairs without memorizing the sysfs write syntax. It is available in the AUR (tuxclocker-git) and as a Flatpak.

Nvidia GPUs: Coolbits and Wayland Complications

Nvidia overclocking on Linux is handled through the proprietary driver. Prior to driver 570, unlocking overclocking controls in nvidia-settings required enabling a set of features called Coolbits, set via Xorg configuration. Starting with driver 570 (released February 2025), GPU overclocking controls are shown by default for GPUs that support programmable clock control -- Coolbits is no longer required to surface the controls. However, on older drivers, or on systems still running X11 with pre-570 drivers, Coolbits remains the mechanism.

For systems running pre-570 drivers: The Coolbits value is a bitmask where each bit enables a specific overclocking capability in nvidia-settings. The relevant bits for Fermi-architecture and newer GPUs (GeForce 400 series and later) are: bit 2 (value 4) enables manual fan speed control; bit 3 (value 8) enables additional clock offset controls on the PowerMizer page; bit 4 (value 16) enables overvoltage. For clock offsets, overvoltage, and fan control, the correct value is 4+8+16 = 28.

This is published as 28 rather than 31 because there is widespread confusion online about the correct Coolbits value. Virtually every overclocking guide written before 2022 recommends 31 (or sometimes 24 or 12), and those values get copied forward without re-examination. The value 31 (binary 11111) includes bit 0 (value 1), which enabled overclocking on pre-Fermi GPUs and was removed from the driver in version 343.13, and bit 1 (value 2), which historically enabled SLI with mismatched VRAM amounts and was removed from the driver in version 470.42.01. Both are no-ops on any modern driver.

Setting 31 instead of 28 will not cause harm -- the driver ignores unrecognized bits -- but it perpetuates an inaccurate understanding of what you are enabling. The ArchWiki's NVIDIA/Tips and tricks page documents the per-bit functionality and removal versions. This guide recommends 28 because it enables exactly the three capabilities that matter for modern single-GPU overclocking, nothing more:

bash -- configure Coolbits via nvidia-xconfig
$ sudo nvidia-xconfig --cool-bits=28

# Verify the Device section in xorg.conf -- should show "Coolbits" "28"
$ grep -A5 'Device' /etc/X11/xorg.conf

# Alternatively, place in /etc/X11/xorg.conf.d/ (persists across driver updates)
# Note: online guides disagree on whether Coolbits goes in the "Screen" or
# "Device" Xorg section. nvidia-xconfig places it in Device. The ArchWiki
# documents it in Screen. Both locations work; the driver reads the option
# from either. Use whichever your configuration already has.

Some important caveats apply to Nvidia on Linux. Enabling DRM kernel mode setting (KMS) -- required for Wayland -- may make overclocking unavailable regardless of Coolbits value, depending on the driver version. Some overclocking operations also cannot be applied if the Xorg server is running in rootless mode; running nvidia-settings as root may be necessary.

Wayland-based desktops required workarounds for overclocking through the Nvidia driver until driver 570, released in February 2025. The Nvidia driver historically required an active X server for nvidia-settings to communicate with the GPU via the NV-CONTROL extension. Starting with driver 570, nvidia-settings uses NVML (Nvidia Management Library) instead of NV-CONTROL to control GPU clocks and fan speed on Wayland systems -- eliminating the X server dependency for overclocking operations. Driver 570 also enables GPU overclocking controls by default for GPUs that support programmable clock control, removing the requirement to set the Coolbits option just to see overclocking controls in nvidia-settings. Power limit adjustments via nvidia-smi have been Wayland-compatible for years and remain so. Source: NVIDIA Linux driver release notes, 9to5Linux coverage of NVIDIA 570.

For driver versions older than 570, the workaround for Wayland setups that need clock offsets involves running a headless X server in the background at login specifically to issue overclock commands via nvidia-settings, then detaching. This setup uses a systemd user service or a session startup script that initializes the display, applies settings, and exits. On driver 570+, this workaround is no longer needed.
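For reference, the clock-offset attribute syntax that such a script issues through nvidia-settings looks like this (X11 or pre-570 with Coolbits enabled; the bracketed performance-level index varies by GPU, with [3] commonly the highest):

bash -- nvidia-settings clock offsets (pre-570 / X11)
# +100 MHz graphics clock offset at performance level 3
$ nvidia-settings -a '[gpu:0]/GPUGraphicsClockOffset[3]=100'

# +200 memory transfer rate offset (note: the unit is MT/s, not MHz)
$ nvidia-settings -a '[gpu:0]/GPUMemoryTransferRateOffset[3]=200'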

LACT also supports Nvidia GPUs on Linux with the proprietary driver installed. It requires nvidia-smi for reading GPU information and the CUDA libraries for write access. Note that due to Nvidia's proprietary driver license, distribution-provided LACT packages may be built without Nvidia modules; the LACT project provides separate installation instructions for Nvidia support via Flatpak.

For scripted control, nvidia-smi provides a command-line interface for setting power limits and monitoring. Clock offsets can be set through the API exposed by nvidia-settings in headless mode:

bash -- nvidia-smi power limit example
# Enable persistent mode (required for power limit changes)
$ sudo nvidia-smi -pm 1

# Set power limit to 200W (check your GPU's supported range first)
$ sudo nvidia-smi -pl 200

# Query current GPU clock, power, and temperature
$ nvidia-smi --query-gpu=clocks.gr,power.draw,temperature.gpu \
  --format=csv,noheader

RAM Overclocking on Linux

RAM overclocking on Linux is, in practical terms, almost entirely a UEFI activity. The operating system does not have runtime control over DDR timing parameters, DRAM voltage, or memory frequency once the system has booted. These are set by the memory controller during POST and locked in place for the life of the session. Linux's role is to provide the stability testing environment and performance measurement -- and that role is significant.

XMP and EXPO: Enabling Profiles in UEFI

XMP (Intel Extreme Memory Profile) and EXPO (AMD Extended Profiles for Overclocking) are JEDEC-adjacent standards that allow RAM manufacturers to ship pre-tested timing configurations above the JEDEC base frequency. A DDR5 kit rated for 6000 MT/s ships with JEDEC defaults at 4800 MT/s; enabling XMP or EXPO in the UEFI applies the manufacturer's validated faster profile.

Enable XMP or EXPO in your UEFI settings, then boot into Linux and verify the memory is operating at the intended frequency:

bash -- check DRAM frequency from Linux
# dmidecode reads SMBIOS data -- requires root
$ sudo dmidecode -t memory | grep -E 'Speed|Configured'

# Example output:
        Speed: 6000 MT/s
        Configured Memory Speed: 6000 MT/s

If "Configured Memory Speed" is lower than "Speed", the XMP/EXPO profile is not active. Return to UEFI and enable it explicitly -- do not rely on "Auto" settings, which often default to JEDEC.

This is a point of confusion worth addressing: many users believe that setting their memory to "Auto" in UEFI will automatically enable their XMP or EXPO profile, because the DIMM has the profile programmed into it. On some boards this is true; on others, "Auto" means "use JEDEC defaults and ignore the XMP/EXPO profile entirely." The behavior is vendor-specific and sometimes BIOS-version-specific. The only reliable approach is to explicitly select XMP Profile 1 (or EXPO Profile 1) in UEFI, boot into Linux, and verify with dmidecode that the configured speed matches the rated speed. If you have been running your DDR5-6000 kit at 4800 MT/s for months because "Auto" looked like it should work, you are not alone -- this is one of the more common performance oversights on new builds.

Manual Timing Adjustments

Going beyond XMP/EXPO means manually adjusting primary timings (tCL, tRCD, tRP, tRAS), secondary timings, and DRAM voltage in UEFI. The specifics depend heavily on the memory IC (integrated circuit) used in your DIMM -- Hynix A-die, Samsung B-die, Micron E-die, and others each have different characteristics and preferred tuning methodologies. Finding your RAM's IC is the prerequisite step, achieved through tools like Thaiphoon Burner on Windows or by checking the DRAM SPD data via decode-dimms on Linux:

bash -- read DIMM SPD data
# Install i2c-tools if not present
$ sudo apt install i2c-tools   # Debian/Ubuntu

# Load the SMBus controller driver first if no DIMMs show up
# (i2c-i801 for most Intel chipsets, i2c-piix4 for most AMD)
$ sudo modprobe i2c-i801

# DDR4: load the ee1004 module (correct module for DDR4 SPD)
$ sudo modprobe ee1004

# DDR3 and older: load the eeprom module (removed in kernel 6.7;
# on newer kernels use the at24 driver instead)
$ sudo modprobe eeprom   # kernels older than 6.7 only

# Decode SPD from all DIMM slots
$ sudo decode-dimms

The output includes the manufacturer, part number, rated speed, and timing parameters programmed by the manufacturer. This data guides rational manual tuning.
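A small filter pulls out the two fields that matter for identifying the IC. This is a convenience sketch: the label strings are assumed from typical decode-dimms output and may vary between versions:

```shell
# Filter decode-dimms output down to the IC-identifying fields.
# Run on real hardware as: sudo decode-dimms | spd_summary
spd_summary() {
    grep -E '^(Manufacturer|Part Number)'
}
```

The part number is what you feed into community die-identification tables to find the underlying IC.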

Stability Testing After Overclocking

An overclock that crashes after two hours of gaming is not a stable overclock. Stability testing is not optional; it is the mechanism by which you determine whether the changes you made are safe to run long-term. Linux has an excellent toolkit for this.

CPU Stability: stress-ng and mprime

stress-ng is the most comprehensive CPU stress tool available on Linux. Unlike Prime95 / mprime, which primarily exercises floating-point units with prime number calculations, stress-ng ships with over 280 distinct stressors covering every CPU subsystem, memory controller, cache hierarchy, and I/O path:

bash -- stress-ng CPU stress test examples
# All CPU stressors, all cores, 30 minutes
$ stress-ng --cpu $(nproc) --cpu-method all --timeout 30m --metrics

# Matrix math stressor (sensitive to floating-point errors)
$ stress-ng --matrix $(nproc) --timeout 30m

# Combined CPU + memory stressor to stress the whole pipeline
$ stress-ng --cpu $(nproc) --vm 2 --vm-bytes 75% --timeout 30m

mprime (the Linux port of Prime95) remains useful specifically because it uses error-checking in its calculations. An incorrect FFT result means the CPU produced wrong output, which indicates an unstable overclock. Run mprime in "torture test" mode, selecting "Small FFTs" to maximize CPU core load, or "Blend" to stress both CPU and memory simultaneously.

One stability signal that many guides miss: Machine Check Exceptions (MCEs). When the CPU detects a hardware error -- memory controller errors, bus errors, cache ECC errors -- it logs a Machine Check Event. These do not always crash the system; they can be silently logged while the system continues running. Check dmesg and the MCE log after any stress test session:

bash -- check for MCE hardware errors after stress testing
# Check dmesg for Machine Check Events
$ sudo dmesg | grep -i "mce\|machine check\|hardware error"

# Install mcelog (older tool) or rasdaemon (modern replacement)
$ sudo apt install rasdaemon   # Debian/Ubuntu
$ sudo systemctl enable --now rasdaemon
$ sudo ras-mc-ctl --errors    # show logged hardware errors

# Record the baseline error count before a stress run so any
# new entries stand out afterwards
$ sudo ras-mc-ctl --error-count

Any MCE log entry after an overclocked stress run is a red flag. A clean log, combined with clean Memtest86+ and stress-ng runs, gives you a much higher confidence threshold than any single tool alone.
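The dmesg check folds naturally into a reusable filter. A sketch: the grep patterns mirror the ones above, and the function simply scans whatever log text it is fed:

```shell
# Scan log text on stdin for machine-check signatures. Prints any
# matches; exits non-zero when the log is clean, so a hit lands in
# the "if" branch. Run on real hardware as: sudo dmesg | mce_scan
mce_scan() {
    grep -iE 'mce|machine check|hardware error'
}

# Typical post-stress usage:
#   if sudo dmesg | mce_scan; then
#       echo "MCEs logged -- treat the overclock as suspect"
#   fi
```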

GPU Stability: Workload Benchmarks

For GPU stability, synthetic benchmarks that produce repeatable scores are preferred over open-ended workloads. Unigine Heaven and Superposition are cross-platform OpenGL/Vulkan benchmarks that stress the GPU consistently. Running multiple loops and checking that scores remain consistent (within ~2%) indicates thermal and electrical stability.
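The ~2% consistency rule can be checked mechanically. The sketch below takes a list of benchmark scores and reports whether the min-to-max spread stays within a given percentage; the 2% threshold is the heuristic from the text, not a Unigine-defined limit:

```shell
# Check that repeated benchmark scores agree within a percentage.
# Usage: scores_consistent <max_spread_pct> <score> [<score> ...]
# Prints the spread; exits non-zero if it exceeds the threshold.
scores_consistent() {
    local pct="$1"; shift
    printf '%s\n' "$@" | awk -v pct="$pct" '
        NR == 1 { min = max = $1 }
        { if ($1 < min) min = $1; if ($1 > max) max = $1 }
        END {
            spread = (max - min) * 100 / min
            printf "spread: %.2f%%\n", spread
            exit (spread > pct)
        }'
}

# Example: scores_consistent 2 5210 5195 5221
```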

Practical workload testing -- running an actual game with MangoHud enabled to track frame times, GPU temperature, and clock stability -- is arguably more representative than synthetic benchmarks. A GPU that passes Heaven but crashes in-game likely has a borderline overclock that synthetic tools are not exercising in the same way:

bash -- MangoHud via Steam launch options
# Add to Steam game's launch options to enable MangoHud overlay
mangohud %command%

# Or run any application with the overlay
$ mangohud glxgears

RAM Stability: Memtest86+

Memtest86+ is the open-source (GPL v2) standard for memory testing. It runs from a bootable USB or is included in many distribution GRUB menus, running before the OS loads to test RAM without any OS interference. This is critical: testing RAM from within a running OS means the OS itself is consuming memory and potentially masking intermittent errors.

The Memtest86+ project documentation describes the tool's purpose as ensuring hardware is sound before critical use: it determines whether RAM is faulty through test algorithms designed to detect failures reliably. The documentation specifically calls out post-overclocking testing as one of the primary use cases.

-- memtest.org official documentation

Memtest86+ is distinct from PassMark's Memtest86, which has been closed-source since 2013. This is a persistent source of confusion: both projects share the "Memtest86" name, both test memory, and both are available as bootable ISOs, but they are not the same software. Memtest86+ (the open-source project at memtest.org) is the one included in many Linux distribution GRUB menus by default and the one this guide recommends. PassMark's Memtest86 (no plus sign, at passmark.com) is a proprietary product with a free tier and a paid tier. Both are functional memory testing tools; this guide specifies Memtest86+ because it is GPL-licensed, actively maintained by the open-source community, and universally packaged for Linux distributions. When searching online, check the URL and the plus sign -- if the download page is at passmark.com, you are looking at the closed-source version.

Run at least two full passes after any XMP/EXPO or manual timing change. Zero errors after two passes indicates basic stability; for production or mission-critical systems, run overnight.

Testing Methodology Note

Google's stressapptest (available in Linux package managers) is what Google uses to evaluate memory stability in their server fleets. For RAM overclocking verification on Linux, it is a strong complement to Memtest86+. Run it after booting your OS with the new settings: stressapptest -s 3600 -M 4096 -m 4 -W (3600 seconds, 4GB, 4 threads, with more aggressive memory exercise).
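stressapptest reports a PASS/FAIL status and, per its documentation, exits non-zero on failure, which makes it easy to script. A minimal wrapper sketch -- the wrapper itself is generic; the stressapptest invocation is the one from the text:

```shell
# Run a stress command and report pass/fail from its exit status.
run_stress() {
    if "$@"; then
        echo "PASS: $1"
    else
        echo "FAIL: $1 -- treat the overclock as unstable"
    fi
}

# Example:
#   run_stress stressapptest -s 3600 -M 4096 -m 4 -W
```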

Monitoring During and After Overclocking

Overclocking without monitoring is guesswork. Linux provides excellent real-time visibility into temperatures, voltages, and clock frequencies through both command-line and graphical tools.

lm-sensors: Thermal Monitoring

The lm-sensors package reads hardware monitor chips exposed by the kernel. After installation, run sudo sensors-detect once to probe for available sensors, then use watch -n 1 sensors to view temperatures in real time during stress testing.

Critical temperature thresholds to know: CPU cores typically throttle and protect themselves above 90-100°C depending on the chip; AMD GPUs report two relevant temperatures -- edge temperature (die surface) and junction temperature (internal hotspot). The junction temperature is the throttling limit. It runs 10-20°C above edge temperature and is the number to watch. AMD RDNA2 and newer GPUs will throttle when junction reaches 110°C:

bash
$ sudo apt install lm-sensors   # or equivalent
$ sudo sensors-detect           # one-time probe
$ watch -n 1 sensors            # live temperature view
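During a GPU stress run it helps to watch only the thermally relevant lines. A filter sketch over sensors(1) output -- "edge" and "junction" are the labels amdgpu exposes, but exact formatting varies by driver version:

```shell
# Filter sensors output down to GPU edge/junction temperature lines.
# Live view on real hardware:
#   watch -n 1 'sensors | grep -iE "edge|junction"'
gpu_temps() {
    grep -iE 'edge|junction'
}
```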

AMD GPU Monitoring via amdgpu_top

amdgpu_top is a terminal tool for real-time AMD GPU statistics: clock speeds, VRAM usage, shader and compute activity, power draw, and temperature. It uses the DRM (Direct Rendering Manager) interface and does not require root. Install from your distribution's repositories or build from the GitHub source:

bash
$ sudo pacman -S amdgpu_top   # Arch
$ amdgpu_top                   # run interactively

CoreFreq: CPU Frequency and Voltage Visibility

CoreFreq is a terminal-based CPU monitor that reads performance counters directly from MSRs, providing per-core frequency, temperature, and power data with higher accuracy than tools that rely on the cpufreq sysfs interface. It requires loading a kernel module and is particularly useful for verifying that CPU frequency targets are actually being met after setting overclock parameters:

bash -- corefreq usage
# After building from source and loading the module:
$ sudo ./corefreqd &   # start daemon
$ ./corefreq-cli -t   # live top view with per-core clocks
$ ./corefreq-cli -V   # power and voltage monitoring

Undervolting: The Other Side of the Equation

Undervolting deserves its own mention because it is frequently a more practical tuning activity than aggressive overclocking, particularly on laptops and systems where thermal headroom is limited. Reducing core voltage below default while maintaining the same operating frequency can significantly lower temperatures and power consumption without any performance cost -- in fact it often improves sustained performance by reducing thermal throttling.

On Intel platforms, undervolting is done through the same MSR interface used for other voltage control. The intel-undervolt tool (available in the AUR and on GitHub) provides a configuration file-driven approach for Haswell and newer Intel CPUs.
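Its configuration lives in /etc/intel-undervolt.conf. The fragment below follows the format documented in the project README; the plane indices are standard, but the -50 mV offsets are placeholders -- start small and stress test at each step:

```
# /etc/intel-undervolt.conf (sketch -- offsets are illustrative)
undervolt 0 'CPU' -50
undervolt 1 'GPU' -50
undervolt 2 'CPU Cache' -50

# Apply and verify:
#   sudo intel-undervolt apply
#   sudo intel-undervolt read
```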

Plundervolt and Intel Undervolt Protection

Plundervolt (CVE-2019-11157) was a vulnerability affecting Intel 6th–10th generation Core CPUs and certain Xeon E3/E-2000 series. It allowed undervolting to corrupt values inside SGX (Software Guard Extensions) enclaves. Intel's official FAQ explicitly states: "If you do not use SGX, you do not need to do anything." The mitigation -- enabling the Overclocking Lock MSR bit -- is only recommended for systems running SGX workloads. For general undervolting on non-SGX workloads, Plundervolt is not a concern. However, many OEMs applied the lock unconditionally in firmware updates, which effectively disabled software undervolting on those systems regardless of SGX usage. Check your UEFI settings and Intel Security Advisory INTEL-SA-00289 for your specific CPU. Newer Intel CPUs outside the K/HK/HX unlocked series may also have hardware-fused undervolt protection unrelated to Plundervolt.

What to Avoid: Common Linux Overclocking Mistakes

Several patterns cause problems repeatedly for people overclocking on Linux.

Using ryzenadj on desktop Ryzen without verifying support. ryzenadj targets Ryzen mobile (APU) processors. Running it on a desktop Ryzen CPU will likely return "unsupported model." The tool communicates with AMD's SMU (System Management Unit) through interfaces that laptop firmware exposes differently than desktop firmware. For desktop TDP adjustment, use your UEFI's PBO settings or the ryzen_smu kernel module -- and research the specific model before modifying SMU registers, as incorrect values can cause immediate system instability.

Trusting the governor label at face value. Setting performance governor on a system with intel_pstate or amd_pstate in active mode does not pin the CPU to maximum frequency. It sends a performance hint to the hardware governor. The CPU may still clock down at idle. If you need the CPU pegged to a specific frequency for testing purposes, switch intel_pstate to passive mode first and use the generic cpufreq governors.
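With intel_pstate in passive mode, pinning works through the generic userspace governor. A sketch, assuming the standard cpufreq sysfs layout -- the SYSFS_ROOT override exists purely so the function can be exercised without root:

```shell
# Pin one cpufreq policy to a fixed frequency (kHz) via the userspace
# governor. Requires intel_pstate=passive on the kernel command line,
# and root for the real sysfs writes.
set_fixed_freq() {
    local root="${SYSFS_ROOT:-/sys/devices/system/cpu/cpufreq}"
    echo userspace > "$root/policy$1/scaling_governor"
    echo "$2"      > "$root/policy$1/scaling_setspeed"
}

# On real hardware, as root:
#   set_fixed_freq 0 4500000
```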

Assuming amdgpu.ppfeaturemask=0xffffffff is always safe. Enabling all 32 bits of the ppfeaturemask activates undocumented and experimental features, not just overdrive. Some of those features cause screen flickering or broken resume from suspend on specific GPU families. Use the computed minimal mask that enables only overdrive (PP_OVERDRIVE_MASK, bit 14) unless you have a specific reason for the others.
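Computing the minimal mask is a one-liner: take the driver's current default and OR in only the overdrive bit. A sketch -- on a live system the default value comes from /sys/module/amdgpu/parameters/ppfeaturemask:

```shell
# OR bit 14 (PP_OVERDRIVE_MASK = 0x4000) into an existing
# ppfeaturemask value, leaving every other bit at its driver default.
overdrive_mask() {
    printf '0x%x\n' $(( $1 | 0x4000 ))
}

# On real hardware:
#   overdrive_mask "$(cat /sys/module/amdgpu/parameters/ppfeaturemask)"
# then boot with amdgpu.ppfeaturemask=<result>
```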

Skipping stability tests and assuming boot success means stability. A system that boots and runs idle is not a validated overclock. Memory errors, in particular, can be intermittent and manifest hours into a workload. The Memtest86+ documentation notes that some memory errors only appear when cells are physically hot from sustained operation -- testing at room temperature with brief runs may miss errors that appear after 30 minutes under load.

Ignoring power supply headroom. Overclocked components draw more power. Overclocked GPUs in particular can spike well above their rated TDP during transient loads. A power supply operating at 90%+ of rated capacity under normal load may become unstable under peak overclocked draw. This manifests as random system resets that look like CPU or RAM instability but are actually PSU dropouts.

Using thermald on a system you are trying to overclock. thermald, the Intel thermal management daemon, proactively throttles CPU frequency to prevent overheating. It is beneficial on laptops and production systems but directly counteracts CPU overclocking efforts on desktop systems. Disable it before testing: sudo systemctl disable --now thermald.

The Platform Is Not the Bottleneck

Linux is not a compromised overclocking environment. It is a different one. The kernel's direct access to MSR interfaces, the AMD driver's open-source architecture, the depth of monitoring available through lm-sensors, amdgpu_top, and corefreq -- these are advantages, not limitations. The tooling gap with Windows has closed substantially as projects like LACT, CoreCtrl, and Tuxclocker have matured.

What Linux does not have is the hand-holding of closed vendor software that hides hardware complexity behind sliders. That is a trade. The people who benefit from those sliders often also accept their constraints without realizing it. On Linux, the constraints are your hardware's actual capabilities, the kernel's interfaces, and your willingness to read documentation. None of those are unreasonable.

Start conservative. Increment slowly. Test completely. Measure what matters. The rest is patience.

Key Sources

ArchWiki AMDGPU: wiki.archlinux.org/title/AMDGPU
ArchWiki CPU Frequency Scaling: wiki.archlinux.org/title/CPU_frequency_scaling
ArchWiki Ryzen: wiki.archlinux.org/title/Ryzen
ArchWiki NVIDIA/Tips and tricks: wiki.archlinux.org/title/NVIDIA/Tips_and_tricks
LACT GitHub (current stable: 0.8.4, January 2026; RDNA4 since 0.7.2): github.com/ilya-zlobintsev/LACT
Tuxclocker GitHub: github.com/Lurkki14/tuxclocker
Linux kernel amd-pstate documentation: docs.kernel.org/admin-guide/pm/amd-pstate.html
Linux kernel AMD 3D V-Cache sysfs ABI: kernel.org/doc/Documentation/ABI/testing/sysfs-bus-platform-drivers-amd_x3d_vcache
Phoronix AMD 3D V-Cache Optimizer review: phoronix.com/review/amd-3d-vcache-optimizer-9950x3d
Phoronix Linux 6.5 amd_pstate default: phoronix.com/review/linux65-ryzen-servers
NVIDIA 570 Linux driver release (NVML Wayland overclocking): 9to5linux.com/nvidia-570-linux-graphics-driver
Intel Security Advisory INTEL-SA-00289 (Plundervolt): intel.com/content/www/us/en/security-center/advisory/intel-sa-00289.html
Memtest86+: memtest.org
rasdaemon / MCE monitoring: github.com/mchehab/rasdaemon
Fedora GPU Overclocking Docs: docs.fedoraproject.org
LWN.net on processor undervolting: lwn.net/Articles/835594
Linux kernel k10temp driver documentation: docs.kernel.org/hwmon/k10temp.html