You just finished installing Linux. The desktop is clean, the terminal is waiting, and everything feels fresh. It's tempting to start installing applications and customizing your wallpaper right away -- but hold off. There's a sequence of steps that will save you hours of frustration later and transform that raw installation into a system that's secure, up to date, and genuinely pleasant to work with. Whether you're running Ubuntu, Fedora, Arch, or any other distribution, these twelve steps apply broadly. Adjust the specific commands for your package manager, but the principles are universal.
1. Update Everything
This is the single most important step, and it should always come first. Your installation media is a snapshot in time -- it could be days, weeks, or even months behind the latest packages. Running a full system update ensures you get the latest security patches, bug fixes, and driver improvements before you do anything else. The exact commands depend on your distribution's package manager:
# Debian / Ubuntu / Mint
$ sudo apt update && sudo apt full-upgrade -y

# Fedora / RHEL
$ sudo dnf upgrade --refresh -y

# Arch / Manjaro
$ sudo pacman -Syu
On Debian-based systems, full-upgrade (apt's equivalent of apt-get's dist-upgrade) is preferred over plain upgrade for a fresh install. While upgrade never removes packages, full-upgrade will handle dependency changes -- such as installing new packages or removing obsolete ones -- that are common when a fresh install pulls in a new kernel or major library update.
Reboot after the update finishes. Kernel updates, in particular, won't take effect until you restart. It's common for a fresh install to pull in hundreds of updated packages, so give this step the time it needs.
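If you're not sure whether a reboot is actually needed, Debian-based systems leave a flag file behind when one is. A small sketch that checks for it:

```shell
# Debian/Ubuntu write /var/run/reboot-required when an update
# (typically a new kernel or core library) needs a restart.
if [ -f /var/run/reboot-required ]; then
    status="reboot required"
else
    status="no reboot required"
fi
echo "$status"
```

On Fedora, the rough equivalent is `sudo dnf needs-restarting -r` from the dnf plugins, which exits non-zero when a reboot is recommended.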
2. Enable and Configure Your Firewall
Many Linux distributions ship with a firewall framework installed but not enabled by default. This means your system is sitting on the network with no firewall rules filtering traffic to your running services -- not ideal, even on a home network. Ubuntu and its derivatives ship with ufw (Uncomplicated Firewall), which is exactly what its name suggests: a straightforward interface for managing firewall rules without touching the underlying complexity. On modern Ubuntu systems (20.10 and later), ufw routes its rules through the nftables backend via iptables-nft, though the commands remain the same.
# Set default policies
$ sudo ufw default deny incoming
$ sudo ufw default allow outgoing

# Allow SSH if you need remote access
$ sudo ufw allow ssh

# Enable the firewall (after the rules are in place)
$ sudo ufw enable

# Check the status
$ sudo ufw status verbose
On Fedora or RHEL-based systems, firewalld is the default and is typically enabled out of the box. Use firewall-cmd to manage zones and services. The principle is the same: deny by default, then explicitly allow only the traffic you need.
# Verify firewalld is running
$ sudo firewall-cmd --state

# Check the default zone
$ sudo firewall-cmd --get-default-zone

# List what's allowed in the active zone
$ sudo firewall-cmd --list-all

# Allow SSH if needed
$ sudo firewall-cmd --permanent --add-service=ssh
$ sudo firewall-cmd --reload
This simple step eliminates an enormous category of network-based attacks.
If you're configuring a remote server over SSH, always allow SSH traffic before enabling the firewall. Locking yourself out of a remote machine is an unpleasant way to learn this lesson.
3. Configure Additional Repositories
The default repositories that ship with your distribution contain a curated selection of software, but they rarely include everything you'll need. Adding supplementary repositories gives you access to newer package versions, proprietary drivers, and applications that can't be distributed in the base repos due to licensing restrictions.
On Ubuntu, this typically means enabling the universe and multiverse repositories for community-maintained and restricted packages. On Fedora, many users add RPM Fusion to gain access to multimedia codecs and proprietary software that Red Hat can't include in the base distribution.
# Ubuntu -- enable universe and multiverse
$ sudo add-apt-repository universe
$ sudo add-apt-repository multiverse

# Fedora -- add RPM Fusion (free and nonfree)
$ sudo dnf install \
    https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm \
    https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
If your distribution supports Flatpak or Snap, this is also a good time to set those up. Flatpak, in particular, offers sandboxed access to a huge software library through Flathub, and it works across nearly every distribution. Ubuntu doesn't ship with Flathub pre-configured, so you'll need to add it manually:
# Install Flatpak (if not already installed)
$ sudo apt install flatpak

# Add the Flathub repository
$ flatpak remote-add --if-not-exists flathub \
    https://dl.flathub.org/repo/flathub.flatpakrepo
4. Install Hardware Drivers
Linux has come a long way with hardware support, but some components -- particularly discrete GPUs, Wi-Fi chipsets, and certain printers -- still require proprietary drivers for full functionality. Running without the correct GPU driver, for example, means falling back to a generic driver with little or no hardware acceleration. That translates to sluggish desktop performance, unreliable 3D graphics, and significantly higher power consumption on laptops.
Ubuntu provides a convenient tool for detecting and installing proprietary drivers:

# List detected hardware and the recommended drivers
$ ubuntu-drivers devices

# Install the recommended drivers automatically
$ sudo ubuntu-drivers autoinstall
For NVIDIA users on other distributions, the process involves adding the appropriate repository and installing the driver package manually. AMD and Intel GPU users generally have a smoother experience since their open-source drivers are included in the kernel, but it's still worth verifying that your system is using the right driver by checking the output of lspci -k for your graphics card entry.
5. Install Multimedia Codecs
Out of the box, many Linux distributions can't play MP3 files, stream H.264 video, or handle common media formats. This isn't a technical limitation -- it's a legal one. Codecs like H.264, AAC, and MP3 carry patent encumbrances in some jurisdictions, so distributions play it safe by excluding them from the default installation. The fix is straightforward.
# Ubuntu / Mint -- install the full codec pack
$ sudo apt install ubuntu-restricted-extras

# Fedora -- install codecs via RPM Fusion
$ sudo dnf install gstreamer1-plugins-bad-free gstreamer1-plugins-ugly \
    gstreamer1-plugins-bad-freeworld gstreamer1-plugin-openh264
After installing these packages, video players like VLC or the built-in GNOME Videos will handle virtually any format you throw at them. This also resolves the common frustration of web browsers failing to play certain embedded media on websites that rely on proprietary codecs for their video content.
6. Set Up System Snapshots
Before you make any more changes, set up a snapshot tool. System snapshots let you roll back your entire filesystem to a known good state if a package update breaks something, if a configuration change goes sideways, or if you simply want to experiment without consequences. Think of it as version control for your entire operating system.
Timeshift is the go-to tool for this. On ext4 filesystems it creates incremental snapshots using rsync; on Btrfs it leverages native Btrfs snapshots for near-instantaneous backups that consume minimal disk space.
# Install Timeshift
$ sudo apt install timeshift

# Create an initial snapshot
$ sudo timeshift --create --comments "Fresh install baseline"

# List existing snapshots
$ sudo timeshift --list
Configure Timeshift to take automatic daily snapshots and keep the last five. This gives you a rolling safety net without consuming excessive disk space. If you're running Btrfs, snapshots are essentially free in terms of disk usage until the data actually diverges.
7. Review Your Swap Configuration
Swap space acts as overflow memory when your RAM fills up. The installer usually configures a swap partition or swap file automatically, but the defaults aren't always ideal for your workload. Too little swap and your system will freeze or kill processes under memory pressure. Too much and you're wasting disk space. A reasonable starting point is to match your RAM size up to 8 GB, then add half for anything beyond that.
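That rule of thumb is easy to script. A small sketch (the helper name is my own, purely for illustration) that turns a RAM size into a suggested swap size:

```shell
# Suggested swap (GB) for a given RAM size (GB), following the
# rule of thumb above: match RAM up to 8 GB, then add half of
# whatever exceeds 8 GB. Function name is illustrative.
recommended_swap_gb() {
    ram=$1
    if [ "$ram" -le 8 ]; then
        echo "$ram"
    else
        echo $(( 8 + (ram - 8) / 2 ))
    fi
}

recommended_swap_gb 4    # 4 GB RAM  -> 4 GB swap
recommended_swap_gb 16   # 16 GB RAM -> 12 GB swap
```

Treat the result as a starting point, not a law -- hibernation, for instance, needs swap at least as large as RAM.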
Check your current swap configuration and the swappiness value, which controls how aggressively the kernel moves data from RAM to swap. On Linux kernel 5.8 and later, swappiness can range from 0 to 200, though values between 0 and 100 cover the range relevant to desktops:
# Check current swap
$ swapon --show
$ free -h

# Check swappiness (default is usually 60)
$ cat /proc/sys/vm/swappiness

# Lower swappiness for desktop use (keeps more in RAM)
$ sudo sysctl vm.swappiness=10

# Make it permanent (drop-in file, preferred over editing sysctl.conf)
$ echo 'vm.swappiness=10' | sudo tee /etc/sysctl.d/99-swappiness.conf
For desktop workstations, lowering swappiness to 10 tells the kernel to prefer keeping data in RAM and only resort to swap when memory is genuinely running low. This results in a noticeably snappier desktop experience since RAM access is orders of magnitude faster than disk I/O, even on an SSD.
8. Set Your Hostname and Timezone
The installer often assigns a generic hostname like localhost or something based on your username. If you have multiple machines on your network, meaningful hostnames make life significantly easier -- especially when you're SSH-ing between systems and need to know at a glance which terminal belongs to which machine. Setting your timezone correctly ensures that logs, cron jobs, and file timestamps all reflect the right time.
# Set a descriptive hostname
$ sudo hostnamectl set-hostname workstation-01

# Verify it
$ hostnamectl

# Set timezone
$ sudo timedatectl set-timezone America/New_York

# Enable NTP for automatic time sync
$ sudo timedatectl set-ntp true
Enabling NTP synchronization is especially important. Clock drift on a system that isn't synced to a time server can cause subtle and maddening issues with TLS certificate validation, Kerberos authentication, log correlation, and any application that depends on accurate timestamps.
9. Customize Your Shell Environment
The default Bash configuration works, but it doesn't do you any favors. A few tweaks to your shell environment can dramatically improve your productivity at the terminal. At minimum, you should configure a useful command history, add aliases for common operations, and consider whether an alternative shell like Zsh or Fish would serve you better.
# Increase history size and add timestamps
HISTSIZE=10000
HISTFILESIZE=20000
HISTTIMEFORMAT="%F %T "
HISTCONTROL=ignoreboth:erasedups

# Useful aliases
alias ll='ls -alFh --color=auto'
alias gst='git status'
alias update='sudo apt update && sudo apt full-upgrade -y'
alias ports='sudo ss -tulnp'
alias myip='curl -s ifconfig.me'

# Enable colored output for grep
alias grep='grep --color=auto'
If you want to go further, installing zsh along with the Oh My Zsh framework gives you features like intelligent autocompletion, syntax highlighting as you type, a rich plugin ecosystem, and themes that display git branch information, exit codes, and execution times right in your prompt. For many users, switching shells is the single biggest quality-of-life improvement they make on a new system.
10. Install Essential Software
With your system updated, secured, and configured, it's time to install the tools you actually need. Rather than installing applications one at a time over the next few weeks as you realize they're missing, batch-install your essentials now. Here's a sensible starting point that covers development tools, system utilities, and everyday applications.
# Development essentials
$ sudo apt install build-essential git curl wget vim

# System monitoring and diagnostics
$ sudo apt install htop fastfetch net-tools tmux

# Archive and compression tools
$ sudo apt install unzip p7zip-full rar

# Useful CLI utilities
$ sudo apt install tree jq bat ripgrep fd-find
The build-essential meta-package installs GCC, make, and other compilation tools that many software packages require as build dependencies -- even if you never write C code yourself. Note that fastfetch replaces the now-discontinued neofetch (archived April 2024) and is actively maintained with broader hardware and Wayland support. Tools like ripgrep and fd-find are modern replacements for grep and find that are dramatically faster and have more intuitive syntax. Once you try them, it's hard to go back.
Some of these packages may have different names on different distributions. On Fedora, build-essential's role is filled by the @development-tools group, while fd-find keeps its name (on Arch the package is simply fd). On Debian and Ubuntu, note that the installed binaries are fdfind and batcat rather than fd and bat, to avoid name clashes with older packages. On Ubuntu versions before 24.04, fastfetch is not in the default repositories -- install it via PPA (ppa:zhangsongcui3371/fastfetch) or download the .deb from the project's GitHub releases. Check your distro's documentation or search the package manager if a package name doesn't resolve.
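If you maintain a setup script across distributions, it helps to centralize those name differences. An illustrative sketch covering only the mappings mentioned above (the helper name is my own; extend the table for your own package list):

```shell
# Map the Debian/Ubuntu package names used in this article to
# their Fedora equivalents. Anything not listed passes through
# unchanged. Helper name is illustrative.
fedora_pkg() {
    case "$1" in
        build-essential) echo "@development-tools" ;;
        *)               echo "$1" ;;
    esac
}

fedora_pkg build-essential   # @development-tools
fedora_pkg htop              # htop (unchanged)
```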
11. Set Up Automated Backups
Snapshots protect you from system-level problems, but they won't save your personal data if a drive fails. Proper backups -- meaning copies stored on a separate physical device or in the cloud -- are non-negotiable. The distinction matters: snapshots are for rollbacks, backups are for disaster recovery. You need both.
Deja Dup (GNOME Backups) provides a clean graphical interface for scheduling encrypted backups to local drives, network shares, or cloud storage. For a command-line approach, rsync combined with a cron job or systemd timer gives you full control over what gets backed up and where it goes.
# Simple rsync backup to an external drive
$ rsync -avh --delete --exclude='.cache' \
    /home/username/ /mnt/backup/home-backup/

# Install Deja Dup for a GUI approach
$ sudo apt install deja-dup
The --delete flag in the rsync command ensures that files you've removed from the source are also removed from the backup, keeping the two in sync. Be careful with this flag -- if you accidentally reverse the source and destination paths, --delete will wipe your data. The --exclude='.cache' flag skips cached data that doesn't need to be preserved. For critical data, consider using a tool like restic or borgbackup, which provide encrypted, deduplicated, and versioned backups that are suitable for long-term archival.
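If you prefer the systemd timer route over cron, a minimal sketch looks like this -- the unit names, username, and paths are placeholders, so adapt them to your setup:

```ini
# /etc/systemd/system/home-backup.service (illustrative name)
[Unit]
Description=Back up /home to the external drive

[Service]
Type=oneshot
ExecStart=/usr/bin/rsync -avh --delete --exclude=.cache /home/username/ /mnt/backup/home-backup/

# /etc/systemd/system/home-backup.timer (illustrative name)
[Unit]
Description=Run the home backup daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `sudo systemctl enable --now home-backup.timer`. Persistent=true makes systemd run a missed backup at the next boot, which cron alone won't do.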
A backup that has never been tested is not a backup -- it's a hope. After setting up your backup routine, actually restore a file from it to verify the process works end to end. Do this before you need it, not during a crisis.
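To make that drill concrete, here is a throwaway sketch using temporary directories and plain cp so it runs anywhere -- substitute your real backup tool and paths when you test for real:

```shell
# Throwaway restore drill: back a file up, "lose" it, restore it,
# then verify the restored copy matches the backup. All paths are
# temporary and cleaned up by the OS.
src=$(mktemp -d)
backup=$(mktemp -d)

echo "important data" > "$src/notes.txt"
cp "$src/notes.txt" "$backup/notes.txt"    # back it up

rm "$src/notes.txt"                        # simulate the loss
cp "$backup/notes.txt" "$src/notes.txt"    # restore from backup

cmp -s "$src/notes.txt" "$backup/notes.txt" && echo "restore OK"
```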
12. Harden SSH (If Applicable)
If you plan to access your machine remotely -- or if it's a server that will be exposed to the internet -- hardening SSH is critical. The default SSH configuration is designed for broad compatibility, not security. A few targeted changes can reduce your attack surface dramatically while still keeping remote access convenient and reliable.
# Edits to /etc/ssh/sshd_config (or a drop-in under /etc/ssh/sshd_config.d/)

# Disable root login over SSH
PermitRootLogin no

# Disable password authentication (use keys instead)
PasswordAuthentication no

# Limit SSH to specific users
AllowUsers yourusername

# Change the default port (optional but reduces noise)
Port 2222

# Set idle timeout
ClientAliveInterval 300
ClientAliveCountMax 2
Before disabling password authentication, make sure you've already copied your SSH public key to the server using ssh-copy-id. Switching to key-based authentication eliminates brute-force password attacks entirely, which is significant when you consider that any publicly reachable SSH server will receive thousands of automated login attempts per day.
# Generate an SSH key pair (if you don't have one)
$ ssh-keygen -t ed25519 -C "you@example.com"

# Copy your public key to the server
$ ssh-copy-id -i ~/.ssh/id_ed25519.pub user@server

# Validate the configuration before restarting
$ sudo sshd -t

# Restart SSH to apply changes (the service may be named 'ssh' on Debian/Ubuntu)
$ sudo systemctl restart sshd
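One detail worth checking while you're here: OpenSSH is strict about file permissions. The client warns about and ignores private keys that other users can read, and with StrictModes enabled (the default), sshd rejects keys in a group- or world-writable ~/.ssh or authorized_keys. A quick sketch that tightens things up:

```shell
# Tighten permissions the way OpenSSH expects:
#   ~/.ssh          -> 700
#   private keys    -> 600
#   authorized_keys -> 600 (644 also works)
sshdir="$HOME/.ssh"
mkdir -p "$sshdir"
chmod 700 "$sshdir"

if [ -f "$sshdir/id_ed25519" ]; then
    chmod 600 "$sshdir/id_ed25519"
fi
if [ -f "$sshdir/authorized_keys" ]; then
    chmod 600 "$sshdir/authorized_keys"
fi
```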
For additional protection, install fail2ban, which monitors log files for repeated failed login attempts and automatically bans offending IP addresses. Its default configuration provides solid SSH protection out of the box.
# Install fail2ban
$ sudo apt install fail2ban

# Enable and start the service
$ sudo systemctl enable --now fail2ban

# Check banned IPs
$ sudo fail2ban-client status sshd
Wrapping Up
A fresh Linux install is like moving into a new house. The structure is solid, but you wouldn't sleep there before checking that the locks work, the plumbing runs, and you know where the breaker box is. These twelve steps handle the Linux equivalent of all that: updates patch the known vulnerabilities, the firewall locks the doors, snapshots and backups give you a safety net, and the rest of the configuration turns a generic installation into a system that works the way you work.
The order matters too. Updating first ensures that every subsequent step benefits from the latest package versions. Setting up the firewall early protects you while you're still configuring everything else. Snapshots before heavy customization mean you can always get back to a clean baseline. And hardening SSH last gives you time to set up key-based authentication properly before you lock down password access.
The best time to configure your system properly is right after installation. The second best time is right now.
Run through this list once, adapt it to your specific distribution and workflow, and save your own version as a checklist. The next time you spin up a fresh install -- whether it's a desktop workstation, a home server, or a cloud VPS -- you'll have it done in under an hour.