$ xmrhost-cli docs show --topic=wireguard-vpn-on-vps
[$ ] doc: wireguard-vpn-on-vps
// Run a self-hosted WireGuard VPN on a XMRHost VPS — server config, peer keys, NAT, kill-switch
// published=2026-04-29 · updated=2026-04-29 · diff=intermediate · read=20min · tags=[wireguard, vpn, networking, hardening]
// ABSTRACT
Procedure for bringing up a personal-use WireGuard VPN endpoint on a XMRHost VPS. Covers the kernel-mode wg install, the canonical wg0.conf, key generation per peer (with the spend-key-equivalent treatment of the server private key), the source-NAT iptables/nftables ruleset, the client wg-quick config with kill-switch directives, the peer rotation procedure, and the four post-launch checks (handshake within 5s, no clearnet leak, MTU correct, peers persist after reboot).
Scope and assumptions
This is the procedure for bringing up a personal-use WireGuard VPN endpoint on a XMRHost VPS — the kind of setup an operator runs for their own laptop / phone / home router to tunnel traffic out through an offshore exit. It is NOT a procedure for running a multi-tenant commercial VPN: that has a different operational profile (per-customer isolation, abuse-handling at scale, AUP enforcement) which is out of scope.
WireGuard is the right choice for this workload over OpenVPN / IPsec because: the wire-format and crypto are simple enough to audit (the original spec is 19 pages including the formal-methods section [Donenfeld, 'WireGuard: Next Generation Kernel Network Tunnel']); the kernel-module path on Linux 5.6+ adds essentially zero overhead vs raw routing; and the configuration model — public-key-per-peer, no usernames, no PKI — closes a class of misconfiguration that OpenVPN historically suffered from.
Baseline assumptions:
- Debian 12 (bookworm) on a XMRHost VPS in Iceland, Romania, or Switzerland. WireGuard is in mainline Linux from 5.6+ and Debian 12 ships kernel 6.1; no PPA needed.
- The brand’s hardened-by-default baseline is in place — see harden-sshd and kernel-hardening-checklist.
- A static public IPv4 on the VPS. WireGuard works over CGNAT but with extra hops (you’d need a port-forward on the upstream); not covered here.
- nftables already configured on the box (the brand baseline). If you’re still on iptables-legacy, the rule syntax differs but the semantics are identical.
Step 1 — install wireguard-tools
WireGuard’s kernel side is in mainline Linux. The userspace tools come from wireguard-tools (small package, just wg and wg-quick):
apt update
apt install -y wireguard-tools
# Confirm the kernel module is present.
lsmod | grep wireguard || modprobe wireguard
modinfo wireguard | head -2
Expected:
filename:       /lib/modules/6.1.0-15-amd64/kernel/drivers/net/wireguard/wireguard.ko
license:        GPL v2
If the module load fails (modprobe: FATAL: Module wireguard not found in directory /lib/modules/...), the kernel was compiled without it — exceedingly rare on Debian 12, but possible on a custom kernel. Switch to a Debian-shipped kernel or build the module externally.
Step 2 — generate the server keypair
Generate a private key for the VPS. The private key NEVER leaves the VPS — there is no key-rotation use case where you’d want to import it elsewhere. The public key is what every peer (laptop, phone) configures as their “remote endpoint” public key.
mkdir -p /etc/wireguard
chmod 0700 /etc/wireguard
cd /etc/wireguard
umask 077
wg genkey | tee server-private.key | wg pubkey > server-public.key
chmod 0600 server-private.key
# Print the public key — this is what every peer will need.
cat server-public.key
Expected:
xY7+pQ3K8aBdLm9N2fGhJpRtVwZ6sOuTcDeFv4HiNkL=
The 44-character base64 string is your server’s public-key identity. Treat it as semi-public — peers need it, but you don’t need to put it on the website.
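Before pasting keys between machines, it is worth checking that a string is actually a plausible WireGuard key: base64 for exactly 32 bytes, 44 characters with padding. A minimal sketch; `check_wg_key` is a hypothetical helper, not part of wireguard-tools:

```shell
# check_wg_key — hypothetical helper: accept only a 44-char base64 string
# that decodes to exactly 32 bytes (the size of a Curve25519 key).
check_wg_key() {
  local key="$1"
  [ "${#key}" -eq 44 ] || { echo "bad length: ${#key}"; return 1; }
  [ "$(printf '%s' "$key" | base64 -d 2>/dev/null | wc -c)" -eq 32 ] ||
    { echo "does not decode to 32 bytes"; return 1; }
  echo "ok"
}
check_wg_key 'xY7+pQ3K8aBdLm9N2fGhJpRtVwZ6sOuTcDeFv4HiNkL='   # prints "ok"
```

This catches the common copy-paste failures (truncated key, stray whitespace, a private key pasted where a public key belongs will still pass, so check the filename too).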
Step 3 — generate one peer keypair per device
Each peer device (operator’s laptop, phone, home router) gets its own keypair. The peer’s PRIVATE key is generated on the peer device or, in some setups, on the VPS and transferred securely (the second is operationally easier; the first is more conservative). The brand recommends generating each peer’s keypair on the peer device itself.
On the peer device (e.g. a Linux laptop):
sudo apt install -y wireguard-tools
sudo mkdir -p /etc/wireguard
# umask must be set inside the same root shell that writes the key files;
# a standalone `sudo umask 077` has no effect on later commands.
sudo sh -c 'cd /etc/wireguard && umask 077 && wg genkey | tee peer-private.key | wg pubkey > peer-public.key'
sudo cat peer-public.key
Send the peer’s PUBLIC key (and only the public key) to the VPS via SCP, encrypted email, or QR-code transcription. The peer’s private key never leaves the peer device.
For phones / iOS devices using the official WireGuard app, the keypair is generated inside the app — same property: private key stays on device, public key is exported as a QR code or text.
Step 4 — the canonical server wg0.conf
Drop into /etc/wireguard/wg0.conf:
# /etc/wireguard/wg0.conf — XMRHost self-hosted WireGuard endpoint.
# All [Peer] blocks: one per device that connects.
[Interface]
# The VPS-side address inside the tunnel. The /24 means we have ~250 peer
# slots before we need to rethink; that's plenty for personal-use.
Address = 10.42.0.1/24
ListenPort = 51820
# Server private key — generated in step 2.
PrivateKey = <CONTENTS-OF-server-private.key>
# Save and load wg0.conf modifications via `wg-quick save` — when set true,
# adding a peer at runtime via `wg set` persists across reboots. The brand
# default is FALSE because we want every peer added through the conf file
# (so `git diff /etc/wireguard/wg0.conf` is meaningful as an audit trail).
SaveConfig = false
# PostUp / PostDown — set up SNAT so peer traffic egresses through the
# VPS's public IP. eth0 is the public interface; rename if your VPS uses
# ens3 / enp1s0 / etc.
# Note: wg-quick does not support backslash line continuation; use one
# PostUp line per command, executed in order.
PostUp = nft add table ip nat
PostUp = nft add chain ip nat postrouting '{ type nat hook postrouting priority 100 ; }'
PostUp = nft add rule ip nat postrouting oifname "eth0" ip saddr 10.42.0.0/24 masquerade
PostDown = nft delete table ip nat
# ---------- peer 1: operator's laptop --------------------------------------
[Peer]
# Each peer's PUBLIC key (from step 3) goes here. Peer private keys NEVER
# appear in this file or on the VPS at all.
PublicKey = <PEER-1-PUBLIC-KEY>
# The IP the peer is allowed to use inside the tunnel. /32 means exactly
# one address per peer; the server's /24 covers the whole tunnel subnet.
AllowedIPs = 10.42.0.10/32
# Optional: a hint for keepalives if the peer is behind NAT. Default off
# (server-side); peers behind NAT set their own PersistentKeepalive in
# their config.
# ---------- peer 2: operator's phone --------------------------------------
[Peer]
PublicKey = <PEER-2-PUBLIC-KEY>
AllowedIPs = 10.42.0.11/32
# ---------- peer 3: home router -------------------------------------------
[Peer]
PublicKey = <PEER-3-PUBLIC-KEY>
# Note the wider AllowedIPs — the home router is forwarding the entire
# home LAN through the VPN; allow the LAN range.
AllowedIPs = 10.42.0.12/32, 192.168.1.0/24
Set permissions and start:
chmod 0600 /etc/wireguard/wg0.conf
systemctl enable --now wg-quick@wg0
systemctl status wg-quick@wg0
The systemd unit wg-quick@wg0 reads /etc/wireguard/wg0.conf, brings up the tunnel interface, applies the SNAT rule via PostUp, and adds the peer key/IP mappings to the kernel.
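Before and after editing the conf, it helps to audit which peers it actually defines. A sketch run against a sample file; in practice point the awk at /etc/wireguard/wg0.conf:

```shell
# Print "pubkey -> allowed-ips" for every [Peer] block in a wg0.conf.
cat > /tmp/sample-wg0.conf <<'EOF'
[Interface]
Address = 10.42.0.1/24
[Peer]
PublicKey = <PEER-1-PUBLIC-KEY>
AllowedIPs = 10.42.0.10/32
[Peer]
PublicKey = <PEER-2-PUBLIC-KEY>
AllowedIPs = 10.42.0.11/32, 192.168.1.0/24
EOF
awk -F' *= *' '$1 == "PublicKey"  { pk = $2 }
               $1 == "AllowedIPs" { print pk, "->", $2 }' /tmp/sample-wg0.conf
```

One `pubkey -> ips` line per peer, in file order; a peer with an unexpectedly wide AllowedIPs stands out immediately.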
Open the firewall for the listen port:
nft add rule inet filter input udp dport 51820 accept
Confirm wg is listening:
wg show
Expected:
interface: wg0
  public key: xY7+pQ3K8aBdLm9N2fGhJpRtVwZ6sOuTcDeFv4HiNkL=
  private key: (hidden)
  listening port: 51820

peer: <PEER-1-PUBLIC-KEY>
  allowed ips: 10.42.0.10/32

peer: <PEER-2-PUBLIC-KEY>
  allowed ips: 10.42.0.11/32

peer: <PEER-3-PUBLIC-KEY>
  allowed ips: 10.42.0.12/32, 192.168.1.0/24
Step 5 — kernel sysctl for IP forwarding
The brand kernel-hardening baseline disables net.ipv4.ip_forward (per kernel-hardening-checklist step 1). A WireGuard server needs forwarding ON. Add a per-host override:
cat > /etc/sysctl.d/01-wireguard.conf <<'EOF'
# WireGuard server — IP forwarding required. Override the brand kernel
# baseline that disables forwarding on non-routing boxes.
net.ipv4.ip_forward = 1
net.ipv4.conf.all.forwarding = 1
# IPv6 only if the VPS has working IPv6 AND the WireGuard tunnel is dual-
# stack. Default off.
# net.ipv6.conf.all.forwarding = 1
EOF
sysctl --system | grep ip_forward
Expected:
net.ipv4.ip_forward = 1
Step 6 — the canonical peer config (laptop side)
On the peer device, write /etc/wireguard/wg0.conf:
# /etc/wireguard/wg0.conf on the peer (laptop / phone via app / etc).
[Interface]
# Peer's private key — generated in step 3 on the peer device. Do NOT
# transfer this to the VPS or anywhere else.
PrivateKey = <PEER-PRIVATE-KEY>
# The peer's tunnel-internal address — must match the AllowedIPs the VPS
# configured for this peer.
Address = 10.42.0.10/24
# DNS to use over the tunnel. Without this line, queries go to whatever
# resolver the underlying network pushed, which leaks. The brand
# recommends Cloudflare's 1.1.1.1 OR a self-hosted resolver on the VPS at
# 10.42.0.1 (used here).
DNS = 10.42.0.1
[Peer]
# VPS public key from step 2.
PublicKey = <SERVER-PUBLIC-KEY>
# Endpoint — the public IP + port of the VPS.
Endpoint = <VPS-PUBLIC-IP>:51820
# AllowedIPs from the peer's perspective: 0.0.0.0/0 means "send all my
# traffic through this tunnel". This is the kill-switch-default config —
# no traffic falls back to clearnet outside the tunnel.
AllowedIPs = 0.0.0.0/0, ::/0
# Send a tiny keepalive every 25s to prevent NAT mappings from expiring.
# Required if the peer is behind a home NAT; harmless otherwise.
PersistentKeepalive = 25
Bring the tunnel up:
sudo wg-quick up wg0
Test that traffic egresses through the VPS:
curl -s ifconfig.me
Expected:
<VPS-PUBLIC-IP>
If curl ifconfig.me returns the laptop’s home IP, the tunnel is up but routing isn’t actually going through it — re-check AllowedIPs = 0.0.0.0/0 on the peer side, and the PostUp SNAT rule on the server side (SNAT as specified in [RFC 3022, 'Traditional IP Network Address Translator']).
Step 7 — kill-switch directives
AllowedIPs = 0.0.0.0/0 routes all traffic into the tunnel while it is up, but it is not a kill-switch on its own: once wg-quick down (or a crash) removes the tunnel routes, traffic falls back to clearnet. For a real kill-switch on wg-quick-managed peers, add PostUp / PostDown rules that explicitly drop non-tunnel traffic:
# Add to the peer's [Interface] block:
PostUp = iptables -I OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT
PostDown = iptables -D OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT
These rules block any non-tunnel traffic at the iptables OUTPUT chain when the tunnel is up. When wg-quick down wg0 runs, PostDown removes them. The semantics: even if the peer’s routing table changes during a network glitch, packets cannot escape unencrypted.
For mobile devices using the official WireGuard apps, the equivalent setting is “Block Untunneled Traffic (kill-switch)” in the iOS / Android app’s per-tunnel settings — toggle on.
Step 8 — peer rotation procedure
Each peer’s keypair has a useful operational lifetime — the brand recommends rotating annually OR on any peer-device compromise, whichever comes first. The procedure:
- On the peer device, generate a new keypair (step 3 procedure).
- On the VPS, edit /etc/wireguard/wg0.conf. ADD the new public key as a new [Peer] block; do NOT remove the old block yet.
- Reload: systemctl reload wg-quick@wg0 (Debian’s wg-quick unit supports reload via wg syncconf).
- On the peer device, replace the private key in its /etc/wireguard/wg0.conf, restart the tunnel.
- Test that the peer is reachable via the new key.
- Once confirmed, remove the OLD [Peer] block from the VPS config and reload again.
Same shape as the SSH rotation procedure in ssh-key-migration — both keys live simultaneously during rollout, the old one is retired only after the new one is verified.
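The add-then-reload half of the rotation scripts cleanly. A sketch: `add_peer`, `CONF`, and `RELOAD_CMD` are hypothetical names introduced here (the overrides exist so the sketch can be dry-run), and the reload assumes Debian's wg-quick unit:

```shell
# add_peer PUBKEY ALLOWED_IPS — append a [Peer] block and live-reload.
# CONF / RELOAD_CMD are overridable so the sketch can be dry-run safely.
add_peer() {
  local conf="${CONF:-/etc/wireguard/wg0.conf}"
  printf '\n[Peer]\nPublicKey = %s\nAllowedIPs = %s\n' "$1" "$2" >> "$conf"
  ${RELOAD_CMD:-systemctl reload wg-quick@wg0}   # wg syncconf under the hood
}
# Dry-run against a scratch file instead of the live conf:
CONF=/tmp/wg0-test.conf RELOAD_CMD=true add_peer '<NEW-PEER-PUBLIC-KEY>' 10.42.0.13/32
grep -A2 '\[Peer\]' /tmp/wg0-test.conf
```

Keeping the append as a pure conf-file edit preserves the `git diff /etc/wireguard/wg0.conf` audit trail that SaveConfig = false was chosen for.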
Step 9 — four post-launch checks
Check 1 — handshake within 5s
When a peer comes up, the latest-handshake field should populate within ~5 seconds:
wg show wg0 latest-handshakes
Expected (one line per peer: public key, then epoch seconds of the last handshake):
xY7+pQ...    1715030217
aB3cD5...    1715030217
Subtract from the current epoch — the result should be under ~180 seconds for any active peer (WireGuard re-handshakes roughly every two minutes). A value of 0 means the peer never handshook (config error somewhere — usually wrong public key or wrong endpoint).
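The subtraction scripts easily. A sketch over sample `wg show wg0 latest-handshakes` output, with `now` pinned so the arithmetic is visible; in practice pipe the real command through the awk with `now=$(date +%s)`:

```shell
# Each input line is "<pubkey> <epoch-of-last-handshake>"; print the age.
printf '%s\n' 'xY7+pQ... 1715030217' 'aB3cD5... 1715030100' |
  awk -v now=1715030230 '{ print $1, "handshake", now - $2, "s ago" }'
# -> xY7+pQ... handshake 13 s ago
# -> aB3cD5... handshake 130 s ago
```

Anything over ~180 s (or an age equal to `now`, i.e. epoch 0) is worth investigating.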
Check 2 — no clearnet leak
From the peer:
curl -s ifconfig.me # should be VPS IP
curl -s https://1.1.1.1/cdn-cgi/trace | grep ip # should also be VPS IP
ping -c 3 1.1.1.1 # should route via tun
If the curl returns the peer’s local IP, the routing table didn’t take.
Check 3 — MTU correct
WireGuard adds 60 bytes of per-packet overhead over an IPv4 underlay (80 bytes over IPv6). On a typical 1500-byte Ethernet underlay, wg-quick defaults the in-tunnel MTU to 1420, which is safe for both cases — but some paths have a lower MTU than the interface suggests. Test with:
ping -M do -s 1392 <ANY-INTERNET-HOST>
If you get back frag needed, the path MTU is below 1420 — set MTU = 1380 (or lower) in the peer’s [Interface] block.
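The 1392 comes from header arithmetic: the ping payload that produces an MTU-sized probe is the target MTU minus 20 bytes of IPv4 header and 8 bytes of ICMP header. A sketch; `probe_size` is a hypothetical helper:

```shell
# probe_size MTU — the `ping -s` payload that yields an MTU-sized packet
# (MTU minus 20-byte IPv4 header minus 8-byte ICMP header).
probe_size() { echo $(( $1 - 20 - 8 )); }
probe_size 1420   # -> 1392
# Usage: ping -M do -s "$(probe_size 1420)" <ANY-INTERNET-HOST>
```

The same arithmetic gives the payload for any candidate MTU when you walk downward looking for the largest value that passes.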
Check 4 — wg-quick@wg0 enabled, persists across reboot
systemctl is-enabled wg-quick@wg0
enabled means the unit will come up on boot. disabled means a reboot drops the tunnel; fix with systemctl enable wg-quick@wg0.
Closing — what to do next
The VPN endpoint is operational. Order of next steps:
- Set up node-exporter scrapes for wireguard_peer_last_handshake_seconds and wireguard_peer_bytes_total — the two metrics that matter for “is each peer healthy”.
- If the VPS provider’s AUP allows it, consider running an internal DNS resolver (Unbound) at 10.42.0.1 for the peers — closes the DNS-leak surface that DNS = 1.1.1.1 otherwise leaves open.
- For peers on cellular / hotel-wifi where outbound UDP/51820 is sometimes blocked, consider running a second tunnel on UDP/443 — same key material, different listen port.
- Read the WireGuard formal-verification paper if you haven’t — the wire format is genuinely simple enough to fit in your head, which is a property neither OpenVPN nor IPsec can claim. [Donenfeld + Milner, 'Formal Verification of the WireGuard Protocol']
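A sketch of that second UDP/443 listener, assuming you reuse the server key and give the fallback tunnel its own subnet (wg1 and 10.43.0.0/24 are illustrative choices, not a brand standard):

```ini
# /etc/wireguard/wg1.conf — fallback endpoint on UDP/443 for networks that
# block UDP/51820. Same server key; separate subnet so routes don't collide.
[Interface]
Address = 10.43.0.1/24
ListenPort = 443
PrivateKey = <CONTENTS-OF-server-private.key>
# Re-add the same [Peer] blocks, with tunnel addresses from 10.43.0.0/24.
```

Remember to open UDP/443 in nftables and enable wg-quick@wg1 the same way as wg0.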
Don’t co-tenant a public WireGuard endpoint with a Tor relay or an I2P floodfill on the same VPS. The VPN exit traffic and the relay traffic become correlatable side-channels for an attacker watching the VPS’s egress; separate boxes for separate roles.
// END OF DOC
$ cd /docs # back to the manual