$ xmrhost-cli docs show --topic=setup-i2p-floodfill
[$ ] doc: setup-i2p-floodfill
// Set up an i2pd router and opt into floodfill on an XMRHost VPS — i2pd.conf, ratelimit, NetDB
// published=2026-04-29 · updated=2026-04-29 · diff=advanced · read=21min · tags=[i2p, i2pd, floodfill, netdb, hardening]
// ABSTRACT
abstract
Procedure for bringing up an i2pd router on an XMRHost VPS and electing it into the floodfill role. Covers the i2pd 2.x install from the upstream PPA, the i2pd.conf shape for a participating router, the bandwidth ratings that the I2P NetDB uses for floodfill candidate selection, the firewall opening for transport ports, the 24-72 hour propagation window before the router shows up in NetDB, and the post-launch checks (NetDB size, peer count, transit tunnels, leaseSets stored).
Scope and assumptions
This is the operational walk-through for running a publicly-routable i2pd router on an XMRHost VPS and opting into the floodfill role — the I2P-network role analogous to a Tor directory cache, where a small minority of high-bandwidth, high-uptime routers store and serve the NetDB (the network database mapping I2P destinations to leaseSets). [I2P spec: netdb] Per the I2P spec the floodfill role is voluntary: any router can enable it, and the NetDB-flood-selection algorithm picks among the highest-uptime + highest-bandwidth participants in each Kademlia-distance bucket.
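The Kademlia-bucket mechanics referenced above can be illustrated with a toy XOR-distance calculation. Everything here is a stand-in for intuition only — the hex values are tiny, not real 256-bit I2P router hashes, and this is not i2pd code:

```shell
# Toy illustration of the Kademlia-distance idea behind bucket selection:
# which bucket a router falls into for a given key is determined by the
# XOR distance between the two hashes. Tiny hex stand-ins, not real hashes.
xor_dist() { printf '%d\n' $(( 0x$1 ^ 0x$2 )); }
xor_dist a3 a1   # small distance -> near bucket (prints 2)
xor_dist a3 5c   # large distance -> far bucket (prints 255)
```

Routers whose hashes land close (small XOR distance) to a NetDB key compete for the floodfill slots of that key's bucket; uptime and bandwidth class break the tie.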
This doc is for i2pd (the C++ daemon), NOT i2p (the Java daemon). The two implementations are interoperable on the wire but their config files, log paths, and ops surface differ entirely. The brand uses i2pd everywhere because it is materially lighter on memory (a typical i2pd router runs at ~150 MiB resident; the Java i2p needs 512 MiB+).
Baseline assumptions:
- Debian 12 (bookworm) on an XMRHost VPS in Iceland or Romania.
- The brand’s hardened-by-default baseline is in place. See harden-sshd and kernel-hardening-checklist.
- The VPS plan includes ≥ 2 TB/month egress and a static public IPv4. Floodfill routers must be directly reachable; a router behind NAT is not eligible (it can still be a regular participating router, but the NetDB picker will not prefer it).
- chronyd is configured and tracking — I2P, like Tor, depends on synchronised time and rejects NetDB writes from routers with > 60 seconds of clock skew. [I2P spec: ssu2]
This doc does NOT cover I2P hidden services (eepsites) — those are a separate procedure. A floodfill router can ALSO host eepsites, but the brand recommends keeping the two roles on separate VPSes for the same reason a Tor relay should not co-tenant a hidden service: the floodfill traffic creates a side-channel against the hidden-service traffic on the same box.
Step 1 — install i2pd from the upstream PPA
The Debian-shipped i2pd package is several minor versions behind upstream. The upstream maintainer publishes a PPA with timely builds; use that.
apt install -y apt-transport-https gnupg ca-certificates wget
# Upstream signing key — fingerprint pinned in the URL.
wget -qO- https://repo.i2pd.xyz/.help/add_repo | bash -s -- bookworm
# This script is published by the i2pd project and:
# 1. fetches the i2pd signing key into /usr/share/keyrings/i2pd-archive-keyring.gpg
# 2. writes /etc/apt/sources.list.d/i2pd.list
#
# If you want to skip the helper, do those two steps by hand — keys are
# at https://repo.i2pd.xyz/r4sas.gpg.
apt update
apt install -y i2pd
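For operators who prefer to skip the helper, a sketch of its two steps follows. The keyring path matches the helper behaviour described above; the repo path and suite line are assumptions to verify against what the helper actually writes to /etc/apt/sources.list.d/i2pd.list. DEST defaults to a scratch dir so the fragment can be previewed without root:

```shell
# Manual equivalent of the add_repo helper's two steps (sketch only).
# On the VPS, use the real /etc and /usr/share/keyrings paths.
DEST="${DEST:-$(mktemp -d)}"
KEYRING=/usr/share/keyrings/i2pd-archive-keyring.gpg

# 1. fetch the signing key (dearmor it into $KEYRING on the VPS)
wget -qO "$DEST/r4sas.gpg" https://repo.i2pd.xyz/r4sas.gpg \
    || echo "key fetch failed (offline?)"

# 2. write the sources entry (repo path/suite assumed -- verify against
#    the helper's generated i2pd.list before relying on it)
printf 'deb [signed-by=%s] https://repo.i2pd.xyz/debian bookworm main\n' \
    "$KEYRING" > "$DEST/i2pd.list"
cat "$DEST/i2pd.list"
```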
Confirm with i2pd --version:
i2pd version 2.55.0 (0.9.65) Boost version 1.74.0 OpenSSL 3.0.16 11 Feb 2025
The two parenthesised version numbers are i2pd’s internal version and the protocol version (the second one matches the Java i2p version it speaks). If your i2pd version starts with 2.4 or earlier, the upstream PPA didn’t take — re-check /etc/apt/sources.list.d/i2pd.list.
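The version gate can be scripted for automation. This sketch assumes `i2pd --version` prints the banner shown above with the version as the third whitespace-separated field — verify against your build before wiring it into provisioning:

```shell
# Gate on a minimum i2pd version before proceeding. version_ok compares
# dotted versions with sort -V; the banner format is an assumption.
version_ok() {
  # true if $1 >= $2
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}
ver=$(i2pd --version 2>/dev/null | awk '/version/ {print $3; exit}')
if version_ok "${ver:-0}" 2.50.0; then
  echo "i2pd ${ver} is recent enough"
else
  echo "i2pd too old or missing: ${ver:-not found}; re-check the PPA"
fi
```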
Step 2 — the canonical floodfill i2pd.conf
The package writes a default /etc/i2pd/i2pd.conf and /etc/i2pd/tunnels.conf. Most of the i2pd.conf is fine; the brand override goes into /etc/i2pd/i2pd.conf (single file in i2pd; no drop-in pattern). Replace the relevant sections — KEEP the existing [ipv4], [ipv6], and [upnp] blocks the package generated, override the rest:
# /etc/i2pd/i2pd.conf — XMRHost floodfill router.
# ---------- transport ports -----------------------------------------------
# port = 0 makes i2pd pick a random ephemeral port at first start, then pin
# it in router.info. The brand convention: pin to a known port for firewall
# operations. 25001 is arbitrary — choose any free port > 1024.
port = 25001
# ---------- floodfill ------------------------------------------------------
# This is the directive that opts the router INTO floodfill candidacy. The
# NetDB picker chooses among candidates per Kademlia bucket; setting this
# does NOT guarantee the router will be USED as floodfill (only that it's
# eligible). See post-launch check 4.
floodfill = true
# ---------- bandwidth participation ---------------------------------------
# bandwidth letter codes per i2pd docs:
# K = 12 KB/s, L = 48 KB/s, M = 64 KB/s, N = 96 KB/s,
# O = 128 KB/s, P = 256 KB/s, X = unlimited
# Floodfill candidates are picked from routers with bandwidth >= L, but the
# picker prefers higher tiers. The brand default for a 2 TB/mo VPS:
# bandwidth class O = 128 KB/s sustained = ~330 GB/mo of transit, well
# inside a 2 TB cap.
bandwidth = O
# Percentage of the configured bandwidth class offered to transit
# traffic. 50 is the i2pd default; see the share note below.
share = 50
# ---------- enable transit tunnels -----------------------------------------
# Without transit tunnels, the router is a leecher — clients use it for
# their own traffic but it returns nothing. Floodfill nodes MUST accept
# transit; the NetDB picker rejects router.info entries that can't.
notransit = false
# ---------- HTTP web console (LOCALHOST ONLY) ------------------------------
# The web console is the operator's only interface to live router state.
# Bind LOCALHOST ONLY; never expose. Use SSH port-forward to access.
http.enabled = true
http.address = 127.0.0.1
http.port = 7070
http.auth = true
http.user = xmrhost
http.pass = CHANGE_ME_TO_A_LONG_RANDOM_VALUE
# ---------- HTTP proxy + SOCKS proxy (DISABLED on a floodfill) ------------
# A floodfill router is a pure infrastructure node; the operator does NOT
# use it as a personal i2pd proxy. Disable both proxies — every open proxy
# is an attack surface.
httpproxy.enabled = false
socksproxy.enabled = false
# ---------- I2CP (DISABLED on this box) -----------------------------------
# I2CP is the API used by external apps (e.g. an eepsite running outside
# i2pd) to register destinations. We do not run apps on this box.
i2cp.enabled = false
sam.enabled = false
bob.enabled = false
# ---------- logging --------------------------------------------------------
log = file
logfile = /var/log/i2pd/i2pd.log
loglevel = warn
# debug-level logs are huge; only enable transiently for troubleshooting.
# ---------- limits / safety nets ------------------------------------------
limits.transittunnels = 5000
limits.openfiles = 4096
# coreSize: Kademlia routing table size — leave default unless you have
# specific reason to deviate. Larger means more memory + more accurate
# floodfill routing; smaller means less memory + occasional NetDB misses.
# ---------- meshnets (DISABLED unless explicitly required) ----------------
meshnets.yggdrasil = false
The bandwidth = O choice is worth understanding. The single-letter bandwidth class is what other routers use to decide whether to route through you. [I2P spec: ntcp2] Floodfill candidates need a minimum of L (48 KB/s); the brand default of O (128 KB/s) sits comfortably in the floodfill-preferred tier without overcommitting the VPS bandwidth quota. If your VPS has 10 TB/mo egress, raise to P (256 KB/s) or X (unlimited).
The share = 50 directive is a percentage of the bandwidth class the router will offer to transit. Default is 50%; raise to 80 if the VPS has nothing else to do.
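The ~330 GB/month figure can be reproduced with shell arithmetic (decimal KB and a 30-day month, both consistent with the text above):

```shell
# Back-of-envelope transit budget: bandwidth-class rate x seconds/month,
# then the share= fraction of it. Decimal KB and a 30-day month assumed.
kbps=128                      # class O sustained rate in KB/s
share=50                      # share= percentage from i2pd.conf
secs=$((86400 * 30))          # seconds in a 30-day month
gb_month=$((kbps * secs / 1000000))
transit_gb=$((gb_month * share / 100))
echo "class O: ~${gb_month} GB/mo total, ~${transit_gb} GB/mo offered at share=${share}"
```

Re-run with kbps=256 before raising to class P to confirm the result still fits the VPS egress quota.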
Make sure /var/log/i2pd/ exists:
mkdir -p /var/log/i2pd
chown i2pd:i2pd /var/log/i2pd
chmod 0750 /var/log/i2pd
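Even at loglevel = warn the log needs rotation eventually. A hypothetical logrotate policy — weekly, four rotations kept, copytruncate so i2pd keeps its open file handle; all of these knobs are brand-agnostic assumptions, not i2pd requirements:

```shell
# Hypothetical logrotate policy for /var/log/i2pd/i2pd.log. DEST defaults
# to a scratch dir for preview; on the VPS set DEST=/etc/logrotate.d.
DEST="${DEST:-$(mktemp -d)}"
cat > "$DEST/i2pd" <<'EOF'
/var/log/i2pd/i2pd.log {
    weekly
    rotate 4
    compress
    delaycompress
    missingok
    notifempty
    copytruncate
}
EOF
cat "$DEST/i2pd"
```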
Open the firewall (the i2pd transport port + the loopback web console):
nft add rule inet filter input tcp dport 25001 accept
nft add rule inet filter input udp dport 25001 accept
# 7070 is loopback-only — no firewall rule needed; SSH-tunnel from workstation:
# ssh -L 7070:127.0.0.1:7070 root@<vps-ip>
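Note that `nft add rule` changes only the live ruleset; a reboot drops it. The same rules must also live inside the input chain of /etc/nftables.conf. A preview of the fragment — the table/chain names (inet filter, input) are taken from the commands above and should be verified against your actual config before pasting:

```shell
# The nft commands above are runtime-only. Persist by placing matching
# rules inside the input chain of /etc/nftables.conf. Written to a temp
# file here for preview; verify table/chain names against your config.
PREVIEW="$(mktemp)"
cat > "$PREVIEW" <<'EOF'
        # i2pd transports: NTCP2 (TCP) and SSU2 (UDP) on the pinned port
        tcp dport 25001 accept
        udp dport 25001 accept
EOF
cat "$PREVIEW"
```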
Restart i2pd:
systemctl restart i2pd
journalctl -u i2pd -n 100 --no-pager
Watch for Daemon: starting NTCP2 server on 0.0.0.0:25001 and Daemon: starting SSU2 server on 0.0.0.0:25001 — those are the two transport listeners coming up. Both should come up: i2pd can limp along on a single transport, but a floodfill router needs to be reachable on both before the NetDB picker will prefer it.
Step 3 — confirm reachability via the web console
Tunnel the console:
# On your workstation:
ssh -L 7070:127.0.0.1:7070 root@<vps-ip>
# Then open http://127.0.0.1:7070/ in a browser, log in as xmrhost/<pass>.
The console main page shows:
- Network status — should be OK after 5-10 minutes. If Firewalled or Unknown, the firewall or upstream is blocking the transport port (test from a third box: nc -z <vps-ip> 25001).
- Router uptime — counter starting from the last restart.
- Number of peers — climbs from 0 to ~200 over the first 30 minutes as the router discovers the I2P NetDB.
- Transit tunnels — appears once other routers start using yours; takes 30-60 minutes initially.
- Floodfill — yes / no — reflects candidacy, not promotion; the network may take 24-72 hours to PROMOTE the router from candidate to active floodfill.
Step 4 — the 24-72 hour floodfill promotion window
floodfill = true opts the router INTO candidacy. Promotion to active floodfill happens via the NetDB-Kademlia bucket-fill algorithm: each NetDB Kademlia bucket maintains 1-3 floodfill routers, and when a bucket has openings, the algorithm picks among candidates with the highest uptime + highest bandwidth class + recent NetDB-write success. [I2P spec: netdb]
For a brand-new router with floodfill = true:
| Hours after launch | Status | Indicator |
|---|---|---|
| 0-2h | Candidate, not yet floodfill | Console shows “Floodfill: Yes” but no NetDB stores arriving |
| 2-24h | Sometimes floodfill (test storage attempts) | Occasional LeaseSet stored lines in the log |
| 24-72h | Steady-state floodfill | LeaseSet store rate stabilises; LeaseSets in storage counter on console grows steadily |
| Week 1+ | Established floodfill | NetDB picks include this router routinely; transit tunnel count rises proportionally |
Don’t restart during the promotion window. Each restart resets the uptime counter and the directory-side measurements that drive the bucket-fill picker. After the first 7 days, restarts are merely an annoyance — the bucket re-promotes within a few hours.
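A rough way to watch the promotion signal is to bucket the LeaseSet stored log lines by hour. The sample lines below are illustrative stand-ins, not verbatim i2pd output, and the awk assumes a "YYYY-MM-DD HH:MM:SS ..." timestamp layout; on the VPS point LOG at /var/log/i2pd/i2pd.log and adjust the field handling if your layout differs:

```shell
# Count 'LeaseSet stored' events per hour. Sample log is a stand-in;
# real usage: LOG=/var/log/i2pd/i2pd.log
LOG="$(mktemp)"
cat > "$LOG" <<'EOF'
2026-04-30 03:12:41 - NetDb: LeaseSet stored
2026-04-30 03:44:09 - NetDb: LeaseSet stored
2026-04-30 04:02:55 - NetDb: LeaseSet stored
EOF
awk '/LeaseSet stored/ { split($2, t, ":"); n[$1" "t[1]]++ }
     END { for (h in n) print h":00", n[h] }' "$LOG" | sort
```

A steadily non-zero hourly count after the 24-72 h window matches the "steady-state floodfill" row in the table above.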
Step 5 — peering health and NetDB size
The diagnostic numbers that matter on a floodfill:
# From the web console JSON endpoint (auth required).
curl -s -u xmrhost:<pass> http://127.0.0.1:7070/qr_code | head
Easier: from the web console’s “Local Destinations” → “Information” page, read off:
- Known routers — total NetDB size as seen by this router. Steady state for a healthy floodfill is ~3000-5000 known routers (the I2P network is materially smaller than the Tor network).
- Peers — currently-connected peers via NTCP2 + SSU2. Steady state ~150-300 for a floodfill.
- Transit tunnels — clients using this router as a relay. Steady state for a 128 KB/s O floodfill is ~50-200 transit tunnels at any moment.
- LeaseSets — destination-to-lease mappings stored on this router as part of its floodfill role. Steady state varies wildly with network usage; ~100-2000 LeaseSets is typical.
If “Known routers” stays below 1000 after 24 hours, the router is poorly bootstrapped — check the reseed status:
journalctl -u i2pd -n 200 --no-pager | grep -i 'reseed'
A healthy bootstrap shows one Reseed: ... routers downloaded from <reseed-url> line in the first 30 seconds. A failed reseed means the router is starting from a stale or empty NetDB. Force a reseed via the console’s “Reseed from URL” button (or restart i2pd; the package re-attempts reseed on each cold start).
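The reseed grep can be wrapped as a pass/fail check. The function only counts lines matching the search pattern above in whatever text it is fed, so it makes no assumption about exact i2pd log wording:

```shell
# Pass/fail form of the reseed check. Usage on the VPS:
#   journalctl -u i2pd -n 200 --no-pager | check_reseed
check_reseed() {
  n=$(grep -ci 'reseed' || true)   # count matches; 0 on no match
  if [ "${n:-0}" -ge 1 ]; then
    echo "reseed: ok ($n matching lines)"
  else
    echo "reseed: MISSING - force a reseed from the console"
  fi
}
```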
Step 6 — abuse handling for the rare i2pd complaint
Like Tor non-exit relays, I2P participating routers (including floodfill) are intermediate forwarders — they handle encrypted, garlic-routed traffic between other I2P peers and never originate or terminate user-visible traffic. Abuse complaints are extremely rare and follow the same template as the Tor middle-relay response:
Subject: Re: Abuse report regarding <ip> — I2P participating router
Thank you for the report.
The IP address <ip> hosts an I2P (Invisible Internet Project) participating
router and opt-in NetDB floodfill, running the i2pd implementation. An I2P
router forwards encrypted, garlic-routed traffic between other I2P peers; it
does not originate, terminate, or have visibility into traffic content.
I2P is documented at https://geti2p.net/. The router is operated under
[Iceland Höfundalög / Romania Law 8/1996] which establishes operator-side
neutral-intermediary status for traffic forwarded in this role.
Operator contact: <CONTACT-EMAIL>
The brand’s listed I2P-friendly providers handle these as “noted, no further action” routinely. Reply within 48 hours.
Step 7 — four post-launch checks
Check 1 — transports reachable
journalctl -u i2pd -n 500 --no-pager | grep -E 'NTCP2|SSU2' | grep -i 'started'
Expected: one line each for NTCP2 and SSU2 starting on the configured port. If only one shows, the other transport is broken — i2pd can run with one only, but floodfill candidacy prefers routers reachable on both.
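Check 1 can be made scriptable. The function only counts distinct transport names in whatever log text it is fed, so it doesn't depend on exact i2pd log wording:

```shell
# Pass/fail form of check 1. Usage on the VPS:
#   journalctl -u i2pd -n 500 --no-pager | transports_ok
transports_ok() {
  # distinct transport names seen, case-insensitive
  t=$(grep -oiE 'NTCP2|SSU2' | sort -fu | wc -l | tr -d ' ')
  if [ "$t" -eq 2 ]; then
    echo "transports: both up"
  else
    echo "transports: only $t of 2 found"
  fi
}
```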
Check 2 — NetDB size > 1000 after 24h
curl -s -u xmrhost:<pass> http://127.0.0.1:7070/?page=netdb_stats | grep -i 'known'
Expected: 3000+. If < 1000, reseed.
Check 3 — clock skew
chronyc tracking | grep -E 'Stratum|System time'
Expected: Stratum ≤ 3, System time deviation < 1s. I2P rejects NetDB writes from routers with > 60s skew.
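Check 3 as a one-shot pass/fail. The awk assumes chronyc's "System time : X seconds fast/slow of NTP time" line shape, and alarms at 1 s — well inside I2P's 60 s tolerance:

```shell
# Pass/fail form of the clock-skew check. Usage on the VPS:
#   chronyc tracking | skew_check
skew_check() {
  awk -F': *' '
    /^System time/ {
      split($2, a, " "); off = a[1] + 0   # offset in seconds
      if (off < 1.0) print "skew ok: " off "s"
      else           print "skew TOO HIGH: " off "s"
      exit
    }'
}
```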
Check 4 — floodfill is actively being used
grep -i 'leaseset stored' /var/log/i2pd/i2pd.log | tail
Expected after 72h: regular LeaseSet stored lines, ~1 per few minutes. Zero such lines after 72h means the router is candidacy-only, not promoted — usually a bandwidth or uptime issue (router has restarted, bandwidth class is too low for the bucket picker).
Closing — what to do next
The floodfill router is operational. Order of next steps:
- Subscribe the operator email to the i2pd project’s [security] mailing list — the channel for security advisories.
- Set up node-exporter + Prometheus for transit-tunnel-count and bandwidth-used metrics. The brand’s monitoring stack lives separately (no operator-of-record for I2P metrics; the network is small enough that operator-side observability is the only signal).
- Consider running multiple floodfill routers across geographically diverse VPSes — the I2P network is materially smaller than Tor’s and each additional well-operated floodfill measurably improves NetDB resilience. Three routers in three jurisdictions is a generous brand contribution.
- Review the i2pd changelog before each minor-version upgrade; the project occasionally makes wire-protocol-affecting changes that need correlated-deployment timing.
Don’t run i2pd and Tor on the same box. They both want exclusive use of the box’s network capacity, both want low-latency clock sync, and a co-tenancy creates side-channels between the two networks that the cross-network-correlation literature has documented. Separate VPS, separate operator-account on the upstream where possible.
// END OF DOC
$ cd /docs # back to the manual