$ xmrhost-cli notes show --slug=i2p-floodfill-vs-router
[$ ] note: i2p-floodfill-vs-router
// I2P node hosting — floodfill vs router, why and how, with i2pd configs that survive a reboot
// 2026-04-19 · diff=advanced · read=16min · tags=[i2p, vps, i2pd, hardening, monitoring] · by=0xLambda
// ABSTRACT
The two distinct things you can run on an I2P-capable VPS are a regular *router* (carries other people's tunnels) and a *floodfill* (additionally participates in the network database). Both are useful contributions; the trade-offs are not the same. This note walks through the i2pd configuration for each role, the bandwidth-share decision, the firewall posture, the reseed-server question, and the systemd / monitoring shape that lets you forget the box is running until something actually goes wrong.
Two roles, one binary
I2P, like Tor, is built around a network of volunteer-run routers carrying each other’s encrypted traffic. Unlike Tor, the network has no central directory authorities; instead, a self-organising distributed hash table called the netDb maps destinations to the lease-sets and router-info records they need. The nodes that participate in netDb are called floodfill routers. Every node, floodfill or not, participates in carrying tunnels for other peers — that’s the router role.
This note is about running an I2P node on an offshore VPS in either of those two roles. We’ll use i2pd, the C++ implementation, rather than the Java reference router: smaller memory footprint (typical resident set ~80-180MB vs ~400-700MB for the Java router), fewer moving parts, runs cleanly under systemd, and it’s the implementation that’s actually maintained for the embedded / VPS use case. [I2P spec: streaming]I2P streaming-protocol spec — context for tunnel-build cost
Floodfill, in one paragraph
A floodfill router participates in the I2P network database (netDb): it stores router-info records, replies to lookups for lease-sets, and floods updates to neighbouring floodfills. Floodfills are how lookups happen in I2P — when a client wants to talk to a destination, it asks a floodfill for the destination’s current lease-set. There are typically 700-1500 floodfill routers in the network at any given time; this is a small enough number that every additional well-behaved floodfill measurably improves netDb-lookup latency for the whole network. [I2P spec: netdb]I2P netDb spec, §3 (floodfill operation)
The cost of running a floodfill is mostly memory (the netDb is held in memory; expect 250-450MB resident depending on network conditions) and a constant ~30-80 lookups/sec inbound. Bandwidth use is modest — netDb lookups are small messages — but CPU use is non-trivial because of the cryptographic verification on every record stored.
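A quick way to see where a given box sits against those memory numbers is to sum the daemon's resident set from the shell. A convenience sketch, assuming Linux procps `ps` and a process named `i2pd` (the helper name is mine):

```shell
# Sum the resident set of all i2pd processes, in MB. Prints nothing if
# the daemon isn't running. 'ps -C' is Linux/procps-specific.
i2pd_rss_mb() {
  ps -o rss= -C i2pd | awk '{ s += $1 } END { if (s) printf "%d\n", s / 1024 }'
}
```

Compare the output against the 250-450MB floodfill budget before deciding to promote.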
The procurement decision: what to ask the upstream provider
The good news: I2P is a less politically loaded workload than Tor exit relays. The provider's AUP almost always allows it, often without explicit mention, because nobody on the abuse desk is flagging it. The bad news: that absence of explicit policy means that a year from now, when an abuse desk does notice the box, the answer might change. Get the workload approved in writing wherever the provider offers that channel — the brand's /playbook/tor-relay covers the same procurement discipline, and the same template applies.
Specifically:
- UDP outbound on the I2P-typical port range (a single configurable port for SSU2; 32000-65535 by default if unconfigured).
- TCP inbound on the configured NTCP2 port.
- No “single-flow rate limit” that throttles the long-lived UDP / TCP connections I2P relies on.
- Explicit allowance of a “decentralised peer-to-peer network” workload — this is the AUP language most providers use, and confirming the answer in writing is cheap.
Step 1 — install i2pd from the upstream apt repo
Like Tor, the distro-packaged i2pd is usually behind upstream by 1-3 minor versions. Use the upstream PPA / repo:
# Fetch the signing key first — the sources entry below references it.
wget -qO- https://repo.i2pd.xyz/r4sas.gpg | gpg --dearmor > /usr/share/keyrings/i2pd.gpg
# /etc/apt/sources.list.d/i2pd.list
deb [signed-by=/usr/share/keyrings/i2pd.gpg] https://repo.i2pd.xyz/debian bookworm main
apt update
apt install -y i2pd
Verify version:
i2pd --version
i2pd version 2.55.0 (0.9.65) Boost version 1.74.0 OpenSSL 3.0.16 11 Feb 2025
Step 2 — minimal /etc/i2pd/i2pd.conf for a regular router
This is the smaller of the two configs: a regular non-floodfill router, suitable as a first deployment to learn the operational shape. The headline knobs:
- bandwidth = X — share rate in bytes/sec; also accepts letter suffixes (L, O, P, X map to preset tiers).
- share = N — percentage of bandwidth to dedicate to transit (carrying others’ tunnels). 50-80 is the sane range for a dedicated VPS.
- port = N — the single port used by both NTCP2 and SSU2. Pick something stable; you’ll need to open it on the firewall.
# /etc/i2pd/i2pd.conf — regular (non-floodfill) router on a dedicated VPS.
# This is the BASE config; override values in /etc/i2pd/i2pd.conf.d/*.conf.
# ---------- core --------------------------------------------------------
loglevel = warn
logfile = /var/log/i2pd/i2pd.log
daemon = true
service = true
# ---------- network identity --------------------------------------------
# A specific external port (UDP+TCP) that the upstream firewall MUST open.
# Pick something high and stable.
port = 35741
# ---------- bandwidth + share ------------------------------------------
# 'bandwidth = X' is "extra-large" preset (>2000 KB/s share). Use the
# byte-value form if you need finer control:
# bandwidth = 8192 # 8 MB/s
bandwidth = X
share = 70
# ---------- explicit non-floodfill -------------------------------------
floodfill = false
# ---------- IPv6 (enable if your VPS has v6) ---------------------------
ipv4 = true
ipv6 = true
# ---------- NAT + NTP --------------------------------------------------
# nat = true is the safe default even on a VPS with a public address.
# i2pd is sensitive to clock skew. The brand baseline runs chrony already;
# enable the i2pd-internal NTP fallback in case chrony fails.
nat = true
ntp = true
[ntcp2]
enabled = true
published = true
port = 35741
[ssu2]
enabled = true
published = true
port = 35741
[http]
# Web console, bound to localhost only. Reach via SSH tunnel.
enabled = true
address = 127.0.0.1
port = 7070
auth = true
user = console
pass = REPLACE_WITH_A_LONG_RANDOM_STRING
[meshnets]
yggdrasil = false
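Before the first restart, it's worth grepping the config for the knobs that most often get mistyped. A small sketch — the helper and its checks are a convenience of this note, not an i2pd tool; it reads the first (global) occurrence of each key:

```shell
# Sanity-check the headline knobs in an i2pd.conf before restarting.
check_i2pd_conf() {
  conf="${1:-/etc/i2pd/i2pd.conf}"
  # First match wins, so these pick up the global values, not the
  # per-transport [ntcp2]/[ssu2] copies further down the file.
  port=$(awk -F' *= *' '$1 == "port"  { print $2; exit }' "$conf")
  share=$(awk -F' *= *' '$1 == "share" { print $2; exit }' "$conf")
  if [ -z "$port" ]; then echo "no port set"; return 1; fi
  if [ -n "$share" ] && { [ "$share" -lt 1 ] || [ "$share" -gt 100 ]; }; then
    echo "share out of range: $share"; return 1
  fi
  echo "ok: port=$port share=${share:-default}"
}
# Usage: check_i2pd_conf /etc/i2pd/i2pd.conf && systemctl restart i2pd
```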
Step 3 — the floodfill-promoted variant
To promote the same router to a floodfill, set floodfill = true and bump the resource budget. The other config sections stay the same.
# /etc/i2pd/i2pd.conf.d/floodfill.conf — promote to floodfill.
# Drop this file alongside the base config; the .d directory is read after
# the main config and overrides the floodfill flag.
floodfill = true
# Floodfills hold a much larger netDb in memory. Bump the per-tunnel limits.
[limits]
transittunnels = 10000
openfiles = 16384
coresize = 0
zombies = 0
# Increased SSU2 + NTCP2 buffer sizes — floodfills handle a steady stream of
# small lookup messages from many peers; defaults are sized for a regular
# router's traffic shape.
[ssu2]
mtu4 = 1488
mtu6 = 1280
After dropping in the override file, restart and watch the log for the floodfill flag:
14:22:31@1/inf - i2pd v2.55.0 (0.9.65) starting...
14:22:31@1/inf - Daemon: running as service
14:22:32@1/inf - Router: starting in floodfill mode
14:22:32@1/inf - SSU2: bound to udp://0.0.0.0:35741
14:22:32@1/inf - NTCP2: bound to tcp://0.0.0.0:35741
14:22:39@1/inf - NetDb: loaded 1284 router infos
The “starting in floodfill mode” line is the milestone. After that, the netDb starts populating; expect 30-90 minutes for the router to be discovered by enough peers that the inbound-lookup rate stabilises.
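You can watch that population happen from the shell by counting the router-info files in i2pd's on-disk netDb cache. A sketch — the data-dir path and `routerInfo-*.dat` naming follow the Debian-package layout and are assumptions to verify on your install:

```shell
# Count cached router-info records; the number should climb steadily
# during the 30-90 minute discovery window described above.
netdb_count() {
  find "${1:-/var/lib/i2pd/netDb}" -type f -name 'routerInfo-*.dat' 2>/dev/null | wc -l
}
# Usage: watch -n 60 netdb_count
```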
Step 4 — firewall (nftables) — the actual rules
The brand baseline uses nftables (not legacy iptables; if you’re still on iptables, the /docs section has a migration guide). The rules for an i2pd box are minimal: allow the configured port on UDP and TCP, allow established/related, drop everything else inbound. Outbound stays unrestricted.
# /etc/nftables.conf — i2pd router.
# Apply: nft -f /etc/nftables.conf
table inet filter {
chain input {
type filter hook input priority 0; policy drop;
# Loopback always allowed.
iif lo accept
# Established + related — replies to outbound i2pd traffic.
ct state established,related accept
# SSH from the operator's bastion only — replace with your CIDR.
# tcp dport 22 ip saddr 198.51.100.0/24 accept
# SSH from anywhere (with key auth + fail2ban from the brand baseline).
tcp dport 22 accept
# i2pd: NTCP2 (TCP) + SSU2 (UDP) on the configured port.
tcp dport 35741 accept
udp dport 35741 accept
# ICMP for path-MTU discovery.
ip protocol icmp icmp type { echo-request, destination-unreachable, time-exceeded } accept
ip6 nexthdr icmpv6 icmpv6 type { echo-request, destination-unreachable, time-exceeded, packet-too-big } accept
# Drop the rest, with a counter so we can see scan volume.
counter log prefix "nft drop input: " level info drop
}
chain forward { type filter hook forward priority 0; policy drop; }
chain output { type filter hook output priority 0; policy accept; }
}
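The counter on the drop rule doubles as a cheap scan-volume metric. One way to pull the packet count out for a cron job — a sketch that parses the `counter packets N bytes M` form `nft list chain` prints; the function name is mine:

```shell
# Reads `nft list chain inet filter input` on stdin and prints the packet
# count from the rule carrying the "nft drop input" log prefix.
drop_count() {
  awk '/nft drop input/ { for (i = 1; i < NF; i++) if ($i == "packets") print $(i+1) }'
}
# Usage: nft list chain inet filter input | drop_count
```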
Step 5 — sysctl tuning for the UDP-heavy I2P stack
I2P’s SSU2 transport runs over UDP, and a busy floodfill will see hundreds of thousands of small UDP datagrams per minute. The default Linux UDP buffer sizes are too small; bump them.
# /etc/sysctl.d/99-i2p-router.conf
# UDP receive / send buffers — bumped for a busy SSU2 endpoint.
net.core.rmem_max = 25165824
net.core.rmem_default = 4194304
net.core.wmem_max = 25165824
net.core.wmem_default = 4194304
# Raise the local port range — i2pd opens many short-lived outbound conns.
net.ipv4.ip_local_port_range = 10000 65535
# Raise the conntrack table — a floodfill sees enough peers to overflow defaults.
net.netfilter.nf_conntrack_max = 524288
net.netfilter.nf_conntrack_udp_timeout = 30
net.netfilter.nf_conntrack_udp_timeout_stream = 180
# IP6 RA disabled — static v6 addressing.
net.ipv6.conf.all.accept_ra = 0
Apply and confirm:
sysctl --system
* Applying /etc/sysctl.d/99-i2p-router.conf ...
net.core.rmem_max = 25165824
net.core.rmem_default = 4194304
net.core.wmem_max = 25165824
net.core.wmem_default = 4194304
Step 6 — the reseed-server question
When i2pd starts on a fresh install, it has no router-info records and can’t bootstrap into the network on its own. It downloads an initial set of router infos from a reseed server — a small handful of HTTPS-served archives published by trusted I2P contributors. The reseed list ships with i2pd; for most operators the defaults are fine.
A more cautious posture is to (a) verify the reseed server’s TLS certificate against a pinned set, and (b) optionally add your own reseed mirror for distribution to other operators. Both are documented upstream; the second matters mostly for operators running fleets of I2P nodes and is out of scope for this note.
Step 7 — monitoring with the i2pd web console
i2pd exposes an HTML web console on port 7070 (bound to 127.0.0.1 per the config above). Reach it via SSH tunnel:
ssh -N -L 7070:127.0.0.1:7070 operator@your-vps
Then point a browser at http://127.0.0.1:7070/. The console shows: bandwidth in/out, transit-tunnel count, router-info count, peer table, the netDb size (for a floodfill), and per-transport stats. For Prometheus integration, i2pd doesn’t ship a native exporter; the practical pattern is a sidecar that scrapes the HTTP console’s /?cmd=stats endpoint and emits Prometheus textfile metrics, then node_exporter reads the textfile collector. [i2pd web-console source]
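The smallest possible version of that sidecar probes the console and atomically writes one up/down metric into the textfile-collector directory. A sketch — the metric name, credentials, and paths are illustrative, and real stats parsing is console-version-dependent so it's omitted here:

```shell
# Probe the localhost console with the auth values from the config above
# and emit a single Prometheus metric. Writes via a temp file + rename so
# node_exporter never reads a half-written file.
emit_i2pd_up() {
  out="$1"; pass="$2"
  if curl -fsS -u "console:$pass" 'http://127.0.0.1:7070/' >/dev/null 2>&1
  then v=1
  else v=0
  fi
  printf 'i2pd_console_up %s\n' "$v" > "$out.tmp" && mv "$out.tmp" "$out"
}
# Usage (cron, every minute):
# emit_i2pd_up /var/lib/node_exporter/textfile/i2pd.prom "$I2PD_CONSOLE_PASS"
```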
The four alerts worth setting:
- up{job="i2pd"} going to 0 — the daemon is down.
- Transit-tunnel count dropping below 100 on a floodfill — the network has stopped using your floodfill; almost always a firewall or NAT issue at the upstream.
- NetDb size < 600 router infos — the floodfill is isolated.
- UDP RX-error rate > 100/sec — SSU2 packets are being dropped by the kernel; bump net.core.rmem_max.
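The last alert can be fed straight from the kernel: /proc/net/snmp exposes the cumulative UDP RcvbufErrors counter as a header line plus a values line. A sketch that pairs the two up — sampling it twice, a minute apart, gives the rate:

```shell
# Print the cumulative UDP RcvbufErrors count. The file format is two
# "Udp:" lines — column names first, then values.
udp_rcvbuf_errors() {
  awk '/^Udp:/ { if (!n) { for (i = 1; i <= NF; i++) h[i] = $i; n = 1 }
                 else for (i = 1; i <= NF; i++) if (h[i] == "RcvbufErrors") print $i }' \
      "${1:-/proc/net/snmp}"
}
```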
Closing — the floodfill question, restated
Whether to run as a floodfill or as a regular router depends mostly on whether the box has the memory headroom (250-450MB extra over a regular router) and whether the operator wants to take on the slightly elevated visibility of being in the floodfill set. Floodfill routers are listed in any router’s netDb subset; regular routers are visible only to peers they’ve directly carried tunnels for. Neither is “deanonymising” in any meaningful sense — but floodfill participation is opt-in for a reason, and the right answer is to start as a regular router for the first 30-60 days, prove the box’s operational stability, and promote to floodfill only after that.
// END OF NOTE
$ cd /notes # back to the listing