This is how to set up a nostr relay, fast!
First, get an Ubuntu 22.04 VPS somewhere. Second, point a domain name at its IP.
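For example, an A record pointing the domain at the VPS (the IP below is just a placeholder):
yourdomain.com.    300    IN    A    203.0.113.10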
Then:
ssh root@yourdomain.com
apt update
apt upgrade
apt install -y curl build-essential gcc make librust-openssl-dev protobuf-compiler libprotobuf-dev
cd /var/
mkdir nostr
adduser deploy  # all empty, keep hitting enter
chown -R deploy:deploy nostr
Then, install Rust:
sudo -i -u deploy
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh  # choose option 1: recommended settings
exit  # we log out to re-source the bash profile with rust in the path
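Before moving on, you can sanity-check that Rust ended up on the deploy user's PATH (the exact version numbers will vary):
sudo -i -u deploy
cargo --version
rustc --version
exit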
Then, install nostr relay:
sudo -i -u deploy
cd /var/nostr/
git clone https://git.sr.ht/~gheartsfield/nostr-rs-relay
cd nostr-rs-relay/
cargo build -r
RUST_LOG=warn,nostr_rs_relay=info ./target/release/nostr-rs-relay &
Done. The nostr relay is now running in the background, and you should see some logs in that terminal.
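If you want to confirm it is actually listening before putting anything in front of it, here is a quick check from the same server (this assumes the default port 8080; the second request asks for the NIP-11 relay information document, which nostr-rs-relay serves as JSON):
curl http://127.0.0.1:8080
curl -H 'Accept: application/nostr+json' http://127.0.0.1:8080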
Now we need something in front of it; let's use Caddy as a reverse proxy. Start a new terminal and:
ssh root@yourdomain.com
apt install -y debian-keyring debian-archive-keyring apt-transport-https
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | tee /etc/apt/sources.list.d/caddy-stable.list
apt update
apt install caddy
Now you should see the Caddy default page at your domain. Let's connect the nostr relay to it. Edit the Caddyfile (/etc/caddy/Caddyfile) and replace its contents with:
# /etc/caddy/Caddyfile
yourdomain.com:443 {
    reverse_proxy 127.0.0.1:8080
    tls youremail+nostr@example.com {
        on_demand
    }
    log {
        output file /var/log/caddy/nostr.log {
            roll_size 1gb
            roll_keep 1
            roll_keep_for 720h
        }
    }
}
And finally, restart Caddy:
systemctl restart caddy
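Caddy should fetch a certificate for the domain on its own; a quick check from your local machine (assuming DNS has already propagated) is:
curl -I https://yourdomain.com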
Now you should be able to add your relay to your nostr client with an address like:
wss://yourdomain.com
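If you want to test the relay end-to-end without a client, a tool like websocat also works (not installed above, just an optional check). Sending a REQ should return recent events and/or an EOSE notice; Ctrl-C to exit:
echo '["REQ","test",{"limit":1}]' | websocat wss://yourdomain.com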
Monitor Caddy with:
journalctl -b -u caddy -f
tail -f /var/log/caddy/nostr.log
Add a systemd entry
It's likely that the nostr-rs-relay process will die after a few hours/days. We don't want that, but it will happen anyway. To mitigate, we can add a systemd entry that will (a) restart nostr-rs-relay if it dies and (b) start it as soon as the server boots up.
ssh root@yourdomain.com
touch /lib/systemd/system/nostr-rs-relay.service
Edit /lib/systemd/system/nostr-rs-relay.service and enter:
# /lib/systemd/system/nostr-rs-relay.service
[Unit]
Description=nostr-rs-relay
After=network.target
[Service]
Type=simple
WorkingDirectory=/var/nostr/nostr-rs-relay
ExecStart=/bin/bash -lc 'exec ./target/release/nostr-rs-relay'
User=deploy
Group=deploy
Environment="RUST_LOG=warn,nostr_rs_relay=info"
TimeoutSec=15
Restart=always
[Install]
WantedBy=multi-user.target
Then:
ln -s /lib/systemd/system/nostr-rs-relay.service /etc/systemd/system/multi-user.target.wants/
systemctl daemon-reload
systemctl enable nostr-rs-relay.service
systemctl start nostr-rs-relay.service
systemctl status nostr-rs-relay.service
The final (status) command should display Active: active (running).
root@nostr01:~# systemctl status nostr-rs-relay.service
● nostr-rs-relay.service - nostr-rs-relay
     Loaded: loaded (/lib/systemd/system/nostr-rs-relay.service; enabled; vendor preset: enabled)
     Active: active (running) since Sat 2023-03-18 18:47:03 UTC; 4min 45s ago
   Main PID: 127715 (nostr-rs-relay)
      Tasks: 21 (limit: 2257)
     Memory: 1008.4M
        CPU: 6.710s
     CGroup: /system.slice/nostr-rs-relay.service
             └─127715 ./target/release/nostr-rs-relay
Logs of systemd services are kept in the journal and can be examined with journalctl:
journalctl -b -u nostr-rs-relay -f
Prometheus
nostr-rs-relay exposes a /metrics endpoint, which can be scraped by a Prometheus instance.
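You can peek at what it exposes directly on the server (assuming the relay still listens on 127.0.0.1:8080 as configured above):
curl -s http://127.0.0.1:8080/metrics | head -n 20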
ssh root@yourdomain.com
groupadd --system prometheus
useradd -s /sbin/nologin --system -g prometheus prometheus
mkdir /var/lib/prometheus
for i in rules rules.d files_sd; do sudo mkdir -p /etc/prometheus/${i}; done
wget https://github.com/prometheus/prometheus/releases/download/v2.43.0/prometheus-2.43.0.linux-amd64.tar.gz
tar -xzvf prometheus-2.43.0.linux-amd64.tar.gz
cd prometheus-2.43.0.linux-amd64/
cp prometheus promtool /usr/local/bin/
prometheus --version
promtool --version
cp prometheus.yml /etc/prometheus/prometheus.yml
cp -r consoles/ console_libraries/ /etc/prometheus/
# vim /lib/systemd/system/prometheus.service
[Unit]
Description=Prometheus
Documentation=https://prometheus.io/docs/introduction/overview/
Wants=network-online.target
After=network-online.target
[Service]
Type=simple
User=prometheus
Group=prometheus
ExecReload=/bin/kill -HUP $MAINPID
ExecStart=/usr/local/bin/prometheus \
--config.file=/etc/prometheus/prometheus.yml \
--storage.tsdb.path=/var/lib/prometheus \
--web.console.templates=/etc/prometheus/consoles \
--web.console.libraries=/etc/prometheus/console_libraries \
--web.listen-address=0.0.0.0:9090 \
--web.external-url=
SyslogIdentifier=prometheus
Restart=always
[Install]
WantedBy=multi-user.target
cd /root/prometheus/
for i in rules rules.d files_sd; do sudo chown -R prometheus:prometheus /etc/prometheus/${i}; done
for i in rules rules.d files_sd; do sudo chmod -R 775 /etc/prometheus/${i}; done
chown -R prometheus:prometheus /var/lib/prometheus/
ln -s /lib/systemd/system/prometheus.service /etc/systemd/system/multi-user.target.wants/
systemctl daemon-reload
systemctl start prometheus
systemctl enable prometheus
systemctl status prometheus
Finally, add an entry in the scrape_configs section of prometheus.yml like so:
# vim /etc/prometheus/prometheus.yml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]

  - job_name: "nostr-rs-relay"
    static_configs:
      - targets: ["localhost:8080"]
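After editing the config, restart Prometheus (or reload it, since the unit defines ExecReload) and check that the new target shows up:
systemctl restart prometheus
curl -s http://localhost:9090/api/v1/targets | grep nostr-rs-relay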