I’ve been self-hosting my services for a couple of years on dedicated servers from different hosting providers, but recently I decided to move all my services to a machine inside my server rack at home. As I didn’t want to make my home IP publicly known, I decided against opening ports on my router and instead went with a setup consisting of a VPS and a VPN.
I also didn’t want to use Cloudflare, as I wanted to stay independent of the provider that makes everything reachable from the outside. With the setup I have now I can basically swap out the VPS and only need a small amount of setup to get everything working again.
What is needed?
- A VM or physical machine where your services will be hosted
- A VPS from a hosting provider. I use a CAX11 Cloud Server from Hetzner.
- Wireguard as the VPN to connect the VM and the VPS.
Setting up the VPS
I bought the CAX11 and selected Debian as the operating system. Afterwards I did my usual base setup: installing fail2ban, replacing iptables with nftables and creating a dedicated user instead of using root to log in via SSH.
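Roughly, that base setup boils down to something like the following (the username admin is just a placeholder, adjust it to your liking):
# install fail2ban and nftables
sudo apt update
sudo apt install fail2ban nftables
sudo systemctl enable --now nftables
# create a dedicated user and allow it to use sudo
sudo adduser admin
sudo usermod -aG sudo admin
# afterwards disable root login via SSH in /etc/ssh/sshd_config
# (PermitRootLogin no) and restart sshd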
Setting up the VPN
I’m using Wireguard as the VPN because it is very easy to install and configure.
I installed wg-quick on the VPS by running sudo apt install wireguard-tools. In order to connect two machines with Wireguard you need to create three things:
- public key
- private key
- pre-shared key
The keys are used to authenticate the Wireguard peers with each other and to encrypt the traffic between them. A key pair needs to be created on each peer, which in my case are the VPS and the service machine, while the pre-shared key is generated only once and used by both sides.
Run wg genkey | (umask 0077 && tee peer_VPS.key) | wg pubkey > peer_VPS.pub
to generate both the private key (peer_VPS.key) and the public key (peer_VPS.pub). On the service machine, use peer_service instead of peer_VPS as the file name.
Then run wg genpsk > peer_VPS-peer_service.psk to generate the pre-shared key. After creating the keys on both machines you should have the following files (a consolidated sketch of the whole key generation follows the list below):
- peer_VPS.key
- peer_VPS.pub
- peer_service.key
- peer_service.pub
- peer_VPS-peer_service.psk
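Put together, the key generation could look roughly like this, assuming each key pair is generated on the machine it belongs to and the pre-shared key is generated on the VPS and then copied over a secure channel (the user and path in the scp command are placeholders):
# on the VPS
wg genkey | (umask 0077 && tee peer_VPS.key) | wg pubkey > peer_VPS.pub
wg genpsk > peer_VPS-peer_service.psk
# on the service machine
wg genkey | (umask 0077 && tee peer_service.key) | wg pubkey > peer_service.pub
# then exchange the public keys and copy the pre-shared key to the
# service machine, e.g. via scp
scp peer_VPS-peer_service.psk user@service-machine:~/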
The next step is to create a Wireguard config file for the VPS under /etc/wireguard/wg0.conf. Mine looks like this:
[Interface]
Address=192.168.10.1/24
ListenPort=51820
PrivateKey=PRIVATE_KEY <-- copy the content of peer_VPS.key here
PreUp=sysctl -w net.ipv4.ip_forward=1
PreUp=sysctl -w net.ipv6.conf.all.forwarding=1
PostDown=sysctl -w net.ipv4.ip_forward=0
PostDown=sysctl -w net.ipv6.conf.all.forwarding=0
MTU=1280
# service
[Peer]
PublicKey=PUBLIC_KEY <-- copy the content of peer_service.pub here
PresharedKey=PRESHARED_KEY <-- copy the content of peer_VPS-peer_service.psk here
AllowedIPs=192.168.10.2/32
You should then be able to start the VPN by running systemctl enable --now wg-quick@wg0. After running wg on the command line you should see an output like this:
interface: wg0
public key: PUBLIC_KEY <-- should be the value of peer_VPS.pub
private key: (hidden)
listening port: 51820
peer: PUBLIC_KEY <-- should be the value of peer_service.pub
preshared key: (hidden)
endpoint: your-ip
allowed ips: 192.168.10.2/32
As the other side of the VPN is not running yet, you will not be able to ping it, so we still need to set it up.
Create the Wireguard configuration in /etc/wireguard/wg0.conf on the other client (in my case the service machine) like this:
[Interface]
ListenPort = 51821
PrivateKey = PRIVATE_KEY <-- should be the value of peer_service.key
Address = 192.168.10.2/24
[Peer]
PublicKey = PUBLIC_KEY <-- should be the value of peer_VPS.pub
PresharedKey = PRESHARED_KEY <-- copy the content of peer_VPS-peer_service.psk here
AllowedIPs = 192.168.10.1/32
Endpoint = your-ip:51820 <-- your-ip should be the ip address of the VPS
PersistentKeepalive = 25
You can see that this configuration file has an Endpoint and PersistentKeepalive configured. In order for Wireguard to work the machines need to exchange traffic with each other, which normally only happens when something gets routed over the wg0 interface.
As the service machine is not reachable from the outside, it needs to connect to the VPS. For that reason we use Endpoint to specify which IP and port the service machine should connect to, and PersistentKeepalive=25 to have the machine send a keep-alive message to the VPS every 25s in order to keep the connection alive.
Now you can start Wireguard on the service machine by running systemctl enable --now wg-quick@wg0, and after a few seconds you should be able to ping each machine from the other. Your wg output should now look like this:
interface: wg0
public key: PUBLIC_KEY <-- should be the value of peer_VPS.pub
private key: (hidden)
listening port: 51820
peer: PUBLIC_KEY <-- should be the value of peer_service.pub
preshared key: (hidden)
endpoint: your-ip
allowed ips: 192.168.10.2/32
latest handshake: 20 seconds ago
transfer: 900.22 MiB received, 211.6 MiB sent
If you can’t ping the machines, check whether the IP range used (here 192.168.10.0/24) is already in use somewhere else or whether a firewall is enabled on one of the machines. To debug you can ping from one machine and run tcpdump -i wg0 on the other machine to check whether the traffic even arrives there.
If it doesn’t, it’s also possible that your keys don’t match - try to regenerate them in this case.
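As a concrete example, a debugging session could look roughly like this (the IPs match the tunnel addresses used above):
# on the service machine: ping the VPS end of the tunnel
ping 192.168.10.1
# on the VPS: check whether the ICMP packets arrive on the tunnel interface
sudo tcpdump -i wg0 icmp
# on either side: check whether a handshake happened at all
sudo wg show wg0 latest-handshakes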
Making the services available from the outside
In my case I run my services with Docker Compose on the service machine and use Traefik as the reverse proxy, so the docker-compose.yml for Traefik looks like this:
version: "3.3"
services:
traefik:
image: "traefik:latest"
container_name: "traefik"
restart: always
network_mode: "host"
command:
- "--providers.docker=true"
- "--providers.docker.exposedbydefault=false"
- "--entrypoints.websecure.address=:443"
- "--certificatesresolvers.le.acme.tlschallenge=true"
- "--certificatesresolvers.le.acme.storage=/acme.json"
volumes:
- "$PWD/acme.json:/acme.json"
- "/var/run/docker.sock:/var/run/docker.sock:ro"
A simple docker-compose.yml for string-is looks like this:
version: '3'
services:
  string-is:
    image: daveperrett/string-is
    restart: always
    container_name: string-is
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.string-is.rule=Host(`string-is.aucubin.de`)"
      - "traefik.http.routers.string-is.entrypoints=websecure"
      - "traefik.http.routers.string-is.tls.certresolver=le"
Traefik handles TLS and certificates, so the only thing left is to connect the 443/tcp port of the VPS with the 443/tcp port of the service machine. You could forward the port directly with nftables (or iptables), but I decided against it and use HAProxy instead, in order to have better logging and to be able to use load balancing if needed.
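For completeness, a minimal sketch of what the nftables alternative could look like: a plain DNAT of 443/tcp to the service machine over the tunnel. IP forwarding is already enabled by the PreUp lines in the Wireguard config above; treat this as a rough sketch rather than a tested configuration.
# /etc/nftables.conf (excerpt) - forward incoming 443/tcp to the service machine
table ip nat {
  chain prerouting {
    type nat hook prerouting priority dstnat; policy accept;
    tcp dport 443 dnat to 192.168.10.2:443
  }
  chain postrouting {
    type nat hook postrouting priority srcnat; policy accept;
    ip daddr 192.168.10.2 tcp dport 443 masquerade
  }
}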
The first step is to install HAProxy on the VPS by running sudo apt install haproxy. A default haproxy.cfg is shipped under /etc/haproxy/haproxy.cfg, to which I added the following block:
frontend main_https_listen
    bind :443 v4v6
    bind :::443 v6only
    mode tcp
    option tcplog
    default_backend service

backend service
    mode tcp
    balance source
    option tcp-check
    server service 192.168.10.2:443 check
This will basically do the following:
- The frontend block tells HAProxy to listen on 443/tcp (both IPv4 and IPv6) in TCP proxy mode. This means HAProxy just forwards raw TCP traffic to the backend, unlike HTTP mode, where HAProxy would parse the HTTP traffic and you could adjust headers or terminate TLS on it.
- The backend block tells HAProxy that the backend service has 192.168.10.2:443 as the endpoint to forward to, and to check whether it is available by trying to open a TCP connection to it.
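Before restarting, it can’t hurt to validate the configuration; HAProxy has a check mode for this:
# check the configuration for syntax errors before restarting
sudo haproxy -c -f /etc/haproxy/haproxy.cfg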
After adjusting the configuration file you can restart HAProxy with systemctl restart haproxy, and after a few seconds you should see that the service is up when running journalctl -b -u haproxy. Then you just need to wait until Traefik has fetched the TLS certificates from Let’s Encrypt and you are ready to go. You should now be able to connect to your services without being on your internal network.
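To verify the whole chain from the outside, a quick curl against one of the services (the hostname here is the string-is example from above) should show a Let’s Encrypt certificate and an HTTP response:
# should show a Let's Encrypt certificate and an HTTP response
curl -vI https://string-is.aucubin.de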