I’ve wanted to set up a VPN on my server for a while. I didn’t want to forward all of my traffic over it (the typical use case), but I did want the ability to access my home network from anywhere over a secure tunnel.
I didn’t like accessing my internal services through a publicly available subdomain via Cloudflare’s DNS. Instead, I wanted to access my self-hosted services with cute domain names that only I could use. A DNS server and some elaborate nginx configs would be needed.
And to top it all off, I decided to do all of this remotely at 3 AM. (Pro tip: don’t do this.)
WireGuard
WireGuard is a neat VPN solution that I’ve been meaning to try out for a while. It seems like it won’t cause any headaches. A friend recently set it up on her network, and I thought to myself, “why not?”
I have minimal (read: zero) knowledge when it comes to networking, so things didn’t work out so great.
When I first created the WireGuard network interface, wg0, I assigned it to 10.0.0.1/24. Seems like it’ll work, right? Well, no. Big mistake, actually. This is the same range of IPs that my router uses for its local network.
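In iproute2 terms, the setup amounted to something like this (a reconstruction, not my exact commands):

$ ip link add dev wg0 type wireguard
$ ip addr add 10.0.0.1/24 dev wg0
$ ip link set wg0 up

Assigning 10.0.0.1/24 to wg0 also adds a route for 10.0.0.0/24 via the tunnel, sitting right on top of the route for the router’s LAN.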
Broken pipes
Soon after, I started experiencing bursts of 100% packet loss. The SSH connection was getting really spotty, and soon I was being kicked off every few seconds. I was a bit confused. Was my ISP having an outage?
With difficulty, I tried using socat and ssh to create a proxy that would let me access the router’s configuration page. Somehow, this led to the server itself being proxied instead of the router. Hmm, I wonder why.
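For reference, the attempt was something along these lines (the username is a placeholder; real.server.ip as in the curl example later):

$ ssh -L 8080:10.0.0.1:80 me@real.server.ip

In theory, that forwards local port 8080 to the router’s web interface. In practice… well, read on.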
It didn’t take long for me to realize (after getting teased by nep) that 10.0.0.1 was being routed back to the server instead of the router, because I had assigned WireGuard to use 10.0.0.1/24. I had essentially told my server that it was now 10.0.0.1 on the local network.
I changed the routing prefix to something else and the packet loss disappeared. Nice.
The other half
Now that WireGuard was up and running on my server, it was time to set it up on my laptop. This took a while because I had no idea what IPs to use for the configuration… on both sides.
At one point, I even blocked myself from accessing my server by assigning my laptop’s external IP to the AllowedIPs field of the [Peer] section, thinking that it was some kind of external IP access control allowlist. Nope. (It works differently, as explained to me by someone actually competent in this field; more on that below.)
Eventually, I was able to get in by using my phone’s hotspot to SSH in from another IP. Hooray.
Once I got everything figured out through a long process of blog post reading, documentation hunting, and trial and error, I was able to access stuff using the routing prefix on both machines. Success! (That took around 3 hours to do…)
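For reference, the working setup ended up looking roughly like this, with the keys elided and 10.xx.xx.0/24 standing in for the new routing prefix:

# server: /etc/wireguard/wg0.conf
[Interface]
Address = 10.xx.xx.1/24
ListenPort = 51820
PrivateKey = <server private key>

[Peer]
# the laptop
PublicKey = <laptop public key>
AllowedIPs = 10.xx.xx.2/32

# laptop
[Interface]
Address = 10.xx.xx.2/24
PrivateKey = <laptop private key>

[Peer]
# the server
PublicKey = <server public key>
Endpoint = real.server.ip:51820
AllowedIPs = 10.xx.xx.0/24

And the AllowedIPs lesson from earlier: it’s routing, not access control. It lists the tunnel IPs that get routed to (and accepted from) that peer, which is why the server only lists the laptop’s /32 while the laptop routes the whole tunnel subnet through the server.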
By the end of all this, I knew more about subnets, CIDR notation, routers, subnet masks, and some other miscellaneous networking stuff. Such fun.
(But in all seriousness, I did learn a ton, and thank you to everyone who put up with my questions and lack of experience in the Discord. ♡)
DNS
Now that my laptop and server could access each other through WireGuard, it was time to set up a DNS server.
The first step was to pick one. I initially went with dnsmasq, because it was the only DNS server that I knew of at the time. However, I had some problems setting up my own records, as it seems like it only reads from /etc/hosts.
After a bit of poking around, I tried out unbound. While it was better at first, I discovered that it acted much like dnsmasq in that its primary purpose is to perform caching and resolve recursive queries. Even more poking around led to the realization that I needed an authoritative DNS server, not a recursive one!
nsd was exactly what I wanted: an authoritative DNS server that lets me write my own zone files for any DNS record I want. Perfect! After a bit of configuration and typing, it was up and running, and I manually verified its operation with dig.
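Here’s a rough sketch of what that looked like (names are placeholders; grafana.private.my.domain shows up again later): a zone stanza in nsd.conf, the zone file itself, and a dig query against the server’s tunnel IP.

# in nsd.conf
zone:
    name: private.my.domain
    zonefile: private.my.domain.zone

# private.my.domain.zone
$ORIGIN private.my.domain.
$TTL 3600
@        IN SOA ns hostmaster 1 3600 900 604800 3600
@        IN NS  ns
ns       IN A   10.xx.xx.1
grafana  IN A   10.xx.xx.1

$ dig @10.xx.xx.1 grafana.private.my.domain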
Now that it was up, I needed to point my laptop’s network configuration at it. I initially tried adding the server’s IP to the DNS field in the WireGuard configuration, but it didn’t seem to work. Putting it in the list of DNS servers in System Preferences worked, though.
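For the record, the attempt that didn’t take looked like this in the laptop’s config (same placeholders as before):

[Interface]
Address = 10.xx.xx.2/24
PrivateKey = <laptop private key>
DNS = 10.xx.xx.1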
Cloudflare records
Now that my machine was using the local DNS server for the entire domain, the records normally served by Cloudflare were no longer being returned. (I gave the internal DNS server priority over the other ones.)
This was fixed by exporting the zone file from Cloudflare and manually flattening the CNAME records into A records, then appending the result to the internal DNS server’s zone file.
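“Flattening” here just means replacing each CNAME with an A record for the address its target ultimately resolves to. A hypothetical record, before and after:

; before, from the Cloudflare export
blog    IN CNAME   some-app.example.net.

; after, assuming some-app.example.net resolves to 203.0.113.7
blog    IN A       203.0.113.7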
nginx
Now that I had two pieces of the puzzle, only the last remained: dissolving the public subdomains and setting up nginx to serve and proxy to the VPN.
Setting up the config file was easy enough. A server_name here and there should do the trick. However, I realized after a bit that anyone can access these services by modifying their Host header, like so:
$ curl -H "Host: grafana.private.my.domain" https://real.server.ip
I needed to restrict the web services from being accessed by anyone outside of the WireGuard network. This is done with nginx’s allow and deny directives, like this:
# only allow users connected from the vpn
allow 10.xx.xx.0/24;
deny all;
(Unfortunately, you can’t make nginx listen on just an interface by name; the closest it gets is binding to a specific address with listen.)
I tested it and quickly discovered (after a bit of searching as well) that nginx’s rewrite-phase directives are evaluated before the access controls, effectively preventing the latter from having any effect when the former is used. The sample text that I had instructed nginx to display at the root of my internal domain via return was still being shown to outsiders. Well, that’s interesting. I just removed it and moved on.
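Put together, each internal service’s server block came out looking roughly like this (the service, upstream port, and certificate paths are placeholders; certificates are covered next):

server {
    listen 443 ssl;
    server_name grafana.private.my.domain;

    ssl_certificate     /etc/nginx/certs/private.crt;
    ssl_certificate_key /etc/nginx/certs/private.key;

    # only allow users connected from the vpn
    allow 10.xx.xx.0/24;
    deny all;

    # nothing from the rewrite module (return, rewrite) in here,
    # or it would be answered before allow/deny are consulted
    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}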
Everything was working, to much relief and joy. But now I had to provision TLS certificates: self-signed ones.
I know this traffic was already going through WireGuard, but I wanted the extra layer of security anyway. Besides, who can resist that green padlock‽
A few Google searches located the magic incantation I needed to paste into my terminal window, but I still ran into some problems after handing the file paths over to nginx.
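Said incantation was roughly the usual self-signed recipe (the domain and file names are placeholders), with the resulting paths handed to nginx’s ssl_certificate and ssl_certificate_key directives:

$ openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=private.my.domain" \
    -keyout private.key -out private.crt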
No fun allowed
For some reason, my web browser was complaining that the certificate didn’t match the URL I was visiting. I came across an answer on Server Fault which explained my problem quite nicely. TL;DR: wildcards must have an “effective TLD” plus one non-wildcard component; *.website.tld and *.network.server.tld are valid, while *.tld and * are not.
And up until now, I had been using a cute, invented TLD for my internal domains. Server Fault says I shouldn’t be doing that, so I switched to a subdomain of a real domain that I had registered. This made the internal domains easier to access in my browser, while also probably fixing some other problems behind the scenes that I never encountered.
Mismatch mishmash
My certificate wasn’t working for a while, until I eventually found out that you have to use the SAN (“Subject Alternative Name”) field to give a certificate the domain (or list of domains) it should be associated with. The CN (“Common Name”) field is deprecated for this purpose and no longer used by modern browsers to verify that a certificate matches a domain. (Google Chrome certainly doesn’t look at it.) CN could only hold a single domain name anyway, so SAN supersedes it completely.
Unfortunately, specifying a SAN requires you to do some fiddling with OpenSSL’s configuration file, but it wasn’t too difficult.
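A sketch of that fiddling, with placeholder domains (newer OpenSSL versions can also do this inline with -addext "subjectAltName=..."):

# san.cnf
[req]
prompt = no
distinguished_name = dn
x509_extensions = v3_req

[dn]
CN = private.my.domain

[v3_req]
subjectAltName = @alt_names

[alt_names]
DNS.1 = private.my.domain
DNS.2 = *.private.my.domain

$ openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -config san.cnf -keyout private.key -out private.crt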
Success!
Everything’s working great now. And thanks again to those who helped me out with the WireGuard headaches. (Hey, at least I finally know what a subnet is… kinda.)