tldr: I’d like to set up a reverse proxy with a domain and an SSL cert so my partner and I can access a few selfhosted services on the internet but I’m not sure what the best/safest way to do it is. Asking my partner to use tailscale or wireguard is asking too much unfortunately. I was curious to know what you all recommend.
I have some services running on my LAN that I currently access via tailscale. Some of these services would see some benefit from being accessible on the internet (ex. Immich sharing via a link, switching over from Plex to Jellyfin without requiring my family to learn how to use a VPN, homeassistant voice stuff, etc.) but I’m kind of unsure what the best approach is. Hosting services on the internet has risk and I’d like to reduce that risk as much as possible.
-
I know a reverse proxy would be beneficial here so I can put all the services on one box and access them via subdomains but where should I host that proxy? On my LAN using a dynamic DNS service? In the cloud? If in the cloud, should I avoid a plan where you share cpu resources with other users and get a dedicated box?
-
Should I purchase a memorable domain or a domain with a random string of characters so no one could reasonably guess it? Does it matter?
-
What’s the best way to geo-restrict access? Fail2ban? Realistically, the only people that I might give access to live within a couple hundred miles of me.
-
Any other tips or info you care to share would be greatly appreciated.
-
Feel free to talk me out of it as well.
EDIT:
If anyone comes across this and is interested, this is what I ended up going with. It took an evening to set all this up and was surprisingly easy.
- domain from namecheap
- cloudflare to handle DNS
- Nginx Proxy Manager for reverse proxy (seemed easier than Traefik and I didn’t get around to looking at Caddy)
- Cloudflare-ddns docker container to update my A records in cloudflare
- authentik for 2 factor authentication on my immich server
Tailscale is completely transparent on any devices I’ve used it on. Install, set up, and never look at it again because unless it gets turned off, it’s always on.
I’ve run into a weird issue where, on my phone, tailscale will disconnect and refuse to reconnect for a seemingly random amount of time, usually less than an hour. It doesn’t happen often, but often enough that I’ve started to notice. I’m not sure if it’s a network issue or an app issue, but during that time I can’t connect to my services. All that to say, my tolerance for that is higher than my partner’s; the first time something didn’t work, they would stop using it lol
So I have it running on about 20 phones for customers of mine that use Blue Iris with it. But these are all Apple devices, I’m the only one with Android. I’ve never had a complaint except one person that couldn’t get on at all, and we found that for some reason the Blue Iris app was blacklisted in the network settings from using the VPN. But that’s the closest I’ve seen to your problem.
I wonder if setting up a ping every 15 seconds from the device to the server would keep the tunnel active and prevent the disconnect. I don’t think tailscale exposes a keepalive setting the way a plain wireguard connection does. If that’s too much of a pain, you might want to just implement WireGuard yourself, since you can set a PersistentKeepalive value and the tunnel won’t go idle. Tailscale is probably trying to reduce overhead, which would be why they don’t expose a keepalive.
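If you go the plain WireGuard route, the keepalive is one line in the peer config. A minimal client-side sketch (keys, addresses, and the endpoint are placeholders):

```ini
# wg0.conf on the client; everything in <> is a placeholder
[Interface]
PrivateKey = <client-private-key>
Address = 10.0.0.2/32

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 10.0.0.0/24
# send a packet every 25s so NAT mappings don't expire and the tunnel never idles out
PersistentKeepalive = 25
```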
relatable
It doesn’t improve security much to host your reverse proxy outside your network, but it does hide your home IP if you care.
If your app can be exploited over the web and through a proxy, it doesn’t matter whether that proxy is on the same machine or across the network.
- I got started with a guide from these guys back in 2020. I still use traefik as my reverse proxy and Authelia for authentication, and it has worked great all this time. As someone else said, everything runs on a single box, using containers for separation, and it’s super easy this way. I should probably look into a secondary server as a live backup, but that’s a lot of work / expense. I also have a Cloudflare dynamic DNS container running for that.
- I would definitely advocate for owning your own domain, for the added use case of owning your own email addresses. I can now switch email providers and don’t have to worry about losing anything. This would also lean towards a more memorable domain, or at least a second domain that is memorable. Stay away from the country TLDs or “cute” generic TLDs and stay with a tried and true .com or .net (which may take some searching).
- I don’t bother with this, I just run my server behind Cloudflare, and let them protect my server. Some might disagree, but it’s easy for me and I like that.
- Containers, containers, containers! Probably Docker since it’s easy, but Podman if you really want to get fancy / extra secure. Also, make sure you have a git repo for your compose files, and a solid backup strategy from the start (so much easier than going back and doing it later). I use Backblaze for my backups and it’s $2/month for some peace of mind.
- Do it!!!
I’m an idiot and never linked the link
https://www.smarthomebeginner.com/authentik-docker-compose-guide-2025/
I’ve tried 3 times so far in Python/gradio/Oobabooga and never managed to get certs to work or found a complete visual reference guide that demonstrates a complete working example like what I am looking for in a home network. (Only really commenting to subscribe to watch this post develop, and solicit advice:)
I’ve played around with reverse proxies and ssl certs and the easiest method I’ve found so far was docker. Just haven’t put anything in production yet. If you don’t know how to use docker, learn, it’s so worth it.
Here is the tutorial I used and the note I left for myself. You’ll need a domain to play around with. Once you figure out how to get NGINX and certbot set up, replacing the helloworld container with a different one is relatively straight forward.
DO NOT FORGET: you must give certbot read/write permissions in the docker-compose.yml file, which isn’t shown in this tutorial.

-----EXAMPLE, NOT PRODUCTION CODE-----

```yaml
nginx:
  container_name: nginx
  restart: unless-stopped
  image: nginx
  depends_on:
    - helloworld
  ports:
    - 80:80
    - 443:443
  volumes:
    - ./nginx/nginx.conf:/etc/nginx/nginx.conf
    - ./certbot/conf:/etc/letsencrypt:ro
    - ./certbot/www:/var/www/certbot:ro

certbot:
  image: certbot/certbot
  container_name: certbot
  volumes:
    - ./certbot/conf:/etc/letsencrypt:rw
    - ./certbot/www:/var/www/certbot:rw
  command: certonly --webroot -w /var/www/certbot --keep-until-expiring --email *email* -d *domain1* -d *domain2* --agree-tos
```

deleted by creator
You don’t even have to worry about setting up SSL on every individual service
I probably need to look into it more but since traefik is the reverse proxy, doesn’t it just get one ssl cert for a domain that all the other services use? I think that’s how my current nginx proxy is set up; one cert configured to work with the main domain and a couple subdomains. If I want to add a subdomain, if I remember correctly, I just add it to the config, restart the containers, and certbot gets a new cert for all the domains
deleted by creator
Nginx Proxy Manager + LetsEncrypt.
Either tailscale or cloudflare tunnels are the best-suited solutions, as other comments said.
For tailscale, since you already set it up, just make sure you have an exit node where your services are. I had to do a bit of tinkering to make sure that the IPs were resolved: it’s just an argument to the tailscale command.
But if you don’t want to use tailscale because it’s too complicated for your partner, then cloudflare tunnels are the other way to go.
How it works is by creating a tunnel between your services and Cloudflare, kind of how a VPN would work. You usually use the cloudflared CLI, or go directly through Cloudflare’s website, to configure the tunnel. Your domain’s DNS should be managed by Cloudflare, by the way, because you have to set up a binding such as service.mydns.com -> myservice.local. Cloudflare can then resolve your local service and expose it at a public URL.
Just so you know, cloudflare tunnels are free for this kind of usage; however, Cloudflare holds the keys for your SSL traffic, so in theory they could have a look at your requests.
best of luck with the setup!
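For anyone following along, the rough shape of the tunnel setup with the cloudflared CLI looks like this (hostnames and ports are examples; check Cloudflare’s docs for the current flow):

```shell
# authenticate against your Cloudflare account and create a named tunnel
cloudflared tunnel login
cloudflared tunnel create home

# point a DNS record in your Cloudflare-managed zone at the tunnel
cloudflared tunnel route dns home service.mydomain.com

# ~/.cloudflared/config.yml maps public hostnames to local services, roughly:
#   tunnel: <tunnel-id>
#   credentials-file: ~/.cloudflared/<tunnel-id>.json
#   ingress:
#     - hostname: service.mydomain.com
#       service: http://localhost:8080
#     - service: http_status:404

# run the connector; outbound-only, so no ports need to be opened on your router
cloudflared tunnel run home
```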
Thanks for the info, I appreciate it
I came here to upvote the post that mentions haproxy, but I can’t see it, so I’m resorting to writing one!
Haproxy is super fast and highly configurable, and if you don’t have the config nailed down just right it won’t start, so you know you’ve messed something up right away :-)
It will handle encryption too, so you don’t need to bother changing the config on your internal server, just tweak your firewall rules to let whatever box you have haproxy running on (you have a DMZ, right?) see the server, and you are good to go.
Google and stackexchange are your friends for config snippets. And I find the actual documentation is good too.
Configure it with certificates from let’s encrypt and you are off to the races.
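A minimal sketch of that kind of setup, assuming a Let’s Encrypt fullchain and key concatenated into one PEM file and a made-up backend address:

```
# haproxy.cfg sketch -- TLS terminates here, plain HTTP to the internal box
frontend www
    bind *:80
    bind *:443 ssl crt /etc/haproxy/certs/example.com.pem
    http-request redirect scheme https unless { ssl_fc }
    default_backend internal_app

backend internal_app
    # the service in your DMZ/LAN; no cert config needed on it
    server app1 192.168.1.50:8080 check
```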
I use a central nginx container to redirect to all my other services, using a wildcard Let’s Encrypt cert for my internal domain from acme.sh, and I access it all externally using a tailscale exit node. The only publicly accessible service that I run is my Lemmy instance. That uses a cloudflare tunnel and is isolated in its own VLAN.
TBH I’m still not really happy having any externally accessible service at all. I know enough about security to know that I don’t know enough to secure against much anything. I’ve been thinking about moving the Lemmy instance to a vps so it can be someone else’s problem if something bad leaks out.
Don’t fret, not even Microsoft does.
You’re not as valuable a target as Microsoft.
It’s just about risk tolerance. The only way to avoid risk is to not play the game.
wildcard let’s encrypt cert
I know what “wildcard” and “let’s encrypt cert” are separately but not together. What’s going on with that?
How do you have your tailscale stuff working with ssl? And why did you set up ssl if you were accessing via tailscale anyway? I’m not grilling you here, just interested.
I know enough about security to know that I don’t know enough to secure against much anything
I feel that. I keep meaning to set up something like nagios for monitoring and just haven’t gotten around to it yet.
So when I ask Let’s Encrypt for a cert, I ask for *.int.teuto.icu instead of specifically jellyfin.int.teuto.icu; that way I can use the same cert for any internally running service. Mostly I use SSL on everything to make browsers complain less. There isn’t much security benefit on a local network. I suppose it makes it harder to spoof on an external network, but I don’t think that’s a serious threat for a home net. I used to use home.lan for all of my services, but that has the drawback of redirecting to a search by default in most browsers. I have my tailscale exit node running on my router, and it just works with SSL like anything else.
Ok so I currently have a cert set up to work with:
www.domain.com (some browsers seemingly didn’t like it if I didn’t have www)
Are you saying I could just configure it like this:
*.domain.com
The idea of not having to keep updating the cert with new subdomains (and potentially break something in the process) is really appealing
Yes. If you’re using Let’s Encrypt, note that they do not support wildcard certs with the HTTP-01 challenge type. You will need to use the DNS-01 challenge type. To use it, you need a domain registrar that supports API DNS updates, like cloudflare, and then you can use the acme.sh package. Here is an example guide I found.
Note that you could still request multiple explicit subdomains in the same issue/renew commands so it’s not a huge deal either way but the wildcard will be more seamless in the future if you don’t know what other services you might want to selfhost.
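As a sketch, issuing a wildcard via DNS-01 with acme.sh and the Cloudflare DNS API looks roughly like this (the token, domain, and paths are placeholders):

```shell
# a scoped Cloudflare API token with DNS edit rights on the zone
export CF_Token="<cloudflare-api-token>"

# DNS-01 challenge: acme.sh creates the TXT record via the API, so wildcards work
acme.sh --issue --server letsencrypt --dns dns_cf \
  -d example.com -d '*.example.com'

# install the cert where the proxy expects it and reload on renewal
acme.sh --install-cert -d example.com \
  --fullchain-file /etc/nginx/certs/example.com.fullchain.pem \
  --key-file       /etc/nginx/certs/example.com.key.pem \
  --reloadcmd      "systemctl reload nginx"
```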
awesome, thanks for the info
deleted by creator
Do you mind giving a high level overview of what a Cloudlfare tunnel is doing? Like, what’s connected to what and how does the data flow? I’ve seen cloudflare mentioned a few other times in the comments here. I know Cloudflare offers DNS services via their 1.1.1.1 and 1.0.0.1 IPs and I also know they somehow offer DDoS protection (although I’m not sure how exactly. caching?). However, that’s the limit of my knowledge of Cloudflare
deleted by creator
ISPs shouldn’t care unless it is explicitly prohibited in the contract. (I’ve never seen this)
I still wouldn’t expose anything locally though since you would need to pay for a static IP.
Instead, I just use a VPS with Wireguard and a reverse proxy.
Caddy with cloudflare support in a docker container.
This the solution.
Caddy is simple.
I currently have a nginx docker container and certbot docker container that I have working but don’t have in production. No extra features, just a barebones reverse proxy with an ssl cert. Knowing that, I read through Caddy’s homepage but since I’ve never put an internet facing service into production, it’s not obvious to me what features I need or what I’m missing out on. Do you mind sharing what the quality of life improvements you benefit from with Caddy are?
I never went too far down the nginx route, so I can’t really compare the two. I ended up with caddy because I self-host vaultwarden and it really doesn’t like running over http (for obvious reasons) and caddy was the instruction set I found and understood first.
I don’t make a lot of what I host available to the wider internet, for the ones that I do, I recently migrated to using a Cloudflare tunnel to deal with the internet at large, but still have it come through caddy once it hits my server to get ssl. For everything else I have a headscale server in Oracle’s free tier that all my internal services connect to.
What Caddy gives you is automatic certs. You set up your web portal and make a wildcard subdomain that points to your portal. Then you just enter two lines in the config and your new app is up. Let’s say you want to put your Home Assistant there: you could add hass.portal.domain.tld { reverse_proxy internal.ip:8123 } and it works. Possible with other setups too, but it’s no hassle.
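For illustration, a Caddyfile along those lines might look like this (hostnames and upstream addresses are made up; Caddy obtains and renews the certs on its own):

```
# each site block is two lines: public hostname -> internal address
hass.portal.example.com {
    reverse_proxy 192.168.1.50:8123
}

jellyfin.portal.example.com {
    reverse_proxy 192.168.1.51:8096
}
```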
Honestly, if you know nginx just stick with it. There’s nothing to be gained by learning a new proxy.
Use Mozilla’s SSL generator if you want to harden nginx (or any proxy you choose)- https://ssl-config.mozilla.org/
I didn’t know about that tool. Thanks for sharing
Does Caddy have an OWASP plugin like nginx?
I don’t use it, but it looks like yes.
or a domain with a random string of characters so no one could reasonably guess it? Does it matter?
That does not work. As soon as you get SSL certificates, expect the domain name to be public knowledge, especially with Let’s Encrypt and all other certificate authorities with transparency logs. As a general rule, don’t rely on something to be hidden from others as a security measure.
Damn, I didn’t realize they had public logs like that. Thanks for the heads up
https://crt.sh would make anyone who thought obscurity would be a solution poop themselves.
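You can check it for yourself; crt.sh can be queried for all certs ever logged for a domain (example.com here as a stand-in):

```shell
# every cert in the CT logs for the domain and its subdomains, as JSON
curl -s 'https://crt.sh/?q=%25.example.com&output=json'
```

Any subdomain you’ve ever gotten a cert for shows up there, random string or not.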
deleted by creator
deleted by creator
Do you have instructions on how you set that up?
deleted by creator
Why am I forwarding all http and https traffic from WAN to a single system on my LAN? Wouldn’t that break my DNS?
You would be forwarding ingress traffic (traffic not originating from your internal network) to 443/80. This doesn’t affect egress requests (requests from users inside your network requesting external sites), so it wouldn’t break your internal DNS resolution of sites. All traffic heading to your router from outside origins would be pushed to your reverse proxy, where you can then route however you please to whatever machine/port your apps live on.
deleted by creator
nixos with nginx services does all proxying and ssl stuff, fail2ban is there as well
I know I should learn NixOS, I even tried for a few hours one evening but god damn, the barrier to entry is just a little too high for me at the moment 🫤
i guess you were able to install the os ok? are you using proxmox or regular servers?
i can post an example configuration.nix for the proxy and container servers that might help. i have to admit debugging issues with configurations can be very tricky.
in terms of security i was always worried about getting hacked. the only protection for that was to make regular backups of data and config so i can restore services, and to create a dmz behind my isp router with a vlan switch and a small router just for my services to protect the rest of my home network
i guess you were able to install the os ok? are you using proxmox or regular servers?
I was. It was learning the Nix way of doing things that was just taking more time than i had anticipated. I’ll get around to it eventually though
I tried out proxmox years ago but besides the web interface, I didn’t understand why I should use it over Debian or Ubuntu. At the moment, I’m just using Ubuntu and docker containers. In previous setups, I was using KVMs too.
Correct me if I’m wrong, but don’t you have to reboot every time you change your Nix config? That was what was painful. Once it’s set up the way you want, it seemed great but getting to that point for a beginner was what put me off.
I would be interested to see the config though
i have found this reference very useful https://mynixos.com/options/
yeah proxmox is not necessary unless you need lots of separate instances to play around with
you only need to reboot Nix when something low level has changed. i honestly don’t know where that line is drawn, so i reboot quite a lot when i’m setting up a Nix server and then hardly reboot it at all from then on, even with auto-updates running.

oh, and if i make small changes to the services i just run `sudo nixos-rebuild switch` and don’t reboot.

this is my container config for element/matrix. podman containers do not run as root, so you have to get the file privileges right on the volumes mapped into the containers. i used `top` to find out what user the services were running as. you can see there are some settings there where you can change the user if you are having permissions problems:

```nix
{ pkgs, modulesPath, ... }: {
  imports = [ (modulesPath + "/virtualisation/proxmox-lxc.nix") ];
  security.pki.certificateFiles = [ "/etc/ssl/certs/ca-certificates.crt" ];
  system.stateVersion = "23.11";
  system.autoUpgrade.enable = true;
  system.autoUpgrade.allowReboot = false;
  nix.gc = {
    automatic = true;
    dates = "weekly";
    options = "--delete-older-than 14d";
  };
  services.openssh = {
    enable = true;
    settings.PasswordAuthentication = true;
  };
  users.users.XXXXXX = {
    isNormalUser = true;
    home = "/home/XXXXXX";
    extraGroups = [ "wheel" ];
    shell = pkgs.zsh;
  };
  programs.zsh.enable = true;
  environment.etc = {
    "fail2ban/filter.d/matrix-synapse.local".text = pkgs.lib.mkDefault (pkgs.lib.mkAfter ''
      [Definition]
      failregex = .*POST.* - <HOST> - 8008.*\n.*\n.*Got login request.*\n.*Failed password login.*
                  .*POST.* - <HOST> - 8008.*\n.*\n.*Got login request.*\n.*Attempted to login as.*\n.*Invalid username or password.*
    '');
  };
  services.fail2ban = {
    enable = true;
    maxretry = 3;
    bantime = "10m";
    bantime-increment = {
      enable = true;
      multipliers = "1 2 4 8 16 32 64";
      maxtime = "168h";
      overalljails = true;
    };
    jails = {
      matrix-synapse.settings = {
        filter = "matrix-synapse";
        action = "%(known/action)s";
        logpath = "/srv/logs/synapse.json.log";
        backend = "auto";
        findtime = 600;
        bantime = 600;
        maxretry = 2;
      };
    };
  };
  virtualisation.oci-containers = {
    containers = {
      postgres = {
        autoStart = false;
        environment = {
          POSTGRES_USER = "XXXXXX";
          POSTGRES_PASSWORD = "XXXXXX";
          LANG = "en_US.utf8";
        };
        image = "docker.io/postgres:14";
        ports = [ "5432:5432" ];
        volumes = [ "/srv/postgres:/var/lib/postgresql/data" ];
        extraOptions = [ "--label" "io.containers.autoupdate=registry" "--pull=newer" ];
      };
      synapse = {
        autoStart = false;
        environment = {
          LANG = "C.UTF-8";
          # UID="0";
          # GID="0";
        };
        # user = "1001:1000";
        image = "ghcr.io/element-hq/synapse:latest";
        ports = [ "8008:8008" ];
        volumes = [ "/srv/synapse:/data" ];
        log-driver = "json-file";
        extraOptions = [
          "--label" "io.containers.autoupdate=registry"
          "--log-opt" "max-size=10m"
          "--log-opt" "max-file=1"
          "--log-opt" "path=/srv/logs/synapse.json.log"
          "--pull=newer"
        ];
        dependsOn = [ "postgres" ];
      };
      element = {
        autoStart = true;
        image = "docker.io/vectorim/element-web:latest";
        ports = [ "8009:80" ];
        volumes = [ "/srv/element/config.json:/app/config.json" ];
        extraOptions = [ "--label" "io.containers.autoupdate=registry" "--pull=newer" ];
        # dependsOn = [ "synapse" ];
      };
      call = {
        autoStart = true;
        image = "ghcr.io/element-hq/element-call:latest-ci";
        ports = [ "8080:8080" ];
        volumes = [ "/srv/call/config.json:/app/config.json" ];
        extraOptions = [ "--label" "io.containers.autoupdate=registry" "--pull=newer" ];
      };
      livekit = {
        autoStart = true;
        image = "docker.io/livekit/livekit-server:latest";
        ports = [ "7880:7880" "7881:7881" "50000-60000:50000-60000/udp" "5349:5349" "3478:3478/udp" ];
        cmd = [ "--config" "/etc/config.yaml" ];
        entrypoint = "/livekit-server";
        volumes = [ "/srv/livekit:/etc" ];
        extraOptions = [ "--label" "io.containers.autoupdate=registry" "--pull=newer" ];
      };
      livekitjwt = {
        autoStart = true;
        image = "ghcr.io/element-hq/lk-jwt-service:latest-ci";
        ports = [ "7980:8080" ];
        environment = {
          LK_JWT_PORT = "8080";
          LIVEKIT_URL = "wss://livekit.XXXXXX.dynu.net";
          LIVEKIT_KEY = "XXXXXX";
          LIVEKIT_SECRET = "XXXXXX";
        };
        entrypoint = "/lk-jwt-service";
        extraOptions = [ "--label" "io.containers.autoupdate=registry" "--pull=newer" ];
      };
    };
  };
}
```

this is my nginx config for my element/matrix services
as you can see i am using a proxmox NixOS with an old 23.11 nix channel but i’m sure the config can be used in other NixOS environments
```nix
{ pkgs, modulesPath, ... }: {
  imports = [ (modulesPath + "/virtualisation/proxmox-lxc.nix") ];
  security.pki.certificateFiles = [ "/etc/ssl/certs/ca-certificates.crt" ];
  system.stateVersion = "23.11";
  system.autoUpgrade.enable = true;
  system.autoUpgrade.allowReboot = true;
  nix.gc = {
    automatic = true;
    dates = "weekly";
    options = "--delete-older-than 14d";
  };
  networking.firewall.allowedTCPPorts = [ 80 443 ];
  services.openssh = {
    enable = true;
    settings.PasswordAuthentication = true;
  };
  users.users.XXXXXX = {
    isNormalUser = true;
    home = "/home/XXXXXX";
    extraGroups = [ "wheel" ];
    shell = pkgs.zsh;
  };
  programs.zsh.enable = true;
  security.acme = {
    acceptTerms = true;
    defaults.email = "XXXXXX@yahoo.com";
  };
  services.nginx = {
    enable = true;
    virtualHosts._ = {
      default = true;
      extraConfig = "return 500; server_tokens off;";
    };
    virtualHosts."XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      locations."/_matrix/federation/v1" = {
        proxyPass = "http://192.168.10.131:8008";
        extraConfig =
          "client_max_body_size 300M;" +
          "proxy_set_header X-Forwarded-For $remote_addr;" +
          "proxy_set_header Host $host;" +
          "proxy_set_header X-Forwarded-Proto $scheme;";
      };
      locations."/" = {
        extraConfig = "return 302 https://element.XXXXXX.dynu.net;";
      };
      extraConfig = "proxy_http_version 1.1;";
    };
    virtualHosts."matrix.XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      extraConfig = "proxy_http_version 1.1;";
      locations."/" = {
        proxyPass = "http://192.168.10.131:8008";
        extraConfig =
          "client_max_body_size 300M;" +
          "proxy_set_header X-Forwarded-For $remote_addr;" +
          "proxy_set_header Host $host;" +
          "proxy_set_header X-Forwarded-Proto $scheme;";
      };
    };
    virtualHosts."element.XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      locations."/" = {
        proxyPass = "http://192.168.10.131:8009/";
        extraConfig = "proxy_set_header X-Forwarded-For $remote_addr;";
      };
    };
    virtualHosts."call.XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      locations."/" = {
        proxyPass = "http://192.168.10.131:8080/";
        extraConfig = "proxy_set_header X-Forwarded-For $remote_addr;";
      };
    };
    virtualHosts."livekit.XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      locations."/wss" = {
        proxyPass = "http://192.168.10.131:7881/";
        # proxyWebsockets = true;
        extraConfig =
          "proxy_http_version 1.1;" +
          "proxy_set_header X-Forwarded-For $remote_addr;" +
          "proxy_set_header Host $host;" +
          "proxy_set_header Connection \"upgrade\";" +
          "proxy_set_header Upgrade $http_upgrade;";
      };
      locations."/" = {
        proxyPass = "http://192.168.10.131:7880/";
        # proxyWebsockets = true;
        extraConfig =
          "proxy_http_version 1.1;" +
          "proxy_set_header X-Forwarded-For $remote_addr;" +
          "proxy_set_header Host $host;" +
          "proxy_set_header Connection \"upgrade\";" +
          "proxy_set_header Upgrade $http_upgrade;";
      };
    };
    virtualHosts."livekit-jwt.XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      locations."/" = {
        proxyPass = "http://192.168.10.131:7980/";
        extraConfig = "proxy_set_header X-Forwarded-For $remote_addr;";
      };
    };
    virtualHosts."turn.XXXXXX.dynu.net" = {
      enableACME = true;
      http2 = true;
      addSSL = true;
      locations."/" = {
        proxyPass = "http://192.168.10.131:5349/";
      };
    };
  };
}
```
Cloudflare
I presume you’re referring to Cloudflare tunnel?
Yep, cloudflare tunnel / Zero trust.
Dead easy to set up.
AWS
McDonald’s
Sears & Roebuck
Johnson & Johnson
Smith & Wesson
deleted by creator