I already host multiple services via Caddy as my reverse proxy. With Jellyfin, I'm worried about authentication. How do you secure it?

  • borax7385@lemmy.world · 3 days ago

    I use fail2ban to ban IPs that fail to log in, and also IPs that perform common scans against the reverse proxy.
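
    For anyone wanting to replicate that, here's a minimal sketch of such a jail; the log path and regex are assumptions, and Jellyfin's exact log format varies by version:

    ```
    # /etc/fail2ban/jail.d/jellyfin.local -- sketch, adjust paths to your install
    [jellyfin]
    enabled  = true
    port     = 80,443
    filter   = jellyfin
    logpath  = /var/log/jellyfin/jellyfin*.log
    maxretry = 5
    findtime = 10m
    bantime  = 1h

    # /etc/fail2ban/filter.d/jellyfin.conf
    # matches Jellyfin's "denied" auth log lines; verify against your version's logs
    [Definition]
    failregex = ^.*Authentication request for .* has been denied \(IP: "<ADDR>"\)\.
    ```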

  • Dr. Moose@lemmy.world · 2 days ago

    Tailscale is awesome. Alternatively, if you're more technically inclined, you can build your own WireGuard VPN instead of Tailscale; all you need is a static IP for your home network. WireGuard will always be safer than exposing each individual service.
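
    As a rough sketch of what rolling your own looks like (keys and addresses below are placeholders):

    ```
    # /etc/wireguard/wg0.conf on the home server
    [Interface]
    Address    = 10.8.0.1/24
    ListenPort = 51820
    PrivateKey = <server-private-key>

    [Peer]
    # a phone or laptop that should reach Jellyfin
    PublicKey  = <client-public-key>
    AllowedIPs = 10.8.0.2/32
    ```

    Clients point their Endpoint at the home network's static IP on port 51820, and Jellyfin is then only reachable over the tunnel.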

      • GreenKnight23@lemmy.world · 6 hours ago

        Web application firewall.

        Think of it like an intelligent firewall proxy that can take action against perceived threats such as injection attacks or timing attacks. Some can also help fight DDoS when integrated with an actual firewall upstream.
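
        As a concrete (hypothetical) example, one way to get a WAF in front of a self-hosted service is the ModSecurity nginx connector with the OWASP Core Rule Set; this fragment assumes the module and rule set are already installed:

        ```
        # nginx vhost fragment -- sketch, names are placeholders
        server {
            listen 443 ssl;
            server_name jellyfin.example.com;

            modsecurity on;
            modsecurity_rules_file /etc/nginx/modsec/main.conf;  # loads the OWASP CRS

            location / {
                proxy_pass http://127.0.0.1:8096;  # Jellyfin's default port
            }
        }
        ```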

  • Batman@lemmy.world · 2 days ago

    I am using Tailscale, but I went a little further to let my family log in with their Gmail (they wouldn't make a separate account for a million dollars).

    Tailscale funnel (public): Jellyfin and Keycloak (adminless)

    Private tailnet: Keycloak admin and Postgres DB

    I hook Jellyfin up to the adminless Keycloak using the SSO plugin, and hook Keycloak up (through the private admin instance) to use Google as an identity provider with a private app.
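
    A rough sketch of what the private half (admin Keycloak plus its Postgres) could look like in Docker Compose; image tags, hostname, and passwords here are placeholders, not the actual setup:

    ```
    # docker-compose.yml -- sketch only
    services:
      keycloak:
        image: quay.io/keycloak/keycloak:26.0
        command: start
        environment:
          KC_HOSTNAME: https://auth.example.com
          KC_DB: postgres
          KC_DB_URL: jdbc:postgresql://db:5432/keycloak
          KC_DB_USERNAME: keycloak
          KC_DB_PASSWORD: change-me
        depends_on:
          - db
      db:
        image: postgres:17
        environment:
          POSTGRES_DB: keycloak
          POSTGRES_USER: keycloak
          POSTGRES_PASSWORD: change-me
        volumes:
          - pgdata:/var/lib/postgresql/data
    volumes:
      pgdata:
    ```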

    • λλλ@programming.dev (OP) · 1 day ago

      The SSO plugin is good to know about. Does that address any of the security issues someone was talking about earlier?

      • Batman@lemmy.world · 23 hours ago (edited)

        I'd say it's nearly as secure as basic authentication. If you restrict deletion to admin users and use role (or group) based auth to limit that Jellyfin admin ability to people with strong Keycloak passwords, I think you are good. Still, the one risk is that people could delete your media if an admin user's Gmail is hacked.

        I will say it's not as secure as restricting access to a VPN, since you could be brute-forced. Frankly, it would be preferable to set up rate limiting too, but that was a bridge too far for me.
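
        For the record, if nginx is the proxy in front, rate limiting the login endpoint is only a few lines (a sketch; the zone name and limits are arbitrary):

        ```
        # in the http{} block: track clients by IP, allow ~5 login attempts per minute
        limit_req_zone $binary_remote_addr zone=jfauth:10m rate=5r/m;

        # in the Jellyfin server{} block: throttle only the login endpoint
        location /Users/AuthenticateByName {
            limit_req zone=jfauth burst=5 nodelay;
            proxy_pass http://127.0.0.1:8096;
        }
        ```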

        • Appoxo@lemmy.dbzer0.com · 15 hours ago

          I set mine up with Authelia 2FA and restricted media deletion to one user: the administrator.
          All others aren't allowed to delete. Not even me.
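
          The Authelia half of that is a small access-control rule; a sketch with a placeholder domain:

          ```
          # configuration.yml fragment: require 2FA for the Jellyfin host
          access_control:
            default_policy: deny
            rules:
              - domain: jellyfin.example.com
                policy: two_factor
          ```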

  • DefederateLemmyMl@feddit.nl · 1 day ago

    What I used to do: I put Jellyfin behind an nginx reverse proxy on a separate vhost (so on a unique domain), then added basic authentication (an htpasswd file) with an unguessable password on the whole domain, then added geoip firewall rules so that port 443 was only reachable from the country I was in. I live in a small country, so this significantly limits exposure.
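
    A sketch of that vhost (placeholder names; the geoip filtering lived in firewall rules, not in nginx):

    ```
    server {
        listen 443 ssl;
        server_name watch.example.com;  # the unique vhost

        auth_basic           "restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;  # the unguessable password lives here

        location / {
            proxy_pass http://127.0.0.1:8096;  # Jellyfin
            proxy_set_header Host $host;
        }
    }
    ```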

    Downside of this approach: basic auth is annoying. The Jellyfin client doesn't like it, so I had to use a browser to stream.

    Nowadays, I put all my services behind a WireGuard VPN and expose nothing else. The only issue I've had was when I was on vacation in a B&B and they used the same IP range as my home network :-|

  • jagged_circle@feddit.nl · 2 days ago

    Kinda hard, because there's an ongoing bug where putting it behind a reverse proxy with basic auth (the typical easy button for securing web software on the internet) breaks Jellyfin.

    Best thing is to not expose it. Put it on your local net and connect in with a VPN.

    • satans_methpipe@lemmy.world · 2 days ago

      I'm not experiencing that bug, though my reverse proxy is only accessed locally at the moment. I did have to play with headers a bit in nginx to get it working.
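
      For anyone hitting the same thing, these are roughly the proxy headers Jellyfin's documented nginx sample sets, websocket upgrade included (sketch):

      ```
      proxy_http_version 1.1;
      proxy_set_header Host $host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
      # websocket upgrade support, which some Jellyfin features rely on
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
      ```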

  • Gagootron@feddit.org · 2 days ago

    I use good ol' obscurity. My reverse proxy requires that the correct subdomain be used to reach any service I host, and my domain has a wildcard DNS entry. So accessing asdf.example.com gets you an error, as does hitting my IP directly, but jellyfin.example.com works. And since I don't post my valid URLs anywhere, no web scraper can find them. This filters out 99% of bots; the rest are handled by Authelia and CrowdSec.
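
    In nginx terms, the catch-all is a default_server block that answers anything without a matching name (a sketch; the same idea works in Caddy with a wildcard site):

    ```
    # default vhost: anything that isn't an expected hostname gets an error
    server {
        listen 443 ssl default_server;
        server_name _;
        return 404;  # or use ssl_reject_handshake on; to drop unknown names outright
    }

    # the real service, only reachable by its exact name
    server {
        listen 443 ssl;
        server_name jellyfin.example.com;
        location / {
            proxy_pass http://127.0.0.1:8096;
        }
    }
    ```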

    • Nibodhika@lemmy.world · 14 hours ago

      If you're using jellyfin as the subdomain, that's an easily guessable name; if you use a random word unrelated to what's being hosted, e.g. salmon.example.com, the chances are lower. Ideally your server should also reply with a 200 to all subdomains, so scrapers can't tell valid names from invalid ones. And ideally it sends some random data on each of those, so the responses don't all look identical. But that's approaching paranoid levels of security.

    • andreluis034@bookwormstory.social · 1 day ago

      Are you using HTTPS? It's highly likely that your domains/certificates are being logged for certificate transparency. Unless you're using wildcard certificates, it's very easy to enumerate your subdomains.
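
      For example, anyone can pull the names from the public CT logs; a sketch using crt.sh (example.com as a placeholder):

      ```
      # list every (sub)domain that has ever appeared on a certificate for example.com
      curl -s 'https://crt.sh/?q=%25.example.com&output=json' | jq -r '.[].name_value' | sort -u
      ```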

      • Gagootron@feddit.org · 16 hours ago

        It seems to me that it works: I don't get any web scrapers hitting anything but my main domain, and I can't find any of my subdomains on Google.

        Please tell me how you think that enumeration works. Maybe I overlooked something…

        • ocean@lemmy.selfhostcat.com · 7 hours ago

          My understanding is that scrapers check every domain and subdomain. You're making it harder, but not impossible; everything gets scraped eventually.

          It would be better if you also did IP whitelisting, rate limiting to deter bots, bot detection via Cloudflare or something similar, etc.

    • sludge@lemmy.ml · 2 days ago (edited)

      "And since I don't post my valid URLs anywhere, no web scraper can find them"

      You would, ah… be surprised. My URLs aren't published anywhere either, and I currently have 4 active decisions and over 300 alerts from CrowdSec.

      It's true that none of those threat actors know my valid subdomains, but that doesn't mean they don't know I'm there.

      • Gagootron@feddit.org · 2 days ago

        Of course I get a bunch of scanners hitting ports 80 and 443. But if they don't use the correct domain, they all end up on an nginx server hosting a static error page. Not much they can do there.

        • DefederateLemmyMl@feddit.nl · 1 day ago

          This is how I found out Google harvests the URLs I visit through Chrome.

          I got Google bots trying to crawl deep links into a domain that I hadn't published anywhere.

          • zod000@lemmy.ml · 1 day ago (edited)

            This is true, and it's why I annoyingly have to keep a robots.txt on my unpublished domains. Google does honor it for the most part, for now.
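
            For anyone wondering, that's the two-line robots.txt that asks well-behaved crawlers to stay out entirely:

            ```
            # robots.txt at the domain root
            User-agent: *
            Disallow: /
            ```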

    • darkknight@discuss.online · 2 days ago

      I was thinking of setting this up recently after seeing it on Jim's Garage. Do you use it for all your external services or just Jellyfin? How does it compare to a fairly robust WAF like BunkerWeb?

      • sludge@lemmy.ml · 2 days ago

        I use it for all of my external services. It's just WireGuard and Traefik under the hood. I have no familiarity with BunkerWeb, but Pangolin integrates with CrowdSec. Specifically, it ships with the Traefik bouncer out of the box, but it's relatively straightforward to add the CrowdSec firewall bouncer on the host machine, which I have found adequate for my needs.

  • jagged_circle@feddit.nl · 2 days ago

    I have another site on a different port that sits behind basic auth; logging in there adds your IP to a short-lived ipset whitelist.

    So first I auth into that site with basic auth, then I load Jellyfin on the other port.
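
    The firewall half of that can be done with an ipset whose entries expire; a sketch with placeholder names and Jellyfin's default port:

    ```
    # whitelist set whose entries drop off after an hour
    ipset create jellyfin_allow hash:ip timeout 3600

    # only whitelisted IPs may reach the Jellyfin port; everyone else is dropped
    iptables -A INPUT -p tcp --dport 8096 -m set --match-set jellyfin_allow src -j ACCEPT
    iptables -A INPUT -p tcp --dport 8096 -j DROP

    # the basic-auth site then whitelists the caller on successful login, e.g.:
    # ipset add jellyfin_allow <client-ip>
    ```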

  • Kusimulkku@lemm.ee · 1 day ago

    I've put it behind WireGuard, since only my wife and I use it. Otherwise I'd just use Caddy or another such reverse proxy that does HTTPS, and keep Jellyfin and Caddy up to date.
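
    The Caddy part really is that small, since it provisions certificates itself (placeholder domain):

    ```
    # Caddyfile: automatic HTTPS in front of Jellyfin
    jellyfin.example.com {
        reverse_proxy 127.0.0.1:8096
    }
    ```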

  • geography082@lemm.ee · 2 days ago

    My setup is: Proxmox, with a restricted LXC running Docker, which runs Jellyfin, and Tailscale funnel as reverse proxy and certificate provider. So I don't worry much about Jellyfin security: it can get hacked or broken, it's a dead end, and if that happens I'll delete the LXC and bring it up again from backups. I also don't think anyone will spend the time to hack a Jellyfin server. My strategy for web services without critical personal data is to isolate each in its own instance and rely on nothing but the firewall for security. I try not to run services holding sensitive personal data, and when I do, they stay on my local LAN with the needed protections. If I need access outside my local LAN: VPN.
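
    For reference, a funnel like that boils down to one command inside the LXC once the feature is enabled for the tailnet (8096 being Jellyfin's default HTTP port):

    ```
    # publish local port 8096 to the internet through Tailscale's TLS-terminating proxy
    tailscale funnel --bg 8096
    ```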