Background: 15 years of experience in software, and apparently spoiled because everything was always set up correctly for me.

I’ve been practicing running my own servers. I published a test site, and 24 hours later root was compromised.

I rolled back to the backup from before I made it public, and now I have a security checklist.

  • otacon239@lemmy.world · 8 days ago

    I’ve always felt that if you’re exposing SSH or any other management port to the internet, you can avoid a lot of issues with a VPN, so I’ve always set one up. It means you barely have to open anything; beyond that, you only open configured web-portal ports and the occasional front-end protocol where needed.

    • FauxLiving@lemmy.world · 7 days ago

      Exactly.

      All of my services are ‘local’ to the VPN. Nothing happens on the LAN except for DHCP and WireGuard traffic.

      Remote access is as simple as pressing the WireGuard button.
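
      For reference, a minimal server-side wg0.conf for this kind of setup looks roughly like the following. The subnet, port, and placeholder keys are illustrative assumptions, not prescriptions:

      ```
      # /etc/wireguard/wg0.conf — server side
      [Interface]
      PrivateKey = <server-private-key>   # generate with `wg genkey`
      Address    = 10.8.0.1/24            # VPN-internal address
      ListenPort = 51820                  # the ONLY port forwarded on the router

      [Peer]
      PublicKey  = <client-public-key>
      AllowedIPs = 10.8.0.2/32            # one /32 per client device
      ```

      Everything else (web UIs, ssh, management ports) binds to the 10.8.0.0/24 side and is simply unreachable from the WAN.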

  • recklessengagement@lemmy.world · 8 days ago

    This sounds like something everyone should go through at least once, to underscore the importance of hardening, which is easily taken for granted.

  • Punkie@lemmy.world · 8 days ago

    Basic setup for me is scripted on a new system. With regard to ssh, I make sure:

    • Root account is disabled, sudo only
    • ssh only by keys
    • sshd blocks all users but a few, via AllowUsers
    • All ‘default usernames’ are removed, like ec2-user or ubuntu on AWS EC2 systems
    • The default ssh port moved if ssh has to be exposed to the Internet. No, this doesn’t make it “more secure” but damn, it reduces the script denials in my system logs, fight me.
    • Services are only allowed connections by an allow list of IPs or subnets. Internal, when possible.
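
    The sshd side of that checklist boils down to a handful of sshd_config directives. A sketch — the usernames and port number are examples, not recommendations:

    ```
    # /etc/ssh/sshd_config (fragment)
    Port 2222                      # non-standard port: less log noise, not more security
    PermitRootLogin no             # root never logs in over ssh; sudo only
    PasswordAuthentication no      # keys only
    PubkeyAuthentication yes
    AllowUsers alice bob           # everyone else is refused outright
    ```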

    My systems aren’t “unhackable,” but they’re not low-hanging fruit either. I assume everything I have out there can be hacked by someone SUPER determined, and I keep layers of protection to mitigate the fallout in case they gain full access.

    • feddylemmy@lemmy.world · 8 days ago
      > The default ssh port moved if ssh has to be exposed to the Internet. No, this doesn’t make it “more secure” but damn, it reduces the script denials in my system logs, fight me.

      Gosh, I get unreasonably frustrated when someone says “yeah, but that’s just security through obscurity.” Like, yeah, we all know what nmap is; a persistent threat will just scan all 65,535 ports and figure out where ssh is listening. But if you change your threat model and talk about bots? Logs are much cleaner, and moving ports gets rid of a lot of traffic. Obviously so does enabling keys only.

      Also does anyone still port knock these days?
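
      For anyone who does still knock: a classic knockd setup gates the ssh port behind a secret TCP SYN sequence. A sketch, assuming an iptables-based firewall — the sequence, port, and rule below are illustrative:

      ```
      # /etc/knockd.conf
      [options]
          logfile = /var/log/knockd.log

      [openSSH]
          sequence    = 7000,8000,9000    # the secret knock, in order
          seq_timeout = 5
          tcpflags    = syn
          command     = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 2222 -j ACCEPT
      ```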

      • josefo@leminal.space · 8 days ago

        Literally the only time I got somewhat hacked was when I left a service on its default port. Obscurity is reasonable; combined with the other measures mentioned here, it makes you pretty much invulnerable to casual attackers. Somebody has to specifically target you to get anywhere.

      • kernelle@0d.gs · 8 days ago

        > Also does anyone still port knock these days?

        Enter Masscan, probably a net negative for the internet, so use with care.
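
        For scale: as a discovery tool, a single invocation can sweep an enormous range for ssh asynchronously. A sketch — the range and rate are purely illustrative, and scanning networks you aren’t authorized to probe is hostile:

        ```
        # log every host answering on port 22 across a /8
        masscan -p22 10.0.0.0/8 --rate 10000 -oL ssh-hosts.txt
        ```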

        • davidgro@lemmy.world · 8 days ago

          I didn’t see anything about port knocking there; it rather looks like it has the opposite focus. A quote from that page: “features that support widespread scanning of many machines are supported, while in-depth scanning of single machines aren’t.”

          • kernelle@0d.gs · 8 days ago

            Sure, yeah, it’s a discovery tool out of the box, but I’ve used it to send specific packet sequences as well.

  • mlg@lemmy.world · 8 days ago

    Lol, you can actually demo a GitHub compromise in real time to an audience.

    Make a repo containing an API key, publish it, and literally just watch: it takes only a few minutes before a script logs in with it.
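
    The bots pull this off with simple pattern matching over public commits. A hypothetical sketch of the matching step — the regex targets the well-known `AKIA` AWS access-key-ID prefix, and the sample key is Amazon’s documented dummy value, not a real credential:

```python
import re

# AWS access key IDs are "AKIA" followed by 16 uppercase alphanumerics.
AWS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_leaked_keys(text: str) -> list[str]:
    """Return anything in `text` that looks like an AWS access key ID."""
    return AWS_KEY_RE.findall(text)

# Amazon's documented example key -- a bot would try it within minutes.
diff = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"'
print(find_leaked_keys(diff))  # ['AKIAIOSFODNN7EXAMPLE']
```

    Real scrapers run dozens of such patterns (GitHub tokens, Slack webhooks, private key headers) against every public push.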

  • phx@lemmy.ca · 8 days ago

    Had this happen years ago, except it was a dumbass contractor where I worked who left a Windows server exposed to the Internet with FTP services and, IIRC, anonymous FTP enabled, on a Friday.

    When I came in on Monday it had become a repository for warez, malware, and questionable porn. We wiped it rather than trying to recover anything.

    • DefederateLemmyMl@feddit.nl · 8 days ago

      > Do not allow username/password login for ssh

      This is disabled by default for the root user.

      $ man sshd_config
      ...
             PermitRootLogin
                     Specifies whether root can log in using ssh(1). The argument must be yes,
                     prohibit-password, forced-commands-only, or no. The default is prohibit-password.
      ...
      
      
    • LordCrom@lemmy.world · 8 days ago

      If it’s public facing, how about not opening ssh to the public at all: open it only to select IPs or ranges. Use a non-standard port; use certificate auth, or even RADIUS with TOTP via something like privacyIDEA. Add a port knocker to open the non-standard port, and an auto-ban tool (e.g. fail2ban) to lock out source IPs.

      That’s just off the top of my head.

      There’s a lot you can do to harden a host.
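
      Several of those measures are one-liners on a ufw-based host. A sketch — the subnet and port are placeholders, and the commands assume root:

      ```
      # default-deny, then allow ssh only from a trusted range, on a moved port
      ufw default deny incoming
      ufw allow from 203.0.113.0/24 to any port 2222 proto tcp
      # rate-limit as a cheap auto-ban: 6+ connections in 30s gets blocked
      ufw limit 2222/tcp
      ufw enable
      ```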

      • Faresh@lemmy.ml · 7 days ago

        > dont turn on ssh to the public, open it to select ips or ranges

        What if you don’t have a static IP? Do you ask your ISP what range their public addresses fall in?

  • ikidd@lemmy.world · 8 days ago

    This is like browsing /c/selfhosted, where everyone port-forwards every experimental piece of garbage across their router…

    • MonkeMischief@lemmy.today · 8 days ago

      > portforwards every experimental piece of garbage across their router…

      Man some of those “It’s so E-Z bro” YouTubers are WAY too cavalier about doing this.

    • smiletolerantly@awful.systems · 8 days ago

      Meh. Each service in its own isolated VM and subnet, plus a generally good firewall setup. Currently hosting ~10 services publicly; never had any issue.

      • ikidd@lemmy.world · 8 days ago

        Well, if you actually do that, bully for you, that’s how that should be done if you have to expose services.

        Everyone else there is probably DMZing their desktop from what I can tell.

    • InputZero@lemmy.world · 8 days ago

      Yeah the only thing forwarded past my router is my VPN. Assuming I did my job decently, without a valid private key it should be pretty difficult to compromise.

  • kibiz0r@midwest.social · 8 days ago

    One time, I didn’t realize I had allowed all users to log in via ssh, and I had a user “steam” whose password was just “steam”.

    “Hey, why is this Valheim server running like shit?”

    “Wtf is xrx?”

    “Oh, it looks like it’s mining crypto. Cool. Welp, gotta nuke this whole box now.”

    So anyway, now I use NixOS.
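
    The NixOS angle: declaring sshd policy in the system config means a stray game-server package can’t quietly widen it. A sketch using the `services.openssh.settings` options available in recent NixOS releases (an assumption to verify against your channel):

    ```
    # configuration.nix (fragment)
    services.openssh = {
      enable = true;
      settings = {
        PasswordAuthentication = false;   # "steam"/"steam" can never log in again
        PermitRootLogin = "no";
      };
    };
    ```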

    • pageflight@lemmy.world · 8 days ago

      Good point about a default deny approach to users and ssh, so random services don’t add insecure logins.

  • potentiallynotfelix@lemmy.fish · 7 days ago

    Weird. My last setup had a NAT with a few VMs hosting a few different services: for example, Jellyfin, a web server, and noVNC. That turned out perfectly fine, and it was exposed to the web. You must have had a vulnerable version of whatever web host you were using, or maybe you had SSH open without rate limits.

  • frezik@midwest.social · 7 days ago

    I’m having the opposite problem right now. Tightened a VM down so hard that now I can’t get into it.

  • Fedegenerate@lemmynsfw.com · 7 days ago

    I don’t think I’m ever opening up anything to the internet. It’s scary out there.

    I don’t trust my competence, and if I did, I don’t trust my attention to detail. That’s why I outsource my security: Pi-hole + Firebog for blocklists, my ISP for the firewall, and Tailscale for tunnels. I’m not claiming any of them are the best, but they’re all better than me.

      • Fedegenerate@lemmynsfw.com · edited · 6 days ago

        You overestimate my competence. I do intend to leave my ISP firewall up and intact, but I could build layers behind it.

        I run everything on a mini PC (Beelink EQ12), which I intend to age into a network box (router, DNS, firewall) when I outgrow it as a server. That’ll be a couple of years and a few more users away, though.

  • Fair Fairy@thelemmy.club · 8 days ago

    I’m confused. I never disable the root user, and I’ve never been hacked.

    Is the issue that the app is coded in a shitty way, maybe?

    • cley_faye@lemmy.world · 7 days ago

      You can’t really disable it anyway.

      Hardening mostly means preventing root login from outside (in case every other layer of authentication and access control breaks), not letting regular users su/sudo into it for free, and keeping a tight grip on anything executable with the setuid bit set. I haven’t installed a system from scratch in a long time, but I believe this is the default on most things not geared toward end-user devices, too.
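
      Concretely, that maps to a couple of standard commands (illustrative; run as root):

      ```
      # lock root's password: no direct password login, while su/sudo
      # for authorized users keeps working
      passwd -l root

      # audit everything with the setuid bit set, per filesystem
      find / -xdev -perm -4000 -type f 2>/dev/null
      ```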

    • Xanza@lemm.ee · 7 days ago

      You can’t really disable the root user. You can make it so it can’t log in remotely, which is highly suggested.

      • MehBlah@lemmy.world · 7 days ago

        Another thing you can do under certain circumstances, which I’m sure someone on here will point out is deprecated, is use TCP wrappers. If you only connect to ssh from known IP addresses or ranges, you can effectively block the rest of the world from reaching you. I used a combination of ipset lists, fail2ban, and TCP wrappers along with my firewall, which is likewise something old: iptables-persistent. I’ve also moved my ssh port up high and created several fake ports that keep anyone port-scanning my IP guessing.
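
        For the record, the TCP-wrappers part of that is just two lines in the hosts-access files, assuming an sshd built with libwrap support (the subnet is a placeholder, in the net/mask form from hosts_access(5)):

        ```
        # /etc/hosts.deny — refuse sshd to everyone...
        sshd: ALL

        # /etc/hosts.allow — ...except known ranges (checked first)
        sshd: 203.0.113.0/255.255.255.0
        ```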

        These days I have all ports closed except for my WireGuard port, and I access all of my hosted services through it.

        • Xanza@lemm.ee · 7 days ago

          There’s no real advantage to disabling the root user, and I really don’t recommend it. You can disable SSH root login, and as long as you ensure root has a secure password that’s different from your own account’s, your system is just as safe, with the added advantage of still having the root account in case something happens.

          • Possibly linux@lemmy.zip · 7 days ago

            That wouldn’t be defense in depth. You want to disable anything that isn’t necessary, as it can become an avenue of attack. There’s no reason root should be enabled.

            • Xanza@lemm.ee · 7 days ago

              Why do like, houses have doors man. You gotta eliminate all points of egress for security, maaaan. /s

              There’s no particular reason to disable root, and with a hardened system, it’s not even a problem you need to worry about…

            • Faresh@lemmy.ml · 7 days ago

              I don’t understand. You still need to do administrative tasks once in a while, so it isn’t really unnecessary, and if root can’t be logged in, you’ll have to use sudo instead, which could be an attack vector just as su is.