My environment is a (freshly installed) Debian server with ZFS pools. I would like to store files in ZFS and share them using Samba.

My question is: which is better from an efficiency, effort, and security (for the host) perspective — running Samba natively on the bare-metal Debian host, running it in an LXC container, or running it in a VM? Why do you think one way is better than the others? I’m pretty familiar with VMs, but I don’t have much experience with, or knowledge of, containers.

This is what I’m thinking at the moment, but I would appreciate any feedback:

  1. Natively: no resource overhead, medium admin overhead (manual Samba configuration), least secure(?)
  2. LXC: small resource overhead, least admin overhead (preconfigured containers and/or reproducible configs), possibly more security than native(?)
  3. VM: most resource overhead, most admin overhead (not only manual configuration, but also managing virtual disks, including snapshots, backups, etc.), most secure
  • MangoPenguin@lemmy.blahaj.zone

    I do LXC, just seems easier since I can mess with things and use Cockpit or whatever to manage it, without worrying about the host system.

  • friend_of_satan@lemmy.world

    Personally I run almost everything in docker, with the launch configs stored in git, backed by zfs. This means that if the host dies I can import that zpool, docker compose up -d and be done with it.

    I suppose the same could be done with VMs or LXC. The main thing is to keep it all separate from the bare metal OS, and in a technology that allows quick provisioning from a launch config of some sort, be it makefile, shell script, docker-compose, or whatever.
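
    The recovery flow described above can be sketched in a few commands. A rough example, assuming a hypothetical pool named tank with the compose projects stored under /tank/services (all names here are illustrative, not from the original comment):

    ```shell
    # On the replacement host: import the existing ZFS pool
    zpool import tank

    # Each service directory holds its compose file (tracked in git)
    # plus its data, so bringing a service back up is one command:
    cd /tank/services/samba
    docker compose up -d
    ```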

    • xapr [he/him]@lemmy.sdf.orgOP

      Thank you. Is easy reproducibility the only reason you run it in containers, or are there other reasons you want that separation from the bare-metal OS?

      • HybridSarcasm@lemmy.world

        Right. You kind of want your bare-metal OS as vanilla as possible. If you need to nuke and pave, you don’t need to worry about re-applying various configs. Additionally, on a theoretical level, if there’s a bug in something on the bare-metal OS, the separation provided by VMs and containers should mean it doesn’t affect the apps in those VMs / containers.

        That seems easier - at least to me - than keeping track of configs in text files or even Ansible playbooks.

        • xapr [he/him]@lemmy.sdf.orgOP

          Thank you, that makes sense. I figure the separation provided by VMs and containers is also a security advantage, in case the software in them has vulnerabilities.

      • friend_of_satan@lemmy.world

        Both, actually, and those things are directly related. If I need to migrate a single thing to another machine, it’s just rsync and make run. Of course this requires the bare metal to have docker and make, so some bare-metal configuration management is also needed.
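
        As a rough sketch of that migration, assuming a hypothetical service directory /tank/services/myapp whose Makefile has a run target that wraps docker compose up -d (all names here are illustrative):

        ```shell
        # Copy the service directory (compose file, Makefile, data) to the new host
        rsync -a /tank/services/myapp/ newhost:/tank/services/myapp/

        # Start it there via the Makefile's run target
        ssh newhost 'cd /tank/services/myapp && make run'
        ```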