There’s been some Friday night kernel drama on the Linux kernel mailing list: Linus Torvalds has expressed regret over merging the Bcachefs file-system, prompting a back-and-forth with the file-system’s maintainer.

        • wewbull@feddit.uk

          Not under a license that prohibits also licensing under the GPL, i.e. one with no conditions beyond what the GPL specifies.

              • ryannathans@aussie.zone

                There’s no requirement for them to apply to the same file. There are already blobs in the kernel whose source the GPL doesn’t apply to.

                • wewbull@feddit.uk

                  The question was “How do you define GPL compatible?”. The answer to that question has nothing to do with code being split between files. Two licenses are incompatible if they can’t both apply at the same time to the same thing.

                  • ryannathans@aussie.zone

                    The two works can live harmoniously together in the same repo; therefore they’re not incompatible by one definition, and the one that matters.

                    There are already big organisations doing it, and they haven’t had any issues.

        • bastion@feddit.nl

          Do your own research, that’s a pretty well-discussed topic, particularly as concerns ZFS.

          • ryannathans@aussie.zone

            I’m all over ZFS and I’m not aware of any unresolved “licence issues”. The debate is like a decade old at this point.

            • apt_install_coffee@lemmy.ml

              License incompatibility is one big reason OpenZFS is not in-tree for Linux, there is plenty of public discussion about this online.

                • apt_install_coffee@lemmy.ml

                  Yes, but note that neither the Linux Foundation nor OpenZFS is going to put itself at legal risk on the word of a Stack Exchange comment, no matter who it’s from. Even if their legal teams all have no issue, Oracle has a reputation for being litigious, and the fact that they haven’t resolved the issue once and for all, despite being able to, suggests they’re keeping the possibility of litigation in their back pocket (regardless of whether such a case would have merit).

                  Canonical has said they don’t think there is an issue and put their money where their mouth is, but they are one of very few to do so.

                  • ryannathans@aussie.zone

                    Keen to see how Canonical goes. There are another one or two distros doing the same. Maybe everyone will wake up and realise they have been fighting over nothing.

    • Max-P@lemmy.max-p.me

      ZFS doesn’t support tiered storage at all. Bcachefs can promote and demote files between faster-but-smaller and slower-but-larger storage; it’s not just a cache. On ZFS the only real option is multiple zpools. You can sort of approximate it with the persistent L2ARC now, but TBs of L2ARC is super wasteful, and your data still has to fully fit the pool.

      Tiered storage is great for VMs and games and other large files. Play a game, promote to NVMe for fast loadings. Done playing, it gets moved to the HDDs.
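      As a sketch of that “closest you can get” setup on ZFS (the pool name `tank` and device paths are placeholders, not from the thread):

      ```shell
      # Add an NVMe device as an L2ARC cache vdev. It only caches reads;
      # it never adds usable capacity to the pool.
      zpool add tank cache /dev/nvme0n1

      # Since OpenZFS 2.0 the L2ARC contents can survive reboots
      # ("persistent L2ARC"), controlled by a module parameter:
      echo 1 > /sys/module/zfs/parameters/l2arc_rebuild_enabled
      ```

      Note the asymmetry with real tiering: the pool’s usable space is still only the backing drives, and the NVMe holds cached copies rather than the data itself.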

      • ryannathans@aussie.zone

        You’re misrepresenting L2ARC, and it’s a silly comparison to claim you’d need TBs of L2ARC while also saying you’d copy the game to NVMe just to play it on bcachefs. That’s exactly what the ARC does: RAM and SSD caching of the data in use, with tiered heuristics.

        • Max-P@lemmy.max-p.me

          I know; that was an example of why it doesn’t work on ZFS. That would be the closest you can get with regular ZFS, and as we both pointed out, it makes no sense. The L2ARC is a cache; you can’t store files in it.

          The whole point of bcachefs is tiering. You can give it a 4 TB NVMe, a 4 TB SATA SSD and an 8 TB HDD and get almost the whole 16 TB of usable space in one big filesystem. It’ll shuffle the files around for you to keep the hot data set on the fastest drive. You can pin the data to the storage medium that matches the performance needs of the workload. The roadmap claims they want to analyze usage patterns and automatically store the files on the slowest drive that doesn’t bottleneck the workload. The point is, unlike regular bcache or the ZFS ARC, it’s not just a cache, it’s also storage space available to the user.

          You wouldn’t copy the game to another drive yourself directly. You’d request the filesystem to promote it to the fast drive. It’s all the same filesystem, completely transparent.
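          A minimal sketch of such a tiered setup with bcachefs-tools (device paths and the mount point are hypothetical):

          ```shell
          # One filesystem across three tiers: writes land on the NVMe
          # (foreground), cold data migrates to the HDD (background),
          # and hot reads get promoted back to the NVMe.
          bcachefs format \
              --label=nvme.nvme1 /dev/nvme0n1 \
              --label=ssd.ssd1   /dev/sdb \
              --label=hdd.hdd1   /dev/sdc \
              --foreground_target=nvme \
              --promote_target=nvme \
              --background_target=hdd

          mount -t bcachefs /dev/nvme0n1:/dev/sdb:/dev/sdc /mnt/pool
          ```

          The same target options can reportedly be applied per file or directory to pin data to a tier; check the bcachefs documentation for the exact interface.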

            • apt_install_coffee@lemmy.ml

              Brand new anything will not show up with amazing performance, because the primary focus is correctness, with features secondary.

              Premature optimisation could kill a project’s maintainability; wait a few years. Even then, despite Kent’s optimism I’m not certain we’ll see performance beating a good non-CoW filesystem; XFS and ext4 have been eking out performance gains for many years.

                • apt_install_coffee@lemmy.ml

                  A rather overly simplistic view of filesystem design.

                  More complex data structures are harder to optimise for pretty much all operations, but I’d suggest the most important metric for performance is actually development time.

                  • ryannathans@aussie.zone

                    At the end of the day, the performance of a performance-oriented filesystem matters. Without performance, it’s just complexity.