And it failed spectacularly.

We only needed a simple form, but we wanted to be fancy, so we used “nextcloud forms”.

The Docker image automatically updated the install to Nextcloud 30, but the Forms app requires Nextcloud 29 or lower. No warning whatsoever. It’s an official app; couldn’t they have waited until it was ready for NC 30 before launching the new release? The newsletter boasts that “NC Hub 9 is the best thing since sliced bread”, yet I don’t see any difference in either visuals or performance compared to NC Hub 2.

Conclusion: we made our business rely on Nextcloud Forms as a signup form, but the one app that was the whole reason we were using Nextcloud got disabled who knows how many weeks ago.

  • Scrubbles@poptalk.scrubbles.tech · 1 month ago

    Docker images should never self-update - that’s an anti-pattern. They should be static code. The only time I would expect a Docker image to “auto update” is if I were using the “latest” or “stable” tag and Compose/Kubernetes/I repulled the image - but the image should never update itself.

    Yes, OP bit off more than they could chew. Nextcloud, however, is breaking the entire purpose of Docker images by having an auto-updater at all.
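
    To make that concrete, this is the kind of Compose entry I mean; the tags here are just examples, not a recommendation of a specific version:

    services:
      nextcloud:
        # A floating tag means every repull can jump to whatever is newest,
        # which is how you end up on Nextcloud 30 without asking for it.
        image: nextcloud:latest
        # Pin an explicit release instead if you want major upgrades to be
        # a deliberate step you take yourself, e.g.:
        # image: nextcloud:29-apache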

    • ShortN0te@lemmy.ml · 1 month ago

      What are you talking about? If you do not manually pull the newest image (or have something like Watchtower do it for you), it will not update by itself.

      I have never seen an auto-update feature in Nextcloud itself; can you please link to it?
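
      The closest thing I know of is an external updater like Watchtower sitting next to the stack and repulling images on a schedule - and that lives entirely outside Nextcloud. A rough sketch of that kind of setup, in case anyone is unsure what I mean:

      services:
        watchtower:
          # Watchtower watches the other running containers and recreates them
          # whenever a newer image shows up on the registry - that is the
          # "auto update" behaviour, not something Nextcloud does itself.
          image: containrrr/watchtower
          volumes:
            - /var/run/docker.sock:/var/run/docker.sock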

      • Scrubbles@poptalk.scrubbles.tech · 1 month ago

        I don’t have the link here, but essentially yes, Nextcloud can update its own app code in its image, because you have to mount the code onto your own filesystem. This means that between Docker images you can have a mismatch between the code you have stored and the code the image is expecting, which frequently bites me. This is an antipattern. The code should be stored in the image, not on a volume mount. There should never be a mismatch of code in a Docker image - that’s the whole point. The configuration could be out of date, sure, or if there’s a data file that’s needed, that’s expected. The actual running code, though - that should never be on a mountable volume.

        Next time you update the image you will probably be greeted with a “Nextcloud needs to update” screen. That should not exist. You already pulled the image; that should be everything you need to do. The caveat is extensions - kind of a grey area in my book, but I know it’s not a clean pattern with those either. (The best approach I’ve seen lets you pin the extension version with environment variables or a config file, so once again you are in control of when they update, and no running code is stored outside the image.)
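
        For context, the setup I’m complaining about looks roughly like this (a sketch from memory of the usual compose examples, so treat the exact tag and paths as approximate):

        services:
          nextcloud:
            image: nextcloud:30-apache
            volumes:
              # The whole application root - including the PHP code the image
              # itself ships - lives on this volume, so what's on disk can
              # drift out of sync with what the image expects.
              - nextcloud_html:/var/www/html

        volumes:
          nextcloud_html: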

        • ShortN0te@lemmy.ml · 1 month ago

          You can disable the web updater in the config, which is the default when deploying via Docker. The only time I had a mismatch was when I migrated from a native Debian installation to a Docker one and fucked up some permissions. And that was during tinkering while migrating. It’s been solid for me ever since.

          Again, there is no official Nextcloud auto-updater. OP chose to use an auto-updater, which bricked OP’s setup (a plugin was disabled).

          • Scrubbles@poptalk.scrubbles.tech · 1 month ago

            Thanks, I’ll disable that. I’m extra salty right now because I had to roll back a bad version and rebuild some of my config over the last week. I got into version hell because the code on the volume (which is why I’m pissed at it) said that I couldn’t run the image I had set. So I pinned an earlier version, but then there were extensions pinned to a later one that refused to roll back and didn’t start. I ended up redoing the whole drive manually, forcing specific versions in version.php and config.php to finally make it work (why is it in two places?). Then after all that I had to run the upgrade command. Extremely annoying, and a waste of time for me. Other Docker containers, if I need to pin a version? I just… pin the version. Nextcloud is the only one I’ve seen that stores code on my volume and then pins specific versions to it.

    • GBU_28@lemm.ee · 1 month ago

      If you say

      Thing:latest
      

      and then redeploy your compose file or whatnot,

      well, you’re getting the latest!

        • Possibly linux@lemmy.zip · 1 month ago

          That is a very bad idea. Use the stable tag instead. Better yet, create an Ansible playbook that updates the containers in bulk and then manually run it when you have time.
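
          Something along these lines is all it takes; treat it as a sketch, since the host group and stack paths are just placeholders for your own layout:

          - name: Update containers in bulk
            hosts: docker_hosts
            vars:
              # Paths to the compose projects you want to refresh (placeholders)
              stacks:
                - /opt/stacks/nextcloud
                - /opt/stacks/plex
            tasks:
              - name: Pull the newest images for each compose stack
                ansible.builtin.command:
                  cmd: docker compose pull
                  chdir: "{{ item }}"
                loop: "{{ stacks }}"

              - name: Recreate the containers from the freshly pulled images
                ansible.builtin.command:
                  cmd: docker compose up -d
                  chdir: "{{ item }}"
                loop: "{{ stacks }}"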

          • Scott@sh.itjust.works · edited · 1 month ago

            Naw, I mostly do it for my own personal shit; can’t be fucked to update Plex 3 times a week, and so on with other homelab stuff. Everything production is tagged with GitOps version-managed Kubernetes manifests.

            Edit: should also mention I build quite a bit of the software being deployed