So, I'm self-hosting Immich. The issue is that we tend to take a lot of pictures of the same scene/thing so we can later pick the best one, and we can end up with 5~10 photos that are basically duplicates, but not quite.
Some duplicate-finding programs rate those images at 95% or higher similarity.

I'm wondering if there's any way, probably at the file system level, for these near-duplicate images to be compressed together.
Maybe deduplication?
Have any of you guys handled a similar situation?

  • Decronym@lemmy.decronym.xyz (bot) · 4 months ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters    More Letters
    Git              Popular version control system, primarily for code
    NAS              Network-Attached Storage
    ZFS              Solaris/Linux filesystem focusing on data integrity

    3 acronyms in this thread; the most compressed thread commented on today has 9 acronyms.

    [Thread #953 for this sub, first seen 5th Sep 2024, 23:05]

  • Nibodhika@lemmy.world · 4 months ago

    This will be almost impossible. The short answer is that those pictures might be 95% similar but their binary data might be 100% different.

    Long answer:

    Images are essentially a long list of pixels, where each pixel is three numbers for Red, Green and Blue (and optionally Alpha if you're dealing with transparency, but since you're talking about photos I'll ignore that). Storing that raw list is simple but very wasteful, because an image is very likely to use the same color in many places, so instead you can list all of the colors the image uses and then represent each pixel as an index into that list, which makes the image take up a LOT less space.

    Some formats go further: because your eye can't see the difference between two very close colors, they merge all similar colors into a single one, making the list of colors used in the image WAY smaller and the whole image a LOT more compressed (but notice that we lost information in this step).

    Because of this, one image might end up with color X in position Y while the other ends up with color Z in position W. Their binaries are now completely different, but an image comparison tool can still tell you that X and Z are similar enough to count as the same, and how much of the image they cover. Outside of image software, though, nothing knows that these two completely different binary files show the same picture.

    If you hadn't already lost data by compressing the images, you could in theory use data from one image to help compress another (though the results wouldn't be great, since even uncompressed shots aren't as similar as you'd think). But images can be compressed a LOT more by throwing away unimportant data, so the trade-off isn't worth it, which is why JPEG is so ubiquitous nowadays.
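    To make that concrete, here's a minimal sketch (assuming Python with Pillow and NumPy installed; the two file names are hypothetical burst shots) comparing how alike two photos are at the byte level versus the pixel level:

    ```python
    from PIL import Image
    import numpy as np

    def byte_overlap(path_a, path_b):
        """Fraction of byte positions that are identical in the two files."""
        a = open(path_a, "rb").read()
        b = open(path_b, "rb").read()
        n = min(len(a), len(b))
        return sum(x == y for x, y in zip(a[:n], b[:n])) / n

    def pixel_similarity(path_a, path_b):
        """Rough perceptual similarity: 1 minus the mean absolute pixel difference."""
        img_a = Image.open(path_a).convert("RGB")
        img_b = Image.open(path_b).convert("RGB").resize(img_a.size)
        a = np.asarray(img_a, dtype=np.int16)
        b = np.asarray(img_b, dtype=np.int16)
        return 1 - np.abs(a - b).mean() / 255

    # Expect pixel similarity near 1.0 while byte overlap is tiny.
    print("byte overlap:    ", byte_overlap("shot_a.jpg", "shot_b.jpg"))
    print("pixel similarity:", pixel_similarity("shot_a.jpg", "shot_b.jpg"))
    ```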

    All of that said, a compression algorithm specifically designed for images could take advantage of this, but no general-purpose compressor can, and it's unlikely anyone has gone to the trouble of building one for this specific case: when each image is already compressed, there's little to be gained by writing something that takes colors from multiple images into account, decides whether an image is similar enough to be bundled into a given group, and so on. It's an interesting question, and I wouldn't be surprised if Google has such an algorithm for storing bursts of photos it already knows are sequential. But for a home NAS I think it's unlikely you'll find something.

    Besides all of this, storage is cheap; just buy an extra disk and move some files over there. That's likely to be your best way forward anyway.

    • smpl@discuss.tchncs.de · 4 months ago

      The first thing I would do when writing such a paper would be to test current compression algorithms by creating a collage of the similar images and seeing how its size compares to the sizes of the individual images.
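      A rough sketch of that probe (assuming Python with Pillow installed; the file names are hypothetical burst shots): paste the images into one sheet, re-encode it, and compare against the summed individual sizes.

      ```python
      import os
      from PIL import Image

      files = ["burst_1.jpg", "burst_2.jpg", "burst_3.jpg"]  # hypothetical burst shots
      images = [Image.open(f).convert("RGB") for f in files]

      # Tile the shots side by side into a single "collage" image.
      w = max(im.width for im in images)
      h = max(im.height for im in images)
      collage = Image.new("RGB", (w * len(images), h))
      for i, im in enumerate(images):
          collage.paste(im, (i * w, 0))

      collage.save("collage.png")              # lossless re-encode
      collage.save("collage.jpg", quality=90)  # lossy re-encode

      print("individual total:", sum(os.path.getsize(f) for f in files))
      print("collage.png     :", os.path.getsize("collage.png"))
      print("collage.jpg     :", os.path.getsize("collage.jpg"))
      ```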

      • simplymath@lemmy.world · 4 months ago

        Compressed length is already known to be a powerful metric for classification tasks, but requires polynomial time to do the classification. As much as I hate to admit it, you’re better off using a neural network because they work in linear time, or figuring out how to apply the kernel trick to the metric outlined in this paper.

        A formal paper on using compression length as a measure of similarity: https://arxiv.org/pdf/cs/0111054

        A blog post on the same idea, applied to image classification: https://jakobs.dev/solving-mnist-with-gzip/
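        For reference, a minimal sketch of the compression-distance idea from the linked paper (normalized compression distance), using Python's built-in gzip; the file names are hypothetical:

        ```python
        import gzip

        def clen(data: bytes) -> int:
            """Length of the gzip-compressed data."""
            return len(gzip.compress(data, compresslevel=9))

        def ncd(a: bytes, b: bytes) -> float:
            """Normalized compression distance: near 0 for near-identical inputs, near 1 for unrelated ones."""
            ca, cb, cab = clen(a), clen(b), clen(a + b)
            return (cab - min(ca, cb)) / max(ca, cb)

        x = open("shot_a.jpg", "rb").read()
        y = open("shot_b.jpg", "rb").read()
        print("NCD:", ncd(x, y))
        # Caveat: gzip finds little shared structure in already-compressed JPEG
        # bytes, so this works much better on decoded pixel data than raw files.
        ```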

        • smpl@discuss.tchncs.de · 4 months ago

          I was not talking about classification. What I was talking about was a simple probe of how well a collage of similar images compares in compressed size to the images compressed individually. The hypothesis is that a compression codec would compress images with a similar color distribution better as one sprite sheet than if it encoded each image individually. I don't know, the savings might be negligible, but I'd assume there is something to gain, at least for some codecs. I doubt doing deduplication after compression has much to gain.

          I think you’re overthinking the classification task. These images are very similar and I think comparing the color distribution would be adequate. It would of course be interesting to compare the different methods :)

          • smpl@discuss.tchncs.de · 4 months ago

            Wait… this is exactly the problem a video codec solves. Scoot and give me some sample data!

            • simplymath@lemmy.world · 4 months ago

              Yeah. That’s what an MP4 does, but I was just saying that first you have to figure out which images are “close enough” to encode this way.

    • just_another_person@lemmy.world · 4 months ago

      The problem is that OP is asking for something to automatically make decisions for him. Computers don’t make decisions, they follow instructions.

      If you have 10 similar images and want a script to delete 9 you don’t want, then how would it know what to delete and keep?

      If it doesn’t matter, or if you’ve already chosen the one out of the set you want, just go delete the rest. Easy.

      As far as identifying similar images goes, this is high-school-level programming at best with a CV model. You just run a pass through something like YOLO and have it output a confidence that a set of images is similar. The problem is that you need a source image to compare against: if you're running through thousands of files comprising dozens or hundreds of sets of similar images, each set needs a reference for comparison.
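      As a sketch of that kind of pairwise similarity pass, using perceptual hashes instead of a full CV model (assumes Python with the Pillow and ImageHash libraries, neither of which is mentioned above; file names are hypothetical):

      ```python
      from itertools import combinations
      from PIL import Image
      import imagehash

      files = ["img_001.jpg", "img_002.jpg", "img_003.jpg"]  # hypothetical files
      hashes = {f: imagehash.phash(Image.open(f)) for f in files}

      # Hamming distance between perceptual hashes: small means "probably the same scene".
      for a, b in combinations(files, 2):
          dist = hashes[a] - hashes[b]
          if dist <= 8:  # threshold is a guess and would need tuning on real photos
              print(f"{a} and {b} look similar (distance {dist})")
      ```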

  • NeoNachtwaechter@lemmy.world · 4 months ago

    "we can have 5~10 photos which are basically duplicates"

    "Have any of you guys handled a similar situation?"

    I decide which one is the best and then delete the others. Sometimes I keep 2, but that’s an exception. I do that as early as possible.

    I don't worry about storage space at all (still many TB free), but keeping (near-)duplicates costs valuable time of my life, so I avoid it.

  • just_another_person@lemmy.world · 4 months ago

    Well, how would you know which ones you'd be okay with a program deleting? You're the one taking the pictures.

    Deduplication checking is about files that have exactly the same data payload contents. Filesystems don’t have a concept of images versus other files. They just store data objects.

    • pe1uca@lemmy.pe1uca.dev (OP) · 4 months ago

      I'm not saying to delete anything; I'm saying the file system could save space with something similar to deduping.
      If I understand correctly, deduping works by sharing identical data blocks between files, so there's no actual data loss.
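      For illustration, here's roughly what block-level dedup "sees" (a plain-Python sketch with hypothetical file names): it can only share blocks whose contents hash identically, and two near-duplicate JPEGs usually share almost none.

      ```python
      import hashlib

      def block_hashes(path, block_size=128 * 1024):  # 128 KiB blocks, similar to the default ZFS record size
          """Hash each fixed-size block of the file, the way block-level dedup would."""
          hashes = set()
          with open(path, "rb") as f:
              while block := f.read(block_size):
                  hashes.add(hashlib.sha256(block).hexdigest())
          return hashes

      a = block_hashes("shot_a.jpg")
      b = block_hashes("shot_b.jpg")
      print(f"shared blocks: {len(a & b)} of {len(a)} / {len(b)}")
      ```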

      • Dave.@aussie.zone · 4 months ago

        I don’t think there’s anything commercially available that can do it.

        However, as an experiment, you could:

        • Get a group of photos from a burst shot
        • Encode them as individual frames with a modern video codec, using e.g. VLC.
        • See what kind of file size you get with the resulting video output.
        • See what artifacts are introduced when you play with encoder settings.

        You could probably/eventually script this kind of operation if you have software that can automatically identify and group images.
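        A sketch of what that script might look like, swapping in ffmpeg for VLC because it's easier to drive from code (assumes ffmpeg is installed; the glob pattern and codec settings are just example values):

        ```python
        import glob
        import os
        import subprocess

        frames = sorted(glob.glob("burst_*.jpg"))  # hypothetical burst-shot files

        # Encode the stills as a 1 fps video; x265 does the inter-frame compression.
        subprocess.run(
            ["ffmpeg", "-y", "-framerate", "1", "-pattern_type", "glob",
             "-i", "burst_*.jpg", "-c:v", "libx265", "-crf", "23", "burst.mp4"],
            check=True,
        )

        original = sum(os.path.getsize(f) for f in frames)
        print(f"{original} bytes as JPEGs vs {os.path.getsize('burst.mp4')} bytes as video")
        ```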

      • WhatAmLemmy@lemmy.world · 4 months ago

        I believe this is what some compression algorithms can do if you compress the similar photos into a single archive. It sounds like that's what you want: archive each group (e.g. each day), have Immich cache the thumbnails, and only decompress the archive when you view the full resolution. Maybe test some algorithms like zstd against a group of similar photos vs. compressing them individually?
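        A quick way to run that test (a sketch assuming Python with the zstandard package installed; file names are hypothetical): compress the photos individually and as one concatenated blob, then compare.

        ```python
        import zstandard

        files = ["burst_1.jpg", "burst_2.jpg", "burst_3.jpg"]  # hypothetical burst shots
        data = [open(f, "rb").read() for f in files]
        cctx = zstandard.ZstdCompressor(level=19)

        individually = sum(len(cctx.compress(d)) for d in data)
        together = len(cctx.compress(b"".join(data)))
        print(f"individually: {individually} bytes, as one blob: {together} bytes")
        # For already-compressed JPEGs the two numbers usually come out about the same.
        ```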

        FYI, file system deduplication works on content hashes (of whole files or individual blocks). Only exact 1:1 binary duplicates share the same hash.

        Also, modern image and video encoding algorithms are already about as heavily optimized as computer scientists can currently manage on consumer hardware, which is why compressing a JPG or MP4 further offers negligible savings and sometimes even increases the file size.