So, I’m self-hosting Immich. The issue is that we tend to take a lot of pictures of the same scene or thing so we can later pick the best one, and, well, we can end up with 5–10 photos that are basically duplicates, but not quite.
Some duplicate-finding programs rate those images at 95% or higher similarity.

I’m wondering if there’s any way, probably at the filesystem level, for these images to be compressed together.
Maybe deduplication?
Have any of you guys handled a similar situation?

  • @cizra@lemm.ee

    Cool idea. If this doesn’t exist, and it probably doesn’t, it sounds like a worthy project to get one’s MSc or perhaps even PhD.

    • @just_another_person@lemmy.world

      The problem is that OP is asking for something to automatically make decisions for him. Computers don’t make decisions; they follow instructions.

      If you have 10 similar images and want a script to delete the 9 you don’t want, how would it know which to delete and which to keep?

      If it doesn’t matter, or if you’ve already chosen the one out of the set you want, just go delete the rest. Easy.

      As far as identifying similar images goes, this is high-school-level programming at best with a CV model. You just run a pass through something like YOLO or whatever and have it output confidence scores for how similar a set of images is. The problem is that you need a source image to compare against. If you’re running through thousands of files comprising dozens or hundreds of sets of similar images, you need a source for each comparison.
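
      As a rough illustration of the grouping step, here is a minimal sketch that uses perceptual hashing (the Pillow and imagehash Python libraries) rather than a full CV model; the photos/ folder and the Hamming-distance threshold of 5 are assumptions picked for the example, not tuned values:

      ```python
      # Sketch: group near-duplicate photos by perceptual-hash distance.
      # Assumes Pillow and imagehash are installed; the "photos/" folder and
      # the threshold of 5 are illustrative only.
      from pathlib import Path

      import imagehash
      from PIL import Image

      THRESHOLD = 5  # max Hamming distance to call two images "near duplicates"

      hashes = {p: imagehash.phash(Image.open(p)) for p in Path("photos").glob("*.jpg")}

      groups = []
      for path, h in hashes.items():
          for group in groups:
              # imagehash overloads '-' as the Hamming distance between hashes
              if h - hashes[group[0]] <= THRESHOLD:
                  group.append(path)
                  break
          else:
              groups.append([path])

      for group in groups:
          if len(group) > 1:
              print("near-duplicate set:", [p.name for p in group])
      ```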

    • smpl

      The first thing I would do when writing such a paper would be to test current compression algorithms by creating a collage of the similar images and seeing how its size compares to that of the individual images.
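
      Something like the following would be a quick way to run that probe. It is only a sketch: the similar_set/ folder, the WebP format, and quality=80 are arbitrary choices for illustration.

      ```python
      # Quick probe: compare the total size of individually compressed images
      # against one "collage"/sprite sheet containing all of them.
      # Assumes Pillow; folder, WebP, and quality=80 are arbitrary choices.
      import io
      from pathlib import Path

      from PIL import Image

      def webp_size(img: Image.Image) -> int:
          buf = io.BytesIO()
          img.save(buf, format="WEBP", quality=80)
          return buf.tell()

      images = [Image.open(p) for p in sorted(Path("similar_set").glob("*.jpg"))]
      individual_total = sum(webp_size(img) for img in images)

      # Lay the images out side by side in a single sheet.
      sheet = Image.new("RGB", (sum(i.width for i in images), max(i.height for i in images)))
      x = 0
      for img in images:
          sheet.paste(img, (x, 0))
          x += img.width

      print(f"individual: {individual_total} B, collage: {webp_size(sheet)} B")
      ```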

      • @simplymath@lemmy.world

        Compressed length is already known to be a powerful metric for classification tasks, but it requires polynomial time to do the classification. As much as I hate to admit it, you’re better off using a neural network, since they run in linear time, or figuring out how to apply the kernel trick to the metric outlined in this paper.

        a formal paper on using compression length as a measure of similarity: https://arxiv.org/pdf/cs/0111054

        a blog post on this topic, applied to image classification:

        https://jakobs.dev/solving-mnist-with-gzip/
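
        For reference, the metric from that paper (normalized compression distance) is simple to sketch. This version uses gzip and two placeholder file names; a real experiment would probably work on decoded pixel data rather than already-compressed JPEG bytes:

        ```python
        # Minimal sketch of normalized compression distance (NCD) with gzip.
        # File names are placeholders; gzip over already-compressed JPEG bytes
        # is weak here, so decoded pixel data would likely work better.
        import gzip

        def clen(data: bytes) -> int:
            return len(gzip.compress(data))

        def ncd(x: bytes, y: bytes) -> float:
            cx, cy, cxy = clen(x), clen(y), clen(x + y)
            return (cxy - min(cx, cy)) / max(cx, cy)

        a = open("IMG_0001.jpg", "rb").read()
        b = open("IMG_0002.jpg", "rb").read()
        print(ncd(a, b))  # closer to 0 means "more similar" under this compressor
        ```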

        • smpl

          I was not talking about classification. What I was talking about was a simple probe of how well a collage of similar images compares in compressed size to the images compressed individually. The hypothesis is that a compression codec would compress images with a similar color distribution better as a sprite sheet than if it encoded each image individually. I don’t know, the savings might be negligible, but I’d assume there is something to gain, at least for some compression codecs. I doubt doing deduplication after compression has much to gain.

          I think you’re overthinking the classification task. These images are very similar, and I think comparing the color distributions would be adequate. It would of course be interesting to compare the different methods :)
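
          A sketch of the color-distribution idea, assuming Pillow; the file names and the 0.9 cutoff are made up for illustration:

          ```python
          # Sketch: compare two images by the intersection of their normalized
          # RGB histograms. File names and the 0.9 cutoff are illustrative only.
          from PIL import Image

          def color_hist(path: str) -> list[float]:
              img = Image.open(path).convert("RGB").resize((256, 256))
              hist = img.histogram()  # 256 bins per channel, concatenated
              total = sum(hist)
              return [v / total for v in hist]

          def similarity(h1: list[float], h2: list[float]) -> float:
              # Histogram intersection: 1.0 means identical distributions.
              return sum(min(a, b) for a, b in zip(h1, h2))

          score = similarity(color_hist("IMG_0001.jpg"), color_hist("IMG_0002.jpg"))
          print(score, score > 0.9)
          ```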

          • @simplymath@lemmy.world

            Yeah, I understand. But first you have to cluster your images so you know which ones are similar, and then you can do the deduplication. This would be a powerful way to do that; it’s just expensive compared to other clustering algorithms.

          • smpl

            Wait… this is exactly the problem a video codec solves. Scoot and give me some sample data!

            • @simplymath@lemmy.world

              Yeah. That’s what an MP4 does, but I was just saying that first you have to figure out which images are “close enough” to encode this way.
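
              For what it’s worth, the experiment is easy to sketch once a set has been chosen. This assumes ffmpeg is on the PATH, and the folder name, frame rate, codec, and CRF value are arbitrary choices (and the encode is lossy):

              ```python
              # Sketch: encode a burst of similar photos as a short video so
              # inter-frame prediction can exploit the shared content.
              # Assumes ffmpeg on PATH; folder, codec, and CRF are arbitrary.
              import subprocess
              from pathlib import Path

              frames = sorted(Path("similar_set").glob("*.jpg"))
              jpeg_total = sum(p.stat().st_size for p in frames)

              subprocess.run(
                  ["ffmpeg", "-y", "-framerate", "1",
                   "-pattern_type", "glob", "-i", "similar_set/*.jpg",
                   "-c:v", "libx265", "-crf", "20", "burst.mp4"],
                  check=True,
              )

              video_size = Path("burst.mp4").stat().st_size
              print(f"JPEGs: {jpeg_total} B, video: {video_size} B")
              ```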

              • smpl

                It seems that we’re focusing our interest on two different parts of the problem.

                Finding the optimal way to classify which images are best compressed in bulk is an interesting problem in itself. In this particular case, the person asking had already picked out the similar images by hand, and they can be identified by their timestamps, which narrows down the similarity comparison. What I wanted to find out was how well the similar images can be compressed with various methods and codecs with minimal loss of quality. My goal was not to use it as a method of classifying the images; it was simply to examine how well the compression stage would work with various methods.