Once again I had to handle the same situation: no more free space, and directories full of duplicates.
It's not the first time I've had to do this (last time it was removing duplicate .mp3 files with different names when I merged my various 'Music' directories from different boxes/disks), so I already had a small script waiting in my repos.
But this time I chose to explore another way: instead of removing duplicate files, I tried hardlink substitution.
Using hardlinks is not always applicable, but my perl5/perlbrew directory seemed a good candidate (read-only duplicate data...).
And it was: after running my script on it, the size went from 765M to 670M, and the test suites of the modules I tried still passed on all the Perl versions.
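The technique itself is simple: group files by content digest, then replace each later duplicate with a hardlink to the first file seen. Here is an illustrative Python sketch of that idea (not App::Phlo's actual code; `dedup_hardlink` is a hypothetical name, and a real tool would also stream large files instead of reading them whole, and check that files live on the same filesystem):

```python
import hashlib
import os

def dedup_hardlink(root):
    """Replace duplicate regular files under `root` with hardlinks.

    Files are grouped by SHA-256 digest; the first file seen with a
    given digest becomes the link target for later duplicates.
    """
    seen = {}  # digest -> canonical path
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            # skip symlinks and anything that is not a regular file
            if os.path.islink(path) or not os.path.isfile(path):
                continue
            with open(path, "rb") as fh:
                digest = hashlib.sha256(fh.read()).hexdigest()
            if digest in seen:
                target = seen[digest]
                if os.path.samefile(target, path):
                    continue  # already hardlinked to the canonical copy
                os.unlink(path)        # drop the duplicate's data...
                os.link(target, path)  # ...and point its name at the original
            else:
                seen[digest] = path
```

After a run, duplicate names share one inode, so the data is stored only once; this is also why the approach suits read-only trees best, since writing through one name would change all of them.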
I first thought of releasing the script as a patch for perlbrew, but on reflection I realized there was probably a need for a more generic tool.
That's why App::Phlo was created :-)
Not a killer module, but one that suits my needs, and that will let me test some ideas (multiple digest algorithms, use with unionfs-like filesystems, Perl directory optimization...).
If you want to experiment with me, don't hesitate: ideas, patches, and comments are always welcome...