Daniel Vrátil authored
The compression is completely transparent to clients, serializers and Akonadi. The idea is that when serializing a payload, we can compress the serialized data using LZMA compression to save space. The data are usually large enough to benefit from compression, and at the same time small enough for the compression not to cause any significant performance overhead. In my local experiment, compressing each file in file_db_data reduced the overall size by ~30%.

The only place where the compression "leaks" to the user is the Item and part sizes stored in Akonadi. The change is backwards compatible, so it handles uncompressed payloads created before this change just fine. All newly created or updated payloads will be compressed automatically. Eventually a StorageJanitor task to compress the entire storage may be introduced, but that will need proper progress reporting, since compressing all the files in file_db_data may take a long time even on a fast SSD (depending on the size of the database).
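The scheme described above can be sketched roughly as follows. This is a hypothetical Python illustration of the idea, not the actual Akonadi C++ implementation: payloads are LZMA/XZ-compressed on write, and on read the XZ magic bytes distinguish new compressed blobs from legacy uncompressed ones, which is what makes the change backwards compatible.

```python
import lzma

# Magic bytes at the start of every .xz container; their presence tells us
# whether a stored payload blob is compressed or a pre-change legacy blob.
XZ_MAGIC = b"\xfd7zXZ\x00"

def store_payload(data: bytes) -> bytes:
    """Compress a serialized payload before writing it to file_db_data."""
    return lzma.compress(data)

def load_payload(stored: bytes) -> bytes:
    """Decompress if the blob is XZ-compressed; pass legacy blobs through."""
    if stored.startswith(XZ_MAGIC):
        return lzma.decompress(stored)
    return stored  # uncompressed payload created before this change

# Round-trip a payload and read back a legacy (uncompressed) blob.
payload = b"Subject: hello\r\n\r\nbody text " * 100
assert load_payload(store_payload(payload)) == payload
assert load_payload(payload) == payload
```

Because the detection happens entirely inside the load/store path, callers never see compressed bytes, which mirrors how the compression stays invisible to clients and serializers.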