We have reached the end of the course, and this is our final post on this blog. CALIC (Context-Based, Adaptive, Lossless Image Coding) is finally implemented! For the implementation, click here. You can also watch the video explaining the project. Click here to go to the entire folder, and you can view our GitHub repo here.
As discussed, we have implemented the encoder of CALIC (Context-Based, Adaptive, Lossless Image Coding), a well-known codec for lossless compression of continuous-tone images. This implementation was done as coursework and is intended solely for research and experimental purposes: it aims to be a comprehensible, deliberately simplified version of an existing and powerful lossless image compression codec. As discussed in the paper, 8-bit grayscale continuous-tone images form a good test set.
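The heart of the encoder is the gradient-adjusted prediction (GAP) step, which predicts each pixel from its causal neighbours. Below is a minimal Python sketch of the GAP rule from the paper, simplified to use only the six neighbours our encoder tracks (n, w, nn, ww, nw, ne); the thresholds follow the paper, but treat this as an illustration rather than our exact code:

```python
def gap_predict(n, w, nn, ww, nw, ne):
    """Gradient-adjusted prediction of the current pixel.

    Simplified variant: the vertical gradient omits the nne
    neighbour used in the full CALIC formulation.
    """
    d_h = abs(w - ww) + abs(n - nw) + abs(n - ne)  # horizontal gradient
    d_v = abs(w - nw) + abs(n - nn)                # vertical gradient (simplified)

    if d_v - d_h > 80:      # sharp horizontal edge: predict from the west
        return w
    if d_h - d_v > 80:      # sharp vertical edge: predict from the north
        return n

    # Smooth region: blend the neighbours, then nudge toward the
    # direction of the weaker gradient.
    pred = (w + n) / 2.0 + (ne - nw) / 4.0
    if d_v - d_h > 32:
        pred = (pred + w) / 2.0
    elif d_v - d_h > 8:
        pred = (3.0 * pred + w) / 4.0
    elif d_h - d_v > 32:
        pred = (pred + n) / 2.0
    elif d_h - d_v > 8:
        pred = (3.0 * pred + n) / 4.0
    return int(pred)
```

In a flat region the prediction reduces to roughly the average of the north and west neighbours; near a strong edge it snaps to the neighbour lying along the edge.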
For the required image manipulations, we used Python's scikit-image library along with the NumPy module. We also made use of the 'arcode' package for arithmetic coding of the current pixel based on the values of its six neighbouring pixels (n, w, nn, ww, nw, ne). "Entropy coding" in the paper refers to either arithmetic coding or Huffman coding, since each pixel is coded as close to the entropy limit as possible. We chose arithmetic coding over Huffman coding because Huffman coding always incurs rounding overhead: its code lengths are restricted to whole-bit multiples. Arithmetic coding, on the other hand, can approach the theoretical entropy much more closely.
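The whole-bit rounding overhead is easy to see numerically. The sketch below (not part of our encoder) builds a Huffman code for a skewed three-symbol distribution and compares its expected length with the Shannon entropy, which arithmetic coding can approach:

```python
import heapq
import itertools
import math

def shannon_entropy(probs):
    """Entropy in bits per symbol: the limit arithmetic coding approaches."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def huffman_avg_length(probs):
    """Expected code length (bits/symbol) of an optimal Huffman code."""
    tie = itertools.count()  # tie-breaker so heap tuples never compare lists
    heap = [(p, next(tie), [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    depth = [0] * len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for i in s1 + s2:          # every symbol under the merged node
            depth[i] += 1          # moves one level deeper in the tree
        heapq.heappush(heap, (p1 + p2, next(tie), s1 + s2))
    return sum(p * d for p, d in zip(probs, depth))

probs = [0.9, 0.05, 0.05]          # a skewed error distribution
print(shannon_entropy(probs))      # ~0.569 bits/symbol
print(huffman_avg_length(probs))   # 1.1 bits/symbol: whole-bit overhead
```

The more skewed the distribution of prediction errors, the larger the gap between the Huffman average and the entropy, which is exactly the regime a good predictor puts us in.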
Possible improvements to our implementation include better error remapping, histogram tail truncation (to increase coding efficiency), and certain error feedback mechanisms that were left out during context formation and error estimation. We have also not implemented the decoder.
The test images are 8-bit grayscale images, including well-known ones such as Lena, Cameraman, and Pirate, among several others. The level of compression was, however, not significant, as can be seen from the graph below:
As can be seen, some images are compressed less and some relatively more, with 'Moon' achieving the highest degree of compression. This is because, when entropy coding the prediction errors, the main components encoded are the per-context error counts and the error accumulator, and both of these CALIC structures are of fixed size regardless of the size or content of the image. This fixed overhead is amortized better over larger images (such as Moon) than over smaller ones (such as Lena), so larger images see greater compression gains.
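The fixed-size structures in question can be sketched as follows. The table size of 576 compound contexts is the figure from the CALIC paper, and the helper names are ours, not the paper's:

```python
import numpy as np

# Number of compound contexts in the CALIC paper; fixed, and
# independent of the dimensions of the image being encoded.
NUM_CONTEXTS = 576

err_count = np.zeros(NUM_CONTEXTS, dtype=np.int64)  # occurrences per context
err_accum = np.zeros(NUM_CONTEXTS, dtype=np.int64)  # summed prediction errors

def update_context(ctx, err):
    """Record a prediction error under its quantized context."""
    err_accum[ctx] += err
    err_count[ctx] += 1

def bias_estimate(ctx):
    """Conditional mean error for this context, used for error feedback."""
    return err_accum[ctx] // max(err_count[ctx], 1)
```

Because these tables cost the same whether the input is 256x256 or 1024x1024, their overhead weighs less on larger images.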
If you’ve gone through our output files, here’s an explanation of the files present:

- Files with the .calic extension are the final compressed files.
- Files with the .craw extension store the error accumulator and the error counts.
ROLES OF MEMBERS:
01FB14ECS212 – Sanjay SS – Research in encoding schemes + Providing arithmetic encoding implementation + Image collection
01FB14ECS233 – Shreyash S Sarnayak – Gradient Adjusted Prediction + Error Energy Estimator + Error and Context Quantizer + Error Feedback & Sign Flipping
01FB14ECS241 – Siddharth Srinivasan – Textual Context + Final Context Formation + Entropy coding according to binary/continuous mode