Priyansh, Subhigya (School: D.B.M.S English School)
Compression is useful because it reduces the resources required to store and transmit data by representing it in a smaller number of bits. Most general-purpose compression algorithms in use today are customised versions of the Lempel-Ziv (LZ) family of algorithms, or combine standard primitives, as DEFLATE does. Compression algorithms are designed around the purpose they serve; for example, organisations such as Facebook prioritise decompression speed rather than compression ratio or compression speed. Compression techniques fall into two categories, lossy and lossless. While each uses different methods to compress files, both share the same aim: to find duplicate data in a file and use a much more compact representation of it. Lossless compression reduces bits by identifying and eliminating statistical redundancy, so no information is lost.

This project focuses on building a universal lossless compression algorithm around the concept of context mixing (for text), but one that remains practical for end users. We have attempted to create algorithms that attain high compression ratios within a practical timespan; that is, we try to increase the compression ratio while also increasing compression/decompression speed, by deriving from logistic context-mixing algorithms (which use neural networks). These two goals are the main focuses of this project and can prove useful in data archiving and in the secure, efficient transmission of data. Context mixing is a type of data compression algorithm in which the next-symbol predictions of two or more statistical models are combined to yield a prediction that is often more accurate than any of the individual predictions.
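To illustrate the idea of logistic context mixing mentioned above, here is a minimal sketch, not the project's actual implementation: each model's probability for the next bit is mapped into the logistic ("stretched") domain, combined as a weighted sum, squashed back to a probability, and the weights are then adjusted online toward whichever models predicted well. The probabilities, weights, and learning rate below are illustrative assumptions.

```python
import math

def stretch(p):
    # logit: maps a probability in (0, 1) onto the real line
    return math.log(p / (1 - p))

def squash(x):
    # logistic function: the inverse of stretch
    return 1 / (1 + math.exp(-x))

def mix(probs, weights):
    # logistic mixing: weighted sum of stretched predictions,
    # squashed back into a probability
    return squash(sum(w * stretch(p) for w, p in zip(weights, probs)))

def update(probs, weights, bit, lr=0.02):
    # online gradient step that reduces the coding cost of the
    # observed bit (cross-entropy loss)
    p = mix(probs, weights)
    err = bit - p
    return [w + lr * err * stretch(pi) for w, pi in zip(weights, probs)]

# Two hypothetical model predictions for the next bit being 1
probs = [0.7, 0.9]
weights = [0.3, 0.3]
p = mix(probs, weights)                    # combined prediction
weights = update(probs, weights, bit=1)    # both models were right, so
                                           # their weights increase
```

In a full compressor this combined probability would drive an arithmetic coder, and the weight update would run after every bit, which is why context mixing achieves strong ratios at the cost of speed.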