DocumentCode :
3269879
Title :
Selective use of multiple entropy models in audio coding
Author :
Mehrotra, Sanjeev ; Chen, Wei-ge
Author_Institution :
Microsoft Corp., Redmond, WA
fYear :
2008
fDate :
8-10 Oct. 2008
Firstpage :
933
Lastpage :
938
Abstract :
Multiple entropy models for Huffman or arithmetic coding are widely used to improve the compression efficiency of many algorithms when the source probability distribution varies. However, using multiple entropy models significantly increases the memory requirements of both the encoder and the decoder. In this paper, we present an algorithm which maintains almost all of the compression gains of multiple entropy models for only a very small increase in memory over an algorithm that uses a single entropy model. The technique applies to any entropy coding scheme, such as Huffman or arithmetic coding. It is accomplished by employing multiple entropy models only for the most probable symbols and fewer entropy models for the less probable symbols. We show that this algorithm reduces the audio coding bitrate by 5%-8% over an existing algorithm which uses the same amount of table memory, by allowing the entropy model in use to switch effectively as source statistics change over an audio transform block.
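As an illustration of the selective scheme summarized in the abstract, the following is a minimal Python sketch, not the paper's exact construction: only the most probable ("head") symbols get per-context probability tables, while all remaining ("tail") symbols are reached through a context-specific escape and coded with a single shared table. All names (context_probs, shared_tail_probs, ESC, etc.) are hypothetical.

    import math

    def expected_bits(context_probs, shared_tail_probs, head_symbols, symbol, context):
        """Ideal code length (in bits) for `symbol` under a selective multi-model layout.

        context_probs[context][s]  -- per-context probability, defined only for the
                                      head symbols plus an 'ESC' escape entry
        shared_tail_probs[s]       -- single model shared by all contexts for tail symbols
        head_symbols               -- set of the most probable symbols
        """
        if symbol in head_symbols:
            # Head symbol: coded directly with the context-specific model.
            return -math.log2(context_probs[context][symbol])
        # Tail symbol: emit the context's escape, then code the symbol with the
        # shared tail model (the same table regardless of context).
        return -math.log2(context_probs[context]["ESC"]) - math.log2(shared_tail_probs[symbol])

Because only the head symbols carry per-context tables, table memory grows roughly with (number of contexts x head size) plus one shared tail table, rather than with the full alphabet size per context, which is the memory saving the abstract refers to.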
Keywords :
audio coding; statistical distributions; Huffman coding; arithmetic coding; audio coding; audio transform block; multiple entropy models; source statistics; Arithmetic; Audio coding; Bit rate; Clustering algorithms; Decoding; Entropy coding; Memory management; Probability distribution; Statistics; Video coding;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
2008 IEEE 10th Workshop on Multimedia Signal Processing
Conference_Location :
Cairns, Qld, Australia
Print_ISBN :
978-1-4244-2294-4
Electronic_ISBN :
978-1-4244-2295-1
Type :
conf
DOI :
10.1109/MMSP.2008.4665208
Filename :
4665208