Author_Institution :
AT&T Bell Labs., Holmdel, NJ, USA
Abstract :
For Part I, see ibid., vol. 2, no. 2, p. 187, 1992. For applications in graphics computers, image and video composition, high-definition television (HDTV), and optical fiber networks, Huffman-coded images must be reconstructed at a high throughput rate. Part I presented several architectural and architecture-specific optimization techniques. However, because of the recursion within the reconstruction algorithm, the achievable throughput rate for a given decoding architecture in a given IC technology is limited. The authors propose various concurrent decoding methods that relax this throughput limit by using parallel or pipelined hardware. These methods are simple, effective, flexible, and applicable to general decoder architectures. Unlimited concurrency can be achieved at the expense of additional latency; the overhead is low, and the complexity increases linearly with the throughput improvement. It is believed that the proposed methods and architectures make it possible to reconstruct arbitrarily high-resolution Huffman-coded images and video in real time with current electronics.
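The recursion the abstract refers to can be seen in a minimal sequential Huffman decoder sketch (the code table and bitstream below are hypothetical, not from the paper): because codewords are variable-length, the bit position at which each codeword starts is known only after the previous codeword has been fully decoded, which is what serializes a naive decoder and limits its throughput.

```python
# Minimal sketch of sequential Huffman (VLC) decoding.
# CODE_TABLE is a hypothetical prefix-free code, for illustration only.
CODE_TABLE = {"0": "a", "10": "b", "110": "c", "111": "d"}

def decode(bits: str) -> list:
    """Decode a bitstring one codeword at a time.

    Note the feedback loop: the start of each codeword depends on the
    length of the previous one, so decoding is inherently recursive.
    """
    symbols = []
    buf = ""
    for bit in bits:
        buf += bit
        if buf in CODE_TABLE:          # codeword boundary found
            symbols.append(CODE_TABLE[buf])
            buf = ""                   # next codeword starts here
    return symbols

print(decode("0101100111"))            # ['a', 'b', 'c', 'a', 'd']
```

Breaking this serial dependency, e.g. by speculatively decoding several stream segments in parallel and reconciling their boundaries later, is the kind of concurrency the paper's parallel and pipelined methods target.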
Keywords :
computerised picture processing; data compression; decoding; digital signal processing chips; error correction codes; parallel architectures; pipeline processing; HDTV; Huffman-coded images; VLC decoder; concurrent decoding; decoding architecture; graphic computers; high-definition television; image decomposition; image reconstruction; optical fiber networks; parallel decoding methods; parallel hardware; pipelined hardware; reconstruction algorithm; throughput rate; video composition; video reconstruction; Application software; Computer applications; Computer graphics; Computer networks; Decoding; HDTV; High definition video; Image reconstruction; Optical computing; Throughput;