Image Compression


Image Compression using Variable Blocksize Vector Quantization Based on Rate-Distortion Decomposition

Authors:

Ching Yang Wang, National Tsing Hua University (Taiwan)
Chyuan Huei Yang, National Tsing Hua University (Taiwan)
Long Wen Chang, National Tsing Hua University (Taiwan)

Volume 4, Page 2693

Abstract:

In this paper, we propose an optimal quadtree segmentation of an image for variable blocksize vector quantization (VBVQ) such that the total distortion of the reconstructed image is minimized while the total number of bits required does not exceed the bit budget. This constrained problem is converted into an equivalent unconstrained problem using a Lagrange multiplier. We prune the full quadtree by comparing the Lagrangian costs of a parent node and its four child nodes. If merging four adjacent subblocks into a larger block reduces the Lagrangian cost, the subblocks are merged; otherwise, they are vector quantized separately. Our simulation results show that the reconstructed images of the proposed algorithm are 1-3 dB higher in PSNR than those of fixed blocksize VQ and conventional VBVQ algorithms.
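
The pruning rule lends itself to a short recursive sketch. The Python fragment below only illustrates the merge test described above; vq_cost is a hypothetical placeholder for the distortion and rate of coding a block as a single VQ unit, not the paper's codebooks or rate model.

```python
import numpy as np

def vq_cost(block, lam):
    """Hypothetical stand-in: Lagrangian cost D + lam*R of coding `block`
    as one VQ unit with the codebook matching its size."""
    dist = float(np.var(block)) * block.size   # placeholder distortion
    rate = 8.0                                 # placeholder codeword length in bits
    return dist + lam * rate

def prune(block, lam, min_size=4):
    """Return (cost, leaves) of the best quadtree segmentation of `block`:
    four children are merged whenever the parent's Lagrangian cost is lower."""
    parent_cost = vq_cost(block, lam)
    n = block.shape[0]
    if n <= min_size:
        return parent_cost, [block]
    h = n // 2
    children = [block[:h, :h], block[:h, h:], block[h:, :h], block[h:, h:]]
    results = [prune(c, lam, min_size) for c in children]
    split_cost = sum(cost for cost, _ in results)
    if parent_cost <= split_cost:
        return parent_cost, [block]            # merge: code the parent as one block
    leaves = [leaf for _, child_leaves in results for leaf in child_leaves]
    return split_cost, leaves                  # split: keep the children's segmentation
```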

ic972693.pdf


Design of Successive Approximation Lattice Vector Quantizers

Authors:

Stephan F. Simon, Aachen University of Technology (RWTH) (Germany)
Lambert Bosse, Aachen University of Technology (RWTH) (Germany)

Volume 4, Page 2697

Abstract:

Two methods for overcoming the problems of large vector quantization (VQ) codebooks are lattice VQ (LVQ) and product codes. The approach described in this paper takes advantage of both methods by applying residual VQ with LVQ at all stages. Using LVQ in conjunction with entropy coding is strongly motivated by the fact that entropy-constrained but structurally unconstrained VQ design leads to more equally sized VQ cells. The entropy code of the first LVQ stage should aim at exploiting the statistical properties of the source, while the refinement LVQ stages quantize the residuals. Simulations show that certain scales of the refinement lattices yield extraordinary performance; we focus on the search for these scales.
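
A minimal sketch of the successive-approximation structure, assuming the simplest lattice (the scaled integer lattice Z^n) at every stage; the paper's lattices, entropy codes, and the search for good refinement scales are not reproduced, and the scale ratios below are purely illustrative.

```python
import numpy as np

def lattice_quantize(x, scale):
    """Quantize x to the scaled integer lattice (Z^n used only to keep the sketch short)."""
    return scale * np.round(x / scale)

def residual_lvq(x, scales):
    """Each stage quantizes the residual left by the previous stages with a finer lattice."""
    approx = np.zeros_like(x, dtype=float)
    indices = []
    for s in scales:
        q = lattice_quantize(x - approx, s)
        indices.append((q / s).astype(int))    # lattice coordinates; entropy-coded in practice
        approx = approx + q
    return approx, indices

approx, idx = residual_lvq(np.random.randn(8), scales=[1.0, 0.25, 0.0625])
```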

ic972697.pdf


VQ-Encoding of Luminance Parameters in Fractal Coding Schemes

Authors:

Hannes Hartenstein, University of Freiburg (Germany)
Dietmar Saupe, University of Freiburg (Germany)
Kai-Uwe Barthel, Berlin University of Technology (Germany)

Volume 4, Page 2701

Abstract:

This paper is concerned with the efficient storage of the luminance parameters in a fractal code by means of vector quantization (VQ). For a given image block (range), the collage error as a function of the luminance parameters is a quadratic function with ellipsoidal contour lines. We demonstrate how these functions should be used in an optimal codebook design algorithm, leading to a non-standard VQ scheme. In addition, we present results and an evaluation of this approach. The analysis of the quadratic error functions also provides guidance for optimal scalar quantization.
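
For a range block r and a (decimated) domain block d, the collage error ||r - (s*d + o)||^2 is indeed quadratic in the luminance parameters (s, o); the sketch below computes the unconstrained least-squares optimum, the standard starting point in fractal coding. The paper's VQ codebook design for these parameters is not reproduced here.

```python
import numpy as np

def luminance_params(domain, rng_block):
    """Least-squares scaling s and offset o minimising ||rng - (s*domain + o)||^2,
    the quadratic collage error discussed above."""
    d = domain.ravel().astype(float)
    r = rng_block.ravel().astype(float)
    n = d.size
    denom = n * np.dot(d, d) - d.sum() ** 2
    s = (n * np.dot(d, r) - d.sum() * r.sum()) / denom if denom else 0.0
    o = (r.sum() - s * d.sum()) / n
    err = float(np.sum((r - (s * d + o)) ** 2))   # collage error at the optimum
    return s, o, err
```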

ic972701.pdf


A System/Graph Theoretical Analysis of Attractor Coders

Authors:

Mohammad Gharavi-Alkhansari, University of Illinois at Urbana-Champaign (U.S.A.)
Thomas Huang, University of Illinois at Urbana-Champaign (U.S.A.)

Volume 4, Page 2705

Abstract:

This paper provides links between the young field of attractor coding and the well-established fields of systems theory and graph theory. Attractor decoders are modeled as linear systems whose stability is both necessary and sufficient for convergence of the decoder. This stability is dictated by the location of the eigenvalues of the sparse state transition matrix of the system. The relationship between these eigenvalues, spatial causality of the system, and the patterns of interdependency between signal elements (or image pixels) is investigated for several cases using concepts from graph and matrix theory.
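
A toy numerical check of the stability condition described above: the decoder iteration x_{k+1} = A x_k + b converges for every starting image exactly when the spectral radius of the sparse state transition matrix A is below one. The matrix below is random and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
# Sparse, randomly filled transition matrix (illustrative only).
A = np.where(rng.random((n, n)) < 0.05, 0.5 * rng.standard_normal((n, n)), 0.0)

rho = max(abs(np.linalg.eigvals(A)))               # spectral radius of A
print("decoder converges" if rho < 1 else "decoder may diverge", rho)

if rho < 1:
    b = rng.standard_normal(n)
    attractor = np.linalg.solve(np.eye(n) - A, b)  # fixed point x* = (I - A)^(-1) b
```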

ic972705.pdf


Wavelet Coding of Image Using Quadtree Representation and Block Entropy Coding

Authors:

Wenxing Zheng, Beijing University of Posts and Telecommunications (China)
Hua Nan, Beijing University of Posts and Telecommunications (China)

Volume 4, Page 2709

Abstract:

A new efficient image coding scheme based on quadtree representation and block entropy coding (QRBEC) for encoding the wavelet transform coefficients of images is presented. Properties of the human visual system (HVS) are also incorporated into the quantization process. In addition, we investigate how to flexibly control the quantization level as well as the output bitrate of the coder. The coding efficiency is quite competitive with that of the well-known EZW coder, while requiring a smaller computational burden. The proposed coding scheme can also be applied to image sequence coding, with satisfactory performance.
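
As a rough illustration of the quadtree idea (one plausible reading of the abstract; the actual QRBEC bit syntax, HVS weighting, and block entropy coder are not reproduced), the sketch below signals an entirely insignificant coefficient block with a single bit and recurses into significant ones.

```python
import numpy as np

def quadtree_bits(coeffs, thresh, min_size=2):
    """Quadtree significance map for a square block of wavelet coefficients:
    0 prunes a wholly insignificant block, 1 descends into its four children."""
    bits = []
    def visit(block):
        if np.max(np.abs(block)) < thresh:
            bits.append(0)                     # whole block insignificant: one bit
            return
        bits.append(1)
        n = block.shape[0]
        if n <= min_size:
            bits.extend(int(abs(v) >= thresh) for v in block.ravel())  # leaf map
            return
        h = n // 2
        for sub in (block[:h, :h], block[:h, h:], block[h:, :h], block[h:, h:]):
            visit(sub)
    visit(coeffs)
    return bits
```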

ic972709.pdf


Hybrid KLT-SVD image compression

Authors:

Patrick Waldemar, NTNU, Trondheim (Norway)
Tor Audun Ramstad, NTNU, Trondheim (Norway)

Volume 4, Page 2713

Abstract:

This paper investigates a transform adaptation technique, applied to transform coding of images, as a way of exploiting the variation in local statistics within an image. The method makes use of the relationship between the KLT and the SVD and their energy compaction properties. We compare this approach to a standard KLT coding system. Motivated by increased coding efficiency, an analysis-by-synthesis approach that switches between the KLT coding system and the hybrid KLT-SVD system is proposed. The switching is implemented using a global rate-distortion criterion. The results are encouraging, and the proposed techniques provide new insights into how to use the SVD in an image compression system.
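
The energy-compaction property exploited by the hybrid coder can be illustrated with a plain rank-k SVD approximation of an image block; the paper's KLT baseline, analysis-by-synthesis switching, and global rate-distortion criterion are not reproduced in this sketch.

```python
import numpy as np

def svd_approx(block, k):
    """Rank-k approximation of an image block: keep the k largest singular values."""
    u, s, vt = np.linalg.svd(block, full_matrices=False)
    return (u[:, :k] * s[:k]) @ vt[:k]

block = np.random.rand(8, 8)
err = np.linalg.norm(block - svd_approx(block, k=2))   # distortion of the rank-2 representation
```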

ic972713.pdf


Embedded DCT Coding with Significance Masking

Authors:

Neil K. Laurance, University of Bath (U.K.)
Donald M. Monro, University of Bath (U.K.)

Volume 4, Page 2717

Abstract:

We evaluate two schemes for significance switching of DCT coefficients for block-based embedded DCT image compression. Both schemes deliver the best compression at any given PSNR, or the best PSNR at any given compression, within a data stream that can be terminated within a few bits of any point. An image is partitioned into equal-sized blocks, and a fixed-point DCT of each block is calculated. The coefficients are then passed through successive "significance sweeps" of the whole image, from the most significant down to the least significant coefficient bitplanes. The coded data stream includes bits to refine only the significant coefficients at each sweep. With each new sweep, newly significant coefficients may appear within a block, and the two switching schemes evaluated are efficient methods based on block addressing and block masking. Both methods give good compression when used losslessly, up to the fixed-point DCT precision. The best coder outperforms the baseline JPEG method in PSNR at any compression and is comparable to the state of the art.
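
A bare-bones sketch of the sweep structure described above (neither the block-addressing nor the block-masking scheme is shown, and sign and entropy coding are omitted): each pass emits significance bits for coefficients that are not yet significant and refinement bits for those that are, so the stream can be cut off at any point.

```python
import numpy as np

def significance_sweeps(coeffs, num_planes):
    """Embedded bit-plane sweeps over fixed-point DCT magnitudes, MSB plane first."""
    mags = np.abs(coeffs).astype(int).ravel()
    significant = np.zeros(mags.size, dtype=bool)
    stream = []
    for plane in range(num_planes - 1, -1, -1):
        for i, m in enumerate(mags):
            if significant[i]:
                stream.append((m >> plane) & 1)   # refinement bit
            else:
                sig = (m >> plane) & 1            # coefficient becomes significant here?
                stream.append(sig)
                significant[i] = bool(sig)
    return stream
```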

ic972717.pdf


Adaptive Block-Size Transform Coding For Image Compression

Authors:

Javier Bracamonte, University of Neuchâtel (Switzerland)
Michael Ansorge, University of Neuchâtel (Switzerland)
Fausto Pellandini, University of Neuchâtel (Switzerland)

Volume 4, Page 2721

Abstract:

In this paper we report the results of an adaptive block-size transform coding scheme based on the sequential JPEG algorithm. This minimum-information-overhead method uses a transform coding technique with two different block sizes: N x N and 2N x 2N pixels. The input image is divided into blocks of 2N x 2N pixels, and each of these blocks is classified according to its image activity. Depending on this classification, either four N-point or a single 2N-point 2-D DCT is applied to the block. The purpose of the algorithm is to take advantage of large uniform regions that can be coded as a single large unit instead of four small units, as would be done by a fixed block-size scheme. For the same reconstruction quality, the results of the adaptive algorithm show a significant improvement in compression ratio with respect to the non-adaptive scheme.
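
A compact sketch of the classification-and-transform step, assuming block variance as the activity measure (the paper's actual activity criterion may differ): low-activity 2N x 2N blocks get one large DCT, active blocks get four N x N DCTs.

```python
import numpy as np
from scipy.fft import dctn

def adaptive_block_dct(block2n, activity_thresh):
    """One 2N-point 2-D DCT for low-activity blocks, four N-point 2-D DCTs otherwise."""
    if np.var(block2n) < activity_thresh:          # variance as a stand-in activity measure
        return "large", [dctn(block2n, norm="ortho")]
    n = block2n.shape[0] // 2
    subs = [block2n[:n, :n], block2n[:n, n:], block2n[n:, :n], block2n[n:, n:]]
    return "small", [dctn(s, norm="ortho") for s in subs]
```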

ic972721.pdf


Distortion/Decoding Time Tradeoffs in Software DCT-based Image Coding

Authors:

Krisda Lengwehasatit, University of Southern California (U.S.A.)
Antonio Ortega, University of Southern California (U.S.A.)

Volume 4, Page 2725

Abstract:

We present a general framework for variable complexity algorithms (VCA) and study the related issue of defining a minimum average complexity implementation. As an example we consider implementations of the inverse DCT (IDCT) that minimize the average computation time by taking advantage of the sparseness of the quantized input data. Since the decoding speed depends on the number of zeros in the input, we then present a formulation that enables the encoder to optimize its quantizer selection so as to meet a prescribed "decoding time budget". This leads to a complexity-distortion optimization technique that is analogous to well-known techniques for rate-distortion optimization. In our experiments we demonstrate significant reductions in decoding time.
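
The flavour of the variable-complexity IDCT can be seen in a row-column implementation that simply skips all-zero rows of the quantized block, so decoding time tracks input sparseness; the paper's classification of sparsity patterns and its quantizer-selection optimization are not reproduced here.

```python
import numpy as np
from scipy.fft import idct

def sparse_idct2(coeffs):
    """Row-column inverse 2-D DCT that skips all-zero coefficient rows."""
    tmp = np.zeros(coeffs.shape, dtype=float)
    for i, row in enumerate(coeffs):
        if row.any():                              # zero rows transform to zero: skip them
            tmp[i] = idct(row, norm="ortho")
    return idct(tmp, axis=0, norm="ortho")         # column pass
```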

ic972725.pdf


Computation-Distortion Characteristics of Block Transform Coding

Authors:

Vivek K. Goyal, UC-Berkeley (U.S.A.)
Martin Vetterli, EPFL (Switzerland)

Volume 4, Page 2729

Abstract:

A distortion-computation function D(C) is defined as the minimum expected distortion in computing some quantity while using no more than C computational units. In a communication framework, where the computational problem is to determine a representation that can be transmitted with expected rate not exceeding R, this gives slices of a rate-distortion-computation surface. The convexity of distortion-computation functions and rate-distortion-computation surfaces is asserted. Transform coding is studied as a particular instance of this theory. Explicit comparisons between the efficacies of the Karhunen-Loève Transform and the Discrete Cosine Transform for coding of a Gauss-Markov source are given. Results are also given on joint optimization of the block length and the computational precision.
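
In the abstract's terms (the symbols below are my own shorthand, not necessarily the paper's notation), the two objects under study can be written as

```latex
D(C) \;=\; \min_{\mathcal{A}\,:\,c(\mathcal{A}) \le C} \; \mathbb{E}\!\left[ d\!\left(X, \hat{X}_{\mathcal{A}}\right) \right],
\qquad
D(R, C) \;=\; \min_{\substack{\mathcal{A}\,:\,c(\mathcal{A}) \le C \\ \mathbb{E}[r_{\mathcal{A}}(X)] \le R}} \; \mathbb{E}\!\left[ d\!\left(X, \hat{X}_{\mathcal{A}}\right) \right],
```

where the algorithm A ranges over admissible implementations, c(A) is its computational cost, r_A(X) is the rate of the representation it produces, and d is the distortion measure; fixing one argument of D(R, C) gives the slices of the rate-distortion-computation surface mentioned above.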

ic972729.pdf


Conditional Weighted Universal Codes: Second Order Statistics in Universal Source Coding

Authors:

Michelle Effros, California Institute of Technology, Pasadena, CA (U.S.A.)

Volume 4, Page 2733

Abstract:

We consider the use of second order statistics in two-stage universal source coding. In this paper, we describe an optimal two-stage conditional entropy constrained universal source code along with its associated optimal design algorithm and a fast (but non-optimal) variation of the original code. The design technique and coding algorithm result in a new family of conditional entropy constrained universal codes including but not limited to the conditional entropy constrained WUVQ (CWUVQ) and its fast variation. We demonstrate the performance of the proposed codes on a collection of medical brain scans. On the given data set, the CWUVQ achieves up to 7.5 dB performance improvement over variable-rate WUVQ and up to 12 dB performance improvement over ECVQ. On the same data set, the fast variation of the CWUVQ achieves identical performance to that achieved by the original code at all but the lowest rates (less than 0.125 bits per pixel).
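
The role of the conditional (second-order) rate term can be sketched as a toy encoding step in which each candidate codeword is charged its codelength conditioned on the previously transmitted index; this is only a reading of the abstract, not the CWUVQ design or its fast variation.

```python
import numpy as np

def cond_ec_encode(x, codebook, cond_lengths, prev_index, lam):
    """Pick the codeword minimising distortion + lam * (codelength | previous index)."""
    costs = [np.sum((x - c) ** 2) + lam * cond_lengths[prev_index][j]
             for j, c in enumerate(codebook)]
    return int(np.argmin(costs))

# cond_lengths[k][j] would hold -log2 P(index j | previous index k), estimated
# during code design; the tables themselves are not shown here.
```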

ic972733.pdf


A Low Complexity Error-Resilient H.263 Coder

Authors:

Masoud Khansari, HP-Labs (U.S.A.)
Vasudev Bhaskaran, HP-Labs (U.S.A.)

Volume 4, Page 2737

Abstract:

We propose a low-complexity error-resilience method that is compatible with the emerging H.263 standard. The proposed method, which corrects one slice of lost information exactly or partially, is specifically suited to packet networks such as the Internet, where the majority of errors are due to packet losses. The method is simple, and no extra software or hardware is required to decode the overhead information. The amount of generated overhead is minimal, as is the delay to reconstruct lost information. Using this method, it is possible to easily trade off bits between source and protection data, achieving a simple means of adapting to the current status of the channel.
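
As background on how a single lost slice can be rebuilt from a small amount of overhead, the sketch below uses XOR parity across equal-length packets, a generic technique shown for illustration only; the paper's actual H.263-compatible scheme is not reproduced here.

```python
def xor_parity(packets):
    """Parity packet for a group of equal-length slices; sending it lets the
    decoder rebuild any single lost slice by XOR-ing the survivors with it."""
    parity = bytearray(len(packets[0]))
    for p in packets:
        for i, byte in enumerate(p):
            parity[i] ^= byte
    return bytes(parity)

slices = [b"slice-one", b"slice-two", b"slice-3!!"]
parity = xor_parity(slices)
recovered = xor_parity([parity, slices[0], slices[2]])   # reconstruct the lost slices[1]
assert recovered == slices[1]
```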

ic972737.pdf
