Image Coding II

Adaptive Vector Quantization of Image Sequences Using Generalized Threshold Replenishment

Authors:

James E. Fowler, Ohio State University (U.S.A.)
Stanley C. Ahalt, Ohio State University (U.S.A.)

Volume 4, Page 3085

Abstract:

In this summary, we describe a new adaptive vector quantization (AVQ) algorithm designed for the coding of nonstationary sources. This new algorithm, generalized threshold replenishment (GTR), differs from prior AVQ algorithms in that it features an explicit, online consideration of both rate and distortion. Rate-distortion cost criteria are used in the determination of nearest-neighbor codewords as well as in the decision to update the codebook. Results presented indicate that, for the coding of an image sequence, 1) most AVQ algorithms achieve distortion much lower than that of nonadaptive VQ for the same rate (about 1.5 bits/pixel), and 2) the GTR algorithm achieves rate-distortion performance substantially superior to that of other AVQ algorithms for low-rate coding, being the only algorithm to achieve a rate below 1.0 bits/pixel.
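The abstract's central idea, choosing the nearest-neighbor codeword and deciding whether to replenish the codebook by comparing rate-distortion costs, can be sketched as follows. The cost form J = ||x - c||² + λ·rate, the λ value, and the flat rate charged for sending a new codeword are illustrative assumptions, not the paper's exact GTR formulation:

```python
def sq_dist(x, c):
    # Squared Euclidean distortion between input vector x and codeword c.
    return sum((a - b) ** 2 for a, b in zip(x, c))

def gtr_encode(x, codebook, rates, lam, new_codeword_rate):
    """Encode one source vector under a rate-distortion cost.

    Picks the codeword minimizing J = d(x, c) + lam * rate; if even
    that cost exceeds the (assumed flat) cost of transmitting x itself
    as a new codeword, the codebook is replenished with x instead.
    """
    costs = [sq_dist(x, c) + lam * r for c, r in zip(codebook, rates)]
    i = min(range(len(costs)), key=costs.__getitem__)
    if lam * new_codeword_rate < costs[i]:
        codebook.append(list(x))          # online codebook update
        rates.append(new_codeword_rate)
        return len(codebook) - 1
    return i
```

A vector close to an existing codeword is coded with that codeword; a far-away vector triggers a codebook update, which is how the algorithm adapts to nonstationary input.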

ic973085.pdf




Robust Subband Image Coding for Waveform Channels with Optimum Power- and Bandwidth Allocation

Authors:

John Markus Lervik, NTNU (Norway)
Thomas R. Fischer, NTNU (Norway)

Volume 4, Page 3089

Abstract:

Image coding for power- and bandwidth-limited continuous-amplitude channels is considered. We address the problem of power and bandwidth allocation for subband image coding where the goal is to minimize the overall end-to-end distortion. The decomposed image is modeled as a composite source, and an algorithm for allocating power and bandwidth among the subsources of this source is proposed. The algorithm is used to compute estimates of the optimum performance theoretically attainable (OPTA) for subband image communication over a power- and bandwidth-limited AWGN channel. A gracefully degrading subband image coder with dynamic power and bandwidth allocation is simulated, and its performance is compared to OPTA and to the results of other schemes for robust image communication.
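As a point of reference for the allocation problem described above, the classic water-filling solution for distributing a power budget over parallel Gaussian channels can be sketched as below. This maximizes total capacity rather than minimizing end-to-end distortion, so it is only a simplified stand-in for the paper's joint power and bandwidth allocation:

```python
def water_fill(noise, total_power, tol=1e-9):
    """Water-filling over parallel AWGN subchannels.

    noise[i] is the noise level of subchannel i; each subchannel gets
    power max(mu - noise[i], 0), where the water level mu is found by
    bisection so the powers sum to total_power.
    """
    lo, hi = 0.0, max(noise) + total_power
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        used = sum(max(mu - n, 0.0) for n in noise)
        if used > total_power:
            hi = mu
        else:
            lo = mu
    mu = 0.5 * (lo + hi)
    return [max(mu - n, 0.0) for n in noise]
```

Quieter subchannels receive more power; subchannels noisier than the water level receive none, which mirrors the intuition of starving the least useful subsources.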

ic973089.pdf




Bandwidth Compression for Continuous Amplitude Channels Based on Vector Approximation to a Continuous Subset of the Source Signal Space

Authors:

Arild Fuldseth, NTNU (Norway)
Tor Audun Ramstad, NTNU (Norway)

Volume 4, Page 3093

Abstract:

Two methods for transmission of a continuous amplitude source signal over a continuous amplitude channel with a power constraint are proposed. For both methods, bandwidth reduction is achieved by mapping from a higher dimensional source space to a lower dimensional channel space. In the first system, a source vector is quantized and mapped to a discrete set of points in a multidimensional PAM signal constellation. In the second system, the source vector is approximated with a point in a continuous subset of the source space. This operation is followed by mapping the resulting vector to the channel space by a one-to-one continuous mapping, resulting in continuous amplitude channel symbols. The proposed methods are evaluated for a memoryless Gaussian source with an additive white Gaussian noise channel, and offer significant gains over previously reported methods. Specifically, in the case of two-dimensional source vectors and one-dimensional channel vectors, the gap to the optimum performance theoretically attainable is less than 1.0 dB for a wide range of channel signal-to-noise ratios.

ic973093.pdf




Context Modeling and Entropy Coding of Wavelet Coefficients for Image Compression

Authors:

Xiaolin Wu, University of Western Ontario (Canada)
Jianhua Chen, The Chinese Univ. (Hong Kong)

Volume 4, Page 3097

Abstract:

In this paper we study the problem of context modeling and entropy coding of the symbol streams generated by the well-known EZW image coder (embedded image coding using zerotrees of wavelet coefficients). We present some simple context modeling techniques that can squeeze out more statistical redundancy in the wavelet coefficients of EZW-type image coders and hence lead to improved coding efficiency.
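The benefit of conditioning an adaptive entropy coder on a context can be sketched as below. The context here is an opaque integer id; in the paper it would be formed from quantities such as the significance of a coefficient's parent and neighbors, which this sketch does not reproduce:

```python
import math
from collections import defaultdict

class AdaptiveModel:
    """Per-context adaptive symbol model.

    cost() returns the ideal arithmetic-coding cost
    -log2 p(symbol | context) under Laplace-smoothed counts,
    then updates the counts (adaptive modeling).
    """
    def __init__(self, alphabet):
        self.alphabet = list(alphabet)
        self.counts = defaultdict(lambda: dict.fromkeys(self.alphabet, 1))

    def cost(self, ctx, sym):
        c = self.counts[ctx]
        bits = -math.log2(c[sym] / sum(c.values()))
        c[sym] += 1
        return bits

def total_bits(stream, model):
    # stream is a sequence of (context, symbol) pairs.
    return sum(model.cost(ctx, sym) for ctx, sym in stream)
```

When the symbol distribution genuinely depends on the context, splitting the statistics by context squeezes out redundancy that a single shared model cannot, which is the effect the abstract describes for EZW symbol streams.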

ic973097.pdf




Optimization of image sequences scalable coding

Authors:

Erwan Launay, THOMSON multimedia (France)

Volume 4, Page 3101

Abstract:

In a previous article, we examined to what extent and in what conditions scalable coding of image sequences as defined in MPEG2 could be a useful tool. We thus showed that, at a constant coding quality, up to 35 percent of the base layer rate could be spared by scalable coding. This gain was achieved mostly on the luminance (on I and P frames particularly) and increased when the motion content of the sequences coded was difficult to handle. This previous work, however, relied entirely on earlier results from the literature. This contribution aims at further investigating the optimization of scalable coding based on our own previous observations. As will be seen, our experiments confirm the choices made in MPEG2, but also give some interesting insights into the mechanisms of scalable coding, examining problems such as motion handling and optimal segmentation in a scalable scheme.

ic973101.pdf




Optimal bit allocation among dependent quantizers for the minimum maximum distortion criterion

Authors:

Guido M. Schuster, U.S. Robotics (U.S.A.)
Aggelos K. Katsaggelos, Northwestern University (U.S.A.)

Volume 4, Page 3105

Abstract:

In this paper we introduce an optimal bit allocation scheme for dependent quantizers for the minimum maximum distortion criterion. First we show how minimizing the bit rate for a given maximum distortion can be achieved in a dependent coding framework using dynamic programming (DP). Then we employ an iterative algorithm to minimize the maximum distortion for a given bit rate, which invokes the DP scheme. We prove that it converges to the optimal solution. Finally we present a comparison between the minimum total distortion criterion and the minimum maximum distortion criterion for the encoding of an H.263 Intra frame. In this comparison we also point out the similarities between the proposed minimum maximum distortion approach and the Lagrangian multiplier based minimum total distortion approach.
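The two-level structure described above, a dynamic program that minimizes rate subject to a maximum-distortion constraint, wrapped in an iteration that searches for the smallest feasible maximum distortion, can be sketched as follows. The toy dependency model (distortion at a stage depends only on the previous stage's quantizer choice) and the use of bisection for the outer loop are simplifying assumptions:

```python
import math

def min_rate_for_dmax(rates, dists, dmax):
    """DP over dependent quantizers.

    rates[k][j]: rate of option j at stage k.
    dists[k][i][j]: distortion at stage k with option j, given the
    previous stage used option i (row 0 is used for stage 0).
    Returns the minimum total rate with every stage distortion <= dmax,
    or None if infeasible.
    """
    n, m = len(rates), len(rates[0])
    best = [rates[0][j] if dists[0][0][j] <= dmax else math.inf
            for j in range(m)]
    for k in range(1, n):
        nxt = [math.inf] * m
        for j in range(m):
            for i in range(m):
                if dists[k][i][j] <= dmax:
                    nxt[j] = min(nxt[j], best[i] + rates[k][j])
        best = nxt
    r = min(best)
    return None if math.isinf(r) else r

def minimax_dist_for_budget(rates, dists, budget, lo, hi, tol=1e-3):
    # Outer iteration: shrink dmax while the DP stays within the budget.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        r = min_rate_for_dmax(rates, dists, mid)
        if r is not None and r <= budget:
            hi = mid
        else:
            lo = mid
    return hi
```

The DP is the "minimize rate for a given maximum distortion" step; the outer bisection plays the role of the paper's iterative algorithm that minimizes the maximum distortion for a given bit rate.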

ic973105.pdf




Jointly Optimal Classification and Uniform Threshold Quantization in Entropy Constrained Subband Image Coding

Authors:

Are Hjørungnes, NTNU (Norway)
John Markus Lervik, NTNU (Norway)

Volume 4, Page 3109

Abstract:

A method for coding a source modeled by an infinite Gaussian mixture distribution is proposed. The source is first split into N classes. The samples of each class are then quantized by an infinite-level uniform threshold quantizer followed by an entropy coder designed for each class. The problem of joint optimization of this system's rate-distortion performance is first solved theoretically, assuming an exponential mixing density. A comparison to a system optimal for high rates, using one common quantizer for all classes, showed that for a fixed distortion the rate was reduced by 11-12% at low rates for a fixed N=5. A subband image coder using the optimum theoretical parameter values was simulated. The resulting coder has high performance and low complexity.
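The two building blocks named above, an infinite-level uniform threshold quantizer and a per-class entropy coder, can be sketched as below. The midpoint reconstruction, the absence of a dead zone, and the use of first-order entropy as the rate of an ideal per-class entropy coder are simplifying assumptions:

```python
import math
from collections import Counter

def utq_index(x, step):
    # Infinite-level uniform threshold quantizer: cell index of x
    # for cells of width `step` (no dead zone, for simplicity).
    return math.floor(x / step)

def reconstruct(i, step):
    # Midpoint reconstruction within the cell.
    return (i + 0.5) * step

def class_rate(samples, step):
    """First-order entropy (bits/sample) of the index stream: the rate
    of an ideal entropy coder designed for this one class."""
    idx = [utq_index(x, step) for x in samples]
    n = len(idx)
    return -sum(k / n * math.log2(k / n) for k in Counter(idx).values())
```

In the system described, each of the N classes would receive its own step size and its own entropy coder, and the joint optimization chooses those parameters to trade rate against distortion.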

ic973109.pdf




Wavelet Packet Based On The Top-Down Method

Authors:

Chung Mi-Sook, Chonbuk National University (Korea)
Rhim Bong-Kyun, Chonbuk National University (Korea)
Choi Jae-Ho, Chonbuk National University (Korea)
Kwak Hoon-Sung, Chonbuk National University (Korea)

Volume 4, Page 3113

Abstract:

A wavelet packet scheme based on a top-down method is proposed. Ramchandran and Vetterli have previously proposed a single tree algorithm; however, it requires a relatively long execution time. In comparison to the single tree algorithm, our proposed technique reduces the packet computation time by adopting a top-down packet tree architecture in which the variance of each band serves as the decomposition and bit allocation criterion. Bits are allocated to each band according to the target bit rate and variance, and coding is performed for each band within its bit allocation. Simulation results using various still images show that the proposed algorithm requires less execution time than the conventional single tree algorithm at the minimal cost of a small PSNR reduction. The proposed method should be useful in applications in which the time factor is crucial.
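The top-down criterion, split a band only while its variance warrants it, can be sketched in 1-D as below. The orthonormal Haar filter pair and the fixed variance threshold are simplifying assumptions standing in for the paper's filter bank and allocation rule:

```python
def haar_split(x):
    # One level of orthonormal 1-D Haar analysis: approximation a,
    # detail d, each half the length of x.
    r2 = 2 ** 0.5
    a = [(x[i] + x[i + 1]) / r2 for i in range(0, len(x), 2)]
    d = [(x[i] - x[i + 1]) / r2 for i in range(0, len(x), 2)]
    return a, d

def variance(x):
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / len(x)

def top_down_packet(x, var_thresh, max_depth):
    """Grow the packet tree from the root down: a band becomes a leaf
    as soon as its variance drops below var_thresh, so low-energy
    bands are never decomposed (the source of the speedup).
    Returns a nested (a_subtree, d_subtree) tuple / leaf-list tree."""
    if max_depth == 0 or len(x) < 2 or variance(x) <= var_thresh:
        return x
    a, d = haar_split(x)
    return (top_down_packet(a, var_thresh, max_depth - 1),
            top_down_packet(d, var_thresh, max_depth - 1))
```

Unlike a bottom-up single tree, which fully decomposes before pruning, this stops early wherever the variance criterion says further splitting is not worthwhile.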

ic973113.pdf




Fingerprint Compression using a Piecewise-Uniform Pyramid Lattice Vector Quantization

Authors:

Shohreh Kasaei, SPRC, Queensland University of Technology (Australia)
Mohamed Deriche, SPRC, Queensland University of Technology (Australia)

Volume 4, Page 3117

Abstract:

A new compression algorithm for fingerprint images is introduced. Using Lattice Vector Quantization (LVQ), a technique for determining the largest radius of the Lattice and its scaling factor is presented. The design is based on obtaining the smallest possible Expected Total Distortion (ETD) measure, using a given bit budget, while using the smallest codebook size. In the proposed Piecewise-Uniform Pyramid LVQ, the wedge problem encountered with the Pyramidal Lattice point shells is resolved. At very low bit rates, for the coefficients with high-frequency content, the Positive-Negative Mean (PNM) method is proposed to improve the resolution of the reconstructed image. The proposed algorithm results in a high compression ratio and a high reconstructed image quality with a low computational load compared to other existing algorithms.

ic973117.pdf




Fast Directional Fractal Coding Of Subbands Using Decision-Directed Clustering For Block Classification

Authors:

Kamel Belloulata, CREATIS (France)
Atilla Baskurt, CREATIS (France)
Rémy Prost, CREATIS (France)

Volume 4, Page 3121

Abstract:

We propose a new image compression scheme based on fractal coding of wavelet transform coefficients using a fast non-iterative algorithm for the codebook generation. The original image is first decomposed into subbands containing information in different spatial directions and at different scales, using an orthogonal wavelet filter bank. Subbands are encoded using Local Iterated Function Systems (LIFS) with range and domain blocks presenting horizontal or vertical directionalities. Their sizes are adapted to the correlation lengths and the resolution of each subband. This hybrid compression scheme exploits the natural self-similarity existing in each subband and considerably reduces the edge degradation and blocking effects visible at low bit rates when the conventional LIFS algorithm is applied to the original images. The computational complexity of the fractal compression algorithm is also reduced by generating LIFS for subbands of lower resolutions instead of for a full-resolution image. In orde

ic973121.pdf




Region-Based Fractal Image Compression with Quadtree Segmentation

Authors:

Yung-Ching Chang, Univ. Tsing Hua (Taiwan)
Bin-Kai Shyu, Univ. Tsing Hua (Taiwan)
Jia-Shung Wang, Univ. Tsing Hua (Taiwan)

Volume 4, Page 3125

Abstract:

Fractal image coding is a novel technique for still image compression. In this paper, a low bit rate region-based fractal image compression algorithm is proposed; it incorporates the following techniques. First, we improve the performance of quadtree segmentation with an adaptive threshold. Then, a merging scheme is applied to the resulting quadtree segmentation that combines several similar blocks into a small number of regions. We also provide a quadtree-based segmented chain code to efficiently record the contours of the regions. The experimental results show that the proposed scheme has the lowest bit rate among the existing schemes at the same level of image quality.
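The quadtree segmentation stage can be sketched as below. The paper adapts the split threshold to the image; a fixed variance threshold is used here for brevity, and the merging and chain-code stages are omitted:

```python
def block_var(img, r, c, s):
    # Variance of the s-by-s block of img with top-left corner (r, c).
    vals = [img[i][j] for i in range(r, r + s) for j in range(c, c + s)]
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def quadtree(img, thresh, min_size):
    """Recursively split a square image into four quadrants while the
    block variance exceeds `thresh`; returns (row, col, size) leaves."""
    blocks = []

    def split(r, c, s):
        if s <= min_size or block_var(img, r, c, s) <= thresh:
            blocks.append((r, c, s))
            return
        h = s // 2
        for dr in (0, h):
            for dc in (0, h):
                split(r + dr, c + dc, h)

    split(0, 0, len(img))
    return blocks
```

Flat areas end up as large leaves and detailed areas as small ones; the paper's merging step would then fuse similar neighboring leaves into regions.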

ic973125.pdf




A simple and efficient binary shape coding technique based on bitmap representation

Authors:

Frank Bossen, EPFL (Switzerland)
Touradj Ebrahimi, EPFL (Switzerland)

Volume 4, Page 3129

Abstract:

This paper presents a technique based on the JBIG algorithm for binary shape coding in both lossless and lossy modes. Because it is applied directly to the bitmap representing the shape information, it bypasses the overhead in computation of an intermediate contour representation and its associated conversions. This leads to a simpler algorithm which is more suitable for a larger class of shape data. In addition, a mechanism is proposed that allows rate control in the lossy coding mode.
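The core of JBIG-style bitmap coding, an adaptive binary model conditioned on a causal template of already-coded neighbor pixels, can be sketched as below. The 3-pixel template and the ideal-arithmetic-coder cost estimate are simplifications; JBIG's actual templates are larger:

```python
import math
from collections import defaultdict

def template_context(bitmap, r, c):
    # Causal template: left, above, and above-left neighbors
    # (pixels outside the bitmap read as 0).
    def px(i, j):
        if 0 <= i < len(bitmap) and 0 <= j < len(bitmap[0]):
            return bitmap[i][j]
        return 0
    return (px(r, c - 1), px(r - 1, c), px(r - 1, c - 1))

def code_bitmap(bitmap):
    """Estimate the cost in bits of coding the shape bitmap with one
    adaptive binary model per template context, i.e. an idealized
    arithmetic coder (-log2 of the model probability per pixel)."""
    counts = defaultdict(lambda: [1, 1])  # Laplace-smoothed [n0, n1]
    bits = 0.0
    for r in range(len(bitmap)):
        for c in range(len(bitmap[0])):
            ctx = template_context(bitmap, r, c)
            n = counts[ctx]
            b = bitmap[r][c]
            bits += -math.log2(n[b] / (n[0] + n[1]))
            n[b] += 1
    return bits
```

Because shape bitmaps are highly structured, most contexts quickly become strongly skewed and the per-pixel cost falls well below one bit, with no contour extraction step required.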

ic973129.pdf




Vision Model Based Video Perceptual Distortion Measure for Video Processing and Applications

Authors:

Fu-Huei Lin, Silicon Magic Corporation (U.S.A.)
Wanda Gass, TI (U.S.A.)
Russell M. Mersereau, Georgia Institute of Technology (U.S.A.)

Volume 4, Page 3133

Abstract:

In this paper, a perceptual video distortion measure system based on the human vision model is presented. This system is an extension of a still-image perceptual distortion measure system. One advantage of the distortion measure is that the distortion can be weighted for frames in the vicinity of scene cuts. Our video distortion measure also requires less computation compared to other approaches. The perceptual distortion measure has wide applicability in video processing, such as in the selection of the quantization matrices, the selection of the mquant parameter, and as a criterion for mode decision in MPEG encoders. Simulation results are presented.

ic973133.pdf




A Motion Estimation and Image Segmentation Technique based on the Variable Block Size

Authors:

Hangu Yeo, University of Wisconsin (U.S.A.)
Yu Hen Hu, University of Wisconsin (U.S.A.)

Volume 4, Page 3137

Abstract:

In this paper, we discuss our effort to develop a motion estimation algorithm based on variable block size, which reduces computational complexity dramatically while maintaining good picture quality and a high compression ratio. A key step in this work is to segment each image frame into different regions using a simple binary-level classifier that performs bit-wise comparison. In the second stage, motion estimation is performed for every variable-size block within the changed region with a predetermined maximum search range. The proposed technique has been applied to interframe video coding, and it has been shown that this scheme can be a feasible solution for low bit rate coding applications such as video telephony.
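The two-stage structure, a binary change classifier followed by block matching restricted to the changed region, can be sketched as below. A fixed block size and an absolute-difference threshold stand in for the paper's variable block sizes and bit-wise comparison, so this is an illustrative simplification:

```python
def changed_mask(prev, cur, thresh):
    # Binary-level classification: mark pixels whose frame difference
    # exceeds `thresh` (stand-in for the paper's bit-wise comparison).
    return [[abs(cur[r][c] - prev[r][c]) > thresh
             for c in range(len(cur[0]))] for r in range(len(cur))]

def sad(prev, cur, r, c, rr, cc, b):
    # Sum of absolute differences between the current block at (r, c)
    # and the reference block at (rr, cc).
    return sum(abs(cur[r + i][c + j] - prev[rr + i][cc + j])
               for i in range(b) for j in range(b))

def motion_search(prev, cur, block, mask, rng):
    """Full search in [-rng, rng]^2, but only for blocks that overlap
    the changed region; unchanged blocks get a zero vector for free,
    which is where the complexity reduction comes from."""
    h, w = len(cur), len(cur[0])
    vectors = {}
    for r in range(0, h, block):
        for c in range(0, w, block):
            if not any(mask[r + i][c + j]
                       for i in range(block) for j in range(block)):
                vectors[(r, c)] = (0, 0)   # unchanged: skip the search
                continue
            best, bv = None, (0, 0)
            for dr in range(-rng, rng + 1):
                for dc in range(-rng, rng + 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr <= h - block and 0 <= cc <= w - block:
                        s = sad(prev, cur, r, c, rr, cc, block)
                        if best is None or s < best:
                            best, bv = s, (dr, dc)
            vectors[(r, c)] = bv
    return vectors
```

On a typical videophone scene, most blocks fall outside the changed region, so the expensive search runs only where motion actually occurred.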

ic973137.pdf
