Vadim Avrin, BGU, Beer-Sheva (Israel)
Itshak Dinstein, BGU, Beer-Sheva (Israel)
Given a sequence of blurred low-resolution images, the aim of this work is to produce a sequence of restored, higher-resolution images. It is assumed that the point spread function of the given imaging process is a combination of a known blurring function and an estimated local motion function. The local motion estimates are obtained from the group delays of local adaptive filters. Preliminary experimental results are presented.
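As a hedged illustration of the group-delay idea (not the authors' adaptive-filter scheme, whose details the abstract does not give): a pure displacement appears as a linear phase in the cross-spectrum of two signals, and the slope of that phase, its group delay, is the shift. A minimal 1-D sketch:

```python
import numpy as np

def shift_from_group_delay(x, y):
    """Estimate the displacement between two 1-D signals from the
    phase slope (group delay) of their cross-power spectrum: a pure
    shift d contributes phase -2*pi*f*d, so a linear fit to the
    unwrapped phase recovers d."""
    cross = np.fft.rfft(y) * np.conj(np.fft.rfft(x))
    phase = np.unwrap(np.angle(cross))
    f = np.fft.rfftfreq(len(x))              # normalized frequencies
    slope = np.polyfit(f[1:], phase[1:], 1)[0]
    return -slope / (2 * np.pi)

rng = np.random.default_rng(0)
x = rng.standard_normal(256)
y = np.roll(x, 3)                            # circular shift by 3
print(shift_from_group_delay(x, y))          # approximately 3.0
```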
Anil Kokaram, Cambridge University (U.K.)
Peter J.W. Rayner, Cambridge University (U.K.)
Peter van Roosmalen, Delft University (The Netherlands)
Jan Biemond, Delft University (The Netherlands)
With the imminent widespread availability of digital video broadcasts and the subsequent increase in demand for broadcast material, image sequence restoration is a growing concern for both archivists and broadcasters. This paper presents a two-stage technique for registering the lines in video data digitized from a noisy source. In such situations the horizontal synchronization pulses may not have the correct amplitude, causing loss of `lock' in the digitizing apparatus. The effect is that image lines are randomly shifted horizontally with respect to their true locations, which manifests as jagged vertical edges in the observed sequence, an annoying artefact. The algorithm presented here relies on a two-dimensional autoregressive (2D AR) model of the image to measure the line displacements using a multiresolution scheme.
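As a rough illustration of the registration step (a simplified stand-in: the paper scores candidate shifts with a 2D AR prediction error inside a multiresolution scheme, whereas this sketch uses a plain mean-squared mismatch with the line above):

```python
import numpy as np

def register_lines(img, max_shift=8):
    """Shift each row to best match the (already registered) row
    above, searching integer displacements up to max_shift. The
    paper replaces this MSE criterion with a 2D AR model error and
    embeds the search in a multiresolution pyramid."""
    out = img.astype(float).copy()
    for r in range(1, out.shape[0]):
        ref, row = out[r - 1], out[r]
        shifts = list(range(-max_shift, max_shift + 1))
        errs = [np.mean((np.roll(row, s) - ref) ** 2) for s in shifts]
        out[r] = np.roll(row, shifts[int(np.argmin(errs))])
    return out
```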
Showbhik Kalra, Nanyang Technological University (Singapore)
M.N. Chong, Nanyang Technological University (Singapore)
Dilip Krishnan, Nanyang Technological University (Singapore)
This paper proposes a new AR model-based restoration algorithm that is able to suppress mixed noise processes and recover lost signals in an image sequence. A drawback of the AR model is the limited size of the block (of pixels) that can be adequately modeled: using a single set of AR coefficients to restore a large region of missing data results in a homogenized texture region. To overcome this inherent limitation, a block-based divide-and-conquer approach is proposed. In addition, a new Gaussian weighting scheme is used to better estimate the AR coefficients for the interpolation process.
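A minimal sketch of the Gaussian weighting idea for the coefficient estimate (a 1-D illustration; the paper's 2-D block-based scheme may differ in detail):

```python
import numpy as np

def weighted_ar_coeffs(x, p, center, sigma):
    """Estimate AR(p) coefficients by weighted least squares, with
    Gaussian weights centred near the region to be interpolated so
    that nearby texture dominates the fit."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    rows = np.array([x[t - p:t][::-1] for t in range(p, n)])
    targets = x[p:]
    w = np.exp(-0.5 * ((np.arange(p, n) - center) / sigma) ** 2)
    sw = np.sqrt(w)
    a, *_ = np.linalg.lstsq(rows * sw[:, None], targets * sw, rcond=None)
    return a
```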
Zhichun Lei, University of Dortmund (Germany)
Peter Appelhans, University of Dortmund (Germany)
Hartmut Schröder, University of Dortmund (Germany)
This paper describes a technique for ghost cancellation without a Ghost Cancellation Reference signal. The vertical edges present in the TV image are used to estimate the channel characteristics, and the coefficients of the equalizer are updated from these estimates. A method to speed up the convergence is given, and the equalizer structure for cancelling all kinds of ghosts is also discussed. A further improvement is achieved by interpreting the TV synchronisation signal as a genuine edge.
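For orientation only, a generic adaptive-equalizer sketch (the paper derives its error signal from image edges and the sync pulse rather than a known reference, and its structure and convergence speed-up differ; this is the textbook LMS skeleton such schemes build on):

```python
import numpy as np

def lms_equalizer(received, reference, n_taps=32, mu=0.01):
    """Textbook LMS FIR equalizer: adapt tap weights so the
    equalized output tracks a reference signal. Ghost cancellation
    without a GCR replaces `reference` with values inferred from
    genuine image edges and the synchronisation signal."""
    w = np.zeros(n_taps)
    out = np.zeros(len(received))
    for n in range(n_taps, len(received)):
        u = received[n - n_taps:n][::-1]     # tap input vector
        out[n] = w @ u
        e = reference[n] - out[n]            # equalization error
        w += mu * e * u                      # LMS coefficient update
    return out, w
```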
Jianping Hu, Portland State University (U.S.A.)
Nadir Sinaceur, Portland State University (U.S.A.)
Fu Li, Portland State University (U.S.A.)
Kwok-Wai Tam, Portland State University (U.S.A.)
Zhigang Fan, Xerox (U.S.A.)
Block-based Discrete Cosine Transform (BDCT) image coding techniques are currently widely used in image and video compression applications such as JPEG and MPEG. At moderate bit rates, BDCT is usually a quite satisfactory solution for most practical coding applications; at high compression ratios, however, it produces noticeable blocking and ringing artifacts in the decompressed image. In this paper, we propose a novel post-processing algorithm to remove the blocking and ringing effects at low bit rates. The main steps in this algorithm are block classification, boundary low-pass filtering and mid-point interpolation, edge detection and filtering, and DCT coefficient constraint. The improvement is demonstrated both subjectively and objectively.
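A sketch of just the boundary low-pass step for vertical block edges (the other steps, classification, mid-point interpolation, edge filtering and the DCT constraint, are omitted; the filter taps here are illustrative, not the paper's exact choice):

```python
import numpy as np

def smooth_vertical_boundaries(img, block=8):
    """Apply a 3-tap low-pass filter to the pixel pair straddling
    each vertical 8x8 block boundary to attenuate the blocking
    discontinuity; horizontal boundaries are handled analogously."""
    out = img.astype(float).copy()
    for c in range(block, img.shape[1] - 1, block):
        left, right = out[:, c - 1].copy(), out[:, c].copy()
        out[:, c - 1] = 0.25 * out[:, c - 2] + 0.5 * left + 0.25 * right
        out[:, c] = 0.25 * left + 0.5 * right + 0.25 * out[:, c + 1]
    return out
```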
Patrick L. Combettes, City University of New York (U.S.A.)
Pascal Bondon, CNRS (France)
We consider the problem of synthesizing feasible signals in the presence of inconsistent convex constraints, some of which are hard in the sense that they must absolutely be satisfied. This problem is formalized as that of minimizing an objective function measuring the degree of infeasibility with respect to the soft constraints over the intersection of the sets associated with the hard constraints. We first investigate the process of aggregating soft constraints in order to define relevant objectives and then address the question of solving the resulting convex programs. Finally, we provide numerical results to illustrate the benefits of our analysis.
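In standard notation, one natural aggregate of this kind (a sketch; the paper studies the choice of objective in more generality) is a weighted sum of squared distances to the soft sets, minimized over the intersection of the hard sets:

\[
\min_{x \in S} \Phi(x), \qquad
S = \bigcap_{i \in H} S_i, \qquad
\Phi(x) = \sum_{j \in F} w_j \, d^{2}(x, S_j),
\]

where the \(S_i\) (hard) and \(S_j\) (soft) are the closed convex constraint sets, \(d(x, S_j)\) is the distance from \(x\) to \(S_j\), and the \(w_j > 0\) weight the soft constraints.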
Nader Moayeri, Hewlett-Packard Labs. (U.S.A.)
Konstantinos Konstantinides, Hewlett-Packard Labs. (U.S.A.)
This paper presents a technique for deblurring noisy images. It includes two processing blocks, one for denoising and another for blind image restoration. The denoising step is based on the theories of singular value decomposition and compression-based filtering. The deblurring step is based on a double-regularization technique. Experimental results show that the combination of these techniques is quite effective in restoring severely blurred and noise-corrupted images, without prior knowledge of either the noise or image characteristics.
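As a hedged sketch of the denoising idea (the truncation rule below is an illustrative heuristic, not the paper's compression-based rule):

```python
import numpy as np

def svd_denoise(img, noise_sigma, rank=None):
    """Truncated-SVD denoising: keep only the dominant singular
    components of the image matrix, discarding those at or below
    the noise floor."""
    U, s, Vt = np.linalg.svd(np.asarray(img, float), full_matrices=False)
    if rank is None:
        thresh = noise_sigma * np.sqrt(max(img.shape))  # rough floor
        rank = max(1, int(np.sum(s > thresh)))
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]
```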
Ambasamudram N. Rajagopalan, Dept. of Electrical Eng., IIT, Bombay (India)
Subhasis Chaudhuri, Dept. of Electrical Eng., IIT, Bombay (India)
A limitation of the existing maximum likelihood (ML) based methods for blur identification is that the estimate of blur is poor when the blurring is severe. In this paper, we propose an ML-based method for blur identification from multiple observations of a scene. When the relations among the blurring functions of these observations are known, we show that the estimate of blur obtained by using the proposed method is very good. The improvement is particularly significant under severe blurring conditions. With an increase in the number of images, direct computation of the likelihood function, however, becomes difficult as it involves calculating the determinant and the inverse of the cross-correlation matrix. To tackle this problem, we propose an algorithm that computes the likelihood function recursively as more observations are added.
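A recursion of this kind typically rests on the standard block-matrix identities (shown here as a sketch; the paper's exact update may differ). If \(R_k\) is the cross-correlation matrix for \(k\) observations and adding an observation appends a block row and column \((b, c)\), then

\[
\det R_{k+1} = \det R_k \,\det\!\bigl(c - b^{T} R_k^{-1} b\bigr),
\qquad
R_{k+1}^{-1} =
\begin{pmatrix}
R_k^{-1} + R_k^{-1} b\,\alpha^{-1} b^{T} R_k^{-1} & -R_k^{-1} b\,\alpha^{-1}\\
-\alpha^{-1} b^{T} R_k^{-1} & \alpha^{-1}
\end{pmatrix},
\quad \alpha = c - b^{T} R_k^{-1} b,
\]

so each new image updates the determinant and inverse incrementally instead of recomputing them from scratch.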
Spiros Fotopoulos, University of Patras (Greece)
Dimitris Sindoukas, University of Patras (Greece)
Nikos Laskaris, University of Patras (Greece)
George Economou, University of Patras (Greece)
In this paper, a new filter performing color image enhancement is presented. The filter achieves this through the minimization of a weighted cost function, with weights determined by potential functions calculated so as to convey spatial information. The proposed filter is applied to a real blurred and noisy color image to verify its enhancement capabilities.
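A loose sketch of a cost-minimizing selection filter of this family (a weighted vector median with Gaussian spatial weights standing in for the paper's potential functions, which differ in detail):

```python
import numpy as np

def weighted_selection_filter(window, sigma=1.0):
    """On a (k, k, 3) colour window, output the pixel that minimizes
    a spatially weighted sum of colour distances to all samples, so
    the result is always a genuine input colour (no smearing)."""
    k = window.shape[0]
    yy, xx = np.mgrid[:k, :k]
    c = k // 2
    w = np.exp(-((yy - c) ** 2 + (xx - c) ** 2) / (2 * sigma ** 2)).ravel()
    pix = window.reshape(-1, 3).astype(float)
    costs = [np.sum(w * np.linalg.norm(pix - p, axis=1)) for p in pix]
    return pix[int(np.argmin(costs))]
```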
Lars Floreby, Lund University (Sweden)
Farook Sattar, Lund University (Sweden)
Göran Salomonsson, Lund University (Sweden)
An algorithm for multiresolution pyramid decomposition is described. At each stage, the smoothed (``lowpass'') image is obtained by combining morphological grayscale opening and closing. Using this technique, we avoid the systematic bias of traditional approaches, as illustrated by an example. As our application, we perform image enhancement by modifying the reconstruction scheme using a morphological edge detector. The processing scheme offers a method for edge-preserving noise (speckle) suppression in which only a small number of multiplications is required.
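A minimal sketch of the smoothing stage (the modified reconstruction with the morphological edge detector is not shown, and the exact combination used in the paper may differ):

```python
import numpy as np
from scipy.ndimage import grey_closing, grey_opening

def morph_lowpass(img, size=3):
    """Self-dual morphological smoother: average a grayscale opening
    and closing with the same flat structuring element. Opening
    alone biases the result dark and closing alone biases it bright;
    their combination avoids that systematic bias."""
    o = grey_opening(img, size=(size, size))
    c = grey_closing(img, size=(size, size))
    return 0.5 * (o.astype(float) + c.astype(float))

def pyramid(img, levels=3):
    """Lowpass pyramid: smooth, then subsample by two at each stage."""
    levs = [np.asarray(img, dtype=float)]
    for _ in range(levels):
        levs.append(morph_lowpass(levs[-1])[::2, ::2])
    return levs
```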
Mitsuhiko Meguro, Musashi Institute of Tech. (Japan)
Akira Taguchi, Musashi Institute of Tech. (Japan)
In this paper, a new adaptive filter, the learning mean and median hybrid (LMMH) filter, is introduced. This filter combines FIR filtering and order statistics (OS) filtering for the removal of noise with a wide range of distributions. The LMMH filter can be regarded as an extension of MMH filters, which cannot be trained; LMMH filters, in contrast, can be optimized using a priori information about the input signal. A procedure for designing an optimal LMMH filter under the mean square error criterion has been developed. Experimental results show that the performance of the optimal LMMH filter is superior to that of the Wiener filter and the OS filter for signals corrupted by noise with distributions ranging from short- to long-tailed.
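For reference, the non-adaptive skeleton that LMMH filters extend (a sketch of the basic mean-median hybrid substructure; in the learned version the fixed averages become FIR substructures whose weights are optimized under the MSE criterion):

```python
import numpy as np

def mmh_filter(x, k=2):
    """Basic mean-median hybrid: the output at each position is the
    median of (left-window mean, centre sample, right-window mean),
    combining FIR averaging with an order-statistics selection."""
    x = np.asarray(x, dtype=float)
    y = x.copy()
    for n in range(k, len(x) - k):
        left = np.mean(x[n - k:n])
        right = np.mean(x[n + 1:n + k + 1])
        y[n] = np.median([left, x[n], right])
    return y
```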
Daniel Leo Lau, University of Delaware (U.S.A.)
Juan Guillermo Gonzalez, University of Delaware (U.S.A.)
Median-based filters have gained widespread use because of their ability to preserve edges and suppress impulses. In this paper, we introduce the Closest-to-Mean (CTM) filter, which outputs the input sample closest to the sample mean. The CTM filtering framework offers lower computational complexity and better performance in near-Gaussian environments than median filters. The formulation of the CTM filter is derived from the theory of S-filters, which form a class of generalized selection-type filters with the features of edge preservation and impulse suppression. S-filters can play a significant role in image processing, where edge and detail preservation are of paramount importance. We compare the performance of CTM, median, and mean filters in the smoothing of edges and impulses immersed in Gaussian noise. A sufficient condition for a signal to be a root of the CTM filter is included. Data, figures, and source code utilized in this paper are available at: http://www.ee.udel.edu/signals/robust
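The defining rule is simple enough to state in a few lines (a minimal sketch of a single window evaluation, following the definition above):

```python
import numpy as np

def ctm_filter(window):
    """Closest-to-Mean filter: return the input sample closest to
    the window's sample mean. Being selection-type, the output is
    always an actual input value, which preserves edges."""
    w = np.asarray(window, dtype=float)
    return w[np.argmin(np.abs(w - w.mean()))]

print(ctm_filter([3, 4, 100, 5, 6]))  # mean is 23.6, output is 6
```

The demo line shows the impulse-suppression behaviour: the outlier 100 pulls the mean up, yet the output is still a genuine sample close to the rest of the data.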