Hertog Nugroho, Keio University (Japan)
Shinji Ozawa, Keio University (Japan)
Shigeyoshi Takahashi, INES Corp. (Japan)
Yoshiteru Ooi, INES Corp. (Japan)
This work investigates a new approach to detecting human faces in monocular image sequences. Our method consists of two main search procedures, both using Genetic Algorithms: the first finds a head within the scene, and the second verifies the existence of a face within the extracted head area. For this purpose, we have developed two models that serve as tools to calculate the fitness of each observation in the search procedure: a head model approximated by an ellipse, and a face template whose size is adjustable. The procedures work sequentially: the head search is activated first, and once the head area is found, face identification is activated. Experiments demonstrate the effectiveness of the method.
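The abstract does not give the chromosome encoding or fitness function, so the following is only a minimal toy sketch of the general idea: a genetic algorithm searching for an elliptical head model, where a chromosome holds the ellipse parameters and fitness measures how well the ellipse boundary aligns with edge pixels. All parameter choices (population size, crossover by averaging, Gaussian mutation) are illustrative assumptions, not the authors' method.

```python
import math
import random

# Hypothetical sketch: GA search for a head approximated by an ellipse.
# Chromosome = (cx, cy, a, b); fitness = fraction of sampled ellipse
# boundary points that land on edge pixels in a binary edge map.

def ellipse_points(cx, cy, a, b, n=72):
    """Sample n points on the ellipse boundary."""
    return [(cx + a * math.cos(2 * math.pi * k / n),
             cy + b * math.sin(2 * math.pi * k / n)) for k in range(n)]

def fitness(chrom, edge_map, w, h):
    """Score an ellipse hypothesis against the edge map."""
    cx, cy, a, b = chrom
    pts = ellipse_points(cx, cy, a, b)
    hits = 0
    for x, y in pts:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= xi < w and 0 <= yi < h and edge_map[yi][xi]:
            hits += 1
    return hits / len(pts)

def ga_search(edge_map, w, h, pop_size=40, generations=60, seed=0):
    """Evolve ellipse hypotheses toward the strongest edge support."""
    rng = random.Random(seed)
    def random_chrom():
        return (rng.uniform(0, w), rng.uniform(0, h),
                rng.uniform(5, w / 2), rng.uniform(5, h / 2))
    pop = [random_chrom() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(c, edge_map, w, h), reverse=True)
        elite = pop[:max(2, pop_size // 4)]        # keep the best quarter
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            child = tuple((g1 + g2) / 2 for g1, g2 in zip(p1, p2))  # crossover
            child = tuple(g + rng.gauss(0, 2) for g in child)        # mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda c: fitness(c, edge_map, w, h))
```

In the paper's two-stage design, a search of this kind would run first over the whole scene, and a second GA with the adjustable face template would then run only inside the recovered head region.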
Constantine Kotropoulos, AUT (Greece)
Ioannis Pitas, AUT (Greece)
Face detection is a key problem in building automated systems that perform face recognition. A very attractive approach to face detection is based on multiresolution images (also known as mosaic images). Motivated by the simplicity of this approach, a rule-based face detection algorithm for frontal views is developed that extends the work of G. Yang and T.S. Huang. The proposed algorithm has been applied to frontal views extracted from the European ACTS M2VTS database, which contains video sequences of 37 different persons. The algorithm provides a correct facial candidate in all cases. However, the success rate for the detected facial features (e.g. eyebrows/eyes, nostrils/nose, and mouth) that validate the choice of a facial candidate is 86.5% under the strictest evaluation conditions.
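The core of the mosaic-image idea is averaging the image over a grid of cells so that rules can be stated on a handful of coarse gray values rather than on raw pixels. A minimal sketch of that downsampling step, with an illustrative uniformity check standing in for the paper's actual rule set (which is not reproduced here):

```python
# Sketch of the mosaic (multiresolution) representation: each cell of the
# mosaic is the mean gray level of a cell x cell block of the input image.
# The rule below is only an illustrative stand-in, not the authors' rules.

def mosaic(gray, cell):
    """Average-downsample a gray image (list of rows) into a mosaic."""
    h, w = len(gray), len(gray[0])
    out = []
    for by in range(0, h - cell + 1, cell):
        row = []
        for bx in range(0, w - cell + 1, cell):
            s = sum(gray[by + dy][bx + dx]
                    for dy in range(cell) for dx in range(cell))
            row.append(s / (cell * cell))
        out.append(row)
    return out

def roughly_uniform(cells, tol=20):
    """Example rule: a candidate region's cells have similar gray levels."""
    flat = [v for row in cells for v in row]
    return max(flat) - min(flat) <= tol
```

Rule-based methods in this family test such predicates at a coarse resolution first, then refine the surviving candidates at finer mosaic levels where features like eyes and mouth become distinguishable.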
Markus Michaelis, GSF Neuherberg (Germany)
Rainer Herpers, GSF Neuherberg (Germany)
Lars Witta, TU-München (Germany)
Gerald Sommer, University of Kiel (Germany)
Usually, the first processing step in computer vision systems is a spatial convolution with only a few simple filters. As a result, information is lost or is not represented explicitly for the subsequent processing steps. This paper proposes a new hierarchical filter scheme that can efficiently synthesize the responses of a large number of specific filters. The scheme is based on steerable filters and also allows an efficient on-line adjustment of the trade-off between the speed and the accuracy of the filters. We apply this method to the detection of facial keypoints, in particular the eye corners. These anatomically defined keypoints exhibit a large variability in their corresponding image structures, so a flexible low-level feature extraction is required.
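The steerability property underlying such schemes can be shown with the standard textbook example (not the paper's specific filter bank): for a first-derivative-of-Gaussian pair, the filter oriented at any angle theta is an exact linear combination of two fixed basis filters, so arbitrarily oriented responses can be synthesized without new convolutions.

```python
import math

# Steerable-filter sketch: the x- and y-derivatives of a Gaussian form a
# basis; the derivative along direction theta is exactly
#   cos(theta) * G_x + sin(theta) * G_y.

def gauss_dx(x, y, sigma=1.0):
    g = math.exp(-(x * x + y * y) / (2 * sigma * sigma))
    return -x / (sigma * sigma) * g

def gauss_dy(x, y, sigma=1.0):
    g = math.exp(-(x * x + y * y) / (2 * sigma * sigma))
    return -y / (sigma * sigma) * g

def steered(x, y, theta, sigma=1.0):
    """Oriented filter synthesized from the two basis responses."""
    return (math.cos(theta) * gauss_dx(x, y, sigma)
            + math.sin(theta) * gauss_dy(x, y, sigma))

def oriented_direct(x, y, theta, sigma=1.0):
    """Same oriented derivative computed directly, for comparison."""
    u = x * math.cos(theta) + y * math.sin(theta)
    g = math.exp(-(x * x + y * y) / (2 * sigma * sigma))
    return -u / (sigma * sigma) * g
```

Because the steering step is a cheap linear combination per pixel, a hierarchy of such bases can trade accuracy (how many basis filters are combined) against speed at run time, which is the kind of on-line adjustment the abstract refers to.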
Maxim A. Grudin, JMU (U.K.)
David M. Harvey, JMU (U.K.)
Leonid I. Timchenko, VDTU (U.K.)
Paulo G.J. Lisboa, JMU (U.K.)
In this paper a novel approach is proposed that allows an efficient reduction of the amount of visual data required to represent structural information in an image. The algorithm is tolerant of minor structural changes and can be used for automatic face recognition. The approach is based on a multistage architecture that investigates partial clustering of structural image components. The initial grey-scale representation of the input image is transformed into a structural representation, so that each image component carries information about the spatial structure of its neighbourhood. The output is represented as a pattern vector whose components are computed one at a time to allow the quickest possible response. The input pattern is identified as the best match between the output pattern vector and the model vectors in the database.
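One way to read "components are computed one at a time to allow the quickest possible response" is as a sequential matching scheme in which a candidate model can be discarded as soon as its partial distance exceeds the best match found so far. The sketch below illustrates that pruning idea under that assumption; the distance measure and data are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of sequential best-match search: pattern-vector
# components are compared one at a time, so a model is pruned as soon as
# its accumulated (squared Euclidean) distance exceeds the best so far.

def best_match(query, models):
    """Return (model_id, distance) of the closest model vector."""
    best_id, best_dist = None, float("inf")
    for model_id, vec in models.items():
        dist = 0.0
        for q, m in zip(query, vec):
            dist += (q - m) ** 2
            if dist >= best_dist:      # early termination: cannot win
                break
        else:
            best_id, best_dist = model_id, dist
    return best_id, best_dist
```

Computing components incrementally this way means a confident identification can often be reached after examining only a prefix of the pattern vector, which matches the abstract's emphasis on response speed.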