Prediction-and-Verification for Face Detection



Zdravko Lipošćak and Sven Lončarić

Faculty of Electrical Engineering and Computing, University of Zagreb

Unska 3, 10000 Zagreb, Croatia

E-mail: zdravko.liposcak@sk.tel.hr, sven.loncaric@fer.hr



Abstract
This paper presents a segmentation scheme for automatic detection of a human face in a color video sequence. The method relies on a three-step procedure. First, regions likely to contain human skin are detected in the color image. In the second step, differences produced by motion between frames of the video sequence are detected in these regions. In the final step, the Karhunen-Loeve transformation is used to predict and verify the segmentation result. Experimental results are presented and discussed in the paper. Finally, conclusions are provided.

1. Introduction
In recent years, face recognition research has attracted much attention in both academia and industry. Segmentation of moving faces from video is an important area of image sequence analysis with direct applications to face recognition. Methods based on the analysis of difference images, on discontinuities in flow fields using clustering, and on line processes or Markov random field models are available [1], [2], [3]. For engineers interested in designing algorithms and systems for face detection and recognition, numerous studies in the psychophysics and neurophysiology literature serve as useful guides [4], [5].

One of the essential goals of face detection and recognition is to develop new methods that can derive low dimensional feature representations with enhanced discriminatory power. Low dimensionality facilitates real time implementation, and enhanced discriminatory power yields high recognition accuracy [6].

In this paper, we detect the face of a subject within a source video sequence containing face or head movements and complex backgrounds. The detection of the face is based on the segmentation approach originally presented for hand detection by Cui and Weng [7]. The major advantage of the Cui and Weng scheme is that it can handle a large number of different deformable objects in various complex backgrounds. The subspace dimension is reduced using the Karhunen-Loeve transformation. Using color and motion information for the extracted regions of interest speeds up the segmentation.
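In practice, the Karhunen-Loeve transformation amounts to projecting mean-centered sample vectors onto the leading eigenvectors of their covariance matrix. A minimal sketch of this dimensionality reduction follows; the function name, data shapes, and choice of k are illustrative, not taken from the paper:

```python
import numpy as np

def kl_transform(X, k):
    """Project samples onto the top-k eigenvectors of the sample
    covariance matrix (Karhunen-Loeve transformation / PCA).

    X: array of shape (n_samples, n_features).
    Returns the projected samples, the sample mean, and the basis.
    """
    mean = X.mean(axis=0)
    Xc = X - mean                              # center the data
    cov = Xc.T @ Xc / (len(X) - 1)             # sample covariance matrix
    vals, vecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    basis = vecs[:, ::-1][:, :k]               # top-k principal directions
    return Xc @ basis, mean, basis
```

A new face sample is compared in this low-dimensional subspace rather than in the original pixel space, which is what makes the verification step computationally feasible.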

An integrated system for the acquisition, normalization and recognition of moving faces in dynamic scenes using Gaussian color mixtures can be found in [8].

In the training phase, partial views of the face are generated manually so that they contain no background pixels. These partial views are used to train the system to learn a mapping from each face shape to a face mask. During the performance phase, the face position is predicted using the face sample provided by color and motion information and the learned mapping. Each predicted face position is then verified against the learned set of face appearances, using the region cut out by the predicted mask.

2. Dimensionality reduction

The face detection procedure locates the face images and crops them to a pre-defined size. Feature extraction derives efficient (low-dimensional) features, while the classifier decides whether a detection is valid using the feature representations derived earlier.

Face detection in a video sequence with an unknown complex background requires a set of visual tasks to be performed in a fast and robust way. In this paper, the detection process includes the computation and fusion of three different visual cues: color, motion, and face appearance models.

2.1. Color and motion information
The pre-processing step isolates the color of skin in each image of the sequence. The input color images should be in RGB format. If the pixel color ratios R/B, G/B, and R/G fall into given ranges, the pixel is marked as skin in a binary skin map, where 1 corresponds to skin pixels in the original image and 0 corresponds to non-skin pixels. The skin filter is not perfect, but it removes pixels whose color differs significantly from skin color.
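The ratio test above can be sketched as follows. The paper does not state its threshold ranges, so the numeric ranges below are illustrative placeholders only:

```python
import numpy as np

def skin_map(rgb, rb=(1.2, 3.0), gb=(1.0, 2.0), rg=(1.1, 2.0)):
    """Binary skin map from per-pixel color ratios R/B, G/B, R/G.

    The ratio ranges are assumed values for illustration, not the
    paper's actual thresholds. rgb: array of shape (H, W, 3).
    """
    img = rgb.astype(np.float64) + 1e-6   # avoid division by zero
    r, g, b = img[..., 0], img[..., 1], img[..., 2]

    def in_range(x, lo_hi):
        return (x >= lo_hi[0]) & (x <= lo_hi[1])

    mask = in_range(r / b, rb) & in_range(g / b, gb) & in_range(r / g, rg)
    return mask.astype(np.uint8)          # 1 = skin, 0 = non-skin
```

Only pixels passing all three ratio tests are kept, which cheaply discards most of the background before the more expensive appearance-based verification.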

In the next step, we estimate the motion of the skin-colored pixels between every two consecutive frames. Methods based on the analysis of difference images are simple and fast. First, we convert the color video sequence to a greyscale video sequence. Given an image I and the neighbouring image I' in the greyscale video sequence:


1. compute the difference image D such that

   D(x, y) = |I'(x, y) - I(x, y)|,   (1)

2. threshold D,

3. find the centroid of the largest connected component in D.
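The three steps above can be sketched as follows; the threshold value is an assumed placeholder, since the paper does not state one:

```python
import numpy as np
from scipy import ndimage

def motion_centroid(gray1, gray2, thresh=30):
    """Centroid of the largest moving region between two consecutive
    greyscale frames. The threshold is an illustrative value."""
    # step 1: difference image D(x, y) = |I'(x, y) - I(x, y)|
    d = np.abs(gray2.astype(np.int16) - gray1.astype(np.int16))
    # step 2: threshold D
    moving = d > thresh
    # step 3: largest connected component and its centroid
    labels, n = ndimage.label(moving)
    if n == 0:
        return None                       # no motion detected
    sizes = ndimage.sum(moving, labels, range(1, n + 1))
    largest = int(np.argmax(sizes)) + 1
    return ndimage.center_of_mass(labels == largest)  # (row, col)
```

The resulting centroid gives the approximate location of the moving face region, which seeds the prediction step of the segmentation.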






