
Eigenfaces for Face Recognition

ECE 533 Final Project Report, Fall ‘03

Min Luo, Department of Biomedical Engineering

Yuwapat Panitchob, Department of Electrical & Computer Engineering

Abstract—The eigenfaces approach for face recognition is implemented as our final project. Face recognition has been an active area of research with numerous applications since the late 1980s. The eigenface approach is one of the earliest appearance-based face recognition methods; it was developed by M. Turk and A. Pentland [1] in 1991. The method applies principal component analysis to decompose face images into a small set of characteristic feature images called eigenfaces. Recognition is performed by projecting a new face onto a low-dimensional linear “face space” defined by the eigenfaces, and then computing the distance between the resulting position in face space and those of known face classes. A number of experiments were done to evaluate the performance of the face recognition system we developed. The results demonstrate that the eigenface approach is quite robust to head/face orientation, but sensitive to scale and illumination. At the end of the report, several ways to improve the recognition rate are suggested. The report is organized as follows: Part I provides an overview of face recognition algorithms; Part II presents the theory of the eigenfaces approach; Part III focuses on implementation issues, such as the system structure and the interface and role of each functional block; Part IV demonstrates a number of experimental results evaluating the system’s performance under different circumstances; finally, conclusions are drawn from the experimental results, and possible improvements are suggested.

I. Introduction

The face plays a major role in our social intercourse in conveying identity and emotion. The human ability to recognize faces is remarkable. We can recognize thousands of faces learned throughout our lifetime and identify familiar faces at a glance even after years of separation. The skill is quite robust, despite large changes in the visual stimulus due to viewing conditions, expression, aging, and distractions such as glasses or changes in hairstyle.

Computational models of faces have been an active area of research since the late 1980s, for they can contribute not only to theoretical insights but also to practical applications, such as criminal identification, security systems, image and film processing, and human-computer interaction. However, developing a computational model of face recognition is quite difficult, because faces are complex, multidimensional, and subject to change over time.

Generally, face recognition proceeds in three phases: face representation, face detection, and face identification.

Face representation is the first task, that is, how to model a face. The chosen representation determines the subsequent detection and identification algorithms. For entry-level recognition (determining whether or not a given image contains a face), a face category should be characterized by generic properties shared by all faces; for subordinate-level recognition (determining which face class a new face belongs to), detailed features of the eyes, nose, and mouth have to be assigned to each individual face. Approaches to face representation can be roughly classified into three categories: template-based, feature-based, and appearance-based.

The simplest template-matching approaches represent a whole face with a single template, i.e., a 2-D array of intensity, usually an edge map of the original face image. More complex template-matching schemes may use multiple templates per face to account for recognition from different viewpoints. Another important variation is to employ a set of smaller facial feature templates, corresponding to the eyes, nose, and mouth, for a single viewpoint. The most attractive advantage of template matching is its simplicity; however, it suffers from large memory requirements and inefficient matching.

In feature-based approaches, geometric features, such as the position and width of the eyes, nose, and mouth, eyebrow thickness and arch, face breadth, or invariant moments, are extracted to represent a face. Feature-based approaches have smaller memory requirements and higher recognition speed than template-based ones. They are particularly useful for face scale normalization and for 3-D head-model-based pose estimation. However, perfect feature extraction has proven difficult to implement [5].

The idea of appearance-based approaches is to project face images onto a linear subspace of low dimension. Such a subspace is first constructed by principal component analysis on a set of training images, with eigenfaces as its eigenvectors. The concept of eigenfaces was later extended to eigenfeatures, such as eigeneyes and eigenmouths, for the detection of facial features [6]. More recently, the fisherface space [7] and the illumination subspace [8] have been proposed for recognition under varying illumination.

Face detection means locating a face in a given image and separating it from the remaining scene. Several approaches have been proposed for this task. One of them exploits the elliptical structure of the human head [9]. This method locates the head outline with the Canny edge detector and then fits an ellipse to mark the boundary between the head region and the background. However, it applies only to frontal views; detecting non-frontal views remains to be investigated. A second approach for face detection manipulates images in “face space” [1]. Images of faces do not change radically when projected into face space, while projections of non-face images appear quite different. This basic idea is used to detect the presence of faces in a scene: at every location in the image, calculate the distance between the local subimage and face space. This distance from face space serves as a measure of “faceness”, so the result of computing it at every point in the image is a “face map”. Low values in the face map, in other words short distances from face space, indicate the presence of a face.
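The face-map idea above can be sketched as follows. This is a minimal illustration with assumed names and shapes: `eigenfaces` is an (L, M) matrix whose orthonormal columns are the eigenfaces for an h-by-w window (L = h·w), and `mean_face` is the average training face as a flat vector; neither is defined in the text at this point.

```python
import numpy as np

def face_map(image, eigenfaces, mean_face, h, w):
    """Distance from face space at every h x w window position."""
    H, W = image.shape
    dmap = np.full((H - h + 1, W - w + 1), np.inf)
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            patch = image[i:i + h, j:j + w].ravel().astype(float)
            phi = patch - mean_face               # mean-subtracted window
            weights = eigenfaces.T @ phi          # project onto face space
            recon = eigenfaces @ weights          # best reconstruction in face space
            dmap[i, j] = np.linalg.norm(phi - recon)  # distance from face space
    return dmap  # low values indicate likely face locations
```

A window that lies in the span of the eigenfaces reconstructs almost perfectly, giving a distance near zero; non-face windows leave a large residual.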

Face identification is performed at the subordinate level. At this stage, a new face is compared to face models stored in a database and classified as a known individual if a correspondence is found. The performance of face identification is affected by several factors: scale, pose, illumination, facial expression, and disguise.

The scale of a face can be handled by a rescaling process. In the eigenface approach, the scaling factor can be determined by multiple trials. The idea is to use multiscale eigenfaces, in which a test face image is compared with eigenfaces at a number of scales; the image will then appear close to face space only for the closest-scaled eigenfaces. Equivalently, we can scale the test image to multiple sizes and use the scaling factor that yields the smallest distance to face space.
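The scale search can be sketched as follows. Names here are assumptions: `dist_from_face_space` stands for any routine that projects a flattened s-by-s image onto the eigenfaces and returns the reconstruction error, and nearest-neighbour resampling is used only to keep the sketch dependency-free.

```python
import numpy as np

def resize_nn(img, size):
    """Nearest-neighbour resample of a 2-D image to size x size."""
    H, W = img.shape
    rows = np.arange(size) * H // size
    cols = np.arange(size) * W // size
    return img[np.ix_(rows, cols)]

def best_scale(image, dist_from_face_space, sizes=(32, 48, 64, 96)):
    """Try several candidate sizes; keep the one closest to face space."""
    dists = [dist_from_face_space(resize_nn(image, s)) for s in sizes]
    return sizes[int(np.argmin(dists))]
```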

Varying poses result from changes of viewpoint or head orientation. Different identification algorithms exhibit different sensitivities to pose variation.

Identifying faces under varying illumination is a challenging problem for face recognition. The same person, with the same facial expression, seen from the same viewpoint, can appear dramatically different as the lighting changes. In recent years, two approaches, the fisherface space approach [7] and the illumination subspace approach [8], have been proposed to handle different lighting conditions. The fisherface method projects face images onto a three-dimensional linear subspace based on Fisher's Linear Discriminant, in an effort to maximize between-class scatter while minimizing within-class scatter. The illumination subspace method constructs an illumination cone of a face from a set of images taken under unknown lighting conditions. The latter approach is reported to perform significantly better, especially under extreme illumination.

Unlike scale, pose, and illumination, facial expression can greatly change the geometry of a face. Attempts have been made in computer graphics to model facial expressions from a muscular point of view [10].

Disguise is another problem encountered by face recognition in practice. Glasses, hairstyle, and makeup all change the appearance of a face. Most research work so far has only addressed the problem of glasses [7][1].

II. Eigenfaces for Recognition

Before the publication of [1], much of the work on automated face recognition had ignored the question of which aspects of the face stimulus are important for identification, assuming that predefined measurements were relevant and sufficient. In the early 1990s, M. Turk and A. Pentland realized that an information-theoretic approach to coding and decoding face images could give insight into the information content of face images, emphasizing the significant local and global “features”. Such features may or may not be directly related to our intuitive notion of facial features such as the eyes, nose, lips, and hair.

In the language of information theory, the objective is to extract the relevant information in a face image, encode it as efficiently as possible, and compare one face encoding with a database of models encoded in the same way. A simple approach to extract the information contained in a face image is to somehow capture the variation in a collection of face images, independent of any judgement of features, and use this information to encode and compare individual face images.

In mathematical terms, the objective is to find the principal components of the distribution of faces, that is, the eigenvectors of the covariance matrix of the set of face images. These eigenvectors can be thought of as a set of features which together characterize the variation between face images. Each image location contributes more or less to each eigenvector, so an eigenvector can be displayed as a sort of ghostly face called an eigenface. Some of these faces are shown in Figure 4.

Each face image in the training set can be represented exactly in terms of a linear combination of the eigenfaces. The number of possible eigenfaces is equal to the number of face images in the training set. However, the faces can also be approximated using only the “best” eigenfaces—those that have the largest eigenvalues, and which therefore account for the most variance within the set of face images. The primary reason for using fewer eigenfaces is computational efficiency. The most meaningful M eigenfaces span an M-dimensional subspace—“face space”—of all possible images. The eigenfaces are essentially the basis vectors of the eigenface decomposition.
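The computation described in the two paragraphs above can be sketched as follows, assuming (these are our conventions, not the report's) that the training images arrive as an (n_images, L) array of flattened intensity vectors. The sketch forms the full L-by-L covariance directly, which is only practical for small L; for realistic image sizes, Turk and Pentland [1] instead work with the much smaller n-by-n matrix, an optimization omitted here.

```python
import numpy as np

def compute_eigenfaces(faces, M):
    """Return mean face, the best M eigenfaces, and their eigenvalues."""
    mean_face = faces.mean(axis=0)
    A = faces - mean_face                      # mean-subtracted training images
    cov = A.T @ A / len(faces)                 # L x L covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:M]      # keep the M largest eigenvalues
    return mean_face, eigvecs[:, order], eigvals[order]
```

The kept columns are orthonormal and span the M-dimensional face space; every training face is (approximately) a linear combination of them plus the mean.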

The idea of using eigenfaces was motivated by a technique for efficiently representing pictures of faces using principal component analysis. It is argued that a collection of face images can be approximately reconstructed by storing a small collection of weights for each face and a small set of standard pictures. Therefore, if a multitude of face images can be reconstructed by weighted sum of a small collection of characteristic images, then an efficient way to learn and recognize faces might be to build the characteristic features from known face images and to recognize particular faces by comparing the feature weights needed to (approximately) reconstruct them with the weights associated with the known individuals.

The eigenfaces approach for face recognition involves the following initialization operations:

  1. Acquire a set of training images.

  2. Calculate the eigenfaces from the training set, keeping only the best M images with the highest eigenvalues. These M images define the “face space”. As new faces are experienced, the eigenfaces can be updated.

  3. Calculate the corresponding distribution in M-dimensional weight space for each known individual (training image), by projecting their face images onto the face space.
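Step 3 of the initialization is a single projection per training image. A minimal sketch, with assumed array names (`faces` as an (n, L) array of flattened training images, `eigenfaces` with the best M eigenfaces as columns):

```python
import numpy as np

def project_training_set(faces, mean_face, eigenfaces):
    """Step 3: map each known face to its M-dimensional weight pattern."""
    return (faces - mean_face) @ eigenfaces    # shape (n_images, M)
```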

Having initialized the system, the following steps are used to recognize new face images:

  1. Given an image to be recognized, calculate a set of weights of the M eigenfaces by projecting it onto each of the eigenfaces.

  2. Determine if the image is a face at all by checking to see if the image is sufficiently close to the face space.

  3. If it is a face, classify the weight pattern as either a known person or as unknown.

  4. (Optional) Update the eigenfaces and/or weight patterns.

  5. (Optional) Calculate the characteristic weight pattern of the new face image, and incorporate into the known faces.
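Recognition steps 1–3 above can be sketched in a few lines. The names and the two thresholds are assumptions for illustration; the report does not fix them at this point.

```python
import numpy as np

def recognize(image_vec, mean_face, eigenfaces, known_weights, labels,
              face_thresh, class_thresh):
    phi = image_vec - mean_face
    omega = eigenfaces.T @ phi                          # step 1: weight vector
    dffs = np.linalg.norm(phi - eigenfaces @ omega)     # distance from face space
    if dffs > face_thresh:
        return "not a face"                             # step 2: reject non-faces
    dists = np.linalg.norm(known_weights - omega, axis=1)
    k = int(np.argmin(dists))                           # step 3: nearest class
    return labels[k] if dists[k] < class_thresh else "unknown"
```

Note the two distinct distances: distance *from* face space decides face versus non-face, while distance *within* face space decides which known individual, if any, the face matches.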

Calculating Eigenfaces

Let a face image (x,y) be a two-dimensional N by N array of intensity values. An image may also be considered as a vector of dimension , so that a typical image of size 256 by 256 becomes a vector of dimension 65,536, or equivalently, a point in 65,536-dimensional space. An ensemble of images, then, maps to a collection of points in this huge space.

Images of faces, being similar in overall configuration, are not randomly distributed in this huge image space and can thus be described by a relatively low-dimensional subspace. The main idea of principal component analysis is to find the vectors that best account for the distribution of face images within the entire image space. These vectors define the subspace of face images, which we call “face space”. Each vector is of length N², describes an N by N image, and is a linear combination of the original face images. Because these vectors are the eigenvectors of the covariance matrix corresponding to the original face images, and because they are face-like in appearance, they are referred to as “eigenfaces”.
