Face recognition is part of the wider field of pattern recognition technology. Recognition, and especially face recognition, covers a range of activities from many walks of life. Face recognition is something that humans are particularly good at, and science and technology have brought many similar tasks to machines. Face recognition in general, and the recognition of moving people in natural scenes in particular, require a set of visual tasks to be performed robustly. This process comprises three main tasks: acquisition, normalisation and recognition. By acquisition we mean the detection and tracking of face-like image patches in a dynamic scene. Normalisation is the segmentation, alignment and normalisation of the face images. Recognition, finally, is the representation and modelling of face images as identities, and the association of novel face images with known models.
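The three-stage pipeline above can be sketched structurally as follows. This is only an illustrative skeleton: the function bodies are stubs, and the pixel values, identity models and fixed patch size are invented for the example, not part of any real system.

```python
def acquire(frame):
    """Acquisition: detect and track face-like image patches (stubbed).

    A real system would run a face detector over a dynamic scene; here
    we simply treat the whole frame as one candidate patch.
    """
    return [frame]

def normalise(patch, size=4):
    """Normalisation: segment, align and rescale a patch to a fixed form.

    Stub: crop/pad the 1-D "patch" to a fixed length and rescale the
    pixel values into [0, 1].
    """
    patch = (patch + [0] * size)[:size]
    peak = max(patch) or 1
    return [v / peak for v in patch]

def recognise(face, models):
    """Recognition: associate a novel face with the closest known identity."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(models, key=lambda name: dist(face, models[name]))

# Hypothetical identity models and an input frame resembling "alice".
models = {"alice": [0.1, 0.9, 0.8, 0.2], "bob": [0.9, 0.2, 0.1, 0.8]}
frame = [10, 88, 80, 21]
for patch in acquire(frame):
    print(recognise(normalise(patch), models))  # prints "alice"
```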
2.2 Why Face Recognition?
Given the requirement for determining people's identity, the obvious question is: what technology is best suited to supply this information? There are many ways in which humans can identify each other, and the same holds for machines. Many different identification technologies are available, many of which have been in commercial use for years. The most common person verification and identification methods today are password and PIN (Personal Identification Number) systems. The problem with these and similar techniques is that they are not unique: it is possible to forget or lose them, or to have them stolen by somebody else. To overcome these problems, considerable interest has developed in “biometric” identification systems, which use pattern recognition techniques to identify people by their physiological characteristics; examples include fingerprint, retina and iris recognition. These techniques, however, are not easy to use. In applications such as bank transactions and entry into secure areas, they have the disadvantage of being intrusive, both physically and socially: the user must position the body relative to the sensor, and then pause for a second to declare himself or herself. This does not mean that face recognition requires no specific positioning; as we analyse later on, the pose and appearance of the captured image are very important.
While the pause-and-present interaction is useful in high-security applications, it is exactly the opposite of what is required when building a store that recognises its best customers, an information kiosk that remembers you, or a house that knows the people who live there. Face recognition from video and voice recognition have a natural place in these next-generation smart environments: they are unobtrusive, usually passive, do not restrict user movement, and are now both low-power and inexpensive. Perhaps most important, however, is that humans identify other people by their face and voice, and are therefore likely to be comfortable with systems that use face and voice recognition. [I2, I5]
Face recognition arose from the moment that machines started to become more and more “intelligent” and gained the ability to fill in, correct or compensate for the limits of human abilities and senses.
The subject of face recognition is as old as computer vision itself, both because of the practical importance of the topic and because of theoretical interest from cognitive science. Face recognition is not the only method of recognising other people; even humans use several senses to recognise each other. Machines have a wider range of options for recognition purposes, using features such as fingerprints or iris scans. Despite the fact that these methods of identification can be more accurate, face recognition has always remained a major focus of research because of its non-invasive nature and because it is people's primary method of person identification.
Since the start of this field of technology there have been two main approaches to face recognition: the geometrical approach, based on face geometry, and the pictorial approach.
The geometrical approach uses the spatial configuration of facial features: the main geometrical features of the face, such as the eyes, nose and mouth, are first located, and faces are then classified on the basis of various geometrical distances and angles between those features. The pictorial approach, on the other hand, uses templates of the facial features: templates of the major facial features, and of the entire face, are used to perform recognition on frontal views of faces. Many of the projects based on these two approaches share common extensions that handle different poses and backgrounds. Apart from these two techniques there are more recent template-based approaches, which form templates from the image gradient, and the principal component analysis approach, which can be viewed as a sub-optimal template approach. Finally, the deformable template approach combines elements of both the pictorial and feature-geometry approaches and has been applied to faces at varying pose and expression.
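The geometrical approach can be sketched in a few lines. The landmark coordinates and the particular set of distance ratios below are hypothetical, chosen only to illustrate the idea of classifying faces by geometric measurements; normalising by the inter-ocular distance makes the features invariant to image scale.

```python
import math

def geometric_features(landmarks):
    """Build a simple feature vector from facial landmark coordinates.

    `landmarks` maps feature names to (x, y) positions; the distances
    used here are illustrative, not a published feature set.
    """
    def dist(a, b):
        (x1, y1), (x2, y2) = landmarks[a], landmarks[b]
        return math.hypot(x2 - x1, y2 - y1)

    # Normalise each distance by the inter-ocular distance so the
    # feature vector does not change when the image is scaled.
    eye_span = dist("left_eye", "right_eye")
    return [
        dist("left_eye", "nose") / eye_span,
        dist("right_eye", "nose") / eye_span,
        dist("nose", "mouth") / eye_span,
    ]

def feature_distance(f, g):
    """Euclidean distance between feature vectors; smaller means more similar."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f, g)))

# Invented landmarks; face_b is the same face uniformly scaled by 2.
face_a = {"left_eye": (30, 40), "right_eye": (70, 40),
          "nose": (50, 60), "mouth": (50, 80)}
face_b = {k: (2 * x, 2 * y) for k, (x, y) in face_a.items()}

print(feature_distance(geometric_features(face_a),
                       geometric_features(face_b)))  # ~0.0 (scale-invariant)
```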
Since the early days of face recognition there has been a strong connection with the science of neural networks, which are analysed in more detail later on. The most famous early example of a face recognition “system” using neural networks is the Kohonen model: a simple neural network able to perform face recognition on aligned and normalised face images. The type of network Kohonen employed computed a face description by approximating the eigenvectors of the face image's auto-correlation matrix; these eigenvectors are now known as “eigenfaces”. [O1]
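The eigenface idea can be illustrated in miniature: compute the auto-correlation matrix of a set of mean-centred face vectors, then extract its dominant eigenvector. The three-pixel “images” below are invented toy data, and power iteration stands in for the full eigendecomposition a real system would use; it is a sketch of the principle, not of any particular implementation.

```python
def autocorrelation(faces):
    """C = (1/n) * sum of outer products x x^T over mean-centred face vectors."""
    n, d = len(faces), len(faces[0])
    mean = [sum(f[i] for f in faces) / n for i in range(d)]
    centred = [[f[i] - mean[i] for i in range(d)] for f in faces]
    c = [[0.0] * d for _ in range(d)]
    for x in centred:
        for i in range(d):
            for j in range(d):
                c[i][j] += x[i] * x[j] / n
    return c

def power_iteration(c, steps=100):
    """Dominant eigenvector of a symmetric matrix by repeated multiplication."""
    d = len(c)
    v = [1.0] * d
    for _ in range(steps):
        w = [sum(c[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Toy 3-pixel "face images"; the dominant eigenvector of their
# auto-correlation matrix is the first "eigenface".
faces = [[1.0, 2.0, 1.0], [2.0, 4.0, 2.1], [0.5, 1.0, 0.4]]
eigenface = power_iteration(autocorrelation(faces))
print(eigenface)  # unit-length direction of greatest variation among the faces
```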
After that, many other methods were developed, building on these earlier techniques. To summarise the methods on which the “idea” of face recognition is based: we have the geometrical and pictorial approaches, followed by methods such as eigenfaces and Principal Component Analysis, as well as methods that process images in combination with neural networks or other expert systems.
In the recognition stage, the input is compared against all selected model views of each person. To compare the input against a particular model view, the face is first geometrically aligned with the model view: an affine transform is applied to the input to bring the facial features automatically located by the system into correspondence with the same features on the model. A technique based on the optical flow between the transformed input and the model is used to compensate for any remaining small transformation between the two. Templates from the model are then compared with the image using normalised correlation. Both the model and input images are pre-processed with a differential operator such as the ones mentioned above. In the future, we plan to address the problem of recognising faces when only one view of the face is available. The key to making this work will be an example-based learning system that uses multiple images of prototype faces undergoing changes in pose to learn, with the help of neural networks. The system will apply this knowledge to synthesise new virtual views of the person's face.
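The normalised-correlation comparison used in the recognition stage can be sketched as follows. The 3×3 pixel values are invented toy data; the point is that zero-mean normalised correlation scores a template/patch pair in [-1, 1] and is unaffected by brightness (offset) and contrast (gain) changes.

```python
import math

def normalised_correlation(template, patch):
    """Zero-mean normalised correlation between two equal-size pixel lists.

    Returns a score in [-1, 1]; 1 means the patch matches the template
    up to a change of brightness (offset) and contrast (gain).
    """
    n = len(template)
    mt = sum(template) / n
    mp = sum(patch) / n
    num = sum((t - mt) * (p - mp) for t, p in zip(template, patch))
    den = math.sqrt(sum((t - mt) ** 2 for t in template) *
                    sum((p - mp) ** 2 for p in patch))
    return num / den

eye_template = [10, 50, 10, 50, 90, 50, 10, 50, 10]   # invented 3x3 "eye" template
bright_patch = [2 * v + 30 for v in eye_template]      # same pattern, new gain/offset
other_patch  = [90, 10, 90, 10, 10, 10, 90, 10, 90]    # an unrelated pattern

print(normalised_correlation(eye_template, bright_patch))  # 1.0: a perfect match
print(normalised_correlation(eye_template, other_patch))   # negative: dissimilar
```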