Face Emotion Detection by Neural Network


Alfiya Mahat (Student, SPWEC), Prof. Ganesh Sable (Professor, SPWEC), Dipali Rindhe (Student, SPWEC)
Aurangabad, Maharashtra
alfiya704@gmail.com, Sable.eesa@gmail.com, dipalirindhe8943@gmail.com


Abstract: Emotions are an important part of human communication. There has been great interest in the automatic recognition of human emotion because of its widespread applications in security, surveillance, marketing, advertising, and human-computer interaction. To communicate with a computer in a natural way, it is desirable to use the natural modes of human communication: voice, gestures and facial expressions. This paper presents the implementation of a simple algorithm for face emotion detection. In spite of the complex algorithms that have been developed, robust detection and localization of faces or their constituent parts is still difficult to attain in many cases. Several classifier-based methods have been developed in this area, but facial expression recognition remains an active area of research due to environmental clutter, illumination changes and other sources of facial variability. In the present work, four emotions, namely angry, sad, happy and neutral, are tested on a user-defined database and a real-time database. The proposed method uses the virtual face model (VFM) method for feature extraction and a neural network for training and classification.


Keywords: adaptive median filter, virtual face model, face emotion detection, neural network, database.

1. Introduction

Biometrics is the science and technology of recording and authenticating identity using physiological or behavioral characteristics of a subject. A biometric is a measurable characteristic, whether physiological or behavioral, of a living organism that can be used to differentiate that organism as an individual. Biometric data are captured when the user attempts to be authenticated by the system, and are used by the biometric system for real-time comparison against stored biometric samples. In biometrics, the identity of an individual may be viewed as the information associated with that person in a particular identity management system [3]. There are many ways in which humans express their emotions; the most natural is through facial expressions, for example through the lips and eyes. The categories of emotion identified as universal by Ekman are sadness, anger, joy, fear, disgust and surprise, without considering the neutral emotion [2].

Humans are capable of producing thousands of facial actions during communication that vary in complexity, intensity and meaning. Emotion or intention is often communicated by subtle changes in one or several discrete features, and the addition or absence of one or more facial actions may alter the interpretation.

2. Proposed Method

There are a number of algorithms for face emotion detection, but all have some disadvantages or limited accuracy. The design and implementation of the facial expression recognition system can be subdivided into three main parts. The first part is pre-processing, the second is feature extraction and training on the images, and the third is testing. The pre-processing part consists of acquiring the noisy input image through scanning; the second part consists of feature extraction by the virtual face model (VFM) and training by a back-propagation neural network [4].

Figure 1: Block diagram of the face emotion detection system

Figure 1 above shows the block diagram of the face emotion detection system. The whole system is divided into three steps: pre-processing, training and testing.

2.1 Preprocessing

The pre-processing step converts the image according to the needs of the next level. The steps in the pre-processing stage are shown below.

Step 1: RGB to gray conversion
Step 2: Adaptive median filtering
Step 3: Histogram of the filtered image

The input image of each emotion is read and stored in a variable in matrix form. RGB to gray conversion is required because further processing takes place on the gray-scale image. The conversion is carried out by eliminating the hue and saturation information while retaining the luminance information of the image.
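The conversion described above can be sketched as follows; the paper does not state which luminance weights are used, so this sketch assumes the standard ITU-R BT.601 coefficients (the same ones MATLAB's rgb2gray applies):

```python
import numpy as np

def rgb_to_gray(rgb):
    """Convert an RGB image (H x W x 3) to gray scale.

    Hue and saturation are discarded; only luminance is kept, using the
    ITU-R BT.601 weights (an assumption -- the paper does not specify them).
    """
    rgb = np.asarray(rgb, dtype=np.float64)
    weights = np.array([0.299, 0.587, 0.114])  # R, G, B luminance weights
    gray = rgb @ weights                        # weighted sum over channels
    return gray.astype(np.uint8)
```

For example, a pure-red pixel (255, 0, 0) maps to the gray level 0.299 × 255 ≈ 76.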

Figure 2: (a) Input image, (b) RGB to gray image

Step 2: Adaptive Median Filtering

This step filters noise and other artifacts in the image and sharpens its edges. An adaptive median filter is used for noise removal, which the complete system requires. For filtering, a mask with a centre pixel is slid over the image.
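The paper does not detail its adaptive median filter, so the following is a sketch of the standard adaptive median scheme (window grows from 3×3 until the local median is not an impulse), which suits the salt-and-pepper noise this step targets:

```python
import numpy as np

def adaptive_median_filter(img, s_max=7):
    """Adaptive median filter for impulse noise on a gray-scale image.

    The window grows from 3x3 up to s_max x s_max until the local median
    is not itself an extreme value, which preserves edges better than a
    fixed-size median filter. s_max=7 is an illustrative choice.
    """
    img = np.asarray(img)
    pad = s_max // 2
    padded = np.pad(img, pad, mode='edge')
    out = img.copy()
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            s = 3
            while True:
                half = s // 2
                ci, cj = i + pad, j + pad       # centre pixel in padded image
                win = padded[ci - half:ci + half + 1, cj - half:cj + half + 1]
                zmin, zmed, zmax = win.min(), np.median(win), win.max()
                z = img[i, j]
                if zmin < zmed < zmax:          # median is not an impulse
                    out[i, j] = z if zmin < z < zmax else zmed
                    break
                s += 2
                if s > s_max:                   # give up: output the median
                    out[i, j] = zmed
                    break
    return out
```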

Figure 3: (a) Input image, (b) RGB to gray image, (c) Filtered image

Step 3: Histogram of Filtered Image

The histogram block computes the frequency distribution of the elements in the input image. The output below shows the histogram of the filtered image.
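For an 8-bit gray-scale image, this frequency distribution is simply a count of pixels at each of the 256 intensity levels; a minimal sketch:

```python
import numpy as np

def gray_histogram(gray, bins=256):
    """Frequency distribution of the pixel intensities of an 8-bit
    gray-scale image: counts[k] = number of pixels with value k."""
    counts, _ = np.histogram(gray, bins=bins, range=(0, bins))
    return counts
```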

Figure 4: (a) Filtered image, (b) Histogram of the filtered image

After the image is taken for testing, the pre-processing output is displayed. Statistical analysis is also carried out by computing the mean, median and standard deviation of the input image and of the image filtered by the adaptive median filter. The significance of these measures is that the mean, median and standard deviation should be lower in the enhanced frame. The extracted features of the image are then fed into the back-propagation neural network for training, after which emotions are detected. The following table shows the mean, median and standard deviation of the input and filtered images, together with their PSNR values and the elapsed time.
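The statistics and the PSNR reported in the table can be computed as follows (a sketch; the paper does not show its implementation, and the 255 peak value assumes 8-bit images):

```python
import numpy as np

def image_stats(img):
    """Mean, median and standard deviation of an image's pixel values."""
    img = np.asarray(img, dtype=np.float64)
    return img.mean(), np.median(img), img.std()

def psnr(reference, filtered, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and its
    filtered version; higher means the two images are closer."""
    ref = np.asarray(reference, dtype=np.float64)
    out = np.asarray(filtered, dtype=np.float64)
    mse = np.mean((ref - out) ** 2)
    if mse == 0:
        return float('inf')                    # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```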

Table 1: Preprocessing output

Sr No | Name of person | Mean | Median | Standard deviation | PSNR | Elapsed time
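The back-propagation training mentioned above can be sketched as a minimal one-hidden-layer network in NumPy. The paper does not give the architecture, learning rate or feature dimensionality, so every number below, and the synthetic stand-in for the VFM feature vectors, is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BPNN:
    """Minimal one-hidden-layer back-propagation network (illustrative:
    layer sizes and learning rate are assumptions, not the paper's)."""
    def __init__(self, n_in, n_hidden, n_out, lr=0.5):
        self.w1 = rng.normal(0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(0, 0.5, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)
        self.lr = lr

    def forward(self, x):
        self.h = sigmoid(x @ self.w1 + self.b1)   # hidden activations
        self.o = sigmoid(self.h @ self.w2 + self.b2)
        return self.o

    def train_step(self, x, t):
        o = self.forward(x)
        d_o = (o - t) * o * (1 - o)                        # output delta
        d_h = (d_o @ self.w2.T) * self.h * (1 - self.h)    # hidden delta
        self.w2 -= self.lr * np.outer(self.h, d_o)
        self.b2 -= self.lr * d_o
        self.w1 -= self.lr * np.outer(x, d_h)
        self.b1 -= self.lr * d_h

EMOTIONS = ["angry", "sad", "happy", "neutral"]

# Synthetic stand-in for VFM feature vectors: one noisy cluster per emotion.
centres = rng.normal(0, 1, (4, 6))
X = np.vstack([c + rng.normal(0, 0.1, (20, 6)) for c in centres])
T = np.repeat(np.eye(4), 20, axis=0)               # one-hot targets

net = BPNN(n_in=6, n_hidden=8, n_out=4)
for _ in range(300):
    for x, t in zip(X, T):
        net.train_step(x, t)

pred = np.argmax([net.forward(x) for x in X], axis=1)
accuracy = np.mean(pred == np.argmax(T, axis=1))
```

On such well-separated synthetic clusters the network classifies the training set almost perfectly; real VFM features would of course be harder.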