EMOTIONAL MANIPULATION WITH THE HELP OF EMOTION RECOGNITION - A SURVEY
Pragya Singhal*, Rohan Mandhanya, Surbhi Verma
* Acropolis Institute of Technology & Research, Computer Science & Engineering, Indore, India
Acropolis Institute of Technology & Research, Computer Science & Engineering, Indore, India
Acropolis Institute of Technology & Research, Computer Science & Engineering, Indore, India
ABSTRACT Suppose a rule, laid down by a government, permits robots to kill others in order to save their own kind. A robot must then be able to decide who is its enemy and who is its friend; in other words, it should be able to feel an emotion such as love or hate. A plain search technique could be applied to this problem, but it would yield nothing and would not produce emotions. Another option is a knowledge base, which gives the robot a basic notion of whom to love or hate. Pre-built memories can be stored in the robot, each consisting of a task and a label indicating whether the task is good or bad: if a human helps the robot, it is a positive task, and if the human's act is harmful, it is a negative task. Word-analysis software can also be used to check whether words are positive or negative. In this paper we discuss the various techniques available for emotion recognition.
KEYWORDS Affective Computing, Face Affect Recognition, Emotional Speech, Body Gesture, Physiological Monitoring, Emotional Intelligence.
INTRODUCTION
The classical view of rationality holds that emotion and reason should not mix: for a person to act rationally, emotions should not interfere with the reasoning process. Contrary to this, research in neuroscience provides evidence that emotions play a large role in perception, learning, attention and memory, a capacity often referred to as "emotional intelligence". Emotional intelligence is a person's ability to perceive, understand, manage and express emotion, both within oneself and in dealing with others. These findings about emotion in neuroscience and psychology have attracted many researchers in computer science and artificial intelligence. Emotion is used in two computer-science areas: the sibling areas and emotion-based internal architecture systems. Human-computer interaction falls under the sibling areas, which ask whether and how the relationship between humans and machines can be improved. The second area, emotion-based internal architecture systems, focuses on evolving the agent's internal architecture itself. Affective computing is likewise used to recognize, interpret and simulate human affects and emotions; it spans computer science, psychology and cognitive science. Its origins can be traced to early philosophical inquiries into emotion, but its modern branch originated with Rosalind Picard's 1995 paper on affective computing. A motivation for this research is the ability to simulate empathy: the machine should interpret the emotional state of humans and adapt its behavior to them, giving an appropriate response to those emotions.
DIFFERENT EMOTION RECOGNITION TECHNIQUES
There are two models in cognitive science and neuroscience that describe how humans perceive and classify emotions: the continuous model and the categorical model. The former defines each facial expression as a feature vector in a face space and shows how expressions of emotion can be perceived at different intensities. The categorical model consists of C classifiers, each tuned to a specific emotion category. This model explains why the images in a morphing sequence between a happy and a surprised face are perceived as either happy or surprised, but not as something in between. A minimal illustration of both models follows.
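As a rough illustration of the two models described above, the following minimal sketch uses randomly generated data; the 64-dimensional features, the weights and the six-emotion set are placeholder assumptions, not taken from any cited work. It reads an expression intensity off a continuous face-space axis and, separately, assigns the face to whichever of C per-category classifiers responds most strongly.

```python
import numpy as np

# Hypothetical set of basic emotion categories (placeholder only).
EMOTIONS = ["happy", "sad", "angry", "fearful", "surprised", "disgusted"]

rng = np.random.default_rng(0)
face_vector = rng.normal(size=64)          # feature vector for one face image

# Continuous model: every expression is a point in a common "face space";
# intensity is read off as the projection onto an expression axis.
happy_axis = rng.normal(size=64)
happy_axis /= np.linalg.norm(happy_axis)
happy_intensity = float(face_vector @ happy_axis)

# Categorical model: C classifiers, each tuned to one emotion category;
# the face is assigned to whichever classifier responds most strongly.
W = rng.normal(size=(len(EMOTIONS), 64))   # one linear classifier per category
scores = W @ face_vector
predicted = EMOTIONS[int(np.argmax(scores))]

print(f"continuous intensity (happy axis): {happy_intensity:.2f}")
print(f"categorical prediction: {predicted}")
```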
Neither model can identify multiple simultaneous emotions, so a newer approach is to model additional categories as overlaps of a small set of basic categories. The following sections consider the features that can be used for the task of emotion recognition.
Face Affect Recognition
Various methods are used to detect and process facial expressions, including optical flow, hidden Markov models, neural network processing and active appearance models. A subject's emotional state can be estimated more robustly by fusing more than one modality (multimodal recognition, e.g. facial expressions and speech prosody, facial expressions and hand gestures, or facial expressions combined with speech and text for multimodal data and metadata analysis).
To recognize human emotions, such a system must first be built around a database. Databases fall into posed and spontaneous expression databases: in a posed expression database the participants display the different basic emotional expressions on request, while a spontaneous expression database contains natural expressions. The publicly available emotional databases contain only posed facial expressions; two of the most widely used are CK+ and JAFFE. Face affect detection still has obstacles that must be removed to fully unlock the potential of the algorithms employed. In the incipient stages of affective computing the main issue was the accuracy of modeling and tracking. As that lack of accuracy fades with the evolution of hardware, new findings and better practices, noise remains the issue left behind. Noise-removal methods exist, including neighborhood averaging, linear Gaussian smoothing, median filtering and newer methods such as the Bacterial Foraging Optimization Algorithm. Even so, the accuracy of facial affect recognition has not yet reached a level that permits widespread use, and the majority of work relies on posed expressions, which are not natural and not 100% accurate. A minimal detection-and-preprocessing sketch is given below.
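The detection-and-preprocessing stage of such a pipeline might look as follows. This is a minimal sketch using OpenCV's stock Haar-cascade face detector together with the median filtering mentioned above; the input file name is hypothetical, and the final emotion classifier (e.g. one trained on CK+ or JAFFE) is only indicated by a placeholder comment.

```python
import cv2

# Face detection stage of a face-affect pipeline (sketch only).
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("subject.jpg")                  # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)                     # median filtering to suppress noise

faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    face_patch = gray[y:y + h, x:x + w]
    face_patch = cv2.resize(face_patch, (48, 48))  # fixed-size crop for a classifier
    # emotion = trained_model.predict(face_patch)  # placeholder: model trained elsewhere

print(f"detected {len(faces)} face(s)")
```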
Body Gesture
Gestures are mainly used to detect a particular emotional state of a user, especially in combination with speech and face recognition. Gestures can be simple reflexive responses, such as lifting your shoulders when you do not know the answer to a question, or complex and meaningful, as when communicating in sign language. Gestures may involve objects, which we can touch or point to, or may use no object or surrounding environment at all, such as waving the hands. They should be efficiently recognized, analyzed and responded to in a meaningful way by the computer. Many methods have been proposed to detect body gestures. Gesture recognition can be divided into two approaches: 3D-model-based and appearance-based. The 3D-model approach uses 3D information about the key elements of the body parts to obtain important parameters such as joint angles (a minimal sketch of this idea follows), while appearance-based systems interpret images and videos directly.
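The sketch below illustrates the 3D-model-based idea: given estimated 3D joint positions (the coordinates here are made up for illustration, as is the simple waving rule), the joint angle at the elbow is recovered from the shoulder-elbow-wrist triplet and a crude rule flags a raised arm.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at joint b formed by 3D points a-b-c."""
    v1, v2 = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Hypothetical 3D key points (metres) as produced by a pose estimator.
shoulder, elbow, wrist = (0.0, 1.4, 0.0), (0.25, 1.15, 0.05), (0.30, 0.90, 0.30)
print(f"elbow angle: {joint_angle(shoulder, elbow, wrist):.1f} degrees")

# A raised-arm / waving gesture could then be flagged by simple rules on such
# parameters, e.g. wrist above shoulder height over successive frames.
if wrist[1] > shoulder[1]:
    print("possible waving gesture")
else:
    print("arm lowered")
```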
Emotional Speech
Speech is altered indirectly by the autonomic nervous system, and this information can be used to build systems that recognize affect from extracted speech features. For example, speech produced in a state of fear or joy becomes faster and louder, with a higher, wider pitch range, whereas emotions such as tiredness or sadness lead to slower, lower-pitched and slurred speech. A user's emotional state can therefore be recognized by analyzing speech patterns: vocal parameters and prosodic features are analyzed through pattern recognition. Speech-based recognition reports a success rate of about 63%, which is satisfying but not as good as other forms of emotion recognition; it remains a promising technique because many speech characteristics are independent of semantics and culture. A sketch of typical prosodic features appears below.
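The following sketch extracts a few of the prosodic features discussed above (mean pitch, pitch range, energy and a rough speech-rate proxy) using the librosa library; the audio file name and the interpretation comments are illustrative assumptions rather than a validated feature set.

```python
import numpy as np
import librosa

# Prosody features for speech-based affect recognition (sketch only).
y, sr = librosa.load("utterance.wav", sr=None)     # hypothetical recording

f0, voiced_flag, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                  fmax=librosa.note_to_hz("C7"))
f0 = f0[~np.isnan(f0)]                             # keep voiced frames only
energy = librosa.feature.rms(y=y)[0]

features = {
    "mean_pitch_hz": float(np.mean(f0)) if f0.size else 0.0,
    "pitch_range_hz": float(np.ptp(f0)) if f0.size else 0.0,  # wider in fear/joy
    "mean_energy": float(np.mean(energy)),                    # louder in fear/joy
    "speech_rate_proxy": float(np.mean(voiced_flag)),         # share of voiced frames
}
print(features)
```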
This process requires the creation of a reliable database, knowledge base or vector-space model that fits the needs of the application, as well as a successful classifier that provides quick and accurate emotion identification. The most frequently used classifiers are linear discriminant classifiers (LDC), k-nearest neighbour (k-NN), Gaussian mixture models (GMM), support vector machines (SVM), artificial neural networks (ANN), decision tree algorithms and hidden Markov models (HMM). Various studies have shown that choosing the appropriate classifier improves the performance of the system; a small comparison sketch follows.
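A minimal comparison of a few of these classifiers, assuming scikit-learn and purely synthetic feature vectors standing in for a real labelled emotional-speech database, could look like this; accuracy on random labels will of course hover around chance, and the point is only the evaluation pattern.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in data: 200 utterances, 12 prosody features, 4 emotion labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
y = rng.integers(0, 4, size=200)

classifiers = {
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
    "Decision tree": DecisionTreeClassifier(max_depth=5),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validated accuracy
    print(f"{name}: {scores.mean():.2f} accuracy")
```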
Physiological Monitoring
The emotional state of a user can also be detected by monitoring and analyzing physiological signs. These signs range from the pulse and heart rate to the minute contractions of the facial muscles. The three main physiological signals analyzed are blood volume pulse, galvanic skin response and facial electromyography.

Photoplethysmography is used to measure a person's blood volume pulse. The process produces a graph indicating blood flow through the extremities; the peaks of the waves mark the cardiac cycle, where the heart pumps blood to the extremities. If a person experiences fear or is startled, the heart jumps and beats quickly for some time, so the amplitude of the cardiac cycle increases; as the user calms down, the cycle returns to its normal level.

Another physiological sign is facial electromyography, which measures the electrical activity of the facial muscles by amplifying the tiny electrical impulses generated by muscle fibers when they contract. Two facial muscle groups are usually studied to detect emotion: the corrugator supercilii and the zygomaticus major. The former is the frowning muscle and is best used for negative, unpleasant emotional responses; the latter pulls the corners of the mouth back when we smile and is best used for positive emotional responses.

The third signal is galvanic skin response (GSR), which measures skin conductivity; conductivity in turn depends on how moist the skin is. Skin moisture is produced by the sweat glands, which are under the control of the body's nervous system, so GSR is related to the arousal state of the body: the more aroused a user is, the greater the skin conductivity and the higher the GSR reading.

All of these techniques for understanding human emotion can help the machine decide or calculate an emotion for any action. We use them to understand the human's emotion toward the machine; the machine then recognizes what emotion the human is showing in the given condition and builds a search tree from this emotional information, which is used later at the time of action. This makes decision making far easier. In this way we manipulate the emotions of humans and use them against them: the human's reaction toward the machine is checked, and if it is good it is counted as +1 in the tree for love or affection, while if the human does not act properly toward the machine it is counted as -1 in the tree for hate. These basic emotions, with their values, are then subdivided into other emotions. The tree is developed on the basic rules of addition and subtraction, which helps the machine decide easily at decision time; one possible reading of this scheme is sketched below.
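The sketch below is one possible reading of this +1/-1 scheme; the class name, the friend/enemy decision rule and the interaction history are illustrative assumptions layered on the description above, not a published algorithm.

```python
from collections import defaultdict

class EmotionTree:
    """Accumulates +1/-1 scores for basic emotions from observed human actions."""

    def __init__(self):
        self.scores = defaultdict(int)     # basic emotion -> accumulated value

    def observe(self, action_is_good: bool):
        if action_is_good:
            self.scores["love"] += 1       # positive act: +1 toward love/affection
        else:
            self.scores["hate"] -= 1       # negative act: -1 toward hate

    def decide(self) -> str:
        # Simple addition/subtraction rule over the accumulated values.
        net = self.scores["love"] + self.scores["hate"]
        if net > 0:
            return "treat as friend"
        if net < 0:
            return "treat as enemy"
        return "neutral"

tree = EmotionTree()
for act in [True, True, False, True]:      # hypothetical interaction history
    tree.observe(act)
print(dict(tree.scores), "->", tree.decide())
```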
RELATED WORK James A. Crowder, Shelli Friess, "Artificial Psychology: The Psychology of AI": in this paper the authors examine psychological constructs related to artificial intelligence and how they might play out in an artificial mind, focusing mainly on artificial cognition and artificial intuition. The paper discusses how humans perceive and accept artificial intelligence. The authors conclude that, since all of this seems to head toward superhuman technology, it would be sensible to take ethics and the human reaction to AI into consideration.
Jackeline Spinola de Freitas, Ricardo R. Gudwin, João Queiroz, "Emotion in Artificial Intelligence and Artificial Life Research: Facing Problems": in this paper the authors point out the lack of study of artificial emotions; comparisons between different projects, and comparative studies of emotion-based versus non-emotion-based experiments within the same project, are also scarce. The authors conclude that the extent to which the remaining open questions can be answered will increase our understanding of mind phenomena and allow new, robust and trustworthy artifacts to be developed. There is no guarantee that a single model can answer a given question, and the various attempts may also reveal limits that emotion-based research faces.
James A. Crowder, "The Artificial Cognitive Neural Framework": in this paper the author discusses the artificial cognitive neural framework (ACNF) architecture, its memory system and its capabilities. He concludes that the framework provides an artificially intelligent infrastructure in which the system can learn, self-adapt and react to rapid changes in situational conditions.
Sara Owsley Sood, "Emotional Computation in Artificial Intelligence Education": in this paper the author discusses emotional computation as an important aspect of research in AI. She concludes that research in this area is growing continuously and that it is an important topic to study.
Juan D. Velásquez, "When Robots Weep: Emotional Memories and Decision Making": in this paper the author shows how the acquisition of emotional memories can be achieved through the mechanisms of primary emotions, which serve as biasing mechanisms during decision making and action selection. The author presents a flexible agent architecture that integrates drives, emotions and behaviors, and that treats emotion as the main motivational system influencing how behaviors are selected and controlled. He concludes that the mechanisms of primary emotions included in the proposed model can be used to create emotional memories, or secondary emotions.
Sandeep Kumar, Medha Sharma, "Convergence of Artificial Intelligence, Emotional Intelligence, Neural Network and Evolutionary Computing": in this paper the authors discuss augmenting AI with emotions. The paper also discusses how artificial agents pass neural architecture from parent generation to child generation in order to match the behaviour and intelligence of human beings, and how a revised definition of artificial intelligence can account for some natural phenomena. The authors conclude that all the concepts introduced in the paper are based on natural phenomena and that there is no formal proof for them.
CONCLUSION In this paper we have explained how humans and emotion are interrelated, and how different researchers have worked in the fields of artificial intelligence and affective computing to detect emotions in humans. We have reviewed different techniques for emotion recognition, how they are used and what their limitations are. With more research in this field, techniques better than the existing ones can be developed, and human emotions can be detected far more reliably.
This work can be further enhanced in many ways, such as emotion-based decision making and emotion manipulation. Combining this method with decision-making and problem-solving algorithms can later increase the chances of making decisions that take emotion into account, and implementing it in machines can increase their emotional intelligence.
REFERENCES Velásquez, Juan D., "When Robots Weep: Emotional Memories and Decision Making", American Association for Artificial Intelligence (1998).
Crowder, James A., Friess, Shelli, "Artificial Psychology: The Psychology of AI", Systemics, Cybernetics and Informatics (2013).
Kumar, Sandeep, Sharma, Medha, "Convergence of Artificial Intelligence, Emotional Intelligence, Neural Network and Evolutionary Computing", International Journal of Advanced Research in Computer Science and Software Engineering (2012).
Sood, Sara, "Emotional Computation in Artificial Intelligence Education", Association for the Advancement of Artificial Intelligence (2008).
Freitas, Jackeline Spinola de, Gudwin, Ricardo R., Queiroz, João, "Emotion in Artificial Intelligence and Artificial Life Research: Facing Problems", 5th International Working Conference, IVA 2005, Kos, Greece, September 12-14 (2005).
Crowder, James A., "The Artificial Cognitive Neural Framework (ACNF)", Proceedings of the 13th Annual International Conference on Artificial Intelligence, Las Vegas, Nevada (2012).
Damasio, A., "Descartes' Error: Emotion, Reason, and the Human Brain", New York: Grosset/Putnam (1994).
Petta, Paolo and Cañamero, D., "Grounding emotions in adaptive systems: volume II", Cybernetics and Systems: An International Journal, V. 32, pp. 581-583 (2001).
Petta, Paolo and Trappl, Robert, "Emotions and agents", Multi-Agent Systems and Applications (2001).
Salovey, P. and Mayer, J. D., "Emotional intelligence", Imagination, Cognition, and Personality, 9, pp. 185-211 (1990).
Custódio, L., Ventura, R. and Pinto-Ferreira, C., "Artificial emotions and emotion-based control systems", Proceedings of the 7th IEEE International Conference on Emerging Technologies and Factory Automation, V. 2, pp. 1415-1420 (1999).
Gadanho, S. and Hallam, J., "Emotion-triggered learning in autonomous robot control", Cybernetics and Systems: An International Journal, V. 32, pp. 531-559 (2001).
Breazeal, Cynthia, "Emotion and sociable humanoid robots", International Journal of Human Computer Studies, pp. 119-155 (2003).
Bryson, Joanna and Flack, Jessica, "Emotions and Action Selection in an Artificial Life Model of Social Behavior in Non-Human Primates" (2001).
Nagpal, Renu, Nagpal, Pooja and Kaur, Sumeet, "Hybrid Technique for Human Face Emotion Detection", International Journal of Advanced Computer Science and Applications (2010).
Williams, Mark, "Better Face-Recognition Software", Technology Review, retrieved 21 March 2011.
Caridakis, G., Malatesta, L., Kessous, L., Amir, N., Raouzaiou, A. and Karpouzis, K., "Modeling naturalistic affective states via facial and vocal expressions recognition", International Conference on Multimodal Interfaces (ICMI'06), Banff, Alberta, Canada, November 2-4 (2006).
Balomenos, T., Raouzaiou, A., Ioannou, S., Drosopoulos, A., Karpouzis, K. and Kollias, S., "Emotion Analysis in Man-Machine Interaction Systems", in Bengio, Samy and Bourlard, Herve (eds.), Machine Learning for Multimodal Interaction, Lecture Notes in Computer Science 3361, Springer-Verlag, pp. 318-328 (2004).
Aggarwal, J. K. and Cai, Q., "Human Motion Analysis: A Review", Computer Vision and Image Understanding, Vol. 73, No. 3 (1999).
Pavlovic, Vladimir I., Sharma, Rajeev and Huang, Thomas S., "Visual Interpretation of Hand Gestures for Human-Computer Interaction: A Review", IEEE Transactions on Pattern Analysis and Machine Intelligence (1997).