Volume 4, Issue 12, December 2014 ISSN: 2277 128X
International Journal of Advanced Research in
Computer Science and Software Engineering
Available online at: www.ijarcsse.com
Classification of Human Facial Expression based on Mouth
Feature using SUSAN Edge Operator
Prasad M*
Dept. of Master of Computer Applications, Bapuji Institute of Engineering and Technology, Davanagere-4, India

Ajit Danti
Dept. of Computer Applications, JNN College of Engineering, Shimoga-4, India

Abstract: In this paper, human facial expressions are recognized based on the mouth feature using the SUSAN edge detector [3,4]. The face part is segmented from the face image, the mouth feature is separated from it, and potential geometrical features are used to determine facial expressions such as surprise, neutral, sad and happy.
Experimentation is done on standard JAFFE database images of different people, and the efficacy of the results is discussed.
Keywords - Edge projection analysis, facial features, feature extraction, segmentation, SUSAN threshold edge detection operator.
I. INTRODUCTION
This paper presents a method to identify the facial expressions of a user by processing images taken from a webcam. This is done by passing the image through three stages: face detection, feature extraction, and expression recognition. The combination of the SUSAN edge detector, edge projection analysis and a facial geometry distance measure is well suited to locating and extracting facial features from grayscale images in constrained environments, and a feed-forward back-propagation neural network is used to recognize the facial expression. To attain good recognition performance, most current expression recognition approaches require some control over the imaging conditions, yet many real-world applications demand operational flexibility. In particular, research into automatic expression recognition systems capable of adapting their knowledge periodically or continuously has not received much attention. A system that performs these operations accurately and in real time would be a big step toward human-like interaction between man and machine.
The most expressive way humans display emotions is through facial expressions. A related method performs expression recognition based on global LDP features and local LDPv features with SVM decision-level fusion, which retains the influence of the global face while highlighting the local regions that contribute most to expression changes.
Deriving an effective facial representation from original face images is a vital step for successful facial expression recognition. The accuracy of facial expression recognition depends mainly on accurate extraction of the facial feature components. A facial feature carries three types of information: texture, shape, and a combination of texture and shape. The face can be represented using statistical local features, such as local binary patterns (LBP), for person-independent expression recognition.
LBP is used for texture analysis along with a Support Vector Machine, giving good performance even on low-resolution images. One of the fundamental issues in facial expression analysis is the representation of the visual information that an examined face might reveal. For successful recognition, deriving an effective facial representation from the original face images is a crucial step, and there are two common approaches to extracting facial features: geometric feature-based methods and appearance-based methods [4,9].
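As a point of comparison for the appearance-based approach mentioned above, the basic LBP texture descriptor can be sketched as follows. This is a minimal, illustrative implementation of the standard 8-neighbour LBP (not the paper's method); the function names and the simple histogram normalisation are the author's own choices here.

```python
import numpy as np

def lbp_8_1(img):
    """Basic 8-neighbour LBP code for each interior pixel.

    `img` is a 2-D grayscale array; returns codes in [0, 255] for the
    interior pixels (image borders are dropped for simplicity).
    """
    c = img[1:-1, 1:-1]  # nucleus pixels
    # neighbour offsets in clockwise order starting at the top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offs):
        # shifted view of the same interior window
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (n >= c).astype(np.int32) << bit  # one bit per neighbour
    return codes

def lbp_histogram(img, bins=256):
    """Normalised histogram of LBP codes: a simple texture descriptor."""
    codes = lbp_8_1(img)
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()
```

In practice the face is usually divided into a grid of cells and the per-cell histograms are concatenated before being fed to an SVM, as in the LBP expression-recognition work cited above.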
Multiple face region features can be selected by the AdaBoost algorithm. The face is divided by AdaBoost into sub-regions, such as the eyes, mouth and nose, based on orthogonal-complement principal component analysis features of the multiple regions. The region combinations are used as input to an AdaBoost classifier, which at each stage chooses the best combination before changing the weights for the next iteration. The SUSAN operator is used to locate corners for the different feature points to increase accuracy.
II. DATA COLLECTION
In this work the JAFFE (Japanese Female Facial Expression) database, developed at Kyushu University, is used. The JAFFE database is made up of 213 images of ten persons, each showing anger, disgust, fear, happiness, sadness, surprise and a neutral expression. There are 2-4 images for every facial expression, and all images are 256 × 256 grayscale. The photos were taken at the Psychology Department of Kyushu University. A few samples are shown in Fig. 1.
III. PROPOSED METHODOLOGY
The proposed method detects the face part depending upon the given measurements. The algorithm then crops the selected facial part from the image, and this cropped image is divided horizontally into two parts about the located central point of the image.
We ignore the upper portion and concentrate on the lower portion. In this portion the SUSAN algorithm [3,4] selects the largest part, which is the mouth, as compared to the eyes and nose. The SUSAN algorithm then generates a binary image of the mouth, which is complemented for conversion. The complemented image is then checked for noise; if present, the noise is removed to a large extent and small gaps are filled. Then the image is filled using RGB as parameters.
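The binarise, complement, denoise and hole-fill steps described above can be sketched roughly as follows. This is an illustrative numpy sketch, not the paper's implementation: the threshold value, the 4-neighbour noise criterion, and the one-pass hole fill are all simplifying assumptions.

```python
import numpy as np

def clean_mouth_mask(gray, thresh=128, min_neighbours=2):
    """Illustrative binarise -> complement -> denoise -> fill pipeline
    for a cropped mouth region (parameters are assumptions)."""
    binary = (gray > thresh).astype(np.uint8)   # binarise the crop
    comp = 1 - binary                           # complement: mouth -> foreground

    # crude noise removal: drop foreground pixels that have fewer than
    # `min_neighbours` foreground 4-neighbours
    padded = np.pad(comp, 1)
    neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
             padded[1:-1, :-2] + padded[1:-1, 2:])
    cleaned = comp.copy()
    cleaned[(comp == 1) & (neigh < min_neighbours)] = 0

    # crude hole filling: turn on background pixels surrounded on all
    # four sides by foreground
    padded = np.pad(cleaned, 1)
    neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
             padded[1:-1, :-2] + padded[1:-1, 2:])
    filled = cleaned.copy()
    filled[(cleaned == 0) & (neigh == 4)] = 1
    return filled
```

A production system would typically use proper morphological opening and flood-fill instead of these single-pass approximations.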
The steps involved in the proposed system are as follows.
Step 1: Preprocessing: The quality of the given input image is enhanced by different filters, such as the median, average and Wiener filters, according to the noise present in the image, and the contrast is improved by histogram equalization, adaptive equalization, etc.
Step 2: Mouth Detection: Edge detectors such as Sobel, Canny and Prewitt are applied to the image to detect edges, and the face boundary is located using a suitable threshold value. Facial feature candidates are then located by a geometrical method. It is assumed that in most faces the vertical distance between the eyes and the mouth is proportional to the horizontal distance between the two eye centres. Here only the mouth area is considered.
© 2014, IJARCSSE All Rights Reserved. Prasad et al., International Journal of Advanced Research in Computer Science and Software Engineering 4(12), December 2014, pp. 440-443.
Step 3: SUSAN operator to detect corners for the different features: Various edge detectors are available in digital image processing, such as Sobel, Canny and Prewitt, but they can only detect edges. The SUSAN operator has the advantage of locating corners of an image in addition to edges. So, to improve the accuracy of feature-point extraction, the SUSAN operator is applied to the face area to detect the far and near corners of the two eyes and the two corners of the mouth area.
Step 4: Geometrical features such as the area, height and width of the mouth are extracted for the purpose of expression recognition.
Step 5: Facial expressions such as surprise, neutral, sad and happy are recognized based on the range of statistical values given for each expression satisfying the condition.
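The SUSAN response used in Step 3 can be sketched as follows. SUSAN counts, for each pixel, the neighbours whose brightness is within a threshold t of the nucleus (the USAN area); a small USAN signals an edge or corner. This sketch uses a square window instead of the operator's classical circular 37-pixel mask, and the parameter values are illustrative assumptions.

```python
import numpy as np

def susan_response(img, t=27, radius=3, corner=True):
    """Minimal SUSAN response sketch: response = g - USAN where the
    USAN area falls below the geometric threshold g, else 0."""
    H, W = img.shape
    img = img.astype(np.int32)
    n_max = (2 * radius + 1) ** 2 - 1            # neighbours in the window
    # half of n_max for corners, three-quarters for edges
    g = n_max / 2 if corner else 3 * n_max / 4
    resp = np.zeros((H, W))
    for y in range(radius, H - radius):
        for x in range(radius, W - radius):
            nucleus = img[y, x]
            win = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
            # USAN: neighbours with brightness similar to the nucleus
            usan = (np.abs(win - nucleus) <= t).sum() - 1  # exclude nucleus
            resp[y, x] = g - usan if usan < g else 0
    return resp
```

On a flat region the USAN is large and the response is zero; at a corner only about a quarter of the window matches the nucleus, giving a strong response, which is why SUSAN can localise eye and mouth corners rather than just edges.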
The algorithm matches against the facial expression types stored in the trained images and outputs the matched one, for which various geometrical features such as the area, height and width of the mouth portion are calculated. Sample experimental results are shown in Fig. 3 and statistical results are tabulated in Table I.
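The geometrical measurements and the range-based decision in Steps 4-5 can be sketched as below. The feature extraction (area, bounding-box height and width of the mouth blob) follows the text; the aspect-ratio thresholds in the classifier are purely illustrative assumptions, since the paper's actual ranges are given in Table I.

```python
import numpy as np

def mouth_geometry(mask):
    """Area, height and width of the mouth blob in a binary mask."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return 0, 0, 0
    area = int(ys.size)                      # number of foreground pixels
    height = int(ys.max() - ys.min() + 1)    # bounding-box height
    width = int(xs.max() - xs.min() + 1)     # bounding-box width
    return area, height, width

def classify_expression(height, width):
    """Rule-of-thumb thresholds on the mouth aspect ratio
    (hypothetical values, not the ranges from Table I)."""
    ratio = height / width
    if ratio > 0.6:
        return "surprise"   # wide-open mouth
    if ratio > 0.4:
        return "happy"
    if ratio > 0.25:
        return "neutral"
    return "sad"            # thin, flat mouth
```

A trained system would calibrate these ranges per expression from the training images rather than hard-coding them.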
Fig. 3 (a) Input image (b) Detected face (c) Cropped face (d) Extracted mouth portion (e) Binary conversion (f) Complemented (g) Noise removed (h) Holes filled (i) Mouth expression
IV. EXPERIMENTAL RESULTS AND ANALYSIS
In this work, 25 images of three persons are selected from the JAFFE database. Table II shows the experimental results on the JAFFE database for the four different facial expressions (1 is happiness, 2 is neutral, 3 is sadness, 4 is surprise).
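Results of this kind are usually summarised as overall and per-expression recognition rates computed from a confusion matrix. The sketch below shows that computation; the confusion counts are hypothetical numbers chosen only to total 25 test images, not values from Table II.

```python
import numpy as np

def accuracy_from_confusion(conf):
    """Overall and per-class recognition rates from a confusion matrix
    (rows = true expression, columns = predicted expression)."""
    conf = np.asarray(conf, dtype=float)
    overall = conf.trace() / conf.sum()          # correct / total
    per_class = conf.diagonal() / conf.sum(axis=1)
    return overall, per_class

# hypothetical counts for (happy, neutral, sad, surprise)
conf = [[6, 0, 1, 0],
        [0, 5, 1, 0],
        [1, 1, 4, 0],
        [0, 0, 0, 6]]
```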
V. CONCLUSION AND FUTURE SCOPE
In this paper an attempt is made to recognize different expressions of people using the mouth as a parameter with the SUSAN operator. The proposed system is tested on the JAFFE database for facial expressions of different people in different moods, and satisfactory results are obtained.
In the future, other facial expressions will also be recognized.
REFERENCES
Caifeng Shan, Shaogang Gong, Peter W. McOwan, “Facial expression recognition based on Local Binary Patterns: A comprehensive study”, Image and Vision Computing, vol. 27, pp. 803-816, 2009.
B. J. Chilke, D. R. Dandekar, “Facial Feature Point Extraction Methods - Review”, International Conference on Advanced Computing, Communication and Networks, 2011.
T. Gritti, C. Shan, V. Jeanne, R. Braspenning, “Local Features based Facial Expression Recognition with Face Registration Errors”, IEEE, 2008.
G. Hemalatha, C. P. Sumathi, “A Study of Techniques for Facial Detection and Expression Classification”, International Journal of Computer Science & Engineering Survey (IJCSES), vol. 5, no. 2, April 2014.
Jiaming Li, Geoff Poulton, Ying Guo, Rong-Yu Qiao, “Face Recognition Based on Multiple Region Features”, Proc. VIIth Digital Image Computing: Techniques and Applications, Sun C., Talbot H., Ourselin S. and Adriaansen T. (Eds.), 10-12 Dec. 2003, Sydney.
Juxiang Zhou, Tianwei Xu, Jianhou Gan, “Feature Extraction based on Local Directional Pattern with SVM Decision-level Fusion for Facial Expression Recognition”, International Journal of Bio-Science and Bio-Technology, vol. 5, no. 2, April 2013.
S. P. Khandait, R. C. Thool, “Automatic Facial Feature Extraction and Expression Recognition based on Neural Network”, International Journal of Advanced Computer Science and Applications, vol. 2, no. 1, January 2011.
V. P. Lekshmi, M. Sasikumar, “Analysis of Facial Expression using Gabor and SVM”, International Journal of Recent Trends in Engineering, vol. 1, no. 2, May 2009.
Liying Lang, Zuntao Hu, “The Study of Multi-Expression Classification Algorithm Based on Adaboost and Mutual Independent Feature”, Journal of Signal and Information Processing, pp. 270-273, 2011.
Maja Pantic, Leon J. M. Rothkrantz, “Automatic Analysis of Facial Expressions: The State of the Art”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 12, December 2000.
Srinivasa K G, Inchara Shivalingaiah, Lisa Gracias, Nischit Ranganath, “Facial Expression Recognition System Using Weight-Based Approach”, MSRIT.
Y. Tian, T. Kanade, J. Cohn, “Facial Expression Analysis”, in Handbook of Face Recognition, Springer, 2005, Chapter 11.
Xiaoyi Feng, Baohua Lv, Zhen Li, Jiling Zhang, “A Novel Feature Extraction Method for Facial Expression Recognition”.
H. Yamada, “Visual Information for Categorizing Facial Expressions of Emotions”, Applied Cognitive Psychology, vol. 7, pp. 257-270, 1993.