
Detecting Occluded Faces in Unconstrained Crowd Digital Pictures

S. Janahiram1, Abeer Alsadoon1, P.W.C. Prasad1, A. M. S. Rahma2, A. Elchouemi3, S. M. N. Arosha Senanayake4
1School of Computing and Mathematics, Charles Sturt University, Sydney, Australia
2Computer Science Department, University of Technology, Baghdad, Iraq
3Hewlett Packard Enterprise
4Faculty of Science, Universiti Brunei Darussalam

Abstract— Face detection and recognition mechanisms are widely used in many multimedia and security devices, and there is a significant body of research into face detection and recognition, particularly in image processing and computer vision. However, existing systems still face significant challenges due to the limitations of their underlying algorithms. Viola-Jones and cascade classifiers are considered the best algorithms among existing systems; they can detect faces in unconstrained crowd scenes using half-face and full-face detection methods. However, the limitations of these systems affect accuracy and processing time. This paper presents a proposed solution called VJaC (Viola Jones and Cascade), based on a study of current systems, their features and their limitations. The system considers three main factors: processing time, accuracy and training. These factors are tested on different sample images and compared with current systems.

Keywords—Face Detection; Unconstrained Crowd Digital Pictures; Face Recognition

I. INTRODUCTION

In recent years, face detection and recognition systems have been widely used in every aspect of life. The initial stage of a face recognition system is detecting faces in images: the system locates the face of a person in the image to verify the information, working through image processing and computer vision. This function is described as a pre-processing stage of face detection.
However, because faces may not be fully facing the camera, or may be partially hidden in a crowded scene, difficulties arise in detecting faces accurately. The main challenge comes from differences in facial appearance. To overcome this, current solutions use an Adaboost Machine Learning (AML) approach and the Viola-Jones (VJ) algorithm to detect faces. The main feature of this solution is that the algorithm includes a skin color detection module that allows it to focus precisely on faces, using skin color to compare and identify the images [1]. This solution has improved the accuracy and performance of face detection: the system runs separate algorithms to detect full and half faces, which improves its capability. The primary objective of this solution is to detect several types of objects in the image, namely holistic faces, half faces, and skin color. Despite the success of this solution, however, challenges remain: blurred images cannot be clearly identified, and the response time is longer. To overcome these limitations, our proposed project focuses on implementing VJaC (Viola Jones and Cascade).

This paper is organized as follows. Section I introduces the project, and Section II reviews existing research into face detection. This is followed by a description of the methodology used in the proposed solution, in Section III, together with an analysis and overview of implementation strategies. Results of the current research into the proposed solution are presented and discussed in Section IV, while the conclusion is covered in Section V, which also considers possible future work.

II. RELATED WORK

A current solution to face detection in unconstrained crowd scene images was proposed by [1], based on a machine learning approach.
The main features of this solution are that several types of objects in images can be detected, including holistic and half faces. Based on testing, the accuracy rate of the solution is 95%, higher than other current solutions. Ma, Zhang and Zhang [2] proposed a modified skin-color model for detecting faces in images even when they are affected by different lighting environments. In the same way, Sharifara, Mohd Rahim and Anisi [3] proposed neural networks and a Haar feature-based cascade classifier for face detection; this solution tackles the issues caused by different expressions and appearances of faces in the image. Reney and Tripathi [4] proposed work whereby the algorithm can detect the face and the emotion on the face. Another algorithm, for face detection using multi-modal features, was proposed by Lee, Kim, Kim and Lee [5]. Kumar and Bindu [6] proposed a skin region detection algorithm based on a skin illumination compensation model for efficient face detection. Likewise, Du, Liu and Yu [7] produced an algorithm for detecting and tracking a partially occluded face; the goal of this solution is for the system to detect the faces in an image or video and compare the similarity between the detected faces. Sun, Zhang, Zhang, Chen and Lv [8] implemented multi-feature driver face detection based on area coincidence degree and prior knowledge. Guo, Lin, Wu, Chang and Lee [9] presented complexity-reduced face detection using probability-based face mask prefiltering and pixel-based hierarchical-feature Adaboosting. In the same way, Zhu and Cai [10] proposed real-time face detection using the Gentle AdaBoost algorithm and a nesting cascade structure. This solution uses the nested cascade structure, reducing the number of weak classifiers in a cascade of classifiers to speed up face detection in the image, with classifiers using Haar-like features to improve the generalizability of the node classifiers.
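The cascade idea behind detectors such as [10] can be sketched in a few lines: candidate windows pass through a sequence of increasingly strict stages, and a window is rejected as soon as any stage says no, so most non-face windows cost only one or two cheap tests. A minimal illustration follows; the stage functions and feature names are hypothetical stand-ins, not taken from any cited system.

```python
# Minimal sketch of cascade early rejection: a window must pass every
# stage to be accepted as a face; most non-face windows are discarded
# by the first cheap stages. The stage functions here are hypothetical
# stand-ins for trained boosted classifiers.

def cascade_detect(window, stages):
    """Return True only if the window passes every stage in order."""
    for stage in stages:
        if not stage(window):
            return False  # early rejection: later stages never run
    return True

# Toy stages over a window represented as a dict of hypothetical scores.
stages = [
    lambda w: w["edge_density"] > 0.1,  # very cheap, rejects flat regions
    lambda w: w["symmetry"] > 0.4,      # stricter
    lambda w: w["eye_response"] > 0.6,  # strictest, runs on few windows
]

print(cascade_detect({"edge_density": 0.5, "symmetry": 0.7, "eye_response": 0.9}, stages))  # True
print(cascade_detect({"edge_density": 0.0, "symmetry": 0.9, "eye_response": 0.9}, stages))  # False
```

This early-exit structure is why a nested cascade with fewer weak classifiers per stage speeds up detection: the full cost is paid only for windows that survive every stage.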
Similarly, Zhang, Kamata and Zhang [11] produced an algorithm for face detection and tracking in color images using color centroids segmentation. The system uses RGB color centroids segmentation for face detection, making this a face detection method that combines region growing with the structural character of facial features. A different approach was taken by Chae, Han, Seo and Yang [12], who proposed an efficient face detection solution based on a color filtering method; this system also uses skin color to extend conventional face detectors. Zheng and Yao [13] produced a solution for detecting multi-angle faces based on the DP-Adaboost algorithm, which detects more faces in the image using a frontal cascade classifier and a profile cascade classifier. The authors improved the Adaboost algorithm with the fusion of a frontal face classifier, which helped to create strong classifiers for face detection. Pavani, Delgado-Gomez and Frangi used Gaussian weak classifiers based on co-occurring Haar-like features for face detection [14]; this algorithm increased accuracy by 38% and reduced the false detection rate by 42%. The main goal of that solution is to provide a decision tree that reduces the computational effort of the system by allowing it to classify a window as either object or clutter. Hassaballah, Murakami and Ido proposed a face detection approach relying on the golden ratio [15], which is more realistic and accurate than other methods. Kasinski and Schmidt [16] proposed an architecture for a face and eye detection system based on Haar cascade classifiers. Based on a comparison of these algorithms, it is clear that these solutions have used different algorithms and training to produce accurate output. However, Viola-Jones and cascade classifiers are more accurate than the other algorithms.
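The AdaBoost-based detectors surveyed above share a common structure: a strong classifier formed as a weighted vote over many weak classifiers, each typically a threshold on a single Haar-like feature. A minimal sketch of that voting step is shown below; the weak classifiers, feature names and weights are illustrative assumptions, not values from any cited system.

```python
# Minimal sketch of an AdaBoost-style strong classifier: a weighted
# vote over weak classifiers, each returning +1 (face) or -1 (non-face).
# The weak classifiers and alpha weights below are illustrative stand-ins;
# in Viola-Jones they are thresholds on Haar-like features learned by boosting.

def strong_classify(window, weak_classifiers, alphas, threshold=0.0):
    """Return True (face) if the weighted vote exceeds the threshold."""
    score = sum(a * h(window) for h, a in zip(weak_classifiers, alphas))
    return score >= threshold

# Toy weak classifiers over a "window" represented as a dict of
# hypothetical precomputed feature values.
h1 = lambda w: 1 if w["eye_band_contrast"] > 0.3 else -1
h2 = lambda w: 1 if w["nose_bridge_brightness"] > 0.5 else -1
h3 = lambda w: 1 if w["mouth_band_contrast"] > 0.2 else -1

weak = [h1, h2, h3]
alphas = [0.9, 0.6, 0.4]  # higher weight = weak classifier was more accurate in training

face_like = {"eye_band_contrast": 0.8, "nose_bridge_brightness": 0.7, "mouth_band_contrast": 0.5}
flat_patch = {"eye_band_contrast": 0.0, "nose_bridge_brightness": 0.1, "mouth_band_contrast": 0.0}

print(strong_classify(face_like, weak, alphas))   # True
print(strong_classify(flat_patch, weak, alphas))  # False
```

Boosting chooses the weak classifiers and their weights iteratively, re-weighting the training samples after each round; the sketch shows only the final voting rule that the trained detector applies per window.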
The OpenCV library also provides better training for face recognition compared to other components.

A. Processing Time Analysis
Based on our analysis of the current solutions, we have chosen Viola-Jones by Gul et al. [1] and the frontal and profile cascade by Zheng et al. [13] as the best algorithms. Here, the algorithm detects the faces in two parts: first the system detects the full faces, and then it detects half faces in the images. However, because the steps must run separately, the system requires longer processing time for detecting faces. To overcome this issue, in our proposed solution the half-face detection method and the full-face detection method work in parallel to produce the output, as will be presented in more detail later. Our proposed model has therefore reduced the processing time.

B. Accuracy Analysis
On accuracy, the VJ algorithm proposed by Gul et al. [1] and the frontal and profile cascade classifiers (FPC) proposed by Zheng et al. [13] have been identified as the best algorithms. Comparing the accuracy rate of each algorithm, the VJ achieves 95% and the FPC 94%, as shown in Table 1. The accuracy rate is calculated by analyzing the algorithms on a significant number of sample images: the VJ algorithm is tested with images of different quality containing crowded scenes and different face appearances, while the FPC is tested in different lighting environments. The VJ uses Adaboost for classifying images, but in the FPC the cascade classifiers identify the frontal as well as the profile face. This finds more faces in the image than the VJ, increasing the accuracy. Hence, the proposed hybrid solution uses VJ with a cascade classifier to increase accuracy by identifying full, half and profile faces in the image.

C. Training for Face Recognition
The current Viola-Jones (VJ) solution by Gul et al. [1] trains the classifier using sample images to identify similarities between faces. When the system detects a face, the face recognition function separates the face from the image and uses the recognition algorithm to identify similarities between the faces, using trained images. Furthermore, the algorithm used in this system knows how to identify the dynamic classifications, because the face detection method is dynamic. However, if the detected face is not available in the directory, the system cannot recognize it.
Hence, our proposed solution is implemented with OpenCV to train the cascade classifier. This makes it possible to train the system manually, and whenever the system detects a new face, it can take it as a sample for further face recognition.

III. PROPOSED WORK

The proposed integrated system includes Viola Jones and Cascade (VJaC) as the first part, for image detection, and training as the second component, for recognition purposes. VJaC is implemented in Visual Studio with the C++ programming language to produce better outcomes than existing solutions. VJaC can detect frontal faces and faces in profile using a cascade classifier. Further, the system uses an eye detection method to handle difficult cases such as women wearing a hijab or niqab, half-covered faces and thickly made-up faces. The VJaC system begins with right and left half-face detection in the first stage and filters the faces in the second stage. The third stage, in the second part of the system, is a training stage for recognizing faces in the images using OpenCV. As the system diagram shows, each of the three stages has a function to perform to produce the results according to the input. These three stages are performed one after another.

A. Right and Left Half-Face Detection
Right and left half-face detection is the first stage of the system. Initially, the face detection part of VJaC receives the sample images, which are entered by the user, to detect the right and left half faces.

Fig. 1. VJaC Diagram

The system uses the Viola-Jones algorithm, modified by us, to perform this task. The algorithm reads the image and copies it to a frame. It then reads all the pixels and compares the face detection properties against the XML file, which contains the information about the face detection properties. Here, the system checks all right and left occluded faces. The system then sends the results of this stage to the face filtering stage.
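The per-pixel pass described above is efficient in Viola-Jones because of the integral image: once computed, the sum of any rectangular region (and hence any Haar-like feature, which is a difference of such sums) costs only four array lookups, regardless of the rectangle's size. The sketch below is a conceptual illustration of that data structure, not the paper's implementation.

```python
# Sketch of the integral image underlying Viola-Jones Haar-feature
# evaluation: ii[y][x] holds the sum of all pixels above and to the
# left of (x, y), so any rectangle sum needs only four lookups.
# Conceptual illustration only, not the paper's implementation.

def integral_image(img):
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            # standard integral-image recurrence
            ii[y + 1][x + 1] = (img[y][x] + ii[y][x + 1]
                                + ii[y + 1][x] - ii[y][x])
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the w-by-h rectangle whose top-left corner is (x, y)."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
print(rect_sum(ii, 0, 0, 3, 3))  # 45: the whole image
print(rect_sum(ii, 1, 1, 2, 2))  # 28: bottom-right 2x2 block (5+6+8+9)
```

A two-rectangle Haar-like feature is then simply `rect_sum` of one region minus `rect_sum` of an adjacent one, which is what makes scanning every window position and scale tractable.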
Fig. 1 shows the input, output and process steps of our first proposed algorithm. Table 1 shows the processing-time results of the VJaC system.

Table 1. Processing Time Analysis
Sample Image | Image Format | Size | Features | Processing Time (s)
(image) | JPEG | 172*159 | Thick makeup | 0.071

B. Face Filtering
In the face filtering stage, the system filters all the faces in the image. In this stage, we have used the frontal and profile cascade classifiers to create a strong classifier for frontal and profile faces. The classifier eliminates false faces in the image to increase accuracy. These classifiers also separate faces from related objects for full-face detection in the system. They help to detect multi-angle faces through classifier fusion, and they create fewer weak classifiers to reduce false detections in the image. Table 2 shows the results of the accuracy analysis of the VJaC system.

Table 2. Accuracy Analysis
Sample Image | VJaC (grey-scale conversion) | VJaC (output) | Detected Faces | Accuracy Rate
(image) | (image) | (image) | 1 out of 1 | 97%

C. Training and Face Recognition
Face recognition is the third and final step of the proposed system. Before the system can recognize faces, it has to be trained. In the training process, the user uploads the sample images to the system together with a text file containing information about the sample images and their names. The system then keeps all the files in the system directory and creates a Comma-Separated Values (CSV) file to locate the images and their names. After that, OpenCV converts the input image to grey scale and compares it with the training samples with respect to skin color, face appearance, features, face size and distance, in order to recognize the face. Table 3 shows the training results of the VJaC system.

Table 3. Training Results
Samples | Training | Grey-scale conversion | Proposed Solution (VJaC) | Extracted Images | Output
(images omitted)

IV.
RESULTS AND DISCUSSION

In this section, we consider the implementation of the proposed face detection and recognition system and present final results for each factor. For testing and training the system, sample images were collected from two different sources: the Computer Vision Laboratory (CVL) database and Google Images. The CVL database was created by the University of Ljubljana. It contains 114 subjects, and for 111 subjects 7 images per person were included, showing 7 different appearances. For example, in the far-left pose the face appears at a 45° angle with a serious expression, whereas the far-right pose is at a 135° angle, smiling, with and without teeth showing. The images are JPEGs with a resolution of 640 * 480 pixels, taken with a Sony Digital Mavica under uniform illumination, without flash, and with a projection screen as the background. Google Images indexes a large number of images available for download in different sizes and qualities; here, we can search for and download the images we need for testing and training. The collected images were tested using all current and proposed solutions. Figs. 2, 3 and 4 show the results as covariance matrices of raw data for the VJ, the FPC and the VJaC part of the proposed system. Based on the covariance matrices, the VJaC results are clearer than the other two: VJaC has reduced noise in the image compared to VJ and FPC, meaning VJaC's accuracy is higher and its processing time is reduced. After the 20-sample-image experiment, the average accuracies were 97%, 70% and 75% for VJaC, VJ and FPC respectively. The VJaC, VJ and FPC systems were tested with images of different standards: thick makeup, different lighting environments, faces with glasses, blurred images, half-covered faces, images with hijab and niqab, and faces in profile.
These images came in two different formats, JPEG and PNG. Further, the VJaC system detects full and half faces in parallel, whereas VJ runs them as two sequential steps; this means VJaC can complete the task faster than VJ, with less processing time. Finally, the VJaC
conducts training with OpenCV. Here, the system has to be trained on the sample dataset. When the user tries to recognize a face, the system checks it against all the sample images and shows information about the face. If the system cannot recognize the face, it crops the face and keeps it in the system directory with face information for further face recognition. On each of the factors, VJaC has shown significant improvements over the current systems. Table 4 shows the results for all three solutions.

Fig. 2. VJ Results
Fig. 3. FPC Results
Fig. 4. VJaC Results

Table 4. Samples of different stages of each solution
Sample 1: sample image, grey-scale conversion and face detection for VJaC, VJ and FPC (images omitted)

V. CONCLUSION

In conclusion, a significant body of research exists in the area of image processing; however, there are as yet no perfect solutions for technology-based face recognition. Current face detection systems have significant applications in areas such as automated access control, computer vision, communication and machine learning approaches. Substantial numbers of systems have been implemented for face detection with increasing accuracy. The proposed system, Viola Jones and Cascade Classifier (VJaC), focuses mainly on accuracy, processing time and training. It is a hybrid solution based on the existing solutions VJ and FPC. VJaC is a multi-stage system that can detect full faces, half faces and faces in profile using Viola-Jones and cascade classifiers. Further, the system uses an eye detection method to detect difficult faces, for example those wearing a hijab or niqab. The system has a training stage where it learns to recognize faces in images using OpenCV. The proposed system has been compared with existing solutions across different samples. The results show that the proposed system significantly improves accuracy and needs less processing time than the existing solutions.

REFERENCES
[1] S. Gul and H.
Farooq, "A machine learning approach to detect occluded faces in unconstrained crowd scene," Cognitive Informatics & Cognitive Computing (ICCI*CC), 2015 IEEE 14th International Conference on, Beijing, 2015, pp. 149-155.
[2] X. Ma, H. Zhang and X. Zhang, "A face detection algorithm based on modified skin-color model," Control Conference (CCC), 2013 32nd Chinese, Xi'an, 2013, pp. 3896-3900.
[3] A. Sharifara, M. S. Mohd Rahim and Y. Anisi, "A general review of human face detection including a study of neural networks and Haar feature-based cascade classifier in face detection," Biometrics and Security Technologies (ISBAST), 2014 International Symposium on, Kuala Lumpur, 2014, pp. 73-78.
[4] D. Reney and N. Tripathi, "An efficient method to face and emotion detection," Communication Systems and Network Technologies (CSNT), 2015 Fifth International Conference on, Gwalior, 2015, pp. 493-497.
[5] H. Lee, S. Kim, S. Kim and S. Lee, "Face detection using multi-modal features," Control, Automation and Systems (ICCAS 2008), International Conference on, Seoul, 2008, pp. 2152-2155.
[6] C. N. R. Kumar and A. Bindu, "An efficient skin illumination compensation model for efficient face detection," IEEE Industrial Electronics, IECON 2006 - 32nd Annual Conference on, Paris, 2006, pp. 3444-3449.
[7] X. Du, C. Liu and Y. Yu, "Analysis of detection and track on partially occluded face," Information Technology and Applications (IFITA '09), International Forum on, Chengdu, 2009, pp. 158-161.
[8] W. Sun, W. Zhang, X. Zhang, G. Chen and C. Lv, "Multi-feature driver face detection based on area coincidence degree and prior knowledge," Industrial Electronics and Applications (ICIEA 2009), 4th IEEE Conference on, Xi'an, 2009, pp. 222-225.
[9] J. M. Guo, C. C. Lin, M. F. Wu, C. H. Chang and H. Lee, "Complexity reduced face detection using probability-based face mask prefiltering and pixel-based hierarchical-feature Adaboosting," IEEE Signal Processing Letters, vol. 18, no. 8, pp. 447-450, Aug. 2011.
[10] J. Q. Zhu and C. H. Cai, "Real-time face detection using Gentle AdaBoost algorithm and nesting cascade structure," Intelligent Signal Processing and Communications Systems (ISPACS), 2012 International Symposium on, New Taipei, 2012, pp. 33-37.
[11] N. B. Zahir, R. Samad and M. Mustafa, "Initial experimental results of real-time variant pose face detection and tracking system," Signal and Image Processing Applications (ICSIPA), 2013 IEEE International Conference on, Melaka, 2013, pp. 264-268.
[12] Q. Zhang, S. Kamata and J. Zhang, "Face detection and tracking in color images using color centroids segmentation," Robotics and Biomimetics (ROBIO 2008), IEEE International Conference on, Bangkok, 2009, pp. 1008-1013.
[13] Y. Zheng and J. Yao, "Multi-angle face detection based on DP-Adaboost," International Journal of Automation and Computing, vol. 12, no. 4, pp. 421-431, 2015.
[14] S. Pavani, D. Delgado-Gomez and A. Frangi, "Gaussian weak classifiers based on co-occurring Haar-like features for face detection," Pattern Analysis and Applications, vol. 17, no. 2, pp. 431-439, 2012.
[15] M. Hassaballah, K. Murakami and S. Ido, "Face detection evaluation: a new approach based on the golden ratio," Signal, Image and Video Processing, vol. 7, no. 2, pp. 307-316, 2011.
[16] A. Kasinski and A. Schmidt, "The architecture and performance of the face and eyes detection system based on the Haar cascade classifiers," Pattern Analysis and Applications, vol. 13, no. 2, pp. 197-211, 2009.
