International Journal of Electrical and Computer Engineering (IJECE)
February 2017, pp. 238-243
ISSN: 2088-8708, DOI: 10.11591/ijece

Improved Algorithm for Pathological and Normal Voices Identification

Brahim Sabir1, Fatima Rouda2, Yassine Khazri3, Bouzekri Touri4, Mohamed Moussetad5
1,3,5 Physics Department, Faculty of Science Ben M'Sik, Casablanca, Morocco
2 Chemistry Department, Faculty of Science Ben M'Sik, Casablanca, Morocco
4 Language and Communication Department, Faculty of Science Ben M'Sik, Casablanca, Morocco

Article history: Received Mar 4, 2016; Revised May 18, 2016; Accepted Jun 4, 2016
Keywords: Classification methods, Communication disorders, Neural networks, Voice disorders

ABSTRACT
Many papers address the automatic classification of normal and pathological voices, but they lack an estimation of the degree of severity of the identified voice disorders. The aim of this work is to build a model for identifying pathological and normal voices that can also evaluate the degree of severity of the identified voice disorders among students. We present an automatic classifier that uses acoustic measurements on recorded sustained vowels /a/ and pattern recognition tools based on neural networks. The training set was built by classifying students' recorded voices according to thresholds from the literature. We retrieve the pitch, jitter, shimmer and harmonic-to-noise ratio values of the speech utterance /a/, which constitute the input vector of the neural network. The degree of severity is estimated to evaluate how far the parameters are from the standard values, expressed as a percentage deviation from the normal values. The database used for testing the proposed algorithm is formed by healthy and pathological voices from the German database of voice disorders. The performance of the proposed algorithm is evaluated in terms of accuracy, sensitivity and specificity. The classification rate is 90% for the normal class and 95% for the pathological class.

Copyright © 2017 Institute of Advanced Engineering and Science. All rights reserved.

Corresponding Author:
Brahim Sabir, Department of Physics, Faculty of Science Ben M'Sik, Casablanca, Morocco.
Email: sabir.brahim@hotmail.

1. INTRODUCTION
In the academic field, among others, voice is considered the main tool of human communication, and it has been shown to carry much information about the general health and well-being of a student. Voice disorders therefore cause significant changes in speech and particularly affect a student's academic results and overall well-being. Medical methods to assess these voice disorders rely either on inspection of the vocal folds or on a physician's direct audition, which is subjective and based on perceptual analysis; both depend on the physician's experience. Acoustic analysis based on instrumental evaluation, which comprises acoustic and aerodynamic measurements of normal and pathological voices, has become increasingly interesting to researchers because of its non-intrusive nature and its potential to provide quantitative data within a reasonable analysis time. Speech disorder assessment can be made by a comparative analysis between pathological acoustic patterns and the normal acoustic patterns saved in a database. In voice processing, three principal approaches can be distinguished: the acoustic approach, the parametric and non-parametric approach, and statistical methods.
The first approach consists of comparing acoustic parameters between normal and abnormal voices, such as fundamental frequency, jitter, shimmer, harmonic-to-noise ratio and intensity. The second approach relies on parametric and non-parametric methods for feature selection. Finally, since the classification of voice pathology can be seen as a pattern recognition problem, statistical methods are also an important approach.
This paper is organized as follows: Section 2 is dedicated to the relevant acoustic parameters that differentiate pathological from normal voices; the classification methods are presented in Section 3 and the degree of severity in Section 4; Section 5 deals with the thresholds defined in the literature to differentiate pathological and normal voices; the proposed method is presented in Section 6 and the results in Section 7; the last section is reserved for the conclusion and future work.

2. RELEVANT ACOUSTIC PARAMETERS
Analysis of the voice signal is performed by extracting acoustic parameters using digital signal processing techniques. However, the number of such parameters is too large for all of them to be analyzed, which leads to the selection of the relevant ones. Many papers identify HNR (harmonic-to-noise ratio), pitch and shimmer as relevant parameters for identifying pathological voices among normal ones. Other papers found that jitter, the normalized autocorrelation of the residual signal at the pitch lag, shimmer and noise measures such as HNR are useful for identifying pathological voices. Amplitude perturbation, voice break analysis, subharmonic analysis and noise-related analysis have also been reported as the most relevant measures, while other work considers pitch, jitter, shimmer, the Amplitude Perturbation Quotient (APQ), the Pitch Perturbation Quotient (PPQ), the Harmonics-to-Noise Ratio (HNR), the Normalized Noise Energy (NNE), the Voice Turbulence Index (VTI), the Soft Phonation Index (SPI), Frequency Amplitude Tremor (FATR) and the Glottal-to-Noise Excitation (GNE) as relevant. Several works have also tested combinations of the mentioned features.

3. CLASSIFICATION METHODS
Various pattern classification methods have been used, such as Gaussian Mixture Models (GMM) and artificial neural networks (ANN), Bidirectional Neural Networks (BNN), the multilayer perceptron (MLP), which achieved a classification rate of 96%, Support Vector Machines (SVM), Genetic Algorithms (GA), methods based on hidden Markov models, the use of modulation spectra, and classification based on multilayer networks. The correct classification rates obtained in previous research to distinguish between pathological and healthy voices vary significantly: 89.1%, 91.8%, 99.44%, 90.1%, 85.3% and 88.2%. However, comparison among the studies carried out is very complex due to the wide range of measures, data sets and classifiers employed. Authors have reported detection accuracies from 80% to 99%; the results depend on the efficiency of the methods, the choice of classifiers and the characteristics of the databases. The best classification was obtained using nine acoustic measures, achieving an accuracy of 96.5%.

4. DEGREE OF SEVERITY
Currently, the hard part of pathological voice detection is discriminating light or moderate pathological voices from normal subjects. The degree of severity is rated according to the G (grade) parameter of the GRBAS scale.
On this G-based scale, a normal voice is rated as 0, a slight voice disorder as 1, a moderate voice disorder as 2 and, finally, a severe voice disorder as 3.

5. THRESHOLDS IN THE LITERATURE
Based on the literature, we considered pitch, jitter, shimmer and the harmonic-to-noise ratio (HNR) as the relevant parameters for identifying pathological voices. In order to classify the recorded utterances initially, we relied on the thresholds defined in the literature, as listed in Table 1 and Table 2.

Table 1. Recommended pitch values for female and male signals
Parameter        Female signal (recommended)    Male signal (recommended)
Mean pitch       225 Hz (adult females)         128 Hz (adult males)
Minimum pitch    155 Hz (adult females)          85 Hz (adult males)
Maximum pitch    334 Hz (adult females)         196 Hz (adult males)

Table 2. Recommended values to differentiate pathological and normal voices
             Jitter ddp (%)        Shimmer dda (%)       HNR (dB)
             Female     Male       Female     Male       Female     Male
Praat        <=1.       <=0.       <=3.       <=2.       <20 dB     <20 dB
Teixeira     <=1.       <=0.       <=3.       <=2.       3 dB

6. PROPOSED METHOD
The corpus used in this paper is composed of 50 male voices and 50 female voices of students aged 19 to 22 (mean: 20). The speech material is a sustained vowel /a/ lasting from 3 to 5 seconds. Among the 50 voices, 25 are normal and 25 are pathological according to the initial classification. The signals were recorded with the microphone kept 5 cm away from the mouth, using a Dictaphone (Sony ICD-PX240, 4 GB). The recording consisted of 3-4 seconds of the sustained vowel /a/ for each student. The sampling frequency used for these signals was 22.05 kHz, with 16-bit resolution, in mono. Praat software is used to extract the acoustic parameters after transferring the recorded utterances from the Dictaphone to a personal computer (Dell, Intel Core i7 M640 CPU @ 2.8 GHz, 4 GB of memory) using Audacity: the headphone jack of the Dictaphone is connected to the line input of the PC with a 3.5 mm male/male jack cable, and the audio is then captured with the Audacity audio processing software.
An initial classification based on the thresholds from the literature is carried out in order to label the recorded utterances as healthy or pathological. The technique used to identify pathological voices is an artificial neural network (ANN): four coefficients (pitch, HNR, jitter and shimmer) are extracted from the signal, and these coefficients constitute the input vector of the network. The network is composed of three layers, trained with the back-propagation algorithm, and the activation function is the sigmoid. The proposed algorithm was also tested on the German database of voice disorders (the samples used are from speakers aged 19 to 22), using utterances of /a/. The degree of severity is evaluated as:

Degree of severity (%) = (measured value - normal value) / (normal value) x 100

where the normal value is the threshold defined in the literature. Figure 1 shows the macro steps of the proposed algorithm and Figure 2 shows the training of the ANN with the input vector (4 acoustic parameters).

Figure 1. Macro steps of the proposed algorithm: input audio; extract the acoustic parameters (pitch, jitter, shimmer and HNR); initial classification based on the thresholds from the literature; evaluate the degree of severity; feed the neural network; train the neural network; test the neural network.
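For illustration only, the four measurements listed above can also be scripted outside the Praat GUI, for example through the Praat-parselmouth Python bindings; the file name, pitch floor/ceiling and the Praat command arguments used below are assumptions of this sketch and are not values reported in this paper.

```python
# Hypothetical sketch: extracting pitch, jitter (ddp), shimmer (dda) and HNR
# with the praat-parselmouth bindings (pip install praat-parselmouth).
# File name and analysis settings are illustrative assumptions.
import parselmouth
from parselmouth.praat import call

def extract_features(wav_path, f0_min=75.0, f0_max=600.0):
    sound = parselmouth.Sound(wav_path)

    # Mean fundamental frequency (pitch) in Hz
    pitch = call(sound, "To Pitch", 0.0, f0_min, f0_max)
    mean_pitch = call(pitch, "Get mean", 0, 0, "Hertz")

    # Glottal pulse marks, needed for the perturbation measures
    pulses = call(sound, "To PointProcess (periodic, cc)", f0_min, f0_max)

    # Jitter ddp (%) and shimmer dda (%), same argument order as Praat's voice report
    jitter_ddp = call(pulses, "Get jitter (ddp)", 0, 0, 0.0001, 0.02, 1.3) * 100
    shimmer_dda = call([sound, pulses], "Get shimmer (dda)",
                       0, 0, 0.0001, 0.02, 1.3, 1.6) * 100

    # Harmonics-to-noise ratio (dB) from the cross-correlation harmonicity object
    harmonicity = call(sound, "To Harmonicity (cc)", 0.01, f0_min, 0.1, 1.0)
    hnr = call(harmonicity, "Get mean", 0, 0)

    return {"pitch": mean_pitch, "jitter_ddp": jitter_ddp,
            "shimmer_dda": shimmer_dda, "hnr": hnr}

if __name__ == "__main__":
    print(extract_features("student_a.wav"))  # hypothetical recording of sustained /a/
```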
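The initial threshold-based labelling and the degree-of-severity estimate can likewise be sketched as follows; the threshold numbers are illustrative placeholders rather than the sex-specific decimals of Tables 1 and 2.

```python
# Hypothetical sketch of the initial threshold-based labelling and of the
# degree-of-severity estimate defined in Section 6.
# The threshold numbers below are illustrative placeholders.

THRESHOLDS = {            # "normal value" per parameter (placeholder values)
    "jitter_ddp": 1.0,    # %
    "shimmer_dda": 3.0,   # %
    "hnr": 20.0,          # dB (values below this are treated as abnormal)
}

def degree_of_severity(measured, normal):
    """Degree of severity (%) = (measured value - normal value) / normal value * 100."""
    return (measured - normal) / normal * 100.0

def initial_label(features):
    """Return 'pathological' if any perturbation measure exceeds its threshold
    or HNR falls below its threshold, otherwise 'normal'."""
    abnormal = (
        features["jitter_ddp"] > THRESHOLDS["jitter_ddp"]
        or features["shimmer_dda"] > THRESHOLDS["shimmer_dda"]
        or features["hnr"] < THRESHOLDS["hnr"]
    )
    return "pathological" if abnormal else "normal"

# Example with made-up measurements
feats = {"jitter_ddp": 1.8, "shimmer_dda": 4.1, "hnr": 16.5}
print(initial_label(feats))
print({k: round(degree_of_severity(feats[k], THRESHOLDS[k]), 1) for k in feats})
```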
Figure 2 summarizes the training routine of the ANN: initialization of the synaptic weights of each neuron in the hidden layer; initialization of the bias weights of each layer; initialization of the synaptic weights of each neuron of the output layer; application of the activation function and computation of the output ys; computation of the error between the observed and desired outputs; computation of the local gradient of the output layer; comparison with the chosen maximum error; adjustment of the weights of the output-layer neurons; computation of the error gradient in the hidden layer; adjustment of the weights of the hidden-layer neurons; computation of the global error; and recording of the hidden-layer and output-layer weights in the file "poids".

Figure 2. Training of the ANN with the input vector (4 acoustic parameters)

7. RESULTS
The dataset is described in Table 3.

Table 3. Description of the dataset (pronounced vowel /a/): training number, test number, correct classifications and classification rate for the normal and pathological classes.

For the testing set, we chose the German database of voice disorders developed by Putzer, which contains healthy and pathological voices; each speaker pronounces sustained vowels (among them /a/) for 1-2 s, in WAV format, at different pitches (low, normal and high). The results obtained in this work indicate a correct classification percentage of 90% for normal voices and 95% for pathological voices. These results show that pitch, HNR, jitter and shimmer can be good input parameters for the discrimination and identification of pathological voices using a neural network. The degree of severity of the identified pathology was also estimated in order to give the clinician a clear idea of the severity of the communication disorder.
In this process, the accuracy, sensitivity and specificity were observed for each threshold in order to select the threshold that achieves the best accuracy while preserving high sensitivity and specificity. The following definitions are used:
True positives (TP): pathological voices that were classified as pathological by the proposed algorithm.
True negatives (TN): normal voices that were classified as normal by the proposed algorithm.
False positives (FP): normal voices that were incorrectly classified as pathological by the proposed algorithm.
False negatives (FN): pathological voices that were incorrectly classified as normal by the proposed algorithm.

Specificity = TN / (TN + FP) ≈ 95%
Sensitivity = TP / (TP + FN) = 1
Accuracy = (TN + TP) / (TN + FP + TP + FN) ≈ 97%

Table 4 shows the evaluation of the proposed algorithm; illustrative sketches of the training routine and of this evaluation are given after the table.

Table 4. Evaluation of the proposed algorithm
                Classified pathological    Classified normal
Pathological    TP: 58                     FN: 0
Normal          FP: 0                      TN: 13
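As an illustration of the training routine summarized in Figure 2, the following minimal sketch trains a three-layer sigmoid network by back-propagation on the 4-component acoustic vector; the hidden-layer size, learning rate, stopping error, output file name and the synthetic data are assumptions of the sketch and are not reported in this paper.

```python
# Minimal back-propagation sketch for a 3-layer sigmoid network fed with the
# 4-component acoustic vector (pitch, jitter, shimmer, HNR).
# Hidden size, learning rate and the synthetic data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Synthetic stand-in data: rows = [pitch(Hz), jitter(%), shimmer(%), HNR(dB)],
# label 1 = pathological, 0 = normal (values are made up for the sketch).
X = np.array([[220.0, 0.4, 1.8, 24.0],
              [128.0, 0.3, 1.5, 23.0],
              [180.0, 2.1, 5.2, 12.0],
              [150.0, 1.8, 4.6, 14.0]])
y = np.array([[0.0], [0.0], [1.0], [1.0]])

# Normalize inputs so the sigmoid does not saturate
X = (X - X.mean(axis=0)) / X.std(axis=0)

n_in, n_hidden, n_out = 4, 6, 1
W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, n_out))
b2 = np.zeros(n_out)
lr, max_error = 0.5, 1e-3

for _ in range(10000):
    # Forward pass through the hidden and output layers
    h = sigmoid(X @ W1 + b1)
    ys = sigmoid(h @ W2 + b2)

    # Error between observed and desired output; global error = mean squared error
    err = ys - y
    global_error = float(np.mean(err ** 2))
    if global_error < max_error:
        break

    # Local gradient of the output layer, then error gradient of the hidden layer
    delta_out = err * ys * (1.0 - ys)
    delta_hid = (delta_out @ W2.T) * h * (1.0 - h)

    # Adjust output-layer and hidden-layer weights and biases
    W2 -= lr * h.T @ delta_out
    b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_hid
    b1 -= lr * delta_hid.sum(axis=0)

# Record the trained weights, analogous to the 'poids' file of Figure 2
np.savez("poids.npz", W1=W1, b1=b1, W2=W2, b2=b2)
print("global error:", round(global_error, 5), "outputs:", ys.ravel().round(2))
```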
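The specificity, sensitivity and accuracy used above follow directly from the confusion-matrix counts; a helper such as the following reproduces the formulas (the example counts are made up and are not those of Table 4).

```python
# Sketch of the evaluation metrics used above; the example counts are
# made up and are not the Table 4 values.
def evaluate(tp, tn, fp, fn):
    sensitivity = tp / (tp + fn)               # pathological voices correctly detected
    specificity = tn / (tn + fp)               # normal voices correctly detected
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

sens, spec, acc = evaluate(tp=19, tn=18, fp=1, fn=0)   # hypothetical test counts
print(f"sensitivity={sens:.1%} specificity={spec:.1%} accuracy={acc:.1%}")
```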
8. CONCLUSION
The purpose of this work was to conceive a model to assist clinicians and professors in following the evolution of voice disorders among students, based only on the acoustic properties of a student's voice. The ANN (artificial neural network) classifier gives a classification rate of 90% for the normal class and 95% for the pathological class, and the performance of the proposed algorithm was evaluated in terms of accuracy, sensitivity and specificity. This work can be applied in the field of preventive medicine in order to achieve early detection of voice pathologies. In addition, an estimation of the degree of severity was proposed. The major advantage of this type of automatic identification tool is a determinism that is currently lacking in subjective analysis.
As a possible improvement, system performance can be enhanced by increasing the learning corpus. Future work includes identifying the type of pathology behind each voice disorder (stuttering, dysphonia, etc.), studying continuous speech, which is a more ambitious objective and an evident next step, and implementing the proposed algorithm on a ready-to-use hardware device such as a DSP (digital signal processing) board or an FPGA (field-programmable gate array) board.

ACKNOWLEDGEMENT
The authors gratefully acknowledge the students Ayoub Ratnane and Abdellah Elhaoui for their valuable contributions.

REFERENCES