
Classification of Thai Syllables Used in Dysarthria Rehabilitation



Publisher

Prince of Songkla University

Abstract

This thesis presents a Thai syllable classification system for dysarthria rehabilitation, which classifies twelve Thai syllables from five channels of surface electromyography (sEMG) and one channel of acoustic signal. The proposed system comprises four main parts: signal pre-processing, feature representation, dimensionality reduction, and classification. First, we studied the characteristics of the sEMG signal in healthy and dysarthric volunteers by computing three feature groups: amplitude, frequency, and probabilistic values. Two features from each group were determined and analyzed. A spectral regression extreme learning machine (SRELM) was then used as the feature projection technique to reduce the dimension of the feature vector. Finally, the projected features were classified by a feed-forward neural network (NN) with 5-fold cross-validation. The results showed that the amplitude and frequency features affected syllable recognition performance. Second, individual sEMG channels and combinations of 2, 3, 4, and 5 sEMG channels were evaluated with the proposed system; classification performance decreased as the number of electrode channels was reduced. Third, for the acoustic signal, 8, 13, and 18 Mel-frequency cepstral coefficients (MFCC) were investigated, and two feature groups, five time-domain features versus MFCC, were compared. The results indicated that MFCC outperformed the time-domain features, with 18 coefficients giving the best performance. Finally, the best combination of sEMG features and channels was fused with the MFCC features extracted from the acoustic signal. The multimodal fusion outperformed any single signal source, achieving up to ~97% accuracy.
In other words, an accuracy improvement of up to 51% was achieved with the proposed multimodal fusion. Moreover, its lower standard deviations in classification accuracy, compared with those of the unimodal systems, indicated improved robustness of the syllable recognition.
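The pipeline described in the abstract (multimodal feature fusion, feature projection, and a 5-fold cross-validated feed-forward NN) can be sketched roughly as follows. This is a minimal illustration on synthetic data, not the thesis's implementation: PCA stands in for the SRELM projection (SRELM is not in scikit-learn), and the feature dimensions (10 sEMG features from 5 channels, 18 MFCCs) are assumptions based on the abstract.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in data: 20 trials for each of the 12 Thai syllables.
n_classes, trials_per_class = 12, 20
n_trials = n_classes * trials_per_class
y = np.repeat(np.arange(n_classes), trials_per_class)

emg_feats = rng.normal(size=(n_trials, 10))   # 5 sEMG channels x 2 features each (assumed)
mfcc_feats = rng.normal(size=(n_trials, 18))  # 18 MFCCs from the acoustic channel

# Multimodal fusion: concatenate the sEMG and acoustic feature vectors.
X = np.hstack([emg_feats, mfcc_feats])

clf = make_pipeline(
    StandardScaler(),
    PCA(n_components=8),  # stand-in for the SRELM feature projection
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
)

# 5-fold cross-validation, as in the thesis.
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean accuracy: {scores.mean():.3f}")
```

On real sEMG/acoustic features the fused vector would be expected to separate the syllable classes far better than this random data, which scores near chance (about 1/12).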

Description

Thesis (Ph.D. (Electrical Engineering))--Prince of Songkla University, 2019


Creative Commons license

Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivs 3.0 Thailand