Automatic real-time facial expression recognition for signed language translation
dc.contributor.advisor | Omlin, Christian W | |
dc.contributor.author | Whitehill, Jacob Richard | |
dc.date.accessioned | 2023-05-16T09:38:58Z | |
dc.date.accessioned | 2024-10-30T14:00:45Z | |
dc.date.available | 2023-05-16T09:38:58Z | |
dc.date.available | 2024-10-30T14:00:45Z | |
dc.date.issued | 2006 | |
dc.description | Magister Scientiae - MSc | en_US |
dc.description.abstract | We investigated two computer vision techniques designed to increase both the recognition accuracy and computational efficiency of automatic facial expression recognition. In particular, we compared a local segmentation of the face around the mouth, eyes, and brows to a global segmentation of the whole face. Our results indicated that, surprisingly, classifying features from the whole face yields greater accuracy despite the additional noise that the global data may contain. We attribute this in part to correlation effects within the Cohn-Kanade database. We also developed a system for detecting Facial Action Coding System (FACS) action units based on Haar features and the AdaBoost boosting algorithm. This method achieves equally high recognition accuracy for certain action units (AUs) but operates two orders of magnitude more quickly than the Gabor filter + support vector machine (SVM) approach. Finally, we developed a software prototype of a real-time, automatic signed language recognition system using FACS as an intermediary framework. | en_US |
dc.identifier.uri | https://hdl.handle.net/10566/16944 | |
dc.language.iso | en | en_US |
dc.publisher | University of the Western Cape | en_US |
dc.rights.holder | University of the Western Cape | en_US |
dc.subject | Machine learning | en_US |
dc.subject | Facial expression recognition | en_US |
dc.subject | Sign language | en_US |
dc.subject | Facial action units | en_US |
dc.subject | Segmentation | en_US |
dc.title | Automatic real-time facial expression recognition for signed language translation | en_US |
dc.type | Thesis | en_US |
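The abstract above mentions detecting FACS action units with Haar features and the AdaBoost boosting algorithm. As a rough illustration only, and not code from the thesis itself, the sketch below boosts decision stumps over simple two-rectangle Haar responses computed from an integral image; the 24x24 patch size, the coarse feature grid, and the use of scikit-learn's AdaBoostClassifier are assumptions made for the sake of a short, self-contained example.

import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def integral_image(img):
    """Summed-area table with a zero row/column prepended for easy box sums."""
    ii = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return np.pad(ii, ((1, 0), (1, 0)))

def box_sum(ii, r0, c0, r1, c1):
    """Sum of pixels in the rectangle [r0:r1, c0:c1) using four integral-image lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

def haar_features(img, step=4, size=8):
    """Horizontal and vertical two-rectangle Haar responses on a coarse grid (assumed layout)."""
    ii = integral_image(img.astype(np.float64))
    h, w = img.shape
    feats = []
    for r in range(0, h - size + 1, step):
        for c in range(0, w - size + 1, step):
            half = size // 2
            # Left half minus right half: responds to vertical edges.
            feats.append(box_sum(ii, r, c, r + size, c + half)
                         - box_sum(ii, r, c + half, r + size, c + size))
            # Top half minus bottom half: responds to horizontal edges.
            feats.append(box_sum(ii, r, c, r + half, c + size)
                         - box_sum(ii, r + half, c, r + size, c + size))
    return np.array(feats)

def train_au_detector(X, y, n_rounds=100):
    """Boost decision stumps over Haar responses for a binary AU present/absent decision.

    X: array of shape (N, 24, 24) grayscale face patches (assumed size), y: labels in {0, 1}.
    """
    F = np.stack([haar_features(x) for x in X])
    clf = AdaBoostClassifier(n_estimators=n_rounds)
    return clf.fit(F, y)

The speed advantage reported in the abstract comes from this style of feature: each Haar response costs only a handful of integral-image lookups, whereas a bank of Gabor filter convolutions followed by an SVM is far more expensive per frame.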