Virtual human modelling and animation for real-time sign language visualisation
dc.contributor.advisor | Connan, James | |
dc.contributor.author | Van Wyk, Desmond Eustin | |
dc.contributor.other | Dept. of Computer Science | |
dc.date.accessioned | 2014-03-17T07:04:04Z | |
dc.date.available | 2014-03-17T07:04:04Z | |
dc.date.issued | 2008 | |
dc.description | Magister Scientiae - MSc | en_US |
dc.description.abstract | This thesis investigates the modelling and animation of virtual humans for real-time sign language visualisation. Sign languages are fully developed natural languages used by Deaf communities all over the world. These languages are communicated in a visual-gestural modality by the use of manual and non-manual gestures and are completely different from spoken languages. Manual gestures include the use of hand shapes, hand movements, hand locations and orientations of the palm in space. Non-manual gestures include the use of facial expressions, eye-gazes, head and upper body movements. Both manual and non-manual gestures must be performed for sign languages to be correctly understood and interpreted. To effectively visualise sign languages, a virtual human system must have models of adequate quality and be able to perform both manual and non-manual gesture animations in real-time. Our goal was to develop a methodology and establish an open framework by using various standards and open technologies to model and animate virtual humans of adequate quality to effectively visualise sign languages. This open framework is to be used in a Machine Translation system that translates from a verbal language such as English to any sign language. Standards and technologies we employed include H-Anim, MakeHuman, Blender, Python and SignWriting. We found it necessary to adapt and extend H-Anim to effectively visualise sign languages. The adaptations and extensions we made to H-Anim include imposing joint rotational limits, developing flexible hands and the addition of facial bones based on the MPEG-4 Facial Definition Parameters facial feature points for facial animation. By using these standards and technologies, we found that we could circumvent a few difficult problems, such as: modelling high quality virtual humans; adapting and extending H-Anim; creating a sign language animation action vocabulary; blending between animations in an action vocabulary; sharing animation action data between our virtual humans; and effectively visualising South African Sign Language. | en_US |
dc.description.country | South Africa | |
dc.identifier.uri | https://hdl.handle.net/10566/16963 | |
dc.language.iso | en | en_US |
dc.publisher | University of the Western Cape | en_US |
dc.rights.holder | Copyright: University of the Western Cape | en_US |
dc.subject | 3D computer graphics | en_US |
dc.subject | Open modelling animation framework | en_US |
dc.subject | Virtual human modelling animation | en_US |
dc.subject | Sign language | en_US |
dc.subject | Visualisation | en_US |
dc.subject | SignWriting | en_US |
dc.subject | MakeHuman | en_US |
dc.subject | Blender | en_US |
dc.subject | Python | en_US |
dc.subject | MPEG-4 | en_US |
dc.title | Virtual human modelling and animation for real-time sign language visualisation | en_US |
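The abstract mentions imposing joint rotational limits as one of the extensions made to H-Anim. The thesis itself does not reproduce that code here, but the idea can be illustrated with a minimal Python sketch: clamping a joint's requested Euler rotation to per-axis limits before it is applied to the skeleton. The joint name "r_wrist" follows the H-Anim naming convention; the limit values and all function names below are illustrative assumptions, not taken from the thesis.

# Hypothetical sketch of H-Anim-style joint rotational limits.
# Limit values below are illustrative only.

from dataclasses import dataclass

@dataclass
class JointLimits:
    """Per-axis rotational limits for a skeleton joint, in degrees."""
    min_xyz: tuple[float, float, float]
    max_xyz: tuple[float, float, float]

# Illustrative limits for the right wrist (H-Anim joint name "r_wrist").
LIMITS = {
    "r_wrist": JointLimits(min_xyz=(-90.0, -30.0, -20.0),
                           max_xyz=( 90.0,  30.0,  20.0)),
}

def clamp_rotation(joint: str,
                   euler_xyz: tuple[float, float, float]) -> tuple[float, ...]:
    """Clamp a requested Euler rotation to the joint's allowed range."""
    lim = LIMITS[joint]
    return tuple(
        max(lo, min(hi, angle))
        for angle, lo, hi in zip(euler_xyz, lim.min_xyz, lim.max_xyz)
    )

if __name__ == "__main__":
    # A pose that over-rotates the wrist is pulled back inside the limits.
    print(clamp_rotation("r_wrist", (120.0, 10.0, -45.0)))  # (90.0, 10.0, -20.0)

Clamping of this kind keeps animation data shared between virtual humans from driving any one character's joints into anatomically impossible poses, which matters when an action vocabulary is reused across different models.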