Virtual human modelling and animation for real-time sign language visualisation

dc.contributor.advisor: Connan, James
dc.contributor.author: Van Wyk, Desmond Eustin
dc.contributor.other: Dept. of Computer Science
dc.date.accessioned: 2014-03-17T07:04:04Z
dc.date.accessioned: 2024-10-30T14:00:49Z
dc.date.available: 2013/09/16
dc.date.available: 2014-03-17T07:04:04Z
dc.date.available: 2024-10-30T14:00:49Z
dc.date.issued: 2008
dc.description: Magister Scientiae - MSc
dc.description.abstract: This thesis investigates the modelling and animation of virtual humans for real-time sign language visualisation. Sign languages are fully developed natural languages used by Deaf communities all over the world. These languages are communicated in a visual-gestural modality by the use of manual and non-manual gestures and are completely different from spoken languages. Manual gestures include the use of hand shapes, hand movements, hand locations and orientations of the palm in space. Non-manual gestures include the use of facial expressions, eye-gazes, head and upper body movements. Both manual and non-manual gestures must be performed for sign languages to be correctly understood and interpreted. To effectively visualise sign languages, a virtual human system must have models of adequate quality and be able to perform both manual and non-manual gesture animations in real-time. Our goal was to develop a methodology and establish an open framework by using various standards and open technologies to model and animate virtual humans of adequate quality to effectively visualise sign languages. This open framework is to be used in a Machine Translation system that translates from a verbal language such as English to any sign language. Standards and technologies we employed include H-Anim, MakeHuman, Blender, Python and SignWriting. We found it necessary to adapt and extend H-Anim to effectively visualise sign languages. The adaptations and extensions we made to H-Anim include imposing joint rotational limits, developing flexible hands and the addition of facial bones based on the MPEG-4 Facial Definition Parameters facial feature points for facial animation. By using these standards and technologies, we found that we could circumvent a few difficult problems, such as: modelling high-quality virtual humans; adapting and extending H-Anim; creating a sign language animation action vocabulary; blending between animations in an action vocabulary; sharing animation action data between our virtual humans; and effectively visualising South African Sign Language.
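The joint-limit adaptation mentioned in the abstract lends itself to a short illustration. The sketch below is a hypothetical reconstruction using the current Blender Python API (bpy), which postdates the 2008 thesis; the armature name, bone names and limit values are illustrative assumptions, not data taken from the thesis.

    import math
    import bpy

    # Hypothetical per-joint Euler limits in degrees:
    # (min_x, max_x, min_y, max_y, min_z, max_z).
    # Values are illustrative placeholders, not anatomical data
    # from the thesis.
    JOINT_LIMITS = {
        "r_shoulder": (-90.0, 90.0, -90.0, 90.0, -45.0, 180.0),
        "r_elbow": (0.0, 145.0, -80.0, 80.0, 0.0, 0.0),
    }

    avatar = bpy.data.objects["HAnimAvatar"]  # assumed armature object name

    for bone_name, (x0, x1, y0, y1, z0, z1) in JOINT_LIMITS.items():
        pbone = avatar.pose.bones[bone_name]
        # A Limit Rotation constraint clamps the bone's rotation so that
        # keyframed animation and inverse kinematics cannot push the joint
        # beyond its permitted range.
        con = pbone.constraints.new(type="LIMIT_ROTATION")
        con.use_limit_x = con.use_limit_y = con.use_limit_z = True
        con.min_x, con.max_x = math.radians(x0), math.radians(x1)
        con.min_y, con.max_y = math.radians(y0), math.radians(y1)
        con.min_z, con.max_z = math.radians(z0), math.radians(z1)
        con.owner_space = "LOCAL"  # enforce limits in the bone's own space

Constraining rotation at the rig level rather than inside each animation means every action in the vocabulary inherits the same anatomical limits, which is one way shared animation action data can remain valid across different virtual humans.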
dc.description.country: South Africa
dc.identifier.uri: https://hdl.handle.net/10566/16963
dc.language.iso: en
dc.publisher: University of the Western Cape
dc.rights.holder: Copyright: University of the Western Cape
dc.subject: 3D computer graphics
dc.subject: Open modelling animation framework
dc.subject: Virtual human modelling animation
dc.subject: Sign language
dc.subject: Visualisation
dc.subject: SignWriting
dc.subject: MakeHuman
dc.subject: Blender
dc.subject: Python
dc.subject: MPEG-4
dc.title: Virtual human modelling and animation for real-time sign language visualisation

Files

Original bundle
Name: van Wyk_MSC_2008.pdf
Size: 9.66 MB
Format: Adobe Portable Document Format