Title: Chereme-based recognition of isolated, dynamic gestures from South African sign language with Hidden Markov Models
Contributors: Omlin, Christian W.P.; Rajah, Christopher
Department: Dept. of Computer Science, Faculty of Science
Degree: Magister Scientiae - MSc
Type: Thesis
Publisher: University of the Western Cape
Dates: 2006; 2007/07/03; 2013-08-14; 2024-10-30
URI: https://hdl.handle.net/10566/16939
Language: en
Subjects: Optical pattern recognition; Mathematical models; Image processing; Digital techniques - Mathematical models; Markov processes
Abstract: Much work has been done in building systems that can recognize gestures, e.g. as a component of sign language recognition systems. These systems typically use whole gestures as the smallest unit of recognition. Although high recognition rates have been reported, such systems do not scale well and are computationally intensive. The reason they scale poorly is that they recognize gestures by building an individual model for each gesture; as the number of gestures grows, so does the required number of models, and beyond a certain threshold number of gestures this approach becomes infeasible. This work proposes that similarly good recognition rates can be achieved by building models for subcomponents of whole gestures, so-called cheremes. Instead of building models for entire gestures, we build models for cheremes and recognize gestures as sequences of such cheremes. The assumption is that many gestures share cheremes and that the number of cheremes needed to describe gestures is much smaller than the number of gestures. This small number of cheremes then makes it possible to recognize a large number of gestures with a small number of chereme models. The approach is akin to phoneme-based speech recognition, where utterances are recognized as sequences of phonemes which are in turn combined into words.
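
The abstract above describes recognizing a gesture as a sequence of chereme-level Hidden Markov Models rather than with one model per gesture. The sketch below is an illustration only, not the thesis implementation: it assumes pre-segmented chereme observations, a hypothetical gesture lexicon mapping each gesture to its chereme labels, and uses hmmlearn's GaussianHMM as a stand-in HMM toolkit. The function names (train_chereme_models, score_gesture, recognize) are invented for the example.

    # Minimal sketch of chereme-based gesture recognition with HMMs.
    # Assumptions (not from the thesis): chereme observations are already
    # segmented, and each gesture is described by a known chereme sequence.
    import numpy as np
    from hmmlearn import hmm

    def train_chereme_models(chereme_data, n_states=3):
        """chereme_data: dict mapping chereme label -> list of (T_i, d) feature arrays."""
        models = {}
        for label, sequences in chereme_data.items():
            X = np.vstack(sequences)                  # concatenate training frames
            lengths = [len(seq) for seq in sequences] # per-sequence frame counts
            model = hmm.GaussianHMM(n_components=n_states,
                                    covariance_type="diag", n_iter=50)
            model.fit(X, lengths)                     # one small HMM per chereme
            models[label] = model
        return models

    def score_gesture(models, segments, chereme_sequence):
        """Log-likelihood of a gesture hypothesis: sum of per-segment chereme scores."""
        return sum(models[c].score(seg)
                   for c, seg in zip(chereme_sequence, segments))

    def recognize(models, segments, gesture_lexicon):
        """gesture_lexicon: dict mapping gesture name -> list of chereme labels.
        Returns the gesture whose chereme sequence best explains the segments."""
        return max(gesture_lexicon,
                   key=lambda g: score_gesture(models, segments, gesture_lexicon[g]))

Because many gestures share cheremes, only a small pool of chereme models needs to be trained; adding a new gesture to the lexicon then requires no new models, only a new chereme sequence, which is the scalability argument made in the abstract.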