Philosophiae Doctor - PhD (Microbiology)
Item Thermal stability and defect structure of hot-wire deposited amorphous silicon (UWC, 2004) Arendse, CJ; Knoesen, D.
Hydrogenated amorphous silicon (a-Si:H) thin films are presently used in several large-area thin-film applications. However, one major concern of a-Si:H is the fact that the stability of the material degrades when it is exposed to prolonged sunlight illumination. This effect, referred to as the Staebler-Wronski effect (SWE), is however reduced when using hot-wire (HW) deposited a-Si:H material with a low hydrogen concentration and favorable microstructure. In this thesis we report on the thermal stability of HW-deposited a-Si:H thin films, with different H-concentrations and bonding configurations, when exposed to elevated temperatures in excess of 100 °C.

Item The molecular characterisation of a baculovirus isolated from Trichoplusia ni (University of the Western Cape, 2001) Fielding, Burtram Clinton; Davison, S.
In South Africa there are more than 106 insect pests which attack a wide variety of crops. The top ten or twenty of these can seriously limit successful production on the farm. Costs involved in controlling these pests are considerable, often higher than the value of the crop itself. Trichoplusia ni (common name: cabbage looper) is a pest that can cause considerable damage to a wide variety of economically important crops. Although Trichoplusia ni has successfully been controlled with synthetic chemical pesticides, awareness about the negative impact of these control measures on the environment has necessitated the development of safer alternatives. Additionally, cabbage looper resistance to the commonly used pesticides has been reported. Since a Trichoplusia ni multiple nuclear polyhedrosis virus has previously been used in the effective control of the pest, characterising a South African baculovirus isolate showed great potential. A latent baculovirus infecting a field population of Trichoplusia ni was isolated and characterised. Initial DNA and protein characterisation identified it as a novel baculovirus. The aim of this research was to characterise the baculovirus at a molecular level. This could lead to future improvement of the viral insecticidal properties. The family Baculoviridae includes more than 600 viruses with only nineteen receiving species status (Murphy et al., 1995). The genome sequencing and mapping of NPVs could prove important in determining the relationships between these viruses. Additionally, it could be useful in understanding the importance of gene arrangement and the essential domains of genes. This could provide insight into the cis- and trans-regulation among genes (Jin et al., 1997). The determination of gene order and arrangement of the novel baculovirus isolated from a field population of Trichoplusia ni was presented in Chapter 2. Data were used to construct a partial functional map of the TniSNPV genome. Subsequently, the order and homology of the genes identified were used as a phylogenetic marker, identifying TniSNPV as a putative member of the Group II NPVs.

Item What constitutes morphological normal and abnormal human sperm heads (University of the Western Cape, 1998) Janse van Rensburg, Tholoana 'M'atahleho Leubane; van der Horst, G.
The percentage of morphologically normal sperm appears to be of predictive value in the in vitro fertilization laboratory. However, the methodology used in this context is technically inaccurate and imprecise (subjective) and needs to be improved.
Nevertheless, most laboratories continue to use such techniques. It is therefore not surprising to find that three different sperm morphology classification systems are in use. Some of the published methods include the World Health Organization (WHO) system, the Tygerberg strict criteria (TSC) and the Dusseldorf criteria (DC) to define morphologically normal sperm. Each of these methods uses different criteria and different cut-off values to define sperm morphology normality in patients. For WHO it is >30%, for TSC it is >14% and for DC it is >30%. Consequently, there are at present no objective morphological criteria for defining normal spermatozoa in human semen. Such criteria can be established only on the basis of extensive studies that assess the morphometric characteristics of spermatozoa. Therefore, there is a great need to standardize methodology in this context. The second problem lies with the training of technicians. Unfortunately, implementation of visual semen analysis often differs between laboratories. Moreover, few laboratories systematically train their technicians by one standard method and monitor within- and between-technician variability. Accurate and precise visual semen analysis will only be achieved by implementing a program of international standardization, technician training and proficiency testing. The third problem is that specimen handling and preparation for evaluation of sperm morphology are not standardized, and this needs to be done to the highest degree possible.
In this investigation four microscopic techniques were used to study sperm morphology. The purpose of this part of the investigation was to test whether Papanicolaou-stained (PAP) sperm smears studied by means of bright field microscopy represent a reliable method to study sperm morphology when compared to more sophisticated microscopic techniques. Consequently, bright field microscopy (PAP staining), Nomarski differential interference microscopy (NDIM), scanning electron microscopy (SEM) and confocal microscopy of normal and abnormal sperm types were compared. However, a new technique had to be developed for preparing sperm for confocal microscopy. For this purpose both unfixed and fixed sperm were embedded in agarose to avoid motion artifacts during confocal imaging. Both groups of unfixed and fixed sperm were placed in PBS buffer containing 0.874 mM dihexaoxacarbocyanine iodide (DiOC6(3)) at room temperature. Sperm were pre-loaded for 30 minutes with tetramethyl rhodamine methylester (TMRM). DiOC6(3) is a lipophilic fluorescent dye, and it has been found that the fluorescent intensity of sperm increased over time. For laser scanning confocal microscopy, 3D sperm morphology was reconstructed from a series of 20-30 optical sections taken at 0.2 µm intervals through each sperm. In order to examine 3D morphology, a series of projected (rotated) views was reconstructed to allow visualization of a 3D animation set (a minimal sketch of this kind of reconstruction follows this paragraph). All forms of abnormal sperm could be clearly identified by all four microscopic techniques. Despite the fact that human sperm structure can be visualized and studied more comprehensively by means of both NDIM and confocal microscopy than with bright field microscopy, the latter technique of PAP-stained smears is adequate to identify abnormal sperm morphology on a routine basis in the clinical laboratory. However, confocal microscopy of human sperm reveals that sperm may be scored normal/abnormal on the basis of their orientation. Only slight rotation of a 3D constructed image of a spermatozoon by means of confocal microscopy can change the classification of a sperm from normal to abnormal.
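The sketch below illustrates the kind of reconstruction described above: stacking equally spaced optical sections into a volume and rendering rotated projected views. It is a minimal sketch under stated assumptions, not the thesis's actual processing pipeline; the function names, the placeholder data and the use of maximum-intensity projections are assumptions made purely for illustration.

    # Minimal sketch (assumed names and data, not the thesis pipeline): stack
    # confocal optical sections into a 3D volume and render rotated
    # maximum-intensity projections, one frame per viewing angle.
    import numpy as np
    from scipy.ndimage import rotate

    def build_volume(sections):
        """Stack equally sized 2D optical sections (fixed z-spacing) into a 3D array."""
        return np.stack(sections, axis=0)   # shape: (n_sections, height, width)

    def rotated_projection(volume, angle_deg):
        """Rotate the volume about the image's vertical axis and return a 2D
        maximum-intensity projection along the optical axis."""
        turned = rotate(volume, angle_deg, axes=(0, 2), reshape=False, order=1)
        return turned.max(axis=0)

    # Example: 25 placeholder arrays standing in for optical sections taken at
    # 0.2 µm intervals, projected every 15 degrees to build a rotation set.
    sections = [np.random.rand(128, 128) for _ in range(25)]
    volume = build_volume(sections)
    frames = [rotated_projection(volume, a) for a in range(0, 360, 15)]

A real pipeline would presumably read the sections from the microscope's image files and rescale the z-axis so that the 0.2 µm section spacing matches the in-plane pixel size before rotating; that step is omitted here for brevity.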
In the second part of the investigation, bright field microscopy of PAP-stained smears was used to test the reliability/repeatability of scoring among three technicians from three different laboratories using the Tygerberg Strict Criteria. All three technicians scored the same 77 patients. While two technicians scored within a relatively close range, the third technician varied more when TSC was employed. One technician classified 43% of the samples in a different TSC category when compared to a second technician. A fourth technician scored the same sperm smears but used the WHO criteria (1992). The scores for TSC and WHO were consequently compared. It was found that a 3-5% TSC range of abnormal sperm corresponded to a 6-30% WHO range.

Item New Algorithms for EST clustering (University of the Western Cape, 2000) Ptitsyn, Andrey; Hide, Winston
The expressed sequence tag (EST) database is a rich and fast-growing source of data for gene expression analysis and drug discovery. Clustering of raw EST data is a necessary step for further analysis and one of the most challenging problems of modern computational biology. There are a few systems designed for this purpose, and a few more are currently under development. These systems are reviewed in the "Literature and software review". Different strategies of supervised and unsupervised clustering are discussed, as well as sequence comparison techniques, such as those based on alignment or on oligonucleotide composition. Analysis of potential bottlenecks and estimation of the computational complexity of EST clustering is done in Chapter 2. This chapter also states the goals for the research and justifies the need for a new algorithm that has to be fast, but still sensitive to relatively short (40 bp) regions of local similarity. A new sequence comparison algorithm is developed and described in Chapter 3. This algorithm has linear computational complexity and sufficient sensitivity to detect short regions of local similarity between nucleotide sequences. The algorithm utilizes an asymmetric approach, in which one of the compared sequences is presented in the form of an oligonucleotide table, while the second sequence is in standard, linear form. A short window is moved along the linear sequence and all overlapping oligonucleotides of a constant length in the frame are compared against the oligonucleotide table. The result of comparing two sequences is a single figure, which can be compared to a threshold (see the sketch following this abstract). For each measure of sequence similarity, the probabilities of false positives and false negatives can be estimated. The algorithm was set up and implemented to recognize matching ESTs with overlapping regions of 40 bp at 95% identity, which is better than the resolution of contemporary EST clustering tools. This algorithm was used as the sequence comparison engine for two EST clustering programs, described in Chapter 4. These programs implement two different strategies: stringent and loose clustering. Both are tested on small but realistic benchmark data sets and show results similar to those of one of the best existing clustering programs, D2_cluster, but with a significant advantage in speed and sensitivity to small overlapping regions of ESTs. On three different CPUs the new algorithm ran at least two times faster, leaving fewer singletons and producing bigger clusters. With parallel optimization this algorithm is capable of clustering millions of ESTs on relatively inexpensive computers. The loose clustering variant is a highly portable application, relying on third-party software for cluster assembly. It was built to the same specifications as D2_cluster and can be immediately included in the STACKPack package for EST clustering. The stringent clustering program produces already assembled clusters and can apprehend alternatively processed variants during the clustering process.
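The asymmetric comparison described in the abstract can be illustrated with the minimal sketch below. It is not the thesis's implementation: the word length, window size, the use of a Python set as the oligonucleotide table and the simple shared-word count used as the similarity score are all assumptions made for the example, and the sketch favours readability over the linear-time behaviour reported for the actual algorithm.

    # Minimal sketch (assumptions noted above): one sequence is reduced to a
    # table of its overlapping oligonucleotides; a short window slides along
    # the other, linear sequence and the score is the best shared-word count.

    def oligo_table(seq, word=8):
        """All overlapping oligonucleotides of constant length `word`."""
        return {seq[i:i + word] for i in range(len(seq) - word + 1)}

    def window_score(table, seq, word=8, window=40):
        """Best number of window oligonucleotides found in the table, over all
        positions of the window along the linear sequence."""
        best = 0
        for start in range(max(1, len(seq) - window + 1)):
            frame = seq[start:start + window]
            hits = sum(frame[i:i + word] in table
                       for i in range(len(frame) - word + 1))
            best = max(best, hits)
        return best

    # Two ESTs are called matching when the single score exceeds a threshold
    # chosen for the desired false-positive and false-negative rates.
    table = oligo_table("ACGTACGTGGCCAATTCCGGACGTACGTGGCCAATTCCGG")
    score = window_score(table, "TTTTACGTACGTGGCCAATTCCGGACGTACGTTTTTTTTT")
    matching = score >= 10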
Item Science text: Facilitating access to physiology through cognition-based reading intervention (University of the Western Cape, 1995) Wesso, Iona; Sinclair, A. J. L.
Reading and understanding science text is the principal means by which students at tertiary level access scientific information and attain scientific literacy. However, understanding and learning from science texts require cognitive processing abilities which students may or may not have. If students fail to understand scientific text, their acquisition of subject knowledge and expertise will be impeded and they will fail to develop into thinking and independent learners, so crucial for academic progress and achievement. A major assumption in this study is thus that in order to increase access to science subjects there is a need to explicitly teach the thinking abilities involved in learning science from text. A review of the literature showed that while reading to learn from scientific text poses special challenges to students faced with this unfamiliar genre, little is known about reading (and thinking) for science learning. A synthesis of current research which describes the neglected interface between science learning, science reading and cognition is given in the literature review of this study. This synthesis highlights, in particular, the parallel developments in research into science learning and reading; the lack of integration of research in these areas; the absence of investigations on science reading located within the cognitive domain; and the absence of research into reading as it affects cognition and cognition as it affects reading in subject-specific areas such as physiology.
Possibilities for improving students' cognitive performance in reading to learn through intervention were considered from a cognitive perspective. From this perspective, students' observable intellectual performance can be attributed to their underlying knowledge, behaviour and thought processes. Accordingly, the mental processes involved in comprehending scientific concepts from text and the cognitive processes which the students bring to the learning situation become highly relevant to efforts to improve cognitive skills for learning science. Key questions which were identified to serve as a basis for intervention included: a) What cognitive abilities are needed for competent reading comprehension as demanded by physiology text? b) How adequate is the cognitive repertoire of students in dealing with physiology text? With regard to these questions, a catalogue of cognitive functions as formulated by Feuerstein et al. (1980) was identified as optimally suited for establishing the cognitive match between reading tasks and students. Micro-analyses of the cognitive demands of students' textbook material and the cognitive make-up of second-year university students revealed a profound mismatch between students and their learning material.
Students lacked both the comprehension-fostering and comprehension-monitoring abilities appropriate to the demands of the learning task. The explication of the cognitive requirements which physiology text demands served as a basis for systematically designing instruction whereby appropriate intellectual performance for scientific comprehension from text may be attained.
Subsequent intervention was based on the explicit teaching of thinking abilities within the context of domain-specific (physiology) knowledge. An instructional framework was developed that integrated cognitive learning theories and instructional prescriptions to achieve an effective learning environment and improve students' cognitive abilities to employ and extend their knowledge. The objective was that the instructional model and resultant instructional methods would ensure that students learn not only the desired kinds of knowledge by conceptual change, but also the thought processes embedded in and required by reading scientific material for appropriate conceptual change to take place. Micro-analysis of the cognitive processes intrinsic to understanding physiology text illuminated cognitive demands such as, for example, the ability to transform linearly presented material into structural patterns which illuminate physiological relationships; to analyse conceptually dense text rich in "paradoxical jargon"; to activate and retrieve extensive amounts of topic-specific and subject-specific prior knowledge; to visualise events; and to contextualise concepts by establishing an application for them.
Within the above instructional setting, the study shows that explicitly teaching the cognitive processes intrinsic to physiology text is possible. By translating the cognitive processes into cognitive strategies such as assessing the situation, planning, processing, organisation, elaboration, monitoring and reflective responses, the heuristic approach effectively served to guide students through various phases of learning from text. Systematic and deliberate methods of thought that would enhance students' problem-solving and thinking abilities were taught. One very successful strategy for learning from physiology text was reorganising the linearly presented information into a different text structure by means of the construction of graphic organisers. The latter allowed students to read systematically, establish relationships between concepts, identify important ideas, summarise passages, readily retrieve information from memory, go beyond the given textual information and very effectively monitor and evaluate their understanding.
In addition to teaching appropriate cognitive strategies as demanded by physiology text, this programme also facilitated an awareness of expository text conventions, the nature of physiological understanding, the value of active strategic involvement in constructing knowledge and the value of metacognitive awareness. Also, since the intervention was executed within the context of physiology content, the acquisition of content-specific information took place quite readily. This overcame the problem of transfer, so often experienced with "content-free" programmes. In conclusion, this study makes specific recommendations to improve science education.
In particular, the notion of teaching the appropriate cognitive behaviour and thought processes as demanded by academic tasks such as reading to learn physiology seems to be a particularly fruitful area into which science education research should develop and be encouraged.