# I. Introduction

State-of-the-art technology offers many new opportunities for innovation in educational assessment through rich new assessment tasks and potentially powerful scoring, reporting and real-time feedback mechanisms (Scalise & Gifford, 2006). Examination or testing in higher education plays a pivotal role in summatively assessing the learning outcomes of a process; it also determines whether effective teaching and learning have taken place in an academic process. Jamil, Tariq and Shami (2012) opine that examinations determine the extent to which educational objectives have been achieved as well as the extent to which educational institutions have served the needs of community and society. This highlights the awareness that examination, also described as testing, extends beyond the frontier of measuring educational or societal objectives. The role played by examination in the education process is to significantly define what transpires in the classroom, how teachers teach and students learn, and its impact on teaching and learning (Khattak, 2012).

In the higher education process, lecturers (instructors) employ several high-stake summative methods to assess learning outcomes; a key purpose of summative assessment is to record, and often grade, the students' performance in relation to the stated learning objectives of the programme (JISC, 2008). These summative assessment methods include paper-and-pencil examinations, assignments, peer and group assessments, and project-based assessments. When classes are large, the manpower needed to effectually assess paper-based exams in bulk is rarely available; deadlines have to be extended in such circumstances, and marking therefore becomes a terrible experience (Bacon, 2012). Nonetheless, deploying any of these high-stake assessment methods in Ghanaian higher education becomes difficult and occasionally ineffective due to large class sizes. The issue of large class size has arisen because of increases in the population, the quest for higher education and the desire for better living conditions (Yelkpieri, Namale, Esia-Donkoh, & Ofosu-Dwamena, 2012). Ricketts, Filmore, Lowry and Wilks (2003) opine that tuition fees are rising sharply in higher education due to the cost of assessing large classes. Conversely, assessments which employ digital or open content shrink the cost of tuition (Wales & Baraniuk, 2008).

Large class size is an issue that bedevils comprehensive high-stake examination of students at the Kwame Nkrumah University of Science and Technology (KNUST). An initial observation of conventional examination showed that most lecturers in KNUST employed objective testing, Multiple Choice Question (MCQ) exams to be precise, as an auxiliary method to measure undergraduate students' academic achievements when confronted with large class sizes. According to Nicol (2007), multiple-choice questions (MCQs) are being progressively used in higher education as a means of augmenting or even substituting up-to-date assessment practices. As a stop-gap approach, the university academic board made it mandatory for all lecturers in KNUST to conduct high-stake end-of-semester summative exams with MCQs for undergraduate programmes. In times of large class sizes and declining resources, MCQs can offer a viable addition to the range of assessment types accessible to lecturers (McKenna & Bull, 1999).
The introduction of MCQs, though effective for assessing large class sizes, has not utterly stamped out the issues associated with conventional high-stake summative examination, somewhat making it unappealing (Archana & Leelavathi, 2013). The issues of examination malpractice, delays in results generation and instant feedback, mismanagement of print resources and invariable human errors due to negligence persist in conventional high-stake examination in higher education. Moreover, there is continually enormous pressure on the Optical Mark Recognition (OMR) device as lecturers who used Optical Answer Sheets (Scantron sheets) submit them for marking. Lecturers occasionally resort to marking the sheets manually because of anticipated delays and erroneous results when marking the optical answer sheets with the OMR device. It has therefore become imperative to look at alternative models for administering high-stake summative examinations at KNUST; hence the need to explore and adopt computer-based examination or e-Examinations.

The study sought to test the practicality of implementing high-stake computer-based examination by exploring examinees' (students') exposure to and performance in computer-based exams, and factors relating to acceptance or rejection of e-exams. The factors considered for the study included prior experience in computer-based exams, digital literacy skills, gender, age and academic standing. The study is limited to MCQs and other objective-based question types; it does not consider the validity of the question types implemented.

# II. Computer-based Examination

Computer-based systems of examination, variously termed "Computer Assisted Testing, Computerized Assessment, Computer-Based Testing (CBT), Computer-Aided Assessment (CAA), Computer-Based Assessment (CBA), Online Assessment, e-Assessment and Web-Based Assessment" (JISC, 2008), are shifting the paradigm of examinations in higher education away from traditional paper-and-pencil examination (Uysal & Kuzu, 2009). Luecht and Sireci (2011) opine that computer-based examination incorporates myriad assessment types, purposes, test delivery designs, and item types appropriate for educational accountability and achievement testing, college and graduate admission testing, and adult education. According to Chalmers and McAusland (2002), pedagogically, computer-based examinations enable instructors to test their students across a wide range of content, reduce instructors' workload especially in the case of double marking, save time and resources, and help to identify students' learning problems by adapting to match their abilities.

# III. Materials and Methods

# a) Research Instrument

An Online Survey System (OSS) was used to obtain data from the students sampled for the study, as it is an effective way to reach and engage with a target audience (SmartSurvey, 2017). The survey contained four (4) questions and was administered to respondents at the end of the study to evaluate their experience and acceptance of e-Examination. The OSS was administered to the respondents by generating a short Uniform Resource Locator (URL) which was posted on the LMS used for the study. The Microsoft Digital Literacy test was also used to assess the digital literacy skills of the respondents.
# b) Survey Participants

Respondents for the survey involved one hundred and sixty-two (n=162) final-year student examinees in the multimedia course who had registered for the second semester of the 2017/2018 academic year. There were 89 (54.9%) males and 73 (45.1%) females involved in the study. It was a prerequisite for the 162 students to sign up to the LMS which was used to administer the e-exams. After signing in to the LMS, students were automatically assigned to a digital literacy test, 5 weekly quizzes for formative assessment, a mid-semester exam and an end-of-semester exam. Except for the digital literacy test, the other examinations were categorized into formative and summative e-examinations. The five weekly quizzes constituted the formative e-exams while the mid-semester and end-of-semester activities constituted the summative e-exams. The e-examinations were scheduled for specific days and timed accordingly. Each quiz had 20 items to be completed in 12 minutes; the mid-semester and end-of-semester exams had 150 items (85 minutes) and 200 items (110 minutes) respectively. All students who completed the computer-based examinations were asked to appraise their experience by answering an online questionnaire directly after completing the end-of-semester CBE; the return rate of responses for the questionnaire was 100% of the sample.

Data were analyzed using means and standard deviations. Inferences were made from the data employing correlational analysis, independent-sample t-tests (with unequal variance) and one-way Analysis of Variance (ANOVA) with a confidence interval of 95% at a 0.05 (5%) level of significance.

# c) Course Content and Implementation

Multimedia in Publishing, a course in the undergraduate Publishing Studies programme, was used to implement and evaluate students' acceptance of, and academic performance in, the computer-based examination. The course adopted a hybrid or mixed model (face-to-face and online learning) as the instructional stratagem. The discrete units of the course were arrayed using Schoology, a cloud-based Learning Management System (LMS) that allows students and faculty to communicate, share resources, host collaborative groups, and stay actively engaged from any device (Schoology, 2018). Students had to sign up to gain access to the contents of the LMS. The strategy for instructional delivery in the course combined theoretical know-how and practical skills carried out in a computer laboratory to help students gain mastery in the planning, designing and development of web content. The course was taught across 12 weeks within an academic semester, interspersed with five weekly quizzes. The weekly quizzes focused on previous lessons taught and were used as "scored formative" assessments to examine and heighten students' comprehension of the course. The quizzes were used as a means of scaffolding students' experience with CBE since it was their first-time involvement. Students' summative assessments were based on mid-semester and end-of-semester examinations which were both proctored in a well-stocked and connected computer laboratory. All the examinations (including the 5 weekly quizzes) were strictly CBE; no pencil-and-paper exams were employed in the study. The weekly quizzes, mid-semester and end-of-semester e-exams were set with objective-type questions which included multiple-choice, True/False, ordering or ranking, fill-in-the-blank and matching questions.
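Since the paper does not name the software used for the analysis, the sketch below is only an illustration, in Python with SciPy and statsmodels, of the inferential procedures described under Survey Participants: Welch's unequal-variance t-test, Pearson product-moment correlation, one-way ANOVA, and the Tukey post hoc comparison reported later in the results. All score arrays, group sizes and means are hypothetical placeholders, not the study's data or code.

```python
# Illustrative sketch of the reported analyses (not the authors' code).
# All scores below are randomly generated placeholders.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
male = rng.normal(50, 12, 89)       # placeholder scores for 89 male examinees
female = rng.normal(50, 12, 73)     # placeholder scores for 73 female examinees

# Independent-samples t-test with unequal variances (Welch), alpha = 0.05
t, p = stats.ttest_ind(male, female, equal_var=False)
print(f"Welch t-test: t = {t:.4f}, p = {p:.4f}")

# Pearson product-moment correlation between two e-exam score series
quiz_a, quiz_b = rng.normal(5.8, 1.5, 162), rng.normal(5.0, 1.7, 162)
r, p_r = stats.pearsonr(quiz_a, quiz_b)
print(f"Pearson r = {r:.4f}, p = {p_r:.4f}")

# One-way ANOVA and Tukey HSD post hoc test across academic standings
groups = np.array(["First"] * 20 + ["Upper"] * 90 + ["Lower"] * 47 + ["Pass"] * 5)
scores = np.concatenate([rng.normal(m, 11, n) for m, n in
                         [(59, 20), (49, 90), (39, 47), (34, 5)]])
f, p_f = stats.f_oneway(*[scores[groups == g] for g in ["First", "Upper", "Lower", "Pass"]])
print(f"ANOVA: F = {f:.2f}, p = {p_f:.4f}")
print(pairwise_tukeyhsd(endog=scores, groups=groups, alpha=0.05))
```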
# Needs Assessment of Examinees

A meta-analysis of research works showed that the digital divide is gradually narrowing and that ICT education is gaining prominence in Ghanaian higher education. This is attributable to the fact that ICT integration is variously perceived as additional motivation to learn deriving from the Hawthorne effect of novelty, or as a skill set to master in addition to the content knowledge addressed (Fluck, Pullen, & Harper, 2009), with institutions providing state-of-the-art technologies and flexibility to engage students to work smarter (Media Planet, 2014). The focus of the needs assessment was to ascertain the personal operational ICT skills of the examinees and the other online technologies available to them (Table 1).

In Table 1, 17.3% and 54.8% of the examinees rated their personal operational ICT proficiency as very good and good respectively, while 25.0% rated themselves as average and 3.0% of the examinees registered poor know-how. Regarding gender, Table 1 also shows that male examinees rated their digital literacy level higher than the females. The overall results indicate a substantial level of digital literacy among the examinees used for the study, though 3.0% of the sample were digital immigrants.

Table 2 shows that there were more visual learners than aural and kinesthetic learners in the study. This implies that most of the examinees are spatial learners; hence, they will better understand and retain information when ideas, words, and concepts are associated with images (Inspiration Software, Inc., 2015). The learning styles similarly influenced the presentation of the test driver's Graphical User Interface (GUI), the activities and the conventions of the question prompts. It was also realized that the majority (73.5%) of the examinees fall within the modal class of 22-25 years. The results also show that there were more males in the modal class as compared to females. The age groups were subsequently recategorized into two groups (25 years of age and under; 26 years and above) to determine whether the performance of examinees and acceptance of e-exams differed between the groups.

Examinees' prior experiences with other CBE systems (i.e. online quizzes and other test drivers) were crucial to the study. Table 2 shows that a minority (14.2%) of the examinees had prior experience with other computer-based assessment systems, which included the Driver and Vehicle Licensing Authority (DVLA) test, the Scholastic Aptitude Test (SAT online) and quizzes from online courses. The data infer that the majority (85.8%) of examinees were novices and needed probationary exposure to the CBE system as they had marginal experience. Examinees' academic standing was also considered as an independent variable to infer whether it would have a significant effect on their performance in the e-Examinations. The results in Table 2 show that the majority (55.6%) of the examinees were within the second-class upper division, implying a standard academic standing among the examinees.

The results of the preliminary study influenced the choice of the test driver for the computer-based examination, the presentation of the Graphical User Interface, the organization of the question prompts, and the test delivery model to implement. Furthermore, these factors were used to examine differences in performance and acceptance of the e-Examinations.
# Scaffolding Examinees' Experience of the Computer-Based Examination

Examinees' ability to effortlessly navigate the e-Examination system was crucial to the study; hence the need to introduce activities that would scaffold examinees' experiences ahead of the actual weekly quizzes structured from the individual units of the multimedia course. The purpose of activity two was to heighten and scaffold the formative experiences and adaptability of the examinees to the CBE system. Activity two was a synchronous home task (all five weekly quizzes, not proctored but equally timed) in which examinees explored new learning outcomes realized uniquely through computerized examinations. The five weekly quizzes were used for formative objective assessment, purposely to motivate and encourage students to keep pace with teaching and learning, and also to monitor their progress on the use of the CBE platform. Activity three comprised the mid-semester and end-of-semester e-Examinations. This activity (the summative e-Examinations) was likewise time bound but proctored under stringent exam conditions in a well-equipped brick-and-mortar computer lab with a low-latency, jitter-free internet connection.

# Setting Objective Question Types for the e-exams

Zakrzewski (2002) argues that objective testing is the most commonly used form of e-examinations. Formulation of question prompts for the e-examinations was based on a synergy of the content of the multimedia course and experience with paper-and-pencil test concepts. The core of any robust system of CBE is the creation of appropriate, user-chosen question pools with appropriate question prompts to be built upon over time, allowing their reuse in suitable circumstances and ensuring time savings. The nature of the question prompts for the e-exams revolved around two commonly adopted Multiple Choice Question (MCQ) types, i.e., the A-Type and the R-Type. The A-Type typically provides four to five options (without any psychometric law behind the number of options) from which the student can choose; the R-Type involves a given theme for each question, where students match the options with the scenarios, and the matching process is introduced by a lead-in question (Abdalla, Gaffar, & Suliman, 2011). Bloom's Digital Taxonomy for evaluating digital tasks was used as a basis to formulate the objective-type questions, as it gives flexibility in framing, classifying and breaking down the learning outcomes and thinking skills expected in every learning task (Churches, 2008). The questions were set to appraise the experiences of examinees from lower-order thinking skills (LOTS) to higher-order thinking skills (HOTS). Churches (2008) and Krathwohl (2002) describe the spectrum from LOTS to HOTS as follows: remembering, understanding, applying, analyzing, evaluating and creating, as shown in Figure 1. The Objective Test Questions (OTQ) adopted for this study were also based on the categorization of the Computer Assisted Assessment Centre (CAAC) by McKenna and Bull (1999), which covers Multiple Response Questions, Extended MCQs, Assertion-Reasoning Questions, Matching Questions, Fill-in-the-Blank Questions, Ranking Order Questions, Sore Finger Questions, Graphical Hotspot Questions and Sequencing Questions; Table 3 maps these question types onto Bloom's Digital Taxonomy. The MCQ types adopted and adapted were modified for the e-exams to develop a new rubric for the question base reflecting the functionalities of the CBE platform.
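To make the question-pool idea above concrete, the following is a minimal sketch, under assumed field names, of how objective question prompts could be tagged with a CAAC question type and a Bloom's Digital Taxonomy level so that they can be filtered and reused across e-exams; none of this reflects the internal data model of the platform actually used in the study.

```python
# Hypothetical sketch: tagging objective question prompts with a CAAC question
# type and a Bloom's Digital Taxonomy level so they can be pooled and reused.
from dataclasses import dataclass, field
from typing import List

BLOOM_LEVELS = ["remembering", "understanding", "applying",
                "analyzing", "evaluating", "creating"]   # LOTS -> HOTS

@dataclass
class Question:
    prompt: str
    qtype: str                   # e.g. "MCQ-A", "MCQ-R", "matching", "fill-in-the-blank"
    bloom_level: str             # one of BLOOM_LEVELS
    options: List[str] = field(default_factory=list)
    answer_keys: List[int] = field(default_factory=list)   # indices of correct options

def filter_pool(pool: List[Question], min_level: str) -> List[Question]:
    """Return questions at or above a given Bloom level (e.g. only HOTS items)."""
    threshold = BLOOM_LEVELS.index(min_level)
    return [q for q in pool if BLOOM_LEVELS.index(q.bloom_level) >= threshold]

# Example: a four-option A-type MCQ tagged at the 'remembering' level
pool = [Question(prompt="Which file format supports lossless image compression?",
                 qtype="MCQ-A", bloom_level="remembering",
                 options=["JPEG", "PNG", "MP3", "AVI"], answer_keys=[1])]
hots_items = filter_pool(pool, "analyzing")   # -> [] for this toy pool
```

Tagging each prompt this way is one plausible route to the reuse and time savings the authors describe, since a balanced paper spanning LOTS to HOTS can then be assembled by filtering rather than rewriting items.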
Depending on the number of correct options the examinee selects, differentiated points (marks) are allotted to a question prompt (see the illustrative sketch further below).

# d) The Architecture of the Computer-Based Examination Test Driver

The e-exam platform used functions on a 3-tier architecture, namely the presentation, logic and data tiers, which is a three-way interaction in a client/server environment (Sarma, 2009). The presentation tier is the Graphical User Interface (GUI) of the CBE platform and represents the top-most level. The function of the GUI is to translate tasks and results into something the user can understand. The logic or business tier coordinates application and process commands, makes logical decisions and evaluations, and performs calculations. The data tier stores or retrieves question prompts from a database or file system. The question prompt is passed back to the logic tier for processing and eventually back to the examinee. The 3-tier architecture gave the researchers the opportunity to fully integrate third-party applications (plugins) and enhanced logic (additional question types) to alter the functions of the e-exam platform. The presentation tier or client-side functionality of the CBE platform is modularized into an authentication or identification module and an assessment or examination module.

Formative experiences: As evident in Table 4, the average scores and standard deviations show that there were variations in the formative e-exams (the digital literacy skill test and the five weekly quizzes) administered to the examinees. The digital literacy test recorded a mean (m=7.91; sd=1.81) which represents the highest mean of all the formative e-exams. This infers that examinees' digital literacy skills, in terms of knowledge and the ability to manipulate and maneuver computers, are substantial. The study wanted to establish whether there was a consistently progressive pattern in the five formative quizzes and, correspondingly, the association between examinees' digital literacy scores and the quizzes. The results from a Pearson product-moment correlation coefficient computed to assess the relationship between the formative e-exams revealed moderate and low positive linear associations between the five quizzes (Table 5). These indicate that there was a progressively marginal pattern between the quizzes, as relatively similar scores were observed. The results also indicate that obtaining a high score in the digital literacy test did not correlate with an increase in the score of any of the other five quizzes; hence, examinees' basic skill in computing did not have an impact on their formative experiences.

Furthermore, the study also estimated the effect of gender on the formative e-examination scores using an independent t-test (p = 0.05, unequal variance). The results show that there is no significant difference between the digital literacy scores of the male (n=89, m=7.63, sd=1.974) and female (n=73, m=7.6, sd=1.855) examinees who took part in the study; t(157.034) = 0.0906, p = 0.9279. In Table 6, the results show that gender had a significant effect on the scores for quizzes 1 to 3, implying differences in the males' and females' formative e-exam scores. However, there was no significant effect of gender on quizzes 4 and 5, implying similarities in the formative e-exam scores for males and females. Though there were gender differences in the first three formative e-exam scores, this improved with the subsequent e-exam scores.
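Referring back to the differentiated-points rule stated just before subsection d), the following is a minimal sketch of one way partial credit could be apportioned for a multiple-response item, assuming a proportional penalty for wrong selections; the function and the penalty policy are illustrative assumptions, not the platform's documented scoring algorithm.

```python
# Hypothetical sketch of differentiated (partial-credit) scoring for a
# multiple-response question: marks are apportioned by the number of correct
# options selected, with an assumed proportional penalty for wrong selections.
from typing import Set

def score_item(selected: Set[int], correct: Set[int], max_marks: float = 1.0) -> float:
    """Award a fraction of max_marks per correct option chosen, minus the same
    fraction per incorrect option chosen; the score never drops below zero."""
    if not correct:
        return 0.0
    per_option = max_marks / len(correct)
    hits = len(selected & correct)
    misses = len(selected - correct)
    return max(0.0, hits * per_option - misses * per_option)

# Example: 2 of 3 correct options chosen, plus 1 wrong option, on a 3-mark item
print(score_item(selected={0, 2, 4}, correct={0, 2, 3}, max_marks=3.0))  # -> 1.0
```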
Summative grading: In Table 7, the two summative CBEs generated different means (m_mid = 47.06, sd_mid = 12.35; m_end = 50.25, sd_end = 11.74). The results indicate that the examinees performed slightly better in the end-of-semester exam as compared to the mid-semester exam. Moreover, the variability of the two examinations shows that the results obtained by examinees in the mid-semester exam were somewhat more varied than those of the end-of-semester exam. There was a low positive linear correlation between the digital literacy test score and the summative e-exam scores (Table 8), inferring that examinees' digital literacy levels had only a marginal effect on mid-semester and end-of-semester performance. Table 8 also revealed a significant pattern between the mid-semester and the end-of-semester e-examinations: the Pearson product-moment correlation coefficient indicated a moderate positive association between the two examinations. The association indicates that an increase in an examinee's mid-semester examination score correlated with an increase in the end-of-semester examination score.

Evaluation of the summative e-exam scores revealed no significant difference (p > 0.05) between male and female examinees (Table 9) who participated in the study. This implies that gender did not play a considerable role in the performance of the examinees in either summative e-exam. Likewise, a one-way ANOVA was conducted to compare the effect of the academic standing of examinees on summative e-exam performance. The results revealed that there was a statistically significant difference between academic standings on the mid-semester (F(3, 158) = 21.42, p = 0.000) and end-of-semester (F(3, 158) = 16.15, p = 0.000) e-exams at the p < .05 level. Regarding the mid-semester e-exam, a Tukey post hoc test revealed that the mean score of examinees whose academic standing was first class (m=59.04, sd=10.53) was significantly different from those with second class upper (m=49.25, sd=10.45), second class lower (m=39.19, sd=10.4) and pass (m=33.74, sd=12.73). Additionally, those whose academic standing was second class upper had a significantly different mean score than examinees with second class lower and pass. However, the scores of examinees with second class lower did not significantly differ from those with pass. These results suggest that higher academic standing had an effect on the mid-semester e-exam scores. With the end-of-semester exam, the Tukey post hoc test revealed that the mean score of first class examinees (m=60.79, sd=9.99) was statistically significantly different from those whose academic standing fell within second class upper (m=51.38, sd=9.29), second class lower (m=44.28, sd=12.83) and pass (m=33.72, sd=9.3). There was no statistically significant difference between the mean scores of examinees with academic standings of second class lower and pass. In a nutshell, these results suggest that high academic standing also had an effect on the end-of-semester e-exam. Examinees with higher grade points performed better in the summative e-examinations; this implies that academic standing can be an influential factor in determining the performance of an examinee in a summative e-examination.

# e) Examinees' Responses after Experiencing the Computer-Based Examination

Upon completion of the summative e-exams, the examinees were given a one-page survey to answer; the survey contained four questions. This exercise was voluntary; however, all 162 examinees responded. The responses given by the examinees, being the first cohort to take a summative e-exam, made the researchers feel a great deal of responsibility for making the summative e-exam experience one which students would want to repeat.
Moreover, there is also the likelihood that the thoughts of the examinees would shape sentiments about innovation in KNUST with regard to summative e-exam implementation. Feedback on the formative e-exams taken by the examinees (97%) suggests that they had a high impact on their preparation towards the summative e-examinations; a minority (3%) found the formative e-exams moderately useful. Another critical question on the survey asked whether the examinees would prefer paper-based examination administration to a computer-based examination. Opinions of the examinees were varied, as 109 (67%) and 31 (19%) supported computer-based examination and paper-based examination respectively, while 22 (14%) opted for both modes of delivering high-stake examinations. This finding supports that of Fluck, Pullen and Harper (2009) and compares with the study by Jonsson, Loghmani and Nadjm-Tehrani (2002), where 95.4% of the sample preferred e-examination. Examinees who preferred paper-based examination confirmed that they are familiar with it; hence transitioning to CBE has not been a laid-back experience for them. With regard to the proportion reporting technical issues in the e-examinations, the majority of the examinees (51%) stated difficulties with the formative e-exams. Chief among the technical hitches encountered by the examinees was internet connectivity, which may be a result of the examinees' dependency on wide-ranging Internet Service Providers to access the e-exams. Moreover, the examinees also complained about the time allotted for the e-exams. However, there were no complaints of such complications in the summative e-exams by the examinees (100%). This is seemingly attributable to the conducive examination atmosphere provided for the e-examination.

# V. Conclusion

E-examination is an innovative initiative that can change the face of high-stake, objective-type examination in KNUST. This study found that examinees' (students') performance in the objective-type e-exams was substantial, reflecting examinees' overall acceptance of e-exams. Furthermore, this case study of using objective-type questions for high-stake summative e-examination has revealed noteworthy evidence about the ICT insurgency, which has implications (Fluck, Pullen & Harper, 2009) for the mandatory implementation of Multiple-Choice Questions for assessment in KNUST. Though it is a known fact that the paper-based exam is the standard in KNUST, capitalizing on e-examination may bring transformational returns for contemporary students who are more motivated by, and adaptive to, digital technologies. The e-exam model employed for the study supports and validates the basis for university-wide implementation of computer-based MCQ and other objective-type questions for summative assessments. Moreover, the positive aspects of e-examination using objective-type questions, and the absence of the undesirable associations realized, can be communicated to first-time examinees to maximize acceptance towards the implementation of e-exams (Boevé, Meijer, Albers, Beetsma, & Bosker, 2015). Finally, having a large pool of question prompts in the database for the e-exams can serve as a measure to curb the pervasiveness of examination malpractice in MCQ test administration.
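The last point, that a large question-prompt database can help curb malpractice, rests on each examinee receiving a differently randomized paper. The following is a minimal, hypothetical sketch of that idea; the function, bank structure and per-examinee seeding are illustrative assumptions, not features documented for the platform used in the study.

```python
# Hypothetical sketch: each examinee receives an individually seeded random
# draw from a large question bank, with option order shuffled per examinee,
# so no two candidates are likely to see identical papers.
import random
from typing import Dict, List

def build_paper(bank: List[Dict], examinee_id: str, n_items: int) -> List[Dict]:
    rng = random.Random(examinee_id)                 # deterministic per examinee
    drawn = rng.sample(bank, k=min(n_items, len(bank)))
    paper = []
    for item in drawn:
        shuffled = rng.sample(item["options"], k=len(item["options"]))
        paper.append({**item, "options": shuffled})  # copy so the shared bank stays intact
    return paper

# Toy two-item bank for illustration
bank = [
    {"prompt": "Which markup language structures web pages?",
     "options": ["HTML", "FTP", "SMTP", "SSH"]},
    {"prompt": "PNG compression is lossless.", "options": ["True", "False"]},
]
paper = build_paper(bank, examinee_id="STU-0423", n_items=2)
```

Combined with proctoring, such per-examinee randomization makes copying a neighbour's answers far less rewarding than it is with a single fixed paper.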
Further studies can be conducted to test the variability of acceptance among different academic levels, as the possibility of students viewing e-examinations differently at different levels could be influenced by factors such as academic and technological experience. Correspondingly, adaptive e-examination administration can be explored for summative assessment.

Fig. 1: Bloom's Digital Taxonomy (adapted from Educational Origami, cited in Munzenmaier & Rubin, 2013)

Table 1: Examinees' personal operational ICT skills by gender (frequencies, with percentages of n=162 in parentheses)

| Gender | Excellent | Very Good | Good | Average | Low | Total |
|---|---|---|---|---|---|---|
| Male | - | 21 (13.0) | 51 (31.5) | 15 (9.3) | 2 (1.2) | 89 |
| Female | - | 7 (4.3) | 38 (23.5) | 25 (15.4) | 3 (1.9) | 73 |
| Total | - | 28 (17.3) | 89 (54.8) | 40 (24.7) | 5 (3.1) | 162 |

Examinees' digital fluency plays a pivotal role in the effective deployment of Computer-Based Examinations.

Table 2: Examinees' learning styles, age, prior CBE experience and academic standing by gender (frequencies, with percentages of n=162 in parentheses)

| Variable | | Male | Female | Total |
|---|---|---|---|---|
| Learning styles | Aural | 29 (17.9) | 29 (17.9) | 58 (35.8) |
| | Visual | 39 (24.1) | 26 (16.0) | 65 (40.1) |
| | Kinesthetic | 21 (13.0) | 18 (11.1) | 39 (24.1) |
| Age | 22-25 | 62 (38.2) | 57 (35.2) | 119 (73.5) |
| | 26-31 | 20 (12.3) | 9 (5.6) | 29 (17.9) |
| | 32-37 | 4 (2.5) | 5 (3.1) | 9 (5.6) |
| | Above 37 | 3 (1.9) | 2 (1.2) | 5 (3.1) |
| Previous experience with other CBE systems | Yes | 11 (6.8) | 12 (7.4) | 23 (14.2) |
| | No | 78 (48.1) | 61 (37.7) | 139 (85.8) |
| Academic standing | First Class | 6 (3.7) | 14 (8.6) | 20 (12.3) |
| | Second Class Upper | 46 (28.4) | 44 (27.2) | 90 (55.6) |
| | Second Class Lower | 32 (19.8) | 15 (9.3) | 47 (29.1) |
| | Pass | 5 (3.0) | - | 5 (3.0) |

Table 3: Objective Test Question types (McKenna & Bull, 1999) mapped onto Bloom's Digital Taxonomy, from lower-order to higher-order thinking skills

Table 4: Descriptive statistics of the formative e-examinations

| Examination | N | Mean (M) | Standard Deviation (SD) | Median | Mode | Highest Score | Lowest Score |
|---|---|---|---|---|---|---|---|
| Digital Literacy Test | 162 | 7.61 | 1.92 | 8.33 | 9.33 | 9.97 | 1.0 |
| Quiz One | 162 | 5.82 | 1.52 | 6.0 | 6.5, 7 | 9.0 | - |
| Quiz Two | 162 | 4.97 | 1.73 | 5.0 | 5.0 | 8.7 | - |
| Quiz Three | 162 | 5.58 | 1.89 | 5.75 | 6.0 | 10.0 | - |
| Quiz Four | 162 | 5.74 | 2.12 | 5.80 | - | 9.33 | - |
| Quiz Five | 162 | 6.29 | 1.36 | 6.33 | 5.0 | 9.11 | 1 |

Table 5: Pearson correlations among the formative e-examinations (r, with two-tailed p in parentheses)

| | Digital Literacy Test | Quiz 1 | Quiz 2 | Quiz 3 | Quiz 4 | Quiz 5 |
|---|---|---|---|---|---|---|
| Quiz 1 | 0.2240* (0.0042) | 1.0000 | | | | |
| Quiz 2 | 0.2984* (0.0001) | 0.4301* (0.0000) | 1.0000 | | | |
| Quiz 3 | 0.2863* (0.0002) | 0.4666* (0.0000) | 0.4016* (0.0000) | 1.0000 | | |
| Quiz 4 | 0.2780* (0.0003) | 0.4353* (0.0000) | 0.4615* (0.0000) | 0.3966* (0.0000) | 1.0000 | |
| Quiz 5 | 0.0917 (0.2461) | -0.0909 (0.2498) | -0.0156 (0.8438) | -0.0246 (0.7558) | -0.0642 (0.4171) | 1.0000 |

Table 6: Effect of gender on the formative e-examination scores

| Quiz | Independent T-test (unequal variance) |
|---|---|
| 1 | t(159.546) = 2.5125, p = 0.0130 |
| 2 | t(159.533) = 2.2410, p = 0.0264 |
| 3 | t(159.189) = 2.0156, p = 0.0455 |
| 4 | t(158.844) = 1.7652, p = 0.0795 |
| 5 | t(157.331) = 0.2504, p = 0.8026 |

Table 7: Descriptive statistics of the summative e-examinations

| Examination | N | Mean (M) | Standard Deviation (SD) | Median | Mode | Highest Score (100) | Lowest Score |
|---|---|---|---|---|---|---|---|
| Mid-Semester | 162 | 47.06 | 12.35 | 47.98 | 39.32 | 72.92 | 17.20 |
| End of Semester | 162 | 50.25 | 11.74 | 50.84 | 51.6 | 83.77 | 15.80 |

Table 8: Pearson correlations between the digital literacy score and the summative e-examinations (r, with two-tailed p in parentheses)

| | Digital Literacy score | Mid-semester Examination | End-of-semester Examination |
|---|---|---|---|
| Mid-semester Examination | 0.2161* (0.0057) | 1.0000 | |
| End-of-semester Examination | 0.1020 (0.1966) | 0.5259* (0.0000) | 1.0000 |

Significant at 0.05*; confidence interval of 95%; Sig. (2-tailed)

Table 9: Effect of gender on the summative e-examination scores

| Examination | Independent T-test (unequal variance) |
|---|---|
| Mid-semester Exam | t(154.839) = 0.9931, p = 0.3222 |
| End-of-semester Exam | t(157.533) = 1.5173, p = 0.1312 |

# References

* Abdalla, M. E., Gaffar, A. M., & Suliman, R. A. (2011). Blueprints in Health Profession Education Series. Retrieved June 1, 2015.
* Archana, M., & Leelavathi, R. (2013). An Effective Computer Based Examination System for University. International Journal of Science and Research.
* Bacon, J. K. (2012). The Impact of Standards-Based Reform on Special Education: An Exploration of Westvale Elementary School (Unpublished doctoral dissertation). Syracuse University.
* Boevé, A. J., Meijer, R. R., Albers, C. J., Beetsma, Y., & Bosker, R. J. (2015). Introducing Computer-Based Testing in High-Stakes Exams in Higher Education: Results of a Field Experiment. PLoS ONE, 10(12), e0143616. doi:10.1371/journal.pone.0143616
* Chalmers, D., & McAusland, W. D. (2002). Computer Assisted Assessment. In J. Houston & D. Whigham (Eds.). Retrieved May 5, 2015.
* Churches, A. (2008, April 1). Bloom's Taxonomy Blooms Digitally.
* Fluck, A., Pullen, D., & Harper, C. (2009). Case study of a computer based examination system. Australasian Journal of Educational Technology, 25(4).
* Inspiration Software, Inc. (2015). What is visual thinking and visual learning? Retrieved May 20, 2015, from Inspiration Software, Inc.
* Jamil, M., Tariq, R. H., & Shami, P. A. (2012). Computer-based vs paper-based examinations: Perceptions of university teachers. Turkish Online Journal of Educational Technology, 11(4).
* JISC. (2008). Computer Assisted Assessment - Summative Online. Bradford University. Retrieved May 3, 2015.
* Jonsson, T., Loghmani, P., & Nadjm-Tehrani, S. (2002). Evaluation of an authentic examination. Retrieved June 2018.
* Khattak, S. G. (2012). Assessment in schools in Pakistan. SA-eDUC Journal, 9(2).
* Luecht, R., & Sireci, S. (2011). A Review of Models for Computer-Based Testing. College Board; University of Massachusetts Amherst.
* McKenna, C., & Bull, J. (1999). Designing effective objective test questions: An introductory workshop. Loughborough: CAA Centre, Loughborough University.
* Media Planet. (2014). Teaching Our Teachers: Arne Duncan on Bridging the Digital Divide. Classroom Technology. Retrieved June 1, 2015.
* Microsoft Corporation. (2018, July 30). Digital Literacy. Retrieved from the Microsoft Corporation website.
* Nicol, D. (2007). E-assessment by design: Using multiple-choice tests to good effect. Journal of Further and Higher Education, 31(1). doi:10.1080/03098770601167922
* Ricketts, C., Filmore, P., Lowry, R., & Wilks, S. (2003). How should we measure the costs of computer aided assessment? Computer Assisted Assessment Conference. Loughborough, UK: Loughborough University. Retrieved June 2018.
* Sarma, S. (2009, November). 3-tier Architecture. Retrieved June 2, 2015.
* Scalise, K., & Gifford, B. (2006). Computer-Based Assessment in E-Learning: A Framework for Constructing "Intermediate Constraint" Questions and Tasks for Technology Platforms. The Journal of Technology, Learning, and Assessment, 4(6). Retrieved June 3, 2015.
* Schoology. (2018). Schoology Learning Management System.
* SmartSurvey. (2017). 10 advantages of online surveys.
* Uysal, O., & Kuzu, A. (2009). A Thesis Proposal: Quality Standards of Online Higher Education in Turkey. EMUNI Conference on Higher Education and Research, Portorož, Slovenia. Retrieved May 3, 2015.
* Wales, J., & Baraniuk, R. (2008). Technology opens the doors to global classrooms.
* Wertsch, J. V. (1984). The zone of proximal development: Some conceptual issues. New Directions for Child and Adolescent Development.
* Yelkpieri, D., Namale, M., Esia-Donkoh, K., & Ofosu-Dwamena, E. (2012). Educational Resources Information Center. Retrieved May 1, 2015.
* Zakrzewski, S. (2002). Implementation of Large-scale, Computer-based Exams. Learning Technology Dissemination Initiative. Retrieved June 1, 2015.