Abstract
The impetus for the development of a measurement and evaluation team for the Robert Morris University School of Nursing and Health Sciences (SNHS) was the commitment of faculty and administration to enhancing the quality of measurement and evaluation processes. Many SNHS faculty members had experienced incidents of academic inconsistency in student exam protocols. The measurement and evaluation team was charged with three goals: fostering faculty use of evidence-based assessment and evaluation strategies that are appropriate for the learner and learning goals, supporting the use of evaluation data to measure the achievement of designated outcomes, and promoting curricular excellence through the use of assessment and evaluation data and policies to enhance the teaching and learning process. This paper examines the results of surveys of undergraduate students, proctors, and faculty within the SNHS regarding new exam protocols, the implementation of the protocols, and their success.
Evaluation in the field of higher education has traditionally been the method by which student performance has been analyzed using multiple assessment tools to demonstrate competency for certification or licensure. These tools typically have been items such as oral and written exams, performance objectives, rubrics, feedback tools, and portfolios. As educators it is our responsibility to promote quality exam construction, administration, and assessment for the optimal outcome of our students on high-stakes assessments, that is, the Nuclear Medicine Technology Certification Board (NMTCB) and American Registry of Radiologic Technologists (ARRT) certification exams for nuclear medicine technology students and the National Council Licensure Examination (NCLEX) for nursing students.
During the fall semester of 2014, a contingent of faculty members from the Robert Morris University (RMU) School of Nursing and Health Sciences (SNHS) met to discuss the quality of measurement and evaluation activities within the school and their consistency across educational programs. From that initial meeting, the RMU SNHS measurement and evaluation team (MET) was formed and has provided leadership for measurement and evaluation activities within the school. By the end of the first academic year, the MET had developed formal bylaws, adopted an exam protocol, adapted assessment criteria, and recommended the adoption of a schoolwide platform for testing and for assessing teaching and learning progress.
The faculty and administrators in the SNHS were and are committed to high-quality teaching–learning practices. They also recognized that to ensure the quality of the teaching–learning practices, outcomes must be assessed and evaluated. For example, assessment data are gathered on student learning, course outcomes, faculty effectiveness, and teaching–learning processes. Important decisions are made using these and other data, such as decisions on whether the learner has achieved the required competencies or whether the teaching–learning practices facilitate learner achievement of the stated objectives.
Ultimately, the purposes of the MET were determined to be, first, fostering faculty development to use evidence-based assessment and evaluation strategies that are appropriate for the learner and learning goals; second, supporting use of evaluation data to measure achievement of designated outcomes; and third, promoting curricular excellence through the use of assessment and evaluation data and policies to enhance the teaching–learning process. This definition of purpose provided the structure for the ongoing activities of the MET.
This paper provides details on establishing a culture of excellence related to measurement and evaluation in the SNHS. This was an interprofessional collaborative process that included faculty from nursing, nuclear medicine technology, and health services administration who were united in the commitment to enhancing the quality of our processes.
MATERIALS AND METHODS
Best practices in the development and use of assessment and evaluation provided guidance for the MET. The works of Oermann and Gaberson (1), Ambrose and Mee (2), and Stonecypher and Willson (3) were the initial resources used to guide the first steps in the development of the exam protocol. Oermann and Gaberson described evaluation as “an integral part of the instructional process.” Internal curriculum evaluation, as described by Ambrose and Mee, comprises “methods used to measure outcomes that are enumerated in the course syllabi.” External curriculum evaluation, in contrast, involves “methods used to compare a student or group of students to the overall student population,” such as national certification exams. Both internal and external evaluations were considered when the MET discussed how to design the evaluation process.
In their article, Ambrose and Mee (2) state that “the purpose of evaluation is to drive the curriculum so that the students receive the best possible education in their health care profession, become excellent practitioners, and ultimately provide a worthwhile service to the community.” The commitment to determining the quality of the educational experiences was important to the MET members, and long discussions were held as to how to enhance the evaluation process to achieve the best possible outcomes.
In addition to the commitment to academic excellence, the MET was also concerned about promoting academic integrity. Stonecypher and Willson (3) found that “cheating in higher education is prevalent, with 21 percent to 90 percent of college students from all majors reporting cheating.” The authors encouraged the development of policies and processes that emphasized the commitment to promoting academic integrity.
Kotter (4), an international scholar on the topic of change, refers to “The Big Opportunity.” In fact, The Big Opportunity is the center of his 8-step model for leading change. Kotter’s model provided the structure for the change process focused on enhancing measurement and evaluation methods at the SNHS. The Big Opportunity for our school was a commitment to bring about change related to measurement and evaluation.
The impetus for development of a MET was the SNHS faculty and administration commitment to enhancing the quality of measurement and evaluation processes. In a bold initiative to strengthen RMU’s SNHS curricula, the dean invited an independent consulting group to review the school’s undergraduate curriculum processes and policies and offer recommendations. Guided by the independent consulting group, RMU’s SNHS faculty and student needs were identified through an extensive programmatic evaluation. Focus groups and survey data yielded further faculty insight, allowing specific recommendations on testing processes and policies. Among the top development needs identified by the faculty was the desire to enhance the quality of measurement and evaluation within the SNHS. Specifically, the faculty wanted educational development opportunities related to improving measurement and evaluation. Additionally, the faculty cited the need to form a committee dedicated to best practices in testing and evaluation. In recognition of the importance of evidence-based measurement and evaluation to the entire school, the new initiative was to be inclusive of all of RMU’s SNHS program offerings at all levels.
Led by the associate dean, the first order of business was to name the new committee. In a lighthearted contest, the associate dean asked all faculty members to submit possible names for the new committee, which helped to establish buy-in. The stakes were high: a modest gift card was given to the faculty member with the winning suggestion. After the faculty vote, the school named the committee the Measurement and Evaluation Team, or MET.
Second, the newly formed MET wrote governing bylaws (Appendix A). Bylaws are a set of rules that guide a committee’s operations and activities. The bylaws established the MET’s purposes: to foster faculty development in the use of evidence-based assessment and evaluation strategies that are appropriate for the learner and learning goals, to support use of evaluation data to measure achievement of designated outcomes, and to promote curricular excellence through the use of assessment and evaluation data. Finally, the MET would develop processes that would enhance teaching and learning.
After establishing the MET, the next task was to determine best practices in evaluation strategies with an initial focus on testing. The MET consisted of faculty with varying levels of teaching experience at the undergraduate and graduate levels. Many of the faculty members had experienced incidents of academic dishonesty with students. A brainstorming session was held to discuss current exam practices used by faculty members. Current faculty exam practices included methods used during the testing procedure and methods of test analysis. Testing procedure methods used by the faculty were aimed at deterring academic dishonesty. Such methods included having students place all belongings at the back of the room, sitting every other seat when possible, and signing an integrity statement on the first page of the exam booklet. In addition to the brainstorming session, a visual demonstration was conducted at a department meeting by a graduate student who showed various ways students could cheat during an exam. This exercise was illuminating for all faculty members, and a decision was made to develop a test administration policy and procedure for the SNHS.
In addition, test construction was a concern among the MET. In an effort to promote curricular excellence in measuring outcomes, a decision was made to ensure a test design that would facilitate student success. These efforts focused on determining best practice in test construction and providing faculty development. Although several of the MET faculty used test blueprints for exam construction, several SNHS faculty members voiced lack of knowledge about how to design or use a test blueprint.
The focus of the MET became 2-fold: supporting faculty through the development of test construction and developing a testing administration procedure. The literature was reviewed for best practices in test construction and administration, and surprisingly, evidence-based information was sparse. Two textbooks served as a guide for creating test construction and administration procedures (1,5). The MET reviewed several chapters in the textbooks and began discussions of what would work in the SNHS.
The first focus was faculty development in test construction. The MET reviewed recommended practices in the noted evaluation textbooks (1,5) and invited several expert consultants to present faculty development workshops on measurement and evaluation. Workshop topics included the myths of testing, composition of test questions, meaning of test scores, and item analysis of test results. Information was also provided on test-blueprint development and ethical issues of testing. In addition to the workshops, all SNHS faculty members were provided with a booklet on critical thinking and test-item–writing development (6).
On the basis of textbook reviews and expert consultant recommendations, the following practices were recommended by the MET for implementing systematic test development and design. The first was the use of a test blueprint to inform test question topics, objectives, complexity, and degree of emphasis, and the second was the use of a cover page with general test directions and an integrity statement for students to sign (1). The third recommended practice was the use of item-writing guidelines such as a logical sequence of questions, grammatical consistency, use of the same font and type size throughout the exam, and avoidance of crowding of test items on each page (1,5). For prelicensure students, the fourth recommended practice was to consider test questions that mimic the NMTCB, the ARRT, and the NCLEX items using the provided guide (6) and the NMTCB, ARRT, and NCLEX practice analyses. Finally, the fifth recommendation was to consider using online commercial testing software that mimics the NMTCB, ARRT, and NCLEX for delivery of the assessments.
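To make the first of these recommended practices concrete: a test blueprint is essentially a matrix that maps content areas and cognitive levels to a planned number of items, so topic coverage and degree of emphasis can be checked before item writing begins. The sketch below is illustrative only; the content areas, cognitive levels, and item counts are hypothetical and are not drawn from the SNHS blueprints.

from collections import defaultdict

# Hypothetical blueprint rows: (content area, cognitive level, planned items).
blueprint = [
    ("Radiation safety",    "Recall",      5),
    ("Radiation safety",    "Application", 5),
    ("Instrumentation",     "Application", 8),
    ("Clinical procedures", "Analysis",    12),
]

planned_exam_length = 30

# Total items planned across all blueprint rows.
total_items = sum(items for _, _, items in blueprint)
print(f"Items planned in blueprint: {total_items}")

# Degree of emphasis per content area, as a share of the whole exam.
emphasis = defaultdict(int)
for area, _, items in blueprint:
    emphasis[area] += items
for area, items in emphasis.items():
    print(f"{area}: {items} items ({items / total_items:.0%})")

if total_items != planned_exam_length:
    print("Warning: blueprint does not match the planned exam length.")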
As dialogue occurred on these recommended practices, it became apparent that the testing software currently used by the SNHS was outdated, with several incidents of incorrect scoring of student exams. The MET decided that a comprehensive review of available commercial testing software for improved statistical analysis of test results would become a third primary goal of the MET to ensure best outcomes in measurement and analysis.
A subcommittee of 4 MET faculty members reviewed 3 different commercial testing software platforms to be considered for use by all students in the SNHS. Considerations were made of the costs to the SNHS and the students, customer support from the commercial testing software company, information technology resources and requirements, security and confidentiality, and ease of use for the faculty and students. In the fall semester of 2015, the SNHS implemented the use of an online testing platform.
As part of the volunteer “army,” students, faculty, and graduate assistants (GAs) must be diligent and attentive to detail in the implementation of a comprehensive testing policy to facilitate success and ensure the creation of an appropriate testing environment. To test the exam procedure, a trial implementation began in the summer semester before the implementation of the online commercial testing software. The university offered fewer classes in the summer sessions, providing a less busy environment for the trial. Members of the MET were champions of the new exam procedure and worked with faculty and others to roll out the new process. The details of the exam procedure are provided in Appendix B and define the roles of the faculty, proctor, and student. The exam procedure used GAs as proctors, and one GA was designated as the trainer and coordinator for these proctors. As part of the exam procedure, each faculty member submitted a request for a proctor and was assigned a GA to serve in this role at the designated time and location. Faculty were asked to inform their students of the new process and to answer any questions from the students related to the new process. Most faculty announced the new testing procedure at the beginning of the semester and reminded students before each exam about the process to reinforce implementation.
In addition to recommendations for creating a standardized testing environment, the new procedure reinforced best practices for exam creation and evaluation. Faculty development had already begun on topics such as the use of exam test blueprints and ex post facto analysis. Many faculty were already using these tools for quality assurance, and the formalization of the exam procedure helped to support and enhance these ongoing efforts.
The summer trial revealed the need for some revisions such as improved communication between proctors and faculty. These revisions were made for quality improvement, and the new exam procedure was launched officially in the fall semester. The full implementation in the fall revealed some additional challenges such as ongoing communication issues and the need for more proctors to serve in these roles. Additional GAs were hired and trained as proctors, and GAs serving in other roles were trained to serve as additional proctors in times of high need such as midterm and finals weeks. The refinement of processes continued in the fall and spring terms. The implementation of the testing policy has been successful related to professionalism and diligence of faculty, GAs, and students. All 3 parties contribute to the effectiveness of the testing policy by following the exam procedures and providing feedback necessary for improvements. A formal evaluation process was undertaken to assess the outcomes of this process from stakeholders.
Kotter’s 8-step model (4) was used by the MET for organizational change as follows. The first step in the model, creating a sense of urgency, was stimulated by multiple factors that included concerns about high-stakes testing, the growing sophistication of faculty as educators, and a commitment to a culture of excellence. The second step, building a guiding coalition, drew on faculty and administrators who had expertise in, or a commitment to, enhancing measurement and evaluation processes. The third step, forming a strategic vision and initiatives, was accomplished through the MET bylaws (Appendix A). The fourth step, enlisting a volunteer army, was accomplished by the enlistment of proctors and faculty and the definition of their roles within the exam protocols. The fifth step, removal of barriers, was accomplished by breaking down silos, such as separate departments within the SNHS, to create a cohesive and collaborative unit. The sixth step, generating short-term wins, was addressed at monthly MET meetings in which positive results were shared and celebrated. In the seventh step, sustaining acceleration, the MET hired more proctors and consulted experts on creating test blueprints, writing items, creating cover pages, and implementing new exam protocols. The eighth and final step of Kotter’s model, instituting change, was accomplished by the implementation of the MET’s new exam protocol and its adoption by the SNHS policy committee for all courses.
RESULTS
The MET finalized surveys, which were sent electronically via QuestionPro Survey Software to undergraduate students, proctors, and faculty in the fall and spring semesters after implementation. Proctors and faculty received surveys in the fall semester and students in the spring semester. Each survey included questions specific to the needs and perceptions of each group of stakeholders. Both quantitative and qualitative data were collected and were used to further modify and refine the testing procedures.
Preliminary review of the survey outcomes from proctors and faculty occurred at the end of the fall term. In the middle of the spring semester, student surveys were completed. Survey results were formally reviewed at the MET meeting in April 2015 and reported at the end-of-year faculty retreats. Data were collected using a descriptive, cross-sectional survey design. Both quantitative and qualitative data were used to further modify and refine the process.
Faculty Results
Faculty across all of the SNHS were sent a 7-question Likert survey that addressed the feasibility and implementation of the new exam procedures. One open-ended question at the end of the survey sought faculty suggestions for improving the exam process. There was no attempt to distinguish graduate from undergraduate faculty. Although 27 full-time faculty were sent the electronic survey, only 16 were completed (59.2%). Descriptive data were obtained related to usefulness and compliance with the exam process, use of test blueprints for exam creation, type of exams given (paper/pencil vs. online), use of proctors, reasons for not using proctors, and satisfaction with proctors.
Of the faculty surveyed, 54% reported use of blueprints for every test. Seventy-six percent reported using test blueprints 50% of the time or more, and 23% of the faculty admitted to using blueprints no more than 50% of the time. Use of proctors on a regular basis was reported by 77% of the respondents, with 56% of those being “satisfied” or “very satisfied” with the experience. Reasons for not using proctors included small class size (<20 students), lack of proctor availability, conflicting schedules, and perceived problems with proctor behaviors (late or distracted proctors). Barriers to proctor use and compliance with the exam process were further elaborated in an open-ended question: “What suggestions do you have for improving the exam process?” Responses included the need for consistent implementation of the policy, education and coordination of the proctors, and a standardized process for scheduling proctors using a GA as the coordinator.
Proctor Results
The pool of proctors consisted of faculty, administrators, secretaries and administrative support personnel, and GAs. In total, 9 surveys were sent to the GAs only. The survey consisted of 3 items that used a Likert scale for scoring and one open-ended question. The Likert-format items included “How often did you serve as an exam proctor during the fall 2014 semester?” “Did you receive proctor training?” and “Training provided me with the necessary information to serve effectively as a proctor.” One hundred percent of the GA proctors responded to the survey. Sixty percent of the respondents served as proctors more than 15 times during the semester. Although 80% of the proctors were trained in the proctor role, only 40% described the training as effective.
Further delineation of proctor perceptions was elicited from one open-ended question: “Please offer comments on things that worked and things that didn’t, communication, and testing boxes and supplies.” Suggestions for improvement included the need for more proctors and improved communication between proctors and instructors. Use of standardized testing kits (containing pencils, calculators, scratch paper, earplugs) was found to be very helpful to the proctors.
Undergraduate Student Results
The student survey was sent electronically to undergraduate students in the SNHS. There was no attempt to distinguish between groups of students within the SNHS for purposes of this evaluation. Eighty-nine students received the electronic survey at the end of the semester. The survey was initially accessed by 69 students but was completed by 58 students, for a completion rate of 84%. Sixty-two percent of the students responded from smart phones, and 35% used laptop computers; only 3% accessed the survey using electronic tablets.
The 5 evaluation questions were as follows:
Are you aware that there is an SNHS examination protocol? (yes = 1, no = 2)
Did you receive information, written or verbal, about the examination protocol? (yes = 1, no = 2)
To what extent was the examination protocol implemented consistently? (always = 1, sometimes = 2, never = 3)
Were the testing rules easy to understand? (yes = 1, no = 2)
Were you provided with all the supplies needed for your examination? (yes = 1, no = 2)
The responses indicated that the exam protocol was explained to students before implementation, as recommended by the MET on the basis of best practices for testing. Students overwhelmingly (98.28%) reported that they were aware of the SNHS exam policy and procedures (mean, 1.01; confidence interval, 0.983–1.051). Students also found the testing rules easy to understand (98.28%) and reported that they were provided with the needed supplies in the testing situation (mean, 1.086; confidence interval, 1.013–1.159). There was less agreement on protocol implementation, with 65.5% of students rating the extent of consistent adherence to the protocol as “always” and 32.5% describing consistency as “sometimes.”
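For readers interested in how figures of this kind can be derived, the sketch below shows one conventional way to compute a mean and a normal-approximation 95% confidence interval from yes/no items coded 1 and 2. The responses used are hypothetical, and the survey software’s exact method is not described in this paper, so the output is not expected to reproduce the reported values.

import math

# Hypothetical coded answers (1 = yes, 2 = no) from 58 students.
responses = [1] * 57 + [2] * 1

n = len(responses)
mean = sum(responses) / n

# Sample standard deviation and the usual normal-approximation 95% CI.
sd = math.sqrt(sum((x - mean) ** 2 for x in responses) / (n - 1))
margin = 1.96 * sd / math.sqrt(n)

print(f"mean = {mean:.3f}, 95% CI = ({mean - margin:.3f}, {mean + margin:.3f})")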
Outcomes of the testing protocol were assessed using 4- and 5-point Likert scales for each question. The questions related to perceptions of the exam policy and procedures on testing environment, testing anxiety, and intention to cheat. Table 1 is a summary of outcome measures and student responses.
Both quantitative and qualitative student responses were collected through the 3 questions listed in Table 1 and the following 2 open-ended questions: “How was your overall experience with the new testing rules?” and “What recommendations do you have for improving the exam protocol?” Overall, undergraduate student perceptions of the testing protocol were varied. Although 41.38% of students reported that the testing procedures contributed to a positive testing environment, nearly a third of the students (31%) felt it had no relationship to creating a positive testing environment and 27.5% of the students perceived a negative impact. Over half (n = 31) of the students reported increased anxiety during testing as a result of the exam protocol, as reflected by their choice of “disagree” (37.5%) or “strongly disagree” (17.8%) with the statement related to decreasing anxiety during testing. Some of the student comments to the open-ended questions provide insight into student rationale for their quantitative choices. For example, one student responded, “I like the exam protocol because it makes testing consistent in every class. In the beginning it felt like everyone was accusing us of cheating but once we got more information about it I understood it better. I feel like it will prepare us for boards better as well.” Another student responded, “It increases testing anxiety significantly by having two people in the room constantly walking around and staring at me. I am already stressed about taking the test but having such a strict protocol with two proctors always watching and needing to put absolutely all of my belong[ing]s on the opposite side of the room increases my stress to the point that I believe it has decreased my performance on some exams.” Another response was, “I think it makes me take the nursing program more seriously because it shows that the faculty of those programs do, too.” Finally, one student responded, “The professors watch the students like hawks and sometimes create an uncomfortable environment.”
DISCUSSION
One interesting and somewhat unexpected theme emerged in both open-ended questions: there were 24 comments related to stress associated with restrictions on use of personal pencils (47.7% of responses). Closely associated with this complaint was the inability to have water or a snack while taking the test. The MET’s recommendation was for the removal of all nonessential items that may serve to assist in academic dishonesty. Per the MET’s literature review, this recommendation was based on Stonecypher and Willson’s article (3) stating that 21%–90% of college students from all majors reported cheating. Additionally, the removal of nonessential items is used for high-stakes exams such as the NMTCB, ARRT, and NCLEX. Simulation of the anticipated exam environments will also aid in decreasing the anxiety levels experienced by the students during high-stakes exams.
Because the exam protocol was instituted with the goal of minimizing opportunities for cheating or the intention to cheat, the final question, related to intent to cheat, was of primary concern, as this was the third goal of the MET. Sixty percent of the students (n = 35) reported that the protocol either significantly decreased or decreased the intent to cheat, although a substantial number of respondents (39.6%) felt that the protocol had no effect on the intention to cheat. Although some individuals felt that there was no effect on the intent to cheat, the MET determined that the resulting outcomes would better reflect the true assessment of students’ knowledge and capabilities.
The creation of the MET has provided the SNHS faculty with a blueprint for forming a positive exam environment for the students. Initially, the MET focused on a means to enhance measurement and evaluation within the SNHS. By identifying these preliminary needs, the MET was able to create a basis for constructing an exam, writing a blueprint, developing an exam cover page, and preparing the students within the SNHS for their respective professional certification exams.
CONCLUSION
Although the initial implementation of the exam protocol was met with anxiety and trepidation, it has proven to be an effective platform for the SNHS in moving forward. Strong protocols have been developed for creating exams, consistent environments for exams, and descriptions of the roles of those involved in the exams, that is, professors, proctors, and students. Students who have been introduced to the exam protocol have grown accustomed to the methods and are vigilant in not deviating from these new protocols. One of the initial objectives for developing the protocols was to create a deterrent for those students who may have, in the past, considered some form of academic dishonesty. However, in evaluating the surveys given to the undergraduate students using the new protocols, it became apparent that almost 40% of the students felt that the protocols had no effect on their intention to cheat. Although 60% of the respondents recognized the benefits of the new system and its creation of an environment that replicates future high-stakes exams, these final statistics were not as prominent as anticipated. The MET has been able to develop high-quality teaching and learning practices while affording students an opportunity to enhance their own future and become successful in their chosen health-care fields.
DISCLOSURE
No potential conflict of interest relevant to this article was reported.
APPENDIX A: BYLAWS OF MET (AMENDED MAY 2016)
Article I: Purposes
The purposes of the MET are to:
1. Foster faculty development to use evidence-based assessment and evaluation strategies that are appropriate for the learner and learning goals.
2. Support utilization of evaluation data to measure the achievement of designated outcomes.
3. Promote curricular excellence through the use of assessment and evaluation data and policies to enhance the teaching–learning process.
Article II: Membership
Membership in the MET shall include:
1. SNHS faculty who have expertise or interest in the topic of measurement and evaluation.
2. SNHS faculty representing undergraduate- and graduate-level instruction from Nursing and Health Sciences.
3. A nonvoting advisory member from outside the organization as needed.
4. Nonvoting student members representing Nursing or Health Sciences.
5. A committee chair; the chair shall be the associate dean or other individual designated by the SNHS dean.
The terms of service are as follows:
1. Members shall serve a 2-year term and have the option to agree to additional terms.
2. Membership may be staggered so new members overlap terms of service with existing members.
Article III: Quorum and Proxy
A majority of the total number of members shall constitute a quorum. Proxies are not permitted.
Article IV: Amendments to Bylaws
The bylaws may be altered, amended, or repealed and new bylaws may be adopted by majority vote of the members.
APPENDIX B: EXAM PROCEDURE—SNHS (UNDERGRADUATE)
Standardized Exam Development and Review (Recommended)
1. All exams should have a cover sheet that includes directions and signature confirmation related to academic integrity.
2. Format for exams should be consistent throughout the exam (fonts, numbering vs. lettering of responses, punctuation, page numbering).
3. Exams should be created using an exam blueprint.
4. Exams will be reviewed in a faculty peer review process prior to administration.
5. Ex post facto analysis and documentation of the changes made to the exam as a result of the analysis should be kept by the faculty member who creates the exam.
6. Faculty members are encouraged to carefully examine ex post facto analysis of their exam and wait to post grades until this review has been conducted.
Standardized Expectations for the Testing Environment (Required)
1. In addition to the faculty member, at least 1 proctor should be used for every exam.
2. Students may be required to provide proof of identity when entering the testing environment.
3. Photo IDs should be required for standardized exams (e.g., HESI, NMED mock boards).
4. Faculty reserve the right to assign seats as students arrive for the exam.
5. Faculty should ensure adequate space between students or privacy filters if possible.
6. Late arrivals at exam will be admitted per the discretion of the faculty member.
7. Students must leave all personal items such as backpacks, books, papers, cell phones and other handheld devices, purses, briefcases, tissues, candy, gum, cough drops, beverage bottles or cups, good-luck charms, and so forth in a designated area to be retrieved after the exam is completed as the student exits. Outerwear such as coats and jackets, caps, hats, or hoods of any kind may not be worn. Sunglasses or visors and personal earplugs or earbuds may not be worn.
8. If there is a medical condition that requires access to food or drink during an exam, the student must make arrangements with the faculty member prior to the exam administration.
9. The faculty member should inform students about accessories they are permitted to use during the exam. Faculty members should provide those accessories (e.g., electrocardiography ruler, calculator).
10. Students are required to use pencils, scrap paper, earplugs, or calculators provided to them by the teacher and return those items when they complete the exam.
11. Students are not permitted to share items during an exam.
12. As much as possible, the faculty member should maintain a physical environment conducive to testing (e.g., adequate lighting, comfortable temperature, and minimal interruptions).
13. The faculty member should specify time limits for the exam.
14. Students should be reminded that once the exam begins they cannot leave the room unless an emergency arises.
15. Students should remain seated during the exam. If there is a question related to the exam (e.g., incomplete exam, missing pages, and misnumbered items) or concerns, students should raise their hand and wait for the faculty member or proctor to respond. The faculty member or proctor should respond to a student’s problem and raised hand in a timely manner.
16. Students should be notified that once the exam has started, the faculty member or proctor cannot answer any exam-item questions. Faculty members and proctors should provide no hints (verbal or nonverbal) on the correct answer for exam items. If students have a question about an exam item, they should be instructed to note their question on the exam booklet and the teacher should review these questions after the exam is completed.
17. The faculty member should designate responsibility for collecting the completed exams and inform the students of the process before the exam begins, in order to minimize congestion and noise when those who have completed the exam exit the room.
18. Disruptive students should be asked to leave the classroom and their exams should be confiscated. Class disruption is a violation of the student code of conduct (http://studentlife.rmu.edu/student-conduct/). Security should be notified if necessary.
Proctor Guidelines
1. The proctor should discuss specific exam details with the faculty member prior to the exam administration.
2. The proctor should arrive at least 10 min prior to the exam time.
3. Once the exam has started, the proctor cannot answer any exam-item questions related to content. The faculty member will instruct students to note any questions or concerns on the exam booklet. This message should be reinforced if questions arise.
4. Students should remain seated during the exam. If there is a concern, students should raise their hand and wait for the faculty member or proctor to respond. The proctor should provide no hints (verbal or nonverbal) on correct answers for exam items.
5. The proctor should supervise the students taking the exam, observing them to make sure they are on task during the exam and not in distress or being disruptive to other exam takers.
6. Proctors may choose to update students on the remaining testing time or refer them to the time that is automatically recorded on computerized exams.
Academic Integrity
Robert Morris University Academic Integrity Policy (http://academicaffairs.rmu.edu/academic-integrity) states that cheating includes but is not limited to the following:
1. Copying another student’s work with that student’s knowledge.
2. Copying another student’s work without that student’s knowledge.
3. Using prohibited devices during exams, such as calculators, cell phones, and personal digital assistants.
4. Soliciting or distributing exams, or information about exams, from or to other students.
5. Misrepresenting one’s identity in a course.
6. Misrepresenting entrance and admissions qualifications.
7. Allowing another person to take a student’s exam.
8. Allowing another person to take a course in a student’s name.
A student identified as cheating will be asked to hand in the exam immediately and will be reported to the university academic integrity committee. Academic sanctions will be determined by the faculty member. It is recommended that each faculty member determine the sanction related to cheating and share this information with the students at the beginning of the course. This information should also be included in the course syllabus.
Absence from Exam
Students must advise the faculty member prior to the start of the exam if they anticipate that they will be absent. Only students with legitimate excuses should be permitted to make up missed exams.
Details about notification related to anticipated missed exams should be included in the course syllabus.
Footnotes
Published online Aug. 3, 2018.
Received for publication February 22, 2018.
Accepted for publication July 26, 2018.