Electronic Rubric Grading: Establishing a Foundation for the Future

Presented by: Jayzona Alberto and Jorge Godinez

A challenge common to most institutions of higher education is measuring learning outcomes through performance-based assessments. Another obstacle many colleges and universities face is the transition from paper to electronic grading. At our institution, performance-based assessments have been transformed into interactive, electronic versions in which faculty graders use their computers or mobile devices to submit scored rubrics, complete with feedback for students. A major advantage of the software we use is the ability to link learning outcomes to assessments, which yields robust reports displaying longitudinal data for individual students and for each cohort.
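
To illustrate the outcome-linking idea described above (not our vendor's actual data model, which is internal to the software), the following minimal Python sketch shows one way a rubric criterion could carry learning-outcome tags so that every submitted score is traceable to institutional (ILO) and program (PLO) outcomes. All class and field names here are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Criterion:
        description: str
        max_points: int
        outcomes: list[str]        # outcome tags, e.g. ["ILO-2", "PLO-4"]

    @dataclass
    class Rubric:
        exam: str
        criteria: list[Criterion]

    @dataclass
    class ScoredRubric:
        rubric: Rubric
        student_id: str
        term: str                  # e.g. "2015-Spring"
        scores: list[int]          # one score per criterion, in order
        feedback: str = ""

Because each criterion carries its own outcome tags, a single grading pass yields data points for every linked outcome at once, which is what makes the longitudinal reporting possible.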

A pilot test, launched in December 2014 and running through February 2015, trained 12 faculty leaders as graders for four exams. Staff entered the rubrics into the software and tagged institutional and program learning outcomes to each criterion and rubric. The faculty graders were provided with tablets to use during grading. Staff members leading the pilot observed the graders across multiple sessions to confirm ease of use, and after collecting faculty feedback, they refined the setup and the grading process. Additional training sessions and guides were then provided to the remaining faculty members. In March 2015, electronic rubric grading was fully launched at our institution across different portions of the curriculum.

Developing, implementing, and launching electronic rubric grading was challenging, yet it produced numerous benefits for our program. Faculty, staff, and administration can now generate richer reports that measure students’ institutional and program learning outcomes, and longitudinal reports built on this data track those outcomes over time. The transition to electronic grading has also produced a streamlined, more efficient workflow; a lower likelihood of miscalculations due to human error; quicker turnaround in releasing grades and feedback to faculty members and students; and data to support recommended improvements to the curriculum. This advancement positions our program at the forefront of assessment technology in higher education.
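
As a companion to the earlier sketch, here is one illustrative way the longitudinal roll-up could work: scores are grouped by outcome tag and term, then expressed as the share of available points earned. This assumes the hypothetical Criterion and ScoredRubric classes sketched above; it is not our vendor's implementation, whose reports are generated inside the software itself.

    from collections import defaultdict

    def outcome_averages(scored_rubrics):
        """Average share of available points earned, per (outcome, term).

        Assumes the hypothetical ScoredRubric/Criterion classes
        from the earlier sketch.
        """
        # (outcome, term) -> [points earned, points possible]
        totals = defaultdict(lambda: [0, 0])
        for sr in scored_rubrics:
            for criterion, score in zip(sr.rubric.criteria, sr.scores):
                for outcome in criterion.outcomes:
                    totals[(outcome, sr.term)][0] += score
                    totals[(outcome, sr.term)][1] += criterion.max_points
        return {
            key: earned / possible
            for key, (earned, possible) in totals.items()
        }

Sorting the resulting keys by term for a fixed outcome gives that outcome's longitudinal trend at the cohort level; filtering the input on student_id first gives the same trend for an individual student.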

 

Intended Structure of Session

0:00 – 0:02: Introduction of Presenters

0:02 – 0:05: Background on our Institution

0:05 – 0:08: Laying Out the Logistics (Including Configuring the System and Establishing a Clearer Process)

0:08 – 0:13: Faculty Buy-in and Training (Developing Manuals, Cheat Sheets, & Guidelines)

0:13 – 0:18: Pilot Testing with Lead Faculty (12 of 68 Faculty Members)

0:18 – 0:20: Addressing Issues during Pilot Test Period

0:20 – 0:25: Launching Electronic Rubrics (Large Scale)

0:25 – 0:35: Live Demo Simulation of Rubrics

0:35 – 0:38: Our Conclusions and Plans for Improvement

0:38 – 0:45: Interactive Q&A with Presenters

 

Session Learning Outcomes

By the end of this 45-minute concurrent session, participants will be able to:

1. Develop a plan to implement electronic rubrics on a large scale at their institution.

2. Discuss the benefits of using our software of choice for electronic grading.

3. Determine methods of addressing issues during pilot test periods.

4. Explain the importance of generating longitudinal reports from rubric results to help improve student outcomes.

Keywords: Assessment, Evidence of Impact, Learner Analytics.
