Improving assessment with intelligent automation
Rubrics are widely used to assess essays, lesson plans, reflections on professional practice, and teaching portfolios. Within the Melbourne Graduate School of Education, rubrics have been found to:
- clarify required standards of work for students and staff
- improve comparability of marking by staff teaching into the same subject
- provide a guide for staff to target feedback to students.
However, despite their utility, rubrics do not solve every assessment problem. Marking in professional learning remains labour-intensive, particularly when double marking is required, and the task is still essentially manual. There is also variability in the quality of rubrics developed and of the feedback provided to students. Research evidence shows that even high-quality feedback is often not utilised, or not able to be utilised, by students to improve performance (Price and Handley, 2010). In addition, experienced educators worry that rubric-based marking does not encourage recognition of outstanding or particularly creative responses of a very high standard. Thus, while the use of rubrics has generated much improvement in assessment and feedback, there is still more to achieve.
The basic premise underpinning this project is that the effectiveness and productivity of rubric-based assessment can now be enhanced by applying modern assessment analytics. In particular, applications that use new machine learning techniques can simultaneously reduce staff workload, improve comparability, increase sensitivity to non-standard or creative responses, and improve the timeliness, quality and quantity of feedback to students. This involves integrating the use of rubrics with measurement science techniques and digital technologies such as natural language processing, semantic analysis, automated essay marking, machine learning, and automated feedback systems.
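To make the idea concrete, the sketch below shows one way a very simple matcher might compare a piece of student writing against rubric level descriptors. It uses plain bag-of-words cosine similarity as a stand-in for the far richer natural language processing the project proposes; the function names, the rubric structure, and the rubric text are all illustrative assumptions, not the project's actual design.

```python
import math
from collections import Counter

def vectorise(text):
    """Represent text as a bag-of-words vector (word -> count)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def score_against_rubric(response, rubric):
    """Return the rubric level whose descriptor best matches the response.

    `rubric` maps level names to descriptor text (a hypothetical structure)."""
    vec = vectorise(response)
    return max(rubric, key=lambda level: cosine(vec, vectorise(rubric[level])))

# Illustrative rubric for a teaching reflection (invented for this sketch).
rubric = {
    "developing": "describes the lesson with little analysis or evidence",
    "proficient": "analyses the lesson using evidence of student learning",
    "exemplary": "critically evaluates the lesson linking evidence of "
                 "student learning to theory",
}
best = score_against_rubric(
    "The reflection analyses the lesson using evidence of student learning",
    rubric,
)
# best -> "proficient"
```

A production system would of course go well beyond word overlap, using semantic analysis and models trained on previously marked work, but the overall shape of the task, comparing a response to graded descriptors and returning an on-balance judgement, is the same.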
This project proposes to use machine learning techniques to ‘mine’ these materials and to prototype the development of an application – a teacher’s assistant for rubric assessment, or ‘Tara’ – which will:
- provide feedback to teachers on the structural quality of any rubrics they develop
- provide an on-balance assessment of any piece of student work against a set of rubrics
- provide automated suggestions for feedback to students.
The reliability and validity of the prototype will be tested using measurement theory.
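The first of these functions, feedback on the structural quality of a rubric, could begin with simple automated checks before any machine learning is applied. The sketch below is a minimal, illustrative lint pass; the rubric data structure and the specific checks are assumptions made for this example, not the project's specification.

```python
def lint_rubric(rubric):
    """Flag basic structural problems in a rubric.

    `rubric` maps each criterion name to a list of level descriptors,
    ordered from lowest to highest (an assumed, illustrative structure).
    Returns a list of human-readable warnings.
    """
    warnings = []
    # Check every criterion offers the same number of performance levels.
    level_counts = {c: len(levels) for c, levels in rubric.items()}
    if len(set(level_counts.values())) > 1:
        warnings.append(
            f"criteria have differing numbers of levels: {level_counts}"
        )
    for criterion, levels in rubric.items():
        # Adjacent levels should be distinguishable from one another.
        if len(set(levels)) < len(levels):
            warnings.append(f"'{criterion}' repeats a level descriptor")
        # Very short descriptors rarely clarify the expected standard.
        for descriptor in levels:
            if len(descriptor.split()) < 3:
                warnings.append(
                    f"'{criterion}' has a very short descriptor: "
                    f"'{descriptor}'"
                )
    return warnings

# Illustrative rubric with two deliberate structural flaws.
flawed = {
    "argument": ["no thesis stated", "thesis stated but unsupported",
                 "clear and well-supported thesis"],
    "evidence": ["weak", "weak"],
}
issues = lint_rubric(flawed)
```

Checks like these would give teachers immediate feedback while richer, model-based analysis of descriptor wording is developed.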
The prototype application will initially be used in a formative evaluation context to gauge how students and teachers respond to the application and judge its efficacy.
- Narelle English, Project Lead
- Dr Sandra Milligan
- Pam Robertson
University of Melbourne Learning and Teaching Initiatives grants 2017