Introduction
During the I-HE2023 conference in Istanbul (04 - 06 October 2023), Oli Howson (Open University, UK) presented the paper "Leveraging Jupyter Notebooks in Assessment Development, Completion and Marking to Reduce Cognitive Load and Minimise Errors".
In redesigning a Level 2 Algorithms and Data Structures module, the Open University made the pivotal choice to centre all learning material and assessments on Jupyter Notebooks. This shift let the team reflect on the lessons learned and the benefits gained from an assessment process now wholly contained within the Jupyter Notebook framework.
Assessment development is complex and error-prone, and this often burdens its creators. By harnessing Jupyter Notebooks, however, authors can tap into features that improve efficiency, foster collaboration, and reduce the risk of mistakes.
Good Practices
Howson published a paper on the topic in the Conference Proceedings, which examines several ways in which Jupyter Notebooks improve the assessment development process. He discusses how integrating code, documentation, and visualization within a single environment streamlines the writing and testing of assessments. This not only enhances readability and maintainability but also speeds up error identification and resolution through interactive, iterative development.
Furthermore, the paper explores the collaborative potential of Jupyter Notebooks within the GitHub platform, enabling seamless teamwork among developers and reviewers. It touches upon an internally developed plugin that minimizes discrepancies between versions seen by students and assessors, ensuring synchronicity.
Jupyter Notebooks also play a pivotal role in alleviating cognitive load for students during assessments. Conventional assessments, laden with complex instructions, queries, and various response formats, can overwhelm learners, especially when dealing with digital programming tasks involving multiple document types. Jupyter Notebooks mitigate this overload by offering an interactive, visually engaging assessment environment. Collating questions, code, and responses within a unified digital format allows for easier navigation, reducing the cognitive effort required to comprehend and respond to questions.
Moreover, Jupyter Notebooks provide instantaneous feedback to students, offering clarity and guidance. Through automated testing and visual indicators such as colour-coded responses, they enable swift identification of areas requiring improvement, empowering students to adjust their approach promptly.
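To illustrate the idea, here is a minimal sketch of the kind of self-check cell a notebook-based assessment might include. The `check` helper, the `reverse_list` exercise, and the ANSI colour scheme are all hypothetical examples for illustration; they are not the Open University's actual tooling.

```python
# Hypothetical self-check helper a notebook cell might provide so students
# get immediate, colour-coded feedback on their answers.

def check(description, condition):
    """Print a green PASS or red FAIL line for one test condition."""
    GREEN, RED, RESET = "\033[92m", "\033[91m", "\033[0m"
    colour, label = (GREEN, "PASS") if condition else (RED, "FAIL")
    print(f"{colour}{label}{RESET}: {description}")
    return condition

# A student's answer to a hypothetical exercise:
def reverse_list(items):
    return items[::-1]

# Checks the student can re-run as often as they like:
results = [
    check("reverses a list of numbers", reverse_list([1, 2, 3]) == [3, 2, 1]),
    check("handles an empty list", reverse_list([]) == []),
]
print(f"{sum(results)}/{len(results)} checks passed")
```

Because the checks live in the same document as the question and the student's code, feedback appears in place the moment the cell is run, rather than after a separate submission step.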
Lastly, Jupyter Notebooks streamline the assessment process for markers by providing a single document per student for evaluation, eliminating the need to sift through additional code files. This not only eases cognitive burden but also prevents the hassle of chasing students for omitted files.
Read the full paper