Re-thinking invigilation implementation: a mixed method approach to student perceptions and system usability for digital assessment adoption

Abstract

Due to the coronavirus disease 2019 (COVID-19) pandemic and the sudden shift to online learning, higher education institutions adopted various approaches to reduce cheating in online assessments, mainly involving online live proctoring (OLP). The international assessment integrity regulation trend also applied to a university in South Africa, where accounting lecturers implemented a mobile invigilation application (app) during online off-campus assessments. This study explored student perceptions and the system usability of an invigilation app during digital assessments to develop a framework for enhanced technology adoption. The study used a mixed-method, convergent parallel approach from the functionalist paradigm. This included the qualitative exploration of students’ open-ended online feedback and their responses to the System Usability Scale (SUS) after using an invigilator app on their mobile phones during an assessment. The Technology Acceptance Model (TAM) (Davis, MIS Quarterly 13:319-340, 1989) served as the theoretical foundation for the study. Universal elements of students’ perceived invigilation experiences identified by Marano et al. (Higher Education Quarterly e12506, 2023) were added to the TAM constructs to create a conceptual framework for the exploration. Students’ online written responses were analysed through the constant comparative method (Boeije, Quality and Quantity 36:391-409, 2002) using ATLAS.ti™ software. Findings were presented as data networks based on the codes created and were discussed according to the conceptual framework. The SUS results converged with the qualitative findings to create a novel conceptual model for enhanced invigilation technology implementation. The converged conceptual framework serves as a blueprint for the successful implementation of an invigilation system with average usability: by intentionally preparing students and leveraging learnability, institutions can address individual and technological concerns, perceived usefulness, perceived ease of use and attitude for increased technology adoption.

Introduction

Assessments in higher education have traditionally been conducted in person, under specific invigilated and controlled conditions, to ensure the integrity of assessment results (Marano et al. 2023). While online learning is no new occurrence for several universities worldwide, the sudden shift to online learning and remote assessments during the coronavirus disease 2019 (COVID-19) pandemic forced higher education institutions (HEIs) to overcome numerous challenges in online teaching, learning and assessment (Selwyn et al. 2023). In this regard, institutions had to adopt new measures in online assessment while ensuring the academic integrity of students’ results (Brown et al. 2022). The verification and credentialing of students’ online learning and assessment results required institutions to adapt their practices where professional bodies require invigilated assessments (Hancock et al. 2022). Confirming student identity and completing assessment activities within an observed environment are especially required in subject disciplines such as accounting education, since the international and national professional bodies require verification of students’ individual work, proficiency and competency in specific assessments (Gallagher 2019). To ensure academic integrity, institutions adopted various remote proctoring systems, such as the Invigilator app and Proctorio, to reduce cheating in online assessments and confidently confirm student results (Arnò et al. 2021). Student preferences regarding the institutionally chosen invigilation software should be investigated and considered, since students’ perceived user experience will influence the adoption of the new invigilation technology (Marano et al. 2023). Students’ positive perceived experiences will fast-track institutions’ roll-out of flexible, online proctored assessments. In addition, the system’s usability and the ease of use of its technological functions are important factors contributing to technology adoption. Therefore, this research sought to answer the following question: What elements from students’ perceptions and system usability feedback can be integrated into a conceptual framework to enhance the implementation of mobile invigilation applications in higher education? Answering this question required a pragmatic approach to exploring students’ perceived experiences and their feedback on the system’s usability; the study therefore adopted a mixed-method approach that converged the qualitative and quantitative results to inform the adoption of the new invigilation app during digital assessments.

Literature review

Remote proctoring systems include online live proctoring (OLP), recorded proctoring (RP) and automated artificial intelligence (AI) proctoring, and are often combined with other measures, such as lockdown browser settings (Hussein et al. 2020). Adding lockdown browser functionality to OLP limits access to certain websites and blocks resources on the device from which the assessment is taken (Arnò et al. 2021). OLP was widely implemented during pandemic-era online assessments and essentially aimed to be an online version of real-time, in-person invigilation, where students are monitored remotely, in real time, by human invigilators (Arnò et al. 2021; Hussein et al. 2020). Likewise, RP was used to good effect during COVID-19 online assessments and refers to proctoring that stores audio and video footage of students for post hoc human review (Almutawa 2021). AI proctoring was also used during and after the pandemic and refers to “an environment that utilises automated motion capture technologies to identify suspicious behaviours of the students and flag them for further review” (Marano et al. 2023, p. 4).

Globally, universities’ adoption of online assessment invigilation prompted high-profile protests and petitions from student groups, which momentarily limited the acceptance and implementation of online proctoring in higher education (Selwyn et al. 2023). Using any form of online proctored assessment raised several issues and student pushback globally (Marano et al. 2023). Problems with online proctoring included students being required to install software to confirm their identity, scan their room/environment with a camera and be recorded to monitor their behaviour, which students perceived as an invasion of their privacy (Balash et al. 2021; Woldeab and Brothen 2021). Furthermore, students raised concerns over private data risks when they had to provide personal details to proctoring software (Barrett 2021). Likewise, students reported high levels of anxiety about being flagged as cheating while being proctored (Cheek 2020; Harwell 2020). In addition, proctoring systems often require a high-end computer, a permanent high-speed internet connection, and data or paid Wi-Fi, all of which are challenges students face when taking an online assessment with proctoring software (Eifel Corp 2021; Silverman 2021). Student experiences and feedback on remote invigilation are essential and should be considered when evaluating the successful implementation and further use of an invigilation system (Marano et al. 2023). The different OLP, RP and AI proctoring systems also play a significant role in students’ experience, and these systems need in-depth scrutiny to establish best practices and decide the way forward at an institution (Silverman 2021).

Although the global student pushback proved fleeting, online proctoring is anticipated to become the new normal post-pandemic (Chen 2023). In addition to the shift to increased invigilated online assessments, the post-pandemic, digitally transformed higher education landscape must equally deal with a growing student-centred approach to flexible pedagogy and assessment (Wanner et al. 2024). The post-pandemic shift focuses, inter alia, on providing flexible assessments, which offer students the choice to complete online assessments at their preferred time and location (Kessels et al. 2024). Regardless of the pandemic and post-pandemic influences and the need for tailored online assessments, student-centred learning and assessment practices in higher education predominantly remain a lecturer-focused endeavour (Wanner et al. 2024). This lecturer dominance means that students’ perceived experiences are excluded from the choice of invigilation software and from decision-making (Wanner et al. 2024). Since numerous online invigilation software packages are available for flexible off-campus assessments, re-thinking inclusivity in assessment taking is part of the post-pandemic narrative (Nieminen 2024), where the student voice on the system’s usability, perceived ease of use and perceived usefulness is essential. In this regard, Huber et al. (2024) posit that students’ perceived experiences and feedback should be included in the evaluation of, and decision-making about, online assessment and invigilation software.

Technology adoption

The technology acceptance model (TAM) by Davis (1989) is often used to explain individuals’ technology acceptance. This most frequently used model is the theoretical foundation for this study, providing an integrated approach to examining the technology acceptance research phenomenon (Nasir and Yurder 2015). The TAM combines concepts and perspectives to predict individuals’ tendency to accept innovative technology (Solomon 2017) and, in this study, was used to investigate students’ mobile invigilation app usage and experiences. The TAM is depicted in Fig. 1.

Fig. 1 Technology acceptance model (Davis 1989)

The TAM indicates that the two main constructs of perceived usefulness (PU) and perceived ease of use (PEoU) determine users’ attitude towards using (Att) and their behavioural intention to use (BI) technology (Davis 1989; Venkatesh 2003). Accordingly, PU refers to the benefits that users believe they could derive from using a technological device, while PEoU describes the degree to which users perceive that using the technology would be easy or would not require much effort (Davis et al. 1989; Davis 1993). Attitude towards using technology is described as an individual’s positive or negative appraisal of using the technology (Choi and Kim 2016). Several researchers suggest that a positive attitude will strengthen users’ belief in technology use (Gao et al. 2016; Kubacki 2013). Therefore, the more favourable an individual’s attitude towards a technology, the higher their behavioural intention to use it (Lu et al. 2003). Behavioural intention, described as the likelihood of an individual using a given technology, is an important aspect of onboarding users with new technology integrated into work or study contexts (Choi and Kim 2016). As such, the operational level of perceived ease of use serves as a vital factor when users consider self-, peer- and collaborative-assessment types. The TAM provides a solid theoretical foundation for the study and permits further in-depth exploration by integrating different constructs linked to the TAM, as discussed in the conceptual framework section.
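For readers who prefer a compact notation, the causal chain described above is often expressed as a set of linear structural relations (a generic sketch of the standard TAM formulation, not estimates from this study; the β coefficients would be estimated from survey data):

$$\begin{aligned}\mathrm{PU}&=\beta_{1}\,\mathrm{PEoU}+\varepsilon_{1}\\ \mathrm{Att}&=\beta_{2}\,\mathrm{PU}+\beta_{3}\,\mathrm{PEoU}+\varepsilon_{2}\\ \mathrm{BI}&=\beta_{4}\,\mathrm{Att}+\beta_{5}\,\mathrm{PU}+\varepsilon_{3}\end{aligned}$$

Here PEoU influences PU directly, both shape attitude, and attitude together with PU drives behavioural intention.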

Conceptual framework

Marano et al. (2023) conducted a pragmatic scoping review of 21 papers to evaluate elements of students’ online proctoring experiences. Their review focused on developing guiding principles for invigilation during online assessments for higher education providers. Marano et al.’s (2023) review synthesised students’ perceived online invigilation experiences into specific elements, as depicted in Fig. 2.

Fig. 2 A visual representation of the elements of students’ perceived online invigilation experiences [Author compilation based on the review findings by Marano et al. (2023)]

The students’ perceived positive experience elements in using online invigilation refer to their awareness that online proctoring is as effective as in-person invigilation in deterring cheating in assessments (Alessio and Messinger 2021; Duncan and Joyner 2022; Njuguna 2022; Reedy et al. 2021). Students perceive online invigilation as facilitating their ability to complete assessments in the comfort of their own environment, which provides an uninterrupted space at a specific time (Coniam et al. 2021). Students enjoy the flexibility to complete their assessments with relative ease and comfort within familiar surroundings, eliminating large and noisy gatherings in exam halls (Balash et al. 2021; Muckle et al. 2022). The choice exercised by students in completing assessments in a comfortable and familiar remote environment with online invigilation reduced the anxiety associated with assessment taking (Conijn et al. 2022). Students with learning disabilities such as attention deficit hyperactivity disorder (ADHD), as well as those dealing with mental health conditions, perceive online invigilated assessments as comfortable and less stressful, reducing their anxiety (Duncan and Joyner 2022). The last positive perceived experience element, labelled invigilated online assessments, refers to students perceiving that cheating is easier in the absence of online invigilation, although only a few students in the reviewed studies felt inclined to cheat under uninvigilated online assessment conditions (Alessio et al. 2018; Duncan and Joyner 2022).

Negative perceived elements, spanning various disciplines, universities and student cohorts, are also evident in the Marano et al. (2023) review. Students’ perceived challenges included concerns about user privacy, technical issues and personal concerns. In this regard, students’ concerns over user privacy were multi-layered and linked to the requirement of providing personal information to third parties (Bergmans et al. 2021). Students perceive online invigilation as a mandatory surveillance requirement, and the scanning of their personal environment invades their privacy (Balash et al. 2021). Students’ technological concerns were linked to difficulty with installation, the compatibility of their devices with invigilation software and the lack of real-time technical support (Njuguna 2022). Specifically, students identified video and audio difficulties, as well as connectivity and affordable, stable internet, as their primary technology challenges (Arnò et al. 2021; Njuguna 2022). Students’ perceived concerns about online invigilation are highly individual; user behaviour, socio-economic situations and subconscious cognitive processing about being digitally invigilated led to weaker performance in online assessments for some students (Almutawa 2021). Similarly, students with low computer literacy or little experience in digital assessment taking also raised individual concerns about online invigilation (Conijn et al. 2022).

The different elements of perceived online invigilation experiences identified by Marano et al. (2023), as summarised in the visual representation (Fig. 2), were linked to the TAM constructs (Fig. 1). Fig. 3 therefore portrays the combined conceptual framework, integrating the TAM constructs (Davis 1989) with the perceived online invigilation elements from Marano et al. (2023). This study used this conceptual framework to explore students’ perceived experiences of using an online invigilation app on their mobile phones, to discover in-depth information on the likelihood of their adopting the new technology.

Fig. 3 A conceptual framework of students’ perceived experiences of online invigilation [Author compilation based on the review findings by Marano et al. (2023), combined with the TAM (Davis 1989)]

The study’s purpose originated from the institution’s intention to implement invigilation software during online assessments post-pandemic and its deliberate decision to include the student voice before large-scale implementation.

The Invigilator mobile app

The Invigilator is a South African-developed mobile application, downloadable from the App Store, Google Play and AppGallery (Eifel Corp 2021). The cell phone app, designed specifically for the education sector, aims to assist examiners in mitigating online assessment risks (Eifel Corp 2021). The app allows examiners to choose from various photo authentication and speech recording tools to match the level of security required for a specific assessment (Eifel Corp 2021). The app uses AI to authenticate photos and flag recordings containing speech, addressing the issue of students attempting an assessment on behalf of someone else or obtaining assistance from fellow students during an assessment (Eifel Corp 2021). The app’s benefits include cost-effectiveness, easy integration into institutions’ learning management systems (LMS), offline access, light data use, easy monitoring and limited battery drain on the cell phone (Eifel Corp 2021). The app’s features include selfie photos, audio recordings, taking additional photos and using a verification code. When registering on the app, students are instructed to take a selfie photo, which becomes the master or default photo. Other photos taken during assessments are verified against the master photo and students’ details on the institution’s database. The AI facial recognition feature verifies the photos, and the lecturer checks several selfies as part of the verification process. This verification is added to the assessment via an autogenerated quick response (QR) code (Eifel Corp 2021). The app also has an audio-recording feature, whereby the app records a pre-determined number of audio clips of the student in his/her setting at random intervals during the assessment. The audio recordings are analysed to check whether any talking was detected during the assessment, and lecturers can afterwards access the recordings flagged by the system (Eifel Corp 2021; Shange 2023). In addition, lecturers can request a certain number of photos of specific items throughout the assessment, such as the examination sheet, identity document (ID) or student card (Eifel Corp 2021). The last feature is the verification code, which a student can be asked to enter in the app; the entry is checked to detect any significant time delay (Eifel Corp 2021). The generic features are adapted and implemented in various ways, depending on the required security level. In this regard, the process that students from the institution had to follow to use the Invigilator app is discussed below.
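To illustrate the kind of time-delay check that the verification-code feature implies, consider the following minimal sketch (a hypothetical illustration only: the function name, threshold and data shapes are our assumptions, not the vendor’s documented implementation):

from datetime import datetime, timedelta

# Illustrative threshold; the Invigilator app's actual tolerance is not published here.
MAX_DELAY = timedelta(minutes=5)

def flag_delayed_entry(code_released_at: datetime, code_entered_at: datetime) -> bool:
    """Flag a student's verification-code entry for review when the gap
    between the code's release and its entry in the app exceeds the threshold."""
    return (code_entered_at - code_released_at) > MAX_DELAY

# Usage: a nine-minute gap between release and entry would be flagged.
released = datetime(2023, 5, 4, 10, 0)
entered = datetime(2023, 5, 4, 10, 9)
print(flag_delayed_entry(released, entered))  # True

A flagged entry would simply be routed to the lecturer for human review, mirroring how the app’s other flags (photos, audio) are described above.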

Process followed using the Invigilator app

The lecturers approached the implementation of the Invigilator app pragmatically and scaffolded students’ exposure to the app so that they could become accustomed to it. In this regard, communication on the future implementation of the app during assessments commenced before the semester module started. During the first week of the module, students were instructed to download and install the app and to use the website’s online demo videos, support centre chat function, downloadable frequently asked questions (FAQ) and student guide. The website information prepared students to use the app effectively on the day of the assessment, with instructions on what would be needed on the day, how to manage incoming messages, how to handle mistakes such as accidentally exiting the app during the assessment, and many more real-life scenarios. Real-time access to support safeguarded students from struggling without technical assistance. During the second week, lecturers administered a mock test in which students had to use the Invigilator app. Lecturers factored additional ‘technology time’ into the assessment time allocation, since students had to access the assessment through the app, take a selfie for verification, take another selfie at a specific time, and scan (photograph) their answer sheets and upload them for submission via the app. The app was also programmed to take two to three randomly timed audio recordings during the assessment.

During week three of the module, students performed an accounting assessment using the Invigilator app; however, lecturers could discard the results if a student did not perform well in the assessment. This option was intended to mitigate student pushback against new invigilation software, since students might feel insufficiently prepared for its use during a high-stakes assessment. Students therefore received a second assessment opportunity while using the app; it was not deemed a high-stakes assessment, and students had ample opportunities to become accustomed to the app before the actual high-stakes assessment. Through the mock test in week 2 and the safety-net accounting test in week 3, students prepared themselves, their cell phones and their ability to use the app for the high-stakes assessment scheduled in week 4. Students received ample time to engage with their lecturers, who scheduled additional Zoom sessions with students who experienced challenges in using the app effectively during the first two practice opportunities. During this period, all student problems were ironed out, and students performed a high-stakes assessment in week 4 of the 10-week module. Research highlights specific issues relating to using the Invigilator app, and careful consideration should therefore be given to how the technology affects student experience, attitude and adoption (Shange 2023). In this regard, not only students’ perceived experiences but also their perceived system usability is of importance.

Method

Research paradigm

This study was conducted from the functionalist paradigm of social theory (Burrell and Morgan 1979). This pragmatic paradigm aims to develop practical solutions to real-world scenarios and views the world from a multi-dimensional approach (Feilzer 2010). The functionalist quadrant approach of this study aimed to understand and describe the social phenomenon of students’ perceptions of use, and the usability of an invigilation app during a digital assessment. The study sought to identify elements from students’ perceptions and system usability feedback and to integrate them into a conceptual framework to enhance the implementation of mobile invigilation applications in higher education. The study therefore adopted a mixed-method, convergent parallel approach, depicted in the flowchart in Fig. 4, to identify the elements for inclusion in a novel conceptual framework for successful implementation. The nature of the social phenomenon calls for varied inquiry and, therefore, includes quantitative and qualitative explorations. The mixed-method steps followed were stating the research objectives, data collection, analysis, merging of results and interpretation of the merged results for the quantitative and qualitative parts. The qualitative and quantitative data collection occurred simultaneously, while the analyses were performed separately (Creswell and Plano Clark 2011).

Fig. 4 Flowchart of the mixed-method convergent parallel approach, adapted from Creswell and Plano Clark (2011)

This design utilised the concurrent creation of quantitative and qualitative datasets that inform each other, where the merging of results includes interpreting the separate and combined results (Creswell and Plano Clark 2011). The qualitative method explored students’ open-ended online answers, portraying their perceptions of the ease of use and usefulness of the Invigilator app during online accounting assessments. The quantitative method evaluated the usability of the invigilation app using the System Usability Scale, and these results were merged and interpreted with the qualitative results to develop a converged framework.

Measurement instrument

A Google Forms document included a landing page informing students about the purpose of the study and stating that participation was voluntary and anonymous. The researcher’s details were also provided in case respondents had any questions. The Google Forms document consisted of two sections. Section A comprised biographic information such as the students’ campus, gender and ethnicity (for internal use), as well as the ten questions of the System Usability Scale (SUS) (Brooke 1996). The SUS is a popular instrument for assessing perceived usability and has high reliability, with a Cronbach alpha value of 0.90 (Lewis 2018; Peres et al. 2013). The SUS has 10 items; the odd-numbered items are positively worded, whereas the even-numbered items are negatively worded and reverse scored (Brooke 1996). Responses were captured on a Likert scale where 1 = strongly disagree and 5 = strongly agree.

In section B, the Google Forms document had four open-ended questions on (i) students’ general perceptions of using the Invigilator app, (ii) perceived benefits, (iii) challenges or negative aspects and preparation, and (iv) suggestions for future implementation. This paper explored students’ responses to these open-ended questions, and the quantitative analysis was converged with the qualitative findings to inform the system’s usability and students’ perceptions of adopting the invigilation app during digital assessments.

Data collection

The target population comprised full-time registered students in an accounting programme within first-, second- and third-year accounting modules. The Bachelor of Commerce (BCom) students enrolled in the accounting modules comprised approximately 1,000 students across the university’s three campuses within the Faculty of (omitted for author anonymity). The link to the online Google Forms survey was posted on the LMS module sites of the accounting modules, and students were informed that participation was voluntary and anonymous. No human contact occurred during the data collection process. Students who voluntarily clicked on the link were taken to the Google Forms landing page, which explained the purpose of the research, provided background to their inclusion and stated that they could withdraw at any time without any consequences. Students ticked a box to indicate that they understood the purpose of the research and to provide their informed consent. The researcher received the anonymous results of the students who completed and submitted the Google Forms, containing the demographic information, System Usability Scale responses and open-ended answers. The researcher had exclusive access to the results, which were password-protected on the researcher’s computer. The research study adhered to the ethical standards of academic research of the university’s Ethics Committee (ethics clearance number:).

Data analysis

Qualitative data analysis

The constructs from the TAM referring to PU, PEoU and Att were labelled as overarching deductive themes, and the student perceptions identified by Marano et al. (2023) were used as deductive categories. Student responses to the four open-ended questions on the Google Forms were inserted into ATLAS.ti™ computer-aided qualitative data analysis software (CAQDAS). The 128 student responses were analysed by applying the constant comparative method (CCM), as described by Boeije (2002), which involves the creation of meaning units, each indicating a specific idea that can stand on its own. During the CCM, each meaning unit is analysed and compared to new and existing data, with meaning units constantly added and categorised (Boeije 2002). The CCM implies a deductive-inductive approach in which students’ open-ended responses were analysed during the coding process (Boeije 2002). In this regard, the inductively created meaning units were categorised under the conceptual framework’s deductive categories. During the coding process, an inductive category of preparation performed was created, with numerous new codes identified and linked to this category. To ensure scientific rigour, the researcher developed a codebook during the analysis process according to the procedures explained by DeCuir-Gunby et al. (2011). Developing a codebook establishes coding procedures that can be replicated and used to consolidate and validate the coding process. During the post hoc co-coding process (Creswell and Clark 2007), a qualitative research expert scrutinised the codebook containing the inductive codes, each with a short description or definition and an example from the student responses. The researchers independently coded a section of the data using the codebook and convened to compare coding results (Hemmler et al. 2022). During the collaborative review, the researchers adjusted and refined the codebook, clarifying definitions, merging similar codes or adding new ones. Throughout the post hoc co-coding process, the researchers reached agreement regarding the application of the codes within the data and verified the consistency and rigour of the coding (Creswell and Clark 2007).
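To make the codebook structure concrete, the sketch below shows how one entry could be represented (a hypothetical illustration; the study’s actual codes are summarised in Table 3, and the example quote is drawn from the student responses reported in the results):

# Illustrative codebook entry: code name -> definition and exemplar quote.
# The structure mirrors the described codebook (code, short definition, example);
# the single entry shown is illustrative, not the full study codebook.
codebook = {
    "convenience and comfort": {
        "category": "convenience",
        "theme": "perceived usefulness (PU)",
        "definition": "Student describes completing the assessment comfortably "
                      "in a self-chosen environment.",
        "example_quote": "it brought the exam room to our own house",
    },
}
print(codebook["convenience and comfort"]["definition"])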

Data networks present the qualitative content analysis within the qualitative results section. The data networks were created in ATLAS.ti™ and are represented as figures to portray the individual and collective student perceptions, which are considered within the text discussion. The qualitative data were connected to the different categories and to the TAM constructs as themes.

Quantitative analysis

To calculate the SUS score, first sum the score contributions from each item. Each item’s score contribution ranges from 0 to 4. For items 1, 3, 5, 7 and 9, the score contribution is the scale position minus 1. For items 2, 4, 6, 8 and 10, the contribution is 5 minus the scale position. Multiply the sum of the contributions by 2.5 to obtain the overall SUS value (Brooke 1996). The Curved Grading Scale (CGS) by Sauro and Lewis (2016), depicted in Table 1, was used to interpret the SUS scores. The averages and standard deviation values of the ten SUS items are provided in the quantitative results section.
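The scoring rules above translate directly into a few lines of code. The sketch below is a minimal illustration of Brooke’s (1996) procedure (the sample answers are invented for demonstration and are not data from this study):

from statistics import mean

def sus_score(responses):
    """Compute a SUS score (0-100) from one respondent's ten raw Likert
    answers (1 = strongly disagree ... 5 = strongly agree). Odd-numbered
    items contribute (answer - 1); even-numbered items are reverse
    scored and contribute (5 - answer)."""
    assert len(responses) == 10
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # index 0 holds item 1 (odd)
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

# Two invented respondents, scored individually and then averaged.
sample = [
    [4, 2, 4, 2, 4, 2, 5, 2, 4, 2],
    [2, 4, 3, 3, 3, 3, 4, 2, 3, 4],
]
print([sus_score(r) for r in sample])       # [77.5, 47.5]
print(mean(sus_score(r) for r in sample))   # 62.5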

Table 1 The Sauro-Lewis Curved Grading Scale

Demographic information

Table 2 indicates that 128 of the approximately 1,000 students completed the Google Forms open-ended questions, with respondent numbers distributed according to campus size. In this regard, 61 (47.66%) students were from the B Campus (largest in student numbers), 36 (28.13%) from the A Campus (second largest) and 31 (24.21%) from the C Campus, which has the lowest student numbers. The majority of respondents who answered the open-ended Google Forms questions were female (82; 64.06%), with 40 (31.25%) male students and 6 (4.69%) choosing not to disclose their gender.

Table 2 Respondents’ demographic information

Results and discussion

To answer the research question, the results of this study are presented in five sections, in accordance with the last two steps of the mixed-method flow diagram depicted in Fig. 4. The five sections cover the qualitative results, the merging of the qualitative results, the quantitative results, the merging of the quantitative results, and the interpretation of the merged results. Each section’s contribution to answering the research question is explained within the specific sections below.

Qualitative results

The qualitative results section portrays students’ perceived experiences of using the mobile invigilation app, with the conceptual framework as a structural guide to identify elements. The results are presented according to the conceptual framework, with the constructs from the TAM (Davis 1989) as deductive themes and the elements from Marano et al. (2023) as categories. Table 3 summarises the deductive themes and categories together with the inductively created codes. One inductive category, preparation performed, was created and linked to the attitude theme, along with the inductive codes created for this category. Each theme is discussed in detail below, with reference to the categories and codes depicted in the different data networks created using ATLAS.ti™.

Table 3 Themes, categories and codes created (deductive and inductive)

Theme 1: perceived usefulness (PU)

The first three categories within the perceived usefulness theme are clustered, since many of their codes are interlinked. The codes created in the three categories of convenience, increased comfort and reduced anxiety are indicated in the data network (Fig. 5). Various codes were created and linked to the categories, as indicated by the green codes (darker code blocks) connected to the white categories in Fig. 5. The fourth category, efficacy in detecting cheating, is discussed thereafter.

Fig. 5 Data network on the convenience, increased comfort and reduced anxiety categories using the Invigilator app

Convenience, increased comfort and reduced anxiety categories

The convenience category is linked to the following codes: own environment, independent location, convenience and comfort, and complete assessment despite illness. Students’ responses under the convenience and comfort code included: “a person gets to write [sic] in the comfort of their own space”, “it brought the exam room to our own house” and “it makes it easier to take a test at home”. The independent location code captured students’ positive perceptions through quotations such as: “especially the fact that I can write from anywhere in the world and it is good as it also cuts down on travel costs”.

The complete assessment despite illness code indicates the benefit to at least two students: “not having to be in a contact space, being able to write a test from home” and “due to an illness, to write an assessment when I am sick and to do it in my own space”. Students indicated that writing in their own environment (code) was beneficial since they felt they “don't get disturbed” and “I felt most comfortable in my own room, and so did other students”.

The increased comfort category has a number of codes linked to it: own environment, cost-efficient, not disturbed as in sit-down exam, efficient time – not travelling, and being alone. Students indicated that being in their own environment increased their comfort and cost efficiency. These experiences are portrayed by the following student quotations: “I don’t need to travel to campus and is therefore cost efficient” and “it [the Invigilator app] is a better alternative than Zoom, since it costs less data”. The increased comfort of writing from home is also linked to not being disturbed as in a sit-down exam: “I don’t get disturbed, as during the usual sit-down, written exam”. Online assessments, made possible by the use of the Invigilator app, also saved students time, since they did not need to travel to campus. In addition, being alone benefited the students, since they felt “it is quieter in my own environment, than writing in an exam venue”. The comfort and convenience experienced by students are evident from the above-mentioned quotations and codes, which indicate that they perceived specific positive aspects of completing online assessments using the Invigilator app. This study's findings align with Coniam et al. (2021), who posit that online invigilation enables users to complete assessments uninterrupted and in the comfort of their own environment.

Similar to the study by Conijn et al. (2022), students from this study’s institution indicated that they, too, experienced reduced anxiety, evident through the reduced anxiety, controlled environment and being alone codes linked to the reduced anxiety category. In this regard, quotations such as “there is not a lot of people around you flipping pages and making you more anxious about the test”, “it is just you and your test and a friendly app that pops up now and then”, “the pressure is not as much as in class” and “I was able to track time properly and I was more at ease so I did not rush into my assessments” demonstrate lower anxiety with home exams using the Invigilator app.

Efficacy in detecting cheating category

The efficacy in detecting cheating category revealed several linked codes, as depicted in the data network in Fig. 6. The codes positively associated with this category include assessment experience close to an exam, efficiency in detecting cheating, ethical manner to test knowledge, easy invigilation, location tracing, efficacy in preventing cheating, and voice recording ensuring no cheating.

Fig. 6 Data network on the efficacy in detecting cheating category using the Invigilator app

In contrast to the positive experiences within this category and theme, a code was created indicating that honest students feel the app provides no security. In this regard, a student responded: “honestly, from an honest student, I saw no benefit (in using the app)”. The assessment experience close to an exam code, which reflected the positive experience theme, is substantiated by the following responses: “It made the remote writing feel more like exam conditions”, “gives you an exam sit-down feeling” and “being able to write assessments at home during COVID but also having the environment of a testing environment”. Students’ responses on the efficiency in detecting cheating included: “The app tracks time and limits possible misconduct such as copying or assessments taken in groups” and “your location is tracked, so you cannot write with another person”. Students felt that it was an ethical manner to test knowledge: “This improves the ethics of a test”, “it ensures test integrity”, “this is an ethical testing of knowledge way”. In using the app, students also felt “there is no need for other external invigilation when using this app”, since it is an easy way to invigilate: “This makes invigilation easier without the worry of losing connection as in Zoom or another application”. (When using Zoom or similar software to invigilate an assessment, students need a consistent internet connection, and the impact of losing connectivity is severe. When using the Invigilator app, students feel more at ease, since they only need connectivity when they start and complete the assessment.)

The location tracking function of the app was mentioned: “the app locates students and their proximity” and “your location is tracked, so you cannot write with another person”. The experience of the location tracking and other functionalities indicates that students perceive the app as efficacious in preventing cheating. In this regard, the following statements substantiate their perceptions: “integrity of assessments is more preserved”, “it can help prevent cheating”, “cheating can be more limited”, “it helps ensure students are not dishonest during tests” and “it helped a lot with integrity and honesty”. In addition, the voice recording ensured no cheating: “the voice recording ensured you are writing alone”, “the voice recording could pick up if you are talking to someone”. This study’s findings on students’ perceptions of the Invigilator app’s efficacy in reducing cheating are in line with previous research, which states that remote proctoring appears to reduce the occurrence of cheating (Adama et al. 2023; Newman 2022). A possible reason for this could be that students feel they are under surveillance and, therefore, adapt their behaviour and do not cheat (Lee and Fanguy 2022).

Likewise, proximity identification, as measured by the Invigilator app, strengthens student perceptions that they should not cheat. This correlates with the findings of Dendir and Maxwell (2020), indicating that proctoring measures reduced misconduct. Behaviour monitoring certainly influences students, albeit linked to their perceptions of being monitored and the chances of being caught out. However, contrasting research findings oppose this notion; Newman (2022) posits that it is unclear whether invigilation or proctoring systems reduce misconduct. The primary perception of students within this study was that the Invigilator app enhanced the lecturer’s efficacy in detecting cheating. The findings of this study lean towards the notion that more research is needed to determine the effectiveness of remote proctoring. Students’ responses linked to the PU theme reiterated positive experiences but also extended to challenges experienced, which are discussed in the next section within the second theme.

Theme 2: perceived ease of use

Two categories, technological issues and individual concerns, are classified within this theme. The data networks (Figs. 7 and 8) indicate the codes created within these categories, which are discussed below.

Fig. 7 Data network on the technological issues category using the Invigilator app

Fig. 8 Data network on the individual concerns category using the Invigilator app

Technological issues category

Technological issues that students experienced included the following: picture quality, scanned pages, unclear documents uploaded, network connectivity, submission problems, not enough time to upload scripts, stressful experiences and other technical issues. The first challenge that students reported concerned picture quality. Students were instructed to take a selfie photo at the onset of the assessment and at certain points within it when the app prompted them to do so. In this regard, some responded that performing this task was hard since “when taking photos, it doesn’t give you a flash option, so you have to adjust the light in the room, because otherwise the quality is poor”, and “the picture quality was a bit off sometimes”.

In addition, during the initial use of the app early in the semester, students had to take pictures of their written work and then submit them via the app. This, too, posed a problem, as indicated by their responses: “The photos of the assessment were hard to read sometimes and should rather be a scanner”, “it sometimes froze and didn’t allow me to take good-quality photos of my work” and “Wi-Fi problems and reliability of accepting the photos”. From these quotations, it is evident that other technical issues arose, as well as stressful situations. The poor picture quality resulted in unclear documents being uploaded, with consequences for students’ marks and stress levels, as indicated by the responses: “uploading clear documents was a challenge and stressed me” and “the unclear document had to be uploaded onto eFundi, with not a lot of time, making it stressful”. The app developers received this feedback early in the semester via the lecturers and the university representative and adapted the app by adding a scanning function to overcome poor picture quality. At first, the scan quality was still not satisfactory, as students responded: “the quality of the scanned pages isn’t that good”; however, the app developers improved the scan functionality, and thereafter students reported other challenges: “network problems while submitting the scanned pages” and “uploading the scanned documents in time since the timer runs out if you take too long”. Network connectivity, submission problems and technical issues were interlinked problems that students experienced, as they responded: “network issues while using the app and then not submitting in time”, “having internet problems lead to unsuccessfully completing the assessment” and “the app loose connectivity and switches off”. Likewise, students reported not having enough time to upload their scripts; considering the picture and scanned-document quality, network connectivity and other technical issues, it is clear that students could have a stressful experience.

Individual concerns category

The second category within the PEoU theme comprises individual concerns. Within this category, several codes were created, as indicated by the data network’s red blocks (darker code blocks) in Fig. 8. In this regard, the codes indicated that students experienced the frequent identification verification required as a distraction during assessments, resulting in the codes concentration loss = loss in marks and negative experience. Likewise, students indicated that the selfies they had to take as part of the frequent identification verification process wasted time. In this regard, they perceived the process as time consuming and their time management during the assessment as interrupted.

These negative experiences are substantiated by the following quotes within the different codes: “It was very distracting while writing a test, you just began to answer a question then the app goes off and you need to take a photo”, “you frequently have to identify yourself”, “the constant interruptions stopped my momentum, focus and chain of thoughts” and “constant taking of selfies mad [sic] for a horrible experience as you lose focus and waste time taking photos, instead of writing, which means less time to write”. One student communicated a perceived loss in marks due to the frequent interruptions: “with accounting and tax assessments, while doing calculations, I had to take pictures and I lost my thought process and lost at least 2 marks”. In addition to the interruptions, students also indicated that the process they had to follow involved performing many tasks, which added to their negative experience of using the app.

Their responses included: “I was nervous to try it for the first time because of all the procedures that need to take place, like taking photos of my student card and taking selfies at intervals”, as well as “using the app requires a lot of task activities that take away the focus and time”. In contrast to the majority of students reporting a negative experience, a number of students indicated that they had no challenges in using the app: “I had no challenge”, “the app did what it was supposed to do” and “no challenges”. The findings of this study are in line with recent research on the Invigilator app at another South African institution, which indicated various challenges experienced by students (Shange 2023). In this regard, this study reports findings similar to those of Shange (2023): the constant taking of selfies was distracting, network connection issues arose, and students experienced challenges in uploading their answer sheets. Emotional responses of stress and frustration due to challenges in uploading scripts are evident in this study and in previous research (Rossade et al. 2022; Shange 2023). The students experienced numerous challenges in using the Invigilator app, resulting in negative experiences, which influenced their perceived ease of use of the app.

Theme 3: attitude

Two categories were linked to the attitude construct of the TAM, namely privacy concerns and preparation performed. However, none of the students reported privacy concerns in the open-ended questions on using the Invigilator app, and no codes were created. Therefore, this category is not included in any of the data networks. The inductively created preparation performed category within this theme reflects students’ experience of their lecturers’ efforts and activities to prepare them to use the app during an assessment.

Preparation to use the Invigilator app category

The implementation of the Invigilator app stretched across first-, second- and third-year students within the accounting programme and included various modules. The different lecturers implemented various strategies, which resulted in the varied responses indicated in the data network (Fig. 9).

Fig. 9 Data network on the preparation performed category: lecturers’ preparation of students to use the Invigilator app

The blue-coloured codes (darker code blocks) in the data network represent the different methods followed to prepare students to use the Invigilator app. Students’ responses to the open-ended question on their preparation indicated that YouTube videos, Zoom sessions, demo sessions, practice assessments, instructions provided, documents, slides, personal attention and instructional manuals were used throughout the process. The process aimed to prepare students gradually to use the app during a high-stakes assessment. Students responded: “We were referred to YouTube videos and then through YouTube video had to do a demo”, “we had a Zoom session and they sent out documents to explain how it works”, “she gave us an instruction manual on how to use it and let us practice how to use it in mock tests” and “we were given slides and preparations”. Students communicated that “some lecturers made sure everybody knew how the app worked beforehand, and everybody was comfortable with the using of the app”. Most of the responses indicate that students perceived that they received good preparation: “they took us through a short setup session on how to use and where to find, everything was done very well”, “(we were) very prepared, the lecture took us through the invigilators before a formal assessment”, and “we were prepared by knowing to always have your student card with you while writing and we always got the QR code/pin a day before the assessment. We were told how the app works and that it requires pictures during the test and that gave clarity of what to expect”. However, in contrast, a few students responded that they received poor preparation: “Most lectures didn’t prepare us, they just expected us to know how to use it”, “not sufficiently enough, because they had also just heard about the app”, “We were referred to YouTube videos and then through YouTube video had to do a demo, but with lecturers and preparation it was very poorly executed”, and “our lecturers didn’t prepare us well as they also didn’t understand how to use it”.

The findings of this study indicate that lecturers need to follow a specific process, involving numerous sessions and different methods, to prepare students to use invigilation software. In this regard, lecturers themselves need high-quality training in using the app, as substantiated by Frankl and Bitter (2012). Likewise, this study’s findings indicate that, although various methods and instructions were provided to prepare students, numerous students still perceived the training as inadequate, which is in line with students’ experiences in similar Invigilator app research (Shange 2023). More technical support is needed for struggling students, which corresponds with Cramp et al. (2019), who posit that students should receive adequate technical support to minimise their anxiety and to prepare them to use the invigilation software in high-stakes assessments.

Merging qualitative results

The qualitative results offer a novel perspective and a comprehensive understanding of student perceptions of a mobile invigilation app during online assessments. The identified elements extend the international discourse on invigilation implementation, since the technology acceptance constructs of perceived usefulness, perceived ease of use and attitude (TAM; Davis 1989) were probed in depth according to elements identified from a cohort of international studies (Marano et al. 2023). The elements identified as influencing students’ perceived usefulness, perceived ease of use and attitude towards the invigilation software in this study allow the qualitative results to be further explored and merged into the initial conceptual framework based on the TAM. In this regard, the merging of the qualitative results contributes to and expands the TAM as an existing theoretical framework for technology adoption within this specific mobile invigilation application context (Fig. 10).

Fig. 10 Qualitative consolidated conceptual framework for adopting an invigilation app [Authors’ expansion of the conceptual model after the empirical exploration]

This visual representation indicates the empirical contribution of the mixed-method merging step, according to the conceptual framework, as this study’s results expanded on the globally recognised TAM to decipher and explain the implications of an invigilation app’s perceived ease of use and perceived usefulness for possible adoption. The results indicate that four primary elements influence students’ PU of invigilation software: convenience, increased comfort, reduced anxiety and efficacy in detecting cheating. The details regarding individual and technical concerns identified in this study reveal the perceived practical challenges at the individual level. These technical and individual challenges influence students’ PEoU. As PEoU influences PU, the results indicate that the technical and individual student concerns regarding the invigilation app should be addressed to mitigate the sense of difficulty in using the invigilation technology. If institutions intend to implement invigilation software, they need to ensure that students’ technical difficulties are overcome, which will enhance the PEoU experience and, in turn, positively influence the PU perception of the software. The technological challenges students reported warrant further investigation. In this regard, students’ perceived system usability was evaluated to determine whether their technological challenges with the system were detrimental to their perceived ease of use, which would negatively affect adoption. The system usability evaluation was based on the System Usability Scale feedback and is discussed in the next section to clarify the negatively perceived usability of the system.

Quantitative results

The quantitative results section addresses the system usability of the invigilation app during a digital assessment as part of the quantitative analysis of the mixed-method study. The averages and standard deviations (SD) of the individual SUS questions are portrayed in Fig. 11 (where the scores of the even-numbered items were reversed).

Fig. 11 Averages and standard deviations of the responses to the System Usability Scale questions

The average score for question 1 (I think that I would like to use this invigilator app frequently) was 2.4 (SD = 1.29), indicating that most students strongly disagreed or disagreed and did not want to continue using this app. The 3.3 average score (SD = 1.252) for question 2 (I found the invigilator app unnecessarily complex) is a reversed score, indicating that students did not find the app particularly complex to use. Question 3’s average of 3.6 (SD = 1.109) indicates that the students perceived the app as easy to use (Q3 = I thought the invigilator app was easy to use). The 3.3 average score (SD = 1.068) for question 4 (I think that I would need the support of a technical person) is reverse scored, indicating that students did not perceive a need for assistance from a technical person to use the app. The average score for question 5 (I found the various functions in this invigilator app were well integrated) was 3.2 (SD = 1.118), indicating modest agreement. The 2.8 average score (SD = 1.266) for question 6 (I thought there was too much inconsistency in this invigilator app) is a reversed score item, indicating that perceptions were balanced regarding system inconsistencies. Question 7’s average of 3.8 (SD = 1.048) indicates that the students perceived the app as easy to learn (Q7 = I would imagine that most people would learn to use this invigilator app very quickly). The 2.1 average score (SD = 1.087) for question 8 (I found the invigilator app very cumbersome to use) is a reversed score item, indicating that students did not perceive the app to be very cumbersome. The average score for question 9 (I felt very confident using the Invigilator App) was 3.1 (SD = 1.219), indicating that slightly more students agreed than disagreed with this statement. The 3.5 average score (SD = 1.136) for question 10 (I needed to learn a lot of things before I could get going with this invigilator app) is reverse scored, indicating that the majority of students perceived that they would learn the app easily.

The SUS score (where Qn denotes the average score for question n) is calculated as:

$$\begin{aligned}\text{SUS}&=2.5\times\left[\left(\text{Q}1-1\right)+\left(5-\text{Q}2\right)+\left(\text{Q}3-1\right)+\left(5-\text{Q}4\right)+\left(\text{Q}5-1\right)+\left(5-\text{Q}6\right)+\left(\text{Q}7-1\right)+\left(5-\text{Q}8\right)+\left(\text{Q}9-1\right)+\left(5-\text{Q}10\right)\right]\\&=2.5\times\left[\left(2.4-1\right)+\left(5-3.3\right)+\left(3.6-1\right)+\left(5-3.3\right)+\left(3.2-1\right)+\left(5-2.8\right)+\left(3.8-1\right)+\left(5-2.1\right)+\left(3.1-1\right)+\left(5-3.5\right)\right]\\&=65.08\end{aligned}$$

According to the curved grading scale (CGS) of Sauro and Lewis (2016) (Table 1), the SUS value of 65.08 obtained for the invigilator app used during digital assessments corresponds to a C grade, within the 41st–59th percentile range. This value is just below average: Lewis and Sauro (2018) posit that a SUS score of 68 indicates an average experience and a score of 80 a good one. Dutta et al. (2022) emphasise that testing a system’s usability is crucial, as it highlights any missing or overly complex features within the designed system. Therefore, the slightly below-average SUS achieved in this study for the mobile invigilation app indicates that certain elements of the system might not meet the needs of the students and that students are not yet ready to adopt it. If the system does not meet students’ needs, it may cause stress and negatively impact their intention to use it in future (Dutta et al. 2022). However, according to Singh et al. (2019), the more students learn to use a system, the better their overall performance in using it will become. To further understand the SUS results, it is necessary to dissect the SUS constructs. Although the SUS was designed as a unidimensional (one-factor) measurement of perceived usability (Brooke 1996), between 1996 and 2009 researchers reported a consistent two-factor structure, with items 4 and 10 aligning with Learnability and the remaining items with Usability (Borsci et al. 2009; Lewis and Sauro 2009). Although different results on the two-factor structure were obtained after 2009, numerous studies have continued to refer to the two factors (Lewis and Sauro 2017). In this regard, the results on question 4 indicate that students did not perceive a need for assistance from a technical person, while the responses to question 10 indicate that students expected to learn the app easily. These two learnability items, coupled with the feedback on question 7, show that students perceived they would master the app quickly. Thus, the learnability factor of this system is sufficiently high: students perceive that, with enough practice or training, they will learn to use it effectively, which is in line with previous research on system usability measurements (Dutta et al. 2022; Singh et al. 2019). The two-factor structure provides more in-depth explanations, and these quantitative results must therefore be synthesised with the qualitative results and merged within the conceptual framework, as explained in the next section.
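As a brief illustration of the CGS mapping referred to above, the following sketch converts a SUS score into a letter grade. The cut points are assumed from commonly cited reproductions of the Sauro and Lewis (2016) scale and should be verified against the original table; cgs_grade is a hypothetical helper, not part of this study’s analysis.

```python
# Approximate CGS cut points (lower bound of each grade band); these
# boundaries are assumed from commonly cited reproductions of Sauro
# and Lewis (2016) and should be checked against the original table.
CGS_BANDS = [
    (84.1, "A+"), (80.8, "A"), (78.9, "A-"),
    (77.2, "B+"), (74.1, "B"), (72.6, "B-"),
    (71.1, "C+"), (65.0, "C"), (62.7, "C-"),
    (51.7, "D"),
]

def cgs_grade(sus: float) -> str:
    """Map a SUS score (0-100) to its curved-grading-scale letter grade."""
    for lower_bound, grade in CGS_BANDS:
        if sus >= lower_bound:
            return grade
    return "F"

print(cgs_grade(65.08))  # -> "C", the grade reported for this study's app
```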

Merging of quantitative results

The merging of the quantitative results into the qualitative consolidated conceptual framework is depicted in Fig. 12. Previous research indicates that the SUS is linked to the PEoU construct of the TAM (Lah et al. 2020) and can, therefore, be associated with this construct in the qualitative consolidated conceptual framework. In this regard, the two SUS factors of learnability and usability are added to the framework, where they directly influence the PEoU. If students learn to operate the technology, this positively influences their PEoU and their intention to use it (Lah et al. 2020). For students to learn to use a system or technology, some form of training is needed; therefore, the SUS learnability and usability constructs are also linked to the preparation performed element within the conceptual framework.

Fig. 12 Converged conceptual framework for perceived ease of use, perceived usefulness and usability of an invigilation app for technology adoption [author compilation based on empirical evidence of student experiences]

Further interpretation of the merged qualitative and quantitative results into the conceptual framework is discussed in the next section.

Interpretation of merged results

This study’s qualitative and quantitative results effectively identified and organised the elements of students’ perceptions of using an invigilation app and its system usability during digital assessments. The integration of these results provides empirical insights into the conceptual framework derived from the TAM and the elements outlined by Marano et al. (2023). This new framework elucidates how various constructs and elements interconnect to influence PEoU, PU, attitude and behavioural intention to use new invigilation technology.

The connection between usability constructs and conceptual model elements

The SUS constructs of usability and learnability are closely associated with the TAM’s perceived ease of use construct; however, they also exert influence on other elements within the newly developed conceptual framework. The learnability construct, which refers to the user’s ability to engage effectively with new technology, is particularly noteworthy. An analysis of students’ responses to SUS items 4 and 10 indicates a strong belief that they could efficiently learn to operate the invigilator app, despite the system’s usability rating being a C grade. This suggests that while usability may be perceived as moderate, students’ confidence in their ability to learn the system is high.
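A minimal sketch of this two-factor split follows, assuming the Lewis and Sauro (2009) assignment of items 4 and 10 to Learnability and the remaining eight items to Usability; the 12.5 and 3.125 multipliers rescale each subscale to a 0–100 range, and sus_subscales is a hypothetical helper rather than part of this study’s analysis.

```python
def sus_subscales(responses):
    """Split one respondent's SUS contributions into the Usability and
    Learnability factors of Lewis and Sauro (2009).

    `responses` holds the ten raw 1-5 ratings in questionnaire order.
    Items 4 and 10 form Learnability; the other eight form Usability.
    The multipliers rescale each subscale to the familiar 0-100 range.
    """
    contributions = [
        (r - 1) if i % 2 == 1 else (5 - r)
        for i, r in enumerate(responses, start=1)
    ]
    learnability = (contributions[3] + contributions[9]) * 12.5  # 2 items
    usability = sum(
        c for i, c in enumerate(contributions) if i not in (3, 9)
    ) * 3.125                                                    # 8 items
    return usability, learnability

# Hypothetical respondent: learnability can exceed overall usability,
# mirroring the pattern reported in this study.
usability, learnability = sus_subscales([4, 2, 4, 2, 3, 3, 4, 2, 3, 2])
print(f"Usability: {usability:.1f}, Learnability: {learnability:.1f}")
```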

The influence of learnability

The learnability construct’s unique contribution lies in its influence on both the technological and individual concerns linked to the PEoU construct, as well as on the preparation performed element, which is linked to the attitude construct of the TAM. This implies that effective training and preparation are vital in shaping students’ attitudes towards adopting new technology. Likewise, the structured and intentional preparation activities described in the qualitative results reveal that students’ technological and individual concerns interplay with the learnability factor, emphasising the importance of comprehensive training, since it influences the PEoU.

Addressing technological concerns through training

The qualitative analysis sheds light on various technological concerns expressed by students, including issues related to picture quality, document clarity, network connectivity, submission problems and time constraints. These challenges can be mitigated through targeted training sessions led by lecturers. The positive quantitative results regarding students’ perceived learnability of the invigilator app reinforce this notion, indicating that with appropriate training, many of these concerns can be effectively addressed. Furthermore, individual concerns raised by students, such as frequent identification verification causing distractions, can also be alleviated through proper preparatory measures. By addressing these issues through instructional materials such as videos, online sessions and PDFs developed by lecturers, students can improve their time management skills during assessments and reduce interruptions and anxiety.

The role of preparation in shaping attitudes

The preparation performed category emerges as a distinctive concept within this study’s findings and plays a critical role within the converged conceptual framework. It is posited that this category not only directly influences students’ attitudes but also indirectly impacts the PEoU and PU constructs by affecting other elements. Familiarity with the app gained through training fosters a sense of convenience for students, enhances their comfort during off-campus assessments and alleviates test-taking anxiety.

Implications for institutions and instructors

Institutions should prioritise creating an environment where students feel comfortable with invigilation technology prior to high-stakes assessments. Convenience, increased comfort, reduced anxiety and efficacy in detecting cheating are elements that lecturers and institutions should intentionally address and communicate during onboarding training. If students are comfortable with the invigilation technology before a high-stakes assessment, they will have increased PU and PEoU, decreased anxiety and a positive attitude towards the app as a means of detecting cheating and ensuring a fair assessment. Although the students in this study did not directly mention privacy concerns, this element is linked to students’ attitudes and should also be addressed within the preparation students receive before using the invigilation app.

Conclusions

Higher education institutions need practical and technological invigilation solutions to ensure online assessment integrity. It is, therefore, imperative for institutions to navigate students’ negative invigilation technology experiences and to have a clear framework for successfully implementing invigilation technologies. This study’s pragmatic exploration produced new answers to the challenge of invigilating off-campus digital assessments. Specifically, the study identified the elements of students’ technology adoption and the system usability elements of a mobile invigilation application, and developed a novel framework for successful implementation. The conceptual framework is theoretically grounded and depicts the elements that enhance successful mobile invigilation implementation. Elements such as individual and technological concerns can be addressed by establishing a system’s level of learnability and usability and by targeting these concerns through specific preparation and training to increase the PEoU. Likewise, convenience, increased comfort, reduced anxiety and the perceived efficacy in detecting cheating are elements that influence students’ PU of the invigilation technology and must also be addressed through intentional preparation and training. In this regard, the system’s below-average perceived usability can be overcome, since students perceived that they would quickly learn to use the invigilation app if trained sufficiently. Several elements within this study’s results indicate that South African students’ experiences of invigilation software use are very similar to those of students in other parts of the world. Therefore, the transferability of the newly converged conceptual framework extends to institutions and students worldwide. This study represents a pioneering contribution to the field of assessment integrity by investigating students’ usability perceptions and technology adoption when using a new invigilation app during digital assessments and by providing a guiding framework for successful future implementation.

Future implications

The practical implications of this study are multi-faceted and extend beyond its local significance to contribute to the international discourse on invigilation implementation and students’ technology adoption during digital assessments. The study provides a robust framework for addressing the issues limiting the adoption of invigilation technology. Likewise, the framework indicates that addressing PEoU, PU, Att and system usability will mitigate student pushback against future invigilation implementation. The findings also underline the importance of including student feedback in the technology adoption evaluation process, which should comprise both a qualitative exploration and a quantitative investigation, since the converged data sets substantiated each other within this study. Furthermore, institutions are advised to develop intentional and standardised processes and procedures to address individual and technological concerns, and to leverage this study’s insights to guide the evaluation of invigilation software adoption in a pragmatic and intentional manner. Such guidance should include clear technical requirements as well as support for the training and preparation procedures involved in implementing new invigilation technology. Future research can extend to lecturers’ experiences and to developing a guide for determining students’ perceived invigilation software usability, together with specific training material addressing the elements in the framework. The general investigative properties of the converged conceptual framework provide transferable guidelines for other institutions and various invigilation software packages. Emphasis should be placed on creating proactive usability and technology adoption measurements that are formally incorporated into institutional procedures.

Data availability

The datasets analysed during the current study cannot be made publicly available due to the institution’s ethical clearance prescriptions and restrictions on data safety. Only once the institution has adapted its ethical clearance for research studies will the author be able to make the data publicly available.

Abbreviations

ADHD: Attention deficit hyperactivity disorder

AI: Artificial intelligence

App: Application

Att: Attitude

BCom: Bachelor of Commerce

BI: Behavioural intention to use

CAQDAS: Computer-aided qualitative data analysis software

CCM: Constant comparative method

CGS: Curved grading scale

COVID-19: Coronavirus disease 2019

FAQ: Frequently asked questions

HEI: Higher education institutions

ID: Identity document

LMS: Learning management system

NWU: North-West University

OLP: Online live proctoring

PEoU: Perceived ease of use

PU: Perceived usefulness

QR: Quick response

RP: Recorded proctoring

SD: Standard deviation

SUS: System Usability Scale

TAM: Technology acceptance model


Acknowledgements

Acknowledgement is given to the School of Accounting Sciences lecturers who implemented the Invigilation App during online assessments and posted the link on their LMS sites for the students to access.

Funding

Open access funding provided by North-West University. No funding was received to perform this research.

Author information


Contributions

The sole author conceptualised the study, performed the analysis and wrote the manuscript. The lecturers in the School of Accounting Sciences assisted in posting the link to the online questionnaire for the students on the learning management system platform.

Corresponding author

Correspondence to Liandi van den Berg.

Ethics declarations

Competing interests

The author has no competing interests to declare.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

van den Berg, L. Re-thinking invigilation implementation: a mixed method approach to student perceptions and system usability for digital assessment adoption. Int J Educ Integr 21, 10 (2025). https://doi.org/10.1007/s40979-025-00183-w

