Developing a change management measurement instrument for the effective use of an examination administration system

Quantitative change management measurement for effective use, consisting of operationalized change drivers, dimensions and sub-dimensions for the effective use of an Examination Administration System (EAS) in the South African context, has been scarcely discussed in the literature. This paper develops and validates a change management measurement instrument (CHAMI) to measure the effective use of the EAS in South Africa's (SA) Technical and Vocational Education and Training (TVET) colleges. The CHAMI was developed by drawing on the survey measurement instrument literature for change management and effective use, and was assessed quantitatively using data collected from 215 EAS users across all 318 TVET colleges in the nine provinces of SA. The empirical results support the construct validity of the CHAMI, with 11 dimensions (i.e., user involvement and change recognition, user satisfaction, performance measurement, technology use, EAS adaptive use, EAS verification, user learning, transparent interaction, representational fidelity, informed action, and effective use) and 63 questions. Another contribution of this study stems from the way the research constructs in the measurement model are operationalized, as they incorporate the measurement class (i.e., reflective or formative).


Introduction
The challenge of ineffective measurement instruments is recognized among information systems (IS) and management system practitioners (Eden et al., 2020; Hair et al., 2012).
From the literature reviewed in this paper, there is limited knowledge on measurement scales for the effective use of information systems, specifically for the Examination Administration System (EAS) in South Africa's Technical and Vocational Education and Training (TVET) colleges. The effective use construct is grounded in Effective Use Theory, which defines it as the degree to which user communities believe that the way they put information systems to use helps them execute their activities (e.g., examination administration) to produce desired results (Trieu et al., 2023; Burton-Jones and Grange, 2013). Borrowing from this definition, in this paper effective use refers to the degree to which EAS user communities believe that the manner in which they use the EAS helps them carry out their examination administration service effectively and efficiently (Burton-Jones & Grange, 2013).
In this study, the EAS is a technology in the sense of the socio-technical, organizational systems that are used to gather, process, store, extract and distribute information in an institution (Piccoli and Pigni, 2018). Specifically, it is a specialized, customized business information system that end-user communities at different levels (e.g., operational and management) of government departments such as the Department of Higher Education and Training (DHET) and associated agencies (i.e., TVET colleges) must use to perform day-to-day operations such as facilitating examination administration (public service delivery) to society (Department of Telecommunications and Postal Services, 2017). In other words, the EAS is a public service administration information system that public service agents at different organizational levels (e.g., operational and management) use to carry out examination administration (Mphahlele et al., 2024). Contextualising South Africa's TVET colleges, both public and private TVET colleges were established under the FET (Further Education and Training) Colleges Act No. 16 of 2006 (DHET, 2006). TVET colleges (previously known as FET colleges) are one of the three tiers of higher education institutions alongside universities in South Africa (SA), providing programme-based vocational and occupational training to members of the public in SA and neighbouring countries such as Swaziland and Namibia. Currently there are 183 TVET colleges in South Africa (50 public and 133 private) (DHET, 2023). What makes SA a captivating case for this study is that it is a league leader in e-Government development in Africa, with an EGDI value of 0.7357 (United Nations Report, 2022). For these reasons, the effective use of the EAS in South African TVET colleges is an ideal setting for this study.
While there is scant literature on the quantitative measurement of effective use, relevant studies exist, for example on performance information systems in medium-sized enterprises (Marchand & Raymond, 2017); in finance (Haake et al., 2018); in healthcare (Eden et al., 2019); on emergency information systems in the United States (Bonaretti, 2019); on healthcare information systems in the Chinese context (Yang et al., 2021); on analytic decision support systems (Campbell & Roberts, 2019); and on a financial system in a large Australian tertiary education provider (Eden et al., 2020). Effective use measurement items for generic information systems were provided in Burton-Jones & Grange (2013) but were very seldom used, as reported in the Eden et al. (2020) study. The Mphahlele et al. (2024) study proffered dimensions for the effective use of the EAS but fell short of providing measurement scales.
Scholars such as Sahibzada et al. (2019) suggest that measurement scales must be context dependent. The first challenge identified in the literature reviewed for this study is that few studies have developed specific measurement scales for the effective use of the EAS in the context of South Africa's TVET colleges. Scholars such as Bonaretti (2019) and Burton-Jones & Volkoff (2017) contend that the dimensions of effective use must be premised on the context under investigation. For example, for healthcare records, effective use consists of accuracy, consistency, and action aspects of reflection (Burton-Jones & Volkoff, 2017). In emergency management, effective use was regarded as a multifaceted formative concept made up of promptness, currency, and responsiveness (Bonaretti, 2019). Dimensions of the effective use of the EAS were uncovered in the Mphahlele et al. (2024) study; however, there is limited knowledge on measurement scales. The second challenge is that the validity testing of measurement scales for user satisfaction indicators (Bölen, 2020; Calvo-Porral & Nieto-Mengotti, 2019; Nagy, 2018), information systems adaptive use (Burton-Jones & Grange, 2013; Haake et al., 2018a) and information flow (adapted from Yang et al., 2021; Kim et al., 2009) was not in line with the Hair et al. (2022) guide for testing the validity of formative constructs, which could lead to erroneous results, as validity tests for reflective constructs are inappropriate for formative constructs (Petter et al., 2007). What is new in this study is that the measurement class (i.e., reflective or formative) is explicit on the instrument, and validity testing for formative constructs is done in line with the Hair et al. (2022) guideline. The inclusion of all measurement classes of the constructs is important to avert reaching erroneous results (Eden et al., 2020).
Change management dimensions, technology use and sub-dimension (i.e., verification of information systems) scales of effective use have not been adequately covered in the previous body of knowledge. Previous measurement studies on the effective use of information systems had questionable measurement items, with the exception of Eden et al. (2020). For illustration, Bonaretti (2019) and Burton-Jones & Volkoff (2017) were criticised for offering measurement scales not premised on effective use theory, which could lead to erroneous results due to latent content validity issues (Eden et al., 2020). The Eden et al. (2020) argument hinges on the fact that the indicator guide of the seminal effective use study by Burton-Jones and Grange (2013) was not followed in many instances; therefore there are content validity issues with previous information systems measurement studies on the effective use of IS.
A prior study such as Eden et al. (2020) offers effective use measurement scales premised on effective use theory, but those scales were designed to measure the effective use of enterprise systems, not the effective use of the EAS. More specifically, there is scant literature on change management measurement instruments (CHAMI) for the effective use of various information systems. Given the important role of change management in the effective use of information systems (Mphahlele et al., 2024), a study of change management indicators is equally important, as inadequate measurement scales could lead to erroneous recommendations (Eden et al., 2020; Hair et al., 2012). This study aims to develop and validate a CHAMI for the effective use of IS, specifically the Examination Administration System (EAS) in South African Technical and Vocational Education and Training (TVET) colleges. Apart from contributing to the development of a CHAMI for effective use in information systems, TVET colleges' management may benefit from the CHAMI by identifying areas of improvement and developing targeted change management strategies such as user training, communication and gamification of performance. The measurement scales of previous studies are insufficient for a CHAMI to measure the effective use of the Examination Administration System in South African TVET colleges, which incorporates novel change management factor scales (i.e., user involvement and change recognition, user satisfaction, and performance measurement scales), technology (EAS) use, EAS adaptive use, EAS verification, and effective use of Examination Administration System scales. Though this work builds on Eden et al. (2020), this paper offers CHAMI measurement scales premised on a new approach that integrates change management factors, Unified Theory of Acceptance and Use of Technology (UTAUT) and Effective Use Theory (EUT) scales.
This study relies on the integration of measurement scales from the change management framework, UTAUT (Venkatesh et al., 2003) and EUT (Burton-Jones & Grange, 2013) as its foundation. The measurement scales of change management dimensions (Smith et al., 2022; Rohmah & Subriadi, 2022; Musungwini & Mono, 2019; Yusif et al., 2019; Mogogole & Jokonya, 2018; Abatan & Maharaj, 2018; Chepa et al., 2017; Ziemba & Oblak, 2015; Kapupu & Mignerat, 2015), such as top management support/guiding team and activities, user involvement and change recognition, change shared vision, planning the EAS as a change, effective communication, user training, user satisfaction, information flow and performance measurement, were gleaned from the change management literature. This paper extends these measurement scales to the post-implementation phase (effective use) of information systems, which is new, as reported by Osnes et al. (2018). The change management dimensions are viewed as facilitating conditions of technology use (Venkatesh et al., 2003). The Yusif et al. (2019) study also revealed that change management factors influence the introduction of IS as a change in public health organizations in Ghana. In the context of this study, the use of the Examination Administration System represents a change in TVET colleges in South Africa, and it is believed that change management strategies such as user training will in turn lead to effective use. Effective use is constituted by three dimensions (i.e., representational fidelity, transparent interaction and informed action) and three sub-dimensions (i.e., verification, adaptation and learning) (Burton-Jones & Grange, 2013; 2008); hence measurement scales for these dimensions were gleaned from the literature to evaluate the effective use of the EAS. The existing measurement scales from the literature (Mogogole & Jokonya, 2018; Abatan & Maharaj, 2018; Visser, 2017; Visser et al., 2013; 2012) have not been tested in the effective use (post-adoption phase) context of the EAS.
Reflecting on the first part of this study's aim, namely the development of the CHAMI, change management can be defined as a structured approach that allows individuals to accept changes necessitated by projects (Mrini et al., 2019). In the context of this study, change management refers to the set of strategies (programmes or plans), processes, activities, techniques and structured approaches (tools) required to manage the user community to use the EAS.
Linked to the second part of this study, namely validating the CHAMI, the literature suggests there is limited knowledge on the measurement scales of the effective use of information systems (Osnes et al., 2018). For example, studies such as Mogogole & Jokonya (2018) and Abatan & Maharaj (2018) that offered measurement scales addressed the implementation of IS projects. There is, however, literature on the usage of information systems in business intelligence (Trieu et al., 2022), in the Chinese hospital context (Yang et al., 2021), on the usage of an enterprise financial system in a large Australian tertiary education provider (Eden et al., 2020), and on the use of information systems in general (Burton-Jones and Grange, 2013). Yet change management dimensions, technology use and sub-dimension (i.e., verification of IS) scales of effective use have not been adequately covered in the previous body of knowledge. Previous measurement studies on the effective use of IS had questionable measurement items, which could lead to erroneous results (Eden et al., 2020). These authors' argument hinges on the fact that the indicator guide of the seminal effective use study by Burton-Jones and Grange (2013) was not followed in many instances, leading to the content validity issue that is covered in this research study. Further to this research gap, there is inadequate literature on the effective use of the EAS, specifically in the South African TVET college setting, leading to the second part of this study, namely the validation of the CHAMI for the EAS in SA TVET colleges.
The rest of the study is structured as follows: literature review, research methodology, statistical analysis results, discussion and recommendations, study contributions, ethical considerations and conclusion. The next section presents the literature and theories espoused to develop the CHAMI for effective use of the EAS.

Literature Review
A systematic literature analysis was used to develop the CHAMI, achieving the first component of the research aim. This section reviews the literature on change management measurement and effective use, specifically for the Examination Administration System, concluding with the triangulation of the change management framework, the Unified Theory of Acceptance and Use of Technology and the EUT as this study's theoretical framework. The next section discusses the change management measurement literature.

Change management measurement
Technology initiatives such as information systems are transformational in nature, as they often change the operational processes, organizational structure and mechanisms of service delivery (Afyonluoğlu et al., 2014). One such example is the use of the EAS, which changes the way in which TVET colleges' users perform examination administration and management. The examination administration processes in TVET colleges were digitised and transformed, as processes that still require manual handling are slow and often lead to service delivery challenges in institutions of higher learning (Adam et al., 2017). Consequently, in this study, the use of the EAS represents change. Musungwini and Mono (2019) conceptualized a change as the introduction of a new system; in the context of this study, how the examinations are administered (i.e., through a new system) represents a change. Change is a standard situation, especially when an organization introduces a new information system, often accompanied by changes in operational processes, organizational structure and mechanisms of service delivery (Rohmah and Subriadi, 2022).
The IS implementation life cycle involves pre-implementation, implementation and post-implementation phases (Butarbutar et al., 2023; Xei et al., 2022). The focus of this study is the development and validation of a CHAMI for the effective use of the Examination Administration System (the post-acceptance phase). The post-acceptance phase of IS involves change management, top management support, project management, the implementation team and user training (Osnes et al., 2018). Change management has been highlighted as a mandatory intervention to restructure the operational processes of the organization to make them fit the new IS (Osnes et al., 2018). The Osnes et al. (2018) study recommends continuing the change management strategies and measurement scales from the implementation or development phase of a new IS into the post-acceptance phase. Change management is one of the key factors with a history of influencing the success or failure of information systems implementations. For example, in their study to identify success factors for IS implementation, Wijaya et al. (2018) recommended the change management dimension as extremely important for IS implementation. On the contrary, the Phaphoom et al. (2018) study found change management to be one of the factors that led to IS implementation failures.
Change management can be defined as a structured approach that allows individuals to accept changes necessitated by projects (Mrini et al., 2019). Mogogole and Jokonya (2018) also refer to change management as the practices and tools for managing and controlling individuals to realize the necessary organizational results following a change. In the context of this study, change management refers to the set of strategies (programmes or plans), processes, activities, techniques and structured approaches (tools) required to manage the user community to effectively use the EAS.
Change management in the post-implementation phase, such as effective use, remains a gap that higher education institutions must still consider (Adam et al., 2017). The topic of a change management framework for the effective use of information systems is at a nascent phase, as highlighted in the Osnes et al. (2018) study. A crucial step in ensuring the effective use of information systems is a better grasp of the change management measurement scales for effective use. This would, among other things, require management to produce knowledge on what the EAS user community needs in order to use the EAS effectively. To attain this, management requires the appraisal and application of effective change management (Rohmah & Subriadi, 2022), which in turn requires effective measurement scales. Osnes et al. (2018) suggested that continuing change management strategies from IS implementation into the post-acceptance phase is another opportunity to consider, as it also offers an extension of measurement scales. However, the Finney and Corbett (2007) study reported that although change management has emerged as one of the most cited key factors for information system implementations, there is variance with regard to what change management entails and which change management strategies and associated measurement scales would work. Few authors have drawn up discrete measurement scales for change management (Mogogole & Jokonya, 2018; Abatan & Maharaj, 2018; Visser, 2017; Visser et al., 2013; 2012). The literature offers limited discrete indicators for change management, especially in the post-implementation phase such as the effective use of the EAS. The next section discusses effective use within the EAS.

Effective Use within the Examination Administration System
Tantua and Godwin-Biragbara (2020) refer to the operational type of IS, the management information system, as a collection of hardware, software, data, process and people components that work together to facilitate the input, processing, storage, output and control actions needed to transform raw data into information that can be used to support an organization's operational activities, management and decision making. This perspective on information systems holds that there is an information processing chain that includes inputs, processing, output, and data storage. In the present study, the EAS is a specialized, customized business-function information system that the end-user community at different organizational levels (e.g., operational and management) must use to perform daily activities such as facilitating examination administration and management.
A lack of use, or partial use, of the EAS is of great concern to Government, as it has invested large amounts of money in the Examination Administration System to improve examination administration and support the business processes in higher education institutions, specifically TVET colleges. The challenge of a lack of use or partial use of information systems is well known: Recker et al. (2019) acknowledged that as information systems evolve, the key issue is to ensure that they are used effectively, which remains a challenge to date. It is against this background that the idea was conceived of identifying the measurement indicators for the effective use of information systems. From this view, it is of great importance to know what has or has not worked, and how EAS use can be improved so that it is fully and/or effectively used.
According to Burton-Jones & Straub (2006), individual-level information systems use implies a single user employing features of the system to carry out an activity. A user in this study refers to an individual (EAS user) who uses the EAS, which includes direct and indirect users (Urus, 2013). In the present study, the definition of EAS use adopts the aggregate construct formulation. It is conceived as a triangle of three equal 60-degree angles (summing to 180 degrees): the EAS users, performing examination administration (tasks or activities), on the EAS. The highlight of this conceptualization is a 360-degree change management circle (activities (examination administration), organization/group structures (EAS users), and change or system and processes (EAS)) which encloses an informatics square with four equal angles (inputs, processing, output, and data storage).
Eden et al. (2020) validated the measurement items provided in the seminal work by Burton-Jones & Grange (2013). Limitations nevertheless remain, as this study's conceptualization of effective use includes the three sub-dimensions (i.e., adaptation, learning, and verification) provided by that seminal work. Although this study leveraged Eden et al. (2020), the seminal work's self-report measures, which were not validated, require individuals to reflect on their lived experiences, which are often affected by cultural components (Lenz et al., 2018). The results of validation studies are not indiscriminately generalizable, because the importance of the different effective use dimensions and the exact measurement items associated with each dimension may be contextualised (Van der Vaart, 2021; Burton-Jones & Volkoff, 2017). The Eden et al. (2020) case study concerned a single context relating to the operational use of an enterprise system to perform finance tasks. The data in the Eden et al. (2020) study were also collected from one cohort with a limited sample of 12-30 participants. The sample was also a constraint, as various dimensions of effective use may be more appropriate for multigroup analysis (Eden et al., 2020). Subsequent studies have also called for more investigations into how to use IS effectively, context-specific applications and EUT extensions, similar to Eden et al. (2020), who recommended expansion of the nomological network surrounding effective use. The Eden et al. (2020) study also warned against the assumption of error-free formative scales, noting that the internal consistency reliability concept is not suitable for them (Diamantopoulos, 2006). Hair et al. (2014) suggest that scholars must establish content validity prior to any empirical evaluation of formatively measured constructs, as formative measures have erroneously been evaluated using reflective criteria. Practically, this must be done by ensuring that the operationalization of all integrative constructs captures all dimensions, including the measurement class, and that the partial least squares structural equation modelling (PLS-SEM) evaluation of measurement models includes all dimensions (Hair et al., 2014). The inclusion of all measurement classes of the constructs is important to avert reaching erroneous results (Eden et al., 2020). Koo et al. (2015) define exploitative use (EAS use) as making use of more of the available EAS features to accomplish activities, whereas explorative use refers to making use of the EAS in a novel/innovative way to support tasks (Koo et al., 2015). In their study investigating the explorative and exploitative uses of smartphones, Koo et al. (2015) found that exploitative use has an impact on explorative use. Therefore, adding variables such as adaptive use (explorative use) and technology use (exploitative use) measurement scales to measure effective use adds to the mechanisms through which the effective use of the EAS can be measured. These additions extend measurement theory for the nomological network surrounding effective use, as called for by previous studies such as Eden et al. (2020). The next section discusses the theories of use and effective use.

Use theories
This study relies on UTAUT and the EUT in developing a measurement instrument for change management. These two information systems theories were chosen because previous studies have relied on widely used theories and theoretical models (such as Lewin's (1951) change management model; the Aladwani (2001) model; the contingency model of IT organization; the Stages of Growth Model; Prosci's ADKAR (Creasey, 2019); Kotter's Eight Steps Model (Almanei et al., 2018); the McKinsey 7S model (Sachdeva, 2008); and ITIL 3 (Susanto, 2016)) built to explain change management with no regard to post-acceptance behaviour such as effective use. The previous studies offered change management measurement scales for the adoption rather than the post-adoption phase (effective use), which is the focus of this study. This study also recognizes that there are individual (McKinsey 7S model (Sachdeva, 2008); Prosci's ADKAR (Creasey, 2019)), organizational (Prosci's ADKAR (Creasey, 2019); Kotter's Eight Steps Model (Almanei et al., 2018); McKinsey 7S model (Sachdeva, 2008)), environmental (Task-Technology Fit model (Goodhue & Thompson, 1995)) and technological (ITIL 3 (Susanto, 2016)) change-factor measurement scales for use. However, this study focuses on the individual (EAS end-user) factor measurement scales for the use, and subsequent effective use, of the EAS.
In this study, the change management dimensions are regarded as facilitating conditions of the use of technology in UTAUT. One motivation for adopting UTAUT is that, according to Venkatesh et al. (2016), the model has been applied in many research contexts, for instance e-libraries (Mashaba & Pretorius, 2023) and technology acceptance (Dwivedi et al., 2019). This study's literature review found inadequate application of the UTAUT model, in particular to the EAS in the South African TVET college setting. Venkatesh et al. (2016) also reported some progress in the area of UTAUT integration with other models, such as the IS success model (Kim et al., 2007) and Task-Technology Fit (Zhou et al., 2010). The researchers also observed scant integration of UTAUT with other use models such as effective use, which is the focus of this study.
Moreover, because of its significantly higher percentage of application, extension and integration in studying the acceptance and use of technology innovations, the UTAUT model is deemed superior to the earlier models. Notwithstanding this superiority, there is still space for a thorough analysis of the key measurement scales for the facilitating conditions of Examination Administration System use (Venkatesh et al., 2012). Therefore, UTAUT is viewed as a strong basis for identifying the change management measurement scales for EAS use and subsequent effective use. The other relevant theory is the EUT, which is discussed next.
The EUT is an information systems theory based on representation theory, established by Burton-Jones & Grange (2013) to explain the effective use of any information system and to model its nature (i.e., representational fidelity, transparent interaction and informed action) and drivers (i.e., adaptation, learning and verification activities). The EUT theorizes that representational fidelity, transparent interaction and informed action determine one's level of effective use. This indirectly positions representation theory as an appropriate theory for investigating the measurement scales for the effective use of the EAS, since effective use is a social reality that can only be explained and demystified through different individuals' eyes (views).
A key component of the theory is that an information system is used in a manner that assists individuals to achieve relevant goals, a transition from system use theory (Burton-Jones and Straub, 2006), in which use is carrying out activities to attain specific goals. Burton-Jones and Grange (2013) based their notion of effective use on representation theory (Wand and Weber, 1995; Wand and Weber, 1990), which states that any IS comprises physical, surface, and deep structures. Premised on the grounding theory of representation, effective use is generic in nature, whereby its three hierarchically related dimensions (i.e., representational fidelity, transparent interaction and informed action) and three sub-dimensions (i.e., verification, adaptation and learning) are proposed to be applicable to any IS (Burton-Jones and Grange, 2008). Burton-Jones and Grange (2013) also claim the applicability of their study results to all IS, since the EUT is grounded in representation theory. The validation and extension of the EUT has been left to future studies. A systematic review provided the necessary foundation for the operationalization of the CHAMI, and the validation results for the CHAMI to measure the effective use of the EAS are also presented. The next section discusses the methodology followed in this study.

Research Methodology
For the first part of the research aim, a systematic literature analysis was used to develop the CHAMI; thereafter, the CHAMI was validated through a quantitative survey using a questionnaire to gather the data.
For the validation, the unit of analysis comprised individuals who directly and indirectly use the EAS, with a population size (N) of 12,538 (Department of Higher Education and Training, 2017). The Raosoft Inc. (2004) calculator was used to determine the optimal sample size; with a margin of error of 0.05, a confidence level of 90% and a response distribution of 50%, this resulted in a sample size of 265. A total of 215 responses were received, representing an 81.13% response rate (RR), which is greater than the average RR (Holtom et al., 2022). Of the 215 respondents, 139 were direct users and 71 indirect users, which is satisfactory since the measurement scales measure effective use.
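The Raosoft calculator applies the standard finite-population sample size formula. A minimal sketch reproducing the figures above (the function name and z-value lookup table are ours, not part of the original study):

```python
import math

def raosoft_sample_size(population, margin_of_error=0.05,
                        confidence=0.90, response_distribution=0.50):
    """Finite-population sample size, as computed by the Raosoft calculator."""
    # Two-tailed z-score for the confidence level (1.645 for 90%)
    z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}[confidence]
    # Unadjusted sample size for an infinite population
    n0 = (z ** 2) * response_distribution * (1 - response_distribution) \
         / margin_of_error ** 2
    # Finite-population correction
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

n = raosoft_sample_size(12_538)   # 265
response_rate = 215 / n * 100     # approx. 81.13%
```

With N = 12,538, a 5% margin of error, 90% confidence and a 50% response distribution, this returns 265, and 215/265 gives the reported 81.13% response rate.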
The CHAMI consists of 63 questions using a five-point Likert scale (from strongly disagree (1) to strongly agree (5)) and was pilot tested through Microsoft Forms with 17 members of the target audience from eight different user cohorts. Thereafter, the online questionnaire responses were exported to a Microsoft Excel spreadsheet to make them compatible with the WarpPLS version 8.0 software and to allow the researchers to check data completeness. The coding of the questions is given in Table 1. PLS-SEM was used for the assessment of the measurement model. The model was evaluated using the measurement model evaluation phases adapted from Hair et al. (2022) and Hair et al. (2014). Although PLS-SEM was applied, only the measurement model evaluation results are presented, as the structural evaluation results are out of scope for this paper. The reflectively measured constructs (i.e., user involvement and change recognition, user training, top management support and activities, planning the EAS as a change, change shared vision, performance measurement, technology use, user learning, EAS verification, transparent interaction, representational fidelity, and informed action) were tested for internal consistency reliability, convergent validity and discriminant validity, in line with the measurement guidelines proffered by Hair et al. (2022). Also in line with the Hair et al. (2022) guide, the formatively measured constructs (i.e., user satisfaction, information flow, EAS adaptive use and effective use) were assessed for convergent validity, multicollinearity, and the relevance and significance of formative indicator weights. The measurement item validity tests were carried out using convergent and discriminant validity tests through the output of the Fornell-Larcker criterion, combined and cross-loadings, the Heterotrait-Monotrait (HTMT) ratio and a full collinearity assessment. The next section presents the measurement construct conceptualization and operationalization, question codes, measurement scales/questions developed and gleaned from the literature, and the validated measurement scales.

Findings and Discussions
To address the aim of the study, the results are presented in two parts, namely the development of the CHAMI and the validation of the CHAMI for effective use of EAS.

Development of Change Management Measurement Instrument
To achieve the CHAMI development objective, measurement scales and constructs were gleaned from the literature and adapted to this study's context. The measurement constructs' conceptualization and operationalization, measurement class, question codes and measurement scales/questions developed and gleaned from the literature are presented in Table 1.

User training (UT)
UT is the extent to which EAS users have been trained on how to use the EAS through training presentations, self-study, manuals or any other means (Rohmah & Subriadi, 2022; Musungwini & Mono, 2019; Abatan & Maharaj, 2018; Chepa et al., 2017; Kapupu & Mignerat, 2015; Ziemba & Oblak, 2015).

UT1
The training provided was adequate to assist me to use the EAS at my organization.

User satisfaction (USAT)
USAT denotes the extent to which EAS users believe that the EAS meets their expectations/requirements. In other words, the user satisfaction construct reflects the level of EAS users' positive feeling toward continuing to use the EAS and the degree to which their expectations are satisfied (Ziemba & Oblak, 2015).

USAT 3
The EAS is user friendly.

Content
USAT 4
The EAS provides the precise information I need.

Effective communication (EC)

EC1
I am informed about the use of the EAS; for example, it makes provision for reporting of examination service delivery at all levels (i.e., campus, college, regional, provincial and national).

EC2
There was adequate communication about the EAS.

EC3
There was adequate communication about the EAS.

EC4
My organization management presented about the EAS at our staff meetings.

Information flow (IF)
IF refers to the degree to which users' expectations and requests are fulfilled by EAS information (Ziemba & Oblak, 2015). In this study, what sets information flow apart from representational fidelity (RF) is that RF is a property of effective EAS use while IF is a property of EAS use (Burton-Jones & Grange, 2013). However, other scholars have found that information quality affects effective use (Yang et al., 2021).

Planning EAS as a change (PC)
PC refers to the extent to which there is a clearly documented change management process pertaining to the use of the EAS (Yusif et al., 2019; Abatan & Maharaj, 2018; Ziemba & Oblak, 2015).

PC1
At my organization there was a change management program in place for EAS.

PC2
At my organization there was a change management plan for the EAS.

PC3
At my organization there were change management meeting minutes for the EAS.

Change shared vision (CSV)
CSV denotes the extent to which EAS users know how the EAS will be able to transform the organization (e.g., workflows) or at least their jobs. In other words, it is the degree to which the EAS users know about the EAS's future destination (end goal) (Rohmah & Subriadi, 2022; Mogogole & Jokonya, 2018; Ziemba & Oblak, 2015).

CSV1
The EAS changed the way I render the examination services.
Authors

CSV2
EAS improves my organization's examination services.

CSV3
The EAS assist me in communication with different stakeholders (e.g.students).

Performance measurement (PM)
PM refers to the concept of recognition and reward, essentially the degree to which performance of examination administration on the EAS is gamified and assessed in an individual performance appraisal system, including the annual performance plan of the organization and its units (Rohmah & Subriadi, 2022; Chepa et al., 2017; Ziemba & Oblak, 2015).

PM1
Carrying out/performing the examination administration tasks on the EAS contributes towards my performance and/or organization performance.

PM2
My EAS activities/tasks are considered during my performance appraisals.

PM3
My contribution towards roll out of EAS at my organization was recognized by my peers and other stakeholders (e.g.Management and students).

Technology use (TU)
TU denotes the extent to which EAS user communities exploit the Examination Administration System to perform examination administration (tasks or activities) (Sun et al., 2019; Haake, 2017; Koo et al., 2015).

TU1
When I was using EAS, I felt completely absorbed in what I was doing.

EAS adaptive use (EAU)
EAU refers to the extent to which EAS users explore the EAS through adjustments made either to enhance its ability to represent examination administration (reality) or to improve accessibility through the surface structure (e.g., reports or customizing the EAS interface per user role), deep structure (e.g., customizing data rules and EAS features) or physical structure (e.g., use of larger computer screens) to improve representational fidelity (Sun et al., 2019; Haake, 2017; Burton-Jones et al., 2017).

EAU1
I tried new features in EAS.

EAU2
I replaced some EAS features with new features.

EAU3
I used some features in EAS together for the first time.

EAU4
I used some features in EAS in ways that were not intended by the system designers and developers.

EAS verification (EV)
EV refers to the extent to which EAS user communities confirm expected benefits through their use experience with the EAS (Bhattacherjee, 2001a).
Reflective

EV1
As I was using some EAS feature/reports I checked other features such as functional task/business reports.

EV2
While I was working on EAS, I wanted to see whether I could check other colleges' student enquiries.

EV3
While I was working on EAS I wanted to know whether the system is the same as other college systems.

User learning (UL)
UL is defined as the extent to which the EAS user communities exploit the EAS by engaging in learning activities about the representational fidelity of the EAS and learning how to access the transparent interaction (e.g., surface or physical structure) offered by the EAS and leverage it (Trieu et al., 2023; Haake, 2017).
Reflective

UL1
I explored many ways of using EAS features or reports.

Transparent interaction (TI)

TI2
EAS is convenient and easy to learn for me.

TI3
I found it easy to carry out/perform what I want to do on EAS.
Ease of use

TI4
My interaction with EAS was clear and understandable.

Representational fidelity (RF)
RF refers to the degree of fidelity with which the representation the EAS user obtains from the Examination Administration System manifests the represented domain (examination administration). In other words, it is the extent to which the EAS user obtains clear, complete, correct, and meaningful information (an information process) about the represented domain (examination administration) from the EAS (Trieu et al., 2023; Eden et al., 2020; Haake, 2017).

RF1
When I use EAS, I find that the content (data and reports information) it provides me was sufficiently correct at my organization.
Adapted from Eden et al., 2020; Campbell & Roberts, 2019; Haake et al., 2018b; Burton-Jones & Grange, 2013

RF2
When I use EAS, I find that the content (data and reports information) it provides me was sufficiently complete at my organization.

RF3
When I use EAS, I find that the content (data and reports information) it provides me was sufficiently meaningful at my organization.

RF4
When I use EAS, I find that I can rely on it to process the data I entered correctly.

Informed Action (IA)
IA refers to the extent to which relevant, appropriate and correct information is leveraged/acted/decided on by EAS users to improve their state in the examination administration domain (Trieu et al., 2023; Eden et al., 2020; Haake, 2017).
Reflective

IA1
I use EAS because it supports me in successfully performing my work.
Adapted from Eden et al., 2020; Campbell & Roberts, 2019; Haake et al., 2018b; Burton-Jones & Grange, 2013

IA2
I act upon the information that is provided by EAS because it assists me to effectively execute my work.

IA3
The information I obtain from EAS I can act upon to effectively perform my tasks (e.g., allocate operational tasks, monitor operations, reporting, improve service delivery and complete examination administration management/operations).

IA4
When I obtain information from EAS, I use key parts of it to find solutions for problems encountered in my work.

Effective Use (EU)
In light of the burgeoning technological landscape (e.g., wearable technologies, artificial intelligence, machine learning algorithms and omnipresent computing), effective use can be of a social or digital reality, which can only be enlightened through different agents' (i.e., individual or digital) eyes (views) (Recker et al., 2021); however, the focus of this study is social reality.
Therefore, EU refers to the degree to which the IS user communities believe that the manner in which they make use of information systems helps them to carry out their activities (e.g., studies, work) to produce desired results (Burton-Jones & Grange, 2013).
Formative
EU
Adapted from Eden et al., 2020

Table 1 evinces 17 constructs and 63 measurement scales. The CHAMI was piloted with 17 target audiences across colleges in seven provinces of South Africa to establish content validity, which was important especially for the measurement scales that were not espoused from the literature. The pilot testing was in line with Creswell & Creswell (2018), who state that piloting assists with the preliminary evaluation of the items' internal consistency and with improving the questions, format and questionnaire instructions. Most participants said the survey questions were appropriate; views of respondents were sought on their appropriateness ("Excellent." was the view from Participant 11). The pilot study results were analysed, some questions were combined and duplicate questions were removed. The data collection instrument was also peer reviewed by two peers (i.e., an IS expert with a PhD and a professor in IS) prior to the piloting of the CHAMI. Based on the expert suggestions, the flow, sequencing and ambiguity of indicators and measures were addressed. To achieve the second part of this research objective, the final instrument (Table 1) was used for the validation process.

Validation of the Change Management Measurement Instrument
To achieve the second objective of this research, validation of the CHAMI was carried out through PLS-SEM to determine the reliability and validity of the measurement model. The questionnaire reliability and validity test results indicate that all measurement scales' reliability and validity were fulfilled, as presented in Table 2. The results indicate that the minimum criterion (i.e., 0.50) was met for all indicators and their respective variables. All the standardized factor loadings presented in Table A.1 were significant at p < 0.001, and loading values ranged from 0.859 to 0.950. Therefore, convergent validity was fulfilled for all indicators. The overall loading results also suggest that the measurement instrument had acceptable convergent validity. Table A.1, as presented in Annexure A, exhibits that the loading values of all measurement items are higher than the cross-loading values. Thus, the indicators' discriminant validity had been met.
Following the establishment of convergent and discriminant validity of all indicators, the questionnaire convergent validity test was performed. The convergent validity results (i.e., factor loadings, Cronbach's alpha, Average Variance Extracted (AVE) and Composite Reliability (CR)) are presented below. Table 2 evinces that all scale factor loadings are above 0.70, which indicates that the scales have good convergent validity. Table 2 also shows that the AVEs for all 17 variables are above the criterion of 0.50 recommended by Hair et al. (2010), meaning that all the measurements are valid for their respective constructs. It also displays that all constructs' CR test results are above 0.70, the minimum threshold recommended by Hair et al. (2022) for both CR and Cronbach's alpha. All constructs were retained, as CR and convergent validity were above the required thresholds (Hair et al., 2022). Therefore, Table 2 exhibits enough evidence of the convergent validity of the measurement instrument.
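For illustration, the AVE and CR statistics referenced above can be computed directly from standardized loadings; the loading values below are hypothetical and serve only to demonstrate the formulas, not to reproduce the study's results:

```python
def ave(loadings):
    """Average Variance Extracted: mean of the squared standardized loadings."""
    return sum(l * l for l in loadings) / len(loadings)

def composite_reliability(loadings):
    """Composite reliability from standardized loadings:
    (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings)
    error = sum(1 - l * l for l in loadings)
    return s * s / (s * s + error)

# Hypothetical standardized loadings for one reflective construct (four items)
loadings = [0.86, 0.90, 0.88, 0.92]
print(round(ave(loadings), 3))                    # should exceed the 0.50 criterion
print(round(composite_reliability(loadings), 3))  # should exceed the 0.70 criterion
```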
Notwithstanding the positive CR results for all the constructs, the formative construct validity deserves more attention, as the existing construct validity tests for reflective constructs are unsuitable for formative constructs, and given the scanty information systems literature on the examination of formative constructs (Petter et al., 2007). The convergent validity of formative constructs is not tested in the same way as that of reflective factors (Sun & Zhang, 2008). In addition, discriminant validity can be assessed using the indicator weights, variance inflation factors (VIFs) and associated p-values for the items (Elbaz, 2013, cited in Alamir Nasser Salim, 2017). Table A.2, included in the Annexure, presents the indicators' weights. All indicator effect sizes (f² ≥ 0.02) were greater than the smallest recommended value of 0.02, as recommended by Kock (2022). The standard errors (SE) were also provided for all indicators' weights. Therefore, all the indicators have sufficient discriminant validity.
After the determination of all constructs' convergent validity, discriminant validity was determined by applying four approaches (i.e., the Fornell-Larcker criterion, cross-loadings, the HTMT ratio and full collinearity assessment) as suggested by various exponents (i.e., Hair et al., 2022; Rasoolimanesh et al., 2017; Henseler et al., 2015; Kock & Lynn, 2012; Fornell & Larcker, 1981). The most applicable approach for this study is the full collinearity assessment, as the model involves both reflective and formative constructs (Rasoolimanesh et al., 2017); the cross-loading, Fornell-Larcker criterion and HTMT ratio approaches are deemed more appropriate for models involving only reflective constructs (Rasoolimanesh, 2022). Henseler et al. (2015) also acknowledged the poor performance of cross-loadings in their study; their use in formative measurement models therefore remains unsettled, and they called for further studies in this area. All four approaches were nevertheless adopted for this study given the importance of discriminant validity in model assessment using PLS-SEM: it guarantees the uniqueness of each construct represented in the model and provides a basis to remove biased results arising from shared meaning among different constructs (Voorhees et al., 2016, cited in Rasoolimanesh et al., 2017). Table A.1 (refer to the Annexure) and Tables 3 and 4 below present the cross-loading, Fornell-Larcker criterion, HTMT ratio and full collinearity assessment results respectively. Specifically, Table 3 shows that each construct's square root of AVE is larger than its correlations with the other constructs, as suggested by Tamjidyamcholo et al. (2013). Therefore, the criterion of discriminant validity was fulfilled, confirming each construct's uniqueness (divergent validity). In addition, discriminant validity was confirmed by computing the HTMT ratio, which Henseler et al. (2015) suggest is superior to the conventional cross-loadings and the Fornell-Larcker criterion, both of which can fail to detect discriminant validity issues. Table 4 presents the construct HTMT ratio test results and shows that all constructs' HTMT ratios were less than 1.00, as suggested by Henseler et al. (2015). Where an HTMT value exceeded the liberal threshold of .90 but remained below 1.00, the constructs were retained; it is acknowledged as a limitation that constructs with HTMT values above .90 may lack uniqueness. Given that the model incorporates both reflective and formative constructs, however, the most applicable approach for this study remains the full collinearity assessment (Rasoolimanesh et al., 2017).
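The Fornell-Larcker and HTMT checks described in the text can be sketched as follows; the miniature correlation matrices below are hypothetical and serve only to demonstrate the computations, not the study's data:

```python
import numpy as np

def fornell_larcker_ok(ave, construct_corr):
    """Each construct's sqrt(AVE) must exceed its correlations with every other construct."""
    root_ave = np.sqrt(np.asarray(ave))
    c = np.asarray(construct_corr)
    k = len(root_ave)
    return all(root_ave[i] > abs(c[i, j]) for i in range(k) for j in range(k) if i != j)

def htmt(item_corr, idx_a, idx_b):
    """Heterotrait-monotrait ratio: mean between-construct item correlation divided by
    the geometric mean of the mean within-construct item correlations."""
    r = np.asarray(item_corr)
    hetero = r[np.ix_(idx_a, idx_b)].mean()
    mono_a = r[np.ix_(idx_a, idx_a)][np.triu_indices(len(idx_a), k=1)].mean()
    mono_b = r[np.ix_(idx_b, idx_b)][np.triu_indices(len(idx_b), k=1)].mean()
    return hetero / np.sqrt(mono_a * mono_b)

# Hypothetical item correlation matrix: two constructs with two items each
item_corr = np.array([[1.0, 0.8, 0.3, 0.3],
                      [0.8, 1.0, 0.3, 0.3],
                      [0.3, 0.3, 1.0, 0.8],
                      [0.3, 0.3, 0.8, 1.0]])
print(htmt(item_corr, [0, 1], [2, 3]))                            # well below the .90 threshold
print(fornell_larcker_ok([0.75, 0.70], [[1.0, 0.5], [0.5, 1.0]]))  # criterion satisfied
```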
Table 5 shows that the full collinearity VIFs for all constructs are below the recommended threshold. As indicated, the full VIF criterion is met for all variables, which is evidence of no lateral collinearity (Kock, 2021) and synonymous with acceptable discriminant validity. In this study, the indicator effect sizes were computed as the absolute values of the individual contributions of the corresponding indicators to the R² coefficients of the associated latent variable (Kock, 2022). All indicator effect sizes (f² ≥ 0.02) were greater than the smallest recommended value of 0.02, where 0.15 is medium and 0.35 is large (Cohen, 1988; Kock, 2014; Kock, 2022).
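A full-collinearity VIF is obtained by regressing each latent variable's scores on all the others; a minimal sketch with simulated scores (illustrative only, not the study's data):

```python
import numpy as np

def full_collinearity_vifs(scores):
    """VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing column j
    of the latent-variable score matrix on all remaining columns."""
    n, k = scores.shape
    vifs = []
    for j in range(k):
        y = scores[:, j]
        X = np.column_stack([np.ones(n), np.delete(scores, j, axis=1)])  # intercept + others
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        r2 = 1.0 - resid.var() / y.var()
        vifs.append(1.0 / (1.0 - r2))
    return vifs

# Simulated, nearly independent latent scores -> VIFs close to the minimum of 1
rng = np.random.default_rng(0)
scores = rng.standard_normal((500, 4))
print([round(v, 2) for v in full_collinearity_vifs(scores)])
```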

Implications
This study makes the following contributions. The current study extended the scanty measurement investigations on effective use of EAS in South African TVET colleges. It leveraged the Eden et al. (2020) and change management measures, which had not previously been validated through individuals reflecting on their lived experiences, which are often affected by cultural components (Lenz et al., 2018). This study also offered a validation of the CHAMI in the EAS context and further contributes to the literature by addressing the scanty measurement theory in the domain of change management in the post-implementation phase (effective use). This paper emphasises the importance of basing measurement items on theory and provides a validated CHAMI that can be used in various contexts and feed into the development of targeted interventions.
The methodological contribution of the present study stems from the workflow for measurement model development and the way research constructs in a measurement model are operationalized and analyzed, as they incorporate measurement class (i.e., reflective or formative). This is a form of contribution to measurement theory (Hair et al., 2010).
Another methodological contribution is the combined use of three theories, namely the change management framework, UTAUT and EUT measurement scales, to guide the data collection and analysis. At the time of this study, the three theories' measurement scales had not been used in a complementary fashion to guide data collection in the IS body of knowledge, notably in the context of EAS effective use in South Africa's TVET colleges.
The empirical evidence (CHAMI) of the study will assist the management and IS practitioners to evaluate the effective use of information systems and develop more targeted interventions.
There is little in the way of a CHAMI to measure effective use of information systems in the higher education setting, in particular the South African TVET college setting, as it is unique and different from previous studies' settings. Therefore, a CHAMI to measure effective use of the Examination Administration System in South African TVET colleges makes a contextual contribution to the literature in that it extends the change management framework, UTAUT and EUT measurement theory into the EAS context, which is an important step in advancing a theory as suggested by Venkatesh et al. (2016).

Conclusions
The study offered a CHAMI for effective use within information systems; specifically, the validation occurred for the EAS of South African TVET Colleges. The CHAMI may inform IS use measurement theory and practice. This paper's sample was limited to the EAS user community in South Africa's TVET sector and as such is not fully representative of all electronic provision of public services in South Africa. Therefore, the results may not be fully generalizable, given that context and cultural settings play a significant role in the advancement of theories (Venkatesh et al., 2016). In future, researchers can use the CHAMI to measure effective use or test relations among the dimensions to develop a guide for encouraging users to effectively use information systems in different settings, so advancing its generalizability. It is recommended that future work also test the nature of the cause-effect relationships among the measurement constructs. A practical implication of this research is that the CHAMI can be used by practitioners and policy makers as an additional tool to collect information regarding change management and user communities' effective use of various information systems, which in a way contributes to the achievement of the United Nations 2030 Sustainable Development Goals by improving competitiveness, accountability and transparency in public administration. The CHAMI scales can be used by policy makers and executives as indicators of the degree of effective use in diverse contexts (e.g., EAS). Policy makers may espouse or adapt this instrument as an additional use measurement dimension in the United Nations digital transformation programme indices, as United Nations digital transformation programmes are scarcely measured through use (Dobrolyubova, 2021). The collection of more qualitative data in future would provide richer insights into the CHAMI, which was not possible in this study due to its limited scope and resources, such as time.
Earlier studies were also not conducted in a South African setting, nor on examination administration systems with operational use of examination administration. A multi-case study may produce varying results, and the three sub-dimensions' validated indicators based on the seminal work of Burton-Jones and Grange (2013) are still inadequate in the literature, specifically for effective use of EAS. Trieu et al. (2022) conducted an investigation to extend the EUT into the business intelligence context. The Trieu et al. (2022) study tested the EUT measurement scales in the business intelligence context but excluded sub-dimensions such as the adaptation and verification measurement scales. These measurement scales were excluded despite the Burton-Jones and Grange (2013; 2008) recommendation to incorporate these individual drivers of use, as they still play a key role in the success of IS implementation (AL-Mamary et al., 2014). Trieu et al. (

Abbreviations: N - Number; AVE - Average variance extracted; CR - Composite reliability; UT - User training; TMS - Top management support and activities; UI - User involvement and change recognition; USAT - User satisfaction; EC - Effective communication; IF - Information flow; PC - Planning EAS as a change; CSV - Change shared vision; PM - Performance measurement; TU - Technology use; EAU - EAS adaptive use; EV - EAS verification; UL - User learning; TI - Transparent interaction; RF - Representational fidelity; IA - Informed action; EU - Effective use.
This study has now addressed the research problem of limited measurement instruments for the effective use of the Examination Administration System in South African TVET colleges. The study has uncovered the change management measurement indicators that measure effective use of the Examination Administration System. The CHAMI consists of 11 constructs (i.e., user involvement and change recognition, user satisfaction, performance measurement, technology use, EAS adaptive use, EAS verification, user learning, transparent interaction, representational fidelity, informed action, and effective use of the Examination Administration System) and 37 items. The study introduces validated novel dimensions, such as the user involvement and change recognition, user satisfaction, performance measurement, technology use and EAS verification scales, to the effective use measurement theory of Burton-Jones and Grange (2013).

Table 2 :
Questionnaire reliability and validity test results

Table 3 :
Construct Fornell-Larcker test results. Diagonal elements (italicised) are the square roots of the AVE; below the diagonal are the correlations between the constructs.
Abbreviations: UT -User training; TMS -Top management support and activities; UI -User involvement and change recognition; USAT -User satisfaction; EC -Effective communication; IF -Information flow; PC -Planning EAS as a change; CSV -Change shared vision; PM -Performance measurement; TU -Technology use; EAU -EAS Adaptive use; EV -EAS verification; UL -User learning; TI -Transparent interaction; RF -Representational fidelity; IA -Informed action; EU -Effective use.

Table 5 :
Full collinearity Variance inflation factors for all variables