
CONCEPTUALISING USE FOR

INFORMATION SYSTEMS (IS) SUCCESS

TAN TER CHIAN FELIX

Thesis submitted for the degree of

Doctor of Philosophy

IT PROFESSIONAL SERVICES

FACULTY OF SCIENCE AND TECHNOLOGY

QUEENSLAND UNIVERSITY OF TECHNOLOGY

RESEARCH SUPERVISORS:

DR DARSHANA SEDERA

PROFESSOR GUY G. GABLE

2010


Statement of Original Authorship

The work contained in this thesis has not been previously submitted to meet requirements for an award at this or any other higher education institution. To the best of the researcher’s knowledge and belief, the thesis contains no material previously published or written by another person except where due reference is made. The ‘researcher’ in this thesis also means the author of this thesis.

Signature ____________________

Date __________________


Thesis Abstract

This thesis conceptualises Use for IS (Information Systems) success. While Use in this study describes the extent to which an IS is incorporated into the user’s processes or tasks, success of an IS is the measure of the degree to which the person using the system is better off. For IS success, the conceptualisation of Use offers new perspectives on describing and measuring Use. We test the philosophies of the conceptualisation using empirical evidence in an Enterprise Systems (ES) context. Results from the empirical analysis contribute insights to the existing body of knowledge on the role of Use and demonstrate Use as an important factor and measure of IS success.

System Use is a central theme in IS research. For instance, Use is regarded as an important dimension of IS success. Despite its recognition, the Use dimension of IS success reportedly suffers from an all too simplistic definition, misconception, poor specification of its complex nature, and an inadequacy of measurement approaches (Bokhari 2005; DeLone and McLean 2003; Zigurs 1993). Given the above, Burton-Jones and Straub (2006) urge scholars to revisit the concept of system Use, consider a stronger theoretical treatment, and submit the construct to further validation in its intended nomological net.

On those considerations, this study re-conceptualises Use for IS success. The new conceptualisation adopts a work-process system-centric lens and draws upon the characteristics of modern system types, key user groups and their information needs, and the incorporation of IS in work processes. With these characteristics, the definition of Use and how it may be measured is systematically established. Use is conceptualised as a second-order measurement construct determined by three sub-dimensions: the attitude of its users, the depth of Use, and the amount of Use. The construct is positioned in a modified IS success research model, in an attempt to demonstrate its central role in determining IS success in an ES setting.

A two-stage mixed-methods research design—incorporating a sequential explanatory strategy—was adopted to collect empirical data and to test the research model. The first empirical investigation involved an experiment and a survey of ES end users at a leading tertiary education institute in Australia. The second, a qualitative investigation, involved a series of interviews with real-world operational managers in large Indian private-sector companies to canvass their day-to-day experiences with ES. The research strategy adopted has a stronger quantitative leaning.

The survey analysis results demonstrate the aptness of Use as an antecedent and a consequence of IS success and, furthermore, as a mediator between the quality of IS and the impacts of IS on individuals. Qualitative data analysis, on the other hand, is used to derive a framework for classifying the diversity of ES Use behaviour. The qualitative results establish that workers Use IS in their context to orientate, negotiate, or innovate.

The implications are twofold. For research, this study contributes to cumulative IS success knowledge an approach for defining, contextualising, measuring, and validating Use. For practice, the research findings not only provide insights for educators incorporating ES in higher education, but also demonstrate how operational managers incorporate ES into their work practices. The findings leave the way open for future, larger-scale research into how industry practitioners interact with an ES to complete their work in varied organisational environments.

Keywords: Use, IS Success, IS-Impact, Enterprise Systems.


Table of Contents

CHAPTER 1: INTRODUCTION
1.1 The Research Objective
1.2 Research Background
1.2.1 Central Role of Use to IS Success
1.3 Research Gaps
1.4 Research Questions
1.5 The Research Strategy
1.6 Unit of Analysis
1.7 A Statement on Ethics
1.8 Contributions
1.9 The Thesis Structure

CHAPTER 2: LITERATURE REVIEW
2.1 Introduction
2.2 The Breadth of IS Literature Employing Use
2.3 Definitions of Use
2.3.1 The Multidimensional Nature of Use
2.3.2 The Multilevel Nature of Use
2.4 IS Success and Use
2.4.1 The IS Success Model (1992; 2003)
2.4.2 Differing Meanings of Use in the IS Success Model
2.4.3 The IS Nomological Net (2003) and Use
2.4.4 The IS-Impact Measurement Model (2008) and Use
2.5 Use as a Construct
2.5.1 Use as an Antecedent
2.5.2 Use as a Consequence
2.5.3 Use as a Mediator
2.5.4 Considerations for Formative and Reflective Constructs
2.6 Measurement of Use
2.6.1 An Analysis of Prior and Current Use Measures
2.6.2 Richness of Measures
2.7 A Summary of Considerations for Use in IS Success
2.7.1 A Work-Systems Definition of IS Use
2.7.2 System Considerations
2.7.3 Business and Work Process Considerations
2.7.4 User Considerations
2.7.5 Information Considerations
2.7.6 Adapting Work Systems Theory for Understanding Use
2.8 Summary

CHAPTER 3: THE RESEARCH MODEL
3.1 Introduction
3.2 The Modified (IS Success) Research Model
3.2.1 Positioning the Research Model
3.3 Operationalising Use
3.4 IS Typology
3.4.1 An Enterprise Systems Focus
3.4.2 Multiple Stakeholder Perspectives
3.5 Research Model Constructs and Measures
3.5.1 Use
3.5.2 Individual Impact
3.5.3 System Quality
3.5.4 Information Quality
3.6 Chapter Summary

CHAPTER 4: RESEARCH DESIGN
4.1 Introduction
4.2 Assumptions of Theory: Testing and Building
4.3 Quantitative and Qualitative Methods
4.3.1 Issues with Positivism
4.3.2 Data Collection Techniques
4.4 Characteristics of the Mixed-Method Research Design
4.4.1 Benefits of the Mixed-Methods Approach
4.5 The Experiment: An ES Hands-on Experience
4.5.1 The Setting
4.5.2 The Process-System Centric Approach
4.5.3 Quantitative Data Collection: Survey
4.5.4 The Survey Instrument
4.5.5 Completing and Returning the Surveys
4.5.6 Minimising Measurement Error
4.6 A Qualitative Perspective: ES Managers’ Experience
4.6.1 Qualitative Data Collection: Interviews
4.6.2 Interview Protocol
4.6.3 Interviewee Profiles
4.6.4 Conducting the Interviews
4.6.5 A Statement on Analytical Tools
4.6.6 Qualitative Validity
4.7 Summary

CHAPTER 5: SURVEY DATA ANALYSIS AND FINDINGS
5.1 Introduction
5.2 Demographics and Descriptive Statistics
5.3 Measurement Model
5.4 Structural Equation Models
5.4.1 Specifying the Use Nomological Net
5.4.2 PLS Structural Models
5.4.3 Testing for Potential Mediation
5.5 Additional Findings
5.5.1 The Value of Quantitative IS Use Measures
5.5.2 ES Use for Higher Education
5.6 Chapter Summary

CHAPTER 6: QUALITATIVE DATA ANALYSIS AND FINDINGS
6.1 Introduction
6.2 Preparing to Analyse
6.2.1 A Contextual Statement on ES Use
6.2.2 Coding the Data
6.2.3 Managers’ Backgrounds
6.3 Organising Patterns of IS Use into Levels
6.3.1 Levels of Use and Supporting Elements
6.3.2 Use at Orientation Level
6.3.3 Use at Routine Level
6.3.4 Use at Innovation Level
6.4 Discussion
6.4.1 Emergent Issues
6.5 Summary

CHAPTER 7: CONCLUSIONS AND OUTLOOK
7.1 Introduction
7.2 Theoretical Contributions to Explaining Use
7.2.1 Interaction with Core Elements of Use
7.2.2 Representations of Use
7.2.3 Levels and Types of Use
7.3 A Checklist to Study Use
7.3.1 Define Elements of Use
7.3.2 Contextualise Use
7.3.3 Operationalise Use
7.3.4 Validate Use
7.3.5 Integrate Results
7.4 Limitations and Future Research
7.5 Questions for Practice
7.6 Concluding Remarks
7.7 Chapter Summary

APPENDIX A: Archival Analysis of Use
APPENDIX B: The SAP Hands-on Exercise
APPENDIX C: Survey Instrument
APPENDIX D: Interview Instructions
APPENDIX E: Flowchart of Questions
APPENDIX F: Mapping Responses to Study Themes (1/13)
APPENDIX G: Publications and Contributions

REFERENCES


List of Figures

FIGURE 1-1: Key Phases in the Research Strategy
FIGURE 2-1: Paradigms of IS Research Employing Use
FIGURE 2-2: DeLone and McLean’s IS Success Model (1992)
FIGURE 2-3: DeLone and McLean (2003) Updated IS Success Model
FIGURE 2-4: The IS Nomological Net
FIGURE 2-5: The IS-Impact Measurement Model
FIGURE 2-6: Reflective and Formative Measurement Models
FIGURE 2-7: A Basic Work System of Use
FIGURE 2-8: An Example of Core and Value-Added Functions of the Procurement Process
FIGURE 3-1: Research Model: Reconciling the IS Success Models
FIGURE 3-2: Examples of Core Operational Business Processes
FIGURE 4-1: Epistemological Assumptions for Qualitative and Quantitative Research
FIGURE 4-2: Sequential Explanatory Design
FIGURE 4-3: Research Design
FIGURE 4-4: Key Activities and Deliverables for a Hands-on ES Exercise
FIGURE 4-5: Sample of Spreadsheet Exported from NVivo
FIGURE 5-1: Descriptive Statistics
FIGURE 5-2: Distribution, Central Tendency, and Dispersion of Amount of Use
FIGURE 5-3: The Nomological Model of IS Use
FIGURE 6-1: An Illustration of Levels of IS Use and Supporting Elements
FIGURE 6-2: Triangulation of Qualitative and Quantitative Findings


List of Tables

TABLE 2-1: Definitions of Use
TABLE 2-2: Considerations for Formative vs Reflective Nature of Use
TABLE 2-3: Use Dimensions and Measures
TABLE 2-4: Mapping Characteristics of Use Measures in IS Studies
TABLE 2-5: Richness of Measures
TABLE 3-1: Steps in Operationalising the Use Construct
TABLE 3-2: Types of Information Systems
TABLE 3-3: Employment Cohorts and Related Tasks
TABLE 3-4: Use Dimensions and Measures
TABLE 3-5: Individual Impact Measurement Items
TABLE 3-6: System Quality Measurement Items
TABLE 3-7: Information Quality Measurement Items
TABLE 4-1: Summary of Mixed Methods
TABLE 4-2: Interview Protocol
TABLE 4-3: Overview of Interviewees and Their Organisations
TABLE 4-4: Summary of Qualitative Validity Standards
TABLE 5-1: Sample Demographics
TABLE 5-2: Cronbach’s Alpha, Composite Scores, and Final Factor Loadings (T1 and T2)
TABLE 5-3: Inter-Construct Correlations and Average Variance Extracted
TABLE 5-4: PLS Structural Models
TABLE 5-5: Inner Weights Model
TABLE 5-6: Mediation Models
TABLE 5-7: Paired Sample T-Test of Quantity of ES Use
TABLE 6-1: Summary of Supporting Elements of ES Use
TABLE 6-2: Levels and Sub-Levels of (Managerial) IS Use and Examples
TABLE 6-3: Use Instances at Orientation Level
TABLE 6-4: Use Instances at the Routine Level
TABLE 6-5: Use Instances at Innovation Level
TABLE 6-6: How Managers Scored Their System
TABLE 7-1: Summary of IS Use Principles
TABLE 7-2: Contributions of the Thesis


Chapter 1: Introduction

1.1 The Research Objective

This thesis conceptualises Use for IS success. While Use describes the extent to which an IS is incorporated into the user’s business processes or tasks, success of an IS is the measure of the degree to which a person evaluating a system believes that the stakeholder (in whose interest the evaluation is being made) is better off (Seddon 1997). For IS scholars, the conceptualisation presents finer considerations and recommendations for measuring Use. Empirical evidence collected from an Enterprise Systems (ES) setting is used to test the philosophies of the conceptualisation. Results from the empirical data analysis seek to position Use as an important factor and measure of IS success.

1.2 Research Background

Use—its synonym system usage, or simply system use—features prominently in IS research. Accordingly, researchers have studied multiple aspects of Use. These include intention to Use (Venkatesh, Morris, Davis et al. 2003), Use continuance (Bhattacherjee 2001), and behaviour or post hoc usage evaluation such as ‘routinisation’, substantive Use, and exploitative usage (Burton-Jones and Straub 2006; Jasperson, Carter and Zmud 2005; Sundaram, Schwarz, Jones et al. 2007). Others invoke psychological notions such as appropriation moves (DeSanctis and Poole 1994), structuration (Giddens 1979), and enactment (Orlikowski and Iacono 2001) in describing IS user behaviour.

Furthermore, scholars adopt multiple lenses to shape and represent the concept of Use in different streams of IS research. For instance, system Use is often depicted as the dependent variable in the IS acceptance domain (Davis 1989; Zain 2005), specifically in Davis’s Technology Acceptance Model (TAM). System Use has also been investigated as a dependent variable in the IS for decision-making domain (Dickson, Senn and Chervany 1977). For this study, the interest lies in the role of Use in IS success.


1.2.1 Central Role of Use to IS Success

In IS success, Use is primarily considered as a dimension—as described in the DeLone and McLean IS success model (DeLone and McLean 1992; DeLone and McLean 2003). Building on the foundations of the widely adopted IS success model, other scholars have depicted Use in a similar light in later work. For example, in the IS nomological net (see Section 2.4.3) of Benbasat and Zmud (2003), Use is portrayed as the mediating variable between work and system capabilities and net benefits of IS; in the IS-impact measurement model of Gable, Sedera and Chan (2008), Use is depicted as both an antecedent and a consequence (of the net benefits that flow from IS).

Still today, scholars regard Use as one of the most extensively employed dimensions for evaluating IS success. Use is a central theme in organisational IS success research for a number of reasons. From an IS investment perspective, organisational users use IS to conduct an array of operational, technical, and strategic tasks that support core business processes and functions. For this reason, organisations making investments in costly and complex IS such as Enterprise Resource Planning (ERP) or ES are under constant and increasing pressure to justify their value (Gable et al. 2003). Consequently, IS adoption, its uses, and its success have remained important streams of IS research for several decades, as portrayed in the works of Ives and Olson (1984), DeLone and McLean (1992; 2003), Ballantine et al. (1996), Seddon et al. (1999), and Sabherwal et al. (2006), among others. In these works, scholars report that the effects of IS are often less a function of the systems themselves than of how they are used; hence, systems cannot improve performance if not used properly (Avison and Fitzgerald 2003; Davis, Bagozzi and Warshaw 1989; DeLone and McLean 2003; Petter, DeLone and McLean 2008; Szajna 1993).

1.3 Research Gaps

Despite its central role in IS, the concept of Use in its current form is believed to

be inadequate for IS success, and potentially for other streams. The four gaps in

the concept of Use for IS success are summarised below.


First, prior definitions of Use have tended to be ‘simplistic’ (DeLone and McLean 1992, p. 16; Bokhari 2005), adopting terminology and assumptions without first making theoretical or contextual references, thereby causing a misconception of its complex nature (Lee 2000; Schwarz and Chin 2007). Without adequate theoretical treatment to explain the nuances of the Use phenomenon, a more complete measurement approach will continue to elude scholars and will generate mixed results and inconclusive findings in studies employing Use (Burton-Jones and Straub 2006).

Second, Use suffers from an often ‘techno-centric’ (Lee 2000) focus. Research emphasising solely the capabilities of systems that users draw on to describe Use is myopic, and a misrepresentation, given the current state of IS. Section 2.3 summarises the definitions of Use adopted by scholars. At the outset, the types of contemporary IS and the capabilities of information technology continue to expand, and they present to the users who invest time and effort in them potential value in their actual Use. The value of these technologies today is as much a matter of the design of the business processes, the interpretations of pertinent business information, and the organisational structures in which they are used, as of the cognitive qualities of their users. Therefore, Lee (2000) suggests a more integrated technology, business process management, organisational, and social focus.

Third, there are conceptual differences in how scholars represent Use in IS success research models. The notion of Use sometimes carries diverse meanings, even in the same model (see Section 2.4.2). For example, Seddon (1997) criticises the IS success model (DeLone and McLean 1992) for combining three models (two variance and one process), causing confusion and conflicted meanings for Use. Furthermore, there are conceptual arguments for Use to be an antecedent, a consequence, a mediator, and a dimension. Although there is no crisis with multiple representations of Use, researchers have yet to test or argue the extent of these representations in the light of IS success models.

Finally, Use suffers from inadequate measurement approaches (DeLone and McLean 2003; Zigurs 1993). One reason is that scholars have repeatedly chosen to adopt or recycle purely objective and (or) quantifiable system Use measures (see Section 2.6.1 for illustrations). Though these studies


using purely objective assessment of system Use provided some insights into IS success, such evaluations are less adequate for mandatory and less volitional IS (Seddon 1997; Schwarz and Chin 2007).

Given the above research gaps in the conceptualisation of Use, the recent work

of Andrew Burton-Jones (Burton-Jones and Gallivan 2007; Burton-Jones and

Straub 2006) suggests that very little has been done to address these concerns.

Burton-Jones and Straub (2006) explain that generally, the concept of system

Use in our discipline fails to receive strong theoretical treatment, is lacking in

understanding of context prior to the selection of measures, and suffers from

poor to no validation. They highlight that these inadequacies and short-sighted

conceptualisation of Use reduce the value of the overall assessment for today’s

complex and multifaceted systems.

1.4 Research Questions

Given the above research gaps, we develop a set of (three) research questions to

guide the research activities closely. Answers to the research questions will

inherently inform a new conceptualisation of Use for IS success, and the attempt

to address the research gaps. The new conceptualisation seeks to define,

operationalise, and measure Use in the domain of IS success. The new

conceptualisation underpins an approach for studying Use, which must address

three critical aspects: (1) the terminology of Use, and from this (2) an approach

for developing and selecting measures of Use, and finally (3) its

operationalisation and validation.

The approach of Chan (1998) is followed to develop the salient research

questions. This approach is to: (1) express the research topic as a title and

highlight the ensuing issues; (2) develop preliminary questions about this title,

starting from the known; (3) interrogate each of the preliminary questions (again

starting from what is already known); and (4) when there is a “short list” of

unanswered questions, each is tested for feasibility as a research question. The

questions not answered readily are drafted, leading eventually to three broad

research questions. In other words, answering these questions requires the

researcher to make a rigorous attempt to understand the theoretical


underpinnings to guide the development of an appropriate measurement

approach, and to study the effects of the variables of Use.

The short-listed questions are categorised and ranked in terms of their

importance to the study objectives. These questions and the strategies adopted

to answer them are described below.

Research Question 1: How can one define Use for IS success?

This research question seeks to define the meaning of Use for IS success. The

attempt to answer this question comprises two finer investigative aspects. First,

theoretical references to describe Use are sought; these must look beyond

technology, accommodate all elements of Use, and be consistent with other prior

definitions of Use in an IS success context. Second, the concept of Use captured

should explain the nature of interactions between the defining elements. In other

words, this question focuses on how, where, and whether Use plays a part in a series of

events in a process to determine IS success.

The first aspect of this question seeks to describe Use. Theoretical references (the

theory of work systems, Alter 2006) are applied to emphasise the

multidimensional (see Section 2.3.1) nature of Use within an organisational

context. In addition, we seek a definition of Use that accounts for the evolution of contemporary IS, one that, as aptly pointed out by Lee (2000) and McAfee (2006) among others, includes consideration of elements beyond just the physical systems. Often, the nature of this form of research answers the ‘how’

questions and attempts to draw logical relationships between different aspects of

Use from observable behaviour or experiences gathered. This stream of work is

commonly referred to as process research. The bulk of studies in this stream

generally comprise a process model that attempts to explain the occurrence of

an outcome—Use in this case—by identifying the sequence of events preceding it

(Tsohou et al. 2008).

In addition, the question explores levels and types of Use. This study examines

the effects of Use on ex post rather than ex ante IS implementation. Although

understanding user activities affecting systems implementation is useful, forging

patterns (see Section 2.3.2) from ex post accounts of IS implementation Use

activities (rather than ex ante) is valuable in understanding issues pertaining to


assessing their (system) impacts, and anticipating and managing the processes

of change associated with them.

Patterns forged from detailed user accounts of experiences contribute to the

understanding of factors that would affect (Use) measurement. It is the intention

to utilise the findings to explain likely differentiating scores for Use in the

perspectives of multiple stakeholders. The rationale here is that describing the patterns of events that lead to a significant impact of IS as perceived by its users (for example, learning and innovation in process following direct and indirect Use of ES) could mean little without identifying the factors that cause a specific pattern to emerge. In addition, the processes investigated offer insights into how one might understand the variance results captured. It is still premature to claim the lifecycle is accurate or to suggest that one size fits all. However, this stream of work adds to

(but is different to) previously established applications of extended theories, such

as activity theory (as in Sun and Zhang 2005), expectation–disconfirmation theory (as in Bhattacherjee and Premkumar 2004), and structuration theory (as in DeSanctis and Poole 1994) to describe the observable behaviours and user

accounts of Use.

Research Question 2: What are the salient dimensions and measures of

Use for IS success?

The second research question focuses on defining the dimensions and measures

of Use. The motivation is that despite efforts to measure Use, attempts to do so

have received wide criticism. The question focuses on two aspects. First, we

examine prior measures of Use to determine their necessity and sufficiency in

the study context. Subsequently, new dimensions are introduced if necessary.

The second aspect focuses on how to seek validity and reliability of dimensions

and measures introduced. Further tests attempt to validate the relationship

between the dimensions and measures. It is suggested that this relationship can

either be formative or reflective (Diamantopoulos and Winklhofer 2001; Jarvis,

MacKenzie and Podsakoff 2003), thereby rendering the additional tests required

for the research model.


The second aspect of the question concerns attempts to measure Use. In this

stream of work, Use is often perceived as a latent construct that is commonly

evaluated via proxy measures. The bulk of work answers the ‘what’ and ‘how

much’ questions and attempts to explain a set of variables that make up and (or)

reflect Use. This stream of research is often based on variance approaches, in which variance models are typically introduced. In

contrast to a process model, variance models explain the variability of a

dependent variable based on its correlation with one or more independent

variables (Tsohou et al. 2008). In other words, variance theory explains the

variation in a dependent variable as a result of the variation in an independent

variable (or variables) (Mohr 1982). Many studies operationalise IS Use as an

aggregate construct, comprising various dimensions, often borrowed from a set

of synergistic concepts and (or) prior theories.
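To make the variance logic above concrete, the sketch below fits a toy variance model in which an aggregate Use construct—the mean of several hypothetical dimension scores (frequency, depth, attitude)—explains variation in a simulated impact score. All variable names and data are illustrative assumptions, not drawn from the thesis instruments.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical dimension scores for an aggregate Use construct
# (e.g. frequency, depth, attitude on 7-point scales).
dimensions = rng.uniform(1, 7, size=(n, 3))
use = dimensions.mean(axis=1)  # aggregate Use: mean of its dimensions

# Simulated dependent variable: impact rises with Use, plus random noise.
impact = 0.8 * use + rng.normal(0.0, 0.5, size=n)

# Variance model: regress impact on Use; R^2 is the variation explained.
X = np.column_stack([np.ones(n), use])
beta, *_ = np.linalg.lstsq(X, impact, rcond=None)
r2 = 1 - np.var(impact - X @ beta) / np.var(impact)
print(f"slope = {beta[1]:.2f}, R^2 = {r2:.2f}")
```

The R² reported is exactly what a variance theory asks of the model: the proportion of variation in the dependent variable attributable to variation in the independent variable.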

The crucial aspect of the attempt to answer this question is to define a phased

approach to: (1) specify the context and assumptions of each definition of Use

and from there (2) select appropriate measures for Use. This approach is in

a similar vein to earlier work by Burton-Jones and Straub (2006). The approach

should therefore help researchers derive a set of rich Use measures that are

context and theory driven, more complete, mutually exclusive, and

parsimonious.

Research Question 3: What is the role of Use in IS success?

The final research question relates to a better understanding of Use for IS success.

One aspect of this question focuses on testing the new conceptualisation of Use

against the backdrop of established IS success frameworks. Specifically, a

nomological net of Use is defined to scope (see questions in Section 2.4.3) the

research, and better position the contributions for IS success. Benbasat and

Zmud (2003) recommend that a measure of an IS phenomenon must be

validated within its immediate IS nomological net. The nomological net also

reveals the relationships between the construct in question and other constructs

that a researcher should seek to test or validate.

Based on the premise above, statistical analyses (such as incremental contribution to R² and correlation analysis) and qualitative data test the sufficiency and necessity (or not) of the Use construct in IS success models. The

plan is to examine Use as an antecedent, consequence, and as a mediator

construct. The investigation into the causal nature (rather than a dimension) of

Use here has implications beyond the IS success stream. This objective seeks to

extend, challenge, validate, and make thoughtful refinements to the IS success

models such as IS-Impact, to improve the models’ robustness and completeness.
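As an illustration of the incremental-contribution logic mentioned above, the hedged sketch below compares R² for a baseline model (impact regressed on a quality score alone) against a model that also includes a Use score; the difference, ΔR², indicates how much extra variance Use explains. Variable names and the simulated data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# Hypothetical standardised scores: system quality, Use, and a simulated
# impact in which Use contributes over and above quality.
quality = rng.normal(size=n)
use = 0.5 * quality + rng.normal(size=n)  # Use correlates with quality
impact = 0.6 * quality + 0.4 * use + rng.normal(size=n)

def r_squared(X, y):
    """R^2 of an OLS fit of y on X (intercept added automatically)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return 1 - np.var(y - X1 @ beta) / np.var(y)

r2_base = r_squared(quality, impact)                          # quality only
r2_full = r_squared(np.column_stack([quality, use]), impact)  # quality + Use
delta_r2 = r2_full - r2_base
print(f"baseline R^2 = {r2_base:.3f}, full R^2 = {r2_full:.3f}, "
      f"delta R^2 = {delta_r2:.3f}")
```

A non-trivial ΔR² is the kind of quantitative evidence that supports treating Use as a necessary construct in an IS success model.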

In order to understand the role of Use in IS success better, further interpretation

of the model-testing results is required. For this, the two streams of research (variance and process) are regarded as not being mutually exclusive. Tsohou et al.

(2008, p. 277) explicate that to answer a “What” question regarding the

phenomenon under study, one typically assumes or hypothesises an answer to

the “How” question. They further explain that whether implicitly or explicitly, a

variance-based study generally follows an underlying logic that answers a process-related question about how a sequence of events unfolds to cause an

independent variable to influence a dependent variable. With this premise,

answering the question requires moving between variance- and process-type work to draw more in-depth conclusions about relationships that are not only fixed or affected by random forces, but also less predictable. Chapter 4

summarises further details of incorporating variance and process views in this

research.

1.5 The Research Strategy

The research strategy defines key phases of the study. The strategy incorporates

a quantitative and qualitative mixed-method research approach. The quantitative

approach is top-down, focusing on construct definitions and possible

relationships between Use and other relevant concepts in the IS success context,

leading to the validation and checking of the research models. On the other

hand, the qualitative approach phase is more bottom-up, focusing on building a

framework from empirical data, for understanding and classifying the

occurrences in a Use phenomenon. Figure 1-1 illustrates the key phases

(literature review, developing research models, surveys, interviews, and

triangulation) and related outcomes (Use definitions and concepts, model

analysis and results) of the research strategy. In the figure, the rectangles


describe the key steps and outcomes. The arrows in the figure do not show causality but simply indicate relationships.

Literature Review: definitions of Use; role of Use in IS success; issues of the Use construct and measures; work-systems theoretical lens; Use in the ES context.

Developing Research Models: consolidation and reconciliation with key IS success studies; modified IS success research model; five-step operationalisation approach.

Quantitative Investigation: dual survey method; ES Use in a tertiary institute context; statistical validation and checking; structural models analysis.

Qualitative Investigation: practitioner interviews; managerial perspective on ES; micro-analysis of interpretations with literature; emergent patterns of ES Use.

Triangulation and Analysis: triangulation of findings; principles and guidelines for studying IS Use; conclusions for IS success.

Figure 1-1: Key Phases in the Research Strategy

The literature review provides the platform for defining the context of the

research strategy. Specifically, the literature review reports on the definitions,

inter-relationships with other IS concepts, shortcomings, challenges, and issues

associated with system Use. Through the literature review, the focus of analysis

on the studies in the IS success stream is narrowed to define the scope and

boundaries of the study clearly. Topics include the different perspectives that

system Use takes in key studies and the measures adopted in its

operationalisation. The shortcomings of prior conceptualisations of system Use

form the basis with which to seek an approach that consolidates key

perspectives in the phenomenon of system Use.

The terminology and nature of Use for IS success, as derived from the literature, form a basis for comparisons between literature findings, model hypotheses,

and understanding our empirical findings. Subsequently, an a priori research

model, built on the principles of the IS-Impact measurement model, IS-Net, and

IS success model, is derived. In addition, the Use construct in the model is

operationalised in a two-phase approach. This approach considers the

contextual definitions of Use, and expresses how scholars select suitable

dimensions and measures.

A two-staged mixed-method empirical data collection approach is used to test

and explore the research model.

The first empirical investigation into the role of Use in determining success of IS

adopts a dual-survey methodology in a laboratory setting. The objective of the

investigation is to derive empirical evidence to test the a priori model, the


measures of Use, and the relationships between the key constructs in the

models. Here, the customised survey canvasses the perspectives of a participant

group of users—drawn from a tertiary institute in Australia—on their experience

with using an advanced IS in achieving their tasks. Descriptive and comparative

statistics from the surveys are subjected to further statistical validation.

Findings are expected to provide quantitative evidence that supports the

inclusion of Use as a critical consideration for IS success.

The second empirical investigation explores patterns of Use. Through the years,

studies (including Burton-Jones and Gallivan 2007; DeSanctis and Poole 1994) have reported on the multitude of ways advanced IS can be adopted and

used. This suggests that patterns of Use exist in the phenomena. Subsequently,

a search was conducted for participants matching the requirements for a project

appropriate to highlighting patterns of Use. To achieve this objective, subjects

are screened against principles of the conceptualisation of Use, including their

roles, systems, and work processes. This further sharpens the scope of the

investigation. Once contacted and screened, interviews with advanced ES

practitioners were conducted. The purpose of the interviews is to canvass

qualitative evidence to support a pattern of Use, through perspectives of ES

users working in the natural setting of their experiences with advanced IT

systems over time. Patterns of Use derived would further improve the

explanatory power of the quantitative results. Drawing from Yin’s (1994; 2003) steps for explanation building and DeSanctis and Poole’s (1994) micro-

analysis strategies, the importance of treating all elements of the new

conceptualisation of Use simultaneously at each Use phase is demonstrated.

Principles of the conceptualisation approach triangulate the empirical results

from both study methods and data. The triangulation (Gable 1996) of methods

(surveys and interviews) and data (quantitative and qualitative) is useful to better support the observations and anticipated responses from the research instruments. Furthermore, insights and issues from the methods will

support the aptness of the conceptualisation approach. In relation to the earlier

findings from prior research, the study makes some useful suggestions on how

to study Use better, manage Use, and identify the issues for further examination.

This translates subsequently to a theory-building phase, where the triangulation

of data will inform the literature and provide useful insights on how Use can be


better measured and understood in an IS success model. Chapter 4 revisits the

objectives, procedures, and activities of each of the above key stages.

1.6 Unit of Analysis

Pinsonneault and Kraemer (1993) classified six types of unit of analysis as: (1)

individual, (2) work group, (3) department, (4) organisation, (5) application, and

(6) project. Using their categorisation, and based on the nature of the study—to

reconceptualise Use—the unit of analysis defined in this research is the

Enterprise Systems User(s). To draw conclusions on the unit of analysis,

observable data were collected at the individual level, from ES operational

managers and student/learning users. Growing business needs combined with

the relentless emergence of new technologies over the last three decades have

triggered many organisations to switch from more conventional (Drori 1999)

systems (such as text retrieval systems and management information systems)

to highly integrated contemporary systems that span the entire organisation, are

more scalable, and able to handle multiple processes in real time. However, the

underlying complexities and challenges these systems impose on their user(s),

and the importance for researchers to account for the role of the users in

performance management, are widely publicised in recent practitioner and

academic reports (including Chien and Tsaur 2007; Hakkinen and Hilmola 2008;

Hendricks, Singhal and Stratman 2007; Liang, Saraf, Qing et al. 2007). Recent

literature (including Burton-Jones and Gallivan 2007; Wu and Wang 2007; Wu

and Wang 2006a) suggests that more can be done by scholars to understand the

role of the user and the nature of complex system use better. Therefore, the

thesis focuses squarely on examining how users interact with ES for their work.

1.7 A Statement on Ethics

This study is undertaken in accordance with the National Statement on Ethical

Conduct in Human Research (Australian Government 2007) for low-risk

research. This study involves (1) ES course participants from an Australian

institute of higher learning, and from (2) six other external study sites (ERP

adopting Indian companies). On this basis, an application for human ethics level

1 clearance was made prior to the commencement of the project (data collection).


In the application, we clarified: (1) the relationship between the investigator and course participants—researchers, course participants, and practitioners—and (2) that anonymity of the course participants is guaranteed (names will not be published).

The university’s research ethics committee reviewed the application and the

ethical clearance certificates1 (ID numbers 0700000644 and 0800000450); they

approved the internal and external data collection on 30 July 2007 and 7

July 2008 respectively. The approval certificate contains the project details,

participant details, and (specific and standard) conditions of approval. Further

permissions from all participating organisations to access course participants

and staff were obtained prior to data collection. Ethics progress and status

reports based on the above applications are submitted annually.

1.8 Contributions

This study contributes to the IS success body of knowledge. The subject matter

discussed in this research carries significance for cumulative knowledge for the

dimension of Use (1) in the IS success research stream, (2) the proxy IS-Impact

research stream, and (3) for organisations at large wanting to evaluate IS

success. Summarised below are the contributions2.

Starting with the significance for the IS success research stream, the contributions of the thesis are:

1. Subject the construct of Use to theoretical treatment in the domain of IS success. This study proposes a conceptualisation of Use in the domain of

IS success, adopting a number of theoretical lenses. The

conceptualisation accounts for elements of Use prompted by a new

generation of systems ignored previously. The relationships between

elements in the terminology of Use, together with the nature and

representation of Use, are consistent with the underlying IS success

theory and epistemology adopted in this study. This consistency in the

explanation of causality further raises the validity of the suggested

relationships.

1 The research ethics clearance certificate is available on request.

2 Selected publications by the researcher and their contributions to knowledge are summarised in Appendix G.


2. Present evidence for the reconciliation of Use in IS success models.

Ultimately, the study attempts to overcome the confusion in previous

work (by authors such as DeLone and McLean 1992, and Seddon 1997)

over whether Use is more appropriate as a dimension, behaviour, or a

measure. The extent of roles of Use in the IS success model, the IS

nomological net, and the IS-Impact measurement model are therefore

investigated to develop the above.

Significance of research for the (candidate’s) research group describes the

contributions of the study towards current and future activities of the IS-Impact

research stream:

3. Provide evidence for the relevance of Use for the IS-Impact measurement

model. Although Use is dropped as a dimension, it is believed to be an antecedent (and consequence) of IS-Impact (Gable et al. 2008, p. 388). This study provides data to support this theory—that IS quality affects Use in one iteration, which in turn influences the impact of the IS.

4. Extend a systematic and contextual approach towards construct specification and validation. IS-Impact is a formative index, although previous work

validated its dimensions as reflective. In this study, the constructs are

submitted to both formative and reflective checks to determine their

inherent nature.

Significance of research for practice describes the contributions of the study

towards finding how organisations can better evaluate their IS:

5. Measure Use with both quantitative and qualitative indicators. Besides

capturing the objective extent of Use (through duration and frequency),

the survey instrument includes measures of Use that capture, for one

thing, the depth and general attitude towards Use. Quantitative (for

example time spent) and qualitative (for example exploratory uses and

attitude of Use) measures provide a more holistic measurement of Use.

6. Focus resources and attention on human aspects that the organisation

needs. It is well established that using a system appropriately can lead to

better performance. The ability to track, monitor, and understand Use can


further aid organisations in directing scarce resources towards the parts

of the business that need them.

1.9 The Thesis Structure

The thesis is organised in three key parts. Part I covers the underpinning issues

of system Use from the literature; Part II covers the proposed conceptualisation

and operationalisation of Use; and Part III covers the empirical data analysis and

the interpretation of its results. In this section, we introduce each of these three

parts including the chapters, their content, and their relationship to the overall

research strategy.

Part I consists of Chapter 1 (Introduction) and Chapter 2

Chapter 2: Literature Review—broadly, this chapter presents an account of the

state of current IS and its Use, reflects on prior conceptualisations of system Use,

and establishes the theoretical underpinnings that the study seeks. The

literature review begins with an examination of the definitions of system Use,

and its representations in various domains of IS research. This section discusses

key study terms such as nomological net and IS success, and other research

streams adopting Use. In the light of the changing landscape of IS, the review reveals that the current conceptualisation is inadequate and poses a number of challenges, including the lack of an appropriate measurement approach.

Subsequently, we introduce the theoretical background adopted for this research―Alter’s (2006) work-systems theory, which aptly captures the effects of

multiple elements during Use of a system. In summary, the review of the

literature points to a definitive approach and proposition for this study: to study

Use, one must define, contextualise, operationalise, and decide how to validate it.

Part II consists of Chapter 3 and Chapter 4

Chapter 3: Research Model—this chapter presents the research model. The

model positions the new conceptualisation of Use with other IS success

dimensions. The research model developed illustrates the effects of

contemporary IS (ES chosen in this case) Use on the impacts of IS over time.

Development of the research model constitutes identifying the constructs,

contextualising the model, operationalising the constructs, and deriving the

hypotheses. Operationalising the Use construct is shaped using considerations


of work systems (Alter 2006), types of information systems (McAfee 2006), and

employment cohorts (Gable, Sedera and Chan 2003). The research model spans

two parts: the conceptual and thereafter the a priori model. While the conceptual

model identifies and contextualises the key concepts in the research model, the

a priori model operationalises the constructs, and introduces the key constructs

and measures.

Chapter 4: Research Design—this chapter presents the methods adopted in this

research. This includes its epistemology, characteristics, and merits for the

study. As mentioned earlier, we adopt a mixed-methods research design,

following a sequential explanatory strategy. The mixed-method approach

consists of two distinct yet related phases: a model development and testing

phase and a theory-building phase. For each phase, we discuss the

implementation of data collection, the priority given to certain methods, the

stance of the study, the driving theory, and the overall relevance to research

questions.

Part III consists of Chapter 5, Chapter 6, and Chapter 7

Chapter 5: Survey Analysis and Findings—first, this chapter reports on the

descriptive and comparative statistics gathered from analysing the survey data,

and preliminary inferences drawn from it. Second, the chapter addresses

statistical conclusion validity of the empirical research survey data. This section

consolidates the inferential statistical analyses conducted to test the research

hypotheses, and to extend the instrument and research model validity. For

instrument validation, tests for construct (convergent and discriminant) validity

are discussed. Construct and item reliability is reported next. Hypothesis (and

rival hypotheses) testing conducted in alignment with the research models is

reported.

Chapter 6: Qualitative Data Analysis and Findings—this chapter presents

findings from the set of managers’ interviews conducted for explanatory

purposes. This chapter describes the formation of the concept of levels in Use.

Patterns, trends, and insights from this interpretation (of qualitative data)

provide meaning to previously hypothesised measurements of Use (quantitative

data). Classifying and analysing the spectrum of contemporary Use behaviour


allows us to build a process of strategic Use and thereby rationalise how users

would eventually score Use.

Chapter 7: Research Implications and Future Research—this chapter summarises

the key research implications, the overall contributions of this thesis, and

explores the potential for future research. The principles of Use and the

researchers’ checklist to study it largely reflect the research implications. While

the principles of Use—represented by three key conclusions developed from the

study findings—add to existing knowledge of the phenomenon, the checklist

defines a series of steps and considerations in designing a study on Use. These

implications correspond to the study objectives and are compared with prior

literature and alternative views.


Chapter 2: Literature Review

2.1 Introduction

The literature review seeks to consolidate, describe, evaluate, and integrate

content from key studies to develop an understanding of primary issues

surrounding Use in IS. Extending the above notion, the literature review attempts

to: (1) provide a retrospective examination of the literature and identify ‘gaps’

and salient issues relating to the conceptualisation and operationalisation of Use;

(2) provide the study context to position the study relative to other work in the

area; (3) summarise the set of pertinent concepts and elements in IS that best

shape and describe the phenomenon; (4) aid in model and hypotheses building;

and (5) serve as a plausible source to explain (and compare) results observed in

ensuing empirical data collection.

The literature review is organised into six key topic areas: (1) breadth of IS

literature employing Use, (2) the definitions of Use, (3) Use in the IS success

stream, (4) Use as a construct, (5) operationalising and measuring Use, and (6)

new considerations for Use in an IS success context. First, it is found that Use

has featured prominently in the IS discipline, in streams and domains such as

IS success (DeLone and McLean 1992), IT adoption (Davis et al. 1989), and IS

performance (Jain and Kanungo 2005) for example. We examine these streams.

Second, the point of Zigurs (1993), DeLone and McLean (2003) and Petter et al.

(2008), that the definition of Use, when employed, is too simplistic, is examined.

Third, this chapter specifies the domain where the objectives of this research are

most relevant―IS success. The chapter discusses the understanding of Use in

theoretical frameworks and models which build on the foundations of IS success,

more specifically the IS nomological net (Benbasat and Zmud 2003), the IS

success model (DeLone and McLean 1992), and the IS-Impact measurement

model (Gable et al. 2008). This sets the tone for the rest of the discussion in this

chapter. Fourth, the chapter examines the construct of Use. On this premise,

the popular representations of Use―as an antecedent, a consequence, an event in

a process, and a mediator―are examined. A deeper understanding of each of

these representations from Burton-Jones and Straub (2006) and other noted research that has adopted them is sought. Next, the chapter discusses the


operationalisation and the measures of Use. A detailed analysis of over 80

studies featuring Use illustrates that despite its importance, the measures

remain inadequate for a rich assessment of Use. In addition, the

multidimensional and dynamic nature of Use is discussed by adopting and

extending characteristics of work systems theory. The chapter concludes with a

summary of the considerations that will shape a new conceptualisation of Use.

To address the gaps in the conceptualisation of Use, the study introduces an operational definition of Use that draws on work systems theory (Alter 2006) to tie together the key elements of Use. These considerations help to define, contextualise, and operationalise Use for IS success. They further inform the research model and the way scholars may apply it, and they form the principles of the re-conceptualisation of Use in this study.

2.2 The Breadth of IS Literature Employing Use

The first step to building an understanding of Use for archival analysis and to

find inter-relationships and patterns for this study is consolidating a pool of IS

studies published in the last three decades. We conduct a broad literature

search for the above purpose using the following keywords: ‘Use’, ‘System Use’,

‘Utilisation’, ‘Usage’, and ‘System Usage’. These keywords are synonyms in most

of the literature identified. Many of the relevant Use and IS success articles also

span the leading3 journals and conferences of the IS discipline, as highlighted by

the Association for Information Systems. The selection of studies is methodically

narrowed down to include those that explicitly examine Use as a construct (or a

surrogate for another construct, for example behavioural intention of Use), or

that employ Use as a variable in studying a larger phenomenon. A panel

comprising two novice researchers and a senior researcher vets this selection of

appropriate literature. Preliminary analysis including the definitions,

representation, and nature of Use studied is gathered from this pool of IS

studies. Figure 2-1 below illustrates a broad cross section of (five) IS domains and streams that have employed Use. It is noteworthy that the five streams of IS research employing Use, and the references cited, are not exhaustive.

3 Seven out of the senior scholars' basket of six or eight journals (http://home.aisnet.org/displaycommon.cfm?an=1&subarticlenbr=346) are represented and incorporated into the literature review and synthesis. Specifically, relevant Use articles are drawn from MISQ and ISR, relevant IS success studies from JAIS and JMIS, and relevant Enterprise Systems studies from EJIS and JIT. The researcher also consulted the top-ranked AIS conferences such as ICIS, ECIS and PACIS.

The first observation is that scholars portray Use differently in different streams

of IS research (Goodhue 1992) which include (see Figure 2-1) (a) use of data from

IS to perform processing functions (Panel 1); (b) an indication of success of the

implementation process (Panel 2); (c) actual technology Use that is determined by

perceived usefulness and ease of Use, and intentions to use (Panel 3); (d) a

dimension of success as a result of perceived system quality and information

quality, that in turn affects individual and organisational impact (Panel 4); or (e)

a predictor of work and IS performance (Panel 5).

The portrayal of Use suggests that scholars often have different intended

meanings of Use, and adopt different theories and epistemologies. It is noted

that these other streams of IS research carry differing and in some cases

conflicting meanings of Use. Given the above, correctly specifying the conceptual

representation of Use in a theoretical model adds to its definition. Therefore, it is

crucial that researchers employing Use first appropriately define the type of Use

and the study context for which that Use is defined. Although not all the

representations are fully investigated in this study, it is believed that a meaning

of Use, if appropriately defined, can be employed in multiple domains.

No. | IS Paradigm* | Key Elements | References

1 | IS for Decision Making | Data Selection; Data from IS; Human Information Processing; Use | Examples in Barkin and Dickson (1977), Szajna (1993) and Yuthas and Young (1996)

2 | IS Implementation | Implementation Process; Implementation Success (Use) | Examples in Lucas (1976), Ginzberg (1981), and Hartwick and Barki (1994)

3 | IS Acceptance | Intention to Use; Usefulness and Ease of Use; Use | Examples in Davis (1989; 1993), Segars and Grover (1993), Gefen et al. (2003) and Venkatesh et al. (2003)

4 | IS Success | Use; System and Information Quality; Organisational and Individual Impact | Examples in DeLone and McLean (1992), Goodhue (1995) and Benbasat and Zmud (2003)

5 | IS Performance | Use; Performance | Examples in Burton-Jones and Straub (2006), Rice (1994), Jain and Kanungo (2005) and Igbaria and Tan (1997)

Figure 2-1: Paradigms of IS Research Employing Use

(* Reproduced from Burton-Jones and Straub 2006)

From here, the pool of IS literature is sorted according to several core aspects for

analysis; these include the underlying epistemology, types of systems, types of

measures, and empirical methods. Next, this list is filtered down to studies that

are IS success-themed or the like (studies evaluating a particular IS or IT). This

codification of IS studies into analysable content of Use forms the core of

building the literature review. Observations from the literature are organised into interrelated topic areas. These topics help structure the key issues facing the conceptualisation of Use and together delineate the research area.
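The selection and codification procedure described above can be sketched as a simple filtering pipeline. The study records, keyword matching, and theme labels below are illustrative assumptions for exposition, not the thesis's actual archival dataset or coding scheme.

```python
# Hypothetical sketch of the archival-analysis filtering described above.
# The records, keywords and theme labels are illustrative assumptions.

KEYWORDS = {"use", "system use", "utilisation", "usage", "system usage"}

def mentions_use(title: str) -> bool:
    """True if a study title contains any of the search keywords."""
    lowered = title.lower()
    return any(kw in lowered for kw in KEYWORDS)

def filter_pool(studies):
    """Narrow the pool to Use-related studies that are IS success-themed."""
    return [
        s for s in studies
        if mentions_use(s["title"]) and s["theme"] == "IS success"
    ]

pool = [
    {"title": "System Use and Net Benefits", "theme": "IS success"},
    {"title": "Perceived Ease of Adoption", "theme": "IT adoption"},
    {"title": "Utilisation of Enterprise Systems", "theme": "IS success"},
]

selected = filter_pool(pool)  # keeps the first and third records
```

In practice, as the text notes, the narrowed list was also vetted by a panel of researchers; the sketch captures only the mechanical keyword-and-theme step.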

2.3 Definitions of Use

Definitions of Use have varied in terms of the terminology used, theoretical

underpinning, and application. However, the definition of Use in many of these

studies has often been reported as inadequate. It has been reported (by Zigurs

1993; DeLone and McLean 2003; Petter et al. 2008 among others) that

researchers have often simplified the definition of Use, with most reverting to

previously published definitions without due consideration for theory and

terminology. However, the candidate believes that it is not the definitions that

are simplistic but more so, scholars have oversimplified the understanding

behind these definitions. It does not matter that there are multiple

interpretations of Use but it is important to account for them. There is scant IS

literature—besides recent publications by Burton-Jones and Straub (2006) and

Burton-Jones and Gallivan (2007)—that has attempted to define Use

systematically, and simultaneously to account for all core elements central to


the IS field and to the domain of Use. An in-depth analysis of Use definitions

across a selection of studies illustrates this point. Table 2-1 shows the

definitions of Use adopted in a selection of articles that have either been widely

cited, are IS success articles, or are articles that have attempted to

reconceptualise Use. The other purpose of looking across these definitions is to

draw a consensus on what IS researchers consider as the key elements in a

terminology of Use. As the starting point, DeLone and McLean (1992 p. 66)

define Use as ‘the received consumption of the product of the IS’. This definition

focuses on the Use of the product of IS (for example IS reports), rather than the

IS itself. The IS as a physical system (and not the discipline) is an application of computers that helps organisations process their data so that they can improve their management of information (Avison and Elliot 2006). Reflecting on these ideas, Use encompasses not only system Use but also information Use.

No. | Definition | Source (Citations to date)*

1 | [Use is] the received consumption of the product of the IS | DeLone and McLean (1992 p. 66); 2984 citations

2 | [Use is] the behaviour of employing technology in completing tasks | Goodhue and Thompson (1995 p. 218); 1101 citations

3 | [Use is] the utilisation of information technology (IT) by individuals, groups, or organisations | Straub et al. (1995 p. 1328); 436 citations

4 | [Use] means using the system. It is expected that resources such as human effort will be consumed as the system is used | Seddon (1997 p. 246); 639 citations

5 | [Use is] an activity which involves a user, a system, and the task | Burton-Jones and Straub (2006 p. 231); 102 citations

6 | [Use is] the individual's behaviour of, or effort put into, using the system | Sabherwal et al. (2006 p. 31); 59 citations

7 | [Use is] the degree and manner in which staff and customers utilise the capabilities of an IS | Petter et al. (2008 p. 239); 18 citations

Table 2-1: Definitions of Use

* Source: Google Scholar citation count to date (March 2010)

Straub et al. (1995 p. 1328), on the other hand, define Use as 'the utilisation of information technology (IT) by individuals, groups, or organisations'. This


definition captures several aspects. Executives often talk about the revolution that IT systems bring to the companies using them to deliver the organisational capabilities they desire (see McAfee 2006). Section 2.7.2 contains a

discussion of the evolution of contemporary IT systems and their effects on Use.

Reflecting on the definition by Straub et al. (1995), Use here emphasises

developed IT systems. Implicit in the Straub et al. (1995) definition, IT systems,

when implemented in an organisation, can be used at multiple levels (individuals,

groups, or organisations).

Burton-Jones and Gallivan (2007) further differentiate between individual, group,

and the organisational nature of Use in the light of interdependencies and

structures formed in Use. This study (elaborated later) will focus on the

individual users of IS in an organisation. Today, the effects of an enterprise-wide

IS implementation (like ERP) are widespread, not just affecting some groups or

pockets of individuals, but the entire organisation (Shanks, Seddon and

Willcocks 2003). Internally in an organisation, such systems entail many users

ranging from top executives and managers to data entry operators (Consulting

1999; Sedera, Tan and Dey 2006). Depending on their roles, different users

would naturally make different uses of the same systems. The perspectives of

multiple stakeholders are discussed later. External stakeholders (like clients of a

firm or students in a university) can also contribute to the way organisations

adopt and use IT. This is captured in the Petter et al. (2008 p. 239) definition of Use, which elaborates that Use describes 'the degree and manner in which staff and customers utilise the capabilities of an IS'. This is applicable in (but not restricted to) instances such as E-commerce (DeLone and McLean 2004) and the E-marketplace, where customers', suppliers' and students' Use of IS contributes to the success of these systems. Reflecting on this, Use differs

across organisational and hierarchical levels, and for users within and external

to an organisation.

Goodhue and Thompson (1995 p. 218) define Use as “the behaviour of employing

technology in completing tasks”. This definition adds the consideration of tasks.

The consideration of tasks in a definition of Use is shared by Burton-Jones and

Straub (2006 p. 231) who define Use as "an activity which involves a user, a system, and the task". In fact, Burton-Jones and Straub (2006), in their attempt

to reconceptualise system Use, propose a two-stage approach to define and


select system usage measures. Later sections discuss the relevance of the

approach. In their arguments, the second stage—selection—involves

conceptualisation of the usage construct in terms of its structure and function,

where the structure of system usage is tripartite, comprising (i) a user, (ii) a

system, and the (iii) task. The task, according to Vakkari (2003), represents an

activity the task doer performs in order to accomplish a task. In other words,

task indicates purpose and thus represents the bridge between user and system,

an aspect not generally captured in any of the previously mentioned definitions

of Use. Purposes of an IS can be many and varied (just like completing one or a

set of tasks), but purpose serves as the distinguishing feature of an IS artefact.

Therefore, in studying IS artefacts or theorising about an IS-focused

phenomenon, it is necessary to consider the purpose of the system as originally

intended, or as arising in Use (Gregor 2009).

Referring to the editor’s comments in Lee (1999), researchers should look at Use

as involving more than just a physical and passive application of the system, but

also the consideration of tasks, users, and information, and also organisational

functions. The organisational function here captures notions beyond the

technical complexities of information and communication technologies and

refers to the characteristics and resources of the organisation, including the

managerial structure. Many IS researchers have argued the importance of

considering the role and function of an organisation in determining an IS-related

phenomenon. For instance, studies have indicated how the organisational, social,

and behavioural complexities in the organisation influence technological

innovation (Kuan and Chau 2001; Rogers 2003; Tornatzky and Fleischer 1990).

Further, drawing from the definitions of the emerging discipline of IS (see Lee

1999, and Avison and Elliot 2006) the richness of the age-old concept of Use can

further be augmented by focusing on the phenomena that emerge when the

technological system and the social system interact, rather than on the

technologies themselves. In a similar light, Sabherwal et al. (2006 p. 31) define

Use as “the individual’s behaviour of, or effort put into, using the system”. This

view is also implicit in the definition of Use offered by Seddon (1997 p. 246) who

describes the “resources such as human effort that will be consumed as the

system is used”. Seddon (1997) however raises several more interpretations of


Use as originally specified by DeLone and McLean (1992). Reflecting on this, the

Use of IS encapsulates both a passive act of employing the system and an active state of interaction for the task.

2.3.1 The Multidimensional Nature of Use

The above analysis of the definitions of Use illustrates its multidimensional nature. This conclusion is drawn from the terminology of Use itself.

First, the domain of system Use involves no less than the following elements:

systems, information, users, and tasks to be completed using the system. Next,

at least three definitions—Seddon (1997), Sabherwal et al. (2006) and Petter et al. (2008)—raise the notion of consuming effort in the behaviour of Use.

This refers to the consumption of resources such as capabilities of systems,

management information reports, and users’ time. Extending this notion, if one

were to consider multiple user groups, each having varying uses of the system,

then different amounts of resources are used for each user group.

At a high level, variability in Use can thus be distinguished by the manner and

degree of interaction with the system, as Petter et al. (2008) aptly describe.

Consider the meaning of these two terms: degree and manner. The Oxford

dictionaries (Oxford 2008) define degree as ‘the amount, level, or extent to which

something happens or is present or a unit in a scale of temperature, intensity,

hardness, etc.’. On the other hand, the same dictionaries define manner as ‘a

way in which something is done or happens’. A simple example of cooking an egg

can distinguish the above terms. There is more than one way to cook an egg (this

refers to manner) but for how long do we need to cook an egg before we can eat it

(this refers to degree)?

The rise of the ES phenomenon can explain this further. Organisations adopting

complex and integrated systems like ES can choose either to customise the

industry-specific enterprise software purchased to suit the existing business

processes, or reengineer the business processes to adopt the best practices in

the software (Al-Mashari 2001), but the number of processes varies in each case.

The efficiency of ES and the amount of time saved through automation of

processes for a broad spectrum of stakeholders in an organisation are also

widely reported (Ross and Vitale 2000; Umble, Haft and Umble 2002). This


behaviour is not found in rigid transactional systems, because there is usually

a fixed manner and degree of completing a process. From this example, one can

infer that types of systems affect the manner and degree of their Use. The second

inference is that while quality assesses manner, quantity assesses degree.

Therefore, quality and quantity are also important considerations when looking

at the concept of complex system Use.
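The degree/manner distinction drawn above can be made concrete with a small sketch. The observation records, user groups, and field names below are hypothetical illustrations, not measures proposed by the thesis.

```python
# Minimal sketch distinguishing degree (quantity) from manner (quality) of Use.
# User groups, manners and hours are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class UseObservation:
    user_group: str   # e.g. managers vs data-entry operators
    manner: str       # the way the system is employed (quality of Use)
    hours: float      # the extent of interaction (degree, or quantity, of Use)

observations = [
    UseObservation("manager", "consuming management reports", 2.5),
    UseObservation("data entry operator", "processing transactions", 7.0),
]

# Different user groups consume different amounts of resources:
degree_by_group: dict[str, float] = {}
for ob in observations:
    degree_by_group[ob.user_group] = degree_by_group.get(ob.user_group, 0.0) + ob.hours
```

The `manner` field captures the qualitative "way" of Use while `hours` captures its quantitative extent, mirroring the manner/degree pair in the Petter et al. (2008) definition.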

2.3.2 The Multilevel Nature of Use

The concept of a system affecting multiple stakeholders points to a larger aspect of Use: its multilevel nature.

A publication by Burton-Jones and Gallivan (2007) demonstrates how

researchers can break down the behaviour of users, and a group of users

observed in a study, to conceive the multilevel nature of Use. The levels

described by Burton-Jones and Gallivan (2007) are individual, group, and

organisation. Their study primarily adapted the work of Morgeson and Hofmann

(1999) to develop a set of five general dimensions considered necessary to build a

complete multilevel theory of Use. These five dimensions pertain to three distinct

and overarching theoretical guidelines. First, the article asks scholars to

consider the functional relationships of system Use at different levels. This is

“whether the function of the construct would be the same at multiple levels even if

the structure is different” (Burton-Jones and Gallivan 2007 p. 661). The function

of a construct refers to “the effects or outputs of the phenomenon that the

construct is used to reflect” (ibid. p. 661); and the structure of a collective

construct refers to “the actions among individuals that generate the collective

phenomena that a collective construct is used to reflect” (ibid. p. 661). Second, the

article asks researchers to “work backwards by studying the function of a

construct and then discerning what structure might give rise to that effect” (ibid. p.

662). When analysing the structure of a collective construct, researchers should

consider (a) interdependencies in system Use, patterns of action and Use where

two or more entities are mutually dependent on each other, and (b) forms of

collective system Use in members—either homogeneous or having a pattern.

Third, the article asks researchers to “account for two types of contextual factors:

(1) factors that affect functional relationships among constructs, and (2) factors

that affect the emergence of collective phenomena” (ibid. p. 671).
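One way to picture the individual, group, and organisational levels discussed above is as a roll-up of member-level Use into collective constructs. The groups, members, scores, and the mean-based aggregation below are assumptions for illustration only, not Burton-Jones and Gallivan's (2007) formal theory.

```python
# Hedged sketch of multilevel Use: individual scores rolled up to group and
# organisational levels. Names, scores and the averaging structure are
# illustrative assumptions only.

individual_use = {
    "finance": {"alice": 0.9, "bob": 0.8},
    "sales": {"carol": 0.3, "dan": 0.7},
}

def collective_use(member_scores):
    """One possible 'structure' for a collective construct: the member mean."""
    return sum(member_scores.values()) / len(member_scores)

group_use = {g: collective_use(m) for g, m in individual_use.items()}
org_use = sum(group_use.values()) / len(group_use)
```

The choice of the mean is deliberate as a talking point: a homogeneous group and a highly patterned group can share the same mean, which is exactly why the structure (interdependencies and patterns of action), not just the aggregate function, matters in a multilevel theory of Use.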


While Burton-Jones and Gallivan (2007) propose levels of Use across groups,

different individuals within the same group or organisation also experience

different levels of Use. Herein, we discuss the multilevel nature of Use at the

more granular individual level. It describes how one stakeholder group (for

instance managers) makes sense of the same information systems and

technology that they develop and with which they interact. Over the years, the

field has seen evidence of applying theories and models rooted in other

disciplines such as psychology and social science to explain or seek in-depth

understanding of its multilevel nature, although some do not explicitly claim to

do so. For example, the concept of ‘taking possession of’ or integrating a tool or

technology in everyday human activity has been widely discussed in IS literature

for years, featuring prominently in the works of DeSanctis and Poole (1994), Orlikowski (1992) and Carroll et al. (2002), among others. These scholars have

sought an increased understanding of the process of appropriating everyday

technology in human actions, where people consciously and actively select

technological and social rules and resources within a real context in deciding its

adoption, and the relevant control practices. Some exemplar studies from this

stream of research that demonstrate that Use is multilevel are highlighted herein.

DeSanctis et al. (DeSanctis and Poole 1990; DeSanctis and Poole 1994;

DeSanctis, Poole, Zigurs et al. 2008) point out that groups and organisations

using IT dynamically create perceptions about the role and utility of the system,

and how it applies to their tasks. These perceptions vary across groups and

influence the way in which technology is used, and hence mediate its impact on

outcomes. Based on the analysis of group interactions during Group Decision

Support Systems (GDSS) Use, DeSanctis and Poole (1994) developed a coding

system, which suggests a typology of ‘moves’ through which technology can be

employed by groups. In this typology, 37 appropriation moves are organised into

nine general categories. Many researchers investigating complex enterprise

systems cite and extend the works of DeSanctis et al. (discussed here).

Tchokogue et al. (2005) suggest that successfully appropriating an ERP occurs

at three levels: strategic, tactical, and operational. Berchet and Habchi (2005)

further point to the importance of appropriating ES in the progressive stage of

integrating ES. Their study found that in this stage, users are clearly detecting

key processes and incorporating them in their work practices. Similar


applications of this stream of work to the education sector can be found in

Furomo and Melcher (2006) and LeRouge and Webb (2004).

Although Burton-Jones and Straub (2006 p. 232) insist that appropriation does not measure system usage, appropriation, or a like concept, is nevertheless useful for determining varying Uses in a work system. When Use of the system

is voluntary or volitional, and consistent with the ‘human selection’ theme in

appropriation, users can choose not to perform key processes in their work

processes using the system. For example, for a procurement process, an

employee can choose walk-in banking over an online banking facility to complete

the payment of an invoice from a vendor. At the other end of the spectrum, the

user appropriates the technology, where the user uses rules and resources

embedded in the online banking technology to complete payment. In between,

the employee may choose to use the fields of an online form as a guide to

preparing a cheque for banking. In this case, the employee can use information

from the system to check their payment process, although the system is not part

of the procurement process. The theory of appropriation and its appropriation moves determine the amount of Use of technology between these two extremes.
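The banking example above spans a spectrum from non-Use to full appropriation. The enum and scoring rule below are a hypothetical sketch of that spectrum for illustration, not DeSanctis and Poole's coding system of appropriation moves.

```python
# Hypothetical sketch of the appropriation spectrum in the banking example.
# The levels and the scoring rule are assumptions for illustration.
from enum import Enum

class Appropriation(Enum):
    NONE = 0     # walk-in banking: the system plays no part in the process
    PARTIAL = 1  # online form used only as a guide to prepare a cheque
    FULL = 2     # payment completed entirely within the online facility

def degree_of_use(level: Appropriation) -> float:
    """Map an appropriation level onto a 0..1 degree-of-Use scale."""
    return level.value / (len(Appropriation) - 1)
```

A richer model would add intermediate levels (DeSanctis and Poole's typology distinguishes 37 moves in nine categories), but even three levels show how volitional appropriation varies the degree of Use.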

In summary, when employing Use, scholars must define it and consider its inherently multidimensional and multilevel nature. Definitions from IS scholars point us to key elements of system Use that are crucial to its definition. Hence, scholars must at least recognise the relationships between the user, the IS being used, and the task for which the user interacts (possibly repeatedly) with the system.

2.4 IS Success and Use

IS success remains one of the most enduring research topics in the field of

information systems. For the past three decades, the bulk of work in IS success

(for example Bailey and Pearson 1983; DeLone and McLean 1992; Gallagher

1974; Shang and Seddon 2002; Wixom and Todd 2005) has attempted to capture the organisational user's evaluation of a particular system (or systems) implementation.

As highlighted earlier, Seddon (1997) defines IS success as a measure of the

degree to which a person evaluating a system believes that the stakeholder (in

whose interest the evaluation is being made) is better off. There has been a


multitude of studies that investigate the phenomenon of IS success, including

the measurement of success, the antecedents of success, and the explanations

of success or failure (Markus, Axline et al. 2003).

Researchers benchmark the success of an IS from a variety of perspectives; they

adopt a multitude of system, human, organisational, and environmental

measures (Petter et al. 2008). In the process, different models and frameworks

that consolidate the measures into various dimensions have been developed and

empirically validated. These include the widely cited IS success model (DeLone

and McLean 1992), the ES benefits framework (Shang and Seddon 2002), the

balanced scorecard (Kaplan and Norton 2001), and more recently the IS-Impact

measurement model (Gable et al. 2008) to cite but a few.

For years, studies have adopted and applied these models and their dimensions.

For example, the DeLone and McLean IS success model (1992; 2003) is widely

cited (over 3500 combined citations according to Google Scholar in March 2010),

and is often regarded as the quintessential model for this stream of research (see Panel

4 in Figure 2-1). The IS success model is best known as a multidimensional

measurement model, which classifies definitions of IS success and their

corresponding measures into interdependent categories. The authors classified

existing measures of success into six constructs (used interchangeably in this

study with dimensions)—System Quality, Information Quality, Organisational

Impacts, Individual Impacts, Satisfaction and Use. In the IS success model, Use is

depicted as a variable that is an event in a process that leads to a set of net

benefits (DeLone and McLean 1992; 2003; Seddon 1997). The rest of the section

examines the role of Use in three models developed by scholars in the field—the

IS success model, the IS nomological net, and the IS-Impact measurement

model—to evaluate IS success.

2.4.1 The IS Success Model (1992; 2003)

Based on the work of Shannon and Weaver (1963) and Mason (1978), the

seminal DeLone and McLean (1992) article consolidates the definitions of IS

success and the corresponding measures into a multidimensional measurement

model. As previously highlighted, the schema classifies the multitude of IS

success measures that have been used in the literature into six categories. The


relationships between the six dimensions as presented in their 1992 model (see

Figure 2-2) follow: (1) system quality and information quality lead to Use and to

user satisfactions that are interdependent; and (2) Use and satisfaction induce

an individual impact that leads to an organisational impact (Despont-Gros 2005).

The model is theoretical and the selection of measures is context and objective-

dependent.

Figure 2-2: DeLone and McLean’s IS Success Model (1992)

The rest of the section briefly defines and discusses the remaining five

interdependent dimensions of the DeLone and McLean (1992) IS success model.

The characteristics of these dimensions in the light of subsequent (to DeLone

and McLean 1992) literature are described to achieve a more contemporary

understanding, and to demonstrate the extent of the body of IS success model

types of research. We keep the definitions deliberately short here to reflect the

core focus of the study and to position further discussions on Use and its

elements.

Individual Impacts: Individual impacts are generally concerned with how the

implemented systems have influenced the performance of individuals. They are

generally closely related to performance, as DeLone and McLean (1992) suggest,

and could be an indication that an information system has improved the user’s

decision-making productivity, produced a change in user activity, or has


changed the decision maker’s perception of the importance or usefulness of the

information system.

Organisational Impacts: According to Senn (1982), a system’s impact could be

assessed by looking at the performance (effectiveness and efficiency) and the

effect that the applications of the system have within an organisation. The

performance assessment helps to determine whether to readjust or to put in

more resources to improve the performance of the system, while applications

assessment helps to determine how the implementation and Use of introduced

systems affect the organisation.

Information Quality: Information quality captures the perceived goodness of

the product of IS. Today, the growth of data warehouses and the direct access to

information from various sources by managers and information users have

increased the need for, and awareness of, high-quality information in

organisations (Lee, Strong, Kahn et al. 2002). From the literature, information

quality as perceived by a user stems from an implicit Use of a system’s

information and outputs (for example reports).

System Quality: System quality of the implemented system, according to Sedera

et al. (2004), is a multifaceted construct designed to capture how the system

performs from a technical and design perspective. It must be noted that system

quality as referred to in this study does not equate to software quality, although

as Von Hellens (1997) identified in her study, software carries qualities that

reflect its performance in the user environment and, subsequently, affect the

users’ opinions about its quality. IS users’ experiences and perceptions of quality

are beyond the technical properties of the software. Software normally refers to

programs, whereas an information system is the organisational context in which

software is used. Therefore, as Von Hellens (1997) pointed out, one should

confine software quality to the technical characteristics of software, leaving out

its Use.

User Satisfaction: User satisfaction is one of the most extensively used single

measures for IS evaluation (DeLone and McLean 1992; Doll and Torkzadeh 1988;

Etezadi-Amoli and Farhoomand 1996; Gatian 1994; Igbaria and Nachman 1990;

Igbaria and Tan 1997). It is often used as a surrogate measure of IS success

(Bailey and Pearson 1983) in general, and the success of e-commerce


applications in particular (Kim, Lee, Han et al. 2002). Khalifa and Shen (2005)

share this view, highlighting that satisfaction is not just an important

determinant of success but also its proxy, due to its conceptual closeness and

its empirical linkages to the success construct. Ein-Dor and Segev (1978)

reported that satisfaction—as compared with other common proxies for success,

such as Use and perceived usefulness—provides a higher degree of content and

construct validity.

The DeLone and McLean (1992) study posits a close interdependent relationship between Use and user satisfaction. However, like Use, several IS studies (including Seddon 1997; Rai et al. 2002; Sedera and Tan 2005) do not adequately measure this idealised construct, and suggest treating user satisfaction as an overarching4 construct of success, rather than as a measure of

success. The Use dimension in the IS success model is discussed next.

2.4.2 Differing Meanings of Use in the IS Success Model

Scholars have highlighted issues with the DeLone and McLean (1992) treatment

of Use in the model. For example, Seddon (1997, p. 240) found the original

DeLone and McLean (1992) classification “both confusing and misspecified”.

According to DeLone and McLean (1992), the original model recognises success

as a process construct that must include both temporal and causal influences in

determining IS success. However, Seddon (1997) claims that the original DeLone

and McLean (ibid.) framework is actually a combination of three models (two variance models [of Use and of success] and one process model) with three seemingly diverse meanings that attempt to combine both process and causal explanations of IS success.

In the re-specification and extension of the DeLone and McLean (1992) model,

Seddon (1997) identified the three possible meanings of Use summarised below.

Meaning 1: Use is an outcome of implementation success. This is so because it has frequently been assumed that heavily used systems are successes

and systems that were not used are failures (for example, Lucas 1975). However, as Szajna (1993) pointed out, this assumption is not necessarily correct. The systems cited in Lucas (1975) were generally failures not because they were not used, but because they provided no benefits (such as better work or less time consumed); their non-Use was a consequence of that failure.

4 For example, Sedera and Tan (2005) analysed 16 user-satisfaction instruments and demonstrated that user-satisfaction measures map predominantly to existing IS dimensions and measures.

Meaning 2: Use describes behaviour, and is not a measure of IS success, in the DeLone and McLean model. Works on intention and (or) behavioural (IT acceptance) models best exemplify this meaning.

Meaning 3: Impacts are outcomes of a process that begins with Use. This third meaning refers to Use as an event leading to individual and organisational impact. As with Meaning 2, impacts and satisfaction, not Use, are treated as the measures of IS success.

Given the above reported differences in the treatment of Use, DeLone and McLean (ibid.) called for further development and validation of their model. The

following examples demonstrate that despite several researchers taking the

advice to further enhance and validate the model, subjecting Use to different

treatment often produces conflicting results.

As one example, Seddon and Kiew (1994) tested the relationships between

system quality, information quality, satisfaction, and Use. After replacing Use

with usefulness and adding a new variable (user involvement), results5 from

their path analysis of data from 102 individual users of a university accounting

system indicated that user involvement, system quality, and information quality

had strong correlations with usefulness; user involvement had a weaker

relationship with satisfaction. Fraser and Salter (1995) replicated the Seddon

and Kiew (1994) study and obtained similar results.

5 Refer to Seddon (1997, p. 241) for further path analysis results and a summary.


Figure 2-3: DeLone and McLean (2003) Updated IS Success Model

In 2003, a decade after its introduction, DeLone and McLean proposed a number of changes to their original model. The differences noted in the new model (Figure

2-3) include (1) the joining of individual impact and organisational impact into

one dimension called net benefits, (2) the addition of the dimension service

quality6, (3) the arrows demonstrating proposed associations. The model is as

follows: characteristics of the IS (evaluated by system quality, information

quality, and service quality) affect intention to Use, user satisfaction, and

subsequent Use. Because of user satisfaction and Use, net (positive or negative)

benefits are achievable. The net benefits will influence user satisfaction and

future Use of the IS. Despite these changes, it is observed that not much work

has been done on Use, and thus consistent with the DeLone and McLean (1992)

ideas, we still have reason to believe that the concept of Use is too simplistic and

incomplete in IS success, often ignoring how users interact with the IS. An

understanding of the nature, extent, and appropriateness of Use must be

encompassed in the measurement of Use (DeLone and McLean 2003; Petter et al. 2008).

6 There is no detailed discussion of the two constructs; the researcher instead focused on the other dimensions that have featured in the ES Success study. Refer to DeLone and McLean (2003) for further explanations of the two new constructs of Net Benefits and Service Quality.


2.4.3 The IS Nomological Net (2003) and Use

The IS nomological net of Benbasat and Zmud (2003) is examined at the outset for three reasons: first, the IS nomological net places Use in a central role; second, the nomological net specifies an approach to validating the phenomenon (Use in this case); and last, the IS nomological net is closely connected with IS success and IS-Impact.

To expand on the second motivation, consider first the nomological-net approach to validation developed by Cronbach and Meehl (1955). As part of

the American Psychological Association’s development of psychological testing

standards, a nomological network was originally conceived as a view of construct

validity. That is, Cronbach and Meehl (1955) argued that, to provide evidence that a measure has construct validity, one must first develop a nomological network for the measure. Defining a nomological network of

contemporary Use identifies and helps to build the context within which to

validate a model of the study phenomena.

A nomological net must include: (1) a theoretical framework for what to measure,

(2) an empirical framework for how to measure it, and (3) a specification of

linkages among and between these frameworks (Trochim 2002). This study

adopts a similar strategy. We develop a theoretical research model following the

establishment of a theoretical underpinning, and thereafter we make an

identification of the contextual measures. Then the constructs and

corresponding measures of the research model are tested. Further, nomological

validity reflects the extent to which predictions about constructs and measures

are accurate from the perspective of reasonably well established theoretical

models (Straub et al., 1995). Nomological validation analysis remains one of the

most powerful ways of examining the validity of constructs and measures

(Bagozzi 1980; Cronbach 1971), but one that was not often mentioned in IS

research until recently.

Benbasat and Zmud (2003) highlight the need to establish an organisational identity for the IS discipline. They recommend that all IS research should include

the IT artefact and (or) elements from its immediate nomological net to bind

together the IS sub-disciplines, and to communicate the distinctive nature of the

IS discipline to those in its organisational field. The scholars conceptualised the


IT artefact as the application of IT to enable or support some task(s) embedded

within a structure (or structures) embedded within a context (or contexts).

Massey et al. (2001) point out that the IS discipline’s unique contribution to the

broader field of social science requires that all IS researchers understand

technology as well as the organisational and individual issues surrounding its

Use. Failing to do so makes the boundaries of IS scholarship ambiguous, thus raising questions regarding its distinctiveness and its

legitimacy with respect to related scholarly disciplines. All too often however,

elements from the authors’ conceptualisations of the IT nomological net are

seemingly absent from much IS scholarship (Benbasat and Zmud 2003;

Orlikowski and Iacono 2001).

Based on the above motivations, the IS nomological net proposed by Benbasat

and Zmud (2003) is developed and is shown in Figure 2-4. The IS net depicts

the identity of Use and the other key constructs and their inter-relationships.

The IS nomological net comprises several principles:

[Figure 2-4 depicts the IT artefact; the IT managerial, methodological, and technological capabilities; the IT managerial, methodological, and technological practices; Use; Information Systems; and Net Benefits.]

Figure 2-4: The IS Nomological Net*

*Adapted from Benbasat and Zmud (2003)

- The defining elements of the IT artefact include the information technology itself, the tasks for which it was constructed, the task structures (including policies, rules, and practices) supporting the tasks, and the context in which they are embedded;

- The managerial, methodological, and technological capabilities as well as the managerial, methodological, and operational practices involved in planning, designing, constructing, and implementing IT artefacts;

- The human behaviours reflected within, and induced through, the planning, designing, constructing, and implementation, and the direct and indirect Use of these artefacts;

- The managerial, methodological, and operational practices for directing and facilitating IT artefact Use and evolution;

- As a consequence of Use, the impacts (direct and indirect, intended and unintended) of these artefacts on the humans who directly (and indirectly) interact with them, the structures and the contexts within which they are embedded, and associated collectives (groups, work units, organisations).

Based on the above principles of the IS nomological net, a set of questions to heighten the distinctiveness of this research is developed. These are:

(1) Does the study investigate the relationships that fall within the IS nomological

net?

(2) How far outside the boundaries of the nomological net are the primary

constructs being investigated?

(3) Do relationships involving only IS constructs represent a majority of the

relationships in a research model?

As noted, validation of an IS-specific phenomenon of interest is as important as, and closely related to, the theory underpinning that phenomenon. Previous IS literature, for example Burton-Jones and Straub (2006), highlights that neither is well defined for the topic of Use.

2.4.4 The IS-Impact Measurement Model (2008) and Use

Derived from the IS success model, the IS-Impact measurement model is a

formative index that benchmarks the net benefits from an IS. IS-Impact of an IS

is “a measure at a point in time of the stream of net benefits from an IS, to date

and anticipated, as perceived by all key user groups” (Gable et al. 2008 p. 381).

In other words, and contrary to DeLone and McLean (1992), the authors propose


that the four dimensions of system quality, information quality, individual impact, and organisational impact yield an overall aggregate score of the impact of an IS.

Herein the characteristics of the IS-Impact measurement model (in Figure 2-5)

are summarised.

Figure 2-5: The IS-Impact Measurement Model*

*Source: Adapted from Gable et al. 2008, p. 395

It is an index comprising four dimensions in two halves. The impact half

measures benefits to date, while the quality half measures probable future

impacts. The model suggests system quality and information quality as

measures of the IS, and individual impact and organisational impact as

measures of overall impact. The model reconciles the IS success model (DeLone

and McLean 1992; 2003) and the Benbasat and Zmud (2003) IS nomological net.

The dimensions accounted for in the IS nomological net include (1) IT

managerial capabilities, (2) IT managerial practices, (3) IT artefact, (4) Use, and

(5) impact. The model conveys the repeating nature of the IS-Impact pattern

across time. Impacts resulting from the IS in one iteration will subsequently

influence IT capabilities and practices, which in turn will influence the IS quality

and thereafter system Use, and so on. We explain this effect through further

expanding and flattening the nomological net by eliminating the feedback loops.

In developing the IS-Impact model, Gable et al. (2008) found misinterpretations

(noted later) of Use, so much so that it was dropped from their final

measurement model. Gable et al. (2008, p. 388) deliberately omit the Use


construct, citing that “Use, either perceived or actual is only pertinent when such

Use is not mandatory”. However, the Gable et al. study still acknowledges Use as an important construct that could be perceived as both an antecedent and a consequence of IS-Impact: an assessment of the benefits that have followed (or not) from the system (impact) and its potential (quality).

2.5 Use as a Construct

Constructs, according to Edwards and Bagozzi (2000) are abstractions that

describe a phenomenon of theoretical interest. Constructs (sometimes called

latent variables) may be used to describe an observable (for example,

performance) or unobservable (for example, attitude) phenomenon. In addition,

constructs, according to Petter et al. (2007) may focus on outcomes, structures,

behaviours, or cognitive and (or) psychological aspects of a phenomenon being

investigated. It is noted that the terms dimensions and constructs are sometimes

used interchangeably in IS studies, but it is important to note their subtle

difference7.

Gable et al. (2008) suggest that Use as a construct in an IS evaluation model can

play dual roles. They suggest that Use can be an antecedent or a consequence.

From the IS-Impact model illustrated earlier, it is further interpreted that Use

can be a mediator. This section examines all three views. It is noteworthy that

the discussion on the constructs draws examples from other domains. More

specifically, the inadequacies and the issues highlighted in one stream are not

restricted to that particular stream. This study, however, converges on seeking a deeper understanding of how one could interpret the role of Use in IS success.

The potential representation of Use as either a formative or a reflective construct

is examined. The examination of this aspect is driven by recent attention in the

IS literature on the relationships between measures and their relevant

constructs. This examination not only informs the central role that Use plays in this (IS success) stream, but also forms an important contextual basis for further empirical data analysis.

7 Constructs are not directly observable events, and dimensions are manifest variables that are indicators of latent variables. Dimensions do not always represent constructs perfectly and reliably (Sharfman and Dean 1991). To avoid confusing their intent, unless otherwise stated or purported in other IS literature, we do not use these terms interchangeably in this study.

2.5.1 Use as an Antecedent

An antecedent is any phenomenon that precedes or causes another. Use as an

antecedent suggests that Use leads to downstream outcomes (such as impacts

or performance), thus determining how IT benefits individuals or organisations.

With reference to Figure 2-1 for instance, studies such as Trice and Treacy (1988)

and Burton-Jones and Straub (2006) found Use as a variable that determines

the performance of working individuals. D'Ambra and Wilson (2004) studied the

influence of Use on information-seeking behaviour. They found that Use of the

Internet (in this study for travel information problems) resolves the uncertainty

of information problems and aids the minimisation of the cost of engaging in

information-seeking behaviour. Further, Devaraj and Kohli (2003) suggest Use

as a predictor of organisational performance. As shown, it is surmised that Use

as an antecedent generally suggests that Use must occur for a set of benefits to

be retrieved from an IS implementation. Despite this, Seddon (1997)—as a case

in point—urges researchers to consider net benefits that flow from Use, rather

than Use as the critical factor for IS success measurement.

2.5.2 Use as a Consequence

A consequence is a phenomenon that follows and is caused by some previous

phenomenon. Referring to examples from Figure 2-1, Use as a consequence is

apparent in the frequently cited technology acceptance model (Davis et al. 1989;

Gefen, Karahanna and Straub 2003; Venkatesh et al. 2003). The TAM (Davis

1989) has been validated as a powerful and parsimonious framework to explain

users’ adoption of IT; it is the most widely used theoretical model for explaining

system usage (Mathieson, Peacock and Chin 2001; Straub, Limayem and

Karahanna-Evaristo 1995), and in general IS adoption behaviour (Hong, Thong,

Wong et al. 2001). Like TAM, the Technology Transition Model (TTM) posits that

actual system Use is a function of behavioural intentions. The TTM is a model for predicting actual system Use, the key indicator of success for technology

transition. According to TAM, behavioural intention causes actual Use; behavioural intention is a measure of the strength of one's intention to perform a specific behaviour, and is in turn affected by perceived ease of Use and usefulness. TAM has its strengths and

shortfalls in the light of system Use. The significant benefit of TAM is that it

provides a framework within which to investigate the effects of external variables

on system Use (Hong et al. 2001).

Despite the significant contributions of TAM to our field, TAM does not

investigate actual Use itself much, with researchers often stopping at intention

to Use. The reasoning behind this is that while great efforts have gone into

operationalising the external variables on system Use—such as perceived

intention—actual Use itself does not always feature. Intention is a useful

construct because it is measurable well in advance of actual Use. However,

intention as operationalised in TAM is not a measure of Use but is an antecedent.

Originally, the development of TAM was to predict future Use after initial

exposure to the system. It is therefore not reasonable to expect it to offer a

complex model for a longer-term understanding or evaluation of Use (Briggs,

Adkins, Mittleman et al. 1998). There are other studies that develop a synthesis

of TAM with other theories such as Task Technology Fit (D'Ambra and Wilson

2004) to build alternative explanations on Use. Later work in the area

(Venkatesh, Morris et al. 2003) suggests that four key constructs—performance

expectancy, effort expectancy, social influence, and facilitating conditions—

determine users’ intentions to use an IS, and their subsequent Use behaviour.

2.5.3 Use as a Mediator

The DeLone and McLean (1992) IS success model purports Use as an event in IS

success. Referring to examples in Figure 2-1, an event is a phenomenon that

occurs in a course of action or a series of procedures. The (categorical) relationships in the model suggest first that system quality and information

quality constructs lead to Use and user satisfaction constructs that are

interdependent (Despont-Gros 2005). Use is depicted as the next event leading

to individual and organisational impact. By treating Use as an event, DeLone

and McLean (ibid.) purport that impacts are outcomes of a process; it begins

with quality, then Use, then impacts. Rather than focusing only on the 'when' effect of Use (when an event is useful), there is greater value in looking at the 'how' effect of Use as an event.


Given this knowledge, one can possibly look at Use as a mediator construct in IS

success. This argument stems again from the Seddon (1997) suggestions of

conflicting meanings of Use in the IS success model. Referring to Figure 2-2, if

the (IS success) model does not purport causality it is observed that a mediating

effect of Use is possible; it underlies the relationship between IS quality and IS

impacts via the inclusion of Use itself. A mediator variable (or mediating variable)

in statistics is a variable that describes ‘how’ rather than ‘when’ effects will occur,

by accounting for the relationship between the independent and dependent

variables. Rather than hypothesising a direct causal relationship between the

independent variable and the dependent variable, a mediation model

hypothesises that the independent variable causes the mediator variable, which

in turn causes the dependent variable. The mediator variable, then, serves to

clarify the nature of the relationship between the independent and dependent

variables (MacKinnon, Fairchild and Fritz 2007). IS success literature suggesting Use as a mediating variable is scant (Boontaree, Ojelanki and Kweku-Muata 2006b is an exception) and, more importantly, little research has empirically tested it to date. Therefore, exploring this view is potentially useful.

Mediating variables contrast with moderating variables, where moderators

pinpoint the conditions under which an independent variable exerts its effects

on a dependent variable. It occurs when the relationship between variables A

and B depends on the level of C (Baron and Kenny 1986; Sobel 1982).
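The mediation logic described above can be sketched with simple regressions. The sketch below is an illustrative simulation only, not an analysis from this study: the variable names (quality, use, impact) and the effect sizes are assumptions chosen to mimic the hypothesised quality-to-Use-to-impact chain, and the estimation follows the familiar Baron and Kenny (1986) style of regression steps.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical data: system quality -> Use -> individual impact.
quality = rng.normal(size=n)
use = 0.6 * quality + rng.normal(scale=0.8, size=n)   # IV causes the mediator
impact = 0.5 * use + rng.normal(scale=0.8, size=n)    # mediator causes the DV

def slope(x, y):
    """OLS slope of y on x (with an intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

c = slope(quality, impact)   # total effect of quality on impact
a = slope(quality, use)      # path a: quality -> Use
# Path b and direct effect c': regress impact on both quality and Use.
X = np.column_stack([np.ones(n), quality, use])
beta, *_ = np.linalg.lstsq(X, impact, rcond=None)
c_prime, b = beta[1], beta[2]

# With OLS, the total effect decomposes exactly: c = c' + a*b.
print(f"total c = {c:.2f}, indirect a*b = {a * b:.2f}, direct c' = {c_prime:.2f}")
```

In this simulated full-mediation setup the indirect effect a*b accounts for essentially all of the total effect, while the direct effect c' sits near zero; a moderator, by contrast, would appear as an interaction term rather than as an intervening regression.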

2.5.4 Considerations for Formative and Reflective Constructs

Finally, when considering construct specification for Use, it is crucial for scholars to discuss the potential representation of Use as either a formative or a reflective construct. This is consistent with the Burton-Jones and Straub (2006) call for

researchers to consider all assumptions, specifications, and characteristics of

system Use prior to selecting measures. Burton-Jones and Straub (2006, p. 240)

purport the system Use construct as formative, captured by cognitive absorption

and deep structure usage dimensions. The remainder of this section

differentiates between the possible (formative and reflective) nature of the Use

construct in a measurement model.

Before discussing the differences between the formative and reflective nature of

the Use construct, it is important to revisit the terms measures and


measurement models used throughout this discussion. In measurement models,

we also refer to measures as indicators or items, and we use them to examine

constructs (Section 2.6 discusses measures in detail). Measures can be

distinguished as either those that are influenced by (reflect) or influence (form)

latent variables (Bollen 1989). A multi-item measure of a construct is one

comprising several indicators. In measurement models, multi-item measures are

present whenever a single latent variable is operationalised in some way by more

than one indicator (Diamantopoulos and Winklhofer 2001).

The interest in formative versus reflective constructs follows the popularity of

structural equation modelling (SEM) techniques for assessing (a) the

relationships between constructs, and (b) relationships between constructs and

measures. Despite this, researchers have reported issues of measurement model

misspecification including misleading findings reported in several empirical

studies adopting SEM (Freeze and Raschke 2007; Petter, Straub and Rai 2007).

Measurement model misspecification occurs when researchers do not pay

attention to the directional relationship between measures and the construct

(Chin 1998). There are important differences between a reflective and a formative

model. Table 2-2 summarises these differences between a formative and a

reflective perspective of the Use construct.

[Figure 2-6 shows two panels: Panel A, a reflective construct whose indicators Y1, Y2, and Y3 each carry an error term (e1, e2, e3); and Panel B, a formative construct formed by indicators Y1, Y2, and Y3.]

Figure 2-6: Reflective and Formative Measurement Models*

*Reproduced from Petter et al. 2007

In a reflective model (Nunnally and Bernstein 1994), the direction of causality is

from the construct to the measures (that is, Panel A, Figure 2-6); it is anticipated

that changes in the reflective construct will be manifested in changes in all its

measures (Diamantopoulos and Winklhofer 2001). All the measures (that is, Y1,


Y2, and Y3 in Panel A) represent the underlying construct in a reflective model

and they are expected to be highly correlated. Due to the high correlations

between the indicators, they are also interchangeable; dropping an indicator

should not change the conceptual meaning of the construct (Jarvis et al. 2003).

An example of a reflective measurement model that is of some relevance to this

study is the Perceived Ease of Use of the Technology Acceptance Model (Davis et

al. 1989). Perceived ease of Use is the degree to which a person believes that

using a particular system would be free of effort (Davis et al. 1989). Six reflective

indicators measure perceived ease of Use: easy to learn, controllable, clear and

understandable, flexible, easy to become skilful, and easy to use (Freeze and

Raschke 2007).

On the other hand, a formative measurement model (that is, Panel B, Figure 2-6)

depicts a construct as an explanatory combination of its measurement variables.

As explained in Henseler et al. (2008), an increase in the value of one measure

translates into a higher score for the composite variable, regardless of the value

of the other measures. Diamantopoulos and Winklhofer (2001) further

emphasise that in a formative model, the measure variables collectively

represent all the relevant dimensions or independent underpinning of the latent

variable (that is, Y1, Y2, and Y3 in Panel B); thus omitting one measure could

omit a unique part of the formative measurement model and change the

meaning of the latent variable. These are often called ‘causal’ indicators and the

construct is often termed a combination variable (MacCallum, Wegener, Uchino

et al. 1993) or composite variable (MacKenzie, Podsakoff and Jarvis 2005). Socio-

Economic Status (SES) (Heise 1972) is an example of a formative construct.

Three measures—education, income, and occupational prestige—cause SES. An

increase in income would increase SES, even if there were no increases in

education or occupational prestige (Freeze and Raschke 2007). The IS-Impact

measurement model is another example of a formative model. IS-Impact is a

reconceptualised, formative, multidimensional index of IS success (Gable et al.

2008). Thirty-seven items organised into four dimensions capture the index:

system quality, information quality, organisational impacts, and individual

impacts. The authors demonstrate the presence of a formative construct by

studying the correlations of the items with their respective criterion measures

and they examine the extent to which the items associated with the index


correlate with the global indicator (Diamantopoulos and Winklhofer 2001, p.

271): IS-Impact.
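The contrast between reflective and formative measurement can be made concrete with a small simulation. The sketch below is illustrative only, under assumed data-generating processes (the weights and noise levels are arbitrary), and is not a measurement model drawn from the IS literature.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Reflective: one latent construct causes every indicator,
# so the indicators should be highly inter-correlated.
latent = rng.normal(size=n)
reflective = np.column_stack(
    [latent + rng.normal(scale=0.4, size=n) for _ in range(3)]
)

# Formative: independent facets (cf. education, income, and prestige
# forming SES) jointly define the composite; the indicators need not
# correlate with one another at all.
formative = rng.normal(size=(n, 3))
composite = formative @ np.array([0.5, 0.3, 0.2])  # weighted index

def mean_offdiag_corr(m):
    """Average off-diagonal correlation among the columns of m."""
    c = np.corrcoef(m, rowvar=False)
    return (c.sum() - np.trace(c)) / (c.size - len(c))

print(f"reflective indicators, mean r = {mean_offdiag_corr(reflective):.2f}")
print(f"formative indicators,  mean r = {mean_offdiag_corr(formative):.2f}")
print(f"composite vs Y1,       r = {np.corrcoef(composite, formative[:, 0])[0, 1]:.2f}")
```

The reflective indicators are nearly interchangeable (high mean correlation), whereas each formative indicator relates to the composite but not to its fellow indicators, which is exactly the pattern row six of Table 2-2 anticipates.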

Characteristic of the Construct | Reflective Use | Formative* Use | Supporting References

1. Effects of change in measures | Changes in Use manifest in changes in all its measures | Change in one measure of Use does not require change in all other measures | Diamantopoulos and Winklhofer 2001; Jarvis et al. 2003; Petter et al. 2007

2. Interchangeability of measures | Dropping a measure does not change what Use is measuring | Dropping a measure changes what Use is measuring | Freeze and Raschke 2007; Petter et al. 2007

3. Causality | Measures reflect variations in Use | Measures predict Use | Diamantopoulos and Winklhofer 2001; Jarvis et al. 2003; Petter et al. 2007

4. Theoretical views | Theory does not view Use as formative | Theory views Use as formative | Petter et al. 2008

5. Differences in antecedents and consequences | Use measures have similar antecedents and consequences | Use measures have different antecedents and consequences | Diamantopoulos and Winklhofer 2001; Jarvis et al. 2003; Petter et al. 2007

6. Correlations (test of multicollinearity) | Should be high among Use measures | Not expected among Use measures | Freeze and Raschke 2007

Table 2-2: Considerations for Formative Vs Reflective Nature of Use

*All conditions must be true to be in the presence of formative constructs.

Reflecting on the above, there are clear differences between a formative and a reflective conceptualisation of the Use construct, summarised in Table 2-2. It is noteworthy that the references listed in column four did not explicitly state positions on Use; the entries are inferences about Use drawn from those articles. While most of the differences between the characteristics of a formative versus a reflective Use construct have been discussed above, scholars must also consider whether the theoretical lens that they employ as the foundation for their study views the construct as formative or reflective (that is, row four, Table 2-2). For instance, it is envisaged in the IS-Impact measurement


characterise several composite measures include the human development and

quality of life indexes (Diamantopoulos and Winklhofer 2001, p. 270).

Researchers who adopt these indexes must consider and (or) adhere to their

formative intent.

It was further reported in Petter et al. (2007) that there is a tendency for IS

researchers to neglect the underlying nature of measurement models. More

specifically, because guidelines (such as those summarised in Table 2-2) have

been lacking for the validation of formative constructs, in many instances they

have been misspecified as reflective constructs, even in premier scholarly

journals (Petter et al. 2007; Jarvis et al. 2003). It has become apparent that

many researchers simply assume that the constructs are, by default, reflective

(Petter et al. 2007; Diamantopoulos and Winklhofer, 2001). It is thus important

for researchers to pay attention to the direction of causality between measures

and constructs. Likewise, it is important for researchers to pay attention to their

conceptualisation of Use in terms of its reflective or formative nature to add to

the definition and subsequently to its validation.
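Row six of the considerations above, the multicollinearity test, lends itself to a simple diagnostic. The sketch below computes the variance inflation factor (VIF) from first principles on simulated data; the indicator values and the conventional VIF threshold of 10 are illustrative assumptions, not prescriptions from the cited authors.

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X:
    VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing
    column j on the remaining columns (with an intercept)."""
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return out

rng = np.random.default_rng(2)
n = 1000
# Near-independent indicators (a formative-friendly pattern): VIFs near 1.
ok = rng.normal(size=(n, 3))
# Highly collinear indicators (a reflective-like pattern): inflated VIFs.
base = rng.normal(size=n)
bad = np.column_stack([base + rng.normal(scale=0.1, size=n) for _ in range(3)])

print("independent:", [round(v, 1) for v in vif(ok)])
print("collinear:  ", [round(v, 1) for v in vif(bad)])
```

Low VIFs suggest the indicators carry distinct information, consistent with a formative reading; very high VIFs signal the strong inter-correlation expected of reflective measures (a common rule of thumb flags VIF above 10).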

2.6 Measurement of Use

Management consultant Peter Drucker once famously said: “If you can’t measure

it, you can’t manage it”. This old business adage still stands in areas across IS

and business. Researchers attempt to measure different aspects of IS

implementation in businesses, including competitive advantage and

organisational performance (Barney 2001), implementation success, or

technology adoption (Agarwal and Prasad 1998), IS success (DeLone and

McLean), behavioural intention, and even actual system usage (Venkatesh et al.

2003) with a range of measures. A multitude of measures has been used to measure Use. Despite this, the identification of more contextually salient

measures and their coherent application in IS studies have so far eluded

researchers. This section summarises the patterns and subsequently the

inadequacies of Use measures, given a contemporary IS context.

Before discussing the patterns of Use measures, it is important to define the

terms operationalisation, measures, and indicators. Operationalisation, as

referred to in IS success studies such as DeLone and McLean (2003) and Gable


et al. (2008) is the process of specifying how theoretical concepts or variables will

be measured so that theoretical propositions or hypotheses can be tested

(Edwards and Bagozzi 2000). Extending the discussions in earlier sections, a measure is an observed score gathered through self-reporting, interview, observation, or some other means (Edwards and Bagozzi 2000). Measures are quantifiable: for example, an empirical score gathered from a survey instrument (Freeze and Raschke 2007). In addition, the terms ‘measures’ and ‘indicators’ are often used synonymously. Before measuring the concept of Use, for example, a researcher should decide what the indicators of Use are and then specify how these indicators will be measured. Indicators are typically self-reported measures used to operationalise system Use when objective usage metrics are not available.

With reference to the above objective of identifying patterns in Use measures, several parameters for the analysis of Use measures in the IS literature are identified. To do this, we consulted a smaller sample8 of studies to account for the characteristics of Use measures. From this, we drew several parameters with which to analyse measures of Use. Table 2-3 also represents an extension of the background work on Use measures in Burton-Jones and Straub (2006). Next, these parameters are used to analyse a larger sample of IS literature (see Table 2-4) and frame the salient issues in the measurement of Use.

From Table 2-3, measures can generally be split into (1) perceptual measures, which are generally user-perceived and not quantifiable (qualitative), and (2) objective measures, which are generally system-generated and (or) user-perceived, but quantifiable. Second, researchers must anticipate and consider the different levels of access and responsibilities of stakeholders in an integrated systems environment. Looking at the domain of content measured, users’ responses to task-related measures such as variety, specificity, proportion, and nature would be dissimilar given the above. Third, and building on the previous point, researchers must consider whether they would achieve more relevant results canvassing measures of (1) information Use or (2) system Use. Previously, researchers have adopted system Use measures when they actually intended to measure information Use. The list of considerations conforms to three of the Cameron and Whetten (1983)

questions9 on organisational effectiveness measurement, to which Seddon et al. (1999) recommend that anyone seeking to evaluate an IT investment should have very clear answers. It is logical to employ the questions of Cameron and Whetten (1983) as a guide to identifying suitable measures for evaluating IS.

8 Twenty-five IS studies from the top three IS journals (MISQ; I&M; MS) and the top three IS conferences (ICIS; ECIS; AMCIS) published between 1990 and 2007 were canvassed.

Dimensions | Examples of Measures# | References*

Quantitative and Largely Objective

Extent of Use | Number of reports or searches requested; number of information systems, sessions, messages; users’ reports on light and (or) heavy Use | (Al-Qirim 2004; Igbaria and Tan 1997; Sutanto 2004)

Frequency of Use | Frequency of report requests; frequency of information system Use: daily, weekly, and so forth | (Cheung and Limayem 2005; Djekic and Loebbecke 2005; Dwivedi 2006; Rawstorne 2000; van der Heijden 2000; Xia 1996)

Proportion of Use | Number of applications of the information system used; total number of visits per Use; percentage of times the information system is used to perform a task; percentage of Use of a particular information system | (Bhattacherjee 1996; Christ, Baron, Krishnan et al. 2003; Dishaw 1999; Lee and Lee 2003; Sutanto 2004)

Duration of Use | Amount of time spent; connect hours; how many times a day and (or) week; duration of Use via system logs | (Straub et al. 1995; Taylor and Todd 1995; Dishaw and Strong 1999; Moon and Kim 2001; Venkatesh et al. 2003; Cenfetelli 2004; Dwivedi et al. 2006)

Productivity of Use | Number of projects completed | (Taylor and Todd 1995; Venkatesh and Davis 2003)

Recurrence of Use | Use of the system repeatedly; number of times of reuse of the system | (Cheung and Limayem 2005)

Qualitative and Largely Perceptual

Nature of Use | Types of reports requested; general versus specific Use; appropriate Use; type of information used | (Lee, Braynov and Rao 2003; Tang, Hornyak and Rai 2006)

Method of Use | Direct versus indirect or chauffeured Use | (DeLone and McLean 1992)

Decision to Use | Use versus no Use | (Bhattacherjee 1996; Lee et al. 2003; Sutanto et al. 2004)

Voluntariness of Use | Voluntary versus mandatory Use | (Rawstorne 2000)

Variety of Use | Number of business tasks supported by the information system; the variety of applications | (Hutchinson et al. 1995)

Specificity of Use | Specific versus general Use; utilitarian versus hedonic Use; interpretive versus exploratory Use | (Hutchinson et al. 1995; Tu 2001; Kim and Hwang 2006; Abdinnour-Helm and Saeed 2006)

Appropriateness of Use | Appropriate versus inappropriate Use | (Chin et al. 1997)

Acceptance of Use | How the system is accepted; how reports are accepted | (Moore and Benbasat 1991)

Dependence on Use | Degree of dependence on Use | (Goodhue and Thompson 1995)

Intensity of Use | Perceived intensity of using the system | (van der Heijden 2001)

Motivation of Use | Motivation levels | (DeLone and McLean 1992)

Table 2-3: Use Dimensions and Measures

# Measures are classified into their dimensions as in the source articles; some measures overlap.

* Cited references do not employ all of these measures, but one or many in combination.

9 The seven questions of Cameron and Whetten (1983) are as follows. (1) From whose perspective is effectiveness judged? (2) What domain of activity is the focus of the analysis? (3) What is the level of analysis? (4) What is the purpose of evaluating effectiveness? (5) What is the time frame employed? (6) What are the types of data used for judgments of effectiveness? (7) Against which referent do we judge effectiveness?

2.6.1 An Analysis of Prior and Current Use Measures

Literature that has employed Use as a measurement construct (that is,

operationalised Use) is consulted to understand how Use has been measured.

The list of articles gathered was narrowed based on whether the study had either

solely or in combination (1) operationalised the Use construct, (2) introduced

measures of Use, or (3) employed and tested IS success models. The logic for the third criterion rests on the strength of this research stream in introducing a host of system, human, organisational, and environmental variables and measures (Petter et al. 2008) to help organisations justify their IS investments (Markus et al. 2003). Eventually, 54 studies spanning the period from 1985 to 2007 across 18 IS journals and conferences10 were canvassed. These were selected based on the above-mentioned criteria, with the main objective of identifying the inadequacies of prior operationalisation. Table 2-4 illustrates the consolidated list.

10 Sample journals: MISQ; ISR; CACM; I&M; DSI; JMIS; MS; sample conferences: ICIS; AMCIS.

We discuss the following observations: (1) the types of systems studied and

the number of Use measures recorded [column A]; (2) whether prior studies consider holistic Use, encompassing both objective and behavioural measures of Use [column B]; (3) whether prior studies of Use accommodate multiple stakeholder groups [column C]; and (4) whether prior measures actually gauge information Use or system Use [column D]. The observations follow.


Study (number of measures examined^): Barki and Huff (1985) 1; Mahmood and Medewitz (1985) 8 (1) (7); Raymond (1985) 1; Srinivasan (1985) 2 (1) (1); Raymond (1990) 2; Liker et al. (1992) 1; Szajna (1993) 6; Leidner and Elam (1993) 2; Thompson et al. (1994) 4; Taylor and Todd (1995) 3; Compeau and Higgins (1995) 2; Xia and King (1996) 3; Choe (1996) 2 (1) (1); Igbaria et al. (1996) 2; Gill (1996) 1; Li (1997) 1; Seddon (1997) 5; Gelderman (1998) 4; Doll and Torkzadeh (1998) 30; Bhattacherjee (1998) 6; Lucas and Spitler (1999) 15; Tu (2001) 21; Skok et al. (2001) 2; Staples, Wong and Seddon (2002) 8; Devaraj and Kohli (2003) 3; McGill et al. (2003) 1; Almutairi and Subramanian (2005) 20 (2) (18); Abdinnour-Helm and Saeed (2006) 10; Wu and Wang (2006) 5; Burton-Jones and Straub (2006) 17; Sabherwal et al. (2006) 4 (1) (3); Wang et al. (2007) 3 (1) (2); Tsai and Chen (2007) 5 (1) (4); Halawi et al. (2007) 6; Rice (1994) 1; Straub et al. (1995) 3; Massetti and Zmud (1996) 4; Collopy (1996) 2; Guimaraes and Igbaria (1997) 2; Rai et al. (2002) 1; Pflughoeft et al. (2003) 6; DeLone and McLean (2003) 4 (2) (2); Mao and Ambrose (2004) 4 (2) (2); Gebauer et al. (2004) 4; DeLone and McLean (2004) 8 (2) (6); Djekic and Loebbecke (2005) 7; Kim et al. (2005) 1; Kim and Malhotra (2005) 1; Cheung and Limayem (2005) 2; Jain and Kanungo (2005) 5 (2) (3); Adams et al. (1992) 2; Igbaria and Tan (1997) 2; Iivari (2005) 2; Chien and Tsaur (2007) 8 (1) (7).

Total number of measures examined across the 54 studies: 275.

Type of system assessed [column A]: each study was classified as examining Functional Systems, Network Systems, Multiple (Functional and Network) Systems, or Enterprise Systems.

Nature of measures [column B]: Objective (extent of Use, e.g. duration): 41 studies (76%)*; Behavioural (nature of Use, e.g. sophistication): 25 studies (46%).

Type of stakeholders canvassed [column C]: Strategic (CEOs, directors): 11 studies (20%); Managerial (managers, CIOs): 24 studies (44%); Technical (technicians, IT support staff): 11 studies (20%); Operational (end users, plant workers): 29 studies (54%); External (students, web consumers): 16 studies (30%).

Type of measures [column D]: Information (indirect Use of information from an IS): 7 studies (13%); System (direct Use of the IS): 53 studies (98%).

^ For some studies, only a representative set of measures was printed, or the full survey instrument or list of measures was not available.

* Percentages are calculated with respect to the total number of studies (54); percentages do not add to 100 per cent due to overlapping occurrences.

Table 2-4: Mapping Characteristics of Use Measures in IS Studies

Types of Systems: Referring to the McAfee (2006) three-tiered classification of IT systems, the types of systems investigated in the 54 articles are distinguished. Generally, we see three types of work-changing IS. According to McAfee (2006),


these are functional, networking, and enterprise systems. Functional systems are

often associated with limited processing and data management functions, and a

non-communal database. Networking systems provide a means by which people

can communicate with one another. They are often associated with unrestricted

data input parameters and non-standardised data. The newest brand of work

systems (of the three) is Enterprise systems, and their characteristics are

discussed later (see section 3.4.1). Results in column A show that 70 per cent and 33 per cent of studies focus on functional and networking systems respectively, while only two studies focused on ES (the closest in character to contemporary IS). Having established the potential differences between systems, we identify the need for a greater focus on ES. Many researchers employ Use as a key construct to determine the success of functional systems such as MS Excel (Jain and Kanungo 2005; Burton-Jones and Straub 2006) and decision support systems (Devaraj and Kohli 2003; Lilien et al. 2004) that support the needs of specific target groups. Use has also been employed as a construct to measure networking systems, including email (Igbaria and Tan 1997; Rice 1994) and voice mail (Straub et al. 1995). For ES, Chien and Tsaur (2007) adapted the DeLone and McLean (2003) IS success model in their evaluation of ES in three case organisations.

Lack of Behavioural Measures: This section provides evidence to support

earlier claims that Use measures are often objective and lack meaning. Column

B distinguishes whether measures employed are objective or behavioural in

nature, where the objective measures focus on identifying the ‘number’ or ‘percentage’ of Use, and the behavioural measures focus on the ‘quality’ of Use.

Some examples of objective measures include ‘frequency of Use’, ‘duration of Use’, and ‘number of records accessed’ (Devaraj and Kohli 2003; Tsai and Chen

2007). Looking across the Use measures in the articles, there is seemingly a

consistently higher occurrence of objective measures (76 per cent) over

behavioural measures (46 per cent). This also illustrates that the majority of

scholars prefer to choose quantitative constructs over qualitative constructs,

even when given different systems. The business users of today are more mobile,

are ‘digital natives’, and more often than not spend less time in the traditional

workplace. Although the appropriateness of objective measures is recognised in

the light of the popularity of functional IT, a combination of objective and


behavioural measures is more appropriate (note that only 11 of the 54 studies, or 20 per cent, employ both objective and behavioural measures). More recently, Landrum et al. (2008, p. 6) measured Use as the number of times a person used the library’s online catalogue. To make measurement comparable among all constructs, Use was scaled into five categories: 1 = none, 2 = once, 3 = 2 to 5 times, 4 = 6 to 10 times, 5 = 11 or more times.
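As a hypothetical illustration, the Landrum et al. (2008) five-category coding described above can be sketched as a simple binning function (the function name and sample data are illustrative, not drawn from the cited study):

```python
def code_use_frequency(times_used: int) -> int:
    """Map a raw usage count to the five-category ordinal scale quoted
    from Landrum et al. (2008): 1 = none, 2 = once, 3 = 2 to 5 times,
    4 = 6 to 10 times, 5 = 11 or more times."""
    if times_used <= 0:
        return 1
    if times_used == 1:
        return 2
    if times_used <= 5:
        return 3
    if times_used <= 10:
        return 4
    return 5

# Coding a hypothetical sample of raw catalogue-usage counts
raw_counts = [0, 1, 3, 7, 25]
print([code_use_frequency(n) for n in raw_counts])  # [1, 2, 3, 4, 5]
```

Such binning makes an objective count comparable with Likert-scaled perceptual items, at the cost of discarding the distance between raw counts.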

Myopic Stakeholders’ Perspectives: The importance of gathering perceptions

of success at multiple levels in organisations has been discussed among

academics for several decades (Cameron and Whetten 1983; Leidner and Elam

1994; Sedera, Gable and Chan 2004; Tallon, Kraemer and Gurbaxani 2000).

Different users have different needs and interests and they draw different

conclusions, even for similar systems. Four such key stakeholder groups

(strategic, managerial, technical, and operational for enterprise IS) were

previously defined (Column C). However, the findings (in column C) demonstrate

that most studies focus on operational cohorts (54 per cent), followed by

managerial (44 per cent), and rarely look into all employment cohorts (8 per

cent). This would be an issue (in fact wrong), if scholars employ the wrong cohort

for evaluating a system designed for Use by another cohort, or in other words,

another cohort is better placed to evaluate the system. We also note that

students (for example Szajna 1993; Cheung and Limayem 2005; Burton-Jones

and Straub 2006) and web-consumers (DeLone and McLean 2004) are popular

groups for study.

Lack of Indirect System Use (Information) Measures: It is noted with interest

that despite the enduring literature on including ‘information’ as an integral

aspect of a ‘system’, the vast majority of studies (98 per cent) only consider

system Use (Column D). Measures employed to gauge system Use include extent

of system usage, and time spent on analysing reports (from Seddon 1997, p.).

Only a handful of studies assess information Use (13 per cent) and employ such

measures as “provides useful output reports” (from Mahmood and Medewitz

1985), and “make sure the data match my analysis of problems” (from Doll and

Torkzadeh 1998, p.).

Mixed Results: Although measured in numerous past studies, it is reported that

many research findings on the relationship between Use and other constructs (in


IS success) have been found to be “mixed, inconclusive, and misleading” (Bokhari

2005, p. 251). We consider some examples to support this view. Almutairi and

Subramanian (2005) used Use measures such as “on the average working day

that you use a computer, how much time do you spend on the system?”, “With

respect to the requirements of your current job, please indicate to what extent

you use the computer to perform the following tasks?” They reported in their

study that system usage accounted for 9 per cent of variation in individual

impacts. The positive beta (of 0.32) indicates that usage had a significant

positive effect on individual impacts. Likewise, Burton-Jones and Straub (2006)

introduced and reported a set of rich measures of usage (exploitive usage) that

captures user, system, and task aspects, and yields almost three times the

variance explained by a lean measure. On the other hand, Iivari (2005) found

that actual Use is insignificant (path coefficient = 0.15) as a predictor of

individual impact. This study used only two quantitative measures. These are

daily Use: “How much time do you spend with the system?” and frequency of

Use: “How often on average do you use the system?” Similarly, McGill and Hobbs

(2003) found no significant relationship (path coefficient = -0.19) between

intended Use and individual impact. They used only one measure: “Overall, how

would you rate your intended Use of the system over the next year?” In addition,

Wu and Wang (2006) found that system Use had no significant effect on user-

perceived KMS benefits (path coefficient = -0.25). Consistent with Gelderman

(1998) and Seddon (1997), their results suggest that there may not be a causal

relationship between Use and individual impacts. However, this study does not treat these mixed findings as an inadequacy of the conceptualisation of Use itself, but attributes them to the inappropriateness of the measures adopted (see Zmud 1979; Zigurs 1993; Burton-Jones and Straub 2006).
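The ‘variance explained’ figures cited above come from regressing an impact score on a Use score. A minimal sketch of how such an R-squared value is obtained follows; the data are simulated purely for illustration and bear no relation to the cited studies:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated survey scores: Use (e.g. hours per day) and individual impact,
# with a weak positive underlying effect plus noise
use = rng.uniform(1, 8, size=200)
impact = 0.3 * use + rng.normal(0, 1.0, size=200)

# Ordinary least squares of impact on Use (degree-1 polynomial fit)
slope, intercept = np.polyfit(use, impact, 1)
predicted = slope * use + intercept

# R-squared: the proportion of variance in impact explained by Use
ss_res = np.sum((impact - predicted) ** 2)
ss_tot = np.sum((impact - np.mean(impact)) ** 2)
r_squared = 1 - ss_res / ss_tot

print(round(r_squared, 3))
```

A small R-squared, like the 9 per cent reported by Almutairi and Subramanian (2005), can coexist with a significant positive path coefficient: the effect is reliably positive yet explains little of the variation in impacts.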

Lack of Methodological and (or) Theoretical Validation: Drawing from the

sample of studies in Table 2-4, it appears that studies reporting Use not only

differ in validation techniques but also in what is actually reported. First, from

our analysis, second-generation SEM techniques such as LISREL and Partial Least Squares (PLS) are commonly employed to test statistical conclusion validity in IT Use-related studies (see Chin and Todd 1995; Segars 1993).

Studies such as Gefen et al. (2000) and Henseler et al. (2008) provide stepwise

guidelines as to when such SEM techniques should be used to indicate


construct validity, reliability, and model validity (reflective, structural, and

formative models) as items to test.

Though sufficient for their purposes, several of the studies investigated more often indicate ad hoc data analysis, opting to focus on heuristics rather than on a stepwise methodological approach. Gill (1996), Liker et al. (1992), and Gelderman (1998) report mostly the reliability of constructs and indicators, and path validity coefficients. Devaraj and Kohli (2003) report coefficient scores, R-squared values, and F-statistics. Straub et al. (1995) and Igbaria et al. (1996) tested measurement and nomological net models. Finally, the majority

of studies do not explicitly test the nomological validity (Gefen 2000) or effects

(for example moderating or mediating) of Use (Henseler et al. 2008) in cause–

effect relationships with other constructs. Another of the advantages of

regression-based procedures like SEM is their ability to test statistically a priori

theoretical and measurement assumptions against empirical data. However,

Chin and Todd (1995) at the same time highlight their concern surrounding a

lack of a substantive, theoretical, justification for construct development and

poor indicators in Use studies. This is an issue still largely unresolved.
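As an illustration of one stepwise reliability check of the kind discussed above, the following sketch computes Cronbach’s alpha for a set of hypothetical reflective Use indicators. The data are simulated, and the 0.7 threshold mentioned in the comment is a common heuristic, not a rule from the cited guidelines:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Simulate 100 respondents answering three correlated 7-point Use items,
# each driven by a shared latent Use score plus item-specific noise
rng = np.random.default_rng(1)
latent_use = rng.normal(4, 1.2, size=(100, 1))
items = np.clip(np.round(latent_use + rng.normal(0, 0.8, size=(100, 3))), 1, 7)

alpha = cronbach_alpha(items)
# A common heuristic treats alpha >= 0.7 as acceptable reliability
print(round(alpha, 2))
```

Reporting such internal-consistency statistics alongside path coefficients is one part of the stepwise validation that many of the canvassed studies omit.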

Proxy Measures: The use of proxy measures for actual Use is also common. For example, Crowston et al. (2006, p. 126) introduced six

proxy measures of actual Use of free (libre) and open source software (FLOSS).

They include ‘Number of users’, ‘Downloads’, ‘Inclusion in distributions’,

‘Popularity or views of information page’, ‘Package dependencies’ and ‘Reuse of

code’. In addition, Jennex and Olfman (2008 p. 48) further suggest perceived

usefulness as a proxy for intended Use, citing ‘job fit’, ‘social factors in Use’,

‘complexity of tools and processes’, and ‘job security’ as measures.

Studies have also measured system Use as a higher-order construct determined

by different dimensions. For example, Saeed and Abdinnour-Helm (2008, p. 380)

used extended and exploratory Use to capture post-system implementation

usage. Here, ‘extended usage’ captures the breadth and frequency of using different IS features and functions, while ‘exploratory usage’ captures the active examination of new uses of the IS. Citing another example, Burton-Jones and

Straub (2006 p. 236) suggest that exploitative system Use is captured by

measures of ‘cognitive absorption’—a way to measure the extent of user


engagement—and ‘deep structure usage’—the extent of task-related system

features used.

2.6.2 Richness of Measures

As mentioned earlier, Burton-Jones and Straub (2006) published an article on

reconceptualising system usage. As discussed throughout the thesis, this article

bemoans the lack of a systematic approach to studying and measuring Use and

offers several important yet simple considerations for studying Use. One of these considerations, and a concept coined by the article, is the ‘richness’ of Use measures. Burton-Jones and Straub (2006) insist that Use measures must go

beyond the simple ‘lean’ Use measures and they illustrate richness in terms of

the content measured. Table 2-5 illustrates that each of the domain contents

reflected by a very rich measure (final column in Table 2-5) is important for the

successful functioning of contemporary (enterprise) systems. Between very lean and very rich measures, Burton-Jones and Straub (2006) describe a spectrum of Use measures, using the elements of Use as the terms of comparison. The next section discusses where this study agrees and disagrees with the purported concept of richness.

Richness of Measures: Very Lean | Lean | Rich | Very Rich

Type: Presence of Use | Extent to which the user employs the system | Extent to which the system is used to carry out the task | Extent to which the user employs the system to carry out the task

Example: Use versus non-Use (Alavi and Henderson 1981) | Cognitive absorption (Agarwal and Karahanna 2000) | Variety of Use (Igbaria et al. 1997) | None to date

Domain of content measured#: System | User, System | System, Task | User, System, Task

Table 2-5: Richness of Measures*

* Adapted from Burton-Jones and Straub (2006)

# Only the elements of Use listed for each column are measured; the remaining elements are not.


This study echoes the arguments of Burton-Jones and Straub (2006) for considering the theoretical richness of measures, whereby researchers must state assumptions that are specific to the domain wherein they employ Use. The present study looks only at Use for IS success, and thus does not consider Use measures that address intention to use, or measures better placed in other research streams; hence measures such as decision to use are not treated as measures of Use. The authors recognise that it is possible to have mutual measures; that is, measures adopted in IS success can also be employed in other streams such as IS acceptance. Extent, frequency, and duration of system Use are examples of such measures. However, the reverse is not necessarily true. For example, decision to Use, intention to Use, and acceptance of Use are all measures adopted in IS acceptance, but they are less meaningful for IS success if the system is already regularly used or nearly mandatory.

On the other hand, this study disagrees with the approach that Burton-Jones

and Straub (2006) employ to select appropriate measures of Use. There are three

overarching reasons. First, although the logic for considering the richness of a Use measure is sound, its operationalisation is, to some extent, not. The first issue is with the Burton-Jones and Straub (2006) concept of richness. Every Use measure must involve a user and a system to which the user responds. In other words, when the respondent answers any question (whether or not it includes the name of the system or the term “I”), the respondent answers with a view of how they “Use” the “system”. Therefore, the Burton-Jones and Straub (2006) classification of Use (in Table 2-5 above) is inappropriate (see also ibid. Table 2, p. 233). It is noteworthy that they select Alavi and Henderson (1981) as an example of a ‘very lean’ measure, where the summarised depiction of the survey item does not refer to a ‘user’ (for example, “usage of decision support systems versus decision support systems not used” (ibid. p. 1319)). It is argued that, regardless of the inclusion or exclusion of the term ‘I’ in the survey instrument of Alavi and Henderson (1981), when a respondent scores a survey item about ‘their’ Use of a system, the ‘user’ perspective is inherently included in the measurement. Because Burton-Jones and Straub (2006) classify measures as very lean simply for not explicitly stating system and user, the classification does not reflect the measures’ nature, and the labelling is therefore inappropriate.


Citing another example, they select Venkatesh and Davis (2000) as an example of a ‘lean’ and ‘omnibus’ measure, depicting the survey item (for example, “on average, how much time do you spend on the system every day in hours and minutes?” (ibid. p. 194)) as not referring to the extent to which the user employs the system to carry out the task, which they regard as the ‘very rich’ measure type. Regardless of the inclusion or exclusion of the term ‘I’, or the naming of the system, in the survey instruments of Alavi and Henderson (1981) and Venkatesh and Davis (2000), when a respondent scores a survey item about ‘their’ Use of a system, the ‘user’ and ‘system’ perspectives are inherently included in the measurement. Therefore, whether a measure explicitly mentions the user, system, or task is irrelevant, and basing the richness of a measure on the presence of Use elements is confusing. It is believed that the intention of Burton-Jones and Straub (2006) is to address the parsimony and completeness of a Use measure (as they suggest on p. 237, footnote 7), rather than its richness.

Second, all items of Use must be treated as having equal importance; differentiating between the “richness” of measures clearly contradicts the expectation of equality and high correlation among reflective measures of the same construct. Considering the importance of measures is a better approach, where selection is based on the content and context of the business process completed in the IS by the user, and on the assumptions one makes in defining Use for this purpose. In other words, domain content does not establish the worth of a Use measure; its importance does.

Finally, the inclusion of ‘tasks’ in determining the richness of a measure is questioned, even though tasks are an important notion for reflecting the nature of Use. Burton-Jones and Straub (2006, p. 237) include the concept of tasks in their dimensions of exploitive usage, more specifically for deep structure usage: “Use of features in the IS that support the underlying structure of the task” (ibid. p. 238). In fact, it is believed that the authors are measuring the variety of tasks completed (for example, using the system to analyse, to compare data, or to perform calculations) (ibid. p. 237), similar to the measures employed by Hutchison et al. (1995). However, the set of deep structure usage measurement items could potentially be very large, or need changing for every instrument. For example, ES have become a critical backbone for many contemporary companies’ business processes. For large and complex systems like ES, no one can go to an organisation and canvass responses on the full variety of uses, as that variety would naturally be very great. Measuring the variety of uses for such an IS is thus less valuable; looking instead at the system features that enable tasks to be completed is worthwhile.

2.7 A Summary of Considerations for Use in IS Success

From the above knowledge of the IS success field, a series of four considerations

for the conceptualisation of Use in this study is summarised:

First, Use can be better defined in IS success. Scholars should consider, for instance, how and what user tasks are completed using the system when understanding Use for IS success, the importance of user practice in IS success, and from which stakeholder perspectives the success of Use is measured.

Second, there are multiple interpretations of the role of Use. For instance,

system Use is purported as an antecedent (and consequence) of IS-Impact rather

than a dimension. This duality of system Use (as antecedent and consequence)

is thus far untested. In summary, researchers should consider the interpretation

of Use as an antecedent, consequence, and as potentially a mediating variable

over time.

Given these two points, scholars may then define a model, and ways to use it, to evaluate IS success. Given the principles of the nomological net, scholars must propose a theoretical framework for what elements to measure, the appropriate measures, an empirical framework for how to measure them, and the specified linkages among and between these constructs. Extending the discussion on measurement, scholars are strongly encouraged to consider the completeness, parsimony, mutual exclusivity (minimal redundancy or overlap), and necessity of dimensions and measures.

Finally, the nature of empirical data collection needs consideration. As

mentioned earlier, there is confusion regarding the role of the DeLone and

McLean (1992) constructs. The IS-Impact measurement (Gable et al. 2008)

model adopts a snapshot or cross-sectional approach for the system (and not a

test of causality), possibly reconciling the confusion.


These four points constitute a preliminary and procedural framework for reconceptualising Use for IS success. In the next three sections we examine the first three points highlighted above. We address the final point in the next chapter, where the research model is introduced.

2.7.1 A Work-Systems Definition of IS Use

So far, we have established that there are varying definitions of Use and that Use is a multidimensional concept. It encompasses several basic aspects, including: the information system (the IS artefact or IT system used); the tasks completed in the system (the purpose for using an IS); the system users (the person or persons using the IS); and the information needs of the users (the product or input of an IS). Burton-Jones and Straub (2006) refer to all of the above as elements of Use.

Jones and Straub (2006) refer to all of the above as elements of Use.

These considerations urge scholars to move away from often ‘techno-centric’ foci (Lee 2000) and to account for how the above crucial elements interact when completing a definition of IS Use. Furthermore, Burton-Jones and

Straub (2006, p. 229) urge scholars to subject Use to “stronger theoretical

treatment”, when conceptualising it. There have been attempts by IS scholars to

adopt theories that describe the interaction between the above elements to characterise Use. For example, Structuration Theory (Giddens 1979) describes how users enact social structures during interaction with IT, and Adaptive Structuration Theory (DeSanctis and Poole 1994) explains how users appropriate advanced IT and its structures. In Adaptive Structuration Theory,

DeSanctis and Poole (1994) refer to crucial elements of Use such as tasks,

information and the IT system as sources of structures. Following the lead of

Burton-Jones and Straub (2006), and consistent with the motivations of

structuration and appropriation to describe Use, this study proposes Steve

Alter’s work system concept (Alter 2003; Alter 2006) as an alternative and

appropriate theoretical lens through which to characterise Use.

According to Alter (2003; 2006), a work system is one in which human

participants and (or) machines perform work using information, technology, and

other resources to produce products and (or) services for internal or external

customers. Typical business organisations contain work systems that, among other functions, procure materials from suppliers, produce products,


deliver products to customers, find customers, create financial reports, hire employees, and coordinate work across departments. Alter, in his vitae and on his personal website, explains the basics of the work system concept to differentiate the types of systems that operate within or across organisations. For example, an IS is a work system whose processes and activities are devoted to processing information. A service system is one that produces services for its customers. A project is designed to produce a product and then go out of existence. A supply chain is an inter-organisational work system devoted to procuring materials and other inputs required to produce a firm's products, and so on.

The other key aspect of work system theory is the notion of processes and

activities. According to Alter (2006 p. 303), ‘processes’ and ‘activities’ include

everything that happens within the work system. The concept of processes and

activities is therefore much broader than a ‘business process’, defined by

Davenport (1993), Pall (1987), and Jasperson et al. (2005) among others,

because in Alter’s view many work systems do not contain highly structured

business processes involving a prescribed sequence of steps, triggered in a pre-

defined manner.

On the above premise, Alter’s work system theory is useful for characterising

Use of IS for two broad reasons:

1. It is an appropriate lens to describe the activities of IS users. As explained

earlier, a work system is a system in which human participants perform work

using information, technology, and other resources to produce products and (or)

services for internal or external customers. The work system scenarios described

in the earlier paragraphs are instances where the IS will be used. To Alter, IT

Use makes an important difference only when it is part of a work system.

Therefore, if one adopts a work system definition for IS, Use is embedded within

processes and activities, and consumes or encompasses the participants,

information, and technology.

2. It recognises the key elements of an IS and its Use. As explained in

Alter (2006), the work system itself consists of four elements: the processes and

activities, participants, information, and technologies. Five other elements must

be included in even a basic understanding of a work system’s operation, context,


and significance. Those elements are the products and services produced by the

work system, customers, environment, infrastructure, and strategies.

Figure 2-7 below illustrates a work-systems view of the Use of IS. The diagram

specifies the relationships between four crucial elements that characterise Use of

an IS. They are the processes and activities (referred to as work processes),

participants (referred to as users), information, and technologies (referred to as

the actual IS system). The following sections discuss considerations for each element when explaining the Use of IS.

Figure 2-7: A Basic Work System of Use
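The four elements of this basic work system of Use can be sketched as a simple data structure. This is an illustrative sketch only; the class and field names, and the order fulfilment example, are hypothetical and not part of the thesis:

```python
from dataclasses import dataclass

@dataclass
class WorkSystemOfUse:
    """The four basic elements of a work system of Use (after Figure 2-7)."""
    work_processes: list  # processes and activities completed in the IS
    users: list           # the participants in the work system
    information: list     # information consumed and produced during Use
    system: str           # the actual IS (technologies)

# Hypothetical order-fulfilment example:
order_fulfilment = WorkSystemOfUse(
    work_processes=["validate order", "handle exceptions", "complete invoice"],
    users=["sales clerk", "sales manager"],
    information=["quotation", "sales order", "invoice"],
    system="ES sales and distribution module",
)
```

A structure like this makes explicit that Use is characterised by the interaction of all four elements, not by the technology alone.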

2.7.2 System Considerations

Since the 1950s, computer information systems have been widely regarded as applications of computers that help businesses and organisations manage information. For most of the following 40 years, when a business function needed computerised information, it used a stand-alone application.

Today these functional systems are no longer the sole work-changing IT that businesses use (McAfee 2006). Whereas Small and Medium Enterprises (SMEs)

rely more on lower-cost transaction processing and networking systems, larger

organisations driven by growth, and for which IS are more central to their

business, are more likely to turn to larger packaged software (Levy and Powell

2005). Given the impetus of complex and more portable computer technologies,

including the advent of the Internet, the capabilities and functions of many


software applications have become more sophisticated; an example is

contemporary enterprise information systems.

By the 1990s, management and IT organisations alike had become convinced that packaged software was a more effective way (than a best-of-breed approach) to satisfy the growing necessities of an increasingly competitive business environment. Amid downsizing and reorganisation by companies in the early 1990s (Brady, Monk and Wagner 2001), the ES market thrived, and there was little choice or debate about spending what was sometimes millions of dollars to implement them (Schwartz 2007).

In summary, people and organisations have relied on a variety of system types

including text retrieval systems, decision support systems, management

information systems, expert systems, executive information systems, and

enterprise systems. These systems are used in the workplace for a combination of controlling transactions, decision making, structuring and formatting information, problem solving, and reporting. Systems have evolved in a way that characterises how users interact with them for work purposes.

2.7.3 Business and Work Process Considerations

Given the characteristics of contemporary IS outlined above, for an IS to be successful it must be used to perform work, or to support part or the whole of a business process. This section expands on this notion in the light of the Pall (1987), Davenport (1993), and Alter (2003) definitions of business process, and argues that Use of IS today is embedded within business processes.

First, Pall (1987) describes a work process as the logical organisation of people,

materials, energy, equipment, and procedures into work activities designed to

produce a specified result. This definition, according to Pall (ibid.) captures the

skills of the people implementing the process and the application of tools and

methods.

Adapting Pall’s early definitions, Davenport and Short (1990) define business

process as a set of logically related tasks performed to achieve a defined

business outcome. Further, in a widely cited article (over 2500 citations, based

on a citation count by Google Scholar), Davenport (1993) explains that a

(business) process is really a structured, measured set of activities designed to


produce a specific output for a particular customer or market. This definition,

according to Davenport, implies a strong emphasis on how work is done within

an organisation, in contrast to a product-focused emphasis. The definition

further implies several characteristics of a process. First, a process must have clearly

defined boundaries, input and output that consist of smaller parts, and activities

ordered in time and space. There must be a receiver of the process outcome—a

customer for example—and the transformation within the process must add

value to the customer. Next, processes could be inter-organisational, inter-

functional, or interpersonal. Processes result in manipulation of physical or

informational objects. Finally, processes could involve different types of activities:

managerial (for example developing a budget) and operational (for example filling

in a customer order) (Davenport and Short 1990).

2.7.4 User Considerations

Humans are social and rational beings; expectations, associations, values,

knowledge, preferences, learning, and other thought processes form the core of

their actions. Similarly, thought processes or cognition of different technology

users lead logically to differing intensity levels and outcomes in technology Use.

As described earlier, the uses of advanced information technology (IT) in organisations have increased in both variety and complexity, and users now play a more pivotal role in their development.

Despite this, previous studies portray users' employment of IT as largely passive. According to Lamb and Kling (2003), even the well-established IS research concept of 'users' has been found to be simplistic and unrepresentative of the multitude of roles users undertake in their interactions with a diversity of applications. Under this passive view, Use is less likely to alter the system design, and the 'outcomes' of the IS are less likely to change the way employees use the system (Schwarz and Chin 2007). Although adequate for evaluating more

conventional IT systems, such an approach is unrepresentative of the underlying

cognitive processes of users in a modern, complex, working environment. This is

particularly so as these contemporary systems become more prevalent in the workplace and in society; the subsequent Use of these systems is near mandatory rather than optional. It is envisaged that, with the right interaction, changed processes (whether system or business) become more


institutionalised over time, where the practices are drawn on, adapted, and

reinforced by users in ongoing interactions (Orlikowski 1992).

With this knowledge, it is clear that a theory of Use should capture, at some level, the dynamic processes in user interactions, so as to distinguish between different users and the advanced technologies implemented and operated in a non-passive environment. However, there is a distinct lack of theoretical underpinning in the IS evaluation stream for examining and categorising human actions, or for explaining why users must interact dynamically with (rather than simply Use) a contemporary IS. Such understanding will ultimately be more meaningful when we determine the benefits brought to bear

by the system. Further, understanding why and how the user actually functions

through some theoretical lens may be used to signal mismatches and difficulties

to management (Somers et al. 2003) and reduce the incompatibility of system

features with organisational information and business process needs (Janson

and Subramanian 2003).

Thus, examining user-related topics is relevant and directs attention to other useful aspects of understanding Use. The rest of this section elaborates on two

theoretical aspects related to IS users: multiple stakeholders and multilevel

analysis of Use. Examining multiple stakeholders is relevant on the premise that

given multiple roles, different groups of users would therefore tend to use the

same system but for different purposes and would naturally evoke different

perceptions. Investigating multilevel Use is relevant on the premise that IS user

activity can be discussed at more than one unit of analysis; this is notably at

individual, group, and organisational levels. The discussion of these related

concepts points to the importance of breaking down the nature of user activity to

understand the nature of Use in organisations better.

2.7.5 Information Considerations

Users rely on information about both business processes and system capabilities to complete work processes. This section discusses information as a primary

strategic and management resource in Use. Users of IS are part of the

information society and need to access and use information strategically if they

are to operate effectively (Levy and Powell 2005). This study considers only data,


information, and knowledge that is relevant to a user’s work processes. Data not

related to the work system are not directly relevant, making the distinction

between data, knowledge, and information less important when describing or

analysing a work system.

Data, information, and knowledge are terms that are often associated. There

have been many attempts to distinguish between them. First, data are the

building blocks of the information world (Levy and Powell 2005). A customer invoice, the number of orders, salaries paid, vendor names, customer addresses, and sales quotations are all examples of data. Data can be

generally described as factual, tend to be formal, and are either quantitative or

qualitative in nature (Jashapara 2004). The IS user uses a variety of data to fulfil

their role in a business process. For example, an accounts payable clerk is likely

to require quotations, payment orders, and a supplier’s bank accounts to pay a

supplier. Functional managers also require data such as monthly reports to

manage and control various business activities. As the responsibilities of the IS user grow in a firm, the data required are likely to become more complex, not only to satisfy the user's role but also to ensure smooth operations in the firm.

Information is 'systematically organised' data. The notion of systematic, as explained by Jashapara (2004, p. 15), implies the ability to predict or make inferences from the data, assuming they are based on some system. Information includes codified and non-codified information used and created as participants perform their work. Organisations may or may not computerise information. If information is given about a sequence of completed steps in a procurement process, for example that goods have been received from the vendor and that an invoice has been received, we can infer from the information that the next step is to pay the vendor. Another conception of information is data put into a situational context so that they become meaningful (Galliers 1987). This meaning can be scientific, as in a Dewey decimal classification system, or subjective, given by the receiver of the information (Jashapara 2004). In addition, it is the receiver of the information who determines whether it is data or information. For example, a consolidation of sales order reports informs sales managers of critical performance issues but may be judged unimportant by other recipients, such as a human resource manager.


If information is data plus context, knowledge is information plus experience. Experience is essential if Use is to be made of information (Levy and Powell 2005, p. 36). Knowledge is actionable information that can provide a rational justification. Knowledge can be tacit (memories, thoughts, and cognitions) or explicit (organisational norms, practices, routines), which means that interpretations of the same data and information can vary significantly depending on the perceptions and original knowledge base of the individual. For example, critical success factors for implementing an ES, or best practice for completing a financial process using ES software, all rely on knowledge.

2.7.6 Adapting Work Systems Theory for Understanding Use

Despite the value of Alter's work system concept as a theoretical lens through which to characterise Use, the theory does not capture some aspects. This study therefore adapts four key notions of the work system to develop a deeper understanding of the characteristics of Use. In other words, these notions may be viewed as the assumptions made in conceptualising Use for IS success in this study.

First, only four of the nine elements are described and therefore relevant in this study. These four basic elements are the work processes, users, information, and the actual IS system. Together, they embody a basic system of Use (see Figure 2-7 above).

Second, the definition of processes and activities proposed by Alter is too broad and understates the relevance of the context (Jasperson et al. 2005), appropriateness (DeLone and McLean 2003), levels (Burton-Jones and Gallivan 2007), and business purpose (Davenport 1993) of Use. At the same time, there are important work processes (such as inter-organisational teamwork, equipment testing, product verification with customers, and so on) that are not structured but that help to complete or add to the business process. The argument, therefore, is that a definition of processes needs to be more specific and yet accommodate different (structured or unstructured, defined or undefined) processes. In other words, processes do not "include everything that happens within the work system" (as claimed by Alter 2006, p. 303). Here, the term "work processes" refers only to the actionable tasks and activities in a business process that a user attempts to complete in the IS, and that form part of a work system. A user's work process forms part or the whole of the business process. For example, in a sales order processing process, clients can process enquiries and quotations without the support of a user (from the supplier firm), while users are required for work processes such as validating an order, handling exceptions, and completing invoice information.

Third, this study introduces the term incorporation level: the proportion of the business process encoded in the IS. To a varying degree and in a varying manner, an organisational user uses an IS to complete business tasks. Nevertheless, just how, and how much of, the business process is completed by the IS is an aspect seldom explained by scholars in a definition of Use. A system can execute a varying amount of the business process. Where basic forms of functional technology help automate some parts of a business process, other more advanced applications often direct its completion. In cases where "the system is the process", as is so often the case in heavily customised ES, the system could perform most of the key accounting, sales and distribution, inventory processing, and management on behalf of the user, with the user providing monitoring and support. At the opposite end of the spectrum of Use, users could be required to operate the system to execute an entire work process, drawing on information from each completed work process for the next; user involvement with the system at every stage of the business process is then mandatory. Between the two extremes, the amount of Use varies. Incorporation therefore sets a basic level of information system Use within the process and determines the nature of interaction with the participants in the process. Understanding the incorporation level thus helps us understand the extent of the relationship between the user's work system and the information system.

Fourth, a user's work processes can comprise core (C) and value-added (V+) functions. This view of work processes is adapted from Porter's (1985) Value-Added Chain analysis (see Figure 2-8), which prescribes a process view of the chain of activities in an organisation. Core functions generally represent the bare minimum, stipulated, or mandatory activities for completing the business process, also referred to as requisite system Use. Often 'requisite usage' parallels


the automated system functions and features, and such IS usage will be near mandatory or compulsory. Value-added system Use, on the other hand, represents non-compulsory and often non-automated system functions that, when adopted by the user, provide crucial support to the core functions. Value-adding Use is volitional, is essential to achieving a specific value-adding objective, and captures the additional (non-core) Use conducted by the user to enhance the output or impact. Core and value-added functions of the process determine how the features of an IS are used and how the functionalities and features of the system are configured. In addition, we argue that these two types of Use each capture a unique aspect of Use and therefore must be measured using different instruments.

Figure 2-8: An Example of Core and Value-added Functions of the Procurement Process*

*Adapted and reproduced from Michael Porter's (1985, p. 37) Value Added Chain

One way of defining the C and V+ functions of work processes at the incorporation level is to determine the depth and extent of Use. For example, preparing a quotation for a customer and a weekly sales report are core processes in order fulfilment. An employee in the sales department may use MS Word and spreadsheets to prepare both documents. Besides placing words and values in the respective documents, the employee may use V+ features of the systems, such as "Insert SmartArt" and "Borders and Shading", to enhance the outputs. Adding a border in MS Word or highlighting a cell in MS Excel may not be considered value adding by many process owners; however, completing a vendor evaluation in a procurement process would be considered V+ in many organisations.

[Figure 2-8 depicts the procurement process. Core functions: Create Purchase Requisition (PR), Create Purchase Order (P.O.), Receive the Goods (GR), Receive the Invoice (INV VER), Pay the Supplier (PYMT). Value-added functions: Market Assessment and Configuration of Parameters, Supplier Selection, Price Evaluation, Procurement Policies and Principles Evaluation, Contract Management/Negotiation.]


Furthermore, the incorporation level informs how many of the system's features are actually used. For example, an executive support system should have features that support data analysis, modelling, and monitoring for different functions of business processes, but not all features are required. Therefore,

Incorporation Level = Work Processes × (C + V+) --- (1)

For every set of work processes, a number of core and value-added functions exist. Mathematically, the incorporation level is thus the product of the work processes and the core and value-added functions per set of work processes (Equation 1).
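Equation 1 can be sketched in code. The following is an illustrative reading only: the process names, the (C, V+) counts, and the accumulation of function counts across the set of work processes are my assumptions, not the thesis's:

```python
def incorporation_level(work_processes):
    """work_processes maps each work process to a (C, V+) pair counting the
    core and value-added functions of that process completed in the IS.
    The incorporation level accumulates these counts across the set."""
    return sum(c + v for c, v in work_processes.values())

# Hypothetical slice of the procurement process, loosely following Figure 2-8:
procurement = {
    "create_purchase_requisition": (1, 0),  # core only
    "create_purchase_order": (1, 1),        # core plus supplier selection (V+)
    "pay_the_supplier": (1, 0),             # core only
}
print(incorporation_level(procurement))  # prints 4
```

Under this reading, a heavily incorporated system (many core and value-added functions completed in the IS) scores higher than one used only experimentally.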

In measuring the success of an IS, scholars ought to be interested in the longer-term effects on the users' capabilities or performance. Experimental Use, for example using a billing system once to send an invoice at a particular customer's request, is of less value here because the system is not regularly incorporated into the user's work processes. Incorporation therefore covers both the very regular Use of stock control systems that monitor all daily movements into, within, and out of the business, and the irregular Use of monthly payroll and tax payment systems, as long as the system has become part of the user's standard operating procedure for completing a business process (and forms part of the user's process knowledge).

In summary, and with reference to Figure 2-7, Use can be characterised by the interaction between the elements of a work system. Starting with systems: in terms of scalability, architecture, applications, data, multiple stakeholders, and their cognitive processes, we use more work-changing systems today. Contemporary IS (of which ES is an archetype) are significantly different from more conventional, functional IS. Users rely on these contemporary IS to specify their business processes and to complete their daily work processes. Different systems serve multiple users and patterns of Use. Therefore, embedded in a definition of Use is the manner and degree to which, for instance, senior managers use executive support systems to make strategic decisions, management groups use decision support systems to make decisions under uncertainty, businesses use knowledge management systems to create and share information, or individuals use transaction processing systems to process routine transactions. The above assumptions consolidate how different users interact with contemporary IS vis-à-vis their Use.

2.8 Summary

The literature review attempted to provide a detailed account of the issues to date surrounding IS Use. The review shows that Use is widely employed but rarely scrutinised in IS research. Careful examination of the consolidated studies reveals different definitions of Use, different representations in different domains, and different approaches to its operationalisation. However, not all of these issues need urgent attention.

The more pressing issues are as follows. First, for the IS success domain, Use can be better defined. To define Use, scholars must consider its multidimensional nature and, consistent with Burton-Jones and Straub (2006), identify the elements central to Use; from there, researchers can build on contextual implications to inform an appropriate definition of Use. We propose a work systems theoretical lens as an alternative way to characterise Use, and to explain how these key elements interact with one another during Use.

Second, the literature review focused on the representation of Use as a construct in models of IS success, the domain of interest. Although not a problem in itself, the different representations of Use in the IS success model, the IS nomological net, and the IS-Impact measurement model still require different forms of rigorous validation. This study does not recommend a particular form at this stage, but merely emphasises the value of, and principles behind, all perspectives.

Regarding its representation and conceptual nature in various domains (for example, IS acceptance, IS decision making, and IS success), Use has been represented as an antecedent, a consequence, and a mediator across these streams. Drawing from these streams, the likely roles of Use in IS success were discussed. Defining these roles correctly aids research model design.

The literature review also discussed the measurement of Use. Researchers have often adopted inadequate Use measures, and such measures are not restricted to a particular IS stream. For example, the majority of consolidated IS studies that adopt Use as a measurement construct tend to use quantitative or objective measures, dominated by frequency, duration, and extent of Use. We suggest that canvassing Use as a psychological experience is of greater value, given the mandatory nature of Use. In summary, despite the attention given to Use, prior notions of Use remain largely inadequate for the complex state of systems today.

Given the weaknesses in the theoretical treatment and measurement of Use, a re-conceptualisation of its role in IS success is necessary and timely. The next

chapter introduces the research model in this study and the approach to

implement the model for empirical investigations. The approach considers the

issues of defining, contextualising, operationalising, measuring and validating

Use discussed in this literature review. Essential considerations in

contextualising the study are systems, business processes, users and work

knowledge, and information.


Chapter 3: The Research Model

3.1 Introduction

This chapter presents the research model, the methodology to conceptualise Use

for a study, the measures developed to operationalise the constructs featured in

it, and the contextual applications of the research model. Recapping, Use in this

study describes the extent to which an IS is incorporated into the user’s processes

or tasks. The definition is based on a work-system-centric lens and

draws upon the characteristics of modern system types, key user groups and

their information needs, and the incorporation of IS in work processes. The Use

construct is positioned to demonstrate its central role in determining IS success.

First, the chapter presents the research model that specifies the relationship

between existing IS success constructs and the newly conceptualised Use

construct. Figure 3-1 illustrates the model. The research model is a

reconciliation of the IS success models described in the literature review. Next, a set of hypotheses relating these constructs in the a priori model is drawn.

Explanations of the model, constructs, and hypotheses here seek to guide the

design of the study and guide the set-up of the empirical investigation.

Before introducing the measures of the constructs in the research model, there is a discussion of the operationalisation of the constructs, in particular Use. For

this, we introduce an approach to operationalise Use that builds on Burton-

Jones and Straub’s (2006) staged approach to develop and select measures of

Use. The new two-phase approach seeks to aid in the systematic development of

conceptualisations of Use that are context-specific, and for the selection of

relevant measures in a similarly rigorous way. Table 3-1 illustrates the approach.

In addition to the definition and the selection stages in Burton-Jones and Straub

(2006), the approach incorporates three further stages. These are system

typology, incorporation level, and type of Use.

Subsequently, there is a discussion of the applicability of the research model in

the current study. The considerations when applying the two-phase methodology

that would potentially affect the later empirical analysis and findings are

described in light of examples from a contemporary systems context. Specifically,

the discussion focuses on the unit of analysis (the ES) and the other crucial elements that characterise Use in a work system: its business and work processes, information, and users. Finally, we list the dimensions and measures

of the constructs in the research model.

3.2 The Modified (IS Success) Research Model

The research model construction is in two parts: a conceptual model for

understanding, and an a priori model for testing. While the conceptual model

describes the key studied constructs and builds an association between them,

the a priori research model comprises variables and measures that

operationalise the constructs, and a set of hypotheses that represent the

causalities between these constructs. The term a priori is used here to describe

the predictive (Gregor 2006) model, including its constructs and the measures

that would be validated using quantitative methods and conventional a posteriori

statistical analysis. Figure 3-1 below (highlighted in black) illustrates the

research model.

A nomological net of Use—the central theme to identify the concepts relevant to

studying Use in the IS success context—is built in order to develop the

conceptual model. At this initial stage, only relationships are postulated, not

causalities between the identified constructs. The focus of an explanatory

conceptual model here is on how and why some phenomena occur, rather than

with making testable predictions. To achieve this, the IS success literature that has already evidenced the sufficiency and necessity of a number of constructs is referenced.


[Figure 3-1 depicts the research model reconciling the IS success models. Its labelled elements include the IS-Impact model (Quality: System, Information; Impact: Individual, Organisation), the IS Net (Capabilities and Practices), Use, IS Success (Impact: Individual, Organisation), and the hypothesised paths H1, H2, and H3.]

Figure 3-1: Research Model: Reconciling the IS Success Models

3.2.1 Positioning the Research Model

The research model differs from the original IS success model (DeLone and

McLean 1992) in the following three ways:

1. The research model reconciles the IS-Impact measurement model (of

Gable et al. 2008), the IS (nomological) Net (of Benbasat and Zmud 2003), and

the IS Success model (of DeLone and McLean 1992). The reconciliation is an

important step to account for and better understand the work of other scholars

to apply, support, and extend the IS success model. The IS-Impact measurement

model is included, to reflect a sequence of events leading to impact, and

incorporating Use. The IS-Impact model paved the way to revisit the Use construct: the exclusion of Use as a dimension of IS-Impact is mainly on the

basis that its previous measures are inadequate, given the near-mandatory

context of the prior study. The IS Net depicted is also consistent with the DeLone

and McLean (1992; 2003) IS-success model, when capabilities and practices are

temporarily set aside (see the grey areas in the research model).

2. Six constructs complete the DeLone and McLean (1992) research model

(see section 2.3.1). Given the emphasis of the system user as the unit of analysis

and the intention of the research to focus on differentiating individual patterns

of Use, the thesis examines only four of the six constructs: (1) Individual Impact (the dependent variable), (2) System Quality and (3) Information Quality (the independent variables), and (4) Use. The overarching relationship postulated suggests that the quality of the IS, in the context of its application, influences contemporary Use, which in turn influences the overall impacts of the IS. User Satisfaction is treated as an overarching measure of IS success rather than a dimension (see section 2.4.1). Organisational impacts, and IT capabilities (an extension of the IS Net) and practices as separate constructs, are not tested, in the light of the set-up of the empirical investigations.

3. Considering the definition of Use in this study, and addressing the issues

with the conceptualisation of Use in the original IS success model (discussed in

section 2.4.2), this thesis proposes three hypotheses.

Hypothesis 1—Quality of IS → Use: The perceived quality of IS influences Use.

This hypothesis implies that: (1) the better the system, the more positive Use is;

(2) the better the information produced or displayed by the system, the more

likely that Use is positive and vice versa. Quality of IS is potentially a composite

variable made up of system quality and information quality.

Hypothesis 2—Use → Impacts: Use influences future individual impacts or the

net benefits received from Use. Given positive Use, the impact from IS is likely to

be positive and vice versa.

Hypothesis 3—Quality of IS → Use → Impacts: Given hypotheses 1 and 2, it is

further hypothesised that Use has a mediating effect on the impacts that the

users receive from the IS. Given positive relationships between the quality of the IS and Use, and between Use and impacts, the impacts from the IS are likely to be positive and vice versa.

In summary, this thesis tests the IS success model in three aspects. For the

purposes of examining Use, the model is (1) extended to reflect the current and

ongoing work of scholars to define the effects, constructs and boundaries of IS

success better, (2) adapted to examine the effects of systems on the system

User(s) where organisational practices are temporarily set aside, and (3) defined

by three salient and causal hypotheses.
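The causal chain in hypotheses H1 to H3 amounts to a simple mediation model. The following Python sketch, using simulated data (not thesis data), illustrates how the indirect effect of quality on impact through Use might be estimated with ordinary least squares; the variable names, sample size, and coefficients are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical simulated survey scores (standardised): quality -> use -> impact
quality = rng.normal(0, 1, n)                 # antecedent: perceived IS quality
use = 0.6 * quality + rng.normal(0, 1, n)     # H1: quality influences Use
impact = 0.5 * use + rng.normal(0, 1, n)      # H2: Use influences individual impact

def slope(x, y):
    """OLS slope of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

a = slope(quality, use)     # path quality -> Use (H1)
b_ = slope(use, impact)     # path Use -> impact (H2)
indirect = a * b_           # mediated effect quality -> Use -> impact (H3)
print(f"a={a:.2f}, b={b_:.2f}, indirect={indirect:.2f}")
```

A fuller analysis would add significance tests (for example bootstrapped confidence intervals for the indirect effect), but the product-of-paths logic above is the core of the mediation claim in H3.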

3.3 Operationalising Use

A two-phase methodology is proposed to operationalise Use in the model. This

approach seeks to aid the systematic and rigorous development of Use measures

that are context-specific and relevant. Table 3-1 illustrates the approach,

specifying two phases—defining and selecting. In contrast to the definition stage suggested in Burton-Jones and Straub (2006), the defining phase here incorporates three finer considerations (system typology, incorporation level, and type of Use) when

defining Use. The steps are procedural and inform the researcher of the

measures most appropriate to capture Use. The methodology is further

organised in two parts: (1) context and (2) measurement. Researchers can adopt

the approach in the following manner: Steps 1a, 1b and 1c provide signposts for

researchers in determining the salient considerations for the context of Use,

while step 2 determines the appropriate type of measure for the context.

Phase 1: Define Use and its assumptions (considerations: Systems Typology; Level of Incorporation; Type of Use)

1. Define important characteristics and assumptions of Use
   a. Determine the system typology (FIT, NIT, or EIT)
   b. Determine the level of incorporation (High or Low) in the work process
   c. Determine the type of Use you want to measure (Core or Value-adding or both)

Phase 2: Select Use Measures

2. Select the appropriate type of measure (Frequency-based or Depth and (or) Extent; Quantitative-based or Qualitative-based)

Table 3-1: Steps in Operationalising the Use Construct*

* Adapted from (Burton-Jones and Straub 2006). The shaded areas reflect expanded views.

Consistent with Burton-Jones and Straub (2006, p. 231), the first phase of the

method attempts to “define the distinguishing characteristics of system usage

and state assumptions regarding these characteristics”. The second stage of

selection attempts to “choose the best measures for the part of the usage activity

that is of interest” (op. cit.). Where Burton-Jones and Straub (2006) concentrate

on the elements of Use, we use the Alter (2003; 2006) work-systems theory to

define the relationships between the above elements of Use. Use in this study is thus defined as the extent to which an IS is incorporated into the user’s work processes. Therefore, to define Use and its assumptions, three finer

considerations in the defining phase, prior to selecting measures, are considered.

First, the types of IS today that are central to a user’s work system are considered. Second, the incorporation of IS by the user for parts of business processes is considered. Work processes describe the stipulated processes that users are required to complete in the IS and that are not automated by the system (see Section 2.7.3). Third, for types of Use, core and value-adding activities are considered to describe the extent of processes encoded in the IS.

The second phase of the approach involves selecting Use measures based on the

terms and the inter-relationships of Use specified in the earlier phases. To

achieve this, one must first identify work processes that are encoded in

contemporary IS, and from there select relevant measures that not only attach to

its core and value-added functions, but to the study context. The Use measures

chosen reflect the type of systems studied, the domain of study, and the extent

of work processes completed.
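The two-phase procedure above can be made concrete as a small decision function. The following Python sketch encodes the defining-phase decisions (steps 1a to 1c) and a selection rule (step 2); the specific mapping rules are illustrative assumptions by way of example, not rules prescribed by Table 3-1.

```python
def select_use_measures(typology: str, incorporation: str, use_type: str) -> list:
    """Map the defining-phase decisions to candidate measure types.

    typology: 'FIT', 'NIT', or 'EIT' (step 1a)
    incorporation: 'high' or 'low' (step 1b)
    use_type: 'core', 'value-adding', or 'both' (step 1c)
    """
    assert typology in {"FIT", "NIT", "EIT"}
    assert incorporation in {"high", "low"}
    assert use_type in {"core", "value-adding", "both"}
    measures = []
    # Frequency/duration counts suit predictable, low-incorporation core use
    if use_type in {"core", "both"} and incorporation == "low":
        measures.append("frequency-based (amount of Use)")
    # Deeply incorporated or process-centric systems call for depth/extent measures
    if typology == "EIT" or incorporation == "high":
        measures.append("depth and (or) extent of Use")
    # Value-adding activity is hard to count; qualitative attitude items help
    if use_type in {"value-adding", "both"}:
        measures.append("qualitative (attitude towards Use)")
    return measures

print(select_use_measures("EIT", "high", "both"))
```

For an ES (Enterprise IT) with highly incorporated work processes, the sketch would recommend depth and attitude measures over simple frequency counts, consistent with the argument developed in the sections that follow.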

3.4 IS Typology

The consideration of IS types when defining Use is first discussed. According to

Gregor (2009), theorising in IT requires that IT systems and artefacts play a

central role (see also Section 2.7.2). The IS that businesses adopt and Use have evolved over the last few decades; understanding the characteristics of

systems allows us to build the context of their Use. This section argues that in a

work-system view of Use, different systems enable different patterns of Use.

The relevance of defining a typology of IS as the crucial consideration for Use is

argued by looking at one example: the McAfee (2006, p. 144) “three types of work

changing IT”. McAfee (ibid.) classifies today’s systems into three categories: (i)

Function IT, (ii) Network IT, and (iii) Enterprise IT.

Table 3-2 is a reproduction from McAfee (ibid.) that outlines the three categories

of systems with related definitions, their characteristics, and examples and

considerations for Use. Observing characteristics of the types of systems, it is

clear that each system type stipulates different types of Use (refer to the fifth

column in the table). For the considerations, the focus is on the effects of the

type of systems on users’ depth and familiarity of Use, the potential of impact,

and most importantly work processes.

Category: Function IT
  System Use: Execution of discrete tasks
  Characteristics: Can be adopted without complements*; impact increases if complements are in place
  Examples: Spreadsheets; computer-aided design; statistical software
  Considerations: Relevant groups of users become proficient with basic and system-stipulated features; potential to improve process performance through greater depth of Use over time

Category: Network IT
  System Use: Facilitating interactions without specifying their parameters
  Characteristics: Does not impose complements*, but lets them emerge over time; does not specify tasks or sequences; accepts data in many formats; Use is optional
  Examples: Emails; instant messaging; wikis; blogs
  Considerations: Stakeholders have equal access to system features; allows for greater depth of Use to emerge over time; provides limited work-related processes and functionalities

Category: Enterprise IT
  System Use: Completing and specifying organisational business processes
  Characteristics: Imposes complements* throughout the organisation; defines tasks and sequences; mandates data formats; Use is mandatory
  Examples: Enterprise systems; CRM systems; SCM systems
  Considerations: Higher automation and incorporation of work processes; different uses among stakeholder groups; greater depth and time of Use would improve performance

Table 3-2: Types of Information Systems

*Complements are defined by McAfee (2006, p. 142) as "organisational innovations, or changes in

the way companies get work done". Examples of complements that allow working and performing

with technologies, according to McAfee (2006, p. 143), are better-skilled workers, higher levels of

teamwork, redesigned processes, and new decision rights.

With reference to Table 3-2, McAfee (2006) defines Function IT as systems that,

when used, assist with the execution of discrete tasks. Spreadsheets (Jain and

Kanungo, 2005; Burton-Jones and Straub, 2006), simulators (Liker et al. 1992),

and decision-support systems (Devarai and Kohli, 2003; Lilien et al., 2004) are

examples of functional IS (McAfee 2006) that have featured prominently in IS

studies. Generally, users and organisations use these systems for executing

discrete tasks such as decision making, creating a series of purchase orders, or

building a workflow model. They are often associated with limited processing

and data management functions, and a non-communal database. Often, these systems support a work process, but they stop short of controlling it.

McAfee (2006) classifies email, messaging, and blogs under Network IT as IT that,

when used, facilitates interactions for users without having to follow specified

parameters. Network-based applications and tools ranging from emails to the

more content-rich Group Support Systems are prevalent forms of IT in modern

organisations. Users generally employ these networking systems to support

coordinated efforts towards achieving organisational goals. Above all, users

adopt these systems to enable communication and participation across and

within their working networks.

The third classification—Enterprise IT—includes IT that, when used, specifies

and completes business processes. Organisations and users who work with

these systems are able to integrate business processes, share common data and

practices, and access information in real time. Applications such as Enterprise

Resource Planning (ERP), Customer Relationship Management (CRM), and

Supply Chain Management (SCM) fall into this category.

3.4.1 An Enterprise Systems Focus

In this study, ES is the unit of analysis and the point of reference for

contemporary IS. Scholars have taken great interest in recent times in the

proliferation and use of such packaged software. Shanks et al. (2003) identified a

range of topics surrounding packaged software implementation, including

phases of an ERP implementation lifecycle, critical success factors, business-

process management, culture, and so on. One of the reasons why ES have captivated IS success scholars is that they so often represent an organisation’s

biggest one-off investment (Sedera et al. 2004; Shanks et al. 2003). Despite the

costs, these enterprise business suites, according to McAfee (2006), continue to

be at the forefront of the varieties of computer systems that businesses see and

Use. On the other hand, anticipating the impacts of enterprise systems is often

less direct and is influenced by a host of human, organisational, and

environmental factors (Petter et al. 2008; Shanks et al. 2003); therefore they

present a socio-technical challenge. Rolls Royce (Yusuf et al. 2004), Geneva

Pharmaceuticals (Bhattacherjee, 2000), and Nestlé (Worthen 2000) are just some

of the organisations that faced unprecedented demands in post-implementation

of ES in their business.

Adopting contemporary IS such as ES embodies change. Where conventional IS

generally are used without making organisational changes, ES impose

organisational changes (Davenport 1998; 2000). As a result the managerial and

technological capabilities, as well as the managerial and operational practices—involved in directing and facilitating the use and evolution of IS—have

changed accordingly in the face of more contemporary IS. Consistent with

Gregor (2009), understanding the distinguishing features of an IS artefact and the purposes it serves helps explain the practices instilled in it. Next, we highlight

the salient differences between conventional and contemporary systems. Using

these differences, Use of ES for completing work processes is characterised.

In terms of application scope (Hendricks et al. 2007), while more conventional

systems integrate selected functions within each functional area and operate

independently, ES provide cross-functional transaction automation. Modules in

ES based on business processes encompass individual functional units.

In terms of a business logic (Al-Mudimigh 2001; van der Aalst 2003), where

conventional systems are developed to reflect a business’s practices, ES contain

inbuilt best practices adopted in organisations as a way of doing business, or as

prerequisites to reengineer business processes. Organisations aim to improve ‘fit’,

either by configuring or customising the ES to suit existing business processes,

or reengineering the organisation’s processes to adopt these best practices in the

software.

In terms of tasks, conventional functional systems do not necessarily specify

tasks or sequences but contemporary IS, like ES, define them (Brady et al. 2001;

Devadoss and Pan 2007). As opposed to stand-alone tasks, a typical task

completed by a single user in ES is just a step executed within a larger business

process (for example creating a vendor’s list for Use in procurement, or contract-

negotiation processes). In this case, while ES are commonly associated with

uniform practices and rigid control mechanisms, there is still scope for human

agency (user-inspired action) (Boudreau and Robey 2005) to impose synergy

between organisation and technology for task completion. Consistent with a

work-system view, tasks represents work processes in the standard operating

procedures for users in completing a business process (See also work process

considerations in Section 2.7.3). Figure 3-2 shows some examples of core

operational business processes such as accounting, purchasing, and sales

processes that are present in most companies. ES therefore support or enable

some or all of these processes.

[Figure: four groups of core operational business processes. Process Sales Orders: process inquiry and quote; receive, enter and validate orders; manage back orders and exceptions; complete invoice information. Collect: collect for product or service; process customer prepayments; collect other income; collect supplier refunds. Purchasing: identify sources and supply; select final supplier and negotiate; manage purchase requisitions and orders; manage, receive, and verify discrepancies; manage return goods. Pay: pay for product or service; prepay for product or service; pay expense/commission/salary; refund customers; authorise supplier payment]

Figure 3-2: Examples of Core Operational Business Processes*

*Source: Adapted from Microsoft NAV (2010)

In terms of ES data, unlike working with voluminous printed output, the business-process-based modular design of ES brings a ripple effect following

data entry, automatically updating data in all related files in a central database

(Bancroft et al. 1998; McAfee 2006). Data are processed interactively, are

available in real time, and are format mandated. Mandated data formats and

reports in ES offer consistent information to customers (Hendricks et al. 2007)

and facilitate governance of the firm (Scott and Vessey 2000). ES reports also

provide managers with a clear view of the relative performance of various parts

of the enterprise. This is used thereafter to identify needed improvements and to

take advantage of market opportunities (Hendricks et al. 2007).

3.4.2 Multiple Stakeholder Perspectives

A contemporary IS such as an ES involves many stakeholders (refer to Section 2.7.4). According to Sedera et al. (2004), the successful function of ES in

an organisation involves the cooperation of multiple users ranging from top

executives and managers to data-entry operators. These internal stakeholder

groups, according to Gable et al. (2008), entail strategic, managerial, technical,

and operational cohorts. Table 3-3 summarises the four main employment

cohorts and their related tasks. The other key players (external stakeholders) involved include system vendors, consultants,

and customers (Nah et al. 2001).

Besides employment cohorts or stakeholder groups, a different perspective of key

user groups11 as proposed by Hirt and Swanson (1999) and Wu and Wang (2007) is noteworthy. Key users are generally selected from operating departments; they are familiar with business processes and have domain knowledge of their areas. In contrast, end-users employ ES in a way that

satisfies their immediate needs and they only have very specific knowledge of the

parts of the system they need for their work, despite the process-oriented nature

of the system. This study refers largely to key users and employment cohorts or

stakeholders.

11 ‘Key user groups’ does not include groups such as shareholders, debt holders, or others who may indirectly have a stake in the impact of IS, but who are not direct users of the IS or its outputs. Note that annual reports for shareholders, and marketing material are highly processed outside the IS and are distant from any IS that may have originated certain of their details. The term ‘key user groups’ is synonymous with stakeholders and employment cohorts.

Activity              | Strategic                                  | Management                        | Technical                           | Operational
Focus of Plans        | Futuristic, one aspect at a time           | Whole organisation                | Whole organisation and (or) support | Single task and (or) transaction
Complexity            | Many variables                             | Less complex                      | Complex                             | Simple, rule-based
Degree of Structure   | Unstructured, irregular                    | Rhythmic, procedural              | Routine                             | Structured
Nature of Information | Tailor made, more external and predictive  | Integrated, internal but holistic | Integrated, troubleshooting         | Task-specific, real time
Time Horizon          | Long-term                                  | Long, medium to short             | Medium                              | Short

Table 3-3: Employment Cohorts and Related Tasks

Stakeholders have their own interpretations of IS success following the IS

implementation and its subsequent Use. From the developer’s perspective, a

successful IS could be indicated by an implementation that is on time and under

budget, with a complete set of features that are aligned with the specifications

and that function correctly. From a management perspective, a successful IS

may be one that reduces uncertainty of outcomes and thus lowers risk and

leverages scarce resources (Briggs et al. 2003). From the end-user perspective, a

successful system may be one that improves productivity and performance. In

sum, the success of an IS is by no means assured from any perspective.

Sedera et al. (2006) report on changes in stakeholder foci in IS success studies

in relation to IS evolutions, and they seek the different views of employment

cohorts on ES success. As discussed by the authors, strategic stakeholders are

more involved in complex, irregular decision-making and they focus on providing

policies to govern the entire organisation. At management level, stakeholders

deal with rhythmic (but not repetitive) prescribed procedures, preferring ‘goal-

congruent’ IS. Stakeholders at the operational level are involved in highly

structured and specific tasks that are routine and transactional. Last, technical

stakeholders, as identified by Shang and Seddon (2000), are involved in systems

configuration and testing.

3.5 Research Model Constructs and Measures

Based on the considerations of the theoretical approach and the IS success

context of this study, the salient dimensions and measures for the constructs

specified in the research model are identified. Beginning with Use, a generic set of dimensions is proposed to measure the incorporation of advanced IS (where ES is the archetype) into users’ work processes. Next, the

three remaining IS success constructs—Individual Impact (the dependent

variable), System Quality, and Information Quality (the independent variables)—

that complete the a priori research model are discussed.

3.5.1 Use

Each dimension consolidates a set of reflective measures that capture various

relevant aspects that describe Use. For this, three absolute (that is ‘would not

depend on anything else’) dimensions—amount of Use, depth of Use, and

attitude of IS users—are proposed. It is the belief that the dimensions represent

an holistic evaluation of Use, one that qualifies Use as a necessary dimension of

IS success measurement—a widely believed notion but scarcely proven in theory.

The dimensions of Use and the final 15 measures12 are summarised in Table 3-4.

Amount of Use—Amount of Use is an objective dimension comprising frequency

and duration of Use of the system actually used for completing work processes.

Where work processes are straightforward and the automation level is predictable, frequency of Use is more important for achieving work productivity; in this case, the preference for work performance is on efficiency rather than on effectiveness. Conversely, the importance of amount as a measure is lower for work processes in which value-adding functions outweigh core functions. An example of such a work process is

human resources (HR) management where automated, self-service personnel

and benefits administration, and expense reporting are value-added functions

completed in ES. These value-added features of the HR management module are

often not part of the primary activities of a firm.

12 Twenty-eight measures originally came under consideration for the instrument. Thirteen were removed from the final analysis due to theoretical reasons—that is, they are believed not to measure the dimensions of Use directly, but rather antecedents and consequences of Use.

Hence, in this dimension, duration refers to time spent per sitting, and

frequency refers to how many sittings. Each sitting is characterised by a user

having to spend a period with the system for work purposes. These measures are

often used to capture Use as indicated earlier and they feature in studies such

as Venkatesh et al. (2003) and Cheung and Limayem (2005).

Depth of Use—The depth of Use refers to whether, when an IS is used to perform work processes, the maximum potential of the system is realised. In other words, this

dimension captures the extent to which users have used available features and

functionalities of the system to not only complete but to enhance a work process.

For users to respond to this dimension, they must not only be familiar with the

core functions but with the value-added features of a system too. In other words,

the depth of Use is the surrogate for capturing users’ value-added process

knowledge. Take the example of completing a client history to evaluate

creditworthiness. A user who knowingly Uses only the first page of a multiple-

screen questionnaire (that is, client details, salary, and debts) in an evaluation

function of the system is not maximising the level of detail, but fulfils only the

core functions. However, the employee may choose to add value to determining

creditworthiness by adopting features of the system to plot a client’s loan

repayment ability through a dependencies check and assets-growth strategy.

The employee may also recommend credit loans in future based on triangulated

client data. In other words, the automation level is less predictable. For the large

and often complex ES like a CRM process that purports to add value to its

adopters, depth of Use as a measure is more important when value-added

functions are present. Furthermore, it is notable that measures capturing the

depth of Use ought to be process-related.

The well-established field of management literature forms the basis for

constructing measures of depth. This stream of literature attempts to operationalise process- or task-related measures. In the light of more advanced

IS, task characteristics have been considered a major determinant in problem

solving (Jonassen 2000) and, more importantly, task performance (Campbell

1988). From this pool of management literature, two measures that capture a

user’s association with the intrinsic task structure and its characteristics

during Use are adapted. Three new scales to capture the exploratory and value-

adding nature of Use are introduced.

Attitude towards Use—This dimension captures the extent to which the user

truly incorporates a system into their work process. The attitude of users is measured through their psychological states, based on their experiences with the system for completing work processes. Consistent with Hong et al. (2001), Gopal et al.

(1992) and Kozlowski and Klein (2000), it is argued that both psychological and

plausible views of Use are important to capture the whole user experience,

irrespective of the core or value-added function of a work process or the type of

system used. These measures ought to be user-related.

Gopal et al. (1992) introduce a set of attitude indicators to capture how group

users appropriate advanced group decision systems. According to Gopal et al.

(1992), attitude is reflected in users’ comfort levels, the respect they have for the

software, and the challenges it promotes. Kim and Soergel (2005) and Li (2004)

developed a classification scheme to capture intrinsic versus extrinsic task

characteristics, measurement of the task performer, task performance, and more

importantly, the relationship between task and performer. The measures

introduced to capture the relationship between tasks and the task performer are

adapted to investigate their attitude to Use. They include intrinsic interest, which captures the degree to which the tasks in themselves are interesting, motivating, or attractive to the task performer; acceptance, which captures the degree of willingness to exert effort to meet the goals of Use; and task reward. Two new scales—enforcement and confusion—are added in the light of the complexity and the unique nature of Use of contemporary ES.

The table lists each dimension of Use with its measures, reflective item descriptions, and original sources.

Amount of Use
  Frequency (F1): I spend X hours per week on the system completing my tasks. (Cheung and Limayem 2005)
  Duration (F2): I spend X hours per sitting on the system completing my tasks. (Venkatesh et al. 2003)

Depth and (or) Extent of Use
  Clarity of goals (DP1): I have a clear understanding of the outcomes of Task X. (Campbell 1988; Kim and Soergel 2005)
  Clarity of given state (DP2): I have a clear understanding of what I need to complete in Task X. (Campbell 1988; Kim and Soergel 2005)
  Value-added Use (DP3): I use System X features to perform steps such as configuring organisational and user parameters. (New scale)
  Value-added Use (DP4): I use System X features to perform strategic and value-added tasks. (New scale)
  Exploration level (DP5): I have explored additional system features in System X beyond the given specifications. (New scale)

Attitude of Use
  Reward (AT1): I find the Task X exercises rewarding and fulfilling. (Kim and Soergel 2005)
  Intrinsic interest (AT2): I find the Task X exercises interesting and attractive. (Kim and Soergel 2005)
  Acceptance (AT3): I am willing to put in as much effort as required to complete Task X. (Kim and Soergel 2005)
  Comfort (AT4): I feel confident and relaxed when engaging with System X. (Gopal et al. 1992)
  Respect (AT5): I feel that System X is invaluable in completing Task X. (Gopal et al. 1992)
  Challenge (AT6): I am willing to challenge myself and excel at using System X for Task X. (Gopal et al. 1992)
  Confusion (AT7): I am confused by system features and functions in System X. (New scale)
  Enforcement (AT8): I am only using System X for Task X because I must. (New scale)

Table 3-4: Use Dimensions and Measures
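As an illustration of how responses to items of this kind might be aggregated into dimension scores, the following Python sketch averages hypothetical Likert-scale responses. The seven-point scale, the invented response values, and the reverse-scoring of the negatively worded items (AT7 confusion, AT8 enforcement) are assumptions for illustration only, not a scoring rule stated in the thesis.

```python
import statistics

# Hypothetical 7-point Likert responses for one user, keyed by item code
# (values invented for illustration).
responses = {
    "DP1": 6, "DP2": 5, "DP3": 4, "DP4": 3, "DP5": 5,   # depth and (or) extent
    "AT1": 5, "AT2": 4, "AT3": 6, "AT4": 5, "AT5": 6,
    "AT6": 4, "AT7": 2, "AT8": 3,                        # attitude towards Use
}

# AT7 and AT8 describe negative experiences, so one plausible treatment is
# to reverse-score them before averaging (an assumption, not a thesis rule).
SCALE_MAX = 7
NEGATIVE_ITEMS = {"AT7", "AT8"}
scored = {k: (SCALE_MAX + 1 - v if k in NEGATIVE_ITEMS else v)
          for k, v in responses.items()}

depth = statistics.mean(v for k, v in scored.items() if k.startswith("DP"))
attitude = statistics.mean(v for k, v in scored.items() if k.startswith("AT"))
print(f"depth={depth:.2f}, attitude={attitude:.2f}")
```

Amount-of-Use items (F1, F2) are omitted here because they are recorded in hours rather than on a Likert scale and would be standardised separately before any combined analysis.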

3.5.2 Individual Impact

The dependent variable of this study is the impact of IS on the individual. Past

studies such as Lucas and Nielsen (1980) used learning—or rate of performance

improvement—as a measure of individual impact. In the information-system

framework proposed by Chervany et al. (1972), the dependent variable of

individual impact was generally defined to be ‘decision effectiveness’. Rivard and

Huff (1984) included increased user productivity in their measure of success. In

summary, measures of individual impact seek to assess, for example, whether

the system has helped its users or the stakeholders of an organisation to

perform their tasks efficiently and effectively. This might include, for example, learning to transact or process with the system, interpreting information accurately, understanding information and work-related activities in their area better, making more effective decisions, and generally being more productive. In Table 3-5, the

researcher uses Task X to denote the different possible streams of impact that

an ES supports. Table 3-5 illustrates the measures adopted in this study for

assessing how the system has influenced the users’ performances. This study

does not operationalise organisational impacts, as the focus of the study is Use,

and is restricted to the individual level. In the table, II refers to Individual Impact,

and System X refers to the ES hardware, its software features, and its

procedures.

II1 Learning: I have learnt much about Task X through System X.
II2 Awareness: What I completed in System X has increased my awareness of Task X.
II3 Task effectiveness: System X has enhanced my effectiveness in Task X.
II4 Task productivity: System X has increased my productivity in Task X.
II5 Task performance: System X has increased my overall performance in Task X.

Table 3-5: Individual Impact Measurement Items*

*Adapted from Gable et al. 2008

3.5.3 System Quality

Measures of system quality have been found to focus on performance

characteristics of the IS under study. Earlier studies looked to content of the

database, aggregation of details, human factors, and system accuracy (Emery

1971), reliability, response time, and ease of terminal Use (Burton 1974) as

indicators of the quality of an information-processing system. Some research

also looked at resource and investment utilisation (Kriebel and Raviv 1980), and

hardware utilisation efficiency (Alloway 1980). The Hamilton and Chervany (1981)

study identified a more comprehensive list of measures including data currency,

response time, turnaround time, data accuracy, reliability, completeness, and

system flexibility and ease of Use. Seddon (1997) considers system quality to be

concerned with ‘bugs’ in the system (system reliability), user-interface

consistency, ease of Use, documentation quality, and maintenance ability of the

program code. Gable et al. (2003) identify a similar list in their ES-success study,

with ease of learning, quality of the system functionality, and sophistication and

integration of the system as the additions. Ten items in a scale of users’ perceptions measure the quality of the system, that is, how well the system performs from a design and technical perspective. Table 3-6 lists the system quality measures

employed in this research. In the table, SQ refers to System Quality and System

X refers to the IT system or ES in question.

Tan © 2010

ID Item Name Item Description*

SQ1 Ease of Use System X is easy to Use.

SQ2 Ease of learning System X is easy to learn.

SQ3 Meets requirements System X meets my requirements.

SQ4 Ease of access System X is easy to access.

SQ5 Features and functions

System X includes necessary features and functions.

SQ6 System accuracy System X always does what it should.

SQ7 System adaptability

System X's user interface can be easily adapted to one’s personal approach.

SQ8 Level of complexity

System X requires only the minimum number of fields and screens to achieve a task.

SQ9 Level of integration

All data within System X are fully integrated and consistent.

SQ10 Level of customisation System X is easy to modify, correct, or improve.

Table 3-6: System Quality-measurement Items

*Adapted from Gable et al. 2008
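The thesis validates these perception scales later through measurement-model analysis (loadings and weights). As a purely illustrative aside, not part of the original study, the sketch below shows how the internal consistency of such a multi-item scale could be checked with Cronbach's alpha; the respondent data are invented for illustration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert scores."""
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    k = items.shape[1]                         # number of items in the scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 7-point Likert responses to SQ1..SQ10 from five respondents
# (invented data, used only to demonstrate the computation).
responses = np.array([
    [6, 5, 6, 7, 6, 5, 5, 6, 6, 5],
    [4, 4, 5, 5, 4, 4, 3, 4, 5, 4],
    [7, 6, 6, 7, 7, 6, 6, 7, 6, 6],
    [3, 3, 4, 4, 3, 3, 3, 3, 4, 3],
    [5, 5, 5, 6, 5, 5, 4, 5, 5, 5],
])

alpha = cronbach_alpha(responses)
print(f"Cronbach's alpha for the SQ scale: {alpha:.2f}")
```

Values above roughly 0.7 are conventionally taken to indicate acceptable internal consistency for a reflective scale of this kind.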

3.5.4 Information Quality

According to Jeong and Lambert (2001) in their review of the information quality

literature, the construct can be measured in three related areas: information

content, information format, and the physical environment associated with

information. The measures commonly associated with assessing information

content are accuracy, currency, relevance, security, validity, and completeness

(Auster 1993; Miller 1996; Smith 1996). Depending on the system under study,

the measures for assessing information format include its design, format, and

links (a measure of a customer's physical movement through the system under

study) (Miller 1996; Zmud 1978). Finally, the physical environment associated

with information refers to a user’s ease of access to the system and its

information (Culnan 1985). Combining all three aspects evaluates the overall

quality of information. In a later study by Lee et al. (2002), the multidimensional

construct ‘information quality’ is represented comprehensively in four

dimensions: intrinsic, contextual, representational, and accessible information


quality. In summary, this research adopts the measures of Gable et al. (2003)

where information quality is concerned with such issues as the relevance,

timeliness, and format of reports, and the accuracy of information generated by

the implemented system. The quality of information is thus measured through a

five-item scale of users’ perceptions of the quality of the task outputs produced by

the system in reports and on screen. Table 3-7 illustrates the item name, item

description or question, and its source. In the table, IQ refers to Information

Quality and Task X outputs refer to the data, text, or other results produced by

the system as a result of operating on the data or procedures required by the Task X

activities.

ID Item Name Item Description*

IQ1 Output accuracy

Task X's outputs (for example, quotations and goods invoices from an order-fulfilment task) generated from System X seem to be relevant and exactly what is needed.

IQ2 Output usability Task X's outputs generated from System X are in a readily usable form for the next sub-task without any modification.

IQ3 Ease of understanding

Task X's outputs generated from System X are easy to understand.

IQ4 Formatting Task X's outputs generated from System X appear readable, clear, and well formatted.

IQ5 Conciseness Task X's outputs generated from System X are concise (that is, to the point).

Table 3-7: Information Quality-measurement Items

*Adapted from Gable et al. 2008


3.6 Chapter Summary

Given the weaknesses of Use explained in the literature review, a new

conceptualisation of Use for the IS success domain is proposed. It is

defined as the manner and degree to which an IS is incorporated into the user’s

work processes. The new Use construct is positioned in a modified IS success

research model (Figure 3-1), and the model illustrates the relationships

investigated in this study. Three IS success models reconcile to form the model,

and this clarifies the relationship between the constructs that will determine the

latter empirical analysis and findings.

In addition, the new conceptualisation of Use draws attention to a staged

approach for developing measures of Use. The two-phase approach (in Table 3-1)

comprises first defining the assumptions of Use, including the systems, users,

and work processes. The second stage involves identifying the type of system

studied. Different systems prescribe different sets of features and functionalities

that users can employ. From here, we are likely to observe users and the levels

of incorporation and automation, which together capture the extent of core and

value-added work processes that the IS encodes. Researchers can then identify

the type of Use, and from there select relevant measures that not only tie in to

its core and value-added functions, but to the study context.

The pertinent system type in this study, ES, is discussed in the light of the

above approach. Specifically, we highlight the salient differences between ES and

other types of systems and the implications that such systems have for the

type of users and the work processes supported. These differences demonstrate

how different users would have different patterns of Use.

Finally, we identify the relevant Use, system quality, information quality, and

individual impact constructs and their measures in the a priori model.

Explanations of the models, constructs, and measures here help guide the

set-up of the empirical investigation.


Chapter 4: Research Design

4.1 Introduction

This chapter discusses the empirical methods adopted in this research. The

previous chapter introduced a new conceptualisation of Use that attempts to

address the scant past attention to theory; work-systems theory and an

understanding of system types and their key users support this. The

implication of developing a theoretical lens extends beyond supporting a new

conceptualisation and operationalisation of Use; it also drives the study methods.

Methods adopted should be chosen not only on the premise of answering the

research questions, but they should seek to address and (or) contribute to the

theoretical lens for the study. On this premise, a synergy of theories, literature,

and approach—a mixed-method approach—is appropriate. The adoption of this

combination of methodologies to investigate one (or a series of) research

question(s) is expanding (Creswell 2003; Creswell 2009; Gable 1994; Tashakkori

and Teddlie 2003). To narrow the gap in understanding the use of contemporary

IS in the light of current technological and social contexts, the approach of this

project is to adopt mixed methods to investigate the effects of Use in two

domains: ES for education and ES for management. The mixed-method

approach consists of two distinct yet related phases: a quantitative (model-

testing) phase and a qualitative (exploratory) phase.

The rest of the chapter describes the mixed-method research design. The chapter

begins by discussing the underlying assumptions for each of the two phases in

the mixed-method approach and thereby sets the tone and the overall structure

for the remainder of the chapter. The chapter continues to distinguish between

quantitative and qualitative methods in Section 4.3; specifically, it distinguishes

the epistemology, characteristics, and merits of the dual stages. Section 4.4

introduces the characteristics of the mixed-method design and the methods

adopted to answer the research questions. Herein, there is discussion of the

implementation of data collection, the priority given to certain methods, the

stance of the study, the driving theory, and the overall relevance to research

questions.


The chapter then introduces the context of each of the chosen methods. Section

4.5 discusses the quantitative study to test the operationalisation and

relationships cast by the new conceptualisation. We gather data and responses

from participants enrolled in an Australian institute of higher learning who were

completing a set of ES exercises to test and validate the models. Section 4.6

describes a series of semi-structured interviews used to capture the daily

experiences of managers in Indian organisations adopting ES to investigate the

relevance of Use and the phases of ES Use in practice. Insights from the

managers’ interviews when triangulated with findings from the quantitative

study will provide a more complete picture, and will show the emerging

perspectives of contemporary Use. The concluding remarks for this chapter are

in Section 4.7.

4.2 Assumptions of Theory: Testing and Building

As mentioned earlier, the overall research method comprises a model-testing

phase and thereafter an exploratory, theory-building phase. The first phase takes

on a deductive (top-down) view where a theoretical lens and (or) plausible model

of Use is first defined. This theory narrows down thereafter to specific testable

hypotheses (like those in Section 3.2.1). Observations are collected to address

and test the feasibility of these hypotheses, subsequently providing a

confirmation (or not) of the original theory. These are essentially the model-

testing phase objectives. The second phase takes on more inductive (bottom-up)

reasoning where specific observations move to broader generalisations and

theories. Building from specific measures in the previous model-testing phase,

and with independent observations collected in this phase, we can begin to

detect patterns and regularities. Subsequently, we formulate tentative

hypotheses, general conclusions, and emerging theories of the contemporary Use

that inform our original theory or model and explore future research. These are

essentially the theory-building phase objectives.

The assumptions and characteristics of the two phases are relatively distinct yet

somewhat related. The model-testing phase follows a deductive reasoning, is

variance-based, and predominantly adopts a quantitative, empirical, data-

collection and analysis approach. The model-building phase on the other hand

follows inductive reasoning, is process-based, and emphasises qualitative study


and empirical data collection. Although the expectation is that the qualitative data

will support findings from the model-testing phase, the overarching

motivation is to draw new concepts of contemporary Use from qualitative data.

The assumptions and characteristics discussed above also explain the logic of

the sequence13 (model testing before theory building) of the phases. Together, the

model-testing and theory-building phases aim to produce a theory for predicting

and explaining the effects of Use on IS success. The ensuing theory contains key

constructs from IS success and IS-Impact, causal relationships, testable

hypotheses, and recommendations for practice. This according to Gregor (2006,

p. 24) can best be specified as a type IV (explain and predict) theory.14

Reconciling variance and process strategies (as in Sabherwal and Robey 1995)

is useful when investigating a social phenomenon such as the Use of

contemporary IS. The overarching advantage of a combined deductive and

inductive approach is the strength of the design to cycle continuously from

theory down to observations and back up to theory, thus adding to our

understanding of Use and, ultimately, IS success.

4.3 Quantitative and Qualitative Methods

Settling on a research design requires researchers not only to consider the

philosophical assumptions, perspectives, or underlying epistemologies, but the

type of research, the general research method, the data-collection technique,

and the data-analysis approach in relation to the research questions (Hair,

Anderson, Tatham et al. 1995; Myers 2009). This section discusses the general

(quantitative and qualitative) methods and overall data-collection techniques.

One of the common ways to classify types of research design is to distinguish

between qualitative and quantitative methods (Myers 1997). Where quantitative

research methods were originally developed in the natural sciences to study

natural phenomena, qualitative research methods were developed in the social

sciences to enable researchers to study social and cultural phenomena (Myers

2009). Essentially, quantitative research involves the use of quantitative data

to study phenomena; and, not surprisingly, qualitative research involves the use

of qualitative data to understand and explain social phenomena.

13 Gable (1994) proposes the opposite sequence (model building before model testing), where the model-building phase provides the testable notions and constructs for the model-testing phase.

14 Where type IV theory seeks to predict and explain, type I seeks to analyse, type II to understand, type III to predict, and type V to design and take action. Further examples of type IV theory identified by Gregor (2006) include the Technology Acceptance Model (TAM) (Davis et al. 1989) and the IS success model (DeLone and McLean 1992; 2003). These theories seek to address the questions: what is, how, why, and what will be. For elaboration on type IV theory, refer to Gregor 2006, p. 24.

Figure 4-1: Epistemological Assumptions for Qualitative and Quantitative Research*

*Source: adapted from Straub et al. 2004

4.3.1 Issues with Positivism

While qualitative research can be positivist, interpretive, or critical (Myers 2009),

in the case of quantitative research only a positivist stance is meaningful (Straub et al.

2004) (see Figure 4-1). Positivism is the dominant form of research in most

business and management disciplines (Myers 2009). Positivist research

subscribes to a more ‘scientific method’ and deals generally with positive facts

and observable phenomena. Positivist studies generally attempt to test theory in

an attempt to increase the predictive understanding of phenomena. In other

words, positivism defines a scientific theory as one that can be falsified (Straub

et al. 2004). And positivist researchers typically formulate propositions that

portray the subject matter in terms of quantifiable measures of independent

variables and dependent variables and the relationships between them. In this

light, this study takes a slightly more positivist stance where Use is a

measurable variable in its nomological net.

Typically, a researcher must decide what type of research to conduct in

quantitative positivist research: confirmatory or exploratory research.

Predominantly, this study seeks to derive and test the a priori research model

introduced in the previous chapter (a deductive lens). This typically suggests an

approach that seeks to test (support) a set of pre-specified relationships. To


achieve this, we canvass the learning experiences of participants working with

ES in a laboratory setting.

Despite the obvious value of this model-testing phase, we can do more to

avoid merely replicating prior empirical studies that adopt Use only as part of

testing a model of a larger phenomenon. As this study attempts to focus squarely on Use in

its nomological net, moving away from a solely positivist bias is useful. In this

case, the researcher subscribes to the view of critical realism, a form of post-

positivism where all observations are falsifiable and where all theory can be

revised (Trochim 2006). Critical realism recognises the fallibility of

researchers and of any single-measurement approach, thus emphasising the

importance of multiple methods, measures, and observations (Trochim ibid.). In

fact, a multidimensional and multi-nature construct such as system Use should

be studied and (or) validated within different contexts and purposes (Burton-

Jones et al. 2004; 2006; 2007) to add to cumulative knowledge.

For these reasons, we propose a second phase focusing on the phenomena of

Use in another context. The motivations for a second study phase should be to

add to knowledge about contemporary Use and, although not the priority, to

‘explain’ the data of the confirmatory approach. In this case, the type of research

completed here seeks not only description and measurement of reality, but

prediction, exploration, and explanation too.

4.3.2 Data Collection Techniques

Drawing from the weaknesses of a purely positivist research approach to the

topic, a number of data-collection techniques can be used; however, the

research approach does not prescribe the kind of data-collection techniques to

be used (Straub 2004). In fact, besides the research method, the choice and

appropriateness of data collection also rely on the research questions and the

availability of data (Myers 2009).

To appreciate and understand fully the phenomena of ES Use and its impacts,

this study draws largely from two sources of data and uses more than one

technique to gather data (see also Table 4-1). To answer research questions 2

and 3 (refer to Section 1.4) and the finer questions of ‘how do users rate their

Use’ and ‘what is the nature of the relationship between Use and IS success’,


this study draws results from quantitative data collected from a survey. To

provide more answers to research question 1 and the finer questions of ‘why and

how do users use ES in the real world’, this study uses qualitative data drawn

from the opinions of ES practitioners through a series of practitioner interviews.

The objective of completing another piece of opinion research (Jenkins 1985) is to

gather analysable data on the attitudes, opinions, impressions, and beliefs of

human subjects in various situations and experiences of ES Use. Asking them

via questionnaires, interviews, and so on accomplishes this. The methodology

not only allows testing of a priori hypotheses, but it offers an iterative approach

to the generation of new hypotheses, informing prior theory in the process (an

inductive lens). This triangulation (Gable et al. 1994; Myers 2009) of data is

useful when looking at the same topic from multiple angles and in different

environments: controlled and uncontrolled; structured and unstructured. While

there are different possible triangulation types: by data source (people, times,

places), by method (observation, interviews, surveys), by researcher (investigator

A, B, and so on), by theory, or by data type (quantitative and qualitative) (Miles

and Huberman 1994), this study focuses on triangulating by data type, as

dictated by the nature of the methods chosen.

4.4 Characteristics of the Mixed-Method Research Design

With the development and legitimacy of both qualitative and quantitative

research argued, a combination of two methodologies to investigate and answer

the research questions is used. In addition, one method takes centre stage, with

the other providing evidential support for its data. This approach is common in

mixed-method research (Creswell 2009; Tashakkori and Teddlie

2003). According to the literature, it can potentially cancel out some of the

disadvantages of certain methods, help researchers better understand complex social phenomena,

be more useful for applied research, construct and confirm theory in the same

study, and provide explanations for contradictory results emerging from different

methods.

A set of factors compiled by Creswell et al. (2003) is consulted for determining

the mixed-method research design. These factors include: (1) the priority given to

quantitative or qualitative research in reference to research questions; (2) the

implementation of data collection; and (3) the stage of integration. These factors


not only characterise the proposed mixed-method framework for this study, but

they differentiate between the quantitative and qualitative projects completed.

The design used seeks primarily to triangulate qualitative and quantitative data

(Miles and Huberman 1994; Morse 2003); the data are used to form essential

interpretations of contemporary Use. Table 4-1 summarises the key

characteristics of the methods.

Method (Stance): Quantitative, top-down
Research Questions Addressed: Q2: What are the salient dimensions and measures of Use for IS success? And Q3: What is the role of Use in IS success?
Data Collection: Exploratory and confirmatory experiment and survey (participants using ES for education)
Key Elements and Considerations: Nomological net, constructs, measures, measurement and structural models, loadings and weights

Method (Stance): Qualitative, bottom-up
Research Questions Addressed: Q1: How can we define Use for IS success? And Q3: What is the role of Use in IS success?
Data Collection: Practitioner interviews and transcripts (managers using ES for business)
Key Elements and Considerations: Accounts of Use, emergent patterns, frameworks, and typology

Table 4-1: Summary of Mixed Methods

Priority of deductive, quantitative lens: In the mixed-methods approach,

researchers can emphasise one method over the other (Creswell et al. 2003).

With reference to Table 4-1, this study places a greater emphasis and

precedence on the quantitative aspect. The motivation is the focus of the

research agenda on operationalising Use and validating Use in its nomological

net. In other words, the emphasis is on the quantitative (survey) aspect, where

the qualitative (interviews) study serves to complement and explain the

quantitative data.

Revisiting key research objectives (in Section 1.1), this study will attempt to:

Operationalise Use with a set of rich Use measures;

Examine the effects of Use over time;

Provide evidence of the formative or reflective nature of Use;

Examine Use as an antecedent, consequence, and as a dimension.


The quantitative study adopted will address these four objectives directly by

testing the sufficiency of several research models, including their derived

measures and the hypothesised role of Use. Data from the quantitative study are

envisaged to confirm the role of Use and its relationship with other dimensions

in an IS success stream. Rich qualitative data could inform the quantitative data

by explaining the relevance and iterative phases of Use in practice. Despite the

value of a different context, purpose, and data in explaining Use, one important

aspect remains constant in the light of triangulating the data: the IS artefact or

the type of contemporary systems being used (that is, the ES). Interpretation of

qualitative data should also be done in the light of the core components of the

research model and other theory, in the spirit of informing the conceptualisation

of Use in IS success. Thus, qualitative data attempt to shed light on previously

established objectives and for the purposes of:

examining the dynamics of Use, and

explaining likely differentiating scores for Use according to the

perspectives of the stakeholders.

Data Collection: The researcher can choose to collect data sequentially or

concurrently (Creswell, Plano, Guttman et al. 2003). In this study of

contemporary Use, data collection is sequential. This approach (see Figure 4-2)—

explanatory sequential research or sequential-quantitative first (Creswell 2009)—

prescribes that the researcher collects quantitative data and then collects

qualitative data to help explain or elaborate on the quantitative results (Morse

2003). Although it is the expectation of the researcher to find evidential support

from the qualitative data for findings in the model-testing phase, it is more

important to draw new concepts of contemporary Use from the qualitative data.

Thus, it is important that the integrity of each research method is maintained

to avoid violating the assumptions, sampling, and other methodological

principles of these methods (Morse 2003).

[Figure: QUAN Data Collection → QUAN Data Analysis → qual Data Collection → qual Data Analysis → Interpretation of Entire Analysis]

Figure 4-2: Sequential Explanatory Design*


*Source: Creswell 2009, p. 209. Quantitative is “QUAN” and Qualitative is “qual”. Arrows indicate

a sequential form of data collection with one form building on the other.

This is contrary to the exploratory sequential approach (Morse 2003), where the

researcher gathers qualitative data to explore a phenomenon and then collects

quantitative data to explain the relationships found in the qualitative data. An

example is the work of Gable (1994) on IS consultant-engagement success

factors, where a case-study-oriented data-collection method was conducted prior

to and integrated with another survey. Furthermore, as Esteves and Pastor

(2004) demonstrated, collecting data from two sample groups in a mixed-method

approach is not only plausible but is useful.

Stage of integration: The researcher needs to decide when to integrate the

research (Creswell et al. 2003). This depends largely on the purpose of each of

the study methods and the overall research, and the ease of integration. As

suggested earlier, the purpose of the quantitative study is largely aimed at

testing the research model. Quantitative data attempt to confirm relationships in

the model and test the effects of Use hypothesised. Thereafter, a qualitative

study to capture insights to key model concepts and, more importantly, ES Use

in practice is conducted. Findings should suggest how they relate to and explain

quantitative findings. Contrary to the work of Gable (1994), quantitative findings

in this study merely inform the design of the qualitative study, while qualitative

data support, or are embedded in, the primary form of data (quantitative) (Creswell 2009).

Thus, the two forms of data are separate yet connected. From this, we

triangulate qualitative and quantitative data only after the conduct of the

qualitative study. Findings from both studies are used to address all three of the

research questions and to provide cumulative feedback to the current IS success

literature.

Building on these arguments and the descriptions of the research strategy (refer

to broad phases in earlier Section 1.5), Figure 4-3 summarises the overall

research design, including the seven key phases (depicted by rounded

rectangles). In addition, the design highlights the key considerations and

outcomes (depicted by parallelograms) of each phase. The arrows in the diagram

do not indicate causality but they simply indicate relationships, inputs, and

outputs. The design demonstrates the cyclical nature of the relationship between

the qualitative and quantitative approaches undertaken.


[Figure: the seven phases of the research design, depicted as rounded rectangles — (1) Literature Review, (2) Specification and Selection, (3) Participant Survey (1), (4) Participant Survey (2), (5) Statistical Validation and Findings, (6) Participant Interviews, (7) Findings and Interpretations — spanning Model Building and Testing through Theory Building. Outcomes, depicted as parallelograms, include: Insights and Issues; Nomological Net, Constructs and Measures; A Priori Research Models; Descriptive and Comparative Statistics; Loadings and Weights, Structural Analysis; Validated Models; Explanatory Phases; and Challenges and Issues.]

Figure 4-3: Research Design


4.4.1 Benefits of the Mixed-methods Approach

There are well-documented benefits of combining methods. Depending on the

philosophical approach, the mixed-methods approach not only allows the

researcher to access insights into the material, social, and personal worlds in a

research context (Mingers 2001), but it brings benefits to the methods

themselves. Five inter-method benefits of the current approach, based on the

work of Greene et al. (1989), are discussed below.

Triangulation—describes the tests of consistency of the findings obtained

through different research instruments. In this case, not only does the

laboratory study test user responses to the key elements of ES Use, but

triangulation of laboratory data will also increase control and help assess potential threats

to the conduct of the practitioner interviews. Triangulation of interview data will

lend support and add credibility and reliability to the laboratory data, thus

showing whether contemporary ES Use is truly important for evaluating ES

success.

Complementarity—clarifies and illustrates data from one method with the use of

another method. In our case, practitioners’ interviews will add information about

the learning and thought processes that reside in everyday ES Use, and will help

qualify the scores and statistics gathered in the participant survey.

Development—the results from one method shape subsequent methods or steps

in the research process. In this case, data from the quantitative study might not

only inform the interview, but may suggest other assessments of Use that could

be appropriate in the future. Emerging themes from the qualitative interviews

may also be tested further in the research process.

Initiation—stimulates new research questions or challenges data obtained

through one method. In this case, semi-structured but in-depth interviews with

ES practitioners will provide new and richer insights on perceptions of the

impacts of ES and of its daily Use across different sites.

Expansion—provides richness and detail for the study by exploring specific

features of each method. In this case, the integration of interviews and survey

methods mentioned above will expand the breadth of the study, drawing on the

roles and experiences of Indian and Australian users—in the educational

and private sectors—to further the understanding of contemporary Use,

insights that might otherwise have remained dormant.

4.5 The Experiment: An ES Hands-on Experience

4.5.1 The Setting

A leading Australian institute of higher learning introduced a new ES module

in mid-2007. The objective of the module was to facilitate the learning and

awareness of the concepts of ES, the business processes enabled by ES, and ES

software-specific knowledge. The module consists of a detailed teaching case and

a set of instructions to complete exercises in the teaching case. This module

offering was developed within the institute’s Faculty of Information Technology,

which had previously enrolled around 150 undergraduate and graduate

participants in each term. The module runs over a nine-week period of a 13-

week term. The teaching plan entails weekly two-hour lectures to impart key ES

concepts and weekly one-hour computer laboratory sessions, where participants

are engaged in a pre-configured SAP (Version ECC6) system. All participants in

this course received individual access to the generic modules in the SAP system

(that is, Sales and Distribution, Finance and Controlling, Materials Management,

and Production Planning). The facilitators of the course include a senior lecturer,

an associate lecturer, and the chief researcher.

4.5.2 The Process-system Centric Approach

A teaching plan for the module was developed, derived from the main ES-related

knowledge types: (1) software-specific knowledge, and (2) business-process or

organisational knowledge (Davenport 1998), following the ‘learn-by-doing’

approach in Leger (2006).

This approach is further motivated by the demands of the industry for better-

equipped graduates. Specifically, contemporary organisations are shifting their

emphasis of ES from simply delivering ‘economies of scale’ to sustainable ‘value

creation and process orientation’ (Curran et al. 1998; Ferdian 2001). Such

organisations are hence seeking employees to meet the new challenges that

remain beyond the initial implementation. These challenges in post-implementation

range from highly technical maintenance and upgrade skills to

business-process-oriented software skills (Davenport 2000; Markus and Tanis

2000). Despite a healthy demand for business-process experts from the industry,

recent studies by Scott et al. (2002), Kim et al. (2006) and Rosemann and

Maurizio (2005) reveal that most IS graduates possess inadequate ES skills. Leger

(2006) identifies the importance of a carefully documented business scenario in

delivering functional and operational ES expertise.

Elaborating on the teaching plan: first, each activity in the course material has been specifically designed to provide participants with all pertinent types of knowledge, while giving them an opportunity to ‘learn-by-doing’. Second, the

module thus created attempted to provide software-specific knowledge

pertaining to process execution, and software customisation and modification.

Although the software-specific knowledge includes hardware and network

knowledge, such aspects go beyond the foci of the module. The module allows

participants to gain first-hand experience of the features and functionality of the

SAP system. Third, business-process knowledge focuses on providing learning

about the business process and the organisation. The combination of software

and process knowledge therefore creates an understanding of how the ES would

be incorporated into completing the business process.

Given this incorporation, participants assume the role of employees of a simulated

case study organisation where each participant deals with day-to-day

procurement and order-fulfilment business transactions. Hence, transactions

completed in ES refer to the participants’ work processes. Prior to these

execution steps, participants had to set up the working parameters and

environment in the SAP system. In this exercise, the assumption of a role in a

case organisation enabled participants to initiate these business transactions or

work processes and experience business relationships between vendors, clients,

and customers. A workbook describing a teaching case and a core set of

instructions for completing the exercise are provided for participants. Figure 4-4

illustrates the key activities and deliverables for the hands-on ES exercise.

Appendix B presents a further illustration of the user processes for completing

the exercises in the workbook.


The teaching case first describes how the newly introduced ES has changed the face of

operations in the (case) organisation. The case study gives additional material

(for example product, vendor, material, and customer lists) to help the user

complete their tasks, just as in the real world. Next, the teaching case material is

divided into three interrelated phases: (1) preparing the SAP environment for

process execution, (2) procurement execution, and (3) order fulfilment. The set-

up exercise was developed to prepare the SAP environment for execution. The

process here commenced with each participant entering an employee number,

name, and contact information. Once completed, users were required to submit

a series of deliverables using a standard template. Procurement-process

execution involved acquiring a range of products from vendors, using a scenario

described in the case study. In order fulfilment, users change their role from

being the client organisation to the role of being a vendor organisation. At the

completion of each phase, participants are required to submit those deliverables

as evidence of completing the exercises.

Figure 4-4: Key Activities and Deliverables for a Hands-on ES Exercise

4.5.3 Quantitative Data Collection: Survey

Surveys are among the more popular methods used by IS researchers to study

phenomena. This is because they (1) allow researchers to determine the values

and relationships of variables and dimensions; (2) provide responses that can be

generalised to other members of the population; (3) can be reused easily and


provide an objective way of comparing responses; and (4) can be used to predict

behaviour (Newsted et al. 1998). Adopting quantitative surveys for evaluating IS

is a popular approach (Chin and Todd 1995), as seen in Gable et al. (2008),

DeLone and McLean (2004), and Shang and Seddon (2000). Moreover, as evidenced

earlier, ‘Use’ is a commonly employed construct of success in surveys. It is

noteworthy that researchers have tended to prefer cross-sectional survey

methods to study Use of IS (Chin and Todd 1995), even where a longitudinal design (Pinsonneault and Kraemer 1993) is clearly more suitable.

This study gathered survey data at two points in time from the same

participant population. The significance of having two datasets was to perform

cross-validation (cf. Chin and Todd 1995), determining whether the solution of a model fitted to one sample would also fit another sample from the same population.

Consistency between the two datasets would suggest that the findings are stable, and would improve confidence in the overall data and the a priori

model. Specifically, survey data collected from each round are analysed for the

purposes of empirically verifying the relationships (between constructs) posited

in the a priori research model, and for validating the Use construct and

measurement items. The next sections describe the design of the survey

instrument, and the administration of the longitudinal surveys.
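The cross-validation logic described above can be sketched as follows. This is a minimal illustration only: the construct pairing (Use predicting Impact), the simple least-squares fit, and all scores are hypothetical assumptions, not the thesis's actual model or data.

```python
# Sketch: fit a simple linear model on round-1 survey scores, then check
# how well it predicts round-2 scores from the same population.
# Constructs, data, and model form are illustrative assumptions.

def fit_ols(x, y):
    """Least-squares intercept and slope for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def r_squared(x, y, a, b):
    """Proportion of variance in y explained by the fitted line."""
    my = sum(y) / len(y)
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Hypothetical 7-point construct scores (Use, Individual Impact).
round1_use = [2, 3, 4, 4, 5, 6, 6, 7]
round1_impact = [3, 3, 4, 5, 5, 6, 7, 7]
round2_use = [2, 3, 3, 5, 5, 6, 7]
round2_impact = [2, 4, 3, 5, 6, 6, 7]

a, b = fit_ols(round1_use, round1_impact)                  # calibrate on round 1
holdout_fit = r_squared(round2_use, round2_impact, a, b)   # validate on round 2
print(round(holdout_fit, 2))  # a similar fit across rounds suggests stability
```

A high explained-variance figure on the second sample, comparable to the first, is the kind of consistency the cross-validation step looks for.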

4.5.4 The Survey Instrument

The survey instrument incorporates four sections: (1) instructions for completion,

(2) measurement questions, (3) overall criterion questions, and (4) demographic

questions. The front cover lists the introduction and instructions for completing

and returning the survey instrument. Appendix C illustrates the survey

instrument. Thirty-five measurement items—the majority drawn from the interdependent constructs in the measurement model discussed earlier—form the original components of the instrument. The survey operationalises the

dimensions and measures defined in the a priori model. Furthermore, three

overall criterion items (overall IS quality, overall Use, and overall impacts) were

added. Four additional demographic questions were included: the extent of business-process knowledge, the extent of software knowledge, the participant’s age, and the participant’s unique login ID (optional).


For measurement and criterion items, the researcher found the Likert scale15

most suited. A Likert survey comprises a series of statements related to a

stakeholder’s attitude to an object, in this case using a system in the

organisation (Burr, 2000). Statements are either favourable or unfavourable

towards the object. Each participant of the survey has to respond to each

statement. They may respond that they strongly agree or agree; neither agree nor disagree; or disagree or strongly disagree. Survey participants respond to

the questions by ticking one check box per question. For amount items, respondents were asked the frequency of Use of the system, measured in number of days. Finally, respondents were asked the duration of

Use in hours per sitting.
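A minimal sketch of how such responses might be coded numerically for analysis follows. The seven labels are an assumption consistent with the seven-point scale discussed later in this chapter, not the instrument's actual coding scheme.

```python
# Sketch: numeric coding of 7-point Likert responses and 'amount' items
# (days of Use, hours per sitting). Labels and codes are illustrative
# assumptions, not taken from the survey instrument itself.

LIKERT_7 = {
    "strongly disagree": 1, "disagree": 2, "somewhat disagree": 3,
    "neither agree nor disagree": 4,
    "somewhat agree": 5, "agree": 6, "strongly agree": 7,
}

def code_response(item_type, value):
    """Map one survey answer to a number: Likert labels via LIKERT_7;
    amount items pass through as floats."""
    if item_type == "likert":
        return LIKERT_7[value.lower()]
    if item_type in ("days_of_use", "hours_per_sitting"):
        return float(value)
    raise ValueError(f"unknown item type: {item_type}")

print(code_response("likert", "Neither agree nor disagree"))  # 4
print(code_response("days_of_use", "12"))                     # 12.0
```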

For demographic questions that describe the respondents at the time of the

survey, three-point scales were used. First, respondents indicate whether they have never used SAP, have used it, or have used it extensively. Second, respondents indicate whether they have never heard of the tasks (procurement or order fulfilment), have heard of them, or have a thorough understanding of them. As there were two sets of surveys—one

for procurement and one for order fulfilment—the wording of items was

changed to reflect the pertinent tasks. Appendix C illustrates the items used for

the second survey. It is noteworthy that Use items (see Table 3-4) were

intermingled with other Use-related items—that capture users’ interaction with

their tasks—for supplementary purposes. These items, although not directly

related to Use, capture a better understanding of how users feel about the tasks

they complete. Responses from these items provide invaluable feedback to the

educators for future improvements to the module.

4.5.5 Completing and Returning the Surveys

The survey was conducted between August and October 2007. Surveys were

physically distributed to the respondent groups at laboratory sessions or, in a few special cases, via email. Each survey typically took respondents approximately 15 to 20 minutes to complete. Participation in the survey was voluntary, and participants were under no obligation to complete the survey should they choose not to do so. Participants were asked to complete and return the survey by the end of the day.

15 Other methods include the semantic-differential method, which consists of a concept—in this case, using a system in the organisation—and a set of bipolar scales; the participant indicates the direction and intensity of the association. The semantic-differential scale is best portrayed in the Repertory Grid methodology, used in many fields for eliciting and analysing knowledge and for researching almost any issue in a more precise and less biased way (Stewart and Stewart, 1981; Tan and Hunter, 2002).

The first survey was conducted at Week 5 (from the commencement of the

course). The number of completed responses returned and included for analysis

was 103. The second survey was conducted at Week 9. This time the number of

completed responses returned and included for analysis was 91. The number of second-round responses that could be matched by login ID to first-round responses was 57. The drop in matched numbers is in keeping with the promised anonymity of the

survey, where participants had the choice of not indicating their login ID.
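The matching of the two rounds on the optional login ID can be sketched as follows; the IDs and scores here are invented for illustration and do not reflect the actual response data.

```python
# Sketch: pair round-1 and round-2 survey responses on the optional
# login ID. Respondents who left the ID blank (None) cannot be matched
# and are dropped, mirroring the drop from 103/91 returns to 57 matches.

round1 = [("u01", 5), ("u02", 6), ("u03", 4), (None, 3)]  # (login ID, score)
round2 = [("u02", 6), ("u03", 5), ("u04", 7), (None, 2)]

def match_rounds(r1, r2):
    """Pair responses across rounds by login ID, keeping only respondents
    who supplied an ID in both rounds."""
    first = {uid: score for uid, score in r1 if uid is not None}
    return {uid: (first[uid], score)
            for uid, score in r2
            if uid is not None and uid in first}

matched = match_rounds(round1, round2)
print(len(matched))    # 2
print(matched["u02"])  # (6, 6)
```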

4.5.6 Minimising Measurement Error

This section accounts for potential sources of systematic variance that may

result in measurement errors and the steps to minimise the variability in error

through survey instrument design.

Whitman and Woszczynski (2004) report that, although method bias reduces researchers’ ability to measure a construct truly, few researchers control for its effects or explicitly mention its potential in a study. Burton-Jones and Straub (2004) purport that a measure’s variance is made up of variability due to true score, variability due to randomness, and, more importantly, variability due to systematic error.

Their study further distinguishes between two components of systematic

variance—common method bias and distance bias—which can potentially lead to

inaccurate measures of system Use and inaccurate measures of its relationships

with other constructs. The relevance of the above equation16 is that given the

multidimensional nature of Use, measuring its true score requires one to

measure each dimension of Use with minimum common method bias and

distance bias.
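The decomposition referred to above can be written as follows. The notation (true score T, systematic component S, random error E) is our own shorthand for Burton-Jones and Straub's (2004) description, with the systematic component split into its two stated parts:

```latex
% sigma^2_x: observed variance of a Use measure
% sigma^2_T: true-score variance; sigma^2_S: systematic error; sigma^2_E: random error
\sigma^2_x = \sigma^2_T + \sigma^2_S + \sigma^2_E, \qquad
\sigma^2_S = \sigma^2_{\mathrm{common\ method}} + \sigma^2_{\mathrm{distance}}
```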

16 The equation and the impacts of method bias are summarised in Burton-Jones, A. “Minimizing Method Bias through Programmatic Research,” MIS Quarterly (33:3), September 2009, pp. 445-471.

The potential sources of common method bias include (1) the effects of the person doing the rating, (2) the characteristics of the research instrument items, and (3) the context of these items (Podsakoff et al. 2003). We turn now to the steps taken during survey instrument design to address these three sources.

First, rater effects (1) are generally concerned with social desirability bias, acquiescence (‘yea-saying’) bias, and the mood of the raters. Participants in the

research are judged not to have (or be informed of) motives for personal gain (for

example receiving a payment or other reward) for completing the two

questionnaires, or for returning a biased response. Participants are encouraged

to partake in the survey, but only after the reminder of the value and importance

of their feedback in helping the researchers to review their course. At both collection times, the surveys are administered immediately after laboratory or teaching sessions. This minimises individuals’ difficulty in recalling past behaviour and then rating that behaviour once recalled. Throughout the course,

participants develop a knowledge of the ES implementation lifecycle and IS

critical success factors, but they are generally not required to be familiar with its

nuances (for example formative nature or snapshot basis) prior to completing the

survey. These measures reduce the raters’ mental distance from the constructs

and the items, thus reducing distance bias.

It is logical that the effectiveness of a research instrument depends largely on its

measurement items. Despite this being a reasonable assumption, it needs some

clarification. From a measurement error standpoint, two factors—the

characteristic and the context of items—are crucial sources of method error

(Podsakoff et al. 2003).

Second, characteristics of items (2) that contribute to method variance include social

desirability, demand characteristics, scales, and item wording. The context of

the design of items constitutes item priming, context-induced mood, grouping

items, and scale length. Logically, some of these factors overlap. Dealing with the effects of social desirability bias begins with clarifying the

purpose of the inclusion of items. In this quantitative phase of study, the items

probe participants’ perceptions of ES for individual learning. The items generally

do not prompt a certain response (for example confronting a drug-use question

may prompt a person to play down their response, as society treats drug use as

illegal); and neither are they sensitive to participants’ interpretations (for

example income level or religion). Further, items are parsimonious, not repeated,


and embed key principles of Use (system, information, task, or the user). The

last point also minimises the effects of distance bias. As one of the propositions

of this study, self-rated items are treated as more valuable than independently

generated observations such as computer logs (found in many earlier measurements of Use), which make capturing Use as behaviour questionable.

Despite addressing the problem of distance bias (self-rated data of Use are closer

to the construct space), relying on self-reported data contributes to common

method biases. According to Burton-Jones and Straub (2004), if a researcher

wishes to measure system usage with a minimum of method variance, they

should measure the behavioural dimensions of Use (types of Use) via

independent observation, and cognitions (attention level) via self-reported data.

This consideration is made in the light of the research questions.

Turning, third, to item-context-related effects (3): first, the survey is voluntary, and raters have no ulterior motives, nor are they induced to complete the

survey. Further, items are generally not relevant to users outside the context of

using ES in classrooms. In terms of specifying a scale, the seven-point Likert

scale is appropriate and it covers a range of responses. The scale caters for

raters with strong opinions (strongly agree or disagree), marginal opinions (agree

or disagree) and no opinion (neutral) about a particular item. On the other hand,

two item-context-related effects of method bias are noted. First, items are

grouped according to their constructs and the groupings are explained, the intention being clarification. While dimensions resonating

from the IS success models probe users’ perceptions of the system, information,

and impacts, Use items are designed to probe users’ cognitions and behaviour

during their Use. Second, both positively and negatively worded items are used

but they are designed to evoke a particular response about the item, which is relevant to capturing a holistic statement of the Use phenomenon.

To tackle the effects of measurement error caused by systematic variance,

multiple methods are used—both self-reported data and independent

observations—within dimensions of the Use construct and across models in its

nomological net. As suggested by Burton-Jones and Straub (2004), the former is

to tackle the effects of distance bias, and the latter the effects of common method variance.

This phase of the study emphasises self-reported data and, although restrictive,


meets the objectives of answering the research questions. We demonstrate—with evidence from the instrument design—that attempts have been made to minimise measurement error, though these alone are insufficient for arriving at a near-‘true’ score of the construct.

4.6 A Qualitative Perspective: ES Managers’ Experience

Researchers adopt a qualitative perspective if they want to delve into a subject

area that has not been thoroughly investigated before (Hunter 2004). Given the

above, a qualitative investigation is conducted to explore the patterns of Use in a

natural setting where ES is adopted, and to inform results from the quantitative

investigation better. Furthermore, using only the quantitative method and data

has shortcomings. These are: (1) the emphasis on course participant

respondents, (2) a focus on a laboratory setting, rather than a natural work

setting of ES, (3) researcher bias, and (4) narrow qualitative content.

First, the quantitative data are drawn largely from participant respondents or

external stakeholders of an organisation instead of internal stakeholders.

While participant findings may have the potential for generalisation, qualitative data on attitudes, opinions, impressions, and beliefs of real-world practitioners in

their contextual situations and experiences of ES Use are useful. Second,

although the quantitative study mimics a real-world scenario, this research is

conducted in an unnatural environment with a certain level of control. This level

of control may not normally be available in the real workplace. Third, questions

in the quantitative survey (designed by the researcher) may be biased and

perhaps lead to false representation as outlined earlier. Participants may

respond to the questions themselves rather than to their experience. Finally,

quantitative data define and provide numerical descriptions, rather than detailed

accounts of human perception. Hence, we seek richer, more descriptive content

for the key hypotheses in the a priori model.

In addition to earlier stated motivations (see Section 4.4), the researcher seeks

in-depth answers to questions such as the following. What are the general

attitudes concealed in post-implementation Use? What patterns of Use are most

prevalent? For answering our ‘how’ and ‘what’ questions, and addressing these

shortcomings, the researcher attempts to gather the opinions of practitioners on


the key hypotheses in the research model: that is, on how the context of ES

influences their daily Use and in turn determines its impacts.

This phase of the project examines the routines and issues of everyday ES users

in different organisations. To do this, we chose interviews as the appropriate

data-collection technique. The rest of the section discusses the advantages of

interviews for this study, the interview protocol, the interviewees’ profile, and the

interview conduct.

Although the objective of the interviews and some of the analysis employed

therein are similar to that of a case study methodology in design, this research

(the interviews) is not a case study. It is better described as a phenomenon-focused design: the study focuses on a particular phenomenon. However, insights and issues reported here

can be included in a future case study or in survey designs. A case study, on the

other hand, employs multiple methods of data collection to gather information

from one or a few entities such as people, groups, or organisations. Research

that adopts the case study approach can be found to source data from multiple

stakeholder groups within a single organisation (Berchet and Habchi 2005;

Tchokogue et al. 2005; Yuseuf et al. 2004), or multiple stakeholder groups

across multiple organisations (Parr and Shanks 2003). Case-study methods

involve a more in-depth and longitudinal examination of a single instance or

event: the case (Yin 1994; Yin 2003). A case study would provide a more

systematic way of looking at the events, collecting data, analysing information,

and reporting the results.

4.6.1 Qualitative Data Collection: Interviews

This section describes the interview method. Interviews are the most widely used

method in qualitative research for collecting rich data (Bryman and Bell 2007;

Taylor and Bogdan 1998). An interview is an appropriate and revealing

qualitative data-gathering method, as it may provide rich insights into the life

experiences, motivations, feelings, and perspectives of individuals. Although for

the interviewee the interview can be time consuming and a potential threat to

privacy, the advantages of conducting interviews and doing them properly are

well documented (by Myers 2009 among others). Interviewees can freely express

their opinions on particular phenomena in their own words and thoughts.


Individual interviews are useful and appropriate for this study in a sequential

explanatory design (see Figure 4-2), as they provide objective reality (Klein and

Myers 1999) and rigour for the study method through strengthening the

precision, the validity, and stability of the findings (Miles and Huberman, 1994).

Multiple interviews allow the researcher to shape the analytic strategy to

compare findings based on theoretical propositions. Similar treatment of

interviews (as part of a larger qualitative research project) to address ES-related

research questions is not rare. For example, Shang and Seddon (2002)

canvassed 34 case interviews to classify ES success measures. Ross and Vitale

(2000) examined 40 hour-long interviews to describe stages in the ES

implementation journey.

Interviews are generally of three types: structured, semi-structured, and

unstructured. For this study, a semi-structured interview is undertaken. Semi-

structured interviews contain some pre-formulated questions, but do not require strict adherence to them (Myers 2009). A semi-structured

interview is appropriate where the researcher has a clear focus (Bryman and Bell

2003). Considering this, a semi-structured interview is appropriate for this study,

as it helps to control the kind and amount of data obtained, and it maximises

the usefulness of time spent with the interviewee(s). Probing questions and an

interview guide generally supplement a semi-structured interview (Robson 1993).

However, McCracken (1988) warns researchers to retain the elements of freedom

and variability within the interview.

4.6.2 Interview Protocol

A semi-structured interview typically encompasses an interview guide

(McCracken 1988), or what is referred to here as the interview protocol, in which the questions to be asked are listed and may be varied. The interview protocol and

guide were designed and followed to introduce commonality, while minimising

the potential for overlooking the unique aspects of each context (Firestone and

Herriott 1982). The questions for exploration in the interview protocol, based on a thorough literature review, should reflect known findings in this area of investigation and concepts that other researchers have used. Following

scholarly advice (in Patton 2002), questions that are inappropriate are omitted,

and at the same time additional probing (and therefore unplanned) questions are


used to understand the topic fully (see Appendix E). The advantages of having a

protocol are that it increases the comprehensiveness of the data obtained,

ensures each interviewee addresses all issues, helps improve the researchers’

ability to listen, ensures researchers are not distracted by formulating questions, and

yet it retains the conversational nature of the interview (McCracken 1988).

Appended to the interview protocol is a document detailing information for the

potential interviewee. When interviewees are first contacted they receive this

document. It contains information about the researchers, the aims and

objectives of the research, the length of the interview, the pertinent risks,

information on consent, and an ethics statement. The distribution of such a

document at initial contact adds clarity to the background information required

by the interviewee for preparation, so as to maximise the benefit of the time spent in

the interview (Robson 1993). Appendix D presents the document and the

instructions for the interviewee, outlining the interview process and their rights.

Appendix E indicates the general flow of the interview, summarising the planned

and unplanned questions used during the course of the interview. Where

appropriate (see the later discussion in Section 4.6.4), the sequence of questions

asked varies. Table 4-2 lists the questions in the interview protocol.

Profiling: Who are the interviewees, what do they do in their jobs, and how does the ES play a part in what they do?

- Employment level: What role do you have in your organisation? Can you describe your department of work?
- Experience in role: How long have you been working in the current organisation? What sort of experience do you bring to this role?
- System knowledge: Is this your first experience of using the system? What system were you using previously?

Learning: Where does the knowledge for the job come from, how did they get it, and is that knowledge sufficient?

- Systems: Can you describe the systems you use for your daily role?
- Tasks: Can you describe the tasks that you do with the system? What is it about your role that makes you want or not want to do it?
- Information: Can you describe the information generated from and put into the system? What are some of the outputs, and how do they compare to those of other systems?

Initiation: At the early stages, what did they expect of the system, what were the biggest problems, and did the system bring changes?

- Training: Can you describe the training you undertook for the current role?
- Support: Can you describe the support system (if any) in your organisation?
- Initiation: Can you describe briefly what you went through in the early stages of joining the current organisation?

Routine: How comfortable and proficient have you become in using the system, and what and how much has changed since using the system?

- Attitude: Can you describe how you felt while using the system today? Why do you feel, or think you feel, this way? When using the system do you feel challenged, confident, or have a sense of respect?
- Appropriateness and nature of Use: What do you see as the difference between this current system and the previous one that you were using? What can (or cannot) the system do better? How dependent on the system have you become? For what else do you use the system?

Impacts: What benefits (if any) has the system brought?

- Consensus: Do all the other colleagues feel the same way about these systems as you do?
- Individual impacts: How would you rate the system and why? In what ways did the system help you in your current role? Do you think you were better or worse off with the introduction of the system?

Table 4-2: Interview Protocol

4.6.3 Interviewee Profiles

A set of selection criteria for the appropriate profile of interviewees and their context of work was developed for the interviews. These criteria include: (1) the Use of ES is predominant in the interviewee’s organisation; (2) the

interviewee has had at least a year of first-hand experience with an ES; and (3)

the processes or actions claimed must be real or reported.


Eventually, six interviews were conducted with six operational managers from six different organisations (not companies) in one common geographical location in India (see Statement A below). Considering earlier established

motivations, the particular profile of ES users—operational managers—is

appropriate. Operational managers are active, in constant contact with the ES,

and are generally more aware of management issues (Cooper and Ellram 1993).

Using the employment cohort classification in this study (see Table 3-3), an

operational manager is a role closest to a blend of a management and an

operational cohort.

The researcher obtained interviewees through the recommendation of a visiting

academic from the same geographic location as the potential interviewees. All

potential interviewees were initially contacted by email. From the pool of eight

potential interviewees, six confirmed their participation and were subsequently

contacted to arrange interview times and dates, and to provide their working-

profile details. The demographics, and profile descriptions of each of the six

interviewees and their organisations are summarised in Table 4-3.

“India makes an interesting case as the common impression of India is that it’s a

technology savvy country, they would expect IT infrastructure to be matured and

prevalent. People are often guilty of associating good technology with India. This is

not the case surprisingly. ‘India develops IT for developed countries’. The

penetration of IT is still shallow. The key issue is cost”.—A conversation trail with

the visiting academic (Statement A).

It is coincidental that two pairs of the six interviewees (respondents R2 and R3,

and respondents R1 and R5) work for the same company, but in different

departments. This should allow us to attest to the credibility of our

interpretations by observing whether their responses reach a consensus.


Each entry below gives: interviewee identifier; company identifier and business; corporate profile; organisational role (length of time in role; period spent with ES); and department (number of full-time employees).

- R1: TPA Limited—manufacturing, research, and export of therapeutic products. Six marketing divisions; a 2,300-strong field force caters to around 200,000 doctors across the country. Assistant Product Manager (13 months; 13 months). Marketing division (5).
- R2: TP Limited—generation, transmission, and distribution of power. 1.9 million customers. Assistant Payment Manager (13 months; 7 months). Treasury and Finance (35-40).
- R3: TP Limited—generation, transmission, and distribution of power. 1.9 million customers. Assistant HR Manager (14 months; 13 months). Human resource department (25).
- R4: R Limited—exploration, production, refining, and distribution of petroleum products and chemicals. Large oil and gas acreage holder among private-sector companies. Business Development and Sales Manager (12 months; 12 months). Business Development and Sales (5,000).
- R5: TPA Limited—manufacturing, research, and export of therapeutic products. Six marketing divisions. Assistant Operations Manager (14 months; 14 months). Techno-commercial department (5).
- R6: F group—an agri-service cum rural retail chain. One of India’s leading rural retail chains. Store Manager (18 months; 18 months). Store operations (22).

Table 4-3: Overview of Interviewees and their Organisations

4.6.4 Conducting the Interviews

Data collection was conducted over a two-week period and completed in March 2008. Each interview followed the initial establishment of the interviewee’s profile, ensuring that questioning was sensitive, tailored to the role of the interviewee, and thus comfortably answered. Each

interviewee heard a statement of confidentiality and anonymity before the

Conceptualising Use for IS Success

Page | 131

telephone interview began, and they were assured the same confidentiality and

anonymity for themselves as for the data.

Although these interviews were not conducted face-to-face, the researcher contacted the interviewees beforehand to ensure that each interview took place in a private setting, such as the interviewee's home or workplace, free from interruption and where the interviewee felt relaxed (Taylor and Bogdan 1998). The aim was to ensure that the interviewee was sufficiently at ease to provide full and honest answers to the questions asked. Each interview took at least one hour.

Interviews were recorded and transcribed to ensure that a complete record was

made of the interview in the interviewee’s words (Bryman and Bell 2007;

Seidman 2006). Analysis should proceed at the same time as data gathering (Taylor and Bogdan 1998); as new themes were identified in the initial interviews, questions were added to the protocol for subsequent interviews to test those themes.

Throughout the conduct of the interview, questions of differing nature were used.

Opening statements explained the researcher’s interest, and affirmed that what

interviewees said would be important. Open-ended questions should be used

throughout such interviews to allow the interviewees to express opinions freely

(Patton 2002). Probing questions or follow-up questions attempted to clarify

what the interviewee said. Closing questions asked interviewees if there was

anything to add to their responses. An essential step was to close an interview

with a review or follow-up, to thank the interviewee, and to tell them what to do

with the data they provided. Hence, the researcher paid careful attention to issues sensitive to the interviewee and attempted, as far as possible, to relate to each interviewee on an individual level. To this end, topic avoidance, deliberate distortion, and misunderstanding of questions were treated carefully. In addition, interviewees were reassured throughout that their anonymity was guaranteed.

The questions in the interview were generally organised in logical phases. These

were (1) profiling—to define the role of the interviewees, (2) learning—to describe

sources of knowledge for the job, (3) initiation—to describe the early stages of

Use and the problems faced, (4) routine—to describe subsequent Use and

settling into a role, and (5) impacts—to describe perceptions of a system and any

net gains. Table 4-2 lists and categorically arranges the questions and the aspect

of response sought in the interview protocol. The list does not indicate the order


of the questions during the interview, as these were intermingled according to

the direction that the interviews took. Appendix E illustrates how a typical

interview generally flows and how questions in Table 4-2 relate. This, and the

open-ended exploratory nature of the questions avoided leading the interviewees

(Yin 2003); despite this, questions generally pointed to the main objects and the

inter-relationships in our conceptual model. We note that these questions serve

as triggers for more in-depth questions about a particular theme of value.

4.6.5 A Statement on Analytical Tools

For the conduct of in-depth analysis, spreadsheet tables and qualitative research software (NVivo 8¹⁷) were employed to organise the wealth of information

transcribed from the interviews. The transcribed interviews were first coded in

NVivo 8. The purpose was to discover patterns, identify themes, and ultimately

to make sense of the semi-structured interview material in the light of our initial

propositions. Frequency counts and matrix techniques were used to identify the

most frequently occurring statements, keywords, and reactions. These results

were then exported to spreadsheets (see Figure 4-5) to develop more focused

mappings of the coded responses and theoretical intentions, and to add more

meaning to our interpretations. Spreadsheets became the core instrument with

which more focused coding and mappings were developed. We compared the

mapping tables back and forth with our consolidated theoretical perspectives to

shape emerging theoretical opinions of the Use phenomena. Appendix F contains

examples of the mapping of responses to study themes. These responses indicate

the actual occurrences that best¹⁸ match the aspects of Use studied.

17 NVivo 8® is a registered product of QSR International Pty Ltd© 2007.

18 We use a combination of frequency counts and keyword matching (reported occurrences and study themes) capabilities of Excel.
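The frequency-count and keyword-matching step described above can be sketched in code. This is a minimal illustration only: the transcript fragments and the keyword list below are hypothetical, and the actual analysis was performed with NVivo 8 and Excel rather than a script.

```python
from collections import Counter
import re

def keyword_frequencies(transcripts, keywords):
    """Count occurrences of each study keyword across interview transcripts.

    `transcripts` maps an interviewee ID to transcript text; `keywords`
    is the list of coded study themes to match (case-insensitive).
    """
    counts = {kw: Counter() for kw in keywords}
    for respondent, text in transcripts.items():
        tokens = re.findall(r"[a-z']+", text.lower())
        bag = Counter(tokens)
        for kw in keywords:
            counts[kw][respondent] = bag[kw]
    return counts

# Hypothetical fragments standing in for transcribed interviews.
transcripts = {
    "R1": "the system was confusing at first but training helped",
    "R5": "training on the system improved my confidence",
}
freq = keyword_frequencies(transcripts, ["system", "training"])
print(freq["system"]["R1"], freq["training"]["R5"])  # 1 1
```

The resulting per-respondent counts correspond to the frequency/matrix tables that were exported to spreadsheets for focused coding.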


Figure 4-5 Sample of Spreadsheet Exported from NVivo

4.6.6 Qualitative Validity

To judge and account for the validity of this qualitatively oriented research, we adopted a set of four standards; Table 4-4 summarises each standard's definition and the party responsible for it. These standards were first offered as an alternative to more traditional, quantitatively oriented criteria by Guba and Lincoln (1994), and were further examined by Trochim (2006). Although there has been some debate among methodologists about the labelling, philosophical perspectives, and ultimately the legitimacy of these standards when translated from quantitative criteria, Trochim (2006) argues that such criteria apply equally well to qualitative research. Specifically, Trochim (2006) reminds us to emphasise legitimate operational procedures for assessing validity and reliability, in a similar way to how we validate quantitative research.

As reported earlier, we have extensive knowledge of the backgrounds of the managers interviewed. Furthermore, all interviewees were enthusiastic in their responses and eager to describe in-depth instances of an interview topic. Although this helps enhance the accuracy of their responses and opinions, we took further steps to improve the credibility of the material derived from the interviews. Since, from a credibility


perspective the purpose of qualitative research is to describe or understand the

phenomena of Use from the managers’ eyes, the managers ‘are the only ones’

(Trochim 2006) who can legitimately judge the credibility of the results.

Aspect of Validity* | Description | Responsibility
Credibility | Results of qualitative research are credible or believable from the perspective of the participant in the research study. | Participant
Transferability | The degree to which results from qualitative research can be generalised or transferred to other contexts or settings. | Researcher
Dependability | Emphasises the need for the researcher to account for the ever-changing context within which research occurs, and how these changes affected the way the researcher approached the study. | Researcher
Confirmability | The degree to which the results could be confirmed or corroborated by others. | Researcher

Table 4-4: Summary of Qualitative Validity Standards

*Source: Trochim (2006)

As reported earlier, the researcher used probing and follow-up questions during the interviews to read back responses and ensure that interpretations made during the interview were accurate. After the interviews, the researcher contacted respondents about interview data that were unclear or incomplete; the preferred channel for these follow-ups was email. In the researcher's opinion, no further follow-up interviews were required to verify the interpretations and analysis.

Furthermore, credibility of managers’ responses was established when

comparing responses among managers; respondents from the same company

expressed similar concerns over the same system and (or) management

structure (see examples in sections 6.2 and 6.3, notably R1 and R5, and R2 and

R3).

The qualitative researcher can enhance transferability by doing a thorough job of

describing the research context and the assumptions central to the research

(Trochim 2006). The person who wishes to ‘transfer’ the results to a different

context is then responsible for making the judgement on how sensible the

transfer is. The context and assumptions are described later in Section 6.2.1.


This study focuses on the opinions and conditions surrounding managers in

post-ES implementation Use, in an attempt to better understand and characterise trends in contemporary Use. Although one can logically argue the potential for

single stakeholder cohort bias in this case, it is reported that managers are still

important and credible sources for data (see Section 6.2.3).

The idea of dependability emphasises the need for the researcher to account for

the ever-changing context within which research occurs. The researcher is

responsible for describing the changes that occur in the setting, and how these

changes affected the way the researcher approached the study. Responses from

the first manager's interviews gave insights that helped to guide the subsequent interviews and thus maximise the use of interview time. In addition, respondents were interviewed when they were most comfortable (either in their homes or at work) to generate more considered responses.

There were a number of strategies applied in this study for enhancing

confirmation or corroboration of unique perspectives identified in the study: (1)

checking and rechecking the data, (2) verifying data with another researcher,

and (3) describing instances of contradictory observations. Two other

researchers (a doctoral student and a senior academic) participated in the data-

verification process (see also Section 4.6.6). One researcher sat through the

telephone interviews and played the role of note taker. Following the interviews,

notes were compared with transcribed interviews and later verified by the senior

academic. Analysis illustrated in Section 6.4 suggests that instances of

phenomena were compared with findings across other interviewees and with

prior literature. The similarities and differences were noted and discussed to the best of the investigator's knowledge.

4.7 Summary

This study proposes a mixed-methods research design, comprising a two-phase

sequential explanatory approach to collect empirical data to verify the research

agenda. The first phase seeks quantitative evidence to validate the research

model hypotheses; and the second phase proposes qualitative data to form

insights into the dynamics of contemporary Use, and implicitly to generate

explanations and (or) support for key findings in the model-testing phase. To


accomplish this, a survey method in the first phase tests the a priori model

developed to predict the relationships between Use and the other primary

constructs that make up its nomological net.

A dual survey was chosen over a cross-sectional approach to derive regularities

and to demonstrate better the role of Use over time. The survey was conducted

to canvass participants’ initial and ongoing experiences with a SAP system

introduced to help participants complete a series of tasks outlined in their

course. Data collected from the two surveys are analysed; the a posteriori data

analysis is presented next. Careful design of the survey instrument and

measurement context reduces common method variance.

While survey findings attempt to demonstrate—using quantitative data—that

Use could be an important antecedent, consequence, and dimension of IS

success, the question of why users responded in the ways they did to the survey

is unanswered. To address this, we undertake a qualitative study to explain ‘how’

users interact with contemporary information systems, of which ES are an

example. This study adopts interviews to canvass practitioners’ responses on Use in the real world, and maps those responses onto the theoretical expositions established earlier. This phase of the study is largely explanatory in nature, subscribing to a reasonable degree of underlying rationale and direction. Quantitative and qualitative results are integrated at the later (discussion) stages of the research.


Chapter 5: Survey Data Analysis and Findings

5.1 Introduction

This chapter presents results from the quantitative investigation. In this chapter,

statistical findings reported are examined in two parts; first descriptive statistics

and the measurement model are presented, followed by inferential statistics and

the testing of structural models. As outlined earlier, empirical data are collected

from two surveys. The first was conducted in Week 5 (referred to as T1 data) and

the second in Week 9 (referred to as T2 data) of a nine-week course. The number

of completed responses returned across T1 and T2 were 103 and 91 respectively.

The survey was anonymous; however, participants had the option of including their ES login ID, intended to facilitate the matching of responses and more detailed multivariate analysis, although this was not the main objective. Fifty-seven responses from the first round were matched by user ID with responses from the second round. While T1 data are sufficient for describing a dataset and testing the

structural models, T2 data are for confirmation of certain relationships and the

hypotheses established. Descriptive statistical analysis was performed using

SPSS® (version 16.0.2), and inferential statistics were derived by adopting SEM

techniques using Partial Least Squares (SmartPLS®).

Descriptive statistics describe and summarise groups of respondents and the set

of responses at T1. In descriptive statistics, the focus is on simplifying and

presenting the large amount of data in a manageable form or summary by adopting techniques such as distributions and measures of central tendency (Trochim

2006). Herein, the demographics of the participants are described. Based on the

descriptive statistics collected and compared, we make inferences about how ES

Use might transpire over time. The position taken in this study is that ES Use is

a construct measured by users’ attitude, depth of Use, and amount of Use.

These three dimensions are reflective and several indicators capture them.

Subsequently, the reliability and validity of the Use construct and its indicators

are assessed.

Inferential statistics are used to support inferential statements about the

population; that is, performing formal hypothesis tests on the scores. Following

the examination of the measurement models, several structural models were


examined with respect to the role of Use. In the light of appropriate techniques

emphasised by IS scholars (Gefen et al. 2000; Petter et al. 2007), and with

consistent theoretical considerations, the a priori research model is analysed

through PLS structural analysis. The objective is to examine which model best

explains the relationship between Use and other key IS success dimensions to

yield a more positive influence on impacts from IS. Inferential statistics derived

follow a set of specific guidelines for establishing the statistical conclusion

validity of the models. Each step and criterion for validating the reflective and formative constructs in the models is reported and interpreted. Subsequently, the hypothesis tests and checks of model validity are summarised. Finally, the amount of Use and other implications of the empirical investigation are re-examined.

5.2 Demographics and Descriptive Statistics

This section describes the distribution of survey participants by their knowledge

and their age. While there are many ways to illustrate the raw distribution data, including bar charts (or histograms) and pie charts of the amount of Use, a simple table is used here.

Table 5-1 shows the percentages of participants in different levels of business

process and systems knowledge (none, some, and thorough), and the distribution

of the age of the sample group. It is relevant to gauge participants’ knowledge, as

part of gauging the level of impact of the ES course.

We note the demographics of the sample based on the assumptions that

participants will improve their knowledge of the systems and the process over

time, and that comparison of ES knowledge among participants is not the core

focus of the study. However, understanding the demographics of respondents

helps define the parameters of the investigation. The majority of participants had

no (35%) or only some (>55%) understanding of the business processes in the exercise.

The business process referred to here is procurement and order fulfilment. Given

that most of the participants enrolled in the course are first-time users of SAP,

the majority of participants (>75%) have limited to no prior knowledge of the SAP

system or its functionalities. However, this trend is partly explained by the

relatively young cohort of participants who are enrolled in the subject and who


responded to the study. More than 60 per cent of the sample cohort is younger

than 25 years, while a smaller percentage (<25%) are working, mature-age

participants. The sample size (N = 103) is adequate for further SEM analysis,

where Kline (1998) recommends 10 times as many cases as variables.

Items | Sub-items | N (103)
Process Knowledge | No understanding | 36
 | Some understanding | 57
 | Thorough understanding | 10
System Knowledge | No understanding | 81
 | Some understanding | 21
 | Thorough understanding | 1

Table 5-1: Sample Demographics

Skewness and kurtosis tests on the Use dimensions further support distributional normality. As the skewness statistic departs further from zero, a positively skewed distribution (>2) is indicated by scores bunching up at the low end of the score scale; conversely, a negatively skewed distribution (<−2) indicates scores bunched up at the high end of the scale. As the kurtosis statistic departs further from zero, a significant positive value (>2) indicates a distribution that is too tall, while a significant negative value (<−2) indicates a distribution that is too flat. Statistics of 2 standard errors or more (regardless of sign) probably indicate departure from normality to a significant degree. The skewness and kurtosis statistics of attitude, depth, and amount of Use range from −0.819 (standard error 0.239) to 0.362 (standard error 0.474).
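The skewness and kurtosis checks described above can be reproduced with the textbook population-moment formulas; a minimal sketch (the sample data here are illustrative, not the study's):

```python
import math

def moments(xs):
    """Population central moments of a sample."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return n, m2, m3, m4

def skewness(xs):
    _, m2, m3, _ = moments(xs)
    return m3 / m2 ** 1.5

def excess_kurtosis(xs):
    _, m2, _, m4 = moments(xs)
    return m4 / m2 ** 2 - 3

def is_significant(stat, se):
    # Rule of thumb used in the text: |statistic| beyond 2 standard errors.
    return abs(stat) > 2 * se

xs = [1, 2, 3, 4, 5]                 # illustrative, perfectly symmetric data
se_skew = math.sqrt(6 / len(xs))     # approximate standard error of skewness
print(round(skewness(xs), 3), is_significant(skewness(xs), se_skew))  # 0.0 False
```

Statistical packages (SPSS, as used here) report these values directly; the sketch only makes the "within two standard errors" rule explicit.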

Item | Description | N** | Mean | Standard Deviation
AM1 | frequency* | 102 | 5.38 | 1.42
AM2 | duration* | 102 | 4.37 | 1.46
SQ1 | ease of Use | 102 | 4.30 | 1.58
SQ2 | ease of learning | 102 | 4.24 | 1.52
SQ3 | meets requirements | 102 | 4.54 | 1.17
SQ4 | ease of access | 102 | 4.49 | 1.55
SQ5 | features and functions | 102 | 5.31 | 1.18
SQ6 | system accuracy | 102 | 4.75 | 1.32
SQ7 | system adaptability | 102 | 4.20 | 1.39
SQ8 | level of complexity | 101 | 4.20 | 1.60
SQ9 | level of integration | 101 | 5.33 | 1.30
SQ10 | level of customisation | 101 | 4.02 | 1.44
IQ1 | output accuracy | 102 | 4.98 | 1.13
IQ2 | output usability | 101 | 4.73 | 1.11
IQ3 | ease of understanding | 102 | 4.75 | 1.25
IQ4 | formatting | 102 | 4.93 | 1.14
IQ5 | conciseness | 102 | 4.90 | 1.12
DP1 | clarity of goals | 102 | 4.53 | 1.18
DP2 | clarity of given state | 101 | 4.58 | 1.29
DP3 | configuration value add | 102 | 4.89 | 1.15
DP4 | strategic value add | 101 | 4.95 | 1.21
DP5 | exploration | 102 | 3.89 | 1.62
AT1 | reward | 102 | 4.26 | 1.36
AT2 | intrinsic interest | 101 | 4.45 | 1.57
AT3 | acceptance | 102 | 5.53 | 1.33
AT4 | comfort | 103 | 4.43 | 1.58
AT5 | respect | 103 | 4.34 | 1.52
AT6 | challenge | 103 | 5.30 | 1.27
II1 | learning | 102 | 4.77 | 1.50
II2 | awareness | 102 | 4.97 | 1.36
II3 | task effectiveness | 101 | 4.77 | 1.38
II4 | task productivity | 101 | 4.68 | 1.43
II5 | task performance | 101 | 4.76 | 1.35

Figure 5-1: Descriptive Statistics

*Note: Frequency and duration are measured on a 1 to 3 scale (frequency: once a week, a few times a week, many times a week; duration: <1 hour, 1 to 2 hours, >2 hours). We deliberately recomputed the values on a 7-point scale for further statistical conclusion validity tests.

** Missing values (<5% of sample) were dropped to prevent distortion of any multivariate analysis (Kalton and Kasprzyk 1982).


[Figure: two bar charts compare, for the 57 matched respondents, the duration of system Use per sitting (less than 1/2 hour; 1–2 hours; more than 2 hours) and the frequency of system Use (less than once a week; a few times a week; at least once a day) at Time 1 and Time 2. The accompanying tables report central tendency and dispersion.]

Duration of system Use per sitting | Time 1 | Time 2
Number of Cases | 57 | 57
Missing | 0 | 0
Mean | 1.86 | 1.81
Standard Deviation | 0.639 | 0.667
Variance | 0.409 | 0.444
Minimum | 1 | 1
Maximum | 3 | 3

Frequency of system Use | Time 1 | Time 2
Number of Cases | 57 | 57
Missing | 0 | 0
Mean | 2.28 | 2.44
Standard Deviation | 0.648 | 0.627
Variance | 0.420 | 0.393
Minimum | 1 | 1
Maximum | 3 | 3

Figure 5-2: Distribution, Central Tendency, and Dispersion of Amount of Use

As mentioned earlier, the notion of quantity discussed herein comprises

frequency and duration. Figure 5-2 illustrates the distribution, central tendency,

and dispersion of the quantity of ES Use. While two bar charts depict the

comparisons of duration and frequency across two sections of time, the table

highlights the central tendencies and deviations. The vertical axis of the duration

bar chart accounts for the number of hours per sitting (less than 0.5 hours; 1 to

2 hours; more than 2 hours), while the horizontal axis shows the number of

participants. On the other hand, the vertical axis of the frequency bar chart

accounts for the number of log-on times per week (less than once a week, a few

times a week or at least once a day), while the horizontal axis shows the number

of participants.


5.3 Measurement Model

Prior to the assessment of the reliability and validity of the measurement models, factor analysis was conducted to derive a parsimonious Use construct and to ensure that each measure underlying the Use sub-constructs is necessary. At both times of Use, a number of measures—for example, confusion and enforcement of Use in the original instrument—did not load well on their intended factor (attitude). Upon closer theoretical inspection, they were removed from further analysis. Supported by poor descriptive statistics, it was agreed that a user’s sense of confusion and feelings of being forced to complete the exercise are not suitable items for this sub-construct; perhaps the items were inappropriately worded.

Building from the factor analysis that identified a more parsimonious factor

solution for each latent construct, a check for internal consistency reliability is

conducted next. This is the initial step to ensure a reliable measurement when

assessing reflective constructs. As indicated by Henseler et al. (2008), because PLS

does not require all measures to have equal reliability—as the more traditional

Cronbach (1951) alpha does—composite reliability (Werts et al. 1974) is a more

appropriate measure of reliability. Internal consistency is evident for the attitude

and depth constructs of Use, as composite reliability scores of above 0.8

(Nunnally and Bernstein 1994) are recorded for all variables in both datasets

(Table 5-2). Not surprisingly, the amount construct registered poor consistency

scores. For comparison purposes, Cronbach alpha scores were good (above 0.8)

(Nunnally 1978) for attitude and depth, but unacceptable for amount (<0.6). In

terms of composite reliability scores however, amount is acceptable (>0.6) but

less so than attitude and depth. This demonstrates consistency in the set of

depth and attitude item measurements, indicating low random error, and that

the same measurement results are likely to ensue given a retest. The opposite is

true for amount.
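The two reliability statistics compared above can be computed by hand; a sketch under standard definitions. The loadings fed to `composite_reliability` are the five retained attitude loadings from Table 5-2, so the result is in the same region as the reported figures, though the published values were computed by SmartPLS on the raw data; the data passed to `cronbach_alpha` are a tiny synthetic example.

```python
def composite_reliability(loadings):
    """Composite reliability (Werts et al. 1974) from standardised loadings."""
    s = sum(loadings)
    error = sum(1 - l * l for l in loadings)
    return s * s / (s * s + error)

def cronbach_alpha(items):
    """Cronbach's alpha from a list of item score vectors (one per item)."""
    k = len(items)
    n = len(items[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Loadings of the five retained attitude items in Table 5-2 (AT5 excluded).
cr = composite_reliability([0.84, 0.83, 0.73, 0.82, 0.80])
print(round(cr, 2))  # 0.9
```

Unlike Cronbach's alpha, composite reliability weights each item by its own loading, which is why it is preferred under PLS where measures need not be equally reliable.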

There are a number of potential validity issues in Table 5-2. One item—respect

for the system (AT5)—loads poorly on the construct and is subsequently

removed from the final factor solution. This has little effect on the reliability

scores. It is noted that AT4 (Level of Comfort) and DP5 (Level of Exploration) are

almost loading evenly on both ‘attitude’ and ‘depth’. This indicates potential


problems of discriminant validity. The researcher acknowledges that there exists

an apparent relationship between Level of Comfort and Level of Exploration, but

at item level, the distinction is clear and the final loading solution suggests that

they are separate. This however, given the objectives of the current analysis,

indicates a potential expansion of research.

Items | Attitude | Depth | Amount | Criterion* | Reliability**
AT1 | 0.84 | 0.69 | −0.16 | 0.60 | CA: 0.83; CR: 0.88
AT2 | 0.83 | 0.67 | −0.22 | 0.64 |
AT3 | 0.73 | 0.51 | −0.21 | 0.47 |
AT4 | 0.82 | 0.70 | −0.15 | 0.69 |
AT5 | 0.36 | 0.28 | 0.14 | 0.23 |
AT6 | 0.80 | 0.58 | −0.12 | 0.54 |
DP1 | 0.62 | 0.82 | −0.08 | 0.46 | CA: 0.80; CR: 0.86
DP2 | 0.65 | 0.87 | −0.03 | 0.50 |
DP3 | 0.58 | 0.77 | −0.04 | 0.40 |
DP4 | 0.52 | 0.69 | −0.09 | 0.32 |
DP5 | 0.56 | 0.64 | −0.26 | 0.46 |
Duration | −0.04 | −0.07 | 0.48 | −0.09 | CA: 0.10; CR: 0.67
Frequency | −0.20 | −0.10 | 0.90 | −0.18 |
Criterion | 0.74 | 0.65 | −0.20 | 1.00 |

Table 5-2: Cronbach’s Alpha, Composite Scores, and Final Factor Loadings (T1)

* Criterion item (overall Use) is measured on a 7-point Likert scale.

** CA: Cronbach Alpha; CR: Composite Reliability (reported per construct block).

Besides the reliability of each latent variable, the reliability of each measure is

assessed. As shown in Table 5-2, standardised outer loadings for all except one

measure (level of respect) are higher than 0.7 (Hair, Anderson, Tatham et al. 1998)

for attitude and depth items. For amount items, duration is weak but frequency

is high. The ideal (cf. Henseler et al. 2008) loadings indicate that each measure

accounts for 50 per cent or more of the variance of the underlying latent variable, Use.

However, reliability does not imply validity. That is, a reliable measure may be

measuring something consistently, but not necessarily what it is supposed to be

measuring. In terms of accuracy and precision, reliability is precision, while


validity is accuracy. Validity is the degree to which an observed result can be

relied upon and not attributed to random error in sampling and measurement.

Cross-loadings of measures are examined for discriminant validity at a measures

level. In factor analysis, it is ideal that each measure have strong factor loadings

(0.70 and above) on one factor and weak loading (0.40 and below) on the other

factors (Gaur and Gaur 2006; Gefen et al. 2000). As depicted in Table 5-3, the

loading of each measure is greater than all of its cross-loadings (c.f. Chin 1998)

across two datasets. From these findings, it is also noted that while most

attitude items and depth items had strong loadings on their primary factor,

there are some with moderate (0.40 to 0.60) loadings (Gaur and Gaur 2006) on a

second factor. For instance, feeling of comfort (AT4) loads reasonably strongly on the other factor, depth. This suggests that those items may not fit into just the primary factor structure, that discriminant validity is weaker, and that these items could be used to measure other, theoretically different, concepts besides Use (for instance, its antecedents and consequences).

Researchers must then consider removing them or revert to theoretical

considerations. In this case, attitude towards Use and depth of Use are

theoretically distinct and they still make a meaningful (strong validity) and

useful (non-redundant) contribution. In addition, there is still a gap (of around

0.2) between primary factor and secondary factor loading item scores. For these

reasons, we retain the current factor structure.
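The cross-loading inspection applied in this paragraph can be expressed as a simple check. The sketch below uses the AT4 row of Table 5-2; the 0.70/0.40 thresholds follow the guidance cited above (Gaur and Gaur 2006; Gefen et al. 2000), and the function names are illustrative.

```python
def discriminant_check(loadings, primary, strong=0.70, moderate=0.40):
    """Flag items whose loading pattern weakens discriminant validity.

    `loadings` maps item -> {factor: loading}; `primary` maps item -> its
    intended factor. An item's primary loading should dominate all of its
    cross-loadings; cross-loadings at or above `moderate` are reported.
    """
    report = {}
    for item, row in loadings.items():
        own = row[primary[item]]
        cross = [v for f, v in row.items() if f != primary[item]]
        report[item] = {
            "dominant": all(own > abs(c) for c in cross),
            "notable_cross": [c for c in cross if abs(c) >= moderate],
        }
    return report

# Values taken from the AT4 row of Table 5-2 (attitude, depth, amount).
rows = {"AT4": {"attitude": 0.82, "depth": 0.70, "amount": -0.15}}
rep = discriminant_check(rows, {"AT4": "attitude"})
print(rep["AT4"]["dominant"], rep["AT4"]["notable_cross"])  # True [0.7]
```

As in the text, AT4's primary loading still dominates, but its 0.70 cross-loading on depth is flagged as notable.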

 | Attitude | Depth | Amount | Criterion
Attitude | 0.75 (0.56) | | |
Depth | 0.79 | 0.75 (0.56) | |
Amount | −0.19 | −0.12 | 0.72 (0.52) |
Criterion | 0.74 | 0.65 | −0.20 | 1.00

Table 5-3: Inter-construct Correlations and Average Variance Extracted (T1)

* Note: The values on the diagonal are the square root of each construct’s Average Variance Extracted (AVE in brackets) and should be greater than 0.50.


Convergent validity (test of uni-dimensionality) and discriminant validity (joint set

of measures not expected to be uni-dimensional) are assessed next. Table 5-3

illustrates the results. The AVE values recorded across the two datasets are

above 0.5 (Fornell and Larcker 1981) indicating sufficient convergent validity,

meaning that each latent variable explains, on average, more than half of the variance of its measures. Strong convergent validity, as shown here,

suggests that the set of measures is suited to the theory of Use. In other words,

the measures of each construct converge or correlate well onto the construct.

This concludes the tests on reflective indicators to first-order constructs (amount,

depth, and attitude) of Use.
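The AVE and Fornell–Larcker checks described above can be sketched as follows, using the values reported in Table 5-3 (the function names are illustrative). Note that, consistent with the weaker discriminant validity acknowledged earlier, the attitude–depth correlation (0.79) slightly exceeds the shared square root of AVE (about 0.75).

```python
import math

def ave(loadings):
    """Average Variance Extracted: mean squared standardised loading."""
    return sum(l * l for l in loadings) / len(loadings)

def fornell_larcker(construct_aves, correlations):
    """Check that sqrt(AVE) of each construct exceeds its correlations
    with every other construct (Fornell and Larcker 1981)."""
    ok = {}
    for c, a in construct_aves.items():
        others = [abs(r) for (x, y), r in correlations.items() if c in (x, y)]
        ok[c] = all(math.sqrt(a) > r for r in others)
    return ok

# AVE values and inter-construct correlations as reported in Table 5-3.
aves = {"attitude": 0.56, "depth": 0.56, "amount": 0.52}
corr = {("attitude", "depth"): 0.79, ("attitude", "amount"): -0.19,
        ("depth", "amount"): -0.12}
print(fornell_larcker(aves, corr))
```

With these figures, amount passes cleanly while the attitude–depth pair narrowly fails the strict criterion, mirroring the discussion of their overlapping items.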

5.4 Structural Equation Models

SEM with latent variables has become a quasi-standard for empirical research,

in validating instruments and testing linkages between constructs (Gefen et al.

2000; Henseler et al. 2008). SEM using PLS (variance-based), LISREL

(covariance-based) or any other second generation data analysis technique (see

Bagozzi and Fornell 1982) are increasingly being applied in IS research (Chin

and Todd 1995; Henseler et al. 2008) to address key research problems and

other aspects of Use. For instance, SEM techniques have been employed

previously (by Adams, Nelson and Todd 1992; Segars 1993) in understanding

relationships between Use, usefulness, and ease of Use—the basis of the TAM.

For its benefits (see Chin and Newsted 1999), this study adopts PLS (Wold 1985)

path modelling to establish the validity of the constructs and their measures,

and to assess the study’s measurement and structural models.

5.4.1 Specifying the Use Nomological Net

As outlined in the literature review, there have been calls by researchers to

consider the specification of constructs (Diamantopoulos and Winklhofer 2001;

Jarvis et al. 2003; Petter et al. 2007). With the widespread application of

covariance-based SEM tools, it has become apparent even in premier scholarly

journals that many researchers simply assume that the constructs are, by

default, reflective. These studies warn researchers to exercise caution when specifying constructs prior to validation and assessment.


Extending the research model (shown earlier in Figure 3-1), Use is specified as a

formative second-order construct that is determined in turn by three first-order

reflective dimensions. These dimensions are attitude, depth, and amount of Use.

Attitude, depth, and amount are reflective dimensions: changes in these constructs are not only expected to be manifested in changes in all their measures (Diamantopoulos and Winklhofer 2001), but the measures are also highly correlated with one another.

depth of Use is realised by the extent to which the users possess knowledge and

familiarity with the goals of their work processes, value-added features, and

functions of the system used and explored. Similarly, observed good comfort

levels may reflect a positive attitude, a rewarding experience, and acceptance of

the IS. Finally, a higher amount is generally observable through higher frequency and duration of Use.

In this study, Use is analysed in a formative mode, where the variables attitude,

depth, and amount collectively represent all the relevant dimensions or the

independent underpinning of the latent variable. Therefore, omitting one

dimension could omit a unique part of the formative measurement model and

change the meaning of the latent variable (Diamantopoulos and Winklhofer

2001). An increase in amount would suggest an increase in Use, even if there

were no increases in depth or attitude. As indicated in the literature review, this

is certainly the view of most researchers when operationalising the Use

construct, although Use is often tested as a reflective construct. Similarly, an

increase in depth of Use does not suggest an increase in attitude or amount.

As shown in the literature, conventional procedures of factor analysis and assessment of internal consistency, together with theoretical rationale and expert opinion (Rossiter 2002), are suitable for examining the validity and reliability of measurement models in reflective mode. Significance tests of formative weights (Tenenhaus, Vinzi, Chatelin et al. 2005) and a critical level of multicollinearity, on the other hand, allow the appropriateness of a formative measurement model's operationalisation to be assessed.

criterion in PLS path modelling, a systematic attempt to show evidence of

sufficient reliability and validity in the outer measurement models (in both

reflective and formative modes) and thereafter the inner structural model is

appropriate to assess the research model (Chin 1998; Henseler et al. 2008). This

attempt adopts the steps in Petter et al. (2007) and Jarvis et al. (2003) for

identifying, validating, and assessing formative constructs. Figure 5-3 depicts the inner and outer structural models, culminating in the research model to be tested.
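The formative-side checks described above (significance tests of indicator weights plus a multicollinearity screen) can be sketched as follows. This is a minimal illustration with numpy, not the thesis's analysis script; the function name, the data layout, and the VIF threshold are assumptions:

```python
import numpy as np

def vif_scores(X):
    """Variance Inflation Factors for the columns of an indicator matrix X
    (rows = respondents, columns = formative indicators).

    Each VIF is 1 / (1 - R^2) from regressing one indicator on the others;
    values well above roughly 3.3 are commonly read as problematic
    multicollinearity for formative measurement."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    vifs = []
    for j in range(k):
        y = X[:, j]
        # Regress indicator j on the remaining indicators (with intercept).
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
        vifs.append(1.0 / (1.0 - r2))
    return vifs
```

A formative Use construct built from attitude, depth, and amount scores would pass this screen when all VIFs stay low, indicating that the three dimensions contribute distinct variance.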

[Figure: Quality of IS → Use → Individual Impact, with measures attached to each construct (Measures Quality, Measures Use, Measures Impacts)]

Figure 5-3: The Nomological Model of IS Use

5.4.2 PLS Structural Models

Figure 5-3 shows the results of the PLS structural model analysis and

nomological validity. Consistent with arguments outlined in the previous section,

and with suggestions by Mathieson et al. (2001), because one cannot obtain a goodness-of-fit statistic in PLS, R² values, path coefficients, and effect sizes are compared instead. SmartPLS was used to test the models. Of the preliminary models tested to study the effect of each Use

construct on individual impact, one included amount as an independent sub-

construct, and the other attitude and depth as independent sub-constructs. The

motivation for testing each component of Use is to study the effects of qualitative

versus quantitative measures. To study the effects of Use as a higher-order

construct, one model tests the component effects of depth, attitude, and amount

and the other model includes depth, attitude, and amount as a higher-order

construct. The motivation for testing these models is to distinguish the effects of

higher-order and component models. Two other models relevant to IS success

are tested. The first partially mimics the IS success model with Use left out; the second is the IS nomological net specified for this study. The objective is

to study the effects of Use (and lack of) on individual impact. This further tests

Use as an antecedent and a consequence. Another separate analysis was

conducted to examine the mediating effects of Use, which is Use as a mediator of

individual impacts. Results are shown in Table 5-4 and they are discussed in the

next section. Several results were noteworthy from the PLS structural model

analysis.

1. From the first sets of models (panels 1 to 3 in Table 5-4), the first

observation made is that amount of system Use has a slightly negative

relationship with individual impact. Although not shown here, the results

were similar when a second model using an aggregate (criterion) impact

score in the survey was tested. On the contrary, depth of Use, which is a

surrogate for value-added Use of a system, yields nearly 10 times (five

times with T2 data) more variance than core measures (amount) in a

component model. The result for attitude of Use is similar. This and the

previous result are similar to findings in Burton-Jones and Straub (2006)

who found that lean measures of amount explain three times less

variance than other richer measures such as deep structure Use.

2. From the second set of models (panels 4 to 5 Table 5-4), it is observed

that a higher-order model in which system Use as a construct comprised

only of value-added and qualitative measures (depth and attitude) yielded

10 times (four times with T2 data) more variance than only core measures

(amount). However, it is noteworthy that with a higher-order construct (of

system Use) consisting of both value-added and core measures (attitude,

amount, and depth), this model yields a larger effect size than the

previous one did. These results also support our arguments for having

both core and value-added, and quantitative and qualitative dimensions

in system Use. It is no surprise that the depth and attitude dimensions carry higher weights on Use (see Table 5-4) than amount of Use across both sets of data.

3. Although the results do not indicate a clear distinction between the

‘attitude’ and ‘depth’ constructs, the final solution suggests that they are

important (qualitative) constructs to consider. From Table 5-4 (panels 2

and 3), it is observed that the R² in the final dependent variable is similar for both constructs (0.41 and 0.45). The R² in panels 4 and 6, by contrast, indicates only a small incremental increase in the dependent variable (0.49), and we expect that adding 'amount' to the research model yields a less significant R² change. The R² for Use, on the other hand, has increased slightly (from 0.49 to 0.56) in the final model. Although deductions from R² change alone are less convincing, amount of Use as a less significant construct, and Use as an antecedent of impacts from IS, are clearly demonstrated here. This conclusion raises another discussion:

Should ‘amount’ be a formative construct? As earlier indicated (in 2.6.1

and Table 2-3), a large body of IS literature has tested Use as a reflective

construct, using solely quantitative measures of frequency and duration

(similar to amount of Use). While the analysis does not provide a direct answer, it does indicate that Use, if treated formatively and operationalised solely with quantitative measures (of frequency and duration), is less significant.

4. In the last set of models (panels 6 and 7 in Table 5-4), the influence of Use

(or lack of it) in IS success models is drawn into focus. From the results,

system quality and information quality explain approximately 50 per cent

of the variance of scores in individual impacts. However, the variance

yield of impacts predicted by system and information quality reduces over

time (Model 6).

On the contrary, there is no significant change in variance yield when

system Use is tested in partial models constructed to mimic the IS

success models with both T1 and T2 data (Model 7). The results

demonstrate the significance of system Use as both an antecedent and a

consequence in IS success models. This significance is further tested with

an examination of potential mediation. These results contradict findings by Iivari (2005) and McGill et al. (2003) that Use is insignificant as a predictor of individual impact in IS success models. Although not shown in the table, the only minor difference is a fall in the effect size of system Use when system and information quality are represented as sub-constructs of a higher-order quality construct. Observing inner model

weights (see Table 5-5), the depth and attitude dimensions carry higher weights on Use than amount of Use. In addition, system quality

accounts for a much higher weight on ES quality than on information

quality. This observation underlines the importance of the characteristics of the ES system type when capturing quality and its effects.
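The R² comparisons drawn in the observations above are commonly summarised as Cohen's f² effect size in PLS research. A small sketch (the helper function and benchmark labels are illustrative additions, not part of the original analysis):

```python
def f_squared(r2_included, r2_excluded):
    """Cohen's f^2 for the contribution of an added construct:
    (R2_with - R2_without) / (1 - R2_with).
    Rough benchmarks: 0.02 small, 0.15 medium, 0.35 large."""
    return (r2_included - r2_excluded) / (1.0 - r2_included)

# Illustration with the panel R-squared values on individual impact
# reported above: attitude alone (0.41) versus all three components (0.49).
effect = f_squared(0.49, 0.41)  # approximately 0.16, a medium effect
```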

Type (a)  Inner Models Tested (b)(**)  Paths, R², Effect Size (c), for T1 and T2 data

Type 1: Amount of Use → Individual Impact
  T1: BF = −0.23, t = 0.7, R² = 0.054
  T2: BF = −0.27, t = 0.82, R² = 0.074

Type 2: Attitude of User → Individual Impact
  T1: BAT = 0.64, t = 7.99, R² = 0.41
  T2: BAT = 0.64, t = 10.05, R² = 0.41

Type 3: Depth of Use → Individual Impact
  T1: BDP = 0.67, t = 9.2, R² = 0.45
  T2: BDP = 0.22, t = 11.1, R² = 0.42

Type 4: Amount of Use, Attitude of User, Depth of Use → Individual Impact
  T1: BAT = 0.30, t = 1.91; BDP = 0.44, t = 3.14; BFR = −0.04, t = 0.36; R² = 0.49
  T2: BAT = 0.35, t = 2.45; BDP = 0.38, t = 2.64; BFR = −0.03, t = 0.18; R² = 0.47

Type 5: Use (sub-constructs: Attitude of User, Depth of Use, Amount of Use) → Individual Impact
  T1: BUSE = 0.56, t = 7.0; R² USE = 0.55; R² IMPACT = 0.31
  T2: BUSE = 0.56, t = 7.3; R² USE = 0.51; R² IMPACT = 0.31

Type 6: System Quality, Information Quality → Individual Impact
  T1: BSQ = 0.38, t = 3.35; BIQ = 0.38, t = 3.54; R² = 0.5
  T2: BSQ = 0.33, t = 1.88; BIQ = 0.31, t = 2.20; R² = 0.34

Type 7: Quality of IS (sub-constructs: System Quality, Information Quality) → Use (sub-constructs: Attitude of User, Depth of Use, Amount of Use) → Individual Impact
  T1: BQUALITY = 0.2, t = 1.85; BUSE = 0.56, t = 6.67; R² USE = 0.56; R² IMPACT = 0.31
  T2: BQUALITY = 0.31, t = 2.83; BUSE = 0.56, t = 6.35; R² USE = 0.56; R² IMPACT = 0.31

Table 5-4: PLS Structural Models

a: Model types: type 1 (effect of amount on individual impact); type 2 (effect of attitude on individual impact); type 3 (effect of depth on individual impact); type 4 (stepwise effect of Use components on individual impact); type 5 (higher-order Use version of model 4); type 6 (partial test of the IS net, without Use); type 7 (test of the IS net, with Use, and IS success models).

b: Horizontal arrows depict paths; sub-constructs are listed under their higher-order construct.

c: B represents the beta (path) coefficient between an antecedent and Individual Impact unless otherwise stated (for example, BQUALITY represents the path coefficient between Quality and Use); t represents the t-statistic (t above 2 indicates a significant effect of the independent on the dependent variable); R² represents the variance explained in Individual Impact unless otherwise stated.

#: Service Quality, Organisational Impact, and technological and managerial capabilities are constructs not tested here.

**: Higher-order latent constructs were formed by calculating regression factor scores of each component (Garson 2010). Survey aggregate item scores and averages of components were also used to verify the model results; no substantial differences were found.

Model Type   Latent Variable   Inner Model Weights* (T1)   (T2)

Model 5      AT → Use           0.50    0.61
             FR → Use          −0.17    0.07
             DP → Use           0.53    0.49

Model 7      AT → Use           0.55    0.63
             FR → Use          −0.12   −0.05
             DP → Use           0.50    0.46
             SQ → Quality       0.80    0.68
             IQ → Quality       0.20    0.40

Table 5-5: Inner Model Weights

*To achieve inner model weights, regression factor scores of outer indicators were calculated.

5.4.3 Testing for Potential Mediation

Mediation occurs when a causal effect of some variable X on an outcome Y is

explained by some intervening variable M (Shrout and Bolger 2002). A final

structural model was tested to examine the potential mediating effects of Use on

individual impacts; that is, Use mediates the relationship between the quality of

the IS and its information with individual impact. Psychological research defines

mediation in its simplest form as representing the addition of a third variable to

this X → Y stimulus-response relationship, whereby X causes the mediator or

organism M, and M causes Y, so X → M → Y, or stimulus-organism-response

(MacKinnon et al. 2007; Ringle, Wende and Will 2005).

Consistent with prior literature, two models where Use can potentially be a

mediator are tested. The first model is suggested by the decomposition of the Benbasat and Zmud (2003) IS nomological net: IS quality, through Use, affects future net benefits. This illustrates a Quality (of IS) → Use (of IS) → Impacts (of

IS) depiction (see mediation model A, in Table 5-6). In other words, a test of

mediation follows closely behind model type 7 tested previously (see Table 5-4).

The difference between a mediation model and model 7 is that a relationship

between Quality of IS and Individual Impact exists (supported somewhat by

model 6).

The second model is suggested through decomposing the IS-Impact model (Gable

et al. 2008), where current IS impacts predict future IS quality. This is

illustrated by an Impacts (of IS) → Use (of IS) → Quality (of IS) relationship (see

mediation model B, in Table 5-6). To test the models, respondent data from T1

and T2 are first matched. The number of matched respondents is 56 (recall that

matching is done through a unique login ID that participants enter in their

survey voluntarily). Next, regression factor scores of all three latent constructs

are calculated. Finally, to test the presence of Use as a mediator in both models

using the scores, steps recommended by Baron and Kenny (1986), and Judd and

Kenny (1981) are followed.

As indicated by Baron and Kenny (1986), a variable functions as a mediator

when it meets three conditions: variations in the levels of the independent variable significantly account for variations in the presumed mediator; variations in the mediator significantly account for variations in the dependent variable; and, when the first two paths are controlled, a previously significant relationship between the independent and dependent variables is no longer significant. To achieve this, we first adopt two

assumptions: IS-net is causal and the quality of IS incorporates the artefact and

its practices. The mediation model is evaluated using three tests. The first, not

shown here (but available on request), is essentially an effects test. Results from

an effects test indicate that the total effects of IS quality on impacts fall (from t=

7.72) following the introduction of Use (to t= 4.21), indicating mediation.

Mediation is partial, as IS quality still has a large (t > 2) direct effect on

impacts following the introduction of Use. The second model achieved similar

results.
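The three Baron and Kenny conditions applied above can be sketched as ordinary regressions on the construct score vectors. This is a minimal illustration with numpy on assumed synthetic data; it reports only the path estimates, not the significance tests the study relied on:

```python
import numpy as np

def baron_kenny_paths(x, m, y):
    """Slope estimates for the Baron and Kenny (1986) steps on construct
    score vectors: a (X -> M), the total effect of X on Y, and the direct
    effect c' of X on Y once the mediator M is controlled for.
    Significance tests on each path are omitted in this sketch."""
    x, m, y = (np.asarray(v, dtype=float) for v in (x, m, y))
    ones = np.ones_like(x)

    def coeffs(design, target):
        beta, *_ = np.linalg.lstsq(np.column_stack(design), target, rcond=None)
        return beta

    a = coeffs([ones, x], m)[1]          # step 1: X predicts M
    c_total = coeffs([ones, x], y)[1]    # total effect of X on Y
    beta_full = coeffs([ones, x, m], y)  # steps 2 and 3: Y on X and M together
    c_direct, b = beta_full[1], beta_full[2]
    return {"a": a, "b": b, "c_total": c_total, "c_direct": c_direct}
```

Evidence of mediation appears as the direct effect c′ shrinking towards zero relative to the total effect once the mediator enters the regression.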

A Sobel test calculator19 (Preacher and Hayes 2008) is used to perform a second

test of mediation. To run the tests, linear regression is first conducted between

the independent, mediator, and dependent constructs (in panel 1, Table 5-6) to

obtain the relevant input scores for the calculator. A Sobel test then reports

whether the indirect effect (c in panel 1, Table 5-6) of the independent variable

(quality) on the dependent variable (impact) through the mediator variable (Use)

is significant (>1.96 at p<0.05). The calculator returns both the one-tailed and

two-tailed probability values. Results from the Sobel test suggest the presence of a positive and significant mediating influence of Use. The Sobel test, conducted using averages of constructs, yielded a broadly similar result (Sobel statistic: 7.33).
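The Sobel statistic reported here can be reproduced from the published path estimates and standard errors. A small sketch (the function is an illustrative re-implementation of the standard formula, not the online calculator used in the study):

```python
import math

def sobel(a, sa, b, sb):
    """Sobel test of an indirect effect a*b:
    z = ab / sqrt(b^2 * sa^2 + a^2 * sb^2).
    Returns the z statistic, its standard error, and the two-tailed
    p-value from the unit normal distribution."""
    se = math.sqrt(b * b * sa * sa + a * a * sb * sb)
    z = (a * b) / se
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, se, p

# Path estimates for mediation model A (T1 Use data) from Table 5-6:
z, se, p = sobel(a=0.666, sa=0.103, b=0.279, sb=0.132)
# z is roughly 2.01 with a standard error near 0.09 and p near 0.04,
# matching the T1 entry for mediation model A.
```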

A third and final test was conducted to determine the relative size of mediating

effects once the presence of mediation has been established. This tested the

Variance Accounted For (VAF) (Shrout and Bolger 2002). The formula (from Shrout and Bolger 2002) for calculating the VAF is*:

VAF = ab / (ab + c)

19 Using two online versions of the calculator ensured that the tests are accurate. The first is published by Preacher and Hayes (2008) <URL: http://www.people.ku.edu/~preacher/sobel/sobel.htm> and another is available at <http://www.danielsoper.com/statcalc/calc31.aspx>. According to Kenny (2008), the test of the indirect effect (c’ is the direct effect) is given by dividing ab by the square root of the above variance and treating the ratio as a Z test (that is, larger than 1.96 in absolute value is significant at the .05 level).

*where a is the path coefficient between the independent variable and the mediating variable, b is

the path coefficient between the mediating variable and the dependent variable and c is the path

coefficient between the suggested independent and dependent variables or the direct effect

between them.

The upper bound for VAF is 1.00 (Shrout and Bolger 2002, p. 434), with values above 0.5 considered substantial. Using a combination of path coefficients from the PLS model analysis and the depicted mediation model A (in Table 5-6), and Use scores at T1, the VAF calculated for the mediation model (where a = 0.699, b = 0.112 and c = 0.235) is 0.25. This means that IS Use at T1 accounts for 25 per cent of the variance of the relationship between the quality of IS and its impacts. Using IS Use scores at T2, Use surprisingly accounted for nearly 70 per cent of the variance. For mediation model B, the VAF is approximately 0.70 for both sets of Use scores (T1 and T2).
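The VAF arithmetic can be verified in a few lines (an illustrative sketch using the coefficients quoted above for mediation model A with T1 Use scores):

```python
def vaf(a, b, c):
    """Variance Accounted For (Shrout and Bolger 2002): the share of the
    total effect (indirect a*b plus direct c) carried by the indirect
    path through the mediator."""
    return (a * b) / (a * b + c)

share = vaf(a=0.699, b=0.112, c=0.235)
# share is roughly 0.25: Use accounts for about a quarter of the
# quality-to-impact relationship with T1 scores, as reported above.
```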

Hoyle and Robinson (2003) warn about the bias introduced into estimates of

mediation effects by measurement error. They recommend that the mediating

variable should demonstrate high reliability. Internal consistency is predicted for

attitude and depth, as Cronbach alpha scores of above 0.8 (Nunnally 1978) and

composite reliability scores of above 0.8 (Nunnally and Bernstein 1994) were

recorded, but not for amount of Use. There is no attempt to delve into this

aspect, as the arguments for the importance of including amount of Use

supersede this one statistic.

Type#: Mediation Model Example (Baron and Kenny 1986)
  Illustration: independent variable → mediator → dependent variable, with paths a (sa) and b (sb) and direct effect c′.
  Results: NA (example only).

Mediation Model A: Quality of IS (time x) → Use → Impact of IS (time x+1)
  Results* (T1 Use data): Ba = 0.666, Sa = 0.103; Bb = 0.279, Sb = 0.132; Sobel statistic = 2.01**, Std Error = 0.09, p = 0.04
  Results (T2 Use data): Ba = 0.479, Sa = 0.124; Bb = 0.576, Sb = 0.112; Sobel statistic = 3.09**, Std Error = 0.09, p = 0.00

Mediation Model B: Impact of IS (time x) → Use → Quality of IS (time x+1)
  Results* (T1 Use data): Ba = 0.645, Sa = 0.103; Bb = 0.613, Sb = 0.119; Sobel statistic = 3.55**, Std Error = 0.09, p = 0.000
  Results (T2 Use data): Ba = 0.406, Sa = 0.124; Bb = 0.673, Sb = 0.106; Sobel statistic = 2.91**, Std Error = 0.09, p = 0.000

Table 5-6: Mediation Models

*Ba: the (unstandardised) regression coefficient for the association between the independent variable and the mediator; Sa: standard error of a. Bb: the (unstandardised) regression coefficient for the association between the mediator and the dependent variable; Sb: standard error of b.

**: ±1.96 are the critical values of the test ratio, containing the central 95 per cent of the unit normal distribution; significant at p < 0.05.

5.5 Additional Findings

5.5.1 The Value of Quantitative IS Use Measures

Despite earlier arguments about the inadequacy of using solely quantitative measures to determine the individual impact of ES (refer to the PLS structural model analysis), quantity of Use remains an important variable determining success for many types of systems and technologies, including e-library (Hong et

al. 2001), email (Yao and Murphy 2007), and web Use (D'Ambra and Wilson

2004). If quantitative measures of Use are insufficient, why do researchers

continue using them?

Approaches to including (or excluding) quantitative measures have been ad hoc; they need to consider the domain and context of the study, as proposed in the two-stage approach outlined earlier. For example, few have examined amount of Use across time, opting to rely on a single dataset. Although using a single dataset is useful, its value for understanding the effects of Use across time is seriously questionable. This section demonstrates the value (or lack thereof) of including quantity of Use from an empirical and statistical standpoint. To achieve this, we compare responses on quantity of Use from 57 users in the second round, matched by user ID with the first round. Data

analysis proceeded in two steps. First, the means and dispersion of Use are

compared over time and second, the effects of quantity of Use are examined

through paired sample tests. From a statistical standpoint, the findings

demonstrate the value of Use in this study context, but more importantly, they

pose important questions about how researchers might adopt more quantitative

measures of Use.

As illustrated by the graphs (see Figure 5-2 earlier), we observe that throughout

the course (as measured at two points in time), the majority of participants log

on to the SAP system at least once a week or a few times a week. In addition, the

duration of each sitting for the majority of participants is one to two hours.

Assuming a normal distribution, duration spent during each sitting with the ES

from time 1 falls slightly (where approximately 58 per cent of the sample scores

fall within one standard deviation of the mean) in time 2 (where approximately

53 per cent of the sample scores fall within one standard deviation of the mean).

On the other hand, there is a rise in the mean of amount of ES Use in terms of

times and days from time 1 (51 per cent of sample scores fall within one

standard deviation of the mean) to time 2 (42 per cent of sample scores fall

within one standard deviation of the mean).

These observations suggest some interesting ideas: that, ceteris paribus, extraneous factors such as users' limited volition (the often-mandatory adoption of the system) have little effect on the quantity of Use, and vice versa. One suspected cause for the fall in duration but rise in frequency of ES Use over time is the participants' growing familiarity with the system. Other extraneous contextual factors, such as assignment deadlines, technical issues, and lab availability, are possible explanations for the observation, but they are neither the focus of this study nor conclusive here.

The claim that duration and frequency change significantly is nonetheless tested. A paired sample t-test is used to compare observations on the same sample at two different times (see Table 5-7). The assumption for the test is that participants become familiar with the system over time; the null hypothesis is that the average duration and frequency are the same across time. Note also that the t-test does not require a large

sample size (30 to 40 is acceptable) (Gaur and Gaur 2006). Test results (duration) show a t-statistic of 0.554. The two-tailed p-value is 0.582, which is more than the conventional 5% or 1% levels (Gaur and Gaur 2006). Therefore, we cannot reject the null hypothesis at a 5% (or 1%) significance level, which means that the change in the average duration of each sitting over time is negligible. Test results (frequency) show a t-statistic of −1.587. The two-tailed p-value is 0.118, which is more than the conventional 5% or 1% levels. Therefore, we cannot reject the null hypothesis at a 5% (or 1%) significance level, which means that the rise in the average frequency of Use is negligible.

Item Name                   Mean    Std Dev  Std Error Mean  95% CI Lower  95% CI Upper  t       df  Sig (2-tailed)
DurationT1 − DurationT2      0.053   0.718    0.095           −0.138        0.243         0.554   56  0.582
FrequencyT1 − FrequencyT2   −0.158   0.751    0.099           −0.357        0.041        −1.587   56  0.118

Table 5-7: Paired Sample T-test of Quantity of ES Use
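The paired-sample test reported in Table 5-7 follows the standard formula on within-subject differences. A minimal sketch with illustrative data (not the survey responses):

```python
import math
import statistics

def paired_t(before, after):
    """Paired-sample t-test: t = mean(d) / (sd(d) / sqrt(n)) on the
    within-subject differences d = before - after, with df = n - 1."""
    d = [a - b for a, b in zip(before, after)]
    n = len(d)
    t = statistics.mean(d) / (statistics.stdev(d) / math.sqrt(n))
    return t, n - 1

# Illustrative scores for four respondents measured at two points in time:
t, df = paired_t([3, 4, 2, 5], [3, 4, 2, 4])
```

A small |t| relative to the critical value for the given degrees of freedom, as in Table 5-7, means the null hypothesis of no change cannot be rejected.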

The results lead to the preliminary conclusion that solely quantitative measures offer little value for researchers attempting to evaluate ES Use for education in this particular context. The descriptive results set the platform for including other, qualitative measures of Use and support the use of the two-stage approach. From these observations, we demonstrate that to measure Use, it must still be quantified. However, this study urges researchers to pay attention to establishing the nature of the system and work processes, to account for changes in quantity of Use, and to justify the value of quantitative measures for a study domain.

5.5.2 ES Use for Higher Education

Although the focus of the quantitative study is on Use, the observations and analysis draw some useful conclusions for ES Use in higher education. Despite a

strong demand for ‘business-process experts’ from the industry, recent studies

(Kim et al. 2006; Rosemann and Maurizio 2005) reveal that most graduates do

not possess the necessary business-process knowledge of ES applications. With

support from academic literature (including Boyle and Strong 2006; Leger 2006)

on ES education and the significance of our results, a number of

recommendations for ES Use in an education domain are presented in Table 5-8.

In fact, the implications drawn from key stakeholders and beneficiaries of ES

education—the participants themselves—help educators to design curricula that

relate closely back to the participants. The recommendations not only suggest steps that educators can take to improve the Use of ES for teaching and learning, but also aim to elicit more positive responses to the dimensions of Use.

Attitude of Use (significance of change*: significant)
  Related literature: Students tend to continue, and enjoy, an exercise if they feel that they are capable of successfully mastering the Use of the system (Compeau and Higgins 1995). Students generally have a positive impression of a large software vendor (Rosemann and Maurizio 2005).
  Recommendations for ES educators: Design a situational scenario that emphasises the completion of a real-world business process, typical of the organisation type, from start to finish. Completing the business process end-to-end thus requires the Use of modules and best practices incorporated into a popular ES.

Depth of Use (significance of change*: significant)
  Related literature: ES serve to integrate and automate operations in multiple functional business areas, each independently operated in traditional IS (Brady, Monk et al. 2001, p. 6). Exercises should therefore be substantive enough to reflect a real situation, and stimulating enough to invoke discussion and subsequent learning (Hackney et al. 2003). Many ES-teaching approaches have tended to favour certain modular functions of the ES (e.g. Strong, Johnson and Mistry 2004) or certain business processes (e.g. Draijer and Schenk 2004; Leger 2006), or have distanced ES concepts from ES software practice.
  Recommendations for ES educators: Use the latest version of a popular ES suite or a leading provider of e-business software solutions. Train educators to perform error checking and to provide support, and establish a network of technical support staff, resources permitting. The philosophy of the exercise, and its explanation to students, must be straightforward, while emphasising the learning gained through their descriptions and analysis of the steps they perform.

Amount of Use (significance of change*: not significant)
  Related literature: The mandatory nature of ES is often discussed (Gable et al. 2008), but its psychological effects are rarely published. Users' psychological states during system interaction form interdependencies that are more important in Use (Hong et al. 2001). Antonucci, Corbitt, Stewart et al. (2004, p. 241) urge measures that challenge students' understanding of course material and their broader knowledge of business issues.
  Recommendations for ES educators: Go beyond the traditional 'bouncing-ball' training approach; focus on both software-specific knowledge and business-process knowledge. Course participants and students should recognise the integrated nature of ES, where data entered through one module can be transferred to, or is available in, another module in real time, and business processes are executed in an identical manner.

Table 5-8: Preliminary Recommendations for ES Use in Education

*Significance of change here represents changes to the variance of individual impacts explained by

the dimension, as evidenced in the PLS structural model analysis.

As illustrated in Table 5-8, ES Use for educational purposes should take the form of a course designed to deliver both functional and process aspects of ES, using the 'learn-by-doing' approach. Participants are encouraged not only to follow sequential instructions (also known as the bouncing-ball approach), but also to explore and discover various aspects of the

software. The course should emphasise the completion of a real-world business

process from start to finish, where completion requires the Use of modules and

best practices incorporated in a popular ES. The Use of ES must allow

participants to appreciate the integration of business processes, data sharing

across the enterprise, and real-time data processing environments as promised

by an ES. Using the ES should encourage educators to extend teaching and

incorporate other ES concepts like configuration and extended enterprise

systems (and additional modules).

5.6 Chapter Summary

This chapter reported the statistical findings of the empirical investigation of ES

Use for education. Specifically, the chapter reported on the descriptive and

inferential statistics gathered from analysing the survey data and the

conclusions drawn from them. Following a description of the sample and an

examination of the measurement model, a series of structural models were

tested to assess the multiple views of Use in the domain of IS success. Use is a

formative second-order construct that is determined by its three reflective sub-

constructs.

Using SEM techniques in PLS and regression, among other methods, several structural models grounded in the domain of IS success were assessed. Quality and impact of IS formed the other key constructs of the models; both are formative constructs, as theoretically implied in Gable et al. (2008).

the three constructs, depth of Use has the highest variance yield on impact.

Among other findings, Use is relevant as an antecedent, a consequence, and

more so as a dimension of IS success, and Use has a mediating effect on the

relationship between the quality and the impact of IS.

The results of the analysis are twofold: they challenge researchers wanting to employ Use to consider its various roles in determining IS success, and they yield a series of implications for educators regarding ES Use. To facilitate a more positive participant Use experience, the preliminary findings suggest a checklist of recommendations that educators could consider when designing an ES curriculum.

Chapter 6: Qualitative Data Analysis and Findings

6.1 Introduction

This chapter discusses the results of a qualitative investigation on IS (in this

case ES) Use in industry. The examination of the qualitative results describes

how an assumed sequence of events unfolds to cause the set of key findings

observed in the quantitative investigation. In doing so, this qualitative investigation serves two additional objectives (see assumptions for methods in Table 4-1): first, to demonstrate that managing Use is as important as measuring it; and second, to describe the relationships between the different contextual considerations envisaged in the unified approach for developing appropriate measures. It therefore answers the 'what' and, more importantly, the 'how' questions. Hence, we ask what Use is (research question 1), incorporating how the context of Use (systems, information, work processes, and environment) influences Use, and how Use affects IS success.

From the quantitative results, it is concluded that users report varying impacts

from IS over time, and that depth of Use is an important consideration when

measuring Use of value-adding and complex systems such as ES. Similarly, measuring requisite Use through amount remains an important consideration, especially in the initial stages of Use; and, finally, Use is an important mediator between the quality of an IS and its individual impacts. On these premises, and with support from relevant literature, it can be contemplated that users operate at different levels of Use, and that patterns, trends, and insights from users’ accounts of their experiences with ES over time can be classified into a coherent structure. This prescription of levels of Use is

akin to theories on its multilevel nature (Burton-Jones and Gallivan 2007) and

the realisations of lifecycle phases in ES adoption and Use (Markus et al. 2003;

Ross et al. 2003). This investigation analyses the accounts of managers who

have had extensive experience with and daily exposure to ES. Specific reports of

ES-related activities of these users are classified into emergent conceptual

themes to support a coherent and chronological framework. This classification

contributes towards a deeper understanding of ES Use in organisations by its

various stakeholders (in particular its managers). The framework established will


introduce principles to help researchers identify levels of Use, explain why

different users receive and report varying impacts from IS at different times,

expand on the notion of value-add in ES Use, and emphasise its importance alongside requisite system Use during measurement.

The rest of the chapter is organised as follows. First, preparations for analysing

the data collected from the interviews are discussed. This includes revisiting the

research questions and the procedures for coding the transcripts, understanding

managers’ demographics and background, the software tools used, and a

statement of validity. Second, we introduce the levels of Use, and before

discussing them, a number of theoretical perspectives underpinning the levels

are explained. This includes understanding the dimensions and explanations that differentiate the levels. The three levels and nine dimensions, formed and evidenced by the coded empirical data, make up a classification of Use activities and an observable process of Use behaviour.

include helping managers recognise, classify, and manage user behaviour. For

research, the classification offers an alternative lens through which to study Use.

The chapter concludes with a discussion of the triangulation of qualitative

results derived from this chapter and the statistical findings from the previous

chapter.

6.2 Preparing to Analyse

Steps reported predominantly in qualitative research were adopted to analyse

the interview data. First, multiple sources of evidence were used to triangulate

the data analysis. Data from the interviewees were supplemented with company publications and corporate websites.20 Using the steps detailed in Yin (1994, p. 111), an explanation of Use as described in the earlier chapters of this

their natural contemporary Use setting. Data analysis was performed

concurrently with data collection (Eisenhardt 1989): the findings of the initial interview were compared against the initial statements; the statements were then revised and checked against further details of the interview.

20 In the interests of anonymity, the respondents and their firms have pseudonyms here. Due to sensitivity, there is no disclosure of details of the company profiles (obtainable from the researcher on request).


The revisions were continually compared with the second, third, and subsequent

interview notes. This moving back and forth between empirical data, theoretical

perspectives, relevant literature, and other sources of evidence to build an

explanation (Yin 2003, p. 111) of the Use phenomena becomes the core activity

in the data-analysis technique.

6.2.1 A Contextual Statement on ES Use

Establishing an initial statement of ES Use and the assumptions of the research context is an important aid to enhancing the validity of the findings (see also Section 4.6.6). The statement is drafted before the first interview and extended after it, to clarify the conditions and control of the phenomena studied, and to recognise the likely chronological sequence of events that later determines ‘how’ Use is scored.

analyse key points from subsequent interviews. The basic principle of these

initial statements is that the definition of Use adopted by the study underpins

the statement, and it accounts for the current environment of the ES setup in

the workplace. Constantly moving back and forth between the first sets of statements from the interview data, theoretical perspectives, and relevant literature is crucial to building an ongoing pattern of analysis.

The contextual statement on ES Use here is the manner and degree to which a

user incorporates the ES into their work processes. Applying the above conceptual

definition to the interview context, the in-depth analysis of Use requires us to trace the interdependencies between the ES technology and non-technological elements in the workplace. It also provides an idea of the conditions in the workplace itself, the core and value-added aspects of work processes, attitudes towards ES Use, and finally the activities during ES Use and their likely impacts.

Building on the profile of interviewees, all held managerial positions in their

organisations and had held their role for at least a year. Similarly, all

interviewees had one to two years of experience with the ES in their organisation

(excluding prior experience of a similar sort elsewhere). The emphasis is therefore on longer-term, post-implementation ES Use; an ES is currently in place in each of their organisations. Every interviewee uses the ES daily and for a variety of work processes (beyond the purely transactional).


6.2.2 Coding the Data

After transcribing the interview recordings and notes, the coding process begins.

For this, the technique used mimics DeSanctis and Poole (1994) in their

interpretive analysis to evidence how advanced technologies, in their case GDSS,

are appropriated at a micro-level. An analysis at the micro-level (see DeSanctis

and Poole 1994, p. 136-137) involves drawing specific acts from individual

speech, interviews, or in a meeting about the phenomena (appropriation moves

in the case of DeSanctis and Poole), to reveal their dominant patterns. In

contrast, a global level of analysis looks across multiple meetings, while at the institutional level the analysis looks across multiple groups and organisations.

Similarly, textual data collected from the start to the end of each set of

individual interview notes are analysed, while moving back and forth between

the reference theories and the data to create a set of logical mappings, organised

in a chronological fashion. The micro-level analytical strategy used here has its

advantages and relevance for individual-level research. A review of relevant

literature suggests that most ERP studies (such as Al-Mashari and Al-Mudimigh

2003; Mandal and Gunasekaran 2003) tend to focus on an organisational level

rather than an individual level of analysis. They report mainly on the ‘how

should’ and ‘outcomes of Use’, rather than a more detailed analysis of instances

of actual Use itself. ES studies have generally tended to report on and classify

the ES activities into broad critical success factors, implementation lessons (Al-Mashari et al. 2003; Nah et al. 2001), and implementation lifecycle phases (Ross

and Vitale 2000; Willis and Willis-Brown 2002). These observations are useful

for developing and understanding strategies and key lessons, but they offer little

to aid in understanding individual users’ behaviour at a cognitive level.

6.2.3 Managers’ Backgrounds

In this study, we canvassed data from one employment cohort (managers) across

single and multiple organisations21. Before discussing the levels of Use, it is

21 Two pairs of the six respondents each work for the same organisation (one organisation per pair). Besides sourcing data from one employment cohort across single and multiple organisations, researchers have also sourced data from multiple stakeholder groups within a single organisation (for example Yusuf et al. 2004; Tchokogue et al. 2005; Berchet and Habchi 2005), or from multiple stakeholder groups across multiple organisations (Parr and Shanks 2003).


important to revisit and understand the relevance of interviewing managers.

Managers, according to Zaleznik (2004), relate well to people and their roles,

recognise the passage of time when making major decisions, are impersonal, and

set goals that arise from necessities and are thus deeply embedded in the

organisation’s history and culture. Furthermore, a manager emphasises

rationale and control, is less concerned with status but more with individual

responsibilities, and has the ability to tolerate practical work. Translated to the current practice of using IS, or in this case ES, a manager’s statements are thus an important source of evidence for recognising Use in an organisation.

Studies such as Nah, Lau et al. (2003), and Shang and Seddon (2000) have

clearly illustrated the value and appropriateness of canvassing managers’

opinions and inputs on factors influencing ES implementation and its adoption

success. We turn to discuss the backgrounds of the managers at the time of the

interviews (refer also to Table 4-3):

Respondent 1 (R1) had been working for TPA Limited as an assistant manager for

13 months (at the time of the interview). Prior to his managerial role, R1 had

been a management trainee for nine months and spent another four months

afterwards as a trainee assistant product manager. R1 stated that he had no prior knowledge of SAP (the ES at TPA) when he started working there.

Respondent 2 (R2) had joined TPA Limited as an assistant manager in the year before the interview. There are three assistant managers in his department, including himself. The department had 38 members at the

time of the interview, with an officer (presumably a manager) and seven or eight

staff assigned to looking after the accounts in each of the four stipulated zones

of provincial Ahmadabad. Prior to his role at TPA Limited, R2 was a marketing

executive for a year. Before joining TPA Limited, he had little work experience with RAMCO (the ES at TPA), whether at TPA or any other organisation of that sort.

Respondent 3 (R3) had been working at TP Limited as human resources (HR)

systems manager for 14 months. She is one of five executives working in a

department made up of 25 local staff. Rather surprisingly, this is her first job, and she had little knowledge of RAMCO (the ES used at TP) prior to joining TP.


Respondent 4 (R4) had been working at R Limited as a business development

and sales manager for 12 months at the time of the interview, and R4 works

closely with marketing, development, and services departments. The total

strength of these departments numbers around five thousand, comprising

mainly local workers. R4 worked at M Power Limited for four years as a chemical

engineer. After M Power Limited, R4 pursued his MBA and joined R

Limited shortly afterwards, first as an engineer and later as a sales manager. At

M Power Limited and at R Limited, R4 had some experience of using SAP.

Respondent 5 (R5) had been working in the techno-commercial management

department at TPA Limited for 14 months as systems operations manager at the

time of the interview. R5 highlighted that the main task for his five-person

department is to conduct performance evaluation of eight other departments

(including the molecular biology and analytics departments). Prior to joining TPA, R5 studied part time for his MBA, specialising in marketing, finance, and operations. R5 had heard about SAP, the ES used at TPA, while completing his MBA.

Respondent 6 (R6) had been a store manager for Aa Limited, a large rural branch

of the F Group, for a little over 15 months at the time of the interview. R6 is in

charge of synergising all rural retailing operations in Aa Limited. R6 stated that

he was responsible for overseeing the systems and applications dealing with

sales and distribution, inventory management, and ‘everything related to rural

consumers’ (R6). R6 claimed to have some theoretical knowledge of the SAP

system used at Aa and the Point of Sales system they use at the front end.

6.3 Organising Patterns of IS Use into Levels

For the coded responses to be broken down and re-organised into a logical

‘roadmap’ of comparable activities and chronological events, the researcher first sought a number of theoretical perspectives.

Literature from the fields of theoretical and applied psychology suggests that

human development typically comprises stages that map actions, operational dynamics, and a broadening scope of action. This notion is captured in

activity theory (Nardi 1996), which provides a framework to explore the

decomposition of tasks or activities into actions and subsequent operations. This


framework is commonly adopted in the study of Human–Computer Interaction

(HCI). More importantly, research of this nature recognises that to become more

skilled at something, operations must be developed so that one’s scope of actions

can be broadened while the execution itself becomes more fluent (Kuutti 1995).

Some actions in the early phase, such as planning and sequencing, fade in

consciousness as other actions such as strategising and influencing take over.

The underlying thinking and purported relevance of the stream of research

described above to the data analysis is that Use can be decomposed into related, progressive levels (that is, one must negotiate a lower level to reach the higher levels).

In an independent but related study, Burton-Jones and Gallivan (2007)

introduce the multilevel nature of system Use. They provide a set of guidelines to enable researchers to differentiate between levels of Use. They ask

researchers attempting to study multilevel Use to consider the functions of

system Use, and the structures in collective and contextual factors. The notion of

having phases in the Use of complex systems is further supported by the

concept of the ES performance lifecycle (Davenport 1998; Deloitte 1999). Generally,

studies investigating the implementation of ES cite three phases. They are

project, shakedown, and onward and upward. These phases focus, respectively, on preparing the organisation for Go-Live, transitioning to the new system and processes, and the ongoing maintenance and enhancement of the ES (Markus et al. 2003;

Ross et al. 2003). In summary, the literature discussed here provides the

philosophy for hypothesising and thereby differentiating levels of Use.

Continuing from the above premise, statements of occurrences and activity22

from managers interviewed were analysed to demonstrate levels of Use. There is

a mass of statements from the interviews and a multitude of ways to interpret them; so how does one determine whether an instance of Use belongs to a given level, and filter out what does not? To answer this, I refer first to

the definition and conceptualisation of Use explained in Section 2.7.1. Two

important concepts are central to defining the levels of Use: its manner and its

22 The term activity is defined as a process in which a person or organism participates, actually or potentially, involving mental function, designed to stimulate learning by first-hand experience. It also describes a state of being active or an organisational unit’s specific function (Merriam-Webster 2010).


degree. Together, they prescribe the extent or depth of ES Use on which the

analysis is focused. The system, tasks, information, and the context of Use form

important considerations for differentiating between the levels that make up its

depth. These considerations guide us in gathering, sorting, merging, and

referencing evidence or instances of ES Use. Appendix F illustrates how these

elements form the core of the themes to which the occurrences reported by the

managers are mapped. From the mappings, data-driven characteristics of each

level of Use are subsequently identified, and recorded in spreadsheets (see

earlier discussions in Section 4.6.5). From the analysis, we concluded that when managers use an ES, their Use can be at the:

1. Orientate level—using the ES to plan and prepare for the role. The user

focuses mainly on their core processes and on understanding automated processes.

2. Routine level—using the ES within the scope or defined role, following

orientation. The user becomes familiar with core processes and starts to

engage in value-added processes.

3. Innovate level—using the ES beyond its scope or the defined role, adding value to actions from the previous two levels. Core processes are regulated, and the user explores and begins to incorporate more value-added functions into their work processes.

These three concepts represent the three levels of Use. Each level is in turn

further characterised by supporting considerations of Use:

a) Supporting conditions: focus on the circumstances of system Use—

integration of policies, knowledge of co-workers, and social norms of the

environment with systems that constrain or facilitate user activities

b) Supporting system tools or instruments: focus on the features and

functions of the system in Use that constrain or facilitate user activities

c) Supporting information: focus on Use of the outputs of system Use that

constrain and (or) facilitate user activities.
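The classification that follows can also be expressed as a simple coding scheme. The sketch below is illustrative only: the level and sub-level names mirror those detailed in Table 6-2, and no such code was used in the study itself, where the coding was performed manually in spreadsheets.

```python
from enum import Enum

class UseLevel(Enum):
    """Three levels of ES Use derived from the coded interviews."""
    ORIENTATE = 1  # planning and preparing for the role; core processes
    ROUTINE = 2    # Use within the defined role; value-added processes begin
    INNOVATE = 3   # Use beyond the defined role; value-adding actions

# Nine sub-level dimensions (see Table 6-2), keyed by the codes used in
# Figure 6-1: 'a' = supporting conditions, 'b' = supporting system tools,
# 'c' = supporting information.
SUB_LEVELS = {
    "1a": "Relate", "1b": "Differentiate", "1c": "Study",
    "2a": "Adjust", "2b": "Accustom", "2c": "Specify",
    "3a": "Affirm", "3b": "Command", "3c": "Influence",
}

def level_of(sub_code: str) -> UseLevel:
    """Map a sub-level code such as '2b' to its parent level of Use."""
    return UseLevel(int(sub_code[0]))
```

For example, `level_of("2b")` resolves the Accustom dimension to the routine level.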


6.3.1 Levels of Use and Supporting Elements

This section expands on and explains the relationships between levels of Use

and the supporting elements. As highlighted by the approach for developing

appropriate measures of Use (see Section 3.3), defining Use and describing the Use elements in the context of the investigation (see also Table 6-1) represent the first two steps towards a meaningful investigation of its phenomena. Figure 6-1

illustrates the procedural and dependent relationship between the Use levels and

the supporting nature of the Use elements. The shaded column in Figure 6-1

represents the three levels purported in actual ES Use (its depth). In order to

understand the formation of the depths of ES Use better, one must consider the

changes in other supporting considerations in Use. Hence, the model implies

that for researchers to study different acts of ES Use, they must consider the

supporting elements that the user draws on at different levels as Use unfolds

over time.

ES Use Level | Supporting Conditions | Supporting System Tools | Supporting Information | Use Duration
Orientation Level | Relate and replace conditions of Use with another (1a) | Using features and functions to learn about application and process (1b) | Using information for transaction and linear tasks (1c) | Shorter-term
Routine Level | Adjust to and interpret conditions of Use (2a) | Using features and functions to complete routine tasks (2b) | Using information for consolidation and specification tasks (2c) | Mid-term
Innovation Level | Negotiate new conditions of Use (3a) | Using features and functions to rewrite and command routines (3b) | Using information to develop new strategies and to influence a collective (3c) | Longer-term

Figure 6-1: An Illustration of Levels of IS Use and Supporting Elements

Suffice it to say that this model (Figure 6-1) is an incomplete explication of the

relationships between activities and levels of Use; it attempts to illustrate the


breadth and depth of how users interact with ES. The horizontal view (from left

to right of the classification) clearly reflects the manner or the actual activities

completed when a user interacts with the ES, either directly or indirectly through

information produced by the system, hence the breadth. In contrast, the vertical

view (from the top to the bottom of the model) reflects the degree, or the

development, of cognitive states when a user interacts with the system, from

tentative and transactional to more fluent and mature, hence the depth.

The figure illustrates nine possible sets of behaviours based on patterns drawn

from the interviews. Each instance of a Use activity is analysed and grouped

according to the description of key elements: system tools, information, tasks,

and the environment. The premise for this is that users’ cognitive and learning

processes differ across the ES Use levels; generally, to reach the innovation level, supporting conditions must be present and both orientation- and routine-level Use must be completed, as illustrated by the vertical and horizontal arrows in Figure 6-1.

Considerations of Use | Contextual Definition | Examples
System | System hardware, software, and procedures | Software capabilities (all); Sales and Marketing modules (R1); HR modules (R3)
Work Processes* | Activities that a user accomplishes in IS to achieve a business goal | Creating records (R5); coordinating and implementing marketing strategies (R1, R4); authorising payments (R2); configuration (R3); value retailing (R6)
Organisational Environment | Knowledge or rules of action drawn from the organisation or, less so, society at large | Reporting hierarchies (R1); management policies and mandates (all); demand-versus-supply principles (R5)
Information: System, Task, and Organisational Outputs | Data, text, or other results produced by the system as a result of operating on task data or procedures | Yearly and daily financial reports (R1, R5, R6); goods notes (R6); material codes (R5)

Table 6-1: Summary of Supporting Elements of ES Use

* Following the work-systems definition of Use (see Section 2.7.1), users use the system to perform work processes; hence work processes are included in the table as an important consideration of Use.


As depicted in Figure 6-1, three levels of Use, each characterised by the supporting elements23 of Use, are derived to classify the multitude of Use instances reported

by managers during the interviews. They were subsequently broken down into a

set of nine sub-level Use dimensions. A more concise description of each of the

three levels and of the nine sub-level dimensions is given in Table 6-2. The bracketed labels in Figure 6-1 (for example, 1a) thus correspond to the levels and sub-levels of

managerial Use featured in Table 6-2. These sub-levels indicate detailed

instances of user intentions and activity. It is plausible that, when mapping the interpretation and description of a user activity, a combination of (sub-level) states occurs. For example, this could be differentiating between

features of two systems to study the differences between two types of sales

report generated. In other words, one sub-level fundamentally builds on the one before.

Where appropriate, two sub-levels are hyphenated to create a notation (for example, 1a-1b) that represents the process nature of Use.

Levels of ES Use | Description | Sub-level | Examples of Direct and Indirect ES Use Activity
Orientation | Use is not developed; the scope is typically planning and preparation for tasks in the user role. | 1a. Relate | Relate Use to prior personal experience; refer Use to another's personal experience
| | 1b. Differentiate | Note differences in function and feature Use; note similarities in feature and function Use
| | 1c. Study | Confirm outputs of Use against requirements; note the meaning of outputs without reference to the business process; express work as one-off; follow an ordered (often training) process strictly
Routine | Use is structured; the scope of the user role is generally to negotiate routine tasks. | 2a. Adjust | Express adjustment to Use conditions; express the meaning of conditions and how the system should be used
| | 2b. Accustom | Demonstrate how features and functions are used to support daily tasks, referring to an explicit structure of Use
| | 2c. Specify | Define the meaning of outputs with reference to the business process; express work as following a transactional process; follow a more intuitive process to achieve goals
Innovation | Use is developed; the scope of the user role has broadened to non-routine tasks. | 3a. Affirm | Diagnose whether the environment of system Use is working; establish how Use ought to be
| | 3b. Command | Show how Use is completed in a process; state what is done, what else needs to be done, and in what order; give new directions or instruct others
| | 3c. Influence | Persuade others to agree or disagree with Use; explore new ideas within a broadened user role; promote or discourage Use

Table 6-2: Levels and Sub-levels of (Managerial) IS Use and Examples

23 The fourth element, tasks, is embedded in the description of each activity as the intended purpose of both direct and indirect Use.

6.3.2 Use at Orientation Level

Use at the orientation level can be characterised as undeveloped, where the

scope of the user’s role typically involves planning- and preparation-related

activities. Although generally associated with the early stages of Use, Use at the

orientation level is not restricted to this view alone; it includes attempting to complete any task using the ES for the first time, or in preference to another ES. The rest of the section summarises the other Use instances


indicative of Use at the orientation level (see Table 6-3) derived from the

managers’ accounts.

Responding to the questions probing this initial phase of using ES, the

managers interviewed suggested that when a user is learning to use the system

to commence a set of new tasks, they are likely to do so by attending explicit

training sessions (R1, R2, R3, R4, R5) and (or) build knowledge through

socialisation (R2, R3) and experimentation (R6). “I went through thorough training

for six months…there was a senior who guided us in technical and practical

training. After training, we were given practice assignments like running report

analysis”—R2. “She taught me only what she used to do. If software can give me

100 solutions, but if she only knew 20 solutions, then I only know 20 solutions. I

rate the training with the lady 10 out of 10; the company training with RAMCO is

2 out of 10”—R1. During this time, they relate and (or) refer their Use to that of others, or to their previous experience. No doubt,

training is an important form of relating system designers’ and management’s

Use intentions to the user. Firms generally provide some form of systems

training following the appointment of new staff. Software training can also be

provided by the software vendors. In cases of highly customised systems, in-house teams train the users. Some of these in-house teams are mobile, moving

between business units (R4). These training workshops, or on-the-job training, may take six to eight months to complete following the appointment of the user

(R2, R3 and R5). Through the interviews, the formalised training was found to be

generally poor, unstructured, and deemed largely inadequate (by R2, R3, R5 and

R6).

On the other hand, the preference or tendency is for the user to approach other

colleagues or ‘power users’ of ES. According to the literature, power users are

individuals in a firm who are generally familiar with the functions and features

of ES and are able to translate an ES’s technical view—transactions, screens,

and data fields—into how the ES can help perform operational tasks (Strong and

Volkoff 2004). They have either undergone broad user training provided by the company, or have been identified as individuals, or a group, with accumulated, extensive working experience in the company who are familiar with system

operations. Through the interviews, there is strong evidence (not restricted to


interviewees from the same company) that social learning (Boisot 1998) or

learning through socialisation with these so-called seasoned users is more

important. In this process, tacit knowledge residing in the minds of more

experienced users is converted to more useable, explicit knowledge (Nonaka

1994). This process can only be built through extensive contact and trust.

However, the users who rely heavily on colleagues are generally bound by the

knowledge of those colleagues, leading to what can sometimes be interpreted as excessive dependence on the knowledge of key or experienced users during Use

(R2, R3).

Through initial training and socialisation, the user becomes more aware of their

role and responsibilities, although the user is not expected to add valuable feedback on an unfamiliar system at these early stages of Use. This is often

described as top-down coordination (Ross et al. 2003). “It is easy to learn, but

problem comes in front of them while they are doing; training does not prepare

them. I learnt by experimentation”—R6. It is generally easy to recite received

instructions of Use, but the difficulties only surface upon using the system (R6).

In this case, the user will start to differentiate between what they used before

and what they are using now (R2, R5) in an attempt to resolve the problems.

Through these obstacles, users begin to draw out some similarities and

differences in system features and functions. Hence, there is as yet very little value-adding system Use.

Generally, at the early stages of Use, the user draws little value from initial data

and reports produced by the system. “We generate millions of reports daily. It is

common sense what SAP can create, it’s general work documents, you get out

what you put in”—R4. As mentioned earlier, the user is often left with little choice over what the system produces, still following instructions closely and studying the outputs (R4). “…we had to

figure it out how to use (the system) for finance, and marketing and we have to

structure it regarding to the operation”—R5. Suffice it to say, the user is more familiar with the when and the how, but not the why, of Use. “The main

drawback is that we are reacting to the problem in the report, we are not

anticipative”—R6. Users generally lack knowledge in terms of the value or

meaning of the product, or of the value of their interaction with the system to

the larger business process. However, as the number of reports required of the


user and the time spent attempting to study them grow, they begin to understand the circumstances of Use. Table 6-3 summarises the other instances

and explanations of Use activities indicative of the orientation level. The table

illustrates that negotiating these activities enables the user to move to the next

level of Use—routine. In contrast, failing to manage some of these activities

effectively would constrain the user’s work development and performance. In

Table 6-3, Table 6-4 and Table 6-5, ‘oddities’ or grammar errors of the responses

are retained, to demonstrate the authenticity of data and interpretations.

Use Sub-Level*

Respondent Instances of Use Explanation of Use behaviour**

1a R1 I came to know the RAMCO (system) when I came to TP, and I received no training. The person who taught me was this lady transferred here from the outside doing the same process. Training was unstructured.

In his firm, R1 seeks knowledge from another user. Knowledge of system Use is constrained, relayed only by a fellow worker.

1a R3 There is one person who is very much master in RAMCO operation; he got more experience in RAMCO, so we usually go for him.

R3 seeks knowledge from another user.

1a-1b R5 Training was about a month or so, and wasn't structured.... There was no training for SAP. We had to figure it out for ourselves in the design phase during the installation

R5 relates to his own experience to figure out functions of the system.

1b R2 We get real time reports (information) on a daily basis. In MIS (previous system), a report that is prepared by another division is available only on a monthly basis.

R2 notes the differences between the current system and the one they used before.

1c R6 For e.g. If there is some price change of more than 5 per cent, if we are making a purchase order, the system would not let you proceed, we react to this problem.

R6 explains a particular negative circumstance of using system outputs.

Table 6-3: Use Instances at Orientation Level

*1 refers to Use at the orientation level and 1a refers to relating Use. See levels of Use in

Table 6-2.

**Note that mapping user activities to Use levels is made not only on the basis of

transcripts of the interviews, but from the manner in which interviewees describe the

instances in retrospect, and from similar instances reported in literature. Certain

inferences of the interviewees’ intent were made.

6.3.3 Use at Routine Level

Use can be characterised to be at the routine level when the scope of the user’s

role becomes defined and structured, typically involving activities that are

performed with the ES to negotiate routine tasks. The rest of the section

summarises the remaining Use instances indicative of Use at a routine level (see

Table 6-4), as derived from the managers’ accounts.

Having studied the system’s functions and outputs for a specified time, the

user should now be able to adjust to the structure of the organisation, and

familiarise themselves with the context and purpose of Use. However, the circumstances of Use may change. Some examples include human-related factors

(R1), hierarchical structure (R2), and department priorities (R5). “In the previous

hierarchy, our SAP reports on their own divisions only; now hierarchy has

changed and therefore reporting structure has changed. This (change) has an

impact on how we use SAP...for general managers, there are some reports

exclusively authorised to them; we cannot run these reports. Regarding structural

change, we are not able to customise some aspects in our way”—R2. When

conditions are new or when they change over the course of time, the user needs

to adjust. Some examples of adjustments include protocols (R5) and hierarchy

(R2). “of course… There are protocols, you’ve to follow some protocols in doing

things...but if I have issues that are very important, I can’t figure out; I’ll go first to

the finance people. They’re better…that will be my first point of contact”—R5.

Successfully negotiating the adjustment means the user has more time to get a

sense of what the role entails, and what the system features and functions mean

to them in this role. “From a finance perspective, the system has helped to reduce

fraud. Because there is a logical flow, where money come from, which account it

goes to, when and how is it paid”—R1. On the other hand, some users may resist

the new processes (R1) or ‘best practices’ in favour of old or other ones, sometimes

citing misalignment of goals and poor-quality outputs, as reported in the

literature (Hakkinen and Hilmola 2008).

As the user becomes accustomed to the functions and features of ES that

support daily tasks, they become familiar with what the system and its modules

were designed to achieve. In essence, a user would also relate what they are

doing with what the rest of the organisation is doing. “We run SAP for producing

sales related report, pre-specified reports in SAP and customised reports for our

company in SAP. From SAP, we get real time information on a daily basis. Like

today 12 July, what is the position of our division, our branch as compared to

others? In the past reports prepared by another division are available only on a

monthly basis...In between, we want to compare last month's total sales of our

division with rest of India today on 13th July, we can use SAP. SAP is the

backbone... from practice and experience we are well versed in all types of

reports”—R2. Modules support distribution of material across production

facilities and business units (R4), define codes that record and identify all raw

materials and finished goods (R5), enable payment authorisation and checking

(R1), include human resource functions (R3) and individual reporting (R2, R4).

The user would generally find that their ES are able to provide customised and

real-time data, for example viewing the material stock, shipping details, and

contacting the customers who have picked a product and placed an order (R4).

In summary, for the user who is accustomed to the working conditions and

system functions, their activities generally demonstrate the use of features and

functions to support the organisation’s daily routine tasks, hence requisite Use.

The well-trained user will adjust to the conditions, systems, and outputs faster,

and thus, when required to do so, is able to specify the meaning of their work. This means that by this stage, the user is also more likely to be involved in

consolidation tasks, be able to derive new meanings from outputs, and be able to

provide feedback on certain processes. “We use SAP for few purposes: firstly,

when we receive goods we make good receipt note in SAP. Store transfer note in

SAP. SAP Use to manage how much goods, stock sold in one particular store, what

are required in different stores. Indenting in SAP, replenish stock or work orders

using SAP. We get all categories of information in real time”—R6. “Every month we

run the reports and tell HQ where they should be focusing their efforts. This report

helps us to devise inventory management strategy for example. We provide the

product related information to pharmaceutical, medical representatives across

India and reporting totals to HQ”—R2. Drawing evidence from the interviews,

some examples include using information to specify corporate strategy (R2),

data-prompted routines like additional payment checking (R6), and user-

initiated feedback mechanisms, triggered both by problems in the data produced

and by system warnings when inconsistent data are entered at a location (R5).

Elaborating on each of these, feedback is important for both the user and the

organisation; not having a structure that accommodates feedback spells longer-

term restrictions. “The department has no book, it’s all individual initiated and we

have to take responsibility...this is something very bad. We should not try to call

the other departments for help, every time if we have a problem. Everyone writes

their own processes down, this is what my boss tells me. I have a diary so I don't

have to ask problem…if I lose the diary that’s it, the diary is more important than

anything else right now. We should not try to call the other departments for help

every time if we have a problem. What I did is I wrote down the process each and

when I do it three to five times, you automatically know”—R1. Supporting this claim is evidence that new knowledge of system Use is tightly guarded when there is a

lack of incentive to share (R1), or when feedback is generally left on the shelf

(R5). As predominantly operational managers, our respondents are essentially

required at some level to manage people, operations, and processes (see section

6.2.3). Thus, they are ultimately responsible for how a collective (see Burton-Jones and Gallivan 2006, p. 661) functions to achieve a common goal, and to

manage feedback from a collective. Exemplar work that managers do as a part

and (or) a result of consolidating information produced by the system includes

streamlining processes, such as transferring material information from one site

to another by using system-generated codes (R5) (thus deriving an enabling

coding structure). They may coordinate group-designed manuals derived from

prolonged Use to implicate how the system ought to be used (R3). R6 sums it up

by explaining how reports can potentially help negotiate new routines. “Do you

feel restricted by SAP? No, I want this system because I know if the system gets

streamlined, we can remove lots of manual work, my admin people can get reports

from SAP, my accountant can report directly to the head office. My reporting load

will be reduced. It is not restrictive in that sense”—R6. Hence, one can clearly see

evidence of the potential for value-adding Use in the routine phase.

In summary, when a user demonstrates making adjustments to their conditions

of Use, being able to negotiate and understand routine tasks, and offering

feedback to a collective, it is indicative of Use at a routine level. Well-trained

users are then more likely to accept the system, thus buying into it (Yusuf et al. 2004). Other signs of Use at a routine level include a user specifying

a meaning to their outputs, explaining to others how to produce similar ones, and being able to report on or diagnose the status of this work. However, it is

quite apparent from the responses (see Table 6-4, in particular R1, R2, R5) that

all respondents experienced frustration and confusion, and at times felt a general lack of ownership and transference of knowledge. This general attitude is similar

to those reported in the shakedown phase when implementing ES (Al-Mashari

and Al-Mudimigh 2003; Ross et al. 2003). It is difficult to specify how long users

generally spend on a routine level of Use. Nevertheless, one can assume that the

amount of time it takes to reach the next level (innovation) is directly

proportional to the time spent adjusting, becoming accustomed, and specifying

their role.

Use Level

Respondent Instances of Use Explanation of Use Behaviour

2a R1 Company TP is a private group; there’s 160. The majority of people are retiring people about 55 years old. The problem was that many people are rigid, they cannot be patient with the system, they have no idea of the system.

R1 explains a particular Use context (the background of users) in his firm.

2a R5 There is a communication gap between techno-commercial people and the scientist that I’m not able to understand exactly. So, this creates miscommunication.

Priorities between departments hamper communication in Use.

2a R6 When accountant making GRM (goods receipt), he ask me, I will have a solution, if I don’t have solution, I will ask my superior and we will get solution.

R6 understands the hierarchy to get answers.

2b R1 Accounting entry pass through the system, it helps minimise fraud...but the job is repetitive. In payments from supplier, I have a voucher for you, I authorise a payment, I will prepare a check, the system happens at the back end, checks are duplicated... it’s routine, no innovation, no change. Quality of report is also routine. I haven’t added value to the system I feel because of my limited knowledge. I get more or less the same thing because I haven’t to go above my routine role.

An advantage of the system to the business process is noted by R1. Routine Use of the system for payment authorisation is described.

2b R1 The process is as such: say we want to procure goods; we have a code for the supplier. RAMCO helps tracks materials needed when quantity in shortage. Materials department handles this (procuring of materials) using RAMCO, which supplier, what quantity for setting terms 60days credit, 30 day credits, when payment due. They will send me this voucher which has the payment date. The checks are prepared, and then payment is made. This is what we do daily.

System is used to check stock availability and to procure goods when required.

2b R3 RAMCO now is used for HR purposes only. So in RAMCO we enter the employee record...we record the relevant number to each employee. Besides like a name and number given to an employee, whatever salary that they make, their attendance and their appointments are recorded.

The system enables the manager (R3) to record, store, and produce employee HR information.

2b R4 The other one is a distribution module which supports facility at Jamnagar, Hyderabad, Bangalore…5 to 6 sites across India. We can see the distribution failures; which material being produced; we can see the distributions across the country. Yes...if we want to view material, contact the customer, know where to distribute the material using the system. Yes…it’s totally customised system.

System is used to track material orders.

2b R5 You learn and you know this has to be done that has more than the initial parameters. For example, now we have created a code for our materials. If the raw material is from the outside, it will have different code. Or if the material is manufactured from outside and if you have used a different system from an outside facility, they will return to our company with a separate code. So, by this the code it’s identifiable and recorded. So the code is important to know where to go, where it from, how much, everything recorded based on material. The system becomes very efficient.

System is used to generate coding structures.

2b R5 Before that you don’t know about the system, and after a while, I found it’s more easy for me, I know where to go, I know about parameters, and recently I can conduct the parameters, I can select and I get precisely what I want. Integrated, of course yes. Real time, of course yes. I don’t know about standardised.

R5's comments suggest appropriateness of the system towards the intended design.

2c-3a R3 So now we having data to in doing ongoing launch, so second we have a social working focus on implementation in the organisation. Because each of every purpose must well establish and well define and everyone knows that we have a transmission, distribution and addition in establish all of this. So we adjust the system, all the platform should be unified, and make sure properly define. So right now we’re going to develop preparation mainly focus on manual, and we’re in the last stage of preparation to that manual.

The group uses the system and its outputs to design a manual, negotiating how the system ought to be used.

2c R6 Although it’s networked, you still need to make a call or send mail to check if item went through. System is not indicative of whether the person is there or the person will see it, I have to make a call or text, when I make outbound delivery.

Data prompts form another, additional, routine to check outputs.

2c R6 Yes, SAP was in place. For each store created, there is a particular site code that is created by head office, whenever we open a store, we use the site code, then we install the system, the system is networked with the head office, then we key in the site code, all the stocks and the store is streamlined within 1.5months, take some time for the store to go live.

Departments use system-generated site codes to streamline processes.

Table 6-4: Use Instances at the Routine Level

6.3.4 Use at Innovation Level

At the innovation level, Use is developed and the scope of the user’s role broadens from routine to non-compulsory, non-automated, and often non-specified tasks.

Use at the innovation level is characterised by user activities that attempt to affirm new circumstances of Use, and that promote and (or) partake in reconfiguring

system functions to influence new strategies. All of these activities aim to add value to existing Use. In this study, non-routine tasks that generally add value to existing business processes are emphasised. Value-added Use

implies Use of IS for purposes that build upon and extend beyond the targeted

capabilities and benefits of the IS. This is the premise for benchmarking

innovation in Use. The rest of the section summarises the remaining Use

instances indicative of Use at innovation level (see Table 6-5) that came from the

managers’ accounts.

Building from ongoing and routine operations, the user soon adapts to

their role and the organisation’s socio-economic system. The user has not only

acquired knowledge on the system’s capabilities, functions, and ability to meet

targeted outputs, but they are now able to make informed decisions on the

overall fit of the system and outputs for the organisation. From this, the user

can use their knowledge to affirm the culture of the organisation, their

ownership of the process they take charge of, and (or) that of others in the organisation.

A case in point of affirming the circumstances of Use for innovation is how the

user and their organisation treat their acquired knowledge from routine Use.

“We sit down to discuss our problems. Wherever new things come, for example

new transaction code to do a certain function, they do a PowerPoint presentation

and send to all stores and all stores can use the same way of doing things.

Besides PowerPoint, there are demo training and training modules”—R6. The user

can either choose to attempt to share their knowledge of Use (R2), or to protect

this knowledge (R1). On the other hand, the organisation can agree on the

knowledge of Use (R4) or, implicitly, disagree with it. The treatment of new knowledge is just one example of user activity that stems from routine-level Use, primarily

aimed at negotiating and announcing new conditions of Use, business practices,

and ownership of the process to leverage the system.

The capacity to establish affirming conditions for innovation is crucial and dictates

the changes required in the Use of IS features and functions. In addition,

managers at this stage should generally have a good command over how the IS

functions, and be able to direct the necessary enhancements. Some examples of

enhancement of ES reported at the organisational level to fit evolving needs

include reconfiguration of the current ES version, new module settings, and

planning for upgrades (Nah et al. 2001; Markus et al. 2003; Ross et al. 2003). To

reach organisational-level influence, in the case of Use activities at the individual

level, the focus is on instances of the user leveraging the system and its outputs

to rewrite and command routines. “Whenever you talking to person or client, you

need supporting information. Our role is coordination and implementing marketing

strategies for marketing our products across India. SAP is the only tool helping

that, if the person doing well, we will use a tone, if the person is not doing well,

we will use a different tone. It’s a backup for us. It gives us lots of confidence

through the information it provides. There is no denial, for example if the

department has not been meeting sales targets. We try to motivate people, rather

than being disheartening, so they achieve more. Whatever reports, decisions we

make, issues with products, we encourage them to send feedback, then we

contact them”—R2. Some examples from a manager’s perspective include

leveraging system reports to strategise interactions with others (workers, other

managers, and clients) (R2), exploring the ES for new reporting formats (R2), and

eventually influencing others to use these new findings about the system (R2).

Finally, new ideas of and from Use must be allowed time to stabilise and be

instantiated across an organisation or department. Although managers need to

use new findings and knowledge gained in Use to influence others, suffice it to say that not all suggested improvements are judged as useful, nor

are they all adopted within the organisation (R1, R2, R5, and R6). “Whenever we

are free, we will try to produce different formats, representations of same reports.

We have an IT help desk, we suggest these improvements in SAP. We suggested

this 1-2 months ago. They say they appreciate but the have to take it to the

corporate level and then they will get back to sales. They say they have received

the same suggestions from other departments as well”—R2. “(Our) IT dept never

asks. Only when problem happens, I take a snapshot of the screen and I send to

them. They go to the back and they fix it, and they call you to say it’s working.

They never ask how we can add value to it and what other problems we have”—

R1. Specifically, the manager and the user must attempt to leverage system

outputs and reports to demonstrate that enhancements are required (R1 and R2).

“By representing the same data in different formats, I can find out what are the

regions of sales return or if a region has lots of expiry for inventory, I can ask them

why stock is left there for so long. We can see the branch is not doing well so I can

now ask the branch to offer some discounts to get rid of the groups”—R2. More importantly, managers and users as a collective must

conscientiously agree on exploring and practising new competencies (R4). This

also depends largely on the contextual factors in the work environment to

complement their actions. Finally, managers should at this stage be able to

identify what new training is required (for the department) for developing new

strategies for effective Use (R6).

Use Level

Respondent Instances of Use Explanation of Use Behaviour

3a R1 There's a separate way I do my work now. Even the lady does her own way. My steps are defined and tells me how I can create efficiency, nobody tells you. It’s very subjective.

R1 generates his own knowledge of Use over time.

3b R2 For last 1 year, I was monitoring sales report format, there was only one credit report format which gives us credit product wise and HQ wise. I started running reports 3-4 times each day in different formats, from these reports, I can find out which department was hampering my percentage of sales and what were the reasons.

User explores the system for new reporting formats and routines. New knowledge is used to promote new strategies.

3c-1a R4 My colleagues have right to ask me, and I will help them. I don’t need to go to training team. If you ask me is there a culture in the company, that people always share knowledge willingly and participate on? Yes, most of them share knowledge, give feedback.

R4 suggests how sharing culture complements new knowledge.

3c-1a R6 I want to go beyond. Now I have operational knowledge, I want to go for technical knowledge. I want to be configuring system. I want to configure my own system. I want to take course in SAP and start another job.

R6 illustrates a need for a particular type of new knowledge.

Table 6-5: Use Instances at Innovation Level

6.4 Discussion

This section discusses the implications of classifying Use activities for both

practice and knowledge. As mentioned earlier, for practice, the classification

of chronological events recalled by managers of their experience with ES

provides a number of principles with which to identify and manage ES Use. For

knowledge, the framework helps to support some of the key findings and

hypotheses from the quantitative investigation.

Classifying Use instances into its levels shows that the effective Use process

requires users to build constantly from, and on, each level over time. Findings

from the study suggest that outputs from each level convey emergent changes

that should be adapted and institutionalised within existing practices, and are

continuously drawn on by users as Use unfolds. A summary of the implications

of the classification (in Table 6-2) for the study is below. The classification—

1. offers a theoretical24 lens through which to study the characteristics by

which IS user activities are recognised;

2. attempts to exercise and demonstrate guidelines towards studying the

multilevel nature of Use as postulated by Burton-Jones and Gallivan

(2007);

3. demonstrates that Use is a continuum of activities; it is a collection of

dynamic and iterative acts performed for an intended purpose and

facilitated by a set of circumstances;

4. demonstrates that Use activities incur a specific amount of time and effort

spent by an individual in negotiating all supporting elements of Use;

5. postulates that the cyclical processes of Use have important mediating25

effects between the context of Use and the intended outputs;

24 A theory that classifies and categorises phenomena is also known as type 1 theory (see Gregor 2006).

25 We take into account that Use in a closed loop can be both an antecedent and a consequence, although it is not the intention of the classification to do so (see results in Section 5.4.3).

6. although preliminary, has potential implications for categorising and

managing complex ex post ES implementation user activities; and

7. generates an alternative, staged perspective on the extent and nature of

direct and indirect Use; this is a reflection that, according to DeLone and

McLean (2003) often eludes IS success studies.

This understanding of the implications of the framework is linked to key

statistical findings from the quantitative investigations to draw further emergent

issues, and eventually to contribute to a deeper understanding of ES Use (see

Figure 6-2). The leftmost column of the figure describes key qualitative findings, while the rightmost column describes the statistical findings. The dotted arrows

connecting the columns show the predicted relationships between the findings,

and illustrate the sequential nature of how Use brings impact to the individual.

Qualitative Findings (left column):

1. Time spent during training, socialisation, and experimentation at the orientation level promotes requisite Use; ES Use becomes dictated by external sources and prior experiences.

2. As the user’s role adjusts to protocols, hierarchical structures, and organisational priorities, users negotiate routine tasks; the user specifies meaning to their work processes, is able to diagnose the status of their work, displays work ownership, and prompts value-added knowledge-sharing mechanisms.

3. Use aims to affirm new circumstances of Use, and to promote and (or) partake in reconfiguring system functions to influence new strategies; new competencies become stabilised and instantiated across an organisation or department.

Statistical Findings (right column):

1. Frequency of Use is useful to measure requisite Use but not sufficient to predict individual impact; attitude of Use is an important dimension in Use.

2. Exploratory Use is lower at the beginning stages and becomes higher at latter stages; depth of Use yields a high variance on individual impact over time.

3. Use becomes an important mediator between the quality of IS and its impacts on the individual.

Figure 6-2: Triangulation of Qualitative and Quantitative Findings

6.4.1 Emergent Issues

From the above triangulation of qualitative and quantitative results, three somewhat paradoxical (relative to common knowledge) topics extracted from the cross-analysis of interview data are discussed: competence through social learning,

familiarity breeds contempt, and process ownership and buyoff. To arrive at

these topics, a re-analysis of Use instances is mapped across levels, across

respondents, and across elements of Use. Each of these three messages stems

from observations in each level respectively: orientation, routine, and innovation.

Although we believe that these factors have direct relevance to how users score

systems, they warrant further in-depth investigation. This discussion also raises

problems or pain points facing management, implementers, and users of the

systems if levels of Use remain unmonitored.

6.4.1.1 Competence through Social Learning

The close ties between social learning and formal training when facilitating

system and (or) task transitions of users are far-reaching. Inexperienced users

generally tend to make only a broad expression of the technology they are using,

or signal a transition from a previous task; thus, they depend on other means of

learning and acquiring competence. During this time, socialisation influences

users, who do not attempt to relate their own experiences but only those of others, do not yet build their own interpretations, and do not pass judgements. The

investigations demonstrate the importance of having senior colleagues and good

viral networks. Over time, users can gradually choose to combine existing

knowledge with others when they think it is appropriate; Use knowledge to cover

shortfalls in existing knowledge; substitute knowledge that could be related or

unrelated, and (or) form a preference for other users’ knowledge. Problems

arising from this include excessive dependence on other key users and

comparisons with other solutions, not allowing their own interpretations or

judgements on Use elements to become stabilised. Organisations and

departments that do not establish good end-user training, proper routines, and

(or) support systems often leave employees feeling short-changed, left to fend for

themselves, and sometimes forming negative coping mechanisms (for example, keeping a

personal diary).

From this, managers must look to leverage users’ support systems as an

alternative, rather than relying solely on user training. Blind adherence to

standardised training sessions is insufficient to create a strong mandate to

support end users. Managers must recognise the need for building good viral

opinion (where awareness is raised through conversations, presentations,

messages, and images and so on in a self-replicating viral process), and

acceptance of using the system among experienced and new users within a

reasonable adjustment window. This mandate must include seniors or mature

users helping others to create an account of their use of systems and aid them

in determining their required set of technology and non-technology elements. To

succeed, managers need to realise that users are rational beings and account for

the degree of objectivity, ability to reason, and the development of social bonds

within a support system.

6.4.1.2 Familiarity Breeds Contempt

Humans integrate a wide range of technological tools into everyday activity

for work, entertainment, and communication (Carroll 2002). Even systems with

similar intended purposes can vary according to their applications and the

context of Use. Systems for work also vary, and users tend to build familiarity

with a work system—but not necessarily with its Use over time—based on their

personal experiences with other similar but non-work-related systems. ES are

complex systems that have seen many successes and equally many failures for

the firms that adopt them. Without a strong mandate of Use and (or) positive

viral opinions, users tend to find it hard to develop an appropriate routine or

sequence of processes within these systems. Our investigations show the impact

of this: users’ scope of activity becomes defined and narrowed, leaving users to

recite instructions rather than electing to understand

them. This results in excessive questioning by users, a lack of confidence, and

subsequent criticisms of systems and structure. Users resist new processes or

‘best practices’ in favour of the old or of others, citing misalignment of goals and

poor-quality outputs. Thus, it is easy for users to pass personal judgements on the

promotion, rejection, or disapproval of a system. Over time, these users generally

find excuses not to use the system, and may soon become negative agents.

In fact, when we asked managers to score the ES on a scale of 1 to 10—in terms

of its overall fit for the manager’s role and performance—the scores varied (see

Table 6-6). We recorded a median score of 6.5. There are some apparent

correlations between the respondents’ scores and their responses mapped at the

routine level (see discussions in section 6.3.3). Respondents who gradually

found difficulties in adjustment—having to rely on other colleagues more than

training (R1, R2)—thought that strategies could be improved (R5 and R6). Those

who found tasks and deliverables generally very routine over a period of Use (R2,

R5), and who faced trust issues with the system (R5, R6), generally scored the

system average or below average [26]. Acknowledging that other variables are

fundamentally at play: if the system does not provide a good means for users to

interact with it, and to trust it after some time, users will start feeding into, and

receiving from, the system less meaningful data. The managers’ scores generally reflected that there is

room for improvement in terms of the overall fit of the system for their

organisations. This leads us to believe that management should conduct

periodic and extended evaluations (see Table 6-6), in each instance enhancing

the principles of design [27] of the system.

Respondent Score Concluding Comment

1 6.5/10 We need more flexibility for users

2 6/10 RAMCO made some processes difficult but it’s good enough.

3 8/10 RAMCO is little bit traditional, and conservative.

4 8/10 I can say that the system is convenient.

5 4/10 Of course (it’s effective). Depends on how you use it. I don’t depend (on the system) much.

6 6.5/10 I know SAP is good but if we were to use SAP, it will take time.

Average 6.5/10

Table 6-6: How Managers Scored their System
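As a quick check, the central tendency figures reported above can be reproduced from the six scores in Table 6-6 (a minimal illustrative sketch; the thesis reports these figures directly):

```python
from statistics import mean, median

# Respondent scores from Table 6-6 (respondents 1 to 6)
scores = [6.5, 6, 8, 8, 4, 6.5]

print(median(scores))  # 6.5, the median reported in the text
print(mean(scores))    # 6.5, matching the table's average row
```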

6.4.1.3 Process Ownership and Buy-offs

The purpose of interacting with an ES is embedded within a business process—

where it represents a collection of related, structured tasks that serve a

particular goal. Thus when innovations are discussed here, both business

process-oriented and software-oriented innovations and (or) improvements are

[26] On a scale of 1 to 10, a median score of 5 or 6 constitutes an average. Seven and above constitutes a good score. Less than 5 on the scale is a low score.

[27] The study of interaction between people (users) and systems in terms of design, evaluation, and implementation (human-computer interaction) is an important and developed stream of research in IS; it has potentially strong links with the current thesis.

counted. From the managers’ responses, positive signs of reaching innovation-

level Use are when users interpret their own strategies of Use and explain to

others how they may be used, run a diagnosis on the system, readily offer

feedback to specify whether work structures are working, and (or) report on

system status, and make queries. Subsequently, users then have time to do,

among other things, exploration and smart improvements—discovering previously

undiscovered uses for the system and its outputs (Ross et al. 2003). Evidence

that users want to use the system more and make it productive is that they make

requests for enhancements.

However, management should note the paradox that if enhancements involve or

result in major changes to processes or alterations to system design, this indicates

issues in the initial design and requirements phases, and hence adds less positive

value. Generally, well-trained users will adjust and accept the systems

and work structures faster: ‘buying them off’. They are therefore more likely to

demonstrate genuine attempts to improve the work environment and provide

useful feedback. The reason is that they are able to juggle the changing elements

of Use and different parameters that feed continuously into their daily Use. A

simple message is that ideally workers should be convinced to feel a sense of

belonging and commitment to the organisation.

6.5 Summary

This chapter presented a conceptualisation of levels in Use in an attempt to

‘connect the dots’ between managing and measuring the depth of Use. To

develop the classification, we examined detailed managers’ accounts on ES Use.

We sought a natural pattern of Use and found it from the analysis, which

complements the quantitative results from the earlier findings. The

characteristics of these levels are first that they are inclusive and build upon one

another over time. Second, they include a set of observable behaviours that are

classifiable into nine dimensions underlying the three levels: orientation,

routine, and innovation. Further, we extracted three related notions from

analysis across responses and mapped them into each Use level: social learning

competence, contempt from familiarity, and process ownership. These factors

further enable and (or) constrain Use behaviour identified in each level.

Classifying and analysing the spectrum of contemporary Use behaviour allows

us to build a process of Use and thereby explain how users would eventually

score Use. Recognising the levels of Use through detailed instances describes the

logical relationships between other aspects of Use and measurement dimensions

covered in prior model testing and variance-based phases, and thereby answers

the ‘how’ questions—in this case, how circumstances of Use influence Use, and

how Use in turn affects impacts. A set of four standards (credibility,

transferability, dependability, and confirmability) was adapted to judge and

account for the validity of the responses and findings in this qualitatively

oriented research. Finally, the implications of the conceptualisation of Use levels

are summarised. Broadly, the benefits of understanding how to recognise levels

of Use are that, for practice, it sheds light on how to manage it; for research, it

subjects the concept to further theoretical scrutiny; and for this study, it aids

the triangulation of findings in the quantitative and qualitative phases of the

research. Through the triangulation, three emergent issues are raised that

further deepen the understanding of the scores and responses from our research

participants.

Chapter 7: Conclusions and Outlook

7.1 Introduction

The research problem defined in this thesis refers to the lack of a deeper

understanding of the Use phenomena in IS success, which arises from poor

theoretical treatment and validation, and often yields conflicting results

(summarised in Section 1.3). Given this problem domain, the study addresses

three core questions regarding Use: (1) how can one define Use for IS success? (2)

what are the salient dimensions and measures of Use for IS success? and (3)

what is the role of Use in IS success? (described in Section 1.4). Answers to these

questions, constructed throughout the conduct of the study, present two distinct

but related sets of findings. While variance-based findings focus on the

measurement approach, process-based findings suggest how to interpret the

measurement results. The triangulation and discussions of these sets of results

explain the predictive rigour of Use in IS success scenarios and research models.

The rest of the chapter reflects on the new measurement ontology and its

application in IS success. First, the implications for a theory of Use, derived from

the key considerations of the ontology are presented. Three principles are

emphasised: elements of Use, the representation of Use, and Use types.

Establishing a deeper understanding of these principles contributes to the

development of a theoretical realm of Use. From there, researchers are urged to

establish a checklist against which to study Use. Derived through the study

findings, the checklist prescribes a simple set of considerations for a study

design involving Use. Next, we outline the limitations of the study and from there

highlight potential avenues of extending this research. The chapter concludes

with a concise summary of the key contributions of the thesis to both research

and practice, and positions these contributions against the relevant literature.

7.2 Theoretical Contributions to Explaining Use

Generally, a theory seeks either to analyse, explain, predict, explain and predict,

or prescribe action; most IS theories can generally be classified into one of these

theory types (refer to Gregor 2006, p. 619). Calls continue for IS researchers to

develop, extend, and contribute new theory (Weber 2003; Gregor 2006).

According to Whetten (1989), an extension of theory should add to the body of

knowledge; that is, not just rewrite existing knowledge but rather, it should

contain principles to guide future research usefully. Where little is known of the

role of Use in IS success, this study derives considerations for a theory.

Principles derived are not highly predictive but constitute a general conceptual

system for analysis.

A set of principles is presented to urge researchers who wish to study and (or)

include the Use of contemporary IS to consider and advance them. These

principles form the ‘basic building blocks for theory’ (Gregor 2006, p. 620) in Use,

including the constructs, relationships, purpose, scope, and means of

representation. These principles are neither exhaustive nor mutually exclusive but

they are circumstantial, and subject to interpretations by researchers in

particular domains. They contribute to existing knowledge on the topic of Use.

Principles are further compared to similar findings in the prior literature to

elucidate the value of the current study’s findings. The rest of the section

elaborates on these principles.

7.2.1 Interaction with Core Elements of Use

Use constitutes more than just the physical system and its users. Different

types of IS (McAfee 2006) promote different types of Use (see Section 3.4). The

core elements of Use described by the definition include systems (hardware,

software, and procedures), business processes (activities that a task doer

accomplishes with the system), and information (data, text, or other results

produced by the system) that exists in an organisation (the contemporary

environmental context). When using complex systems like an ES, we are not

merely using the system, we are really trying to interact with and integrate all

the other elements of Use as well (see Section 2.7.1).

Therefore, Use is a result of people consciously and actively interacting with

IT systems and deriving information from them upon completion of work processes;

this typifies the continuous iterative process in individual Use behaviour

envisaged by Schwarz and Chin (2007), Avison and Elliot (2006), and Orlikowski

(1992). Interacting with complex and evolving systems brings inevitable change.

As reported in the literature (and as discussed in Section 3.4.1), and empirically

supported by the managers’ data collected, an ES often epitomises

organisational and structural change. Users (or in this case course participants

and managers) are expected to change as much as the systems; other non-

technological elements themselves are integrated in users’ work practices. We

use a work systems theoretical approach to explain the dynamics of interacting

Use elements. This dynamic view of interactive Use, where system typology plays

a central role, often eludes researchers; prior definitions of Use typify a

more passive relationship. The scope of core elements and the dynamic

relationships between these core elements prescribe a rich definition of Use.

7.2.2 Representations of Use

This discussion urges researchers to think about how they would want to study

Use. Three possible ways to represent and study Use are supported in this study:

an antecedent, a consequence, and an event in a process. These streams carry

differing and in some cases conflicting meanings of Use (as discussed in Section

2.5). Correct specification of the conceptual representation of Use in a theoretical

model adds to its definition and, more importantly, helps decide on validation

techniques. For example, of these representations above, the last (an event in a

process) suggests a mediating effect of Use. Use is suggested as an event in IS

success. This depiction suggests Use can be both an antecedent and a

consequence (DeLone and McLean 1992; Goodhue 1995; Benbasat and Zmud

2003; Gable et al. 2008 and others). As an antecedent, some studies (for

example, Rice 1994; Igbaria and Tan 1997; Devaraj and Kohli 2003; D'Ambra

and Wilson 2004; Jain and Kanungo 2005; Burton-Jones and Straub 2006 and

others) suggest that Use leads to downstream outcomes (such as impacts or

performance), thus determining how IT benefits individuals and (or)

organisations. Consequently, some studies (for example Davis 1989, 1993;

Segars and Grover 1993; Gefen et al. 2003; Venkatesh, Morris et al. 2003 and

others) suggest that actual system Use is a function of behavioural intentions.

On the other hand, empirical evidence supporting the mediating effect of Use is

still relatively scarce (only Boontaree et al. 2006a was found). The study has

empirically demonstrated (in Section 5.4.3) that Use is an important mediator.

Statistically, ES Use is an important mediating variable of the impacts of ES for

teaching and learning outcomes. Conceptually, we show that ES Use has a

mediating effect on the goodness of management operations in ES.
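The mediation logic described in this section can be illustrated with a small simulation (a hypothetical sketch only: the variable names and effect sizes below are invented for illustration and are not the structural model actually tested in the study). The indirect effect is the product of the X-to-mediator path and the mediator-to-Y path, and for ordinary least squares the total effect decomposes exactly into direct plus indirect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical data: IS quality (X) -> Use (M, the mediator) -> individual impact (Y)
X = rng.normal(size=n)
M = 0.6 * X + rng.normal(scale=0.5, size=n)            # a-path: X -> M
Y = 0.5 * M + 0.1 * X + rng.normal(scale=0.5, size=n)  # b-path plus direct effect

def ols(design, y):
    """OLS coefficients, with an intercept prepended to the design matrix."""
    A = np.column_stack([np.ones(len(y)), design])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

a = ols(X, M)[1]                                    # effect of X on the mediator
b, c_prime = ols(np.column_stack([M, X]), Y)[1:3]   # M -> Y, and direct X -> Y
c_total = ols(X, Y)[1]                              # total effect of X on Y

indirect = a * b  # the mediated (indirect) effect
# For OLS with intercepts the decomposition is exact: total = direct + indirect
print(round(indirect, 3), round(c_prime, 3), round(c_total, 3))
```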

7.2.3 Levels and Types of Use

Use is classified and measured differently. This research developed two

frameworks: one informs the methodology for selecting appropriate measures

(see Section 3.3) and the second informs a methodology to classify types of Use

(see Table 6-2). Researchers in their past work developed similar frameworks for

classifying direct Use behaviour and related activities. Generally, these

frameworks use a variety of different lenses, or concepts, to help to distinguish or

decompose user activities derived from different sources into their levels or

categories. For example, a recent publication by Burton-Jones and Gallivan

(2007) demonstrates how researchers can break down the behaviour of users

and a group of users observed in a study to conceive the multilevel nature of Use.

The levels described by Burton-Jones and Gallivan (2007) are individual, group,

and organisation. In the domain of knowledge management, Nonaka (1994)

introduced four modes to which users convert tacit knowledge to explicit

knowledge and vice versa. In the light of activity theory, Kaptelinin and Nardi et

al. (1997; 1999) among others have extended the theory to classify Use activities

into activity, action, and operations (see Figure 6-1). Studying how users

appropriate structures of advanced systems (like GDSS), DeSanctis and Poole

(1994) introduced a comprehensive schema of classifying Use activities into nine

categories of appropriation moves.

Drawing lessons from DeSanctis and Poole (1994), a classification which

constitutes three levels of Use—orientation, routine, and innovation—is

developed and further broken down into nine possible sets of Use behaviours

that can help in understanding the extent of Use (Figure 6-1). The three levels

and their decomposed set of behaviours are sufficiently characterised and

differentiated as the key elements of Use. Not only do they describe the

granularity of each level, but how one sub-level fundamentally builds on the

next is also described by the core elements of Use. Above all, the classification (1)

offers a theoretical lens for categorising and managing complex ex post ES

implementation user activities, (2) demonstrates that Use is a continuum of

activities, (3) postulates that the cyclical process has important mediating

effects, and (4) comes from managers and has implications for managers.

Table 7-1 summarises the three major Use principles discussed, the

recommendations for practice, and the supporting literature.

Considerations Recommendations References

Core elements and the incorporation of IS

Researchers should consider the IS in Use, business processes completed in the IS, information produced, tasks completed, and the users that exist in a contemporary environment

Lean—System and Information or System and Tasks (Davis 1989; Vakkari 2003); System and User (Straub et al. 1995; DeLone and McLean 2004; Sabherwal 2006; Petter et al. 2008)

Rich—System, Tasks, and User (Burton-Jones and Straub 2006; Burton-Jones and Gallivan 2007); or

Very Rich—System, Organisational Environment, Tasks, and User (Lee 1999; DeSanctis and Poole 1994; Avison and Elliot 2006)

Researchers must consider that the extent of Use of IS for automating work processes is different in various contexts and for different systems

Functional Systems versus Networking Systems versus Enterprise Systems (McAfee 2006)

Core Competency versus Value-added processes (Porter 1996, 2001)

Work systems (Alter 2006)

Representations of IS Use

Researchers must consider whether Use is an antecedent, a consequence, or a mediating variable (or mediator) in their context

Antecedent (Rice 1994; Igbaria and Tan 1997; Devaraj and Kohli 2003; D'Ambra and Wilson 2004; Jain and Kanungo 2005; Burton-Jones and Straub 2006 and so on)

Consequence (Davis 1989, 1993; Segars and Grover 1993; Gefen et al. 2003; Venkatesh et al. 2003 and so on)

Antecedent and consequence (DeLone and McLean 1992; Goodhue 1995; Benbasat and Zmud 2003; Gable et al. 2008 and so on)

Mediator (only Boontaree et al. 2006 found)

Levels of Use

Researchers should consider the orientation, routine, and innovation levels, or the like, to account for the continuum of Use activities

Multilevel nature: individual, group, organisation (Burton-Jones and Gallivan 2007)

Activity levels: activity, action, operation (Nardi 1996)

Knowledge conversion mode: socialisation, combination, externalisation, and internalisation (Jashapara 2004; Nonaka 1994)

Appropriation Moves Schema: Direct Use, Relates to Other Structures, Constrain the Structure, Express Judgements about the Structure (DeSanctis and Poole 1994)

Table 7-1: Summary of IS Use Principles

7.3 A Checklist to Study Use

A checklist to summarise and review the key steps and findings in this new

conceptualisation of Use is developed. The checklist works as a conceptual tool

for researchers to oversee a study design of, or incorporate, Use and to help

identify the most important factors influencing Use in a contemporary domain.

Kaptelinin et al. (1999) discuss the usefulness of such a checklist to guide

researchers’ activities to study the Use of complex systems. Developing a

checklist that oversees a study design to evaluate IS success (or any other

phenomenon that includes Use) is useful, and such a checklist is often lacking

(Burton-Jones and Straub 2006). The steps prescribed by the checklist focus on five key

aspects: (1) define, (2) contextualise, (3) operationalise, (4) validate, and (5)

integrate.

7.3.1 Define Elements of Use

To study Use, one must first define it (Burton-Jones and Straub 2006). To define

Use, the researcher must identify its related IS elements, starting with the type

of system (enterprise systems, functional systems or networking systems).

Conceptually, these systems and other elements really represent the tools

through which one may structure IS-related things—including cognitions,

experiences, and knowledge. For example, SAP mySuite and Microsoft Dynamics

NAV refer to systems; sales and marketing reports refer to information;

procurement and order fulfilment refer to tasks; and company x, y, or z refers to

an organisation. The definition of Use must not only account for the integration

of the elements identified, but researchers must ensure that the relationships

between these elements of Use, together with the nature and representation of

Use, are consistent with the underlying theory and epistemology that they adopt

for their study. Although this may be obvious, defining the elements also sets an

important platform for describing the other activities in the checklist.

7.3.2 Contextualise Use

Specifying the function and purpose of Use invariably helps define a study’s

context or domain of study and argues for its relevance. While the purpose of

Use defines the intentions that guide Use, the function of Use focuses on the

relationships and dynamics with other contextual factors, as determined by the

purpose of Use. They do not refer directly to the elements of Use but to other

factors in its immediate environment. One of the advantages of contextualising

elements of Use is that the research problem stays current, and the design of the

study best reflects the natural context of the problem. To do this, researchers

should define the true meaning of each element of Use or the purpose of its

existence in its natural environment, in terms of its purpose and function.

Specifying both the purpose and function of Use must likewise follow to ensure

that the data collection methods (for example, to whom to speak) best reflect the

contemporary nature of the research problem and how best to interpret study

findings to deliver richer value. To illustrate the above point, we turn to look at

the context of the current study.

In terms of purpose, this study looks at two important and emergent aspects of

ES Use: (1) ES for education and (2) ES for management. First, the purpose of

education with ES ties in closely to the pursuit of ES knowledge. Knowledge

constrains and empowers users (Nonaka 1995; Jashapara 2004). As explained

earlier, employers seek employees with both software-specific knowledge and

business-process knowledge to meet new challenges that range from highly

technical maintenance and upgrade skills, to business-process-oriented software

skills in post-ES implementation (Markus and Tanis 2000; Davenport 2000).

Second, the purpose of adopting ES for management is also clear. Contemporary

organisations are shifting their emphasis of ES from simply delivering economies

of scale to sustainable value creation, process standardisation, and orientation

(Ferdian 2001). They implement ES to deliver scalability and service architecture

(Brady et al. 2001; Devadoss and Pan 2007), to deliver cross-functional

transaction automation, and add-on software products (Hendricks et al. 2007).

Mandated data formats and reports in ES offer consistent information to

customers (Bancroft et al. 1998; McAfee 2006).

As outlined above, the function of Use relates closely to the purpose of Use.

Where the purpose of Use defines intention, the function of Use refers to ongoing

dynamics surrounding the intentions for Use. Examples from the current study

context illustrate this point. It is theorised (by Jashapara 2004 and Nonaka

1994 among others) that Use activities can be analysed through the discourse,

practices, and processes along a continuum of converting types of knowledge.

While there is support for educators to teach and for course participants to learn,

both types of knowledge are transferred through a learn-by-doing approach; evidence

from managers’ responses verifies this. In the early phases, managers use prior

experience to structure knowledge around how to use ES. Most managers

(interviewed in this research) further refer to knowledge gathered through

socialisation (Boisot 1998) with other power users as more valuable and

empowering during learning how to use ES. Managers use information generated

from the ES to develop knowledge on how to encourage teams, generate sales

strategies, explore new practices, break down communication barriers, create

working logs and personal diaries, and as a failsafe for verification processes.

Some managers further demonstrate the need for basic operational knowledge

(know-how) to be in place before pushing towards technical knowledge

(know-that).

7.3.3 Operationalise Use

Relying on, and adopting, only quantitative dimensions and measures to

determine Use is inappropriate for several reasons. The mobile business users of

today spend less time in the traditional workplace, thus rendering quantitative

Use variables and dimensions inadequate and less meaningful. This thesis

proposes a simple 5-step approach (see Section 3.3) for developing Use measures.

In this study, the demonstration of applying this approach is set in the context

of ES. Besides capturing the amount of Use (through duration and frequency),

instruments should include measures of Use that capture the nature and

general attitude towards Use, to provide a more holistic measurement of Use.

For the multifaceted nature of more contemporary enterprise systems users, this

study introduces a set of dimensions with which Use is determined: (1)

attitudes—the perspectives of the user in his interaction with the IS, (2) depth—

the extent of value-added Use of the IS, and (3) amount—the actual duration

and frequency of interaction with the IS. While this list of measures is not

purported to be exhaustive, the salient measures identified are parsimonious,

more complete, and mutually exclusive.
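The three dimensions above can be sketched as a simple data structure (a hypothetical illustration only: the thesis operationalises these dimensions through survey instruments, and the equal weighting shown here is an assumption made purely for the sketch, not the study’s scoring method):

```python
from dataclasses import dataclass

@dataclass
class UseProfile:
    """One user's scores on the three Use dimensions (illustrative 1-7 scales)."""
    attitude: float  # perspectives of the user in interacting with the IS
    depth: float     # extent of value-added Use of the IS
    amount: float    # normalised duration and frequency of interaction

    def overall(self) -> float:
        # Equal weighting is assumed purely for illustration
        return (self.attitude + self.depth + self.amount) / 3

profile = UseProfile(attitude=5.0, depth=6.0, amount=4.0)
print(profile.overall())  # 5.0
```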

7.3.4 Validate Use

Researchers are urged to adopt mixed methods to study Use. This study employs

a mixed-method approach that accommodates both variance-based and process-

based validation techniques in a sequential explanatory design (see Section 4.4

and Figure 4-2). Regardless of the methods chosen by the researcher, they must

consider the theory and epistemological stance prior to data collection. This

includes specifying a nomological net with which to specify how to test and

validate the phenomena. A nomological net must include: (1) a theoretical

framework for what to measure, (2) an empirical framework for how to measure

it, and (3) a specification of linkages among and between these frameworks (see

Section 3.5). To rely on only the statistics and not a deeper consideration of

theory is dangerous (Petter et al. 2008). This (deeper consideration of theory) is

on top of relating research questions to research methods (Creswell 2003).

Besides specifying a nomological network of contemporary Use, variability and

value in predicting and managing Use can be further validated and strengthened

across sections of time, organisations, and different sectors, cultures, and

stakeholder groups using a variety of data-collection techniques. In addition,

researchers should be aware that other extraneous factors including deadlines,

stress levels, and technology availability could cause variability of Use. Despite

the different contexts and constructs in which validation of Use can occur, the

focus is on validating the Use construct with constructs employed in IS success

models. To achieve this, factor analysis and structural models are used on

quantitative data, while a combination of underpinning theory and content

validity techniques is used to analyse qualitative data, to find emergent themes, and to

corroborate the quantitative results.

7.3.5 Integrate Results

Researchers should look to integrate different types of results in their study to

strengthen their findings. At the outset, the researcher acknowledges that the

sample size of the qualitative phase of the research is too small to form a sufficient

generalisation of managerial practices. However, when combined with

quantitative data, integrated findings inform potentially larger-scale research

into how industry practitioners interact with ES in different working

environments.

As demonstrated in this research, the levels of Use provide an alternative lens

for looking into how packaged solutions cater for different users, even if they

belong to one employment cohort (that is, operational managers). Although in

this study we use levels of Use to explain the differentiating scores recorded in

the first investigation, the quantitative data are also useful for understanding

the qualitative findings. For example, what the levels (of Use) show is that an

effective Use process requires users to build constantly from, and on, each level

over time. Quantitative results illustrate that, over time, Use still has a strong

mediating effect between IS and the individual impacts. What the qualitative

results illustrate too is that exploratory and value-added Use has a higher

impact than amount has. This is also clear from results showing that amount

has a smaller effect on Use and impacts than do the other dimensions of the Use

construct. The integration of results in this study to strengthen the findings

from both methods of investigation helps to deepen the understanding of ES Use.

7.4 Limitations and Future Research

Finally, this section discusses the limitations of the research and addresses the

potential for future work. Broadly, the need for further validation of the study’s

findings through replicating the study in a different context, extending it to other

forms of IS, and collecting multiple data sources is discussed.

First, the obvious criticism that one might make is that the ontological approach

is too simple and provides insufficient guidance to researchers who want to use

it in practice. Akin to arguments in Burton-Jones and Straub (2006, p. 242),

and through the literature review, there is hardly an approach in the literature that

defines, contextualises, and measures Use. While the consideration of business

processes, systems typology, and type of system Use is simple and obvious to

some, many researchers have not adopted a similarly rigorous approach. The

principles of study and checklist items explained in the previous sections form

the basic considerations that one should make when designing a study to

capture the impact of an IS through its Use. Such an approach is effective in

improving this situation, and the study findings contribute towards the deeper

understanding of Use.

Second, despite the triangulated results both demonstrating and supporting the

important role that Use plays in determining IS success, the current study lacks

the volume of data to argue comprehensively for the possibility of generalising

these results. As highlighted earlier in the previous section, the small sample

(six) of operational managers interviewed in the qualitative phase is perhaps

unrepresentative of managers and their practices. On the other hand, when

using quantitative data, the sample size of respondents is an important

consideration for generalisation. For instance, the widely cited Taylor and Todd

(1995) study that investigates Use of computer resource centres collected over

700 student responses and over 3000 behavioural data items to test a larger

range of factors, including attitudinal structure, normative structure, and control

structure and their effects on Use behaviour. In comparison, this data collection

suffers from having two relatively small samples—the 103 participants and

accounts from being from just six managers. Citing another example, this study

pales in comparison with the (Kim, Malhotra and Narasimhan 2005) study on

utilitarian, hedonic, and social value of news (UCLA) website Use, where they

collected over 2075 responses. Further research involving larger sample sizes

would improve the possibilities for generalising the findings in this thesis. There

is further acknowledgement of the effects of theory, researcher, and respondent-

related bias arising in this research (see Section 4.5.6).

Third, although this study contributes to understanding ES Use within competing IS success models, the conceptualisations have wider-reaching implications; however, these have yet to be demonstrated. The present research may be compared to similar studies, such as those cited by Taylor and Todd (1995), that investigated notions of user behaviour across competing theories of planned behaviour, reasoned action, and technology acceptance. Similarly, the popular Rai et al. (2002) article investigated the Use of student information systems through the lens of competing models, including the DeLone and McLean (1992) IS Success model, the Shang and Seddon (2000) impact framework, and technology acceptance models, in a single study. In sum, the unit of analysis and scope of this study largely follow preceding work (Gable et al. 2008; Gable et al. 2003; Sedera et al. 2004) and the study objectives outlined earlier.

Tan  2010

Page | 204

Fourth, this study reported results of ES Use in both education and practice in singular regions. To improve external validity, the study could be replicated across different systems and contexts. For example, Venkatesh et al. (2003) studied the effects of experience, motivation, control, and emotion as anchors determining ease of Use; their study draws user data from medium and large firms across the retail, real estate, and financial industries, and from different online helpdesk, property management, and company payroll systems.

In addition, this study examined a participant sample that had completed procurement and order-fulfilment tasks; testing the conceptualisation against a wider range of tasks would be useful. Other antecedents of Use are outside the scope of this study. Computer self-efficacy, pre-usage beliefs and control mechanisms, and subjective and behavioural norms (Bhattacherjee 1996; Compeau and Higgins 1995; Taylor 1995) have been reported to have a direct impact on Use. These are potential areas for expanding the current research to increase the external validity of its findings. Furthermore, the thesis did not consider other facets such as cultural influences, inhibitors, collaborative tools, and the ethics of Use; these, too, are candidates for future exploration.

Last, it is perhaps obvious that other metrics and measures could be adopted to study Use as a higher-order model. As demonstrated statistically, this study prefers a higher-order Use construct to a component model, and several dimensions can theoretically reflect a higher-order construct. Appendix A presents an archival analysis of the same 54 studies cited in the literature review in light of the new dimensions introduced: depth, amount, exploratory, attitude, and others. As shown and discussed in the literature review, researchers have adopted a multitude of dimensions and measures over the years. For example, Burton-Jones and Straub (2006) conceived cognitive absorption and deep structure usage, while, like many studies, Collopy (1996) used logged activity and connect time to gauge Use. Some of the more interesting measures found to capture Use behaviour include the benefits of closest functions used (Halawi, McCarthy and Aronson 2007), Use for horizontal and vertical integration (Doll and Torkzadeh 1998), and time using historical and functional data (Szajna 1993). While it is important to use different dimensions and measures to develop researchers' understanding of Use, it is more important that researchers take steps to contextualise them in the Use that they are studying, as typified by the prescribed two-phase approach.
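To make the higher-order versus component distinction concrete, the following is a minimal, purely illustrative sketch. All item responses, the dimension names, and the equal weighting are invented for illustration; the thesis's actual model was estimated with components-based SEM, not with simple averaging.

```python
# Hypothetical illustration: forming a second-order "Use" score from
# first-order dimension scores, versus keeping the dimensions as
# separate components. All data and weights here are invented.

def dimension_score(item_responses):
    """Average the 7-point Likert items that reflect one Use dimension."""
    return sum(item_responses) / len(item_responses)

def higher_order_use(dimensions):
    """Equally weighted second-order composite of the dimension scores."""
    return sum(dimensions.values()) / len(dimensions)

# One (invented) respondent's item responses per dimension:
responses = {
    "requisite":    [5, 6, 5],  # e.g. frequency/duration-style items
    "value_adding": [4, 4, 5],  # e.g. "I use additional features to add value"
    "exploratory":  [3, 2, 4],  # e.g. "I explore other features and functions"
}

dims = {name: dimension_score(items) for name, items in responses.items()}
use = higher_order_use(dims)

print({k: round(v, 2) for k, v in dims.items()})  # component view
print(round(use, 2))                              # higher-order view
```

The component view preserves per-dimension variation that the single composite hides, which is exactly the trade-off the statistical comparison above weighs.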

7.5 Questions for Practice

From the study findings, the following questions surface for future discussion, adding to the review of potential research issues in the previous section.

(1) For management, what are the characteristics of a support system that accommodates the broader domain of contemporary IS? A domain of Use is defined as accounting for the elements beyond the technology that are pertinent to a user's daily interactions. This question focuses on contextual factors extraneous to this study.

(2) What is a reasonable window of time for management to allow for systems to be accepted and used in the manner that management and system designers intended? It was argued earlier that management must allow users to discover the meaning of their Use through its elements; our interview results suggest that entrusting ownership to users is a first option. This directly affects the meaning that users draw from the systems in completing and excelling in their tasks. The question thus asks researchers to investigate the time it takes to move between levels of Use.

(3) How do we recognise requests for enhancement as a good indication that users want to use the system more and make it more productive? Consistent with the theoretical propositions, our study results show that users rely on changing elements and different parameters to feed their daily IS uses continuously. The question focuses on how to determine the value added by an individual and the extent of changes. If enhancements alter the initial system design, or require major changes, then the value added is questionable and there may be bigger issues in user requirements and the design phase.

(4) How can practitioners and researchers better evaluate Use? Despite the limitations in generalising about managers and their practices, the exploratory findings still inform future research in other managerial contexts and cultures. This research suggests that managers should not mistake long hours of system Use for efficient and effective Use. The study tested and validated a set of behavioural measures; results indicate mediating effects of Use, described through a theoretical reference lens. Managers would still have to weigh the merits of both quantitative and qualitative approaches to monitoring Use of advanced technologies in the workplace. Results should demonstrate varying levels of Use across multiple stakeholders working on the same system.

7.6 Concluding Remarks

This thesis presented a new conceptualisation of contemporary Use and an ontology in which to study it. The epistemology and theoretical underpinning behind the re-conceptualisation are derived from, and deeply embedded in, the IS success stream. Researching the motivations for a new conceptualisation uncovered that, despite its popularity, the concept of Use has suffered from too simplistic a definition, a lack of theoretical grounding, and an inadequacy for the broad application of today's advanced systems, and that it often lacks appropriate measures. Using ES as the epitome, contemporary IS is examined and compared with more conventional IS. The thesis looks closely into the literature and attempts to explain and evaluate the recursive nature of the interaction between ES and its users. From this, the thesis unearthed defining elements that describe the domain of Use and which, when contextualised, represent a concise set of constructs with which researchers should assess dimensions and measures when evaluating Use. Building on these elements, the thesis proposes a simple 5-step approach for developing Use measures, puts forward a rigorous structure for selecting, testing, and validating these sets of measures, and further hypothesises Use as an important mediator of the impact of IS. To test the models and hypotheses, and to enrich our understanding of the study findings, two independent studies with similar contexts but different techniques were used: a study of ES for education (quantitative survey, variance-based) and of ES for management (qualitative interview, process-based). To complement these and to help interpret the statistics, a set of interviews was conducted to seek supporting evidence on how Use is actually staged. The thesis finds evidence of Use as (1) a mediating variable, (2) a continuum of activities distinguished by the granularity of its domain elements, and (3) a construct that thrives on mixed-methods and multi-item research. The thesis brings several potential implications for practice, including (1) a classification of Use activities, (2) an improved understanding of the role of Use for IS-Impact, and (3) a rigorous test of Use measures. These contributions, their descriptions, key readings, and the potential beneficiaries of this research are summarised in Table 7-2. The new conceptualisation of Use is circumstantial, and the findings of the current study should encourage researchers to extend the principles of contemporary Use.

Contributions of the Thesis

1. Critical assessment of Use in IS success
Description: contrasting roles in various streams of IS studies, particularly IS success; the central role of Use in the IS nomological net unaccounted for; inadequacies of measures.
Potential beneficiaries: scholars attempting or considering Use as a construct for evaluating IS success; scholars employing Use as an antecedent or consequence of IS success.
Supporting readings: Benbasat and Zmud (2003); DeLone and McLean (1992); Burton-Jones and Straub (2006); Gable, Sedera and Chan (2008).

2. Assessment of the contemporary Use domain
Description: a simple 5-step approach for developing Use measures; acknowledging technological and non-technological elements; requirements for users and the daily tasks that users complete in systems; enterprise systems as the epitome of contemporary IS in the workplace.
Potential beneficiaries: scholars considering what IS users actually use in a modern stream (FIT, NIT or EIT); scholars and practitioners considering the likely antecedents and consequences of Use.
Supporting readings: McAfee (2006); Gable, Sedera and Chan (2008).

3. Mixed-method data collection
Description: quantitative results from two similar demographic sets within a single method are compared and contrasted; quantitative and qualitative results from two non-similar demographic sets, using two different methods, are compared and contrasted.
Potential beneficiaries: scholars considering observing the effects of Use over multiple sessions of time; practitioners and managers studying the effects of Use in different phases of a post-ES implementation lifecycle.
Supporting readings: Pinsonneault and Kraemer (1993); Gable (1994); Tashakkori and Teddlie (2003).

4. Specifying and validating a consolidated measurement and structural model
Description: identify constructs and measures relevant to the study context, retained from the IS success and IS-Impact measurement models; specify formative and reflective constructs and measures; follow a set of guidelines and strategies (content validity, weighting strategy, construct validity, construct reliability, components-based SEM). The hypotheses that (1) current quality predicts future impacts, (2) given hypothesis 1, Use is a better predictor of future impacts, and (3) Use is a mediator are tested and supported.
Potential beneficiaries: scholars considering the effects of tasks and new, more qualitative measures of Use; scholars considering a formative construct and model validation attempt (if one construct is formative, is the model formative?).
Supporting readings: Diamantopoulos and Winklhofer (2001); Petter et al. (2007); Gable, Sedera and Chan (2008); Gefen et al. (2000).

5. Triangulating findings to explain conflicting Use scores
Description: map interview responses to a theoretical Use schema to explain the phenomenon of levels in Use.
Potential beneficiaries: scholars attempting to map transcribed interview data from a micro-analysis to an existing Use schema.
Supporting readings: Burton-Jones and Straub (2006); Yin (2003); DeSanctis and Poole (1994).

Table 7-2: Contributions of the Thesis
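The mediation claim listed under contribution 4 (Use mediating the quality-to-impact path) is often checked, outside the thesis, with Baron-Kenny-style nested regressions. The sketch below is purely illustrative: the thesis itself tested mediation with components-based SEM, and all data here are invented. With these numbers, the total effect of quality on impact shrinks to roughly zero once Use is controlled, the signature of full mediation.

```python
# Baron-Kenny-style mediation check with plain OLS on invented data.
# Illustrative only: not the thesis's actual analysis or data.

def centred(v):
    mu = sum(v) / len(v)
    return [x - mu for x in v]

def slope(x, y):
    """Simple OLS slope of y on x."""
    dx, dy = centred(x), centred(y)
    return sum(a * b for a, b in zip(dx, dy)) / sum(a * a for a in dx)

def two_predictor_slopes(x, m, y):
    """OLS slopes of y on (x, m) via the normal equations (Cramer's rule)."""
    dx, dm, dy = centred(x), centred(m), centred(y)
    sxx = sum(a * a for a in dx)
    smm = sum(a * a for a in dm)
    sxm = sum(a * b for a, b in zip(dx, dm))
    sxy = sum(a * b for a, b in zip(dx, dy))
    smy = sum(a * b for a, b in zip(dm, dy))
    det = sxx * smm - sxm * sxm
    c_prime = (sxy * smm - sxm * smy) / det  # direct effect of quality
    b = (sxx * smy - sxm * sxy) / det        # effect of Use
    return c_prime, b

quality = [1, 2, 3, 4, 5]                    # hypothetical IS quality scores
use     = [2.1, 3.9, 6.2, 7.8, 10.1]         # hypothetical Use scores
impact  = [6.3, 11.7, 18.6, 23.4, 30.3]      # hypothetical individual impact

c = slope(quality, impact)                   # total effect (step 1)
a = slope(quality, use)                      # quality -> Use (step 2)
c_prime, b = two_predictor_slopes(quality, use, impact)  # step 3

print(round(c, 2), round(a, 2), round(c_prime, 6), round(b, 2))
```

In the SEM setting the same logic appears as a comparison of direct and indirect path coefficients rather than nested regressions.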

7.7 Chapter Summary

This chapter discussed the implications, limitations, and outlook of the research. The implications, reflected largely in the principles of Use and a checklist for studying it, correspond to the study objectives and are compared with prior literature and alternative views. First, as the study argues, Use is a composite, multi-item, and multilevel measure determined at its core by the quality of the system, task, and information. Second, to study Use, researchers should define, contextualise, and operationalise Use, and then take steps to validate and integrate their findings; this is the approach adopted here. Third, this study proposes a checklist for studying Use, representing a series of steps and considerations for designing a study on Use. The checklist steps were developed from the findings of the two study phases featured in this research and are therefore applicable to both scenarios. Together, the ontology of Use and the considerations reported here constitute the new conceptualisation of contemporary Use, extend current work on the impacts and success of IS, and propose how researchers should employ Use in the future. There is potential for validating the study findings in different contexts and for alternative streams in IS.


Appendix A: Archival Analysis** of Use

Type of Use dimension* → Requisite | Value-adding | Exploratory | Attitude | Others
Example of dimension(s) → e.g. frequency and duration of Use | e.g. "I use additional system features to add value to process" | e.g. "I explore other features and functions of the system" | e.g. "I feel comfortable with using the system" | e.g. varieties of uses for system, dependence on system

Study ^ | No. of measures examined | FIT | NIT | ES

(Barki and Huff 1985) 1

(Mahmood and 8

(Raymond 1985) 1

(Srinivasan 1985) 2

(Raymond 1990) 2

(Liker 1992) 1

(Adams et al. 1992) 2

(Szajna 1993) 6

(Leidner and Elam 2

(Rice 1994) 1

(Thompson, Higgins 4

(Taylor and Todd 3

(Compeau and Higgins 2

(Straub et al. 1995) 3

(Xia 1996) 3

(Choe 1996) 2

(Igbaria, Parasuraman 2

(Gill 1996) 1

(Massetti and Zmud 4

(Collopy 1996) 2

(Guimaraes and 2

(Igbaria and Tan 2

(Li 1997) 1

(Seddon 1997) 5

(Gelderman 1998) 4

(Doll and Torkzadeh 30

(Bhattacherjee 1998) 3

(Lucas and Spitler 15

(Tu 2001) 21

(Skok 2001) 2

(Staples, Wong and 8

(Rai et al. 2002) 1

(Pflughoeft, 6

(DeLone and McLean 4

(Devaraj 2003) 3

(McGill et al. 2003) 1

(Mao and Ambrose 4

(Gebauer 2004) 4

(DeLone and McLean 8

(Djekic and Loebbecke 7

(Cheung and Limayem 2

(Kim and Malhotra 1

(Jain and Kanungo 5

(Kim et al. 2005) 1

(Almutairi and 20

(Iivari 2005) 2

(Abdinnour-Helm and 10

(Wu and Wang 2006b) 5

(Burton-Jones and 17

(Sabherwal et al. 4

(Wang, Wang and 3

(Chien and Tsaur 8

(Tsai and Chen 2007) 5

(Halawi et al. 2007) 6

(Landrum, Prybutok, 1

Count → 273 38 19 2 42 10 2 9 19
Percentage of studies → 69% 35% 4% 76% 18% 4% 16% 35%


** Results depict that 69% and 33% of studies reported on functional (FIT) and networking (NIT) systems respectively, while only two studies focused on ES (EIT).

* Examples of value-adding measures are "I try new features in email/spreadsheets to make me more efficient and do things differently than others" (Jain and Kanungo 2005, p. 121) and "When I was using MS Excel, I used features that helped test different assumptions" (Burton-Jones and Straub, p. 237). Other Use items reported include "number of assignments completed" (Taylor and Todd 1995, p. 156) and "user type (average, heavy or light)" (Srinivasan 1985, p. 248).

^ Some studies have not published the full instrument; as such, only reported measures are examined.
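The tallying behind the count and percentage rows above can be reproduced mechanically. A minimal sketch follows; the study list below is invented and far smaller than the 54 studies actually coded in the archival analysis.

```python
# Hypothetical sketch of the archival tallying: each study is tagged with
# the Use dimensions it measures and the system type it reports on, and
# percentages are taken over the number of studies. Entries are invented.
from collections import Counter

studies = [
    {"name": "Study 1", "dimensions": {"requisite"}, "system": "FIT"},
    {"name": "Study 2", "dimensions": {"requisite", "exploratory"}, "system": "NIT"},
    {"name": "Study 3", "dimensions": {"value-adding", "attitude"}, "system": "FIT"},
    {"name": "Study 4", "dimensions": {"requisite"}, "system": "ES"},
]

# Count how many studies touch each dimension and each system type.
dim_counts = Counter(d for s in studies for d in s["dimensions"])
sys_counts = Counter(s["system"] for s in studies)
n = len(studies)

for dim, c in sorted(dim_counts.items()):
    print(f"{dim}: {c}/{n} = {round(100 * c / n)}%")
for sys, c in sorted(sys_counts.items()):
    print(f"{sys}: {c}/{n} = {round(100 * c / n)}%")
```

Because a study can measure several dimensions, the dimension percentages can legitimately sum to more than 100%, as they do in the table above.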


Appendix B: The SAP Hands-on Exercise

An ES teaching case was created based on procurement and order-fulfilment business processes, to facilitate both software-specific knowledge and business-process knowledge. The figure below provides examples, from the actual course material, of the typical execution steps. Each exercise consists of four interrelated course participant tasks: (1) understanding the business process overview, (2) functional navigation, (3) tasks and data entry, and (4) producing deliverables.

1. Business Process Overview: Course participants are introduced to the exercise workflow and briefed on the objectives of the current step (e.g. creating purchase requisition for procurement).

2. Functional Navigation: Course participants are briefed on additional assessment instructions and how to navigate to the process function in SAP (e.g. creating a purchase order).

3. Tasks and Data Entry: Course participants are given sequential instructions for completing the current step of the business process (e.g. creating outbound delivery).

4. Producing Deliverables: Course participants are asked to take screenshots and to print documents for assessment submission (e.g. printing a quotation).

Completing the Case Study Exercise—A Four-step Process*

Source: Tan and Sedera (2008)


Appendix C: Survey Instrument

PARTICIPANT INFORMATION for QUT RESEARCH PROJECT

Reconceptualising System Usage for Contemporary Information Systems Success

Research Team Contacts
Felix Ter Chian Tan, PhD Candidate, 3138-9391, [email protected]
Dr Darshana Sedera, Senior Lecturer, 3138-2925, [email protected]

Description

This project is being undertaken as part of a Doctor of Philosophy degree for Felix Ter Chian Tan. Its purpose is to derive a better understanding of the broad interaction that occurs between users and contemporary information systems (e.g. enterprise systems such as SAP R/3). A thorough review of prior literature suggests a need for rich measures that go beyond the traditional ones (e.g. duration and frequency of Use) to capture the nature of that interaction. In light of this purpose, the research team requests your assistance in completing this survey to aid in the development of rich measures that would hold up in information systems practice.

Participation

Your participation in this project is voluntary. If you do agree to participate, you can withdraw from participation at any time during the project without comment or penalty. Your decision to participate will in no way impact upon your current or future relationship with QUT (for example your grades).

Your participation will involve a survey which takes approximately 20-30min to complete.

Risks

There are minimal risks beyond normal day-to-day living associated with your participation in this project. We do however recognise the potential risks relating to anonymity of respondents. You would be asked about your SAP login ID and age. Management of these risks is indicated below.

Benefits

Your responses will assist in identifying key factors for maximising the benefits and value the system brings to the organisation and its employees. Insights into your experiences with the SAP system will also be valuable in highlighting where organisations like SSB (Super Skateboard Builders), Inc Support should be focusing their attention, today and in the future.


Confidentiality

The SAP login ID surveyed and all associated comments and responses will be treated confidentially. The ID will not be matched to your name. Your responses would solely be used for research purposes and the responses would only be kept strictly within the research group. Survey responses would be placed under lock and key in QUT. The researchers ensure that the conduct of the study would not affect the relationship between the respondents and QUT.

Consent to Participate

An email will be sent to students informing them of the survey. Your agreement during the lecture will be accepted as an indication of your consent to participate in this project.

Questions / Further Information about the Project

Please contact the researcher team members named above (or in the email) to have any questions answered or if you require further information about the project.

Concerns / Complaints Regarding the Conduct of the Project

QUT is committed to researcher integrity and the ethical conduct of research projects. However, if you do have any concerns or complaints about the ethical conduct of the project you may contact the QUT Research Ethics Officer on 3138 2340 or [email protected]. The Research Ethics Officer is not connected with the research project and can facilitate a resolution to your concern in an impartial manner.


Extension of Data Collection Document—Survey

1. Ethics Number: 700000644

2. Variation: Please refer to attached updated list of questions (questions in red are

removed)

3. Participants: Students from ITB228/ITN228

Copy of Email Sent to Students:

Dear Student,

The ITB/ITN228 teaching team is conducting a survey to study and learn from your experiences with the SAP system. This survey is being conducted by the IT Professional Services research program at Queensland University of Technology.

The results will assist in identifying key factors for maximising the benefits and value the system brings to the organisation and its employees. Insights into your experiences with the SAP system will also be valuable in highlighting where organisations like SSB, Inc Support should be focusing their attention, today and in the future.

This survey is confidential. Participation in this study is voluntary and you are not under

any obligation to complete the survey if you choose not to. Your decision will not influence

your relationship with Queensland University of Technology in any circumstance.

If you should wish to participate in this survey, please inform the lecturer during the next

lecture session.

Best Regards,

ITB/ITN228 Teaching Team


Your Experiences with SAP

Introduction
Over the past few years, Super Skateboard Builders (SSB), Incorporated has invested significant resources in developing its information technology infrastructure. SAP is the latest advanced technology implementation, employed to allow staff to perform daily operational business processes and access personal information, resources, and administrative services. The impact of the SAP system is now being experienced across all levels in SSB, Inc. As a highly valued employee of SSB, Inc, you are encouraged to participate in this survey to evaluate the impacts brought about by the SAP system.

Purpose of the Survey
The purpose of this survey is to study and learn from your experiences with the SAP system. This survey is being conducted by the IT Professional Services research program at Queensland University of Technology. The results will assist in identifying key factors for maximising the benefits and value the system brings to the organisation and its employees. Insights into your experiences with the SAP system will also be valuable in highlighting where organisations like SSB, Inc Support should be focusing their attention, today and in the future.

Confidentiality
This survey is confidential. Participation in this study is voluntary and you are not under any obligation to complete the survey if you choose not to. Your decision will not influence your relationship with Queensland University of Technology in any circumstance. For data integrity purposes, the researchers must be able to associate your demographic details with your survey responses. If you have any concerns regarding the ethical conduct of this research, you can contact the Research Ethics Officer of the Queensland University of Technology on 3138-2340.

General Instructions for Completing and Returning the Survey
Responses to the questions can be selected by ticking one check box per question. 'Comments' fields have been included at the end of each section. Feel free to include there any further views you have on the SAP system or on this survey; there is no word limit on these fields. It will take you approximately 15-20 minutes to complete this survey. Please return the completed survey by end of day. If you have any queries concerning the survey, please do not hesitate to contact the authors. Thank you for your valued time and effort in becoming involved in this study. Your participation is very much appreciated.

PLEASE ANSWER ALL QUESTIONS.


System quality—how well the system performs from a design and technical perspective
(1 = Strongly Disagree, 4 = Neutral, 7 = Strongly Agree)
As an employee, I feel…
1. SAP is easy to Use  1 2 3 4 5 6 7
2. SAP is easy to learn  1 2 3 4 5 6 7
3. SAP meets my requirements  1 2 3 4 5 6 7
4. SAP is easy to access  1 2 3 4 5 6 7
5. SAP includes necessary features and functions  1 2 3 4 5 6 7
6. SAP always does what it should  1 2 3 4 5 6 7
7. SAP user interface can be easily adapted to one's personal approach  1 2 3 4 5 6 7
8. SAP requires only the minimum number of fields and screens to achieve a task  1 2 3 4 5 6 7
9. All data within SAP is fully integrated and consistent  1 2 3 4 5 6 7
10. SAP can be easily modified, corrected or improved  1 2 3 4 5 6 7

Comments:

Information quality—your perceptions of the goodness of the task outputs produced by the system
(1 = Strongly Disagree, 4 = Neutral, 7 = Strongly Agree)
As an employee, I feel…
1. Order Fulfilment outputs (for example, quotations and goods invoices) generated from SAP seem to be relevant and exactly what is needed  1 2 3 4 5 6 7
2. Order Fulfilment outputs generated from SAP are in a form that can be readily used for the next sub-task without any modification  1 2 3 4 5 6 7
3. Order Fulfilment outputs generated from SAP are easy to understand  1 2 3 4 5 6 7
4. Order Fulfilment outputs generated from SAP appear readable, clear and well formatted  1 2 3 4 5 6 7
5. Order Fulfilment outputs generated from SAP are concise (to the point)  1 2 3 4 5 6 7

Comments:


Interaction—your feelings and thoughts as you interact with SAP in completing Order Fulfilment, and Task quality—your perceptions of the goodness of the tasks that need to be completed (* removed from data analysis for theoretical reasons)
(1 = Strongly Disagree, 4 = Neutral, 7 = Strongly Agree)
As an employee, I feel…
1.* I can adapt Order Fulfilment in any organisation.  1 2 3 4 5 6 7
2.* I find Order Fulfilment difficult to complete.  1 2 3 4 5 6 7
3.* I find Order Fulfilment easy to learn.  1 2 3 4 5 6 7
4.* I do not encounter unexpected results when completing Order Fulfilment.  1 2 3 4 5 6 7
5. I have a clear understanding of the outcomes of Order Fulfilment.  1 2 3 4 5 6 7
6. I have an overall understanding of what I need to complete in Order Fulfilment.  1 2 3 4 5 6 7
7.* I find the instructions given to complete Order Fulfilment adequate and sufficient.  1 2 3 4 5 6 7
8.* I find Order Fulfilment has too many sub-tasks.  1 2 3 4 5 6 7
9.* I find all Order Fulfilment sub-tasks interrelated.  1 2 3 4 5 6 7
10.* I find Order Fulfilment has clear beginnings and endings with visible outcomes.  1 2 3 4 5 6 7
11.* I receive valuable feedback from the instructor when completing Order Fulfilment.  1 2 3 4 5 6 7
12. The tasks that I undertake in Order Fulfilment are value-adding and strategically important to the organisation.  1 2 3 4 5 6 7
13. I find the Order Fulfilment tasks rewarding and fulfilling.  1 2 3 4 5 6 7
14. I enjoy the environment in which I work on Order Fulfilment (i.e. friends and instructor).  1 2 3 4 5 6 7
15. I find the Order Fulfilment exercises interesting and attractive.  1 2 3 4 5 6 7
16. I am willing to put in as much effort as required to complete Order Fulfilment.  1 2 3 4 5 6 7
17.* I believe I would be successful at completing Order Fulfilment.  1 2 3 4 5 6 7

Comments:

(1 = Strongly Disagree, 4 = Neutral, 7 = Strongly Agree)
As an employee, I feel…
1. I feel confident and relaxed when engaging with SAP.  1 2 3 4 5 6 7
2.* I feel that SAP is invaluable in completing Order Fulfilment.  1 2 3 4 5 6 7
3. I am willing to challenge myself and excel at using SAP for Order Fulfilment.  1 2 3 4 5 6 7
4.* I find the Order Fulfilment sub-tasks in SAP well integrated.  1 2 3 4 5 6 7
5.* I find Order Fulfilment in SAP highly standardised (data format, screens, language).  1 2 3 4 5 6 7
6.* In SAP, Order Fulfilment produces real-time information.  1 2 3 4 5 6 7
7. I use SAP to set up organisational and user parameters for Order Fulfilment.  1 2 3 4 5 6 7
8.* I use SAP to execute sub-tasks of Order Fulfilment.  1 2 3 4 5 6 7
9. I have explored additional system features in SAP beyond the given specifications.  1 2 3 4 5 6 7
10. I am confused by system features and functions in SAP.  1 2 3 4 5 6 7
11. I am only using SAP for Order Fulfilment because I have to.  1 2 3 4 5 6 7

Comments:

Individual impacts—how the SAP system has influenced your individual performance
(1 = Strongly Disagree, 4 = Neutral, 7 = Strongly Agree)
As an employee, I feel…
1. I have learnt much about Order Fulfilment through SAP.  1 2 3 4 5 6 7
2. What I completed in SAP has increased my awareness of Order Fulfilment.  1 2 3 4 5 6 7
3. SAP has enhanced my effectiveness in Order Fulfilment.  1 2 3 4 5 6 7
4. SAP has increased my productivity for Order Fulfilment.  1 2 3 4 5 6 7
5. SAP has increased my overall performance in Order Fulfilment.  1 2 3 4 5 6 7

Comments:

Overall
(1 = Strongly Disagree, 4 = Neutral, 7 = Strongly Agree)
As an employee, I feel…
1. SAP facilitates all Order Fulfilment tasks and produces relevant outputs.  1 2 3 4 5 6 7
2. My interaction with SAP has been positive.  1 2 3 4 5 6 7
3. The impacts of SAP on me have been positive.  1 2 3 4 5 6 7

Comments:


Demographics

* Please tick or highlight the box next to the response that best describes your situation.

Age: ___

* I have… [ ] never used SAP before. [ ] used SAP before. [ ] used SAP extensively.

* I have… [ ] never heard of Order Fulfilment before. [ ] some knowledge of Order Fulfilment. [ ] a thorough understanding of Order Fulfilment.

* On average, I use SAP… [ ] at least once a day. [ ] a few times a week. [ ] less than once a week.

* On average, I use SAP… [ ] more than 2 hours in one session. [ ] 1-2 hours in one session. [ ] less than ½ hour in one session.

End of Survey – Thank you for your participation


Appendix D: Interview Instructions

PARTICIPANT INFORMATION for QUT RESEARCH PROJECT

Understanding The Issues of User Interaction and Knowledge Management for Enterprise Systems

Research Team Contacts
Felix Ter Chian Tan, PhD Candidate, 3138-9391, [email protected]
Dr Darshana Sedera, Senior Lecturer, 3138-2925, [email protected]

Description
This project is being undertaken as part of a Doctor of Philosophy degree for Felix Ter Chian Tan of Queensland University of Technology (QUT). Its purpose is to derive a better understanding of the broad interaction that occurs between users and contemporary information systems (e.g. enterprise systems such as SAP R/3) and of the effective management of ES-related knowledge. In light of this purpose, the research team requests your participation in this interview to aid the understanding of these issues in information systems practice.

Participation
Your participation in this project is voluntary. If you do agree to participate, you can withdraw from participation at any time during the project without comment. Your decision to participate will in no way impact upon your current or future relationship with your company or with QUT. Your participation will involve an interview which takes approximately 45-60 minutes to complete.

Risks
There are minimal risks beyond normal day-to-day living associated with your participation in this project. We do, however, recognise the potential risks relating to the anonymity of respondents. You will be asked about your job title and role in your company. Interviews will be taped only with your permission. Management of these risks is indicated below.

Benefits
Participants' responses will aid the researchers in understanding knowledge management strategies and the impacts of recursive interaction with contemporary information systems for predicting system success. The researchers also expect responses to affirm several theoretical hypotheses that would contribute to the existing body of knowledge in IS success.

Confidentiality
The names of individual persons and all associated comments and responses will be treated confidentially. Your responses will be used solely for research purposes and will be kept strictly within the research group. Any audio tapes used for recording will be destroyed following transcription. Transcripts will be placed under lock and key at QUT. The researchers ensure that the conduct of the study will not affect the relationship between the respondents and their companies.

Consent to Participate
The return of the email is accepted as an indication of your consent to participate in this project.

Questions / Further Information about the Project
Please contact the research team members named above (or in the email) to have any questions answered or if you require further information about the project.

Concerns / Complaints Regarding the Conduct of the Project
QUT is committed to researcher integrity and the ethical conduct of research projects. However, if you do have any concerns or complaints about the ethical conduct of the project you may contact the QUT Research Ethics Officer on 3138 2340 or [email protected]. The Research Ethics Officer is not connected with the research project and can facilitate a resolution to your concern in an impartial manner.

Tan  2010 

Page | 222

Appendix E: Flowchart of Questions


Appendix F: Mapping Responses to Study Themes (1/13)

Columns: Themes | Aspects | General Questions | Respondent A (A) | Respondent B (M) | Respondent C (D) | Respondent D (T) | Respondent E (D) | Respondent F (U)

Organization — Large/Medium
Q: What company do you work for? How big is your company? What does the company do?
- 5 people: 2 product managers, 2 assistant product managers, 1 general manager. 2 branches, drug and neuro.
- Company T Power Limited, a power transmission and generation company, probably the largest in India, in Ahmedabad in India.
- It's the largest private company in India. It's mainly related to petroleum and petrochemical. And the business we're concerned with…everything is
- Respondent E has been working in the techno-commercial management department at TP Limited for 14 months as systems operations manager, at a department of the Future Group. The Future Group, 2 months. Synergise our operations.

Employment level — Strategic
Q: What sort of role do you have in the organisation?
- Working for Company T pharmaceuticals as assistant manager, the last 13 months.
- Yes, I am assistant manager for 12-13 months, working in treasury and finance.
- HR department, as a systems manager.
- Working for Company T pharmaceutical now...having moved from Zydus and Cillia pharmaceuticals...and currently the systems manager of operations...Yes..yes.. The other one will do performance and
- Store manager, handling a store of Aadhaar…food, clothing, everything related to end users, retail.

Employment level — Managerial
Q: How long have you been in this role?
- 9 months as management trainee, 4 months as assistant product manager.
- Yes, I am assistant manager for 12-13 months, working in treasury and finance.
- Yea, exactly. I'm in this job for 14 months.
- 14 months.
- 1 year and 3 months.

Employment level — Technical
- Respondent C has been working at TP as human resources (HR) systems manager for 14 months. She is one of 5 executives working in a 25-man-strong department made up of all locals. Rather surprisingly, this is Respondent C's first job. Respondent C has had little knowledge of RAMCO,
- Respondent D has been working at R Limited as a business development and sales manager for 12 months at the time of the interview. Respondent D works closely with the marketing development and services departments. The total strength of these departments numbers around 5 thousand, comprising of
- Mm..12 months. They give training support..you know, that related to them.
- Respondent F has been a store manager for Aa Limited, a large rural branch of the F group, for a little over 15 months at the time of the interview. Respondent F is in charge of synergizing all rural retailing operations in Aa Limited. Respondent F stated that he was

Employment level — Operational

Department (can be organizational environment too)
Q: What department are you working in?
- Respondent A has been working for TPA Limited as an assistant manager for 13 months (at the time of the interview). Prior to his managerial role, he had been a management trainee for 9 months and spent
- Yes, I am assistant manager for 12-13 months, working in treasury and finance. HR department.
- I'm working with marketing development, marketing services, and we work mainly on everything in propylene.
- I'm working in the techno-commercial management department. In short form it is TCM. Techno-commercial management department. And the basic function is operation.
- Systems to HR to everything for a store.

Q: How big is your department?
- 35-40 people...35-40 in Ahmedabad HQ. We provide power to the whole city of Ahmedabad... We have divided Ahmedabad into 4 zones, 4 offices. 3 people look after the connection between the 4 office zones, any customer getting undue credit. 5-7 look after
- There are 25 working in the department, implementing numbers of staff and clerical works...there are 3-5 executives working in the department.
- OK…in marketing and business development, power plant and everything about petroleum, about 5 thousand people are there..and then as a whole, there will be about (mumbling)….so around 20,000 employees in all.
- OK. My department consists of just 5 people, ok? But all the departments will report to us. I mean, the molecular biology department, the pharmacology department… the analytical department, (can't hear clearly)
- Rural retailing.

Q: Any foreigners in your department?
- No, we don't have.
- No…All of them are local.
- There is no foreigner in that.

Mapping Responses to Study Themes (2/13)

Experience with role — less than a year
Q: What sort of experience do you have prior to joining this company?
- MBA, immediately joined TP, no prior work experience.
- …company (TP Limited) as assistant manager for the last year (since the interview). There are 3 assistant managers including himself. His department is 38 people strong (at the time of the interview) with an officer and 7-8 staff assigned to
- No, no. I don't have it. This is my first organization.
- At first, I worked in chemical engineering in 2002, and I worked at Mitsubishi; I got working with the system. After that I did my MBA, and then I'm being an engineer…and sales manager.
- I did my MBA, specialising in marketing, but in my first year we had finance in detail. We had 2 separate programs. In general, for the first year we have to study finance, operations. Second year we become specialized.

Experience with role — less than 5 years
Q: Is this your first company?
- Yea. This is my first job.

Experience with role — more than 5 years / more than 10 years

Experience with IT Systems — less than a year
Q: You are currently using this XXX system. Is this your first time using the system?
- Sales and distribution, inventory management.

Experience with IT Systems — less than 5 years
Q: Do you have prior experience of using this system?
- No, I was a marketing executive for 1 year. My experience of ERP, RAMCO, I got from Company T. I have no idea of other systems.
- I had theoretical knowledge.

Experience with IT Systems — more than 5 years
Q: Have you used other systems before?
- MS Project in MBA…no. But I knew it before I did my MBA. Then I knew it when I was in the company also.
- Polaris system at the front end, point of sales.

Experience with IT Systems — more than 10 years
Q: What systems do you use currently in your role?
- SAP.

On-the-job training
Q: Did they provide training for your role?
- Yes, 6 months on roles, including coding with others.
- When I came to Company T I received no training. The person who taught me was this lady transferred here from outside doing the same process. Training was unstructured. She only taught me what she used to do. If software can give me 100 solutions, but if
- Yes, training was involved in HR and in RAMCO; I have training also in terms of HR services.
- Company R is not really giving training in SAP. But we've a training team and we go to them...No, we have a developer team employed by Company R…it's an in-house team.
- What? SAP..no…except during the installation when we were designing…so we have to figure it out for finance, marketing, and we have to structure it regarding the operation. That was in the designing stage.

Q: Was it useful, 1-10?
- Yeah, it was really simple.
- On my role, I put on 6…something like that.
- The training I've found good..it must be a high rate..because it's so useful.

Mapping Responses to Study Themes (3/13)

Sources of Structures — Systems
Q: Can you give me an indication of the types of systems you use for your daily role?
- SAP Sales and Marketing modules.
- This is authorization of payment (but it's a step in a larger process). The process is as such: say we want to procure goods, we have a code for the supplier.
- RAMCO; I have just been working with it for the last 4 months.
- Yeah… there are not so many modules we use in SAP. Particularly for IT distribution, also the e-PRM module, e-commerce module and ESS, which we call Employees Assessment. And there is online order, where customers can pick and put an order using the system. The order, we can see it in the system and it will be recorded in that module. The other one is the distribution module which supports facility at
- Pre-specified reports in SAP and customized reports for our company in SAP. We are pharmaceutical; medical representatives across India, product wise as
- RAMCO helps track materials needed when quantity is in shortage. The materials department handles this (procuring of materials) using RAMCO: which supplier, what

Q: How long have these systems been used in the company?
- 3 years to my knowledge, based on reports I see.
- Came in 6-8 years back.

Q: How long have you been using this system?
- Mm..12 months. They give training support..you know, that related to them….Yes..I use it almost every day.

Q: Did they train you on the system?
- Initially at the induction and the orientation in a group for every function we're having, in order to carry out the development, and I was given 1 month of training in the function of HR, and in
- During the installation when we were designing…so we have to figure it out for finance, marketing, and we have to structure it regarding the operation. That was in

System training
Q: On a scale of 1-10, how was the training?
- I would rate it 8.
- With the lady, it's 10/10. On the whole, training with RAMCO is 2/10.
- Yes, the training was very useful, maybe 8 also.
- …it is about some..a month or so. It wasn't structured, but there would be interaction, where there are different features, what you don't understand and so on. Only raw material that is manufactured by the company..everything that is related to the company,

Mapping Responses to Study Themes (4/13)

System fit
Q: On a scale of 1-10, how fitting was XXX system for your role?
- Considering the 2 limitations, I will rate it 6.5-7/10, not more than that. I have been working for 12 months; there was a structural change. Customization was needed after that; we need to run reports on a weekly/daily basis. After
- 6 out of 10.
- I put it at 8.

Q: Do you feel restricted by the system?
- No.
- Let's say you have 2000 steps in approval and collective steps for different material that will be departure, so let's say you have approved about ??, then what happens is under the departure..there's no

Tasks
Q: Can you briefly describe your tasks?
- Our cash department handles collection of payment in my department. We have many customers; we connect power to customers through outsourced agents; we need to cross-check if they are charging the customers properly. 5 people allocated to payment to government for gas. 7 people: relationships with residential customers, commercial customers and
- Relates with Company R, the system in my first year... in general document work on, which a SAP complement has created. It's common sense that only documents, which are to complete. If you want to see your document, you've to put it in your document... that's a particular program. So many documents are being created...I can say that millions of reports are created.
- The whole project of tradition, valuation, aligning, costing, preparing the value bank to market, looking at overheads. This is all part of the costing process...Payments for a particular party. I'm not very sure whether I've done it. Every customer has their own code. Parameters inspection that is to be made and

Q: What sort of tasks do you do with the system?
- SAP and customized reports for our company in SAP. We are pharmaceutical; medical representatives across India, product wise as well as reporting total wise to HQ, after running the thing. That is the only process; every month we run the reports and tell HQ where they should be.
- Enter the employee record; we record the relevant number for each employee. Just like a name and number given to an employee, so all this we...whatever name or employee data, or whatever salary that we have to make, we give to them and we deliver to each worker and record attendance and
- The next projects, what we're doing is we prepare the timeline, with years, our workout and estimated budget. Our Company T MS Project is linked to the site. So what happens in MS Project, I design for the entire around 5 years. Now, within the 5 years, for the first year, each of our different departments

Mapping Responses to Study Themes (5/13)

Q: Are the systems fitting for the tasks?
- Yea. RAMCO now is used for HR purposes only, and the appointment is in the system. See, we have the plan to implement the ERP system at one point in this department. They're going for SAP and, you know, with all the systems on the same platform and in ERP it will be easier.

Q: If you are unfamiliar with the tasks, who do you go to for help?
- It's up to the department to deal with it. I am into cash; I will deal with authorization problems. If it's procurement, system checks should be integrated in the departments.

Q: What is it about your role and the things that you do that is interesting and makes you want to do it? And share about outside of the company?
- Yes, because in HR we interview people; of course this is the interesting part of my job.

Q: Did you have to develop your own SOP? (Evidence for new sources of structures as a result of appropriation)
- Yes, there is a separate way I do my work. Even the lady does it her own way. The steps are defined, but how you can create efficiency etc., nobody tells you. It's very subjective.
- Yea, yea, but from company.
- I think (the working protocols are) designed by the organization, because it was designed in certain ways.

Organizational Environment
Q: What was the working culture like when you first joined?
- Culture was very cooperative; my division has 5 people, a mega division of 4 divisions. Mine…There is a central division and 15-16 people. Talking to other divisions improves shared knowledge. How to use ES.
- As a petrochemical engineer, I would say there were many complaints about my work…so when I did my MBA and became a sales manager...I think this is the best
- company, I missed some stages of the projects. So I did not understand what was going on. So when I joined the organization there was a….what we called it...umm...disliking about the scientists regarding the techno-commercial management because of a communication gap.

Q: Importance of knowledge sources — when you do a task and you don't have a clear understanding, who would you go to for help? Colleague, helpdesk, user manual or what?
- Thorough training for 6 months; there was a senior; after training, we were given practice assignments, running report analyses. Technical training and a practical component.
- It's up to the department to deal with it. I am into cash; I will deal with authorization problems. If it's procurement, system checks should be integrated in the departments.
- In our department, there is one person who is very much a master of RAMCO operation; he has more experience in RAMCO, so we usually go to him.
- In that category, what we've done is…we can do any. My colleagues have the right to ask me, and I'm sure I will help them. I don't need to go to the training team.
- Yes..they do write programs..programmers. They provide..know-how to do..very good in that way. Those people really help in our source code, discovery cost for the project, the right market.

Mapping Responses to Study Themes (6/13)

Knowledge Sharing
Q: If you do that, if they find out a new thing, would they provide the new knowledge to others?
- There is no sharing. Nobody in the department comes and teaches me. They never ask me for feedback. There is no SOP for my job; they never ask me how I do my work, how to improve it, what problems I am facing. I do my own work; it's how the lady taught me.
- Yes. He shares with everybody.
- Yes, almost..most of them share knowledge, give feedback and any..
- At this time, the feedback is at around 70 percent. And because of..in terms of the feedback being good, there is satisfaction.

Q: What sort of knowledge do you share?
- Reports, running SAP; supposed to run knowledge management information systems.
- Finance department, purchase department, demonstration. If I have issues that are very important that I can't figure out, I'll go first to the finance people. They're better…that will be my first point of contact; if it still goes on, I'll go back to them…and also

Q: If you do that, would you share your knowledge with others and feedback to the company? Through what channel do you do so?
- Sales related reports, 1 month to 12 months, relevant percentages.
- We in our organization have an appointment through the network; this is a formal channel that we go through.
- Yes. Actually there is a quality in the system providing the feedback...there is a sort of system that provides feedback, or just e-mail. So we just use the system, using any modules...What I am concerned is...I used that module twice.
- No..no..when we go to that department, and back to our place…actually we work in a group..together. So that's how we solve that. Yea..yea..yea. If I go for a meeting with the finance people, then they (the group) don't have to go there.
- After running reports, we talk about percentages; if we go through the last 1-2 years, improvement at some time, what are the reasons, making some analysis; we give our report to sales and they give their feedback.
- The department has no book. Individually initiated. This is something very bad. Individuals take responsibility; this is very bad. What I did is I wrote down the process each time, and when I do it 3-5 times, you automatically know. We should not try to call the other departments for help every time if we have a
- I am not very sure…but they have a training module but I don't know where it is. Even the girl doesn't know; nobody knows where the training module is. Company T is a private group; there's 160. The majority of people are retiring people, about 55 years old. RAMCO was brought in because the

Mapping Responses to Study Themes (7/13)

Q: What sort of feedback do you give to the helpdesk?
- Feedback is important; basically we are into marketing strategies, we are to map them out for implementation in sales.
- The IT dept never asks. Only when a problem happens, I take a snapshot of the screen and I send it to them. They go to the back and they fix it, and they call you to say it's working. They never ask how we can add value to it and what other problems we have.
- Yes..they do write programs..programmers. They provide..know-how to do..very good in that way. Those people really help in our source code, discovery cost for the project, the right market.
- Tele-conversations with individuals, no formal emails. We are in marketing; we are to motivate people, rather than them being disheartened. They achieve more when what records they are doing, what is
- only the interface. People like us are support to the scientists. Most of the employees are the scientists. That was design, complaints, what are the responses. There are lots of issues. When the people in projects enter the data, then the
- It's usually individual queries. Fine with the system.
- Yea…not really. That's actually..I just help the help desk and the IT department.
- We have an IT helpdesk; we suggest these improvements in SAP. We suggested these 1-2 months ago. They say they appreciate it but they have to take it to the corporate level and then they will get back to sales. They say they have received the same suggestions from other departments as well.

Q: Willingness to share knowledge, 1-10?
- I will give 7.

Information
Q: Are you happy with the information generated and outputs produced?
Q: Do you see any changes in the kinds of information you put in or receive?
- Improve the data part of it. Payment details. There were human errors in authorizing payment manually; RAMCO helped with that. Efficiency has improved in one way or another. The company is going for SAP so the data will go over. Better data management. Data management is very important; it is the heart and
- Mm…no…I don't think there's a change.

Mapping Responses to Study Themes (8/13)

- More or less the same; my kind is credit to supplier, whether payment has been made, whether it's right or wrong. Quality of report is also routine. I haven't added value to the system, I feel, because of my limited knowledge. I get more or less the same thing because I haven't had to go above my routine role.

Use — Familiar with system
- Yes, I think. Before that you don't know about the system, and after a while, I found it's easier for me, I know where to go, I know about parameters, and recently I can conduct the parameters, I can select and I get

Attitude
Q: Do you feel challenged? Do you want to learn more about the system?
- Yes, I always want to find out more. I only run one-two reports, then we export to an Excel spreadsheet. We can do expiry etc…So from the one report, we can do many things. Whenever we are free, we will try to produce different formats, representations of the same reports.
- I am interested in how the system can contribute to me and, more, how I can contribute back to the system.
- Yes…one is the report, and one is the payment issue.
- Monitoring sales related formats; last month I was doing an analysis of sales returns; there's one credit report available which gives us the credit product wise, HQ wise, decision wise from SAP. I ran the report in 3-4 different

Q: Can you look for superlatives to describe the system XXX?
- different formats, for e.g. region wise: what are the regions of sales returns, e.g. there is lots of expiry for inventory, then I ask them why so. Why is stock left there for long? This report helps me to devise an inventory management strategy. In my division, 20 branches,
- and frustrating…Interesting because there are more things I can do, pros and cons. I am not happy about RAMCO; if I knew more, had more training, I know I can do more, I can deliver more…Because of the slowness of the system, it is frustrating…Today when I use it, I feel more frustrated.
- RAMCO is a little bit traditional and conservative.
- I can say that the system is convenient.

Q: Superlatives for that?
- Mm…I can say wonderful.

Mapping Responses to Study Themes (9/13)

Q: Do you think there's more to the system you can learn?
- Yes, I always want to find out more. I only run one-two reports, then we export to an Excel spreadsheet. We can do expiry etc…So from the one report, we can do many things. Whenever we are free, we will try to produce different formats, representations of the same reports.
- Yes, I want to know more.
- Yes….I can feel it when I deal with customers… I am willing to learn more and trying to work so many things with the system.
- Ok..that I can say lots of things. I learn something new..that's huge.

Depth
Q: What do you see as the main differences between this system and any previous ones you used?
- Yeah, I found it very much (standard data and integration). At times, apart from sales, we coordinate with representatives on products, what products are available across different CSFs in the country. Lots of integration: when we put a sales order for dispensing across India…we are able to see how many inputs are displaced from our factory…which is far from the head office where we are…different
- RAMCO is a little bit traditional and conservative.
- Yea…we can say that we can keep track using the system. I can access the orders across the country; I can manage that in 10 seconds.
- Yes..yes. Because I guess lots of stages can be made all the time; you learn and you know this has to be done that initially more than parameters. It looks to me like learning things, significant. For example, now we have created a code. If you come to our place, you're from outside, then the raw material will have a different code. If you come, use the system from an outside facility, and then they will give back,
- The admin department uses that more, favouring this system. We analyse the same thing and we give suggestions. But for SAP, we get real-time information on a daily basis. Like today is 12 July, what is the position of our division, our branch. If it's MIS, reports that are prepared by

Mapping Responses to Study Themes (10/13)

Q: Standardised? Integrated? Real-time? Do you see these in XXX system?
- RAMCO is platform independent. We have a separate customer information system; they are not integrated. RAMCO was bought in for accounting purposes. RAMCO is not an ERP. When an ERP like SAP comes into the company, RAMCO will be integrated into SAP, then everybody can use it, data will be migrated, then all standardization will take place. Company T has only bought the accounting module of the ERP. They haven't gone for the entire ERP.
- Yes..if we want to view material, contact the customer, know where to distribute the material using the system...Yea…real-time….Yes…information we see from the system is consistent and standard.
- Standardized, I don't know….Yes..sure..sure, right, right. You've to follow some protocols in doing things. Integrated, of course yes. Real-time, of course yes. I don't know about standardized.

Extent
Q: How heavy was the configuration/customization at the early stages?
- Not too heavy customization. See, there is a hierarchy: medical rep → regional manager → general manager…according to our requirements it will give all info/reports to all levels, from the one single report only.
- Yes…it's a totally customized system.
- Yes…yea…because when I came to Company T, they had installed additional parameters and a lot of things, having work in configuration to the system. We have actual time in Company T Research Centre where the entire discovery takes place. Now there is the Pharmaceutical Centre. We give it the name manufacturing place. The staff requirement to that

Q: Do you rely heavily on the system?
- Not really.
- Yes..

Q: Do you think you would not have learnt as much if not for the system?
- Yes…really good.
- Yes..yes..

Q: Did you use SAP today?
- Yes, it's Saturday and a holiday, but because I had to coordinate something today.
- Yes, every day.

Mapping Responses to Study Themes (11/13)

Consensus in use
Q: Do all colleagues feel the same way about the systems? Do you all chat about systems?
- Yes, upper management is concerned; we do not have access to some of the reports. For general managers, there are some reports exclusively authorized to them; we cannot run these reports.
- I operate more on RAMCO. They only ask for help if they run into problems. Most of the time, they come to me if they have a problem; if they have an IT problem, they will go to the IT dept. Depending on the problem, if it's an HR problem, we will go to the HR dept. We are planning to go for SAP. We shall see how it works out.
- Yes…when they use the system, and I use the system, we feel easier.

Value added interaction
Q: When do you see value added interactions?
- …person/client, you need supporting information. Our role is coordinating and implementing marketing strategies for marketing our products across India. SAP is the only tool helping that; if the person is doing well, we will use one tone, if the person is not doing well, we will use a different tone. It's a backup for us.
- If not for RAMCO, I don't know the accounting part of it. Because of RAMCO, I know accounting; my accounting has improved.
- Yes...Yes. Everything can be done through the system.

Q: When do you see non-value added interactions?

Individual Impacts — learning
Q: What else did the system help to make better? Is it making a process more efficient, etc.?
- From practice and experience, we are well versed in all types of reports.
- Improve the data part of it. Payment details. There were human errors in authorizing payment manually; RAMCO helped with that. Efficiency has improved in one way or another. The company is going for SAP so the data will go over. Better data management. Data management is very important; it is the heart and
- Yea…we can say that we can keep track using the system. I can access the orders across the country; I can manage that in 10 seconds.
- Yes..of course (it's effective). Depends on how you use it.

Mapping Responses to Study Themes (12/13)

- If I had joined earlier, I would be better placed to answer this, versus manually. But from a finance perspective, the system has helped to reduce fraud. Because there is a logical flow: where money comes from, which account it goes to, when and how it is paid. Accounting entries pass through the system; it helps minimise fraud. Any auditors can open the system and account for it.

Awareness
Q: What do you think is missing from the system?
- Regarding structural change, we are not able to customize some aspects in our way. We need more flexibility for users in that way. Same module, but we should be able to introduce some variance.
- For e.g., we got 10 million. We prepare a cheque of 1 million dollars; the balance is 9 million. We cancel the cheque for xyz reason; it should show 10 million. RAMCO will show 10 million. The moment I prepare another cheque of $50, it would allow me to do so, don't know why; it happened 3 times. I have told the IT department.
- You know, now we are having the usual wages and salary management and all that; only we are satisfied with the ERP, so by implementing RAMCO we'll be going to a…added new function. Well, this is too obvious with the system; we'll be going to…you know, information staff will be without match... Yea, SAP will totally replace RAMCO. We're going to implement SAP in every single phase... It is the whole organization; that is going to happen.
- Yes…it can be improved… There are so many things in the system. Let's say if I want to see an order, I can pick any fields and arrange the fields on my own. So there's a bit of a problem particularly on that..
- We are able to run the system for data for 12 months only. If you need to run it for 24 months, you have to run it in two passes: for 2006-2008, I need to run 2006-2007, then run 2007-2008.
- A: When you are working with RAMCO, you cannot work with Microsoft Word or other programs because of the memory. It takes 2-3 hours to send a voucher. I lose lots of time. From 9.30am-6pm, we can only authorize 10-12 vouchers when I need to authorize 100 vouchers, because the system is slow. The allocated memory is small. RAMCO made some processes difficult but it's good enough.
- The faster communication between departments and better coordination, of course (so we are going with SAP).

Task effectiveness
Q: How do you rate the system on a scale of 1-10?
- RAMCO 2/10.
- I give it 4. I don't depend much.

Task productivity
Q: What changes do you see with the new system?
- A: No changes in the system itself.
- More, the job is repetitive. I authorise pay batches; it's routine, no innovation, no change.


Mapping Responses to Study Themes (13/13)

Columns: Themes | Aspects | General Questions | Respondent A (A) | Respondent B (M) | Respondent C (D) | Respondent D (T) | Respondent E (D) | Respondent F (U)

Task performance:

- A: "6-8 months structure change."
- A: "Training structure: 11 divisions to 8 divisions; 6 divisions structure the same, 2 divisions not the same, different in design phases. We are going to have some structural change."

Did policies change?

- "In 2 divisions no change; in 6 divisions change in structure and policy. That has an impact on how we use SAP. Last time, in the hierarchy, our SAP reports on their own divisions."

Other context-based questions: How is the firm preparing for an upgrade of systems?

- "People who know more in the company are selected for the implementation team. These people are picked from individual departments; they know more about the logical flow of the department. 5 people from my department. It should be in phases: first accounts, then building."
- "Yea. We're running for SAP implementation next term. So now we having data to in doing ongoing launch, so second we have a social working focus on implementation in the organization. Because each and every purpose must be well established and well defined, and everyone knows that we have a transmission, distribution and addition in..."
- "The configuration will take some time. Because right now we should be clear at some stage; we are just doing the SAP implementation right now."


Appendix G: Publications and Contributions

Paper 1: Sedera, D., Tan, F., and Dey, S. (2006) "Identifying and Evaluating the Importance of Multiple Stakeholder Perspective in Measuring ES-Success," in Proceedings of the European Conference on Information Systems (ECIS '06), June 12-14, Göteborg, Sweden.

This article highlights respondents' 'perspective on measurement' as an important design consideration in contemporary Information System (IS) evaluations. The two-phased study analyses data from 310 respondents and examines 81 IS-success studies. The researcher's direct contribution was in the second phase, where the analysis helped identify three key employment cohorts in the context of ES and highlighted the importance of measuring ES-success from a multi-stakeholder viewpoint (see Section 2.7.4). As highlighted in the thesis, an Enterprise System (ES), unlike a traditional Information System, entails many stakeholders, who typically have multiple and often conflicting objectives and priorities.

Paper 2: Sedera, Darshana & Tan, Felix (2007) "Reconceptualizing Usage for Contemporary Information Systems (ERP) Success," in Proceedings of the European Conference on Information Systems (ECIS '07), 7-9 June 2007, St. Gallen, Switzerland.

This article examines the conceptualisation of System Usage in Information System (IS) success research over the last three decades. The researcher's contribution includes summarizing the weaknesses of Usage identified in the literature, including a lack of theoretical grounding, no widely accepted definition, and the use of unsystematized measures. To address these weaknesses, the researcher proposes Adaptive Structuration Theory as a possible theoretical reference from which to distil rich and comprehensive Usage measures for contemporary IS.

28 A number of auxiliary articles were published during the researcher's candidature. These articles examine auxiliary topics in information systems research that are related to the thesis theme and are therefore not listed here. Furthermore, three manuscripts were under review at the completion of the thesis.


Paper 3: Tan, Felix Ter Chian and Sedera, Darshana (2008) "Introducing a Business Process and Software Centric Approach for Enterprise System Teaching," ICIS 2008 Proceedings, Paper 109.

In this paper, the researcher shares insights and experiences from a course designed to provide a business process centric view of a market-leading Enterprise System, SAP. The course also reflects the research context of the quantitative phase of the thesis (see Section 4.5.1). The researcher taught in the course, which was designed for both undergraduate and graduate students and uses two common business processes in a case study employing both sequential and explorative exercises. Student feedback, gained through two longitudinal surveys across two phases of the course, shows promising signs that the teaching approach better equips Information Systems (IS) graduates to meet the challenges of modern organizations.


References

Abdinnour-Helm, S., and Saeed, K. "Examining Post Adoption Usage: Conceptual Development and Empirical Assessment," Proceedings of the Twelfth Americas Conference on Information Systems, Acapulco, Mexico, 2006.

Adams, D.A., Nelson, R.R., and Todd, P.A. "Perceived Usefulness, Ease of Use and Usage of Information Technology: A Replication," MIS Quarterly (16:2) 1992, pp 227-247.

Agarwal, R., and Prasad, J. "A Conceptual and Operational Definition of Personal Innovativeness in the Domain of Information Technology," Information System Research (9:2) 1998, pp 204-215.

Al-Mashari, M. "Process orientation through enterprise resource planning (ERP): a review of critical issues," Knowledge and Process Management (8:3) 2001, pp 175 - 185.

Al-Mashari, M., and Al-Mudimigh, A. "ERP implementation: Lessons from a case study.," Information Technology & People (16:1) 2003, pp 21-33.

Al-Mashari, M., Al-Mudimigh, A., and Zairi, M. "Enterprise resource planning: A taxonomy of critical factors," European Journal of Operational Research (146:2) 2003, pp 352-364.

Al-Mudimigh, A., Zairi, M., Al-Mashari, M. "ERP software implementation: an integrative framework," European Journal of Information Systems (10) 2001, pp 216-226.

Al-Qirim, N., Corbitt, B. "Determinants of electronic commerce usage in small businesses in New Zealand," European Conference on Information Systems, 2004.

Alavi, M., and Henderson, J.C. "An Evolutionary Strategy for Implementing a Decision Support System," Management Science (27:11) 1981, pp 1309-1323.

Alloway, R.M. "Defining Success for Data Processing: A Practical Approach to Strategic Planning for the Department," Massachusetts Institute of Technology.

Almutairi, H., and Subramanian, G.H. "An Empirical Application of the DeLone and McLean Model in the Kuwaiti Private Sector," The Journal of Computer Information Systems (45:3) 2005, pp 113-122.

Alter, S. "18 Reasons Why IT-Reliant Work Systems Should Replace 'The IT Artifact' as the Core Subject Matter of the IS Field," Communications of the Association for Information Systems (12:23), October 2003, pp 365-394.

Alter, S. "Work Systems and IT Artifacts: Does the Definition Matter?," Communications of the Association for Information Systems (17) 2006, pp 299-313.

Antonucci, Y.L., Corbitt, G., Stewart, G., and Harris, A.L. "Enterprise Systems Education: Where Are We? Where Are We Going?," Journal of Information Systems Education (15:3) 2004, p 227.

Auster, E., Choo, C.W. "Environmental Scanning by CEOs in two Canadian Industries," Journal of the American Society for Information Science (44) 1993, pp 194-203.

Avison, D., and Fitzgerald, G. "Where Now for Development Methodologies?," Communications of the ACM (46:1) 2003, pp 78-82.

Bagozzi, R.P. Causal Models in Marketing, Wiley, New York, 1980.

Conceptualising Use for IS Success

Page | 239

Bagozzi, R., and Fornell, C. "Theoretical Concepts, Measurements, and Meaning," in: A Second Generation of Multivariate Analysis, C. Fornell (ed.), New York, 1982, pp. 24-38.

Bailey, J.E., and Pearson, S.W. "Development of a Tool for Measuring and Analyzing Computer User Satisfaction," Management Science (29:5), May 1983, pp 530-545.

Ballantine, J., Bonner, M., Levy, M., Munro, and Powell, P. "The 3-D Model of Information Systems Success: The Search for the Dependent Variable Continues," Information Resources Management Journal (9:4) 1996, pp 5-14.

Bancroft, N.H., Seip, H., and Sprengel, A. Implementing SAP R/3: How to Introduce a Large System into a Large Organization, (2 ed.) Manning Publications Co., Greenwich, 1998.

Barki, H., and Huff, S.L. "Change, attitude to change, and decision support systems success," Information & Management (9) 1985, pp 261-268.

Barney, J.B. "Is the Resource-Based "View" a Useful Perspective for Strategic Management Research? Yes," Academy of Management Review (26:1) 2001, pp 41-56.

Baron, R.M., and Kenny, D.A. "The moderator-mediator variable distinction in social psychological research: Conceptual, strategic and statistical considerations," Journal of Personality and Social Psychology (51) 1986, pp 1173-1182.

Benbasat, I., and Zmud, R.W. "The Identity Crisis Within the IS Discipline: Defining and Communicating the Discipline's Core Properties," MIS Quarterly (27:2) 2003, pp 183-194.

Berchet, C., and Habchi, G. "The implementation and deployment of an ERP system: an industrial case study," Computers in Industry (56:6) 2005, pp 588-605.

Bhattacherjee, A. "Explaining the Effect of Incentives and Control Mechanisms on Information Technology Usage: A Theoretical Model and an Empirical Test," International Conference on Information Systems, 1996.

Bhattacherjee, A. "Managerial Influences on Intraorganizational Information Technology Use: A Principal-Agent Model," Decision Sciences (29:1) 1998, p 139.

Bhattacherjee, A. "Understanding Information Systems Continuance: An Expectation-Confirmation Model," MIS quarterly (25:3) 2001, pp 351-370.

Bhattacherjee, A., and Premkumar, G. "Understanding Changes in Belief and Attitude Toward Information Technology Usage: A Theoretical Model and Longitudinal Test," MIS Quarterly (28:2) 2004, pp 229-254.

Boisot, M.H. Knowledge Assets: Securing Competitive Advantage in the Information Economy Oxford University Press, New York, 1998.

Bokhari, R.H. "The Relationship Between System Usage and User Satisfaction: a Meta Analysis," Journal of Enterprise Information Management (18:1) 2005, pp 211-234.

Bollen, K.A. Structural Equations with Latent Variables Wiley, New York, 1989.

Boontaree, K., Ojelanki, N., and Kweku-Muata, O.-B. "An exploration of factors that impact individual performance in an ERP environment: an analysis using multiple analytical techniques," European Journal of Information Systems (15:6) 2006a, pp 556-568.


Boontaree, K., Ojelanki, N., and Kweku-Muata, O.-B. "An exploration of factors that impact individual performance in an ERP environment: an analysis using multiple analytical techniques," European Journal of Information Systems (15:6) 2006b, p 556.

Boudreau, M.-C., and Robey, D. "Enacting Integrated Information Technology: A Human Perspective," Organization Science (16:1) 2005.

Brady, J.A., Monk, E.F., and Wagner, B.J. Concepts in Enterprise Resource Planning, (1 ed.) Course Technology, Thomson Learning, Boston, Massachusetts, 2001.

Briggs, R.O., Jan De Vreede, G., Nunamaker, J.F., and Sprague, R.H. "Special Issue: Information Systems Success," Journal of Management Information Systems (19:4) 2003, pp 5-8.

Bryman, A., and Bell, E. Business Research Methods (2nd Edition ed.) Oxford University Press, New York, 2007.

Burton-Jones, A., and Gallivan, M.J. "Toward a Deeper Understanding of System Usage in Organizations: A Multilevel Perspective," MIS Quarterly (31:4) 2007, pp 657-679.

Burton-Jones, A., and Straub, D. "Minimizing Method Variance in Measures of System Usage," 7th Annual Conference of the Southern Association for Information Systems, 2004.

Burton-Jones, A., and Straub, D.W. "Reconceptualizing System Usage," Information Systems Research (17:3) 2006, pp 228-246.

Cameron, K.S., and Whetten, D.A. "Some Conclusions About Organizational Effectiveness," in: Organizational Effectiveness: A Comparison Of Multiple Models, Academic Press, New York, 1983, pp. 261-277.

Campbell, D. "Task Complexity: A Review and Analysis," Academy of Management Review (13:1) 1988, pp 40-52.

Chan, A. "Derive Research Question," in: Faculty of Health Sciences, L.a.L. Unit (ed.), University of Sydney, Sydney, 1998.

Chervany, N.L., Dickson, G.W., and Kozar, K. "An experimental gaming framework for investigating the influence of management information systems on decision effectiveness," MISRC Working Paper No. 71-12, Management Information Systems Research Center, University of Minnesota, Minneapolis, MN, 1972.

Cheung, C.M.K., and Limayem, M. "The Role of Habit and Changing Nature of the Relationship between Intention and Usage," 13th European Conference on Information Systems, Regensberg, Germany, 2005.

Chien, S.W., and Tsaur, S.M. "Investigating the success of ERP systems: Case studies in three Taiwanese high-tech industries," Computers in Industry, February 2007.

Chin, W.W. "The Partial Least Squares Approach for Structural Equation Modeling," in: Modern Methods for Business Research, G.A. Marcoulides (ed.), Lawrence Erlbaum Associates, Mahwah, NJ, 1998, pp. 295-336.

Chin, W.W., and Newsted, P.R. "Structural Equation Modeling: Analysis with Small Samples Using Partial Least Squares," in: Statistical Strategies for Small Sample Research R. Hoyle (ed.), Sage Publications, 1999, pp. 307-341.

Chin, W.W., and Todd, P.A. "On the use, usefulness, and ease of structural equation modeling in MIS research: A note of caution," MIS Quarterly (19:2), June 1995, pp 237-246.


Choe, J.M. "The relationships among Performance of Accounting Information Systems, Influence factors and Evolution level of Information Systems," Journal of Management Information Systems (12:4) 1996, pp 215-239.

Christ, M., Baron, S., Krishnan, R., Nagin, D., and Guenther, O. "A Session-based Empirical Investigation of Web Usage," European Conference on Information Systems, 2003.

Collopy, F. "Biases in Retrospective Self-reports of Time Use: An Empirical Study of Computers Users," Management Science (42:5) 1996, pp 758-767.

Compeau, D.R., and Higgins, C.A. "Computer Self-Efficacy: Development of a Measure and Initial Test," MIS Quarterly (19:2), June 1995, pp 189-211.

Deloitte Consulting "ERP's Second Wave: Maximizing the Value of ERP-Enabled Processes," 1999.

Cooper, M.C., and Ellram, L.M. "Characteristics of Supply Chain Management and the Implications for Purchasing and Logistics Strategy," The International Journal of Logistics Management (4:2) 1993, pp 13-24.

Creswell, J., Plano Clark, V., Guttman, M., and Hanson, W. "Advanced mixed methods research designs," in: Handbook of Mixed Methods in the Behavioral and Social Sciences, A. Tashakkori and C. Teddlie (eds.), Sage Publications, Thousand Oaks, CA, 2003, pp. 209-240.

Creswell, J.W. Research Design: Qualitative, Quantitative and Mixed Methods Approaches Sage Publications, Thousand Oaks, CA, 2003.

Creswell, J.W. Research Design: Qualitative, Quantitative and Mixed Methods Approaches Sage Publications, Thousand Oaks, California, 2009.

Cronbach, L.J. "Coefficient alpha and the internal structure of tests," Psychometrika (16:3) 1951, pp 297-334.

Cronbach, L.J. "Test validation," in: Educational Measurement (2nd ed.), R.L. Thorndike (ed.), American Council on Education, Washington, DC, 1971, pp. 443-507.

Cronbach, L.J., and Meehl, P.E. "Construct Validity in Psychological Tests," Psychological Bulletin (52:1) 1955, pp 281-302.

Crowston, K., Howison, J., and Annabi, H. "Information Systems Success in Free and Open Source Software Development: Theory and Measures," Software Process: Improvement and Practice (11:2) 2006, pp 123 - 148.

Culnan, M.J. "The Dimensions of Perceived Accessibility to Information:Implications for the delivery of Information Systems and Services.," Journal of the American Society for Information Science (36) 1985, pp 302-308.

Curran, T., Keller, G., and Ladd, A. SAP R/3 Business Blueprint. Understanding the Business Process Reference Model Prentice Hall, 1998.

D'Ambra, J., and Wilson, C.S. "Explaining perceived performance of the World Wide Web: uncertainty and the task-technology fit model," Internet Research (14:4) 2004, p 294.


Davenport, T.H. Process Innovation: Reengineering Work through Information Technology, Harvard Business School Press, 1993.

Davenport, T.H. "Putting the Enterprise into the Enterprise System," Harvard Business Review (76:4), Jul/Aug 1998, pp 121-131.

Davenport, T.H. "The Future of Enterprise System-Enabled Organizations," Information Systems Frontiers (2:2) 2000, pp 163-180.

Davenport, T.H., and Short, J.E. "The New Industrial Engineering: Information Technology and Business Process Redesign," Sloan Management Review (31:4) 1990, pp 11-27.

Davis, F.D. "Perceived Usefulness, Perceived Ease of Use, and End User Acceptance of Information Technology," MIS Quarterly (13:3) 1989, pp 318-339.

Davis, F.D., Bagozzi, R.P., and Warshaw, P.R. "User Acceptance of Computer Technology: A Comparison of Two Theoretical Models," Management Science (35:8) 1989, pp 982-1003.

DeLone, W.H., and McLean, E.R. "Information Systems Success: The Quest For The Dependent Variable," Information Systems Research (3:1) 1992, pp 60-95.

DeLone, W.H., and McLean, E.R. "The DeLone and McLean Model of Information Systems Success: A Ten year Update," Journal of Management Information Systems (19:4) 2003, pp 9-30.

DeLone, W.H., and McLean, E.R. "Measuring e-Commerce Success: Applying the DeLone & McLean Information Systems Success Model," International Journal of Electronic Commerce (9:1) 2004, pp 31-47.

DeSanctis, G., and Poole, M.S. "Understanding the use of group decision support systems: the theory of adaptive structuration," in: Organizations and Communication Technology, J. Fulk and C. Steinfield (eds.), Sage, Newbury Park, CA, 1990, pp. 173-193.

DeSanctis, G., and Poole, M.S. "Capturing the Complexity in Advanced Technology use: Adaptive Structuration Theory," Organization Science (5:2), May 1994, pp 121-147.

Desanctis, G., Poole, M.S., Zigurs, I., DeSharnais, G., D'Onofrio, M., Gallupe, B., Holmes, M., Jackson, B., Lewis, H., Limayem, M., Lee-Partridge, J., Niederman, F., Sambamurty, V., Vician, C., Watson, R., Billingsley, J., Kirsch, L., Lind, R., and Shannon, D. "The Minnesota GDSS Research Project: Group Support Systems, Group Processes, and Outcomes," Journal of the Association for Information Systems (9:10/11) 2008, pp 551-608.

Despont-Gros, C., Muller, H., et al. "Evaluating User Interactions with Clinical Information Systems: A model based on human-computer interaction models.," Journal of Biomedical Informatics (38) 2005, pp 244-255.

Devaraj, S., and Kohli, R. "Performance Impacts of Information Technology: Is Actual Usage the Missing Link?," Management Science (49:3) 2003, pp 273-289.

Diamantopoulos, A., and Winklhofer, H. "Index Construction with Formative Indicators: An Alternative to Scale Development," Journal of Marketing Research (38:2) 2001, pp 269-277.

Dickson, G., Senn, J., and Chervany, N. "Research In Management Information Systems: The Minnesota Experiments," Management Science (23:9) 1977, pp 913-923.


Dishaw, M.T., Strong, D.M. "Extending the Technology Acceptance Model with Task Technology Fit Constructs," Information & Management (36:1) 1999, pp 9-21.

Djekic, P., and Loebbecke, C. "The Impact of Technical Copy Protection and Internet Services Usage on Software Piracy: An International Survey on Sequencer Software Piracy," 13th European Conference of Information Systems 2005, Regensberg, Germany, 2005.

Doll, W.J., and Torkzadeh, G. "The Measurement Of End-User Computing Satisfaction," MIS Quarterly (12:2), June 1988, pp 259-274.

Doll, W.J., and Torkzadeh, G. "Developing a Multidimensional Measure of System-Use in an Organizational Context," Information & Management (33:4) 1998, pp 171-185.

Draijer, C., and Schenk, D. "Best Practices of Business Simulation with SAP R/3," Journal of Information Systems Education (15:3) 2004, pp 261-265.

Drori, O. "Integration of Text Retrieval Technology Into Formatted (Conventional) Information Systems," ACM SIGSOFT (24:1) 1999, pp 78-80.

Dwivedi, Y. "Consumer Adoption and Usage of Broadband in Bangladesh," Americas Conference on Information Systems, 2006.

Edwards, J.R., and Bagozzi, R.P. "On the Nature and Direction of Relationships between Constructs," Psychological Methods (5:2) 2000, pp 155-174.

Ein-Dor, P., Segev, E. "Organizational Context and the Success of Management Information Systems," Management Science (24:10) 1978, pp 1064-1077.

Eisenhardt, K.M. "Building theories from case study research," Academy of Management Review (14:4) 1989, pp 532 - 550.

Emery, J.C. "Cost/Benefit Analysis of Information Systems," Chicago, IL.

Etezadi-Amoli, J., and Farhoomand, A.F. "A Structural Model of End User Computing Satisfaction and User Performance," Information & Management (30:2) 1996, pp 65-73.

Ferdian "A Comparison of Event-driven Process Chains and UML Activity Diagram for Denoting Business Processes," Technische Universitat Hamburg-Harburg, pp. 1-42.

Firestone, W.A., and Herriott, R.E. "Two images of schools as organisations: an explication and illustrative empirical test," Educational Administration Quarterly (18:2) 1982, pp 39-59.

Fornell, C., and Larcker, D. "Structural equation models with unobservable variables and measurement error," Journal of Marketing Research (18:1) 1981, pp 39-50.

Fraser, S.G., and Salter, G. "A Motivational View of Information Systems Success: A Reinterpretation of DeLone and McLean's Model," Department of Accounting and Finance Research Paper Series (95:2) 1995, pp 1-35.

Freeze, R., and Raschke, R. "An Assessment of Formative and Reflective Constructs in IS Research," European Conference on Information Systems, St. Gallen, Switzerland, 2007.

Furumo, K., and Melcher, A. "The Importance of Social Structures in Implementing ERP Systems: A Case Study Using Adaptive Structuration Theory," Journal of Information Technology Case and Application Research (8:2) 2006, pp 39-58.


Gable, G., Sedera, D., and Chan, T. "Re-conceptualizing Information System Success: The IS-Impact Measurement Model," Journal of the Association for Information Systems (9:7) 2008, p 377.

Gable, G.G. "Integrating case study and survey research methods: an example in information systems," European Journal of Information Systems (3:2) 1994, pp 112-126.

Gable, G.G. "A Multidimensional Model of Client Success When Engaging External Consultants," Management Science (42:8) 1996, pp 1175-1198.

Gable, G.G., Sedera, D., and Chan, T. "Enterprise Systems Success: A Measurement Model," 24th International Conference of Information Systems, Seattle, Washington, 2003.

Gallagher, C.A. "Perceptions of the value of a Management Information System," Academy of Management (17:1) 1974, pp 46-55.

Galliers, R.D. "On the Nature of Information," in: Information Analysis, R.D. Galliers (ed.), Addison-Wesley, Wokingham, Bershire, 1987, pp. 3-6.

Garson, D. "Quantitative Research in Public Administration," in: Statnotes: Topics in Multivariate Analysis, NC State University, 2010.

Gatian, A.W. "Is User Satisfaction a Valid Measure of System Effectiveness?," Information & Management (26:3) 1994, pp 119-131.

Gaur, A.S., and Gaur, S.S. Statistical Methods for Practice and Research: A Guide to Data Analysis Using SPSS Sage Publications: New Delhi, 2006.

Gebauer, J., Shaw, M.J., and Gribbins, M.L. "Usage and Impact of Mobile Business Applications: An Assessment Based on the Concepts of Task/Technology Fit," Tenth Americas Conference on Information Systems, New York, 2004.

Gefen, D., Karahanna, E., and Straub, D.W. "Trust and TAM in Online Shopping: An Integrated Model," MIS Quarterly (27:1) 2003, pp 51-90.

Gefen, D., Straub, D.W., and Boudreau, M.-C. "Structural Equation Modeling and Regression: Guidelines for Research Practice," Communications of the AIS (4:7) 2000, pp 1-79.

Gelderman, M. "The relation between user satisfaction, usage of information systems and performance," Information & Management (34:1) 1998, pp 11-18.

Giddens, A. Central Problems in Social Theory- Action, Structure and Contradiction in Social Analysis Berkeley, University of California Press, 1979.

Gill, G.T. "Expert Systems Usage: Task Change and Intrinsic Motivation," MIS Quarterly (20:3) 1996, pp 301-328.

Goodhue, D.L. "User evaluations of MIS success: What are we really measuring?," Proceedings of the Hawaii International Conference on System Sciences (4) 1992, pp 303-314.

Goodhue, D.L., and Thompson, R.L. "Task-Technology Fit and Individual Performance," MIS Quarterly (19:2) 1995, pp 213-236.

Gopal, A., Bostrom, R.P., et al. "Applying Adaptive Structuration Theory to Investigate the Process of Group Support Systems Use," Journal of Management Information Systems (9:3) 1992, pp 2-16.

Greene, J.C., Caracelli, V.J., and Graham, W.F. "Toward a conceptual framework for mixed-method evaluation design," Educational Evaluation and Policy Analysis (11:3) 1989, pp 255-274.


Gregor, S. "The nature of theory in information systems," MIS Quarterly (30:3) 2006, pp 611-642.

Gregor, S. "Building Theory in the Sciences of the Artificial," 4th International Conference on Design Science Research in Information Systems and Technology, Philadelphia, Pennsylvania, 2009.

Guba, E.G., and Lincoln, Y.S. "Competing paradigms in qualitative research," in: Handbook of Qualitative Research, N.K. Denzin and Y.S. Lincoln (eds.), Sage, London, 1994, pp. 105-117.

Guimaraes, T., and Igbaria, M. "Client/server system success: Exploring the human side," Decision Sciences (28:4) 1997, p 851.

Hair, J.F.J., Anderson, R.E., Tatham, R.L., and Black, W.C. Multivariate Data Analysis, (5th Edition ed.) Prentice Hall, Upper Saddle River, NJ, 1998.

Hair, J.F.J., Anderson, R.E., Tatham, R.L., and Black, W.C. Multivariate Data Analysis, Prentice Hall, 1995.

Hakkinen, L., and Hilmola, O.-P. "ERP evaluation during the shakedown phase: lessons from an after-sales division," Information Systems Journal (18:1) 2008, pp 73-100, doi:10.1111/j.1365-2575.2007.00261.x.

Halawi, L.A., McCarthy, R.V., and Aronson, J.E. "An Empirical Investigation of Knowledge Management Systems' Success," The Journal of Computer Information Systems (48:2) 2007, pp 121-135.

Hamilton, S., and Chervany, N.L. "Evaluating Information System Effectiveness. Part I: Comparing Evaluation Approaches," MIS Quarterly (5:3), September 1981, pp 55-69.

Heise, D.R. "Employing Nominal Variables, Induced Variables, and Block Variables in Path Analysis," Sociological Methods and Research (1) 1972, pp 147-173.

Hendricks, K.B., Singhal, V.R., and Stratman, J.K. "The impact of enterprise systems on corporate performance: A study of ERP, SCM, and CRM system implementations," Journal of Operations Management (25:1) 2007, pp 65-82.

Henseler, J., Ringle, C.M., and Sinkovics, R. "The Use of Partial Least Squares Path Modeling in International Marketing," Advances in International Marketing (AIS) (In Print) 2008.

Hirt, S.G., and Swanson, E.B. "Maintaining ERP: Rethinking relational foundations," The Anderson School at UCLA, 1999.

Hong, W., Thong, J.Y.L., Wong, W.-M., and Tam, K.-Y. "Determinants of user acceptance of digital libraries: An empirical examination of individual differences and system characteristics," Journal of Management Information Systems (18:3) 2001, p 97.

Hoyle, R.H., and Robinson, J.I. (eds.) Mediated and moderated effects in social psychological research: Measurement, design, and analysis issues. Sage Publications, Thousand Oaks, CA, 2003.

Hunter, M.G. Qualitative research in information systems: An exploration of methods Idea, 2004.

Igbaria, M., and Nachman, S.A. "Correlates of User Satisfaction with End User Computing: An Exploratory Study," Information & Management (19:2) 1990, pp 73-82.

Igbaria, M., Parasuraman, S., and Baroudi, J.J. "A motivational model of microcomputer usage," Journal of Management Information Systems (13:1) 1996, pp 127-143.


Igbaria, M., and Tan, M. "The Consequences of Information Technology Acceptance on Subsequent Individual Performance," Information & Management (32:3) 1997, pp 113-121.

Iivari, J. "An Empirical Test of the DeLone and McLean Model of Information System Success," Database for Advances in Information Systems (36:2) 2005, pp 8-27.

Ives, B., and Olson, M.H. "User Involvement and MIS Success: A Review of Research," Management Science (30:5), May 1984, pp 586 - 603.

Jain, V., and Kanungo, S. "Beyond perceptions and usage: Impact of nature of IS use on IS-enabled productivity gain," International Journal Of Human-Computer Interaction(Special Issue on HCI in MIS) (19:1) 2005, pp 113-136.

Jarvis, C.B., MacKenzie, S.B., and Podsakoff, P.A. "A critical review of construct indicators and measurement model misspecification in marketing and consumer research," Journal of Consumer Research (30) 2003, pp 199-216.

Jashapara, A. Knowledge Management: An integrated approach Pearson Education, London, UK, 2004.

Jasperson, J., Carter, P., and Zmud, R. "A Comprehensive Conceptualization of Post-adoptive Behaviors Associated with IT Enabled Work Systems," MIS Quarterly (29:3) 2005, pp 525-557.

Jenkins, A.M. Research Methodologies and MIS Research Elsevier Science Publishers B.V., Amsterdam, Holland, 1985, pp. 103-117.

Jeong, M., and Lambert, C.U. "Adaptation of an Information Quality Framework to Measure Customers' Behavioral Intentions to use Lodging Web sites," Hospitality Management (20:1) 2001, pp 129-146.

Jonassen, D.H. "Toward a design theory of problem solving," Educational Technology, Research and Development (48:4) 2000, p 63.

Kalton, G., and Kasprzyk, D. "Imputing for missing survey responses," Section on Survey Research Methods, American Statistical Association, 1982, pp. 22-31.

Kaplan, R.S., and Norton, D.P. (eds.) The strategy-focused organization: How balanced scorecard companies thrive in the new business environment. Harvard Business School Publishing Corporation, 2001.

Kaptelinin, V., Nardi, B., and Macaulay, C. "Methods & tools: The activity checklist: a tool for representing the “space” of context," Interactions (6:4) 1999, pp 27-39.

Khalifa, M., Shen, K.N. "Effects of Electronic Customer Relationship Management on Customer Satisfaction: A Temporal Model," 38th Annual Hawaii International Conference on System Sciences, Big Island, Hawaii, 2005.

Kim, J., Lee, J., Han, K., and Lee, M. "Businesses as Buildings: Metrics for the Architectural Quality of Internet Businesses," Information Systems Research (13:3) 2002, pp 239-254.

Kim, S., and Soergel, D. "Selecting and Measuring Task Characteristics as Independent Variables," American Society for Information Science and Technology (42:1) 2005.

Kim, S.S., and Malhotra, N.K. "A Longitudinal Model of Continued Use: An Integrative View of Four Mechanisms Underlying Postadoption Phenomena," Management Science (51:5) 2005, pp 741-755.

Conceptualising Use for IS Success

Page | 247

Kim, S.S., Malhotra, N.K., and Narasimhan, S. "Two Competing Perspectives on Automatic Use: A Theoretical and Empirical Comparison," Information Systems Research (16:4) 2005, pp 418-432.

Kim, Y., Hsu, J., and Stern, M. "An Update on the IS/IT Skills Gap," Journal of Information Systems Education (17:4) 2006, p 395.

Klein, H.K., and Myers, M.D. "A Set of Principles for Conducting and Evaluating Interpretive Field Studies in Information Systems," MIS Quarterly (23:1) 1999, pp 67-93.

Kozlowski, S., and Klein, K. "A multilevel approach to theory and research in organizations: Contextual, temporal, and emergent processes," in: Multilevel theory, research, and methods in organizations: Foundations, extensions, and new directions., K.J. Klein and S.W.J. Kozlowski (eds.), 2000, pp. 3-90.

Kriebel, C., and Raviv, A. "An Economics approach to modeling the Productivity of Computer Systems," Management Science (25:3) 1980, pp 297-311.

Kuan, K.K.Y., and Chau, P.Y.K. "A perception-based model for EDI adoption in small businesses using a technology-organization-environment framework," Information & Management (38:8) 2001, pp 507-521.

Kuutti, K. Activity Theory as a potential framework for HCI Research Cambridge: MIT Press, Boston, 1995.

Lamb, R., and Kling, R. "Reconceptualizing Users as Social Actors in Information Systems Research," MIS Quarterly (27:2), June 2003, pp 197-235.

Landrum, H., Prybutok, V., Strutton, D., and Zhang, X. "Examining the Merits of Usefulness Versus Use in an Information Service Quality and Information System Success Web-Based Model," Information Resources Management Journal (21:2) 2008, pp 1-17.

Lee, A. "Inaugural Editor's Comments," MIS Quarterly (23:1), March 1999, pp 5-11.

Lee, A. "Researchable Directions for ERP and Other New Information Technologies," MIS Quarterly (24:1), March 2000, pp iii-viii.

Lee, J.K., Braynov, S., and Rao, R. "Effects of Public Emergency on Citizens' Usage Intention Toward E-Government: A Study in the Context of War in Iraq," International Conference on Information Systems, 2003.

Lee, Y.W., Strong, D.M., Kahn, B.K., and Wang, R.Y. "AIMQ: A Methodology for Information Quality Assessment," Information & Management (40:2) 2002, pp 133-146.

Lee, Z., and Lee, Y.H. "Cultural Implications of Electronic Communication Usage: A Theory-Based Empirical Analysis," International Conference on Information Systems, 2003.

Leger, P.-M. "Using a Simulation Game Approach to Teach Enterprise Resource Planning Concepts," Journal of Information Systems Education (17:4) 2006, p 441.

Leidner, D.E., and Elam, J.J. "Executive Information Systems: Their Impact On Executive Decision-Making," Journal of Management Information Systems (10:3) 1994, pp 139-156.

LeRouge, C., and Webb, H.W. "Appropriating Enterprise Resource Planning Systems in Colleges of Business: Extending Adaptive Structuration Theory for Testability," Journal of Information Systems Education (15:3) 2004, p 315.

Levy, M., and Powell, P. Strategies for Growth in SMEs: The role of information and information systems Butterworth Heinemann, 2005.

Li, E.Y. "Perceived importance of information system success factors: A meta analysis of group differences," Information & Management (32:1) 1997, pp 15-28.

Li, Y. "Task type and a faceted classification of tasks," Proceedings of the 67th ASIS&T Annual Meeting (41), Information Today, Medford, NJ, 2004.

Liang, H., Saraf, N., Qing, H., and Yajiong, X. "Assimilation of Enterprise Systems: The Effect of Institutional Pressures and the Mediating Role of Top Management," MIS Quarterly (31:1) 2007, pp 59-87.

Liker, J.K., Fleisher, M., Nagamachi, M., Zonnevylle, M.S. "Designers and their Machines: CAD Use and Support in the US and Japan," Association for Computing Machinery: Communications of the ACM (35:2) 1992, pp 76-95.

Lucas, H.C., Jr., and Nielsen, N.R. "The Impact of the Mode of Information on Learning and Performance," Management Science (26:10), October 1980, pp 982-993.

Lucas, H.C. Why Information Systems Fail Columbia University Press, New York, 1975.

Lucas, H.C., and Spitler, V.K. "Technology Use and Performance: A Field Study of Broker Workstations," Decision Sciences (30:2) 1999, pp 1-21.

MacCallum, R.C., Wegener, D.T., Uchino, B.N., and Fabrigar, L.R. "The problem of equivalent models in applications of covariance structure analysis," Psychological Bulletin (114) 1993, pp 185-199.

MacKenzie, S.B., Podsakoff, P.M., and Jarvis, C.B. "The problem of measurement model misspecification in behavioral and organizational research and some recommended solutions," Journal of Applied Psychology (90:4) 2005, pp 710–730.

MacKinnon, D.P., Fairchild, A.J., and Fritz, M.S. "Mediation Analysis," Annual Review of Psychology (58) 2007, p 593.

Mahmood, M.A., and Medewitz, J.N. "Impact of Design Methods on Decision Support Systems Success: An Empirical Assessment," Information & Management (9) 1985, pp 137-151.

Mandal, P., and Gunasekaran, A. "Issues in implementing ERP: A case study," European Journal of Operational Research (146:2), April 2003, pp 274-283.

Mao, E., and Ambrose, P. "A Theoretical and Empirical Validation of IS Success Models in a Temporal and Quasi Volitional Technology Usage Context," Proceedings of the Americas Conference on Information Systems, New York, 2004.

Markus, L.M., Axline, S., et al. Learning from Adopters' Experiences with ERP: Problems Encountered and Success Achieved Cambridge University Press, Cambridge, 2003, pp. 23-55.

Markus, L.M., and Tanis, C. (eds.) The Enterprise Systems Experience–From Adoption to Success. OH: Pinnaflex Educational Resources, Inc., Cincinnati, 2000.

Mason, R.O. "Measuring Information Output: A Communication Systems Approach," Information & Management (1:5), October 1978, pp 219-234.

Massetti, B., and Zmud, R.W. "Measuring the extent of EDI usage in complex organizations: Strategies and Illustrative examples," MIS Quarterly (20:3) 1996, pp 331-345.

Massey, A., Wheeler, B., and Keen, P. Technology Matters Prentice Hall, New Jersey, 2001, pp. 25-48.

Mathieson, K., Peacock, E., and Chin, W.W. "Extending the Technology Acceptance Model: The Influence of Perceived User Resources," Database for Advances in Information Systems (32:3) 2001, pp 86-112.

McAfee, A. "Mastering the Three Worlds of Information Technology," Harvard Business Review, November 2006, pp 141-148.

McCracken, G. The Long Interview Sage, London, 1988.

McGill, T., Hobbs, V., and Klobas, J. "User-developed applications and information systems success: A test of DeLone and McLean's Model," Information Resources Management Journal (16:1) 2003, pp 24-45.

Microsoft "Microsoft Dynamics Academic Alliance Faculty Content," 2010.

Miles, M.B., and Huberman, A.M. Qualitative data analysis, (2nd ed.) SAGE, Thousand Oaks, CA, 1994.

Miller, T.L. "Segmenting the Internet," American Demographics, 1996, pp 48-52.

Mingers, J. "Combining IS Research Methods: Towards a Pluralist Methodology," Information Systems Research (12:3) 2001, pp 240-259.

Mohr, L.B. Explaining Organizational Behavior, the Limits and Possibilities of Theory and Research Jossey-Bass Publishers, San Francisco, CA, 1982.

Morgeson, F.P., and Hofmann, D.A. "The Structure and Function of Collective Constructs: Implications for Multilevel Research and Theory Development," Academy of Management Review (24:2) 1999, pp 249-265.

Morse, J.M. "Principles of Mixed Methods and Multimethod Research Design," in: Handbook of Mixed Methods in Social and Behavioral Research, A. Tashakkori and C. Teddlie (eds.), Sage, Thousand Oaks, California, 2003.

Myers, M.D. "Qualitative Research in Information Systems," MIS Quarterly (21:2), June 1997, pp 241-242.

Myers, M.D. Qualitative Research in Business and Management Sage Publications, Thousand Oaks, California, 2009.

Nah, F.F., Lau, J.L., and Kuang, J. "Critical factors for successful implementation of enterprise systems," Business Process Management Journal (7:3) 2001, pp 285-296.

Nardi, B.A. (ed.) Context and consciousness: activity theory and human-computer interaction. Massachusetts Institute of Technology, 1996.

Nonaka, I. "A Dynamic Theory of Organizational Knowledge Creation," Organization Science (5:1) 1994, pp 14-37.

Nunnally, J.C. Psychometric theory, (2nd ed.) McGraw-Hill, New York, 1978.

Nunnally, J.C., and Bernstein, I.H. Psychometric theory, (3rd ed.) McGraw-Hill, New York, 1994.

Briggs, R.O., Adkins, M., Mittleman, D., and Kruse, J. "A technology transition model derived from field investigation of GSS use aboard the U.S.S. CORONADO," Journal of Management Information Systems (15:3) 1998, p 151.

Orlikowski, W.J. "The Duality of Technology: Rethinking the Concept of Technology in Organizations," Organization Science (3:3) 1992, pp 398-427.

Orlikowski, W.J., and Iacono, S.C. "Desperately Seeking the IT in IT Research - A Call to Theorizing the IT Artifact," Information Systems Research (12:2) 2001, pp 121-134.

Oxford Compact Oxford English Dictionary of Current English Oxford University Press, Oxford, 2008.

Pall, G.A. Quality Process Management Prentice Hall, Englewood Cliffs, New Jersey, 1987.

Parr, A., and Shanks, G. "Critical Success Factors Revisited: A Model for ERP Project Implementation," in: Second Wave Enterprise Resource Planning Systems: Implementation and Effectiveness, G. Shanks, P. Seddon and L. Willcocks (eds.), Cambridge University Press, 2003, pp. 196-219.

Patton, M.Q. Qualitative research and evaluation methods, (3rd ed.) Sage, Thousand Oaks, CA, 2002.

Petter, S., DeLone, W.H., and McLean, E.R. "Measuring Information Systems Success: Models, Dimensions, Measures, and Interrelationships," European Journal of Information Systems (17) 2008, pp 236-263.

Petter, S., Straub, D., and Rai, A. "Specifying Formative Constructs in Information systems research," MIS Quarterly (31:4) 2007, pp 623-656.

Pflughoeft, K.A., Ramamurthy, K., Soofi, E.S., Yasai-Ardekani, M., and Zahedi, F.M. "Multiple Conceptualizations of Small Business Web Use and Benefit," Decision Sciences (34:3) 2003, pp 467-511.

Pinsonneault, A., and Kraemer, K.L. "Survey Research Methodology in Management Information Systems: An Assessment," Journal of Management Information Systems (10:2) 1993, pp 75-105.

Podsakoff, P.M., Scott, B.M., Lee, J.-Y., and Podsakoff, N.P. "Common Method Bias in Behavioral Research: A Critical Review of the Literature and Recommended Remedies," Journal of Applied Psychology (88:5) 2003, pp 879-903.

Porter, M.E. Competitive Advantage, Creating and Sustaining Superior Performance, (1 ed.) The Free Press, New York, 1985.

Preacher, K.J., and Hayes, A.F. "Asymptotic and resampling strategies for assessing and comparing indirect effects in multiple mediator models," Behavior Research Methods (40) 2008, pp 879-891.

Rai, A., Lang, S.S., and Welker, R.B. "Assessing the Validity of IS Success Models: An Empirical Test and Theoretical Analysis," Information Systems Research (13:1), March 2002, pp 50-69.

Rawstorne, P., Jayasuriya, R., Caputi, P. "Issues in predicting and explaining usage behaviors with the technology acceptance model and the theory of planned behavior when usage is mandatory," Proceedings of the twenty first international conference on Information systems, Brisbane, Queensland, 2000, pp. 35-44.

Raymond, L. "Organizational Characteristics and MIS Success in the context of Small Business," MIS Quarterly (9:1) 1985, pp 37-52.

Raymond, L. "Organizational Context and Information Systems Success: A Contingency Approach," Journal of Management Information Systems (6:4) 1990, pp 1-20.

Rice, R.E. "Relating electronic mail use and network structure to R&D work networks and performance," Journal of Management Information Systems (11:1) 1994, pp 9-21.

Ringle, C.M., Wende, S., and Will, A. "SmartPLS," University of Hamburg, Hamburg, Germany, 2005.

Rivard, S., and Huff, S.L. "User Developed Applications: Evaluation of Success from the Department Perspective," MIS Quarterly (8:1) 1984, pp 39-50.

Robson, C. Real World Research Blackwell Publishers, Oxford, 1993.

Rogers, E.M. Diffusion of Innovations, (5th ed.) Simon and Schuster, New York, 2003.

Rosemann, M., and Maurizio, A.A. "SAP-related Education - Status Quo and Experiences," Journal of Information Systems Education (16:4) 2005, p 437.

Ross, J.W., and Vitale, M.R. "The ERP Revolution: Surviving vs. Thriving," Information Systems Frontiers (2:2) 2000, pp 233-241.

Ross, J.W., Vitale, M.R., and Willcocks, L.P. "The Continuing ERP revolution: Sustainable Lessons, New Modes of Delivery," in: Second-Wave Enterprise Resource Planning Systems: Implementing for effectiveness, G. Shanks, P.B. Seddon and L.P. Willcocks (eds.), Cambridge University Press, Cambridge, 2003.

Rossiter, J.R. "The C-OAR-SE procedure for scale development in marketing," International Journal of Research in Marketing (19) 2002, pp 305-335.

Sabherwal, R., Jeyaraj, A., and Chowa, C. "Information Systems Success: Individual and Organizational Determinants," Management Science (52:12) 2006, pp 1849-1864.

Sabherwal, R., and Robey, D. "Reconciling variance and process strategies for studying information system development," Information System Research (6:4) 1995, pp 303-327.

Saeed, K.A., and Abdinnour-Helm, S. "Examining the effects of information system characteristics and perceived usefulness on post adoption usage of information systems," Information & Management (45:6) 2008, pp 376-386.

Schwartz, E. "Does ERP Matter?," in: InfoWorld, 2007.

Schwarz, A., and Chin, W. "Looking Forward: Toward an Understanding of the Nature and Definition of IT Acceptance," Journal of the Association for Information Systems (8:4), April 2007, pp 230-243.

Scott, E., Alger, R., Pequeno, S., and Sessions, N. "The Skills Gap as Observed between IS Graduates and the System Development Industry- A South African Experience," IS2002 Conference, Informing Science, 2002, pp. 1403-1411.

Scott, J.E., and Vessey, I. "Implementing Enterprise Resource Planning Systems: The Role of Learning from Failure," Information Systems Frontiers (2:2) 2000, pp 213-232.

Seddon, P.B. "A Respecification and Extension of the DeLone and McLean Model of IS Success," Information Systems Research (8:3) 1997, pp 240-253.

Seddon, P.B., and Kiew, M.Y. "A Partial test and development of the DeLone and McLean Model of IS Success," International Conference on Information Systems, Vancouver, British Columbia, Canada, 1994.

Seddon, P.B., Staples, S., and Patnayakuni, M. "Dimensions of Information Systems Success," Communications of the AIS (2:20) 1999.

Sedera, D., Gable, G., and Chan, T. "Knowledge Management as an antecedent of Enterprise Systems Success," 10th Americas Conference on Information Systems, Association for Information Systems, New York City, New York, 2004.

Sedera, D., and Tan, T.C. "User Satisfaction: An Overarching Measure of Enterprise System Success," Pacific Asia Conference on Information Systems, Bangkok, Thailand, 2005.

Sedera, D., Tan, T.C., and Dey, S. "Identifying and Evaluating the Importance of Multiple Stakeholder Perspective in Measuring ES Success: The Importance of a Multiple Stakeholder Perspective," 14th European Conference on Information Systems, Association for Information Systems, Göteborg, Sweden, 2006.

Segars, A.H., Grover, V. "Re-examining Perceived Ease of Use and Usefulness: A Confirmatory Factor Analysis," MIS Quarterly (17) 1993, pp 517-525.

Seidman, I. Interviewing as qualitative research: a guide for researchers in education and the social sciences, (3rd ed.) Teachers College Press, New York, 2006.

Senn, J.A. Information Systems Management Wadsworth Publishing Company, Belmont, CA, 1982.

Shang, S., and Seddon, P.B. "Assessing and managing the benefits of enterprise systems: the business manager's perspective," Information Systems Journal (12:4) 2002, pp 271-299.

Shang, S., Seddon, P.B. "A Comprehensive Framework for Classifying Benefits of ERP Systems," 6th Americas Conference of Information Systems, Association for Information Systems, Long Beach, California, 2000.

Shanks, G., Seddon, P.B., and Willcocks, L.P. (eds.) Second-Wave Enterprise Resource Planning Systems. Cambridge University Press, Cambridge, 2003.

Shannon, C.E., and Weaver, W. Mathematical Theory Of Communication University of Illinois Press, Urbana, IL, 1963.

Shrout, P.E., and Bolger, N. "Mediation in Experimental and Nonexperimental Studies: New Procedures and Recommendations," Psychological Methods (7:4) 2002, pp 422–445.

Skok, W., Kophamel, A., and Richardson, I. "Diagnosing information systems success: importance-performance maps in the health club industry," Information & Management (38:7) 2001, pp 409-419.

Smith, A.G. "Criteria for Evaluation of Internet Information Resources," 1996.

Sobel, M.E. "Asymptotic confidence intervals for indirect effects in structural equation models," American Sociological Association, Washington DC, 1982.

Srinivasan, A. "Alternative Measures of System Effectiveness: Associations and Implications," MIS Quarterly, 1985, pp 243-253.

Staples, D.S., Wong, I., and Seddon, P.B. "Having expectations of information systems benefits that match received benefits: does it really matter?," Information & Management (40) 2002, pp 115-131.

Straub, D., Limayem, M., and Karahanna-Evaristo, E. "Measuring System Usage: Implications for IS Theory Testing," Management Science (41:8) 1995, pp 1328-1342.

Straub, D.W., Boudreau, M.-C., and Gefen, D. "Validation guidelines for IS positivist research," Communications of the Association for Information Systems (13) 2004, pp 380-427.

Strong, D., and Volkoff, O. "A roadmap for enterprise system implementation," IEEE Computer Magazine (37:6) 2004, pp 22-28.

Strong, D.M., Johnson, S.A., and Mistry, J.J. "Integrating Enterprise Decision-Making Modules into Undergraduate Management and Industrial Engineering Curricula," Journal of Information Systems Education (15:3), Fall 2004, p 301.

Sun, H., and Zhang, P. "A Research Agenda towards a Better Conceptualization of IT Use," Eleventh Americas Conference on Information Systems, Association of Information Systems, Omaha, NE, 2005.

Sundaram, S., Schwarz, A., Jones, E., and Chin, W.W. "Technology use on the front line: how information technology enhances individual performance," Journal of the Academy of Marketing Science (35:1) 2007, pp 101-112.

Sutanto, J., Phang, C.W., Kankanhalli, A., Tan, B. "Toward a Process Model of Media Usage in Global Virtual Teams," European Conference on Information Systems, 2004.

Szajna, B. "Determining information systems usage: Some issues and examples," Information & Management (25:3) 1993, pp 147-154.

Tallon, P.P., Kraemer, K.L., and Gurbaxani, V. "Executives' Perceptions Of The Business Value Of Information Technology: A Process-Oriented Approach," Journal of Management Information Systems (16:4), Spring 2000, pp 145-173.

Tang, X.L., Hornyak, R., and Rai, A. "Patterns of Information Usage in Inter-firm Processes," Americas Conference on Information Systems, 2006.

Tashakkori, A., and Teddlie, C. Handbook of mixed methods in social & behavioral research Sage Publications, Thousand Oaks, California, 2003, pp. 671-703.

Taylor, S., and Todd, P.A. "Assessing IT Usage: The Role of Prior Experience," MIS Quarterly (19:4) 1995, pp 561-570.

Taylor, S.J., and Bogdan, R. Introduction to qualitative research methods, (3rd ed.) John Wiley, New York, 1998.

Tchokogue, A., Bareil, C., and Duguay, C.R. "Key lessons from the implementation of an ERP at Pratt & Whitney Canada," International Journal of Production Economics (95:2) 2005, pp 151-163.

Tenenhaus, M., Vinzi, V.E., Chatelin, Y.-M., and Lauro, C. "PLS path modeling," Computational Statistics & Data Analysis (48:1), January 2005, pp 159-205.

Thompson, R.L., Higgins, C.A., and Howell, J.M. "Influence of experience on personal computer utilization: Testing a conceptual model," Journal of Management Information Systems (11:1) 1994, p 167.

Tornatzky, L.G., and Fleischer, M. The Processes of Technological Innovation Lexington Books, Lexington, Massachusetts, 1990.

Trochim, W.M. "The Research Methods Knowledge Base," 2006.

Trochim, W.M.K. "The Nomological Network," 2002.

Tsai, C.H., and Chen, H.Y. "Assessing Knowledge Management System Success: An Empirical Study in Taiwan's High-Tech Industry," Journal of American Academy of Business (10:2) 2007, pp 257-262.

Tsohou, A., Kokolakis, S., Karyda, M., and Kiountouzis, E. "Process-variance models in information security awareness research," Information Management & Computer Security (16:3) 2008, pp 271-287.

Tu, Q. "Measuring Organizational Level IS Usage and Its Impact on Manufacturing Performance," Americas Conference on Information Systems, 2001.

Umble, E.J., Haft, R.R., and Umble, M.M. "Enterprise resource planning: Implementation procedures and critical success factors," European Journal of Operational Research (146:2), April 2003, pp 241-257.

Vakkari, P. "Task-based information searching," Annual Review of Information Science and Technology (37) 2003, pp 413-464.

van der Aalst, W.M.P. "Workflow mining: a survey of issues and approaches," Data & Knowledge Engineering (47) 2003, pp 237-267.

van der Heijden, H. "Measuring IT Core Capabilities for Electronic Commerce: Results from a confirmatory factor analysis," International Conference on Information Systems, Brisbane, Australia, 2000.

Venkatesh, V., Morris, M.G., Davis, G.B., and Davis, F.D. "User Acceptance of Information Technology: Toward a Unified View," MIS Quarterly (27:3), September 2003, pp 425-478.

Von Hellens, L.A. "Information Systems Quality versus Software Quality: A Discussion from a Managerial, an Organizational and an Engineering Viewpoint," Information and Software Technology (39) 1997, pp 801-808.

Wang, Y.S., Wang, H.Y., and Shee, D.Y. "Measuring e-learning systems success in an organizational context: Scale development and validation," Computers in Human Behavior (23) 2007, pp 1792-1808.

Whetten, D.A. "What Constitutes a Theoretical Contribution," Academy of Management Review (14:4) 1989, pp 490-495.

Whitman, M.E., and Woszczynski, A.B. The handbook of information systems research Idea Group Publishing, Hershey PA, 2004.

Willis, T.H., and Willis-Brown, A.H. "Extending the value of ERP," Industrial Management and Data Systems (102:1) 2002, pp 35-38.

Wixom, B.H., and Todd, P.A. "A Theoretical Integration of User Satisfaction and Technology Acceptance," Information Systems Research (16:1), March 2005 2005, pp 85-102.

Wold, H. "Partial Least Squares," in: Encyclopedia of Statistical Sciences, S.K.a.N.L. Johnson (ed.), Wiley, New York, 1985, pp. 581-591.

Worthen, B. "Nestle's ERP Odyssey," 2000.

Wu, J.-H., and Wang, Y.-M. "Measuring ERP success: The key-users' viewpoint of the ERP to produce a viable IS in the organization," Computers in Human Behavior (23:3), May 2007, pp 1582-1596.

Wu, J.-H., and Wang, Y.M. "Measuring ERP success: the ultimate user' view," International Journal of Operations and Production Management (26:8) 2006a, pp 882-903.

Wu, J.H., and Wang, Y.M. "Measuring KMS success: A respecification of the DeLone and McLean's Model," Information & Management (43) 2006b, pp 728-739.

Xia, W.D., King, W. "Interdependency of the Determinants of User Interaction and Usage: An Empirical Test," International Conference on Information Systems, 1996.

Yao, Y., and Murphy, L. "Remote electronic voting systems: an exploration of voters' perceptions and intention to use," European Journal of Information Systems (16:2) 2007, p 106.

Yin, R.K. Case Study Research: Design and Methods (Second ed.) Sage Publications., Thousand Oaks, California, 1994.

Yin, R.K. Case Study Research: Design and Methods, 3rd ed. Sage Publications, Thousand Oaks, California, 2003.

Yusuf, Y., Gunasekaran, A., and Abthorpe, M.S. "Enterprise information systems project implementation: A case study of ERP in Rolls-Royce.," International Journal of Production Economics (87) 2004, pp 251-266.

Zain, M., Rose, R.C., Abdullah, I., and Masrom, M. "The relationship between information technology acceptance and organizational agility in Malaysia," Information & Management (42:6) 2005, pp 829-839.

Zaleznik, A. "Managers and leaders: Are they different?," Harvard Business Review (82:1) 2004, pp 74-81.

Zigurs, I. Methodological and Measurement Issues in Group Support Systems Research Macmillan Publishing Company, New York, 1993.

Zmud, R.W. "Empirical Investigation of the Dimensionality of the Concept of Information," Decision Sciences (9) 1978, pp 187-195.