Performance Management Practices in Institutions of Higher Education: An Instrument Development


I. Introduction

According to Sekhar (2007), no broader system for managing people has received as much importance and attention as the performance management system in organizations. Baron and Armstrong (2002) assert that performance management is getting better results from the organization, teams, and individuals within it by understanding and managing performance within an agreed framework of planned goals, standards and competence requirements. Houldsworth and Jirasinghe (2005) argue that performance management is the process through which managers ensure that employee activities and outputs are congruent with the organization's goals. Halachmi (2005) argues that performance management can take many forms, from dealing with issues internal to the organization to catering to stakeholders or handling issues in its environment, while paying due attention to the human (behavioural) side of the enterprise. To better understand, explain and implement PM requires practices that involve: establishing results-oriented relationships by developing appropriate PM processes and structures; identifying and using available resources that are paramount to the regular setting of targets; and ensuring information flow in a changing work environment (Kagaari, 2011).

According to de Waal (2007), performance management, and especially the fostering of performance-driven behaviour, cannot be implemented lightly and should not be underestimated. It takes continuous attention, dedication and, in particular, stamina from management to keep focusing on performance management in order to keep it "alive" in the organisation (de Waal, 2007). For instance, de Waal's (2007) study of performance management systems in institutions of higher education found a low score on action orientation, caused by the management being composed mainly of academics who, in contrast to practitioners, tend to think things through (too long) before acting. Kagaari (2011) also found that even when employees in institutions of higher education in Uganda are involved in strategic planning, a core activity of performance management, the implementation process becomes difficult because of poor incentive structures. Armstrong (1992) argued that studies on performance management mostly concentrate on macro factors, and examination of individual perceptions of performance management practices is still scanty. de Waal (2007), citing Abdel Aziz et al. (2005), further argued that scientific and professional literature specifically on implementing performance management in developing countries is scarce. In Africa, studies on PM are limited, particularly for institutions of higher education in Uganda.

This study focuses particularly on performance management (PM) practices in higher institutions of learning. Unfortunately, there is no existing reliable and valid instrument for measuring these PM practices. The purpose of this study is to develop and validate an instrument that will reliably assist in tapping information from employees for the purpose of testing a conceptual model of performance management practices in Institutions of Higher Education in Uganda. This will in turn minimise the introduction and copying of tools and systems from the Western world, which are not always best suited to local circumstances (de Waal, 2007).

II. Literature Review

Kagaari's (2011) study, based on the regular activities employees in public universities are engaged in, identifies five constructs of performance management practices: agency relations, locus of decision making, relevant resources, dynamic capability and goal setting. The purpose of the present study was to develop and evaluate an instrument for the empirical gauging of performance management practices in Institutions of Higher Education in Uganda. Such an understanding is best achieved by meeting the following objectives (Straub et al., 2004): 1. identifying the initial items that may help explain performance management practices and determining them by employing an exploratory survey approach; 2. confirming their representativeness of a particular construct domain; and 3. finally testing the instrument in order to confirm the reliability of items and construct validity. Accordingly, exploratory factor analysis (EFA) as a modelling approach is normally used for studying hypothetical constructs by means of a variety of observable proxies or indicators that can be directly measured (Raykov & Marcoulides, 2006), while bearing in mind that it is not a hypothesis-testing procedure (Hanley, Meigs, Williams, Haffner, & D'Agostino, 2005).

Raykov and Marcoulides (2006) argue that the major concern of exploratory factor analysis is to determine how many factors, or latent constructs, are needed to explain well the relationships among a given set of observed measures. Confirmatory factor analysis (CFA) then quantifies, tests and confirms the details of a pre-existing factor structure. CFA requires that the complete details of the proposed model be specified before it is fitted to the data. According to Brown (2006), confirmatory factor analysis is appropriate for construct validation and test construction.

CFA is also frequently used as a first step in assessing the proposed measurement model in a structural equation model (MacCallum & Austin, 2000). Many of the rules of interpretation regarding assessment of model fit and model modification in structural equation modelling apply equally to CFA. CFA is distinguished from structural equation modelling by the fact that in CFA there are no directed arrows between latent factors. In other words, while in CFA factors are not presumed to directly cause one another, SEM often does specify particular factors and variables as causing one another. In the context of SEM, the CFA is often called 'the measurement model', while the relations between the latent variables (with directed arrows) are called 'the structural model'. Structural equation modelling is a multivariate technique that, because it requires clear definitions, has a number of advantages: explicit assumptions, precision of the model, and complete representation of complex theories (Bagozzi, 1980, cited in Fisher, Elrod, & Mehta, 2011).

According to Tomarken and Waller (2003), the primary purpose of structural equation modelling (SEM), as a broad analytic framework, is to assess whether a specific model fits well or which of several alternative models fits best. Accordingly, the development, assessment and selection of statistical tests of fit and fit indices are critical in the SEM domain (Tomarken & Waller, 2003). Marsh and Grayson, cited in Schermelleh-Engel, Moosbrugger and Müller (2003), noted that there are no established guidelines for what minimal conditions constitute an adequate fit; rather, one should establish that the model is identified, that the iterative estimation procedure converges, that all parameter estimates are of reasonable size, and that the pattern of standardized residuals does not indicate signs of ill fit.

III. Methodology

According to Straub, Boudreau, and Gefen (2004), validating an instrument is a critical step before testing a conceptual model; it is rigorous and requires patience (Straub et al., 2004). The development of an instrument intended to measure performance management practices in institutions of higher learning in Uganda began from scratch, following a number of stages that involved selection and creation of items, an exploratory survey, content validation, a pilot test and a confirmatory study (Dwivedi, Choudrie, & Brinkman, 2006). From a review of the literature on agency, upper echelon, resource-based view, dynamic capability and goal setting (Locke & Latham, 1990, 2003, 2005) theories, metaphors such as agency relations, relevant resources, locus of decision making, dynamic capability and goal setting were derived and a pool of items generated.

This was part of the exploratory survey that led to the initial generation and selection of items, testing of their reliability and content validation. The pilot tests revealed areas to be improved, such as wording, keeping the questionnaire from becoming too long, and the logical sequencing of the questions. Twenty-five subject experts, who mainly comprised postgraduate students, were used to ensure item clarity and readability of the questionnaire. These steps of face and content validation confirmed the extent to which the items reflected the constructs. Face validity, the extent to which the content of the items is consistent with the construct definition, was based solely on the researcher's judgement (Din, Zakaria, Mastor, Razak, Embi & Ariffin, 2009). Content validity is the extent to which the items comprehensively represent the identified construct (Joo & Lee, 2010) (see Table 1).

Thereafter, a self-administered structured questionnaire was distributed to 900 respondents; 477 questionnaires were returned and only 447 were usable. The original questionnaire comprised 67 items measuring five exogenous dimensions. A four-point Likert scale was used, where 1 = strongly disagree and 4 = strongly agree. This scale was adopted after realising that most respondents would mainly score the neutral anchor of any odd-numbered scale (Munene, 2005, personal communication).

IV. Data Cleaning, Editing and Reliability

To confirm the instrument, the Statistical Package for the Social Sciences (SPSS) version 17 was used to obtain descriptive statistics and to calculate both the exploratory factor analysis and instrument reliability analysis results (Ntayi, 2011). The missing data were checked and confirmed to be missing completely at random (MCAR); a non-significant test (p > .05) means the groups do not differ significantly from each other, so the missing values are random. Establishing MCAR is a precursor to confirmatory factor analysis and structural equation modelling. Missing values were then filled using maximum likelihood (ML), one of the most widely preferred methods for handling missing data in SEM and other data-analytic contexts (Allison, 2003; Schafer & Graham, 2002); direct ML assumes multivariate normality but provides goodness-of-fit evaluation and, in some cases, significance tests and confidence intervals of parameter estimates. The descriptive statistics, including the mean, standard deviation, skewness, and kurtosis, were examined (see Table 2). An item whose skewness or kurtosis has an absolute value exceeding 1.0 is considered unsuitable for measurement instruments (EOM, 1996, cited in Joo & Lee, 2010); the values obtained here were acceptable. The adequacy of the sample was determined using the Kaiser-Meyer-Olkin measure of sampling adequacy (0.87) and Bartlett's test of sphericity (χ² = 1977.09, df = 91, p = .00). The results indicated that the preconditions of normality and homoscedasticity were satisfied. The sample size was greater than 300, and the Cronbach's alpha values obtained for all the constructs exceeded the acceptable value of .70 (Nunnally, 1998; Field, 2005; Garson, 2005), as shown in Table 2.
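These reliability and sampling-adequacy statistics follow standard closed-form definitions, so they can be reproduced outside SPSS. The following is a minimal Python sketch, assuming the questionnaire items are columns of a pandas DataFrame named `items` (a hypothetical name); it illustrates the computations rather than reproducing the SPSS output.

```python
import numpy as np
import pandas as pd
from scipy import stats

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def bartlett_sphericity(items: pd.DataFrame):
    """Bartlett's test: chi2 = -(n - 1 - (2p + 5)/6) * ln|R|, df = p(p-1)/2."""
    n, p = items.shape
    R = items.corr().to_numpy()
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return chi2, df, stats.chi2.sf(chi2, df)

def kmo(items: pd.DataFrame) -> float:
    """Kaiser-Meyer-Olkin measure: r^2 relative to r^2 plus squared partials."""
    R = items.corr().to_numpy()
    inv_R = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(inv_R), np.diag(inv_R)))
    partial = -inv_R / d                       # anti-image (partial) correlations
    off = ~np.eye(R.shape[0], dtype=bool)      # off-diagonal mask
    r2, p2 = (R[off] ** 2).sum(), (partial[off] ** 2).sum()
    return r2 / (r2 + p2)
```

In use, `cronbach_alpha` would be computed per construct against the .70 benchmark, and `kmo(items)` and `bartlett_sphericity(items)` checked against the values reported above.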

In order to examine whether the items were unidimensional, inter-item and corrected item-to-total correlations were analysed. In particular, all items with item-to-total correlations within the range of .30 to .40, considered the minimum level for interpretation of the structure, were kept (Din, Zakaria, Mastor, Razak, Embi, & Ariffin, 2009). According to Burton and Mazerolle (2011), inter-item correlations for items intended to measure the same construct should be moderate and not too high (i.e. .30 to .60). Unidimensionality of a survey item means that a single item helps the researcher understand or assess only one latent construct, not multiple constructs being measured by the survey (Burton & Mazerolle, 2011).
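A corrected item-to-total correlation excludes the item itself from the scale total before correlating, so the item does not inflate its own statistic. A sketch under the same hypothetical `items` DataFrame, flagging the .30 lower bound cited above:

```python
import pandas as pd

def corrected_item_total(scale: pd.DataFrame) -> pd.DataFrame:
    """For each item, correlate it with the sum of the remaining items."""
    total = scale.sum(axis=1)
    corr = {col: scale[col].corr(total - scale[col]) for col in scale.columns}
    out = pd.DataFrame({"corrected_r": pd.Series(corr)})
    out["keep"] = out["corrected_r"] >= .30   # minimum level cited in the text
    return out
```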

All methods indicated that exploratory factor analysis was appropriate, and it was conducted to examine the relationships among the items and to identify clusters of items that share sufficient variation to justify their existence as a factor or construct to be measured by the instrument (Burton & Mazerolle, 2011). EFA helps in reducing the number of items in a proposed survey so that the remaining items can best explain the constructs under investigation, and researchers use it to determine the underlying factors that structure the instrument. In this study, all cross-loading items and items with factor loadings less than .50 were eliminated from the instrument (Table 2).
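As a sketch of this retention rule, the open-source factor_analyzer package can fit a five-factor EFA; the varimax rotation and the .40 cross-loading threshold are assumptions, since the paper reports neither:

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer  # third-party EFA package

def retain_items(items: pd.DataFrame, n_factors: int = 5) -> list[str]:
    """Fit an EFA and keep items with exactly one salient loading of at least .50."""
    fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax")  # rotation assumed
    fa.fit(items)
    loadings = pd.DataFrame(fa.loadings_, index=items.columns)
    keep = []
    for item, row in loadings.abs().iterrows():
        salient = (row > .40).sum()             # assumed cross-loading cut-off
        if row.max() >= .50 and salient == 1:   # rule stated in the text
            keep.append(item)
    return keep
```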

V. Description of the Sample

The findings showed that, of the respondents, 62 per cent were male and 38 per cent were female; 64 per cent were aged below 40 years and 36 per cent above 40 years; 66.2 per cent were married, 29.5 per cent single, 2.2 per cent separated, .7 per cent divorced and 1.3 per cent widowed; 45 per cent had a postgraduate degree or above, 5.6 per cent had certificates, 13.4 per cent had diplomas and 35.6 per cent had a first degree. The majority had worked elsewhere before joining university service, whereas 26 per cent had no working experience on joining university employment.

VI. The Measurement Model

Exploratory factor analysis (EFA) results suggested five factors, which seem to measure performance management practices. However, EFA is generally acknowledged to be insufficient for the assessment of dimensionality (Rubio et al., 2001, cited in Vieira, 2011). According to Brown (2006), EFA has a problem of indeterminacy of factor scores, which is resolved by confirmatory factor analysis (CFA) and structural equation modelling (SEM) because their analytic framework eliminates the need to compute factor scores. Unlike EFA, CFA/SEM offer modelling flexibility such that additional variables can be brought into the analysis to serve as correlates, predictors, or outcomes of the latent variables. Often, CFA is used as a precursor to SEM (Brown, 2006). In the measurement model, CFA is used to specify the number of factors, how the various indicators are related to the latent factors, and the relationships among the indicators' errors. CFA was conducted to minimise the difference between the estimated and observed matrices (Din, Zakaria, Mastor, Razak, Embi & Ariffin, 2009). The structural equation model specifies how the various latent variables are related to one another, such as through direct or indirect effects, no relationship, or a spurious relationship (Brown, 2006).

For the dimensions (latent variables) identified in the measurement model, three to seven items were developed for each latent variable. To confirm the measurement items, reliability and validity testing was conducted on the empirical data using confirmatory factor analysis (CFA). According to DeVellis (2003), confirmation of the instrument minimizes the costs and risks that could arise from poor measures.

For the confirmatory factor analysis (CFA) in structural equation modelling (SEM), the AMOS 8.0 software program was used (Schermelleh-Engel, Moosbrugger, & Müller, 2003). The program adopted maximum likelihood estimation to generate estimates in the full-fledged measurement model. According to Hair et al. (2010), there is no single rule for reporting or guaranteeing a correct model, but a researcher should report at least one incremental index and one absolute index, in addition to the χ² value and associated degrees of freedom. The goodness-of-fit statistics that were tested included the chi-square, absolute fit indices and incremental fit indices in Table 3. A non-significant χ² (p > 0.05) is considered a good fit for the χ² GOF measure. However, a significant χ² does not necessarily mean that a model fits poorly, because the results are highly dependent on sample size (Barret, 2006): large sample sizes can lead to almost certain rejection of the null hypothesis even when models are trivially misspecified, while poorly specified models might be accepted if sample sizes are small. According to Tomarken and Waller (2003), the chi-square test of exact fit is primarily a badness-of-fit measure that facilitates dichotomous acceptance or rejection decisions but provides little information about degree of fit. As a result, the ratio of χ² to degrees of freedom (χ²/df) has been proposed as an additional measure of GOF, with a value smaller than 3 recommended for accepting the model as a good fit (Chin et al., 1995), although the ratio is mathematically similar to χ² and Bollen (1989) dismissed it as unreasonable for assessing fit.
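The AMOS run itself cannot be reproduced here, but an equivalent five-factor CFA can be sketched in Python with the open-source semopy package; the item codes follow Table 2, while the data file name is hypothetical:

```python
import pandas as pd
from semopy import Model, calc_stats  # open-source SEM package for Python

# Five correlated factors built from the items retained in Table 2.
DESC = """
Agency     =~ Agency4 + Agency5 + Agency7
Echelon    =~ Echelon7 + Echelon8 + Echelon13
Resources  =~ Rbv13 + Rbv14 + Rbv15
Capability =~ Dynmc6 + Dynmc7 + Dynmc8
Goal       =~ Goal4 + Goal5 + Goal11
"""

data = pd.read_csv("pm_items.csv")   # hypothetical file of item responses
model = Model(DESC)
model.fit(data)                      # maximum likelihood estimation
fit = calc_stats(model)              # chi2, df, GFI, AGFI, CFI, TLI, RMSEA, ...
chi2 = fit["chi2"].iloc[0]
df = fit["DoF"].iloc[0]
print("chi2/df =", chi2 / df)        # the < 3 rule of thumb cited above
```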

The GFI was developed to overcome the limitations of the sample-size-dependent χ² measure of GOF (Joreskog & Sorbom, 1993); a GFI value higher than 0.90 is recommended as a guideline for a good fit. An extension of the GFI is the AGFI, adjusted by the ratio of degrees of freedom for the proposed model to the degrees of freedom for the null model; an AGFI value greater than 0.90 is an indicator of good fit (Segars & Grover, 1993). The RMSEA measures the mean discrepancy between the population estimates from the model and the observed sample values, with RMSEA < 0.1 indicating good model fit (Browne & Cudeck, 1993; Hair, Anderson, Tatham, & Black, 1998). Reporting the χ² value and degrees of freedom, the CFI or TLI, and the RMSEA will usually provide enough unique information to evaluate a model (Hair et al., 2010); however, the problem of sample-size dependency cannot be eliminated by this procedure (Ruiz, 2000, cited in Schermelleh-Engel, Moosbrugger, & Müller, 2003). The incremental fit indices measure the improvement in fit by comparing the proposed model with a model that assumes no association among the observed variables, usually called the independence model. The normed fit index (NFI), the Tucker-Lewis index (TLI) and the comparative fit index (CFI) were tested; the values of these indices should be close to 1 to indicate a good fit (Hair et al., 1998).
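The RMSEA can also be recovered directly from the reported χ², degrees of freedom and sample size. A quick check, assuming the usual point-estimate formula RMSEA = √(max(χ² − df, 0) / (df(N − 1))):

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """Point estimate of the root mean square error of approximation."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Final-model values reported in Table 3, with N = 447:
print(round(rmsea(133.886, 80, 447), 3))  # 0.039, matching the reported value
```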

VII. Reliability and Validity

In this study, the reliability tests included internal consistency reliability measures, item reliability measures and construct reliability measures. The Cronbach coefficient values for the final model are indicated in Table 2 and range from .68 to .86, with goal setting having the lowest value of .68. After CFA, the overall internal consistency reliability coefficient (Cronbach's alpha) obtained was .86. Hair et al. (2010) argue that as SEM matures, previous guidelines such as "sample sizes of 300 are required" are no longer appropriate; rather, sample size decisions should be based on a set of factors, for instance a minimum sample size of 300 for models with seven or fewer constructs, communalities below .45 and/or multiple underidentified (fewer than three items) constructs. The communality measures the per cent of variance in a given variable explained by all the factors jointly and may be interpreted as the reliability of the indicator (Garson, 2008), as shown in Table 4. An item's communality, or item reliability, is the square of its standardized factor loading, which represents how much variation in the item is explained by the latent factor. An item reliability of .50 is the minimum acceptable value, although lower values may be accepted with large sample sizes. The standardised factor loadings, which ranged from .53 to .87 as shown in Table 2, indicate acceptable convergent validity. The construct reliability values indicated in Table 4 range from .68 to .87. Construct reliability (CR) above the .70 threshold and an average variance extracted (AVE) above the .50 threshold are recommended by Hair et al. (1998), which this study largely achieved, as indicated in Table 4.
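The CR and AVE figures in Table 4 follow directly from the standardised loadings. A sketch using the conventional formulas, CR = (Σλ)² / [(Σλ)² + Σ(1 − λ²)] and AVE = Σλ²/n, illustrated with the dynamic capability loadings:

```python
import numpy as np

def construct_reliability(lam: np.ndarray) -> float:
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    errors = 1.0 - lam ** 2
    return lam.sum() ** 2 / (lam.sum() ** 2 + errors.sum())

def ave(lam: np.ndarray) -> float:
    """Average variance extracted: mean of the squared standardised loadings."""
    return float((lam ** 2).mean())

dynamic = np.array([.87, .83, .79])              # dymc6-dymc8 loadings, Table 4
print(round(construct_reliability(dynamic), 2))  # 0.87, as reported
print(round(ave(dynamic), 2))                    # 0.69, as reported
```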

For satisfactory discriminant validity, the square root of the average variance extracted (AVE) for each construct should be greater than the correlations between that construct and the other constructs (Sridharan, Deng, Kirk, & Corbitt, 2010). Table 5 shows the obtained discriminant validity values between each pair of constructs; all square-root-of-AVE values indicated are greater than the correlations between the constructs. For example, dynamic capability showed the highest discriminant validity among the constructs: the square root of AVE for dynamic capability was .83, while its correlations with the other constructs ranged from .40 to .63. Following Cohen, Cohen, West and Aiken's (2003) criteria, a correlation of r > .10 was considered weak, r > .30 moderate, and r > .50 strong.
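This Fornell-Larcker comparison can be checked mechanically: a construct passes when the square root of its AVE exceeds its absolute correlation with every other construct. A sketch using the AVE values from Table 4 and the correlations from Table 5:

```python
import numpy as np
import pandas as pd

def fornell_larcker(ave: pd.Series, corr: pd.DataFrame) -> pd.Series:
    """True where sqrt(AVE) exceeds the construct's correlations with all others."""
    sqrt_ave = np.sqrt(ave)
    return pd.Series(
        {c: bool((sqrt_ave[c] > corr[c].drop(c).abs()).all()) for c in corr.columns}
    )

names = ["Agency", "Echelon", "Resources", "Capability", "Goal"]
ave_values = pd.Series([.42, .55, .51, .69, .43], index=names)   # Table 4
corr = pd.DataFrame(                                              # Table 5
    [[1.00, .64, .41, .63, .37],
     [ .64, 1.00, .35, .52, .32],
     [ .41, .35, 1.00, .53, .42],
     [ .63, .52, .53, 1.00, .40],
     [ .37, .32, .42, .40, 1.00]],
    index=names, columns=names)
print(fornell_larcker(ave_values, corr))  # all True, consistent with the text
```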

VIII. Results

The measurement model of 52 deductively generated items (Hinkin, 1998, cited in Yeo & Frederiks, 2011) loading on five exogenous variables yielded unsatisfactory fit indices (e.g. NFI = .77, GFI = .74, TLI = .85, CFI = .85). Based on the guidelines for these values, problematic items that caused the unacceptable model fit were excluded. Remodelling to assess which specific model fits the data well (Tomarken & Waller, 2003) yielded a more parsimonious model of 15 items, shown in Figure 1 (RMSEA = .039, with a ninety per cent confidence interval of LO90 = .027 to HI90 = .050; GFI = .961; NFI = .944; TLI = .969; CFI = .977; RMR = .001; AGFI = .942; PNFI = .719; χ² = 133.886, df = 80, p = .000; χ²/df = 1.674; see Table 3). Schermelleh-Engel, Moosbrugger and Müller (2003) argued that the number of indicator variables should be considered when choosing a sufficiently large sample size. Hau, Balla, and Grayson (1998), Marsh and Hau (1999), and Boomsma and Hoogland (2001), cited in Schermelleh-Engel, Moosbrugger and Müller (2003), argued that for confirmatory factor analyses with 6 to 12 indicator variables per latent factor, a sample size of N = 100 is necessary; with two indicators per factor, one should have a sample size of at least N ≥ 400. In other words, more indicators may compensate for a small sample size, and a large sample size may compensate for few indicators. In this study, the sample size of 447 was sufficiently large to meet this requirement.

In CFA there are no "outcome variables"; the fitted model could only be assessed using the discrepancy between the model-implied covariances and the observed covariances (Barret, 2006). In view of that assertion, SEM deals with the relationships between latent variables only, with the advantage that these variables are free of random error (Stoelting, 2009); errors were estimated and removed, leaving only the common variance. Byrne (2010) argued that the fit statistics resulting from the model will be equivalent whether it is parameterised as a first-order or a second-order structure based on theory.

IX. Discussion

The purpose of this study was to develop and validate an instrument for measuring and assessing perceived performance management practices by exploring the psychometric properties, generalisability, and applicability of this instrument in Institutions of Higher Education in Uganda. The well-fitting model obtained is one plausible representation of the underlying structure among the many possible with the study data. The goal setting variable in the fitted model had a low Cronbach value but was retained because of the exact model fit indices. To validate the instrument, the study examined internal reliability, item reliability and construct validity to identify whether the instrument is properly designed to measure what it intends to assess. An overall internal consistency reliability coefficient (Cronbach's alpha) of .95 was obtained from an analysis of the data using SPSS v19.0; after CFA, the overall internal consistency reliability coefficient was .83. All these values are above the generally agreed lower limit for Cronbach's alpha of .70. The goodness-of-fit measures, namely the goodness-of-fit index (GFI), comparative fit index (CFI), normed fit index (NFI) and Tucker-Lewis index (TLI), were all above the recommended cut-off value of .95 (Hu & Bentler, 1999). According to Browne and Cudeck (1993), a value of .08 or less for the RMSEA indicates an acceptable and reasonable error of approximation; the final revised model RMSEA was .039. In this study, SEM estimated the degree to which the hypothesised model fits the data for the second-order model, with results still indicating a reliable and valid instrument (Figure 2).

X. Conclusion

This study makes an important contribution to the field of performance management in particular, and a scientific contribution in general, through the rigour exhibited in the process of instrument creation and validation. The process involved literature search, extraction, operationalisation and testing of the authenticity of constructs, and linking these constructs to measurement. This is a sound attempt at contextualising the nature and dimensionality of performance management practices as a construct. In practice, the established measures of performance management practices should act as guidelines for managers of Institutions of Higher Education in Uganda in managing employee performance.

However, this study had its limitations. The model used directional influences, which require a finite amount of time to operate, yet this was a cross-sectional study, rendering the interpretation of such effects problematic. The model still needs to be subjected to a CFA test with new data, and a replication of this study with a wider literature search to establish better indicators of the constructs is recommended.

Figure 1: Measurement model for performance management practices
Figure 2: Second-order measurement model
Table 1: Exploratory factor analysis — rotated factor loadings, eigenvalues and variance explained

| Factor | Item | Loading |
|---|---|---|
| 1. Dynamic capability | In this institution there is documentation of new knowledge in decision making | .84 |
| | In this institution there is sharing of new knowledge in problem-solving situations | .82 |
| | In this institution there is sharing of new knowledge in decision making | .80 |
| 2. Locus of decision making | In this institution incentives are administered by objective criteria | .87 |
| | In this institution rewards are administered by objective criteria | .83 |
| | In this institution top management team members share the vision with employees | .61 |
| 3. Relevant resources | In this institution relevant resources act as triggers for innovation | .80 |
| | In this institution resources act as triggers for collaborative problem solving | .79 |
| | A number of relevant resources are integrated to increase our effectiveness | .77 |
| 4. Goal setting | In this institution employees set themselves challenging but achievable goals | .81 |
| | In this institution employees are committed to their goals | .82 |
| | In this institution employees are encouraged to set their own task goals | .62 |
| 5. Agency relations | Policies and procedures of the institution are clearly defined | .78 |
| | The review of decisions taken by the university top leaders is done formally | .76 |
| | The reviews of decisions taken by the university top leaders are comprehensive | .66 |

| | Factor 1 | Factor 2 | Factor 3 | Factor 4 | Factor 5 |
|---|---|---|---|---|---|
| Eigenvalue | 2.31 | 2.14 | 2.09 | 1.89 | 1.89 |
| % of variance | 15.43 | 14.28 | 13.91 | 12.57 | 12.50 |
| Cumulative % | 15.43 | 29.71 | 43.62 | 56.18 | 68.68 |
Table 2: Item descriptive statistics, item-total correlations, EFA loadings and Cronbach's alpha

Agency Relations (problem solving). Literature: Jensen and Meckling (1976); Martinez and Kennerley (2005); Sperber (1996); Morris, Menon and Ames (2001); Hendry (2005); Daily, Dalton and Cannella (2003); Hermalin and Weisbach (2003).

| Item | Code | Mean | SD | Skewness | Kurtosis | Item-total corr. | EFA loading | Cronbach's α |
|---|---|---|---|---|---|---|---|---|
| 1. Policies and procedures of the institution are clearly defined | Agency4 | 2.67 | .90 | -.26 | -.67 | .47 | .78 | .71 |
| 2. The review of decisions taken by the university top leaders is done formally | Agency5 | 2.72 | .81 | -.51 | -.03 | .49 | .76 | |
| 3. The reviews of decisions taken by the university top leaders are comprehensive | Agency7 | 2.18 | .74 | .22 | -.20 | .53 | .66 | |

Locus of Decision Making. Literature: Hambrick and Mason (1984, 1992); Carlzon (1989); Brode (1994); Katzenback and Smith (1993).

| Item | Code | Mean | SD | Skewness | Kurtosis | Item-total corr. | EFA loading | Cronbach's α |
|---|---|---|---|---|---|---|---|---|
| 1. In this institution rewards are administered by objective criteria | Echelon7 | 2.06 | .82 | .38 | -.43 | .52 | .83 | .76 |
| 2. In this institution incentives are administered by objective criteria | Echelon8 | 2.06 | .79 | .48 | -.09 | .52 | .87 | |
| 3. In this institution TMT members share the vision with employees | Echelon13 | 2.13 | .90 | .32 | -.73 | .54 | .61 | |

Relevant Resources (resource utilisation). Literature: Penrose (1959); Isobe, Makino and Montomery (2003); Donaldson and Lorsch (1983); Dutton and Duncan (1987); Gordon and Cummins (1979); Amit and Schoemaker (1993); Barney (1991, 2001, 2002); Wernefelt (1984); Collis and Montgomery (1995); Rousse and Dallenbach (2002).

| Item | Code | Mean | SD | Skewness | Kurtosis | Item-total corr. | EFA loading | Cronbach's α |
|---|---|---|---|---|---|---|---|---|
| 1. A number of relevant resources are integrated to increase our effectiveness | Rbv13 | 2.65 | .67 | -.52 | .23 | .43 | .77 | .76 |
| 2. In this institution relevant resources act as triggers for innovation | Rbv14 | 2.72 | .68 | -.79 | .72 | .46 | .80 | |
| 3. In this institution resources act as triggers for collaborative problem solving | Rbv15 | 2.51 | .76 | -.33 | -.33 | .46 | .79 | |

Dynamic Capability (information sharing and flow). Literature: Shore, Porter and Zahra (2004); Coyle-Shapiro, Shore, Taylor and Tetrick (2004); Choo and Johnson (2004).

| Item | Code | Mean | SD | Skewness | Kurtosis | Item-total corr. | EFA loading | Cronbach's α |
|---|---|---|---|---|---|---|---|---|
| 1. In this institution there is sharing of new knowledge in decision making | Dynmc6 | 2.49 | .74 | -.31 | -.31 | .68 | .80 | .86 |
| 2. In this institution there is documentation of new knowledge in decision making | Dynmc7 | 2.39 | .75 | -.12 | -.43 | .61 | .84 | |
| 3. In this institution there is sharing of new knowledge in problem-solving situations | Dynmc8 | 2.43 | .76 | -.06 | -.37 | .61 | .82 | |

Goal Setting (planning). Literature: Locke (1978, 2001); Locke and Latham (1990); Vandewalle (1997); Latham (2001); Latham and Lee (1986); Ryan (1970); Veccho and Appelbaum (1995).

| Item | Code | Mean | SD | Skewness | Kurtosis | Item-total corr. | EFA loading | Cronbach's α |
|---|---|---|---|---|---|---|---|---|
| 1. In this institution employees set themselves challenging but achievable goals | Goal4 | 2.55 | .70 | -.29 | -.14 | .35 | .81 | .68 |
| 2. In this institution employees are committed to their goals | Goal5 | 2.67 | .73 | -.30 | -.07 | .38 | .82 | |
| 3. In this institution employees are encouraged to set their own task goals | Goal11 | 2.34 | .79 | -.20 | -.36 | .40 | .62 | |
Table 3: Goodness-of-fit statistics for the model with five correlated factors

| Index | χ² | df | p | χ²/df | RMR | GFI | AGFI | RMSEA | NFI | TLI | CFI |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Model | 133.886 | 80 | .000 | 1.674 | .001 | .961 | .942 | .039 (LO90 = .027, HI90 = .050) | .944 | .969 | .977 |
| Recommended value | | | ≥ .05 | ≤ 3.0 | | ≥ .90 | ≥ .90 | ≤ .10 | ≥ .90 | ≥ .90 | ≥ .90 |

Table 4: Standardised loadings, item reliabilities, construct reliability (CR) and average variance extracted (AVE)

| Item | Standardised loading (λ) | Item reliability (λ², communality in EFA) | Error variance (ε = 1 − λ²) | Σε | Σλ² | AVE = Σλ²/n | (Σλ)² | CR = (Σλ)² / [(Σλ)² + Σε] |
|---|---|---|---|---|---|---|---|---|
| agen4 | .61 | .37 | .63 | 1.73 | 1.27 | .42 | 3.80 | .70 |
| agen5 | .63 | .40 | .60 | | | | | |
| agen7 | .71 | .50 | .50 | | | | | |
| echlo7 | .79 | .62 | .38 | 1.35 | 1.65 | .55 | 4.88 | .75 |
| echlo8 | .82 | .67 | .33 | | | | | |
| echlo13 | .60 | .36 | .64 | | | | | |
| rebv13 | .68 | .46 | .54 | 1.46 | 1.54 | .51 | 4.62 | .76 |
| rebv14 | .75 | .56 | .44 | | | | | |
| rebv15 | .72 | .52 | .48 | | | | | |
| dymc6 | .87 | .76 | .24 | .93 | 2.07 | .69 | 6.20 | .87 |
| dymc7 | .83 | .69 | .31 | | | | | |
| dymc8 | .79 | .62 | .38 | | | | | |
| gol4 | .70 | .49 | .51 | 1.71 | 1.29 | .43 | 3.76 | .68 |
| gol5 | .75 | .56 | .44 | | | | | |
| gol11 | .49 | .24 | .76 | | | | | |

Table 5: Discriminant validity (square root of AVE on the diagonal; inter-construct correlations off the diagonal)

| | Agency | Echelon | Resources | Capability | Goal setting |
|---|---|---|---|---|---|
| Agency | .65 | | | | |
| Echelon | .64 | .74 | | | |
| Resources | .41 | .35 | .71 | | |
| Capability | .63 | .52 | .53 | .83 | |
| Goal setting | .37 | .32 | .42 | .40 | .66 |

Appendix A

  1. Is performance management applicable in developing countries? The case of a Tanzanian college. A A De Waal . 10.1108/17468800710718903. International Journal of Emerging Markets 2007. 2 (1) p. .
  2. The contemporary performance measurement techniques in Egypt: a contingency approach, Abdel Aziz , A E Dixon , R Ragheb , MA . 2005.
  3. Performance measurement is only one way of managing performance. A Halachmi . International Journal of Productivity and Performance Management 2005. 54 (7) p. .
  4. Re-examining perceived ease of use and usefulness: A confirmatory factor analysis. A H Segars , V Grover . MIS quarterly 1993. 17 (4) p. .
  5. (Mis)use of factor analysis in the study of insulin resistance syndrome. A J G Hanley , J B Meigs , K Williams , S M Haffner , R B D'Agostino , Sr . American Journal of Epidemiology 2005. 161 (12) p. .
  6. Interactive LISREL in Practice: Getting Started with a SIMPLIS Approach, A L Vieira . 2011. London: Springer.
  7. Potential Problems with "Well Fitting. A Tomarken , N Waller . Models. Journal of Abnormal Psychology 2003. 112 (4) p. .
  8. Structural Equation Modelling with AMOS: Basic Concepts, Applications, and Programming, B Byrne . 2010. London: Routledge Taylor and Francis.
  9. Structural equation modelling for evaluating the user perceptions of e-learning effectiveness in higher education. B Sridharan , H Deng , J Kirk , B Corbitt . 18th European Conference on Information Systems, 2010. 2010. ECIS.
  10. A replication to validate and improve a measurement instrument for Deming's 14 Points. C Fisher , C Elrod , R Mehta . International Journal of Quality & Reliability Management 2011. 28 (3) p. .
  11. Assessment of effectiveness of performance appraisal system: Scale development and its usage, C Sekhar . www.mainstayin.com 2007. 1.
  12. Validation guidelines for IS positivist research. D Straub , M-C Boudreau , D Gefen . Communications of the Association for Information Systems 2004. 13 p. .
  13. A Theory of Goal Setting and Task Performance, E A Locke , G P Latham . 1990.
  14. Building a practically useful theory of goal setting and task motivation: a 35-year odyssey. E A Locke , G P Latham . American Psychologist 2003. 57 p. .
  15. Goal setting theory: theory building by induction, E A Locke , G P Latham . 2005.
  16. Why journal editors should encourage the replication of applied econometric research. E Kane . Quarterly Journal of Business Economics 1984. 23 p. .
  17. Cognitive and affective regulation: Validation and nomological network analysis. G Yeo , E Frederiks . 10.11/j.1464-0597.2011.0047.x. Applied Psychology: An International Review 2011. 60 (4) p. .
  18. Goodness of fit indexes in confirmatory factor analysis: The effect of sample size. H Marsh , J Balla , R Mcdonald . Psychological Bulletin 1988. 103 p. .
  19. Applied Multiple Regression/Correlation Analysis for the Behavioural Sciences, J Cohen , P Cohen , S G West , L S Aiken . 2003. Hillsdale, NJ: Lawrence Erlbaum Associates. (3rd edn.)
  20. Multivariate data analysis (Fifth Edition ed.), J F Hair , R E Anderson , R L Tatham , W C Black . 1998. Englewood Cliffs, New Jersey: Printice-Hall Inc.
  21. J F Hair , W C Black , B J Babin , R E Anderson . Multivariate data Analysis, (London
    ) 2010. Pearson Prentice Hall.
  22. Performance management practices and managed performance: the moderating influence of organisational culture and climate in public universities in Uganda. J Kagaari . Measuring Business Excellence 2011. 15 (4) p. .
  23. Missing data: Our view of the state of the art. J L Schafer , J W Graham . Psychological Methods 2002. 7 p. .
  24. Emotional outcomes of Ugandan SME buyer-supplier contractual conflicts. J Ntayi . International Journal of Social Economics 2011. 39 (1/2) p. .
  25. Liseral8: Structural equation modelling with the SIMPLIS command language, K G Joreskog , D Sorbom . 1993. Hillsdale, NJ: Lawrence Erlbaum Associates.
  26. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. L T Hu , P M Bentler . Structural Equation Modeling 1999. 6 p. .
  27. Human Resource Management: Strategy and Action, M Armstrong . 1992. London; Thousand Oaks (CA: Sage.
  28. Analysis of the Animosity Model of Foreign Product Purchase in Egypt 355 Global. M Mostafa . Business Review 2010. 11 (3) p. .
  29. Factor structure of the perceived stress scale (PSS) in a sample for Mexico. M Teresa , G Ramirez , R Hernandez . The Spanish Journal of Psychology 2007. 10 (1) p. .
  30. Alternative ways of assessing model fit. M W Browne , R Cudeck . Testing Structural Equation Models, K A Bollen, & J S Long (ed.) (Newsburry Park, CA
    ) 1993. Sage. p. .
  32. Structural equation modelling: Adjudging model fit. P Barret . Personality and Individual Differences 2006. 42 p. .
  33. Missing data techniques for structural equation modeling. P D Allison . Journal of Abnormal Psychology 2003. 112 p. .
  34. Meaningful Hybrid e-Training Model via POPEYE Orientation. R Din , M S Zakaria , N A Razak , M A Embi , S R Ariffin . International Journal of Education and Information Technologies 2009. 3 (1) .
  35. Structural equation modelling/path analysis, Ricka Stoelting . http://userwww.sfsu.edu/~efc/classes/biol710/path/SEMwebpage.htm 2009.
  36. Principles and Practice of Structural Equation Modeling, R Kline . 1998. New York: Guilford Press.
  37. Application of Structural Equation Modelling in Psychological Research. R Macallum , J Austin . Annu. Rev. Psyhol 2000. 51 p. .
  38. Essentials of Behavioral Research: Methods and Data Analysis, R Rosenthal , R Rosnow . 1984. New York: McGraw-Hill.
  39. Evaluating the fit of structural equation models: Tests of significance and descriptive goodness-of-fit measures. K Schermelleh-Engel , H Moosbrugger , H Müller . http://www.mpr-online.de Methods of Psychological Research Online 2003. 8 (2) p. .
  40. Measuring the usability of academic digital libraries: Instrument development and validation. S Joo , J Y Lee . The Electronic library 2010. 29 (4) p. .
  41. Great Minds in Management: The Process of Theory Development, Smith, K.G. and Hitt, M.A. (ed.) New York, NY; Oxford.
  42. Confirmatory Factor Analysis for Applied Research, T Brown . 2006. London: The Guilford Press.
  43. A First Course in Structural Equation Modelling, T Raykov , G Marcoulides . 2006. London: Lawrence Erlbaum Associates.
  44. On the use, usefulness and ease of structural equation modelling in mis research: A note of caution. W W Chin , P A Todd . MIS quarterly 1995. 19 (2) p. .
  45. Development of a survey instrument to examine consumer adoption of broadband. Industrial Management 7 data Systems, Y K Dwivedi , J Choudrie , W Brinkman . 2006. 106 p. .