# I. Introduction

Author: Department of Psychology, Kyambogo University. e-mail: jmskagaari@gmail.com

According to Sekhar (2007), no broader system for managing people has received as much importance and attention in organizations as the performance management system. Baron and Armstrong (2002) assert that performance management is about getting better results from the organization, teams and individuals within it, by understanding and managing performance within an agreed framework of planned goals, standards and competence requirements. Houldsworth and Jirasinghe (2005) argue that performance management is the process through which managers ensure that employee activities and outputs are congruent with the organization's goals. Halachmi (2005) argues that performance management can take many forms, from dealing with issues internal to the organization, to catering to stakeholders or handling issues in its environment, to paying due attention to the human (behavioural) side of the enterprise. To better understand, explain and implement PM requires practices that involve: establishing results-oriented relationships by developing appropriate PM processes and structures; identifying and using available resources that are paramount to the regular setting of targets; and ensuring information flow in a changing work environment (Kagaari, 2011). According to de Waal (2007), performance management, and especially the fostering of performance-driven behaviour, cannot be implemented lightly and should not be underestimated. It takes continuous attention, dedication and, in particular, stamina from management to keep focusing on performance management in order to keep it "alive" in the organisation (de Waal, 2007). For instance, de Waal's (2007) study of performance management systems in institutions of higher education found a low score on action orientation, caused by management being composed mainly of academics who, in contrast to practitioners, tend to think things through (too long) before acting. Kagaari (2011) also found that even when employees in institutions of higher education in Uganda are involved in strategic planning, a core activity of performance management, the implementation process becomes difficult because of poor incentive structures. Armstrong (1992) argued that studies on performance management mostly concentrate on macro factors, and examination of individual perceptions of performance management practices is still scanty. de Waal (2007), citing Abdel Aziz et al. (2005), further argued that scientific and professional literature specifically on implementing performance management in developing countries is scarce. In Africa, studies on PM are limited, particularly for institutions of higher education in Uganda. This study therefore focuses on performance management (PM) practices in higher institutions of learning. Unfortunately, there is no existing reliable and valid instrument for measuring these PM practices. The purpose of this study is to develop and validate an instrument that will reliably assist in tapping information from employees for purposes of testing a conceptual model of performance management practices in institutions of higher education in Uganda. This will in turn minimise the introduction and copying of tools and systems from the western world, which are not always best suited to local circumstances (de Waal, 2007).
# II. Literature Review

Kagaari's (2011) study, based on the regular activities that employees in public universities are engaged in, identifies five constructs of performance management practices: agency relations, locus of decision making, relevant resources, dynamic capability and goal setting. The purpose of this study was to develop and evaluate an instrument for empirically gauging performance management practices in institutions of higher education in Uganda. Such an understanding is best achieved by meeting the following objectives (Straub et al., 2004):

1. identifying the initial items that may help explain performance management practices, and determining them by employing an exploratory survey approach;
2. confirming the representativeness of the items to a particular construct domain; and
3. finally, testing the instrument in order to confirm the reliability of items and construct validity.

Exploratory factor analysis (EFA) is a modelling approach normally used for studying hypothetical constructs through a variety of observable proxies or indicators that can be directly measured (Raykov & Marcoulides, 2006), while keeping in mind that it is not a hypothesis-testing procedure (Hanley, Meigs, Williams, Haffner, & D'Agostino, 2005). Raykov and Marcoulides (2006) argue that the major concern of exploratory factor analysis is to determine how many factors (latent constructs) are needed to explain well the relationships among a given set of observed measures. Confirmatory factor analysis (CFA) then quantifies, tests and confirms the details of a pre-existing factor structure. CFA requires that the complete details of the proposed model be specified before it is fitted to the data. According to Brown (2006), confirmatory factor analysis is appropriate for construct validation and test construction. CFA is also frequently used as a first step to assess the proposed measurement model in a structural equation model (MacCallum & Austin, 2000). Many of the rules of interpretation regarding assessment of model fit and model modification in structural equation modelling apply equally to CFA. CFA is distinguished from structural equation modelling by the fact that in CFA there are no directed arrows between latent factors. In other words, while in CFA factors are not presumed to directly cause one another, SEM often does specify particular factors and variables that cause one another. In the context of SEM, the CFA is often called 'the measurement model', while the relations between the latent variables (with directed arrows) are called 'the structural model'. Structural equation modelling is a multivariate technique that has a number of advantages: explicit assumptions, precision of the model, and complete representation of complex theories (Bagozzi, 1980, cited in Fisher, Elrod, & Mehta, 2011), because it requires clear definitions. According to Tomarken and Waller (2003), the primary purpose of structural equation modelling (SEM), as a broad analytic framework, is to assess whether a specific model fits well or which of several alternative models fits best. Accordingly, the development, assessment and selection of statistical tests of fit and fit indices is critical in the SEM domain (Tomarken & Waller, 2003).
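To make the exploratory step concrete, the sketch below runs an EFA on a hypothetical response matrix, checks the sampling-adequacy preconditions, and retains factors by the Kaiser criterion. This is a minimal illustration, not the authors' SPSS procedure; the open-source `factor_analyzer` package and the file name `survey.csv` are assumptions.

```python
# Minimal EFA sketch (not the authors' SPSS workflow); assumes the
# open-source factor_analyzer package and a hypothetical survey.csv
# whose columns are the Likert-scale items.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

df = pd.read_csv("survey.csv")  # hypothetical item responses, one column per item

# Preconditions for factoring: Bartlett's sphericity and KMO sampling adequacy
chi2, p = calculate_bartlett_sphericity(df)
kmo_per_item, kmo_total = calculate_kmo(df)
print(f"Bartlett chi2={chi2:.2f}, p={p:.3f}; KMO={kmo_total:.2f}")

# Unrotated pass: eigenvalues suggest how many factors to retain
fa = FactorAnalyzer(n_factors=df.shape[1], rotation=None)
fa.fit(df)
eigenvalues, _ = fa.get_eigenvalues()
n_factors = int((eigenvalues > 1.0).sum())  # Kaiser criterion: eigenvalue > 1
print("Suggested number of factors:", n_factors)

# Rotated pass: loadings below .50, or cross-loading items, would be dropped
fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax")
fa.fit(df)
loadings = pd.DataFrame(fa.loadings_, index=df.columns).round(2)
print(loadings)
```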
Marsh and Grayson, cited in Schermelleh-Engel, Moosbrugger and Müller (2003), noted that there are no established guidelines for what minimal conditions constitute an adequate fit, beyond establishing that the model is identified, that the iterative estimation procedure converges, that all parameter estimates have reasonable sizes, and that the pattern of standardized residuals does not indicate ill fit.

# III. Methodology

According to Straub, Boudreau, and Gefen (2004), validating an instrument is a critical step before testing a conceptual model; it is rigorous and requires patience (Straub et al., 2004). The development of an instrument intended to measure performance management practices in institutions of higher learning in Uganda began from scratch, following a number of stages that involved selection and creation of items, an exploratory survey, content validation, a pilot test and a confirmatory study (Dwivedi, Choudrie, & Brinkman, 2006). The literature on agency, upper echelon, resource-based view, dynamic capability and goal setting (Locke & Latham, 1990, 2003, 2005) theories was reviewed. From it, metaphors such as agency relations, relevant resources, locus of decision making, dynamic capability and goal setting were derived, and a pool of items was generated. This was part of the exploratory survey that led to the initial generation and selection of items, testing of their reliability, and content validation. The pilot tests revealed areas to be improved, such as wording, keeping the questionnaire reasonably short, and the logical sequencing of the questions. Twenty-five subject experts, mainly postgraduate students, were used to ensure item clarity and readability of the questionnaire. These steps of face and content validation confirmed the extent to which the items reflected the constructs. Face validity, the extent to which the content of the items is consistent with the construct definition, was based solely on the researcher's judgement (Din, Zakaria, Mastor, Razak, Embi & Ariffin, 2009). Content validity is the extent to which the items comprehensively represent the identified construct (Joo & Lee, 2011) (see Table 1). Thereafter, a self-administered structured questionnaire was distributed to 900 respondents; 477 questionnaires were returned and only 447 were usable. The original questionnaire comprised 67 items measuring five exogenous dimensions. A four-point Likert scale was used, where 1 = strongly disagree and 4 = strongly agree. This scale was adopted after realising that most respondents would mainly score the neutral anchor of any odd-numbered scale (Munene, 2005, personal communication).

# IV. Data Cleaning, Editing and Reliability

To confirm the instrument, the Statistical Package for the Social Sciences (SPSS) version 17 was used for statistical analysis to obtain descriptive statistics and to calculate both the exploratory factor analysis and instrument reliability analysis results (Ntayi, 2011). The missing data were checked and confirmed to be missing completely at random (MCAR); an MCAR test with p > .05 means that the groups are not significantly different from each other, and so the missing values are random. Establishing MCAR is a precursor to confirmatory factor analysis and structural equation modelling. Missing data were then filled using maximum likelihood (ML), which assumes multivariate normality but provides goodness-of-fit evaluation and, in some cases, significance tests and confidence intervals of parameter estimates. Maximum likelihood (direct ML) is one of the most widely preferred methods for handling missing data in SEM and other data-analytic contexts (Allison, 2003; Schafer & Graham, 2002). The descriptive statistics, including the mean, standard deviation, skewness, and kurtosis, were examined (see Table 1). An item whose skewness or kurtosis has an absolute value exceeding 1.0 is considered unsuitable for measurement instruments (EOM, 1996, cited in Joo and Lee, 2010). The values of skewness and kurtosis obtained were acceptable.
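As a concrete illustration of this screening step, the sketch below flags items whose skewness or excess kurtosis exceeds the |1.0| rule cited above. It assumes the item responses sit in a pandas DataFrame; the item names in the demo follow the tables later in the paper, but the numbers are made up.

```python
import pandas as pd

def screen_items(df: pd.DataFrame, cutoff: float = 1.0) -> pd.DataFrame:
    """Flag items whose skewness or kurtosis exceeds |cutoff|."""
    stats = pd.DataFrame({
        "mean": df.mean(),
        "sd": df.std(),
        "skewness": df.skew(),
        "kurtosis": df.kurtosis(),  # excess kurtosis, as reported by SPSS
    })
    stats["acceptable"] = (stats["skewness"].abs() <= cutoff) & \
                          (stats["kurtosis"].abs() <= cutoff)
    return stats

# Demo with hypothetical 4-point Likert responses for two items
demo = pd.DataFrame({"agen4": [2, 3, 2, 4, 3, 2], "gol5": [3, 3, 2, 4, 2, 3]})
print(screen_items(demo))
```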
The adequacy of the sample was determined using the Kaiser-Meyer-Olkin measure of sampling adequacy (0.87) and Bartlett's test of sphericity (χ² = 1977.09, df = 91, p = .00). The results indicated that the preconditions of normality and homoscedasticity were satisfied. The sample size was greater than 300, and the Cronbach's alpha values obtained for all the constructs exceeded the acceptable value of .70 (Nunnally, 1998; Field, 2005; Garson, 2005), as shown in Table 2. In order to examine whether the items are unidimensional, inter-item and corrected item-to-total correlations were analysed. In particular, items with item-to-total correlations of at least .30 to .40, the range considered the minimum level for interpretation of the structure, were retained (Din, Zakaria, Mastor, Razak, Embi, & Ariffin, 2009). According to Burton and Mazerolle (2011), inter-item correlations for items intended to measure the same construct should be moderate and not too high (i.e. .30 to .60). Survey-item unidimensionality means that a single item helps the researcher understand or assess only one latent construct, not multiple constructs measured by the survey (Burton & Mazerolle, 2011). All methods indicated that exploratory factor analysis was appropriate, and EFA was conducted to examine the relationships among the items and to identify clusters of items that share sufficient variation to justify their existence as a factor or construct to be measured by the instrument (Burton & Mazerolle, 2011). EFA helps reduce the number of items in a proposed survey so that the remaining items can best explain the constructs under investigation. Researchers use exploratory factor analysis (EFA) to determine the underlying factors that structure the instrument. For instance, in this study all cross-loading items and items with factor loadings less than .50 were eliminated from the instrument (Table 2).

# V. Description of the Sample

The findings showed that, of the respondents: 62 per cent were male and 38 per cent were female; 64 per cent were below 40 years of age and 36 per cent above 40 years; 66.2 per cent were married, 29.5 per cent single, 2.2 per cent separated, .7 per cent divorced and 1.3 per cent widowed; 45 per cent had a postgraduate degree or above, 5.6 per cent had certificates, 13.4 per cent had diplomas and 35.6 per cent had a first degree; 36 per cent had worked elsewhere before joining university service, whereas 26 per cent had no working experience on joining university employment.

# VI. The Measurement Model

Exploratory factor analysis (EFA) results suggested five factors, which seem to measure performance management practices. However, EFA is generally acknowledged as insufficient for the assessment of dimensionality (Rubio et al., 2001, cited in Vieira, 2011). According to Brown (2006), EFA has a problem of indeterminacy of factor scores, which is resolved by confirmatory factor analysis (CFA) and structural equation modelling (SEM), because their analytic framework eliminates the need to compute factor scores. Unlike EFA, CFA/SEM offer modelling flexibility such that additional variables can be brought into the analysis to serve as correlates, predictors, or outcomes of the latent variables. Often, CFA is used as a precursor to SEM (Brown, 2006). CFA is used in the measurement model to specify the number of factors, how the various indicators are related to the latent factors, and the relationships among the indicators' errors.
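For illustration, the five-factor, 15-item measurement model that eventually emerges (item codes as in Tables 1 and 4) can be written in lavaan-style syntax. The study itself used AMOS; the open-source `semopy` package is assumed here as a stand-in.

```python
# Five-factor CFA specification in lavaan-style syntax (semopy), mirroring
# the final 15-item measurement model. '=~' ties a latent factor to its
# indicators; with no '~' regressions between factors, the factors are only
# allowed to covary, which is exactly what distinguishes a CFA from a full
# structural model.
MEASUREMENT_MODEL = """
agency     =~ agen4 + agen5 + agen7
echelon    =~ echlo7 + echlo8 + echlo13
resources  =~ rebv13 + rebv14 + rebv15
capability =~ dymc6 + dymc7 + dymc8
goals      =~ gol4 + gol5 + gol11
"""
```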
CFA was conducted to minimise the difference between the estimated and observed matrices (Din, Zakaria, Mastor, Razak, Embi & Ariffin, 2009). The structural equation model specifies how the various latent variables are related to one another, such as direct or indirect effects, no relationship, or a spurious relationship (Brown, 2006). For each identified dimension (latent variable) in the measurement model, three to seven items were developed. To confirm the measurement items, reliability and validity testing was conducted on the empirical data using confirmatory factor analysis (CFA). According to DeVellis (2003), confirmation of the instrument minimises costs and risks that could arise out of poor measures. For the confirmatory factor analysis (CFA) in structural equation modelling (SEM), the AMOS 8.0 software program was used (Schermelleh-Engel, Moosbrugger, & Müller, 2003). The program adopted maximum likelihood estimation to generate estimates in the full-fledged measurement model. According to Hair et al. (2010), there is no single rule for reporting or guaranteeing a correct model, but a researcher should report at least one incremental index and one absolute index, in addition to the χ² value and associated degrees of freedom. The goodness-of-fit statistics that were tested included the chi-square, absolute fit indices and incremental fit indices shown in Table 3. A non-significant χ² (p > 0.05) is considered a good fit for the χ² GOF measure. However, a model with a significant χ² is not necessarily a poor fit, because the results are highly dependent on sample size (Barret, 2006). Large sample sizes can lead to almost automatic rejection of the null hypothesis even when models are only trivially misspecified, while poorly specified models might be accepted if sample sizes are small. According to Tomarken and Waller (2003), the chi-square test of exact fit is primarily a badness-of-fit measure that facilitates dichotomous acceptance or rejection decisions but provides little information about degree of fit. As a result, the ratio of χ² to degrees of freedom (χ²/df) has been proposed as an additional measure of GOF. A value smaller than 3 is recommended for the ratio (χ²/df) for accepting the model as a good fit (Chin & Todd, 1995), although this ratio is mathematically similar to χ², and Bollen (1989) dismissed it as unreasonable for assessing fit. The GFI was developed to overcome the limitations of the sample-size-dependent χ² measure of GOF (Joreskog & Sorbom, 1993); a GFI value higher than 0.90 is recommended as a guideline for a good fit. An extension of the GFI is the AGFI, adjusted by the ratio of the degrees of freedom for the proposed model to the degrees of freedom for the null model. An AGFI value greater than 0.9 is an indicator of good fit (Segars & Grover, 1993). The RMSEA measures the mean discrepancy between the population estimates from the model and the observed sample values; RMSEA < 0.1 indicates good model fit (Browne & Cudeck, 1993; Hair, Anderson, Tatham, & Black, 1998). Reporting the χ² value and degrees of freedom, the CFI or TLI, and the RMSEA will usually provide sufficient unique information to evaluate the model (Hair et al., 2010). However, the problem of sample-size dependency cannot be eliminated by this procedure (Ruiz, 2000, cited in Schermelleh-Engel, Moosbrugger, & Müller, 2003).
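Continuing the sketch above, the snippet below fits the specification by maximum likelihood and screens the resulting indices against the cut-offs just discussed. Again, `semopy` is assumed as an open-source analogue of AMOS, and the column names reflect recent `semopy` versions.

```python
# Fitting the specification above and extracting the fit indices discussed
# in this section; semopy stands in for AMOS here (an assumption).
import pandas as pd
import semopy

df = pd.read_csv("survey.csv")            # hypothetical cleaned item responses
model = semopy.Model(MEASUREMENT_MODEL)   # specification from the earlier sketch
model.fit(df)                             # maximum likelihood by default

stats = semopy.calc_stats(model)          # chi2, GFI, AGFI, CFI, TLI, NFI, RMSEA, ...
print(stats.T)                            # transpose for readability

# Simple screen against the guideline cut-offs discussed above
row = stats.iloc[0]
print("chi2/df <= 3.0:", row["chi2"] / row["DoF"] <= 3.0)
print("RMSEA  <  .10 :", row["RMSEA"] < 0.10)
print("GFI    >= .90 :", row["GFI"] >= 0.90)
```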
The incremental fit indices measure the improvement in fit by comparing the proposed model with a model that assumes no association among the observed variables, usually called the independence model. The normed fit index (NFI), the Tucker-Lewis index (TLI) and the comparative fit index (CFI) were tested; the values of these indices should be close to 1 to indicate a good fit (Hair et al., 1998).

# VII. Reliability and Validity

In this study, the reliability tests included internal consistency reliability measures, item reliability measures and construct reliability measures. The Cronbach coefficient values for the final model are indicated in Table 2; the obtained values range from .68 to .86, with goal setting having the lowest value of .68. After CFA, the overall internal consistency reliability coefficient (Cronbach's alpha) obtained was .86. Hair et al. (2010) argue that as SEM matures, previous guidelines such as "sample sizes of 300 are required" are no longer appropriate; rather, sample size decisions should be based on a set of factors. For instance, a minimum sample size of 300 is suggested for models with seven or fewer constructs, lower communalities (below .45) and/or multiple under-identified (fewer than three items) constructs. The communality measures the per cent of variance in a given variable explained by all the factors jointly and may be interpreted as the reliability of the indicator (Garson, 2008), as shown in Table 4. An item's communality, or item reliability, is the square of its standardized factor loading, which represents how much variation in an item is explained by the latent factor. An item reliability of .50 is the minimum acceptable value, although lower values may be accepted with large sample sizes. The standardised factor loadings, ranging from .53 to .87 as shown in Table 2, indicate acceptable convergent validity. The construct reliability values, indicated in Table 4, range from .68 to .87. Construct reliability (CR) above the .70 threshold and average variance extracted (AVE) above the .50 threshold are recommended by Hair et al. (1998), which this study achieved as indicated in Table 4. For satisfactory discriminant validity, the square root of the average variance extracted (AVE) for each construct should be greater than the correlations between that construct and the other constructs (Sridharan, Deng, Kirk & Corbitt, 2010). Table 5 shows the obtained discriminant validity values between each pair of constructs; all the square-root-of-AVE values indicated are greater than the correlations between the constructs. For example, dynamic capability showed the highest discriminant validity among the constructs: the square root of AVE for dynamic capability was .83, while the correlations between dynamic capability and the other constructs ranged from .52 to .63. Following Cohen, Cohen, West and Aiken's (2003) criteria, a correlation of r > .10 was considered weak, r > .30 moderate and r > .50 strong.

# VIII. Results

The initial measurement model of 52 deductively generated items (Hinkin, 1998, cited in Yeo & Frederiks, 2011), loading on five exogenous variables, yielded unsatisfactory fit indices (e.g. NFI = .77, GFI = .74, TLI = .85, CFI = .85). Based on the guidelines for these values, problematic items that caused the unacceptable model fit were excluded.
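The construct reliability and AVE figures in Table 4 follow standard formulas, which the plain-numpy sketch below reproduces for one construct. The loadings are the standardized values reported for dynamic capability, so the output should approximate the tabled CR = .87 and AVE = .69; the final lines perform the Fornell-Larcker comparison described above using the correlations from Table 5.

```python
import numpy as np

def construct_reliability(loadings):
    """CR = (Σλ)² / ((Σλ)² + Σδ), with δ = 1 - λ² the item error variance."""
    lam = np.asarray(loadings)
    delta = 1 - lam**2
    return lam.sum()**2 / (lam.sum()**2 + delta.sum())

def average_variance_extracted(loadings):
    """AVE = Σλ² / n."""
    lam = np.asarray(loadings)
    return (lam**2).mean()

dymc = [0.87, 0.83, 0.79]   # standardized loadings for dymc6-dymc8 (Table 4)
print(round(construct_reliability(dymc), 2))        # ≈ .87
print(round(average_variance_extracted(dymc), 2))   # ≈ .69

# Fornell-Larcker check: √AVE must exceed the construct's correlations
sqrt_ave = np.sqrt(average_variance_extracted(dymc))
correlations = [0.63, 0.52, 0.53, 0.40]             # capability row/column, Table 5
print(sqrt_ave > max(correlations))                 # True -> discriminant validity
```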
Remodelling to assess which specific model fits the data well (Tomarken & Waller, 2003) yielded a more parsimonious model of 15 items (Figure 1): RMSEA = .039 with a 90 per cent confidence interval of LO90 = .027 to HI90 = .050, GFI = .961, NFI = .944, TLI = .969, CFI = .977, RMR = .001, AGFI = .942, PNFI = .719, χ² = 133.886, df = 80, p = .000, χ²/df = 1.674 (see Table 3). Schermelleh-Engel, Moosbrugger and Müller (2003) argued that the number of indicator variables should be considered when choosing a sufficiently large sample size. Hau, Balla, and Grayson (1998), Marsh and Hau (1999), and Boomsma and Hoogland (2001), cited in Schermelleh-Engel, Moosbrugger and Müller (2003), argued that for confirmatory factor analyses with 6 to 12 indicator variables per latent factor, a sample size of N = 100 is necessary; with two indicators per factor, one should have a sample size of at least N ≥ 400. In other words, more indicators may compensate for a small sample size, and a large sample size may compensate for few indicators. In this study, the sample size of 447 was sufficiently large to meet this requirement. In CFA, there are no "outcome variables"; the fitted model could only be assessed using the discrepancy between the model-implied covariances and the observed covariances (Barret, 2006). In view of that assertion, SEM deals with the relationships between latent variables only, with the advantage that these variables are free of random error (Stoelting, 2009); errors were estimated and removed, leaving only the common variance. Byrne (2010) argued that the fit statistics resulting from the model will be equivalent whether it is parameterised as a first-order or a second-order structure based on theory.

# IX. Discussion

The purpose of this study was to develop and validate an instrument for measuring and assessing perceived performance management practices by exploring the psychometric properties, generalisability and applicability of this instrument in institutions of higher education in Uganda. The obtained well-fitting model was one plausible representation of the underlying structure among the many possible with the study data. The goal-setting variable in the fitted model had a low Cronbach value but was retained because of the exact model fit indices. To validate the instrument, the study examined internal reliability, item reliability and construct validity to identify whether the instrument is properly designed to measure what it intends to assess. An overall internal consistency reliability coefficient (Cronbach's alpha) of .95 was obtained from an analysis of the data using SPSS v19.0; after CFA, the overall internal consistency reliability coefficient was .83. All these values are above the generally agreed lower limit for Cronbach's alpha of .70. The goodness-of-fit measures, namely the goodness-of-fit index (GFI), comparative fit index (CFI), normed fit index (NFI) and Tucker-Lewis index (TLI), were all above the practitioners' cut-off value of .95 (Hu & Bentler, 1999). According to Browne and Cudeck (1993), a value of .08 or less for the RMSEA indicates an acceptable and reasonable error of approximation. The final revised model RMSEA was .038. In this study, SEM estimated the degree to which the hypothesised model fits the data for the second-order model, with the results still indicating a reliable and valid instrument (Figure 2).
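The internal-consistency coefficients discussed above follow the standard Cronbach formula; the sketch below is a plain-numpy version (SPSS would give the same value for the same data), applied to a small made-up response matrix for illustration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scale scores.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 4-point Likert answers from six respondents to three items
demo = np.array([[2, 3, 2], [3, 3, 3], [4, 4, 3],
                 [2, 2, 2], [3, 4, 4], [1, 2, 2]])
print(round(cronbach_alpha(demo), 2))
```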
# X. Conclusion

The current research makes an important contribution to the field of performance management in particular, and a scientific contribution in general, given the rigour exhibited in the process of instrument creation and validation. The process involved literature search, extraction, operationalisation and testing of the authenticity of constructs, and linking these constructs to measurement. This is a good attempt at contextualising the nature and dimensionality of performance management practices as a construct. In practice, the established measures of performance management practices should act as guidelines for managers of institutions of higher education in Uganda in managing employee performance. However, this study had its own limitations. The model used directional influences, which require a finite amount of time to operate, yet this was a cross-sectional study, rendering the interpretation of such effects problematic. The model still needs to be subjected to a CFA test with new data. A replication of this study with a wider literature search to establish better indicators of the constructs is recommended.

![Figure 1: Measurement model for performance management practice](image-2.png "Figure 1")

Table 1: Constructs, items, descriptive statistics, item-total correlations, EFA loadings and Cronbach's alpha

| Construct / item | Code | Mean | SD | Skewness | Kurtosis | Item-total corr. | EFA loading | α |
|---|---|---|---|---|---|---|---|---|
| Agency Relations (problem solving) | | | | | | | | .71 |
| Policies and procedures of the institution are clearly defined | Agency4 | 2.67 | .90 | -.26 | -.67 | .47 | .78 | |
| The review of decisions taken by the university top leaders is done formally | Agency5 | 2.72 | .81 | -.51 | -.03 | .49 | .76 | |
| The review of decisions taken by the university top leaders is comprehensive | Agency7 | 2.18 | .74 | .22 | -.20 | .53 | .66 | |
| Locus of Decision Making | | | | | | | | .76 |
| In this institution rewards are administered by objective criteria | Echelon7 | 2.06 | .82 | .38 | -.43 | .52 | .83 | |
| In this institution incentives are administered by objective criteria | Echelon8 | 2.06 | .79 | .48 | -.09 | .52 | .87 | |
| In this institution TMT members share the vision with employees | Echelon13 | 2.13 | .90 | .32 | -.73 | .54 | .61 | |
| Relevant Resources (resource utilisation) | | | | | | | | .76 |
| A number of relevant resources are integrated to increase our effectiveness | Rbv13 | 2.65 | .67 | -.52 | .23 | .43 | .77 | |
| In this institution relevant resources act as triggers for innovation | Rbv14 | 2.72 | .68 | -.79 | .72 | .46 | .80 | |
| In this institution resources act as triggers for collaborative problem solving | Rbv15 | 2.51 | .76 | -.33 | -.33 | .46 | .79 | |
| Dynamic Capability (information sharing and flow) | | | | | | | | .86 |
| In this institution there is sharing of new knowledge in decision making | Dynmc6 | 2.49 | .74 | -.31 | -.31 | .68 | .80 | |
| In this institution there is documentation of new knowledge in decision making | Dynmc7 | 2.39 | .75 | -.12 | -.43 | .61 | .84 | |
| In this institution there is sharing of new knowledge in problem-solving situations | Dynmc8 | 2.43 | .76 | -.06 | -.37 | .61 | .82 | |
| Goal Setting (planning) | | | | | | | | .68 |
| In this institution employees set themselves challenging but achievable goals | Goal4 | 2.55 | .70 | -.29 | -.14 | .35 | .81 | |
| In this institution employees are committed to their goals | Goal5 | 2.67 | .73 | -.30 | -.07 | .38 | .82 | |
| In this institution employees are encouraged to set their own task goals | Goal11 | 2.34 | .79 | -.20 | -.36 | .40 | .62 | |

Supporting literature. Agency relations: Jensen and Meckling (1976); Martinez and Kennerley (2005); Sperber (1996); Morris, Menon and Ames (2001); Hendry (2005); Daily, Dalton and Cannella (2003); Hermalin and Weisbach (2003). Locus of decision making: Hambrick and Mason (1984, 1992); Carlzon (1989); Brode (1994); Katzenback and Smith (1993). Relevant resources: Penrose (1959); Isobe, Makino and Montgomery (2003); Donaldson and Lorsch (1983); Dutton and Duncan (1987); Gordon and Cummins (1979); Amit and Schoemaker (1993); Barney (1991, 2001, 2002); Wernerfelt (1984); Collis and Montgomery (1995); Rousse and Dallenbach (2002). Dynamic capability: Shore, Porter and Zahra (2004); Coyle-Shapiro, Shore, Taylor and Tetrick (2004); Choo and Johnson (2004). Goal setting: Locke (1978, 2001); Locke and Latham (1990); Vandewalle (1997); Latham (2001); Latham and Lee (1986); Ryan (1970); Veccho and Appelbaum (1995).

Table 2: Rotated factor matrix from the exploratory factor analysis

| Factor (construct) | Item loadings | Eigenvalue | % of variance | Cumulative % |
|---|---|---|---|---|
| 1. Dynamic Capability | .84, .82, .80 | 2.31 | 15.43 | 15.43 |
| 2. Locus of Decision Making | .87, .83, .61 | 2.14 | 14.28 | 29.71 |
| 3. Relevant Resources | .80, .79, .77 | 2.09 | 13.91 | 43.62 |
| 4. Goal Setting | .81, .82, .62 | 1.89 | 12.57 | 56.18 |
| 5. Agency Relations | .78, .76, .66 | 1.89 | 12.50 | 68.68 |
Table 3: Goodness-of-fit statistics for the final measurement model (five correlated factors)

| Measure | Model value | Recommended value |
|---|---|---|
| χ² | 133.886 | |
| df | 80 | |
| p | .000 | ≥ .05 |
| χ²/df | 1.674 | ≤ 3.0 |
| RMR | .001 | |
| GFI | .961 | ≥ .90 |
| AGFI | .942 | ≥ .90 |
| RMSEA | .039 (LO90 = .027, HI90 = .050) | ≤ .10 |
| NFI | .944 | ≥ .90 |
| TLI | .969 | ≥ .90 |
| CFI | .977 | ≥ .90 |

Table 4: Item reliability, construct reliability (CR) and average variance extracted (AVE)

| Item | λ | λ² (item reliability) | δ = 1 - λ² | Σδ | Σλ² | AVE = Σλ²/n | (Σλ)² | CR |
|---|---|---|---|---|---|---|---|---|
| agen4 | .61 | .37 | .63 | 1.73 | 1.27 | .42 | 3.80 | .70 |
| agen5 | .63 | .40 | .60 | | | | | |
| agen7 | .71 | .50 | .50 | | | | | |
| echlo7 | .79 | .62 | .38 | 1.35 | 1.65 | .55 | 4.88 | .75 |
| echlo8 | .82 | .67 | .33 | | | | | |
| echlo13 | .60 | .36 | .64 | | | | | |
| rebv13 | .68 | .46 | .54 | 1.46 | 1.54 | .51 | 4.62 | .76 |
| rebv14 | .75 | .56 | .44 | | | | | |
| rebv15 | .72 | .52 | .48 | | | | | |
| dymc6 | .87 | .76 | .24 | .93 | 2.07 | .69 | 6.20 | .87 |
| dymc7 | .83 | .69 | .31 | | | | | |
| dymc8 | .79 | .62 | .38 | | | | | |
| gol4 | .70 | .49 | .51 | 1.71 | 1.29 | .43 | 3.76 | .68 |
| gol5 | .75 | .56 | .44 | | | | | |
| gol11 | .49 | .24 | .76 | | | | | |

Note: λ = standardised factor loading (communality in EFA = λ²); δ = standardised error variance; CR = (Σλ)² / ((Σλ)² + Σδ).

Table 5: Discriminant validity: inter-construct correlations, with the square root of AVE on the diagonal

| | Agency | Echelon | Resources | Capability | Goal setting |
|---|---|---|---|---|---|
| Agency | .65 | | | | |
| Echelon | .64 | .74 | | | |
| Resources | .41 | .35 | .71 | | |
| Capability | .63 | .52 | .53 | .83 | |
| Goal setting | .37 | .32 | .42 | .40 | .66 |

# References

* Abdel Aziz, A. E., Dixon, R., & Ragheb, M. A. (2005). The contemporary performance measurement techniques in Egypt: a contingency approach.
* Allison, P. D. (2003). Missing data techniques for structural equation modeling. Journal of Abnormal Psychology, 112.
* Armstrong, M. (1992). Human Resource Management: Strategy and Action. London; Thousand Oaks, CA: Sage.
* Barret, P. (2006). Structural equation modelling: Adjudging model fit. Personality and Individual Differences, 42.
* Brown, T. (2006). Confirmatory Factor Analysis for Applied Research. London: The Guilford Press.
* Browne, M. W., & Cudeck, R. (1993). Alternative ways of assessing model fit. In K. A. Bollen & J. S. Long (Eds.), Testing Structural Equation Models. Newbury Park, CA: Sage.
* Byrne, B. (2010). Structural Equation Modelling with AMOS: Basic Concepts, Applications, and Programming. London: Routledge, Taylor and Francis.
* Chin, W. W., & Todd, P. A. (1995). On the use, usefulness and ease of structural equation modelling in MIS research: A note of caution. MIS Quarterly, 19(2).
* Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). Applied Multiple Regression/Correlation Analysis for the Behavioural Sciences (3rd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
* de Waal, A. A. (2007). Is performance management applicable in developing countries? The case of a Tanzanian college. International Journal of Emerging Markets, 2(1). doi:10.1108/17468800710718903
* Din, R., Zakaria, M. S., Razak, N. A., Embi, M. A., & Ariffin, S. R. (2009). Meaningful Hybrid e-Training Model via POPEYE Orientation. International Journal of Education and Information Technologies, 3(1).
* Dwivedi, Y. K., Choudrie, J., & Brinkman, W. (2006). Development of a survey instrument to examine consumer adoption of broadband. Industrial Management & Data Systems, 106.
* Fisher, C., Elrod, C., & Mehta, R. (2011). A replication to validate and improve a measurement instrument for Deming's 14 Points. International Journal of Quality & Reliability Management, 28(3).
* Hair, J. F., Anderson, R. E., Tatham, R. L., & Black, W. C. (1998). Multivariate Data Analysis (5th ed.). Englewood Cliffs, NJ: Prentice-Hall.
* Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2010). Multivariate Data Analysis. London: Pearson Prentice Hall.
* Halachmi, A. (2005). Performance measurement is only one way of managing performance. International Journal of Productivity and Performance Management, 54(7).
* Hanley, A. J. G., Meigs, J. B., Williams, K., Haffner, S. M., & D'Agostino, R. B., Sr. (2005). (Mis)use of factor analysis in the study of insulin resistance syndrome. American Journal of Epidemiology, 161(12).
* Hu, L. T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6.
* Joo, S., & Lee, J. Y. (2010). Measuring the usability of academic digital libraries: Instrument development and validation. The Electronic Library, 29(4).
* Joreskog, K. G., & Sorbom, D. (1993). LISREL 8: Structural equation modelling with the SIMPLIS command language. Hillsdale, NJ: Lawrence Erlbaum Associates.
* Kagaari, J. (2011). Performance management practices and managed performance: the moderating influence of organisational culture and climate in public universities in Uganda. Measuring Business Excellence, 15(4).
* Kane, E. (1984). Why journal editors should encourage the replication of applied econometric research. Quarterly Journal of Business Economics, 23.
* Kline, R. (1998). Principles and Practice of Structural Equation Modeling. New York: Guilford Press.
* Locke, E. A., & Latham, G. P. (1990). A Theory of Goal Setting and Task Performance.
* Locke, E. A., & Latham, G. P. (2003). Building a practically useful theory of goal setting and task motivation: a 35-year odyssey. American Psychologist, 57.
* Locke, E. A., & Latham, G. P. (2005). Goal setting theory: theory building by induction. In K. G. Smith & M. A. Hitt (Eds.), Great Minds in Management: The Process of Theory Development. New York, NY: Oxford.
* MacCallum, R., & Austin, J. (2000). Application of structural equation modelling in psychological research. Annual Review of Psychology, 51.
* Marsh, H., Balla, J., & McDonald, R. (1988). Goodness of fit indexes in confirmatory factor analysis: The effect of sample size. Psychological Bulletin, 103.
* Mostafa, M. (2010). Analysis of the animosity model of foreign product purchase in Egypt. Global Business Review, 11(3), 355.
* Ntayi, J. (2011). Emotional outcomes of Ugandan SME buyer-supplier contractual conflicts. International Journal of Social Economics, 39(1/2).
* Raykov, T., & Marcoulides, G. (2006). A First Course in Structural Equation Modelling. London: Lawrence Erlbaum Associates.
* Rosenthal, R., & Rosnow, R. (1984). Essentials of Behavioral Research: Methods and Data Analysis. New York: McGraw-Hill.
* Schafer, J. L., & Graham, J. W. (2002). Missing data: Our view of the state of the art. Psychological Methods, 7.
* Schermelleh-Engel, K., Moosbrugger, H., & Müller, H. (2003). Evaluating the fit of structural equation models: Tests of significance and descriptive goodness-of-fit measures. Methods of Psychological Research Online, 8(2).
* Segars, A. H., & Grover, V. (1993). Re-examining perceived ease of use and usefulness: A confirmatory factor analysis. MIS Quarterly, 17(4).
* Sekhar, C. (2007). Assessment of effectiveness of performance appraisal system: Scale development and its usage.
* Sridharan, B., Deng, H., Kirk, J., & Corbitt, B. (2010). Structural equation modelling for evaluating the user perceptions of e-learning effectiveness in higher education. In Proceedings of the 18th European Conference on Information Systems (ECIS 2010).
* Stoelting, R. (2009). Structural equation modelling/path analysis.
* Straub, D., Boudreau, M.-C., & Gefen, D. (2004). Validation guidelines for IS positivist research. Communications of the Association for Information Systems, 13.
* Teresa, M., Ramirez, G., & Hernandez, R. (2007). Factor structure of the Perceived Stress Scale (PSS) in a sample from Mexico. The Spanish Journal of Psychology, 10(1).
* Tomarken, A., & Waller, N. (2003). Potential problems with "well fitting" models. Journal of Abnormal Psychology, 112(4).
* Vieira, A. L. (2011). Interactive LISREL in Practice: Getting Started with a SIMPLIS Approach. London: Springer.
* Yeo, G., & Frederiks, E. (2011). Cognitive and affective regulation: Validation and nomological network analysis. Applied Psychology: An International Review, 60(4). doi:10.11/j.1464-0597.2011.0047.x