TESTING LATENT VARIABLE MODELS WITH SURVEY DATA

(2nd Edition)



STEP V-- VALIDATING MEASURES1
Measure validation, or demonstrating the adequacy of the measures to be used in the study, appeared to be the least consistent of the six steps above (see Peter and Churchill, 1986 for similar findings). Perhaps this was because there are several issues that should be addressed in validating measures. Measures should be shown to be unidimensional (having one underlying construct), consistent (fitting the model in structural equation analysis), reliable (comparatively free of measurement error), and valid (measuring what they should). Demonstrating validity has also been called measure validation (see Heeler and Ray, 1972). However, I will use the term measure validation to mean demonstrating measure unidimensionality, consistency (i.e., model-to-data fit), reliability, and validity.

While measure validation is well covered elsewhere, based on the articles reviewed it appears to merit review. I will begin with unidimensionality and consistency, then proceed to reliability and validity.


UNIDIMENSIONALITY
Assessing reliability usually assumes unidimensional measures (Bollen, 1989; Gerbing and Anderson, 1988; Hunter and Gerbing, 1982). However, coefficient alpha, the customary index of reliability, underestimates the reliability of a multidimensional measure (Novick and Lewis, 1967). Thus, unidimensionality is required for the effective use of coefficient alpha (Heise and Bohrnstedt, 1970-- see Hunter and Gerbing, 1982) (other indexes of reliability such as coefficient omega have been proposed for multidimensional measures-- see Heise and Bohrnstedt, 1970). As a result, the reliability of a measure, as it was typically assessed in the studies reviewed (i.e., using coefficient alpha), should be assessed after unidimensionality has been demonstrated (Gerbing and Anderson, 1988).
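
For concreteness, coefficient alpha can be computed directly from the item scores. The following Python sketch uses hypothetical item responses and the standard formula, alpha = (k/(k-1))·(1 - sum of item variances / variance of the summed scale):

    import numpy as np

    def coefficient_alpha(items):
        """Cronbach's coefficient alpha for an (n respondents) x (k items) score matrix."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]                                # number of items in the measure
        item_variances = items.var(axis=0, ddof=1)        # variance of each item
        total_variance = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
        return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

    # Hypothetical 5-point responses from six respondents to a four-item measure
    scores = [[4, 5, 4, 5],
              [2, 2, 3, 2],
              [5, 4, 5, 4],
              [3, 3, 3, 4],
              [1, 2, 1, 2],
              [4, 4, 5, 5]]
    print(round(coefficient_alpha(scores), 3))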

A unidimensional item or indicator has only one underlying construct, and a unidimensional measure consists of unidimensional items or indicators (Aaker and Bagozzi, 1979; Anderson and Gerbing, 1988; Burt, 1973; Gerbing and Anderson, 1988; Hattie, 1985; Jöreskog, 1970 and 1971; McDonald, 1981). In the articles reviewed, unidimensionality was typically assumed in the specification of a model estimated with structural equation analysis. Perhaps this was because authors have stressed the need for unidimensionality in structural equation analysis models in order to separate measurement issues (i.e., the relationship between a construct and its observed variables or indicators) from model structural issues (i.e., the relationships or paths among constructs) (Anderson, Gerbing and Hunter, 1987; Anderson and Gerbing, 1988; Bentler, 1989; Bollen, 1989; Burt, 1976; Jöreskog, 1993) (however, see Kumar and Dillon, 1987a and 1987b for an alternative view). Separating measurement issues from model structural issues in structural equation analysis avoids interpretational confounding (Burt, 1976), the interaction of measurement and structure in structural equation models. In particular, an item or indicator x can be viewed as composed of variance due to its construct X and variance due to error, and thus

Var(x) = λ²Var(X) + Var(e) ,   (2)

if X and e are independent of each other, where Var denotes variance, λ or lambda is the path coefficient on the path connecting X with x (also called the loading of item x on X), and e is measurement error. Interpretational confounding in structural equation analysis means that changes in model structure (i.e., adding or deleting paths among constructs) can produce changes in the measurement parameter estimates of a construct (i.e., changes in item loadings, in measurement errors, and in construct variances). Thus, with interpretational confounding, changes in the structural equation model can affect the empirical meaning of a construct.
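
Equation 2 can be illustrated with a small simulation. In the Python sketch below the loading, construct variance, and error variance are hypothetical values chosen only for the illustration; the observed item variance should match λ²Var(X) + Var(e) closely:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100000                            # large sample so the identity is visible numerically

    lam = 0.8                             # hypothetical loading (lambda) of item x on construct X
    var_X, var_e = 1.0, 0.36              # hypothetical construct and measurement error variances

    X = rng.normal(0.0, np.sqrt(var_X), n)      # latent construct scores
    e = rng.normal(0.0, np.sqrt(var_e), n)      # measurement error, independent of X
    x = lam * X + e                             # observed item (indicator)

    print(x.var(ddof=1))                        # approximately 1.00
    print(lam**2 * var_X + var_e)               # = 0.64 + 0.36 = 1.00, per Equation 2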


CONSISTENCY
Many criteria for demonstrating unidimensionality have been proposed (see Hattie, 1985). Perhaps in response to calls for more work in this area (e.g., Lord, 1980), Anderson and Gerbing (1982) proposed operationalizing unidimensionality using the structural equation analysis notions of internal and external consistency (also see Kenny, 1979; Lord and Novick, 1968; McDonald, 1981) (however see Kumar and Dillon, 1987a and 1987b for an alternative view).

Consistency has been defined as the structural equation model fitting the data (see Kenny, 1979). It is important because coefficient estimates from structural equation analysis may be meaningless unless the model adequately fits the data (Bollen, 1989; Jöreskog, 1993:297). As Anderson and Gerbing (1982) defined consistency, two indicators of X, x1 and x2, are internally consistent if the correlation between them is the same as the product of their correlations with their construct X. Similarly, an indicator of X and an indicator of Z, x and z, are externally consistent if the correlation between x and z is the same as the product of three correlations: x with its construct X, z with its construct Z, and X with Z. Thus, if X is internally and externally consistent, it is also unidimensional (i.e., its items have but one underlying construct, X), and I will use the general term consistent/unidimensional for Anderson and Gerbing's (1982) operationalizations of internal and external consistency.
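
These definitions can be illustrated with a simulation. In the Python sketch below the loadings and the construct correlation are hypothetical, and the data are generated so the measurement model holds exactly; the two numbers printed on each line should nearly agree:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200000                                   # large sample so the products are visible

    # Hypothetical standardized constructs X and Z with correlation .5
    phi = 0.5
    X = rng.normal(size=n)
    Z = phi * X + np.sqrt(1 - phi**2) * rng.normal(size=n)

    def indicator(construct, loading):
        # standardized indicator = loading*construct + independent error
        return loading * construct + np.sqrt(1 - loading**2) * rng.normal(size=n)

    x1, x2 = indicator(X, 0.8), indicator(X, 0.7)    # two indicators of X
    z1 = indicator(Z, 0.9)                           # one indicator of Z

    def r(a, b):
        return np.corrcoef(a, b)[0, 1]

    # Internal consistency: corr(x1,x2) is close to corr(x1,X)*corr(x2,X)
    print(round(r(x1, x2), 2), round(r(x1, X) * r(x2, X), 2))
    # External consistency: corr(x1,z1) is close to corr(x1,X)*corr(z1,Z)*corr(X,Z)
    print(round(r(x1, z1), 2), round(r(x1, X) * r(z1, Z) * r(X, Z), 2))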

Anderson and Gerbing (1982) also proposed assessing consistency/unidimensionality with what they termed similarity coefficients (see Hunter, 1973; Tryon, 1935). The similarity coefficient for items a and b (in the same or different measures) is derived from the dot (inner) product of, or angle between, the vector of correlations of a with the other study items and the vector of correlations of b with the other study items, and it is given by
     cos v = Va·Vb / (|Va| * |Vb|) ,
where Va is the row vector of correlations between a and the other study items (including b), Vb is the column vector of correlations between b and the other study items (including a), Va·Vb is the matrix product of row vector Va and column vector Vb, |V| is the length of a correlation vector (= (Σri²)^1/2, the square root of the sum of the squares of the correlations ri), "*" indicates multiplication, and cos v is the coefficient of similarity (i.e., the cosine of the angle v between the correlation vectors). Items that are similar have a small angle between their correlation vectors, and the cosine of this angle is near one. Specifically, Anderson and Gerbing (1982) proposed that items a and b have high internal consistency if their correlation vectors lie sufficiently close together, and suggested a similarity coefficient of .8 or above. External consistency is suggested by items that cluster together in a matrix of sorted or ordered similarity coefficients (Anderson and Gerbing, 1982:458) (see Appendix F for an example).
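
The similarity coefficients are straightforward to compute from the item correlation matrix. The Python sketch below uses hypothetical responses; so that both correlation vectors span the same items, it keeps each item's unit self-correlation in its vector (an implementation detail not spelled out above), and it also orders the resulting matrix by the sum of each item's coefficients:

    import numpy as np

    def similarity_matrix(data):
        """Similarity coefficients (cos v) for a (respondents x items) score matrix."""
        R = np.corrcoef(np.asarray(data, dtype=float), rowvar=False)  # item correlation matrix
        lengths = np.sqrt((R ** 2).sum(axis=1))          # |V| for each item's correlation vector
        return (R @ R) / np.outer(lengths, lengths)      # Va.Vb / (|Va|*|Vb|) for every pair

    # Hypothetical responses (rows = respondents; columns = items a, b, c, d)
    data = np.array([[4, 5, 2, 1],
                     [2, 2, 4, 5],
                     [5, 4, 1, 2],
                     [3, 3, 3, 3],
                     [1, 2, 5, 4],
                     [4, 4, 2, 2]])
    sim = similarity_matrix(data)
    print(np.round(sim, 2))                              # values of .8 or above suggest similar items

    # Ordered (sorted) similarity coefficients: order items by their summed coefficients
    order = np.argsort(-sim.sum(axis=1))
    print(np.round(sim[np.ix_(order, order)], 2))        # clusters suggest external consistency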

Consistency/unidimensionality is also suggested by a structural equation measurement model that fits the data when its constructs are specified as unidimensional (i.e., each observed variable or indicator is connected to only one construct). With consistency/unidimensionality there is little change in measurement parameter estimates (i.e., loadings and variances-- see Equation 2) between the measurement model and subsequent structural models (Anderson and Gerbing, 1988) (i.e., differences in second or third decimal digits only). Thus consistency/unidimensionality can also be suggested by showing little if any change in measurement parameter estimates between a full measurement model (i.e., one containing all the model constructs, and their indicators, with correlations among all the constructs) and the structural model (i.e., one that replaces certain correlations among the constructs with directional paths).


PROCEDURES FOR ATTAINING UNIDIMENSIONALITY AND CONSISTENCY
Procedures for attaining unidimensionality using exploratory (common) factor analysis are well known. However, procedures for obtaining consistent/unidimensional measures are less well documented. Procedures using ordered similarity coefficients are suggested in Anderson and Gerbing (1982:454), and Gerbing and Anderson (1988). The ordered similarity coefficients help identify inconsistent items. Alternatively, consistency/unidimensionality for constructs specified unidimensionally (i.e., each observed variable or indicator is "pointed to" by only one construct) can be attained using a procedure that has been in use for some time (see Dwyer and Oh, 1987; Kumar and Dillon, 1987b; Jöreskog, 1993) (however see Cattell, 1973 and 1978 for a dissenting view). The procedure involves estimating a single construct measurement model (i.e., one that specifies a single construct and its items) for each construct, then measurement models with pairs of constructs, etc., through estimating a full measurement model containing all the constructs. Items are omitted as required at each step to obtain adequate measurement model fit (and thus consistency/unidimensionality because the process begins with single construct measurement models) while maintaining content or face validity (content or face validity is discussed later and should be a serious concern in omitting items using any consistency-improvement procedure). Standardized residuals, or specification searches (e.g., involving modification indices in LISREL or LMTEST in EQS) can also be used to suggest items to be omitted at each step to improve model-to-data fit.
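
As an illustration of estimating a single construct measurement model, the sketch below assumes the open-source Python package semopy and its lavaan-style model syntax (an assumption; the procedure above was typically carried out in packages such as LISREL or EQS), and the simulated item scores are hypothetical:

    import numpy as np
    import pandas as pd
    import semopy                                        # assumed SEM package

    # Hypothetical item scores: x1-x4 each reflect one construct X plus error
    rng = np.random.default_rng(2)
    X = rng.normal(size=300)
    data = pd.DataFrame({f"x{i}": 0.8 * X + rng.normal(scale=0.6, size=300)
                         for i in range(1, 5)})

    # Single construct measurement model: X is measured by ("points to") x1-x4
    model = semopy.Model("X =~ x1 + x2 + x3 + x4")
    model.fit(data)

    print(model.inspect())                               # loadings and error variance estimates
    print(semopy.calc_stats(model).T)                    # fit statistics, including RMSEA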

However, these methods are not particularly efficient. In addition, they usually do not identify multiple subsets of consistent indicators, and they may not produce the largest consistent/unidimensional subset of indicators. Instead, partial derivatives of the likelihood function with respect to the measurement error terms of the indicators (termed "FIRST ORDER DERIVATIVES" in LISREL) could be used to suggest inconsistent items (see Ping, 1998a). This approach involves the examination of the matrix of these derivatives from a single construct measurement model (i.e., one that specifies a single construct and its items). The item with the largest summed first derivatives without regard to sign is omitted, provided its omission preserves the content or face validity of the measure. The matrix of first derivatives is then re-estimated without the omitted item, and the process is repeated until the single construct measurement model fits the data (see Appendix E for an example of this procedure and relevant criteria for model fit).
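
The omit-and-re-estimate loop can be outlined schematically as below. The helpers fit_single_construct_model() and acceptable_fit() are hypothetical placeholders for whatever SEM software supplies the first derivatives and fit indices (the procedure above names LISREL); the sketch is only an outline of the logic, not an implementation:

    def prune_items(items, fit_single_construct_model, acceptable_fit, protected=()):
        """Omit, one at a time, the item with the largest summed first derivatives
        without regard to sign (skipping items judged essential for content or face
        validity) until the single construct measurement model fits the data."""
        items = list(items)
        result = fit_single_construct_model(items)        # hypothetical: estimate the model
        while not acceptable_fit(result):                 # hypothetical: fit criteria met?
            derivs = result.first_derivatives             # hypothetical: item -> derivative row
            candidates = [i for i in items if i not in protected]
            worst = max(candidates, key=lambda i: sum(abs(d) for d in derivs[i]))
            items.remove(worst)                           # omit the most inconsistent item
            result = fit_single_construct_model(items)    # re-estimate without it
        return items, result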

Items with similarly sized summed first derivatives without regard to sign suggest that there are at least two consistent subsets of items, and my own experience with this procedure and real-world survey data sets has been that it produces one or more comparatively large (i.e., about six items) internally consistent subsets. The approach is similar to Saris, de Pijper and Zegwaart's (1987) and Sörbom's (1975) proposal to improve model-to-data fit using partial derivatives of the likelihood function with respect to fixed parameters (i.e., to suggest paths that could be freed, e.g., modification indices in LISREL).

The internally consistent subsets of items that are produced also are frequently externally consistent. Nevertheless, the summed first derivative procedure could also be used on a full measurement model containing all the constructs specified unidimensionally (i.e., each observed variable or indicator is connected to only one construct). This approach involves the examination of the matrix of these derivatives from this full measurement model. As before, the item with the largest summed first derivatives without regard to sign is omitted, provided its omission preserves the content or face validity of its measure. The matrix of first derivatives is then re-estimated without the omitted item, and the process is repeated until the full measurement model fits the data (e.g., its RMSEA is .08 or less--fit indices are discussed below and in Step VI-- Validating The Model). This full-measurement-model variant of the first derivative approach will be discussed further later.


COMMENTS ON UNIDIMENSIONALITY AND CONSISTENCY
Unidimensionality in the exploratory common factor analytic sense is required for coefficient alpha, and consistency/unidimensionality is required for model-to-data fit in structural equation analysis. Further, it is well known that the reliability of a measure is necessary for its validity. Thus, there is a sequence of steps in validating or demonstrating the adequacy of a measure: establish the measure's consistency/unidimensionality for structural equation analysis, or establish its unidimensionality using Maximum Likelihood exploratory common factor analysis (i.e., not principal components factor analysis) for regression (however, regression is considered inappropriate for latent variables-- e.g., Bohrnstedt and Carter, 1971; Cohen and Cohen, 1983; Rock, Werts, Linn and Jöreskog, 1977; Warren, White and Fuller, 1974), then show its reliability, and finally its validity.

Unidimensionality in two and three item measures is difficult to demonstrate using exploratory or confirmatory factor analysis because these measures are under- or just determined (and they trivially fit the data). However, ordered similarity coefficients will gauge both internal and external consistency, and thus unidimensionality, using the criteria discussed above. If the matrix of similarity coefficients for the two or three item measure has similarity coefficients of .8 or above, this suggests internal consistency/unidimensionality. The matrix of similarity coefficients for all the study measures can be sorted by the sum of coefficients for each item, and if the items for the two or three item measure cluster together in the sorted matrix, this suggests external consistency.

While Churchill and Peter (1984) found no effect on reliability when positively and negatively worded or reverse-polarity items are mixed in a measure, subsequent studies have suggested that mixing positively and negatively worded items can adversely affect measure consistency/unidimensionality (see the citations in Herche and Engelland, 1996). If concern for acquiescence bias (see Ray, 1983) produces a measure with positively and negatively worded items that has consistency/unidimensionality problems, inconsistent items might be retained as a second facet in a second-order construct (see Bagozzi, 1981 for a similar situation).

Similarly, I have noticed that changes in verb tense among items in the same measure can lead to internal consistency problems (and as previously discussed, mixed verb tenses can lead to specification problems in the structural model--i.e., a path from present or future tense items to past tense items). Similarly in interorganizational research, changes in the subject of item stems (i.e., I versus company) among items in the same measure can also lead to internal consistency problems.

Parenthetically, ordered similarity coefficients do not always suggest maximally consistent item clusters in survey data. Instead they usually suggest sufficiently consistent clusters of items that are also sufficiently reliable (see Appendix F for an example).

INTERNAL CONSISTENCY, EXTERNAL CONSISTENCY AND ITEM OMISSION
In survey data it is easy to show that unidimensionality obtained using Maximum Likelihood exploratory common factor analysis does not guarantee internal consistency (see Appendix O). Further, in real-world data internal consistency is a stronger demonstration of unidimensionality than a single factor solution in Maximum Likelihood exploratory common factor analysis (see Appendix O).

In addition, while it is frequently possible with real-world data for internally consistent latent variables (i.e., ones that each fit their single construct measurement model) to also be externally consistent (i.e., all the latent variables together fit their full measurement model containing all of them jointly) (see Appendix O--all the internally consistent latent variables shown there were also externally consistent), the reverse is not true. In real-world data externally consistent measures can be internally inconsistent (see Appendix O-- all the fully itemized latent variables shown there were externally consistent because their full measurement model fit the data, but individually several were internally inconsistent).

Because there is a third type of consistency, "structural model consistency" (i.e., the structural model fits the data), that will be discussed later, there is a recommended sequence of steps in establishing consistency that was implied earlier: establish that each measure is internally consistent (i.e., its single construct measurement model fits the data), next establish that all the measures taken together are externally consistent (i.e., a full measurement model jointly containing all the measures fits the data), then establish that the structural model fits the data (see Anderson and Gerbing, 1988) (however, see Kumar and Dillon, 1987a,b). The summed first derivative procedure could be used to attain internal consistency for each latent variable as discussed earlier (see Appendix E). External consistency can also be attained using the summed first derivative procedure, as mentioned earlier, and it is demonstrated using the model-to-data fit of a full measurement model (i.e., one that contains all the study variables). "Structural consistency" is demonstrated using the model-to-data fit of the structural model, which will be discussed later. While not always well documented in the articles reviewed, this sequence of steps appeared to be generally followed.

A Unidimensionality Step
Unfortunately this sequence of steps has an important disadvantage: the first step, attaining internal consistency (with summed first derivatives or any other approach), will produce measures that frequently contain a small number of items (i.e., about six--see Gerbing and Anderson, 1993; and Bagozzi and Baumgartner, 1994). Because this can destroy the content or face validity of measures from which items have been dropped, an alternative would be to replace the internal consistency step with an exploratory factor analysis step that establishes unidimensionality in the exploratory factor analysis sense (i.e., the items load on only one factor), then establish external and structural consistency as before. With real-world data this unidimensionality step can be accomplished for each measure by performing a Maximum Likelihood exploratory (common) factor analysis on its items (i.e., without the items from other measures), and dropping any items that do not load on Factor 1. As a final check the resulting measures could be jointly factored (i.e., in a factor analysis that includes all the measures) using Maximum Likelihood exploratory (common) factor analysis to verify that each item is unidimensional (i.e., it loads heavily on only one factor, and it "clusters" with the other items in its measure).
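
A sketch of this unidimensionality check for one measure is given below, assuming scikit-learn's maximum likelihood FactorAnalysis; the simulated data and the .4 loading rule of thumb are illustrative assumptions rather than prescriptions from the text, and a dedicated factor analysis program could be used instead:

    import numpy as np
    import pandas as pd
    from sklearn.decomposition import FactorAnalysis

    # Hypothetical item scores: x1-x5 reflect one construct, x6 is largely unrelated
    rng = np.random.default_rng(3)
    F = rng.normal(size=400)
    items = pd.DataFrame({f"x{i}": 0.7 * F + rng.normal(scale=0.7, size=400)
                          for i in range(1, 6)})
    items["x6"] = rng.normal(size=400)

    # Standardize the items, then fit a one-factor maximum likelihood factor model
    z = (items - items.mean()) / items.std(ddof=1)
    fa = FactorAnalysis(n_components=1).fit(z)

    loadings = pd.Series(fa.components_[0], index=items.columns)
    print(loadings.round(2))
    # Items with weak loadings (e.g., below .4 in absolute value, a common rule of
    # thumb) would be candidates for omission, subject to content or face validity.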

Next, the external consistency of these unidimensional (as opposed to the stronger internally consistent) items could be established using the full measurement model and the summed first derivative procedure discussed earlier. The indicator with the largest summed first derivative without regard to sign, from the first derivatives available in the full measurement model, could be dropped, the full measurement model could be re-estimated without that item, and the full measurement model fit could be re-evaluated. If the fit is still not acceptable, the summed first derivatives without regard to sign could be examined again, and the item with the largest could be dropped. This process could be repeated until the full measurement model fits the data (e.g., RMSEA is .08 or less, and the measures are externally consistent--fit indices are discussed below and in Step VI-- Validating The Model) (see Appendix I for an example).

This "unidimensionality" approach to consistency has the advantage of producing a maximal subset of (externally) consistent items, which with real-world data usually does a better job of preserving the content or face validity of the study measures. In real-world data it also separates measurement issues from structural issues because the measures are unidimensional, thus avoiding interpretational confounding.2

Nevertheless, the resulting model is sample dependent if any items are dropped: the operationalization of the constructs embodied in the items has been altered by dropping items in order to fit the data from that particular sample. In addition, my own experience with the above procedures for obtaining consistency/unidimensionality is that they are all tedious, especially the first derivative procedure. An alternative is to avoid dropping items by summing one or more constructs' items and using regression (however, see Step VI-- Violations of Assumptions for cautions about regression), or by using "single indicator" structural equation analysis (which will be discussed next).

As previously mentioned there seems to be an upper bound for the number of items in a consistent/unidimensional measure of about six items (also see Gerbing and Anderson, 1993; and Bagozzi and Baumgartner, 1994 for similar observations). Thus larger measures, especially older measures developed before structural equation analysis became popular, usually required item omission to attain consistency/unidimensionality in the articles reviewed. While the resulting consistent/unidimensional submeasures were invariably argued or implied to be content or face valid, they often seemed to be less so than the original full measures.

In fact, a common misconception in the articles that used structural equation analysis was that consistent measures are more desirable than less consistent fuller measures, especially older measures developed before structural equation analysis became popular. Many articles appeared to assume that older full measures were inherently flawed because, although they were unidimensional in the exploratory factor analysis sense, they were typically inconsistent and required item omission to attain a consistent subset of items for structural equation analysis. However, authors have criticized dropping items from otherwise psychometrically acceptable (i.e., unidimensional in the exploratory factor analysis sense, valid and reliable) measures to attain model fit on the grounds that it impairs content validity (e.g., Cattell, 1973, 1978; see Gerbing, Hamilton and Freeman, 1994). As previously mentioned, Cattell (1973) observed that the resulting measures are typically bloated specific (operationally narrow) instances of the target construct.

Because it frequently appeared that the full measures (i.e., the measures before items were dropped) were more desirable in the articles reviewed than the proposed more consistent reduced measures for reasons of face or content validity, I will discuss an alternative to item omission to attain consistency/unidimensionality in structural equation analysis.

Single Summed Indicator Structural Equation Analysis
Item omission to attain acceptable measurement model-to-data fit may not always be necessary in order to use structural equation analysis. In situations where it is desirable to retain items in measures that are unidimensional in the exploratory factor analysis sense, in order to retain face or content validity, the items in the measure could be summed and OLS regression could be used to validate a survey-data model (however, as previously mentioned, OLS regression is inappropriate for variables measured with error-- see Cohen and Cohen, 1983). Alternatively, Kenny (1979) hinted at an approach involving summed items and reliabilities that can be used with structural equation analysis. Variations of this approach have been used elsewhere in the social sciences (e.g., Heise and Smith-Lovin, 1982; James, Mulaik and Brett, 1982; and Williams and Hazer, 1986). The approach involves summing the items in a measure that is unidimensional using Maximum Likelihood exploratory common factor analysis, then averaging them to provide a single summed indicator of the unobserved construct.
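
A minimal sketch of forming the single summed indicator follows (the item names and scores are hypothetical); how the measure's reliability then enters the structural equation specification follows the approach cited above:

    import pandas as pd

    # Hypothetical scores on the items of construct X (one row per respondent)
    items = pd.DataFrame({"x1": [4, 2, 5, 3, 1],
                          "x2": [5, 2, 4, 3, 2],
                          "x3": [4, 3, 5, 4, 1],
                          "x4": [5, 1, 4, 3, 2]})

    # Single summed indicator: sum the items, then average them
    x_single = items.sum(axis=1) / items.shape[1]    # equivalently items.mean(axis=1)
    print(x_single)

    # x_single then serves as the lone indicator of the unobserved construct X in
    # the structural equation model, per the approach cited above.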



