Differences in findings between studies raise questions warranting further investigation, including inter-factor correlations among latent constructs that ranged from very high (e.g., 12 correlations above .90 in Study 2) to more moderate (e.g., only 3 correlations above .80 in Study 4). Further, the results of one study (Study 4) suggested that legitimacy, fairness, and voice were especially highly correlated and may form a single higher-order factor, but the other studies did not.

The authors review the traditional reliability coefficients but urge the reader to think about facets of generalizability, such as time, items, and observers, and to explicitly adopt a generalizability framework.

They criticize the indiscriminate use of alpha, pointing out its limitations and arguing for more complex interpretations of this ubiquitous index.
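To make the index under discussion concrete, a minimal sketch of coefficient alpha follows; the function name and the toy data are illustrative, not from the reviewed work:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) matrix of item scores.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' sum scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: two items that agree perfectly yield alpha = 1.0,
# but a high alpha alone says nothing about unidimensionality --
# the limitation the authors stress.
scores = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
print(cronbach_alpha(scores))
```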

They discuss a unified conception of construct validity, suggesting that systematic construct validation efforts are needed to develop a theoretical understanding of their methods.

They note the voracious appetite the field has for "fast data" (easily obtained self-reports) and argue for a more diversified diet, calling for multimethod investigations as a rule, rather than the rare exception.

They briefly illustrate the power of the no-longer-new techniques of structural equation modeling to address measurement problems, calling for their routine use, at least in large samples.

(PsycINFO Database Record (c) 2012 APA, all rights reserved)

ABSTRACT: Using confirmatory factor analyses and multiple indicators per construct, we examined a number of theoretically derived factor structures pertaining to numerous trust-relevant constructs (from 9 to 12) across four institutional contexts (police, local governance, natural resources, state governance) and multiple participant types (college students via an online survey, community residents as part of a city's budget engagement activity, a random sample of rural landowners, and a national sample of adult Americans via an Amazon Mechanical Turk study).

Across studies, a number of common findings emerged.

First, the best fitting models in each study maintained separate factors for each trust-relevant construct.

Furthermore, in post hoc analyses, models adding higher-order factors tended to fit better than models collapsing factors.
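Comparisons between nested CFA models of this kind (e.g., a collapsed-factor model versus one with separate correlated factors) are commonly evaluated with a chi-square difference test. The sketch below shows that test in isolation; the fit statistics fed to it are hypothetical, and the abstract does not report which comparison procedure the authors used:

```python
from scipy.stats import chi2

def chi_square_difference(chisq_constrained: float, df_constrained: int,
                          chisq_free: float, df_free: int):
    """Likelihood-ratio (chi-square difference) test for nested SEM models.

    The constrained model (e.g., factors collapsed into one) must be nested
    within the freer model (e.g., separate factors). A small p-value means
    the constraint significantly worsens fit, favoring the freer model.
    """
    delta_chisq = chisq_constrained - chisq_free
    delta_df = df_constrained - df_free
    p_value = chi2.sf(delta_chisq, delta_df)  # upper-tail probability
    return delta_chisq, delta_df, p_value

# Hypothetical fit statistics: collapsing two factors costs 1 df
# and raises chi-square by 20 -- a significant loss of fit.
d, ddf, p = chi_square_difference(150.0, 40, 130.0, 39)
print(d, ddf, p)
```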