

operationalization of the adequacy-confidence scale. But when he felt that the credibility of the sources was such that straightforward application of the operational definition resulted in a confidence code value that did not reflect his own belief in the truth value or accuracy of the variable code, then he was free to revise his adequacy-confidence score accordingly. The researcher was guided in interpreting the gradations in the adequacy-confidence scale by the conceptual definition of each scale category in Table 2.1 and the accompanying operationalizations of the coding categories.

The purpose of tagging each variable code with an adequacy-confidence score (henceforth referred to as the AC code) is twofold: (1) it provides a summary measure of the data quality for each variable in the study, and (2) it facilitates the selection of data for statistical analysis according to their rated quality. Thus, the summary statistics for each basic variable, reported in subsequent chapters, are accompanied by the mean of the AC values assigned by the analysts in coding the variable. A high mean AC score indicates adequate literature and confidence in coding; a low AC score suggests little information and much coding by inference. Those who might want to use only data above a certain quality level in their statistical analysis need only test for the accompanying AC code to filter out data below their selected level.

If the assignment of AC codes reflects random measurement error in the coding of variables, then the AC codes should themselves tend to be uncorrelated with their corresponding basic variable codes (henceforth referred to as BV codes). In fact, there are some pronounced cases of high-quality information being correlated with certain variable codes. High-quality electoral data, for example, tends to be associated with stability in a party's electoral fortunes. By and large, however, the correlations between AC and BV codes tend toward 0.
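The mechanics just described, selecting only codes above a chosen AC level and checking whether AC codes correlate with their BV codes, can be sketched in a few lines of Python. The record layout, field names, and sample values below are hypothetical illustrations, not the project's actual file format.

```python
import math

# Hypothetical records: each pairs a basic-variable (BV) code with its
# adequacy-confidence (AC) code; the values are invented for illustration.
records = [
    {"party": "A", "bv": 3, "ac": 9},
    {"party": "B", "bv": 5, "ac": 4},
    {"party": "C", "bv": 2, "ac": 7},
    {"party": "D", "bv": 4, "ac": 8},
    {"party": "E", "bv": 1, "ac": 3},
]

def filter_by_ac(recs, min_ac):
    """Keep only codes whose AC score meets the chosen quality level."""
    return [r for r in recs if r["ac"] >= min_ac]

def mean_ac(recs):
    """Summary measure of data quality for a variable: the mean AC score."""
    return sum(r["ac"] for r in recs) / len(recs)

def pearson(x, y):
    """Product-moment correlation between two equal-length code series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

high_quality = filter_by_ac(records, min_ac=6)  # drops parties B and E
quality = mean_ac(records)                      # mean AC for the variable
ac_bv_r = pearson([r["ac"] for r in records],
                  [r["bv"] for r in records])   # AC-BV correlation
```

Whenever `ac_bv_r` is nonzero, filtering on `min_ac` discards BV codes selectively and so attenuates their variance, which is the caution the text attaches to quality-based selection.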
Whenever the correlation between the AC and BV codes for a given variable is statistically significant at the .05 level, the correlation is noted. Those who might want to select data for analysis on the basis of some AC level should then be aware that they may be attenuating the variance in the BV codes because of the relationship between the values assigned to a variable and data quality.

The quality of any data set, of course, should also be assessed through formal tests of reliability. Reliability assessments of our coding in general were performed by pairs of researchers independently coding the same party-variable combinations in a total of 557 instances, which involved nearly all the variables in the file. The mean product-moment correlation between these pairs of BV codes, calculated separately by variable sets to control for differences in variance, was a healthy .79.

After all the basic research was completed for each country and the information about its parties recorded to the best of our analysts' abilities, we turned to outside experts to obtain critical evaluations of our product. Letters were sent to country and area specialists asking for their cooperation in reviewing the material we had generated on parties within their spheres of knowledge. About 40 specialists eventually agreed to review our material, with some gallant scholars consenting to handle two or more countries within their fields. The list of countries and cooperating reviewers is given in Table 2.2. The outside consultants were provided with all the material pertaining to their countries (i.e., the material in Part Two of this volume) and were asked to examine each component according to a specific set of instructions, including these guidelines:

