
by Cale Horne, Ph.D.

With increasing attention being paid to public diplomacy and the role of public opinion in international affairs, along with the popular movements for democracy in authoritarian states, there is more interest than ever before in the role of public opinion in non-democratic societies.—Ed.

Introduction

The recent wave of anti-regime protests sweeping the Middle East and North Africa goes far in dispelling the notion that politically repressed peoples lack substantive opinions about domestic politics in their own countries, even though they lack the means of expressing these opinions through meaningful elections, accountable political representation, a free press, or rights of public assembly. The question remains, however, whether these opinions are measurable by conventional methods. In places where dissenting opinions may be punished harshly, the reliability and validity of politically sensitive public surveys is far from self-evident. Yet, as discussed below, political opinion surveys have been used in a variety of repressive settings for many years, and relatively little attention has been given to the trustworthiness of these instruments. As the current situation continues to unfold across the Arab world, and as survey research becomes increasingly common in such places, the accuracy of survey results as indicators of domestic opinion will become increasingly important to the Foreign Service community.

This paper proceeds as follows. First, I present a brief history of survey research under repression. Survey research has been surprisingly widespread across autocracies since at least the 1960s, used for a variety of legitimate and illegitimate ends. Second, I offer a set of recommendations for assessing the accuracy of surveys conducted in repressive environments. This ‘checklist’ for evaluating such surveys can be employed even without specialized training in survey research or statistical methods. Finally, I conclude with a discussion of alternatives to mass surveys for assessing public attitudes in repressive polities. These alternatives can be used as informal checks on the validity of surveys or, when surveys are absent, as alternative proxies for public attitudes about government and other areas of interest.

An Overview of Survey Research under Repression

Public opinion polling under conditions of political repression is not new. Within a decade of the birth of modern polling in Western democracies, state-sponsored survey research was underway, with similar scientific rigor, in the Communist East (Kwiatkowski 1992). Public surveys subsequently spread to other autocratic settings. The bulk of opinion research in non-democracies has proceeded in one of three ways:

1) Surveys organized and controlled by the state for its own political purposes, especially public manipulation, which does not require valid survey results.

2) Surveys addressing politically sensitive questions under conditions of rapid liberalization, when individuals are less fearful of criticizing the government and its positions, and hence less likely to descend into a “spiral of silence” (Noelle-Neumann 1993).

3) Surveys avoiding politically sensitive questions in contexts where repression is more immediate or acute.

Only rarely has survey research broached a fourth category: independent surveys dealing with politically sensitive questions under genuinely repressive conditions. Much rarer—but perhaps of the greatest interest—are independent, politically sensitive surveys, conducted under conditions of growing repression.

Survey research by (and for) the state
Given the purposes of politically sensitive surveys under authoritarian rule, response bias is not always a concern, because information gathering may be irrelevant to the purpose of polling. For example, during the 1960s and 1970s, Communist states undertook polling during periods of crisis as a means of placating public discontent. “However,” one expert notes, “when periods of crisis passed and communist parties strengthened their power, the leaderships tended to restrain and control social research” (Kwiatkowski 1992, 359).

State-sponsored surveys were also used widely for propaganda purposes (Welsh 1981). In the aftermath of Poland’s 1980-81 Solidarity Revolution, the Institute for Basic Problems of Marxism-Leninism at the Central Committee of the Communist Party publicized its survey of Polish workers in Spring 1982, reporting high levels of satisfaction with the government (Mason 1985, 205ff).

Building on the logic of these earlier efforts, formalized state-run polling emerged as threats to the Communist system intensified. In 1982, following the Solidarity Revolution, Poland’s General Wojciech Jaruzelski announced the planned formation of a state-run public opinion research institution. The result, the Public Opinion Research Center in Warsaw (CBOS), served “to carry out fast surveys on issues relevant to current political events. Politicians expected to be informed of changes in the social mood of the public, of the most important perceived problems, and of sources of possible protests and conflicts. … the Center was conceived as a way of strengthening the Communist system” (Kwiatkowski 1992, 363; 1985). Soviet General Secretary Yuri Andropov made a similar announcement in 1983, and other Communist states began to follow suit. Toward the end of the Cold War the tables were turned, as state-sponsored research became a tool of reform-minded leaders eager to use public opinion to enact policy change (Slider 1985). Here again the purposes of the state, and not the accuracy of the surveys, were what mattered.

Other politically sensitive survey research by authoritarian states appears to have served genuine information-gathering purposes. Sieger (1990) identifies information gathering as an essential motivation for opinion research in Marxist-Leninist societies, describing such research as “a policy tool employed to ensure the efficient control of society by the party elite. The information gathered may have been used in decision-making, in the evaluation of already implemented policies or in the manipulation and mobilization of the citizens” (ibid., 325). To this end East Germany maintained no fewer than four state-run institutions devoted to carrying out survey research (ibid., 327). In a similar vein, the Chinese government formed the China Social Survey System (CSSS) in 1984 to undertake polling on social, economic, and political issues of importance to the government (Mason 1989). Survey research to assess the public mood, like polling to temper it, was common across the Soviet bloc (Welsh 1981).

It was never clear, however, to what extent these surveys provided the authorities useful information (Shlapentokh 1973). Sieger (1990, 328) notes how frequently East German surveys reported politically sensitive questions with no non-responses at all. Swafford (1992, 353) observes that, under the Soviet system, “survey researchers had no basis for expecting candor on the broad range of topics that might arouse the ire of authorities. There was no public opinion.” Non-Communist authoritarian systems exhibit similar patterns. Suleiman (1987, 63), for example, concludes that survey research in the Arab world is only possible if a survey question or theme is not “termed sensitive” by authorities.

Survey research in transitional states

In transitional countries, polling has been “a weapon in the struggle to build and sustain democracy under conditions in which the success of liberalization programs and the emergence of civil society were by no means assured” (Gollin 1992, 300).1 Perhaps the earliest and most comprehensive collection of such polling can be found in Cantril and Strunk’s (1951) Public Opinion 1935-1946, which records surveys from a number of post-World War II transitional states: Czechoslovakia, France, Germany, and Italy among them.2 And by the late 1980s, as Albert E. Gollin noted in a 1992 special issue of the International Journal of Public Opinion Research devoted to public opinion research in the former Soviet Union and Eastern Europe, “opinion research had become fully internationalized. Even in countries under authoritarian rule, little-publicized, modest attempts were sometimes made to conduct opinion surveys” (Gollin 1992, 299). As early as December 1989, the Soviet Sociological Association (SSA) was interacting openly with its American counterparts, armed with survey results on the Soviet peoples. The following year, at a 1990 plenary session of the WAPOR/AAPOR annual meeting, Soviet scholars joined their American counterparts to assess Soviet public opinion in relation to glasnost and perestroika (ibid., 300).

Unlike the wave of polling in Western and Central Europe following World War II, however, this new wave of post-Soviet polling was accompanied by the same kinds of concerns over socially desirable response that typified polling under authoritarian rule. Grushin (1993) and Shlapentokh (1994) regard respondent anxiety as the chief, if not the only, reason for the gross failures to predict the outcome of Russia’s 1993 Duma elections. Petrenko and Olson (1994) cite the effect on voters of the government’s harsh repression of the opposition during the constitutional crisis of October 1993, only two months before the elections, concluding that up to one-third of Zhirinovsky voters surveyed prior to voting answered pollsters insincerely. Fear-biased response was also an issue during democratic transitions in Latin America (Anderson 1994; Bischoping and Schuman 1992; 1994).

Fortunately, many of the early concerns about respondents’ fear of answering politically sensitive questions during periods of political transition appear overstated. Using data unavailable in the earlier analyses of Russia’s 1993 elections, Miller et al. (1996) find more support for the “late swing” hypothesis to explain Zhirinovsky’s unexpected success, casting doubt on explanations relying on ‘hidden,’ ‘fearful,’ or ‘ashamed’ voters (Shlapentokh 1994). In Latin America, response bias identified in pre-election polls was corrected with modified collection methods (Beltran and Valdivia 1999).

In other settings the prospects for coercion—and hence concern over response bias—are even more remote. In his analysis of 1997 survey data on attitude constraint among elites and the public in Beijing, Chen (1999) ignores the possibility of response-desirability problems altogether, though most of the survey questions used in the analysis invite respondents to criticize the government. The same omission characterizes Raghubir and Johar’s (1999) contemporaneous study of opinion in Hong Kong following the 1997 handover from Britain to China, even though their survey, focusing on respondents’ attitudes about the return of Chinese rule, was conducted only one month after the July 1997 transition. Before the transition, however, a majority of Hong Kong residents were already optimistic about a return to Chinese sovereignty (ibid.). Chen (1999, 194, 196) identifies China as a “transitional” state at this time, noting that authorities had reconsidered the use of mass coercion in the wake of Tiananmen Square.

Non-sensitive survey research
A third category of opinion research in non-democratic and politically repressive settings merits brief mention: survey research that is not overtly political, and thus is less sensitive, in content. Early survey research across the Middle East and North Africa was limited largely to studies of “fertility, family planning, and population dynamics” (Tessler 1987, 3). More recently, comparative development studies frequently include non-democracies, though the survey instruments are not overtly political or controversial in nature (e.g., Almond and Verba 1963; Norris and Inglehart 2004). In this vein the now-defunct United States Information Agency, with its emphasis on public diplomacy, purposefully avoided questions dealing with a country’s internal politics in its survey research, which targeted transitional states (Millard 1989).

The bulk of contemporary opinion research in non-democracies—political or non-political—consists of attitudinal studies. Tessler (2002), for example, examines the relationship between Islamic religiosity and democratic-mindedness across several Middle Eastern and North African countries. Related work in the Middle East analyzes attitudes toward religion, national identity, family, gender relations, Western culture, and democracy (Moaddel 2007; Tessler, et al. 2006; Inglehart 2003; Moaddel 2003; Moaddel and Azadarmaki 2002). While these studies may seek to draw inferences from respondents’ political dispositions, questions tend to be of a general nature, and avoid specific issues that could be considered politically sensitive (e.g., Tessler, et al. 2004).

Politically sensitive surveys under repression
Politically sensitive questions have rarely been asked in repressive contexts, and when they have, data quality has been questionable. The problems of obtaining valid responses under Communism were quickly identified (Sulek 1989; Welsh 1981; Shlapentokh 1973). Independent survey research under Communist repression, for example in 1980s Poland, was clandestine (Mason 1985; Tabin 1990). The findings of these surveys may be the most likely to report sincere opinion. However, these polls seriously over-represented opposition activists, and were intended more to bolster the Solidarity cause than to provide an accurate record of Polish opinion (Kwiatkowski 1992, 360).

Recent analyses of political opinion under repression have been confined to research questions and designs that can accommodate respondent fear. Geddes and Zaller (1989), for example, analyze support for government policies as a function of political awareness at the height of Brazil’s authoritarian rule in the early 1970s. Despite the repressive nature of the regime, complete with “abductions, torture, and murder to deal with outspoken opponents” (ibid., 325), Geddes and Zaller argue that the data are not overly taxed by problems of response desirability: the interviewers, who were college-aged and anti-regime, judged most interviewees to be ‘sincere’ respondents, and interviewees judged ‘sincere’ and ‘insincere’ exhibited no differences in opposition to the regime (ibid., 326). More importantly, since their concern was with patterns rather than levels of support, distortions in reported levels of support for the government rendered response-desirability problems a non-issue for the analysis.

Similarly, Zhu and Rosen (1993) analyze individual-level causes of support for anti-government protests in China, a politically sensitive issue in the wake of the country’s 1986-87 student demonstrations. Of the three survey questions composing their dependent variable, support for protest, the two most politically sensitive—regarding the student demonstrations and the resignation of General Secretary Hu Yaobang—elicited “don’t know” responses of 34 and 44 percent, respectively. These “don’t knows” are retained in the analysis as a neutral category between support for and opposition to protests. Most recently, in Kern and Hainmueller’s (2009) analysis of the effects of Western media exposure on support for the government among East Germans, preference falsification does not pose a problem for the research design, for reasons similar to those cited by Geddes and Zaller (ibid., 381).

However, it is the exception rather than the rule when, in a repressive setting, the nature of the research question negates concerns over desirability pressures from sensitive questions. Early efforts to measure the validity of politically sensitive questions in the context of repression reflect this reality. In Poland, Sulek (1989) compared identical questions asked by the government-run opinion center (CBOS) and by an independent academic institute from 1983 to 1987. He found that the identity of the survey institution affected respondents’ willingness to voice opinions critical of the government, though this effect diminished as the Communist system unraveled in the late 1980s.3 Under perhaps more repressive conditions, in Bulgaria, Welsh (1981, 192) reports even more striking “interviewer effects”: when identical questions were asked by party officials and by associates of an academic institution, reported support for the regime was 15 percent higher among respondents answering the regime’s surveyors.

Using Surveys from Politically Repressive Environments
The overview above provides a sense of how surveys have been used in non-democracies—for better or for worse. With few exceptions, the studies cited are not specifically interested in establishing the accuracy of the surveys used, even when these surveys involve politically sensitive questions. The Foreign Service community, however, may have more of an interest in this very question: to what extent can opinion surveys on politically sensitive topics reflect public opinion in countries intolerant of political dissent? If fear-biased response is not found, or is detected and is measurable, surveys emanating from repressive states may be of special use to practitioners, and accessible for broader types of analysis than those discussed above. This section offers a ‘checklist’ to allow non-experts to assess a survey’s accuracy, at least in a preliminary way. A survey that passes these criteria may accurately reflect public opinion; a survey that fails to pass almost certainly does not reflect opinion in a meaningful sense.

1: Proper and transparent survey design
First, the survey must meet the most basic requirement of any survey research: transparency. How was the sample derived? How were respondents interviewed (face-to-face, by telephone, etc.)? Were respondents in any way compensated for participation? What is the rate of response, and what was done to encourage non-respondents to participate? What percentage of respondents was re-surveyed to ensure interviewer accuracy? Does the survey report its margin of error? Does the survey acknowledge any flaws in its methods and the likely effects of these flaws? If the method by which the survey was conducted is not transparent, it is not possible to estimate the survey’s accuracy, and any such survey should be used with caution. For example, recent politically sensitive surveys in Iran have been conducted by both telephone and face-to-face methods, and the two yield starkly different results. The telephone surveys suffer from very low response rates on the one hand and, on the other, show far more support for the regime than do surveys based on in-person interviews. The combination of a low response rate and a method likely to under-represent youth may explain the differences in results. Respondents may be less likely to express criticism of the government to a stranger over the phone than to an in-person interviewer better positioned to gain their trust. Also, the telephone surveys likely under-sample Iranian youth—the popular base of Iranian opposition politics—because these youth rely heavily on mobile phones rather than the landline phones from which the survey samples were drawn. The overall result is survey response that is likely skewed in a pro-government direction.
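To make the frame problem concrete, the sketch below computes simple post-stratification weights, one common partial remedy when a sampling frame under-represents a group such as mobile-only youth. The age brackets and shares are illustrative assumptions, not actual Iranian figures, and weighting can correct only the composition of the sample, not fear-biased answers themselves.

```python
# A minimal sketch of post-stratification weighting, assuming illustrative
# (not actual Iranian) age brackets and shares. Weighting can repair the
# composition of a sample; it cannot repair fear-biased answers.
population_share = {"18-29": 0.35, "30-49": 0.40, "50+": 0.25}  # assumed census shares
sample_share = {"18-29": 0.15, "30-49": 0.45, "50+": 0.40}      # assumed landline sample

weights = {group: population_share[group] / sample_share[group]
           for group in population_share}
print(weights)  # youth weighted up (~2.33); older respondents weighted down
```

A transparent survey report will state whether such weights were applied and how; an opaque one leaves the reader unable to judge whether the reported figures reflect the population or the frame.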

Transparency aside, the method should be one that gives the survey basic integrity: the sample should be randomly drawn from the population of interest and large enough to mirror that population with reasonable certainty. For example, a random sample of about 1,000 respondents is adequate for surveys of the adult population of the United States; the best surveys of public opinion in the Russian Federation use a sample size of about 1,600. A sample of inadequate size will lack precision and limit the analyst’s ability to make inferences about the opinions of the population. Quota-sampling techniques—which correct for undersized samples by approaching more and more potential respondents until the survey’s ‘quota’ is filled—may, however, be inappropriate in repressive settings, where one type of respondent is willing to participate and another type is not. Quota sampling in repressive contexts may inadvertently oversample regime supporters and the more outspoken regime opponents while under-representing those who oppose the regime or its policies but are fearful of voicing that opinion.
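As a point of reference, the half-width of a 95 percent confidence interval for a reported percentage can be approximated from the sample size alone. The snippet below is a minimal sketch of that calculation for a simple random sample; it assumes a proportion near 50 percent (the worst case) and ignores design effects from clustering or weighting.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion from a simple
    random sample of size n, ignoring design effects."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (1000, 1600):
    print(f"n = {n}: about +/- {100 * margin_of_error(n):.1f} percentage points")
# Roughly +/- 3.1 points for n = 1,000 and +/- 2.5 points for n = 1,600.
```

By this rough standard, the sample sizes cited above keep sampling error within a few percentage points, though the non-random errors discussed in this paper can be far larger.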

2: Survey reliability
Given the transparency and integrity of a survey’s method and execution, the next check involves the reliability of the survey results. Survey reliability can be thought of in at least three ways: demographic reliability, inter-survey reliability, and internal reliability. Demographic reliability can be assessed only when more than one wave of survey results is available. Simply put, across multiple waves of a given survey, the demographic breakdown of the sample should be consistent. Inter-survey reliability is similar: the survey of interest can be assessed for demographic consistency with other surveys, which may have been conducted using a different method (as opposed to different waves of the same survey, which use the same method). Substantively, barring intervening events that could affect public opinion in a given issue area, the distribution of responses to repeated questions should be comparable from one survey to the next—particularly when two or more surveys are conducted within a brief period of time. It is frequently possible to compare questions from different surveys, inasmuch as identical or very similar questions are often asked by different organizations. For example, several organizations now repeat or approximate questions from the ubiquitous World Values Survey. Finally, internal reliability refers to the degree to which each respondent’s answers to related survey questions are consistent. In a survey with a high degree of internal reliability, items that should correlate highly do so. For example, questions gauging respondent attitudes toward ‘the United States,’ ‘the United States government,’ and ‘the president of the United States’ should correlate highly, both at the individual level and in terms of percentage breakdowns at the aggregate level.
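Internal reliability of this kind is commonly summarized with Cronbach’s alpha, which compares the variance of individual items to the variance of their sum. The sketch below is one minimal way to compute it; the five respondents and their 1-to-5 ratings are invented for illustration only.

```python
import numpy as np

def cronbach_alpha(items):
    """Internal-reliability coefficient for a set of related survey items.
    `items` is an (n_respondents, k_items) array of numeric responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Invented 1-5 ratings of 'the United States', 'the U.S. government', and
# 'the U.S. president' from five respondents; consistent answers across the
# three items push alpha toward 1.
ratings = [[5, 4, 5], [2, 2, 1], [4, 4, 4], [1, 2, 2], [3, 3, 4]]
print(round(cronbach_alpha(ratings), 2))  # about 0.94 on these toy data
```

Values near 1 indicate that related items move together across respondents; values near 0 suggest the items do not tap a common attitude or that responses are noisy.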

3: Survey validity
Of course, the reliability of a survey is only one part of its accuracy. Survey results also must be judged in terms of validity. Survey validity can be decomposed into at least two parts: demographic validity and substantive validity. Demographic validity, as the name implies, is the degree to which the demographic makeup of the survey sample reflects the demographic makeup of the population from which the sample is drawn. Whereas demographic reliability is about consistency across survey instruments, demographic validity indicates the extent to which the sample maps onto the population it is intended to represent. Assessing it involves comparing the demographics reported in the survey results to national census figures and, especially when national census statistics are in doubt, to other sources such as the State Department Country Background Notes, the CIA World Factbook, and non-U.S. sources such as the WHO and the World Bank. Notably, significant demographic attributes are not limited to the typical categories of gender, age, marital status, urban or rural residence, education, and employment. Where possible, these comparisons should also include items such as possession of a landline telephone and Internet access and usage. Such non-traditional demographic questions are an increasingly common feature of both mass surveys and national censuses, making comparisons possible.
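One simple way to put a number on demographic validity is a goodness-of-fit test of the sample’s demographic breakdown against a census benchmark. The sketch below does this for a hypothetical age distribution; the brackets, counts, and census shares are assumptions for illustration, not figures from any actual survey.

```python
from scipy.stats import chisquare

# Hypothetical comparison of a survey's age breakdown against census figures.
observed = [180, 310, 290, 220]           # survey counts for 18-29, 30-44, 45-59, 60+
census_shares = [0.28, 0.30, 0.25, 0.17]  # assumed census proportions (sum to 1)

expected = [share * sum(observed) for share in census_shares]
stat, p_value = chisquare(observed, f_exp=expected)

# A very small p-value signals that the sample's demographic profile departs
# from the census benchmark by more than chance alone would explain.
print(f"chi-square = {stat:.1f}, p = {p_value:.4f}")
```

The same comparison can be repeated for gender, education, urban residence, or landline ownership; a consistent pattern of large departures is more telling than any single test.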

Substantive validity is the degree to which responses to politically sensitive questions reflect actual opinions and preferences. Even if the criteria for survey design, execution, reliability, and demographic validity are met, the absence of substantive validity in a politically sensitive survey means that respondents are failing to tell the truth, presumably out of fear of the governing regime. In democratic countries, the honesty of individuals responding to politically sensitive questions is, ceteris paribus, the premise of all political and policy-oriented survey research. Such honesty cannot be assumed under political repression, necessitating new tests for discerning the truthfulness of individuals’ responses to questions they may fear to answer frankly.

Tests for substantive validity are more complex than the tests summarized above, and a full series of such tests is detailed in Horne and Bakker (2009), using several politically oriented opinion surveys from Iran. In brief, tests for substantive validity involve identifying the politically oriented questions in a survey and classifying them into two categories: ‘critical questions,’ in which the respondent has the opportunity to express criticism of the regime, and ‘non-critical questions,’ in which the opportunity to express support for or opposition to the government and its policies does not exist. Having identified a survey’s critical and non-critical questions (which must, among other qualities, possess ordinal-level response categories), it is possible to compare the attributes of the two question types. Surveys possessing a high degree of substantive validity are more likely to possess critical and non-critical questions with (1) similar variances and (2) similar frequencies of non-response. Significantly less variance in critical questions suggests a possible validity problem, inasmuch as responses cluster together precisely when respondents have the opportunity to criticize the regime. That is, respondents may be ‘toeing the party line’ when the variance in responses to critical questions is narrow relative to the variance in non-critical questions. Similarly, when non-response varies dramatically between critical and non-critical questions, it is likely that a significant number of respondents are adjusting their answers to fit the expectations of the regime.
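The sketch below illustrates both comparisons on invented data: it contrasts the spread of answers to a hypothetical ‘critical’ item and a ‘non-critical’ item (using Levene’s test as one reasonable choice) and compares their item non-response rates. It is a toy illustration of the logic described above, not a reproduction of the Horne and Bakker (2009) procedure.

```python
import numpy as np
from scipy.stats import levene

# Invented ordinal responses (1 = strongly disagree ... 5 = strongly agree),
# with 0 standing in for item non-response, to one 'critical' and one
# 'non-critical' question.
critical = np.array([4, 4, 5, 4, 0, 4, 5, 0, 4, 4])      # chance to criticize the regime
non_critical = np.array([2, 5, 3, 1, 4, 2, 5, 3, 1, 4])  # no such opportunity

def nonresponse_rate(responses):
    return float(np.mean(responses == 0))

# (1) Compare the spread of substantive answers: sharply lower variance on the
#     critical item is one warning sign of respondents 'toeing the party line'.
crit = critical[critical > 0]
noncrit = non_critical[non_critical > 0]
stat, p = levene(crit, noncrit)
print(f"variance: critical = {crit.var(ddof=1):.2f}, "
      f"non-critical = {noncrit.var(ddof=1):.2f}, Levene p = {p:.3f}")

# (2) Compare item non-response: much higher non-response on critical
#     questions is a second warning sign.
print(f"non-response: critical = {nonresponse_rate(critical):.0%}, "
      f"non-critical = {nonresponse_rate(non_critical):.0%}")
```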

Conclusion: Other Measures of Public Opinion

The overarching lesson for those interested in using opinion surveys to access public opinion in non-democratic and politically repressive societies is to do so with great care. Even if past surveys were found to possess a considerable degree of reliability and validity (as Horne and Bakker [2009] find in Iranian surveys conducted through 2008), it cannot be assumed that future surveys will possess this same accuracy. Intervening events, such as a government crackdown on political dissidents, may stifle the candor of many respondents. And, as discussed above, the choice of survey method may have exaggerated effects in repressive environments. For these reasons, consumers of such surveys must approach each new survey critically, and analysis and policy recommendations based on survey data should reflect the uncertainty (statistical and otherwise) that these instruments invariably possess.

In the short term, however, the more common problem for many country analysts will be the lack of regular survey data emanating from their country of interest. How can the practitioner or academic speak about public opinion without surveys? The means of assessing opinion in the absence of surveys has already been suggested by Charles Tilly (1983, 462), with reference to the study of opinion historically, before surveys: “…through a wide variety of collective action ordinary people have left a trail of interests, complaints, demands, and aspirations that remains visible to observers who know where to look.”

Taking her cue from Tilly, Herbst (1993, 48) details no fewer than a dozen techniques (what Tilly calls repertoires of expression) for the expression and assessment of public opinion throughout history: from the oratory of ancient Greece to printing in the 16th century; the use of crowds, petitions, and salons in the 17th century; 18th-century revolutionary movements; and organized strikes, general elections, straw polls, newspapers, and letter-writing in the 19th century. The 20th century saw the introduction of mass-media political programming, refinements in crowd estimation (ibid., 133ff), and finally the first sample-based political surveys in the 1930s. In a subsequent work, Herbst (1994) argues that indicators of opinion such as these are actually capable of telling us more about the sentiment of distinct interest groups than is sample-based polling. Politically marginalized groups generally are interested in expressing themselves within the political mainstream, Herbst argues; successful groups are adept at developing forms of communication accessible to elites and other segments of society.

The argument that the repertoires of collective action constitute a means of assessing public opinion in history is equally applicable to assessing opinion under repression. Further, the study of collective action in the absence of surveys is not limited methodologically to qualitative analysis along the lines of Herbst’s work. Burstein and Freudenburg (1978) assess the impact of anti-war demonstrations on Senate votes on the Vietnam War from 1964 to 1973; Burstein (1979) again looks to demonstrations to explain variation in the passage of civil rights legislation since World War II. More recently, Burstein and Linton (2002) provide a meta-study of findings pertaining to the policy effects of political parties, interest groups, and social movement organizations.

Beyond these examples, events-data analysis can capture both types and degrees of collective action, and can do so in politically opaque parts of the world, though this method has yet to be applied to the study of political responsiveness. Operational coding of autocratic leaders’ public statements, which may vary in content and emphasis over time, may provide clear clues about the state of domestic opinion from the perspective of the governing elite. Broader proxies of public sentiment, such as economic indicators and migration patterns, may serve as useful robustness checks. Regardless of the approach taken, if empirical research on the domestic politics of the most autocratic states is to advance in the near-term, it cannot be overly wed to survey methods. A broad approach to the measure of public opinion in politically repressive countries will serve the diplomatic practitioner and academic best.

Notes

1. It is worth noting that the use of polling as a weapon for democratization is not always successful. The Czechoslovak Institute of Public Opinion conducted extensive polling after its formation in 1945; Communists seized power three years later. The Brazilian Institute of Public Opinion was established in 1946 with the reinstitution of democracy under Eurico Gaspar Dutra, though democratic institutions remained unstable and finally collapsed in 1964, giving way to military dictatorship. See Cantril and Strunk’s (1951) Public Opinion 1935-1946 for early examples of survey research by these organizations.
2. Some of the post-war survey research institutes were formed by states, while others were international affiliates of the Gallup organization (Cantril and Strunk 1951, viii).
3. Also, see Kwiatkowski’s (1992, 366-69) excellent discussion of these and related debates over survey reliability taking place in Poland in the 1980s.

References
Almond, Gabriel A. and Sidney Verba. 1963. The Civic Culture: Political Attitudes and Democracy in Five Nations. Newbury Park, CA: Sage.

Anderson, Leslie. 1994. “Neutrality and bias in the 1990 Nicaraguan Pre-election Polls: A Comment on Bischoping and Schuman.” American Journal of Political Science 38(2): 486-94.

Beltran, Ulises and Marcos Valdivia. 1999. “Accuracy and Error in Electoral Forecasts: The Case of Mexico.” International Journal of Public Opinion Research 11(2): 115-34.

Bischoping, Katherine and Howard Schuman. 1992. “Pens and Polls in Nicaragua: An Analysis of the 1990 Pre-election Surveys.” American Journal of Political Science 36(2): 331-50.

________. 1994. “Pens, Polls, and Theories: The 1990 Nicaraguan Election Revisited: A Reply to Anderson.” American Journal of Political Science 38(2): 495-99.

Burstein, Paul. 1979. “Public Opinion, Demonstrations, and the Passage of Antidiscrimination Legislation.” Public Opinion Quarterly 43(2): 157-72.

Burstein, Paul and William Freudenburg. 1978. “Changing Public Policy: The Impact of Public Opinion, War Costs, and Anti-War Demonstrations on Senate Voting on Vietnam War Motions, 1964-1973.” American Journal of Sociology 84: 99-122.

Burstein, Paul and April Linton. 2002. “The Impact of Political Parties, Interest Groups, and Social Movement Organizations on Public Policy.” Social Forces 81: 380-408.

Cantril, Hadley and Mildred Strunk. 1951. Public Opinion 1935-1946. Princeton, NJ: Princeton University Press.

Chen, Jie. 1999. “Comparing Mass and Elite Subjective Orientations in Urban China.” Public Opinion Quarterly 63: 193-219.

Geddes, Barbara and John Zaller. 1989. “Sources of Popular Support for Authoritarian Regimes.” American Journal of Political Science 33(2): 319-47.

Gollin, Albert E. 1992. “Public Opinion Research as Monitor and Agency in Revolutionary Times: Editor’s Introduction.” International Journal of Public Opinion Research 4(4): 299-301.

Grushin, Boris. 1993. “Pochemu Nelzia Verit Bolshinstvu Oprosov, Provodimykh V Byvshem SSSR.” Nezavisimaia Gazeta, October 28.

Herbst, Susan. 1993. Numbered Voices: How Opinion Polling Has Shaped American Politics. Chicago: University of Chicago Press.

________. 1994. Politics at the Margin: Historical Studies of Public Expression Outside the Mainstream. Cambridge, UK: Cambridge University Press.

Horne, Cale and Ryan Bakker. 2009. “Public Opinion in an Autocratic Regime: An Analysis of Iranian Public Opinion Data 2006-2008.” Paper presented at the Midwest Political Science Association Annual Convention, Chicago, Illinois, March 2009.

Inglehart, Ronald (ed.). 2003. Human Values and Social Change: Findings from Values Surveys. Leiden, Netherlands: Brill.

Kern, Holger Lutz and Jens Hainmueller. 2009. “Opium for the Masses: How Foreign Media Can Stabilize Authoritarian Regimes.” Political Analysis 17: 377-99.

Kwiatkowski, Piotr. 1992. “Opinion Research and the Fall of Communism: Poland 1981-1990.” International Journal of Public Opinion Research 4(4): 358-74.

Mason, David S. 1985. Public Opinion and Political Change in Poland, 1980-1982. Cambridge, UK: Cambridge University Press.

Millard, William J. 1989. “The USIA Central American Surveys.” Public Opinion Quarterly 53: 134-35.

Miller, William L., Stephen White, and Paul Heywood. 1996. “Twenty-five Days To Go: Measuring and Interpreting the Trends in Public Opinion During the 1993 Russian Election Campaign.” Public Opinion Quarterly 60: 106-27.

Moaddel, Mansoor. 2003. “Public Opinion in Islamic Countries: Survey Results.” Footnotes 31(1): 1-7.

________. 2007. Values and Perceptions of the Islamic Publics: Findings from Values Surveys. New York: Palgrave.

Moaddel, Mansoor and Taghi Azadarmaki. 2002. “The Worldviews of Islamic Publics: The Cases of Egypt, Iran, and Jordan.” Comparative Sociology 1(3/4): 299-319.

Noelle-Neumann, Elisabeth. 1993. The Spiral of Silence: Public Opinion—Our Social Skin. Chicago: University of Chicago Press.

Norris, Pippa and Ronald Inglehart. 2004. Sacred and Secular. Cambridge, UK: Cambridge University Press.

Petrenko, Elena and Alexander Olson. 1994. “Predskazuiema Li Politicheskaia Situatsia V Rossii.” Moskovie Novosti, March 27.

Raghubir, Priya and Gita Venkataramani Johar. 1999. “Hong Kong 1997 in Context.” Public Opinion Quarterly 63: 543-65.

Shlapentokh, Vladimir. 1973. The Empirical Validity of Sociological Information. Moscow: Statistika.

________. 1994. “The 1993 Russian Election Polls.” Public Opinion Quarterly 58: 579-602.

Sieger, Karin. 1990. “Opinion Research in East Germany: A Challenge to Professional Standards.” International Journal of Public Opinion Research 2(4): 323-44.

Slider, Darrell. 1985. “Party-Sponsored Public Opinion Research in the Soviet Union.” Journal of Politics 47(1): 209-27.

Suleiman, Michael W. 1987. “Challenges and Rewards of Survey Research in the Arab World: Problems of Sensitivity in a Study of Political Socialization.” In Survey Research in the Arab World by Mark A. Tessler, Monte Palmer, Tawfic E. Farah, and Barbara Lethem Ibrahim. Boulder, CO: Westview Press.

Sulek, A. 1989. “O rzetelnosci i nierzetelnosci badan sondazowych w Polsce. Proba analizy empirycznej.” Kultura i Spoleczenstwo 1: 23-49.

Swafford, Michael. 1992. “Sociological Aspects of Survey Research in the Commonwealth of Independent States.” International Journal of Public Opinion Research 4(4): 346-57.

Tabin, Marek. 1990. “Podziemne badania ankietowe w Polsce.” Kultura i Społeczeństwo 34(1): 203-11.

Tessler, Mark. 1987. “Introduction: Survey Research in Arab Society.” In Survey Research in the Arab World by Mark A. Tessler, Monte Palmer, Tawfic E. Farah, and Barbara Lethem Ibrahim. Boulder, CO: Westview Press.

________. 2002. “Islam and Democracy in the Middle East: The Impact of Religious Orientations on Attitudes toward Democracy in Four Arab Countries.” Comparative Politics 34(3): 337-54.

Tessler, Mark, Carrie Konold, and Megan Reif. 2004. “Political Generations in Developing Countries: Evidence and Insights from Algeria.” Public Opinion Quarterly 68(2): 184-216.

Tessler, Mark, Mansoor Moaddel, and Ronald Inglehart. 2006. “Getting to Arab Democracy: What Kind of Democracy Do Iraqis Want?” Journal of Democracy 17(1): 38-50.

Tilly, Charles. 1983. “Speaking Your Mind Without Elections, Surveys, or Social Movements.” Public Opinion Quarterly 47(4): 461-78.

Welsh, William A. 1981. Survey Research and Public Attitudes in Eastern Europe and the Soviet Union. New York: Pergamon Press.

Zhu, Jian-Hua and Stanley Rosen. 1993. “From Discontent to Protest: Individual-Level Causes of the 1989 Pro-Democracy Movement in China.” International Journal of Public Opinion Research 5(3): 234-49.

Cale Horne is an assistant professor of political studies at Covenant College, Lookout Mountain, GA. He specializes in the politics of autocratic states and recently completed a postdoctoral fellowship at the International Center for the Study of Terrorism at Penn State University. Dr. Horne can be contacted at cale.horne@covenant.edu.