Sampling

A key aim of this project is to contribute to the understanding of who participates in political violence. Thus, we want to examine what characteristics set activists who are prone to political violence apart from other people in Thailand. To enable the best possible test of our theoretically derived expectations, we needed a sample with variation in the dependent variable as well as in the potential explanatory variables. We assumed that high values on our dependent variable, “participation in political violence,” would generally be rare in Thailand, as in most societies. We chose not to rely on probability sampling alone, as we were concerned that this method would not capture enough people with such extreme behavior. However, the fact that such people may be uncommon should not be confused with the claim that they are unimportant. On the contrary, a very small group of violent-minded individuals can constitute the driving force behind a conflict turning violent. Thus, although such people are (likely to be) rare, they are also (likely to be) influential. We therefore decided to collect two sets of data: first, a cluster survey of 200 respondents who are politically active as either red-shirts or yellow-shirts; and second, a nationally representative sample of 1,000 respondents with a slight oversampling in the Deep South (Yala, Narathiwat, Pattani, and Songkhla provinces). The reason for drawing special samples of politically active and possibly radical red-shirt and yellow-shirt members is to obtain larger variation in the rare political violence that we aim to investigate. We use the nationally representative sample to investigate whether findings from the activist samples also hold in the nationwide sample, where experiences of political violence are much rarer.

The representative survey of the national population was conducted by selecting one thousand interviewees using multistage random sampling, which consisted of four stages: (1) regional sampling, (2) district sampling, (3) sub-district sampling, and (4) household sampling. In total, 37 of the 76 provinces in Thailand were surveyed. People on the household lists were contacted via their house addresses and names. The interviewer teams went to the selected village/neighborhood and knocked on the interviewee’s door, sometimes with the assistance of the village headman, who would introduce the interviewer to the potential interviewee. With the exception of one section of the survey, the interviews were conducted face-to-face in the interviewee’s home, without the presence of onlookers. KPI’s teams have conducted national surveys in this way for about a decade.
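The four stages described above can be sketched in code. This is a minimal illustration only: the nested sampling frame, the region and district names, and the equal-probability draws are all hypothetical, and the study’s actual frame and selection probabilities are not specified here.

```python
import random

# Hypothetical nested sampling frame: region -> district -> sub-district
# -> household IDs. All names and counts are illustrative, not the
# study's actual frame.
frame = {
    "North": {
        "DistrictA": {"SubA1": ["hh01", "hh02", "hh03"],
                      "SubA2": ["hh04", "hh05"]},
    },
    "Northeast": {
        "DistrictB": {"SubB1": ["hh06", "hh07", "hh08", "hh09"]},
    },
}

def draw_household(frame, rng):
    """One draw: pick a region, then a district within it, then a
    sub-district, then a household from that sub-district's list.
    (A real design would also control how many clusters are drawn at
    each stage and sample without replacement.)"""
    region = rng.choice(sorted(frame))
    district = rng.choice(sorted(frame[region]))
    sub = rng.choice(sorted(frame[region][district]))
    household = rng.choice(frame[region][district][sub])
    return (region, district, sub, household)

rng = random.Random(42)
sample = [draw_household(frame, rng) for _ in range(5)]
```

Each draw follows one path down the hierarchy, mirroring how a multistage design confines each later stage to the clusters selected earlier.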

To protect the integrity of the interviewee and to minimize social desirability bias (i.e., the tendency of survey respondents to answer in a socially favorable manner), sensitive questions on personal experiences of violence and the personal use of violence were asked in a self-administered section of the survey. For these questions, the interviewee filled out the questionnaire him- or herself, after which this numbered and removable part of the survey was placed in an envelope, which was sealed and deposited in a closed box. It is important to point out that socially desirable responding usually refers to the tendency to underreport socially undesirable attitudes or behaviors. However, given the theory of honor ideology proposed in this study, we may also suspect that men with high levels of honor ideology over-report the use of violence in order to present themselves as more ‘manly’. Self-administration mitigates both tendencies: the possible incentive for endorsers of honor ideology to brag about using violence should be greatly reduced, since not even the enumerator knows the respondent’s answers.

Part of the reason for using a self-administered section was also to reduce the potential problem of missing data due to item nonresponse. We reasoned that respondents would be less uncomfortable admitting to having participated in political violence if the enumerator did not know their answers. This strategy appears to have been successful: 95% or more of the respondents answered the questions about their use of violence.

The two hundred political activist interviewees—100 red-shirts and 100 yellow-shirts—were chosen by purposive sampling, with 20 interviewees in each of 10 provinces that were considered to be either red or yellow strongholds. By interviewing a few (20) people in each of several (10) provinces, we hoped to obtain a reasonably general and broad picture of political activists, despite the small number. In each province, the KPI local survey coordinator contacted active red- or yellow-shirts, and snowball sampling was then used to recruit interviewees. From the first interviewee onward, the following question was asked in order to reach the next interviewee: “Can you introduce me to someone who is an active red/yellow?” To assess a potential interviewee’s degree of red-shirt or yellow-shirt activity, they were asked whether they had participated in red/yellow riots, particularly at Ratchaprasong (in April/May 2010) and at the Don Mueang and Suvarnabhumi airports (in 2008). Each subsequent interviewee was contacted via an introduction by the previous interviewee, in whatever way that previous interviewee deemed appropriate. We thus used ten different starting points in the chain of referrals for the yellow-shirt activists, and ten for the red-shirt activists, thereby reducing the risk that any single referral in the chains becomes decisive for the resultant sample.
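The referral-chain logic above can be sketched as follows. The referral network, seed names, and chain length are all synthetic and purely illustrative; in the study, each referral was an in-person introduction, not a lookup in a stored graph.

```python
import random

# Synthetic referral network: who each person could introduce next.
# Names are hypothetical placeholders.
referrals = {
    "seed1": ["a1", "a2"], "a1": ["a3"], "a2": ["a4", "a5"],
    "seed2": ["b1"], "b1": ["b2"], "b2": [],
    "a3": [], "a4": [], "a5": [],
}

def snowball(seeds, referrals, per_chain, rng):
    """From each independent seed, follow one referral at a time until
    the chain reaches per_chain interviewees or no further referral is
    offered. Multiple seeds keep any single referral from dominating
    the resulting sample."""
    chains = []
    for seed in seeds:
        chain, current = [seed], seed
        while len(chain) < per_chain:
            candidates = [r for r in referrals.get(current, [])
                          if r not in chain]
            if not candidates:
                break  # chain ends early if no new referral is offered
            current = rng.choice(candidates)
            chain.append(current)
        chains.append(chain)
    return chains

rng = random.Random(0)
chains = snowball(["seed1", "seed2"], referrals, per_chain=3, rng=rng)
```

Running several short chains from independent starting points, as in the design described above, trades chain depth for breadth across strongholds.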