07 June 2006

Interpreting the IPS voter survey

While I have my doubts about how representative the survey was, it nonetheless gave some useful findings. Relative to other concerns, many voters did think that having opposition parties represented in Parliament was among the most important issues. Termed 'Pluralists' by the researchers, these voters were more likely to be found among the middle and upper classes. What does this mean for opposition parties' programmes and campaigning styles? Full essay.


BL said...

Hi Yawning Bread,

On the statistical survey issue, you are right that the sample size of 985, though small, can still support the margin of error claimed for the poll. I don't dispute that. Dansong also wrote a piece that deals with this issue of statistical analysis.

However, the survey questions do not seem to allow us to systematically tease out interesting trends in behavioural patterns. My guess is that the IPS researchers did their study by asking whether the following classifications change the voting pattern:

1. Ethnicity
2. Pre and Post 65-ers

Of course, my concern is not with the sample size but with the margin of error and systematic bias arising from the survey framework. Still, the law of large numbers will smooth out these problems in two ways: one, by evening out the statistical bias, and two, by reducing the margin of error.
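(Editor's note: the commonly quoted ±3% margin of error for a sample of about 1,000 comes from the worst-case formula for an estimated proportion. A quick illustrative sketch in Python, not part of the survey's own methodology:)

```python
import math

def worst_case_moe(n, z=1.96):
    """Worst-case 95% margin of error for an estimated proportion.

    Uses p = 0.5, which maximises p * (1 - p), giving the most
    conservative (largest) margin for a simple random sample of size n.
    """
    return z * math.sqrt(0.5 * 0.5 / n)

# For the IPS sample of 985 respondents:
print(round(worst_case_moe(985) * 100, 1))  # about 3.1 percent
```

This is why "±3% at 95% confidence" is the stock figure for polls of roughly 1,000 people, but note that it assumes simple random sampling with no systematic bias, which is exactly the assumption being questioned here.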

Button said...

I'm rather impatient and could not read such wordy articles. However, the first table got me wondering what 'Household income' means. If an individual is being interviewed, why is it called household income? If it's a household, then why is there an age group?

Yawning Bread Sampler said...

Hi bl -

Neither would I dispute that large samples can even out bias in the sampling procedure, though I've also seen surveys where the bias was so inherent that no sample, however large, could overcome it.

For example, I've come across surveys asking people about their sexual orientation and sex lives without anonymity! How many men are going to tell the researchers that they cheated on their wives four times a year on average? How many married women with kids are going to admit that they have lesbian fantasies?

For other kinds of bias that aren't too sensitive, e.g. opinions that correlate with ethnicity, occupation or income, a large sample size should be able to take care of them.

The question before us is, in the matter of political surveys in Singapore, how sensitive are political opinions?

In this survey, was the bias an outcome of methodology or merely of limited sample size? I think this case simply demonstrates how immature Singapore is politically; we have very little experience in political polling!

Anonymous said...

voters immature? maybe, but here the survey design was immature

dansong said...

Hi Yawning Bread,

Great nuanced, informed and interpretive analysis! Citizen journalism indeed.

Of course, the Straits Times and the IPS, the latter being a government-linked think tank, cannot but project certainty to a bureaucracy that thinks in terms of absolutes rather than probabilities, and unequivocal facts rather than contingent interpretations. The ST rejected a very neutral letter (compared to my blog post) that I wrote explaining how the margins of error make the conclusions claimed in its report misleading.

Admitting this and informing the public of the probabilistic nature of its claims not only makes for poor newsworthiness (imagine a headline: 'Cost of living not main concern, maybe') but would also damage the credibility of both the ST and the IPS in consolidating and representing public opinion to the bureaucracy. I agree with you that the ST does not deliberately set out to misinform. I think the mass media and think tanks here are not so much propaganda machines as data and opinion collection instruments for a government that has lost touch with the ground. A sort of surrogate grassroots with an expert veneer, so to speak.

You said that the margin of error for a survey of close to 1,000 respondents would be 3% at a 95% confidence level, assuming no systematic bias. This is not quite correct, since the margin of error and confidence level are calculated using the standard deviation around the average of the results. This means that the greater the variation in the answers (how spread out they are), the greater the margin of error.

For example (very simplified), in a survey of 10 people using a (1) to (5) scale like the IPS survey, with an average score of (3), an answer set with 2 persons responding at each point on the scale would have a greater margin of error than an answer set with 3 persons answering (2), 4 answering (3) and 3 answering (4). The first result is more spread out than the second. See www.robertniles.com/stats/stdev.shtml
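(Editor's note: dansong's toy example can be checked numerically. A short sketch, where the two response sets are the hypothetical distributions described above:)

```python
import math

def margin_of_error(responses, z=1.96):
    """Approximate 95% margin of error for the mean of scale responses.

    Uses the sample standard deviation (n - 1 in the denominator)
    divided by sqrt(n) to get the standard error of the mean.
    """
    n = len(responses)
    mean = sum(responses) / n
    var = sum((x - mean) ** 2 for x in responses) / (n - 1)
    return z * math.sqrt(var / n)

spread_out = [1, 1, 2, 2, 3, 3, 4, 4, 5, 5]   # two people per scale point
clustered  = [2, 2, 2, 3, 3, 3, 3, 4, 4, 4]   # bunched around (3)

print(margin_of_error(spread_out))  # larger: the answers vary more
print(margin_of_error(clustered))   # smaller: the answers vary less
```

Both sets average 3.0, yet the spread-out set yields a margin of error nearly twice as large, which is exactly the point: without knowing the spread of the actual responses, the survey's headline figures cannot be assigned a margin of error.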

I don't mean to pick a technical bone for the sake of math-nerd nitpicking, but this really does matter. The margin of error could be anything from 1% to 10%, and that makes the IPS survey almost impossible to read or interpret unless we know what it is.

And I don't think this nullifies your interpretation; we have to make do with what we have, as long as those in and with power are not wholly transparent. But a big MAYBE must be attached to any interpretation. One thing is for sure (well, maybe just a tiny subatomic maybe for this one): 'Upgrading' (with a 20% gap to the next issue) is the last thing on the list of voter concerns.

Anonymous said...

Hi Yawningbread,

I came across your posting on the IPS voter survey some time ago. However, due to limited access to the Internet, I do not for the moment have a blog, so I wanted to give my views about the survey through your site. However, I notice that you have trumped me on that score, as your analysis drew the same conclusion as mine.

To avoid labouring over the same views, I just want to add a few points, which perhaps you might be able to clarify.

I don't know if this is a fair comment, but somehow I get the impression that in Singapore the media and supposedly reputable institutions, such as the IPS and university economics departments, seem to have little grasp of statistics.

Take the IPS survey, for example: I would have thought such an institution would have noticed the glaringly leading survey results it was churning out. As you said, asking people whether "efficient government" is a worthy issue for consideration seems so leading that anyone in their right mind would find the result suspect. This is akin to conducting a survey to find out the proportion of telephone owners via telephone interviews.

Would it not have been more worthwhile to survey what people understood by the term "efficient government"? (Rhetorical question.) For instance, they could have structured the question as: "Is an efficient government one that can create jobs?" and thrown in a few more like: "Do you believe an efficient government is one that creates jobs itself, or one that makes conditions good for creating jobs?"

Questions such as these, phrased differently, can better reveal what is in a person's mind. Certainly better than asking a bald question like "Is an efficient government a good or bad thing?" (my paraphrasing).

Granted, such a detailed exercise is an expensive affair. But for an institution dedicated to public affairs, I would have thought such surveys would be an ongoing effort, not one focused only on this election. Isn't it? (Again rhetorical.) It seems strange to hear cost cited as a constraint on the exercise.

The IPS, in my observation, is not the only one that seems to slip up in interpreting statistical data. Other institutions, particularly the media in Singapore, seem afflicted with a similar problem. For example, one survey that is often trumpeted is PERC's survey on the Singapore judiciary. Does it not seem odd that the survey was based on respondents based in Singapore? Would the respondents have been inclined to respond differently otherwise?

Food for thought.