Re: This could've been an interesting take on the story...
Granted, it's about the difference in ratios, but when only 0.9% of your sample believe the moon landings are a hoax, and 6 out of those 10 also agree with the author that burning fossil fuels over the last fifty years has measurably increased atmospheric temperature, then you're on very dodgy ground making your headline conclusion that AGW-skeptics believe the Moon landings were faked. More of the hoax believers accept that CO2 raises temperatures than reject it.
Now what you're saying is that the data can still support that conclusion if you look at the ratios. I.e. if 40% of people who think CO2 has no effect disbelieve the Moon landings, but only 20% of people who think CO2 does have an effect do, then CO2-skeptics are more likely than believers to doubt the Moon landings.
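To make that ratio argument concrete, here's a quick sketch. The counts below are invented for illustration (built from the hypothetical 40%/20% figures above), not the survey's actual data:

```python
# Hypothetical counts, NOT the survey's actual data - illustrative only.
# Suppose 100 respondents reject AGW and 1000 accept it.
skeptics = {"total": 100, "moon_hoax": 40}     # 40% of skeptics doubt the landings
believers = {"total": 1000, "moon_hoax": 200}  # 20% of believers doubt them

# Conditional proportions: P(hoax belief | AGW view)
p_hoax_given_skeptic = skeptics["moon_hoax"] / skeptics["total"]    # 0.4
p_hoax_given_believer = believers["moon_hoax"] / believers["total"] # 0.2

# Yet most hoax believers (200 of 240) still accept AGW, because
# believers vastly outnumber skeptics in the sample.
share_of_hoaxers_accepting_agw = believers["moon_hoax"] / (
    skeptics["moon_hoax"] + believers["moon_hoax"]
)
print(p_hoax_given_skeptic, p_hoax_given_believer, round(share_of_hoaxers_accepting_agw, 2))
```

On these made-up numbers, both statements hold at once: skeptics are twice as likely to be hoax believers, and most hoax believers accept AGW. Which framing you lead with is exactly the rhetorical choice at issue.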
That's potentially true, but there are some serious problems with it. The first is that the sample size actually is too small. You say that 1100 responses is reasonable, but that is the number of total responses. The number of applicable responses, as any good statistician would immediately point out, depends on which question you are asking - in this case, whether belief that the moon landings were faked correlates with disbelief in AGW - and for that question the sample is just ten people.

I'll illustrate. If I ask a thousand people to rate Lady Gaga as a musician, Good or Bad, that could be a decent sample size (we'll ignore selection bias for now, just as the author of this study has). But if within that group there are 10 Muslims and 7 of them say she's Good, then whilst my sample for how "people" think might be 1,000 respondents, my sample for what Muslims think is ten. Not 1,000. Ten. The reason is that there are around 2bn Muslims in the world and ten of them is not a representative sample. Think of it as a Venn diagram where whatever you want to test has to fall into the intersection of both groups. The moment you start restricting the set of people you're talking about, you have to start discarding some of your responses. That's a slight simplification, but good enough for non-statisticians and essentially true. Nothing really changes that basic principle.
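A quick way to see just how little ten respondents can tell you: put a 95% confidence interval around the 6-out-of-10 figure. This sketch uses the standard Wilson score interval (a textbook formula, not anything from the paper itself):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# 6 of the 10 moon-hoax believers also accepted AGW.
lo, hi = wilson_ci(6, 10)
print(f"{lo:.2f} to {hi:.2f}")  # roughly 0.31 to 0.83
```

With n=10, "60% of hoax believers accept AGW" is statistically compatible with anything from about a third to over four fifths, which is to say it tells you almost nothing.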
The sample size really is too small to support the conclusions drawn from it, and the way the study is presented goes nowhere near telling people that only 0.9% of respondents believed the Moon landings were faked, and that of those, 60% actually believed that CO2 increased global temperatures. The stated conclusions and headline are horribly misleading, to the point that I call them wilfully disingenuous.
And the whole thing is flawed from the outset not just because of the framed questions and ropey analysis, but because it has the selection bias from Hell.
Disraeli said there were "lies, damned lies and statistics". I wouldn't even dignify this paper with that last one, just the first two. It knew what it wanted to prove and, by Jupiter, it would do so! I wonder how many of the commentators here actually have the survey and the results spreadsheet open and have looked at them. I have, and it's rubbish. If I can poke holes in it, then any competent statistician would (and will) rip it to shreds. This thing is a blot upon the whole field of statistics. Sorry for working myself up into a rant, but however cynical I may become, an academic should be better than this.