QB, I'm not insinuating that, and I agree that people should actually look at the evidence themselves. However, I think it is completely ridiculous for you to expect a layperson to be able to assess the validity of EXTREMELY complicated climate modelling techniques. We can look at the macro-detail and assess for ourselves precisely how the project has been carried out, but we NEED to rely on executive summaries to explain the complexities for us, and to validate the modelling techniques used.
I don't understand this reasoning - lay people shouldn't be expected to understand the science, but they can be allowed to make sweeping remarks about its validity en masse? My problem is that while a summary is a nice thing to have, it must:
- address any shortcomings of the models used
- put into context the results and uncertainties therein
- consider the results in the context of the hypothesis of the study.
The report you quote fails on several counts here - results are quoted without reference to what was actually measured or under which hypothesis, and without any contextual uncertainties. Half of this isn't the researcher's fault - these studies are usually asking 'is climate change happening?', and a study like that tends to say little more than 'yes'. Conclusive? Hardly. The question "is climate change happening?" is almost useless in this context.
The correct question should be "is the global climate (whatever that is defined to be) doing something abnormal?" This is an enormously difficult question to pose, since we have no idea what 'normal' is for the climate. This is where the models come into play, but there's no mention of what these models are modelling, how good they are at it, whether they are the only models of their kind, or whether they're just 'climate change' models built to predict sea-level rises 8 times out of 10. People seem quite happy to reason that if this is such a difficult question, then anyone who has an answer must have done a damn fine job of it. Bullshit.
It's certainly true that not everyone can understand the science, but I say if you can't swim, get out of the pool. Science doesn't have to be for everyone, but if someone can't see that a model predicting a 1m/century rise in sea levels based simply on the volume of ice melting is perhaps not particularly sophisticated, then I don't want them claiming to be an authority on the subject.
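To make the point concrete, here is a deliberately naive sketch of the kind of model being criticised: melted ice volume spread evenly over the ocean surface. The melt volume and ocean area used are illustrative assumptions, not figures from any study; a serious model would also have to account for thermal expansion, floating ice, basin geometry, and regional variation.

```python
# A back-of-the-envelope "ice volume" sea-level model. Deliberately naive:
# it ignores thermal expansion, ice that is already floating, ocean-basin
# geometry, isostatic rebound, and regional variation.

OCEAN_AREA_M2 = 3.6e14  # approximate global ocean surface area, m^2

def naive_sea_level_rise(melted_ice_volume_m3: float) -> float:
    """Sea-level rise (metres) if the melt spreads evenly over the ocean."""
    return melted_ice_volume_m3 / OCEAN_AREA_M2

# Hypothetical melt volume, chosen purely for illustration:
rise = naive_sea_level_rise(3.6e12)  # 3.6e12 m^3 of melt
print(f"{rise * 100:.1f} cm")  # prints "1.0 cm"
```

The arithmetic is a one-line division; that is exactly why a summary that reports such a model's output without describing the model itself tells you very little.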
What I'm getting at is that I think QB is kidding himself if he thinks anyone without training in complex statistics and mathematical modelling techniques can assess whether someone's climate modelling methodology is solid. I struggle with long division FFS.
I wouldn't expect someone to be able to front up and perfectly understand complicated passages from Deleuze or Derrida without some basic background in continental philosophy, so why should you expect me to be able to grasp the ins and outs of climate science without a basic background in science-based disciplines?
Exactly. People who don't understand enough about statistics and modelling shouldn't be making value judgments on research. Simple.
Using your analogy, if I were to flip through Dissemination, would I then be qualified to tell someone else whether or not a particular piece properly addresses the mind/body dichotomy? I think not. Yet it's apparently appropriate for people to see this review (even if they didn't read it), take it as gospel that the results are unambiguous, and pass that conclusion on to others.
I'm constantly told I shouldn't question those who know better. Bullshit! The whole point of science is to question and remain sceptical.
Well, not the point of, but the method by which it improves.