Our vision is to provide a home to sincere 9/11 researchers free from biased moderation and abusive tirades from other members.
David B. Benson wrote:femr2 --- Just some of the comments (by knowledgeable people) on Real Climate.
Most of you are probably already aware that recently someone managed to hack into the computer system at CRU (the Climatic Research Unit in Great Britain). They stole over 60 megabytes of personal emails, which were posted online.
The denialosphere has trumpeted the contents as proof of the fraudulent behavior of climate scientists, especially Phil Jones at CRU. But what’s most remarkable is that even the bits pointed to as a “smoking gun” really don’t support that idea. There are certainly phrases which seem incriminating when taken out of context — but when put into context are nothing of the kind.
Continuing to suggest that climate scientists generally, and Phil Jones specifically, are engaged in a conspiracy to deceive the world about global warming, when there turns out to be no real evidence of it in 10 years of personal communications (only words that can be twisted when taken out of context), demonstrates the idiocy of those who stand by that suggestion. If anything, the messages prove that there is not any conspiracy, and the scientists at CRU did not fudge data or engage in deceptive practices to push their “agenda.”
Certainly the emails contain some unkind words about certain people. I’ve said unkind things about some of them myself (here on this blog for all to see). In my opinion, the unkind words were earned by the loathsome recipients.
Perhaps the most enlightening revelation to come out of this sordid episode is how Gavin Schmidt (at RealClimate) has addressed the issue head-on but avoided any temptation to indulge in mud-slinging, even in the midst of this despicable invasion of privacy, unjustified by any of the contents of the messages. His conduct is exemplary, and illustrates a character and self-control that I can only envy. My respect for him knows no bounds.
My disrespect for the thieves is likewise unbounded. They stole private communications, found nothing damning, but proved how willing — nay, eager — they are to distort things to make it seem as though they did. It’s every bit as immature and vindictive as stealing your sister’s diary and posting it on the internet. If she’d confessed to murder, there might be a reason to bring that to light, but when the worst you can find is that she said “I hate that bitch,” you have no business making her private thoughts public.
OneWhiteEye wrote:I hope they weren't using this software for anything:
femr2 wrote:OneWhiteEye wrote:I hope they weren't using this software for anything:
Crikey. To be honest, I imagine home-grown code mash like this is used extensively. I haven't gone through it in full, but the written (dialogue) language is very consistent with the discussions contained within the emails, especially in terms of treating the insertion and removal of differing data-sets. It needs a fine-tooth comb and proper code analysis. There's nothing intrinsically wrong with using small codebase segments that have evolved out of *whatever* - it happens all the time in most organisations, in the background - but I don't think anyone would ever dream of using such informal processes to produce high-profile reports, conclusions, and the world-changing implications of those within the CRU's scope.
If so, staggering. Will take a bit of time to go through.
You can't imagine what this has cost me - to actually allow the operator to assign false WMO codes!! But what else is there in such situations? Especially when dealing with a 'Master' database of dubious provenance (which, er, they all are and always will be).

False codes will be obtained by multiplying the legitimate code (5 digits) by 100, then adding 1 at a time until a number is found with no matches in the database. THIS IS NOT PERFECT, but as there is no central repository for WMO codes - especially made-up ones - we'll have to chance duplicating one that's present in one of the other databases. In any case, anyone comparing WMO codes between databases - something I've studiously avoided doing except for tmin/tmax, where I had to - will be treating the false codes with suspicion anyway. Hopefully.
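The false-code scheme described above (multiply the legitimate 5-digit code by 100, then increment until the result collides with nothing in the database) can be sketched as follows. This is a minimal illustration, assuming the existing codes are held in a simple set; the actual CRU tooling is Fortran, not Python.

```python
def make_false_wmo(legit_code: int, existing: set) -> int:
    """Derive an unused pseudo-code from a 5-digit WMO code.

    Multiply the legitimate code by 100, then add 1 at a time
    until a value is found with no match in the database.
    """
    candidate = legit_code * 100
    while candidate in existing:
        candidate += 1
    return candidate

# Hypothetical database contents for illustration only.
existing = {380100, 380101, 1234500}
print(make_false_wmo(3801, existing))   # 380102 (380100 and 380101 taken)
print(make_false_wmo(12345, existing))  # 1234501 (1234500 taken)
```

As the quoted text admits, this gives no guarantee of global uniqueness: another database may independently hold the generated number, which is exactly the duplication risk being "chanced".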
Of course, option 3 cannot be offered for CLIMAT bulletins, there being no metadata with which to form a new station.
This still meant an awful lot of encounters with naughty Master stations, which really I suspect nobody else gives a hoot about. So with a somewhat cynical shrug, I added the nuclear option - to match every WMO possible, and turn the rest into new stations (er, CLIMAT excepted). In other words, what CRU usually do. It will allow bad databases to pass unnoticed, and good databases to become bad, but I really don't think people care enough to fix 'em, and it's the main reason the project is nearly a year late.
Oh, sod it. It'll do. I don't think I can justify spending any longer on a dataset, the previous version of which was completely wrong (misnamed) and nobody noticed for five years.
not harry, probably tim wrote:Bear in mind that there is no working synthetic method for cloud, because Mark New lost the coefficients file and never found it again (despite searching on tape archives at UEA) and never recreated it. This hasn't mattered too much, because the synthetic cloud grids had not been discarded for 1901-95, and after 1995 sunshine data is used instead of cloud data anyway.
AGREED APPROACH for cloud (5 Oct 06).

For 1901 to 1995 - stay with published data. No clear way to replicate process as undocumented.

For 1996 to 2002:
1. convert sun database to pseudo-cloud using the f77 programs;
2. anomalise wrt 96-00 with anomdtb.f;
3. grid using quick_interp_tdm.pro (which will use 6190 norms);
4. calculate (mean9600 - mean6190) for monthly grids, using the published cru_ts_2.0 cloud data;
5. add to gridded data from step 3.

This should approximate the correction needed.
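The baseline arithmetic behind steps 2-5 can be sketched in a few lines. This is an illustrative toy with assumed values, not the real chain (which runs through anomdtb.f and quick_interp_tdm.pro): anomalies computed against the 1996-2000 mean but gridded onto 1961-90 normals sit on the wrong baseline, and adding (mean9600 - mean6190) restores it.

```python
import numpy as np

# Step 1 result: pseudo-cloud values (%) for three months (assumed numbers).
pseudo_cloud = np.array([60.0, 62.0, 61.0])

# Step 2: anomalise with respect to the 1996-2000 mean.
mean_9600 = pseudo_cloud.mean()
anoms = pseudo_cloud - mean_9600

# Step 3: gridding re-expresses the anomalies against 1961-90 normals.
mean_6190 = 58.0          # published 61-90 normal (assumed value)
gridded = mean_6190 + anoms

# Steps 4-5: add the baseline difference taken from the published
# cru_ts_2.0 grids, correcting the mismatched normals.
corrected = gridded + (mean_9600 - mean_6190)

print(corrected)  # recovers the original pseudo-cloud values
```

In this simplified one-cell case the correction recovers the inputs exactly; on real grids the two means come from different datasets, so it only approximates the needed correction, as the quoted note says.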
So, erm.. now we need to create our synthetic cloud from DTR. Except that's the thing we CAN'T do, because cal_cld_gts_tdm.pro needs those bloody coefficients (a.25.7190, etc) that went AWOL. Frustratingly, we do have some of the outputs from the program (i.e., a.25.01.7190.glo), but that's obviously no use.
So, erm. We need synthetic cloud for 2003-2007, or we won't have enough data to run with. And yes it's taken me this long to realise that. Oh, bugger.
Another problem. Apparently I should have derived TMN and TMX from DTR and TMP, as that's what v2.10 did and that's what people expect. I disagree with publishing datasets that are simple arithmetic derivations of other datasets published at the same time, when the real data could be published instead.. but no.
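The "simple arithmetic derivation" being objected to can be made concrete. Assuming TMP is taken as the mid-point of the diurnal range (an assumption for illustration; the v2.10 convention may differ in detail), TMN and TMX follow directly from TMP and DTR:

```python
def derive_tmn_tmx(tmp: float, dtr: float):
    """Split a mean temperature and diurnal temperature range
    into minimum and maximum, assuming TMP is the mid-range value."""
    tmn = tmp - dtr / 2.0
    tmx = tmp + dtr / 2.0
    return tmn, tmx

# E.g. a 10 degC mean with an 8 degC diurnal range:
print(derive_tmn_tmx(10.0, 8.0))  # (6.0, 14.0)
```

This is exactly why the complaint has force: if TMN and TMX are just this arithmetic applied to TMP and DTR, publishing them alongside TMP and DTR adds no information that the real observed minima and maxima would have carried.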
I was going to do further backtracing, but it's been revealed that the same issues were in 2.1 - meaning that I didn't add the duff data. The suggested way forward is to not use any observations after 1989, but to allow synthetics to take over. I'm not keen on this approach as it's likely (imo) to introduce visible jumps at 1990, since we're effectively introducing a change of data source just after calculating the normals.
Of course, there is no easy way to check it's working properly, since the random element (used when relaxing to the climatology) ensures that each run gives different results.
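The reproducibility problem described above is a classic one, and the standard fix is to seed the random element so that two runs are bit-identical and therefore comparable. A minimal sketch, where the weighted "relax to climatology" blend is an assumed stand-in for CRU's actual scheme:

```python
import numpy as np

def relax_to_climatology(obs, clim, rng):
    """Blend observations toward climatology with a random weight
    per cell (illustrative scheme, not CRU's actual algorithm)."""
    weight = rng.uniform(0.0, 1.0, size=len(obs))  # the random element
    return weight * obs + (1.0 - weight) * clim

obs = np.array([10.0, 12.0, 11.0])
clim = np.array([9.0, 9.5, 10.0])

# Two runs with the same seed produce identical output,
# making a regression check against a reference run possible.
run1 = relax_to_climatology(obs, clim, np.random.default_rng(42))
run2 = relax_to_climatology(obs, clim, np.random.default_rng(42))
print(np.array_equal(run1, run2))  # True
```

Without a fixed seed, every run differs and "is it working properly?" can only be answered statistically, not by direct comparison.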
I don't know which is the more worrying - the fact that adding the CLIMAT updates lost us 1251 lines from tmax but gained us 1448 for tmin, or that the BOM additions added sod all. And yes - I've checked, the int2 and int3 databases are IDENTICAL. Aaaarrgghhhhh.
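The "int2 and int3 databases are IDENTICAL" check mentioned above can be done mechanically by comparing digests rather than eyeballing files. A small sketch with placeholder data (the file names and record layout are assumptions, not CRU's format):

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 digest of a database's raw bytes."""
    return hashlib.sha256(data).hexdigest()

# Placeholder contents standing in for the int2/int3 databases.
int2 = b"station 012345 tmin records...\n"
int3 = b"station 012345 tmin records...\n"  # the BOM additions changed nothing

print(digest(int2) == digest(int3))  # True: the update added sod all
```

Equal digests here mean the intermediate update step was a no-op, which is precisely the worrying outcome being described.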
I am seriously close to giving up, again. The history of this is so complex that I can't get far enough into it before my head hurts and I have to stop. Each parameter has a tortuous history of manual and semi-automated interventions, so I simply cannot just go back to early versions and run the update prog. I could be throwing away all kinds of corrections - to lat/lons, to WMOs (yes!), and more.
So what the hell can I do about all these duplicate stations? Well, how about fixdupes.for? That would be perfect - except that I never finished it; I was diverted off to fight some other fire. Aarrgghhh.
I - need - a - database - cleaner.
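The core of the wished-for "database cleaner" is straightforward to sketch: group station records by WMO code and flag the codes that appear more than once. The record layout and station names below are hypothetical, not CRU's actual format or the unfinished fixdupes.for logic:

```python
from collections import defaultdict

def find_duplicates(records):
    """Given (wmo_code, station_name) pairs, return a dict mapping
    each duplicated WMO code to the list of names sharing it."""
    by_code = defaultdict(list)
    for code, name in records:
        by_code[code].append(name)
    return {code: names for code, names in by_code.items() if len(names) > 1}

# Hypothetical records for illustration.
records = [
    (38010, "STATION A"),
    (38010, "STATION A DUPLICATE"),
    (30260, "STATION B"),
]
print(find_duplicates(records))  # {38010: ['STATION A', 'STATION A DUPLICATE']}
```

Detection is the easy half; as the notes make clear, the hard half is deciding which of the duplicates carries the manual corrections worth keeping.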