femr2 wrote: Do you think it is acceptable that organisations deriving conclusions from such data are advising policy makers who are making policy that will affect you very directly, your children, and your children's children?
No, I do not. It doesn't matter how intelligent and renowned the participants are in their fields, or how good the rest of the work is: the evidence is that this particular aspect of the work is an abomination. I know very little of climatology, but I know a great deal about the design and implementation of very high-reliability software systems. The particulars are irrelevant except as they form the constituents of the whole picture. Abysmal.
Of course, not all findings are contingent on the correct functioning of this code (whatever that is defined to be, if it is defined at all!), and the code may indeed function as designed.
David B. Benson wrote: OneWhiteEye --- Tamino is an exceptionally capable statistician and has no truck with the anti-science crowd.
Perhaps, then, he'd like to have a look at the readme file and estimate the confidence that the software in question functions as intended, and also whether it functions correctly in a more absolute sense, for those are two separate questions.
Based on my extensive experience with software development, and given the statements (and tone!) in the document, my confidence that this code is free of significant bugs is vanishingly small. I wouldn't pay for this software (ah, but no doubt I did, in some part). I wouldn't sign off on it if I were the cognizant authority. I wouldn't hire anyone who worked on this project, notably not even Harry (poor sod), for the lack of decorum and discretion.
Software engineering has advanced well beyond this miserable state of development, and such sloppiness has never been acceptable among professional practitioners. I'm to believe that the state of the art in climate science accepts this as its computational practice and does NOT see it as a problem? Sad.
Research software? There is never a reason to write software that doesn't work. There can be ample reasons to sacrifice performance, features, compatibility, user convenience, and a host of other concerns when the objective is research. There is never a justification for sacrificing correctness of implementation. Time and again, and with a higher degree of confidence than anything seen in climate science, it has been shown that the factors which negatively impact program correctness include:
- poor or no documentation
- incomplete unit test coverage
- bloated size and complexity metrics
Fail, fail, fail. Add lost source code to that list. (For anyone who thinks "unit test coverage" is jargon, a minimal sketch of what I mean follows below.)
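Here is that sketch, in Python. The function compute_anomaly is hypothetical, invented purely for illustration; it is not taken from the code under discussion.

```python
# A minimal sketch of the kind of unit testing I mean. The function under
# test, compute_anomaly, is hypothetical: invented here for illustration,
# NOT taken from the code in question.
import unittest


def compute_anomaly(temps, baseline):
    """Return each reading minus the mean of the baseline period.

    temps    -- list of temperature readings (degrees C)
    baseline -- list of readings defining the reference period
    """
    if not baseline:
        raise ValueError("baseline period must not be empty")
    ref = sum(baseline) / len(baseline)
    return [t - ref for t in temps]


class TestComputeAnomaly(unittest.TestCase):
    def test_known_value(self):
        # A reading 1 degree above a baseline mean of 14.0 is exactly +1.0.
        self.assertEqual(compute_anomaly([15.0], [13.0, 15.0]), [1.0])

    def test_zero_anomaly(self):
        # A reading equal to the baseline mean is a zero anomaly.
        self.assertEqual(compute_anomaly([14.0], [14.0]), [0.0])

    def test_empty_baseline_rejected(self):
        # Bad input must fail loudly, not silently produce nonsense.
        with self.assertRaises(ValueError):
            compute_anomaly([15.0], [])


if __name__ == "__main__":
    unittest.main()
```

The three cases are trivial on purpose. The point is that every routine in the processing chain should carry checks like these, run automatically, so that "does it function as intended?" stops being a matter of faith.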
Every developer of substance has experienced the problems of scaling, maintenance, and enhancement; most have contributed to the problem themselves in one way or another. An entire (as yet immature) science has sprung up to address the competing constraints of economy, design evolution, and provable correctness. The reason I have such low confidence that the software functions correctly is obvious to even the most casual developer.
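And those size and complexity metrics aren't hand-waving; they're measurable. For Python code, for instance, the radon package will put a number on every routine in a module. The file name below is made up for the example:

```python
# Scores each routine in a Python module for cyclomatic complexity,
# using the radon package (pip install radon). The module name
# "gridding_step.py" is hypothetical, chosen just for this example.
from radon.complexity import cc_visit

with open("gridding_step.py") as f:
    source = f.read()

for block in cc_visit(source):
    # Rule of thumb: cyclomatic complexity much above 10 per routine is
    # a strong signal it needs to be split up and tested in pieces.
    print(f"{block.name}: complexity {block.complexity}")
```

No single metric is gospel, but numbers like these are how professional shops catch bloat before it becomes a readme full of despair.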
Not that anyone has tried to make the point, but I'd like to cover the objection in advance: distinctions between research, production, and mission-critical software development may be appropriate. When research leads to sweeping global policy changes, I'd consider it mission critical: highest standards on everything. This is the worst of the worst; you don't even need to look at the code. If it functions correctly (or even as intended), it is pure dumb luck. This is not what I'd rely on to auto-pay my phone bill, let alone to decide global policy.
Software: Fail. Brush-off explanation of "there's nothing to see here": Shameful Fail.
To reiterate, I sincerely hope this software wasn't actually used for anything important, since flawed results have no value even for discussions at the local pub. Thus, at best, the situation is one of flushing public funds away on a useless enterprise. I hope they've cleaned up their act, but why they get a second chance is beyond me.
Thank goodness for disclosure, ill-gained or otherwise!