2009/10/18

noise tree rings and stuff

But surely random sequences added together remain just that: random. Because they are random, there will be sequences that happen to conform to any required curve, but outside the region of conformance each sequence falls back to randomness, averaging to zero.
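This is easy to check numerically. Below is a minimal sketch (plain NumPy; every number and name is my own invention): generate pure-noise series, keep only those that by luck track a rising target curve over a short "calibration" window, and average the survivors.

```python
import numpy as np

rng = np.random.default_rng(0)
n_series, length = 20_000, 200
calib = slice(150, 200)                        # the window the curve must match
target = np.linspace(0.0, 1.0, 50)             # the required rising curve

noise = rng.standard_normal((n_series, length))    # pure noise, mean zero

# keep only the series that, by luck, track the target inside the window
r = np.array([np.corrcoef(s[calib], target)[0, 1] for s in noise])
stack = noise[r > 0.3].mean(axis=0)

print(np.corrcoef(stack[calib], target)[0, 1])   # high: conforms by construction
print(stack[:150].mean(), stack[:150].std())     # ~0: back to average-zero noise
```

Inside the selection window the stack matches the curve by construction; outside it the average collapses back towards zero, exactly as argued above.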

Surely what is being proposed is that tree growth is controlled by many factors: no randomness, just noise and a combination of factors.
Trees will not grow at -40C; trees will not grow at +100C.
Trees do grow well at some temperature in between (all else being satisfactory).

Choosing trees that grow in tune with the temperature record means that, where they extend beyond that record, there is a greater possibility that they continued to grow in tune with temperature. If they grow to a different tune then they are invalid responders.

A long time ago I posted a sequence of pictures showing what can be obtained by adding and averaging a sequence of aligned photos. In each single frame the only visible data was the church and the sky glow. I added 128 of these images together and obtained this photo:

Note that it also shows the imperfections in the digital sensor (the window-frame effect).
ImageShack did have a single image with the gamma turned up to reveal the only visible features (church + sky), but they've lost it!


The picture was taken in near-dark conditions.
A flash photo of the same:


By removing all invalid data (pictures of the wife, the kids, flowers, etc. that do not contain the church and sky), a reasonable picture of the back garden emerges from the noise.
Of course I may have included a few dark pictures with two streetlights in those locations, but with enough copies of the correct image these will have a diminishing effect.
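A toy version of that stacking experiment, with an invented scene and made-up signal and noise levels, shows why 128 frames is enough: the residual noise falls as the square root of the number of frames.

```python
import numpy as np

rng = np.random.default_rng(1)

# a hypothetical stand-in for the scene: any fixed 2-D pattern will do
truth = np.zeros((64, 64))
truth[20:44, 28:36] = 1.0                      # the "church tower"

def frame():
    """One near-dark exposure: a faint signal buried in sensor noise."""
    return 0.05 * truth + rng.normal(0.0, 1.0, truth.shape)

stack = np.mean([frame() for _ in range(128)], axis=0)

# residual noise falls as 1/sqrt(128), roughly 1/11th of a single frame
print((frame() - 0.05 * truth).std())          # ~1.0 in a single frame
print((stack - 0.05 * truth).std())            # ~0.09 in the stack
```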

--------------
This cap-shaped growth response must have a dependence on temperature. It may not be linear, but it must be there.

Somewhere between 15C and 100C growth must start declining. Did trees pass the optimum in the 60s?

Uncontrolled emissions in the 60s, 70s and 80s were known to cause acid rain (to the extent that some countries were forced to add lime to lakes to prevent damage); there was plenty of evidence that trees were being damaged too.

Is it not true to say: damaged trees = slow growth?

There are many factors that can slow tree growth, but apart from over-temperature these effects should be diminished before large-scale industrialisation (before 1900?).

Trees are rubbish thermometers, but in all the noise there MUST be a temperature signal. A large local sample will lower the noise from sickness or damage. A large global sample will lower the noise from changes in soil fertility, etc.

Nothing will remove the noise from CO2 fertilisation, or other global events.

Some trees growing at the limit of their water needs may be negatively affected by rises in temperature above their minimum growing value, since growing in heat requires more water. These will always show a negative growth response to temperature. But if they are averaged with enough positive responders their effect will be insignificant.

But the signal that remains must, when averaged, contain a temperature signal (not necessarily linear).
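A minimal sketch of that argument, with invented numbers throughout: give every tree a cap-shaped (non-linear) response to a common temperature series, bury it in heavy per-tree noise, include a few negative responders, and average.

```python
import numpy as np

rng = np.random.default_rng(2)

years = 150
temp = 0.01 * np.arange(years) + 0.3 * rng.standard_normal(years)  # toy record

def growth(t, sign=+1.0):
    """Cap-shaped response: growth peaks at an optimum and falls either side."""
    cap = np.exp(-((t - 0.8) ** 2) / 0.5)      # optimum at t = 0.8 (arbitrary)
    return sign * cap + rng.normal(0.0, 1.0, t.shape)  # per-tree noise

# 950 positive responders plus 50 water-limited negative responders
rings = [growth(temp) for _ in range(950)] + \
        [growth(temp, sign=-1.0) for _ in range(50)]
avg = np.mean(rings, axis=0)

clean = np.exp(-((temp - 0.8) ** 2) / 0.5)
print(np.corrcoef(avg, clean)[0, 1])   # close to 1: the cap signal survives
```

The 50 negative responders only scale the recovered signal down slightly; with enough positive responders they are, as argued above, insignificant.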

wiki:
"Overall, the Program's cap and trade program has been successful in achieving its goals. Since the 1990s, SO2 emissions have dropped 40%, and according to the Pacific Research Institute, acid rain levels have dropped 65% since 1976.[16][17] However, this was significantly less successful than conventional regulation in the European Union, which saw a decrease of over 70% in SO2 emissions during the same time period.[18]
In 2007, total SO2 emissions were 8.9 million tons, achieving the program's long term goal ahead of the 2010 statutory deadline.[19]
The EPA estimates that by 2010, the overall costs of complying with the program for businesses and consumers will be $1 billion to $2 billion a year, only one fourth of what was originally predicted.[16]"

"However, the issue of acid rain first came to the attention of the international community in the late 1960s, having been identified in certain areas of southern Scandinavia, where it was damaging forests. The matter quickly became an international issue when it was discovered that the acid deposits in these areas were a result of heavy pollution in the UK and other parts of northern Europe.

http://www.politics.co.uk/briefings-guides/issue-briefs/environment-and-rural-affairs/acid-rain-$366677.htm
Acid rain and air pollution emerged from the industrial boom of the early 1900s onwards and the increasing levels of chemical production associated with these processes. The building of taller industrial chimneys from the 1960s onwards was largely held to be responsible for pollutants generated in the UK blowing as far as Scandinavia. "

CO2 and IR absorption

TomVonk:
October 16th, 2009 at 4:03 am
Re: thefordprefect (#186),
"From what I have seen the logarithmic effect is usually explained by the absoption bands getting full - ie. no more radiation can be absorbed. Radiation is then absorbed by smaller absorptions bands and and by the under used width of the CO2 bands.
And the CO2 GH effect is vastly lessened by many of the bands falling within the H20 bands."
.
I don't know where you have seen that, but this explanation is not even wrong.
It is absolutely and totally forbidden that a band, any band, gets "full" or "saturated".
The population that is in an excited state is a CONSTANT for a given temperature (e.g. the CO2 15 µm excited state represents 5% of the total CO2 population at room temperature). This is prescribed by the MB distribution of the quantum states.
It doesn't depend on the number of molecules, the intensity of radiation or the age of the captain. Only temperature.
So whatever amount of IR you throw at a CO2 population, they will absorb it all and REEMIT.
They can't do anything else, because they must do whatever it takes to keep the percentage of excited states constant.
.
Imagine a dam (CO2 molecules) and a lake whose level (percentage of excited states) is exactly at the top of the dam.
If you increase the flow into the lake (absorbed radiation), all that will happen is that the flow over the top of the dam (emitted radiation) will increase by exactly the same amount.
If you increase the height of the dam (temperature), the level of the lake will rise in a transient until it reaches the new top, and then it is again exactly as described above.
There is no "saturation".
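Out of curiosity, the constant-fraction claim is just the Boltzmann factor and can be checked in a few lines. This is a two-level sketch only (the degeneracy g = 2 for the bending mode is my assumption, and real CO2 has many more levels), but it reproduces the order of the ~5% figure quoted and shows the fraction moving with temperature alone:

```python
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # Planck, light speed, Boltzmann (SI)

def excited_fraction(T, wavelength=15e-6, g=2):
    """Two-level Boltzmann estimate of the excited-state fraction.

    g = 2 (bending-mode degeneracy) is an assumption; the point is that the
    fraction depends on temperature only, never on CO2 amount or IR flux.
    """
    x = g * np.exp(-h * c / (wavelength * k * T))
    return x / (1.0 + x)

for T in (250.0, 288.0, 320.0):
    print(T, excited_fraction(T))          # roughly 4%, 7%, 9%: rises with T
```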

2009/10/12

Oceans as temperature controllers

Re: tallbloke (#41),
I believe this is due to the long solar minimum. When the sunspot count is above 40 or so, the oceans are net gainers of solar heat energy. When the sun is quiet for a while, that energy makes its way back to the surface and is released. The last five solar minima have been followed within 12 months by an El Nino.


Are you also posting as Stephen Wilde on the wuwt blog? You seem to be pushing the same ideas!

SW radiation penetrates sea water further than LW radiation. However, this does not mean that SW radiation travels through 20 m of water and suddenly transfers all its energy at that depth; it is progressively absorbed on the way down until, at depth, there is no SW radiation left. So IR heats the surface only, and UV heats the surface mainly. Air in contact with the sea surface is rapidly heated by the water, and the water cools fractionally ONLY if the air temperature is less than the water temperature. If the water temperature is less than the air temperature (as it usually is during daylight hours) then the air will be cooled and the water warmed very fractionally.
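The progressive absorption is just Beer-Lambert attenuation. A sketch with rough, order-of-magnitude absorption coefficients (my own illustrative values, not measured ones):

```python
import numpy as np

# Illustrative absorption coefficients (1/m), rough orders of magnitude only:
# water absorbs thermal IR within tens of micrometres of the surface, UV within
# metres, while blue-green light survives to tens of metres.
k_by_band = {"IR": 1e5, "UV": 0.5, "blue-green": 0.05}

depths = np.array([1e-5, 0.01, 1.0, 10.0, 50.0])    # metres
for band, kk in k_by_band.items():
    # Beer-Lambert: fraction of surface irradiance remaining at each depth
    print(band, np.exp(-kk * depths).round(4))
```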

The water temperature varies on a yearly basis around the UK (I assume it does around the rest of the globe?). There is no year-long lag in temperature fluctuation as the seasons change (perhaps only a month?).

My question to you is the same as it has been to Mr. Wilde: how is the ocean going to store this heat over many years, as you suggest, and then release it to the atmosphere?

Deep water (more than 900 m down) is at 4C; 700 m averages 12C; and the surface is at 22C, at an air temp of ????
http://www.windows.ucar.edu/tour/link=/earth/Water/temp.html&edu=high

If the heat is stored in the upper layers then it is continuously losing that heat to COOLER air.
If it is in layers below 900 m, then how is 4C water going to up-well and release heat stored at 4C to air at 5C (for example)?

Assuming it were possible to get heat energy stored at 4C to transfer to air at 12C, how do you prevent these heat-storage layers mixing as the sea slops around for 5 to 10 years?

I would agree that the oceans act as a big temperature-smoothing "capacitor", reducing the yearly variations. For anything much beyond this I need a better physical explanation, please.
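That capacitor picture can be made concrete as a one-pole low-pass filter. The ~24-month time constant below is an arbitrary assumption, chosen only to show the smoothing, not a claimed property of the real ocean:

```python
import numpy as np

rng = np.random.default_rng(3)
months = 1200
# forcing: a seasonal cycle plus monthly weather noise (all invented)
forcing = np.sin(2 * np.pi * np.arange(months) / 12) + rng.standard_normal(months)

tau = 24.0                        # assumed time constant in months (arbitrary)
sst = np.zeros(months)
for i in range(1, months):
    # one-pole low-pass: the ocean integrates the forcing like a capacitor
    sst[i] = sst[i - 1] + (forcing[i] - sst[i - 1]) / tau

print(forcing.std(), sst.std())   # the "capacitor" strongly damps the variation
```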

A further point: the AMO is often implicated in controlling air temperatures. This was posted on wuwt:
Comparing AMO with Hadcrut3V and Hadcrut3NH there is a wonderful correlation; not so good with CET:



Apart from the increased trend caused by ?something?, all the slow humps and dips appear in the right places, and even the rapid changes appear aligned (to the eye!).

So if we zoom in and look at the signals through a much longer moving average, the dips again align.



The dips in HADCRUT seem to occur a few months ahead of AMO, and the peaks are a bit off. Not sure why CET shows so little correlation, but hey, there must be a connection.
If air temp is driving the AMO then one would expect the air temp changes to occur before the AMO,
and
vice versa.

So now let's look at the same date range through shorter moving averages.



Now it becomes interesting: sometimes the air temp leads the AMO, and sometimes the AMO leads the air temp.

If the AMO drives temp then there is no way that the AMO can lag air temperature,
and
vice versa.

To me this says that there is an external driver, or the data is faulty.

Any thoughts?
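One way to make the lead/lag question precise is a lagged cross-correlation. Below is a sketch (my own helper, not taken from the plots above), sanity-checked on synthetic series where the answer is known; run over sub-periods of the real smoothed series it would show any sign flips in who leads:

```python
import numpy as np

def lead_lag(a, b, max_lag=24):
    """Return the lag k (in samples) maximising corr(a[t], b[t+k]).

    k > 0 means a leads b by k samples; k < 0 means a lags b.
    """
    n = min(len(a), len(b))
    best_k, best_r = 0, -2.0
    for k in range(-max_lag, max_lag + 1):
        if k >= 0:
            r = np.corrcoef(a[: n - k], b[k:n])[0, 1]
        else:
            r = np.corrcoef(a[-k:n], b[: n + k])[0, 1]
        if r > best_r:
            best_k, best_r = k, r
    return best_k

# sanity check on synthetic data where the answer is known
rng = np.random.default_rng(4)
x = rng.standard_normal(600).cumsum()               # stand-in "air temp" anomaly
y = np.roll(x, 5) + 0.5 * rng.standard_normal(600)  # "AMO-like" copy, 5 behind
print(lead_lag(x, y))                               # 5: x leads y, as built
```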

Stuff:
Some interesting stuff but not too useful:

http://www.terrapub.co.jp/journals/JO/JOSJ/pdf/2601/26010052.pdf
http://science.jrank.org/pages/4836/Ocean-Zones-Water-depth-vs-light-penetration.html
http://spg.ucsd.edu/People/Mati/2003_Vasilkov_et_al_UV_radiation_SPIE.pdf
interesting book (full)
http://oceanworld.tamu.edu/resources/ocng_textbook/PDF_files/book.pdf

This is the one for wavelength and penetration depth in ocean:
http://www.terrapub.co.jp/journals/JO/JOSJ/pdf/2906/29060257.pdf

IR whacks the water molecules into motion, UV less so: check the absorption bands of water vapour.

2009/10/06

McIntyre refuses offer to do real science

from dot earth
October 5, 2009, 2:41 pm Climate Auditor Challenged to Do Climate Science
By Andrew C. Revkin
Bloggers skeptical of global warming’s causes* and commentators fighting restrictions on greenhouse gases have made much in recent days of a string of posts on Climateaudit.org, one of the most popular Web sites aiming to challenge the deep consensus among climatologists that humans are setting the stage for generations of disrupted climate and rising seas. In the posts, Stephen McIntyre questions sets of tree-ring data used in, or excluded from, prominent studies concluding that recent warming is unusual even when compared with past warm periods in the last several millenniums (including the recent Kaufman et al. paper discussed here).

Mr. McIntyre has gained fame or notoriety, depending on whom you consult, for seeking weaknesses in NASA temperature data and efforts to assemble a climate record from indirect evidence like variations in tree rings. Last week the scientists who run Realclimate.org, several of whom are authors of papers dissected by Mr. McIntyre, fired back. The Capital Weather Gang blog has just posted its analysis of the fight. One author of an underlying analysis of tree rings, Keith Briffa, responded on his Web site and on Climateaudit.org.

What is novel about all of this is how the blog discussions have sidestepped the traditional process of peer review and publication, then review and publication of critiques, and counter-critiques, by which science normally does that herky-jerky thing called knowledge building. The result is quick fodder for those using the Instanet to reinforce intellectual silos of one kind or another.

I explored this shift in the discourse in some e-mail exchanges with Mr. McIntyre and some of his critics, including Thomas Crowley, a University of Edinburgh specialist in unraveling past climate patterns. Dr. Crowley and Mr. McIntyre went toe to toe from 2003 through 2005 over data and interpretations. I then forwarded to Mr. McIntyre what amounted to a challenge from Dr. Crowley:

Thomas Crowley (now in Edinburgh) has sent me a note essentially challenging you to develop your own time series [of past climate patterns] (kind of a “put up or shut up” challenge). Why not do some climate science and get it published in the literature rather than poking at studies online, having the blogosphere amplify or distort your findings in a kind of short circuit that may not help push forward understanding?

As [Dr. Crowley] puts it: “McIntyre is really tiresome - notice he never publishes an alternate reconstruction that he thinks is better, oh no, because that involves taking a risk of him being criticized. He just nitpicks others. I don’t know of anyone else in science who actually does such things but fails to do something constructive himself.”

Here’s Mr. McIntyre’s reply (to follow references to publications you’ll need to refer to the linked papers). In essence, he says he sees no use in trying his own temperature reconstruction given the questions about the various data sets one would need to utilize:
The idea that I’m afraid of “taking a risk” or “taking a risk of being criticized” is a very strange characterization of what I do. Merely venturing into this field by confronting the most prominent authors at my age and stage of life was a far riskier enterprise than Crowley gives credit for. And as for “taking a risk of being criticized”? Can you honestly think of anyone in this field who is subjected to more criticism than I am? Or someone who has more eyes on their work looking for some fatal error?

The underlying problem with trying to make reconstructions with finite confidence intervals from the present roster of proxies is the inconsistency of the “proxies,” a point noted in McIntyre and McKitrick (PNAS 2009) in connection with Mann et al 2008 (but applies to other studies as well) as follows:

Paleoclimate reconstructions are an application of multivariate calibration, which provides a theoretical basis for confidence interval calculation (e.g., refs. 2 and 3). Inconsistency among proxies sharply inflates confidence intervals (3). Applying the inconsistency test of ref. 3 to Mann et al. A.D. 1000 proxy data shows that finite confidence intervals cannot be defined before ~1800.

Until this problem is resolved, I don’t see what purpose is served by proposing another reconstruction.

Crowley interprets the inconsistency as evidence of past “regional” climate, but offers no support for this interpretation other than the inconsistency itself –- which could equally be due to the “proxies” not being temperature proxies. There are fundamental inconsistencies at the regional level as well, including key locations of California (bristlecones) and Siberia (Yamal), where other evidence is contradictory to Mann-Briffa approaches (e.g. Millar et al 2006 re California; Naurzbaev et al 2004 and Polar Urals re Siberia). These were noted in the N.A.S. panel report, but Briffa refused to include the references in I.P.C.C. AR4. Without such detailed regional reconciliations, it cannot be concluded that inconsistency is evidence of “regional” climate as opposed to inherent defects in the “proxies” themselves.

The fundamental requirement in this field is not the need for a fancier multivariate method to extract a “faint signal” from noise – such efforts are all too often plagued with unawareness of data mining and data snooping. These problems are all too common in this field (e.g. the repetitive use of the bristlecones and Yamal series). I think that I’ve made climate scientists far more aware of these and other statistical problems than previously, whether they are willing to acknowledge this in public or not, and that this is highly “constructive” for the field.

As I mentioned to you, at least some prominent scientists in the field accept (though not for public attribution) the validity of our criticisms of the Mann-Briffa style reconstruction and now view such efforts as a dead end until better quality data is developed. If this view is correct, and I believe it is, then criticizing oversold reconstructions is surely “constructive” as it forces people to face up to the need for such better data.

Estimates provided to me (again without the scientists being prepared to do so in public) were that the development of such data may take 10-20 years and may involve totally different proxies than the ones presently in use. If I were to speculate on what sort of proxies had a chance of succeeding, it would be ones that were based on isotope fractionation or other physical processes with a known monotonic relationship to temperature and away from things like tree ring widths and varve thicknesses. In “deep time,” ice core O18 and foraminifera Mg/Ca in ocean sediments are examples of proxies that provide consistent or at least relatively consistent information. The prominent oceanographer Lowell Stott asked to meet with me at AGU 2007 to discuss long tree ring chronologies for O18 sampling. I sent all the Almagre cores to Lowell Stott’s lab, where Max Berkelhammer is analyzing delO18 values.

Underlying my articles and commentary is the effort to frame reconstructions in a broader statistical framework (multivariate calibration) where there is available theory, a project that seems to be ignored both by applied statisticians and climate scientists. At a 2007 conference of the American Statistical Association to which Caspar Ammann (but not me) was invited, it was concluded:

While there is undoubtedly scope for statisticians to play a larger role in paleoclimate research, the large investment of time needed to become familiar with the scientific background is likely to deter most statisticians from entering this field. http://www.climateaudit.org/?p=2280

I’ve been working on this from time to time over the past few years and this too seems “highly constructive” to me and far more relevant to my interests and skills than adding to the population of poorly constrained “reconstructions,” as Crowley proposes.

In the meantime, studies using recycled proxies and problematic statistical methods continue to be widely publicized. Given my present familiarity with the methods and proxies used in the field, I believe that there is a useful role for timely analysis of the type that I do at Climate Audit. It would be even more constructive if the authors rose to the challenge of defending their studies.

Given the importance of climate change as an issue, it remains disappointing that prompt archiving of data remains an issue with many authors and that funding agencies and journals are not more effective in enforcing existing policies or establishing such policies if existing policies are insufficient. It would be desirable as well if journals publishing statistical paleoclimate articles followed econometric journal practices by requiring the archiving of working code as a condition of review. While progress has been slow, I think that my efforts on these fronts, both data and code, have been constructive. It is disappointing that Crowley likens the archiving of data to doing a tax return. It’s not that hard. Even in blog posts (e.g. the Briffa post in question), I frequently provide turnkey code enabling readers to download all relevant data from original sources and to see all statistical calculations and figures for themselves. This is the way that things are going to go – not Crowley’s way.

So should this all play out within the journals, or is there merit to arguments of those contending that the process of peer review is too often biased to favor the status quo and, when involving matters of statistics, sometimes not involving the right reviewers?

Another scientist at the heart of the temperature-reconstruction effort, Michael Mann of Pennsylvania State University, said that if Mr. McIntyre wants to be taken seriously he has to move more from blogging to publishing in the refereed literature.

“Skepticism is essential for the functioning of science,” Dr. Mann said. “It yields an erratic path towards eventual truth. But legitimate scientific skepticism is exercised through formal scientific circles, in particular the peer review process.” He added: “Those such as McIntyre who operate almost entirely outside of this system are not to be trusted.”

2009/10/03

Grape harvest

Nothing seems to give a useful proxy for temperature. Some of the better ones are grape harvest and budburst dates. But these only go back to about the 1300s.



Note that the grape harvest has not been converted to temperature, so high temp = early harvest!
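Converting is only a linear calibration against the instrumental overlap. A sketch with invented numbers (both series hypothetical): regress temperature on harvest day, and the fitted slope comes out negative, so early harvests map to high temperatures automatically.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical overlap period: instrumental summer temperature (C) and grape
# harvest date (day of year); warm years give early harvests (invented numbers)
temp = 16 + 2 * rng.standard_normal(80)
harvest_day = 280 - 6 * (temp - 16) + 5 * rng.standard_normal(80)

# least-squares calibration, then apply it to pre-instrumental harvest dates
slope, intercept = np.polyfit(harvest_day, temp, 1)
old_days = np.array([300.0, 285.0, 270.0])
print(slope)                            # negative: later harvest, colder summer
print(slope * old_days + intercept)     # early harvest -> high temperature
```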

Statisticians and the real world

What a strange world we live in.

We have McIntyre and acolytes saying in one breath:
1. It is not valid to sort samples before you analyse them. Briffa should have used all samples from the area, and then come to a conclusion.
2. Then they say the 10/12 Briffa trees should not have been included, as we have these 34 Schweingruber(?) trees, and look: no 20thC warming.
3. Someone then says that the Briffa trees should have been included.
4. McIntyre adds them and finds a smaller hockey stick.
5. McIntyre analyses the Briffa trees and finds a golden hockey-stick tree which provides most of the late-20thC warming.
6. McIntyre then says this result is 8 sigma outside the normal and should not be included.
How do you reconcile statement 1 with statement 6?
In my view, if you are not allowed to sort for correlation between ring width and temperature over the period where we have instrumental records, then you are not allowed to sort at all.

Consider this scenario

At a junk sale you purchase a number of instruments that have recorded various environmental parameters over time. None are very accurate, and you have no idea which parameter each one measures. You want to record temperature, so you set them up in the same location. Some years later you can afford a calibrated temperature recorder, which you also set up in the same location.
Some of these instruments will have recorded sunlight, precipitation, soil nutrient levels, fungal spore levels, ambient temperature, and the temperature of the soil 1 metre down.
If you want to know what the temperature was when you set up the first instruments, do you:
a. normalise all readings of all instruments, then average them;
b. average them all without normalising;
c. compare the outputs of all instruments with the calibrated temperature recorder, throw out all that show no correlation, normalise the remaining results and then average them;
d. as c., but additionally throw out units deviating by significant amounts from the average?

Which of a., b., c. or d. is going to give you the best historic temperatures?

Personally, as an engineer not a statistician, I would go with d., or, if there are insufficient instruments to find the outliers, c.
I realise that this is going to bias the results towards agreeing with the calibrated instrument, but may I suggest that this is exactly what you want.
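A toy version of this junk-sale scenario (all numbers invented) makes the point: option a. is diluted by the instruments that were never thermometers, while option c. recovers the pre-recorder history. See the sketch below.

```python
import numpy as np

rng = np.random.default_rng(6)
years = 100
temp = rng.standard_normal(years).cumsum() * 0.1    # the unknown true temperature
precip = rng.standard_normal(years).cumsum() * 0.1  # a confounding variable

# 15 junk-sale instruments happen to track temperature, 15 track precipitation;
# all are noisy and have arbitrary gain
trackers = 3.0 * temp + rng.standard_normal((15, years))
confound = 3.0 * precip + rng.standard_normal((15, years))
readings = np.vstack([trackers, confound])

calib = slice(70, 100)          # the era of the calibrated recorder
pre = slice(0, 70)              # the "history" we want to recover

def zscore(x):
    return (x - x.mean(axis=-1, keepdims=True)) / x.std(axis=-1, keepdims=True)

# option a: normalise everything and average
opt_a = zscore(readings).mean(axis=0)

# option c: keep only instruments that correlate with the recorder, then average
r = np.array([np.corrcoef(row[calib], temp[calib])[0, 1] for row in readings])
opt_c = zscore(readings[r > 0.4]).mean(axis=0)

print(np.corrcoef(opt_a[pre], temp[pre])[0, 1])   # diluted by the precip units
print(np.corrcoef(opt_c[pre], temp[pre])[0, 1])   # close to the true history
```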

It seems a statistician would go for a., as this would not bias the result towards valid temperatures. I just cannot understand this.