Comments

Mathematicians Devise Typefaces Based On Problems of Computational Geometry

rgbatduke Conveyor belt problem... (56 comments)

Interesting article, but I don't understand why the conveyor belt problem (as described) is unsolved. Start with one pulley. Obviously a band around it works. Assume a solution exists for some finite number of pulleys, N. Since the support of the pulley locations is compact, one can always and uniquely determine the exterior of the spanning belt. Place an additional pulley exterior to this belt. There are only three topologically relevant cases: the "nearest neighbor" exterior pulleys (or a pair of them, in the case of more than two) carry a belt that is "convex" (outside both), "concave" (inside both), or "mixed" (inside one, outside the other). In all three cases it can be shown that one can add the pulley and still satisfy the conditions of the problem. Hence one has 1, N, and N+1: a proof by topological induction.

The only additional bit of work in the proof is to note that one can avoid problems with pathological interior loopings (if necessary -- I don't really think that it is) or with adding the (N+1)th pulley INSIDE the belt by simply reordering the inductive process for any given pattern so as to keep the belt in a maximally convex state as one proceeds, that is, starting with any belt and then adding the pulleys ordered by their distance from the original pulley. Not only is there "a" spanning belt, there will in most cases be an enormous number of spanning belts -- one for each of the permutations one can construct by adding pulleys in circular distance order from any pulley treated as the original pulley until they are all entrained.
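For what it's worth, here is a toy Python sketch of the inductive step I have in mind. It assumes zero-radius pulleys, so the taut spanning belt degenerates to the convex hull of the pulley centers -- the real problem has finite radii and lets the belt pass on either side of each pulley, which is precisely the hard part -- but it illustrates testing whether a new pulley lies exterior to the current belt and then re-spanning it:

    # Toy only: pulleys shrunk to points, so the "belt" is the convex hull.
    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a counterclockwise turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def convex_hull(points):
        # Andrew's monotone chain; returns hull vertices in counterclockwise order
        pts = sorted(set(points))
        if len(pts) <= 2:
            return pts
        lower, upper = [], []
        for p in pts:
            while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
                lower.pop()
            lower.append(p)
        for p in reversed(pts):
            while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
                upper.pop()
            upper.append(p)
        return lower[:-1] + upper[:-1]

    def outside_belt(belt, p):
        # True if p lies strictly outside the current belt (counterclockwise hull)
        n = len(belt)
        return any(cross(belt[i], belt[(i + 1) % n], p) < 0 for i in range(n))

    pulleys = [(0, 0), (4, 0), (4, 3), (0, 3)]
    belt = convex_hull(pulleys)
    new_pulley = (6, 1)                          # placed exterior to the current belt
    print(outside_belt(belt, new_pulley))        # True
    print(convex_hull(pulleys + [new_pulley]))   # the re-spanned belt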

3 hours ago

Plant Breeders Release 'Open Source Seeds'

rgbatduke Re:Shame this happened (110 comments)

The interesting question will be whether this license is GPL-viral. So far, Monsanto et al. have invoked a viral clause to protect the genes of their products that are literally carried by the wind into the fields of non-purchasers who happen to grow their own seed crop. Imagine the impact of having genes carried the other way! Sorry, Monsanto, the hybrid crop is now GPL, unless you take steps to prevent e.g. corn pollen from blowing in the wind.

Never work, of course, but it is a nice fantasy.

rgb

8 hours ago

U.S. Biomedical Research 'Unsustainable' Prominent Researchers Warn

rgbatduke Re:Another thing (135 comments)

Well, or it is just barely possible that the continuing improvement in our understanding of the Universe, the astronomical marginal improvements in per capita productivity, our geometrically exploding increase in computational capacity and storage, will continue to more than compensate for all of your imagined sources of doom and in twenty or thirty years the present -- arguably the best time in human history to be alive -- will be looked back on as a rather sad time when we hadn't quite eliminated war, when world religions hadn't quite advanced far enough along their death spiral for rationality to have replaced the patriarchal mind control whose memes dominate your comment, when we hadn't quite settled down to deal with global poverty and scarcity because of a global shadow war over control of energy production and consumption driven by disinformation on all sides.

The "shiny happy part" you lament, for example, is something that was never experienced by more than perhaps 1/10th of the population of the planet, and more often the shiny part was confined to an even smaller proportional of the population. Most of the world's population, including the entire part where women were and to a large extent are still confined to the "duties" you idealize as part of an ideal world economy, is still stuck in the 19th or even the 18th century as far as "shiny happy" goes. Perhaps a trip to central Africa or rural India is in order -- something to give you a bit of perspective as you so cheerfully advocate the return of half of the world's population to a state of de facto chattel. Or a rebirth as a woman living in central Afghanistan, square in the middle of a biblical culture that perfectly reflects your ideal.

Personally I rather suspect that humans will continue to do what they do best -- cope with an ever changing socioeconomic technology backed culture and optimize it in countless ways so that, on average, things continue to be better tomorrow than they are today, just as today is a tiny bit shinier and happier than, say, the heart of the cold war and the cold war was mostly better than the heart of World War II (for most people alive) which was for all of its violence and death still a step up from the economic upheaval of the interval between the wars and an enormous potentiator of global change as it brought about the beginning of the end of the colonial era, which was definitely better than World War I, which in turn was arguably the war that broke up the grip of leftover emperors, kings, and nobility that still dominated the bulk of world politics at the time. Women in the workforce as full equals simply means that society is in the process of restructuring and accommodating the enormous influx of wealth (of all sorts) that entry enables. Are there problems? Sure. But they are nothing compared to the problems of being born as an XX instead of an XY in the 19th century or earlier.

rgb

3 days ago

Mathematicians Use Mossberg 500 Pump-Action Shotgun To Calculate Pi

rgbatduke Or... (307 comments)

...they could take the same sheet of aluminum, weigh it, cut along the arc that they already inscribed and weigh the quarter circle, and multiply the ratio of the weights by four. Or, they could take a length of string, carefully line it up with the arc they already inscribed and snip it, form the ratio of its length ($\pi R/2$) to the length of a side of the square ($R$), and multiply by 2. Or they could evaluate pi using any one of a number of summed series. Or any of a number of other measurement-based geometric arguments, most of which will be more accurate than Monte Carlo done with a Mossberg "corrected" by arguments that are surely more abstruse than summing a series.

If one absolutely insists on computing pi using homemade Monte Carlo, you might as well toss hot dogs:

http://www.wikihow.com/Calcula...

Or toothpicks, if you want a bit higher resolution. With a large enough target grid and a good enough "random toss" one can once again avoid needing to "correct" for the non-uniform distribution of pellets in a shotgun blast.
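If you'd rather toss virtual toothpicks than clean up hot dogs, here is a minimal Buffon's-needle sketch in Python (note it cheats slightly by using math.pi to draw the random angle; a purist would replace that with rejection sampling):

    import math, random

    def buffon_pi(n_tosses, L=1.0, d=2.0):
        # Toothpick of length L on a floor ruled with lines d >= L apart;
        # the crossing probability is 2L/(pi*d), so pi ~ 2*L*N/(d*crossings).
        crossings = 0
        for _ in range(n_tosses):
            y = random.uniform(0, d / 2)            # center to nearest line
            theta = random.uniform(0, math.pi / 2)  # acute angle to the lines
            if y <= (L / 2) * math.sin(theta):
                crossings += 1
        return 2 * L * n_tosses / (d * crossings)

    print(buffon_pi(1_000_000))   # ~3.14, converging only as 1/sqrt(N)

Even a million tosses only buys two or three digits, which is rather the point about doing Monte Carlo with shotguns.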

rgb

4 days ago

Mathematical Proof That the Cosmos Could Have Formed Spontaneously From Nothing

rgbatduke Platonic Ideal Science Fiction... (594 comments)

Sheesh. I mean, I'm a theorist. I love theory. But let us not lose sight of the difference between metaphysical speculation and "proof". All that has been done is that it has been shown that -- subject to a whole slew of prior assumptions (premises, axioms) that may or may not be correct (and that cannot be verified or sorted out either way) -- a particular kind of "empty" Universe could consistently give rise to a vacuum fluctuation that grows a la the big bang. Of course, there is a big difference between an "empty" Universe subject to all sorts of quantum rules and nothing -- as nothing tends to come without anything, including a set of rules, quantum or otherwise.

So let me summarize the argument. If the Universe already existed, complete with a set of physical laws, but just happened at some point in meta-space and meta-time to be empty, then if those probably non-unique laws had parameters within some almost certainly non-unique range, then mass-energy could have poofed into existence in a big bang as a quantum vacuum fluctuation that grew. It is proven that all of this could have happened.

And we are now precisely as knowledgeable as we were before. We already knew that it could have happened because it did. We still know absolutely nothing (more) useful about the state of the Universe before the bang, because the bang erased the prior state in a blast of cosmic entropy and all of our ability to make inferences comes from weak extrapolation of observation of its visible state "now" (that is, into the distance-mediated past). We cannot use the "proof" to make any useful predictions that can be tested (either verified or falsified).

Don't get me wrong, I think it is a lovely result, and it may prove useful in some indirect way by providing an incentive to reformulate quantum theory in ways that are at least consistent with the big bang, just as quantum theory ultimately proves useful when discussing things like black holes. But it is still theoretical metaphysics, not physics.

about a week ago

How Many People Does It Take To Colonize Another Star System?

rgbatduke Re:Sure, but... (392 comments)

You're thinking small. By the time we have the tech even to build a credible slowboat, we'll have the tech to just ship the basic human genetic code plus a library of admissible variations and we'll assemble the humans at the far end from scratch, robotically. That way we don't need to worry about hundreds of years of radiation exposure (frozen or not, damaged DNA is damaged DNA) or the slow but unstoppable dehydration and diffusion out of the embryos. We can also ship the genetic code plus variation library (or algorithm(s)) for all the rest of the species we might need to establish an ecology. All we need to find at the far end is a tolerable temperature range, water, oxygen, nitrogen, carbon, and the rest of the stuff such as zinc and silicon and iron and calcium and potassium needed to provide raw materials to a build-a-bear machine. The biggest problem we'll face is that the machines we send to do the bear building will themselves be subject to bit-flipping, library-corrupting entropy due to e.g. cosmic rays, so we'll have to employ fairly advanced error correction and detection and factor-of-a-gazillion redundancy in the information (which should be easy; by then a billion petabytes of data will probably fit into a 3 cm cube, and of course we will be able to refresh or update the data by means of our terawatt laser on Pluto pointed in the right direction, in case we discover anything else that needs to be done or the data gets corrupted anyway).

The problem is the slowboat. 20 trillion miles is 20 trillion miles. At 1% of lightspeed, it's a 400 year journey. Any crap in the space lanes equals being hit by a meteorite at 3 million meters per second -- even a grain of ice could be deadly. Even a tiny boat with a minimal build-a-bear factory, supersmart computer and data bank + power supply and drive would mass in the tens to hundreds of metric tons, so you're talking a huge amount of kinetic energy. Expensive doesn't begin to describe it.
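The kinetic energy arithmetic, just to put a rough number on "expensive" (assuming, say, 100 metric tons at 1% of lightspeed; classical KE is fine at that speed):

    m = 100e3                # kg (100 metric tons -- an assumed payload)
    v = 0.01 * 3.0e8         # m/s, 1% of lightspeed = 3e6 m/s
    ke = 0.5 * m * v**2
    print(f"{ke:.1e} J")     # ~4.5e17 J, on the order of 100 megatons of TNT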

rgb

about two weeks ago

P vs. NP Problem Linked To the Quantum Nature of the Universe

rgbatduke Re:Say what? (199 comments)

Nature doesn't solve any equations at all. Equations describe what nature does. Electrons do not contain calculators.

If you like, you can consider the Universe itself to be a really big calculator, but if so it isn't computing something else -- the computation is the self-consistent dynamical evolution of the Universe itself.

So of all of the arguments against Schrodinger's Cat (which requires a non-existent non-relativistic local partitioning in the first place) this one is the nuttiest IMO. Why not simply work through the Nakajima-Zwanzig repartitioning of a "universal" density matrix into a generalized master equation (ideally retaining relativistic non-locality in time) and acknowledge that since we cannot formally adiabatically disconnect the interior of the box from the rest of the Universe, the state of the interior is always coupled to the exterior state in such a way that the cat is dead or not dead but not both?

rgb

about two weeks ago

Study: Exposure To Morning Sunlight Helps Managing Weight

rgbatduke Re:Statistically Insignificant (137 comments)

For seven days. Either they dredged the data from something else or (worse) went looking for the effect -- which is claimed to be 20% of BMI. I can refute this trivially within my own household. This is arrant nonsense. Since BMI scales linearly with weight at fixed height, dropping 20% of BMI means dropping 20% of body weight; for me that means losing 40 pounds, and gee, I'll bet nobody in their study dropped 40 pounds in seven days. So precisely how could they establish the correlation? By enrolling a handful of thin early risers and fat late sleepers, watching them for seven days, and concluding that the relevant controlling variable was the brightness of the light they were exposed to when they got up?

Sometimes one doesn't even have to RTFA to sneeze out a "bullshit" and move on.

rgb

about two weeks ago

Study: Exposure To Morning Sunlight Helps Managing Weight

rgbatduke 54 participants? Seven days? (137 comments)

That is, "We did a tiny study for a ridiculously short amount of time without anything like controls or double blindedness and found that exposure to morning light accounts for reductions of 20% of BMI at a statistically significant level.

This could be true only if the lights one turned on when getting up "early" were frickin' laser beams attached to sharks that lopped off a major limb and ate it.

April Fool's day was Tuesday. Why post this now?

rgb

about two weeks ago

New US Atomic Clock Goes Live

rgbatduke Re:How, exactly, do we know? (127 comments)

Sadly, AC or not, I would have modded this up if only my mod points hadn't disappeared today.

rgb (and don't call me Surly...)

about two weeks ago

UN Report: Climate Changes Overwhelming

rgbatduke Re:Projections (987 comments)

"And this would mean that the absolutely catastrophic level of 4C would be reached by 2140. But we know that the response is very likely not linear at all and that if we do nothing the 4C could be reached even by 2100. Not a comforting thought, is it?"

Actually, we know that the response is logarithmic, not linear. We know that it is logarithmic with a very small effective coupling. (The reasons increased CO_2 increases surface temperature at all are complex -- the GHE from CO_2 is "saturated", so that the change in the GHE comes from second-order effects that we cannot directly calculate or measure with any particular accuracy, which is why the direct CO_2 warming at 600 ppm is usually given as a substantial semi-empirical range of roughly 1 to 1.5 C.) The semi-empirical bit means that, in part, our beliefs about the forcing come from our ability to use some value of the forcing in models that then come close to observed reality.
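To put the logarithm in concrete terms, here is a sketch using the standard simplified forcing expression from the literature (Myhre et al. 1998), delta-F = 5.35 ln(C/C_0) W/m^2, with an assumed no-feedback sensitivity of roughly 0.3 C per W/m^2. These are illustrative textbook numbers, not a fit to anything above; the point is only that each doubling buys the same increment:

    import math

    def direct_warming(c_ppm, c0_ppm=280.0, lam=0.3):
        # lam ~ 0.3 C per W/m^2 is the no-feedback (Planck) sensitivity
        return lam * 5.35 * math.log(c_ppm / c0_ppm)

    for c in (280, 400, 560, 600):
        print(c, "ppm:", round(direct_warming(c), 2), "C")
    # 560 ppm (one doubling) gives ~1.1 C; 600 ppm ~1.2 C -- in the 1 to 1.5 C
    # ballpark quoted above for the direct, no-feedback effect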

As for starting date -- why start at 1950? Why not 1953, or 1945? Why not just examine the entire global record from 1850 to the present? One form of numerology is as good as another, unless and until you build a model. When you build a model, the model has to work before it is believable, or that isn't any improvement over what the data and your own eyes tell you. Here's what mine tell me, with the linear trend plotted only as a guide to the eye and as a measure of the total warming:

http://www.woodfortrees.org/pl...

Facts:

1) The linear trend of global surface temperatures has been roughly 0.8 C over 164 years, or 0.05 C/decade for the entire thermometric record.

2) The pattern of increase is remarkably consistent. It is numerology, of course, but there is a clear sinusoidal variation imposed on the linear trend (a toy fit of this form is sketched a few paragraphs below). One can imagine a slight amplification in the second cycle, and that amplification might be CO_2, but then again, it might not -- numerology being notoriously poor as a predictor and more a motivation for understanding the structure of the past. There is no particularly strong evidence for amplification of the underlying linear trend and only weak evidence of amplification of the oscillation.

3) The oscillation is roughly synchronous with the Pacific Decadal Oscillation, suggesting a causal link between warming and cooling efficiency and things like the PDO and the (coupled) distribution and sign of ENSO events.

4) If you look at figure 9.8a of AR5 and mentally replot it against this figure, the CMIP5 MME mean runs nearly flat from 1850 to 1970 -- starting at an anomaly of -0.1 C (that is, 0.2 C higher than the general left endpoint of the data) and ending in 1975 or thereabouts with only 0.1 C of net warming, at an anomaly of 0.0 C. It then rockets up like somebody turned on a switch. It is a hockey stick decorated with bounces at major volcanic events. The data is not at all shaped like a hockey stick. The model goes straight over all of the 19th and early 20th century structure as if it were not there, and deviates from the record in the 21st century as the temperature follows the established pattern instead of the GCM mean.

There is no cherry picking at all here, because I'm just describing obvious features of the actual data compared to the actual MME mean on the entire timeseries. You may find the MME mean convincing, an "excellent" fit to the data. I think the fit absolutely sucks, especially given that the only place it does a really good job of fitting the data is the reference interval plus a few time periods on either side.

In the absence of any model at all, one would find nothing surprising or alarming about the shape of the climate record in any of the intervals 1850 to 1900, 1900 to 1950, 1950 to 2000, or post-2000, especially if one plots it to scale rather than as an anomaly: the entire relative variation is around 0.3% of the absolute temperature over 160 years, an interval over which CO_2 has increased by over 33%. There is, quite literally, nothing to see here -- signal (if any) is buried in natural noise.
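Here is a toy version of the linear-trend-plus-sinusoid "numerology" from point 2) above, fit to purely synthetic data (a 0.05 C/decade trend plus a ~60-year oscillation plus noise, standing in for something like HADCRUT4 -- the point is the functional form, not any attribution):

    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(0)
    years = np.arange(1850, 2014)
    synthetic = (0.005 * (years - 1850)
                 + 0.1 * np.sin(2 * np.pi * (years - 1850) / 60.0)
                 + rng.normal(0, 0.1, years.size))

    def model(t, a, b, c, period, phase):
        # linear trend plus one sinusoid -- exactly the "guide to the eye" above
        return a + b * (t - 1850) + c * np.sin(2 * np.pi * (t - 1850) / period + phase)

    popt, _ = curve_fit(model, years, synthetic, p0=[0, 0.005, 0.1, 60, 0])
    print("trend: %.3f C/decade, period: %.0f yr" % (popt[1] * 10, popt[3]))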

So while you are proposing 4 C warming, consider the possibility of 1.6 C warming from 1850 to 2270, which is what the continuation of this pattern would be. Not that I think continuation is likely. The post LIA warming is probably nearly complete, and if one looks further back at the climate record at the MWP and the general descent that started as it ended, even if there is weak anthropogenic forcing, it may have to compete with a strong natural cooling trend as the Sun is predicted to go nearly flat in the next solar cycle (the current one is the weakest one in 100 years).

Again, I'm not addressing the details of any given model or comparing it to the data -- nearly all of the models in CMIP5 would fare much worse than the MME mean. I'm simply noting that the evidence that the climate models are good forecasters or hindcasters of the actual record (even when augmented with estimates of past volcanic events) is weak, and growing weaker at the moment, not stronger. This isn't really something one can argue about, because any of us could draw lines that come (much) closer to the data than the MME mean, and anybody can see that the actual climate skates the lower bound of the MME envelope far too much of the time. You can have "faith" that the climate will start to warm suddenly and will catch up to the MME mean, and of course you could be right. I could have "faith" that the climate will recover and start to cool (I don't, actually -- I'm quite neutral about the probably correct solution to impossibly difficult computational problems). Which of those beliefs is correct will, of course, be revealed in time, but at the moment the former belief is in trouble because the recovery you predict will happen hasn't started yet and there is little evidence that it is about to start. Even the most ardent of climate modellers has to acknowledge that if the ocean has taken up the missing heat Trenberth et al. claim to have measured (at unbelievably fine precision/resolution, IMO), the models of CMIP5 completely neglect this possibility and hence they all have the physics fundamentally wrong.

Let me explain the problem. If the missing heat can go into the top km of the ocean through mechanisms we do not understand at all based on our existing knowledge of oceanic heat transport processes to raise the temperature of the ocean by a few thousandths of a degree, this means that the heat is gone as far as climate influence is concerned. It cannot come back (as some people absurdly have proposed) to haunt us later because the ocean is much colder than the air over the ocean almost everywhere, and in any event even several hundredths of a degree is pure noise on the surface where the atmosphere and ocean are in contact. The ocean could eat the CO_2 surplus heat for centuries and not substantially warm to depth. That's the kind of heat capacity you are talking about -- the ocean mass is a great big flywheel buffering climate change (just as coastal weather is tempered compared to inland weather, with warmer winters and cooler summers).

Then there are all of the other potential problems with the GCMs -- but why bother repeating them? If you believe them to be correct as a matter of religion and aren't willing to contemplate the possibility that they might be broken even on the basis of 9.8a or an even worse direct comparison one model at a time with the data, how can anything I say convince you?

rgb

about two weeks ago

UN Report: Climate Changes Overwhelming

rgbatduke Re:Projections (987 comments)

"In other words: typical variability factors, which do nothing to diminish the global trend of 0.11 C per decade over 1951–2012 (a 60-year period)."

Which extrapolates to a non-catastrophic total warming of 1C by 2100, assuming that we don't work out LFTR or fusion in the meantime and abandon carbon not to save the climate but to save money. Assuming that none of the warming over that 60 year period was natural -- if some fraction of it was natural, that would come out of the extrapolation unless you are asserting that natural warming is likely to continue for all of the next century. How one would defend such an assertion is in and of itself an interesting question, since natural vs CO_2 driven warming cannot be projectively decomposed from the climate models.
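(For what it's worth, the arithmetic behind that extrapolation, assuming the quoted 0.11 C/decade simply continues unchanged from 2012: 0.11 C/decade x (2100 - 2012)/10 decades = 0.11 x 8.8, or roughly 1 C of further warming.)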

Thanks for establishing my point. The argument isn't just "there is no such thing as CO_2 driven global warming" (which none of the scientists who are skeptics that I know assert) vs "CO_2 driven global warming will lead to a global catastrophe by 2100" -- there is a strong middle ground "CO_2 driven global warming will be non-catastrophic throughout the 21st century and it makes a lot more sense to live with it and accommodate it than to spend hundreds of billions of dollars trying to ameliorate it with immature technology and at the expense of the perpetuation of global poverty".

rgb

about two weeks ago

UN Report: Climate Changes Overwhelming

rgbatduke Re:Projections (987 comments)

And in that graph, CO_2 was irrelevant to both the declining spots and to the general upward trend before the single stretch from roughly 1983 to 1998. Yet not only does the pattern continue, as you note, but there is clearly a pattern that is continuing, one that started before CO_2 was a factor and continues with CO_2 a factor. So you are quite right to adopt the null hypothesis, which is, actually, that the warming is natural. One has to demonstrate that the warming pattern you note could not have occurred without CO_2's help before you conclude that it isn't natural. Since it is all a part of the general recovery from the Little Ice Age, the coldest stretch in some 9000 years stretching back to the end of the Younger Dryas and the beginning of the Holocene, it is not at all unreasonable to think that it is mostly natural.

That does not mean that CO_2 was not, or could not be, a partial factor in the overall curve. Note well that if CO_2 is the strong forcing agent with strong positive feedback it is asserted to be, as indeed it must be if temperatures are to rise by 2.5 C or more by 2100, then one expects that flat to negative trends must become less likely as one moves from the past to the future. How much less likely? Look at the predictions of the GCMs (which fail in more ways than just running too hot across the entire time series outside of the reference interval where they were essentially fit). The mean prediction has a very distinct hockey stick shape, modulated only by major volcanic events. Even so, it overestimates the volcanic cooling of several of those past events, misses entirely the cooling associated with the start of the 20th century, and simply runs roughly 0.4 C too warm across that dip and the subsequent rapid warming in the first half of the 20th century -- warming that took place without the benefit of CO_2, warming that produced one half of the state high temperature records in the United States (even as of today) in the single decade of the 1930s (at which time Arctic ice all but disappeared, a fact that was recorded in the news at the time, although most of the Arctic was still more or less inaccessible or enormously expensive to visit).

Allowing for a well-fit volcanic event that took place within the reference period, "anthropogenic" warming turned on like a switch somewhere between the mid-1970s and early 1980s (in spite of the fact that a lot of this warming looked exactly like the pre-CO_2 trend in the 1930s that was missed by the models because, in my opinion, they do not have the natural variation of the climate right independent of CO_2 -- probably because they do not correctly account for the global effects of the decadal oscillations and latent heat/albedo variation on heating/cooling efficiencies outside of atmospheric radiation chemistry). This in turn is easy to understand, given the coarse spatiotemporal grid required to solve the computational problem at all, the granularity of the events that dominate the evolution of weather (that is, largely too fine for this grid to resolve), the neglect of oceanic dynamics in some, but not all, of the models and its overapproximation where it is included, and the ever-present spectre of missing or incorrect parameters or physics. This is a dancing bear, not a ballerina, if you know the reference.

Outside of the reference interval, the models based on strong-feedback CO_2 tracked Pinatubo, missed (as usual) an unpredictable decadal oscillation forcing (the super-ENSO of 1997-1998), and promptly diverged from the actual climate. According to most of the models, the turnover that occurs in the ballpark of 1998-2000 should not have happened. CO_2 continued to rise. As you can see from looking at the overall model hockey stick, in the models only volcanic aerosols produce a significant, coherent effect on the climate outside of rising CO_2 (which is obviously not the case for the real climate). Finally, it is worth noting that the individual models have an absurdly large range of variation in this figure -- and this is after a PPE average. It requires averaging 36-odd models to smooth the MME mean down to where its variability tracks only the volcanic aerosol events, while missing almost all of the rest of the natural variation.

There are, without question, multiple ways to explain the late 20th century warming. AR5 pays lip service to this by making statements such as "over half the warming is due to CO_2 as opposed to natural variation" made with some statistically indefensible statement of "confidence". However, it is utterly impossible to decompose GCM predictions into natural vs unnatural components -- the climate is a nonlinear chaotic system and tiny changes in initial state make huge changes in final state, which is a signal that not only can such a decomposition not occur, one very likely cannot even linearize the system at all outside of very short (decade long, perhaps) intervals. Tiny changes in the dynamics, the parameterization, the internal physics, the computational granularity can completely change the outcome. If in fact the natural vs forced "components" are incorrectly balanced, if atmospheric dynamics in general is given too much weight, if confounding variations in ignored dimensions are present, the models could fail. It isn't even unlikely that they have things wrong.

If one directly compares the models one at a time to the temperature record, the performance of most of the models is very unimpressive. The spaghetti-graph nature of the figure (deliberately?) makes it impossible to see if any of the CMIP5 models did a comparatively much better job than all of the rest at fitting the entire temperature curve. It would be extremely interesting to see how those "best of show" models did within their perturbed-parameter ensemble runs -- do they, indeed, produce future climates that are significantly cooling as often as warming over the last 15 years (the only way for the PPE average to produce a nearly flat segment, a point that people seem determined to miss)? What fraction of each model's PPE runs is as cool as or cooler than the current climate trend? If it is under 5%, one is justified in rejecting the model altogether as failing an elementary hypothesis test -- not as an "outlier", but as a model that almost certainly has physical errors resulting in a warm bias.
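The test I have in mind is elementary enough to sketch in a few lines of Python (all numbers invented for illustration -- one hypothetical model's PPE trends versus an observed trend):

    import random

    random.seed(0)
    observed_trend = 0.05                                        # C/decade, hypothetical
    ppe_trends = [random.gauss(0.25, 0.08) for _ in range(100)]  # one model's PPE runs

    frac = sum(t <= observed_trend for t in ppe_trends) / len(ppe_trends)
    print(f"fraction of PPE runs at or below the observed trend: {frac:.2f}")
    if frac < 0.05:
        print("reject this model at the 5% level -- it almost certainly has a warm bias")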

The fundamental problem I have with AR5 and the claims of high confidence is that the statistical sophistication of the presentation and analysis of the CMIP5 results is simply appalling. Where is the critical analysis that is actually based on the axioms of statistics instead of fond hopes? Where is the aggressive comparison of models to reality with the aim of rejecting failures, instead of keeping the failures because -- one has to guess -- it is essential to keep them to preserve the political text of the conclusions? How in God's name can one assign confidence of any sort to the predictions of the multimodel mean of the GCMs when 9.8a is a record of the performance of this metric? Section 9.2.2.3 openly admits that it cannot be done. The summary for policy makers goes ahead and does it anyway!

Global warming from the Dalton minimum is absolutely real. Some fraction of that warming is natural -- all of the warming up through the mid-20th century is basically natural, as was the cooling trend of the third quarter of the 20th century, as was (very likely) the warming trend visible in the last quarter of the 20th century, an interval even CAGW proponents on this thread have indicated is too short to be more than "barely" indicative of climate trend (especially when almost all of the warming in that quarter occurred in a single span of roughly 15 years). There is -- in my opinion, formed after looking at a lot of things, not just AR5, HADCRUT, GISS, and model performance graphs -- a very substantial uncertainty in how natural versus forced breaks down in the late 20th century warming. Numerology alone suggests that CO_2 accounts for at most on the order of 0.1 to 0.2 C of the warming burst, since the warming burst of the first half of the 20th century is within that range of the second half's. However, one can always find parameters that will fit that warming with more weight given to the CO_2 and less to the natural component. The problem with such a fit isn't that it won't work fine in the reference interval, it is that it will diverge or deviate outside of the reference interval as it has the wrong dynamical mix. This is a very reasonable explanation for the divergences apparent on both sides of the reference interval.

It is a shame that AR5 (and climate science in general) is basically dishonest about the uncertainties of the modeling process, and it is positively scandalous that statistical arguments openly acknowledged as weak in chapter 9 are morphed into "high confidence" in the summary for policy makers. Maybe CAGW or CACC is true, maybe not. It is a hypothesis. Its primary evidence is the GCMs, since -- on the basis of our very limited span of observations -- the climate could without question have produced precisely the same track we've observed naturally, without that even being particularly improbable. The GCMs, in turn, must be held to a high standard of predictive accuracy if we are to bet a trillion dollars on them, and until they achieve that degree of accuracy, we should at most hedge the bet without committing economy-shaking resources to it.

rgb

about two weeks ago

UN Report: Climate Changes Overwhelming

rgbatduke Re:Projections (987 comments)

"I just looked at those sections, and to me it reads, 'these (Multi-Model Ensembles and Perturbed-Parameter Ensembles) are the types of simulations we've taken into account [9.2.2], these are their weaknesses [9.2.2.1-2], and this is how we combined them to evaluate them as a whole [9.2.2.3].' Perhaps you're referring to the direct quote '...collections such as the CMIP5 MME cannot be considered a random sample of independent models,' which is repeating the weakness described in 9.2.2.1, which is that a lot of models in that set use components from other models in the set. To me that makes perfect sense because we do that in engineering all the time: reusing model components that (seem to) work well. I can see why that would seem fishy, though. It'd be nice to see someone dig into that and see what components are reused and how they might bias the results."

Actually, there are three distinct problems listed:

"The most common approach to characterize MME results is to calcu-
late the arithmetic mean of the individual model results, referred to
as an unweighted multi-model mean. This approach of ‘one vote per
model’ gives equal weight to each climate model regardless of (1) how
many simulations each model has contributed, (2) how interdependent
the models are or (3) how well each model has fared in objective eval-
uation. The multi-model mean will be used often in this chapter."

1) is utterly senseless. Doubly so since it is a re-averaging -- if you really wanted to average the models equally you'd just average their PPE runs, however many there were. Keeping all of the PPE runs would actually allow one to get a feel for the spread of model predictions (per model and/or collectively) and would certainly aid in rational hypothesis testing or adjusting the weight given to each model.

2), as you (and they) note, means that the confidence intervals are all incorrect. Remember our friend the Central Limit Theorem (not that this has anything to do with averaging over models that are in no sense random samples drawn from a distribution of perfectly correct, unbiased models). In order for the variance/standard deviation to have any force via the CLT, the samples being averaged have to be independent, identically distributed samples, usually abbreviated "iid" in statistics. Model dependence reduces even further the already indefensible idea that the models are likely to span some sort of unbiased, independent sampling of model space, with errors in one compensated for by errors in another. This ultimately means that any error estimate formed from "the standard deviation" of the individual, equally weighted, re-averaged model results will strictly underestimate the probable error and potentially bias the mean, and hence very significantly reduce the confidence one can place in any predictions (see the toy simulation after point 3) below).

3) means that they don't care if 95% or more of a model's PPE results are above the actual climate data. They don't care if the model makes completely incorrect predictions concerning rainfall, LTT, global temperatures, the distributions of weather events or any of the many other ways the models can fail. They don't care about a model's computational spatiotemporal resolution, and whether or not it is known (at this point) to omit or incorrectly represent the physics. They do not reject bad models, or weight the contributions to the mean in favor of models that actually work to predict the climate trajectory either by rejecting those that fail a simple hypothesis test or by giving more weight to better (in any sense of the word, but especially more predictive) models.
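To illustrate the point about dependence in 2) with a purely synthetic toy in Python (nothing here is a climate model): give 36 "models" a shared error component and compare the naive iid standard error of their mean with the spread the mean actually has.

    import random, statistics

    random.seed(1)
    N_MODELS, N_TRIALS, RHO = 36, 2000, 0.7      # RHO = assumed shared-error fraction

    def one_ensemble():
        shared = random.gauss(0, 1)              # error common to the whole "family"
        return [RHO * shared + (1 - RHO) * random.gauss(0, 1) for _ in range(N_MODELS)]

    means, naive_ses = [], []
    for _ in range(N_TRIALS):
        models = one_ensemble()
        means.append(statistics.mean(models))
        naive_ses.append(statistics.stdev(models) / N_MODELS ** 0.5)

    print("typical naive iid standard error:  ", round(statistics.mean(naive_ses), 3))
    print("actual spread of the ensemble mean:", round(statistics.stdev(means), 3))

The naive number comes out an order of magnitude too small, which is exactly the sense in which dependence wrecks the error bars.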

As they remark below in the understatement of the entire document:

"This complexity creates challenges for how best to make quantitative inferences of future climate".

It sure does.

The big question is: how many of the models would survive a simple hypothesis test when their performance is quantitatively assessed against the entire range of e.g. HADCRUT4? From the look of the spaghetti in 9.8a, not many of them. But there are a few that don't look too bad. They are, of course, the coolest-running models in the bunch, the ones that predict the least warming and remain closest to the actual climate trajectory not only in the recent past but back before the reference period. Those models, however, predict far, far lower climate sensitivity and hence do not support the hypothesis of catastrophic warming well enough to justify diverting the energies of all of civilization for 100 years at the cost of great hardship, living a real-time catastrophe now to prevent a hypothetical catastrophe later.

rgb

about two weeks ago

UN Report: Climate Changes Overwhelming

rgbatduke Re:Projections (987 comments)

"What Nate Silver would do is bias them for how well they have so far performed. At the beginning of such an exercise, you'd have to value them equally. Doing anything else would be bringing subjectivity into it."

Excellent. So let's bias them on the basis of how well they've performed, since this is AR5 and we are hardly at the beginning of the exercise. Let's also not pretend that the decision to leave them all in equally weighted had the slightest hint of objectivity to it -- even AR5 sheepishly acknowledges that by doing so they make it impossible to assess things like error or confidence intervals (although elsewhere in the report they do it anyway).

At this point we know enough to be able to completely reject some of the models, just like Silver would, I'm sure, reject models altogether that are too heavily biased by partisan dynamics (or give them so little weight that they might as well not be present).

Since we aren't idiots, we'll also reduce our sample space because the models in CMIP5 are not even nominally independent -- there are whole family trees represented.

Since we have taken a course or two in statistics, we'll further weight them all on the basis of how many PPE runs each model contributes. Finally, because we really want to do the best possible job, perhaps we'd look at how the successful models differ from the unsuccessful models and see if we can improve even the successful models further.

That would be all the things AR5 says were not done in section 9.2.2.3 in constructing the MME mean. What do you think the effect of doing them would be? Maybe, I don't know, reducing the climate sensitivity? Of course honesty and best practice here would have been and continues to be political suicide.

And I repeat -- you may not care about the 15-year "hiatus" because you have safely put the goalposts for falsifying predictions of 2.5 C or greater total climate sensitivity out there beyond reach, but AR5 cared enough to devote an entire box to trying to explain it, and the current debate in the literature has people trying to "prove" that it must be at least 2.3 C because papers are appearing proposing less than 2 C. Remember, RSS has been flat for 17 years. That too is a pretty serious problem for the models (which do make specific predictions for lower troposphere temperatures, and they aren't close to flat).

rgb

about two weeks ago

UN Report: Climate Changes Overwhelming

rgbatduke Re:Projections (987 comments)

"If people want to create a social movement, like the Suffragettes, which maintains on principle that the world should be organised differently, fine. Justify it with moral arguments. Don't muddy science by claiming it is all facts and beyond doubt, and don't irrationally play propaganda games, smearing those you can't logically refute as 'deniers', when even the most basic bit of core evidence contradicts AGW. Oh yeah, it is my own lying eyes; mustn't believe it."

Well said, actually. And to be frank, many of the things suggested to help ameliorate supposed CAGW I am in favor of quite independent of the CAGW question. But presenting bad science as good science is only going to backfire if, in fact, the science is bad. Presenting science without an honest appraisal of the probable error is de facto bad science.

rgb

about two weeks ago

UN Report: Climate Changes Overwhelming

rgbatduke Re:Projections (987 comments)

Wow, talk about non sequiturs. Also talk about avoiding presenting anything substantive whatsoever. So you're saying that since Nate Silver successfully predicted a presidential election, this is sufficient proof that if I average fifty Hartree-model atomic computations (all with a known systematic error relative to the true multielectron atom), the result will converge to the actual measured result, with all correlation and exchange correctly represented.
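The statistics lesson fits in five lines of Python (synthetic numbers only): averaging many estimates that share a systematic error converges to the biased value, not to the truth.

    import random, statistics

    random.seed(2)
    truth, bias = 1.0, 0.3
    estimates = [truth + bias + random.gauss(0, 0.05) for _ in range(50)]
    print(round(statistics.mean(estimates), 3))   # ~1.3 = truth + bias, not 1.0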

Wow. Does anybody actually study statistics any more? No wonder it is so easy to convince people to believe what they are told.

rgb

about two weeks ago
