Monday, January 24, 2011

Christian cosmologist says universe not fine-tuned for life: A response

Here's Rob Sheldon again, on the recent paper by Christian cosmologist Don Page arguing that our universe is not fine-tuned for life:
a) "fine-tuning" is in the eye of the beholder. All Page demonstrates is that his eye is different than yours. Hence the only real question is whether "fine-tuning" exists at all, not what its magnitude is. If it does exist, no matter what its size, then the universe is "special", "indeterminate", and "not necessarily so". So "fine-tuning" is a scientist's placeholder for a philosopher's contingency.

b) "Optimality" is in the mind of the beholder, depending on what the beholder knows. The "optimum" shape for a human is a sphere, if we are trying to achieve 98.6F on a planet that averages 40F. Obviously, we've left a lot out of our calculations, and equally obviously, we will never know if we left out some crucial factor. Thus we never know if our "optimum" solution is global (contains all relevant factors) or local (misses some). Drawing global (e.g., theological) conclusions from some local guess is sheer hubris, and should be laughed to derision.

c) Lambda = 10^-122 in Planck units means that the observed value is "only" about 122 orders of magnitude below what theoretical physics would estimate for this number (roughly 1 in Planck units).

Let that soak in for a moment. Dembski's universal probability bound is 10^-150, i.e. 150 orders of magnitude, only slightly greater than this number. The 10^122 ratio of theory to observation has been called the biggest unsolved problem in physics.
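For concreteness, the arithmetic behind those two exponents can be written out in a few lines; the figures below are just the round order-of-magnitude numbers quoted above, not precise measurements.

```python
import math

# Round order-of-magnitude figures from the text, not precise measured values.
observed_lambda = 1e-122   # observed cosmological constant, in Planck units
naive_theory    = 1.0      # naive theoretical estimate, of order 1 in Planck units
dembski_bound   = 1e-150   # Dembski's universal probability bound

mismatch = naive_theory / observed_lambda
print(f"theory/observed mismatch:    10^{math.log10(mismatch):.0f}")       # 10^122
print(f"universal probability bound: 10^{math.log10(dembski_bound):.0f}")  # 10^-150
```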

The observed value is also orders of magnitude smaller than typical error bars on other physical constants, so distinguishing it from 0.0 is more an article of faith than of science. Therefore drawing conclusions about revelation (what God did, as observed by science) from theology (how God should work, as assumed from theory) may be a fine thing for seminarians, but it makes lousy science.

I'm on a hobby horse here, but building assumptions into our method that turn out to determine our conclusions is a no-no that should invalidate a science paper. One of the many ways that peer review has failed is that logical nonsense doesn't get flagged any more. Science should be inductive, not deductive, and when our conclusions are contained in our assumptions, we're being deductive.

For Page to conclude that Lambda ≠ 0, he had to assume a model with Lambda in it to start with. Einstein inserted Lambda to get a static universe, and removed it when Lemaître's expanding universe was shown to be a simpler solution. It has since been reinserted to (a) explain a small anomaly in Type Ia supernova intensities and (b) solve a "flatness" contingency problem. So if we invented it to solve the metaphysical contingency problem [and I purposely discount (a)], we cannot then claim that its observed existence solves the contingency problem.
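To make "a model with Lambda in it" concrete: in the standard Friedmann equation, quoted here in its usual textbook form rather than from Page's paper, Lambda enters as an explicit extra term, so any fit that reports a nonzero Lambda has already granted it a place in the model.

$$
H^2 = \left(\frac{\dot{a}}{a}\right)^2 = \frac{8\pi G}{3}\,\rho \;-\; \frac{k c^2}{a^2} \;+\; \frac{\Lambda c^2}{3}
$$

Set Lambda to zero and the term simply disappears; keep it, and the data are fit against a model in which nonzero values are permitted from the outset.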

d) The baryon density, the middle term of this deductive syllogism that runs from cosmological constant to contingency, is itself another controversial subject. The cosmological constant is all about dark energy, baryon density is all about dark matter, while "fine tuning" is all about contingent creation. Page has managed to combine the three most controversial subjects in cosmology into a logical syllogism and claim some sort of deductive power. That ought to strike any scientist as humorous.

The more uncertainty we add to a model, the more certain it is that our specific model is wrong (the ratio of actual solutions to possible solutions goes to 0). The fact that global warming models do not include clouds, cosmic rays, precipitation, past climate or repeatability does not mean that climate change is inevitable and deniers are being dogmatic. So also, the fact that dark energy assumptions change dark matter assumptions, which in turn affect contingency assumptions, should tell us our conclusions are woefully uncertain and most probably wrong.
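As a toy illustration of that shrinking ratio (the probabilities are hypothetical, chosen only to show the trend): if each independent modelling assumption has some probability p < 1 of being right, the chance that the whole stack is right falls geometrically toward zero.

```python
# Hypothetical numbers, for illustration only: if each independent modelling
# assumption is correct with probability p, a chain of n such assumptions is
# entirely correct with probability p**n, which shrinks toward zero.
p = 0.8  # assumed chance that any single assumption is correct
for n in (1, 3, 5, 10):
    print(f"{n:2d} stacked assumptions -> P(all correct) = {p**n:.3f}")
```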