One of the things that Ken Ham made a big deal about during the recent Nye vs Ham debate was his distinction between “Observational Science” - things we observe now - and “Historical Science” - inferences about the past that we can never know because we can never go back. Historical Science, he argues, can never be verified/falsified, is built on assumptions, and could therefore be wrong. He then makes the giant leap (of faith) that because it could be wrong, it can safely be dismissed when it does not fit his Young Earth Creationist (YEC) worldview. (A worldview, ironically, built on trust in a historical text whose writing he cannot go back and observe, and whose authenticity he therefore cannot know!) He then accused science textbooks of being misleading because they fail to make this distinction between the observational and the historical.
Bill Nye dismissed this distinction. However, although he gave hints, he did not really explain why it was wrong. So, who’s right? It seems fair enough to divide what can be seen from what is only inferred, doesn’t it? Is Ken Ham right?
As was abundantly clear when Ken Ham tried to give examples of “observational science” that supported the YEC position versus “historical science” that did not, dividing scientific conclusions along these lines is confusing and meaningless, and demonstrates a fundamental misunderstanding of how science works. The real division - the aspect of the argument that appears to have merit - is between data and interpretation.
In science, data is sacred. This is the “observational” aspect. A fossil either exists or it does not. A DNA sequencing machine returns a particular order of nucleotides. Light has a particular measured wavelength, etc. But data by itself does not mean anything. Its meaning is derived from interpretation of that data, based on certain assumptions and considered in the context of other data and evidence.
Crucially, this interpretation - and the reliance on underlying assumptions - is true for all observations and all science. The methods, techniques and models etc. that we apply to contemporary data are the same as those applied to “historical” data. There is nothing inherently more or less reliable about one or the other. Contemporary “observations” are rarely direct observations - they also require assumptions and knowledge of technical error rates etc. DNA sequencing, for example, is not literally observing nucleotides: it is interpreting patterns of fluorescence, or changes in electric current. Technical errors - and human mistakes - can happen.
Now, that does not mean that all scientific conclusions are equally reliable. Confidence in a conclusion depends on confidence in the underlying assumptions and models; these in turn are determined by how consistently those assumptions and models fit/explain other data. This is another error of the Creationist crowd (including Intelligent Design Creationism): they mistake possibility for probability.
Confidence is also determined by how readily we could spot something awry with the model/assumption in question. In other words, if the assumption is wrong, how would we expect the results of a particular experiment to deviate from those expected if the assumption is right? This is what it means to be falsifiable, and this is what scientists do on a day-to-day basis: test their models and assumptions and try to break them. If the assumptions seem to hold, we keep the interpretation - or suite of possible interpretations - that results. If not, we go back to the drawing board and try to come up with new models and assumptions that work with the new data and still work with the old data. The interpretation of the data is then updated in the light of the new model. Critically, we do not build the assumptions on the conclusion. This is why science changes its best interpretation - and gets less wrong with time; YEC does not.
It is certainly true that some data is easier to come by than other data and, as a general rule, we have less confidence about things the further back in time we go. But this has nothing to do with observation versus history; it has everything to do with the abundance (or paucity) and variety of data. We have more confidence about recent things because we can generally get more - and more varied - data more easily. This is not universal, though, and when the data is absent we have little confidence regardless of the age. We have far more confidence in the age of the Earth than in the Higgs boson, for example. Likewise, where we have weak, untested models and assumptions - or messy data for which the correct assumptions cannot be established - we have little confidence. Where we have robust, well-validated models, we have high confidence.
Radiometric dating is a case in point. As exemplified by Ken Ham, the YEC crowd love to pull out examples where radiometric dating goes wrong - usually featuring recent volcanic eruptions. When doing so, they completely neglect whether we expect those examples to give reliable dates. We don’t! They are often messy scenarios where the inherent assumptions of the dating techniques are likely to be violated. (Sometimes, they use completely inappropriate timescales and/or ignore measurement error.) This does not mean that those assumptions are always questionable and the techniques are always unreliable. Where the conditions give us confidence that the underlying assumptions are correct - clear strata, for example - dating methods are incredibly accurate and consistent. (For a good discussion of when radiometric dating is (not) reliable, see here.) If you want to go after a scientific theory, you have to attack the strongest evidence, not the weakest.
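To see why violated assumptions matter, here is a minimal sketch (not from the original post, and with made-up illustrative numbers) of the basic decay arithmetic behind potassium-argon dating. It shows how a genuinely young rock that trapped “excess” daughter isotope when it solidified will yield a spuriously old date if we naively assume it started with none:

```python
import math

def radiometric_age(parent_now, daughter_now, half_life, daughter_initial=0.0):
    """Estimate a sample's age (years) from parent/daughter isotope amounts.

    Assumes a closed system and a known initial daughter amount --
    exactly the assumptions that recent lava flows tend to violate.
    """
    decay_const = math.log(2) / half_life            # lambda = ln(2) / t_half
    radiogenic = daughter_now - daughter_initial     # daughter produced by decay
    return math.log(1.0 + radiogenic / parent_now) / decay_const

K40_HALF_LIFE = 1.25e9  # years (simplified: ignoring the K-40 -> Ca-40 branch)

# A hypothetical recent lava flow that trapped some excess argon on
# solidifying. Naively assuming zero initial daughter gives a wildly
# old date; accounting for the excess recovers the true (young) age.
naive_age = radiometric_age(parent_now=1.0, daughter_now=0.1,
                            half_life=K40_HALF_LIFE)
true_age = radiometric_age(parent_now=1.0, daughter_now=0.1,
                           half_life=K40_HALF_LIFE, daughter_initial=0.1)

print(f"naive age: {naive_age:.2e} years")  # on the order of 1e8 years -- wrong
print(f"true age:  {true_age:.2e} years")   # ~0 years -- the flow is recent
```

The point is not that the technique is broken; it is that the calculation is only as good as the closed-system and initial-composition assumptions, which is why geochronologists test and cross-check those assumptions rather than assuming them.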
All science is both observational and historical. The science of an old Earth and the common ancestry of all (known) life on Earth is as confidently established, and as observational, as the particle physics that made possible the computer on which I type these words. Ken Ham is just plain wrong. Again.