Physics versus Fudge Factors
Figure: Graph of the temperature anomaly predicted for CO2 doubling by various models, together with the observed anomaly (from IPCC). The climate models from centers across the globe featured here include: CGM1 (Canadian Center for Climate Modeling and Analysis, Canada); CCSR-NIES (Center for Climate System Research, Japan, and National Institute for Environmental Studies, Japan); CSIRO2 (Commonwealth Scientific and Industrial Research Organisation, Australia); ECHAM3 and ECHAM4 (Deutsches Klimarechenzentrum, Germany); GFDL-R15 (Geophysical Fluid Dynamics Laboratory, USA); HadCM2 and HadCM3 (Hadley Centre for Climate Prediction and Research, UK); and NCAR-DOE (National Center for Atmospheric Research and Department of Energy, USA).
Dealing with Uncertainties
Different computer models give somewhat different predictions when experiments are made to test their response to changing conditions (see figure above). The reason is not that the physics of climate change differs from one scientist to another. It is that many of the physical processes are not sufficiently understood, or not resolved in sufficient detail, to be amenable to exact computation. For these missing computations, partial results are supplied either by simplified calculations, by simply "looking up" the most probable results in a table based on experience or previous calculations, or by both. These methods for dealing with missing calculations are often given the tongue-in-cheek name "fudge factors," since they help "fudge" the models to reflect physics that could not otherwise be simulated directly.
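The idea of "looking up" a result instead of computing it can be sketched in a few lines. Everything below is illustrative and hypothetical, not taken from any real climate model: the table values and the fallback number stand in for the "experience or previous calculations" mentioned above.

```python
# Hypothetical sketch of a "fudge factor": instead of computing cloud
# reflectivity from first principles, look up a typical value by regime.
# All names and numbers here are invented placeholders.

# Typical fraction of sunlight reflected (albedo) per cloud regime,
# as might be compiled from experience or previous calculations.
TYPICAL_ALBEDO = {
    "clear":   0.10,
    "cirrus":  0.25,
    "cumulus": 0.50,
    "stratus": 0.65,
}

def reflected_sunlight(incoming_wm2, regime):
    """Approximate reflected shortwave radiation (W/m^2) by table lookup."""
    albedo = TYPICAL_ALBEDO.get(regime, 0.30)  # fall back to a rough mean
    return incoming_wm2 * albedo

print(reflected_sunlight(340.0, "cumulus"))  # 170.0 W/m^2
```

Two modeling groups that fill in this table differently will, of course, get somewhat different answers from otherwise identical physics.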
Computer programs simulate the climate system by simulating physical processes, with varying degrees of accuracy depending on the process. For example, the development of clouds and their effects on the radiation balance are notoriously difficult to represent in any detail in a computer model. Clouds trap heat in the lower atmosphere (this is why a cloudy night in the desert is warmer than a starry night), but clouds also reflect sunlight (this is why large portions of the planet appear white in satellite photos). Hence, clouds both help to warm and to cool the climate. To make matters worse, the balance between these two opposing effects differs for different types of clouds. To get this balance right, one would have to know precisely which clouds form under which conditions, and how much light they reflect and how much heat they are preventing from escaping to space. To get around having to calculate this directly from the laws of physics, computer models contain general rules about which clouds should form under which conditions and how they affect the radiation balance. Since scientists differ in their opinion as to what precisely these rules are, the programs respond somewhat differently to the same forcing. Hence the difference in output.
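The cloud trade-off described above can be put in numbers. The figures below are invented for illustration only; the point is simply that the net effect is a difference between a warming term and a cooling term, and its sign flips between cloud types.

```python
# Illustrative only: the net radiative effect of a cloud is the heat it
# traps minus the sunlight it reflects. Whether a cloud warms or cools
# depends on which term wins, and that differs by cloud type, which is
# why models need per-type rules. Values below are hypothetical.

# (trapped longwave, reflected shortwave), both in W/m^2
CLOUD_EFFECTS = {
    "high thin cirrus":  (30.0, 10.0),   # traps more than it reflects
    "low thick stratus": (20.0, 60.0),   # reflects more than it traps
}

for cloud, (trapped, reflected) in CLOUD_EFFECTS.items():
    net = trapped - reflected
    verdict = "warms" if net > 0 else "cools"
    print(f"{cloud}: net {net:+.0f} W/m^2 -> {verdict} the climate")
```

Getting the overall balance right thus requires knowing not just how cloudy the sky is, but which kinds of clouds form where.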
Can we assume that the truth will be some sort of average output from the different models? Not really. The "truth" (that is, how the real system would respond to the type of forcing being tested) may lie outside the predictions of all the models. What the models give us is a clue to the range of uncertainty that results from our recognized ignorance concerning those physical processes that we know should be incorporated in the models.
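A toy calculation makes this concrete. The warming figures below are invented placeholders, not actual model results; they show only that the ensemble spread bounds our recognized uncertainty, while the real response is not guaranteed to fall inside it.

```python
# Hypothetical ensemble of warming predictions (deg C per CO2 doubling).
model_warming = [1.8, 2.4, 2.9, 3.4, 4.1]

ensemble_mean = sum(model_warming) / len(model_warming)
spread = (min(model_warming), max(model_warming))
print(f"ensemble mean: {ensemble_mean:.2f} C, spread: {spread}")

# The real response could, in principle, lie outside every model.
true_response = 4.6  # invented value, chosen to fall outside the spread
print(spread[0] <= true_response <= spread[1])  # False
```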
Other Sources of Errors
There may be other physical processes that should be incorporated into the programs but have not been, because scientists are not yet aware of them. For example, it is now realized that most of the energy of motion in the ocean is in the form of eddies. Before this was recognized, computer programs simulating the circulation of the ocean did not include eddy motions. Now that it is recognized, eddy motion is considered; but because eddies are too small to be readily resolved, their contribution to the ocean's response to a disturbance is included through general rules about the way disturbances propagate through the ocean, rather than by directly computing the actions of individual eddies.
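One common form of such a general rule is to represent the bulk mixing effect of unresolved eddies as an enhanced diffusion. The sketch below is a minimal one-dimensional illustration with invented parameter values; real ocean models use far more elaborate schemes.

```python
# Hedged sketch: rather than resolving individual eddies, fold their
# mixing effect into a bulk diffusion rule with an "eddy diffusivity".
# Parameter values are illustrative, not measured.

def diffuse(temps, kappa, dx, dt, steps):
    """Explicit 1-D diffusion: dT/dt = kappa * d2T/dx2 (fixed ends)."""
    t = list(temps)
    for _ in range(steps):
        new = t[:]
        for i in range(1, len(t) - 1):
            new[i] = t[i] + kappa * dt / dx**2 * (t[i-1] - 2*t[i] + t[i+1])
        t = new
    return t

# A localized warm anomaly in an otherwise uniform strip of ocean
temps = [0.0] * 4 + [1.0] + [0.0] * 4
mixed = diffuse(temps, kappa=1.0e3, dx=1.0e5, dt=3.6e3, steps=100)
print([round(v, 3) for v in mixed])  # the spike spreads to its neighbors
```

The single number `kappa` stands in for all the eddy activity the grid cannot see, which is exactly the kind of rule different modeling groups may set differently.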
Besides the problem of spatial resolution (that is, a sufficient density of grid points that exchange information) mentioned in the last section, there is the problem of temporal resolution (the amount of time between the actions and responses of the model). Again, the limits of computing dictate that the exchange of information proceed in time steps appropriate for the distance between the grid points. If the grid-point spacing is, say, 100 km, it is no use having a time step of one second, because much of the information does not move that fast. On the other hand, heat radiation moves nearly instantaneously (at the speed of light), and using steps of an hour implies that it makes no difference to the response whether radiation moves at the speed of light or at the speed of an automobile. But using steps of a second to compute climate over a hundred years might take years to run the program. All such assumptions about appropriate time steps (and there are many) carry a bit of error, and these errors add up, contributing to uncertainties in the overall response.
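The trade-off above can be made explicit with the standard rule of thumb that a time step should not let information cross a whole grid cell in one step (the Courant, or CFL, condition: dt no larger than dx divided by the speed of the process). The process speeds below are rough order-of-magnitude illustrations.

```python
# Sketch of the time-step trade-off: dt <= dx / speed for each process.
# Speeds are rough illustrative values, not tuned model parameters.

dx = 100e3  # grid spacing: 100 km, as in the text

for process, speed in [
    ("ocean current",   1.0),     # m/s, order of magnitude
    ("sound wave",      340.0),   # m/s
    ("light/radiation", 3.0e8),   # m/s
]:
    dt_max = dx / speed
    print(f"{process:>15}: dt <= {dt_max:.3g} s")

# Resolving radiation explicitly would force sub-millisecond steps,
# hopeless for a century-long run, so models instead treat radiation as
# effectively instantaneous within each (much longer) time step.
```

Each such choice of step length buys feasibility at the cost of a small error, and it is the accumulation of these small errors that the text refers to.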