This is adapted and extended from ideas I first put forth in a keynote delivered at MODSIM World Canada in Montréal in June 2010.
On the Navajo Indian Reservation in northern Arizona is a place called Long House Valley. This valley was inhabited by a people known as the Anasazi from 1800 BC to 1300 AD, at which point they abandoned it for reasons that remain unclear (1). This abandonment is a mystery that archaeologists have been studying for decades. Why did the Anasazi disappear? There’s no written record, and no obvious catastrophe — no meteors, no volcanoes, no anachronistic herds of saber-toothed cats. Traditional archaeological approaches haven’t yielded a definitive answer. We know what killed the dinosaurs 65 million years ago, but we don’t know why the Anasazi left Long House Valley 700 years ago.
We can’t go back and observe the Anasazi directly. But what if we could simulate their society in an attempt to understand what happened? As it happens, many researchers have done just that.
Long House Valley has been described as “one of the icon models of the agent-based modeling community” (2). It’s relatively small (just 96 square kilometers) and well-bounded. There is a rich paleoenvironmental record that can be used as the basis of simulations. And there’s a mystery to be solved. As a result, numerous Anasazi simulations have been built. These simulations typically cover periods of hundreds of years and model everything from family size and composition to population growth, weather patterns, agricultural productivity, and the like.
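To make this concrete, here is a deliberately tiny agent-based sketch in the spirit of these models. Every number in it — the household maize requirement, plot yields, the drought penalty — is invented for illustration; the published Long House Valley models are far richer and are calibrated against the actual paleoenvironmental record.

```python
import random

MAIZE_NEED = 800       # kg of maize one household consumes per year (invented)
DROUGHT_FACTOR = 0.7   # harvests fall to 70% of normal in drought years (invented)

class Household:
    def __init__(self, rng):
        self.plot_yield = rng.uniform(700, 1300)  # farm plots vary in quality
        self.storage = MAIZE_NEED                 # start with a year of reserves

    def step(self, drought):
        """One simulated year: harvest, consume, cap storage.
        Returns False if the household runs out of food and emigrates."""
        harvest = self.plot_yield * (DROUGHT_FACTOR if drought else 1.0)
        self.storage = min(self.storage + harvest - MAIZE_NEED, 2 * MAIZE_NEED)
        return self.storage >= 0

def simulate(years=200, drought_start=100, n_households=200, seed=1):
    rng = random.Random(seed)
    households = [Household(rng) for _ in range(n_households)]
    for year in range(years):
        households = [h for h in households if h.step(year >= drought_start)]
    return len(households)

print(simulate())                   # population after a century of drought
print(simulate(drought_start=200))  # the same run with no drought at all
```

Even this toy version exhibits the qualitative behavior discussed below: a long drought thins the population sharply, yet some households on good plots remain viable.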
What have we learned from these simulations? What they tend to show, consistently, is that environmental factors by themselves don’t explain the complete abandonment of Long House Valley. There was a 300-year drought in North America beginning around 1150 AD, and that seems to have contributed to the departure of the Anasazi. There was a drop in water table levels that also contributed. And overfarming seems to have taken its toll. But even with all these factors taken into account, the valley could still have supported a smaller population.
In other words, despite all the work that has been done, there is still a significant mystery attached to Long House Valley. But thanks to simulation, the mystery is smaller. It’s not why the Anasazi left, but instead, and more precisely, why all the Anasazi left. A logical candidate is social pressure, but that remains to be addressed by future simulations.
Can we ever prove what happened to the Anasazi via simulation? No. Simulation can tell us what might have happened, what likely happened, but it can’t tell us definitively what did happen. But knowing what might have happened, what likely happened — these are valuable in and of themselves. They help us narrow our future efforts. They provide a base upon which future researchers can build. And as agent-based computational simulations improve in quality, and as more and more independently developed simulations return similar results, we can move from likely to probably.
Studying the reasons for the disappearance of the Anasazi probably doesn’t hold a great deal of relevance for most people on a daily basis. But it does point to something that is important to many people: the use of simulation to understand things we otherwise can’t.
The use of simulation as a scientific research tool is spreading rapidly, from simulations of sociological events like the disappearance of the Anasazi to simulations of galactic formation taking place over tens of millions of years; from simulations of rat brains at a level of detail sufficient to replicate their functioning to simulations of global climate patterns and how they are changing (and might change) as the result of human activity. This presents tremendous opportunities for us to know the previously unknowable. The problem with this new new kind of science (with apologies to Stephen Wolfram) is that simulation has not traditionally been a component of the scientific method.
The first person I heard articulate the problem of integrating simulation into the scientific method was Dr. Rick Satava, Professor in the Department of Surgery at the University of Washington. As he put it in the abstract to his 2005 paper on the subject (3):
The scientific method has been the mainstay of scientific inquiry and clinical practice for nearly a century. A new methodology has been emerging from the scientific (nonmedical) community: the introduction of modeling and simulation as an integral part of the scientific process. Thus, after the hypothesis is proposed and an experiment is designed, modern scientists perform numerous simulations of the experiment. An iterative optimization of the design of the experiment is performed on the computer and is seen in virtual prototyping and virtual testing and evaluation. After this iterative step, when the best design has been refined, the actual experiment is conducted in the laboratory. The value is that the modeling and simulation step saves time and money for conducting the live experiment. The practice of medicine should look to the tools being used by the rest of the scientific community and consider adopting and adapting those new principles.
As Rick says in the talk he gives based on these concepts, “Would you rather run 8 iterations of an experiment, or 28 iterations?”
Rick’s focus is on medicine, so he sees (if I understand him correctly) simulation as a way to shrink the experimental space when the experiment moves from the virtual to the real. In other words, if we can use simulation to discard thousands or even millions of scenarios, we can focus our limited dollars for expensive real-world experimentation on the most promising possibilities. But what happens when real-world experimentation isn’t possible, as in the case of the Anasazi?
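Rick’s workflow can be sketched as a screening loop: score a large space of candidate experiments with a cheap simulation, then spend the real-world budget only on the most promising few. The scoring function below is a made-up stand-in, not any particular lab’s model, and the parameter names (`dose`, `duration`) are purely illustrative.

```python
import random

def simulated_outcome(dose, duration, rng):
    """A hypothetical, cheap-to-evaluate model of an experiment's payoff."""
    return dose * duration - 0.1 * dose ** 2 + rng.gauss(0, 1)

def screen(candidates, budget, seed=0):
    """Rank every candidate design in simulation; return the shortlist
    worth running as real (expensive) experiments."""
    rng = random.Random(seed)
    scored = [(simulated_outcome(d, t, rng), (d, t)) for d, t in candidates]
    scored.sort(reverse=True)
    return [params for _, params in scored[:budget]]

# A thousand virtual runs are cheap; only `budget` real experiments follow.
candidates = [(d, t) for d in range(1, 51) for t in range(1, 21)]
shortlist = screen(candidates, budget=8)
print(shortlist)
```

The design choice is the point: simulation discards the vast majority of the space before a single real-world dollar is spent.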
The problem isn’t limited to just the study of historical events such as those at Long House Valley. Climate change is an area of tremendous interest to researchers right now. Arguably, one of the reasons that humanity has yet to address climate change with a level of seriousness appropriate to the findings of the scientific community is that scientists can’t prove beyond a shadow of a doubt whether carbon dioxide levels will continue to rise, or what will happen to the Earth if they do. Proof is difficult to come by when the duration of a real-world experiment stretches beyond weeks and months into decades and even centuries. Or take the challenge of understanding how galaxies form. We have theories about how they do so, and these theories often involve galactic collisions that take place over tens of millions of years. How can we “prove” our theories when the timescale of the experiment is an order of magnitude (or more) greater than human beings have existed?
I believe that we’re going to have to come to a new understanding of the scientific method. For domains in which we can validate the results of a simulation using real-world experimentation, Rick’s description of Stephen Wolfram’s methodology is good: “Build the computer model, add the data from a real world experiment, see if the results match real-world expectations, change the input data to more closely approximate the model, and run the next iteration… until there is concurrence with the evidence of real-world results.” But for domains in which real-world validation is impractical for one reason or another, we will have to agree on a standard of evidence that allows us to accept a theory as provisionally proven, as the best evidence available at the time, and move on, knowing that revisions may be necessary as better evidence is developed.
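The iterate-until-concurrence loop in that description can be reduced to a few lines. The linear “model” and the single observation below are placeholders; real calibrations fit many parameters against many measurements, but the control flow is the same.

```python
def model(param, x):
    """Stand-in simulation: in practice this is the expensive computer model."""
    return param * x

def calibrate(observed_x, observed_y, tolerance=1e-6, max_iters=1000):
    """Adjust the model's parameter until simulated output matches the
    real-world measurement to within tolerance, then stop."""
    param = 1.0
    for _ in range(max_iters):
        error = model(param, observed_x) - observed_y
        if abs(error) < tolerance:          # concurrence with real-world results
            return param
        param -= 0.1 * error / observed_x   # nudge the parameter toward the data
    return param

print(calibrate(observed_x=4.0, observed_y=10.0))
```

Note what the stopping rule encodes: the loop terminates on agreement with the real-world measurement, which is exactly the step that becomes problematic when no such measurement can be taken.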
The obvious danger with this approach is that we could find ourselves building scientific houses of cards. Simulations based on inaccurate starting data, inaccurate simulation algorithms, or both, will produce inaccurate results. It is entirely conceivable that disparate researchers around the world could simulate the same events using different techniques and come to such similar results that all concerned accept those results as provisionally proven and begin building upon them — and yet all of those researchers could be wrong, even profoundly so.
This will be a growing challenge to the simulation and scientific communities for many years to come. But the rewards are too great not to solve the problem. And I’m sure that we are — collectively — up to the task.
References
1. Axtell RL, Epstein JM, Dean JS, Gumerman GJ, et al. Population growth and collapse in a multiagent model of the Kayenta Anasazi in Long House Valley. Proceedings of the National Academy of Sciences. 2002;99:7275-7279.
2. Janssen MA. Understanding artificial Anasazi. Journal of Artificial Societies and Social Simulation. 2009;12:13.