[I recently wrote here on the question of the future of science and the scientific method and how they would be influenced by simulation. In that entry, I referred to Dr. Rick Satava, Professor Emeritus at the University of Washington, who has written more on this topic than anyone I know. Rick was kind enough not only to read my piece but to comment quite thoughtfully on it. I asked him for permission to post his comments as a guest blog entry and he graciously agreed. Rick's comments are below. -- Frank Boosman]
Guest Post: Comments on Simulation-Based Science
Dr. Rick Satava, Professor Emeritus, University of Washington
I purposefully chose 10⁸ because scientists (especially in healthcare, or those involved in clinical or other research with patients or human subjects) routinely use 8 to 10 or so subjects (n) on the first iteration of an experiment and, if the results are good, then go higher, especially in genetics and similar complex fields. So if n = 10⁸, that is 100 million experiments. By doing something 100 million times (each run a little different from the one before, as in Monte Carlo analysis), we can optimize, as you point out, for the most likely best results. But just as importantly, there are usually a few among the millions of results of the simulation that don’t ‘fit’ the hypothesis. (Read The Black Swan by Nassim Taleb about one-in-a-million events, and consider the implications for creativity, and perhaps for generating intelligence, by ‘discovering’ the random events (outliers) that point to where the new discovery should be, as opposed to the other 999,999 results that support one’s conventional idea.) This is possible now, especially when we have, literally, supercomputers at our fingertips thanks to parallel computing, the grid, and other technologies.
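A minimal sketch of the kind of Monte Carlo ‘virtual experiment’ loop described above (the virtual_experiment function, the ‘law’ it assumes, and the run count, scaled down to 10⁶ for speed, are illustrative stand-ins, not any particular experiment):

```python
import numpy as np

# Illustrative Monte Carlo "virtual experiment" driver; all numbers are
# placeholders (run count scaled to 10**6 for speed, standing in for 10**8).
rng = np.random.default_rng(42)
n_runs = 10**6

def virtual_experiment(perturbation: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for one simulated experiment: an assumed 'law'
    that predicts 1.0, plus the random perturbation of this particular run."""
    expected = 1.0
    return expected + perturbation

# Each of the million runs differs a little from the one before.
results = virtual_experiment(rng.normal(0.0, 0.05, n_runs))

best = results[np.argmin(np.abs(results - 1.0))]        # optimization: closest fit to the law
outliers = results[np.abs(results - 1.0) > 4 * 0.05]    # the few runs that don't 'fit'
print(f"best fit: {best:.4f}; outliers to examine: {outliers.size} of {n_runs}")
# The handful of outliers, not the ~million confirming runs, are the
# candidate 'creative new ideas' discussed below.
```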
Using the above methodology of massive simulation is an attempt to bring creativity, imagination, intuition, etc., into the mainstream of science and the ‘scientific method’. I concur with Thomas Kuhn and Karl Popper that “the scientific method is dead”: each new age of science (e.g., Classical, Renaissance, Age of Enlightenment, and Industrial Age) not only brought new technology, but depended upon an extension of the scientific method in order to accomplish the next revolution in science; hence Classical (observation), Renaissance (phenomenology and taxonomy), Enlightenment (experimentation), and Industrial (the current hypothesis-driven scientific method). Each ‘age’ does not destroy the previous concept of science, but rather ‘stands on the shoulders of those who have gone before’ to create a more comprehensive understanding (and process) of the scientific method. (The scientific method is dead; long live the [new] scientific method.) The Information Age is actually extending the scientific method through the use of simulation to integrate creativity, etc., as a formal part of the discovery process, as proposed above. Hence, as a structured approach to scientific discovery, massive simulation of current natural ‘laws’ to (re)prove their validity will produce outliers. Rather than being discarded, these outliers are ‘chosen’ (as a human brain would choose them) as the ‘creative new idea’ that becomes the hypothesis, the beginning, of the scientific method.
Why is the outlier important, and why does it occur? One possible explanation is that as our (incomplete) understanding of science and the natural world expands, that understanding is becoming more complex. Initially an observation results in a fact, which is then investigated to reveal that the fact is actually part of a system-of-facts (a phenomenon); after further investigation, that system-of-facts turns out to be but a small part of multiple other systems (i.e., a system-of-systems; hence the investigation of science as ‘systems-of-systems’). As the level of complexity increases with multiple different associated systems, not only does the number of known facts increase, but ‘emergent properties’ of the system-of-systems arise, through an unknown process, that are not properties of the sum of the individual systems: the accepted belief that ‘the whole is greater than the sum of the parts’. In essence, an emergent property is the unknown property, the ‘association’ (whatever that might be), that ‘binds’ two (or more) complex systems together.
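As a toy illustration of an emergent property (my own example, not anything from the post: the Kuramoto model of coupled oscillators, with arbitrarily chosen parameters), no single oscillator ‘contains’ synchrony, yet once coupling is strong enough the population as a whole locks into a common rhythm, a property of the system-of-systems rather than of any individual part:

```python
import numpy as np

# Kuramoto model: n oscillators, each following only a local rule
# (its own frequency plus a pull toward the population's mean phase).
rng = np.random.default_rng(0)
n, coupling, dt, steps = 200, 4.0, 0.01, 5000
freqs = rng.normal(0.0, 1.0, n)           # each oscillator's natural frequency
phases = rng.uniform(0.0, 2 * np.pi, n)   # random initial phases

for _ in range(steps):
    mean_field = np.mean(np.exp(1j * phases))     # order parameter r * e^(i*psi)
    r, psi = np.abs(mean_field), np.angle(mean_field)
    phases += dt * (freqs + coupling * r * np.sin(psi - phases))

print(f"order parameter r = {np.abs(np.mean(np.exp(1j * phases))):.2f}")
# r close to 1 signals emergent synchrony; with coupling = 0.0 it stays near 0.
```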
Is there any practical value to this concept? Of course. By challenging current ‘irrefutable’ laws or common knowledge, we can re-evaluate knowledge that has been proven (validated) through hundreds or thousands of experiments. This is done by running millions or hundreds of millions of ‘virtual experiments’ with computational simulation and looking for the outliers, as indicated above. One could consider this approach a scientific methodology for generating a new hypothesis (artificial creativity). There are various ways of inserting the outlier (the new idea) into the scientific method, through analogy, metaphor, exception to the rule, etc., as the expression of a new hypothesis, which is then proven or disproven through the scientific method in real-world experimentation.
A current approach in scientific investigation is to use a multi-disciplinary approach and look at a problem or unknown phenomenon from many different ‘views’, i.e., the ‘360 degree’ approach. While I was at DARPA, an interesting concept arose for discovering something new, based upon the observation that as a new idea is refined, validated, expanded, and investigated through multiple iterations, each iteration yields less and less improvement, until a final ‘product’ emerges that improves little if at all. This is a ‘pendulum effect’, in which an idea, a product (or a company) has reached its maximum potential, and unless there is a change in the product or approach through innovation, it will become irrelevant or obsolete, or be replaced by a new and better one: Clayton Christensen’s “disruptive technology”. Awareness of this concept has revitalized Schumpeter’s expansion of the Marxist economic theory of ‘creative destruction’ (old systems of wealth are destroyed so new systems can arise), applying it to business and now to scientific inquiry as well.
On a separate yet parallel note (pun intended) to the previous concept of supercomputing (through massively parallel processing), about two-thirds of our brains are devoted to ‘visualization’, i.e., using the occipital lobe for ‘acquiring the image’ (retinal stimulation by light) and the forebrain for ‘interpreting the image’ (perception, defined as the interpretation of retinal sensory signals). This ability to organize neural signals into a ‘model’ is what distinguishes primates (principally humans) from lower species. Data show that bees navigate by serial processing (orienteering from one point to the next, without a global model or map of their world). Humans, however, likely use parallel processing, massively comparing one image (a picture is worth a thousand words) to similar stored images using visual pattern matching. This hypothesis seems to be supported by fMRI [functional magnetic resonance imaging] and DTI [diffusion tensor imaging] studies of brain tracts rather than individual neurons. Pattern matching is also what intelligent image-processing analytic programs, say for mammograms, use to discover critical features within the very ‘noisy’ environment of the breast. I believe most humans almost always parallel process, especially for complex problems like visualizing or consolidating/comparing abstract concepts.
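A small sketch of that visual pattern-matching idea (a generic normalized cross-correlation template match on synthetic data; it makes no claim about how any actual mammography software works):

```python
import numpy as np

def match_template(image: np.ndarray, template: np.ndarray) -> tuple[int, int]:
    """Slide the template over the image and return the top-left corner of the
    best match, scored by normalized cross-correlation (Pearson correlation)."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-9)
    best_score, best_pos = -np.inf, (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            patch = image[i:i + th, j:j + tw]
            p = (patch - patch.mean()) / (patch.std() + 1e-9)
            score = float(np.mean(p * t))
            if score > best_score:
                best_score, best_pos = score, (i, j)
    return best_pos

# Toy usage: hide a faint Gaussian blob in a noisy 64x64 image and recover it.
rng = np.random.default_rng(1)
image = rng.normal(0.0, 1.0, (64, 64))
y, x = np.mgrid[-3:4, -3:4]
blob = 5.0 * np.exp(-(x**2 + y**2) / 4.0)
image[20:27, 40:47] += blob
print(match_template(image, blob))   # should print (20, 40), or very close
```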
So, the point to all of this is that optimization of a theory (or an idea) through the use of supercomputers for simulation might be a more ‘natural’ approach (i.e., the way the human brain works) in an absolutely fundamental way, supporting your hypothesis that the human brain is like a simulation machine. In addition, simulation is the method or process by which creativity occurs, and combining simulation with ‘creative destruction’ may be one pathway for innovation. And I can answer your contention about disambiguating multiple similar possibilities with emotion, culture, politics, etc. that seem irrational and not susceptible to simulation — it is a matter of context. Hence, all of life is a simulation. (The movie Total Recall, anyone?)
Thus, simulation not only allows us to optimize (almost predict) the best alternative in real time or near real time (reducing the expense of real-world experimentation), but may also be the basis of creativity: instead of ignoring the ‘outliers’, we explore them as the way to discover something new, something that does not fit into what is ‘absolute truth’ based upon eons of ‘evidence’. The black swan, according to the pre-Renaissance world, absolutely could not exist; the evidence was that in thousands of years no one had ever seen a black swan, therefore one could not be possible. One day a black swan appeared, and since then, although they are uncommon (unless specifically bred), they clearly do exist. (I have actually seen some in Stratford-upon-Avon, England.) Massive simulation is the essence of creativity (intuition, imagination, discovery, etc.). Ordinary ‘observation’ or small ‘models’ will rarely provide a ‘black swan-like’ result, but simulation usually will, if it is massive enough. We are exploring this in the next generation of computing called ‘big data’, well beyond meta-analysis.
So why is the human brain like a simulation machine? Because it runs the same process (software program) but is a billion times more powerful (and faster, hence real-time) than existing computers. This is because it processes, in parallel, billions of neurons, each connecting with thousands of other neurons (on the order of (4 × 10⁹)!, i.e., 4 × 10⁹ factorial, possible combinations), which makes Avogadro’s number (6.02 × 10²³) seem infinitesimally small. That is how we will eventually simulate emotions, culture, politics, and all the other seemingly impossible things to simulate. You might want to look into Dylan Schmorrow’s human social, cultural, behavioral (HSCB) program, which he started while at DARPA and which now continues as a totally new office in the Office of Naval Research in the Department of Defense.
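A quick back-of-the-envelope check of that comparison, taking the 4 × 10⁹ figure at face value:

```python
import math

# How big is (4 x 10^9)! compared with Avogadro's number (6.02 x 10^23)?
n = 4e9
log10_factorial = math.lgamma(n + 1) / math.log(10)     # log10 of n! via lgamma
print(f"(4e9)! has roughly {log10_factorial:.2e} decimal digits")             # ~3.7e10 digits
print(f"Avogadro's number has {math.floor(math.log10(6.02e23)) + 1} digits")  # 24 digits
```

Even counting digits rather than values, the factorial dwarfs 10²³ by ten orders of magnitude, which is the point about the combinatorial capacity of massively parallel neural connections.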