These days, when I think about simulation, I tend to find myself thinking about interconnected simulations. I do think about individual simulations, but more and more it seems like I want to find a way for those simulations to talk. “Why?”, you might ask. Well, it comes down to my approach to code.
I started my programming life in procedural languages. It was simple and straightforward. Do this. Do that. Nothing to it. Eventually, though, I started to think of code as little life forms, and those life forms would talk. This is probably because I was in college working toward a career in molecular biology at the time. Then I heard about object-oriented programming. It was a presentation that Bill Gates gave at BMUG (the Berkeley Macintosh Users Group) in 1989 or 1990. I walked away from that night with many thoughts about how my life forms (now objects) could talk and inherit and grow.
Years later, when I was actually working as a developer, I would always try to think in terms of objects. Keep everything discrete and simple. Don’t make an object do more than it needs to do. Provide clean, simple interfaces for how those objects interact. These were my silent orders to myself. They’ve carried forward to today.
So back to simulations…
Since I do a lot of my work in game engines, it’s patently obvious that there are multiple simulations running in tandem. In a first-person 3D game, as my player moves around within the game world, a physics simulation is running to handle collisions. The entire 3D visualization system is a simulation of the environment. AI is managing the non-human characters in the environment. And all of these combine to form the overall experience. They do so through carefully crafted interfaces and a well-developed framework. That said, it’s insanely easy for things to go off the rails. Modern games are very complex systems. So complex that it’s very difficult, if not impossible, for any one person to keep the entire system in their head. This means that when two, three, or fifty subsystems are interacting with a single in-world object, they can end up pushing and pulling in different directions. One just needs to search for “World of Warcraft exploits” to see hundreds of examples of this phenomenon.
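A toy sketch of that push-and-pull problem (all names here are hypothetical, not from any real engine): two subsystems each adjust the same object every frame, each correct in isolation, and the combination drifts.

```python
# Two subsystems share one in-world object and tug on it per frame,
# unaware of each other.

class WorldObject:
    def __init__(self):
        self.y = 0.0  # height above the ground

def physics_step(obj, dt):
    # Gravity pulls the object down.
    obj.y -= 9.8 * dt

def ai_step(obj, dt):
    # An AI "hover" behavior pushes it back up -- slightly harder.
    obj.y += 12.0 * dt

obj = WorldObject()
for _ in range(60):          # one simulated second at 60 fps
    physics_step(obj, 1 / 60)
    ai_step(obj, 1 / 60)

# Neither subsystem is buggy on its own, but together the object
# floats steadily upward -- the kind of emergent behavior players
# turn into exploits.
print(round(obj.y, 2))  # 2.2
```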
That’s games, but what about “real” simulations, and why do simulations need to interact?
Let me start by introducing you to two current simulation projects.
OpenWorm is an interesting project designed to fully simulate the nematode Caenorhabditis elegans. Their choice of target is smart. C. elegans has about 1000 cells in its whole body, including 302 neurons and 95 muscles, linked by 50k synapses. (1) That’s not many elements to simulate. Even better, C. elegans has been heavily studied. There is a ton of data on which to base the simulations.
The project seems to have been started by two Ph.D. students at UC San Diego: Stephen Larson and Marius Buibas. Nowadays, at least according to their web site, it has 10 active developers and six “contributors” (whatever that means). This is what’s supposed to happen when you open source something cool.
OpenWorm looks promising. There’s even a Chrome Experiment for exploring the visual model called the OpenWorm Browser. YMMV if you open this in something other than Chrome.
Then there’s a team at Stanford that has fully simulated all the systems in a Mycoplasma genitalium cell. (2) M. genitalium is a single-cell pathogen with 525 genes. Compare that with E. coli’s 4288 genes. So again, the target chosen is smart. This project dives as deep into the details as it can. Every gene is included in the simulation, and every gene’s function is there, too.
For reference, that’s a ton of computation. They’re running on a cluster of 128 computers, and it takes them 10 hours to perform a complete cell division. Coincidentally, that’s how long it takes M. genitalium to split in real life. E. coli divides a couple of times an hour, though, so it isn’t some rule of the universe we’re seeing.
If I look at these two different simulations, I see two beautiful accomplishments, but they’re still islands unto themselves. They’re bespoke. What if I want to integrate the two simulations? It would be another very large task. Let’s imagine for a second that the simulation of M. genitalium was actually a simulation of a C. elegans muscle cell. If the projects didn’t plan ahead, then when they finished and had to integrate, they would likely be in for a huge amount of work.
That shouldn’t be the case. There should be a way for us to design simulations with defined inputs and outputs, ones that describe how and what we’ve simulated and how it fits into the (virtual) world.
If we have that then we get more accuracy. We get more expandability. We get strata.
So that’s where it gets really interesting to me. Imagining a time where we can easily interconnect simulations makes all the individual simulations more valuable. With OpenWorm, I’m simulating a worm moving through the earth, eating, and living. If I’m interested in studying the soil in compost piles, then I grab a simulation of a compost pile, drop OpenWorm in, and the compost pile will use OpenWorm instead of its more limited C. elegans representation. If I want to understand more about how C. elegans breaks down its food as it is chomping through that compost, and OpenWorm only has a simplistic simulation of that process, I’d just drop in a more detailed simulation of that process and OpenWorm would start using it. The phrase I like for this concept is “simulation strata.” (3)
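Here’s a minimal sketch of what such a strata contract might look like. Every name in it is hypothetical, not from OpenWorm or the Stanford project: each simulation declares its inputs and outputs, so a coarse model can be swapped for a detailed one behind the same interface without the host simulation noticing.

```python
class Simulation:
    """Common contract: advance one step given named inputs,
    return named outputs."""
    inputs: tuple = ()
    outputs: tuple = ()

    def step(self, dt, **inputs):
        raise NotImplementedError

class SimpleDigestion(Simulation):
    # Coarse placeholder: a fixed fraction of food becomes energy.
    inputs = ("food_mass",)
    outputs = ("energy",)

    def step(self, dt, food_mass=0.0):
        return {"energy": 0.1 * food_mass}

class DetailedDigestion(Simulation):
    # Drop-in replacement honoring the same contract, with internal
    # state the host never needs to know about.
    inputs = ("food_mass",)
    outputs = ("energy",)

    def __init__(self):
        self.gut_contents = 0.0

    def step(self, dt, food_mass=0.0):
        self.gut_contents += food_mass
        digested = min(self.gut_contents, 0.5 * dt)  # rate-limited
        self.gut_contents -= digested
        return {"energy": 0.2 * digested}

def run_worm(digestion: Simulation, meals, dt=1.0):
    # The host simulation only sees the contract, never the internals.
    return sum(digestion.step(dt, food_mass=m)["energy"] for m in meals)

coarse = run_worm(SimpleDigestion(), [1.0, 1.0, 1.0])
detailed = run_worm(DetailedDigestion(), [1.0, 1.0, 1.0])
```

The point isn’t the (made-up) digestion math; it’s that `run_worm` is written against the contract, so either model slots in unchanged.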
If we start building our simulations with the idea that they sit somewhere in the strata, then we will have far more interesting simulations. How does that happen? How do we define cells relative to galaxies? I don’t know yet, but I’m certain we can.
References below…