Chaos and Aging
A super simple overview of quantifying instability, and what it has to do with aging
UPDATE 04/15/2015: Made some changes to the equation formatting and the plotting. I’ve got a new project and this whole post became much more relevant.
Consider the following situation: you are a computer scientist specializing in simulations of dynamical systems, such as the weather surrounding your city or the trajectories of multiple planets affected by each other's gravitational pulls. This is crucial to your job – you work for the weather forecast, and also part time in an intergalactic mastermind's venture to attack and colonize the Milky Way. Thanks to extensive scientific research and your own ingenuity, your simulations can perfectly predict how these systems change over time, given any set of starting parameters. However, you must still be extremely careful in using them to predict the real world, or you may end up with a wildly inaccurate, completely different prediction. This is due to the simple fact that these systems are chaotic. More precisely, the systems mentioned above are deterministic yet chaotic. Deterministic means that they are, in fact, not random – given any initial conditions, there is one and only one way the system will pan out. Chaotic means they display aperiodic behavior over time – there's no observable pattern in how they behave – and they are extremely sensitive to small changes in the initial conditions. Of course, it's impossible to measure the exact parameters of the real world with zero margin of error, such as knowing the temperature of a room to infinite decimal places, and any small difference in the initial conditions will gradually grow into a larger and larger difference. As a result, one of the most important tasks when dealing with chaos is finding out: after what period of time is it simply not worth trying to predict the future?
But chaos goes much deeper than simply being unpredictable – the whole phenomenon and field of study has its roots in differential equations and dynamical systems, the very language used to describe how any physical system evolves in the real world. This video aims to tell the story of chaos step by step, from simple non-chaotic systems, to different types of attractors, to fractal spaces and the language of unpredictability. A dynamical system involves one or more variables that change over time according to autonomous differential equations. For example, say a system has two variables, x and y. Then x dot is the rate of change of x over time, and y dot is the rate of change of y over time – keep in mind that even though the notation doesn't show it, x and y depend on the independent variable t, which stands for time, and this dot notation can only be used when the independent variable is time. However, notice that the differential equations for x dot and y dot don't actually involve t, only x and y. This makes them autonomous: each combination of x and y corresponds to exactly one combination of x dot and y dot. As a result, there's a very convenient geometric and visual way of representing a dynamical system, known as the phase space. This is a Cartesian space whose axes are the system's variables. Each point in the space is a unique state of the system, and has its own rate of change, which can be drawn as a vector. For this specific system, the vector field looks like this. Let's scatter a bunch of random points around to represent different possible states, and watch how they evolve and move around. It seems like they all spiral towards the center. This brings us to our next topic. An attractor is a set of points in the phase space which attracts all the trajectories in a certain surrounding region, known as its basin of attraction.
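The spiral-to-the-origin behavior can be sketched in a few lines. Note the specific equations here (x dot = y, y dot = −x − y) are my own stand-in for a stable spiral, since the example system isn't written out in the text, and crude forward Euler steps stand in for a proper integrator:

```python
import numpy as np

# A 2D autonomous system chosen (as an assumption) to produce a stable spiral:
#   x_dot = y
#   y_dot = -x - y
def step(state, dt=0.01):
    x, y = state
    return np.array([x + y * dt, y + (-x - y) * dt])  # forward Euler step

# Scatter random initial states around the phase space and evolve them;
# every trajectory should spiral in towards the origin.
rng = np.random.default_rng(0)
points = rng.uniform(-2, 2, size=(20, 2))
for _ in range(5000):
    points = np.array([step(p) for p in points])

print(np.abs(points).max())  # every point is now essentially at (0, 0)
```

Running this confirms the picture: after enough time, all twenty states have collapsed onto the fixed point at the origin.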
Here, the attractor is just the origin, and the basin of attraction is the entire space. Notice that at the origin, where x = 0 and y = 0, x dot and y dot equal 0 too – that makes it a fixed point, because any trajectory that reaches it stays there forever, its rate of change being zero. Since it's also an attractor, it's a fixed point attractor. Trajectories that get sucked into an attractor never escape – this seems to reflect a kind of inevitability and predictability, the system always ending up a certain way no matter what – so how could this be related to chaos? Well, don't worry, there are other types of attractors… There was a time when computers were made using vacuum tubes. Balthasar van der Pol worked as an electrical engineer at Philips during the 1920s, and it was while studying vacuum tubes that he stumbled upon a system of differential equations that exhibited some interesting behavior, and would later be known as the Van der Pol oscillator. The original equations had a parameter that x dot was multiplied by and y dot was divided by, but here the parameter has been chosen to be 1, for a simpler set of equations. If we plot this system's phase plane, we can see that, interestingly, trajectories everywhere seem to approach a loop around the origin. This loop is known as a limit cycle attractor, and it's an important example showing that attraction does not necessarily mean trajectories stop at a single point. Limit cycles often appear in models of physical phenomena that involve some sort of oscillation, and van der Pol's equations are no different. Beyond electrical circuits, they have been used to model things like two tectonic plates at a geological fault, or the mechanics of human vocal cords. But… this is still not quite chaotic. There are two more ingredients.
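A minimal sketch of the limit cycle, using the form just described with the parameter set to 1 (x dot = x − x³/3 − y, y dot = x); the starting points, step size, and run length are my own arbitrary choices:

```python
import numpy as np

# Van der Pol oscillator with the parameter set to 1:
#   x_dot = x - x**3 / 3 - y
#   y_dot = x
def simulate(x, y, dt=0.001, steps=60000):
    trail = []
    for _ in range(steps):
        x, y = x + (x - x**3 / 3 - y) * dt, y + x * dt
        trail.append((x, y))
    return np.array(trail)

a = simulate(0.1, 0.0)  # starts just off the unstable fixed point at the origin
b = simulate(4.0, 4.0)  # starts well outside the loop

# After transients die out, both orbits trace the same closed loop:
# the amplitude of x settles near 2 for both trajectories.
print(abs(a[-20000:, 0]).max(), abs(b[-20000:, 0]).max())
```

One trajectory spirals outward from near the origin, the other is pulled inward, and both end up circling the same loop forever – attraction without a fixed point.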
In 1963, meteorologist Edward Lorenz was developing a simulation of atmospheric conditions involving 12 changing variables, and found that tiny differences in the initial values resulted in disproportionately big differences in the state of the variables just a short time later. Curious about this, Lorenz spent some time simplifying his simulation down to only 3 variables while still preserving this sensitive dependence on initial conditions. The simplified model describes convection cycles in the atmosphere, and is now known as the Lorenz system. It's the poster child of chaos theory, sometimes almost synonymous with the butterfly effect or with the field of chaos theory itself – it even looks like a butterfly. The Lorenz equations have a few parameters that can be tweaked to alter the behavior of the system, but we'll be using the classic values ρ = 28, σ = 10 and β = 8/3. The result is what's known as a strange attractor, and here's what that means. A strange attractor is one that has a fractal structure. No point in the space is ever visited more than once by the same trajectory – if that happened, the trajectory would travel in a predictable loop. And no two trajectories ever intersect – if that happened, they would merge into the same path, giving two different sets of initial conditions the same outcome. Think about what that means: a single trajectory visits an infinite number of points in this limited space, and the limited space holds an infinite number of trajectories. Now, trajectories are just curves, so they should be 1-dimensional, right? But then how come, no matter how far you zoom in on this attractor, you can always find more and more trajectories, everywhere? That's why this attractor is said to have a non-integer dimension – it's made up of infinitely long curves in a finite space, which are so detailed that they start to partially fill up higher dimensions.
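Here's a rough sketch of integrating the Lorenz equations with those parameter values; the initial state and the crude Euler stepping are my own choices:

```python
import numpy as np

# Lorenz system with the standard parameters: sigma = 10, rho = 28, beta = 8/3.
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def lorenz_step(x, y, z, dt=0.001):
    dx = SIGMA * (y - x)
    dy = x * (RHO - z) - y
    dz = x * y - BETA * z
    return x + dx * dt, y + dy * dt, z + dz * dt

# A single trajectory never settles and never repeats, but it stays bounded,
# tracing out the butterfly-shaped attractor.
x, y, z = 1.0, 1.0, 1.0
path = []
for _ in range(50000):
    x, y, z = lorenz_step(x, y, z)
    path.append((x, y, z))
path = np.array(path)

print(path.min(axis=0))  # the trajectory stays inside a bounded region...
print(path.max(axis=0))  # ...but keeps oscillating within it indefinitely
```

Plotting `path` in 3D (e.g. with matplotlib) is the quickest way to see the two butterfly wings; the trajectory hops between them irregularly, with no discernible pattern.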
It's not 1-dimensional, 2-dimensional, or 3-dimensional – its dimension is somewhere in between. As a result of this non-integer dimension and detail at arbitrarily small scales, the set of points in the Lorenz attractor is a fractal, and that's why it's a strange attractor. A strange attractor isn't necessarily chaotic, but a chaotic attractor will always be strange, and the Lorenz attractor is a strange chaotic attractor. Watch what happens when I highlight two trajectories that start a very small distance apart. It doesn't take long for them to diverge so much as to be on completely different paths. It turns out that in the early stages of this divergence, while the two trajectories are still close to each other, the distance between them grows exponentially: after a time t, the separation is the initial separation times e to the power of λt – up until a certain point, of course, since the attractor is only so big. Here, λ (lambda) is an important value known as the Lyapunov exponent. Since it sits in the exponent of e, if it's positive then any distance between trajectories will grow exponentially, if it's zero then the distance will stay constant, and if it's negative then the distance will shrink to zero. There isn't a way to find the Lyapunov exponent just by looking at the equations – it's measured by actually running the simulation, tracking many pairs of trajectories and averaging the rate of change of their separation. But it provides a simple metric for how chaotic a system is. As long as the Lyapunov exponent is greater than zero, the attractor is chaotic, and it's equal to about 0.9 for the Lorenz attractor. That's how we can figure out the length of time over which predictions remain valid, otherwise known as the predictability horizon.
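The measurement procedure described here can be sketched with the standard renormalisation trick: evolve two trajectories a tiny distance apart, record how fast they separate, and repeatedly rescale the gap back to its initial size so it always stays in the early-stage regime. The step sizes, interval lengths, and initial perturbation below are all arbitrary choices of mine:

```python
import numpy as np

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def step(s, dt=0.001):
    x, y, z = s
    return np.array([x + SIGMA * (y - x) * dt,
                     y + (x * (RHO - z) - y) * dt,
                     z + (x * y - BETA * z) * dt])

dt, d0 = 0.001, 1e-8
a = np.array([1.0, 1.0, 1.0])
for _ in range(20000):       # discard transients so we start on the attractor
    a = step(a, dt)
b = a + np.array([d0, 0.0, 0.0])  # a twin trajectory a tiny distance away

log_growth, t_total = 0.0, 0.0
for _ in range(500):              # 500 renormalisation intervals
    for _ in range(200):          # evolve both copies for 0.2 time units
        a, b = step(a, dt), step(b, dt)
    d = np.linalg.norm(b - a)
    log_growth += np.log(d / d0)  # accumulate the exponential growth rate
    t_total += 200 * dt
    b = a + (b - a) * (d0 / d)    # rescale the separation back to d0

print(log_growth / t_total)       # roughly 0.9 for the Lorenz attractor
```

The averaging over many intervals matters: over any single short stretch the separation can even shrink, but the long-run average converges to the positive exponent that makes the system chaotic.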
Rearranging the previous equation, the predictability horizon comes out to t = (1/λ) · ln(a/δ₀), given the initial error δ₀ and a maximum error a that we are willing to allow. In the Lorenz system, after a time of just 10, any error will have multiplied by more than 8000. To get an idea of how hard this exponential divergence makes accurate long-term prediction, let's say you had a simulation that predicts where ocean currents flow, and you wanted to keep the error under 1000 m. If you ran it twice, once with an initial error of one meter, and once with an initial error a million times smaller, at one micrometer, how much longer do you think the more accurate run would stay below the margin of error? Well, write the expressions for the two predictability horizons, put them in a fraction, simplify a little with logarithm rules, and the answer is just 3 times longer. A million times more accurate, and your simulation would be valid for, say, 9 days instead of 3. This is the kind of difficulty that any chaotic system presents in its simulation. The solar system's predictability horizon is no more than a few million years with current technology – not even a tenth of the time separating humans from the dinosaurs. The earth's atmosphere and weather are incredibly hard to predict accurately for more than a week. Perhaps people will just have to keep tuning into the forecast every day, and our home is unlikely to be colonized by an intergalactic alien civilization. Is that a good thing?
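The arithmetic above, as a quick sketch (using λ = 0.9 as for the Lorenz system; the ocean-current framing is just the thought experiment from the text):

```python
import numpy as np

# Predictability horizon: t = (1 / lambda) * ln(a / delta_0)
lam = 0.9        # Lyapunov exponent, as for the Lorenz attractor
a = 1000.0       # maximum error we are willing to allow, in metres

def horizon(delta0):
    return np.log(a / delta0) / lam

t_metre = horizon(1.0)     # initial error of one metre
t_micron = horizon(1e-6)   # initial error a million times smaller

print(t_metre, t_micron, t_micron / t_metre)  # the ratio is exactly 3
```

The ratio is exactly 3 here because ln(10⁹)/ln(10³) = 9/3: a millionfold improvement in accuracy buys only a threefold longer horizon.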
As someone studying biochemistry at Brown, I've definitely noticed that biological systems are often presented as highly ordered and regular. I know this is necessary to understand how they're supposed to work in the first place, but it does bother me that there's comparatively little focus on how such systems diverge from that ideal. We've known for thousands of years that living things eventually drift away from these consistent, periodic states – we simply called it aging and death. However, we have nothing approaching the mathematical precision with which we can describe weather systems or planetary orbits deviating from predictions. I believe that if we can properly simulate biological systems such as signaling pathways and protein interactions, we can better track how these systems fall away from the ideal. And if we can do that, maybe we can even learn how to create attractors that delay this process, or that bring the system back to a more orderly state.