"Complicated systems subsuming their complex substrate, becoming increasingly fragile till collapse becomes imminent. Our built world is complicated – not self-organizing or self-repairing; fragile. And increasingly interconnectedly fragile. And the biosphere is losing its anti-fragility from the depletion and accumulation dynamics resulting from the open loops in the complicated system. This is related to how all previous civilizations failed. The current cycle is just the first fully global-scale civilization. We need to learn how to build closed-loop systems that don’t create depletion and accumulation, don’t require continued growth, and are in harmony with the complex systems they depend on."
It is now well-established that many aspects of human behavior do not come close to maximizing what the external scientist considers that human's utility function. Yet natural selection does not stop at the neck; it is hard to imagine why the behavior of the human brain should not be close to optimal, given the same exigencies that cause the rest of a human's organs to behave close to optimally. So why does the human brain (apparently) behave so sub-optimally?
I am very interested in this issue. Especially intriguing is the idea that much of what appears to us, the external scientist, to be sub-optimal human behavior is actually optimal - once we figure out what problem that behavior is actually designed for, as opposed to what problem we think it is designed for. (From a certain perspective, that is what all game-theoretic models of altruism in terms of repeated games try to do.) See my papers on hedonic prods and persona games for some early work along these lines.
Many biological systems seem driven to have high complexity, often accompanied by low entropy. Many processes can drive these increases, including natural selection, auto-catalysis, constructive neutral evolution, and embryogenesis. The second law of thermodynamics allows these complexity-increasing processes, since biological systems are open. However, it may be that the second law in fact drives these processes, i.e., drives biological systems to have low entropy. This possibility is known as “Schrodinger’s paradox”.
Ultimately, one would like to address Schrodinger's paradox in terms of dynamical systems theory. To be more precise, let p(z) refer to the phase space density of a system at phase space position z, and let H be the Hamiltonian governing its dynamics. Then our goal is to understand what characteristics of H determine whether the resultant dynamics of p has an attractor throughout which p has high complexity, and how the dynamics of the complexity of p is determined by H.
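As a concrete starting point, once phase space is discretized, the quantity whose dynamics we want to tie to H can be computed directly. The sketch below is a toy of my own construction (complexity is provisionally proxied by the Shannon entropy of the discretized p; the function name `shannon_entropy` is mine). It also illustrates the Liouville-style constraint that deterministic, volume-preserving dynamics, modeled here as a permutation of phase-space cells, cannot change that entropy; this is part of why openness and coarse-graining matter for Schrodinger's paradox.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (in nats) of a discretized phase-space density p."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()          # normalize, so any nonnegative weights work
    nz = p[p > 0]            # 0 log 0 = 0 by convention
    return float(-np.sum(nz * np.log(nz)))

# Deterministic, volume-preserving dynamics (a permutation of cells)
# merely relabels the cells, so it leaves the entropy of p unchanged.
rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(16))       # random density over 16 cells
perm = rng.permutation(16)
assert abs(shannon_entropy(p) - shannon_entropy(p[perm])) < 1e-12
```

Any change in this quantity therefore has to come from the system being open or from coarse-graining, which is exactly the regime the paradox concerns.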
To illustrate this dynamical systems perspective in a biological context, say our system is initially described by a p within the basin of attraction of a high-complexity attractor. Say the system then experiences an external shock that knocks it to another point in that basin, one with lower complexity. After the shock, the system would start increasing its complexity back toward the value it had before the shock. Arguable examples include asteroid impacts, volcanic eruptions, etc., that cause a mass extinction, thereby reducing the complexity of the terrestrial biosphere, after which the biosphere's complexity grows back. Note though that if the shock were big enough to knock the system completely out of the basin of attraction, then the system would "die", and not recover its earlier complexity.
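This shock-and-recovery picture can be mimicked in a few lines. In the toy below (my construction, not from any specific model in the text), the "attractor" is the stationary distribution of a simple stochastic relaxation dynamics, and entropy again stands in as a crude proxy for complexity, purely to make the relaxation visible. A shock concentrates the density, the low-entropy analogue of a mass extinction, and iterating the dynamics restores it:

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

n = 8
pi = np.full(n, 1.0 / n)             # toy "high-complexity" attractor state
a = 0.2                              # relaxation rate toward the attractor
# Row-stochastic transition matrix: with prob. a, resample from pi.
T = (1 - a) * np.eye(n) + a * np.outer(np.ones(n), pi)

p = np.zeros(n); p[0] = 1.0          # shock: density collapses onto one cell
H_shocked = entropy(p)               # = 0, the low-complexity state
for _ in range(100):
    p = p @ T                        # relax back within the basin
assert entropy(p) > H_shocked        # proxy complexity has recovered
assert abs(entropy(p) - entropy(pi)) < 1e-3
```

The "death" case in the text corresponds to a perturbation that changes the dynamics itself (here, the matrix T), not just the point p, so that no amount of relaxation returns the system to the old attractor.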
The goal then is to investigate how the Hamiltonian of a system determines its high-complexity attractors, and in particular how its thermodynamic properties do so.
To start to address this we need a framework that encompasses both thermodynamics and a formalization of "complexity". A natural choice is Shannon's information theory, since, as Jaynes showed, it can be used to express statistical physics (and so thermodynamics), and it would seem to provide a possible measure of complexity. However, Shannon entropy is often dismissed as a measure of complexity, since it assigns very different values to gases and crystals, even though both are typically viewed as non-complex. This would seem to rule out the use of information theory to analyze Schrodinger's paradox.
However, like so much work with entropy, this objection is based on using a uniform prior in the definition of entropy. In a recent paper I have shown that if one is careful in specifying the prior distribution in the entropy measure, that measure actually assigns a low complexity to both crystals and gases. This opens up the possibility of using information theory to address Schrodinger's paradox.
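The paper's actual measure is not reproduced here, but the prior dependence it exploits is easy to demonstrate. In the hedged toy below, "complexity" is stood in for by the relative entropy D(p||q) of a state's density p from a prior q (the names `gas`, `crystal`, and `crystal_prior` are mine): a crystal-like peaked density looks maximally atypical under a uniform prior, yet unremarkable under a prior adapted to the system's own statistics.

```python
import numpy as np

def kl(p, q):
    """Relative entropy D(p||q) in nats; assumes supp(p) is within supp(q)."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

n = 100
gas = np.full(n, 1.0 / n)                 # spread-out, gas-like density
crystal = np.zeros(n); crystal[0] = 1.0   # sharply peaked, crystal-like density
uniform = np.full(n, 1.0 / n)

# Against a uniform prior, only the crystal looks far from "typical"...
assert kl(gas, uniform) < 1e-12
assert kl(crystal, uniform) > 4.0         # = log(100), about 4.6 nats

# ...but against a prior adapted to the crystal's statistics, it does not.
crystal_prior = 0.9 * crystal + 0.1 * uniform
assert kl(crystal, crystal_prior) < 0.2
```

The point is only that the number an entropy-based measure assigns hinges on the prior; which prior the paper argues is the right one is left to the paper itself.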