Since Bentley and Shennan’s work demonstrating that random copying processes generate power-law frequency spectra, a significant thread in cultural transmission research has focused on the shape of frequency distributions. In my previous post, I cited Mesoudi and Lycett’s (2009) paper in passing, and in this post I want to highlight an issue that constitutes an important open problem in transmission modeling.
Mesoudi and Lycett note (p. 42) in passing that “perhaps some mix of conformity, anti-conformity, and innovation combine to produce aggregate, population-level data that are indistinguishable from random copying.” The authors go on to note that this claim has not been tested explicitly, and I believe that, as of this writing (Dec 2010), it still constitutes an open issue.
I’ve been re-reading a lot of the cultural transmission literature lately, in preparation for a writing project, and anthropologists (including archaeologists) working on CT tend to discuss unbiased transmission (or random copying, to use Bentley’s term) and drift as if they referred to the same thing.
For example, in their superb article “Random copying, frequency-dependent copying and culture change,” Alex Mesoudi and Stephen Lycett say: “In recent years, several studies have … proposed that the frequency distributions of various cultural traits … can be explained using a simple model of random copying, the cultural analogue of genetic drift.” (pp. 41–42, references omitted for clarity, italics in original). I use Mesoudi and Lycett’s quote because it is particularly clear in drawing this parallel, but one can find similar statements throughout many other works on cultural transmission, particularly since Bentley’s work on power-law frequency distributions.
The problem is, “random copying” and “drift” have nothing to do with one another, except possibly the statistical properties of their effects upon a well-mixed population.
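To make the random-copying model concrete, here is a minimal sketch of the kind of neutral model Bentley and Shennan describe: each generation, every agent copies the variant of a randomly chosen member of the previous generation, or innovates a brand-new variant with some small probability. The function name and all parameter values (`n_agents`, `mu`, and so on) are my own illustrative choices, not taken from any of the cited papers.

```python
import random
from collections import Counter

def random_copying(n_agents=200, mu=0.01, n_steps=2000, seed=42):
    """Neutral random-copying model: each generation, every agent copies
    a randomly chosen agent from the previous generation, or innovates
    a brand-new variant with probability mu."""
    rng = random.Random(seed)
    pop = list(range(n_agents))        # start with all-unique variants
    next_label = n_agents              # label for the next innovation
    for _ in range(n_steps):
        new_pop = []
        for _ in range(n_agents):
            if rng.random() < mu:
                new_pop.append(next_label)       # innovation
                next_label += 1
            else:
                new_pop.append(rng.choice(pop))  # unbiased copy
        pop = new_pop
    return Counter(pop)

freqs = random_copying()
# A handful of variants dominate while most remain rare -- the
# long-tailed frequency spectrum at issue in this literature.
print(sorted(freqs.values(), reverse=True)[:5])
```

Note that nothing in this sketch refers to space or network structure at all: every agent can copy every other, which is exactly the well-mixed assumption at issue here.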
Over the last few months, a high-profile controversy has been brewing in evolutionary biology. Martin Nowak, Corina Tarnita, and E.O. Wilson published “The Evolution of Eusociality” in Nature, in which they apply Nowak and Tarnita’s work on evolutionary set theory to the evolution of cooperation and particularly eusociality among the social insects. What made this work controversial is their claim that such an approach renders inclusive fitness theory unnecessary. But what got legions of evolutionary biologists (including Alan Grafen) really hot under the collar was the additional suggestion that inclusive fitness theory makes enough simplifying assumptions that it does not even apply to the empirical cases it is purported to explain best, potentially calling into question a great deal of work based on IF theory.
I’m not qualified to evaluate the latter claims, which is fine because Alan Grafen and Richard Dawkins are on the warpath and I’m sure we’ll see a paper in response quite soon.
I’m more interested in the general claim that the approach taken by Nowak et al. represents a useful and general way of looking at evolution in realistically structured populations. Because I think they’re on the right track. The last thirty years have seen an explosion of evolutionary models for populations structured in various ways, because virtually everyone now realizes that the stability of cooperative phenomena depends crucially upon assortative interaction. In other words, structured interaction helps keep defectors from invading groups of mutually supporting cooperators. Some such groups are kin-based, others are based upon social network connections, and still other groupings are spatial. All of these situations can be described by understanding evolutionary dynamics upon generalized networks or graphs (since spatial lattices are simply regular graphs).
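As a concrete (if toy) example of evolutionary dynamics on a graph, here is a sketch of a death-birth Moran process on the simplest regular graph, a cycle: a random node dies, and its neighbours compete to fill the vacancy in proportion to their game payoffs. This is my own minimal illustration of the general idea, not the model from Nowak et al.; the payoff scheme and all parameter values are illustrative assumptions.

```python
import random

def death_birth_on_cycle(n=60, b=3.0, c=1.0, steps=5000, seed=1):
    """Death-birth update on a cycle graph. Cooperators (1) pay cost c
    per neighbour to give each neighbour benefit b; defectors (0) pay
    nothing. A random node dies; its two neighbours compete to fill
    the vacancy with probability proportional to payoff."""
    rng = random.Random(seed)
    strat = [rng.randint(0, 1) for _ in range(n)]   # random initial strategies

    def payoff(i):
        left, right = strat[(i - 1) % n], strat[(i + 1) % n]
        return b * (left + right) - 2 * c * strat[i]

    for _ in range(steps):
        i = rng.randrange(n)                        # node that dies
        nbrs = [(i - 1) % n, (i + 1) % n]
        # competition weights: clip negative payoffs, avoid zero weights
        w = [max(payoff(j), 0.0) + 1e-9 for j in nbrs]
        strat[i] = strat[rng.choices(nbrs, weights=w)[0]]
        if sum(strat) in (0, n):                    # fixation reached
            break
    return sum(strat)

coop = death_birth_on_cycle()
print(coop)   # surviving cooperators after fixation or `steps` updates
```

The point of the exercise is that every interaction here is local: a node’s fate depends only on its two neighbours, so clusters of cooperators can shield one another in a way that is impossible in a well-mixed population.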
And understanding the effect of complex and rich structure upon evolutionary dynamics is critical, as a growing mountain of theoretical work has shown. We started understanding evolution in quantitative, dynamical-systems terms (with the work of Wright and Fisher) by largely ignoring interaction structure (although Wright did some crucial early work on assortative mating). Theoretical biologists employed what physicists call a “mean-field approximation,” assuming that every organism in a population is equally likely to reproduce with any other, and thus that evolutionary forces can be treated as an average “field” applied to the state of the population as a whole. Nearly every equation you see in a basic text on population genetics is a mean-field model. The same is true for quantitative models of social learning: Boyd and Richerson’s (1985) landmark book is filled with mean-field models, and quite understandably so. Mean-field models are where we typically start trying to understand a complex phenomenon.
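To illustrate what a mean-field model looks like in the social learning context, here is a sketch of a conformist-transmission recursion in the spirit of Boyd and Richerson (1985), where the change in trait frequency takes the form Δp = D·p(1−p)(2p−1). The parameter values below are my own illustrative choices; note that the entire population state is summarized by the single global frequency p, which is exactly what "mean-field" means.

```python
def conformity_mean_field(p0=0.45, D=0.3, n_gens=60):
    """Mean-field recursion for conformist transmission, in the spirit
    of Boyd and Richerson (1985): p' = p + D * p * (1 - p) * (2p - 1).
    No interaction structure at all -- each learner effectively samples
    the whole population, so only the global frequency p matters."""
    traj = [p0]
    p = p0
    for _ in range(n_gens):
        p = p + D * p * (1 - p) * (2 * p - 1)
        traj.append(p)
    return traj

traj = conformity_mean_field()
# Starting below one half, conformity pushes the trait toward loss;
# starting above one half, it would be pushed toward fixation.
print(round(traj[-1], 4))
```

Contrast this one-line update rule with the graph-based simulations above: the mean-field version is trivially analyzable (two stable fixed points at 0 and 1, an unstable one at 1/2), which is precisely why modeling traditions start here.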
Over the last decade or more, Martin Nowak and his group have been key contributors to understanding how the dynamics of evolutionary processes depend upon relaxing the mean-field approximation and incorporating explicitly the structure of interaction into our models. But even what we now call “complex” network models tend to represent only a single type of relationship between individuals. The “complex” moniker here refers to topology, not richness of association or relationship. So I find Nowak and Tarnita’s work on “evolutionary set theory” quite interesting, as a generalization of the network concept (and which clearly can interoperate with it). In this posting, I want to explore where such an approach leads, in terms of the structure of evolutionary models, and what methods will be required to analyze those models as we add realism and complexity.
…even though it’s really tough.
And studying the full spatial behavior of stochastic processes (including evolutionary theory, in its many guises), especially when interaction and fitness are relative to a complex network of contacts or relationships, is hard. Usually so hard that we don’t have analytic models for the full behavior of sets of stochastic processes operating on complex networks, or interacting in complex ways. We resort to simulation, since the models we can solve are very simple, and few. And we seek guidance for the “average” behavior — the nonspatial global behavior of a model — in mean-field approximations. We temporarily ignore fluctuations, write deterministic mean-field equations for the dynamics, analyze those, and then add fluctuations back in the form of simple white noise. We take the deterministic mean-field equations and derive pair approximations or moment closures, and analyze at least the summary statistics for correlations between classes or traits we’re tracking, since we can’t analyze much else spatially. We reduce complex epidemic diffusion models to percolation problems. But mostly, we simulate.
I’ve been studying statistical physics pretty hard lately, learning how to deal with many-body systems with a bunch of contributing factors to the dynamical evolution of a system. To a lesser extent, I’ve been studying the serious probability theory (interacting particle systems, stochastic processes) that goes along with statistical physics. It’s caused me to ask questions about the last model I was looking at. I love it when that happens.
In a previous project on signaling theory, I looked at some of the newer literature on coevolutionary or “adaptive” network models. A coevolutionary network model is a dynamic process (for example, an evolutionary game theory model) whose interactions are localized to the structure of a mathematical graph or network. The network topology thus exerts an influence on the solution space of the game, and thus on the outcomes that occur for any particular state of the population. In addition, the results of each round of the game have an effect upon the edges and nodes of the network itself, causing “rewiring” of the network and thus changes in the interactions between individuals in the next round. In the case of the costly signaling theory model I was exploring, the setup looks like this: