WordPress utterly rocks

OK. For the last five years, I’ve used Typepad as a blogging platform. Largely out of laziness. I just transitioned Extended Phenotype to WordPress, and started this blog, and couldn’t be happier.

Jeez Louise, this is easier and has much richer functionality. On Typepad, I tried a bunch of CSS-based hacks to get decent footnotes on my posts, for the more academically oriented stuff, and eventually trashed all of them and did ugly manual ASCII footnotes in parentheses, with the notes typed in order at the bottom of the post.[1] But the hardest part about doing footnotes in WordPress was deciding which of several supported plugins would look best with my design.[2] The great thing is that I can change my mind and start using a different plugin without editing all my old posts.[3]

And I suspect that, with some CSS tweaks via the Chrome developer tools, I can even change the typography of the footnotes.[4]


Why Studying Spatiotemporal Complex Systems Matters…

…even though it’s really tough.

And studying the full spatial behavior of stochastic processes (including evolutionary theory, in its many guises), especially when interaction and fitness are relative to a complex network of contacts or relationships, is hard. Usually so hard that we don't have analytic models for the full behavior of sets of stochastic processes operating on complex networks, or interacting in complex ways. We resort to simulation, since the models we can solve exactly are simple and few. We seek guidance for the "average" behavior (the nonspatial, global behavior of a model) in mean-field approximations. We temporarily ignore fluctuations, write deterministic mean-field equations for the dynamics, analyze those, and then add fluctuations back in the form of simple white noise. We take the deterministic mean-field equations and derive pair approximations or moment closures, and analyze at least the summary statistics for correlations between the classes or traits we're tracking, since we can't analyze much else spatially. We reduce complex epidemic diffusion models to percolation problems. But mostly, we simulate.
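To make the mean-field-versus-simulation contrast concrete, here is a minimal sketch (my own toy illustration, not anything from the post) using a simple SIS-style contact process: the deterministic mean-field recursion for the infected fraction, next to a Monte Carlo simulation of the same process localized to a crude random network. The parameter names (`beta`, `gamma`) and the graph construction are assumptions made purely for illustration.

```python
# Toy comparison: mean-field recursion vs. stochastic simulation on a network.
# All names and parameter choices here are hypothetical, for illustration only.
import random

def mean_field(beta, gamma, x0, steps):
    """Deterministic, well-mixed update for the infected fraction x."""
    x, traj = x0, [x0]
    for _ in range(steps):
        x = x + beta * x * (1 - x) - gamma * x   # ignores all spatial structure
        traj.append(x)
    return traj

def simulate_on_network(n, k, beta, gamma, x0, steps, seed=0):
    """Monte Carlo simulation of the same process on a random graph with ~k neighbours per node."""
    rng = random.Random(seed)
    nbrs = [set() for _ in range(n)]
    for i in range(n):                       # crude random graph construction
        while len(nbrs[i]) < k:
            j = rng.randrange(n)
            if j != i:
                nbrs[i].add(j)
                nbrs[j].add(i)
    state = [1 if rng.random() < x0 else 0 for _ in range(n)]
    traj = [sum(state) / n]
    for _ in range(steps):
        new = state[:]
        for i in range(n):
            if state[i] == 1:
                if rng.random() < gamma:     # recovery
                    new[i] = 0
            else:
                infected = sum(state[j] for j in nbrs[i])
                # per-contact transmission, localized to the network
                if rng.random() < 1 - (1 - beta / k) ** infected:
                    new[i] = 1
        state = new
        traj.append(sum(state) / n)
    return traj

if __name__ == "__main__":
    mf = mean_field(beta=0.3, gamma=0.1, x0=0.05, steps=50)
    sim = simulate_on_network(n=2000, k=4, beta=0.3, gamma=0.1, x0=0.05, steps=50)
    print("mean-field final fraction:", round(mf[-1], 3))
    print("network simulation final fraction:", round(sim[-1], 3))
```

Run with matching parameters, the two trajectories will typically agree roughly when the contact structure is dense and well mixed, and diverge as it becomes sparse or clustered, which is exactly why one wants to go beyond the mean-field description.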


Will coevolutionary/adaptive network models be “easier” to understand than processes on fixed networks?

I’ve been studying statistical physics pretty hard lately, learning how to deal with many-body systems in which many contributing factors drive the dynamical evolution of the system. To a lesser extent, I’ve been studying the serious probability theory (interacting particle systems, stochastic processes) that goes along with statistical physics. It’s caused me to ask questions about the last model I was looking at. I love it when that happens.

In a previous project on signaling theory, I looked at some of the newer literature on coevolutionary or “adaptive” network models. A coevolutionary network model is a dynamic process (for example, an evolutionary game theory model) whose interactions are localized to the structure of a mathematical graph or network. The network topology thus exerts an influence on the solution space of the game, and therefore on the outcomes that occur for any particular state of the population. In addition, the results of each round of the game affect the edges and nodes of the network itself, causing “rewiring” of the network and thus changes in the interactions between individuals for the next round. In the case of the costly signaling theory model I was exploring, the setup looks like this:

[Figure: adaptive network model schematic]
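For readers who want something more concrete than the figure, here is a minimal sketch of the general coevolutionary loop described above: play the game on the current network, update strategies, then rewire. It is my own toy illustration, not the costly signaling model from the post; the payoff matrix, imitation rule, and rewiring rule are all hypothetical placeholders.

```python
# Toy coevolutionary ("adaptive") network loop: game dynamics, strategy update,
# then rewiring of the network itself. Purely illustrative placeholders throughout.
import random

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play_round(strategies, neighbours, rewire_prob, rng):
    # 1. Game dynamics on the current network: accumulate payoffs over edges.
    payoff = {i: 0 for i in strategies}
    for i in strategies:
        for j in neighbours[i]:
            payoff[i] += PAYOFF[(strategies[i], strategies[j])]

    # 2. Strategy update: imitate a random neighbour if they did better.
    new_strategies = dict(strategies)
    for i in strategies:
        if neighbours[i]:
            j = rng.choice(sorted(neighbours[i]))
            if payoff[j] > payoff[i]:
                new_strategies[i] = strategies[j]

    # 3. Network update ("rewiring"): occasionally drop the worst-paying
    #    partner and connect to a random non-neighbour instead.
    for i in strategies:
        if neighbours[i] and rng.random() < rewire_prob:
            worst = min(neighbours[i], key=lambda j: payoff[j])
            neighbours[i].discard(worst)
            neighbours[worst].discard(i)
            candidates = set(strategies) - neighbours[i] - {i}
            if candidates:
                k = rng.choice(sorted(candidates))
                neighbours[i].add(k)
                neighbours[k].add(i)
    return new_strategies, neighbours

if __name__ == "__main__":
    rng = random.Random(42)
    n = 50
    strategies = {i: rng.choice(["C", "D"]) for i in range(n)}
    neighbours = {i: {(i + 1) % n, (i - 1) % n} for i in range(n)}  # start on a ring
    for _ in range(100):
        strategies, neighbours = play_round(strategies, neighbours, 0.1, rng)
    print("cooperator fraction:", sum(s == "C" for s in strategies.values()) / n)
```

The essential feature is the feedback: the network shapes who plays whom, and the outcomes of play reshape the network before the next round.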


The structure of mean-field transmission models

In my previous post, I argued that cultural transmission models in archaeology [1] need to get away from being “mean-field” theories in order to make predictions about how cultural variation is distributed in space, as well as spatiotemporally. In this post, I describe what a “mean-field” theory is, and how mean-field theories relate to a “full” description of a model’s dynamics. This post is aimed at those with a background in the basic population genetic models, or in Boyd and Richerson’s cultural transmission models and their offshoots. Those with a strong background in statistical physics or spatial stochastic processes should feel free to skip it; there’s nothing here you don’t already know.

In population genetics and cultural transmission models, we often see equations which predict the evolution of a quantity over time, given parameter values and functions which describe rates of change. In other words, models for the evolution of trait frequencies tend to be difference or differential equations, depending upon whether the population is assumed to evolve continuously (overlapping generations) or discretely (as in the Wright-Fisher model). Here’s a simple example:
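(The specific equation from the original post isn’t preserved in this excerpt; a conventional stand-in for this kind of recursion, assuming a single cultural variant at frequency $p_t$ and a transmission bias $s$, would be:)

$$ p_{t+1} = p_t + s\, p_t (1 - p_t) $$

An equation like this tracks only the population-wide frequency of the variant; it says nothing about where, or among whom, the variant occurs, which is exactly the mean-field limitation at issue.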


Google celebrates Isaac Newton’s birthday

Check it out…the apple drops and falls due to gravity.  Probably won’t be there forever, so view it soon and say Happy Birthday to Sir Isaac!


Temporary look-and-feel changes

I’ve temporarily removed the nice typography from MadsenLab, provided through Typekit. The selected typefaces were not rendering well on Windows. This isn’t a Typekit bug; you get the same result if you manually work with the @font-face rule in CSS and the same typefaces. But I need to look at some alternative typefaces, and check that Windows doesn’t render them poorly. So here are the default web fonts, temporarily, for your viewing boredom. Thanks, Microsoft!


Moving beyond mean-field models in cultural transmission studies

To study cultural transmission is to study patterns in the way people share information, become socialized with a specific body of cultural knowledge as children, and pass on what they know. Within cultural transmission research, some folks study the underlying psychological and cognitive mechanisms, while others study the population-level consequences of those mechanisms. I study the latter, with a special focus on deep human history and longer time scales.

My premise, in this post, is that the generic structure of archaeological data places a distinctive set of requirements on models meant for studying cultural transmission over long time scales. My goal is to describe why this might be, and what implications it carries for archaeology and other historical studies of cultural transmission. In particular, I want to make the case that we need to move beyond well-mixed or “mean-field” mathematical models of cultural transmission if we want such models to be useful in explaining quantitative data about past cultural practices and artifact traditions.


Barnes and Noble Nook Bookreader: A First Look

I’ve had the Nook for a day or so, long enough to load a large batch of PDF documents on it, download a book from B&N, and run the device through its paces.  Here are some initial thoughts.

The packaging was great: well designed both functionally and aesthetically. Perhaps a bit nicer than Amazon’s, but also a lot more wasteful, with a hard-shell transparent polycarbonate box. I gave it a while plugged in, but there was no real indicator of when it was charged and ready. Eventually I got it to boot after pushing the power button a few dozen times while it was charging.

Unlike the Kindle, the Nook did not come already set up and registered to me; I had to register the device with my Barnes & Noble account. This was easy, but it’s interesting how the little things matter; I remember my delight at taking the Kindle out of its packaging and having it boot up showing my name, ready for me to buy and download a book. The Nook really doesn’t take much longer to set up, but it’s an additional step, and there’s no little personal surprise factor.

I loaded both a formatted eBook and a batch of academic journal articles in PDF format, and tested out the device. The goal is to see whether the Nook is useful both for pleasure reading, which nearly always involves formatted eBooks, and for reading journals with complex content. Most of the journal articles had embedded graphics and tables, but most especially complex mathematics.

The formatted eBook looks great, and there are no problems with line wrapping. However, some PDF eBooks I looked at do have line-wrap or scaling issues, with long lines interspersed with very short ones, which is incredibly irritating to read. But if you mostly read formatted eBook content, bought from Barnes and Noble or another source, the Nook looks good.

Where things fall apart is trying to read arbitrary journal articles in PDF on the Nook. The Nook has a small screen (it’s sized like a second-gen small Kindle, not the larger Kindle DX), and so it does two things with complex PDFs. First it displays a scaled image of the entire page you’re viewing, but with a standard journal article the type is way too small to actually read. Then you hit “next,” and that same page is redisplayed in several screens, apparently by extracting the text from the PDF. This involves some of the same wrapping issues previously described, but much worse, this extraction and reformatting process makes complete hash out of any mathematics in the text. Usually there are 2-3 pages of this extraction and reformatting, and then you get the next “real” page of the PDF, again scaled down and displayed as an unreadable whole page, and so on.

So basically, the Nook in its current form is pretty useless for complex sideloaded content.  Perhaps if (a) they make a larger screen version, like the Kindle DX, and (b) allow one to turn on and off the “extraction and redisplay” of PDF pages, it would work.  But at the moment I don’t think it’s usable for reading journal articles in the sciences.

There are also small irritations in the UI.  When you select a document from the table of contents display, instead of being taken to the document, you see an almost blank “header” page with the directory path of the document, and a “Read” button down in the keyboard/pointer area.  You have to click “Read” to actually open the document.  This is minor, but wholly unnecessary — it’s like they hired the guys who used to design extraneous Windows dialog boxes.  You sure you want to read this document?  Hell yes I’m sure, and if not, don’t put me two clicks away from changing my mind.

Finally, the device is slow in comparison to the Kindle. I put just a few dozen PDF files on the Nook, instead of the 300+ I currently have on the Kindle DX, but when the Nook boots up the table of contents is empty. It takes approximately 10 seconds to scan the device’s storage and build the table of contents for maybe four dozen files. The device also boots quite slowly, and when it goes to sleep and wakes up, it actually *reboots* instead of waking up, or at least that’s the behavior I’ve seen.

In general, my first impressions are that Barnes and Noble tried to do a Kindle, and focused on the big stuff to the exclusion of detail. The UI is clunky, the device is slow, and various features (like PDF handling) look like last-minute hacks by the programming team. I’m not impressed thus far, sadly.

One caveat is that if you only read content off eBook provider websites, such as Barnes and Noble, you’ll probably be fine in terms of basic functionality.  But as a competitor to the Kindle DX, the Nook isn’t going to find a place in my laptop bag anytime soon.


Welcome!

Welcome to MadsenLab.org! I’ve been blogging for a long, long time now (my first blog used Radio Userland, and was lost in a hard drive crash in mid-2003, which is probably just as well, because I didn’t say anything terribly memorable).

Between February 2004 and March 2009, I wrote a regular blog, where I concentrated mostly on law, politics, and personal topics. When a critical mass of my friends began using Facebook, I slowly stopped writing blog posts as often, and became too busy with my research (and a fundraising project in my adopted hometown) to write much about political topics.

Facebook mostly solves the need to update friends and family about what’s happening in my life, and I probably won’t write much about my daily life here, so join me on Facebook if you want to know what culinary projects I’ve got going, what cocktails I’ve been experimenting with, which interesting wines I’ve been drinking, and that sort of thing.

For politics and technology, I try to occasionally post essays to a Posterous blog I share with another friend named Mark Madsen. I have to say I think the Posterous blogging platform sucks, but it’s tightly integrated with Facebook, easy to use, and sufficient to the purpose. Though maybe if we blogged a bit more often, I’d argue for moving to WordPress.

So what does that leave? My research, comments on science and whatever I’m reading in the scientific literature, and discussions of mathematics, scientific software, and simulation modeling. Obviously, that’s a specialized audience, so if it’s not your thing, see you on Facebook. Otherwise, I hope you enjoy and read in the months (and hopefully years) to come.

Naturally, this blog is configured to update Facebook and Twitter when I post, and has the usual RSS/Atom feed options (actually, the addition of this paragraph is an attempt to test those links…).


TransmissionLab Moved from GoogleCode to GitHub

As of today, my open-source framework project for cultural transmission (TransmissionLab) is moving from Google Code to GitHub.  Please look for the project at its new home.  The Google Code repository is being removed, and is currently visible only to registered committers.

This change corresponds to a switch from Subversion to Git as the revision control system.  I’m making the switch for a number of reasons.  The biggest has nothing to do with the TL project at all; I simply want to keep my software engineering skills a bit closer to the cutting edge, and to stay familiar with new tools and environments, so I’m going to use Git for a while.

Slightly more relevant, Git is a distributed revision control system that handles micro-branching very well, and that helps with a research-oriented codebase.  One tends to make small experiments, run simulations, and see what happens, but I also want to make it easy to release a clean “mainstream” distribution of the framework, which doesn’t necessarily have to contain my own experiments and dissertation-related code.  Git makes that much easier to do than Subversion’s more heavyweight “branch” features.  So I’m gonna see how that works, and hopefully that means TransmissionLab 1.9 (when I finish it) will have fewer half-done models and in-progress experiments for folks who might want to try it out.

Anyhow, if you’re not familiar with Git, but you use Eclipse or Netbeans or another IDE, you can just switch plugins, and check out the project from the new URL and keep playing with TransmissionLab.
