June 20, 2011
"Zombie Ideas in Ecology"

Since I have been lax about posting lately (I’m busy!), check out this excellent post over at the Oikos blog.

May 22, 2011
Concept: Dark diversity

In the March 2011 issue of Trends in Ecology and Evolution (TREE), Partel, Szava-Kovats and Zobel, of the Institute of Ecology and Earth Sciences, University of Tartu, Estonia, posit a very interesting and novel way of looking at local species diversity and the absence thereof.  I feel that this timely paper will spark much contention and work over the next few years, especially considering that climate change is occurring as I write this and species shifts/migrations are becoming large issues.  Thinking about both the observed diversity and the species that might be absent from a particular area is going to be of increasing importance.

The three define the term “dark diversity” as the set of species that are absent from a suitable habitat even though they occur in the local species pool.  A simple calculation of dark diversity is therefore to subtract the locally observed species from the total number of species in the region that could occupy the suitable habitat.  The authors acknowledge that their definition of local species diversity is similar to the definition of alpha-diversity, drawing upon the local species pool, but their definition of dark diversity is quite different from the concepts of beta- and gamma-diversity.  Beta-diversity is the turnover between the gamma and alpha pools of species diversity, whereas dark diversity only deals with species able to occupy a particular type of habitat. (For a brief overview of alpha-, beta-, and gamma-diversity, go here.)
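To make the arithmetic concrete, here is a minimal Python sketch of that subtraction.  The species names and habitat pool are invented for illustration, and the final ratio is just one plausible dimensionless indicator for comparing sites, not necessarily the exact index used in the paper.

```python
# Minimal sketch of the dark diversity calculation described above.
# The species sets are invented; in practice the habitat-specific
# species pool comes from regional surveys and ecological filtering.

# Regional pool of species able to occupy this habitat type
habitat_species_pool = {"A", "B", "C", "D", "E", "F", "G", "H"}

# Species actually observed at the local site
observed_local = {"A", "C", "F"}

# Dark diversity: suitable species from the pool that are absent locally
dark_diversity = habitat_species_pool - observed_local

print("Observed richness:", len(observed_local))  # 3
print("Dark diversity:", len(dark_diversity))     # 5

# One dimensionless way to compare sites: the fraction of the
# suitable pool that is realised locally
completeness = len(observed_local) / len(habitat_species_pool)
print("Fraction of pool realised locally:", completeness)  # 0.375
```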

What is the take-home message about this concept?  It provides a dimensionless indicator that we can use to compare different habitats or regions around the world.  The three examples the authors provide are plant diversity across grasslands, fish diversity in different lakes, and birds in a tropical forest.  A final and interesting point the authors bring up, with respect to this dimensionless indicator, is that temperate ecosystems realize a higher proportion of the species available in their regional pools than tropical ecosystems do.

I am not the only person who finds this concept interesting.  The most recent issue of TREE contains two responses to the original paper, as well as a reply from the authors.  You can find them below.

Original paper:

Partel M., Szava-Kovats R., and Zobel M. Dark diversity: shedding light on absent species. TREE 26(3): 124-128.

Responses:

Scott C.E., Alofs K.M., and Edwards B.A. Putting dark diversity in the spotlight. TREE 26(6): 263-264.

Mokany K. and Paini D.R. Dark diversity: adding the grey. TREE 26(6): 264-265.

Partel M., Szava-Kovats R., and Zobel M. Discerning the niche of dark diversity. TREE 26(6): 265-266.

Additional references

Robert Whittaker paper discussing gradient analysis and diversity

May 13, 2011
The Traveling Salesman Problem and its application to foraging theory

Yesterday I introduced a common thought experiment, the Prisoner’s Dilemma, which has been used as a starting point for thinking about evolutionarily stable strategies and game theory.  Today, I’m going to highlight another thoroughly studied question, the Traveling Salesman Problem (TSP), a problem which remains unsolved in general but has provided insights and benefits to numerous fields.

The problem is rather simple: a salesman has to travel to a certain number of places to sell his goods, and he wishes to calculate the most efficient and cost-effective route.  If the salesman needs to go to only one place, the solution is rather elegant: a straight line (or as close to straight as his method of travel allows) between the two cities.  As more destinations are added to the salesman’s itinerary, however, the solution becomes very difficult to find.  The TSP sits at the heart of the “P vs. NP” question, and a proof resolving P vs. NP is worth 1,000,000 bucks (it is one of the Millennium Prize Problems).  Since I’m not a mathematician, I am not going to attempt to explain any of this in mathematical terms, but I am going to talk a little bit about how this problem applies to insect behavioral ecology.
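To get a feel for why the problem blows up, here is a small, illustrative Python sketch that brute-forces the shortest round trip through a handful of made-up city coordinates.  With n cities there are (n-1)!/2 distinct tours to check, which is why exact solutions become impractical so quickly.

```python
# Brute-force TSP over a few made-up city coordinates, purely to
# illustrate why the problem explodes: with n cities there are
# (n-1)!/2 distinct round trips to compare.
from itertools import permutations
from math import dist

cities = {
    "A": (0, 0),
    "B": (3, 4),
    "C": (6, 1),
    "D": (2, 7),
    "E": (8, 5),
}

def tour_length(order):
    """Total length of a round trip visiting the cities in the given order."""
    legs = zip(order, order[1:] + order[:1])
    return sum(dist(cities[a], cities[b]) for a, b in legs)

names = list(cities)
start = names[0]
# Fix the starting city and try every ordering of the remaining ones.
best = min(
    ((start,) + rest for rest in permutations(names[1:])),
    key=tour_length,
)
print("Best tour:", " -> ".join(best), "length:", round(tour_length(best), 2))
```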

Now, let’s consider the problem within a slightly different framework.  Parasitic wasps, or parasitoids, are insects that lay their eggs (a behavior known as ovipositing) in or on (mostly) other insects, where their offspring develop.  These hosts, of variable quality and quantity, are spread as a mosaic throughout the environment.  This means that the host or hosts upon which these parasitic wasps lay their eggs are not evenly distributed, similar to the cities in the TSP.  If a parasitic wasp wishes to maximize the returns on its energy expenditures (e.g. flying, searching for a host, ovipositing), it must work out an efficient path in which all of the available (within biological reason) patches containing hosts are visited.  If we consider this problem in its biologically relevant form, it quickly becomes more complex; the hosts are variable in terms of their quantity (number in a patch) as well as their quality (large vs. small hosts).

So what is an insect parasitoid to do? Eric Charnov, a theoretical ecologist, addressed this in his 1976 paper, Optimal Foraging: The Marginal Value Theorem.

The paper predicts that insect parasitoids will (as shown in the graph; thanks to Wikipedia for letting me steal the image) spend more or less time in a patch searching for hosts depending upon how profitable the patch is in terms of the quality or quantity of hosts, and on the distance between patches.  While searching, a parasitoid has a certain threshold, or giving-up time (GUT), which will be reached before it moves on and searches in another patch.  There have been hundreds of papers simulating, testing, and attempting to extend this hypothesis.  For instance, Wajnberg et al. 2000 use Trichogramma brassicae, a polyphagous egg parasitoid, to test, and confirm, Charnov’s predictions.
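For anyone who wants to see the prediction worked out with numbers, here is a hedged Python sketch of the marginal value theorem’s rule: leave a patch when the instantaneous gain rate drops to the long-term average rate, i.e. g'(t*) = g(t*)/(T + t*), where g(t) is a diminishing-returns gain curve and T is the travel time between patches.  The saturating gain function and the parameter values are invented for illustration, not taken from Charnov’s or Wajnberg’s papers.

```python
# Numerical sketch of Charnov's marginal value theorem.
# Optimal residence time t* satisfies g'(t*) = g(t*) / (T + t*):
# leave the patch when the instantaneous gain rate drops to the
# long-term average rate. The gain function and numbers are made up.

def gain(t, a=10.0, b=2.0):
    """Diminishing-returns gain (e.g. hosts parasitized) after t minutes in a patch."""
    return a * t / (b + t)

def gain_rate(t, a=10.0, b=2.0):
    """Derivative of the gain function: the instantaneous gain rate."""
    return a * b / (b + t) ** 2

def optimal_residence_time(travel_time, t_max=60.0, step=1e-3):
    """Scan for the time at which the marginal rate falls to the average rate."""
    t = step
    while t < t_max:
        if gain_rate(t) <= gain(t) / (travel_time + t):
            return t
        t += step
    return t_max

for T in (0.5, 2.0, 8.0):
    t_star = optimal_residence_time(T)
    print(f"travel time {T:4.1f} -> stay about {t_star:4.2f} in the patch")
```

With these made-up numbers the optimal stay grows as the travel time between patches grows, which is exactly the qualitative pattern the experimental papers test.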

Although there haven’t been papers arguing against the theory, there have been numerous ones suggesting additional variables to consider when applying it.  One such paper is van Alphen, Bernstein and Driessen 2003, which suggests that functional (evolutionary) and causal (mechanistic) factors, such as egg load and semiochemicals, respectively, need to be considered as part of the foraging equation.

As you can see, just as with the Prisoner’s Dilemma, the TSP, which originated elsewhere, has had a huge impact on biology and helped us speculate upon, and elucidate, effective and stable evolutionary strategies.

Note: A great book released in 2008 that covers all of this and other aspects of foraging theory, the marginal value theorem, the Traveling Salesman Problem (as it applies to parasitoids), and other behavioral ecology issues is “Behavioral Ecology of Insect Parasitoids”, edited by Wajnberg, Bernstein and van Alphen.  It’s definitely worth checking out.

May 12, 2011
Altruism and evolutionary stable strategies

One thing that fascinates me more than anything else is strategy.  Usually equated with games of the mind such as chess rather than everyday life, strategy can be both subtle and overt, but at the end of the day, it’s all-encompassing.

Another thing I’m pretty head-over-heels for is public radio.  If you don’t listen to public radio, honestly, you should.  There is so much variety and tons of quality programming.  So now you’re wondering why I’ve completely changed the subject and have gone off on a tangent…Well, it’s because, by the glory of all things good in public radio and science, there just happens to be a show that took on this very issue.  That show, RadioLab, does an absolutely excellent job not only on this one specific episode, but in general.  I won’t ramble too much more about it, but know that it’s a worthwhile listen for anyone interested in science, be it on those actual radio things, or as the free podcast.

Finally, the episode, entitled “The Good Show,” covers a few of the basics regarding what strategy is, what strategies there are, and a few of the key players involved in furthering research on the subject.

On that note- you should go listen to it…

But before I end this brief post, let me just introduce you to a simple thought experiment that has been foundational to research on strategy.

This thought experiment is something you may very well have heard of before; it’s called “The Prisoner’s Dilemma.”

Imagine two crooks, bank robbers, murderers, or some other type of criminal offenders, have been caught for a crime.  Although the two have been apprehended, the police don’t have quite all the evidence they need to really put them in the slammer for the aforementioned crime(s).  So, what the police decide to do is split the two prisoners up for a period of time, then take each one individually into a room and say… “Your partner, you know he ratted you out; you’re going to go away for 5 years in prison… But if you spill the beans on him, we’ll make you a deal, cutting that time to only a year.”  The prisoner ponders this offer and has to decide whether he is going to testify against his partner, or keep silent and not say anything.  These two options are referred to as “defect” and “cooperate,” respectively.

Here’s the kicker: the police don’t actually have much evidence at all, and they’re liars.  The other prisoner did not say anything; in fact, the police have yet to even meet with him.  They are trying to trick the prisoner into giving them the testimony they need in order to convict him and his accomplice.

So let’s step back from the thought experiment for a moment and consider that each prisoner has two choices he can make.  We can therefore look at all the possible outcomes, just like a Mendelian Punnett square.

There are four possible outcomes:

(C,C), (C,D), (D,C) and (D,D)

where C= cooperate and D= defect

It’s easy to understand the outcomes of each of these situations by assigning a value to each decision and outcome.  The graphic to the left (which I’m so graciously borrowing) assigns the most logical values for this situation: years in prison.

So let’s run through these situations:

(C,C) Both prisoners cooperate, and both get 2 years in prison.

(C,D) Prisoner 2, Henry in the case of the graphic, defects, squealing on his partner, which leaves Henry with a quick 1 year in prison and his partner with 5 years.

(D,C) Prisoner 1, Dave, defects, and ends up with the same deal Henry would have gotten had he defected: 1 year in prison for the rat, and 5 years for the transgressed.

(D,D) Both Prisoners 1 and 2 defect, each giving testimony that the other was the leader of the crime and they were simply the accomplice.  Due to the contradiction, neither can be fully convicted, but they each still end up with 3 years in prison.
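Written as a lookup table, those four outcomes are easy to check at a glance; this minimal Python sketch uses only the years-in-prison values listed above (Dave and Henry are the names from the borrowed graphic).

```python
# Payoff matrix for the story above: years in prison for (Dave, Henry),
# indexed by their choices. C = cooperate (stay silent), D = defect (testify).
years_in_prison = {
    ("C", "C"): (2, 2),
    ("C", "D"): (5, 1),
    ("D", "C"): (1, 5),
    ("D", "D"): (3, 3),
}

for (dave, henry), (y_dave, y_henry) in years_in_prison.items():
    print(f"Dave {dave}, Henry {henry}: Dave gets {y_dave} years, Henry gets {y_henry}")
```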

SO.  What is a prisoner to do?  It’s obvious that if they testify that their partner was in charge, they will receive the least amount of time in prison.  If they say nothing, they will end up with more time in the slammer.  This is where the dilemma comes in.  Both prisoners are told that their partner ratted them out.  This means that, if they believe the police officers’ words, they will go away for 5 years while their partner gets off with a much reduced sentence.  So what is to be done?

Neither option is obviously better to the prisoners, as they are not privy to exactly what evidence is held against them.  They also cannot speak with their accomplice to confirm or deny the statements made against them.

Again, I ask, what is a prisoner to do?  Well, Robert Axelrod, a professor of public policy at the University of Michigan, wondered the same thing.  Except he approached the problem with a somewhat more robust treatment than just thinking and anecdote.  Axelrod decided an excellent way to determine what to do was to hold a contest.  The guidelines of this contest were relatively simple: contestants would write a program that would “play out” this situation a number of times against a program written by another contestant.  Instead of “years in prison,” point values would be assigned to the programs based upon their decisions.  The highest point value would be assigned to the best outcome, or the least amount of time in prison.  Therefore, the prisoner who defected against his cooperating accomplice would receive the most points.

By running these simulations over time, particular strategies, or (sometimes) conditional behaviors, would be highlighted as good, bad, better, etc.  One could therefore use these as guides for what to do in the Prisoner’s Dilemma.  I’m not going to elaborate too much on these strategies, except for the winning one, which was known as “Tit-for-Tat,” and which I believe Wikipedia does a good job of explaining (this is verbatim from the site):

This strategy is dependent on four conditions, which have allowed it to become the most successful strategy for the iterated prisoner’s dilemma:

  1. Unless provoked, the agent will always cooperate
  2. If provoked, the agent will retaliate
  3. The agent is quick to forgive
  4. The agent must have a good chance of competing against the opponent more than once.

In the last condition, the definition of “good chance” depends on the payoff matrix of the prisoner’s dilemma. The important thing is that the competition continues long enough for repeated punishment and forgiveness to generate a long-term payoff higher than the possible loss from cooperating initially.

A fifth condition applies to make the competition meaningful: if an agent knows that the next play will be the last, it should naturally defect for a higher score. Similarly if it knows that the next two plays will be the last, it should defect twice, and so on. Therefore the number of competitions must not be known in advance to the agents.

Therefore, always cooperate, and if someone burns you, burn them back; but be forgiving.
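To see why that rule holds up, here is a small, illustrative Python sketch of an Axelrod-style tournament in miniature: Tit-for-Tat playing against always-defect, always-cooperate, and itself, scored with the conventional point values (3 each for mutual cooperation, 5 vs. 0 when one side defects on a cooperator, 1 each for mutual defection).  The payoffs, round count, and opponent strategies are standard conventions chosen for the example, not details taken from the episode.

```python
# Miniature Axelrod-style tournament: Tit-for-Tat vs. two simple strategies.
# Conventional payoff points (not years): both cooperate -> 3 each,
# defect on a cooperator -> 5 vs. 0, both defect -> 1 each.
POINTS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's previous move.
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def always_cooperate(my_history, their_history):
    return "C"

def play(strategy_a, strategy_b, rounds=200):
    """Play an iterated game and return the two total scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = POINTS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

for opponent in (always_defect, always_cooperate, tit_for_tat):
    a, b = play(tit_for_tat, opponent)
    print(f"tit_for_tat vs {opponent.__name__}: {a} to {b}")
```

Against always-defect, Tit-for-Tat loses only the first round; against cooperators it racks up the full cooperative payoff every round, which is the intuition behind its success in the tournament.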

That’s the Prisoner’s Dilemma: foundational for a basic understanding of strategy, and a central model in game theory.

Thanks for sticking with me for the rather lengthy explanation.  Now if you haven’t already, GO LISTEN TO RADIOLAB.  It’s a great show and the episode features Robert Axelrod, who does a much better job explaining his own work than I.

If you’d like to pursue this subject a little bit further, I’d suggest two books: one which is rather technical (read: full of math), and the other written for a popular audience but still featuring all of the meaty bits of science (sorry I can’t equate science to veggies, folks).

Read: “Evolution and the Theory of Games” by John Maynard Smith

Read: “The Selfish Gene” by Richard Dawkins
