7 July 2015 by Remco Bouckaert
These are a few issues that can pop up when doing ancestral reconstruction, aka discrete phylogeography, as outlined in the Ancestral reconstruction tutorial following the model of Lemey et al., 2009.
Too many states
When doing a phylogeographical analysis, it is tempting to split up the samples into as fine a geographic classification as the available data allows, for instance by splitting the samples by country of origin. This can lead to a large number of states/countries. The more states are defined, the more rates need to be estimated. For an analysis with N states, at least N-1 rates need to be non-zero in order to be able to get from any one state to any other (possibly through a set of other states), and this is a requirement for the likelihood to be larger than zero.
So, what a reasonable number of states is depends on the number of samples; more samples allow for more states. I have seen a number of cases where the number of states was more than half the number of samples. In such cases, it makes sense to merge states (for example, combine different countries into larger regions).
Note that ancestral reconstruction has no notion of how far apart samples are from each other, so it can only estimate rates based on state transitions in the tree, informed by the locations at the tips. Instead of using ancestral state reconstruction, you could use a form of continuous phylogeography, which tends to have more power since it has a notion of distance built in. If you do not know the exact point locations of the tips, tip locations can be sampled, or approximated by the mean of the region where the sample originated.
Analysis does not start
A common result of defining too many states is that the analysis does not start. You will see an error containing something like this:
Start likelihood: -Infinity after 11 initialisation attempts
P(posterior) = -Infinity (was NaN)
P(prior) = -Infinity (was NaN)
P(CoalescentConstant.t:chr1) = -224.91126226515757 (was NaN)
P(GammaShapePrior.s:chr1) = -1.0 (was NaN)
P(KappaPrior.s:chr1) = -1.8653600339742873 (was NaN)
P(nonZeroRatePrior.s:location) = -Infinity (was NaN)
P(PopSizePrior.t:chr1) = 1.2039728043259361 (was NaN)
P(relativeGeoRatesPrior.s:location) = -350.99999999999994 (was NaN)
P(geoclockPrior.c:location) = -6.915086640662835 (was NaN)
P(likelihood) = NaN (was NaN)
P(treeLikelihood.chr1) = NaN (was NaN)
P(traitedtreeLikelihood.location) = NaN (was NaN)
Note the -Infinity in the line for nonZeroRatePrior.s:location. This is the prior over the number of rates that are used. By default, this prior is a Poisson prior with mean 0.693 and offset equal to the number of states minus 1. This is a rather tight prior. At the start, by default all rates are estimated. And though in theory the Poisson prior has support over all non-negative counts, due to numerical issues the probability of such a large number of estimated rates can underflow to zero, giving -Infinity in log space.
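In BEAUti-generated XML, this prior looks roughly like the fragment below. This is a sketch only: the ids, the offset value (here assuming 26 states) and the exact nesting depend on your BEAST version and analysis, so compare against your own file rather than copying it verbatim.

```xml
<!-- Sketch of the prior on the number of non-zero rates (ids and offset
     are illustrative). nonZeroRates sums the rateIndicator flags; the
     Poisson prior on that sum has lambda = 0.693 and an offset of
     number-of-states minus 1 (25 for 26 states). -->
<prior id="nonZeroRatePrior.s:location" name="distribution">
    <x id="nonZeroRates.s:location" spec="util.Sum" arg="@rateIndicator.s:location"/>
    <distr spec="beast.math.distributions.Poisson" offset="25.0">
        <parameter estimate="false" name="lambda">0.693</parameter>
    </distr>
</prior>
```

Increasing the lambda value in a fragment like this is what widening the prior amounts to in practice.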
Workarounds for this are
- Reduce the number of states.
- Start with a wider prior on non-zero rates by increasing the value of lambda, or use a different prior altogether. Once the analysis has run for a little while, you can stop it, set the prior back, and resume.
- Set up a start state that contains more zeros. This is a bit fiddly, since it involves editing the XML. Find the rateIndicator parameter (id="rateIndicator.s:location"). Say its value is true, and it has dimension N. For parameters that have fewer values than their dimension, the value is repeated until all N values are available. So, if you have dimension=6 (i.e., we need 6 flags) and value="true false", it will be copied 3 times, giving "1 0 1 0 1 0". With value="true false true" we get "1 0 1 1 0 1".
So, what you can do if you have N states is set up the values such that only the N-1 rates along the diagonal (connecting each state to the next) are true.
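For example, with 6 states and a symmetric model the rateIndicator has dimension 6*5/2 = 15. Assuming the rates are ordered row by row along the upper triangle of the rate matrix (an assumption; check how your file was generated), the five "diagonal" rates connecting state 1 to 2, 2 to 3, and so on sit at positions 1, 6, 10, 13 and 15, so a sparse start state could look like this:

```xml
<!-- Illustrative start state for 6 states: only the 5 rates linking
     1-2, 2-3, 3-4, 4-5 and 5-6 switched on. Positions assume
     upper-triangle, row-major rate ordering; the surrounding element
     name and id should match those already in your XML. -->
<parameter id="rateIndicator.s:location" spec="parameter.BooleanParameter" dimension="15">
    true false false false false
    true false false false
    true false false
    true false
    true
</parameter>
```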
Analysis does not converge
There are many reasons an analysis does not converge (there are several sections on this in the book, with tips on how to increase ESS). Probably the first thing you want to do is make sure the rates are set up correctly.
A specific reason for ancestral state reconstruction failing to converge is that there are too many states, so there is not enough data for the rates to be estimated.
Numeric instability with asymmetric analysis
By default, the ancestral reconstruction uses a symmetric rate matrix.
By setting the symmetric attribute to false on the element with spec="SVSGeneralSubstitutionModel", an asymmetric rate matrix is used, which means going from state 1 to state 2 can have a different rate than going the other way around. This potentially doubles the number of rates, and it also doubles the dimension of the rateIndicator.
This can lead to numeric instability in the eigensystem (which performs an eigenvalue decomposition of the rate matrix), which means your analysis won't start. This can be solved by changing the default eigen-decomposition method to a more robust variant by setting the eigenSystem attribute of the substitution model to beast.evolution.substitutionmodel.RobustEigenSystem, so the substitution model looks something like this:
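A sketch of the adjusted element is below; the ids, rate references and nested frequencies element should be taken from your own XML, and only the symmetric and eigenSystem attributes are the point here.

```xml
<!-- Sketch only: ids and referenced parameters are illustrative and
     should match those already in your file. -->
<substModel id="svs.s:location" spec="SVSGeneralSubstitutionModel"
        symmetric="false"
        eigenSystem="beast.evolution.substitutionmodel.RobustEigenSystem"
        rateIndicator="@rateIndicator.s:location"
        rates="@relativeGeoRates.s:location">
    <frequencies id="traitfreqs.s:location" spec="Frequencies"
        frequencies="@traitfrequencies.s:location"/>
</substModel>
```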