Category Archives: Uncategorized

What is new in v2.4.8

BEAST v2.4.8 is a patch release fixing issue #736. When using RNA data, a bug caused the BeagleTreeLikelihood class to interpret ‘U’ characters in a sequence as missing data instead of treating them as equivalent to ‘T’ in DNA. As a result, the likelihood was calculated incorrectly when using the BEAGLE library. This is not a bug in BEAGLE itself, but in the BeagleTreeLikelihood class in BEAST that interfaces with BEAGLE, so it does not affect other software that uses BEAGLE, such as BEAST 1 and MrBayes.
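In terms of state encoding, the fix amounts to mapping ‘U’ to the same state index as ‘T’ rather than to the missing-data state. A minimal illustration of the intended mapping (hypothetical code for exposition, not BEAST's actual DataType implementation):

```python
# State indices for nucleotide data: A=0, C=1, G=2, T=3.
# The bug mapped 'U' to the missing-data state instead of state 3.
NUCLEOTIDE_STATE = {
    'A': 0, 'C': 1, 'G': 2, 'T': 3,
    'U': 3,           # RNA uracil takes the same state as 'T'
    '-': -1, '?': -1  # gap / missing data
}

def encode(sequence):
    """Encode a DNA/RNA sequence as a list of state indices."""
    return [NUCLEOTIDE_STATE[c] for c in sequence.upper()]
```

With this mapping, an RNA sequence and its DNA equivalent encode to identical states, so the likelihood is the same whichever alphabet the alignment uses.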

If you ran BEAST on RNA data with BEAGLE installed and used in the analysis, this bug affects your results.

This does not affect DNA or amino acid data with or without use of BEAGLE.

Also, this does not affect analyses with RNA data when using the -java option for BEAST, or when you do not have BEAGLE installed (by default, BEAST attempts to use BEAGLE if it is installed).

To tell whether you are using the BeagleTreeLikelihood or the Java TreeLikelihood, check the start of the BEAST output, which reports which tree likelihood is used. If you have BEAGLE installed and in use, it shows a message similar to this:

Using BEAGLE version: 2.1.2 resource 0: CPU
Ignoring ambiguities in tree likelihood.
Ignoring character uncertainty in tree likelihood.
With 69 unique site patterns.
Using rescaling scheme : dynamic

If you use the -java option, or do not have BEAGLE installed, it shows:

TreeLikelihood(treeLikelihood) uses BeerLikelihoodCore4

What is new in v2.4.6

This release mainly contains enhancements and bug fixes in BEAUti.


Starting trees can now be edited in the starting tree panel, which becomes visible by selecting the menu View/Show Starting tree panel. This is for the Standard template only, and does not work in the StarBeast template. Setting up a different starting tree this way should be much less error prone than editing the XML. It is also now possible to change the attributes of the random tree, such as population size and maximum tree height, which makes it easier to get a starting tree that conforms to all constraints of the analysis, such as origin heights for advanced birth-death tree priors.
There is a choice of a random tree (which used to be the default), a cluster tree (UPGMA, neighbour joining, and a number of other standard hierarchical clustering algorithms), and a Newick tree.

BEAUti now allows alignments to be replaced, so old analyses can be reused for new data. If you need to run the same kind of analysis for many alignments, this can save quite a bit of time. To replace an alignment, select it in the partition panel and click the small ‘r’ button at the bottom of the screen, next to the ‘+’, ‘-’ and ‘Split’ buttons. A file chooser dialog is shown where you can select an alignment file to replace the one selected in the partition panel.

There is a fix for a fasta file import bug that marked sequences as amino acid when they should have been marked as nucleotide. This happened when importing a fasta file that was misclassified as an amino acid alignment; a dialog was shown where you could change the type. Unfortunately, only the data type of the alignment was changed, not that of the sequences, leading to hard-to-diagnose problems.

When splitting an alignment on codon positions, the trees were previously unlinked, so splitting into three partitions at codon positions 1, 2 and 3 resulted in three separate trees. Now BEAUti keeps the trees linked, which makes more sense from a biological point of view.

In the Site Model panel, BEAUti now automatically sets the estimate flag on the shape parameter when more than one rate category is chosen. You can still fix the shape parameter by un-checking the checkbox again, but since this is unusual, the shape is now estimated by default.

BEAUti allows visualisation of alignments, triggered by double-clicking an alignment in the partition panel. Integer alignments, as used in microsatellite analyses, can now be displayed as well.


Better documentation through updated class descriptions and improved error messages.

A more robust XMLParser, which can now handle BEASTObject classes that use the Param annotation in constructors.

A bug fix that prevents double counting of the offset input in ParametricDistribution.sample.


DensiTree version updated to v2.2.6.

Two new PhD positions in the Centre for Computational Evolution!

Two new PhD positions in computational evolution are available in the Centre for Computational Evolution at the University of Auckland to work on developing Bayesian integrative models of evolution that use data from genomic sequences, phenotypic data and the fossil record. The research will include the design and development of new mathematical and computational models for Bayesian phylogenetic inference. The successful candidates will work with an international team of computational biologists, evolutionary biologists and palaeontologists to both develop new methods and test them on a number of exciting data sets.

How and when species came to be is the fundamental question in macroevolution. Attempts to answer it use a variety of data sources including genome sequences, morphology and fossil discoveries. Yet current methods are unable to exploit all this data, with different data sources often producing conflicting results. This project aims to create a unifying probabilistic framework that combines genomic, fossil and phenotypic data to give us the best possible understanding of evolutionary history. The PhD research will involve creating open-source software tools to disseminate new methods widely as well as using these new methods to address outstanding questions in human, animal and pathogen evolution.

The initial focus of the work will be to extend the StarBEAST2 package to allow for sampled ancestral species and their phenotypes in the species tree as well as ancient DNA samples in the embedded gene trees. This is a major software engineering task. There will also be work on developing new trait evolution models that can account for trait variation both within and between species. Current models will be incorporated into the BEAST 2 software and major studies on real and simulated data will be run to assess their strengths and weaknesses. The successful candidates will also have the opportunity to develop new inference methods for continuous trait evolution. Finally a third focus of the work will be on incorporating rich fossil data into the phylogenetic framework within BEAST 2. New models will incorporate variability of sampling over time and space, trait-dependent sampling, and will be able to use multiple fossils from the same morphospecies while accounting for uncertainty in the geologically-derived age of fossils. Both simulated and curated data sets will be used to test and prove the newly developed methods.

The successful candidates will work with an international team including Professor Alexei Drummond, Dr David Welch (University of Auckland), A/Prof Tanja Stadler (ETH Zurich) and Dr Nick Matzke (ANU), as well as expert collaborators with knowledge of specific paleontological and molecular data sets including Dr Mana Dembo (hominins; Simon Fraser University) and Dr Graham Slater (canids; University of Chicago).

Each position comes with a stipend of NZ$27,300 (adjusted annually for inflation) and payment of enrolment fees. There is no teaching requirement associated with the stipend.

The successful applicants will have a strong background in a quantitative subject (such as Computational Biology, Mathematics, Statistics, Computer Science, Physics or similar), an understanding of Bayesian statistics, some coding experience, and ideally some exposure to, or at least a strong interest in, phylogenetic methods. The exact nature of the work will depend on the strengths and background of the successful candidates.

For more information and to express interest, please send your CV to Professor Alexei Drummond or Dr David Welch.

Workshop Announcement: Taming the BEAST in the South Pacific

*** Deadline extended to 7th NOVEMBER for late applications ***

*** Just a few places left. ***

Taming the BEAST in the South Pacific is a comprehensive 5-day workshop to be held on scenic Waiheke Island, New Zealand, from 5 to 10 February 2017.

This workshop will equip researchers with the skills to use BEAST2 software to perform phylogenetics and phylodynamic inferences across a wide range of disciplines through a series of talks by leading experts, lectures and hands-on tutorial sessions. Participants are also encouraged to bring their own datasets for one-on-one discussion and guidance.

Speakers confirmed for the workshop are leading experts in the field:

  • Simon Ho, University of Sydney
  • Alexei Drummond, University of Auckland
  • David Bryant, University of Otago
  • Remco Bouckaert, University of Auckland
  • Tracy Heath, Iowa State University

Taming the BEAST in the South Pacific is hosted by the Centre for Computational Evolution at the University of Auckland, and is modelled after the Taming the BEAST summer school in the Swiss Alps, which was organised by the Computational Evolution Group at ETH Zurich. Registration of interest for Taming the BEAST in the South Pacific is open until 25 October 2016. The deadline for late applications is 7 November 2016. Just a few places left, first in, first served! More information and registration details are available on the workshop website. Three partial scholarships have been made available for postgraduate students.

Does Metropolis coupled MCMC (MC3) work?

19 May 2015 by Remco Bouckaert

Metropolis coupled MCMC (MCMCMC or MC3) runs an MCMC analysis together with a number of ‘heated’ chains. These heated chains sample from a distribution that is less peaked than the posterior we want to sample from, which makes it easier for them to move away from a local optimum. At regular intervals, chains can swap states (depending on a stochastic criterion), including with the chain that samples from the posterior. This is supposed to help explore the sample space more efficiently.
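The stochastic swap criterion is a standard Metropolis ratio on the tempered posteriors. The sketch below assumes each chain i runs at an inverse temperature beta_i (1.0 for the cold chain); it shows the textbook MC3 rule, not the BEASTLabs source code:

```python
import math
import random

def accept_swap(log_p_i, log_p_j, beta_i, beta_j):
    """Metropolis criterion for swapping states between two chains.

    log_p_i, log_p_j: log posterior of the current state of chain i and chain j.
    beta_i, beta_j:   inverse temperatures of the two chains (cold chain: 1.0).

    Chain i targets posterior^beta_i, so the swap acceptance ratio reduces to
    (beta_i - beta_j) * (log_p_j - log_p_i) in log space.
    """
    log_ratio = (beta_i - beta_j) * (log_p_j - log_p_i)
    return math.log(random.random()) < log_ratio
```

Note that a swap moving the higher-posterior state onto the colder chain has a non-negative log ratio and is always accepted, which is how good states found by heated chains percolate down to the cold chain.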

To set up an MCMCMC analysis in BEAST, you need to install the BEASTLabs package. The easiest way to set up the XML is to set up a simple MCMC analysis in BEAUti, save the file, and edit the XML by

  • replacing the spec attribute in the run element with "beast.inference.MCMCMC", and
  • adding a chains attribute with the number of chains you want to run.

After this, the XML should look something like this:
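A sketch of the edited run element is below; the id, chainLength and nested elements are placeholders standing in for whatever BEAUti generated in your file, and only the spec and chains attributes are the actual edits:

```xml
<run id="mcmc" spec="beast.inference.MCMCMC" chains="4" chainLength="10000000">
    <!-- state, distribution, operators and loggers as generated by BEAUti -->
    ...
</run>
```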


When running the analysis, you want at least as many cores as chains, so that each chain thread can run on its own core. The current implementation is multi-threaded, but does not support running across multiple processors or machines (yet).

Does MCMCMC work?

The question remains whether it is better to run, say, 4 individual MCMC analyses and combine the results, instead of running a single MCMCMC analysis. From what I have seen so far, the BEAST proposals are typically very well tuned to explore tree space, and can handle correlations between various parameters quite well. If a BEAST analysis gets stuck (which shows up when different runs seem to converge but each ends up at a different posterior), anecdotal evidence with *BEAST analyses suggests that throwing MCMCMC at it does not solve the problem.

So, there are two criteria for judging whether MC3 works:

  • Can it get us out of local optima, where MCMC by itself has trouble?
  • Can it produce better effective sample size (ESS) per computer cycle?

I can imagine that MC3 works in some cases, and it has been around for ages (notably in MrBayes), but perhaps its success there is due to the kind of MCMC proposals used, and BEAST analyses may not benefit from MC3. I have not seen an example yet, so if you have a BEAST analysis where MC3 produces better results than MCMC alone, please let me know!