#!/usr/bin/env python
# coding: utf-8

# ## (📗) ipyrad Cookbook: `abba-baba` admixture tests

# The `ipyrad.analysis` Python module includes functions to calculate abba-baba admixture statistics (including several variants of these measures), to perform significance tests, and to produce plots of the results. All code in this notebook is written in Python, which you can copy/paste into an IPython terminal to execute, or, preferably, run in a Jupyter notebook like this one. See the other analysis cookbooks for [instructions](http://ipyrad.readthedocs.io/analysis.html) on using Jupyter notebooks. All of the software required for this tutorial is included with `ipyrad` (v.6.12+). Finally, we've written functions to generate plots for summarizing and interpreting results.

# ### Load packages

# In[1]:

import ipyrad.analysis as ipa
import ipyparallel as ipp
import toytree
import toyplot

# In[2]:

print(ipa.__version__)
print(toyplot.__version__)
print(toytree.__version__)

# ### Connect to cluster

# The code can easily be parallelized across cores on your machine, or across many nodes of an HPC cluster, using the `ipyparallel` library (see our [ipyparallel tutorial]()). An `ipcluster` instance must be running for you to connect to, which can be started by running `ipcluster start` in a separate terminal.

# In[3]:

ipyclient = ipp.Client()
len(ipyclient)

# ### Load in your .loci data file and a tree hypothesis

# We are going to use the shape of our tree topology hypothesis to generate the 4-taxon tests to perform, so we'll start by looking at our tree and making sure it is properly rooted.

# In[4]:

## ipyrad and raxml output files
locifile = "./analysis-ipyrad/pedic_outfiles/pedic.loci"
newick = "./analysis-raxml/RAxML_bipartitions.pedic"

# In[5]:

## parse the newick tree, re-root it, and plot it.
tre = toytree.tree(newick=newick)
tre.root(wildcard="prz")
tre.draw(
    height=350,
    width=400,
    node_labels=tre.get_node_values("support")
    )

## store the rooted tree back into a newick string.
newick = tre.tree.write()

# ### Short tutorial: calculating abba-baba statistics

# To give a gist of what this code can do, here is a quick tutorial version, each step of which we explain in greater detail below. We first create a `'baba'` analysis object that is linked to our data file; in this example we name the variable **bb**. Then we tell it which tests to perform, here by automatically generating a number of tests using the `generate_tests_from_tree()` function. And finally, we calculate the results and plot them.

# In[6]:

## create a baba object linked to a data file and newick tree
bb = ipa.baba(data=locifile, newick=newick)

# In[7]:

## generate all possible abba-baba tests meeting a set of constraints
bb.generate_tests_from_tree(
    constraint_dict={
        "p4": ["32082_przewalskii", "33588_przewalskii"],
        "p3": ["33413_thamno"],
    })

# In[8]:

## show the first 3 tests
bb.tests[:3]

# In[9]:

## run all tests linked to bb
bb.run(ipyclient)

# In[10]:

## show the first 5 results
bb.results_table.head()

# ### Look at the results

# By default we do not attach the names of the samples that were included in each test to the results table, since doing so makes the table much harder to read, and we wanted it to look clean. However, this information is readily available in the `.tests` attribute of the baba object, as shown below. We have also written plotting functions that display this information clearly.

# In[11]:

## save the full results table to a tab-separated file
bb.results_table.to_csv("bb.abba-baba.csv", sep="\t")

## show the results table sorted by Z-score
sorted_results = bb.results_table.sort_values(by="Z", ascending=False)
sorted_results.head()

# In[12]:

## get taxon names in the sorted results order
sorted_taxa = bb.taxon_table.iloc[sorted_results.index]

## show taxon names for the first few sorted tests
sorted_taxa.head()

# ### Plotting and interpreting results

# Interpreting the results of D-statistic tests is actually *very* complicated.
# You cannot treat every test as if it were independent, because introgression between one pair of species may cause one or both of those species to *appear* as if they have also introgressed with other taxa in your data set. This problem is described in detail in [this paper (Eaton et al. 2015)](http://onlinelibrary.wiley.com/doi/10.1111/evo.12758/abstract). A good place to start, then, is to perform many tests and focus on those with the strongest signal of admixture. Then perform additional tests, such as `partitioned D-statistics` (described further below), to tease apart whether a single introgression event or multiple events are likely to have occurred.
#
# In the example plot below we find evidence of admixture between the sample **33413_thamno** (black) and several other samples, but the signal is strongest with respect to **30556_thamno** (tests 12-19). It also appears that admixture is consistently detected between samples (**40578_rex** & **35855_rex**) when contrasted against **35236_rex** (tests 20, 24, 28, 34, and 35). Take note, the tests are indexed starting at 0.

# In[13]:

## plot results on the tree
bb.plot(height=850, width=700, pct_tree_y=0.2, pct_tree_x=0.5, alpha=4.0);

# ### Generating tests

# Because tests are generated from a tree file, only tests that fit the topology of the tree will be produced. For example, the call below generates zero possible tests because the two samples entered for P3 (the two thamnophila subspecies) are paraphyletic on the tree topology, and therefore cannot form a clade together.

# In[14]:

## this is expected to generate zero tests
aa = bb.copy()
aa.generate_tests_from_tree(
    constraint_dict={
        "p4": ["32082_przewalskii", "33588_przewalskii"],
        "p3": ["33413_thamno", "30556_thamno"],
    })

# If you want to get results for a test that does not fit on your tree, you can always write the test out by hand instead of auto-generating it from the tree.
# Doing it this way is fine when you have only a few tests to run, but it becomes burdensome when writing many tests.

# In[15]:

## writing tests by hand for a new object
aa = bb.copy()
aa.tests = [
    {"p4": ["32082_przewalskii", "33588_przewalskii"],
     "p3": ["33413_thamno", "30556_thamno"],
     "p2": ["40578_rex", "35855_rex"],
     "p1": ["39618_rex", "38362_rex"]},
    {"p4": ["32082_przewalskii", "33588_przewalskii"],
     "p3": ["33413_thamno", "30556_thamno"],
     "p2": ["40578_rex", "35855_rex"],
     "p1": ["35236_rex"]},
]

## run the tests
aa.run(ipyclient)
aa.results_table

# #### Further investigating results with 5-part tests

# You can also perform partitioned D-statistic tests, like below. Here we are testing the direction of introgression. If the two *thamnophila* subspecies are in fact sister species, then they would be expected to share derived alleles that arose in their ancestor and which would be introgressed together if either one of them introgressed into a *P. rex* taxon. As you can see, test 0 shows no evidence of introgression, whereas test 1 shows that the two *thamno* subspecies share introgressed alleles that are present in two samples of *rex* relative to sample "35236_rex".
#
# More on this further below in this notebook.

# In[16]:

## further investigate with a 5-part test
cc = bb.copy()
cc.tests = [
    {"p5": ["32082_przewalskii", "33588_przewalskii"],
     "p4": ["33413_thamno"],
     "p3": ["30556_thamno"],
     "p2": ["40578_rex", "35855_rex"],
     "p1": ["39618_rex", "38362_rex"]},
    {"p5": ["32082_przewalskii", "33588_przewalskii"],
     "p4": ["33413_thamno"],
     "p3": ["30556_thamno"],
     "p2": ["40578_rex", "35855_rex"],
     "p1": ["35236_rex"]},
]
cc.run(ipyclient)

# In[17]:

## the partitioned D results for the two tests
cc.results_table

# In[18]:

## and view the 5-part test taxon table
cc.taxon_table

# ## Full Tutorial
#
# ### Creating a `baba` object
#
# The fundamental object for running abba-baba tests is the `ipa.baba()` object.
# This stores all of the information about the data, tests, and results of your analysis, and is used to generate plots. If you have only one data file that you want to run many tests on, then you only need to enter the path to your data once. The data file must be a `'.loci'` file from an ipyrad analysis. In general, you will probably want to use the largest data file possible for these tests (`min_samples_locus`=4) to maximize the amount of data available for any test. Once an initial `baba` object is created, you can create copies of that object that inherit its parameter settings, and which you can use to perform different tests, like below.

# In[19]:

## create an initial object linked to your data in 'locifile'
aa = ipa.baba(data=locifile)

## create two other copies
bb = aa.copy()
cc = aa.copy()

## print these objects
print(aa)
print(bb)
print(cc)

# ### Linking tests to the baba object

# The next thing we need to do is link a `'test'`, or a list of tests, to each of these objects. In the [Short tutorial](#Short-tutorial:-calculating-abba-baba-statistics) above we auto-generated a list of tests from an input tree, but to be more explicit about how things work, here we will write out each test by hand. A test is described by a Python dictionary that tells it which samples (individuals) should represent the 'p1', 'p2', 'p3', and 'p4' taxa in the ABBA-BABA test. You can see in the example below that we set two samples to represent the outgroup taxon (p4). This means that the SNP frequency for those two samples combined will represent the p4 taxon. For the `baba` object named `'cc'` below we enter two tests using a list, to show how multiple tests can be linked to a single `baba` object.
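# To make the quantity being tested concrete, here is a minimal sketch (ours, not ipyrad's internal implementation) of how the classic 4-taxon D-statistic is computed from counts of ABBA and BABA site patterns, assuming each tip is a single haploid sample with alleles coded 0 (ancestral) / 1 (derived):

```python
## Minimal sketch of the 4-taxon D-statistic (illustrative, not ipyrad code).
## A site is ABBA if p2 and p3 share the derived allele (0,1,1,0), and
## BABA if p1 and p3 share it (1,0,1,0); D = (ABBA - BABA) / (ABBA + BABA).

def d_statistic(snps):
    """snps: list of (p1, p2, p3, p4) tuples of 0/1 alleles."""
    abba = sum(1 for site in snps if site == (0, 1, 1, 0))
    baba = sum(1 for site in snps if site == (1, 0, 1, 0))
    if abba + baba == 0:
        return 0.0
    return (abba - baba) / float(abba + baba)

## three ABBA sites and one BABA site -> D = (3 - 1) / (3 + 1) = 0.5
sites = [(0, 1, 1, 0), (0, 1, 1, 0), (0, 1, 1, 0), (1, 0, 1, 0)]
print(d_statistic(sites))
```

# Under the null hypothesis of no introgression, ABBA and BABA sites arise from incomplete lineage sorting with equal probability, so D is expected to be zero.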
# In[20]:

aa.tests = {
    "p4": ["32082_przewalskii", "33588_przewalskii"],
    "p3": ["29154_superba"],
    "p2": ["33413_thamno"],
    "p1": ["40578_rex"],
}

bb.tests = {
    "p4": ["32082_przewalskii", "33588_przewalskii"],
    "p3": ["30686_cyathophylla"],
    "p2": ["33413_thamno"],
    "p1": ["40578_rex"],
}

cc.tests = [
    {
        "p4": ["32082_przewalskii", "33588_przewalskii"],
        "p3": ["41954_cyathophylloides"],
        "p2": ["33413_thamno"],
        "p1": ["40578_rex"],
    },
    {
        "p4": ["32082_przewalskii", "33588_przewalskii"],
        "p3": ["41478_cyathophylloides"],
        "p2": ["33413_thamno"],
        "p1": ["40578_rex"],
    },
]

# ### Other parameters

# Each `baba` object has a set of parameters associated with it that are used to filter the loci used in a test, and to set some other optional settings. If the `'mincov'` parameter is set to 1 (the default), then a locus is only used in a test if at least one sample from every tip of the tree has data for that locus. For example, in the tests above where we entered two samples to represent "p4", only one of those two samples *needs* to have data at a locus for it to be included in our analysis. If you want to require that both samples have data at the locus for it to be included, then you could set `mincov=2`. However, for the tests above, setting `mincov=2` would filter out *all* of the data, since it is impossible to have a coverage of 2 for 'p3', 'p2', and 'p1', which each have only one sample. Therefore, you can also enter the `mincov` parameter as a dictionary that sets a different minimum for each tip taxon, which we demonstrate below for the `baba` object `'bb'`.
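# To illustrate what a per-taxon `mincov` dictionary implies, here is a small sketch (a hypothetical helper, not the ipyrad implementation) of filtering a locus by the number of samples with data in each tip taxon:

```python
## Sketch of per-taxon coverage filtering (hypothetical helper, not ipyrad
## code). 'coverage' maps each tip taxon to the number of its samples with
## data at a locus; a locus passes only if every taxon meets its minimum.

def locus_passes(coverage, mincov):
    return all(coverage.get(tax, 0) >= minreq
               for tax, minreq in mincov.items())

mincov = {"p4": 2, "p3": 1, "p2": 1, "p1": 1}

## both p4 samples have data -> locus is kept
print(locus_passes({"p4": 2, "p3": 1, "p2": 1, "p1": 1}, mincov))  # True

## only one of the two p4 samples has data -> locus is filtered out
print(locus_passes({"p4": 1, "p3": 1, "p2": 1, "p1": 1}, mincov))  # False
```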
# In[21]:

## print params for object aa
aa.params

# In[22]:

## set the mincov value as a dictionary for object bb
bb.params.mincov = {"p4": 2, "p3": 1, "p2": 1, "p1": 1}
bb.params

# ### Running the tests

# When you execute the `'run()'` command, all of the tests for the object will be distributed to run in parallel on your cluster (or on the cores available on your machine) as connected through your `ipyclient` object. The results of the tests will be stored in your `baba` object under the attributes `'results_table'` and `'results_boots'`.

# In[23]:

## run tests for each of our objects
aa.run(ipyclient)
bb.run(ipyclient)
cc.run(ipyclient)

# ### The results table

# The results of the tests are stored as a data frame (pandas.DataFrame) in `results_table`, which can be easily accessed and manipulated. The tests are listed in order and can be referenced by their `'index'` (the number in the left-most column). For example, below we see the results for object `'cc'` tests 0 and 1. You can see which taxa were used in each test by accessing them from the `.tests` attribute as a dictionary, or from `.taxon_table`, which returns them as a data frame. An even better way to see which individuals were involved in each test, however, is to use our plotting functions, which we describe further below.

# In[31]:

## you can sort the results by Z-score
cc.results_table.sort_values(by="Z", ascending=False)

## save the table to a file
cc.results_table.to_csv("cc.abba-baba.csv")

## show the results in the notebook
cc.results_table

# ### Auto-generating tests

# Entering all of the tests by hand can be a pain, which is why we wrote functions to auto-generate tests given an input **rooted** tree and a number of constraints on the tests to generate from that tree. It is important to add constraints on the tests, otherwise the number that can be produced becomes very large very quickly.
# Calculating results runs pretty fast, but summarizing and interpreting thousands of results is nearly impossible, so it is generally better to limit the tests to those that make some intuitive sense to run. You can see in this example that implementing a few constraints reduces the number of tests from 1608 to 13.

# In[32]:

## create a new 'copy' of your baba object and attach a treefile
dd = bb.copy()
dd.newick = newick

## generate all possible tests
dd.generate_tests_from_tree()

## a dict of constraints
constraint_dict = {
    "p4": ["32082_przewalskii", "33588_przewalskii"],
    "p3": ["40578_rex", "35855_rex"],
}

## generate tests with constraints
dd.generate_tests_from_tree(
    constraint_dict=constraint_dict,
    constraint_exact=False,
)

## 'exact' constraints are even more restrictive
dd.generate_tests_from_tree(
    constraint_dict=constraint_dict,
    constraint_exact=True,
)

# ### Running the tests

# The `.run()` command will run the tests linked to your analysis object. An ipyclient object is required to distribute the jobs in parallel. The `.plot()` function can then optionally be used to visualize the results on a tree. Or, you can simply look at the results in the `.results_table` attribute.

# In[33]:

## run the dd tests
dd.run(ipyclient)
dd.plot(height=500, pct_tree_y=0.2, alpha=4);
dd.results_table

# ### More about input file paths (i/o)

# The default (required) input data file is the `.loci` file produced by `ipyrad`. When performing D-statistic calculations this file will be parsed to retain the maximal amount of information useful for each test.
#
# An additional (*optional*) file to provide is a newick tree file. While you do not need a tree in order to run ABBA-BABA tests, you do need at least *a hypothesis* for how your samples are related in order to set up meaningful tests. By loading in a tree for your data set we can use it to easily set up hypotheses to test, and to plot results on the tree.
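# For intuition on where the Z-score in the results table comes from, here is a sketch of the bootstrap procedure (illustrative only; ipyrad performs the actual resampling over the loci in your `.loci` file): each replicate resamples loci with replacement, D is recomputed, and Z is the absolute observed D divided by the standard deviation of the bootstrap distribution.

```python
## Sketch of a bootstrap Z-score for D (illustrative, not ipyrad code).
import random

def bootstrap_z(per_locus_counts, nboots=1000, seed=42):
    """per_locus_counts: list of (abba, baba) count pairs, one per locus."""
    rng = random.Random(seed)
    nloci = len(per_locus_counts)

    def dstat(pairs):
        abba = sum(a for a, b in pairs)
        baba = sum(b for a, b in pairs)
        return (abba - baba) / float(abba + baba) if (abba + baba) else 0.0

    observed = dstat(per_locus_counts)

    ## each replicate resamples nloci loci with replacement
    boots = []
    for _ in range(nboots):
        sample = [per_locus_counts[rng.randrange(nloci)] for _ in range(nloci)]
        boots.append(dstat(sample))

    mean = sum(boots) / float(nboots)
    std = (sum((b - mean) ** 2 for b in boots) / float(nboots)) ** 0.5
    return observed, abs(observed) / std

## toy per-locus ABBA/BABA counts (hypothetical data)
counts = [(3, 1), (2, 2), (4, 0), (1, 1), (3, 2)] * 20
d, z = bootstrap_z(counts)
print(d, z)
```

# A large Z (commonly Z > 3) indicates that the observed D deviates strongly from its null expectation of zero.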
# In[20]:

## path to a locifile created by ipyrad
locifile = "./analysis-ipyrad/pedicularis_outfiles/pedicularis.loci"

## path to an unrooted tree inferred with tetrad
newick = "./analysis-tetrad/tutorial.tree"

# ### (optional): root the tree

# For abba-baba tests you will almost always want your tree to be rooted, since the test relies on an assumption about which alleles are ancestral. You can use our simple tree plotting library `toytree` to root your tree. This library uses [Toyplot](http://toyplot.rtfd.io) as its plotting backend and [ete3](http://etetoolkit.org/) as its tree manipulation backend.
#
# Below I load in a newick string and root the tree on the two *P. przewalskii* samples using the `root()` function. You can either enter the names of the outgroup samples explicitly or enter a wildcard to select them. We show the rooted tree from a tetrad analysis below. The newick string of the rooted tree can be saved or accessed from the `.newick` attribute, like below.

# In[39]:

## load in the tree
tre = toytree.tree(newick)

## set the outgroup either as a list or using a wildcard selector
tre.root(outgroup=["32082_przewalskii", "33588_przewalskii"])
tre.root(wildcard="prz")

## draw the tree
tre.draw(width=400)

## save the rooted newick string back to a variable and print
newick = tre.newick

# ### Interpreting results

# You can see in the `results_table` below that the D-statistics range from about 0.0 to 0.15 in these tests. These values on their own are not terribly informative, so we instead generally focus on the Z-score, which measures how far the distribution of D-statistic values across bootstrap replicates deviates from its expected value of zero. The default number of bootstrap replicates per test is 1000. Each replicate resamples loci with replacement.
#
# In these tests ABBA and BABA sites occurred with pretty equal frequency.
# The values are calculated using SNP frequencies, which is why they are floats instead of integers, and this is also why we were able to combine multiple samples to represent a single tip in the tree (e.g., see the tests we set up above).

# In[41]:

## show the results table
print(dd.results_table)

# ### Running 5-taxon (partitioned) D-statistics

# Performing partitioned D-statistic tests is no harder than running the standard four-taxon D-statistic tests; you simply enter tests with five taxa in them, listed as p1-p5. We have not developed a function to generate 5-taxon tests from a phylogeny, since this test is more appropriately applied to a small number of tests that further tease apart the meaning of significant 4-taxon results. See the example above in the short tutorial. A simulation example will be added here soon...

# In[ ]:
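# To show why frequency-based ABBA/BABA values are floats, here is a sketch of the standard frequency-weighted D-statistic of Durand et al. (2011) (illustrative only, not ipyrad's internal code). When a tip is represented by multiple samples, each SNP contributes fractional ABBA and BABA weights computed from the derived-allele frequency in each taxon:

```python
## Frequency-weighted D-statistic (Durand et al. 2011 formulation; a sketch,
## not ipyrad's internal code). Each tuple holds the derived-allele
## frequencies (p1, p2, p3, p4) at one SNP; tips with multiple samples can
## have intermediate frequencies, so ABBA/BABA sums are floats.

def frequency_d(freqs):
    abba = sum((1 - p1) * p2 * p3 * (1 - p4) for p1, p2, p3, p4 in freqs)
    baba = sum(p1 * (1 - p2) * p3 * (1 - p4) for p1, p2, p3, p4 in freqs)
    return (abba - baba) / (abba + baba) if (abba + baba) else 0.0

## with two samples per tip, frequencies can be 0.0, 0.5, or 1.0
snps = [
    (0.0, 0.5, 1.0, 0.0),   # contributes 0.5 ABBA, 0.0 BABA
    (0.5, 0.0, 1.0, 0.0),   # contributes 0.0 ABBA, 0.5 BABA
    (0.0, 1.0, 0.5, 0.0),   # contributes 0.5 ABBA, 0.0 BABA
]
print(frequency_d(snps))
```

# With a single haploid sample per tip the frequencies are all 0 or 1 and this reduces to the simple ABBA/BABA site-count formula.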