#!/usr/bin/env python
# coding: utf-8

# # Data analysis in Python
#
# ## Lesson preamble
#
# ### Learning objectives
#
# - Describe what a data frame is.
# - Load external data from a .csv file into a data frame with pandas.
# - Summarize the contents of a data frame with pandas.
# - Learn to use data frame attributes `loc[]`, `head()`, `info()`, `describe()`, `shape`, `columns`, `index`.
# - Understand the split-apply-combine concept for data analysis.
# - Use `groupby()`, `mean()`, `agg()` and `size()` to apply this technique.
# - Use `concat()` and `merge()` to combine data frames.
#
# ### Lesson outline
#
# - Manipulating and analyzing data with pandas
#     - Data set background (10 min)
#     - What are data frames (15 min)
#     - Data wrangling with pandas (40 min)
# - Split-apply-combine techniques in `pandas`
#     - Using `mean()` to summarize categorical data (20 min)
#     - Using `size()` to summarize categorical data (15 min)
# - Combining data frames (15 min)
#
# ---
#
# ## Manipulating and analyzing data with pandas

# To access additional functionality in a spreadsheet program, you need to click the menu and select the tool you want to use. All charts are in one menu, text layout tools in another, data analysis tools in a third, and so on. Programming languages such as Python have so many tools and functions that they would not fit in a menu. Instead of clicking File -> Open and choosing the file, you would type something similar to `file.open('')` in a programming language. Don't worry if you forget the exact expression: it is often enough to type the first few letters and then hit Tab to show the available options. More on that later.

# ### Dataset background
#
# Today, we will be working with real data from a longitudinal study of
# species abundance in the Chihuahuan desert ecosystem near Portal, Arizona, USA.
# This study includes observations of plants, ants, and rodents from 1977 to 2002,
# and has been used in over 100 publications. More information is available in
# [the abstract of this paper from 2009](
# http://onlinelibrary.wiley.com/doi/10.1890/08-1222.1/full). There are several
# datasets available related to this study, and we will be working with datasets
# that have been preprocessed by [Data
# Carpentry](https://www.datacarpentry.org) to facilitate teaching. These are made
# available online as *The Portal Project Teaching Database*, both at the [Data
# Carpentry website](http://www.datacarpentry.org/ecology-workshop/data/), and on
# [Figshare](https://figshare.com/articles/Portal_Project_Teaching_Database/1314459/6).
# Figshare is a great place to openly publish data, code, figures, and more, both to
# make them available to other researchers and to communicate findings that are
# not part of a longer paper.
#
# #### Presentation of the survey data
#
# We are studying the species and weight of animals caught in plots in our study
# area. The dataset is stored as a comma-separated values (CSV) file.
# Each row
# holds information for a single animal, and the columns represent:
#
# | Column           | Description                        |
# |------------------|------------------------------------|
# | record_id        | unique id for the observation      |
# | month            | month of observation               |
# | day              | day of observation                 |
# | year             | year of observation                |
# | plot_id          | ID of a particular plot            |
# | species_id       | 2-letter code                      |
# | sex              | sex of animal ("M", "F")           |
# | hindfoot_length  | length of the hindfoot in mm       |
# | weight           | weight of the animal in grams      |
# | genus            | genus of animal                    |
# | species          | species of animal                  |
# | taxa             | e.g. rodent, reptile, bird, rabbit |
# | plot_type        | type of plot                       |

# To read the data into Python, we are going to use a function called `read_csv`. This function is contained in a Python package called [`pandas`](https://pandas.pydata.org/). As mentioned previously, Python packages are a bit like browser extensions: they are not essential, but can provide nifty functionality. To use a package, it first needs to be imported.

# In[1]:

# pandas is given the nickname `pd`
import pandas as pd

# `pandas` can read CSV files saved on the computer or directly from a URL.

# In[2]:

surveys = pd.read_csv('https://ndownloader.figshare.com/files/2292169')

# To view the result, type `surveys` in a cell and run it, just as when viewing the content of any variable in Python.

# In[3]:

surveys

# This is how a data frame is displayed in the Jupyter Notebook. Although the data frame itself just consists of the values, the Notebook knows that this is a data frame and displays it in a nice tabular format (by adding HTML decorators), with some cosmetic conveniences such as the bold font type for the column and row names, the alternating grey and white zebra stripes for the rows, and the highlighting of the row the mouse pointer moves over. The increasing numbers on the far left are the data frame's index, which was added by `pandas` to easily distinguish between the rows.
#
# ## What are data frames?
#
# A data frame is a representation of data in a tabular format, similar to how data is often arranged in spreadsheets. The data is rectangular, meaning that all rows have the same number of columns and all columns have the same number of rows. Data frames are the *de facto* data structure for most tabular data, and what we use for statistics and plotting. A data frame can be created by hand, but most commonly they are generated by an input function such as `read_csv()`, i.e. when importing spreadsheet-like data from your hard drive (or the web).
#
# As can be seen above, the default is to display the first and last 30 rows and truncate everything in between, as indicated by the ellipsis (`...`). Although it is truncated, this output is still quite space consuming. To glance at how the data frame looks, it is sufficient to display only the top (the first 5 rows) using the `head()` method.

# In[4]:

surveys.head()

# Methods are very similar to functions; the main difference is that they belong to an object (above, the method `head()` belongs to the data frame `surveys`). Methods operate on the object they belong to, which is why we can call a method with empty parentheses, without any arguments. Compare this with the function `type()` that was introduced previously.

# In[5]:

type(surveys)

# Here, the `surveys` variable is explicitly passed as an argument to `type()`. An immediately tangible advantage of methods is that they simplify tab completion.
# Just type the name of the data frame, a period, and then hit Tab to see all the relevant methods for that data frame, instead of fumbling around with all the available functions in Python (there are quite a few!) and figuring out which ones operate on data frames and which do not. Methods also facilitate readability when chaining many operations together, which will be shown in detail later.
#
# The columns in a data frame can contain data of different types, e.g. integers, floats, and objects (which include strings, lists, dictionaries, and more). General information about the data frame (including the column data types) can be obtained with the `info()` method.

# In[6]:

surveys.info()

# The information includes the total number of rows and columns, the number of non-null observations, the column data types, and the memory (RAM) usage. The number of non-null observations is not the same for all columns, which means that some columns contain null values representing missing information. In `pandas`, these missing values are marked `NaN`, for "not a number".
#
# The column data type is often indicative of which type of data is stored in that column, and approximately corresponds to the following:
#
# - **Qualitative/Categorical**
#     - Nominal (labels, e.g. 'red', 'green', 'blue')
#         - `object`, `category`
#     - Ordinal (labels with order, e.g. 'Jan', 'Feb', 'Mar')
#         - `object`, `category`, `int`
#     - Binary (only two outcomes, e.g. True or False)
#         - `bool`
# - **Quantitative/Numerical**
#     - Discrete (whole numbers, often counts, e.g. number of children)
#         - `int`
#     - Continuous (measured values with decimals, e.g. weight)
#         - `float`
#
# Note that an `object` column can contain different types, e.g. `str` or `list`. Also note that there can be exceptions to the schema above, but it is still a useful rough guide.
#
# After reading the data into a data frame, `head()` and `info()` are two of the most useful methods for getting an idea of the structure of the data frame. There are many additional methods that can facilitate the understanding of what a data frame contains:
#
# - Size:
#     - `surveys.shape` - a tuple with the number of rows as the first element
#       and the number of columns as the second element
#     - `surveys.shape[0]` - the number of rows
#     - `surveys.shape[1]` - the number of columns
#
# - Content:
#     - `surveys.head()` - shows the first 5 rows
#     - `surveys.tail()` - shows the last 5 rows
#
# - Names:
#     - `surveys.columns` - returns the names of the columns (also called the variable names)
#     - `surveys.index` - returns the names of the rows (referred to as the index in pandas)
#
# - Summary:
#     - `surveys.info()` - column names and data types, number of observations, memory consumption
#     - `surveys.describe()` - summary statistics for each column
#
# These belong to the data frame and are commonly referred to as *attributes* of the data frame. All attributes are accessed with the dot syntax (`.`), which returns the value of the attribute.

# In[7]:

surveys.shape

# If the attribute is a method, parentheses can be appended to the name to carry out the method's operation on the data frame.

# In[8]:

surveys.head

# In[9]:

surveys.head()

# Attributes that are not methods often hold a value that has been precomputed, because the value is commonly accessed and it saves time to store it in an attribute instead of recomputing it every time it is needed. For example, every time `pandas` creates a data frame, the number of rows and columns is computed and stored in the `shape` attribute.
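# Likewise, the column and row labels from the list above are stored in the precomputed
# `columns` and `index` attributes and can be inspected directly (two quick
# illustrative cells, shown here as an aside):

surveys.columns

surveys.index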
# >#### Challenge
# >
# >Based on the output of `surveys.info()`, can you answer the following questions?
# >
# >* What is the class of the object `surveys`?
# >* How many rows and how many columns are in this object?
# >* Why is there not the same number of rows (observations) for each column?

# ### Saving data frames locally
#
# It is good practice to keep a copy of the data stored locally on your computer in case you want to do offline analyses, the online version of the file changes, or the file is taken down. For this, the data could be downloaded manually, or the current `surveys` data frame could be saved to disk as a CSV-file with `to_csv()`.

# In[10]:

surveys.to_csv('surveys.csv', index=False)
# `index=False` because the index (the row names) was generated automatically when pandas opened
# the file, so this information does not need to be saved

# Since the data is now saved locally, the next time this Notebook is opened, it could be loaded from the local path instead of being downloaded from the URL.

# In[11]:

surveys = pd.read_csv('surveys.csv')
surveys.head()

# ### Indexing and subsetting data frames
#
# The survey data frame has rows and columns (it has 2 dimensions). To extract specific data from it (also referred to as "subsetting"), columns can be selected by their name. The Jupyter Notebook (technically, the underlying IPython interpreter) knows about the columns in the data frame, so tab autocompletion can be used to get the correct column name.

# In[12]:

surveys['species_id'].head()

# The name of the column is not shown, since there is only one. Remember that the numbers on the left are just the index of the data frame, which was added by `pandas` upon importing the data.
#
# Another syntax that is often used to specify column names is the dot syntax.

# In[13]:

surveys.species_id.head()

# Using brackets is clearer and also allows for passing multiple columns as a list, so this tutorial will stick to that.

# In[14]:

surveys[['species_id', 'record_id']].head()

# The output is displayed a bit differently this time. The reason is that in the earlier cells, where the result only had one column (`species_id`), `pandas` technically returned a `Series`, not a `DataFrame`. This can be confirmed by using `type()` as previously.

# In[15]:

type(surveys['species_id'].head())

# In[16]:

type(surveys[['species_id', 'record_id']].head())

# So, every individual column is actually a `Series`, and together they constitute a `DataFrame`. This introductory tutorial will not make any further distinction between a `Series` and a `DataFrame`, and many of the analysis techniques used here apply to both.

# Selecting with single brackets (`[]`) is a shortcut for common operations, such as selecting columns by labels as above. For more flexible and robust row and column selection, the more verbose `loc[<rows>, <columns>]` (location) syntax is used.

# In[17]:

surveys.loc[[0, 2, 4], ['species', 'record_id']]

# Although methods usually have trailing parentheses, square brackets are used with `loc[]` to stay
# consistent with the indexing with square brackets in general in Python (e.g. for lists and NumPy arrays).

# A single number can be selected, which returns that value (here, an integer) rather than a data frame or series with just one value.

# In[18]:

surveys.loc[4, 'record_id']

# If the column argument is left out, all columns are selected.

# In[19]:

surveys.loc[[3, 4]]
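# Note that selecting a single row label without a surrounding list returns a
# `Series` (with the column names as its index), mirroring the single-column
# case shown earlier:

surveys.loc[4]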
# To select all rows, but only a subset of columns, the colon character (`:`) can be used.

# In[20]:

surveys.loc[:, ['month', 'day']].shape  # show the size of the data frame

# It is also possible to select slices of row and column labels.

# In[21]:

surveys.loc[2:4, 'record_id':'day']

# It is important to realize that `loc[]` selects rows and columns by their *labels*, and that slicing is therefore inclusive of *both* the start and the end. To instead select by row or column *position*, use `iloc[]` (integer location).

# In[22]:

surveys.iloc[[2, 3, 4], [0, 1, 2]]

# The index of `surveys` consists of consecutive integers, so in this case selecting from the index by labels or positions will look the same. As will be shown later, an index could also consist of text names, just like the columns.
#
# While selecting a slice by label is inclusive of both the start and the end, selecting a slice by position is inclusive of the start but exclusive of the end position, consistent with how position slicing works for other Python objects, such as lists and tuples.

# In[23]:

surveys.iloc[2:5, :3]

# Selecting slices of row positions is a common operation, and has thus been given a shortcut syntax with single brackets.

# In[24]:

surveys[2:5]

# >#### Challenge
# >
# >1. Extract the 200th and 201st row of the `surveys` dataset and assign the resulting data frame to a new variable name (`surveys_200_201`). Remember that Python indexing starts at 0!
# >
# >2. How can you get the same result as from `surveys.head()` by using row slices instead of the `head()` method?
# >
# >3. There are at least three distinct ways to extract the last row of the data frame. How many can you come up with?

# The `describe()` method was mentioned above as a way of retrieving summary statistics of a data frame. Together with `info()` and `head()`, this is often a good place to start exploratory data analysis, as it gives a nice overview of the numeric variables in the data set.

# In[25]:

surveys.describe()

# A common next step would be to plot the data to explore relationships between different variables, but before getting into plotting, it is beneficial to elaborate on the data frame object and several of its common operations.
#
# An often desired outcome is to select a subset of rows matching a criterion, e.g. the observations with a weight under 5 grams. To do this, the "less than" comparison operator that was introduced previously can be used.

# In[26]:

surveys['weight'] < 5

# The result is a boolean array with one value for every row in the data frame, indicating whether it is `True` or `False` that this row has a value below 5 in the weight column. This boolean array can be used to select only those rows from the data frame that meet the specified condition.

# In[27]:

surveys[surveys['weight'] < 5]

# As before, this can be combined with selection of a particular set of columns.

# In[28]:

surveys.loc[surveys['weight'] < 5, ['weight', 'species']]

# A single expression can also be used to filter for several criteria, either matching *all* criteria (`&`) or *any* criterion (`|`). These special operators are used instead of `and` and `or` to make sure that the comparison occurs for each row in the data frame. Parentheses are added to indicate the priority of the comparisons.

# In[29]:

# AND = &
surveys.loc[(surveys['taxa'] == 'Rodent') & (surveys['sex'] == 'F'), ['taxa', 'sex']].head()

# To increase readability, these statements can be put on multiple rows.
# Anything that is within a parenthesis or bracket in Python can be continued on the next row. When inside a bracket or parenthesis, the indentation is not significant to the Python interpreter, but it is still recommended to include it in order to make the code more readable.

# In[30]:

surveys.loc[(surveys['taxa'] == 'Rodent') &
            (surveys['sex'] == 'F'),
            ['taxa', 'sex']].head()

# With the `|` operator, rows matching either of the supplied criteria are returned.

# In[31]:

# OR = |
surveys.loc[(surveys['species'] == 'clarki') |
            (surveys['species'] == 'leucophrys'), 'species']

# >#### Challenge
# >
# >Subset the `surveys` data to include individuals collected before
# >1995 and retain only the columns `year`, `sex`, and `weight`.

# ### Creating new columns
#
# A frequent operation when working with data is to create new columns based on the values in existing columns, for example to do unit conversions or to find the ratio of values in two columns. To create a new column of the weight in kg instead of in grams:

# In[32]:

surveys['weight_kg'] = surveys['weight'] / 1000
surveys.head(10)

# The first few rows of the output are full of `NA`s. To remove those, use the `dropna()` method of the data frame.

# In[33]:

surveys.dropna().head(10)

# By default, `.dropna()` removes all rows that have an NA value in any of the columns. There are parameters that control how the rows are dropped and which columns should be searched for NAs.
#
# A common alternative to removing rows containing `NA` values is to fill in the values with e.g. the mean of all observations or the previous non-NA value. This can be done with the `fillna()` method.

# In[34]:

surveys['hindfoot_length'].head()

# In[35]:

# Fill with mean value
fill_value = surveys['hindfoot_length'].mean()
surveys['hindfoot_length'].fillna(fill_value).head()

# In[36]:

# Fill with previous non-null value
surveys['hindfoot_length'].fillna(method='ffill').head()

# Whether to use `dropna()` or `fillna()` depends on the data set and the purpose of the analysis. It is also possible to interpolate missing values from neighboring values via the `interpolate()` method, which can be useful when working with time series.

# >#### Challenge
# >
# >1. Create a new data frame from the `surveys` data that contains only the `species_id` and `hindfoot_length` columns and no NA values.
# >2. Add a column to this new data frame called `hindfoot_half`, which contains values that are half the `hindfoot_length` values. Keep only the observations that have a value less than 30 in `hindfoot_half`.
# >3. The final data frame should have 31,436 rows and 3 columns. How can you check that your data frame meets these criteria?

# ## Split-apply-combine techniques in pandas
#
# Many data analysis tasks can be approached using the *split-apply-combine* paradigm: split the data into groups, apply some analysis to each group, and then combine the results.
#
# `pandas` facilitates this workflow through the use of `groupby()` to split data, and summary/aggregation functions such as `mean()`, which collapse each group into a single-row summary of that group. The arguments to `groupby()` are the column names that contain the *categorical* variables by which summary statistics should be calculated. To start, compute the mean `weight` by species.
#
# ![Image credit Jake VanderPlas](img/split-apply-combine.png)
#
# *Image credit Jake VanderPlas*
#
# ### Using `mean()` to summarize categorical data
#
# The `.mean()` method can be used to calculate the average of each group. When the mean is computed, the default behavior is to ignore NA values, so they only need to be dropped if they should be excluded from the displayed output.
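# Before applying this to the survey data, here is a minimal sketch of the
# paradigm on a small made-up data frame (the names `toy`, `key`, and `value`
# are invented for this illustration): `groupby()` *splits* the rows into an
# 'a' group and a 'b' group, `mean()` is *applied* to each group, and the
# results are *combined* into a new series.

toy = pd.DataFrame({'key': ['a', 'b', 'a', 'b'],
                    'value': [1, 2, 3, 4]})
toy.groupby('key')['value'].mean()  # 'a' -> 2.0, 'b' -> 3.0

# The same pattern applied to the survey data: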
# In[37]:

surveys.groupby('species')['weight'].mean()

# The output here is a series that is indexed with the grouped variable (the species), and the single column contains the result of the aggregation (the mean weight). Since there are so many species, a subset of them will be selected to fit the output within the screen and facilitate instruction. This could be done by manually typing each OR (`|`) condition.

# In[38]:

surveys.loc[(surveys['species'] == 'albigula') |
            (surveys['species'] == 'ordii') |
            (surveys['species'] == 'flavus') |
            (surveys['species'] == 'torridus')].shape

# Comparing this number with the number of rows in the original data frame shows that it was filtered successfully.

# In[39]:

surveys.shape

# However, it is rather tedious to type out the species names by hand and to do one comparison per species. Instead, the names of the species of interest can be put in a list and passed to the `isin()` method.

# In[40]:

species_to_keep = ['albigula', 'ordii', 'flavus', 'torridus']
surveys_sub = surveys.loc[surveys['species'].isin(species_to_keep)]
surveys_sub.shape

# In[41]:

avg_wt_spec = surveys_sub.groupby('species')['weight'].mean()
avg_wt_spec

# Individual species can be selected from the resulting series using `loc[]`, just as previously.

# In[42]:

avg_wt_spec.loc[['ordii', 'albigula']]

# Single `[]` without `.loc[]` could also be used to extract rows by label from a series, but since single `[]` extracts columns from a data frame, it can be easier to be explicit and use `.loc[]` instead of keeping track of whether the returned object is a series or a data frame.
#
# Groups can also be created from multiple columns, e.g. it could be interesting to see the difference in weight between males and females within each species.

# In[43]:

avg_wt_spec_sex = surveys_sub.groupby(['species', 'sex'])['weight'].mean()
avg_wt_spec_sex

# The returned series has an index that is a combination of the columns `species` and `sex`, referred to as a `MultiIndex`. The same syntax as previously can be used to select rows on the species level.

# In[44]:

avg_wt_spec_sex.loc[['ordii', 'albigula']]

# To select specific values from both levels of the `MultiIndex`, a list of tuples can be passed to `loc[]`.

# In[45]:

avg_wt_spec_sex.loc[[('ordii', 'F'), ('albigula', 'M')]]

# To select only the female observations from all species, the `xs()` (cross section) method can be used.

# In[46]:

avg_wt_spec_sex.xs('F', level='sex')

# The names and values of the index levels can be seen by inspecting the index object.

# In[47]:

avg_wt_spec_sex.index

# Although MultiIndexes offer succinct and fast ways to access data, they also require memorization of additional syntax and are strictly speaking not essential unless speed is of particular concern. It can therefore be easier to reset the index, so that all values are stored in columns.

# In[48]:

avg_wt_spec_sex_res = avg_wt_spec_sex.reset_index()
avg_wt_spec_sex_res

# After resetting the index, the same comparison syntax introduced earlier can be used instead of `xs()` or passing lists of tuples to `loc[]`.

# In[49]:

female_weights = avg_wt_spec_sex_res.loc[avg_wt_spec_sex_res['sex'] == 'F']
female_weights
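# As an aside, a similar flat result can be obtained in a single step by passing
# `as_index=False` to `groupby()`, which keeps the grouping variables as regular
# columns instead of as index levels (a quick sketch repeating the aggregation
# from above):

surveys_sub.groupby(['species', 'sex'], as_index=False)['weight'].mean()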
# `reset_index()` grants the freedom of not having to work with indexes, but it is still worth keeping in mind that selecting on an index level with `xs()` can be orders of magnitude faster than using boolean comparisons (on large data frames).
#
# The opposite operation (creating an index) can be performed with `set_index()` on any column (or combination of columns) that creates an index with unique values.

# In[50]:

female_weights.set_index(['species', 'sex'])

# ### Multiple aggregations on grouped data
#
# Since the same grouped data frame will be used in multiple code chunks below, it can be assigned to a new variable instead of typing out the grouping expression each time.

# In[51]:

grouped_surveys = surveys_sub.groupby(['species', 'sex'])
grouped_surveys['weight'].mean()

# Other aggregation methods, such as the standard deviation, are called with the same syntax.

# In[52]:

grouped_surveys['weight'].std()

# Instead of using the `mean()` method, the more general `agg()` method could be called to aggregate (or summarize) by *any* existing aggregation function. The equivalent of the `mean()` method would be to call `agg()` and specify `'mean'`.

# In[53]:

grouped_surveys['weight'].agg('mean')

# This general approach is more flexible and powerful, since multiple aggregation functions can be applied in the same line of code by passing them as a list to `agg()`. For instance, the standard deviation and the mean could be computed in the same call.

# In[54]:

grouped_surveys['weight'].agg(['mean', 'std'])

# The returned output is in this case a data frame, and the `MultiIndex` is indicated in bold font.
#
# By passing a dictionary to `.agg()` it is possible to apply different aggregations to different columns. Long code statements can be broken down into multiple lines if they are enclosed by parentheses, brackets, or braces, something that will be described in detail later.

# In[55]:

grouped_surveys[['weight', 'hindfoot_length']].agg(
    {'weight': 'sum',
     'hindfoot_length': ['min', 'max']
    }
)

# There are plenty of aggregation methods available in pandas (e.g. `sem`, `mad`, `sum`). The most common ones are listed in [the table at the end of this subsection in the documentation](https://pandas.pydata.org/pandas-docs/stable/groupby.html#aggregation), and all can be found via tab-completion on the data frame (`surveys.` + Tab).
#
# Even if a function is not part of the `pandas` library, it can be passed to `agg()`.

# In[56]:

import numpy as np

grouped_surveys['weight'].agg(np.mean)

# Any function can be passed like this, including user-created functions.
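# For example, a custom function that computes the range of values within each
# group (`weight_range` is a small helper written here for illustration):

def weight_range(weights):
    """Return the difference between the largest and smallest value."""
    return weights.max() - weights.min()

grouped_surveys['weight'].agg(weight_range)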
# > #### Challenge
# >
# > 1. Use `groupby()` and `agg()` to find the mean, min, and max hindfoot
# > length for each species.

# ### Using `size()` to summarize categorical data
#
# When working with data, it is common to want to know the number of observations present for each categorical variable. For this, `pandas` provides the `size()` method. For example, to group by 'taxa' and find the number of observations for each 'taxa':

# In[57]:

# Note that the original full-length data frame is used here again
surveys.groupby('taxa').size()

# `size()` can also be used when grouping on multiple variables.

# In[58]:

surveys.groupby(['taxa', 'plot_type']).size()

# If there are many groups, `size()` is not that useful on its own. For example, it is difficult to quickly find the five most abundant species among the observations.

# In[59]:

surveys.groupby('species').size()

# Since there are many rows in this output, it would be beneficial to sort the table values and display the most abundant species first. This is easy to do with the `sort_values()` method.

# In[60]:

surveys.groupby('species').size().sort_values()

# That's better, but it would be even more helpful to display the most abundant species on top. In other words, the output should be arranged in descending order.

# In[61]:

surveys.groupby('species').size().sort_values(ascending=False).head(5)

# Looks good! By now, the code statement has grown quite long because many methods have been *chained* together. It can be tricky to keep track of what is going on in long method chains. To make the code more readable, it can be broken up into multiple lines by adding surrounding parentheses.

# In[62]:

(surveys
     .groupby('species')
     .size()
     .sort_values(ascending=False)
     .head(5)
)

# This looks neater and makes long method chains easier to read. There is no absolute rule for when to break code into multiple lines, but always try to write code that is easy for collaborators (your most common collaborator is a future version of yourself!) to understand.
#
# `pandas` actually has a convenience method for returning the top five results, so the values don't need to be sorted explicitly.

# In[63]:

(surveys
     .groupby(['species'])
     .size()
     .nlargest()  # the default is 5
)

# To include more attributes about these species, add columns to `groupby()`.

# In[64]:

(surveys
     .groupby(['species', 'taxa', 'genus'])
     .size()
     .nlargest()
)

# >#### Challenge
# >
# >1. How many individuals were caught in each `plot_type` surveyed?
# >
# >2. Calculate the number of animals trapped per plot type for each year. Extract the combinations of year and plot type that had the three highest numbers of observations (e.g. "1998-Control").

# ## Merging and concatenating data frames
#
# Commonly, several data frames (e.g. from different CSV-files) need to be combined into one big data frame. Either rows or columns from different data frames can be combined, resulting in a data frame that is longer or wider than the originals, respectively.
#
# First, the syntax for subsetting data frames will be used to create the partial data frames that will later be joined together.

# In[65]:

wt_avg = surveys_sub.groupby('species')['weight'].mean().reset_index()
wt_avg

# In[66]:

wt_avg_top = wt_avg[:2]
wt_avg_top

# In[67]:

wt_avg_bot = wt_avg[2:]
wt_avg_bot

# The `pandas` function `concat()` can be used to join the rows from these two data frames into one longer data frame.

# In[68]:

wt_avg_full = pd.concat([wt_avg_top, wt_avg_bot])
wt_avg_full

# The first argument to `concat()` is a list of the data frames to concatenate, and it can consist of more than two data frames.

# In[69]:

frames = [wt_avg_top, wt_avg_bot, wt_avg_top]
pd.concat(frames)

# Concatenating rows from separate data sets is very useful when new data has been recorded and needs to be added to the previously existing data. `concat()` can also be used to add columns from different data frames together, by passing `axis=1`; see the sketch below.
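# A quick sketch of column-wise concatenation: with `axis=1`, the frames are
# aligned on their index and placed side by side (here the same small frame is
# simply used twice for illustration).

pd.concat([wt_avg_top, wt_avg_top], axis=1)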
# For combining columns, however, the `merge()` function provides a more powerful interface. Its syntax is inspired by how these operations are performed on (SQL-like) databases.
#
# Since merging data frames joins columns together, let's first create a new data frame with a column for the median weight per species, to be merged with the mean weight data frame.

# In[70]:

wt_med = surveys_sub.groupby('species')['weight'].median().reset_index()
wt_med

# Merging only works with two data frames at a time, and they are specified as the first and second argument to `merge()`. By default, all columns with a common name in both data frames will be used to figure out how the rows should be added together. Here, only the `species` column (and not the `weight` column) should be used to determine how to merge the data frames, which can be specified with the `on` parameter.

# In[71]:

merged_df = pd.merge(wt_avg, wt_med, on='species')
merged_df

# Since the column `weight` exists in both of the individual data frames, `pandas` appends `_x` and `_y` to differentiate between the columns in the final data frame. These columns could be renamed to better indicate what their values represent.

# In[72]:

merged_df.rename(columns={'weight_x': 'mean_wt', 'weight_y': 'median_wt'})

# For clarity, the `rename()` method could also be used on the two individual data frames before merging.
#
# It is important to understand that the `merge()` function uses the specified column to match which rows to merge from the different data frames, so the order of the values in each data frame does not matter. Likewise, `merge()` can handle species that occur in only one of the two data frames.

# In[73]:

wt_avg.loc[2, 'species'] = 't_rex'
wt_avg

# In[74]:

pd.merge(wt_avg, wt_med, on='species')

# By default, if the column(s) to join on contain values that are not present in both data frames, rows with those values will not be included in the resulting data frame. This default behavior is referred to as an 'inner' join. To include the unmatched values, an 'outer' join should be performed instead.

# In[75]:

pd.merge(wt_avg, wt_med, on='species', how='outer')

# The `NaN`s are introduced since these rows were only present in one of the joined data frames, and thus only have a value for one of the columns `weight_x` or `weight_y`.
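# Besides `'inner'` and `'outer'`, the `how` parameter also accepts `'left'`
# and `'right'`, which keep all rows from only one of the two data frames.
# A brief sketch: here every species in `wt_avg` is kept, so 't_rex' appears
# with a `NaN` median weight.

pd.merge(wt_avg, wt_med, on='species', how='left')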