#!/usr/bin/env python
# coding: utf-8

# # Introduction to programming and data analyses in Python
#
# ## Lesson preamble
#
# ### Learning objectives
#
# - Define the following data types in Python: strings, integers, and floats.
# - Perform mathematical operations in Python using basic operators.
# - Define the following as it relates to Python: lists, tuples, and dictionaries.
# - Describe what a data frame is.
# - Load external data from a .csv file into a data frame with pandas.
# - Summarize the contents of a data frame with pandas.
# - Learn to use the data frame methods `loc[]`, `head()`, `info()`, and `describe()`, and the attributes `shape`, `columns`, and `index`.
#
# ### Lesson outline
#
# - Introduction to programming in Python (50 min)
# - Manipulating and analyzing data with pandas
#     - Data set background (10 min)
#     - What are data frames (15 min)
#     - Data wrangling with pandas (40 min)
#
# ---

# ### Operators
#
# Python can be used as a calculator, and mathematical calculations use familiar operators such as `+`, `-`, `/`, and `*`.

# In[1]:

2 + 2

# In[2]:

6 * 7

# In[3]:

4 / 3

# Text prefaced with a `#` is called a "comment". These are notes to people reading the code, and they are ignored by the Python interpreter.

# In[4]:

# `**` means "to the power of"
2 ** 3

# Values can be given a nickname. This is called assigning values to variables and is handy when the same value will be used multiple times. The assignment operator in Python is `=`.

# In[5]:

a = 5
a * 2

# A variable can be named almost anything. It is recommended to separate multiple words with underscores and to start the variable name with a letter, not a number or symbol.

# In[6]:

new_variable = 4
a - new_variable

# Variables can hold different types of data, not just numbers. For example, a sequence of characters surrounded by single or double quotation marks is called a string. In Python, it is intuitive to concatenate strings by adding them together:

# In[7]:

b = 'Hello'
c = 'universe'
b + c

# A space can be added to separate the words.

# In[8]:

b + ' ' + c

# To find out what type a variable is, the built-in function `type()` can be used. In essence, a function can be passed input values, follows a set of instructions for how to operate on the input, and then outputs the result. This is analogous to following a recipe: the ingredients are the input, the recipe specifies the set of instructions, and the output is the finished dish.

# In[9]:

type(a)

# `int` stands for "integer", which is the type of any number without a decimal component.
#
# To be reminded of the value of `a`, the variable name can be typed into an empty code cell.

# In[10]:

a

# A code cell will only output its last value. To see more than one value per code cell, the built-in function `print()` can be used. When using Python from an interface that is not interactive like the JupyterLab Notebook, such as when executing a set of Python instructions together as a script, the function `print()` is often the preferred way of displaying output.

# In[11]:

print(a)
type(a)

# Numbers with a decimal component are referred to as floats.

# In[12]:

type(3.14)

# Text is of the type `str`, which stands for "string". Strings hold sequences of characters, which can be letters, numbers, punctuation, or more exotic forms of text (even emoji!).

# In[13]:

print(type(b))
b

# The output from `type()` is formatted slightly differently when it is printed.
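# As a small aside (a minimal sketch that is not part of the original lesson flow), note that a number has to be explicitly converted to a string with the built-in `str()` function before it can be concatenated with other strings; trying `b + a` directly would raise an error because the types do not match.

# In[ ]:

# `str()` converts the integer stored in `a` into the string '5',
# so it can be concatenated with the other strings
b + ' ' + c + ' ' + str(a)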
# Python also allows the use of comparison and logic operators (`<`, `>`, `==`, `!=`, `<=`, `>=`, `and`, `or`, `not`), which will return either `True` or `False`.

# In[14]:

3 > 4

# `not` reverses the outcome of a comparison.

# In[15]:

not 3 > 4

# `and` checks if both comparisons are `True`.

# In[16]:

3 > 4 and 5 > 1

# `or` checks if *at least* one of the comparisons is `True`.

# In[17]:

3 > 4 or 5 > 1

# The type of the resulting `True` or `False` value is called "boolean".

# In[18]:

type(True)

# Boolean comparisons like these are important when extracting specific values from a larger set of values. This use case will be explored in detail later in this material.
#
# Another common use of boolean comparisons is in conditional statements, where the code after the comparison is only executed if the comparison is `True`.

# In[19]:

if a == 4:
    print('a is 4')
else:
    print('a is not 4')

# In[20]:

a

# Note that the second line in the example above is indented. Indentation is very important in Python, and the Python interpreter uses it to understand that the code in the indented block will only be executed if the conditional statement above is `True`.

# ## Sequential types: Lists and Tuples
#
# ### Lists
#
# **Lists** are a common data structure to hold an ordered sequence of elements. Each element can be accessed by an index. Note that Python indices start with 0 instead of 1.

# In[21]:

numbers = [1, 2, 3]
numbers[0]

# You can index from the end of the list by prefixing with a minus sign.

# In[22]:

numbers[-1]

# A loop can be used to access the elements in a list or other Python data structure one at a time.

# In[23]:

for num in numbers:
    print(num)

# To add elements to the end of a list, we can use the `append` method. Methods are a way to interact with an object (a list, for example). We can invoke a method using the dot `.` followed by the method name and a list of arguments in parentheses. Let's look at an example using `append`:

# In[24]:

numbers.append(4)
numbers

# To find out what methods are available for an object, we can use the built-in `?` command in the Notebook.

# In[25]:

get_ipython().run_line_magic('pinfo', 'numbers')

# ### Tuples
#
# A tuple is similar to a list in that it's an ordered sequence of elements. However, tuples cannot be changed once created (they are "immutable"). Tuples are created by separating values with a comma (and for clarity these are commonly surrounded by parentheses).

# In[26]:

# Tuples use parentheses
a_tuple = (1, 2, 3)
another_tuple = ('blue', 'green', 'red')

# > ## Challenge - Tuples
# > 1. What happens when you type `a_tuple[2] = 5` vs `numbers[1] = 5`?
# > 2. Type `type(a_tuple)` into Python - what is the object type?
#
# ## Dictionaries
#
# A **dictionary** is a container that holds pairs of objects - keys and values.

# In[27]:

translation = {'one': 1, 'two': 2}
translation['one']

# Dictionaries work a lot like lists - except that they are indexed with *keys*. Think about a key as a unique identifier for a set of values in the dictionary. Keys can only have particular types - they have to be "hashable". Strings and numeric types are acceptable, but lists aren't.

# In[28]:

rev = {1: 'one', 2: 'two'}
rev[1]

# In[29]:

bad = {[1, 2, 3]: 3}

# This generates an error message, commonly referred to as a "traceback". This message pinpoints which line in the code cell resulted in an error when it was executed, by pointing at it with an arrow (`---->`). This is helpful in figuring out what went wrong.
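# Because tuples are immutable, they are hashable and can therefore be used as dictionary keys, unlike lists. A minimal sketch (the keys and values below are made up purely for illustration):

# In[ ]:

# a tuple works as a dictionary key because it cannot be changed after creation
locations = {(0, 0): 'origin', (1, 2): 'sample site'}
locations[(1, 2)]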
# To add an item to the dictionary, a value is assigned to a new dictionary key.

# In[30]:

rev = {1: 'one', 2: 'two'}
rev[3] = 'three'
rev

# Using loops with dictionaries iterates over the keys by default.

# In[31]:

for key in rev:
    print(key, rev[key])

# > ## Challenge - Can you do reassignment in a dictionary?
# >
# > 1. First check what `rev` is right now (remember `rev` is the name of our dictionary).
# >
# > 2. Try to reassign the second value (in the *key value pair*) so that it no longer reads "two" but instead reads "apple-sauce".
# >
# > 3. Now display `rev` again to see if it has changed.
#
# It is important to note that dictionaries are "unordered" and do not remember the sequence of their items (i.e. the order in which key:value pairs were added to the dictionary). Because of this, the order in which items are returned from loops over dictionaries might appear random and can even change with time.
#
# ## Functions
#
# Defining a section of code as a function in Python is done using the `def` keyword. For example, a function that takes two arguments and returns their sum can be defined as:

# In[32]:

def add_function(a, b):
    """This function adds two values together"""
    result = a + b
    return result

z = add_function(20, 22)
z

# Just as previously, the `?` can be used to get help for the function.

# In[33]:

get_ipython().run_line_magic('pinfo', 'add_function')

# The string between the `"""` is what is shown in the help, so it is good to write a helpful message there. It is possible to see the entire source code of the function by using a double question mark (`??`) instead (this can be quite complex for complicated functions).

# In[34]:

get_ipython().run_line_magic('pinfo2', 'add_function')

# To access additional functionality in a spreadsheet program, you need to click the menu and select the tool you want to use. All charts are in one menu, text layout tools in another, data analysis tools in a third, and so on. Programming languages such as Python have so many tools and functions that they would not fit in a menu. Instead of clicking File -> Open and choosing the file, you would type something similar to `file.open('')` in a programming language. Don't worry if you forget the exact expression; it is often enough to just type the first few letters and then hit Tab to show the available options, more on that later.
#
# ### Packages
#
# Since there are so many specialized tools and functions available in Python, it is unnecessary to include all of them with the basics that are loaded by default when you start the programming language (it would be as if your new phone came with every single app preinstalled). Instead, more advanced functionality is grouped into separate packages, which can be accessed by typing `import <package_name>` in Python. You can think of this as telling the program which menu items you want to use (similar to how Excel hides the Developer menu by default since most people rarely use it, and you need to activate it in the settings if you want to access its functionality). Some packages need to be downloaded before they can be used, just like downloading an add-on to a browser or mobile phone.
#
# Just like in spreadsheet software menus, there are lots of different tools within each Python package. For example, if I want to use numerical Python functions, I can import the **num**erical **py**thon module, [`numpy`](http://www.numpy.org/). I can then access any of its functions by writing `numpy.<function_name>`.
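# As a quick illustration of the import syntax described above (a minimal sketch; it assumes `numpy` is installed, which it normally is alongside JupyterLab and pandas), the package is imported once and its functions are then accessed with the dot notation:

# In[ ]:

import numpy

# compute the arithmetic mean of a list of numbers with numpy's `mean` function
numpy.mean([1, 2, 3, 4])

# Hitting Tab after typing `numpy.` in a code cell lists the available functions, much like opening a menu.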
# ## Manipulating and analyzing data with pandas
#
# ### Dataset background
#
# Today, we will be working with real data from a longitudinal study of species abundance in the Chihuahuan desert ecosystem near Portal, Arizona, USA. This study includes observations of plants, ants, and rodents from 1977 - 2002, and has been used in over 100 publications. More information is available in [the abstract of this paper from 2009](http://onlinelibrary.wiley.com/doi/10.1890/08-1222.1/full). There are several datasets available related to this study, and we will be working with datasets that have been preprocessed by [Data Carpentry](https://www.datacarpentry.org) to facilitate teaching. These are made available online as *The Portal Project Teaching Database*, both at the [Data Carpentry website](http://www.datacarpentry.org/ecology-workshop/data/), and on [Figshare](https://figshare.com/articles/Portal_Project_Teaching_Database/1314459/6). Figshare is a great place to openly publish data, code, figures, and more, to make them available for other researchers and to communicate findings that are not part of a longer paper.
#
# #### Presentation of the survey data
#
# We are studying the species and weight of animals caught in plots in our study area. The dataset is stored as a comma separated value (CSV) file. Each row holds information for a single animal, and the columns represent:
#
# | Column          | Description                        |
# |-----------------|------------------------------------|
# | record_id       | unique id for the observation      |
# | month           | month of observation               |
# | day             | day of observation                 |
# | year            | year of observation                |
# | plot_id         | ID of a particular plot            |
# | species_id      | 2-letter code                      |
# | sex             | sex of animal ("M", "F")           |
# | hindfoot_length | length of the hindfoot in mm       |
# | weight          | weight of the animal in grams      |
# | genus           | genus of animal                    |
# | species         | species of animal                  |
# | taxa            | e.g. rodent, reptile, bird, rabbit |
# | plot_type       | type of plot                       |
#
# To read the data into Python, we are going to use a function called `read_csv`. This function is contained in a Python package called [`pandas`](https://pandas.pydata.org/). As mentioned previously, Python packages are a bit like browser extensions; they are not essential, but can provide nifty functionality. To use a package, it first needs to be imported.

# In[35]:

# pandas is given the nickname `pd`
import pandas as pd

# `pandas` can read CSV files saved on the computer or directly from a URL.

# In[36]:

surveys = pd.read_csv('https://ndownloader.figshare.com/files/2292169')

# To view the result, type `surveys` in a cell and run it, just as when viewing the content of any variable in Python.

# In[37]:

surveys

# This is how a data frame is displayed in the JupyterLab Notebook. Although the data frame itself just consists of the values, the Notebook knows that this is a data frame and displays it in a nice tabular format (by adding HTML decorators). It also adds some cosmetic conveniences, such as the bold font for the column and row names, the alternating grey and white zebra stripes for the rows, and highlighting of the row the mouse pointer moves over.
#
# ## What are data frames?
#
# A data frame is the representation of data in a tabular format, similar to how data is often arranged in spreadsheets. The data is rectangular, meaning that all rows have the same number of columns and all columns have the same number of rows.
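# To make the tabular structure concrete, here is a minimal sketch of a data frame built by hand from a dictionary (the column values below are made up for illustration); as the next paragraph notes, data frames are usually read from a file instead, as with `read_csv()` above.

# In[ ]:

# each dictionary key becomes a column name and each list becomes that column's values
example = pd.DataFrame({'species_id': ['NL', 'DM', 'PF'],
                        'weight': [218, 44, 7]})
example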
# Data frames are the *de facto* data structure for most tabular data, and what we use for statistics and plotting. A data frame can be created by hand, but most commonly they are generated by an input function such as `read_csv()`, i.e. when importing spreadsheets from your hard drive (or the web).
#
# As can be seen in the `surveys` output above, the default is to display the first and last 30 rows and truncate everything in between, as indicated by the ellipsis (`...`). Although it is truncated, this output is still quite space consuming. To glance at how the data frame looks, it is sufficient to display only the top (the first 5 lines) using the `head()` method.

# In[38]:

surveys.head()

# Methods are very similar to functions; the main difference is that they belong to an object (above, the method `head()` belongs to the data frame `surveys`). Methods operate on the object they belong to, which is why we can call the method with empty parentheses, without any arguments. Compare this with the function `type()` that was introduced previously.

# In[39]:

type(surveys)

# Here, the `surveys` variable is explicitly passed as an argument to `type()`. An immediately tangible advantage of methods is that they simplify tab completion: just type the name of the data frame, a period, and then hit Tab to see all the relevant methods for that data frame, instead of fumbling around with all the available functions in Python (there are quite a few!) and figuring out which ones operate on data frames and which do not. Methods also facilitate readability when chaining many operations together, which will be shown in detail later.
#
# The columns in a data frame can contain data of different types, e.g. integers, floats, and objects (which include strings, lists, dictionaries, and more). General information about the data frame (including the column data types) can be obtained with the `info()` method.

# In[40]:

surveys.info()

# The information includes the total number of rows and columns, the number of non-null observations, the column data types, and the memory (RAM) usage. The number of non-null observations is not the same for all columns, which means that some columns contain null (or NA) values, representing missing information.
#
# After reading the data into a data frame, `head()` and `info()` are two of the most useful methods to get an idea of the structure of this data frame. There are many additional methods and attributes that can facilitate the understanding of what a data frame contains:
#
# - Size:
#     - `surveys.shape` - a tuple with the number of rows as the first element and the number of columns as the second element
#     - `surveys.shape[0]` - the number of rows
#     - `surveys.shape[1]` - the number of columns
#
# - Content:
#     - `surveys.head()` - shows the first 5 rows
#     - `surveys.tail()` - shows the last 5 rows
#
# - Names:
#     - `surveys.columns` - returns the names of the columns (also called variable names)
#     - `surveys.index` - returns the names of the rows (referred to as the index in pandas)
#
# - Summary:
#     - `surveys.info()` - column names and data types, number of non-null observations, and memory consumption
#     - `surveys.describe()` - summary statistics for each numerical column
#
# All methods end with parentheses. The words that do not have a trailing parenthesis are called attributes and hold a value that has been computed earlier; think of them as variables that belong to the object.
# When an attribute is accessed, it just returns its value, like a variable would. When a method is called, it first performs a computation and then returns the resulting value. For example, every time pandas creates a data frame, the number of rows and columns is computed and stored in the `shape` attribute, since it is very common to access this information and it would be a waste of time to compute it every time it is needed.
#
# > #### Challenge
# >
# > Based on the output of `surveys.info()`, can you answer the following questions?
# >
# > * What is the class of the object `surveys`?
# > * How many rows and how many columns are in this object?
# > * Why is there not the same number of rows (observations) for each column?

# ### Saving data frames locally
#
# It is good practice to keep a copy of the data stored locally on your computer in case you want to do offline analyses, the online version of the file changes, or the file is taken down. For this, the data could be downloaded manually, or the current `surveys` data frame could be saved to disk as a CSV file with `to_csv()`.

# In[41]:

surveys.to_csv('surveys.csv', index=False)
# `index=False` because the index (the row names) was generated automatically when pandas opened
# the file, and this information does not need to be saved

# Since the data is now saved locally, the next time this Notebook is opened, it can be loaded from the local path instead of being downloaded from the URL.

# In[7]:

surveys = pd.read_csv('surveys.csv')
surveys.head()

# ### Indexing and subsetting data frames
#
# The survey data frame has rows and columns (it has 2 dimensions). To extract specific data from it (also referred to as "subsetting"), columns can be selected by name.

# In[43]:

surveys['species_id'].head()

# Using `head()` just to limit the output.
#
# The JupyterLab Notebook (technically, the underlying IPython interpreter) knows about the columns in the data frame, so tab autocompletion can be used to get the correct column name.
#
# Another syntax that is often used to specify column names is the dot notation, with a period between the data frame name and the column name.

# In[44]:

surveys.species_id.head()

# Using brackets is clearer and also allows for passing multiple columns as a list, so this tutorial will stick to that.

# In[45]:

surveys[['species_id', 'record_id']].head()

# The output is displayed a bit differently this time. The reason is that in the cell where only one column ("species_id") was selected, pandas technically returned a `Series`, not a `DataFrame`. This can be confirmed by using `type()` as previously.

# In[46]:

type(surveys['species_id'].head())

# In[47]:

type(surveys[['species_id', 'record_id']].head())

# So, every individual column is actually a `Series`, and together they constitute a `DataFrame`. This introductory tutorial will not make any further distinction between a `Series` and a `DataFrame`, and many of the analysis techniques used here apply to both series and data frames. To convert a `Series` to a `DataFrame`, the `to_frame()` method can be used.

# In[48]:

type(surveys['species_id'].head().to_frame())

# In[49]:

surveys['species_id'].head().to_frame()

# To select specific rows instead of columns, the `loc[]` (location) syntax can be used. This will select the row where the index name (the row name) equals 4. In this data frame the index names are unique, so specifying one name to `loc[]` will always return one row.

# In[50]:

surveys.loc[4]

# Square brackets are used instead of parentheses to stay consistent with the indexing with square brackets for Python lists and NumPy arrays.
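# As a brief preview of what the next paragraph describes, here is a minimal sketch of using a text index with `loc[]`. The `species_id` column is used purely for illustration, and 'NL' is one of the two-letter species codes in this dataset.

# In[ ]:

# make a copy of the data frame indexed by the two-letter species code instead of by row number
surveys_by_species = surveys.set_index('species_id')

# all rows whose index label is 'NL' (labels in this index are not unique, so several rows are returned)
surveys_by_species.loc['NL'].head()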
# The index of `surveys` consists of consecutive integers, but an index can also consist of text names, and `loc[]` can then be used to reference a named row via a string. If it is desired to reference rows by their index *position* rather than their index *name*, `iloc[]` can be used instead.
#
# `loc[]` can also select a range of rows, with a slice syntax similar to the one used for Python lists.

# In[51]:

surveys.loc[2:4]

# As a convenience, row slicing can also be done in brackets without `loc[]`. A combination of rows and columns can be selected as well.

# In[52]:

surveys.loc[2:4, 'record_id']

# In[53]:

surveys.loc[[2, 4, 7], ['species', 'record_id']]

# It is also possible to slice column names with `.loc`.

# In[9]:

surveys.loc[2:4, 'record_id':'plot_id']

# And column positions with `.iloc`.

# In[11]:

surveys.iloc[2:4, 1:5]

# > #### Challenge
# >
# > 1. Create a `DataFrame` (`surveys_200`) containing only the observations from the 200th row of the `surveys` dataset. Remember that Python indexing starts at 0!
# >
# > 2. Notice how `shape[0]` gave you the number of rows in a data frame?
# >
# >     * Use that number to pull out just that last row in the data frame.
# >     * Compare that with what you see as the last row using `tail()` to make sure it's meeting expectations.
# >     * Pull out that last row using `shape[0]` instead of the row number.
# >     * Create a new data frame object (`surveys_last`) from that last row.
# >
# > 3. What's a third way of getting the last row, apart from using `shape` or `tail`? Remember how to index lists from the end!

# The `describe()` method was mentioned above as a way of retrieving summary statistics of a data frame. Together with `info()` and `head()`, this is often a good place to start exploratory data analysis, as it gives a nice overview of the numeric variables in the data set.

# In[54]:

surveys.describe()

# A common next step would be to plot the data to explore relationships between different variables, but before getting into plotting, it is beneficial to elaborate on the data frame object and several of its common operations.
#
# An often desired outcome is to select a subset of rows matching some criteria, e.g. which observations have a weight under 5 grams. To do this, the "less than" comparison operator that was introduced previously can be used.

# In[55]:

surveys['weight'] < 5

# The result is a boolean array with one value for every row in the data frame, indicating whether it is `True` or `False` that this row has a value below 5 in the weight column. This boolean array can be used together with `loc[]` to select only those observations from the data frame!

# In[56]:

surveys.loc[surveys['weight'] < 5]

# As before, this can be combined with selection of a particular set of columns.

# In[57]:

surveys.loc[surveys['weight'] < 5, ['weight', 'species']]

# To prevent the output from running off the screen, `head()` can be used just like before.

# In[58]:

surveys.loc[surveys['weight'] < 5, ['weight', 'species']].head()

# A new object can be created from this smaller version of the data by assigning it to a new variable name.
# In[59]:

surveys_sml = surveys.loc[surveys['weight'] < 5, ['weight', 'species']]
surveys_sml.head()

# A single expression can also be used to filter for several criteria, either matching *all* criteria (`&`) or *any* criteria (`|`):

# In[60]:

# AND = &
surveys.loc[(surveys['taxa'] == 'Rodent') & (surveys['sex'] == 'F'), ['taxa', 'sex']].head()

# To increase readability, these statements can be put on multiple rows. Anything that is within parentheses or brackets in Python can be continued on the next row. When inside a bracket or parenthesis, the indentation is not significant to the Python interpreter, but it is still recommended to include it in order to make the code more readable.

# In[61]:

surveys.loc[(surveys['taxa'] == 'Rodent') &
            (surveys['sex'] == 'F'),
            ['taxa', 'sex']].head()

# With the `|` operator, rows matching either of the supplied criteria are returned.

# In[62]:

# OR = |
surveys.loc[(surveys['species'] == 'clarki') |
            (surveys['species'] == 'leucophrys'), 'species']

# > #### Challenge
# >
# > Subset the `surveys` data to include individuals collected before 1995 and retain only the columns `year`, `sex`, and `weight`.

# ### Creating new columns
#
# A frequent operation when working with data is to create new columns based on the values in existing columns, for example to do unit conversions or to find the ratio of values in two columns. To create a new column of the weight in kg instead of in grams:

# In[63]:

surveys['weight_kg'] = surveys['weight'] / 1000
surveys.head(10)

# The first few rows of the output are full of `NA`s. To remove those, use the `dropna()` method of the data frame.

# In[64]:

surveys.dropna().head(10)

# By default, `dropna()` removes all rows that have an NA value in any of the columns. There are parameters that control how the rows are dropped and which columns should be searched for NAs.
#
# > #### Challenge
# >
# > Create a new data frame from the `surveys` data that meets the following criteria: contains only the `species_id` and `hindfoot_length` columns, and a new column called `hindfoot_half` containing values that are half the `hindfoot_length` values. In this `hindfoot_half` column, there are no `NA`s and all values are less than 30.
# >
# > **Hint**: It is a good idea to break this into three steps!

# This concludes the introductory data analysis section.

# In[ ]: