#!/usr/bin/env python
# coding: utf-8

# # Data Analysis and Machine Learning Applications for Physicists
#
# *Material for a* [*University of Illinois*](http://illinois.edu) *course offered by the* [*Physics Department*](https://physics.illinois.edu). *This content is maintained on* [*GitHub*](https://github.com/illinois-mla) *and is distributed under a* [*BSD3 license*](https://opensource.org/licenses/BSD-3-Clause).
#
# [Table of contents](Contents.ipynb)

# In[1]:


get_ipython().run_line_magic('matplotlib', 'inline')
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
import numpy as np
import pandas as pd


# ## Locate Course Data Files
#
# During the [initial setup of your environment](Setup.ipynb), you installed the data for this course with the `mls` package, which also provides a function to locate it:

# In[2]:


from mls import locate_data


# In[3]:


locate_data('pong_data.hf5')


# Data files are stored in the industry-standard [binary HDF5](https://en.wikipedia.org/wiki/Hierarchical_Data_Format) and [text CSV](https://en.wikipedia.org/wiki/Comma-separated_values) formats, with extensions `.hf5` and `.csv`, respectively. HDF5 is more efficient for larger files but requires specialized software to read. CSV files are just plain text:

# In[4]:


with open(locate_data('line_data.csv')) as f:
    # Print the first 5 lines of the file.
    for lineno in range(5):
        print(f.readline(), end='')


# The first line specifies the names of each column ("feature") in the data file. Subsequent lines are the rows ("samples") of the data file, with values for each column separated by commas. Note that values might be missing (for example, at the end of the third row).

# ## Read Files with Pandas
#
# We will use the [Pandas package](https://pandas.pydata.org/) to read data files into DataFrame objects in memory. This will only be a quick introduction. For a deeper dive, start with [Data Manipulation with Pandas](https://jakevdp.github.io/PythonDataScienceHandbook/03.00-introduction-to-pandas.html) in the [Python Data Science Handbook](https://jakevdp.github.io/PythonDataScienceHandbook/index.html).

# In[5]:


pong_data = pd.read_hdf(locate_data('pong_data.hf5'))


# In[6]:


line_data = pd.read_csv(locate_data('line_data.csv'))


# You can think of a DataFrame as an enhanced 2D numpy array, with most of the same capabilities:

# In[7]:


line_data.shape


# Individual columns also behave like enhanced 1D numpy arrays:

# In[8]:


line_data['y'].shape


# In[9]:


line_data['x'].shape


# For a first look at some unknown data, start with some basic [summary statistics](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.aggregate.html):

# In[10]:


line_data.describe()


# Jot down a few things you notice about this data from this summary.
#
# - The values of x and y are symmetric about zero.
# - The values of x look uniformly distributed on [-1, +1], judging by the percentiles.
# - The value of dy is always > 0, as you might expect if it represents the "error on y".
# - The dy column is missing 150 entries.
#
# Summarize `pong_data` the same way. Does anything stick out?

# In[11]:


pong_data.describe()


# Some things that stick out from this summary are:
#
# - Mean and median values in the xn columns increase from left to right.
# - Column y0 is always zero, so not very informative.
# - Mean and median values in the yn columns increase from y0 to y4, then decrease through y9.

# In[12]:


# Add your solution here...
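# Earlier we noted that the `dy` column of `line_data` is missing 150 entries. As an optional cross-check (not part of the original notebook), you can count the missing values per column directly with the standard Pandas `isna` method; the same check works for `pong_data`:

# In[ ]:


# Count the missing (NA) entries in each column of line_data.
line_data.isna().sum()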
# ## Work with Subsets of Data
#
# A subset is specified by limiting the rows and/or columns. We have already seen how to pick out a single column, e.g. with `line_data['x']`.
#
# We can also pick out specific rows (for details on why we use `iloc` see [here](https://jakevdp.github.io/PythonDataScienceHandbook/03.02-data-indexing-and-selection.html#Indexers:-loc,-iloc,-and-ix)):

# In[13]:


line_data.iloc[:4]


# Note how the missing value in the CSV file is represented as NaN = "not a number". This is generally how Pandas handles any [data that is missing / invalid or otherwise not available (NA)](https://pandas.pydata.org/pandas-docs/stable/missing_data.html).
#
# We may not want to use any rows with missing data. Select the subset of useful data with:

# In[14]:


line_data_valid = line_data.dropna()


# In[15]:


line_data_valid[:4]


# You can also select rows using any logical test on their column values. For example, to select all rows with dy > 0.5 and y < 0:

# In[16]:


xpos = line_data[(line_data['dy'] > 0.5) & (line_data['y'] < 0)]
xpos[:4]


# Use `describe` to compare the summary statistics for rows with x < 0 and x >= 0. Do they make sense?

# In[17]:


line_data[line_data['x'] < 0].describe()


# In[18]:


line_data[line_data['x'] >= 0].describe()


# In[19]:


# Add your solution here...


# ## Extend Data with New Columns
#
# You can easily add new columns derived from existing columns, for example:

# In[20]:


line_data['yprediction'] = 1.2 * line_data['x'] - 0.1


# The new column is only in memory, and not automatically written back to the original file.
#
# **EXERCISE:** Add a new column for the "pull", defined as:
# $$
# y_{pull} \equiv \frac{y - y_{prediction}}{\delta y} \; .
# $$
# What would you expect the mean and standard deviation (std) of this new column to be if the prediction is accurate? What do the actual mean and std values indicate?

# In[21]:


line_data['ypull'] = (line_data['y'] - line_data['yprediction']) / line_data['dy']


# The mean should be close to zero if the prediction is unbiased. The standard deviation should be close to one if the prediction is unbiased and the errors are accurate. The actual values indicate that the prediction is unbiased, but the errors are overestimated (the std of the pull is below one).

# In[22]:


line_data.describe()


# In[23]:


# Add your solution here...


# ## Combine Data from Different Sources
#
# Most of the data files for this course are in data/targets pairs (for reasons that will be clear soon).
#
# Verify that the files `pong_data.hf5` and `pong_targets.hf5` have the same number of rows but different column names.

# In[24]:


pong_data = pd.read_hdf(locate_data('pong_data.hf5'))
pong_targets = pd.read_hdf(locate_data('pong_targets.hf5'))
print('#rows: {}, {}.'.format(len(pong_data), len(pong_targets)))
assert len(pong_data) == len(pong_targets)
print('data columns: {}.'.format(pong_data.columns.values))
print('targets columns: {}.'.format(pong_targets.columns.values))


# In[25]:


# Add your solution here...


# Use `pd.concat` to combine the (different) columns, matching row by row. Verify that your combined data has the expected number of rows and column names.

# In[26]:


pong_both = pd.concat([pong_data, pong_targets], axis='columns')


# In[27]:


print('#rows: {}'.format(len(pong_both)))
print('columns: {}.'.format(pong_both.columns.values))


# In[28]:


# Add your solution here...
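# As an optional sanity check (not part of the original notebook), you can also assert that `pong_both` has the expected shape. This assumes the two input DataFrames share the same row index, which is the case here since they were saved as a matched data/targets pair:

# In[ ]:


# Optional check: same number of rows, and the columns are the union of both inputs.
assert len(pong_both) == len(pong_data) == len(pong_targets)
assert list(pong_both.columns) == list(pong_data.columns) + list(pong_targets.columns)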
# ## Prepare Data from an External Source
#
# Finally, here is an example of taking data from an external source and adapting it to the standard format we are using. The data is from the [2014 ATLAS Higgs Challenge](https://www.kaggle.com/c/higgs-boson), which is now documented and archived [here](http://opendata.cern.ch/record/328). More details about the challenge are in [this writeup](http://opendata.cern.ch/record/329/files/atlas-higgs-challenge-2014.pdf).
#
# **EXERCISE:**
#
# 1. Download the compressed CSV file (~62Mb) `atlas-higgs-challenge-2014-v2.csv.gz` using the link at the bottom of [this page](http://opendata.cern.ch/record/328).
# 2. Move the file to the directory containing this notebook. You do not need to uncompress (gunzip) the file.
# 3. Skim the description of the columns [here](http://opendata.cern.ch/record/328). The details are not important, but the main points are that:
#    - There are two types of input "features": 17 primary + 13 derived.
#    - The goal is to predict the "Label" from the input features.
# 4. Examine the function defined below and determine what it does. Look up the documentation of any functions you are unfamiliar with.
# 5. Run the function below, which should create two new files in your course data directory:
#    - `higgs_data.hf5`: Input data with 30 columns, ~100Mb size.
#    - `higgs_targets.hf5`: Output targets with 1 column, ~8.8Mb size.

# In[29]:


def prepare_higgs(filename='atlas-higgs-challenge-2014-v2.csv.gz'):
    # Read the input file, uncompressing on the fly.
    df = pd.read_csv(filename, index_col='EventId', na_values='-999.0')
    # Prepare and save the data output file.
    higgs_data = df.drop(columns=['Label', 'KaggleSet', 'KaggleWeight']).astype('float32')
    higgs_data.to_hdf(locate_data('higgs_data.hf5', check_exists=False), 'data', mode='w')
    # Prepare and save the targets output file.
    higgs_targets = df[['Label']]
    higgs_targets.to_hdf(locate_data('higgs_targets.hf5', check_exists=False), 'targets', mode='w')


# In[30]:


prepare_higgs()


# Check that `locate_data` can find the new files:

# In[31]:


locate_data('higgs_data.hf5')


# In[32]:


locate_data('higgs_targets.hf5')


# You can now safely delete the downloaded CSV file. Uncomment and run the line below if you would like to do this directly from your notebook (this is an example of a [shell command](https://jakevdp.github.io/PythonDataScienceHandbook/01.05-ipython-and-shell-commands.html)).

# In[33]:


#!rm atlas-higgs-challenge-2014-v2.csv.gz
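# As a final optional check (not part of the original exercise), you can read the two new files back with `pd.read_hdf` and confirm that they form a matched data/targets pair, just like the pong files above:

# In[ ]:


# Optional check: the new data/targets pair should have the same number of rows.
higgs_data = pd.read_hdf(locate_data('higgs_data.hf5'))
higgs_targets = pd.read_hdf(locate_data('higgs_targets.hf5'))
print('#rows: {}, {}.'.format(len(higgs_data), len(higgs_targets)))
assert len(higgs_data) == len(higgs_targets)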