#!/usr/bin/env python
# coding: utf-8

# # ISB-CGC Community Notebooks
#
# ```
# Title: How to Create a Random Sample in BigQuery
# Author: Lauren Hagen
# Created: 2019-10-17
# Purpose: Demonstrates how to randomly split a data set into multiple groups with BigQuery
# ```
# ***

# # How to Create a Random Sample in BigQuery
#
# In this notebook, we will use BigQuery to create random samples of a data set, such as the training and test sets used when predicting an outcome in machine learning. We assume that you have set up your GCP project and accessed the ISB-CGC WebApp. If not, please visit the [How To Get Started on ISB-CGC](https://isb-cancer-genomics-cloud.readthedocs.io/en/latest/sections/HowToGetStartedonISB-CGC.html) guide or the [Community Notebook Repository](https://github.com/isb-cgc/Community-Notebooks) for help getting started.
#
# We will go over two methods for creating the subsets:
# - the `RAND()` function
# - the `MOD()` and `FARM_FINGERPRINT()` functions
#
# Before we can begin working with BigQuery, we need to load the BigQuery module and authenticate ourselves.

# In[ ]:


from google.cloud import bigquery


# In[ ]:


get_ipython().system('gcloud auth application-default login')


# ## `RAND()` Function for Randomly Splitting Data
#
# A simple way to create a random sample with BigQuery is the `RAND()` function, which generates a pseudo-random number for each row; the query can then keep only the rows whose random number falls below a chosen threshold.
#
# We will create two queries with the `RAND()` function. The first returns a random sample of the data, and the second returns the entire cohort with each row labeled with the subset it belongs to.
#
# We start with a simple query that creates a cohort. Cohorts can also be created beforehand in the WebApp or by other means instead of within this query. For more information on creating cohorts, please see the [ISB-CGC Web Interface (Web App) documentation](https://isb-cancer-genomics-cloud.readthedocs.io/en/latest/sections/Web-UI.html) and the [Community Notebook Repository](https://github.com/isb-cgc/Community-Notebooks).
#
# **Note:** you will need to replace 'your_project_number' with your own project number before continuing with the notebook.

# In[ ]:


# Create a query with the cohort information
# This can be replaced with your own cohort
get_ipython().run_cell_magic('bigquery', 'cohort --project your_project_number', 'SELECT case_barcode, project_short_name, case_gdc_id\nFROM `isb-cgc.TARGET_bioclin_v0.Clinical`\nWHERE project_short_name = "TARGET-NBL"\n')


# In[4]:


# View the first 5 rows of the cohort
cohort.head(5)


# In[5]:


print("This cohort has " + str(len(cohort)) + " cases (rows).")


# The next query creates the random sample. The `WHERE` clause can be adjusted to return any percentage of the table as a random sample; for this example, it is set to create a random sample of ~25% of the data.

# In[ ]:


get_ipython().run_cell_magic('bigquery', 'sample --project your_project_number', '\n--- Create Cohort\nWITH table1 AS (\nSELECT case_barcode, project_short_name, case_gdc_id, 0 as table_num\nFROM `isb-cgc.TARGET_bioclin_v0.Clinical`\nWHERE project_short_name = "TARGET-NBL")\n\n--- Select a random sample that is ~25% of the data\nSELECT case_barcode, project_short_name, case_gdc_id, 1 as table_num\nFROM table1\n-- Count the number of rows in the cohort, then find how many of them will be 25%\n-- of the cohort. Divide that number by the total number of rows in the data.\n-- To change the %, change the 0.25 to whatever percentage you need\nWHERE RAND() < ((SELECT COUNT(*) FROM table1)*0.25)/(SELECT COUNT(*) FROM table1)\n')


# In[7]:


print("The random sample of the cohort has " + str(len(sample)) + " cases, which is around " + str(round((len(sample)/len(cohort)*100),1)) + "% of the cohort.")
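# As an aside, since the `%%bigquery` magic returns the cohort as a pandas DataFrame, a comparable ~25% sample can also be drawn client-side. This is only a minimal sketch and not part of the BigQuery workflow above; the `random_state` value is arbitrary and simply makes the draw reproducible.

# In[ ]:


# Draw a reproducible ~25% sample of the cohort DataFrame with pandas
# (unlike RAND(), a fixed random_state returns the same rows on every run)
local_sample = cohort.sample(frac=0.25, random_state=42)
print("Local sample has " + str(len(local_sample)) + " cases.")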
# This query is handy if we just want a smaller sample of the cohort for some initial analysis before moving to the full data set, but it is not enough if we want two separate subsets, one for training a model and one for testing it. The final query joins the random sample back to the main table, preserving the split with a column that records which subset each row belongs to.

# In[ ]:


get_ipython().run_cell_magic('bigquery', 'dataset --project your_project_number', '--- Create Cohort\nWITH table1 AS (\nSELECT case_barcode, project_short_name, case_gdc_id, 0 as table_num\nFROM `isb-cgc.TARGET_bioclin_v0.Clinical`\nWHERE project_short_name = "TARGET-NBL"),\n\n--- Select a random sample that is ~25% of the data\ntable2 AS (\nSELECT case_barcode, project_short_name, case_gdc_id, 1 as table_num\nFROM table1\n-- Count the number of rows in the cohort, then find how many of them will be 25%\n-- of the cohort. Divide that number by the total number of rows in the data.\n-- To change the %, change the 0.25 to whatever percentage you need\nWHERE RAND() < ((SELECT COUNT(*) FROM table1)*0.25)/(SELECT COUNT(*) FROM table1)\n)\n\n--- Join the random sample table back to the main table\nSELECT a.case_barcode, a.project_short_name, a.case_gdc_id, IFNULL(b.table_num,2) AS table_num\nFROM table1 AS a\nFULL OUTER JOIN table2 AS b\nON a.case_barcode = b.case_barcode\n')


# In[9]:


dataset.head(5)


# In[10]:


print("The final table has " + str(len(dataset)) + " cases.")


# In[11]:


# Create a list with the sorted initial barcodes
case_barcode_initial = list(cohort.case_barcode.sort_values())
# Create a list with the sorted final barcodes
case_barcode_final = list(dataset.case_barcode.sort_values())
# Compare the two lists; if TRUE, no barcodes were lost
case_barcode_initial == case_barcode_final


# Each run of the query produces a different sample because `RAND()` generates a new set of random numbers every time it is called. This is a problem if you want reproducible results each time you run the query. One way to solve it is to use `FARM_FINGERPRINT()` with `MOD()`, which we cover next.

# ## `MOD()` and `FARM_FINGERPRINT()` Functions
#
# `FARM_FINGERPRINT()` computes a fingerprint (a 64-bit integer hash) of a STRING or BYTES value, and the same input always produces the same output. The `MOD()` function returns the remainder of dividing that fingerprint by a chosen number. Because both functions are deterministic, this method always assigns the same rows to the same subsets.
#
# For example, say we want three approximately equal subsets of our cohort. We will write a `WHERE` clause that passes the `case_barcode` to `FARM_FINGERPRINT()`, takes the absolute value, passes that number to `MOD()` with a divisor of 3, and keeps the rows where the result equals 0.
#
# Note: `FARM_FINGERPRINT()` should be run on a column that has a unique value for each row; if no single column is unique, combine two columns with `CONCAT()` to create a unique value per row, as sketched below.
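# This is a minimal sketch of the `CONCAT()` variant, not part of the original queries. The pairing of `case_barcode` with `case_gdc_id` is only an illustration of combining two columns into one hashed value, and the query is run through the BigQuery client library imported at the top of the notebook.

# In[ ]:


# Hypothetical example: fingerprint a CONCAT of two columns instead of a single column
client = bigquery.Client(project='your_project_number')
concat_query = """
SELECT case_barcode, project_short_name, case_gdc_id
FROM `isb-cgc.TARGET_bioclin_v0.Clinical`
WHERE project_short_name = "TARGET-NBL"
  AND MOD(ABS(FARM_FINGERPRINT(CONCAT(case_barcode, case_gdc_id))), 3) = 0
"""
concat_subset = client.query(concat_query).to_dataframe()
concat_subset.head(5)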
# For more information, see the documentation:
# - [`FARM_FINGERPRINT()`](https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators#farm_fingerprint)
# - [`MOD()`](https://cloud.google.com/dataprep/docs/html/MOD-Function_57344691)

# In[12]:


get_ipython().run_cell_magic('bigquery', ' --project your_project_number', 'SELECT case_barcode, project_short_name, case_gdc_id\nFROM `isb-cgc.TARGET_bioclin_v0.Clinical`\nWHERE project_short_name = "TARGET-NBL" AND MOD(ABS(FARM_FINGERPRINT(case_barcode)),3) = 0\n')


# The 3 in the `MOD()` expression is what splits the cohort into three subsets. If we instead wanted a ~20% split of the data, we would change the 3 to a 10 and then keep any row whose result is less than or equal to 1.

# In[13]:


get_ipython().run_cell_magic('bigquery', ' --project your_project_number', 'SELECT case_barcode, project_short_name, case_gdc_id\nFROM `isb-cgc.TARGET_bioclin_v0.Clinical`\nWHERE project_short_name = "TARGET-NBL" AND MOD(ABS(FARM_FINGERPRINT(case_barcode)),10) <= 1\n')


# With this method, we can also write a query that retrieves the remaining subset of the data. For the first example, we would change the `=` to `!=`, as shown below:

# In[14]:


get_ipython().run_cell_magic('bigquery', ' --project your_project_number', 'SELECT case_barcode, project_short_name, case_gdc_id\nFROM `isb-cgc.TARGET_bioclin_v0.Clinical`\nWHERE project_short_name = "TARGET-NBL" AND MOD(ABS(FARM_FINGERPRINT(case_barcode)),3) != 0\n')


# For the second example, we would change the `<=` to `>`, as shown below:

# In[15]:


get_ipython().run_cell_magic('bigquery', ' --project your_project_number', 'SELECT case_barcode, project_short_name, case_gdc_id\nFROM `isb-cgc.TARGET_bioclin_v0.Clinical`\nWHERE project_short_name = "TARGET-NBL" AND MOD(ABS(FARM_FINGERPRINT(case_barcode)),10) > 1\n')


# If you want to label the entire cohort with only one query instead of two, a column can be added that labels each case with its subset number.

# In[16]:


get_ipython().run_cell_magic('bigquery', ' --project your_project_number', 'SELECT case_barcode, project_short_name, case_gdc_id, MOD(ABS(FARM_FINGERPRINT(case_barcode)),3) as subset\nFROM `isb-cgc.TARGET_bioclin_v0.Clinical`\nWHERE project_short_name = "TARGET-NBL"\n')


# The labeled subsets can then be filtered out and manipulated in Python, as sketched in the final cell below. Have fun random sampling the data! Please let us know if you have questions by emailing us at feedback@isb-cgc.org.
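# The cell below is a minimal sketch of that filtering step. It assumes the labeled query above was saved to a DataFrame, for example by adding a destination variable to the cell magic (e.g. `%%bigquery labeled --project your_project_number`); the name `labeled` is used only for illustration.

# In[ ]:


# Split the labeled cohort into subsets using the `subset` column
# (assumes the last query was saved to a DataFrame named `labeled`)
test = labeled[labeled['subset'] == 0]
train = labeled[labeled['subset'] != 0]
print("Training set: " + str(len(train)) + " cases, test set: " + str(len(test)) + " cases.")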