The goal of pyxplorer is to provide a simple tool for interactive profiling of datasets that are accessible via a SQL-like interface. The only requirements for running data profiling are that you can provide a Python DBAPI-like interface to your data source and that the data source understands simplistic SQL queries.
I built this piece of software while trying to get a better understanding of the data distribution in a massive dataset of several hundred million records. Depending on the size of the dataset and the query engine, the response time can range from seconds (Impala) to minutes (Hive) or even hours (MySQL).
The typical use case is to use pyxplorer interactively from an IPython Notebook or IPython shell to incrementally extract information about your data.
$> pip install pyxplorer impyla
Questions, Ideas, Comments:
Basically, pyxplorer works with all DBAPI-like interfaces, but to show the advantages of running a high-performance data analysis on large amounts of data we will use Impala to store our data.
from impala.dbapi import connect
conn = connect(host='diufpc57', port=21050)
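The Impala connection above is just one option. Since pyxplorer only expects a DBAPI-style connection object, any driver that exposes `connect()`, cursors, and `execute()` should work. As a minimal sketch, `sqlite3` from the standard library stands in here for Impala, and the queries illustrate the kind of simplistic SQL a profiler issues under the hood:

```python
import sqlite3

# Any DBAPI-compliant connection works; sqlite3 stands in for Impala here.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE customerp (c_id INTEGER, c_credit TEXT)")
cur.executemany("INSERT INTO customerp VALUES (?, ?)",
                [(1, "GC"), (2, "BC"), (3, "GC")])

# Simplistic SQL of the kind a profiler relies on:
cur.execute("SELECT COUNT(*) FROM customerp")
print(cur.fetchone()[0])  # 3
cur.execute("SELECT COUNT(DISTINCT c_credit) FROM customerp")
print(cur.fetchone()[0])  # 2
```

Any engine that can answer `COUNT`, `COUNT(DISTINCT ...)`, and `GROUP BY` queries is sufficient for the profiling shown below.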
Imagine that you are given access to a huge Hive/Impala database on your very own Hadoop cluster and are asked to profile the data to get a better understanding of it before performing more specific data science later on. Based on this connection, we can now instantiate a new explorer object.
import pyxplorer as pxp
data = pxp.Database("tpcc3", conn)
data
Name | Size |
---|---|
customerp | 30000000 |
districtp | 10000 |
historyp | 30000000 |
itemp | 100000 |
new_orderp | 9000000 |
oorderp | 30000000 |
order_linep | 299991280 |
stockp | 100000000 |
warehousep | 1000 |
Rows: 9 / Columns: 2
This simple code gives you access to all the tables in this database. Let's further investigate how many tables and columns exist in the database.
len(data)
9
The above is the idiomatic Python way, but sometimes it might not be as easy to grasp what is meant, so you can also use:
data.num_tables()
9
Get the total number of columns:
sum([len(x.columns()) for x in data.tables()])
92
Or we can directly use the number of columns method on the database object
data.num_columns()
92
It seems like we have a better understanding of the dataset, but how many tuples are we talking about?
data.num_tuples()
499102280
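This total is simply the sum of the per-table sizes from the database overview above, which we can verify by hand:

```python
# Table sizes as listed in the database overview above
sizes = {
    "customerp": 30000000,
    "districtp": 10000,
    "historyp": 30000000,
    "itemp": 100000,
    "new_orderp": 9000000,
    "oorderp": 30000000,
    "order_linep": 299991280,
    "stockp": 100000000,
    "warehousep": 1000,
}

total = sum(sizes.values())
print(total)  # 499102280, matching data.num_tuples()
```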
Using the above operations we can inspect all tables at once, but let's have a closer look at single-table operations to extract more information from individual tables. In this example, we want to investigate the order_linep table.
tab = data['order_linep']
tab
<Table: "tpcc3.order_linep">
Let's start by doing some basic inspection of the table, like extracting the number of rows and the number of columns
tab.size()
299991280
len(tab.columns())
10
tab.columns()
[ol_w_id, ol_d_id, ol_o_id, ol_number, ol_i_id, ol_delivery_d, ol_amount, ol_supply_w_id, ol_quantity, ol_dist_info]
Columns are special objects that can be easily and interactively inspected in IPython Notebooks. The default information per column is the min and max value, the most frequent and least frequent value, and the total number of distinct values. Based on these measures we provide derived information about the column:
$\text{uniqueness} = \frac{\text{distinct}}{\text{rows}}$
$\text{constancy} = \frac{\text{count}_{mf}}{\text{rows}}$
tab['ol_w_id']
Name | Value |
---|---|
Min | 1 |
Max | 1000 |
#Distinct Values | 1000 |
Most Frequent | 109 (301593) |
Least Frequent | 212 (298395) |
Top 10 MF | 109,676,117,460,19,877,165,764,340,689 |
Top 10 LF | 212, 405, 52, 284, 304, 769, 727, 665, 90, 163 |
Uniqueness | 3.33343022504e-06 |
Constancy | 0.00100533922186 |
Rows: 9 / Columns: 2
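Plugging the numbers from the table above into the two formulas reproduces the reported metrics:

```python
rows = 299991280   # size of order_linep
distinct = 1000    # #Distinct Values of ol_w_id
mf_count = 301593  # count of the most frequent value (109)

uniqueness = distinct / rows
constancy = mf_count / rows

print(uniqueness)  # ≈ 3.33343e-06, matching the Uniqueness row above
print(constancy)   # ≈ 0.00100534, matching the Constancy row above
```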
It's possible to access the column either using subscript notation or directly as an attribute of the object:
tab.ol_w_id
Name | Value |
---|---|
Min | 1 |
Max | 1000 |
#Distinct Values | 1000 |
Most Frequent | 109 (301593) |
Least Frequent | 212 (298395) |
Top 10 MF | 109,676,117,460,19,877,165,764,340,689 |
Top 10 LF | 212, 405, 52, 284, 304, 769, 727, 665, 90, 163 |
Uniqueness | 3.33343022504e-06 |
Constancy | 0.00100533922186 |
Rows: 9 / Columns: 2
Based on this information we can further deduce what role this column might play in the overall schema. For example, based on the uniqueness we can say that the column is not suitable to uniquely identify every row. In addition, based on the constancy of the most frequent value and the small spread between the most and least frequent value, we can deduce that the data is almost uniformly distributed.
One important feature is looking at the distinct values and their distribution:
tab.ol_w_id.dcount()
1000
dist = tab.ol_w_id.distribution(limit=10000)
%matplotlib inline
dist.fraction.hist()
<matplotlib.axes.AxesSubplot at 0x10c550ad0>
The above example validates our assumption of a uniform distribution, as the histogram basically spreads from $0.0009$ to $0.001006$.
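To see why this histogram range supports the uniformity claim: with 1000 distinct warehouse ids, a perfectly uniform column would give each value a fraction of exactly 1/1000 = 0.001 of the rows, and the observed most and least frequent counts bracket that value closely:

```python
rows = 299991280
mf, lf = 301593, 298395  # counts of the most and least frequent ol_w_id values

print(mf / rows)  # ≈ 0.0010053
print(lf / rows)  # ≈ 0.0009947
# Both fractions deviate from the uniform expectation 1/1000 by well
# under one percent, consistent with the narrow histogram above.
```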
In addition to performing such analysis on a column level, we can have a look at the distributions as well from a higher level.
data.dcounts().fraction.hist()
<matplotlib.axes.AxesSubplot at 0x109958510>
len(data.dcounts())
98
data.dcounts().head(10)
table | column | distinct | size | fraction | |
---|---|---|---|---|---|
0 | customerp | c_w_id | 1000 | 30000000 | 3.333333e-05 |
1 | customerp | c_d_id | 10 | 30000000 | 3.333333e-07 |
2 | customerp | c_id | 3000 | 30000000 | 1.000000e-04 |
3 | customerp | c_discount | 5000 | 30000000 | 1.666667e-04 |
4 | customerp | c_credit | 2 | 30000000 | 6.666667e-08 |
5 | customerp | c_last | 1000 | 30000000 | 3.333333e-05 |
6 | customerp | c_first | 29999284 | 30000000 | 9.999761e-01 |
7 | customerp | c_credit_lim | 1 | 30000000 | 3.333333e-08 |
8 | customerp | c_balance | 1 | 30000000 | 3.333333e-08 |
9 | customerp | c_ytd_payment | 1 | 30000000 | 3.333333e-08 |
Here we see that dcounts returns a pandas DataFrame, in this case with 98 rows.
While in most cases the data to profile is already available as a Hive, Impala, or even MySQL table, sometimes we simply have a directory full of CSV files that follow the same format and want to perform the same analysis.
To facilitate this process, pyxplorer
provides a simple interface to generate an external table based on a sample from the CSV files in a given directory.
The idea is that a directory in HDFS already contains a set of files that all follow the same specification and the user now wants to perform some data profiling. We further assume that the user has access to Hive / Impala to run queries on the data.
from pyxplorer.loader import Loader
loader = Loader("/user/martin/test1", name_node="diufpc56", hive_server="diufpc301")
The loader inspects the first file it finds in the directory and extracts the first few hundred lines. It then tries to guess the correct separator and builds an external table. The rest of the workflow follows the normal explorative process described before.
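Such separator guessing can be sketched with the standard library's `csv.Sniffer` (an illustration of the idea, not necessarily how pyxplorer implements it; the sample lines are made up):

```python
import csv

# A few sample lines as they might be read from the first file in the directory
sample = "100;12.5;distance\n129;7.3;distance\n148;9.9;distance\n"

# Sniff the delimiter from a restricted candidate set
dialect = csv.Sniffer().sniff(sample, delimiters=";,|\t")
print(dialect.delimiter)  # ';'

# With the guessed separator we can split the first row into columns
columns = next(csv.reader(sample.splitlines(), dialect))
print(len(columns))  # 3
```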
loader.load()
('default', 'pyxplorer_data')
pyxdb = pxp.Database("default", conn)
pyxdb
Name | Size |
---|---|
pyxplorer_data | 7 |
Rows: 1 / Columns: 2
tab = pyxdb["pyxplorer_data"]
tab
<Table: "default.pyxplorer_data">
tab.columns()
[col_0, col_1, col_2, col_3, col_4, col_5, col_6]
tab.col_0
Name | Value |
---|---|
Min | 100 |
Max | distance |
#Distinct Values | 7 |
Most Frequent | distance (1) |
Least Frequent | distance (1) |
Top 10 MF | 100,129,148,distance,192,113,168 |
Top 10 LF | 100, 129, 148, distance, 192, 113, 168 |
Uniqueness | 1.0 |
Constancy | 0.142857142857 |
Rows: 9 / Columns: 2