As we've already seen in previous sections, the power of the PyData stack is built upon the ability of NumPy and Pandas to push basic operations into C via an intuitive syntax: examples are vectorized/broadcasted operations in NumPy, and grouping-type operations in Pandas. While these abstractions are efficient and effective for many common use cases, they often rely on the creation of temporary intermediate objects, which can cause undue overhead in computational time and memory use.
As of version 0.13 (released January 2014), Pandas includes some experimental tools that allow you to directly access C-speed operations without costly allocation of intermediate arrays.
These are the eval() and query() functions, which rely on the Numexpr package.
In this notebook we will walk through their use and give some rules of thumb about when you might think about using them.
Motivating query() and eval(): Compound Expressions
We've seen previously that NumPy and Pandas support fast vectorized operations; for example, when adding the elements of two arrays:
import numpy as np
rng = np.random.RandomState(42)
x = rng.rand(1000000)
y = rng.rand(1000000)
%timeit x + y
100 loops, best of 3: 3.39 ms per loop
As discussed in Computation on NumPy Arrays: Universal Functions, this is much faster than doing the addition via a Python loop or comprehension:
%timeit np.fromiter((xi + yi for xi, yi in zip(x, y)), dtype=x.dtype, count=len(x))
1 loop, best of 3: 266 ms per loop
But this abstraction can become less efficient when computing compound expressions. For example, consider the following expression:
mask = (x > 0.5) & (y < 0.5)
Because NumPy evaluates each subexpression, this is roughly equivalent to the following:
tmp1 = (x > 0.5)
tmp2 = (y < 0.5)
mask = tmp1 & tmp2
In other words, every intermediate step is explicitly allocated in memory. If the x and y arrays are very large, this can lead to significant memory and computational overhead.
The Numexpr library gives you the ability to compute this type of compound expression element by element, without the need to allocate full intermediate arrays.
The Numexpr documentation has more details, but for the time being it is sufficient to say that the library accepts a string giving the NumPy-style expression you'd like to compute:
import numexpr
mask_numexpr = numexpr.evaluate('(x > 0.5) & (y < 0.5)')
np.allclose(mask, mask_numexpr)
The benefit here is that Numexpr evaluates the expression in a way that does not use full-sized temporary arrays, and thus can be much more efficient than NumPy, especially for large arrays.
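A direct timing comparison makes this concrete (a rough check only; the exact numbers depend on your machine and array size):

%timeit (x > 0.5) & (y < 0.5)
%timeit numexpr.evaluate('(x > 0.5) & (y < 0.5)')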
The Pandas eval() and query() tools that we will discuss here are conceptually similar, and depend on the Numexpr package.
pandas.eval() for Efficient Operations
The eval() function in Pandas uses string expressions to efficiently compute operations using DataFrames.
For example, consider the following DataFrames:
import pandas as pd
nrows, ncols = 100000, 100
rng = np.random.RandomState(42)
df1, df2, df3, df4 = (pd.DataFrame(rng.rand(nrows, ncols))
                      for i in range(4))
To compute the sum of all four
DataFrames using the typical Pandas approach, we can just write the sum:
%timeit df1 + df2 + df3 + df4
10 loops, best of 3: 87.1 ms per loop
The same result can be computed via
pd.eval by constructing the expression as a string:
%timeit pd.eval('df1 + df2 + df3 + df4')
10 loops, best of 3: 42.2 ms per loop
The eval() version of this expression is about twice as fast (and uses much less memory), while giving the same result:
np.allclose(df1 + df2 + df3 + df4, pd.eval('df1 + df2 + df3 + df4'))
As of Pandas v0.16,
pd.eval() supports a wide range of operations.
To demonstrate these, we'll use the following integer DataFrames:
df1, df2, df3, df4, df5 = (pd.DataFrame(rng.randint(0, 1000, (100, 3))) for i in range(5))
pd.eval() supports all arithmetic operators. For example:

result1 = -df1 * df2 / (df3 + df4) - df5
result2 = pd.eval('-df1 * df2 / (df3 + df4) - df5')
np.allclose(result1, result2)
It also supports all comparison operators, including chained expressions:

result1 = (df1 < df2) & (df2 <= df3) & (df3 != df4)
result2 = pd.eval('df1 < df2 <= df3 != df4')
np.allclose(result1, result2)
The & and | bitwise operators are supported as well:

result1 = (df1 < 0.5) & (df2 < 0.5) | (df3 < df4)
result2 = pd.eval('(df1 < 0.5) & (df2 < 0.5) | (df3 < df4)')
np.allclose(result1, result2)
In addition, it supports the use of the literal and and or in Boolean expressions:
result3 = pd.eval('(df1 < 0.5) and (df2 < 0.5) or (df3 < df4)')
np.allclose(result1, result3)
pd.eval() supports access to object attributes via the obj.attr syntax, and indexes via the obj[index] syntax:
result1 = df2.T[0] + df3.iloc[1]
result2 = pd.eval('df2.T[0] + df3.iloc[1]')
np.allclose(result1, result2)
Other operations such as function calls, conditional statements, loops, and other more involved constructs are currently not implemented in pd.eval().
If you'd like to execute these more complicated types of expressions, you can use the Numexpr library itself.
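For instance, here is a minimal sketch reusing the x and y arrays from above; it relies on the fact that Numexpr provides mathematical functions such as sin, sqrt, and log, which are not available through pd.eval():

import numexpr
# sin() is evaluated element by element, without allocating
# full-sized temporary arrays for the intermediate results
result = numexpr.evaluate('sin(x) + 3 * y')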
DataFrame.eval() for Column-Wise Operations
Just as Pandas has a top-level pd.eval() function, DataFrames have an eval() method that works in similar ways.
The benefit of the
eval() method is that columns can be referred to by name.
We'll use this labeled array as an example:
df = pd.DataFrame(rng.rand(1000, 3), columns=['A', 'B', 'C'])
df.head()
Using pd.eval() as above, we can compute expressions with the three columns like this:
result1 = (df['A'] + df['B']) / (df['C'] - 1)
result2 = pd.eval("(df.A + df.B) / (df.C - 1)")
np.allclose(result1, result2)
The DataFrame.eval() method allows much more succinct evaluation of expressions with the columns:
result3 = df.eval('(A + B) / (C - 1)')
np.allclose(result1, result3)
Notice here that we treat column names as variables within the evaluated expression, and the result is what we would wish.
In addition to the options just discussed,
DataFrame.eval() also allows assignment to any column.
Let's use the DataFrame from before, which has columns 'A', 'B', and 'C'.
We can use
df.eval() to create a new column
'D' and assign to it a value computed from the other columns:
df.eval('D = (A + B) / C', inplace=True)
df.head()
In the same way, any existing column can be modified:
df.eval('D = (A - B) / C', inplace=True)
df.head()
The DataFrame.eval() method supports an additional syntax that lets it work with local Python variables.
Consider the following:
column_mean = df.mean(1)
result1 = df['A'] + column_mean
result2 = df.eval('A + @column_mean')
np.allclose(result1, result2)
The @ character here marks a variable name rather than a column name, and lets you efficiently evaluate expressions involving the two "namespaces": the namespace of columns, and the namespace of Python objects.
Notice that this
@ character is only supported by the
DataFrame.eval() method, not by the
pandas.eval() function, because the
pandas.eval() function only has access to the one (Python) namespace.
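We can check this directly; note that this is a minimal illustration, and the exact error type and message vary between Pandas versions:

try:
    pd.eval('A + @column_mean')  # @ is not allowed in top-level pd.eval()
except Exception as e:
    print(type(e).__name__, e)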
The DataFrame has another method based on evaluated strings, called the query() method.
Consider the following:
result1 = df[(df.A < 0.5) & (df.B < 0.5)]
result2 = pd.eval('df[(df.A < 0.5) & (df.B < 0.5)]')
np.allclose(result1, result2)
As with the example used in our discussion of
DataFrame.eval(), this is an expression involving columns of the DataFrame.
It cannot be expressed using the
DataFrame.eval() syntax, however!
Instead, for this type of filtering operation, you can use the query() method:
result2 = df.query('A < 0.5 and B < 0.5')
np.allclose(result1, result2)
In addition to being a more efficient computation, compared to the masking expression this is much easier to read and understand.
Note that the
query() method also accepts the
@ flag to mark local variables:
Cmean = df['C'].mean()
result1 = df[(df.A < Cmean) & (df.B < Cmean)]
result2 = df.query('A < @Cmean and B < @Cmean')
np.allclose(result1, result2)
When considering whether to use these functions, there are two considerations: computation time and memory use.
Memory use is the most predictable aspect. As already mentioned, every compound expression involving NumPy arrays or Pandas
DataFrames will result in implicit creation of temporary arrays.
For example, this:
x = df[(df.A < 0.5) & (df.B < 0.5)]
Is roughly equivalent to this:
tmp1 = df.A < 0.5
tmp2 = df.B < 0.5
tmp3 = tmp1 & tmp2
x = df[tmp3]
If the size of the temporary DataFrames is significant compared to your available system memory (typically several gigabytes), then it's a good idea to use an eval() or query() expression.
You can check the approximate size of your array in bytes using this:
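df.values.nbytes
32000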
On the performance side,
eval() can be faster even when you are not maxing out your system memory.
The issue is how the size of your temporary DataFrames compares to the L1 or L2 CPU cache on your system (typically a few megabytes in 2016); if they are much bigger, then
eval() can avoid some potentially slow movement of values between the different memory caches.
In practice, I find that the difference in computation time between the traditional methods and the
eval()/query() method is usually not significant; if anything, the traditional method is faster for smaller arrays!
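As a rough illustration of this point (timings are machine-dependent), we can compare the two approaches on the small df defined above:

%timeit df[(df.A < 0.5) & (df.B < 0.5)]
%timeit df.query('A < 0.5 and B < 0.5')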
The benefit of eval()/query() is mainly in the saved memory, and the sometimes cleaner syntax these tools offer.
We've covered most of the details of
eval() and query() here; for more information on these, you can refer to the Pandas documentation.
In particular, different parsers and engines can be specified for running these queries; for details on this, see the discussion within the "Enhancing Performance" section.
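For example, here is a minimal sketch: pd.eval() accepts an engine argument, and engine='python' falls back to plain Python evaluation, which can be useful for comparison or when Numexpr is not installed:

result_py = pd.eval('df1 + df2 + df3 + df4', engine='python')
result_ne = pd.eval('df1 + df2 + df3 + df4', engine='numexpr')
np.allclose(result_py, result_ne)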