This demo assumes you are familiar with the basics of running an Intrepydd program, which are covered in the "Hello, world!" demo.
As a toy example, consider the function increment_elements(xs, value), defined below. Given a three-dimensional Numpy array, xs, and a floating-point value, value, it adds value to every element of xs, modifying xs in place. This function uses only native Python and Numpy constructs.
The example exists strictly to show the similarity between Intrepydd code and basic Python. In particular, it uses explicit nested loops to iterate over the elements rather than more idiomatic Numpy operations, such as xs += value (shown for comparison after the definition below). A key feature of Intrepydd is that it can optimize many kinds of explicit loop code, and future versions will support mixing "native" Numpy and loop-based code. In this way, Intrepydd takes inspiration from Numba, although our planned extensions will go further.
def increment_elements(xs, value):
    '''
    Increment every element in array `xs` by `value`.
    Assume the array is 3d.
    '''
    assert len(xs.shape) == 3
    for i in range(xs.shape[0]):
        for j in range(xs.shape[1]):
            for k in range(xs.shape[2]):
                xs[i, j, k] += value
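For comparison, here is the idiomatic Numpy version mentioned above. It relies on broadcasting instead of explicit loops; the name increment_elements_numpy is just a label for this demo, not something Intrepydd requires.

def increment_elements_numpy(xs, value):
    '''
    Idiomatic Numpy equivalent of `increment_elements`: broadcasting adds
    `value` to every element of `xs` in place, with no explicit loops.
    '''
    xs += value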
Here is a simple test harness for this function. We'll reuse it later to test an Intrepydd version.
def test_code(increment_elements_function=increment_elements):
    from numpy import arange
    xs = arange(12).reshape(2, 2, 3).astype('double')
    print('=== before ===')
    print(xs)
    increment_elements_function(xs, 3.0)  # call the function passed in, not the global
    print('\n=== after ===')
    print(xs)

test_code()
The first technique for attaining performance with Intrepydd is type specialization. For example, if you know that a given Python object in your code is an array of floating-point values, Intrepydd can use this information to generate specialized code that should be faster and more energy-efficient.
For instance, suppose we know that the array only contains 64-bit floating-point values, or double values, and that the value increment is also a double. Then we can take the original Python function and simply modify the function signature (the def line) to declare this fact. That is,

def increment_elements(xs, value):

becomes

def increment_elements_pydd(xs: Array(float64), value: double):

Here is a complete implementation, which we will write to demo2.pydd:
%%writefile demo2.pydd
def increment_elements_pydd(xs: Array(float64), value: double):  # Add types
    '''
    Increment every element in array `xs` by `value`.
    Assume the array is 3d.
    '''
    for i in range(shape(xs, 0)):
        for j in range(shape(xs, 1)):
            for k in range(shape(xs, 2)):
                xs[i, j, k] += value
There are some additional differences between operations on Intrepydd arrays and Numpy arrays. In Intrepydd v0.1, field accesses (e.g., xs.shape[0]) are replaced by function counterparts (e.g., shape(xs, 0)).
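To make the mapping concrete, here is a small, runnable Python-side sketch of the same shape lookups, with the Intrepydd v0.1 spellings shown in the comments:

import numpy as np

xs = np.arange(12).reshape(2, 2, 3).astype('double')

# Python/Numpy: attribute access on the array object.
print(xs.shape[0], xs.shape[1], xs.shape[2])   # prints: 2 2 3

# In Intrepydd v0.1 source, the same lookups are written as function calls:
#   shape(xs, 0), shape(xs, 1), shape(xs, 2)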
Let's go ahead and compile this new version:
!pyddc demo2.pydd
Let's test the correctness of the Intrepydd version by importing the new module and running the test code against it.
!ls -al
import demo2
test_code(demo2.increment_elements_pydd)
If everything went well, you should see the same numerical output as with the original version.
To recap: a key first step toward higher performance is type specialization, and the first way to achieve it in Intrepydd is to modify the signatures of your function definitions to include type annotations.
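As a final illustration, the same annotation pattern applies to any kernel you might put in a .pydd file. The sketch below is not part of this demo's files; it is a hypothetical example that reuses only the constructs shown above (Array(float64), double, shape, and element indexing), and it assumes element-wise multiplication works the same way as the addition we already compiled.

def scale_elements_pydd(xs: Array(float64), factor: double):
    '''
    Hypothetical sketch: multiply every element of a 3-d array `xs` by
    `factor`, in place, using the same annotation style as above.
    '''
    for i in range(shape(xs, 0)):
        for j in range(shape(xs, 1)):
            for k in range(shape(xs, 2)):
                xs[i, j, k] = xs[i, j, k] * factor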