#!/usr/bin/env python
# coding: utf-8

# # Getting Started with DaCe
#
# DaCe is a Python library that enables optimizing code with ease, from running on a single core to a full supercomputer. With the power of data-centric transformations, it can automatically map code to CPUs, GPUs, and FPGAs.
#
# Let's get started with DaCe by importing it:

# In[1]:

import dace

# A data-centric program can be generated from several general-purpose and domain-specific languages. Our main frontend, however, is Python/numpy. To define a program, we take an existing function on numpy arrays and decorate it with `@dace.program`:

# In[2]:

@dace.program
def getstarted(A):
    return A + A

# Running our DaCe program, we will see several outputs and a prompt. These are the available transformations we can apply. For this first step, we opt to apply none (press Enter) and proceed with compilation and execution:

# In[3]:

import numpy as np
a = np.random.rand(2, 3)
a

# In[4]:

getstarted(a)

# The result is, as expected, `2*A`.
#
# Now, let's inspect the intermediate representation of the data-centric program, its Stateful Dataflow Multigraph (SDFG):

# In[5]:

getstarted.to_sdfg(a)

# You can drag the handle at the bottom right to make the SDFG frame larger.
#
# Notice the following four elements in the graph:
#
# 1. **State** (blue region): This is the control-flow part of the application, represented as a state machine. Since there is no control flow in the data-centric representation of `A+A`, we see only one state encompassing the computation.
# 2. **Arrays** (circular nodes) and **Memlets** (arrows): These nodes represent disjoint N-dimensional memory regions (similar to numpy `ndarray`s), and the edges represent data that is moved throughout the state. Hovering over a memlet shows more information about the subset being moved.
# 3. **Tasklets** (octagons): These nodes represent the computational parts of the graph. Zooming in will show the code (an addition operation in this case). Tasklets act as pure functions that can only work with the data coming into/out of their **connectors** (cyan circles on the node).
# 4. **Maps** (trapezoids): Anything enclosed between these two nodes (the map *scope*) is replicated the number of times specified on the node (in our case, `2*3` times). This creates parametric parallelism in the graph, and maps can be nested within each other for efficient parallelization and distribution of work.
#
# Unfortunately (or fortunately, in some cases), this graph is specialized for a specific array size (as given to it) and will not work on other sizes. To compile a program that works with arbitrary sizes, we'll need to use symbolic sizes.

# ## Symbols
#
# DaCe includes a symbolic math engine (extending SymPy) to support symbolic expressions for sizes, ranges, accesses, and more.
#
# Any number of symbols can be used throughout a computation. Defining a symbol is as easy as calling:

# In[6]:

N = dace.symbol('N')

# which we can now use for any computations and definitions. For example, annotating the argument type of our function from above yields a version that works with any size:

# In[7]:

@dace.program
def getstarted_sym(A: dace.float64[N, 2*N]):
    return A + A

# In[8]:

getstarted_sym.to_sdfg()

# If we compile this code, any array whose shape matches `N x 2N` will automatically be used to infer the value of `N` and invoke the function:

# In[9]:

getstarted_sym(np.random.rand(100, 200))
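# The same mechanism works with any number of symbols. As a small sketch (not part of the original notebook; the symbol `M` and the function name are illustrative), here is a variant with two independent symbols, both inferred from the argument's shape:

# In[ ]:

M = dace.symbol('M')


@dace.program
def getstarted_sym2(A: dace.float64[M, N]):
    return A + A


# M and N are inferred from the shape of the argument (here M=3, N=5)
getstarted_sym2(np.random.rand(3, 5))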
# ## Performance
#
# Given our symbolic SDFG, we would rather not recompile it every time. Thus, we can pre-compile the graph into a .so/.dll file:

# In[12]:

csdfg = getstarted_sym.compile()

# A compiled SDFG, however, has to be invoked like an SDFG: with keyword arguments only, including values for the symbols:

# In[13]:

b = csdfg(A=np.random.rand(10, 20), N=np.int32(10))

# We can now compare the performance of the code on large arrays against numpy:

# In[14]:

tester = np.random.rand(2000, 4000)

# In[15]:

get_ipython().run_line_magic('timeit', 'tester + tester')

# In[16]:

get_ipython().run_line_magic('timeit', 'csdfg(A=tester, N=np.int32(2000))')

# ## Explicit Dataflow
#
# One can specify explicit dataflow in DaCe using the `for i in dace.map[begin:end]:` syntax, as well as write tasklets manually using `with dace.tasklet:`. Here is a real-world example (Scattering Self-Energies) with an 8-dimensional parallel computation:

# In[17]:

# Declaration of symbolic variables
Nkz, NE, Nqz, Nw, N3D, NA, NB, Norb = (
    dace.symbol(name)
    for name in ['Nkz', 'NE', 'Nqz', 'Nw', 'N3D', 'NA', 'NB', 'Norb'])


@dace.program
def sse_sigma(neigh_idx: dace.int32[NA, NB],
              dH: dace.complex128[NA, NB, N3D, Norb, Norb],
              G: dace.complex128[Nkz, NE, NA, Norb, Norb],
              D: dace.complex128[Nqz, Nw, NA, NB, N3D, N3D],
              Sigma: dace.complex128[Nkz, NE, NA, Norb, Norb]):
    # Declaration of the Map scope
    for k, E, q, w, i, j, a, b in dace.map[0:Nkz, 0:NE, 0:Nqz, 0:Nw,
                                           0:N3D, 0:N3D, 0:NA, 0:NB]:
        dHG = G[k - q, E - w, neigh_idx[a, b]] @ dH[a, b, i]
        dHD = dH[a, b, j] * D[q, w, a, b, i, j]
        Sigma[k, E, a] += dHG @ dHD


sse_sigma.to_sdfg()
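# The program above expresses the per-iteration computation with numpy-style operators inside the map. To make the dataflow fully explicit, the `with dace.tasklet:` syntax mentioned earlier can be used inside a map scope, declaring the data each tasklet reads and writes. Below is a minimal sketch, assuming the standard DaCe tasklet memlet notation (`<<` for inputs, `>>` for outputs); the function name and arrays are illustrative and not part of the original notebook:

# In[ ]:

@dace.program
def doubler(A: dace.float64[N], B: dace.float64[N]):
    # Parallel map over all N elements
    for i in dace.map[0:N]:
        with dace.tasklet:
            a << A[i]   # input memlet: read A[i] into the local scalar `a`
            b >> B[i]   # output memlet: write the local scalar `b` to B[i]
            b = 2 * a


inp = np.random.rand(10)
out = np.zeros(10)
doubler(inp, out)
out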