#!/usr/bin/env python
# coding: utf-8

# # Intro
#
# In this example, we solve the simplest problem I can think of: a network of linear springs joining particles. Let the original position of each particle be $\mathbf{x}_A$, where the springs are all at equilibrium. We will be solving for the new positions $\mathbf{y}_A$. For each particle in the network, we solve the force balance equation
# \begin{equation}
# 0 = \sum_{B \,\text{joined}\, A} \mathbf{f}\left(\mathbf{y}_A,\mathbf{y}_B\right) \quad \forall \,\text{nodes}\,A
# \end{equation}
# We define the residual of the system $\mathbf{R}$ to be the stack of all of these force balance equations, giving a system of equations that is $N_{equ}=gdim \times N_{particle}$ long. Let $\mathbf{y}$ be the stack of all of the particle positions. The spring force $\mathbf{f}$ is nonlinear due to its dependence on the orientation of the particles. We use Newton's method to solve the equation $\mathbf{R}\left(\mathbf{y}\right)=0$ by iterating on
# \begin{equation}
# \frac{\partial\mathbf{R}}{\partial\mathbf{y}}\Delta\mathbf{y} = - \mathbf{R}\left(\mathbf{y}\right)
# \end{equation}
# and updating $\mathbf{y}$ by $\Delta \mathbf{y}$ until convergence.
#
# We have to do this in two steps:
#
# 1. Use popcorn to define a computational kernel that computes the force balance tangent and residual, and
# 2. Use cornflakes to make a representation of a spring network, apply the kernel to it, and solve it with Newton's method.
#
# Normally, the cornflakes paradigm is to put the kernel specifications into a separate file so that they are only generated once. Multiple scripts then load the compiled result. That is, the contents of "Part 1" and "Part 2" should be in two different files. They're both together in this notebook for the sake of the discussion.
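# The Newton iteration described above can be sketched generically in a few lines of NumPy. This is a minimal illustration, not cornflakes code; the `newton` helper and the one-dof test problem are made up for this sketch.

```python
import numpy as np

def newton(residual, jacobian, y0, tol=1e-10, maxit=20):
    """Solve R(y) = 0 by iterating on J(y) dy = -R(y) and updating y by dy."""
    y = y0.copy()
    for _ in range(maxit):
        R = residual(y)
        if np.linalg.norm(R) < tol:
            break
        y += np.linalg.solve(jacobian(y), -R)
    return y

# Toy 1-dof check: R(y) = y^2 - 2 has the root sqrt(2)
root = newton(lambda y: y ** 2 - 2.0,
              lambda y: np.diag(2.0 * y),
              np.array([1.0]))
```

# In the real problem, `residual` and `jacobian` are exactly what the popcorn kernel below assembles for us.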
# # Part 1: The Popcorn kernel

# In[1]:

from popcorn import *
init_printing() # Sympy method to render the equations

# Note: $\mathbf{x}$ and $\mathbf{y}$ are not the axis labels; they are both points. Axes are referred to by the numbers 0, 1, and 2.

# In[2]:

gdim = 2
Scalar = DofSpace(1,0,2)
Vector = DofSpace(gdim,0,2)
Param  = DofSpace(1,-1)
i_x = Input('x',Vector)
i_y = Input('y',Vector)
i_k = Input('k',Param)

# `Input`s are (slightly hacky) extensions of the `Matrix` class of Sympy, so you can use them directly to build symbolic expressions.

# In[3]:

print(i_x)

# The `Vertex_Split` method of `Input` is a convenient way to quickly cut it up.

# In[4]:

x0,x1 = i_x.Vertex_Split()
y0,y1 = i_y.Vertex_Split()
x1

# Now that we have cut up the end points of the springs, we can build useful symbolic expressions with them. Let us make expressions for the deformed and reference lengths of the spring.

# In[5]:

norm = lambda a : sqrt( (a.T*a)[0,0] )
L_defo = norm(y1-y0)
L_orig = norm(x1-x0)
L_defo

# The notebook interface is particularly useful for this type of work because it renders our equations! Imagine how useful this is for ensuring that you typed the equations in correctly! Especially because we could build them symbolically, instead of having to type in vector concepts unrolled into double arrays.
#
# The ugly `[0,0]` is there because 1x1 Matrices don't behave like normal Symbols in Sympy. This infuriates me, but I get why. It is just something to watch out for, and why rendering in the notebook is useful.

# Now we are able to define the force between the springs,
# \begin{equation}
# \mathbf{f} = -k \frac{L-L_o}{L_o} \hat{\mathbf{r}}
# \end{equation}

# In[6]:

f = i_k[0]*(L_defo-L_orig)/L_orig * (y1-y0)/L_defo
f

# Similarly, we have to index `i_k` to get a scalar because all `PopcornVariable` types (`Input` and `Output` are children) are Matrices. (Conditional inheritance to subclass `Symbol` on length-1 variables was attempted and abandoned.
# There is still conditional inheritance; `PopcornVariable`s can have an unknown size based on `l_edge`, in which case they cannot be represented as a Matrix and need to be indexed into with `Vertex_Handle` or `__getitem__`. Indexing with symbolic expressions is supported, returning a `Matrix` of fixed size.)

# In[7]:

R = Matrix([f,-f])
R

# Let's make sure we did that right...

# In[8]:

R.shape

# In[9]:

K = R.jacobian(i_y)
K.simplify() # This takes a long time!
K

# Now we define two output `PopcornVariable`s.
#
# There is a global context that manages a list of initialized Inputs, Outputs, and Kernels. Each Kernel traverses the program listing to figure out which variables need to be included in its arguments, making it easy to make a whole bunch of different kernels with different arguments that share the same symbolic work.

# In[10]:

o_R = Output('R',[Vector],1)
o_K = Output('K',[Vector],2)
Kernel("spring_force",
       listing=[ Asgn(o_R,R,'='),
                 Asgn(o_K,K,'=') ])

# The final line of a popcorn specification is the `Husk` call. $<$bad pun$>$The husk contains the kernels.$<$/bad pun$>$ It generates all of the C code files for all of the Kernels defined in the file, plus the required build system and Python wrapper.

# In[11]:

# This rm line is to highlight the mechanics of it in the next section
get_ipython().run_line_magic('rm', '-rf husk_spring')
# The final call:
Husk('spring')

# # Intermezzo: the husk

# The `Husk` command writes out a Python module with the name `"husk_"+argument`. There is no hidden cache or JIT runtime in cornflakes: everything is sitting in this directory, which you can move around and ship with your production code. This design choice keeps the generated code out in the open: the popcorn file only needs to be run once, and the husk is then loaded every time you run a calculation.
#
# What's in the husk?

# In[12]:

get_ipython().run_line_magic('ls', 'husk_spring')

# This directory contains the C files, the autogenerated SWIG interface, and a CMake specification, all within the confines of a Python module.
# We can just import it now,

# In[13]:

import husk_spring

# If you look at the standard out of the notebook server, you'll see the compilation process for the husk:
# ```
# [ 25%] Swig source
# Scanning dependencies of target _spring_lib
# [ 50%] Building C object CMakeFiles/_spring_lib.dir/spring_swigPYTHON_wrap.c.o
# [ 75%] Building C object CMakeFiles/_spring_lib.dir/spring_force.c.o
# [100%] Linking C shared module _spring_lib.so
# [100%] Built target _spring_lib
# ```
# It will only compile the first time. If the husk isn't touched, only the "Built target" line will be present.

# In[14]:

get_ipython().run_line_magic('ls', 'husk_spring')

# Everything has been built for the current machine.
# You can even delete all of the files and keep only `__init__.pyc` and `_spring_lib.so` if the containing code is sensitive. (Not every application is open source.) The original file containing the popcorn specification does not need to be included in the shipped code, either.
#
# *Pitfall warning*: You'll have to purge the husk manually if you're working within, say, a Dropbox folder synced to both a Linux and a Mac machine. It gets confused. This is not an issue if you copy the husk into a folder such as /opt/myprogram with an install step and import it from there.

# The generated C file is even (more or less) human readable, as we can see by examining it with the long equations truncated:

# In[15]:

get_ipython().system('cut -c -90 husk_spring/spring_force.c')

# The file has two parts: 1) the generated C function, `spring_force_eval`, and 2) the kernel struct that serves as its call signature for cornflakes. The generated C function is self-explanatory to call; all of the inputs and outputs are just pointers to `real_t`s (which is a `typedef` to `double` by default).
#
# **It's possible to call the generated C function by itself without using cornflakes!** In that sense, popcorn can be used to help generate code for existing software without having to completely switch mindsets to use cornflakes.
# Don't read too much into the phrase "human-readable". In Stone, an LBNL geomechanics code linked to TOUGH+, some of the kernel C files are up to 2MB! This is on par with the size of the C++ classes that FEniCS's ffc can generate. The whole reason for pursuing Python-based code generation is that hand-programming C or Fortran code for some of the equations we encounter isn't even humanly tractable!
#
# The user has no reason to ever look at any of these files (unlike myself, who spends a lot of time looking at them to debug them). It is possible to directly inject C code into the function from the Python specification, which may merit a double-check to make sure it looks right. This husk directory can be treated as a black box that can be moved around, copied from system to system, and imported by the actual calculation in Python.

# # Part 2: Cornflakes

# **I restart the Jupyter kernel here.**

# In[1]:

import cornflakes as cf
import numpy as np
from matplotlib import pylab as plt

# The husk containing our kernels is an importable module.

# In[2]:

import husk_spring

# Even though cornflakes doesn't care about the geometry, it still provides some basic helper functions for building up geometries, since that's probably what you want to do with it.

# In[3]:

x = cf.PP.init_grid(10,10,[0,0],[1,0],[0,1])
y = x.copy()
H = cf.Graphers.Build_Pair_Graph(x,0.2)

# Let's take a look at entries 10 through 19 of the graph:

# In[4]:

H.view()[0][10:20]

# In cornflakes, every hypervertex is assigned a unique integer. Each hyperedge is an ordered list of hypervertices of arbitrary length. In our model, the hypergraph is just a list of pairs.
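# For intuition, a pair graph like this can be reproduced with SciPy's KD-tree. This is only an illustrative stand-in; the function name `build_pair_graph` and its cutoff semantics are my assumptions, not cornflakes' actual implementation of `Build_Pair_Graph`.

```python
import numpy as np
from scipy.spatial import cKDTree

def build_pair_graph(x, cutoff):
    """All pairs of points within `cutoff` of each other, as (i, j) edges
    with i < j. A plain-SciPy stand-in for a pair-graph builder."""
    tree = cKDTree(x)
    return sorted(tree.query_pairs(cutoff))

# A unit square of 4 points: a cutoff of 1.1 links the sides but not the diagonals
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
edges = build_pair_graph(pts, 1.1)
```

# Each edge is a pair of hypervertex ids, matching the list-of-pairs hypergraph described above.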
# If we plot the edges in H as lines connecting the points, which is the physical interpretation of our model, we see the physical object we built in two lines of code:

# In[5]:

plt.scatter(x[:,0],x[:,1],color='k')
for e in H:
    plt.plot(x[e,0],x[e,1],color='k')
plt.show()

# The data keys and the `Input`s of the kernel **must** have the same names. Don't worry if you are mixing and matching kernels, though; it's possible to reference the same arrays in different data dicts with different names. Cornflakes doesn't force you to organize your data a certain way; you get to tell cornflakes where it is! The data dict is just an argument list; it can be inlined in the `Assemble` call, but it's often good practice to declare it once and keep a consistent variable naming scheme everywhere.

# In[6]:

dm_scalar = cf.Dofmap_Strided(1)
dm_vector = cf.Dofmap_Strided(2)
data = {
    'x':(x,dm_vector),
    'y':(y,dm_vector),
    'k':(np.array([1.0]),dm_scalar)
}

# The Dofmap objects take Vertex ids as inputs and return the global degree-of-freedom indices assigned to them:

# In[7]:

dm_vector.Get(2)

# In[8]:

dm_scalar.Get(2)

# There's also `Dofmap_Tabled` for using a lookup table on the hypervertex id.

# In[9]:

K,R = cf.Assemble2(husk_spring.kernel_spring_force,
                   H,data,
                   {'R':(dm_vector,),'K':(dm_vector,)},
                   ndof=x.size)

# In the simplest case, cornflakes returns Numpy and Scipy objects. This is how I do my debugging and new model development. The ability to play with and spy on the matrix is extremely useful:

# In[10]:

plt.spy(K[0:40,0:40])
plt.show()

# Now we must apply some boundary conditions. The bottom nodes are clamped, and the top nodes have a vertical load applied to them. Then we use Scipy to solve the sparse system.
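# A strided dofmap is simple enough to mimic in plain NumPy. Assuming `Dofmap_Strided(s)` lays dofs out in contiguous blocks of size `s` per vertex (an assumption based on the name, not confirmed against the cornflakes source), `Get` would behave like:

```python
import numpy as np

def dofmap_strided_get(vertex_id, stride):
    # Hypothetical stand-in for cf.Dofmap_Strided(stride).Get(vertex_id):
    # vertex v owns the contiguous dof block [stride*v, stride*(v+1))
    return np.arange(stride * vertex_id, stride * (vertex_id + 1))

dofmap_strided_get(2, 2)  # vector dofs of vertex 2: [4, 5]
dofmap_strided_get(2, 1)  # scalar dof of vertex 2: [2]
```

# This is why one array can be shared by several dofmaps: the dofmap is just a rule mapping hypervertex ids into a flat index space.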
# In[11]:

# Bottom Dirichlet BCs
marked_bot = cf.select_nodes(x, lambda a: a[1]<0.0001)
bcdofs = dm_vector.Get_List(marked_bot)
bcvals = np.zeros(bcdofs.shape,dtype=np.double)
cf.Apply_BC(bcdofs,bcvals, K,R)

# Top loaded BCs
marked_top = cf.select_nodes(x, lambda a: a[1]>1.0-0.0001)
loaddofs = dm_vector.Get_List(marked_top).reshape(len(marked_top),2)[:,1] # Just the y components
R[loaddofs] -= 0.5

# Solve the system
import scipy.sparse.linalg as splin
u = -splin.spsolve(K,R)
u = u.reshape(x.shape) # it comes out flat
y += u

# Finally, this problem is trivial enough that we can just plot it inside of the notebook itself.

# In[12]:

plt.scatter(x[:,0],x[:,1])
plt.scatter(y[:,0],y[:,1],color='k')
for e in H:
    plt.plot(y[e,0],y[e,1],color='k')
plt.show()

# # Outro
#
# This was the simplest possible model. The next set of examples will demonstrate the full capabilities of popcorn and cornflakes.
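# As a closing aside, here is what a Dirichlet-style `Apply_BC` plausibly does under the hood, sketched with plain SciPy. The helper name `apply_dirichlet` and the row-elimination strategy are my assumptions, not cornflakes' actual implementation; the sign on the residual entry is chosen to match the `u = -spsolve(K, R)` convention used above.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as splin

def apply_dirichlet(K, R, bcdofs, bcvals):
    """Enforce u[dof] = val by replacing each constrained row of K with an
    identity row and setting the matching residual entry (row elimination)."""
    K = K.tolil()  # lil format allows cheap row edits
    for dof, val in zip(bcdofs, bcvals):
        K[dof, :] = 0.0
        K[dof, dof] = 1.0
        R[dof] = -val  # sign chosen so u = -spsolve(K, R) returns val there
    return K.tocsc(), R

# Tiny 2-dof demonstration: clamp dof 0 at 0.5
K = sp.csc_matrix(np.array([[2.0, 1.0], [1.0, 2.0]]))
R = np.array([1.0, 1.0])
K, R = apply_dirichlet(K, R, [0], [0.5])
u = -splin.spsolve(K, R)
```

# Row elimination is the simplest approach; it breaks matrix symmetry, which is fine for a direct solver like `spsolve` but worth knowing about before reaching for a symmetric iterative solver.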