cadCAD Tutorials: The Robot and the Marbles, Networks Addition

In Part 2 we introduced the 'language' in which a system must be described in order for it to be interpretable by cadCAD, as well as some of the basic concepts of the library:

  • State Variables
  • Timestep
  • Policies
  • State Update Functions
  • Partial State Update Blocks
  • Simulation Configuration Parameters

In the previous example, we observed how two robotic arms acting in parallel could result in counterintuitive system-level behavior despite the simplicity of the individual robotic arm policies. In this notebook we'll introduce the concept of networks. This is done by extending from two boxes of marbles to n boxes, which are the nodes in our network. Furthermore, there are going to be arms between some of the boxes but not others, forming a network where the arms are the edges.

The robot and the marbles

  • Picture a set of n boxes (the nodes of our network) with an integer number of marbles in each; a network of robotic arms is capable of taking a marble from one of their boxes and dropping it into the other one.
  • Each robotic arm in the network only controls 2 boxes, and it acts by moving a marble from one box to the other.
  • Each robotic arm is programmed to take one marble at a time from the box containing the largest number of marbles and drop it in the other box. It repeats that process until the boxes contain an equal number of marbles, as the sketch after this list illustrates.
  • For the purposes of our analysis of this system, suppose we are only interested in monitoring the number of marbles in each of the boxes.
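
Before encoding this system in cadCAD, here is a minimal standalone sketch (plain Python, not part of the model) of the rule a single fair arm follows. Note that when the total number of marbles is odd, the counts end up oscillating around the balance point, so the loop is capped:

import numpy as np

box_a, box_b = 10, 4
for _ in range(50):
    if box_a == box_b:
        break #balanced: stop moving marbles
    delta = np.sign(box_a - box_b) #+1 moves a marble from A to B; -1 the reverse
    box_a -= delta
    box_b += delta
print(box_a, box_b) #-> 7 7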
In [1]:
from cadCAD.engine import ExecutionMode, ExecutionContext, Executor
from cadCAD.configuration import Configuration
import networkx as nx
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
%matplotlib inline

T = 50 #number of timesteps in our simulation
n = 3 #number of boxes (nodes) in our network
m = 2 #for a Barabasi-Albert graph, the number of edges is (n-m)*m

G = nx.barabasi_albert_graph(n, m)
k = len(G.edges)
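
A quick sanity check of the comment above (a sketch, relying on the standard networkx construction in which the remaining n-m nodes each attach with m edges):

assert k == (n - m) * m #2 edges for n=3, m=2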
In [2]:
balls = np.zeros(n) #state vector: ball count per box

for node in G.nodes:
    rv = np.random.randint(1,25) #random initial count between 1 and 24
    G.nodes[node]['initial_balls'] = rv
    balls[node] = rv
    G.nodes[node]['balls'] = rv #mirror the count on the graph node itself
In [3]:
scale = 100
nx.draw_kamada_kawai(G, node_size=balls*scale, labels=nx.get_node_attributes(G,'initial_balls'))
In [4]:
initial_conditions = {'balls':balls, 'network':G}
In [5]:
#input: the deltas along the edges; output: the updated boxes
#mechanism: edge-by-node dimensional operator
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
# We make the state update functions less "intelligent",
# i.e. they simply add the number of marbles specified in _input
# (which, per the policy function definition, may be negative)


def update_balls(params, step, sL, s, _input):
    
    delta_balls = _input['delta']
    new_balls = s['balls']
    graph = s['network']
    
    for e in graph.edges:
        move_ball = delta_balls[e] #positive values move a marble from e[0] to e[1]
        src = e[0]
        dst = e[1]
        #apply the transfer only if neither box would go negative
        if (new_balls[src] >= move_ball) and (new_balls[dst] >= -move_ball):
            new_balls[src] = new_balls[src]-move_ball
            new_balls[dst] = new_balls[dst]+move_ball
            
    
    key = 'balls'
    value = new_balls
    
    return (key, value)

def update_network(params, step, sL, s, _input):
    
    new_nodes = _input['nodes']
    new_edges = _input['edges']
    new_balls = _input['quantity']
    
    graph = s['network']
    
    for node in new_nodes:
        graph.add_node(node)
        graph.nodes[node]['initial_balls'] = new_balls[node]
        graph.nodes[node]['balls'] = new_balls[node]
        graph.nodes[node]['strat'] = _input['node_strats'][node]
        
    for edge in new_edges:
        graph.add_edge(edge[0], edge[1])
        graph.edges[edge]['strat'] = _input['edge_strats'][edge]
    
    
    key = 'network'
    value = graph
    return (key, value)

def update_network_balls(params, step, sL, s, _input):
    
    new_nodes = _input['nodes']
    new_balls = _input['quantity']
    balls = np.zeros(len(s['balls'])+len(new_nodes)) #extend the vector to make room for arriving nodes
    
    for node in s['network'].nodes:
        balls[node] = s['balls'][node]
    
    for node in new_nodes:
        balls[node] = new_balls[node]
            
    key = 'balls'
    value = balls
    
    return (key, value)

def update_balls_in_network(params, step, sL, s, _input):
    
    graph = s['network']
    #copy the balls state vector onto the graph's node attributes to keep them in sync
    for node in graph.nodes:
        graph.nodes[node]['balls'] = s['balls'][node]
            
    key = 'network'
    value = graph
    
    return (key, value)
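
As a throwaway check (not part of the model), note that update_balls conserves the total number of marbles: every accepted transfer subtracts from one box exactly what it adds to the other. A hypothetical unit transfer along every edge illustrates this:

toy_state = {'balls': balls.copy(), 'network': G} #throwaway copy of the initial state
toy_input = {'delta': {e: 1 for e in G.edges}} #hypothetical input: move one marble along each edge
_, result = update_balls({}, 0, [], toy_state, toy_input)
assert result.sum() == balls.sum() #the total is unchanged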
In [6]:
# this time let's make three kinds of robots
def greedy_robot(src_balls, dst_balls):
    #robot wishes to accumulate balls at its source
    #by taking half of its neighbor's balls at a time
    #(a negative delta moves marbles from dst to src; see update_balls)
    if src_balls < dst_balls:
        delta = -np.floor(dst_balls/2)
    else:
        delta = 0
    
    return delta

def fair_robot(src_balls, dst_balls):
    #robot follows the simple balancing rule, moving one
    #marble at a time from the fuller box to the emptier one
    delta = np.sign(src_balls-dst_balls)
    
    return delta


def giving_robot(src_balls, dst_balls):
    #robot wishes to give away balls one at a time
    #(a positive delta moves a marble from src to dst)
    if src_balls > 0:
        delta = 1
    else:
        delta = 0
    
    return delta
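
To compare the three behaviors, here is a quick illustration (not part of the model) evaluating each strategy on the same pair of ball counts; recall that a positive delta moves a marble from src to dst:

for robot in (greedy_robot, fair_robot, giving_robot):
    print(robot.__name__, robot(4, 10))
#greedy_robot -5.0 (pulls half of the richer neighbor's balls toward src)
#fair_robot -1 (moves one marble toward the emptier box)
#giving_robot 1 (gives one of src's marbles away)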
In [7]:
#in the previous version the robots were assigned to the edges;
#moving towards an agent-based model formulation, we assign the strategies
#to the nodes instead
robot_strategies = [greedy_robot,fair_robot, giving_robot]

for node in G.nodes:
    nstrats = len(robot_strategies)
    rv  = np.random.randint(0,nstrats)
    G.nodes[node]['strat'] = robot_strategies[rv]

for e in G.edges:
    owner_node = e[0] #each edge inherits the strategy of its first endpoint
    G.edges[e]['strat'] = G.nodes[owner_node]['strat']
In [8]:
#Policy: node-by-edge dimensional operator
#input: the states of the boxes; output: the deltas along the edges
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
# We specify the robotic network's logic in a Policy Function;
# unlike previous examples, our policy controls a vector-valued action defined over the edges of our network
def robotic_network(params, step, sL, s):
    
    graph = s['network']
    
    
    delta_balls = {}
    for e in graph.edges:
        src = e[0]
        src_balls = s['balls'][src]
        dst = e[1]
        dst_balls = s['balls'][dst]
        
        #transfer balls according to specific robot strat
        strat = graph.edges[e]['strat']
        delta_balls[e] = strat(src_balls,dst_balls)
        
    return_dict = {'nodes': [],'edges': {}, 'quantity': {}, 'node_strats':{},'edge_strats':{},'delta': delta_balls}

    return(return_dict)
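
As a sketch, we can evaluate the policy once on the initial conditions to inspect the edgewise deltas it proposes (the values depend on the random strategy assignment above):

s0 = {'network': G, 'balls': balls}
print(robotic_network({}, 0, [], s0)['delta']) #dict keyed by edge, e.g. {(0, 2): ..., (1, 2): ...}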
In [9]:
def agent_arrival(params, step, sL, s):
    
    node = len(s['network'].nodes) #index of the arriving node
    node_list = [e[1] for e in s['network'].edges]
    
    #choose m target nodes to attach to; sampling edge endpoints biases the draw
    #toward high-degree nodes (np.random.choice samples with replacement,
    #so the set may contain fewer than m distinct targets)
    m = 2
    new = set(np.random.choice(node_list,m))
    #new = [0, 1]#tester
    
    nodes = [node]
    edges = [(node,new_node) for new_node in new]
    
    initial_balls = {node:np.random.randint(1,25)}
    
    rv = np.random.randint(0,nstrats)
    node_strat = {node: robot_strategies[rv]}
    
    edge_strats = {e: robot_strategies[rv] for e in edges} #new edges inherit the arriving node's strategy

    return_dict = {'nodes': nodes,
            'edges': edges, 
            'quantity':initial_balls, 
            'node_strats':node_strat,
            'edge_strats':edge_strats,
            'delta': np.zeros(node+1)
           } 
    return(return_dict)
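
The endpoint sampling above is what makes arrivals preferential: a node appears in node_list once for every edge in which it is the second endpoint, so (roughly) the higher a node's degree, the more often it is drawn. A quick illustration on the current graph (with the caveat that nodes appearing only as a first endpoint can never be selected):

from collections import Counter
print(Counter(e[1] for e in G.edges)) #multiplicity of each node in node_list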
In [10]:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
# In the Partial State Update Blocks, the user specifies if state update functions will be run in series or in parallel
# Within each timestep: substep 1 moves marbles along the edges, substep 2 syncs the graph's
# ball attributes, and substep 3 handles the arrival of a new agent (node)
partial_state_update_blocks = [
    { 
        'policies': { # The following policy functions will be evaluated and their returns will be passed to the state update functions
            'p1': robotic_network
        },
        'variables': { # The following state variables will be updated simultaneously
            'balls': update_balls      
        }
    },    
    {
        'policies': {
        },
        'variables': { # The following state variables will be updated simultaneously
            'network': update_balls_in_network
        }
    },
    {
      'policies': { # The following policy functions will be evaluated and their returns will be passed to the state update functions
            'p1': agent_arrival
        },
        'variables': { # The following state variables will be updated simultaneously
            'network': update_network,
            'balls': update_network_balls
        }
    }
]
In [11]:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
# Settings of general simulation parameters, unrelated to the system itself
# `T` is a range with the number of discrete units of time the simulation will run for;
# `N` is the number of times the simulation will be run (Monte Carlo runs)
# In this example, we'll run the simulation once (N=1) for a duration of T=50 timesteps
# We'll cover the `M` key in a future article. For now, we pass a placeholder
from cadCAD.configuration.utils import config_sim
simulation_parameters = config_sim({
    'T': range(T),
    'N': 1,
    'M': {'p':[0]}
})
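config_sim expands this dict into a list of simulation-parameter dictionaries, one per combination of swept `M` values (here a single one); inspecting the result is a quick sanity check:

print(simulation_parameters) #a list containing one dict with the T, N, M settings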
In [12]:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
# The configurations above are then packaged into a `Configuration` object
from cadCAD.configuration import append_configs
append_configs(
    initial_state=initial_conditions, #dict containing variable names and initial values
    partial_state_update_blocks=partial_state_update_blocks, #dict containing state update functions
    sim_configs=simulation_parameters #dict containing simulation parameters
)
In [13]:
from tabulate import tabulate
from cadCAD.engine import ExecutionMode, ExecutionContext, Executor
from cadCAD import configs
import pandas as pd

exec_mode = ExecutionMode()
multi_proc_ctx = ExecutionContext(context=exec_mode.multi_proc)
run = Executor(exec_context=multi_proc_ctx, configs=configs)
In [14]:
i = 0
verbose = False
results = {}
for raw_result, tensor_field in run.execute():
    result = pd.DataFrame(raw_result)
    if verbose:
        print()
        print(f"Tensor Field: {type(tensor_field)}")
        print(tabulate(tensor_field, headers='keys', tablefmt='psql'))
        print(f"Output: {type(result)}")
        print(tabulate(result, headers='keys', tablefmt='psql'))
        print()
    results[i] = {}
    results[i]['result'] = result
    results[i]['simulation_parameters'] = simulation_parameters[i]
    i += 1
multi_proc: [<cadCAD.configuration.Configuration object at 0x1019c24f28>]
[<cadCAD.configuration.Configuration object at 0x1019c24f28>]
In [15]:
df = pd.DataFrame(raw_result)
balls_list = [b for b in df.balls]
In [16]:
df.head(13)
Out[16]:
balls network run substep timestep
0 [4.0, 20.0, 20.0] (0, 1, 2) 1 0 0
1 [5.0, 20.0, 19.0] (0, 1, 2) 1 1 1
2 [5.0, 20.0, 19.0] (0, 1, 2) 1 2 1
3 [5.0, 20.0, 19.0, 14.0] (0, 1, 2, 3) 1 3 1
4 [6.0, 19.0, 19.0, 14.0] (0, 1, 2, 3) 1 1 2
5 [6.0, 19.0, 19.0, 14.0] (0, 1, 2, 3) 1 2 2
6 [6.0, 19.0, 19.0, 14.0, 5.0] (0, 1, 2, 3, 4) 1 3 2
7 [7.0, 19.0, 18.0, 13.0, 6.0] (0, 1, 2, 3, 4) 1 1 3
8 [7.0, 19.0, 18.0, 13.0, 6.0] (0, 1, 2, 3, 4) 1 2 3
9 [7.0, 19.0, 18.0, 13.0, 6.0, 10.0] (0, 1, 2, 3, 4, 5) 1 3 3
10 [8.0, 18.0, 17.0, 11.0, 7.0, 12.0] (0, 1, 2, 3, 4, 5) 1 1 4
11 [8.0, 18.0, 17.0, 11.0, 7.0, 12.0] (0, 1, 2, 3, 4, 5) 1 2 4
12 [8.0, 18.0, 17.0, 11.0, 7.0, 12.0, 9.0] (0, 1, 2, 3, 4, 5, 6) 1 3 4
In [17]:
df['balls_from_graph'] = df.network.apply(lambda rec: [rec.nodes[node]['balls'] for node in rec.nodes])
In [18]:
df.head(13)
Out[18]:
balls network run substep timestep balls_from_graph
0 [4.0, 20.0, 20.0] (0, 1, 2) 1 0 0 [4, 20, 20]
1 [5.0, 20.0, 19.0] (0, 1, 2) 1 1 1 [4, 20, 20]
2 [5.0, 20.0, 19.0] (0, 1, 2) 1 2 1 [5.0, 20.0, 19.0]
3 [5.0, 20.0, 19.0, 14.0] (0, 1, 2, 3) 1 3 1 [5.0, 20.0, 19.0, 14]
4 [6.0, 19.0, 19.0, 14.0] (0, 1, 2, 3) 1 1 2 [5.0, 20.0, 19.0, 14]
5 [6.0, 19.0, 19.0, 14.0] (0, 1, 2, 3) 1 2 2 [6.0, 19.0, 19.0, 14.0]
6 [6.0, 19.0, 19.0, 14.0, 5.0] (0, 1, 2, 3, 4) 1 3 2 [6.0, 19.0, 19.0, 14.0, 5]
7 [7.0, 19.0, 18.0, 13.0, 6.0] (0, 1, 2, 3, 4) 1 1 3 [6.0, 19.0, 19.0, 14.0, 5]
8 [7.0, 19.0, 18.0, 13.0, 6.0] (0, 1, 2, 3, 4) 1 2 3 [7.0, 19.0, 18.0, 13.0, 6.0]
9 [7.0, 19.0, 18.0, 13.0, 6.0, 10.0] (0, 1, 2, 3, 4, 5) 1 3 3 [7.0, 19.0, 18.0, 13.0, 6.0, 10]
10 [8.0, 18.0, 17.0, 11.0, 7.0, 12.0] (0, 1, 2, 3, 4, 5) 1 1 4 [7.0, 19.0, 18.0, 13.0, 6.0, 10]
11 [8.0, 18.0, 17.0, 11.0, 7.0, 12.0] (0, 1, 2, 3, 4, 5) 1 2 4 [8.0, 18.0, 17.0, 11.0, 7.0, 12.0]
12 [8.0, 18.0, 17.0, 11.0, 7.0, 12.0, 9.0] (0, 1, 2, 3, 4, 5, 6) 1 3 4 [8.0, 18.0, 17.0, 11.0, 7.0, 12.0, 9]
In [19]:
#select the record
r = 10 #row 10 is run 1, substep 1, timestep 4

#select a specific node index in graph:
i = 2

#at substep 1 only the 'balls' state vector has been updated; the copy stored on
#the graph lags one substep behind, so the two values below disagree
print(df.iloc[r].network.nodes[i]['balls'])
print(df.balls[r][i])
18.0
17.0
In [20]:
df['sync_check'] = df.balls - df.balls_from_graph #elementwise difference; nonzero entries flag the lag between the two representations
In [21]:
df.head()
Out[21]:
balls network run substep timestep balls_from_graph sync_check
0 [4.0, 20.0, 20.0] (0, 1, 2) 1 0 0 [4, 20, 20] [0.0, 0.0, 0.0]
1 [5.0, 20.0, 19.0] (0, 1, 2) 1 1 1 [4, 20, 20] [1.0, 0.0, -1.0]
2 [5.0, 20.0, 19.0] (0, 1, 2) 1 2 1 [5.0, 20.0, 19.0] [0.0, 0.0, 0.0]
3 [5.0, 20.0, 19.0, 14.0] (0, 1, 2, 3) 1 3 1 [5.0, 20.0, 19.0, 14] [0.0, 0.0, 0.0, 0.0]
4 [6.0, 19.0, 19.0, 14.0] (0, 1, 2, 3) 1 1 2 [5.0, 20.0, 19.0, 14] [1.0, -1.0, 0.0, 0.0]
In [79]:
maxL = df['sync_check'].apply(len).max() #number of nodes at the end of the run
In [91]:
rdf = df[df.substep==1] #one record per timestep
nets = rdf.network.values
c = plt.cm.viridis(np.linspace(0, 1, maxL)) #one color per node, shared by all frames below
pos = nx.spring_layout(nets[-1]) #fix node positions from the final graph so every frame shares a layout
ns = [10*b for b in rdf.balls.values[-1]] #node size proportional to ball count
nx.draw(nets[-1], pos=pos, node_size=ns, node_color=c[:len(ns)])
axes = plt.axis() #capture the axis limits to reuse across frames
In [94]:
counter = 0
for i in range(len(nets)):
    ns = [10*b for b in rdf.balls.values[i]] #node size proportional to ball count
    m = len(ns)
    nx.draw(nets[i],pos=pos, node_size=ns, node_color=c[:m])
    plt.title("Update: $G_{"+str(i)+"}$ to $G_{"+str(i+1)+"}$")
    plt.axis(axes) #reuse the captured limits so every frame lines up
    #zero-pad filenames so the frames sort correctly when stitched into an animation
    if counter < 10:
        st = 'fig0'
    else:
        st = 'fig'
    plt.savefig(st+str(counter)+'.png')
    
    counter = counter+1
    plt.show()