Introduction to Neurokernel's API

This notebook illustrates how to define and connect local processing unit (LPU) models using Neurokernel.

Background

An LPU comprises two distinct populations of neurons (Chiang et al., 2011): local neurons project only to other neurons within the same LPU, while projection neurons project both to local neurons and to neurons in other LPUs. All synapses between neurons within an LPU are contained in its internal connectivity pattern. LPUs are linked by inter-LPU connectivity patterns that map one LPU's outputs to inputs in other LPUs. The general structure of an LPU is shown below:

Defining an LPU Interface

Interface Ports

All communication between LPUs must pass through ports, each of which is internally associated with a modeling element that emits or receives external data. An LPU's interface is defined as the set of ports it exposes to other LPUs. Each port is defined by a unique identifier string and by attributes that indicate whether

  • it transmits spikes (i.e., boolean values) or graded potentials (i.e., floating point numbers) at each step of model execution and whether
  • it accepts input or emits output.

To facilitate management of a large number of ports, Neurokernel requires that port identifiers conform to a hierarchical format similar to that used to label files or elements in structured documents. Each identifier may comprise multiple levels joined by separators (/ and []). Neurokernel also defines an extended format for selecting multiple ports with a single selector; a selector that cannot be expanded to an explicit list of individual port identifiers is said to be ambiguous. Rather than define a formal grammar for this format, the following table depicts examples of how it may be used to refer to multiple ports (several of these selectors are checked programmatically after the table).

Identifier/Selector      Comments
/med/L1[0]               selects a single port
/med/L1/0                equivalent to /med/L1[0]
/med+/L1[0]              equivalent to /med/L1[0]
/med[L1,L2][0]           selects two ports
/med/L1[0,1]             another example of two ports
/med/L1[0],/med/L1[1]    equivalent to /med/L1[0,1]
/med/L1[0:10]            selects a range of 10 ports
/med/L1/*                selects all ports starting with /med/L1
(/med/L1,/med/L2)+[0]    equivalent to /med/[L1,L2][0]
/med/[L1,L2].+[0:2]      equivalent to /med/L1[0],/med/L2[1]
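
Several of the selector operations in this table can be verified programmatically. The sketch below assumes that the expand(), count_ports(), and is_ambiguous() class methods of neurokernel.plsel.SelectorMethods behave as in recent Neurokernel releases:

from neurokernel.plsel import SelectorMethods

# Expand a selector into the individual ports it comprises:
print(SelectorMethods.expand('/med/L1[0:3]'))
# e.g. [('med', 'L1', 0), ('med', 'L1', 1), ('med', 'L1', 2)]

# Count the ports a selector comprises:
print(SelectorMethods.count_ports('/med/L1[0:10]'))  # 10

# A selector with an unbounded wildcard cannot be expanded into an
# explicit list of identifiers and is therefore ambiguous:
print(SelectorMethods.is_ambiguous('/med/L1/*'))     # True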

Inter-LPU Connectivity Patterns

All connections between LPUs must be defined in inter-LPU connectivity patterns that map the output ports of one LPU to the input ports of another LPU. Since individual LPUs may internally implement multiplexing of input signals to a single destination in different ways, the LPU interface only permits fan-out from individual output ports to multiple input ports; connections from multiple output ports may not converge on a single input port. A single pattern may define connections in both directions.

A connectivity pattern between two LPUs is fully specified by the identifiers and attributes of the ports in its two interfaces and the directed graph of connections defined between them. Note that each port's I/O direction is specified relative to the pattern: an LPU's output port serves as an input to the pattern, and vice versa. An example of such a pattern defined between ports /lam[0:6] and /med[0:5] follows:

Port      Interface  I/O  Port Type
/lam[0]   0          in   graded potential
/lam[1]   0          in   graded potential
/lam[2]   0          out  graded potential
/lam[3]   0          out  spiking
/lam[4]   0          out  spiking
/lam[5]   0          out  spiking
/med[0]   1          out  graded potential
/med[1]   1          out  graded potential
/med[2]   1          out  graded potential
/med[3]   1          in   spiking
/med[4]   1          in   spiking

From      To
/lam[0]   /med[0]
/lam[0]   /med[1]
/lam[1]   /med[2]
/med[3]   /lam[3]
/med[4]   /lam[4]
/med[4]   /lam[5]

Using Neurokernel's API

Setting up LPU Interfaces and Patterns

Neurokernel provides Python classes for defining LPUs and the connectivity patterns used to link them. The Module classes (neurokernel.core.Module for LPUs that don't access the GPU and neurokernel.core_gpu.Module for LPUs that do) require an LPU designer to implement all of the LPU's internals from the ground up; neither class places explicit constraints upon how an LPU uses GPU resources. To enable independently implemented LPUs to communicate with each other, each LPU must implement a method called run_step() that is invoked during each step of execution; it consumes incoming data from other LPUs and produces data for transmission to other LPUs. The example below generates random data in its run_step() method:

In [1]:
import numpy as np
import pycuda.gpuarray as gpuarray

from neurokernel.core_gpu import Module

class MyModule(Module):

    # Process incoming data and set outgoing data:
    def run_step(self):       
        super(MyModule, self).run_step()

        # Display input graded potential data:
        self.log_info('input gpot port data: '+str(self.pm['gpot'][self.in_gpot_ports]))
        
        # Display input spike data:
        self.log_info('input spike port data: '+str(self.pm['spike'][self.in_spike_ports]))

        # Output random graded potential data:
        out_gpot_data = gpuarray.to_gpu(np.random.rand(len(self.out_gpot_ports)))
        self.pm['gpot'][self.out_gpot_ports] = out_gpot_data
        self.log_info('output gpot port data: '+str(out_gpot_data))
            
        # Randomly select output ports to emit spikes:
        out_spike_data = gpuarray.to_gpu(np.random.randint(0, 2, len(self.out_spike_ports)))
        self.pm['spike'][self.out_spike_ports] = out_spike_data
        self.log_info('output spike port data: '+str(out_spike_data))

Notice that every LPU instance must be associated with a unique identifier (id). An LPU contains a port-mapper attribute (pm) that maps input and output ports to a data array that may be accessed by the LPU's internal implementation; after each step of execution, the array associated with the port-mapper is updated with input data from source LPUs while output data from the array is transmitted to destination LPUs.
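
The port-mapper mechanism can be exercised on its own. The following sketch assumes the PortMapper class in neurokernel.pm, which binds a selector to a NumPy data array and supports selector-based indexing:

import numpy as np
from neurokernel.pm import PortMapper

# Map four graded potential ports to an array of four values:
pm = PortMapper('/a/in/gpot[0:2],/a/out/gpot[0:2]', np.zeros(4))

# Set the values associated with the output ports:
pm['/a/out/gpot[0:2]'] = np.array([0.5, -0.3])

# Retrieve the value associated with a single port:
print(pm['/a/out/gpot[0]'])  # [ 0.5]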

One can instantiate the above LPU class as follows:

In [2]:
from neurokernel.plsel import Selector, SelectorMethods

m1_int_sel_in_gpot = Selector('/a/in/gpot[0:2]')
m1_int_sel_out_gpot = Selector('/a/out/gpot[0:2]')
m1_int_sel_in_spike = Selector('/a/in/spike[0:2]')
m1_int_sel_out_spike = Selector('/a/out/spike[0:2]')
m1_int_sel = m1_int_sel_in_gpot+m1_int_sel_out_gpot+\
             m1_int_sel_in_spike+m1_int_sel_out_spike
m1_int_sel_in = m1_int_sel_in_gpot+m1_int_sel_in_spike
m1_int_sel_out = m1_int_sel_out_gpot+m1_int_sel_out_spike
m1_int_sel_gpot = m1_int_sel_in_gpot+m1_int_sel_out_gpot
m1_int_sel_spike = m1_int_sel_in_spike+m1_int_sel_out_spike
N1_gpot = SelectorMethods.count_ports(m1_int_sel_gpot)
N1_spike = SelectorMethods.count_ports(m1_int_sel_spike)

m2_int_sel_in_gpot = Selector('/b/in/gpot[0:2]')
m2_int_sel_out_gpot = Selector('/b/out/gpot[0:2]')
m2_int_sel_in_spike = Selector('/b/in/spike[0:2]')
m2_int_sel_out_spike = Selector('/b/out/spike[0:2]')
m2_int_sel = m2_int_sel_in_gpot+m2_int_sel_out_gpot+\
             m2_int_sel_in_spike+m2_int_sel_out_spike
m2_int_sel_in = m2_int_sel_in_gpot+m2_int_sel_in_spike
m2_int_sel_out = m2_int_sel_out_gpot+m2_int_sel_out_spike
m2_int_sel_gpot = m2_int_sel_in_gpot+m2_int_sel_out_gpot
m2_int_sel_spike = m2_int_sel_in_spike+m2_int_sel_out_spike
N2_gpot = SelectorMethods.count_ports(m2_int_sel_gpot)
N2_spike = SelectorMethods.count_ports(m2_int_sel_spike)
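
Since count_ports() was used above, one can confirm that each LPU interface exposes the expected number of ports of each type:

print(N1_gpot, N1_spike)  # 4 4 (2 input + 2 output ports of each type)
print(N2_gpot, N2_spike)  # 4 4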

Using the ports in each of the above LPUs' interfaces, one can define a connectivity pattern between them as follows:

In [3]:
from neurokernel.pattern import Pattern

pat12 = Pattern(m1_int_sel, m2_int_sel)
pat12.interface[m1_int_sel_out_gpot] = [0, 'in', 'gpot']
pat12.interface[m1_int_sel_in_gpot] = [0, 'out', 'gpot']
pat12.interface[m1_int_sel_out_spike] = [0, 'in', 'spike']
pat12.interface[m1_int_sel_in_spike] = [0, 'out', 'spike']
pat12.interface[m2_int_sel_in_gpot] = [1, 'out', 'gpot']
pat12.interface[m2_int_sel_out_gpot] = [1, 'in', 'gpot']
pat12.interface[m2_int_sel_in_spike] = [1, 'out', 'spike']
pat12.interface[m2_int_sel_out_spike] = [1, 'in', 'spike']

pat12['/a/out/gpot[0]', '/b/in/gpot[0]'] = 1
pat12['/a/out/gpot[1]', '/b/in/gpot[1]'] = 1
pat12['/b/out/gpot[0]', '/a/in/gpot[0]'] = 1
pat12['/b/out/gpot[1]', '/a/in/gpot[1]'] = 1
pat12['/a/out/spike[0]', '/b/in/spike[0]'] = 1
pat12['/a/out/spike[1]', '/b/in/spike[1]'] = 1
pat12['/b/out/spike[0]', '/a/in/spike[0]'] = 1
pat12['/b/out/spike[1]', '/a/in/spike[1]'] = 1
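
Once populated, the pattern can be queried to confirm that it defines connections in both directions between its two interfaces. This check assumes the is_connected() method of neurokernel.pattern.Pattern:

# Check whether any connections run from interface 0 to 1 and vice versa:
print(pat12.is_connected(0, 1))  # True
print(pat12.is_connected(1, 0))  # True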

A Simple Example: Creating an LPU

To obviate the need to implement an LPU completely from scratch, the Neurodriver package provides a functional LPU class (neurokernel.LPU.LPU.LPU) that supports the following neuron and synapse models:

  • Leaky Integrate-and-Fire (LIF) neuron (spiking neuron)
  • Morris-Lecar (ML) neuron (graded potential neuron)
  • Alpha function synapse
  • Conductance-based synapse (referred to as power_gpot_gpot)

Note that although the ML model can in principle be configured as a spiking neuron model, the implementation in the LPU class is configured to output its membrane potential.

Alpha function synapses may be used to connect any type of presynaptic neuron to any type of postsynaptic neuron; the neuron presynaptic to a conductance-based synapse must be a graded potential neuron.

It should be emphasized that the above LPU implementation and the currently supported models are not necessarily optimal and may be replaced with improved implementations in the future.

The LPU class provided by Neurodriver may be instantiated with a graph describing its internal structure. The graph must be stored in GEXF file format with nodes and edges respectively corresponding to instances of the supported neuron and synapse models. To facilitate construction of an LPU, the networkx Python package may be used to set the parameters of the model instances. For example, the following code defines a simple network consisting of an LIF neuron with a single synaptic connection to an ML neuron; the synaptic current elicited by the LIF neuron's spikes is modeled by an alpha function:

In [4]:
import numpy as np
import networkx as nx

G = nx.MultiDiGraph()
# add a neuron node with LeakyIAF model
G.add_node('neuron0',                                   # UID
           **{'class': 'LeakyIAF',                      # component model
              'name': 'neuron_0',                       # component name
              'initV': np.random.uniform(-60.0, -25.0), # initial membrane voltage
              'reset_potential': -67.5489770451,        # reset voltage
              'threshold': -25.1355161007,              # spike threshold
              'resting_potential': 0.0,                 # resting potential
              'resistance': 1024.45570216,              # membrane resistance
              'capacitance': 0.0669810502993})          # membrane capacitance

# The above neuron is a projection neuron,
# create an output port for it
G.add_node('neuron0_port',                     # UID
           **{'class': 'Port',                 # indicates it is a port
              'name': 'neuron_0_output_port',  # name of the port
              'selector': '/a[0]',             # selector of the port
              'port_io': 'out',                # indicates it is an output port
              'port_type': 'spike'})           # indicates it is a spike port

# connect the neuron node and its port
G.add_edge('neuron0', 'neuron0_port')

# add a second neuron node with MorrisLecar model
G.add_node('neuron1',
           **{'class': 'MorrisLecar',
              'name': 'neuron_1',
              'V1': 30.0,
              'V2': 15.0,
              'V3': 0.0,
              'V4': 30.0,
              'phi': 25.0,
              'offset': 0.0,
              'V_L': -50.0,
              'V_Ca': 100.0,
              'V_K': -70.0,
              'g_Ca': 1.1,
              'g_K': 2.0,
              'g_L': 0.5,
              'initV': -52.14,
              'initn': 0.03})

# add a synapse node with AlphaSynapse model
G.add_node('synapse_0_1',
           **{'class': 'AlphaSynapse',
              'name': 'synapse_0_1',
              'ar': 1.1*1e2,       # rise rate
              'ad': 1.9*1e3,       # decay rate
              'reverse': 65.0,     # reversal potential
              'gmax': 2*1e-3})     # maximum conductance

# connect presynaptic neuron to synapse
G.add_edge('neuron0', 'synapse_0_1')
# connect synapse to postsynaptic neuron
G.add_edge('synapse_0_1', 'neuron1')

# export the graph to GEXF file
nx.write_gexf(G, 'simple_lpu.gexf.gz')
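
As a quick sanity check, the exported file can be read back with networkx to confirm the graph's structure:

# Read the GEXF file back and list its nodes and edges:
g = nx.read_gexf('simple_lpu.gexf.gz')
print(g.nodes())  # e.g. ['neuron0', 'neuron0_port', 'neuron1', 'synapse_0_1']
print(g.edges())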

We can prepare a simple pulse input and save it in an HDF5 file to pass to neuron_0 as follows:

In [5]:
import h5py

dt = 1e-4           # time resolution of model execution in seconds
dur = 1.0           # duration in seconds
Nt = int(dur/dt) # number of data points in time

start = 0.3
stop = 0.6

I_max = 0.6
t = np.arange(0, dt*Nt, dt)
I = np.zeros((Nt, 1), dtype=np.double)
I[np.logical_and(t>start, t<stop)] = I_max

uids = np.array(['neuron0'], dtype='S')  # byte strings; h5py cannot store NumPy unicode arrays directly

with h5py.File('simple_input.h5', 'w') as f:
    f.create_dataset('I/uids', data=uids)
    f.create_dataset('I/data', (Nt, 1),
                     dtype=np.double,
                     data=I)
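
The file can be reopened to verify the stimulus before it is used:

# Confirm the structure of the input file:
with h5py.File('simple_input.h5', 'r') as f:
    print(f['I/uids'][:])     # [b'neuron0']
    print(f['I/data'].shape)  # (10000, 1)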

The LPU defined earlier can be instantiated and executed as follows:

In [6]:
from neurokernel.core_gpu import Manager
from neurokernel.LPU.LPU import LPU

from neurokernel.LPU.InputProcessors.FileInputProcessor import FileInputProcessor
from neurokernel.LPU.OutputProcessors.FileOutputProcessor import FileOutputProcessor

import neurokernel.mpi_relaunch

dt = 1e-4
dur = 1.0         
steps = int(dur/dt) 

man = Manager()
(comp_dict, conns) = LPU.lpu_parser('simple_lpu.gexf.gz')

fl_input_processor = FileInputProcessor('simple_input.h5')
fl_output_processor = FileOutputProcessor([('V',None),('spike_state',None)], 'simple_output.h5', sample_interval=1)

man.add(LPU, 'simple', dt, comp_dict, conns,
        device=0, input_processors = [fl_input_processor],
        output_processors = [fl_output_processor], debug=False)

man.spawn()
man.start(steps=steps)
man.wait()

The spike times of neuron_0 and the membrane potential of neuron_1 are both stored in simple_output.h5, under the spike_state and V variables specified when the output processor was created.
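
Before plotting, one can inspect the output file directly. The sketch below assumes that FileOutputProcessor stores each recorded variable under <variable>/data with the corresponding unit identifiers under <variable>/uids:

import h5py

# Inspect the recorded variables (file layout assumed as described above):
with h5py.File('simple_output.h5', 'r') as f:
    print(list(f.keys()))     # e.g. ['V', 'spike_state', ...]
    print(f['V/data'].shape)  # membrane potential samples over time

These data can be visualized using a built-in LPU output visualization class that can render the output in video and image formats: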

In [7]:
import matplotlib as mpl
mpl.use('agg')
import matplotlib.pyplot as plt

import neurokernel.LPU.utils.visualizer as vis
import networkx as nx
import h5py

# Temporary fix for bug in networkx 1.8:
nx.readwrite.gexf.GEXF.convert_bool = \
    {'false':False, 'False':False,
      'true':True, 'True':True}

V = vis.visualizer()

# create a plot for current input injected to 'neuron0'
V.add_LPU('simple_input.h5', LPU='Input', is_input = True)
V.add_plot({'type': 'waveform', 'uids': [['neuron0']], 'variable': 'I'}, 'input_Input')

# create a plot for spike output from 'neuron0'
V.add_LPU('simple_output.h5',
          gexf_file='./simple_lpu.gexf.gz', LPU='Simple LPU (Spikes)')
V.add_plot({'type':'raster', 'uids': [['neuron0']], 'variable': 'spike_state',
            'yticks': [0], 'yticklabels': [0], 'title': 'Output'},
            'Simple LPU (Spikes)')

# create a plot for membrane potential output from 'neuron1'
V.add_LPU('simple_output.h5',
          gexf_file='./simple_lpu.gexf.gz', LPU='Simple LPU (Graded Potential)')
V.add_plot({'type': 'waveform', 'uids': [['neuron1']], 'variable': 'V', 'title': 'Output',
            'ylim': [-70.0,-10.0]},
            'Simple LPU (Graded Potential)')

V._update_interval = None
V.rows = 3
V.cols = 1
V.fontsize = 4
V.dt = 0.0001
V.xlim = [0, 1.0]
V.figsize = (6, 4)
V.run('simple_output.png')

# Don't automatically display the output image:
plt.close(plt.gcf())

Here is the generated image of the output:

A More Complex Example: Connecting LPUs

The following example demonstrates the creation and connection of two LPUs containing multiple neurons with a connectivity pattern. The numbers of neurons and synapses in each of the LPUs' internal populations are randomly generated: the number of neurons in each population is randomly selected between 30 and 40. The LPUs' projection neurons, as well as the populations of input neurons presynaptic to the LPUs that accept an input stimulus, are modeled as LIF neurons, while each of the local neurons is modeled as either an LIF neuron or a graded potential ML neuron. Synaptic currents are modeled with alpha functions. Synapses between the LPUs' local and projection neurons, as well as synapses between the input neurons and the LPUs' internal neurons, are also created randomly; roughly half of all pairs of neurons are connected.

To generate the LPUs and input signals used in this demo, we run the following script:

In [8]:
%cd -q ../examples/intro/data
%run gen_generic_lpu.py -s 0 -l lpu_0 generic_lpu_0.gexf.gz generic_lpu_0_input.h5
%run gen_generic_lpu.py -s 1 -l lpu_1 generic_lpu_1.gexf.gz generic_lpu_1_input.h5

Since the neurons and synapses in the generated LPUs are stored in GEXF format, they can be easily accessed using networkx and pandas. Neurokernel provides a convenience function to convert between networkx graphs and pandas' DataFrame class:

In [9]:
import neurokernel.tools.graph
g_0 = nx.read_gexf('generic_lpu_0.gexf.gz')
df_comp_0, df_conn_0 = neurokernel.tools.graph.graph_to_df(g_0)

Say one wishes to explore several LIF neurons in LPU 0. Here is how to access their parameters:

In [10]:
df_comp_0[df_comp_0['class'] == 'LeakyIAF'][30:35][['name','class',
                                'initV','reset_potential','resting_potential',
                                'threshold','resistance','capacitance']] 
Out[10]:
            name          class     initV     reset_potential  resting_potential  threshold  resistance  capacitance
sensory_30  sensory_30_s  LeakyIAF  -56.4074  -67.549          0                  -25.1355   1002.45     0.0669811
sensory_34  sensory_34_s  LeakyIAF  -33.773   -67.549          0                  -25.1355   1002.45     0.0669811
sensory_35  sensory_35_s  LeakyIAF  -39.563   -67.549          0                  -25.1355   1002.45     0.0669811
sensory_5   sensory_5_s   LeakyIAF  -33.3288  -67.549          0                  -25.1355   1002.45     0.0669811
sensory_7   sensory_7_s   LeakyIAF  -46.2975  -67.549          0                  -25.1355   1002.45     0.0669811

Say one wishes to find the parameters of the synapses linking neuron proj_3_s to the neurons it innervates; these can be accessed using the following query:

In [11]:
ind = df_comp_0['name'].str.startswith('proj_3_s-')
df_comp_0[ind][['name','class','ar','ad','reverse','gmax']]
Out[11]:
                             name                 class         ar   ad    reverse  gmax
synapse_proj_3_s-local_12_g  proj_3_s-local_12_g  AlphaSynapse  110  1900  10       3.1e-07
synapse_proj_3_s-local_17_g  proj_3_s-local_17_g  AlphaSynapse  110  1900  10       3.1e-07
synapse_proj_3_s-local_18_g  proj_3_s-local_18_g  AlphaSynapse  110  1900  10       3.1e-07
synapse_proj_3_s-local_23_g  proj_3_s-local_23_g  AlphaSynapse  110  1900  10       3.1e-07
synapse_proj_3_s-local_27_g  proj_3_s-local_27_g  AlphaSynapse  110  1900  10       3.1e-07
synapse_proj_3_s-local_28_g  proj_3_s-local_28_g  AlphaSynapse  110  1900  10       3.1e-07
synapse_proj_3_s-local_29_g  proj_3_s-local_29_g  AlphaSynapse  110  1900  10       3.1e-07
synapse_proj_3_s-local_30_g  proj_3_s-local_30_g  AlphaSynapse  110  1900  10       3.1e-07
synapse_proj_3_s-local_8_g   proj_3_s-local_8_g   AlphaSynapse  110  1900  10       3.1e-07

Once the configuration and the input stimulus are ready, we execute the entire model both with and without connections defined between the LPUs:

In [12]:
%cd -q ~/neurodriver/examples/intro
%run intro_demo.py

Finally, we generate videos that depict the progress of the model execution with and without connections between the LPUs:

In [13]:
%run visualize_output.py

Here is the output of the unconnected LPUs:

In [14]:
import IPython.display

IPython.display.YouTubeVideo('FY810D-hhD8')
Out[14]:

Here is the output of the LPUs with synaptic connections from neurons in LPU 0 to LPU 1:

In [15]:
IPython.display.YouTubeVideo('U2FGNbQ5ibg')
Out[15]:

If one compares the two videos, one will observe that the output of LPU 0 remains the same in both, while that of LPU 1 changes when it is connected to LPU 0. This confirms that the one-way connectivity pattern defined earlier transmits data from one LPU to the other during model execution.

References

Chiang, A.-S., Lin, C.-Y., Chuang, C.-C., Chang, H.-M., Hsieh, C.-H., Yeh, C.-W., et al. (2011), Three-dimensional reconstruction of brain-wide wiring networks in Drosophila at single-cell resolution, Current Biology, 21, 1, 1–11, doi:10.1016/j.cub.2010.11.056