#!/usr/bin/env python
# coding: utf-8
# # Up and Down the Ladder of Abstraction
#
#
# Last week you saw Ivana talk about language processing. Most of her work focuses on the high level.
#
# Next week Peter will talk to you about complex neuron models.
#
# Today, I'm going to talk about my work at the top.
# # Fun at the top of the ladder
# ## Scaling up
# We know how to:
#
# - Represent symbols
# - Select actions using the Basal Ganglia and Thalamus
# - Save symbols in working memory and associative memory
#
# We used these components to:
#
# - Answer basic questions
# - Follow a sequence of rules
#
# Now, let's try:
#
# - Making decisions based on what's in memory
#
# Given a verb-noun pair from a limited vocabulary, execute the appropriate action:
#
# - `verb` is `SAY` or `WRITE`
# - `noun` is `HELLO` or `GOODBYE`
# - `NONE` indicates that the stored verb-noun pair should be executed
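# Before building the neural model, the target behaviour can be sketched in plain Python. This is only the input-output contract, not the neural implementation, and the function name `execute_pairs` is made up for illustration:

```python
def execute_pairs(symbols):
    """Collect verb-noun symbols and 'execute' the stored pair when NONE arrives."""
    memory = {}   # stands in for working memory: the remembered verb and noun
    outputs = []  # record of executed actions
    for sym in symbols:
        if sym in ('SAY', 'WRITE'):
            memory['verb'] = sym
        elif sym in ('HELLO', 'GOODBYE'):
            memory['noun'] = sym
        elif sym == 'NONE' and 'verb' in memory and 'noun' in memory:
            outputs.append((memory['verb'], memory['noun']))
    return outputs

print(execute_pairs('SAY HELLO NONE WRITE GOODBYE NONE GOODBYE SAY NONE'.split()))
# → [('SAY', 'HELLO'), ('WRITE', 'GOODBYE'), ('SAY', 'GOODBYE')]
```

# Note that the last pair works even though the noun arrives before the verb: the decision depends only on what is in memory when `NONE` appears, which is exactly what the neural model has to reproduce.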
# In[1]:
vis_sequence = 'SAY HELLO NONE WRITE GOODBYE NONE GOODBYE SAY NONE'.split()

def input_vision(t):
    # Present each symbol for 0.5 s, looping over the sequence
    index = int(t / 0.5) % len(vis_sequence)
    return vis_sequence[index]
# In[2]:
import nengo
import nengo.spa as spa
# In[3]:
D = 64  # dimensionality of the semantic pointer vectors
model = spa.SPA()
# In[4]:
# See "Large-scale cognitive model design using the Nengo neural simulator" by Sharma et al. for code
# http://compneuro.uwaterloo.ca/publications/sharma2016.html
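# The core mechanism that the SPA action rules compile down to — basal ganglia action selection — can be illustrated non-neurally: each rule's utility is the dot product between the current state vector and that rule's condition vector, and the thalamus routes the action of the winning rule. A toy sketch with made-up three-dimensional "vectors" standing in for high-dimensional semantic pointers (the names `select_action` and `rules` are hypothetical):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def select_action(state, rules):
    """Pick the rule whose condition vector best matches the state;
    mimics the basal ganglia's winner-take-all over rule utilities."""
    utilities = [dot(state, cond) for cond, _action in rules]
    best = max(range(len(rules)), key=lambda i: utilities[i])
    return rules[best][1]

# Toy stand-ins for semantic pointers
SAY = [1.0, 0.0, 0.0]
WRITE = [0.0, 1.0, 0.0]
NONE = [0.0, 0.0, 1.0]

rules = [
    (SAY, 'store verb SAY'),
    (WRITE, 'store verb WRITE'),
    (NONE, 'execute stored pair'),
]

print(select_action(WRITE, rules))  # → store verb WRITE
```

# In the real model the same winner-take-all happens over noisy, high-dimensional spiking representations, which is where the fine-tuning pain discussed below comes from.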
# In[ ]:
from nengo_gui.ipython import IPythonViz  # visualize the running model in the notebook
IPythonViz(model, "configs/verb_noun_follow.py.cfg")
# The code gets more complicated as the behaviour does. For example, the [task of counting](https://1drv.ms/p/s!Auhg6REoCX4GgWpkZHvmFBIN4FWV) that Spaun performs requires [a LOT of lines of code](https://github.com/Seanny123/counting_to_addition/blob/master/counting_only.py).
# The same goes for other models, like:
# - N-back memory task [Gosmann et al. 2015](http://compneuro.uwaterloo.ca/publications/gosmann2015.html)
# - Stack-like procedure memory [Blouw et al. 2016](http://compneuro.uwaterloo.ca/publications/Blouw2016.html)
# - Language search model from last week [Kajic et al. 2017](http://compneuro.uwaterloo.ca/publications/kajic2017.html)
# If you want to integrate multiple complex behaviours, which Spaun actually does, you get an unmanageable amount of code.
#
# > "If someone wants to implement a new task into Spaun requiring a new module, they're better off reimplementing Spaun entirely than trying to edit the code." - Xuan Choo, creator of Spaun
#
# That seems bad. How can we fix this?
#
# - Make the SPA action syntax better. See https://github.com/nengo/nengo_spa.
# - Make the design and simulate loop faster with neuromorphic hardware.
# - Make SPA actions require less fine-tuning. Sverrir is working on this!
# - Make a design language that compiles to SPA. But we don't know how to efficiently describe systems of this scale yet, because so few have been created!
# ## Getting better with practice
# Humans get better at a task if it's repeated. How can we include this in our networks?
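# In Nengo this is typically done with an error-driven learning rule (PES). The underlying idea can be sketched non-neurally as a delta rule: repeat the task, and nudge the connection weight by the error each time. All numbers and names here are made up for illustration:

```python
def train(pairs, lr=0.1, epochs=50):
    """Delta-rule sketch: performance on a linear input-output
    mapping improves with repeated practice."""
    w = 0.0  # connection weight, initially knows nothing
    for _ in range(epochs):
        for x, target in pairs:
            error = target - w * x   # how wrong we are right now
            w += lr * error * x      # nudge the weight toward the target
    return w

# Learn to double the input from repeated exposure
pairs = [(1.0, 2.0), (2.0, 4.0), (0.5, 1.0)]
w = train(pairs)
print(round(w, 3))  # → 2.0
```

# PES does essentially this with spiking neurons, using an error signal represented by another neural population rather than a supervised target.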
#
#
# Error-driven learning like this only works for basic input-output mappings. Further research is needed for motor sequence chunking and other complex tasks.
# # Going back down the ladder
# ## Matching neurological data
# You have to use approximations:
# - MEG is basically spikes
# - fMRI is basically neurotransmitter use
# Does a Voja-based associative memory match MEG and fMRI data? No.
# What does? I'm working on it.
# # Bonus Pro Tips
#
# Ideas for projects requiring refining and discussion:
# https://github.com/ctn-waterloo/modelling_ideas/issues
#
# A Stack Exchange site for cognitive modelling, psychology and neuroscience, good for getting a summary of the literature:
# https://psychology.stackexchange.com/help/on-topic