Up and Down the Ladder of Abstraction

Last week you saw Ivana talk about language processing. Most of her work focuses on the high level.

Next week Peter will talk to you about complex neuron models.

Today, I'm going to talk about my work at the top.

Fun at the top of the ladder

Scaling up

We know how to:

  • Represent symbols
  • Select actions using the Basal Ganglia and Thalamus
  • Save symbols in working memory and associative memory

We used these components to:

  • Answer basic questions
  • Follow a sequence of rules

Now, let's try:

  • Making decisions based on what's in memory

Given a verb-noun pair from a limited vocabulary, execute the appropriate action.

  • verb is SAY or WRITE
  • noun is HELLO or GOODBYE
  • NONE to indicate that the stored verb-noun action should be executed
In [1]:
# Visual input: present one vocabulary item every 0.5 seconds
vis_sequence = 'SAY HELLO NONE WRITE GOODBYE NONE GOODBYE SAY NONE'.split()

def input_vision(t):
    index = int(t / 0.5) % len(vis_sequence)
    return vis_sequence[index]
In [2]:
import nengo
import nengo.spa as spa
In [3]:
D = 64  # dimensionality of the semantic pointers
model = spa.SPA()
In [4]:
# See "Large-scale cognitive model design using the Nengo neural simulator" by Sughanda et al. for code
# http://compneuro.uwaterloo.ca/publications/sharma2016.html 
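The full model is given in the paper above; below is a rough sketch of what such a verb-noun network might look like in the legacy nengo.spa API. The module names, routing rules, and thresholds are my illustrative assumptions, not the code from the paper.

In [ ]:
with model:
    model.vision = spa.State(D)
    model.verb = spa.State(D, feedback=1.0)   # working memory for the verb
    model.noun = spa.State(D, feedback=1.0)   # working memory for the noun
    model.speech = spa.State(D)               # output channel for SAY
    model.hand = spa.State(D)                 # output channel for WRITE

    actions = spa.Actions(
        # Remember whichever verb or noun is currently in vision
        'dot(vision, SAY + WRITE) --> verb = vision',
        'dot(vision, HELLO + GOODBYE) --> noun = vision',
        # On NONE, route the remembered noun to the channel named by the
        # remembered verb (both dot products must be high for the rule to win)
        'dot(vision, NONE) + dot(verb, SAY) - 1 --> speech = noun',
        'dot(vision, NONE) + dot(verb, WRITE) - 1 --> hand = noun',
    )
    model.bg = spa.BasalGanglia(actions)
    model.thalamus = spa.Thalamus(model.bg)

    model.input = spa.Input(vision=input_vision)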
In [ ]:
from nengo_gui.ipython import IPythonViz  # visualizer shipped with nengo_gui

IPythonViz(model, "configs/verb_noun_follow.py.cfg")

This approach gets harder as the behaviour gets more complicated. For example, the counting task that Spaun performs requires a large amount of code.

The same problem shows up in other models of comparable complexity.

If you want to integrate multiple complex behaviours, as Spaun actually does, you get an unmanageable amount of code.

"If someone wants to implement a new task into Spaun requiring a new module, they're better off reimplementing Spaun entirely than trying to edit the code." - Xuan Choo, creator of Spaun

That seems bad. How can we fix this?

  • Make the SPA action syntax better. See https://github.com/nengo/nengo_spa (sketched after this list).
  • Make the design and simulate loop faster with neuromorphic hardware.
  • Make SPA actions require less fine-tuning. Sverrir is working on this!
  • Make a design language that compiles to SPA. But we don't know how to efficiently describe systems of this scale yet, because so few have been created!
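
For a taste of the improved syntax, here is roughly what action selection looks like in nengo_spa, where rules are ordinary Python expressions instead of strings. This is a sketch of the nengo_spa API from memory; details may differ across versions.

In [ ]:
import nengo_spa as spa

with spa.Network() as model:
    vision = spa.State(64)
    motor = spa.State(64)

    with spa.ActionSelection():
        # Route the contents of vision to motor when vision looks like WRITE
        spa.ifmax(spa.dot(vision, spa.sym.WRITE), vision >> motor)
        # Default action: do nothing in particular
        spa.ifmax(0.5, spa.sym.NOTHING >> motor)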

Getting better with practice

Humans get better at a task if it's repeated. How can we include this in our networks?
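
One way to get this in Nengo is to learn the decoders of a connection online with an error-driven rule like PES. A minimal sketch of a learned communication channel follows; the task and parameters here are just illustrative.

In [ ]:
import numpy as np
import nengo

with nengo.Network() as net:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
    pre = nengo.Ensemble(100, dimensions=1)
    post = nengo.Ensemble(100, dimensions=1)
    error = nengo.Ensemble(100, dimensions=1)

    nengo.Connection(stim, pre)
    # Start from a connection that computes nothing useful...
    conn = nengo.Connection(pre, post, function=lambda x: 0,
                            learning_rule_type=nengo.PES(learning_rate=1e-4))
    # ...and shape its decoders with an error signal (actual - target)
    nengo.Connection(post, error)
    nengo.Connection(stim, error, transform=-1)
    nengo.Connection(error, conn.learning_rule)

With enough simulated practice, post comes to track the target signal.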

This only works for basic input-output mappings. Further research is needed for motor sequence chunking and other complex tasks.

Going back down the ladder

Matching neurological data

To compare the model with neurological data, you have to use approximations (a rough probe sketch follows this list):

  • MEG is basically spikes
  • fMRI is basically neurotransmitter use
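
In Nengo terms, one way to get rough proxies for these signals is with probes: raw spiking for an MEG-like signal, and a heavily low-pass-filtered population signal as a stand-in for slow metabolic measures. This mapping is my assumption for illustration, not an established pipeline.

In [ ]:
import nengo

with nengo.Network() as net:
    stim = nengo.Node(lambda t: 1.0 if t < 0.5 else 0.0)
    ens = nengo.Ensemble(200, dimensions=1)
    nengo.Connection(stim, ens)

    # MEG-like proxy: raw spikes from the population
    p_spikes = nengo.Probe(ens.neurons)
    # fMRI-like proxy: decoded activity, heavily low-pass filtered
    p_slow = nengo.Probe(ens, synapse=0.5)

with nengo.Simulator(net) as sim:
    sim.run(2.0)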

Does a Voja-based associative memory match MEG and fMRI data? No.

What does? I'm working on it.

Bonus Pro Tips

Ideas for projects that need refining and discussion: https://github.com/ctn-waterloo/modelling_ideas/issues

A Stack Overflow-style site for cognitive modelling, psychology, and neuroscience; good for getting a summary of the literature: https://psychology.stackexchange.com/help/on-topic