Last week you saw Ivana talk about language processing. Most of her work focuses on the high level.
Next week Peter will talk to you about complex neuron models.
Today, I'm going to talk about my work at the top.
We know how to:
We used these components to:
Now, let's try:
Given a verb-noun pair from a limited vocabulary, execute the appropriate action.
- verb is SAY or WRITE
- noun is HELLO or GOODBYE
- NONE indicates that the action should be executed

```python
vis_sequence = 'SAY HELLO NONE WRITE GOODBYE NONE GOODBYE SAY NONE'.split()

def input_vision(t):
    # Present a new token every 0.5 seconds, cycling through the sequence
    index = int(t / 0.5) % len(vis_sequence)
    return vis_sequence[index]
```
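To make the target behaviour concrete before looking at the neural model, here is a plain-Python sketch (my own illustration, not Nengo code) of the input-output mapping we want: remember the most recent verb and noun, and execute the combined action whenever NONE appears.

```python
vis_sequence = 'SAY HELLO NONE WRITE GOODBYE NONE GOODBYE SAY NONE'.split()

VERBS = {'SAY', 'WRITE'}
NOUNS = {'HELLO', 'GOODBYE'}

def run_task(tokens):
    """Remember the latest verb and noun; emit a (verb, noun) action on NONE."""
    verb = noun = None
    actions = []
    for token in tokens:
        if token in VERBS:
            verb = token
        elif token in NOUNS:
            noun = token
        elif token == 'NONE':
            actions.append((verb, noun))
    return actions

actions = run_task(vis_sequence)  # one (verb, noun) pair per NONE
```

Note that the order within a pair doesn't matter: GOODBYE followed by SAY still produces the action (SAY, GOODBYE).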
```python
import nengo
import nengo.spa as spa

D = 64  # dimensionality of the semantic pointers
model = spa.SPA()
# See "Large-scale cognitive model design using the Nengo neural simulator"
# by Sharma et al. for the model code:
# http://compneuro.uwaterloo.ca/publications/sharma2016.html
IPythonViz(model, "configs/verb_noun_follow.py.cfg")
```
The code gets more complicated as the behaviour does. For example, the counting task that Spaun performs requires a large amount of code.
This also covers other models like:
If you want to integrate multiple complex behaviours, as Spaun actually does, you end up with an unmanageable amount of code.
"If someone wants to implement a new task into Spaun requiring a new module, they're better off reimplementing Spaun entirely than trying to edit the code." - Xuan Choo, creator of Spaun
That seems bad. How can we fix this?
Humans get better at a task if it's repeated. How can we include this in our networks?
This only works for basic input-output mappings. Further research is needed for motor sequence chunking and other complex tasks.
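In Nengo, learning an input-output mapping from repetition is typically done with an error-driven rule such as PES. Here is a minimal pure-Python delta-rule sketch of the same principle (a toy example of my own, not Nengo code): each repetition of the task nudges the weights against the error, so performance improves with practice.

```python
def train(pairs, lr=0.1, epochs=200):
    """Learn weights w so that dot(w, x) approximates y for each (x, y) pair."""
    n = len(pairs[0][0])
    w = [0.0] * n
    for _ in range(epochs):
        for x, y in pairs:
            pred = sum(wi * xi for wi, xi in zip(w, x))
            err = y - pred
            # Delta rule: move each weight along the input, scaled by the error
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w

# Hypothetical target mapping: y = 2*x0 - 1*x1
pairs = [([1, 0], 2.0), ([0, 1], -1.0), ([1, 1], 1.0)]
w = train(pairs)
```

After repeated presentations the weights converge close to [2, -1], the mapping that explains all the training pairs.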
You have to use approximations:
Does a Voja-based associative memory match MEG and fMRI data? No.
What does? I'm working on it.
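For reference, the core of the Voja rule shifts an active neuron's encoder toward the current input, roughly Δe = κ·a·(x − e). A toy plain-Python sketch of that update (parameter values are illustrative, and the real rule also keeps encoders normalized to the ensemble's radius):

```python
def voja_step(encoder, x, activity, kappa=0.1):
    """Voja-style update: move the encoder toward input x,
    scaled by the neuron's activity and learning rate kappa.
    (The full rule also renormalizes the encoder; omitted here.)"""
    return [e + kappa * activity * (xi - e) for e, xi in zip(encoder, x)]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

encoder = [1.0, 0.0]
x = [0.0, 1.0]
before = dot(encoder, x)
for _ in range(50):
    # An active neuron (activity > 0) drifts toward the repeated stimulus;
    # an inactive neuron (activity == 0) is left unchanged.
    encoder = voja_step(encoder, x, activity=1.0)
after = dot(encoder, x)
```

Because the encoder's similarity to repeated inputs grows, the associated neurons respond more strongly to them over time, which is what makes a Voja-based associative memory learnable.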
Ideas for projects requiring refining and discussion: https://github.com/ctn-waterloo/modelling_ideas/issues
A StackOverflow-style Q&A site for cognitive modelling, psychology and neuroscience, good for getting a summary of the literature: https://psychology.stackexchange.com/help/on-topic