# Provocative Cognitive Models

1. Hofstadter et al.: Copycat and Metacat
2. Biological metaphors: neural and genetic models
   1. Can a robot do something it wasn't trained to do?
   2. Can a system invent its own language?

## Background

• The early years of AI were dominated by logic, words, and proofs/guarantees
• The last couple of decades have seen a rise in numbers and statistics
• Many AI and cognitive scientists desire to model a phenomenon at the level of the phenomenon
• Models whose behavior "emerges" from lower-level activity are hard to create and unpredictable

I believe that emergent models will exhibit all of the important properties that we are interested in. They will have thoughts, beliefs, awareness, intentions, desires, goals, etc. They will be conscious.

## Metacat

An emergent model of analogy-making.

    abc    -> abd
    mrrjjj ->  ?


If $abc$ goes to $abd$, what does $mrrjjj$ go to?

In [16]:
calico.Image("http://upload.wikimedia.org/wikipedia/en/d/d4/Metacat_demo_abc_abd_mrrjjj_mrrjjjj.jpg")

Out[16]:

*(image: Metacat demo of abc → abd, mrrjjj → mrrjjjj)*

Metacat contributions:

• Perception
• Experience
• Concepts
• Memory
• Self-watching

http://science.slc.edu/~jmarshall/metacat/

## Neural Networks

For these examples, we will use one of the simplest neural network models: the three-layer, feed-forward network. In the early years, this was called the Multilayer Perceptron.

In [6]:
calico.Image("http://upload.wikimedia.org/wikipedia/commons/thumb/e/e4/Artificial_neural_network.svg/500px-Artificial_neural_network.svg.png")

Out[6]:

*(image: a three-layer artificial neural network, from Wikipedia)*

Terms (see the sketch after this list):

• Node/unit
• Layer (the above has three)
• Activation value
• Weight/connection
• Target (desired) output provided by "teacher"
• Actual output
• Error
• Online or batch training
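
To tie these terms to something concrete, here is a minimal sketch of a single node computing its activation value from its weighted inputs. The sigmoid squashing function and the example weights are assumptions for illustration, not part of the conx code used below.

    import math

    def unit(inputs, weights, bias):
        # weighted sum of the incoming activations, plus a bias term
        net_input = sum(i * w for i, w in zip(inputs, weights)) + bias
        # squash into (0, 1): this is the node's activation value
        return 1.0 / (1.0 + math.exp(-net_input))

    # two inputs, two made-up weights, one made-up bias
    print(unit([0.0, 1.0], [0.4, -0.7], 0.1))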

It has been proven that for any finite function $f$:

$f(x) = y$

there exists a set of weights that maps the inputs $x$ to the desired outputs $y$.

This means that any given Turing Machine can be encoded as a neural network; in this sense, neural networks are Turing Machine equivalent.

Limitations:

1. Inputs and outputs must be represented as numbers (see the encoding sketch below)
2. There is no guarantee that you can find the set of weights that provide the desired function
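
Limitation 1 in practice: symbolic data must be encoded numerically before a network can use it. A common scheme (a generic illustration, not specific to conx) is one-hot encoding:

    # One-hot encoding: each symbol becomes a vector with a single 1.0.
    # The symbol set here is made up for illustration.
    symbols = ["a", "b", "c", "d"]

    def one_hot(symbol):
        return [1.0 if symbol == s else 0.0 for s in symbols]

    print(one_hot("c"))   # [0.0, 0.0, 1.0, 0.0]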

## History

1943 - McCulloch and Pitts, "A Logical Calculus of the Ideas Immanent in Nervous Activity." Posited an equivalency between first-order logic and networks of simple neurons with a threshold.

1949 - Hebb, "The Organization of Behavior." Proposed the idea that when two neurons fire simultaneously, the connection between them is strengthened.

1962 - Rosenblatt, "Principles of Neurodynamics." Proposed the Perceptron model.

1969 - Minsky and Papert, "Perceptrons: An Introduction to Computational Geometry." Proved that a two-layer Perceptron cannot compute some very simple functions, such as XOR.

1969 - Bryson, Denham, and Dreyfus, "Optimal programming problems with inequality constraints." First use of a backpropagation of error learning algorithm.

1975 - Holland, "Adaptation in Natural and Artificial Systems." Described the genetic algorithm.

1986 - Rumelhart and McClelland, "Parallel Distributed Processing: Explorations in the Microstructure of Cognition." Popularized the backpropagation learning algorithm and revived interest in neural network research.

## Backpropagation of Error

Backprop training method (a code sketch follows the list):

• Requires a "teacher" (we provide the desired output)
• The error between actual output and desired output drives learning
• Can work on any number of layers
• Learning can take a long time
• Given enough weights and time, any function can be learned
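
Here is a minimal, self-contained sketch of this training loop for a three-layer network, in plain numpy. It is an illustration under assumed details (sigmoid units, a 2-2-1 topology, learning rate 0.5, a fixed random seed), not the conx implementation used below.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # XOR training patterns: inputs X, target (desired) outputs T
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)

    # weight matrices, with the bias folded in as an extra +1 input
    W1 = rng.uniform(-1, 1, (3, 2))   # input(2)+bias -> hidden(2)
    W2 = rng.uniform(-1, 1, (3, 1))   # hidden(2)+bias -> output(1)

    eta = 0.5                         # learning rate
    for epoch in range(20000):
        # forward pass: compute activations layer by layer
        Xb = np.hstack([X, np.ones((4, 1))])
        H = sigmoid(Xb @ W1)
        Hb = np.hstack([H, np.ones((4, 1))])
        Y = sigmoid(Hb @ W2)          # actual outputs

        # backward pass: the error (T - Y) drives the weight changes
        dY = (T - Y) * Y * (1 - Y)            # output-layer deltas
        dH = (dY @ W2[:2].T) * H * (1 - H)    # hidden-layer deltas
        W2 += eta * Hb.T @ dY
        W1 += eta * Xb.T @ dH

    print(np.round(Y, 2))  # should approach [[0], [1], [1], [0]];
                           # a different seed may need more epochs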

## XOR

| Input 1 | Input 2 | Output |
|---------|---------|--------|
| 0       | 0       | 0      |
| 0       | 1       | 1      |
| 1       | 0       | 1      |
| 1       | 1       | 0      |

In [1]:
from ai.conx import Network

In [2]:
net = Network()

Conx using seed: 1398415563.99

In [3]:
net.setInputsAndTargets([[0, 0], [0, 1], [1, 0], [1, 1]],
                        [[0],    [1],    [1],    [0]])
# net.setTolerance(.1)

In [4]:
net.train()

Epoch #    25 | TSS Error: 1.0225 | Correct: 0.0000 | RMS Error: 0.5056
Epoch #    50 | TSS Error: 1.0232 | Correct: 0.0000 | RMS Error: 0.5058
Epoch #    75 | TSS Error: 1.0328 | Correct: 0.0000 | RMS Error: 0.5081
Epoch #   100 | TSS Error: 1.0270 | Correct: 0.0000 | RMS Error: 0.5067
Epoch #   125 | TSS Error: 1.0183 | Correct: 0.0000 | RMS Error: 0.5046
Epoch #   150 | TSS Error: 1.0357 | Correct: 0.0000 | RMS Error: 0.5088
Epoch #   175 | TSS Error: 1.0168 | Correct: 0.0000 | RMS Error: 0.5042
Epoch #   200 | TSS Error: 1.0157 | Correct: 0.0000 | RMS Error: 0.5039
Epoch #   225 | TSS Error: 1.0173 | Correct: 0.0000 | RMS Error: 0.5043
Epoch #   250 | TSS Error: 1.0076 | Correct: 0.0000 | RMS Error: 0.5019
Epoch #   275 | TSS Error: 1.0404 | Correct: 0.0000 | RMS Error: 0.5100
Epoch #   300 | TSS Error: 1.0148 | Correct: 0.0000 | RMS Error: 0.5037
Epoch #   325 | TSS Error: 1.0310 | Correct: 0.0000 | RMS Error: 0.5077
Epoch #   350 | TSS Error: 1.0410 | Correct: 0.0000 | RMS Error: 0.5102
Epoch #   375 | TSS Error: 1.0236 | Correct: 0.0000 | RMS Error: 0.5059
Epoch #   400 | TSS Error: 1.0204 | Correct: 0.0000 | RMS Error: 0.5051
Epoch #   425 | TSS Error: 1.0176 | Correct: 0.0000 | RMS Error: 0.5044
Epoch #   450 | TSS Error: 1.0161 | Correct: 0.0000 | RMS Error: 0.5040
Epoch #   475 | TSS Error: 1.0129 | Correct: 0.0000 | RMS Error: 0.5032
Epoch #   500 | TSS Error: 1.0133 | Correct: 0.0000 | RMS Error: 0.5033
Epoch #   525 | TSS Error: 1.0104 | Correct: 0.0000 | RMS Error: 0.5026
Epoch #   550 | TSS Error: 1.0066 | Correct: 0.0000 | RMS Error: 0.5016
Epoch #   575 | TSS Error: 1.0043 | Correct: 0.0000 | RMS Error: 0.5011
Epoch #   600 | TSS Error: 1.0304 | Correct: 0.0000 | RMS Error: 0.5076
Epoch #   625 | TSS Error: 0.9876 | Correct: 0.0000 | RMS Error: 0.4969
Epoch #   650 | TSS Error: 0.9705 | Correct: 0.0000 | RMS Error: 0.4926
Epoch #   675 | TSS Error: 0.9598 | Correct: 0.0000 | RMS Error: 0.4898
Epoch #   700 | TSS Error: 0.9264 | Correct: 0.2500 | RMS Error: 0.4812
Epoch #   725 | TSS Error: 0.8914 | Correct: 0.2500 | RMS Error: 0.4721
Epoch #   750 | TSS Error: 0.8598 | Correct: 0.2500 | RMS Error: 0.4636
Epoch #   775 | TSS Error: 0.8354 | Correct: 0.2500 | RMS Error: 0.4570
Epoch #   800 | TSS Error: 0.8190 | Correct: 0.2500 | RMS Error: 0.4525
Epoch #   825 | TSS Error: 0.7967 | Correct: 0.2500 | RMS Error: 0.4463
Epoch #   850 | TSS Error: 0.7723 | Correct: 0.2500 | RMS Error: 0.4394
Epoch #   875 | TSS Error: 0.7560 | Correct: 0.2500 | RMS Error: 0.4348
Epoch #   900 | TSS Error: 0.7131 | Correct: 0.5000 | RMS Error: 0.4222
Epoch #   925 | TSS Error: 0.6659 | Correct: 0.7500 | RMS Error: 0.4080
Epoch #   950 | TSS Error: 0.5269 | Correct: 0.7500 | RMS Error: 0.3629
Final #   968 | TSS Error: 0.4098 | Correct: 1.0000 | RMS Error: 0.3201


## NNs Can Learn What They Were Trained On

The network can learn what it was trained on, even though XOR is a "non-linear" task (i.e., you can't draw a single straight line in the input space that separates the patterns that map to 1 from those that map to 0).

In [5]:
net.propagate(input=[1, 0])[0]

Out[5]:
0.680948902920552
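
We can check all four training patterns the same way (using the trained net from above; exact values will vary from run to run):

    # Propagate each training pattern through the trained network.
    for pattern in [[0, 0], [0, 1], [1, 0], [1, 1]]:
        print("%s -> %s" % (pattern, net.propagate(input=pattern)[0]))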

## Generalization

Not only can the network learn the desired function; it can also interpolate, producing reasonable outputs for values it was never trained on.

In [6]:
net.propagate(input=[0.1, 0.1])[0]

Out[6]:
0.334272122585374
In [7]:
from Graphics import Picture, Color

res = 50  # resolution
picture = Picture(res, res)
for x in range(res):
    for y in range(res):
        # divide as floats so x/res spans [0, 1) in any Python version
        out = net.propagate(input=[x / float(res), y / float(res)])[0]
        g = int(out * 255)  # map activation [0, 1] to a gray level
        picture.setColor(x, y, Color(g, g, g))
Picture(picture, 10)

Out[7]:

*(image: 50×50 grayscale map of the network's output over the input space)*

If we instead round each output to 0 or 1:

In [9]:
res = 50  # resolution
picture = Picture(res, res)
for x in range(res):
    for y in range(res):
        out = net.propagate(input=[x / float(res), y / float(res)])[0]
        g = int(round(out) * 255)  # threshold: outputs round to 0 or 1
        picture.setColor(x, y, Color(g, g, g))
Picture(picture, 10)

Out[9]:

*(image: the same map, thresholded to black and white)*

For more details on neural network learning, please see *Neural Networks*.

I wonder why the "saddle" is always zero. There must be a bias toward outputting a zero rather than a one. What is that bias?
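
One quick way to probe this (a suggested experiment, not run here) is to look at the network's output at the center of the input space, where the saddle sits:

    # Probe the middle of the input space, where the "saddle" sits.
    # Repeating this after several retrainings would show whether the
    # center reliably falls below 0.5 (i.e., rounds to zero).
    print(net.propagate(input=[0.5, 0.5])[0])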
