- Hofstadter et al. Copycat and Metacat
- Biological Metaphors: Neural and Genetic models
- Can a robot do something it wasn't trained to do?
- Can a system invent its own language?

- The early years in AI were dominated by logic, words, and proofs/guarantees
- The last couple of decades have seen a rise in numbers and statistics
- Many AI and Cognitive scientists desire to model a phenomenon at the level of the phenomenon
- Models whose behavior "emerges" from lower-level activity are hard to create, and unpredictable

I believe that emergent models will exhibit all of the important properties that we are interested in. They will have thoughts, beliefs, awareness, intentions, desires, goals, etc. They will be conscious.

An emergent model of making analogies.

If $abc$ goes to $abd$, what does $mrrjjj$ go to?
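The interesting answers hinge on how you perceive the strings. A literal rule ("replace the last letter with its successor") gives *mrrjjk*; seeing *jjj* as a group gives *mrrkkk*; Metacat's celebrated answer *mrrjjjj* instead maps letter-successorship onto group *lengths*. Here is a toy sketch of the two rigid rules (my own illustration, not Copycat/Metacat code), which shows what the clever answer escapes:

```python
def successor(ch):
    """Next letter in the alphabet (wrapping z -> a)."""
    return chr((ord(ch) - ord('a') + 1) % 26 + ord('a'))

def apply_literal_rule(s):
    """Replace only the final letter with its successor."""
    return s[:-1] + successor(s[-1])

def apply_group_rule(s):
    """Treat the final run of repeated letters as one 'letter' and
    replace the whole group with its successor."""
    last = s[-1]
    i = len(s)
    while i > 0 and s[i - 1] == last:
        i -= 1
    run = len(s) - i
    return s[:i] + successor(last) * run

print(apply_literal_rule("mrrjjj"))  # mrrjjk -- the literal answer
print(apply_group_rule("mrrjjj"))    # mrrkkk -- seeing jjj as a group
```

Neither rule can produce *mrrjjjj*; that answer requires noticing an abstract correspondence no fixed rule encodes, which is the point of the emergent approach.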

In [16]:

```
calico.Image("http://upload.wikimedia.org/wikipedia/en/d/d4/Metacat_demo_abc_abd_mrrjjj_mrrjjjj.jpg")
```

Out[16]:

Metacat contributions:

- Perception
- Experience
- Concepts
- Memory
- Self-watching

For more information and a runnable model, see:

For these examples, we will use one of the simplest neural network models: the three-layer, feed-forward network trained by backpropagation. In the early years this was called the Multilayer Perceptron.

In [6]:

```
calico.Image("http://upload.wikimedia.org/wikipedia/commons/thumb/e/e4/Artificial_neural_network.svg/500px-Artificial_neural_network.svg.png")
```

Out[6]:

Terms:

- Node/unit
- Layer (the above has three)
- Activation value
- Weight/connection
- Target (desired) output provided by "teacher"
- Actual output
- Error
- Online or batch training
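The terms above fit together in a single forward pass. Here is a plain-Python sketch (my own illustration with made-up weights, not a library API): each unit's activation value is a squashed, weighted sum of the activations feeding into it, and the error is the gap between the teacher's target and the actual output.

```python
import math

def sigmoid(x):
    """Squash a weighted sum into an activation value in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One layer: each unit sums weight * activation, then squashes."""
    return [sigmoid(sum(w * a for w, a in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Made-up weights: 2 input units -> 2 hidden units -> 1 output unit.
hidden = layer([0.0, 1.0],
               weights=[[0.5, -0.4], [0.3, 0.8]], biases=[0.1, -0.2])
actual = layer(hidden,
               weights=[[1.2, -0.7]], biases=[0.05])

target = [1.0]                  # desired output, provided by the "teacher"
error = target[0] - actual[0]   # the quantity learning tries to reduce
```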

It has been proven that for any finite function $f$ mapping inputs to outputs,

$ f(x) = y $

there exists a set of weights that maps each input $x$ to the desired output $y$.

This means that you can encode any given Turing Machine as a neural network (neural networks are Turing Machine equivalent).

Limitations:

- Inputs and Outputs must be represented as numbers
- There is no guarantee that you can find the set of weights that provide the desired function

1943 - McCulloch and Pitts, "A Logical Calculus of Ideas Immanent in Nervous Activity." Posited that there was an equivalency between first order logic and simple neurons with a threshold.

1949 - Hebb, "The Organization of Behavior." Proposed the idea that when two neurons fire simultaneously, the connection between them is strengthened.
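Hebb's proposal reduces to a one-line update rule. A minimal sketch (the learning rate and activation values here are illustrative, not from the book): when pre- and post-synaptic units fire together, the weight between them grows; if either is silent, it is left alone.

```python
def hebbian_update(weight, pre, post, lr=0.1):
    """Hebb's rule: delta-w = lr * pre * post, so only co-active
    units strengthen their connection."""
    return weight + lr * pre * post

w = 0.2
w = hebbian_update(w, pre=1.0, post=1.0)  # both fire: w grows to 0.3
w = hebbian_update(w, pre=1.0, post=0.0)  # post silent: w unchanged
```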

1962 - Rosenblatt, "Principles of Neurodynamics." Proposed the Perceptron model.

1969 - Minsky and Papert, "Perceptrons: An Introduction to Computational Geometry." Proved that a single-layer Perceptron (inputs wired directly to output, no hidden layer) could not compute some very simple functions (XOR).

1969 - Bryson, Denham, and Dreyfus, "Optimal programming problems with inequality constraints." First use of a backpropagation of error learning algorithm.

1975 - Holland, "Adaptation in natural and artificial systems." Describes the genetic algorithm.

1986 - Rumelhart and McClelland, "Parallel Distributed Processing: Explorations in the microstructure of cognition."

Backprop training method:

- Requires a "teacher" (we provide the desired output)
- The error between actual output and desired output drives learning
- Can work on any number of layers
- Learning can take a long time
- Given enough weights and time, any function can be learned
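The steps above can be sketched in plain numpy (an illustration only, not the conx library used below): a 2-5-1 sigmoid network, trained in batch mode on XOR, with the error between actual and target outputs driving the weight updates.

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
T = np.array([[0], [1], [1], [0]], dtype=float)              # targets

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 1, (2, 5)), np.zeros(5)   # input -> hidden
W2, b2 = rng.normal(0, 1, (5, 1)), np.zeros(1)   # hidden -> output

mse_before = np.mean((T - sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)) ** 2)

lr = 1.0
for epoch in range(5000):              # learning can take a long time
    H = sigmoid(X @ W1 + b1)           # hidden activations
    Y = sigmoid(H @ W2 + b2)           # actual output
    dY = (T - Y) * Y * (1 - Y)         # output delta: error * derivative
    dH = (dY @ W2.T) * H * (1 - H)     # error backpropagated to hidden
    W2 += lr * H.T @ dY
    b2 += lr * dY.sum(axis=0)
    W1 += lr * X.T @ dH
    b1 += lr * dH.sum(axis=0)

mse_after = np.mean((T - Y) ** 2)
print(np.round(Y.ravel(), 2))          # typically near [0, 1, 1, 0]
```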

Input 1 | Input 2 | Output |
---|---|---|
0 | 0 | 0 |
0 | 1 | 1 |
1 | 0 | 1 |
1 | 1 | 0 |

In [1]:

```
from ai.conx import Network
```

In [2]:

```
net = Network()
net.addLayers(2, 5, 1)
```

In [3]:

```
net.setInputsAndTargets([[0, 0], [0, 1], [1, 0], [1, 1]],
[[0], [1], [1], [0]])
# net.setTolerance(.1)
```

In [4]:

```
net.train()
```

The network can learn what it was trained on, even though this is not a linearly separable task (e.g., you cannot draw a single straight line separating the inputs that map to 1 from those that map to 0).

In [5]:

```
net.propagate(input=[1, 0])[0]
```

Out[5]:

Not only can the network learn the desired function, but it can learn to interpolate other values that it wasn't trained on.

In [6]:

```
net.propagate(input=[0.1, 0.1])[0]
```

Out[6]:

In [7]:

```
from Graphics import Picture, Color

res = 50  # resolution
picture = Picture(res, res)
for x in range(res):
    for y in range(res):
        # sample the network's output across the unit square
        out = net.propagate(input=[x/res, y/res])[0]
        g = int(out * 255)  # map activation [0, 1] to gray [0, 255]
        picture.setColor(x, y, Color(g, g, g))
Picture(picture, 10)
```

Out[7]:

If we look at the outputs as they are considered 0 or 1:

In [9]:

```
res = 50  # resolution
picture = Picture(res, res)
for x in range(res):
    for y in range(res):
        out = net.propagate(input=[x/res, y/res])[0]
        g = int(round(out) * 255)  # threshold each output to 0 or 1
        picture.setColor(x, y, Color(g, g, g))
Picture(picture, 10)
```

Out[9]:

For more details on neural network learning, please see Neural Networks.

I wonder why the "saddle" is always zero. There must be a bias towards outputting a zero rather than a one. What could that bias be?
