#!/usr/bin/env python
# coding: utf-8

# ## SYDE 556/750: Simulating Neurobiological Systems
#
# Accompanying Readings: Chapter 1

# In[2]:

from IPython.display import YouTubeVideo
YouTubeVideo('U_Q6Xjz9QHg', width=720, height=400, loop=1, autoplay=0, playlist='U_Q6Xjz9QHg')


# # Overall Goal
#
# - Building brains!
# - Why?
#     - To figure out how brains work (health applications)
#     - To apply this knowledge to building systems (AI applications)

# # Administration
#
# - Course website: [http://compneuro.uwaterloo.ca/courses/syde-750.html](http://compneuro.uwaterloo.ca/courses/syde-750.html)
# - Contact information
#     - Chris Eliasmith: celiasmith@uwaterloo.ca
# - Course times: Mon & Wed 9:00a-10:20a (10:30a-11:20a Wed for 750)
# - Location: Mon: E5-6004, Wed: E7-5343
# - Office hours: by appointment

# ## Coursework
#
# - Four assignments (60%)
#     - 20%, 20%, 10%, 10%
#     - About two weeks for each assignment
#     - Everyone writes their own code, generates their own graphs, and writes their own answers
# - Final project (40%)
#     - Make a model of some neural system
#     - For 556 students, this can be an extension of something seen in class
#     - For 750 students, this must be more of a research project with more novelty
#     - [ideas](http://compneuro.uwaterloo.ca/courses/syde-750/syde-556-possible-projects.html)
#     - Get your idea approved via email before Reading Week (Feb 18)

# ## Schedule
#
# | Week | Reading | Topic | Assignments |
# |------|---------|-------|-------------|
# | Jan 7 | Chpt 1 | Introduction | |
# | Jan 9, 14 | Chpt 2, 4 | Neurons, Population Representation | #1 posted |
# | Jan 16, 21 | Chpt 4 | Temporal Representation | |
# | Jan 23, 28, 30 | Chpt 5, 6 | Feedforward Transformations | #1 due (23rd at midnight); #2 posted |
# | Feb 4, 6, 11 | Chpt 6, 8 | Dynamics | |
# | Feb 13, 25 | Chpt 7 | Analysis of Representations | #2 due (15th at midnight); #3 posted |
# | Feb 18, 20 | | *Reading Week* | |
# | Feb 27, Mar 4 | Provided | Symbols | |
# | Mar 6, 11 | Chpt 8 | Memory | #3 due (6th at midnight) |
# | Mar 13, 18 | Provided | Action Selection | #4 due (20th at midnight) |
# | Mar 20, 25 | Chpt 9 | Learning | |
# | Mar 27 | | Conclusion | |
# | Apr 1, Apr 3 | | Project Presentations | |
# ## To Do:
# - Get the textbook (Eliasmith & Anderson, 2003, Neural Engineering) and start reading.
# - Be able to run [Jupyter](http://jupyter.org/) (old: ipython) notebooks
#     - [Anaconda](http://continuum.io/downloads) is probably simplest
# - Decide what language you'll do your assignments in (Python highly recommended; you need permission for others)
# - Start thinking about a project... already!

# # Focus of the Course

# ## Theoretical Neuroscience
#
# - How does the mind work?
#     - The most complex and most interesting system humanity has ever studied
#     - Why study anything else?
# - How should we go about studying it?
#     - What techniques/tools?
#     - How do we know if we're making progress?
#     - How do we deal with the complexity?

# ## A Useful Analogy
#
# - What is theoretical neuroscience?
#     - A useful analogy is to theoretical physics
# - Similarities
#     - Methods are similar
#     - Goals are similar (quantification)
# - Differences
#     - Central question: "What exists?" vs. "Who are we?"
#     - More simulation in biology (because of nonlinearities)

# # Neural Modelling
#
# - Let's build it
#     - Specify the theory in enough detail that this is possible
#     - Tends to get complex, so we need computer simulation
# - Bring together levels and modelling methods
#     - Single neuron models (levels of detail; e.g. spikes, spatial structure, various ion channels, etc.)
#     - Small network models (levels of detail; e.g. spiking neurons, rate neurons, mean fields, etc.)
#     - Large network/cognitive models (levels of detail; e.g. biophysics, pure computation, anatomy, etc.)
#     - Ideally, allow all levels of detail below any higher level to be included as desired.
#     - The 'correct' level depends on the questions being asked.

# ## Problems with current approaches

# ### Large-scale neural models (e.g. [Human Brain Project](https://www.humanbrainproject.eu/), [Synapse Project](http://www.research.ibm.com/cognitive-computing/neurosynaptic-chips.shtml#fbid=qMgN8YTZy5Y), etc.)
# - Lack of function or behaviour
#     - Can't compare to psychological data
# - Assumes a canonical algorithm repeats everywhere
#     - e.g., measurements from one small part (hippocampus) are taken as valid everywhere
#     - But different parts of the brain are very different (connectivity, cell types, inputs/outputs)
# - Expects intelligence to 'emerge'
#     - Unclear what 'emergence' means, how it will work, or what it explains
#     - Wishful thinking?

# ### Cognitive models (e.g. [ACT-R](http://act-r.psy.cmu.edu/), [Soar](http://soar.eecs.umich.edu/), etc.)
# - Disconnected from neuroscience, so can't compare to neural data
#     - Trying to map components of the model to brain areas
#     - When a component is active, maybe neurons in that area are more active?
# - No "bridging laws"
#     - Like having rules of chemistry that never mention that it's all built out of atoms and electrons
# - No constraints on the equations
#     - Just anything that can be written down
#     - Many possibilities; hard to figure out what matches human data best
# - Maybe that's okay
#     - Do we understand the brain well enough to make this connection and constrain theories?
#     - When understanding a word processor, do we worry about transistors?
# # The Brain
#
# - 2 kg (2% of body weight)
# - 20 watts (25% of the body's power consumption)
# - Area: 4 sheets of paper
# - Neurons: 100 billion (150,000 per $mm^2$)

# In[3]:

from IPython.display import YouTubeVideo
YouTubeVideo('jHxyP-nUhUY', width=500, height=400, autoplay=0, start=60)


# ## Brain structures
# - Lots of visually obvious structure
# - Lots of Greek and Latin names to remember
#     - locus coeruleus, thalamus, amygdala, hypothalamus, substantia nigra, etc.

# In[20]:

from IPython.display import HTML
# Figure placeholder: only the caption of the original embedded figure survives.
HTML('<p><em>A Neuron</em></p>')

# # Neurons in the brain
#
# - 100 billion
# - 100s or 1000s of distinct types (distinguished via anatomy and/or physiology)
# - Axon length: from $10^{-4}$ to $5$ m
# - Each neuron: 500-200,000 inputs and outputs
# - 72 km of axons
# - Communication: 100s of different neurotransmitters

# ## Neuron communication: Synapses

# ## What it really looks like

# ## What it really really looks like

# In[5]:

from IPython.display import YouTubeVideo
YouTubeVideo('F37kuXObIBU', width=720, height=500, start=8*60+35)


# # Kinds of data from the brain

# ## Lesion studies
# - What are the effects of damaging different parts of the brain?
#     - Occipital cortex: blindness (really blindsight)
#     - Inferior frontal gyrus: can't speak (Broca's area)
#     - Posterior superior temporal gyrus: can't understand speech (Wernicke's area)
#     - Fusiform gyrus: can't recognize faces (and other visually complex objects)
#     - Ventral medial prefrontal cortex: moral judgement??? (Phineas Gage)
#     - etc., etc.

# ## fMRI
#
# - Functional Magnetic Resonance Imaging
# - Measures blood oxygenation levels in the brain
#     - shows the difference between two tasks
#     - averaged over many trials and patients
# - Measured while performing tasks
#     - ~4 seconds between scans
#     - some attempts at going faster, but blood vessels don't change much faster than this
# - Shows where energy is being used in the brain
#     - equivalent to figuring out how a CPU works by measuring temperature
#     - a bit more fine-grained than lesion studies
# - Good spatial resolution, low temporal resolution
# - [Neurosynth](http://neurosynth.org/)

# ## EEG
#
# - Electrical activity at the scalp
# - Large-scale communication between areas
# - High time resolution, low spatial resolution

# ## Single cell recording
# - Place electrodes (one or many) into the brain and record
#     - not necessarily right at a neuron
# - Pick up local electrical potentials
#     - you can hear neural 'spikes'
# - High temporal resolution, but only one (or a few) cells

# In[1]:

from IPython.display import YouTubeVideo
YouTubeVideo('KE952yueVLA', width=640, height=390)


# ## Multielectrode recordings
# - Put 'tetrodes' or multi-electrode arrays into the brain
# - Post-processing:
#     - "Spike sorting"
#     - Local field potentials (LFPs)
# - High temporal resolution, max ~100 cells

# In[6]:

from IPython.display import YouTubeVideo
YouTubeVideo('lfNVv0A8QvI', width=640, height=390)


# ## Calcium Imaging
# - Use indicators that glow (fluoresce) when Ca2+ ions bind
#     - Ca2+ influx happens a lot during neural activity and spike generation
# - Good spatial and good temporal resolution
#
# - E.g., in a fish embryo

# In[7]:

from IPython.display import YouTubeVideo
YouTubeVideo('DGBy-BGiZIM', width=640, height=360)


# - In a stalking fish

# In[8]:

from IPython.display import YouTubeVideo
YouTubeVideo('CpejbZ-XEyM', width=640, height=360)


# ## Optogenetics
# - Allows stimulation of, and recording from, select parts of the brain
#     - only the parts expressing light-sensitive proteins are stimulated
# - High spatial and temporal resolution (but local)

# In[9]:

from IPython.display import YouTubeVideo
YouTubeVideo('v7uRFVR9BPU', width=640, height=390)


# # What do we know so far?
#
# - Lots of details
#     - Data: "The proportion of type A neurons in area X is Y"
#     - Conclusion: "Therefore, the proportion of type A neurons in area X is Y".
# - Hard to get a big picture
#     - No good methods for generalizing from data
# - "Data-rich and theory-poor" (Churchland & Sejnowski, 1994; still true)
#     - Need some way to connect these details
#     - Need a unifying theory

# # Recall: Neural Modelling
#
# - What I cannot create, I do not understand
# - Build a computer simulation
#     - Do to neuroscience what Newton did to physics
#     - Too complex to be analytically tractable, so use computer simulation
# - Can we use this to connect the levels?

# ## Single neuron simulation
#
# - Hodgkin & Huxley, 1952
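# The Hodgkin-Huxley model is a set of four coupled differential equations fit to data
# from the squid giant axon.  As a rough, hedged sketch of what "simulating a single
# neuron" involves, the cell below instead steps a much simpler spiking model (a leaky
# integrate-and-fire neuron) forward in time; the parameter values are illustrative
# choices, not numbers from the readings.

# In[ ]:

import numpy as np

# Leaky integrate-and-fire (LIF) neuron driven by a constant input current:
# the membrane voltage integrates the input with a leak, and when it crosses
# threshold we record a spike, reset the voltage, and hold it for a refractory period.
dt = 0.001         # simulation time step (s)
t_final = 1.0      # total simulated time (s)
tau_rc = 0.02      # membrane time constant (s) -- illustrative value
tau_ref = 0.002    # refractory period (s) -- illustrative value
v_threshold = 1.0  # normalized firing threshold
J = 1.5            # constant (normalized) input current

v = 0.0
refractory = 0.0
spike_times = []
for step in range(int(t_final / dt)):
    if refractory > 0:
        refractory -= dt                   # still recovering from the last spike
        continue
    v += dt * (J - v) / tau_rc             # leaky integration of the input
    if v > v_threshold:
        spike_times.append(step * dt)      # record the spike time
        v = 0.0                            # reset the membrane voltage
        refractory = tau_ref

print("spike count over %g s: %d" % (t_final, len(spike_times)))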
# ## Millions of neurons

# In[10]:

from IPython.display import YouTubeVideo
YouTubeVideo('_UFOSHZ22q4', width=600, height=400, start=60)


# ## Billions of neurons
#
# - Simplify the neuron model and you can run more of them

# In[11]:

from IPython.display import YouTubeVideo
YouTubeVideo('WmChhExovzY', width=600, height=400)


# # The controversy
#
# - What level of detail for the neurons? How should they be connected?
# - IBM SyNAPSE project (Dharmendra Modha)
#     - Billions of neurons, but very simple models
#     - Randomly connected
#     - 2009: "cat"-scale brain (1 billion neurons)
#     - 2012: "human"-scale brain (500 billion neurons; 5x human!)
#     - Called a ["hoax and PR stunt"](http://spectrum.ieee.org/tech-talk/semiconductors/devices/blue-brain-project-leader-angry-about-cat-brain) by:
# - Blue Brain (Henry Markram)
#     - Much more detailed neurons
#     - Statistically connected (i.e., similar to hippocampus)
# - How much detail is enough?
#     - How could we know?

# ## What actually matters...
# - Connecting brain models to *behaviour*
#     - How can we build models that actually do something?
#     - How should we connect realistic neurons so they work together?

# # The Neural Engineering Framework
#
# - Our attempt
#     - Probably wrong, but got to start somewhere
# - Three principles
#     - Representation
#     - Transformation
#     - Dynamics
# - Building behaviour out of detailed low-level components

# ## Representation
#
# - How do neurons represent information? (What is the neural code?)
#     - What is the mapping between a value to be stored and the activity of a group of neurons?
# - Examples:
#     - Edge detection in the retina
#     - Place cells
# - Every group of neurons can be thought of as representing something
#     - Each neuron has some preferred value(s)
#     - Neurons fire more strongly the closer the value is to that preferred value
#     - Values are *vectors*

# ## Transformation
#
# - Connections compute functions on those vectors
# - Activity of one group of neurons causes another group to fire
#     - One group may represent $x$, connected to another group representing $y$
#     - Whatever firing pattern we get in $y$ due to $x$ is a function $y = f(x)$
# - We can find which class of functions is well approximated this way
#     - Puts limits on the algorithms we can implement with neurons

# ## Dynamics
#
# - Recurrent connections (feedback)
# - This turns out to let us compute functions of the form:
#     - ${dx \over dt} = f(x, u)$
#     - $x$ is what the neurons represent, $u$ is the input (from other neurons), and $f()$ is the transformation
# - Great for implementing all of control theory (i.e., dynamical systems)
# - Example:
#     - memory: ${dx \over dt} = u$
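# As a rough sketch (not the course's own code) of how these three principles fit
# together, the cell below represents a 1-D value with a population of simplified
# rectified-linear rate neurons, solves a least-squares problem for decoders that
# compute a chosen function of the represented value, and then steps the memory
# example ${dx \over dt} = u$ forward in time.  The tuning-curve shape and all
# parameter values are illustrative assumptions.

# In[ ]:

import numpy as np

rng = np.random.RandomState(0)
n_neurons = 50
encoders = rng.choice([-1, 1], n_neurons)     # each neuron's preferred direction
gains = rng.uniform(0.5, 2.0, n_neurons)
biases = rng.uniform(-1.0, 1.0, n_neurons)

def rates(x):
    """Principle 1 (representation): firing rates of the population for value(s) x."""
    x = np.atleast_1d(x)
    return np.maximum(0.0, gains * np.outer(x, encoders) + biases)

# Principle 2 (transformation): solve for decoders that read a desired function of x
# -- here f(x) = x**2 -- back out of the population's firing rates.
x_samples = np.linspace(-1, 1, 200)
A = rates(x_samples)                          # activity matrix: samples x neurons
target = x_samples ** 2
decoders = np.linalg.lstsq(A, target, rcond=None)[0]
print("mean squared decoding error:", np.mean((A @ decoders - target) ** 2))

# Principle 3 (dynamics): recurrent connections let the network implement
# dx/dt = f(x, u).  For the memory example f(x, u) = u, the represented value
# integrates its input and holds it after the input turns off.  (In the NEF the
# recurrent connection is solved for in the same way as the decoders above.)
dt, x = 0.001, 0.0
for step in range(1000):
    u = 1.0 if step < 500 else 0.0            # input on for the first 0.5 s, then off
    x = x + dt * u                            # dx/dt = u
print("value stored after the input turns off:", x)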
# # Examples
#
# - This approach gives us a neural compiler
#     - Given a quantitative description of a behaviour (e.g., an algorithm), you can solve for the connections between neurons that will approximate that behaviour
# - Works for a wide variety of neuron models
# - Number of neurons affects accuracy
# - Neuron properties influence timing and computation
# - Can make predictions (e.g., rat head direction and path integration)

# ## Vision: character recognition

# In[19]:

from IPython.display import YouTubeVideo
YouTubeVideo('2j9rRHChtXk', width=640, height=390)


# ## Vision: >1000 categories

# In[18]:

from IPython.display import YouTubeVideo
YouTubeVideo('VWUhCzUDZ70', width=640, height=390)


# ## Problem solving: Tower of Hanoi

# In[13]:

from IPython.display import YouTubeVideo
YouTubeVideo('sUvHCs5y0o8', width=640, height=360)


# ## Spaun: digit recognition

# In[14]:

from IPython.display import YouTubeVideo
YouTubeVideo('f6Ul5TYK5-o', width=640, height=360)


# ## Spaun: copy drawing

# In[15]:

from IPython.display import YouTubeVideo
YouTubeVideo('WNnMhF7rnYo', width=640, height=390)


# ## Spaun: addition by counting

# In[16]:

from IPython.display import YouTubeVideo
YouTubeVideo('mP7DX6x9PX8', width=640, height=390)


# ## Spaun: pattern completion

# In[17]:

from IPython.display import YouTubeVideo
YouTubeVideo('Q_LRvnwnYp8', width=640, height=390)


# # Benefits
#
# - No one else can do this
# - New ways to test theories (neurological constraints)
# - Suggests different types of algorithms
# - Potential medical applications
# - New ways of understanding the mind and who we are