#!/usr/bin/env python
# coding: utf-8

# ## SYDE556/750 Assignment 1: Representation in Populations of Neurons
# 
# - Due Date: January 28th at midnight
# - Total marks: 20 (20% of final grade)
# - Late penalty: 1 mark per day
# - It is recommended that you use Python.
# - *Do not use or refer to any code from Nengo*

# ## 1) Representation of Scalars
# 
# ### 1.1) Basic encoding and decoding
# 
# Write a program that implements a neural representation of a scalar value $x$. For the neuron model, use a rectified linear neuron model ($a=\max(J,0)$). Choose the maximum firing rates randomly (uniformly distributed between 100Hz and 200Hz at $x=1$), and choose the $x$-intercepts randomly (uniformly distributed between -0.95 and 0.95). Use those values to compute the corresponding $\alpha$ and $J^{bias}$ parameters for each neuron. The encoders $e$ are randomly chosen and are either +1 or -1 for each neuron. Go through the following steps:
# a) [1 mark] Plot the neuron responses $a_i$ for 16 randomly generated neurons. (See Figure 2.4 in the book for an example, but with a different neuron model and a different range of maximum firing rates.)
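# One possible setup, as a minimal sketch (assuming `numpy` and `matplotlib`; the seed, the sampling grid, and the reading of "maximum rate at $x=1$" as the rate where $e \cdot x = 1$, with intercepts measured along each encoder, are my own choices):

# In[ ]:

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=0)  # fixed seed so the plots are reproducible

N = 16
x = np.linspace(-1, 1, 41)

a_max = rng.uniform(100, 200, N)     # maximum rates (Hz) where e*x = 1
x_int = rng.uniform(-0.95, 0.95, N)  # x-intercepts along each encoder
e = rng.choice([-1.0, 1.0], N)       # encoders

# Solve the two constraints a(e*x = 1) = a_max and a(e*x = x_int) = 0
# for each neuron's gain alpha and bias J_bias:
alpha = a_max / (1 - x_int)
J_bias = -alpha * x_int

# Rectified linear responses; rows are x samples, columns are neurons
A = np.maximum(alpha * (x[:, None] * e) + J_bias, 0)

plt.plot(x, A)
plt.xlabel('$x$')
plt.ylabel('firing rate (Hz)')
plt.show()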
# b) [1 mark] Compute the optimal decoders $d_i$ for those 16 neurons (as shown in class). Report their values.
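# A sketch, continuing from the cell above; `lstsq` solves the same least-squares problem as forming $\Gamma$ and $\Upsilon$ explicitly, but is numerically better behaved:

# In[ ]:

# Solve A @ d ~= x for the decoders (A is samples x neurons)
d = np.linalg.lstsq(A, x, rcond=None)[0]
print(d)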
# c) [1 mark] Compute and plot $\hat{x}=\sum_i d_i a_i$. Overlay the line $y=x$ on the plot. (See Figure 2.7 for an example.) Make a separate plot of $x-\hat{x}$ to see what the error looks like. Report the Root Mean Squared Error (RMSE) value.
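# One way to produce the plots and the error value, continuing from the cells above:

# In[ ]:

x_hat = A @ d  # decoded estimate at every sample point

plt.plot(x, x, label='$y=x$')
plt.plot(x, x_hat, label='$\\hat{x}$')
plt.xlabel('$x$')
plt.legend()

plt.figure()
plt.plot(x, x - x_hat)
plt.xlabel('$x$')
plt.ylabel('$x - \\hat{x}$')
plt.show()

print('RMSE:', np.sqrt(np.mean((x - x_hat) ** 2)))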
# d) [1 mark] Now try decoding under noise. Add random normally distributed noise to $a$ and decode again. The noise is a random variable with mean 0 and standard deviation equal to 0.2 times the maximum firing rate of all the neurons. Resample this variable for every different $x$ value and every different neuron. Create all the same plots as in part c). Report the RMSE value.
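# A sketch of the noisy decode, reusing the noise-free decoders $d$ from part b):

# In[ ]:

sigma = 0.2 * np.max(a_max)                  # 0.2 x the largest maximum rate
A_noisy = A + rng.normal(0, sigma, A.shape)  # fresh sample per (x, neuron) pair
x_hat_noisy = A_noisy @ d

plt.plot(x, x, label='$y=x$')
plt.plot(x, x_hat_noisy, label='$\\hat{x}$ (noisy)')
plt.legend()

plt.figure()
plt.plot(x, x - x_hat_noisy)
plt.ylabel('$x - \\hat{x}$')
plt.show()

print('RMSE (noisy a, noise-free d):', np.sqrt(np.mean((x - x_hat_noisy) ** 2)))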
# e) [1 mark] Recompute the decoders $d_i$ taking noise into account (as shown in class). Show how these decoders behave when decoding both with and without noise added to $a$ by making the same plots as in c) and d). Report the RMSE for all cases.
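# A sketch of the noise-aware decoders, assuming the form shown in class is the usual regularization that adds $\sigma^2$ to the diagonal of $\Gamma$ (continuing from the cells above):

# In[ ]:

S = len(x)
Gamma = A.T @ A / S + sigma ** 2 * np.eye(N)  # noise variance on the diagonal
Upsilon = A.T @ x / S
d_noise = np.linalg.solve(Gamma, Upsilon)

for label, A_dec in [('noise-free a', A), ('noisy a', A_noisy)]:
    x_hat_reg = A_dec @ d_noise
    plt.figure()
    plt.plot(x, x - x_hat_reg)
    plt.ylabel('$x - \\hat{x}$')
    plt.title(label + ', noise-aware d')
    print('RMSE (%s, noise-aware d): %.4f'
          % (label, np.sqrt(np.mean((x - x_hat_reg) ** 2))))
plt.show()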
# f) [1 mark] Show a 2x2 table of the four RMSE values reported in parts c), d), and e). The table should show the effect of adding noise to $a$ and of whether or not the decoders $d$ are computed taking noise into account. Write a few sentences commenting on what the table shows.
# ### 1.2) Exploring sources of error
# 
# Use the program you wrote in 1.1 to examine the sources of error in the representation.
# a) [2 marks] Plot the error due to distortion $E_{dist}$ and the error due to noise $E_{noise}$ as a function of $N$, the number of neurons, using the two-part error expression (equation 2.9 in the book). Generate two different loglog plots (one for each type of error) with $N$ values of [4, 8, 16, 32, 64, 128, 256, 512] (and more, if you would like). For each $N$ value, do at least 5 runs and average the results. For each run, different $\alpha$, $J^{bias}$, and $e$ values should be generated for each neuron. Compute $d$ under noise, with $\sigma$ equal to 0.1 times the maximum firing rate. Show visually that the errors are proportional to $1/N$ or $1/N^2$ (see Figure 2.6 in the book).
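# A self-contained sketch of the experiment (the sample count, seed, and the helper name `simulate` are my own choices); the dashed lines are reference slopes for the $1/N$ and $1/N^2$ proportionality:

# In[ ]:

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=1)

def simulate(N, noise_ratio=0.1, runs=5, S=41):
    """Average E_dist and E_noise (eq. 2.9) over random populations of size N."""
    x = np.linspace(-1, 1, S)
    E_dist, E_noise = 0.0, 0.0
    for _ in range(runs):
        a_max = rng.uniform(100, 200, N)
        x_int = rng.uniform(-0.95, 0.95, N)
        e = rng.choice([-1.0, 1.0], N)
        alpha = a_max / (1 - x_int)
        J_bias = -alpha * x_int
        A = np.maximum(alpha * (x[:, None] * e) + J_bias, 0)
        sigma = noise_ratio * np.max(a_max)
        Gamma = A.T @ A / S + sigma ** 2 * np.eye(N)
        d = np.linalg.solve(Gamma, A.T @ x / S)
        E_dist += np.mean((x - A @ d) ** 2) / runs     # distortion term
        E_noise += sigma ** 2 * np.sum(d ** 2) / runs  # noise term
    return E_dist, E_noise

Ns = np.array([4, 8, 16, 32, 64, 128, 256, 512])
errors = np.array([simulate(N) for N in Ns])

plt.loglog(Ns, errors[:, 0], label='$E_{dist}$')
plt.loglog(Ns, 1.0 / Ns ** 2, '--', label='$1/N^2$ (reference)')
plt.xlabel('$N$')
plt.legend()

plt.figure()
plt.loglog(Ns, errors[:, 1], label='$E_{noise}$')
plt.loglog(Ns, 1.0 / Ns, '--', label='$1/N$ (reference)')
plt.xlabel('$N$')
plt.legend()
plt.show()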
# b) [1 mark] Repeat part a) with $\sigma$ equal to 0.01 times the maximum firing rate.
# c) [1 mark] What does the difference between the graphs in a) and b) tell us about the sources of error in neural populations?
# ### 1.3) Leaky Integrate-and-Fire neurons
# 
# Change the code to use the LIF neuron model:
# 
# $$
# a_i = \begin{cases}
# \frac{1}{\tau_{ref}-\tau_{RC}\ln\left(1-\frac{1}{J}\right)} &\mbox{if } J>1 \\
# 0 &\mbox{otherwise}
# \end{cases}
# $$
# a) [1 mark] Generate the same plot as in 1.1a). Use $\tau_{ref}=0.002$s and $\tau_{RC}=0.02$s.
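# A sketch of the LIF model and the part a) tuning curves (same interpretation of maximum rates and intercepts as in 1.1; the seed is arbitrary). The gain and bias come from inverting the rate equation for $J$:

# In[ ]:

import numpy as np
import matplotlib.pyplot as plt

tau_ref, tau_rc = 0.002, 0.02

def lif_rate(J):
    """LIF steady-state firing rate; zero wherever J <= 1."""
    J = np.asarray(J, dtype=float)
    a = np.zeros_like(J)
    m = J > 1
    a[m] = 1.0 / (tau_ref - tau_rc * np.log(1 - 1.0 / J[m]))
    return a

rng = np.random.default_rng(seed=2)
N = 16
x = np.linspace(-1, 1, 41)
a_max = rng.uniform(100, 200, N)
x_int = rng.uniform(-0.95, 0.95, N)
e = rng.choice([-1.0, 1.0], N)

# Inverting the rate equation: J(a) = 1 / (1 - exp((tau_ref - 1/a) / tau_rc)).
# Require J = J_max where e*x = 1 and J = 1 (threshold) at the x-intercept:
J_max = 1.0 / (1 - np.exp((tau_ref - 1.0 / a_max) / tau_rc))
alpha = (J_max - 1) / (1 - x_int)
J_bias = 1 - alpha * x_int

A = lif_rate(alpha * (x[:, None] * e) + J_bias)
plt.plot(x, A)
plt.xlabel('$x$')
plt.ylabel('firing rate (Hz)')
plt.show()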
# b) [2 marks] Generate the same four plots as in 1.1e), and report the RMSE both with and without noise.
# ## 2) Representation of Vectors
# 
# ### 2.1) Vector tuning curves
# a) [1 mark] Plot the tuning curve of an LIF neuron whose 2D preferred direction vector is at an angle of $\theta=-\pi/4$, whose $x$-intercept is at the origin (0,0), and whose maximum firing rate is 100Hz.
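# A sketch, reusing `lif_rate` from 1.3 (the grid resolution and surface plot are my own choices; "$x$-intercept at the origin" is taken to mean the neuron reaches threshold, $J=1$, at $(0,0)$):

# In[ ]:

from mpl_toolkits.mplot3d import Axes3D  # registers the 3d projection on older matplotlib

theta_e = -np.pi / 4
e2 = np.array([np.cos(theta_e), np.sin(theta_e)])  # preferred direction vector

# Maximum rate 100 Hz where e.x = 1; threshold (J = 1) at the origin:
J_max = 1.0 / (1 - np.exp((tau_ref - 1.0 / 100) / tau_rc))
alpha, J_bias = J_max - 1, 1.0

g = np.linspace(-1, 1, 41)
X1, X2 = np.meshgrid(g, g)
A2 = lif_rate(alpha * (X1 * e2[0] + X2 * e2[1]) + J_bias)

ax = plt.figure().add_subplot(projection='3d')
ax.plot_surface(X1, X2, A2)
ax.set_xlabel('$x_1$')
ax.set_ylabel('$x_2$')
ax.set_zlabel('firing rate (Hz)')
plt.show()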
# b) [1 mark] Plot the tuning curve for the same neuron as in a), but only considering the points around the unit circle. This will be similar to Figure 2.8b in the book. Fit a curve of the form $A\cos(B\theta+C)+D$ to the tuning curve and plot it as well. What makes a cosine a good choice for this? Why does it differ from the ideal curve?
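# A sketch of the fit, assuming `scipy` is available for `curve_fit` (the initial guesses are my own):

# In[ ]:

from scipy.optimize import curve_fit

theta = np.linspace(-np.pi, np.pi, 100)
# Points on the unit circle: x = (cos(theta), sin(theta))
a_circle = lif_rate(alpha * (np.cos(theta) * e2[0] + np.sin(theta) * e2[1])
                    + J_bias)

def cos_model(th, A_, B, C, D):
    return A_ * np.cos(B * th + C) + D

p, _ = curve_fit(cos_model, theta, a_circle, p0=[40.0, 1.0, np.pi / 4, 20.0])
plt.plot(theta, a_circle, label='LIF rate')
plt.plot(theta, cos_model(theta, *p), '--', label='cosine fit')
plt.xlabel(r'$\theta$')
plt.ylabel('firing rate (Hz)')
plt.legend()
plt.show()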
# ### 2.2) Vector representation
# a) [1 mark] Generate a set of 100 random unit vectors uniformly distributed around the unit circle. These will be the encoders $e$ for 100 neurons. Plot these vectors with a quiver or line plot (i.e. not just points, but lines/arrows to the points).
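# A sketch using angles drawn uniformly on $[-\pi, \pi)$ (the seed is arbitrary); the quiver settings draw each vector at true length:

# In[ ]:

rng = np.random.default_rng(seed=3)
angles = rng.uniform(-np.pi, np.pi, 100)
E = np.column_stack([np.cos(angles), np.sin(angles)])  # 100 unit encoders

plt.quiver(np.zeros(100), np.zeros(100), E[:, 0], E[:, 1],
           angles='xy', scale_units='xy', scale=1, width=0.003)
plt.axis('equal')
plt.show()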
# b) [1 mark] Compute the optimal decoders. Use LIF neurons with the same properties as in question 1.3. When computing the decoders, take into account noise with $\sigma$ equal to 0.2 times the maximum firing rate. Plot the decoders. How do these decoding vectors compare to the encoding vectors?
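# A sketch, continuing from the cells above (the 400 evaluation points over the unit disc and the intercepts-along-encoder convention are my own choices):

# In[ ]:

# LIF parameters as in 1.3; intercepts measured along each neuron's encoder
a_max = rng.uniform(100, 200, 100)
x_int = rng.uniform(-0.95, 0.95, 100)
J_max = 1.0 / (1 - np.exp((tau_ref - 1.0 / a_max) / tau_rc))
alpha = (J_max - 1) / (1 - x_int)
J_bias = 1 - alpha * x_int

# Evaluation points filling the unit disc (sqrt makes them uniform by area)
S = 400
r = np.sqrt(rng.uniform(0, 1, S))
phi = rng.uniform(-np.pi, np.pi, S)
X = np.column_stack([r * np.cos(phi), r * np.sin(phi)])

A = lif_rate(alpha * (X @ E.T) + J_bias)   # S x 100 activity matrix
sigma = 0.2 * np.max(a_max)
Gamma = A.T @ A / S + sigma ** 2 * np.eye(100)
D = np.linalg.solve(Gamma, A.T @ X / S)    # 100 x 2 decoders

# The decoders are orders of magnitude shorter than the unit encoders,
# so quiver's autoscaling is left on to keep them visible.
plt.quiver(np.zeros(100), np.zeros(100), D[:, 0], D[:, 1])
plt.axis('equal')
plt.show()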
# c) [1 mark] Generate 20 random $x$ values throughout the unit circle (i.e. with different directions and radii). For each $x$ value, determine the neural activity $a$ for each of the 100 neurons. Now decode these values (i.e. compute $\hat{x}$) using the decoders from part b). Plot the original and decoded values on the same graph in different colours, and compute the RMSE.
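# A sketch, continuing from part b) (noise-free activities are decoded here; noisy activities could be substituted as in 1.1d):

# In[ ]:

# 20 random points in the unit disc (uniform by area)
r = np.sqrt(rng.uniform(0, 1, 20))
phi = rng.uniform(-np.pi, np.pi, 20)
X_test = np.column_stack([r * np.cos(phi), r * np.sin(phi)])

A_test = lif_rate(alpha * (X_test @ E.T) + J_bias)
X_hat = A_test @ D

plt.plot(X_test[:, 0], X_test[:, 1], 'o', label='$x$')
plt.plot(X_hat[:, 0], X_hat[:, 1], 'x', label='$\\hat{x}$')
plt.axis('equal')
plt.legend()
plt.show()

print('RMSE:', np.sqrt(np.mean((X_test - X_hat) ** 2)))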
# d) [2 marks] Repeat part c) but use the *encoders* as decoders. This is what Georgopoulos used in his original approach to decoding information from populations of neurons. Plot the decoded values this way and compute the RMSE. In addition, recompute the RMSE in both cases, but ignoring the magnitude of the decoded vectors (i.e. comparing directions only). What are the relative merits of these two approaches to decoding?
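# A sketch of the Georgopoulos-style decode and the direction-only errors (the helper `unit` is my own; "ignoring the magnitude" is taken to mean normalizing both vectors before comparing):

# In[ ]:

X_hat_enc = A_test @ E   # encoders used as decoders

def unit(V):
    """Scale each row to unit length (leaving zero rows alone)."""
    n = np.linalg.norm(V, axis=1, keepdims=True)
    return V / np.where(n == 0, 1, n)

plt.plot(X_test[:, 0], X_test[:, 1], 'o', label='$x$')
plt.plot(X_hat_enc[:, 0], X_hat_enc[:, 1], 'x', label='$\\hat{x}$ (encoders)')
plt.axis('equal')
plt.legend()
plt.show()

print('RMSE, encoders as decoders:',
      np.sqrt(np.mean((X_test - X_hat_enc) ** 2)))
print('direction-only RMSE, optimal decoders:',
      np.sqrt(np.mean((unit(X_test) - unit(X_hat)) ** 2)))
print('direction-only RMSE, encoders as decoders:',
      np.sqrt(np.mean((unit(X_test) - unit(X_hat_enc)) ** 2)))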
# In[ ]: