# SYDE 556/750: Simulating Neurobiological Systems

Terry Stewart

## Temporal Representation

• In the previous lecture, we covered basic representation
• take a vector $x$ and convert it into a bunch of neural activity values $a_i$, and then convert back to the original value $\hat{x}$.
• What happens if $x$ changes over time?
• i.e. if we have $x(t)$, how do we get $a_i(t)$?
• seems pretty easy:
• $a_i=G[\alpha e \cdot x + J^{bias}]$
• $a_i(t)=G[\alpha e \cdot x(t) + J^{bias}]$
• where $G[J]= {1 \over {\tau_{ref}-\tau_{RC}ln(1-{1 \over J})}}$
In [2]:
import syde556
import numpy

x = numpy.zeros(100)
for i in range(10):
    x[i*10:(i+1)*10] = numpy.random.uniform(-1,1)

figure()
title('$x$ value changing over time')
plot(x)
ylabel('$x$')
ylim(-1,1)
xlabel('time (ms)')

figure()
title('firing rate changing over time')
plot(syde556.lif(1.5*x+1))
ylabel('firing rate (Hz)')
xlabel('time (ms)')
show()

• Unfortunately, that formula for $G[J]$ gives the average firing rate for a constant input held over a long period of time.
• What does it mean to be firing at 10Hz for 10ms?
• We need to model what's actually happening to that neuron over time

## Leaky Integrate-and-Fire neurons

• Pretty much the standard neuron model
• simple
• produces spikes
• limiting case of other more complex neuron models
• all parameters map onto known properties of real neurons

• the cell membrane (a lipid bilayer) acts as a capacitor
• flow of ions through the cell membrane ion channels is a resistor
• this is the "leak" current
• resistance and capacitance together give $\tau_{RC}$
• $\tau_{RC}=R C$
• when voltage passes some threshold, neuron emits a "spike"
• very fast, sharp response that's pretty much the same each time
• spike at time $t_n$ modelled as $\delta(t-t_n)$
• neuron recovers over a short period of time

• refractory time $\tau_{ref}$
• during this time the voltage is reset to some resting level
• typical values

• $\tau_{RC}$: 0.02 seconds
• $\tau_{ref}$: 0.002 seconds
• reset voltage (a.k.a. resting potential): -70mV
• firing threshold (a.k.a. threshold potential): -55mV
• NOTE: we normalize this so that reset is at 0 and firing is at 1
• this does not affect anything about the behaviour of the model
• For lots more info, see Unit 2 in this online course

### The RC circuit

• let's ignore the spike for now

• Kirchhoff's Current Law
• Currents at a point have to add up to zero
• Current coming into the neuron has two paths
• $J_{input} = J_R+J_C$
• Ohm's Law
• Formula for current through a resistor
• $J_R = {V \over R}$
• Capacitor
• $C = {Q \over V}$
• $Q = C V$
• ${{dQ} \over {dt}} = C {{dV} \over {dt}}$
• $J_C = C {{dV} \over {dt}}$

$J_{input} = {V \over R} + C {{dV} \over {dt}}$

${{dV} \over {dt}} = {1 \over C}[J_{input} - {V \over R}]$

${{dV} \over {dt}} = {1 \over {RC}}[R J_{input} - {V}]$

${{dV} \over {dt}} = {1 \over {\tau_{RC}}} [R J_{input} - {V}]$

• What does this look like?
• If $J_{input}$ is constant (and we'll write it as $J$ for simplicity)

• ${{dV} \over {dt}} = {1 \over {\tau_{RC}}} [R J - {V}]$

• $V(t) = JR(1-e^{-t/\tau_{RC}})$

• So how long does it take to get to threshold?

• $V_{th} = JR(1-e^{-t_{th}/\tau_{RC}})$
• $t_{th} = -\tau_{RC} ln (1-{V_{th} \over {JR}})$
• So how long between spikes?

• $t_{th} + \tau_{ref}$
• Firing rate

• $a = {1 \over {t_{th} + \tau_{ref}}}$
• $a = {1 \over {\tau_{ref}-\tau_{RC}ln(1-{V_{th} \over {JR}})}}$
• Simplify by assuming $V_{th}=1$ and $R=1$

• $a = {1 \over {\tau_{ref}-\tau_{RC}ln(1-{1 \over J})}}$
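This rate curve is easy to compute directly; a minimal sketch (not course code; the parameters are the typical values listed above, and the rate is zero for $J \le 1$, where the voltage never reaches threshold):

```python
import numpy as np

def lif_rate(J, tau_rc=0.02, tau_ref=0.002):
    """Analytic LIF firing rate for constant input J (V_th = R = 1)."""
    J = np.asarray(J, dtype=float)
    a = np.zeros_like(J)
    above = J > 1  # below J = 1, the voltage never reaches threshold
    a[above] = 1.0 / (tau_ref - tau_rc * np.log(1 - 1.0 / J[above]))
    return a
```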

## Spikes

• But we don't want constant $J$.
• We want it to change over time (since $x(t)$ also changes over time)
• So let's do a computer simulation of the differential equation
• ${{dV} \over {dt}} = {1 \over {\tau_{RC}}} [R J_{input} - {V}]$
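The cells below call `syde556.lif_spikes`, which isn't shown in these notes; a hypothetical reimplementation (Euler integration of the equation above, with $R=1$, threshold normalized to 1, and reset to 0) might look like:

```python
import numpy as np

def lif_spikes(J, dt=0.001, tau_rc=0.02, tau_ref=0.002):
    # Euler integration of dV/dt = (J - V)/tau_RC (with R = 1), using the
    # normalization above: spike threshold at 1, reset voltage at 0
    v = np.zeros(len(J))
    spikes = np.zeros(len(J))
    voltage = 0.0
    refractory = 0.0
    for i in range(len(J)):
        if refractory > 0:
            refractory -= dt            # still recovering from the last spike
        else:
            voltage += dt * (J[i] - voltage) / tau_rc
            if voltage < 0:
                voltage = 0.0           # don't let the voltage go negative
            if voltage >= 1:
                spikes[i] = 1           # emit a spike...
                voltage = 0.0           # ...reset the voltage...
                refractory = tau_ref    # ...and hold for the refractory time
        v[i] = voltage
    return v, spikes
```

For a constant input of $J=2$ this gives an inter-spike interval of roughly 16 ms, consistent with the rate formula above.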
In [3]:
import syde556

import numpy

x = numpy.zeros(100)

v, spikes = syde556.lif_spikes(1.5*x+2)

figure()
title('voltage and spikes over time')
plot(v+spikes)
vlines(numpy.where(spikes>0)[0], 0, 2)
ylabel('voltage')
xlabel('time (ms)')
show()

In [4]:
import syde556

import numpy

x = numpy.zeros(100)
for i in range(10):
    x[i*10:(i+1)*10] = numpy.random.uniform(-1,1)

v, spikes = syde556.lif_spikes(1.5*x+1)

figure()
title('$x$ value changing over time')
plot(x)
ylabel('$x$')
ylim(-1,1)
xlabel('time (ms)')

figure()
title('firing rate changing over time')
plot(syde556.lif(1.5*x+1))
ylabel('firing rate (Hz)')
xlabel('time (ms)')
show()

figure()
title('voltage and spikes over time')
plot(v+spikes)
vlines(numpy.where(spikes>0)[0], 0, 2)
ylabel('voltage')
xlabel('time (ms)')
show()

• Let's try a sine wave input
In [5]:
import syde556

import numpy

dt = 0.001
t = numpy.arange(600)*dt
x = numpy.sin(10*t)

alpha = 1.5
bias = 2
J = alpha*x+bias

v, spikes = syde556.lif_spikes(J)

figure(figsize=(8,4))
plot(t, x, label='x')
plot(t, v+spikes, label='v')
vlines(dt*numpy.where(spikes>0)[0], 0, 2)
ylabel('$x$')
ylim(-1,2)
xlabel('time (s)')
legend(loc='lower left')

show()

• So this is how we're going to model neurons over time
• Weaknesses of this approach
• These are "point" neurons (i.e. we're assuming the body of the neuron is just one infinitesimal point, rather than having lots and lots of compartments, each with its own RC circuit)
• assumes R is a constant
• assumes $V_{th}$ is a constant
• assumes no adaptation (it's harder for a neuron to fire after it has recently fired)
• doesn't consider nonlinearities at the input
• Each of these could be added
• Adding these increases computational cost
• Not going to worry about it now
• But we should make sure everything we do would also work with more detailed neuron models

### The time vs. rate-coding debate

• There is a standard ongoing debate in the neuroscience literature about whether neurons use a "rate" code or a "timing" code
• rate coding
• The important thing is the firing rate over some window of time (~100ms)
• time coding
• The precise pattern of spike generation carries information
• We don't find this to be a useful distinction
• Some people call what we do rate coding, some call it time coding
• The important thing is how to do the decoding...

## Temporal Decoding

• So that's what happens when we allow $x(t)$ to vary over time
• We keep $J(t)=\alpha e \cdot x(t) + J^{bias}$
• Feed that into the neuron model $G[J]$
• We get out a sequence of spikes
• How do we decode this?

• How do we get an estimate of $x$ given that sequence of spikes?
• Can we stick with linear decoding?

• Let's consider a case with 2 neurons
• For simplicity, use the same $\alpha$ and $J^{bias}$, but use $e_1=1$ and $e_2=-1$
In [6]:
import syde556

import numpy

dt = 0.001
t = numpy.arange(600)*dt
x = numpy.sin(10*t)

alpha = 1.5
bias = 2
J1 = alpha*x+bias
J2 = -alpha*x+bias

v1, spikes1 = syde556.lif_spikes(J1)
v2, spikes2 = syde556.lif_spikes(J2)

figure(figsize=(8,4))
plot(t, x, label='x')
vlines(dt*numpy.where(spikes1>0)[0], 0, 1)
vlines(dt*numpy.where(spikes2>0)[0], -1, 0)
ylabel('$x$')
ylim(-1,1)
xlabel('time (s)')

show()

• Can we still decode $x$ using a sum of activities?

• $\hat{x}(t)=\sum_i{a_i(t) d_i}$
• Let's use exactly the same technique we did before to find $d$

• $d = \Gamma^{-1} \Upsilon$
• $\Upsilon_i = \sum_x a_i x dx$
• $\Gamma_{ij} = \sum_x a_i a_j dx$
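`syde556.decoders` isn't shown in these notes; a sketch of the same least-squares computation (hypothetical reimplementation; `lstsq` is used instead of an explicit inverse since $\Gamma$ can be near-singular for spike data):

```python
import numpy as np

def decoders(A, x):
    # least-squares decoders: solve Gamma d = Upsilon, where
    # Gamma = A^T A and Upsilon = A^T x (summing over the time samples)
    Gamma = np.dot(A.T, A)
    Upsilon = np.dot(A.T, x)
    return np.linalg.lstsq(Gamma, Upsilon, rcond=None)[0]
```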
In [7]:
import syde556

import numpy

dt = 0.001
t = numpy.arange(600)*dt
x = numpy.sin(10*t)

alpha = 1.5
bias = 2
J1 = alpha*x+bias
J2 = -alpha*x+bias

v1, spikes1 = syde556.lif_spikes(J1)
v2, spikes2 = syde556.lif_spikes(J2)

A = numpy.array([spikes1, spikes2]).T
d = syde556.decoders(A, x)

xhat = numpy.dot(A, d)

figure(figsize=(8,4))
plot(t, x, label='x')
plot(t, xhat, label='x')
ylabel('$x$')
ylim(-1,1)
xlabel('time (s)')

show()

• What if we add more neurons?
In [9]:
import syde556

import numpy

N = 50
dt = 0.001
t = numpy.arange(600)*dt
x = numpy.sin(10*t)

encoders = numpy.random.choice([-1, 1], N)
intercepts = numpy.random.uniform(-0.9, 0.9, N)
max_rates = numpy.random.uniform(100, 200, N)
alpha, bias = syde556.find_alpha_and_bias(intercepts, max_rates)

A = syde556.activity_spikes(x, encoders, alpha, bias)

d = syde556.decoders(A, x)

xhat = numpy.dot(A, d)

figure(figsize=(8,4))
imshow(A.T, extent=(0,0.6,-1,1), aspect='auto', interpolation='none', cmap='binary')
plot(t, x, label='x')
plot(t, xhat, label='x')
ylabel('$x$')
ylim(-1,1);
xlabel('time (s)')

show()

• What's happening here?
• At a given instant in time $t$, only a very small number of neurons are firing
• if no neurons are firing at that instant, $\hat{x}=0$
• there's usually only one or two neurons firing at any given time
• as $dt$ gets smaller, the problem gets even worse
• What can we do?

## Temporal filter

• We want a spike to contribute to other points in time
• So we want to convert a train of spikes into some continuous function
• Convolution:

• $\hat{x}(t)=\sum_i{a_i(t) * h(t) d_i}$
• What should we use for $h(t)$?

• Maybe a gaussian?
• $h(t) = e^{-t^2 / {2\sigma^2}}$
In [3]:
import syde556
import numpy

dt = 0.001
sigma = 0.007
t_h = numpy.arange(200)*dt-0.1
h = numpy.exp(-t_h**2/(2*sigma*sigma))

figure()
plot(t_h, h)
xlabel('t')
ylabel('h(t)')
show()

In [4]:
t = numpy.arange(600)*dt
x = numpy.sin(10*t)

alpha = 1.5
bias = 2
J1 = alpha*x+bias
J2 = -alpha*x+bias

v1, spikes1 = syde556.lif_spikes(J1)
v2, spikes2 = syde556.lif_spikes(J2)

fspikes1 = numpy.convolve(spikes1, h, mode='same')
fspikes2 = numpy.convolve(spikes2, h, mode='same')

A = numpy.array([fspikes1, fspikes2]).T

d = syde556.decoders(A, x)

xhat = numpy.dot(A, d)

figure(figsize=(8,4))
plot(t, x)
vlines(dt*numpy.where(spikes1>0)[0], 0, 1)
vlines(dt*numpy.where(spikes2>0)[0], -1, 0)
ylabel('$x$')
ylim(-1,1)
xlabel('time (s)')

figure(figsize=(8,4))
plot(t, x, label='x')
plot(t, fspikes1*d[0])
plot(t, fspikes2*d[1])
ylabel('$x$')
ylim(-1,1)
xlabel('time (s)')

figure(figsize=(8,4))
plot(t, x, label='x')
plot(t, xhat, label='x')
ylabel('$x$')
ylim(-1,1)
xlabel('time (s)')

show()

• Better, but how can we improve this?
• Different shapes for $h(t)$?
• Different $\sigma$?
• Is there a general solution?

## Optimal filters

• Let's try to find the optimal filter for a particular special case
• Two neurons
• same $\alpha$ and $J^{bias}$ but different $e$ (-1 and +1)

$\hat{x}(t)=\sum_i{a_i(t) * h(t) d_i}$

• Since there are two neurons and they are opposites of each other, we can assume that $d_1=-d_2$ (i.e. they both contribute equally, but oppositely)

• $\hat{x}(t)= \sum_i{a_i(t) * h(t) d_i}$
• $\hat{x}(t)= a_1(t)*h(t)d_1 + a_2(t)*h(t)d_2$
• $\hat{x}(t) = (a_1(t)-a_2(t))*h(t) d_1$
• $\hat{x}(t) = (a_1(t)-a_2(t))*h(t)$ (since we can roll the constant into $h(t)$)
• Let's call $a_1(t)-a_2(t)=r(t)$, for response

$\hat{x}(t)=r(t) * h(t)$

### Random inputs

• Can't just optimize for decoding a sine wave

• needs to work for any random input
• What do we mean by a random input?

In [12]:
import numpy
x = numpy.random.uniform(-1,1,500)
plot(numpy.arange(500)*0.001, x)
show()

• Maybe that's a bit too random
• Real signals don't change that quickly
• how do we get a random signal that doesn't change quickly?
• i.e. a signal with a limit on its maximum frequency?
In [1]:
import numpy
import syde556
x, X = syde556.generate_signal(T=1.0, dt=0.001, rms=0.5, limit=10, seed=4)
plot(numpy.arange(1000)*0.001, x)
show()

• Generate random Fourier coefficients and convert back to time domain
• randomly generate coefficients from a Gaussian distribution
• for $\omega$ values outside of some range, set to zero
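`syde556.generate_signal` isn't shown in these notes; one plausible sketch of the procedure just described (hypothetical: Gaussian random coefficients below the frequency limit, conjugate symmetry so the signal is real, then rescaling to the requested RMS value, which is an assumption based on the `rms` argument used in these cells):

```python
import numpy as np

def generate_signal(T, dt, rms, limit, seed=None):
    rng = np.random.RandomState(seed)
    N = int(T / dt)
    freqs = np.fft.fftfreq(N, d=dt)
    X = np.zeros(N, dtype=complex)
    for i in range(1, N // 2):
        if freqs[i] <= limit:
            # random Gaussian coefficient for this frequency...
            X[i] = rng.normal() + 1j * rng.normal()
            # ...and its mirror, so that X(-w) = X(w)* keeps x(t) real
            X[N - i] = np.conjugate(X[i])
    x = np.fft.ifft(X).real
    x = x * rms / np.sqrt(np.mean(x ** 2))  # scale to the requested RMS
    return x, np.fft.fftshift(np.fft.fft(x))
```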
In [2]:
import numpy
import syde556
T = 1.0
dt = 0.001
x, X = syde556.generate_signal(T, dt, rms=0.5, limit=10, seed=3)
freq = numpy.arange(1000)/T - 1/(2*dt)   # shift so the axis runs from -Nyquist to +Nyquist

figure()
plot(freq, np.abs(X))
xlim(-500, 500)
xlabel('freq (Hz)')
ylabel('$|X(\omega)|$')

figure()
plot(freq, np.abs(X))
xlim(-50, 50)
xlabel('freq (Hz)')
ylabel('$|X(\omega)|$')

figure()
plot(numpy.arange(1000)*0.001, x)
xlabel('time (s)')
ylabel('$x(t)$')
show()

• Things to remember
• $X(\omega)$ values are complex numbers
• When we plot $X(\omega)$ we actually plot $|X(\omega)|$
• Since $x(t)$ should be real (i.e. no imaginary component), we need to constrain $X(\omega)$
• This constraint is that $X(\omega)=X(-\omega)^*$
• (the complex conjugate: the real parts are equal, and the imaginary parts switch sign)
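A quick numerical check of this constraint, using numpy's FFT of an arbitrary real signal:

```python
import numpy as np

# the FFT of any real-valued signal satisfies X(-omega) = X(omega)*
x = np.random.RandomState(0).uniform(-1, 1, 16)
X = np.fft.fft(x)
for k in range(1, 16):
    assert np.allclose(X[-k], np.conjugate(X[k]))
```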

## Finding an optimal decoder

• $\hat{x}(t)=r(t) * h(t)$
• $r(t)$ is the response of the two neurons combined together
• $r(t)=a_1(t)-a_2(t)$
• $h(t)$ is the filter we are trying to find
• how do we find $h(t)$?
• convolution is annoying; let's get rid of it
• one nice thing about convolution is that it turns into multiplication when you do a Fourier transform

$\hat{X}(\omega) = R(\omega)H(\omega)$

• we want to find $H(\omega)$ that minimizes the error

• $E = \sum_\omega |X(\omega)-\hat{X}(\omega)|^2$ (by Parseval's theorem, this is the same as the squared error summed over all time)
• $E = \sum_\omega |X(\omega)-R(\omega)H(\omega)|^2$
• but, we know that our signal $x(t)$ can be written in the frequency domain as a sum of sine waves (that's the Fourier transform)

• we've already computed those for our signal $X$; that's how we made it in the first place
• so, we can write our error in terms of different frequency values $\omega$

• $E(\omega) = |X(\omega)-R(\omega)H(\omega)|^2$

• now we can take the derivative and set it equal to zero to do the minimization

• $H(\omega)= {{X(\omega)R^*(\omega)} \over {|R(\omega)|^2}}$

• note the complex conjugate that appears there
• so now we can find $H(\omega)$ given the Fourier transform of our signal $X(\omega)$ and the Fourier transform of the spiking response $R(\omega)$
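Filling in the omitted minimization step (a standard least-squares argument: treat $H$ at each frequency as an independent complex unknown and differentiate with respect to $H^*$):

$$E(\omega) = (X-RH)(X-RH)^*$$

$${\partial E \over \partial H^*} = -(X-RH)R^* = 0$$

$$X R^* = H |R|^2 \quad \Rightarrow \quad H(\omega)= {{X(\omega)R^*(\omega)} \over {|R(\omega)|^2}}$$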

### Notes for comparison to the textbook

• Here we're using the convention that lower-case letters are for time-domain functions and upper-case letters are for Fourier transforms
• $x(t) \rightarrow X(\omega)$
• $r(t) \rightarrow R(\omega)$
• $h(t) \rightarrow H(\omega)$
• the textbook uses $A(\omega)$ instead of $X(\omega)$
• $A$ for amplitude
• but we're already using $A$ for the matrix for activities of neurons so let's avoid re-using the letter
In [27]:
import numpy
import syde556
T = 1.0
dt = 0.001
x, X = syde556.generate_signal(T, dt, rms=0.3, limit=10, seed=3)
t = numpy.arange(int(T/dt))*dt
freq = numpy.arange(int(T/dt))/T - 1/(2*dt)
omega = freq*2*numpy.pi

plot(t,x)
xlabel('t')
ylabel('$x$')
show()

In [32]:
alpha = 1.5
bias = 2
J1 = alpha*x+bias
J2 = -alpha*x+bias

v1, spikes1 = syde556.lif_spikes(J1)
v2, spikes2 = syde556.lif_spikes(J2)

r = spikes1 - spikes2

plot(t, r)
plot(t,x)
xlabel('t')
ylabel('$r$')
show()

In [31]:
R = np.fft.fftshift(np.fft.fft(r))

figure()
plot(omega, np.abs(X))
xlabel('$\omega$')
ylabel('$|X(\omega)|$')

figure()
plot(omega, np.abs(R))
xlabel('$\omega$')
ylabel('$|R(\omega)|$')

show()

• Now we can find our optimal filter

$H(\omega)= {{X(\omega)R^*(\omega)} \over {|R(\omega)|^2}}$

In [18]:
H = (X*R.conjugate()) / (R*R.conjugate())
h = np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(H))).real

figure()
plot(omega, np.abs(H))
xlabel('$\omega$')
ylabel('$|H(\omega)|$')
xlim(-500, 500)

figure()
plot(t-T/2, h)
xlabel('t')
ylabel('h(t)')
show()

In [19]:
fspikes1 = numpy.convolve(spikes1, h, mode='same')
fspikes2 = numpy.convolve(spikes2, h, mode='same')

A = numpy.array([fspikes1, fspikes2]).T

d = syde556.decoders(A, x)

xhat = numpy.dot(A, d)

figure()
plot(t, x)
plot(t, xhat)
show()
print('MSE:', numpy.average((x-xhat)**2))

MSE: 0.00133916386085


### Testing how well it generalizes

• if it's a good decoder, it should work for other inputs too
In [20]:
y, Y = syde556.generate_signal(T, dt, rms=0.3, limit=10, seed=1)

J1y = alpha*y+bias
J2y = -alpha*y+bias

v1y, spikes1y = syde556.lif_spikes(J1y)
v2y, spikes2y = syde556.lif_spikes(J2y)

fspikes1y = numpy.convolve(spikes1y, h, mode='same')
fspikes2y = numpy.convolve(spikes2y, h, mode='same')

Ay = numpy.array([fspikes1y, fspikes2y]).T

yhat = numpy.dot(Ay, d)

figure()
plot(t, y)
plot(t, yhat)
show()
print('MSE:', numpy.average((y-yhat)**2))

MSE: 0.00524624605476

• It's roughly 4x worse on a different signal
• Not bad, but could use some improvement

### Improving the optimal decoder

• We're only training on a pretty small set of data
• So let's increase $T$
• This helps, but can get computationally expensive to generate more and more data
• There's also a bit of a shortcut
• We've already created a pretty long signal ($T=1.0$ seconds)
• We know the filter should probably decay towards zero
• A spike shouldn't affect values far away from when it happens
• So let's take our one existing signal and chop it up into lots of little signals
In [112]:
x2 = numpy.zeros_like(x)
r2 = numpy.zeros_like(r)

x2[400:600] = x[100:300]
r2[400:600] = r[100:300]

figure()
plot(t, x)
plot(t, r)
vlines(0.1, -1.5, 1.5, linewidth=3)
vlines(0.3, -1.5, 1.5, linewidth=3)

figure()
plot(t, x2)
plot(t, r2)
show()

• Add up a whole bunch of these with different slices
• But, this introduces some weird edge effects at the beginning and ending of the window
• So we should do some sort of smoothing of the data
• Let's go with a Gaussian
In [113]:
sigma = 0.1
window = np.exp(-(t-0.5)**2/(sigma**2))

x2w = numpy.roll(x, 300)*window
r2w = numpy.roll(r, 300)*window

figure()
plot(t, x)
plot(t, r)
vlines(0.1, -1.5, 1.5, linewidth=3)
vlines(0.3, -1.5, 1.5, linewidth=3)

figure()
plot(t, x2w)
plot(t, r2w)
show()

• So we're going to take our original data ($x(t)$ and $r(t)$), shift it around, multiply it by some other function (the Gaussian window), and then add up the results
• There's a word for that: convolution, the same operation we've already been using to filter spikes
• Multiplying by a window in the time domain is equivalent to convolving with the window's Fourier transform $W(\omega)$ in the frequency domain

$H(\omega)= {{(X(\omega)R^*(\omega)) * W(\omega)} \over {|R(\omega)|^2*W(\omega)}}$

In [114]:
# original code
H = (X*R.conjugate()) / (R*R.conjugate())

# new code
sigma_t = 0.025
W2 = np.exp(-omega**2*sigma_t**2)
H = (np.convolve(X*R.conjugate(),W2, 'same')) / (np.convolve(R*R.conjugate(),W2, 'same'))


How well does this work?

In [11]:
import numpy
import syde556
T = 1.0
dt = 0.001

x, X = syde556.generate_signal(T, dt, rms=0.3, limit=10, seed=3)
y, Y = syde556.generate_signal(T, dt, rms=0.3, limit=10, seed=1)

t = numpy.arange(int(T/dt))*dt
freq = numpy.arange(int(T/dt))/T - 1/(2*dt)
omega = freq*2*numpy.pi

alpha = 1.5
bias = 2
J1 = alpha*x+bias
J2 = -alpha*x+bias
J1y = alpha*y+bias
J2y = -alpha*y+bias

v1, spikes1 = syde556.lif_spikes(J1)
v2, spikes2 = syde556.lif_spikes(J2)
v1y, spikes1y = syde556.lif_spikes(J1y)
v2y, spikes2y = syde556.lif_spikes(J2y)

r = spikes1 - spikes2
R = np.fft.fftshift(np.fft.fft(r))

sigma_t = 0.025
W2 = np.exp(-omega**2*sigma_t**2)
H = (np.convolve(X*R.conjugate(),W2, 'same')) / (np.convolve(R*R.conjugate(),W2, 'same'))
h = np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(H))).real

fspikes1 = numpy.convolve(spikes1, h, mode='same')
fspikes2 = numpy.convolve(spikes2, h, mode='same')
fspikes1y = numpy.convolve(spikes1y, h, mode='same')
fspikes2y = numpy.convolve(spikes2y, h, mode='same')

A = numpy.array([fspikes1, fspikes2]).T
Ay = numpy.array([fspikes1y, fspikes2y]).T

d = syde556.decoders(A, x)

xhat = numpy.dot(A, d)
yhat = numpy.dot(Ay, d)

figure()
plot(t, x)
plot(t, xhat)
title('original signal')
show()
print('MSE:', numpy.average((x-xhat)**2))

figure()
title('new testing signal')
plot(t, y)
plot(t, yhat)
show()
print('MSE:', numpy.average((y-yhat)**2))

MSE: 0.00383378139875

MSE: 0.00317380360121

• As expected, it's a bit worse on the original signal
• But it doesn't get hugely worse when tested on a different signal
• What about higher frequency inputs? Or lower frequency?
In [14]:
z, Z = syde556.generate_signal(T, dt, rms=0.3, limit=2, seed=1)

J1z = alpha*z+bias
J2z = -alpha*z+bias

v1z, spikes1z = syde556.lif_spikes(J1z)
v2z, spikes2z = syde556.lif_spikes(J2z)

fspikes1z = numpy.convolve(spikes1z, h, mode='same')
fspikes2z = numpy.convolve(spikes2z, h, mode='same')

Az = numpy.array([fspikes1z, fspikes2z]).T

zhat = numpy.dot(Az, d)

figure()
plot(t, z)
plot(t, zhat)
show()
print('MSE:', numpy.average((z-zhat)**2))

MSE: 0.00246507856803

• the decoder needs to be made for the right range of frequencies

## Optimal filters reconsidered

• So we have now found the optimal filter to apply to the spike data to decode out information
• Note a limitation: we've only done this for the case of 2 neurons!
• But we could use the same filter for many more neurons
• What does this filter look like?
In [137]:
figure()
plot(omega, np.abs(H))
xlabel('$\omega$')
ylabel('$|H(\omega)|$')
xlim(-500, 500)

figure()
plot(t-T/2, h)
xlabel('t')
ylabel('h(t)')

figure()
plot(t-T/2, h)
xlabel('t')
ylabel('h(t)')
xlim(-0.1, 0.1)

show()

h_optimal = h
H_optimal = H
xhat_optimal = xhat

• What is this saying?
• Every time a spike occurs, when we interpret the data we should replace the spike with this shape
• What does this mean?
In [32]:
figure(figsize=(8,4))
vlines(dt*numpy.where(spikes1>0)[0], -1, 1)
plot(t, fspikes1*d[0], label='filtered spikes')
ylabel('$x$')
ylim(-1,1)
xlim(0.2, 0.4)
xlabel('time (s)')
legend()
show()

• If we're analyzing the data after the fact, this is fine
• But what if we're just sitting here observing the neural activity and wanting to know what's being represented right now?
• Notice that the blue line (the filtered value we're using for decoding, $\hat{x}$) starts going up before the spike happens
• The filtered value is using information from the future.
• This means that when we look at $\hat{x}$ this way, we're getting a better estimate of $x$ than it is possible for other neurons to get
• (assuming neurons aren't psychic)
• What could we do instead?

## Biologically plausible filter

• We're looking for something that takes a spike and turns it into some smooth signal
• Instead of coming up with the optimal filter, is there anything from neuroscience we could use instead?
• What actually happens when a spike occurs?
• What is the input to the next neuron that occurs when a spike happens?

• A spike causes a vesicle to release neurotransmitter
• The amount of neurotransmitter causes current to flow into the next neuron
• The neurotransmitter is slowly reabsorbed
• What is the current that goes into the next neuron thanks to a single spike?

(data stolen from here)

• So other neurons don't get spikes as inputs
• they get these post-synaptic currents
$$h(t) = \begin{cases} e^{-t/\tau} &\mbox{if } t > 0 \\ 0 &\mbox{otherwise} \end{cases}$$
• this is the standard simple model

• $\tau$ is different for different neurotransmitters and different neurotransmitter receptors
• GABA-A-R: $\tau=10.41 \pm 6.16$ ms
• AMPA-type GluR: $\tau=2.2 \pm 0.2$ ms
• NMDA-type GluR: $\tau=146 \pm 9.1$ ms
• for more complex models, some people use

$$h(t) = \begin{cases} t^n e^{-t/\tau} &\mbox{if } t > 0 \\ 0 &\mbox{otherwise} \end{cases}$$
In [19]:
import syde556
import numpy

dt = 0.001
tau = 0.05
t_h = numpy.arange(1000)*dt-0.5
h = numpy.exp(-t_h/tau)
h[numpy.where(t_h<0)]=0

figure()
plot(t_h, h)
xlabel('t')
ylabel('h(t)')
show()

In [21]:
t = numpy.arange(1000)*dt
x, X = syde556.generate_signal(1.0, dt, rms=0.3, limit=10, seed=3)

alpha = 1.5
bias = 2
J1 = alpha*x+bias
J2 = -alpha*x+bias

v1, spikes1 = syde556.lif_spikes(J1)
v2, spikes2 = syde556.lif_spikes(J2)

fspikes1 = numpy.convolve(spikes1, h, mode='same')
fspikes2 = numpy.convolve(spikes2, h, mode='same')

A = numpy.array([fspikes1, fspikes2]).T

d = syde556.decoders(A, x)

xhat = numpy.dot(A, d)

figure(figsize=(8,4))
plot(t, x)
vlines(dt*numpy.where(spikes1>0)[0], 0, 1)
vlines(dt*numpy.where(spikes2>0)[0], -1, 0)
ylabel('$x$')
ylim(-1,1)
xlabel('time (s)')

figure(figsize=(8,4))
plot(t, x, label='x')
plot(t, fspikes1*d[0])
plot(t, fspikes2*d[1])
ylabel('$x$')
ylim(-1,1)
xlabel('time (s)')

figure(figsize=(8,4))
plot(t, x, label='x')
plot(t, xhat, label='x')
ylabel('$x$')
ylim(-1,1)
xlabel('time (s)')

show()