Chris Eliasmith
This notebook addresses the issue of how the number and tuning curves of neurons in an ensemble are related to the functions that the ensemble can be used to compute with the NEF. If you just want something to tell you how many neurons to use to compute a function, go to the very end of this notebook.
NB: For a longer discussion and description of the methods here, and if you're a CNRG lab member, please see the 'Function Quality' notebook in the nef_chip_software repository on ctn_waterloo (sorry, this is a private repository).
What is the relation between the number of neurons and the quality of function computation?
%pylab inline
Populating the interactive namespace from numpy and matplotlib
This code shows the tuning curves of different neural models, and whatever is chosen here is used for most of the subsequent analyses.
#You need to be able to import nengo to use neuron models. If you have a neuron model
#that isn't in Nengo, you need to write code to generate X and A below, and that should be it.
import numpy as np
import nengo
from nengo.solvers import LstsqL2
num_neurons = 300
num_points = 100
noise=0.1 #Standard deviation of expected noise
neuron_type = nengo.LIF() #Change this for Nengo to use other neuron types
model = nengo.Network(label='Ensemble')
with model:
ens = nengo.Ensemble(num_neurons, dimensions=1,
neuron_type=neuron_type,
eval_points=np.linspace(-1,1,num_points).reshape(-1,1))
tmp = nengo.Ensemble(1,1) #temporary ensemble to allow finding decoders
conn = nengo.Connection(ens,tmp) #temporary connection to allow finding decoders
solver = LstsqL2(reg=noise)
sim = nengo.Simulator(model)
X, A, T = nengo.builder.build_linear_system(conn, sim.model)
A = A.T
plot(X,A[1::10,:].T)
title('Neuron Tuning Curves')
As shown in the NEF book, broadly tuned neural populations approximate the Legendre basis. Let's plot that basis.
BasisSize = 5 #Number of basis fcns to plot
legCoeffMat = eye(BasisSize)
legBasis=zeros([BasisSize, X[:,0].size])
for i in np.arange(BasisSize):
legBasis[i,:] = np.polynomial.legendre.legval(linspace(-1,1,num_points),legCoeffMat[i,:])
figure()
plot(legBasis.T)
title('Legendre Basis')
pylab.ylim([-1,1.1]);
Next we can find the approximation of that basis by the above (last generated) population. First, compute the correlation matrix and do SVD on it.
G = np.dot(A,A.T)
U,S,V=np.linalg.svd(G)
Note that the returned left and right singular vector matrices are transposes of one another because G is a symmetric matrix (its left and right singular vectors coincide and are its eigenvectors), and the eigenvectors of G are the left singular vectors of A. The singular values of A are the square roots of the singular (eigen)values of G (wikipedia).
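As a quick numerical check of that relationship (a sketch, not part of the original notebook), the singular values of A squared should match the leading singular values of G computed above:
#Quick check (sketch): singular values of A squared vs. the leading singular values of G
S_A = np.linalg.svd(A, compute_uv=False)
print np.allclose(S_A[0:10]**2, S[0:10]) #Should print True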
#Plot 10 singular values of the correlation matrix G
plt.plot(S[0:10])
yscale('log')
disp('Singular Values')
Singular Values
chi = np.dot(U.T,A) #Project the neural activity onto its singular vectors
chi = chi.T/chi.max(axis=1) #Normalize to 1
plot(chi[:,0:BasisSize])
title('Bases')
Note that these may be randomly flipped by sign.
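If you want repeated runs to be easier to compare, here is a minimal sketch (not part of the original notebook) that removes the arbitrary sign flips:
#Minimal sketch: flip each basis so its largest-magnitude value is positive
signs = np.sign(chi[np.abs(chi).argmax(axis=0), np.arange(chi.shape[1])])
plot((chi*signs)[:,0:BasisSize])
title('Bases (signs fixed)')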
These basis vectors tell us which family of functions can be represented, and the SVs tell us how well. So the question now is how those SVs change with the number of neurons and with the form of the tuning curves.
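As a rough illustration of that point (a sketch; $x^3$ is just an assumed example target), we can project a target function onto the first few SVD bases to see which components it needs; components with small singular values can only be decoded noisily, so a function that relies on them will be computed poorly.
#Rough illustration (sketch): project an example target onto the first 10 SVD bases
target = X[:,0]**3
chi10 = chi[:,0:10]/np.sqrt((chi[:,0:10]**2).sum(axis=0)) #Orthonormalize the first 10 bases
coeffs = np.dot(chi10.T, target)
figure()
plot(np.abs(coeffs),'.-')
title('|projection| of $x^3$ onto the first 10 SVD bases')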
We can demonstrate how the SVs change numerically.
num_neurons=2**np.arange(3,10,1) #8, 16, ..., 512; lower the upper bound of the arange to run faster
eval_points=np.linspace(-1,1,num_points).reshape(-1,1)
num_runs = 5
avg_s=[]
for N in num_neurons:
Ss=[]
for i in np.arange(num_runs):
model = nengo.Network(label='Ensemble')
with model:
ens = nengo.Ensemble(N, dimensions=1,
neuron_type=neuron_type,
eval_points=eval_points)
tmp = nengo.Ensemble(1,1) #temporary ensemble to allow finding decoders
conn = nengo.Connection(ens,tmp) #temporary connection to allow finding decoders
solver = LstsqL2(reg=noise)
sim = nengo.Simulator(model)
X, A, T = nengo.builder.build_linear_system(conn, sim.model)
A = A.T
G = np.dot(A,A.T)
S=np.linalg.svd(G, compute_uv=0) #Find singular values
Ss.append(S[0:6])
avg_s.append(np.mean(Ss, axis=0)) #Take the mean S over numRuns
tmp=plot(array(avg_s).T)
yscale('log')
legend(tmp,num_neurons)
title('SVs for increasing neurons')
Here is code demonstrating, numerically, the relationship between the order of the Legendre polynomial, the number of neurons, and the MSE.
#Find the relationship between Legendre polynomials and MSE
num_neurons=2**np.arange(0,11,1) #1, 2, 4, ..., 1024; lower the upper bound of the arange to run faster
num_runs = 10
eval_points=np.linspace(-1,1,num_points).reshape(-1,1)
rng = np.random.RandomState()
xi = linspace(-1,1,num_points)
basisSize = 8 #Number of basis fcns
LegNum = np.arange(0,basisSize)
LegCoeffMat = eye(basisSize)
#The legendre basis solved for error generation
LegBasis=zeros([basisSize, num_points])
for i in LegNum:
LegBasis[i,:] = np.polynomial.legendre.legval(xi,LegCoeffMat[i,:])
#A generator for the legendre basis
legendre = lambda p: (lambda x: (np.polynomial.legendre.legval(x,LegCoeffMat[p,:])))
PavgMSE=[]
for P in LegNum:
avgMSE=[]
for N in num_neurons:
MSE=[]
for i in np.arange(num_runs):
model = nengo.Network(label='Ensemble')
with model:
ens = nengo.Ensemble(N, dimensions=1,
neuron_type=neuron_type,
eval_points=eval_points)
tmp = nengo.Ensemble(1,1) #temporary ensemble to allow finding decoders
conn = nengo.Connection(ens, tmp, #temporary connection to allow finding decoders
function = legendre(P))
solver = LstsqL2(reg=noise)
sim = nengo.Simulator(model)
X, A, T = nengo.builder.build_linear_system(conn, sim.model)
A = A.T
d = sim.data[conn].decoders
#print sim.data[conn].solver_info['rmses']
A_noise = A + rng.randn(*A.shape)*(noise*np.max(A)) #Test under noise
#A_noise = A #Use this if you want to see decoding under no noise
Xhat = np.dot(A_noise.T, d.T) #Compute estimate with decoders
MSE.append(np.mean((LegBasis[P,:]-Xhat[:,0])**2)) #Compute MSE
avgMSE.append(np.mean(MSE)) #Take the mean MSE over numRuns
PavgMSE.append(avgMSE) #Collect the MSE across powers
disp('Done')
Done
tmp=plot(array(PavgMSE).T)
legend(tmp,LegNum, loc=3)
yscale('log')
xticks(log2(num_neurons),array(num_neurons).astype('str'))
plot(1.0/num_neurons,'--')
title('Accuracy for Nth Degree Legendre Polynomials with increasing neurons')
Let's do a linear regression to find the slopes of the lines of best fit for these data, in order to see how the MSE changes with polynomial order.
data = array(PavgMSE)[:,6:] #Start fit at N=64
length = data.shape[1]
xi = np.arange(0,length)
A = array([xi, ones(length)])
dataLog = log(data)
w=[]
for row in dataLog:
w.append(linalg.lstsq(A.T, row)[0]) # obtaining the parameters
w=array(w)
plot(outer(w[:,0],xi).T+w[:,1])
plot(dataLog.T,'.')
disp('Data with linear fits')
Data with linear fits
#Linear fit to slopes of the previous fit
length = w.shape[0]
xi = np.arange(0,length)
A = array([xi, ones(length)])
w2 = linalg.lstsq(A.T, w[:,0])[0]
plot(w2[0]*xi+w2[1])
plot(w[:,0],'.')
disp(w2[0])
0.0199846291648
Here, the slope is some small value (printed above the graph), which suggests a $k$ for $MSE \propto N^{kP}/N$. This $k$ depends on the neurons being used.
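As a rough visual check (a sketch; it uses the fitted slope w2[0] as $k$ and anchors each curve to the measured MSE at $N=64$), we can overlay the conjectured $N^{kP}/N$ scaling on the measured curves:
#Rough visual check (sketch): overlay the conjectured N**(k*P)/N scaling on the measured MSEs
k = w2[0] #Use the fitted change in slope as k
mse_data = array(PavgMSE)
for P in LegNum:
    pred = num_neurons**(k*P)/num_neurons.astype(float)
    pred *= mse_data[P,6]/pred[6] #Anchor at N = 64 (index 6)
    loglog(num_neurons, pred, '--')
loglog(num_neurons, mse_data.T, '.')
title('Measured MSE vs. conjectured $N^{kP}/N$ scaling')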
First, let's do a sanity test: check whether $[d_i^2] \propto N^{k}/N^2$ for $P=1$.
#Just use the first order legendre basis function (a sloped line)
rng = np.random.RandomState()
num_neurons=2**np.arange(0,11,1) #1, 2, 4, ..., 1024; lower the upper bound of the arange to run faster
num_runs = 10
xi = linspace(-1,1,num_points) #xi was reused by the fits above, so reset it to the evaluation points
LegBasis = np.polynomial.legendre.legval(xi,[0,1]) #Generate just the first-order L-basis
legendre = lambda x: (np.polynomial.legendre.legval(x,[0,1]))
avgMSE=[]
decodeAvg=[]
for N in num_neurons:
MSE=[]
decodeAvgN=[]
for i in np.arange(num_runs):
model = nengo.Network(label='Ensemble')
with model:
ens = nengo.Ensemble(N, dimensions=1,
neuron_type=neuron_type,
eval_points=eval_points)
tmp = nengo.Ensemble(1,1) #temporary ensemble to allow finding decoders
conn = nengo.Connection(ens,tmp, #temporary connection to allow finding decoders
function = legendre)
solver = LstsqL2(reg=noise)
sim = nengo.Simulator(model)
X, A, T = nengo.builder.build_linear_system(conn, sim.model)
A = A.T
d = sim.data[conn].decoders
A_noise = A + rng.randn(*A.shape)*(noise*np.max(A)) #Test under noise
#A_noise = A #Use this if you want to see decoding under no noise
Xhat = np.dot(A_noise.T, d.T) #Compute estimate with decoders
MSE.append(np.mean((LegBasis-Xhat[:,0])**2)) #Compute MSE
decodeAvgN.append(np.mean(d**2))
avgMSE.append(np.mean(MSE)) #Take the mean MSE over numRuns
decodeAvg.append(np.mean(decodeAvgN))
disp('Done')
Done
scale=40.0
tmp = plot(array(decodeAvg).T)
legend(tmp,LegNum, loc=3)
yscale('log')
xticks(np.arange(0,11,1),array(num_neurons).astype('str'))
plot(scale/num_neurons,'--')
plot(scale*num_neurons**w2[0]/num_neurons**2,'.') #should match
plot(scale/num_neurons**3,'*')
title('Mean of $d_i^2$ for $P=1$ with increasing neurons')
So it looks like $MSE \propto N^{kP}/N$ numerically. Can we derive it? This will be hard if the tuning curves $a_i$ change $k$, and unfortunately that is what happens. I worked on this with Kwabena and we seem to have something pretty good now, but that's another notebook.
How many neurons do I need if I want to compute a function $f(x)$ with a given $MSE$ under a given amount of noise $\sigma^2$?
To answer this we must determine all of the scaling and bias factors left out of the previous analysis. Note that we implicitly accounted for noise (usually left at 0.1) in that analysis. The following does the same because it assumes that we'll know the amount of noise and that it will stay constant.
So the best way to use this code is to run it for the desired $\sigma$ and then use the resulting theoretical predictions (i.e., the code computes the constants for the theoretical fits for a given amount of noise). The result will answer the question: how many neurons are needed to compute the function $x^P$ to a certain MSE under a given amount of noise $\sigma^2$?
We can then project any arbitrary function $f(x)$ onto that basis and get an estimate for how many neurons we'll need to compute it to a given degree of precision.
Recall that we did a linear fit to changes in slope as a function of P. We begin by doing that again here.
NB: The code below does not depend (for constants, etc.) on the code above. If you want them to be the same, make sure the constants are set again.
#Find the relationship between Legendre polynomials and MSE
import numpy as np
rng = np.random.RandomState()
num_neurons=2**arange(0,11,1) #1, 2, 4, ..., 1024; lower the upper bound of the arange to run faster
num_runs = 10
noise=0.1 #Standard deviation of expected noise
num_points = 100
eval_points=np.linspace(-1,1,num_points).reshape(-1,1)
xi=linspace(-1,1,num_points)
basisSize = 7 #Number of basis fcns
LegNum = arange(0,basisSize)
LegCoeffMat = eye(basisSize)
LegBasis=zeros([basisSize, num_points])
for i in LegNum:
LegBasis[i,:] = np.polynomial.legendre.legval(xi,LegCoeffMat[i,:])
#A generator for the legendre basis
legendre = lambda p: (lambda x: (np.polynomial.legendre.legval(x,LegCoeffMat[p,:])))
PavgMSE=[]
for P in LegNum:
avgMSE=[]
for N in num_neurons:
MSE=[]
for i in np.arange(num_runs):
model = nengo.Network(label='Ensemble')
with model:
ens = nengo.Ensemble(N, dimensions=1,
neuron_type=neuron_type,
eval_points=eval_points)
tmp = nengo.Ensemble(1,1) #temporary ensemble to allow finding decoders
conn = nengo.Connection(ens, tmp, #temporary connection to allow finding decoders
function = legendre(P))
solver = LstsqL2(reg=noise)
sim = nengo.Simulator(model)
X, A, T = nengo.builder.build_linear_system(conn, sim.model)
A = A.T
d = sim.data[conn].decoders
#print sim.data[conn].solver_info['rmses']
A_noise = A + rng.randn(*A.shape)*(noise*np.max(A)) #Test under noise
#A_noise = A #Use this if you want to see decoding under no noise
Xhat = np.dot(A_noise.T, d.T) #Compute estimate with decoders
MSE.append(np.mean((LegBasis[P,:]-Xhat[:,0])**2)) #Compute MSE
avgMSE.append(np.mean(MSE)) #Take the mean MSE over numRuns
PavgMSE.append(avgMSE) #Collect the MSE across powers
disp('Done')
Done
import matplotlib.pyplot as plt
tmp=plt.plot(array(PavgMSE).T)
plt.legend(tmp,LegNum, loc=3)
plt.yscale('log')
plt.xticks(log2(num_neurons),array(num_neurons).astype('str'))
plt.plot(1.0/num_neurons,'--')
disp('Accuracy for Nth Degree Legendre Polynomials with increasing neurons')
Accuracy for Nth Degree Legendre Polynomials with increasing neurons
And now the linear fitting for the slopes.
data = array(PavgMSE)[:,6:] #Start fit at N=64; for large noise you need to increase N before you get anything useful
length = data.shape[1]
xi = arange(0,length)
A = array([xi, ones(length)])
dataLog = log(data)
w=[]
for row in dataLog:
w.append(linalg.lstsq(A.T, row)[0]) # obtaining the parameters
w=array(w)
figure(figsize=(12,4))
subplot(1,2,1)
plot(outer(w[:,0],xi).T+w[:,1])
plot(dataLog.T,'.')
title('Data with linear fits')
#Fit to slopes... start linear
length = w.shape[0]
xi = arange(0,length)
A = array([xi, ones(length)])
optSlope = linalg.lstsq(A.T, w[:,0])[0]
subplot(1,2,2)
plot(optSlope[0]*xi+optSlope[1])
plot(w[:,0],'.')
title('Linear fit to slopes')
disp(w[:,0])
disp(optSlope)
[-0.70114362 -0.72127707 -0.73318164 -0.69493473 -0.68692776 -0.67263751 -0.62122376]
[ 0.01368902 -0.7312565 ]
Since we now care about the intercepts as well, we can try a linear fit for the intercepts.
#Linear fit to intercepts
length = w.shape[0]
xi = arange(0,length)
A = array([xi, ones(length)])
w3 = linalg.lstsq(A.T, w[:,1])[0]
figure(2)
plot(w3[0]*xi+w3[1])
plot(w[:,1],'.')
#That doesn't work well, so do a nonlinear fit
from scipy import optimize
fitfunc = lambda p, x: p[0]*p[2]**(-x) - p[1]
errfunc = lambda p, x, y: fitfunc(p, x) - y
p0 = [-6.0,-1.0,2.0] #Start guess
optInter,success = optimize.leastsq(errfunc, p0[:], args=(xi[1::], w[1::,1]))
plot(w[:,1],'.')
plot(optInter[0]*optInter[2]**(-xi) - optInter[1] )
disp(optInter)
[-6.08424265 3.01096539 1.63551843]
So the linear fit to intercepts doesn't work well, but the fit of the nonlinear function $C_1 C_3^{-x} - C_2$ works well when leaving off the constant (P=0).
Now that we have fits for the slopes and intercepts (recall, from 64 neurons on), we can use these to fit all the data.
#Simplifying the previous expression
figure()
ax = gca()
ax.set_color_cycle(['b','g','r','m','k','y','c'])
P=arange(0,len(LegNum))
#testing predictions
f1 = optSlope[0]*P + optSlope[1]
f2 = optInter[0]*optInter[2]**(-P) - optInter[1]
f3 = log2(num_neurons) - 6 #The value 6 comes from the fact that we started the fit at 2^6 neurons.
prediction = exp(outer(f1,f3).T+f2)
toPlot = array(PavgMSE)
firstPredict = np.exp(w[0,0]*(log2(num_neurons)-6) + w[0,1])
tmp=loglog(num_neurons, toPlot.T)
legend(tmp,P, loc=3)
loglog(num_neurons, prediction,'--')
loglog(num_neurons, firstPredict,'--')
axis('tight')
disp('Data versus theory Legendre')
disp(optSlope)
disp(optInter)
Data versus theory Legendre
[ 0.01368902 -0.7312565 ]
[-6.08424265  3.01096539  1.63551843]
Doing some algebra gave those expressions for the functions that are included in the fit. Specifically,
$\begin{eqnarray} MSE = g(P,N,\sigma) = exp(f_2(P) + f_1(P)f_3(N)) \end{eqnarray}$
where
$\begin{eqnarray} f_1(P) &=& C_1P-C_2 \\\ f_2(P) &=& C_3 2^{-P} -1 \\\ f_3(N) &=& log_2(N)-C_4 \end{eqnarray}$
and the data fits give us the values of $C_i$ ($C_4$ is the exponent of the starting number of neurons used for the fits, i.e., the fits start at $2^{C_4}$ neurons).
So, we now have the answer to our question: how many neurons are needed to compute the $P^{th}$-order Legendre basis to a certain MSE under a given amount of noise $\sigma^2$? (The restriction $P\neq0$ should probably be added, although it is not needed for all neuron tuning-curve types.)
where
$$f(P,MSE,\sigma) = (\log(MSE)-f_2(P))/f_1(P)+C_4$$
gives the required $\log_2 N$, i.e., $N = 2^{f(P,MSE,\sigma)}$ (a minimal sketch of this calculation appears below).
We are now in a position to write the code that we want to use to determine the number of neurons needed to reach a given MSE for any function we are trying to compute, $f(x)$.
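Here is a minimal sketch of that single-basis calculation, assuming the constants optSlope and optInter fitted above and $C_4 = 6$ (those fits start at $2^6$ neurons):
#Minimal sketch (assumes the fits above, with C_4 = 6): invert MSE = exp(f_2 + f_1 f_3) for N
def neurons_needed(P, target_mse):
    f1 = optSlope[0]*P + optSlope[1] #Slope as a function of P
    f2 = optInter[0]*optInter[2]**(-P) - optInter[1] #Intercept as a function of P
    return 2**((log(target_mse) - f2)/f1 + 6) #N = 2**f(P, MSE, sigma)
print 'Neurons for the 2nd-degree Legendre basis at MSE = 0.001: ~%d' % neurons_needed(2, 0.001)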
First we convert the function to a representation on a Legendre basis:
$\begin{eqnarray} f(x) &=& \sum_i^P C_i L_i(x) \\\ C_i &=& \frac{2i+1}{2}\int_{-1}^{1} f(x) L_i(x) dx \end{eqnarray}$
(The factor $(2i+1)/2$ is needed because the Legendre polynomials are orthogonal but not normalized.)
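As a quick check of those formulas, here is a minimal sketch (with $f(x)=x^2$ as an assumed example) that computes the coefficients by quadrature and compares them with numpy's legfit, which the code below uses:
#Minimal sketch (assumes f(x) = x**2 as the example): Legendre coefficients by quadrature vs. legfit
x_chk = np.linspace(-1, 1, 1000)
fx_chk = x_chk**2
C_quad = [(2*i + 1)/2.0*np.trapz(fx_chk*np.polynomial.legendre.legval(x_chk, eye(3)[i]), x_chk)
          for i in range(3)]
C_fit = np.polynomial.legendre.legfit(x_chk, fx_chk, 2)
print np.round(C_quad, 4) #Quadrature: approximately [1/3, 0, 2/3]
print np.round(C_fit, 4) #The least-squares fit should agree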
If we assume that the MSEs of the different basis functions are independent (which they technically are not, but they are close), then we have $$MSE = \sum_i^P C_i g_i(N,\sigma) $$ and we want to find the minimum $N$ that meets this MSE. Each $g_i(N,\sigma)$ has already been found, so we can use a nonlinear function minimizer to get the answer. The following code demonstrates the full process, going from some function $f(x)$ to determining how many neurons will be needed to reach a given MSE assuming a certain $\sigma^2$.
from scipy import optimize
import numpy as np
#Find the relationship between Legendre polynomials and MSE for neurogrid neurons
#This code encapsulates the previous work in a function and is called by
#the optimizer below
def MSEPredictorCoefficients(noise, basisSize, startNeurons, sparsity=None):
rng = np.random.RandomState()
num_neurons=2**arange(startNeurons,11,1) #2**startNeurons, ..., 1024; lower the upper bound of the arange to run faster
num_runs = 20 #Number of runs to average over (used by the loop below)
num_points = 100 #Number of sample points; a large number is needed for nnls
eval_points=np.linspace(-1,1,num_points).reshape(-1,1)
xi=linspace(-1,1,num_points)
LegNum = arange(0,basisSize)
LegCoeffMat = eye(basisSize)
LegBasis=zeros([basisSize, num_points])
for i in LegNum:
LegBasis[i,:] = np.polynomial.legendre.legval(xi,LegCoeffMat[i,:])
#A generator for the legendre basis
legendre = lambda p: (lambda x: (np.polynomial.legendre.legval(x,LegCoeffMat[p,:])))
PavgMSE=[]
for P in LegNum:
avgMSE=[]
for N in num_neurons:
MSE=[]
for i in np.arange(num_runs):
model = nengo.Network(label='Ensemble')
with model:
ens = nengo.Ensemble(N, dimensions=1,
neuron_type=neuron_type,
eval_points=eval_points)
tmp = nengo.Ensemble(1,1) #temporary ensemble to allow finding decoders
conn = nengo.Connection(ens, tmp, #temporary connection to allow finding decoders
function = legendre(P))
solver = LstsqL2(reg=noise)
sim = nengo.Simulator(model)
X, A, T = nengo.builder.build_linear_system(conn, sim.model)
A = A.T
d = sim.data[conn].decoders
A_noise = A + rng.randn(*A.shape)*(noise*np.max(A)) #Test under noise
Xhat = np.dot(A_noise.T, d.T) #Compute estimate with decoders
MSE.append(np.mean((LegBasis[P,:]-Xhat[:,0])**2)) #Compute MSE
avgMSE.append(np.mean(MSE)) #Take the mean MSE over numRuns
PavgMSE.append(avgMSE) #Collect the MSE across powers
data = array(PavgMSE)[:,0:] #Start fit at N=64 usually
length = data.shape[1]
xi = arange(0,length)
A = array([xi, ones(length)])
dataLog = log(data)
w=[]
for row in dataLog:
w.append(linalg.lstsq(A.T, row)[0]) # obtaining the parameters
w=array(w)
#Fit to slopes... linear
length = w.shape[0]
xi = arange(0,length)
A = array([xi, ones(length)])
optSlope = linalg.lstsq(A.T, w[:,0])[0]
#Fit to intercepts... nonlinear
fitfunc = lambda p, x: p[0]*p[2]**(-x) - p[1]
errfunc = lambda p, x, y: fitfunc(p, x) - y
p0 = [-6.0,-1.0,2.0] #Start guess
optInter,success = optimize.leastsq(errfunc, p0[:], args=(xi[1::], w[1::,1]))
firstP = w[0,:] #the slope and intercept (for P=0) which may not be well fit
return optSlope, optInter, firstP #The constants for the fit
#First we find the constants in our known MSE functions for each legendre basis
#This basically calls a function that encapsulates the code developed above
#It also includes a sparsity option, which is described in the 'Function quality
#under sparsification' notebook.
startNeurons = 6 #N for 2^N to start the coefficient generation at
noise = 0.1
basisSize = 7 #Number of legendre bases to use to generate the fit
optSlope, optInter, firstP = MSEPredictorCoefficients(noise, #Really don't have to call this every time
basisSize, #that you change the function below.
startNeurons,
sparsity=None)
import numpy as np
#Define the problem by picking the function we're going to use, setting an MSE and choosing \sigma
xLen = 1000
x = linspace(-1,1,xLen)
dx = 2./xLen
MSE = 0.001
fx = x[:]**2
#fx = np.polynomial.legendre.legval(x,[0,0,1]) #Generate just the second-order L-basis
#Now we find the coefficients of the representation of this on the Legendre basis
basisMax = 10 #Max number of basis functions to use, only hit if the MSE isn't low enough
LegCoeffMat = eye(basisMax)
basisMSE = 0
i = 0
while i < basisMax:
C=np.polynomial.legendre.legfit(x,fx,i)
basisMSE = mean((np.polynomial.legendre.legval(x,C) - fx)**2)
if basisMSE < MSE/10:
print 'Using %d basis functions, with coefficients: %s' % ((i+1), ', '.join(map(str, C)))
break
i += 1
if i==basisMax:
print 'Hit max number of basis functions before meeting MSE criteria.'
Using 3 basis functions, with coefficients: 0.333333333333, -9.71978246396e-17, 0.666666666667
#We know the nonlinear functions to be called by the optimizer since we have found the
#predictive coefficients
def MSEPrediction(P, optSlope, optInter, startNeurons, numNeurons):
f1 = optSlope[0]*P + optSlope[1]
f2 = optInter[0]*optInter[2]**(-P) - optInter[1]
f3 = log2(numNeurons) - startNeurons
prediction = np.exp(outer(f1,f3).T+f2)
prediction.shape = ()
return prediction
#Last we use an optimizer in scipy to find the minimum N to meet the desired MSE
#The function to return the current MSE as a function of the number of neurons (the parameters for the fit)
def fitfunc(p):
#p are the parameters, here the number of neurons for each basis
#C (computed above) are the coefficients on those bases for this function
MSETotal = 0
#Do P=0 on its own first
MSETotal += C[0]*np.exp(firstP[0]*(log2(p)-startNeurons) + firstP[1])
#print p, MSETotal
for i in arange(1,C.shape[0]):
MSETotal += C[i]*MSEPrediction(i, optSlope, optInter, startNeurons, p)
#print p, MSETotal
return MSETotal
errfunc = lambda p: (fitfunc(p) - MSE)**2
p0 = 100 #Start guess, 100 neurons
optNumNeurons = optimize.fmin(errfunc, p0)
print 'Note that this value is the *mean* needed to get the desired MSE.' \
' So you may do better or worse for a specific population'
print 'The total number of neurons is', int(np.max(optNumNeurons)), '-ish.'
Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 25
         Function evaluations: 50
Note that this value is the *mean* needed to get the desired MSE. So you may do better or worse for a specific population
The total number of neurons is 249 -ish.
Because the decoders are linear over the same neurons and the function being decoded is a sum of these bases (meaning the MSEs add, since we've assumed they are independent), the above optimization is appropriate. Our independence assumption means that the result is worst case, so when checking the result we should do as well or better than expected.
#Check the result
N = int(np.max(optNumNeurons))
xi = linspace(-1,1,num_points)
fx = xi**2
LegBasis = np.polynomial.legendre.legval(xi,[0,0,1]) #Generate just the second-order L-basis
legendre = lambda x: (np.polynomial.legendre.legval(x,[0,0,1]))
def myFunc(x):
return x[0]*x[0]
model = nengo.Network(label='Ensemble')
with model:
ens = nengo.Ensemble(N, dimensions=1,
neuron_type=neuron_type,
eval_points=eval_points)
tmp = nengo.Ensemble(1,1) #temporary ensemble to allow finding decoders
conn = nengo.Connection(ens, tmp, #temporary connection to allow finding decoders
function = myFunc)
solver = LstsqL2(reg=noise)
sim = nengo.Simulator(model)
X, A, T = nengo.builder.build_linear_system(conn, sim.model)
A = A.T
d = sim.data[conn].decoders
A_noise = A + rng.randn(*A.shape)*(noise*np.max(A)) #Test under noise
Xhat = np.dot(A_noise.T, d.T) #Compute estimate with decoders
print 'The MSE for this population is: %1.8f' % np.mean((fx-Xhat[:,0])**2) #Compute MSE
The MSE for this population is: 0.00046698
plot(xi,fx)
plot(xi,Xhat)