bionics-engineering:computational-neuroscience:lab4
Last modified: 20/04/2016 by Davide Bacciu (corrected an erroneous mention of "eigenvalue" in Assignment 4; page created 13/04/2016).
Implement a discrete time version of the Hebb correlation rule by
  * Starting from a weight vector $w$ randomly initialized in $[-1,1]$
  * For each data sample in the data matrix, updating the synaptic weights using $w(t+1) = w(t) + \epsilon \frac{dw}{dt}$, where $\epsilon$ is a small positive constant (e.g. $\epsilon = 0.01$) and $\frac{dw}{dt}$ is computed by the Hebb correlation rule (assuming $\tau_w = 1$)
  
To implement this process, at each time step, feed the neuron with an input $u$ from the data matrix. Once you have reached the last data point in the matrix, shuffle (i.e. randomly reorder) the samples (e.g. consider using the function ''randperm()'') and start again from the first element of the reordered matrix. Keep iterating this process until the change in $w$ between two consecutive sweeps through the whole data is negligible (i.e. the norm of the difference of the new and old vectors is smaller than an arbitrarily small positive threshold).
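The training loop above could be sketched as follows. This is a minimal sketch, not the course solution: it assumes Python with NumPy instead of MATLAB (''np.random.permutation'' playing the role of ''randperm()''), a linear neuron $v = w \cdot u$ so that the Hebb correlation rule reads $\frac{dw}{dt} = v u$, and a data matrix with one sample per row. Since the plain Hebb rule makes $\|w\|$ grow without bound, a maximum number of sweeps is added as a safeguard alongside the convergence test.

```python
import numpy as np

def hebb_train(data, eps=0.01, tol=1e-3, max_sweeps=500, seed=0):
    """Discrete-time Hebb correlation rule: w <- w + eps * v * u, with v = w.u."""
    rng = np.random.default_rng(seed)
    n, d = data.shape
    w = rng.uniform(-1.0, 1.0, size=d)            # random init in [-1, 1]
    for sweep in range(max_sweeps):               # safeguard: cap the sweeps
        w_old = w.copy()
        for u in data[rng.permutation(n)]:        # shuffle each sweep (randperm analogue)
            v = w @ u                             # linear neuron output
            w = w + eps * v * u                   # Hebb correlation update (tau_w = 1)
        if np.linalg.norm(w - w_old) < tol:       # negligible change between sweeps
            break
    return w
```

Because the update amplifies $w$ along the direction of largest input correlation, the learned vector should align with the principal eigenvector of the input correlation matrix even though its norm keeps growing.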
  
After training has converged, plot a figure displaying (on the same graph) the training data points, the final weight vector $w$ resulting from the learning process and the first eigenvector of the zero-mean input correlation matrix (e.g. subtract the mean of the population from the data matrix, compute the correlation matrix and then apply the ''eig()'' function).
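The eigenvector computation described above could look like this in a NumPy sketch (again assuming Python rather than MATLAB; ''np.linalg.eigh'' is used in place of ''eig()'' because the correlation matrix is symmetric):

```python
import numpy as np

def principal_eigenvector(data):
    """First eigenvector of the zero-mean input correlation matrix."""
    centered = data - data.mean(axis=0)           # subtract the population mean
    C = centered.T @ centered / len(centered)     # zero-mean correlation matrix
    eigvals, eigvecs = np.linalg.eigh(C)          # eigh: C is symmetric
    return eigvecs[:, np.argmax(eigvals)]         # eigenvector of the largest eigenvalue
```

Note that an eigenvector is only defined up to sign, so the learned $w$ may point in the opposite direction to the one returned here.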
    
Generate two figures plotting the evolution in time of the two components of the weight vector $w$ (for this you will need to keep track of the $w(t)$ evolution during training). The plot will have time on the $x$ axis and the weight value on the $y$ axis (provide a separate plot for each component of the weight vector). Also provide another plot of the evolution in time of the norm of the weight vector during learning.
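The plots above could be produced as follows, assuming the history of $w(t)$ was recorded at every update into an array with one row per time step (a Python/matplotlib sketch; the non-interactive ''Agg'' backend and the ''out_prefix'' file naming are illustrative choices, not part of the assignment):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")                # render to files, no display needed
import matplotlib.pyplot as plt

def plot_weight_evolution(w_history, out_prefix="lab4"):
    """w_history: array of shape (T, 2) with w(t) recorded at every update."""
    w_history = np.asarray(w_history)
    t = np.arange(len(w_history))
    for i in range(w_history.shape[1]):          # one figure per weight component
        plt.figure()
        plt.plot(t, w_history[:, i])
        plt.xlabel("time step")
        plt.ylabel(f"w[{i}]")
        plt.savefig(f"{out_prefix}_w{i}.png")
        plt.close()
    plt.figure()                                 # evolution of the weight norm
    plt.plot(t, np.linalg.norm(w_history, axis=1))
    plt.xlabel("time step")
    plt.ylabel("||w||")
    plt.savefig(f"{out_prefix}_norm.png")
    plt.close()
```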