**Lectures at Technische Universität München (original resource)**

Stefan Glasauer, Werner Hemmert: **Introduction to Computational Neuroscience** (TUM)

together with Erich Schneider

1) Overview

2) Neurons, Resting membrane potential, Ion channels, Action potential, etc. (Schneider)

3) Hodgkin-Huxley, Phase Plane analysis (Glasauer)

4) Leaky integrate-and-fire, Synaptic transmission, From spikes to firing rate (Glasauer)

4a) Model of I&F neuron in MATLAB/SIMULINK (Simulink MDL-file, MATLAB M-file)
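The course provides its integrate-and-fire model as MATLAB/Simulink files (not reproduced here). As a language-neutral illustration, the leaky integrate-and-fire neuron reduces to a single ODE integrated by forward Euler; the sketch below is in Python, and all parameter values are illustrative assumptions, not the course's.

```python
def lif_spike_times(I_ext, dt=1e-4, t_max=0.5,
                    tau=0.02, R=1e7, v_rest=-0.070,
                    v_thresh=-0.054, v_reset=-0.070):
    """Forward-Euler simulation of a leaky integrate-and-fire neuron.

    Membrane equation:  tau * dV/dt = -(V - v_rest) + R * I_ext
    The neuron emits a spike and resets whenever V crosses v_thresh.
    Units: seconds, volts, amperes, ohms (values here are illustrative).
    """
    v = v_rest
    spikes = []
    for i in range(int(t_max / dt)):
        v += (dt / tau) * (-(v - v_rest) + R * I_ext)
        if v >= v_thresh:
            spikes.append(i * dt)   # record spike time
            v = v_reset             # reset after spiking
    return spikes

# A constant 2 nA input drives R*I = 20 mV above rest, well past the
# 16 mV threshold, so the model fires regularly:
spikes = lif_spike_times(2e-9)
```

With suprathreshold constant input the interspike interval is approximately tau * ln(RI / (RI - (v_thresh - v_rest))), which is the standard f-I relation for this model.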

5) Analysing firing rate (linear regression and system identification) (Glasauer)

5a) Information theoretic approach (Brostek)

6) Temporal information processing (Hemmert)

7) Firing rate networks, Attractor networks, Head direction cells (Glasauer)

8) Mechanisms of sensory fusion (gravity estimation from vestibular inputs) (Glasauer)

9) Modelling of sensorimotor systems (Glasauer)

…

**Lectures at University of Cambridge (original resource)**

**Principles of computational neuroscience** (Dr. M. Lengyel)

**How is neural activity generated?** Mechanistic neuron models:

integrate-and-fire models, spike response models, phase models, phase response curves, firing rate models

**How to predict neural activity?** Descriptive neuron models:

neural coding, estimating firing rates, homogeneous and inhomogeneous Poisson processes, tuning curves, variability, spike-triggered average and covariance, LNP models and their extensions, population coding, maximum entropy models

**What should neurons do?** Normative neuron models:

information theory, entropy, (Shannon) mutual information, infomax for one cell, infomax for a population

**How to read neural activity?** Neural decoding:

single-neuron decoding, signal detection theory, ROC curves, population decoding, dot product, maximum likelihood decoding, Cramér-Rao bound, Fisher information, spike train decoding, probabilistic population codes

**What happens when many neurons are connected?** Neural networks:

feedforward networks, coordinate transformations, recurrent networks, oscillations and synchrony, excitatory-inhibitory networks, selective amplification, input integration, nonlinear amplification, winner-takes-all dynamics, gain control, and sustained activity

**How do neurons reconfigure their connections?** Plasticity:

Hebbian plasticity, stability, synaptic normalisation, Bienenstock-Cooper-Munro rule, spike-timing-dependent plasticity

**How to tell a neural network what to do?** Supervised learning:

learning and memory, generalisation, taxonomy of learning tasks (supervised, unsupervised, semi-supervised, reinforcement), classification and regression, perceptron, learning rules as gradient ascent, multi-layer perceptron, error backpropagation, tempotron

**How can neuronal networks learn without being told what to do?** Unsupervised learning:

connectionism, statistical foundations, Bayes' rule, density estimation, representations of uncertainty, Boltzmann machine, Helmholtz machine, deep belief networks, mutual information, sparse coding

**How do neural networks remember?** Auto-associative memory:

attractor networks, binary Hopfield network, extension to graded neurons, probabilistic interpretation, spike-timing-based memories

**How can our brains achieve the goal of life?** Reinforcement learning:

Bellman equations, temporal difference learning, dopamine signals, neuroeconomics
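A few of the syllabus topics above are compact enough to sketch in code. First, the homogeneous Poisson process from the descriptive-model lectures: its inter-spike intervals are exponentially distributed, so a spike train can be generated by accumulating exponential waits. This is a minimal Python sketch; the function name and parameters are my own, not from the course materials.

```python
import random

def homogeneous_poisson(rate, t_max, seed=None):
    """Draw spike times from a homogeneous Poisson process.

    For a constant firing rate (in Hz), inter-spike intervals are
    exponentially distributed with mean 1/rate, so we accumulate
    exponential waiting times until t_max is exceeded.
    """
    rng = random.Random(seed)
    t, spikes = 0.0, []
    while True:
        t += rng.expovariate(rate)   # next interval ~ Exp(rate)
        if t >= t_max:
            return spikes
        spikes.append(t)

# 100 Hz for 10 s: the expected spike count is rate * t_max = 1000
train = homogeneous_poisson(100.0, 10.0, seed=1)
```

A quick sanity check on such a train is that the spike count is close to rate × duration and that the Fano factor (variance/mean of counts in windows) is near 1, the Poisson signature discussed under "variability".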
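The binary Hopfield network from the auto-associative memory lectures also fits in a few lines: Hebbian outer-product weights store patterns, and asynchronous sign updates pull a corrupted cue back to the nearest stored pattern. A rough sketch under the standard +1/-1 convention (names are illustrative):

```python
def train_hopfield(patterns):
    """Hebbian outer-product weights for a binary (+1/-1) Hopfield net."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:                      # no self-connections
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, sweeps=5):
    """Asynchronous threshold updates; converges to an attractor."""
    s = list(state)
    n = len(s)
    for _ in range(sweeps):
        for i in range(n):
            h = sum(w[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1
    return s

stored = [1, 1, 1, 1, -1, -1, -1, -1]
w = train_hopfield([stored])
cue = [1, -1, 1, 1, -1, -1, -1, -1]    # one bit flipped
print(recall(w, cue) == stored)         # True: the flipped bit is repaired
```

Each update can only decrease the network's energy, which is why the dynamics settle into the stored attractor, the same argument made in the attractor-network lectures.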
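Finally, the reinforcement-learning topic: TD(0) value learning on a toy chain makes the link between Bellman equations and the TD error (the quantity often identified with phasic dopamine signals) concrete. The environment and parameters below are my own minimal assumptions, not an example from the course.

```python
def td0_chain(n_states=5, episodes=500, alpha=0.1, gamma=1.0, reward=1.0):
    """TD(0) value learning on a deterministic chain:
    state 0 -> 1 -> ... -> n-1 (terminal), reward only on the final step.
    With gamma = 1 the true value of every non-terminal state is `reward`.
    """
    V = [0.0] * n_states
    for _ in range(episodes):
        for s in range(n_states - 1):
            s_next = s + 1
            r = reward if s_next == n_states - 1 else 0.0
            v_next = 0.0 if s_next == n_states - 1 else V[s_next]
            delta = r + gamma * v_next - V[s]   # TD error ("dopamine signal")
            V[s] += alpha * delta               # bootstrap update
    return V

V = td0_chain()   # V[0..3] converge to 1.0; terminal V[4] stays 0
```

The TD error delta is exactly the mismatch between the two sides of the Bellman equation at the visited state, which is what makes it both a learning signal and a reward-prediction error.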

**Main text book**

Dayan & Abbott. Theoretical Neuroscience. MIT Press, 2005.

useful exercises with sample data and MATLAB code: here

…

**MORE RESOURCES FOR LEARNING**

**General Links**

**Programming with MATLAB**

- An Introduction (PDF)
- Scripts / Functions / Improving speed of calculations
- MATLAB Wikibook
- MATLAB for Neuroscientists (Amazon)
- Project code (in Matlab) – NeuroSimulator by Piotr Mirowski (2006)

…

**Additional reading**

Rieke et al. Spikes. Exploring the Neural Code. MIT Press, 1999.

Gerstner & Kistler. Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, 2002.

Rao et al. Probabilistic Models of the Brain: Perception and Neural Function. MIT Press, 2002.

…

**Further reading of potential interest**

Izhikevich. Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting. MIT Press, 2007.

MacKay. Information Theory, Inference & Learning Algorithms. Cambridge University Press, 2003.

Sutton & Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.

O’Reilly et al. Computational Explorations in Cognitive Neuroscience: Understanding the Mind by Simulating the Brain. MIT Press, 2000.
