CAP 6615: Neural Networks for Computing
Schedule: MWF, 8th Period
Location: TUR 2334

Texts:

  1. Required: Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press.
  2. Recommended: Neural Networks: A Comprehensive Foundation, Simon Haykin, Macmillan.
  3. Recommended: Pattern Classification, Duda, Hart, and Stork, Wiley-Interscience.
  4. Additional: Statistical Learning Theory, Vladimir N. Vapnik, John Wiley and Sons, New York, 1998.
  5. Other material: notes and papers from the journals Neural Computation, IEEE Transactions on Neural Networks, and Neural Networks.

Instructor: Prof. Anand Rangarajan, CSE E352. Phone: 352 392 1507, Fax: 352 392 1220, email: anand@cise.ufl.edu

Office hours: MWF 4-5pm or by appointment.

Grading:

  1. Homework assignments: 20%.
  2. Two midterms: 25% each.
  3. Project: 30%.

Notes:
  1. Prerequisites: familiarity with basic concepts in calculus, linear algebra, and probability theory. A partial list of basic requirements follows. Calculus: differentiation, the chain rule, integration. Linear algebra: matrix multiplication, inverses, pseudo-inverses. Probability theory: conditional probability, Bayes' rule, conditional expectations.
  2. Homework and programming assignments will be given every two weeks. If you do not have prior numerical computing experience, I suggest you use MATLAB for the programs; a small illustrative sketch follows these notes.
  3. The first midterm will be given around the middle of the semester; the second will be given during the last week of classes.
  4. The project will be the same for all students. A project demonstration is due at the end of the semester and will be graded competitively. Depending on enrollment, the project will be done either in teams of two or individually.
  5. A set of informal notes which will evolve with the course can be found here.
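
For students with no prior numerical computing experience, the following minimal MATLAB sketch shows the flavor of the programming assignments. It is purely illustrative (the data, variable names, and parameter values are made up for this example, and it is not an assigned problem); it implements the perceptron learning rule covered in the Supervised Learning unit of the syllabus.

    % Minimal perceptron training sketch (illustrative only).
    % X holds one sample per row; y holds labels in {-1, +1}.
    X = [2 1; 1 3; -1 -2; -2 -1];          % toy linearly separable data
    y = [1; 1; -1; -1];                    % class labels
    w = zeros(size(X, 2), 1);              % weight vector
    b = 0;                                 % bias term
    eta = 0.1;                             % learning rate
    for epoch = 1:100
        for i = 1:size(X, 1)
            if y(i) * (X(i, :) * w + b) <= 0      % sample misclassified?
                w = w + eta * y(i) * X(i, :)';    % perceptron weight update
                b = b + eta * y(i);               % bias update
            end
        end
    end
    disp([w' b]);                          % learned weights and bias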


Syllabus
Supervised Learning: linear discriminants, the perceptron, backpropagation, multi-layer perceptrons, radial basis functions, learning and generalization theory, support vector machines.
Density Estimation: finite mixtures, the expectation-maximization (EM) algorithm, Bayesian networks.
Unsupervised Learning: competitive networks, clustering, Kohonen self-organizing feature maps, Hebbian learning, principal and independent component analysis (PCA and ICA), kernel methods, locally linear embedding.