Syllabus
Unit 1
Artificial Neural Networks – The Neuron – Expressing Linear Perceptrons as Neurons – Feed-Forward Neural Networks – Linear Neurons and Their Limitations – Sigmoid, Tanh, and ReLU Neurons – Softmax Output Layers – Training Feed-Forward Neural Networks – Gradient Descent – The Delta Rule and Learning Rates – Gradient Descent with Sigmoidal Neurons – The Backpropagation Algorithm – Stochastic and Minibatch Gradient Descent – Test Sets, Validation Sets, and Overfitting – Preventing Overfitting in Deep Neural Networks – Implementing Neural Networks in TensorFlow.
Unit 2
Local Minima in the Error Surfaces of Deep Networks – Model Identifiability – Spurious Local Minima in Deep Networks – Flat Regions in the Error Surface – Momentum-Based Optimization – Learning Rate Adaptation.
Unit 3
Convolutional Neural Networks (CNN) – Architecture – Accelerating Training with Batch Normalization – Building a Convolutional Network Using TensorFlow – Visualizing Learning in Convolutional Networks – Embedding and Representation Learning – Autoencoder Architecture – Implementing an Autoencoder in TensorFlow – Denoising – Sparsity in Autoencoders – Models for Sequence Analysis – Recurrent Neural Networks – Vanishing Gradients – Long Short-Term Memory (LSTM) Units – TensorFlow Primitives for RNN Models – Augmenting Recurrent Networks with Attention.