Deep Learning
Course Code: BCS714A
CIE Marks: 50
Teaching Hours/Week (L:T:P:S): 3:0:0:0
SEE Marks: 50
Total Hours of Pedagogy: 40
Total Marks: 100
Credits: 03
Exam Hours: 03
Examination type (SEE): Theory
Module-1
Introducing Deep Learning: Biological and Machine Vision: Biological Vision; Machine Vision:
The Neocognitron, LeNet-5, The Traditional Machine Learning Approach, ImageNet and the
ILSVRC, AlexNet, TensorFlow Playground. Human and Machine Language: Deep Learning for
Natural Language Processing: Deep Learning Networks Learn Representations Automatically,
Natural Language Processing, A Brief History of Deep Learning for NLP; Computational
Representations of Language: One-Hot Representations of Words, Word Vectors, Word-Vector
Arithmetic, word2viz, Localist Versus Distributed Representations; Elements of Natural Human
Language.
Textbook 2: Chapters 1 and 2
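A brief Python (NumPy) sketch, not from the prescribed texts, contrasting the one-hot (localist) and dense (distributed) word representations named above, including the word-vector arithmetic idea; the four-word vocabulary and the 2-D vector values are invented purely for illustration.

import numpy as np

vocab = ["king", "queen", "man", "woman"]

# One-hot: each word gets a sparse vector with a single 1; no notion of similarity.
one_hot = {w: np.eye(len(vocab))[i] for i, w in enumerate(vocab)}

# Toy dense vectors (hypothetical values, standing in for trained embeddings).
vec = {
    "king":  np.array([0.9, 0.8]),
    "queen": np.array([0.9, 0.2]),
    "man":   np.array([0.1, 0.8]),
    "woman": np.array([0.1, 0.2]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Word-vector arithmetic: king - man + woman should land near queen.
target = vec["king"] - vec["man"] + vec["woman"]
best = max(vocab, key=lambda w: cosine(vec[w], target))
print(best)  # -> "queen" with these toy vectors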
Module-2
Regularization for Deep Learning: Parameter Norm Penalties, Norm Penalties as Constrained
Optimization, Regularization and Under-Constrained Problems, Dataset Augmentation, Noise Robustness,
Semi-Supervised Learning, Multi-Task Learning, Early Stopping, Parameter Tying and Parameter Sharing,
Sparse Representations. Optimization for Training Deep Models: How Learning Differs from Pure
Optimization, Basic Algorithms, Parameter Initialization Strategies, Algorithms with Adaptive Learning Rates.
Textbook 1: Chapter 7 (7.1 to 7.10), Chapter 8 (8.1, 8.3, 8.4, 8.5)
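A brief Python (NumPy) sketch, not from the prescribed texts, of two ideas listed above: an L2 (weight-decay) norm penalty added to the gradient, and early stopping driven by a held-out validation set. The data, learning rate, and patience values are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=200)
X_tr, y_tr, X_va, y_va = X[:150], y[:150], X[150:], y[150:]

lam, lr, patience = 0.01, 0.05, 10   # L2 strength, step size, early-stop patience
w = np.zeros(5)
best_loss, best_w, wait = np.inf, w.copy(), 0

for step in range(1000):
    # Gradient of the MSE plus the L2 penalty term lam * ||w||^2.
    grad = 2 * X_tr.T @ (X_tr @ w - y_tr) / len(y_tr) + 2 * lam * w
    w -= lr * grad

    val_loss = np.mean((X_va @ w - y_va) ** 2)
    if val_loss < best_loss:      # validation improved: remember these weights
        best_loss, best_w, wait = val_loss, w.copy(), 0
    else:                         # no improvement: count toward patience
        wait += 1
        if wait >= patience:      # early stopping triggers here
            break

print(step, best_loss)            # best_w holds the weights from the best validation point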
Module-3
Convolutional Neural Networks: The Convolution Operation, Motivation, Pooling, Convolution and
Pooling as an Infinitely Strong Prior, Variants of the Basic Convolution Function, Structured Outputs,
Data Types, Efficient Convolution Algorithms, Convolutional Networks and the History of Deep
Learning.
Textbook 1: Chapter 9 (9.1 to 9.8, 9.11)
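A brief Python (NumPy) sketch, not from the prescribed texts, of the convolution operation (implemented, as is common in practice and noted in Textbook 1, as cross-correlation) followed by non-overlapping max pooling; the input image and kernel values are arbitrary.

import numpy as np

def conv2d(image, kernel):
    # "Valid" 2-D cross-correlation: slide the kernel over the image, no padding.
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kH, j:j+kW] * kernel)
    return out

def max_pool2d(x, size=2):
    # Non-overlapping max pooling with a size x size window.
    H, W = x.shape
    x = x[:H - H % size, :W - W % size]      # trim so the windows tile exactly
    return x.reshape(H // size, size, W // size, size).max(axis=(1, 3))

image = np.arange(36, dtype=float).reshape(6, 6)
edge_kernel = np.array([[1., -1.]])          # responds to horizontal change
feature_map = conv2d(image, edge_kernel)     # shape (6, 5)
print(max_pool2d(feature_map).shape)         # -> (3, 2)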
Module-4
Sequence Modelling: Recurrent and Recursive Nets: Unfolding Computational Graphs, Recurrent
Neural Networks, Bidirectional RNNs, Encoder-Decoder Sequence-to-Sequence Architectures, Deep
Recurrent Networks, Recursive Neural Networks, Long Short-Term Memory.
Textbook 1: Chapter 10 (10.1 to 10.6, 10.10)
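A brief Python (NumPy) sketch, not from the prescribed texts, of a recurrent network unfolded over time, computing h_t = tanh(W h_{t-1} + U x_t + b) with an output y_t = V h_t and the same parameters reused at every step; the dimensions and random weights are illustrative.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, T = 4, 8, 3, 5

U = rng.normal(scale=0.1, size=(n_hid, n_in))    # input-to-hidden weights
W = rng.normal(scale=0.1, size=(n_hid, n_hid))   # hidden-to-hidden (recurrent) weights
V = rng.normal(scale=0.1, size=(n_out, n_hid))   # hidden-to-output weights
b = np.zeros(n_hid)

xs = rng.normal(size=(T, n_in))                  # a length-T input sequence
h = np.zeros(n_hid)                              # initial hidden state h_0

ys = []
for x_t in xs:                                   # the unfolded computational graph
    h = np.tanh(W @ h + U @ x_t + b)             # same parameters reused each step
    ys.append(V @ h)

print(np.stack(ys).shape)                        # -> (5, 3): one output per time step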
Module-5
Interactive Applications of Deep Learning: Natural Language Processing: Preprocessing Natural
Language Data: Tokenization, Converting All Characters to Lowercase, Removing Stop Words and
Punctuation, Stemming, Handling n-grams, Preprocessing the Full Corpus; Creating Word Embeddings
with word2vec: The Essential Theory Behind word2vec, Evaluating Word Vectors, Running word2vec,
Plotting Word Vectors; The Area Under the ROC Curve: The Confusion Matrix, Calculating the ROC
AUC Metric; Natural Language Classification with Familiar Networks: Loading the IMDb Film
Reviews, Examining the IMDb Data, Standardizing the Length of the Reviews, Dense Network,
Convolutional Networks; Networks Designed for Sequential Data: Recurrent Neural Networks, Long
Short-Term Memory Units, Bidirectional LSTMs, Stacked Recurrent Models, Seq2seq and Attention;
Transfer Learning in NLP.
Textbook 2: Chapter 8
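A brief Python sketch, not from the prescribed texts, of two items above: the basic preprocessing steps (tokenization, lowercasing, stop-word and punctuation removal) and the ROC AUC metric computed from first principles, as the probability that a randomly chosen positive example outscores a randomly chosen negative one. The tiny stop-word list, classifier scores, and labels are all toy values.

import re
from itertools import product

STOP_WORDS = {"the", "a", "is", "of"}               # toy stop-word list

def preprocess(text):
    tokens = re.findall(r"[a-z']+", text.lower())   # tokenize + lowercase, drop punctuation
    return [t for t in tokens if t not in STOP_WORDS]

print(preprocess("The plot of this film is a TRIUMPH!"))
# -> ['plot', 'this', 'film', 'triumph']

def roc_auc(scores, labels):
    # AUC = P(score of a positive > score of a negative); ties count half.
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p, n in product(pos, neg))
    return wins / (len(pos) * len(neg))

# Hypothetical classifier scores for four reviews with known sentiment labels.
print(roc_auc([0.9, 0.4, 0.6, 0.2], [1, 0, 1, 0]))  # -> 1.0 (perfect ranking)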
Suggested Learning Resources:
Books
1. Ian Goodfellow, Yoshua Bengio and Aaron Courville, Deep Learning, MIT Press, 2016.
https://www.deeplearningbook.org/lecture_slides.html
2. Jon Krohn, Grant Beyleveld, Aglaé Bassens, Deep Learning Illustrated: A Visual, Interactive
Guide to Artificial Intelligence, Pearson, 2022.
