
Machine Learning II (BAI702)


Course Code: BAI702
CIE Marks: 50
Teaching Hours/Week (L:T:P:S): 3:0:2:0
SEE Marks: 50
Total Hours of Pedagogy: 40 hours Theory + 8-10 Lab slots
Total Marks: 100
Credits: 04
Exam Hours: 3
Examination nature (SEE): Theory/Practical




MODULE-1

Introduction: Well-Posed Learning Problems, Designing a Learning System, Perspectives and Issues in Machine Learning.

Concept Learning and the General-to-Specific Ordering: A Concept Learning Task, Concept Learning as Search, Find-S: Finding a Maximally Specific Hypothesis, Version Spaces and the Candidate-Elimination Algorithm, Remarks on Version Spaces and Candidate-Elimination, Inductive Bias.

Text Book 1: Ch 1 & 2




MODULE-2

Learning Sets of Rules: Sequential Covering Algorithms, Learning Rule Sets: Example-Based Methods, Learning First-Order Rules, FOIL: A First-Order Inductive Learner.

Analytical Learning: Perfect Domain Theories: Explanation-Based Learning, Explanation-Based Learning of Search Control Knowledge, Inductive-Analytical Approaches to Learning.

Text Book 1: Ch 10 & 11




MODULE-3

Decision by Committee: Ensemble Learning: Boosting: AdaBoost, Stumping; Bagging: Subagging; Random Forests: Comparison with Boosting; Different Ways to Combine Classifiers.

Unsupervised Learning: The k-Means Algorithm: Dealing with Noise, The k-Means Neural Network, Normalisation, A Better Weight Update Rule, Using Competitive Learning for Clustering.

Text Book 2: Chap 13 and 14.1




MODULE-4

Unsupervised Learning: Vector Quantisation, The Self-Organising Feature Map, The SOM Algorithm, Neighbourhood Connections, Self-Organisation, Network Dimensionality and Boundary Conditions, Examples of Using the SOM.

Markov Chain Monte Carlo (MCMC) Methods: Sampling: Random Numbers, Gaussian Random Numbers, Monte Carlo or Bust, The Proposal Distribution, Markov Chain Monte Carlo.

Text Book 2: Chap 14.2, 14.3, 15




MODULE-5

Graphical Models: Bayesian Networks: Approximate Inference, Making Bayesian Networks, Markov Random Fields; Hidden Markov Models (HMMs): The Forward Algorithm, The Viterbi Algorithm, The Baum–Welch or Forward–Backward Algorithm; Tracking Methods: The Kalman Filter, The Particle Filter.

Text Book 2: Chap 16




PRACTICAL COMPONENT OF IPCC 


Experiments

1. Read a dataset from the user.
i. Use the Find-S algorithm to find the most specific hypothesis that is consistent with the positive examples. What is the final hypothesis after processing all the positive examples?
ii. Using the same dataset, apply the Candidate Elimination algorithm. Determine the final version space after processing all examples (both positive and negative). What are the most specific and most general hypotheses in the version space? (A minimal sketch of both algorithms follows.)
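A minimal sketch of Find-S and a simplified Candidate Elimination (the S boundary kept as a single conjunctive hypothesis), assuming a CSV file whose last column is a yes/no label; the file name `enjoysport.csv` is a placeholder:

```python
import csv

def load_examples(path):
    # Each row: attribute values followed by a yes/no label.
    with open(path) as f:
        return [(row[:-1], row[-1].strip().lower()) for row in csv.reader(f)]

def find_s(examples):
    """Most specific hypothesis consistent with the positive examples."""
    h = None
    for x, y in examples:
        if y != "yes":
            continue                     # Find-S ignores negative examples
        if h is None:
            h = list(x)                  # initialise from the first positive
        else:
            # generalise every attribute that disagrees to "?"
            h = [hi if hi == xi else "?" for hi, xi in zip(h, x)]
    return h

def candidate_elimination(examples):
    """S and G boundaries of the version space (simplified form)."""
    n = len(examples[0][0])
    S = ["0"] * n                        # "0" marks "no value seen yet"
    G = [["?"] * n]                      # single maximally general hypothesis
    for x, y in examples:
        if y == "yes":
            # minimally generalise S to cover the positive example
            S = [xi if si in ("0", xi) else "?" for si, xi in zip(S, x)]
            # drop G members that fail to cover it
            G = [g for g in G if all(gi in ("?", xi) for gi, xi in zip(g, x))]
        else:
            # minimally specialise each g in G to exclude the negative example
            specialised = []
            for g in G:
                for i in range(n):
                    if g[i] == "?" and S[i] not in ("?", "0", x[i]):
                        h = list(g)
                        h[i] = S[i]
                        if h not in specialised:
                            specialised.append(h)
            G = specialised
    return S, G

examples = load_examples("enjoysport.csv")   # placeholder file name
print("Find-S hypothesis:", find_s(examples))
S, G = candidate_elimination(examples)
print("S boundary:", S)
print("G boundary:", G)
```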

2. Read a dataset and use an example-based method (such as RIPPER or CN2) to generate a set of classification rules. Apply the FOIL algorithm (First-Order Inductive Learner) to learn first-order rules for prediction. (A sequential-covering sketch follows.)
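A minimal sequential-covering rule learner in the spirit of CN2/RIPPER, not a faithful reimplementation of either; FOIL extends the same covering loop to first-order Horn clauses. The data layout (a list of (attribute-dict, label) pairs with target class "yes") is an assumption:

```python
def covers(rule, x):
    # A rule is a conjunction of attribute=value tests.
    return all(x.get(a) == v for a, v in rule.items())

def precision(rule, examples, target):
    covered = [(x, y) for x, y in examples if covers(rule, x)]
    if not covered:
        return 0.0
    return sum(1 for _, y in covered if y == target) / len(covered)

def learn_one_rule(examples, target):
    """Greedily add the condition that most improves precision."""
    rule = {}
    while precision(rule, examples, target) < 1.0:
        best, best_p = None, precision(rule, examples, target)
        for attr in examples[0][0]:
            if attr in rule:
                continue
            for value in {x[attr] for x, _ in examples}:
                cand = dict(rule, **{attr: value})
                p = precision(cand, examples, target)
                if p > best_p:
                    best, best_p = cand, p
        if best is None:                 # no condition improves the rule
            break
        rule = best
    return rule

def sequential_covering(examples, target="yes"):
    rules, remaining = [], list(examples)
    while any(y == target for _, y in remaining):
        rule = learn_one_rule(remaining, target)
        if not rule or precision(rule, remaining, target) <= 0.5:
            break                        # stop when no useful rule is found
        rules.append(rule)
        # remove the examples the new rule covers, then repeat
        remaining = [(x, y) for x, y in remaining if not covers(rule, x)]
    return rules

data = [({"outlook": "sunny", "windy": "no"}, "yes"),
        ({"outlook": "sunny", "windy": "yes"}, "no"),
        ({"outlook": "rain",  "windy": "no"}, "no")]
print(sequential_covering(data))  # [{'outlook': 'sunny', 'windy': 'no'}]
```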

3. Read a supervised dataset and use bagging and boosting techniques to classify the dataset. Report the performance of each model. (A scikit-learn sketch follows.)
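A sketch using scikit-learn's stock ensembles, assuming the lab permits library implementations; the file `data.csv` with a `label` column is a placeholder layout:

```python
import pandas as pd
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("data.csv")                 # placeholder file and layout
X, y = df.drop(columns="label"), df["label"]
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# Bagging over full decision trees; AdaBoost over its default stumps
# (use base_estimator= instead of estimator= on scikit-learn < 1.2).
bag = BaggingClassifier(estimator=DecisionTreeClassifier(),
                        n_estimators=50, random_state=0)
boost = AdaBoostClassifier(n_estimators=50, random_state=0)

for name, model in [("bagging", bag), ("boosting", boost)]:
    model.fit(Xtr, ytr)
    print(name, "accuracy:", accuracy_score(yte, model.predict(Xte)))
```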

4. Read an unsupervised dataset and group its records by similarity using k-means clustering. (A NumPy sketch follows.)
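A from-scratch k-means sketch with NumPy; the file `points.csv` and k = 3 are placeholders:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=k, replace=False)]  # random initial centres
    for _ in range(iters):
        # Assignment step: nearest centre for every point.
        dists = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each centre moves to the mean of its cluster.
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centres[j] for j in range(k)])
        if np.allclose(new, centres):
            break                                           # converged
        centres = new
    return centres, labels

X = np.loadtxt("points.csv", delimiter=",")   # placeholder numeric dataset
centres, labels = kmeans(X, k=3)
print("cluster sizes:", np.bincount(labels))
```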

5. Read a dataset and perform unsupervised learning using the SOM algorithm. (A minimal sketch follows.)
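A minimal self-organising map sketch with a rectangular grid and a Gaussian neighbourhood that shrinks over training; the grid size, decay schedules, and input file are assumptions:

```python
import numpy as np

def train_som(X, rows=10, cols=10, iters=2000, lr0=0.5, seed=0):
    rng = np.random.default_rng(seed)
    n, dim = X.shape
    weights = rng.random((rows, cols, dim))       # one prototype per grid node
    # Grid coordinates of every node, for neighbourhood distances.
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)
    sigma0 = max(rows, cols) / 2
    for t in range(iters):
        frac = t / iters
        lr = lr0 * (1 - frac)                     # learning rate decays linearly
        sigma = sigma0 * np.exp(-3 * frac)        # neighbourhood shrinks
        x = X[rng.integers(n)]                    # one random training vector
        # Best-matching unit: node whose prototype is nearest to x.
        bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(axis=2)),
                               (rows, cols))
        # Gaussian neighbourhood around the BMU, measured on the grid.
        d2 = ((grid - np.array(bmu)) ** 2).sum(axis=2)
        h = np.exp(-d2 / (2 * sigma ** 2))
        weights += lr * h[..., None] * (x - weights)
    return weights

X = np.loadtxt("points.csv", delimiter=",")  # placeholder numeric dataset
som = train_som(X)                           # som[i, j] = prototype of node (i, j)
```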

6. Write a function to generate uniform random numbers in the interval [0, 1]. Use this function to generate 10 random samples and evaluate f(x) for each sample. What are the sampled function values? Using the samples generated in the previous step, estimate the integral I using the Monte Carlo method. (A sketch with a placeholder integrand follows.)
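A sketch using a hand-rolled linear congruential generator; f(x) and the integral I are not specified in the exercise, so x² over [0, 1] (exact value 1/3) stands in as a placeholder integrand:

```python
def lcg(seed=1, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator yielding uniforms in [0, 1)."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m

def f(x):
    return x ** 2                        # placeholder integrand (assumption)

gen = lcg(seed=42)
samples = [next(gen) for _ in range(10)]
values = [f(x) for x in samples]
# Monte Carlo: over [0, 1], I = E[f(U)], estimated by the sample mean.
estimate = sum(values) / len(values)
print("samples:", samples)
print("f(x) values:", values)
print("Monte Carlo estimate of I:", estimate, "(exact: 1/3)")
```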

7. Read a dataset and estimate the likelihood of an event occurring using Bayesian Networks. (A sketch with a small hand-built network follows.)
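A sketch assuming the pgmpy library is available; the network structure, variable names, and CPD numbers are invented for illustration rather than read from a real dataset:

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Two causes (Rain, Sprinkler) of one effect (WetGrass); all binary.
model = BayesianNetwork([("Rain", "WetGrass"), ("Sprinkler", "WetGrass")])
cpd_rain = TabularCPD("Rain", 2, [[0.8], [0.2]])       # P(Rain)
cpd_sprk = TabularCPD("Sprinkler", 2, [[0.6], [0.4]])  # P(Sprinkler)
cpd_wet = TabularCPD(
    "WetGrass", 2,
    # columns: (Rain=0,Sprk=0), (0,1), (1,0), (1,1)
    [[0.99, 0.10, 0.20, 0.01],    # WetGrass = 0
     [0.01, 0.90, 0.80, 0.99]],   # WetGrass = 1
    evidence=["Rain", "Sprinkler"], evidence_card=[2, 2])
model.add_cpds(cpd_rain, cpd_sprk, cpd_wet)
assert model.check_model()

# Likelihood of the event "grass is wet" given that it rained.
infer = VariableElimination(model)
print(infer.query(["WetGrass"], evidence={"Rain": 1}))
```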

8. Refer to the dataset in question 7 and draw inferences based on a sequence of steps. (A continuation of the sketch above follows.)
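Continuing the sketch above, a sequence of inference steps on the same hypothetical network: a prior marginal, a posterior given evidence, and a most-probable-explanation query:

```python
# Runs against the `infer` object built in the experiment-7 sketch.
prior = infer.query(["WetGrass"])                            # step 1: no evidence
posterior = infer.query(["Rain"], evidence={"WetGrass": 1})  # step 2: diagnostic query
map_state = infer.map_query(["Rain", "Sprinkler"],           # step 3: most likely
                            evidence={"WetGrass": 1})        #         joint cause
print(prior)
print(posterior)
print(map_state)
```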




Suggested Learning Resources:

Books

1. Tom Mitchell, "Machine Learning", McGraw Hill, 1997.

2. Stephen Marsland, "Machine Learning: An Algorithmic Perspective", Second Edition, CRC Press / Taylor and Francis Group, 2015.
