Friday, July 22, 2016

Machine Learning - 3 : Evaluating a Learning Algorithm


When our learning algorithm makes large errors on new data, there are several promising things we can try. Some strategies for evaluating a learning algorithm are as follows.

1. Splitting Data Set.

Let's say we split the data in some ratio (say 70:30). 70% of the data is used as the training set, and the remaining 30% is set aside as the test set. If the data is not randomly ordered, it's always better to shuffle it before splitting.

We then compute the test set error: the average error of the hypothesis (learned from the training set) when evaluated on the test data.
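The split-and-evaluate procedure above can be sketched as follows. This is a minimal illustration: the synthetic linear data, the `numpy.polyfit` model, and the 70:30 ratio are assumptions for the example, not part of the course material.

```python
import numpy as np

# Synthetic dataset: y is roughly linear in X, with Gaussian noise (assumed for illustration).
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, 100)
y = 2 * X + 1 + rng.normal(0, 0.5, 100)

# Shuffle before splitting, in case the data is not randomly ordered.
idx = rng.permutation(len(X))
split = int(0.7 * len(X))          # 70:30 split
train, test = idx[:split], idx[split:]

# Fit the hypothesis on the training set only.
w, b = np.polyfit(X[train], y[train], 1)

# Test set error: mean squared error on the held-out 30%.
test_mse = float(np.mean((w * X[test] + b - y[test]) ** 2))
print(round(test_mse, 3))
```

The important point is that the test indices never touch the fitting step, so `test_mse` estimates performance on unseen data.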

2. Model Selection

What degree of polynomial should we pick?

Split the data into three sets: training data, cross-validation data, and test data (a 60:20:20 ratio is common). Now we can define three types of error:

Training error
Cross Validation error
Test Data error

Now we pick the model with the lowest cross-validation error. We can then use the test data to estimate the generalization error of the chosen hypothesis.

3. Identifying whether the learning algorithm is suffering from high bias or high variance.

Plot a graph of error vs. degree of polynomial to visualize this, or simply calculate the training error and cross-validation error. In the case of high bias (under-fitting), both the training error and the cross-validation error will be high. If the training error is low but the cross-validation error is high, we have high variance and the model is over-fitting.

If we regularize the parameters, a higher lambda will lead to high bias and a lower lambda will lead to high variance.
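The diagnosis above can be seen numerically by sweeping lambda and comparing the two errors. This sketch uses a closed-form ridge-regularized polynomial fit on synthetic data; the data, the degree-8 feature map, and the lambda values are all assumptions for illustration.

```python
import numpy as np

# Synthetic data: smooth nonlinear target plus noise (assumed for the example).
rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, 60)
y = np.sin(2 * X) + rng.normal(0, 0.2, 60)

idx = rng.permutation(len(X))
tr, cv = idx[:40], idx[40:]

def design(Xs, degree=8):
    # Polynomial feature matrix, including the bias column.
    return np.vander(Xs, degree + 1, increasing=True)

def fit_ridge(Xs, ys, lam):
    # Closed-form regularized least squares: (A'A + lam*I) theta = A'y.
    A = design(Xs)
    I = np.eye(A.shape[1])
    I[0, 0] = 0                      # by convention, don't regularize the intercept
    return np.linalg.solve(A.T @ A + lam * I, A.T @ ys)

def mse(theta, Xs, ys):
    return float(np.mean((design(Xs) @ theta - ys) ** 2))

# (training error, CV error) for a low, moderate, and very high lambda.
results = {}
for lam in (0.0, 1.0, 1e4):
    theta = fit_ridge(X[tr], y[tr], lam)
    results[lam] = (mse(theta, X[tr], y[tr]), mse(theta, X[cv], y[cv]))
```

With lambda huge the fit is nearly flat, so both errors are high (high bias); with lambda near zero the training error drops well below the CV error (toward high variance).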


4. Learning curves

Plot a graph of the errors (training error and cross-validation error) vs. the number of training examples (m). We can conclude the following from the graph:

- If learning algorithm has high bias, adding more training examples will not help.
- If learning algorithm has high variance, adding more training examples will help.
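A learning curve can be computed by training on the first m examples for increasing m and recording both errors at each step. A minimal sketch, with synthetic linear data and the specific m values assumed for illustration:

```python
import numpy as np

# Synthetic linear data with noise (assumed for the example).
rng = np.random.default_rng(3)
X = rng.uniform(0, 5, 120)
y = 3 * X + 2 + rng.normal(0, 0.4, 120)

# Fixed cross-validation set; training subsets of growing size m.
tr, cv = np.arange(0, 90), np.arange(90, 120)

curve = []
for m in (5, 20, 50, 90):
    w, b = np.polyfit(X[tr[:m]], y[tr[:m]], 1)   # train on first m examples
    train_err = float(np.mean((w * X[tr[:m]] + b - y[tr[:m]]) ** 2))
    cv_err = float(np.mean((w * X[cv] + b - y[cv]) ** 2))
    curve.append((m, train_err, cv_err))

for m, te, ce in curve:
    print(m, round(te, 3), round(ce, 3))
```

Plotting `curve` shows the typical shapes: in a high-bias setting both errors converge to a high plateau (more data won't help), while in a high-variance setting a wide gap between the two errors shrinks as m grows (more data helps).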



Summary: here's what we can do to improve our learning algorithm:


1. Get more training examples (Helpful in case of high variance)

2. Try smaller set of features (Helpful in case of high variance)

3. Try getting additional features (Helpful in case of high bias)

4. Try adding polynomial features (Helpful in case of high bias)

5. Try decreasing lambda (Helpful in case of high bias)

6. Try increasing lambda (Helpful in case of high variance)


In case of support vector machines:

A larger C value (small lambda) will result in lower bias and higher variance, while a lower C value will result in higher bias and lower variance.

A larger (sigma^2) makes the features vary more smoothly: higher bias, lower variance. A smaller (sigma^2) makes the features vary less smoothly: lower bias, higher variance.

So in case of overfitting, decrease C and/or increase (sigma^2).
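These knobs map directly onto scikit-learn's `SVC`, where for the RBF (Gaussian) kernel gamma = 1 / (2 * sigma^2), so a large sigma^2 corresponds to a small gamma. The toy dataset and the specific C/gamma values below are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVC

# Toy 2-D dataset with a linear decision boundary (assumed for the example).
rng = np.random.default_rng(4)
X = rng.normal(0, 1, (80, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Large C, large gamma (i.e. small sigma^2): low bias, high variance; risks overfitting.
flexible = SVC(C=100.0, gamma=10.0).fit(X, y)

# Small C, small gamma (i.e. large sigma^2): high bias, low variance; risks underfitting.
smooth = SVC(C=0.1, gamma=0.01).fit(X, y)

print(flexible.score(X, y), smooth.score(X, y))
```

The flexible model fits the training set at least as tightly as the smooth one; whether that flexibility helps or hurts on new data is exactly the bias/variance trade-off above.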



References -

1. https://www.coursera.org/learn/machine-learning
