
September 29, 2015

Comments


I like this article very much, but something is confusing me. Why is the training error getting bigger and the validation error getting smaller as the training set size increases? Shouldn't it be the opposite? Is it possible that there is some reversal in the data, or am I missing a concept?

Seth, the training error is getting bigger because the model is now fitting to the underlying pattern rather than to the random noise in the training set. Essentially, the model is now better at discriminating between the noise and the underlying pattern.

Thanks for the helpful post - I also like learning curves, but have previously struggled to find a useful and standardised way to generate them. (I envy the function in scikit-learn.)

Let's hope that they appear in caret soon!
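
For anyone who wants to roll their own in the meantime, here is a minimal sketch of a learning-curve helper in base R. It assumes a regression problem, uses lm() as the model and RMSE as the error measure, and is not the post's original code; swap in whatever model and metric you prefer.

# Minimal learning-curve sketch: fit on growing subsets of the training
# data and record the error on both the subset and a fixed validation set.
learning_curve <- function(formula, train_data, valid_data,
                           sizes = seq(20, nrow(train_data), length.out = 10)) {
  rmse <- function(obs, pred) sqrt(mean((obs - pred)^2))
  response <- all.vars(formula)[1]
  do.call(rbind, lapply(round(sizes), function(n) {
    sub <- train_data[seq_len(n), , drop = FALSE]
    fit <- lm(formula, data = sub)
    data.frame(n = n,
               train_error = rmse(sub[[response]], predict(fit, sub)),
               valid_error = rmse(valid_data[[response]], predict(fit, valid_data)))
  }))
}

The result is a data frame with one row per training-set size, which plots directly with matplot() or ggplot2.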

Seth:

As Patrick said, the training error goes up with increasing training set size because the model becomes less overfitted. Since this is simulated data, we know what the error from an "optimal" model should be; this is shown by the dotted line representing the amount of random noise added to the data. Any model that seems to make predictions with error rates lower than this is fooling itself. The error rate for predictions made using an overfitted model on a validation set should be higher than optimal (on average - you sometimes get some jitter, depending on how much the validation sample happens to match the biases of the overfitted model).
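
To make that concrete, here is a small simulation sketch (again, not the code from the post): noisy data with a known noise standard deviation, a deliberately flexible model, and the noise level drawn as the "optimal" floor described above. As the training set grows, the training RMSE rises toward the floor and the validation RMSE falls toward it.

set.seed(42)
sigma <- 1                                    # sd of the added noise = optimal RMSE
x <- runif(1000, -2, 2)
y <- x^3 - 2 * x + rnorm(1000, sd = sigma)
valid <- 801:1000                             # fixed validation set

rmse  <- function(obs, pred) sqrt(mean((obs - pred)^2))
sizes <- seq(20, 800, by = 60)
errs  <- t(sapply(sizes, function(n) {
  idx <- seq_len(n)
  # deliberately flexible model so small training sets overfit
  fit <- lm(y ~ poly(x, 10), data = data.frame(x = x[idx], y = y[idx]))
  c(train = rmse(y[idx], predict(fit)),
    valid = rmse(y[valid], predict(fit, data.frame(x = x[valid]))))
}))

matplot(sizes, errs, type = "b", pch = 1,
        xlab = "training set size", ylab = "RMSE")
abline(h = sigma, lty = 2)                    # the noise floor: distrust anything below it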

The comments to this entry are closed.
