Saturday, November 25, 2017

Interpreting ML Models - 1

Data Leakage - when information available during model training will not be available in the deployment setting, so performance measured offline overstates performance in production
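A minimal sketch of leakage on synthetic data: the extra column below is derived from the target itself, so it is available during training but would not exist at prediction time. All data and variable names here are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Leaky feature: computed from the label, so unavailable at deployment
leaky = y + rng.normal(scale=0.01, size=500)
X_leaky = np.column_stack([X, leaky])

print(cross_val_score(LogisticRegression(max_iter=1000), X, y).mean())        # honest score
print(cross_val_score(LogisticRegression(max_iter=1000), X_leaky, y).mean())  # inflated, near 1.0
```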

Feature Importance Algorithms - describe the magnitude of a model's dependence on each input feature
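A generic sketch of one such algorithm, permutation importance: shuffle one column at a time and measure how much the model's held-out score drops; a larger drop means stronger dependence on that feature. The dataset and model below are placeholders, and this is the generic recipe rather than any particular library's implementation.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
baseline = model.score(X_te, y_te)

rng = np.random.default_rng(0)
for j in range(X_te.shape[1]):
    X_perm = X_te.copy()
    # Break feature j's relationship to the target, leave the rest intact
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    drop = baseline - model.score(X_perm, y_te)
    print(f"feature {j}: importance ~ {drop:.4f}")
```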

FairML can be used to audit a black-box model's dependence on its inputs

1. Seeing Data - glyphs, correlation graphs, 2-D projections, partial dependence plots, residual analysis (see the first sketch after this list)

2. Related Models - interpretable relatives of OLS regression: penalized regression, generalized additive models, quantile regression (see the second sketch below)

3. Understanding Complex Learning Models - surrogate models, Local Interpretable Model-agnostic Explanations (LIME), leave-one-covariate-out (LOCO), Tree Interpreter (see the third sketch below)
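For item 1, a minimal partial dependence sketch, assuming scikit-learn >= 1.0 (for PartialDependenceDisplay) and using the California housing dataset as an illustrative example. The plot shows the average predicted response as one feature varies while the rest of the data distribution is held fixed.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Partial dependence of predicted house value on median income and house age
PartialDependenceDisplay.from_estimator(model, X, ["MedInc", "HouseAge"])
plt.show()
```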
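For item 2, a penalized regression sketch: an L1 (lasso) penalty shrinks uninformative coefficients to exactly zero, so the surviving coefficients are easier to reason about. The diabetes dataset and alpha value are illustrative choices.

```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Lasso
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = make_pipeline(StandardScaler(), Lasso(alpha=1.0)).fit(X, y)

coefs = model.named_steps["lasso"].coef_
for name, c in zip(X.columns, coefs):
    print(f"{name:>6}: {c:8.2f}")   # exact zeros mark features the model ignores
```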
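For item 3, a global surrogate model sketch: fit a shallow decision tree to the black-box model's predictions (not the true labels) and read the tree's rules as an approximate explanation. The models and dataset are placeholders, and the surrogate's fidelity to the black box should always be checked before trusting the rules.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Train the surrogate on the black box's outputs, not the true labels
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print("fidelity:", surrogate.score(X, black_box.predict(X)))
print(export_text(surrogate, feature_names=list(X.columns)))
```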

References & Further Reading:

http://blog.fastforwardlabs.com/2017/03/09/fairml-auditing-black-box-predictive-models.html
https://www.oreilly.com/ideas/ideas-on-interpreting-machine-learning
http://www.nature.com/news/can-we-open-the-black-box-of-ai-1.20731
https://homes.cs.washington.edu/~marcotcr/blog/lime/
