An unverified black box model is the path to failure. Opaqueness leads to distrust. Distrust leads to ignoration. Ignoration leads to rejection.
The DALEX package X-rays any model: it helps to explore and explain its behaviour and to understand how complex models work. The main function explain() creates a wrapper around a predictive model. Wrapped models may then be explored and compared with a collection of local and global explainers that implement recent developments from the area of Interpretable Machine Learning / eXplainable Artificial Intelligence.
The philosophy behind DALEX explanations is described in the Explanatory Model Analysis e-book. The DALEX package is a part of the DrWhy.AI universe.
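As a quick illustration, below is a minimal sketch of this workflow, assuming a plain glm() model trained on the titanic_imputed dataset that ships with DALEX (any other predictive model could be wrapped in the same way):

```r
library("DALEX")

# a simple logistic regression on the titanic_imputed data bundled with DALEX
model <- glm(survived ~ gender + age + fare + class,
             data = titanic_imputed, family = "binomial")

# explain() wraps the model together with validation data and true labels;
# the resulting explainer is the object consumed by the other DALEX functions
explainer <- explain(model,
                     data  = titanic_imputed[, colnames(titanic_imputed) != "survived"],
                     y     = titanic_imputed$survived,
                     label = "Logistic regression")
```

The explainer created this way can then be passed to the local and global explainers mentioned above and in the resources listed below.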
If you work with scikit-learn, keras, H2O, tidymodels, xgboost, mlr, or mlr3 in R, you may be interested in the DALEXtra package, which extends DALEX with easy-to-use explain_*() functions for models created in these libraries.
An additional overview of the dalex Python package is available.
The DALEX R package can be installed from CRAN:
install.packages("DALEX")
The dalex Python package is available on PyPI and conda-forge:
pip install dalex -U
conda install -c conda-forge dalex
Machine learning models are widely used and have various applications in classification and regression tasks. Due to increasing computational power, the availability of new data sources, and new methods, ML models are becoming more and more complex. Models created with techniques like boosting, bagging, or neural networks are true black boxes: it is hard to trace the link between input variables and model outcomes. They are used because of their high performance, but lack of interpretability is one of their weakest sides.
In many applications we need to know, understand, or prove how input variables are used in the model and what impact they have on the final model prediction. DALEX is a set of tools that helps to understand how complex models work.
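As a rough sketch of what this exploration looks like in practice, a wrapped model can be probed with both global and local explainers. The ranger random forest below is only an example black box and assumes the ranger package is installed:

```r
library("DALEX")
library("ranger")  # example black box; any other modelling library would do

# fit a probability forest; the target is converted to a factor for ranger
titanic_rf_data <- titanic_imputed
titanic_rf_data$survived <- as.factor(titanic_rf_data$survived)
rf_model <- ranger(survived ~ ., data = titanic_rf_data, probability = TRUE)

rf_explainer <- explain(rf_model,
                        data  = titanic_imputed[, colnames(titanic_imputed) != "survived"],
                        y     = titanic_imputed$survived,
                        label = "Random forest")

# global explanation: permutation-based variable importance
plot(model_parts(rf_explainer))

# local explanation: variable attributions for a single passenger
passenger <- titanic_imputed[1, ]
plot(predict_parts(rf_explainer, new_observation = passenger))
```

Because explainers share a uniform interface, the same calls work for any other wrapped model, and explanations computed for several explainers can be plotted together for comparison.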
- Introduction to Responsible Machine Learning @ useR! 2021
- DALEX + mlr3 @ BioColl 2021 & @ Open-Forest-Training 2021
- Materials from Explanatory Model Analysis Workshop @ eRum 2020, cheatsheet
- How to use DALEX with: keras, parsnip, caret, mlr, H2O, xgboost
- Compare GBM models created in different languages: gbm and CatBoost in R / gbm in h2o / gbm in Python
- DALEX for fraud detection
- DALEX for teaching
- XAI in the jungle of competing frameworks for machine learning
- Introduction to the dalex package: Titanic: tutorial and examples
- Key features explained: FIFA20: explain default vs tuned model with dalex
- How to use dalex with: xgboost, tensorflow
- More explanations: residuals, shap, lime
- Introduction to the Fairness module in dalex
- Introduction to the Arena: interactive dashboard for model exploration
- Code in the form of a Jupyter notebook
- Changelog: NEWS
- Talk with your model! at useR! 2020
- Talk about DALEX at Complexity Institute / NTU February 2018
- Talk about DALEX at SER / WTU April 2018
- Talk about DALEX at STWUR May 2018 (in Polish)
- Talk about DALEX at BayArea 2018
- Talk about DALEX at PyData Warsaw 2018
If you use DALEX
in R or dalex
in Python, please cite our JMLR papers:
@article{JMLR:v19:18-416,
author = {Przemyslaw Biecek},
title = {DALEX: Explainers for Complex Predictive Models in R},
journal = {Journal of Machine Learning Research},
year = {2018},
volume = {19},
number = {84},
pages = {1-5},
url = {http://jmlr.org/papers/v19/18-416.html}
}
@article{JMLR:v22:20-1473,
author = {Hubert Baniecki and
Wojciech Kretowicz and
Piotr Piatyszek and
Jakub Wisniewski and
Przemyslaw Biecek},
title = {dalex: Responsible Machine Learning
with Interactive Explainability and Fairness in Python},
journal = {Journal of Machine Learning Research},
year = {2021},
volume = {22},
number = {214},
pages = {1-7},
url = {http://jmlr.org/papers/v22/20-1473.html}
}
Seventy-six years ago, Isaac Asimov devised the Three Laws of Robotics: 1) a robot may not injure a human being, 2) a robot must obey the orders given to it by human beings, and 3) a robot must protect its own existence. These laws shape the discussion around the ethics of AI. Today's robots, such as cleaning robots, robotic pets, or autonomous cars, are far from being conscious enough to fall under Asimov's ethics.
Today we are surrounded by complex predictive algorithms used for decision making. Machine learning models are used in health care, politics, education, the judiciary, and many other areas. Black box predictive models have a far larger influence on our lives than physical robots. Yet applications of such models are left unregulated despite many examples of their potential harmfulness. See Weapons of Math Destruction by Cathy O'Neil for an excellent overview of potential problems.
It is clear that we need to control algorithms that may affect us; such control is among our civic rights. Here we propose three requirements that any predictive model should fulfill.
- Prediction's justifications. For every prediction of a model, one should be able to understand which variables affect the prediction and how strongly. Variable attribution to the final prediction.
- Prediction's speculations. For every prediction of a model, one should be able to understand how the prediction would change if the input variables were changed. Hypothesizing about what-if scenarios.
- Prediction's validations. For every prediction of a model, one should be able to verify how strong the evidence is that confirms this particular prediction.
There are two ways to comply with these requirements. One is to use only models that fulfill these conditions by design: white-box models like linear regression or decision trees. In many cases, however, the price for transparency is lower performance. The other way is to use approximated explainers: techniques that find only approximate answers but work for any black box model. Here we present such techniques.
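As a non-authoritative sketch, the three requirements above map naturally onto such approximated explainers in DALEX. The glm model and titanic_imputed data are used only for illustration, and predict_diagnostics() is assumed to be available in the installed DALEX version:

```r
library("DALEX")

model <- glm(survived ~ gender + age + fare + class,
             data = titanic_imputed, family = "binomial")
explainer <- explain(model,
                     data  = titanic_imputed[, colnames(titanic_imputed) != "survived"],
                     y     = titanic_imputed$survived,
                     label = "Logistic regression")

passenger <- titanic_imputed[1, ]

# 1. justification: which variables affect this prediction and how strongly
plot(predict_parts(explainer, new_observation = passenger))

# 2. speculation: how the prediction would change if input variables were changed
plot(predict_profile(explainer, new_observation = passenger))

# 3. validation: how the model behaves for observations similar to this one
plot(predict_diagnostics(explainer, new_observation = passenger))
```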
Work on this package was financially supported by the NCN Opus grant 2016/21/B/ST6/02176 and the NCN Opus grant 2017/27/B/ST6/01307.