The Only Way to make Deep Learning Interpretable is to Have it Explain Itself
Francis Tseng
Info

One of the great biases that machine learning practitioners and statisticians share is that our models and explanations of the world should be parsimonious. We've all bought into Occam's Razor: among competing hypotheses, the one with the fewest assumptions should be selected. But does that mean our machine learning models need to be sparse?
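To make "sparse" concrete: a minimal lasso (L1-regularized regression) sketch in plain NumPy, using iterative soft-thresholding. The L1 penalty drives most weights exactly to zero, which is the kind of parsimony in question here. All the data and numbers below are illustrative, not from the note.

```python
import numpy as np

# Lasso objective: min_w (1/2n)||Xw - y||^2 + lam * ||w||_1
# solved by ISTA: gradient step, then soft-thresholding.
rng = np.random.default_rng(0)
n, d = 100, 20
X = rng.normal(size=(n, d))
true_w = np.zeros(d)
true_w[:3] = [2.0, -3.0, 1.5]           # only 3 of 20 features matter
y = X @ true_w + 0.01 * rng.normal(size=n)

lam = 0.2
L = np.linalg.norm(X, 2) ** 2 / n       # Lipschitz constant of the gradient
step = 1.0 / L
w = np.zeros(d)
for _ in range(2000):
    grad = X.T @ (X @ w - y) / n        # gradient of the squared-error term
    w = w - step * grad
    # soft-threshold: shrink toward zero, clip small weights to exactly zero
    w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)

print(np.count_nonzero(np.abs(w) > 1e-6))  # far fewer than 20 nonzero weights
```

A deep network has no analogue of this step: its millions of weights are all "nonzero," which is exactly why parsimony-as-sparsity sits awkwardly with deep learning interpretability.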

Connections