The Building Blocks of Interpretability
Francis Tseng

With the growing success of neural networks, there is a corresponding need to be able to explain their decisions, whether to build confidence in how they will behave in the real world, to detect model bias, or to satisfy scientific curiosity. In order to do so, we need to both construct deep abstractions and reify (or instantiate) them in rich interfaces.