A claim I encounter a lot in critical literature about machine learning, AI, data, and the algorithmic society is that it is hard to understand how the machine works: it is a black box. I had come to accept this warning without thinking too much about it, until I noticed that I didn't exactly know what it meant.
Is it that the mechanics of some machine learning algorithms and/or applications are fundamentally, theoretically impossible to understand? Or is it rather that the scale and complexity of applications in practice make an accurate understanding unfeasible? Moreover, being a black box seems to be a fairly common trait of computer algorithms, not one limited to statistical algorithms, so why would it be more problematic, if it indeed is, in machine learning?
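The second reading of the metaphor can be made concrete with a toy sketch (the weights below are arbitrary, chosen only for illustration): even in a miniature neural network, every individual arithmetic step is fully inspectable, yet there is no concise human-readable rule that states *why* an input lands on one side of the decision boundary. Opacity here comes from the entanglement of many parameters, not from any theoretical barrier.

```python
import math

# Fixed, arbitrary weights for a two-layer toy network (illustrative only).
W1 = [[0.8, -1.2], [0.5, 0.9]]   # input -> hidden weights
W2 = [1.5, -0.7]                 # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    # Every multiplication and addition below can be traced by hand...
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sigmoid(sum(w * h for w, h in zip(W2, hidden)))

# ...yet the resulting decision rule has no simple symbolic description,
# and a real model has millions of such parameters instead of six.
print(round(predict([1.0, 0.0]), 3))  # → 0.645
```

Scaled up to millions of weights, this is the practical sense in which a model can be transparent at the level of each operation and still opaque as a whole.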
With this collection I try to understand more precisely what the black box metaphor means, and why this opacity can be problematic.
Next up: how to use the FairML toolbox to test a machine learning model.
Also to come: other examples, such as "adversarial" approaches.