Another theme at CMIMI was “explainable AI”, an umbrella term for any capability that helps you understand how a deep network produced its result.
- Prediction basis
- Generative model and test data
Grad-CAM came up as a visualization technique: a “heat map” that lets you see which parts of the input contributed to a specific result, mostly for classification categories. Activation Atlas was not mentioned, though it is one I am partial to; t-SNE did get a shout-out a few times.
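For the curious, here is a minimal sketch of the Grad-CAM idea in PyTorch. The model, layer choice, and random input are illustrative stand-ins of my own, not anything shown at the conference: the heat map comes from weighting the last convolutional feature maps by their average gradients with respect to the predicted class.

```python
# Minimal Grad-CAM sketch (PyTorch). Model, layer, and input are illustrative.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["feat"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["feat"] = grad_out[0].detach()

# Hook the last convolutional block; the choice of layer is a judgment call.
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image
scores = model(x)
class_idx = scores.argmax(dim=1).item()
scores[0, class_idx].backward()  # gradients of the predicted class score

# Weight each feature map by its average gradient, then ReLU and upsample.
weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
```

The normalized `cam` tensor can then be overlaid on the original image to show which regions drove the category prediction.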
I don’t recall any implementations of variational autoencoders, so using the latent space for visualization wasn’t touched upon.
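To make concrete what that would look like, here is a sketch of embedding a VAE’s latent means in 2-D with t-SNE (the one technique that did get mentioned). The tiny architecture and the random stand-in data are mine, purely for illustration.

```python
# Sketch: visualize a VAE's latent space via t-SNE. Architecture and data are illustrative.
import torch
import torch.nn as nn
from sklearn.manifold import TSNE

class TinyVAE(nn.Module):
    def __init__(self, in_dim=784, z_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, z_dim)
        self.logvar = nn.Linear(256, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, in_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        return self.dec(z), mu, logvar

vae = TinyVAE().eval()                  # assume this has been trained
images = torch.randn(512, 784)          # stand-in for real, preprocessed inputs
with torch.no_grad():
    _, mu, _ = vae(images)
coords = TSNE(n_components=2).fit_transform(mu.numpy())  # (512, 2) points to plot
```

Plotting `coords` colored by label gives a quick sanity check of how the latent space organizes the data.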
Mixer for topographic latent space
The mixer paradigm also underscores the generative nature of the VAE. Data generation can be used to illustrate what a network has learned, as mentioned above. It can also be used to produce test data for federated learning and proof-of-interoperability scenarios, as mentioned in previous posts.
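As a sketch of that second use: once a VAE is trained, synthetic test data can be drawn by sampling its standard-normal prior and running the decoder. The names below carry over from my toy VAE sketch above and are hypothetical.

```python
# Sketch: draw synthetic test data from a trained VAE's decoder.
import torch

with torch.no_grad():
    z = torch.randn(16, 8)    # 16 draws from the N(0, I) prior; 8 = z_dim above
    synthetic = vae.dec(z)    # decoded samples, shareable without exposing real data
```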