
On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation

Fig 2

Left: A neural network-shaped classifier during prediction time. w_ij are connection weights; a_i is the activation of neuron i.

Right: The neural network-shaped classifier during layer-wise relevance computation time. R_i^(l) is the relevance of neuron i in layer l, which is to be computed. In order to facilitate the computation of R_i^(l), we introduce messages R_{i←k}^(l,l+1). These messages need to be computed such that the layer-wise relevance in Eq (2) is conserved. The messages are sent from a neuron to its input neurons via the connections used for classification; e.g., neuron 2 is an input neuron for neurons 4, 5, 6, and neuron 3 is an input neuron for neurons 5, 6. Neurons 4, 5, 6 are the inputs for neuron 7.
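The message passing described in the caption can be sketched numerically. The snippet below implements the common epsilon-stabilized LRP rule on a tiny random network (the stabilizer, the network shapes, and the random weights are illustrative assumptions, not the figure's exact example): relevance at each layer is the sum of the messages arriving from the layer above, so the total relevance is approximately conserved from the output back to the input, as Eq (2) requires.

```python
import numpy as np

def lrp_layer(a, W, R_next, eps=1e-6):
    """Distribute relevance R_next (layer l+1) onto the neurons of layer l.

    Message rule (epsilon-stabilized):
        R_{i<-k} = a_i * w_ik / z_k * R_k,   with z_k = sum_i a_i * w_ik.
    Summing the messages over k gives R_i, so sum_i R_i ~= sum_k R_k.
    """
    z = a @ W                       # pre-activations z_k of layer l+1
    z = z + eps * np.sign(z)        # small stabilizer against division by ~0
    s = R_next / z                  # R_k / z_k
    return a * (W @ s)              # R_i = a_i * sum_k w_ik * R_k / z_k

# Tiny network mirroring the figure: 3 inputs -> 3 hidden units -> 1 output.
rng = np.random.default_rng(0)
a0 = rng.random(3)                  # input activations a_1, a_2, a_3
W1 = rng.standard_normal((3, 3))    # weights w_ij, inputs -> hidden (4, 5, 6)
W2 = rng.standard_normal((3, 1))    # weights, hidden -> output (7)

a1 = np.maximum(0.0, a0 @ W1)       # ReLU hidden activations
f_x = a1 @ W2                       # network output f(x)

R_out = f_x                         # relevance at the output layer is f(x)
R_hidden = lrp_layer(a1, W2, R_out)   # messages from neuron 7 to 4, 5, 6
R_input = lrp_layer(a0, W1, R_hidden) # messages from 4, 5, 6 to 1, 2, 3

# Layer-wise conservation: each layer's total relevance stays ~f(x).
print(R_out.sum(), R_hidden.sum(), R_input.sum())
```

Up to the small epsilon perturbation, the three printed sums agree, which is exactly the conservation constraint the messages R_{i←k}^(l,l+1) are constructed to satisfy.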
