AUTHORS: Spagnolo F., Molchanova N., Schaer R., Ocampo-Pineda M., Bach Cuadra M., Melie-Garcia L., Granziera C., Andrearczyk V., Depeursinge A.

International Society for Magnetic Resonance in Medicine (ISMRM), Singapore, May 2024


ABSTRACT

Motivation:

The use of AI in clinical routine is often jeopardized by its lack of transparency. Explainable methods would help both clinicians and developers identify model bias and interpret automatic outputs.

Goal(s):

We propose an explainable method providing insights into the decision process of an MS lesion segmentation network.

Approach:

We adapt SmoothGrad to produce instance-level explanations and apply it to a U-Net whose inputs are FLAIR and MPRAGE images from 10 patients with multiple sclerosis (MS).
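As a rough illustration of the underlying idea, SmoothGrad averages input gradients over several noise-perturbed copies of the input. The sketch below is a minimal, hypothetical implementation: the toy `grad_fn` (a masked sigmoid score with an analytic gradient) stands in for the gradient of a U-Net's summed logits inside one lesion mask, which is one plausible way to obtain instance-level saliency; it is not the authors' exact pipeline.

```python
import numpy as np

def smoothgrad(x, grad_fn, noise_std=0.1, n_samples=25, seed=None):
    """SmoothGrad: average d(score)/d(input) over noisy copies of the input.

    x        : input array (e.g. a FLAIR/MPRAGE patch)
    grad_fn  : callable returning the gradient of a scalar score w.r.t. x;
               for instance-level explanations the score could be the sum
               of the network's logits inside one lesion mask (assumption,
               not the authors' exact formulation)
    """
    rng = np.random.default_rng(seed)
    grads = np.zeros_like(x, dtype=float)
    for _ in range(n_samples):
        noisy = x + rng.normal(0.0, noise_std, size=x.shape)
        grads += grad_fn(noisy)
    return grads / n_samples

# Toy differentiable "model": score(x) = sum over a mask of sigmoid(w * x).
w = 2.0
mask = np.array([[0, 1], [1, 0]], dtype=bool)  # one hypothetical "lesion"

def grad_fn(x):
    s = 1.0 / (1.0 + np.exp(-w * x))  # sigmoid(w * x)
    g = w * s * (1.0 - s)             # analytic per-voxel gradient
    return np.where(mask, g, 0.0)     # restrict saliency to the instance

x = np.array([[0.2, -0.5], [1.0, 0.3]])
saliency = smoothgrad(x, grad_fn, noise_std=0.1, n_samples=50, seed=0)
```

In practice `grad_fn` would be replaced by a backward pass through the trained network (e.g. autograd on the masked logit sum), and the noise standard deviation is typically set relative to the input's intensity range.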

Results:

Our saliency maps provide local-level information on the network's decisions. The U-Net's predictions rely predominantly on lesion voxel intensities in FLAIR and on the amount of perilesional volume.

Impact:

These results shed light on the decision mechanisms of deep learning networks performing semantic segmentation. This new knowledge can be an important step toward facilitating AI integration into clinical practice.

