The Netherlands eScience Center and SURF join forces to develop DIANNA: a standardized open source system that will ‘explain’ the reasoning of Deep Neural Networks.

Despite their high predictive accuracy, the outcomes of Artificial Intelligence (AI) models, usually Deep Neural Networks (DNNs), are difficult to explain. This has earned them a reputation as ‘black boxes’. Explainability, however, is necessary to foster trust and social acceptance. Although various methods to achieve explainable AI exist, they suffer from several drawbacks, and they are used mostly by AI experts rather than the wider scientific community.
Elena Ranguelova, Technology Lead at the eScience Center: “DIANNA stands for ‘Deep Insight And Neural Network Analysis’. The project aims to create an open source software tool that ‘explains’ how DNNs reason. The ‘explanation’ represents knowledge captured by the AI system, visualized as a ‘relevance heatmap’. In this way the visualization itself can become a source of new scientific insight.”

Best explainable AI methods for research

Aligned with the technology strategy of the eScience Center, DIANNA aims to determine the best explainable AI (XAI) methods for use in research. It supports the Open Neural Network eXchange (ONNX) standard and provides new image benchmarks suited for studying the XAI heatmaps. To make DIANNA known and accessible to the wider research community, the Netherlands eScience Center and SURF will provide tutorials and future web demonstrators.
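To give a feel for what a ‘relevance heatmap’ is, below is a minimal sketch of one classic XAI technique, occlusion sensitivity: mask each image patch in turn and record how much the model’s score drops. This is an illustrative stand-in only; DIANNA itself wraps ONNX models and more sophisticated XAI methods, and the function and model names here (`occlusion_heatmap`, `toy_model`) are hypothetical, not part of DIANNA’s API.

```python
import numpy as np

def occlusion_heatmap(model, image, patch=4, baseline=0.0):
    """Relevance heatmap via occlusion sensitivity.

    Slides a `patch` x `patch` window over `image`, replaces it with
    `baseline`, and records how much the scalar score from `model`
    drops. Larger drops mean the region was more relevant.
    """
    h, w = image.shape
    reference_score = model(image)
    heatmap = np.zeros_like(image, dtype=float)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            masked = image.copy()
            masked[y:y + patch, x:x + patch] = baseline
            heatmap[y:y + patch, x:x + patch] = reference_score - model(masked)
    return heatmap

# Toy "model" (hypothetical): scores the mean intensity of the
# top-left quadrant, so relevance should concentrate there.
def toy_model(img):
    return img[:8, :8].mean()

image = np.ones((16, 16))
heatmap = occlusion_heatmap(toy_model, image, patch=4)
```

In this toy setup the heatmap is positive only in the top-left quadrant, exactly the region the model ‘looks at’, which is the kind of insight a relevance heatmap is meant to convey.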

Expertise in XAI

Ranguelova: “Both the Netherlands eScience Center and SURF have extensive expertise in applying AI to research projects in different domains and to different kinds of data. I have wanted to collaborate with SURF’s Open Innovation Lab for a while, and this project forms the perfect opportunity.”

Showcasing the capabilities of DIANNA

The capabilities of DIANNA will be showcased on a radiology case. One of the challenges in planning radiotherapy treatment is how to deal with daily variations in the internal patient anatomy. CT scans synthesized by generative models can potentially be used to simulate these variations. Obtaining explanations about the factors influencing the variations could help medical experts in their decision making.

Damian Podareanu, Lead High Performance Machine Learning at SURF: “It will be interesting to apply XAI to generative models, since these are notoriously difficult to quantify and compare. In the case of medical generative models, specialists most often review the output manually. We are convinced DIANNA will prove useful in that context.”

DIANNA is a so-called SURF Alliance Project, an annual collaboration between the Netherlands eScience Center and SURF. The projects are primarily intended to connect advanced technological expertise within both organizations, based on their respective technology strategies. The software solutions resulting from the projects can potentially be reused to address other research problems in different disciplines.

