The Machine Learning group has developed and maintains several open-source software packages that are widely used in the areas of Explainable AI, Quantum Chemistry, and Representation Learning:
iNNvestigate is an explainable AI implementation for TensorFlow / Keras. In the attempt to better understand neural networks, several methods have been proposed, e.g., Saliency maps, DeConvNet, Guided Backprop, SmoothGrad, Integrated Gradients, LRP, PatternNet, and PatternAttribution. Due to the lack of reference implementations, comparing them is a major effort. This library addresses that by providing a common interface and out-of-the-box implementations for many analysis methods.
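A rough sketch of that common interface, following the usage pattern from the iNNvestigate documentation (analyzer names such as "lrp.epsilon" and the exact helper functions may differ between library versions; the VGG16 model and random input are stand-ins):

```python
import numpy as np
import tensorflow as tf
import innvestigate

tf.compat.v1.disable_eager_execution()  # iNNvestigate runs in graph mode

# Any Keras classifier works; an untrained VGG16 is just a placeholder here.
model = tf.keras.applications.VGG16(weights=None)
model = innvestigate.model_wo_softmax(model)  # analyze pre-softmax scores

# All methods share the same interface: create an analyzer by name...
analyzer = innvestigate.create_analyzer("lrp.epsilon", model)

# ...and apply it to a batch of inputs to get one attribution per input.
x = np.random.rand(1, 224, 224, 3).astype(np.float32)
attribution = analyzer.analyze(x)  # same shape as x
```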
SchNetPack is a toolbox for the development and application of deep neural networks to the prediction of potential energy surfaces and other quantum-chemical properties of molecules and materials. It contains basic building blocks of atomistic neural networks, manages their training and provides simple access to common benchmark datasets. This allows for an easy implementation and evaluation of new models.
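A minimal sketch of how these building blocks compose, in the style of the SchNetPack 1.x API (module and argument names differ in newer releases, so treat this as illustrative rather than definitive):

```python
import schnetpack as spk

# Load the QM9 benchmark dataset (downloaded automatically on first use).
data = spk.datasets.QM9("qm9.db", download=True)

# Building blocks: a SchNet representation and an atom-wise output head
# that sums per-atom contributions into a molecular property (here U0).
representation = spk.representation.SchNet(
    n_atom_basis=64, n_filters=64, n_interactions=3
)
output = spk.atomistic.Atomwise(n_in=64, property=spk.datasets.QM9.U0)

# Combine representation and output head into a trainable model.
model = spk.AtomisticModel(representation, output)
```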
Zennit (Zennit Explains Neural Networks In Torch) is a high-level framework in Python, built on PyTorch, for explaining and exploring neural networks. It is designed for high customizability while providing a standardized solution for applying rule-based attribution methods in research, with a strong focus on Layer-wise Relevance Propagation (LRP).
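In outline (mirroring the introductory example from the Zennit documentation), LRP rules are grouped into composites that are applied to the model only for the duration of an attribution pass:

```python
import torch
from torchvision.models import vgg16
from zennit.composites import EpsilonPlusFlat
from zennit.attribution import Gradient

model = vgg16().eval()  # an untrained VGG16 as a stand-in classifier

# A composite maps layer types to LRP rules (here: Flat for the first
# layer, ZPlus for convolutions, Epsilon for other dense layers).
composite = EpsilonPlusFlat()

data = torch.randn(1, 3, 224, 224, requires_grad=True)
target = torch.eye(1000)[[0]]  # one-hot output selecting class 0

# The attributor registers the rules, runs the modified backward pass,
# and removes its hooks again on exit.
with Gradient(model=model, composite=composite) as attributor:
    output, relevance = attributor(data, target)
```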
thingsvision is a Python package for extracting image representations from many state-of-the-art computer vision models. In a nutshell, you feed thingsvision a directory of images and tell it which neural network (and layer) you are interested in. thingsvision then returns the representation of the indicated neural network for each image, so that you end up with one feature map (a vector or matrix, depending on the layer) per image. You can use these features for further analyses or align them with human object similarity according to the approach presented in this NeurIPS paper. We use the word features for short when we mean "image representation". See the docs for more information about functionalities and available models.
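A sketch of this workflow, following the pattern from the thingsvision docs; the model name, layer, and paths below are placeholder choices:

```python
from thingsvision import get_extractor
from thingsvision.utils.data import ImageDataset, DataLoader

# Any supported model/source pair works; ResNet-50 from torchvision and
# its 'avgpool' layer are just illustrative choices.
extractor = get_extractor(
    model_name="resnet50",
    source="torchvision",
    device="cpu",
    pretrained=True,
)

dataset = ImageDataset(
    root="path/to/images",       # the directory of input images
    out_path="path/to/features", # where extracted features are stored
    backend=extractor.get_backend(),
    transforms=extractor.get_transformations(),
)
batches = DataLoader(dataset, batch_size=32, backend=extractor.get_backend())

# One feature map per image, flattened into a vector here.
features = extractor.extract_features(
    batches, module_name="avgpool", flatten_acts=True
)
```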
The sGDML package is an optimized implementation of the recently proposed symmetric gradient domain machine learning (sGDML) model. The sGDML model is able to faithfully reproduce global potential energy surfaces (PES) for molecules with a few dozen atoms from a limited number of user-provided reference molecular conformations and the associated atomic forces. The Python package allows users to reconstruct and evaluate custom sGDML force fields (FFs) without requiring in-depth knowledge about the details of the model.
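Evaluating a trained force field looks roughly like the following (adapted from the sGDML documentation; the model and geometry files are placeholders):

```python
import numpy as np
from sgdml.predict import GDMLPredict
from sgdml.utils import io

# Load a trained force field, e.g., one produced via the command line:
#   sgdml all ethanol.npz 200 1000 5000
model = np.load("m_ethanol.npz")  # placeholder model file
gdml = GDMLPredict(model)

# r holds flattened Cartesian coordinates, shape (n_geometries, 3 * n_atoms).
r, _ = io.read_xyz("geometries/ethanol.xyz")  # placeholder geometry file
e, f = gdml.predict(r)  # energies and forces per geometry
```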
In pursuit of interpreting black-box models such as deep image classifiers, a number of techniques have been developed that attribute and visualize the importance of input features with respect to the output of a model. ExplainableAI.jl brings several of these methods to the Julia programming language, building on top of primitives from the Flux.jl ecosystem. The package focuses on extensibility and ease of use, allowing users to easily implement their own methods and rules; this makes it a good choice for both researchers and newcomers to XAI.
ViRelAy is a visualization tool for the analysis of data generated by CoRelAy, providing a graphical user interface for Spectral Relevance Analysis. It runs a small web application for viewing and interacting with data representations using clusterings and embeddings (e.g., t-SNE), the corresponding source data, and an auxiliary per-sample representation (i.e., the attribution). With this, ViRelAy aims to simplify the analysis of classifiers and their underlying datasets.
CoRelAy is a tool to swiftly compose small-scale (single-machine) analysis pipelines that generate analysis data, which can then be visualized using ViRelAy. Its main use case is the implementation of Spectral Relevance Analysis.
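A minimal sketch, assuming the pre-composed SpectralClustering pipeline from the CoRelAy quickstart; the random data stands in for per-sample attributions, and each stage of the pipeline can be replaced by custom processors:

```python
import numpy as np
from corelay.pipeline.spectral import SpectralClustering

# A pre-composed pipeline: pairwise distances -> affinity graph ->
# spectral embedding -> clustering.
pipeline = SpectralClustering()

# Stand-in data: 64 flattened per-sample attributions.
data = np.random.normal(size=(64, 32 * 32))
clusters = pipeline(data)
```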
Quantus is an eXplainable AI toolkit designed for the responsible evaluation of neural network explanations. The library includes implementations of 30+ evaluation metrics, grouped into six evaluation categories: explanation faithfulness, robustness, complexity, localisation, randomisation, and axiomatic metrics. Compatible with both PyTorch and TensorFlow models, Quantus is versatile across a range of data domains, such as image, tabular, and time-series data. Quantus offers a user-friendly interface that facilitates large-scale, systematic XAI performance evaluations for researchers. Additionally, it comes with built-in explainability and visualisation capabilities.
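A small sketch of the metric interface (the toy model and random data are stand-ins; the metric and method names follow the Quantus documentation):

```python
import numpy as np
import torch
import quantus

# A toy classifier and random data stand in for a real model and dataset.
model = torch.nn.Sequential(
    torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10)
).eval()
x_batch = np.random.rand(8, 1, 28, 28).astype(np.float32)
y_batch = np.random.randint(0, 10, size=8)

# One metric from the robustness category; every metric is a callable
# with the same (model, x_batch, y_batch, ...) interface.
metric = quantus.MaxSensitivity(nr_samples=10)
scores = metric(
    model=model,
    x_batch=x_batch,
    y_batch=y_batch,
    explain_func=quantus.explain,  # generates attributions on the fly
    explain_func_kwargs={"method": "Saliency"},
    device="cpu",
)
```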