
interpretability

Here are 365 public repositories matching this topic...

Many Class Activation Map methods implemented in PyTorch for CNNs and Vision Transformers, with examples for classification, object detection, segmentation, embedding networks, and more. Methods include Grad-CAM, Grad-CAM++, Score-CAM, Ablation-CAM, and XGrad-CAM (a usage sketch follows this entry).

  • Updated Apr 4, 2022
  • Python
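
This description matches the pytorch-grad-cam package; below is a minimal Grad-CAM sketch, assuming its documented API around the 2022 release (GradCAM, ClassifierOutputTarget, show_cam_on_image) and using a torchvision ResNet-50 with random stand-in inputs.

```python
# Minimal Grad-CAM sketch, assuming the pytorch-grad-cam API
# (class names and signatures as documented around the 2022 release).
import numpy as np
import torch
from torchvision.models import resnet50
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget
from pytorch_grad_cam.utils.image import show_cam_on_image

model = resnet50(pretrained=True).eval()
target_layers = [model.layer4[-1]]          # last conv block of ResNet-50

input_tensor = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image
rgb_img = np.random.rand(224, 224, 3).astype(np.float32)  # stand-in raw image in [0, 1]

cam = GradCAM(model=model, target_layers=target_layers)
targets = [ClassifierOutputTarget(281)]     # ImageNet class 281 ("tabby cat")

grayscale_cam = cam(input_tensor=input_tensor, targets=targets)[0, :]  # HxW heatmap
visualization = show_cam_on_image(rgb_img, grayscale_cam, use_rgb=True)
```

Swapping GradCAM for GradCAMPlusPlus, ScoreCAM, AblationCAM, or XGradCAM follows the same pattern, since the library exposes them behind a shared interface.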
adocherty commented Nov 27, 2019

Description

Currently our unit tests are disorganized: each test creates example StellarGraph graphs in its own, often duplicated, way, with no sharing of this code.

This issue is to improve the unit tests by making the functions that create example graphs available to all tests, for example by defining them as pytest fixtures at the top level of the test suite (see https://docs.pytest.org/en/latest/).
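
As a hedged sketch of that suggestion: a shared fixture in a top-level conftest.py, assuming StellarGraph's documented constructor that takes pandas DataFrames of nodes and edges (the 4-node example graph here is purely illustrative).

```python
# conftest.py at the top level of the tests/ directory.
# Hypothetical shared fixture; assumes StellarGraph(nodes=..., edges=...)
# accepting pandas DataFrames, with an illustrative 4-node chain graph.
import pandas as pd
import pytest
from stellargraph import StellarGraph

@pytest.fixture
def example_graph():
    nodes = pd.DataFrame(
        {"feature": [1.0, 2.0, 3.0, 4.0]}, index=["a", "b", "c", "d"]
    )
    edges = pd.DataFrame(
        {"source": ["a", "b", "c"], "target": ["b", "c", "d"]}
    )
    return StellarGraph(nodes=nodes, edges=edges)

# In any test module (e.g. tests/test_graph.py), the fixture is
# requested by name instead of rebuilding the graph locally:
def test_node_count(example_graph):
    assert example_graph.number_of_nodes() == 4
```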

jklaise commented Sep 28, 2021

The Boston housing dataset, which we use in some examples, has an ethical problem and should be replaced. Read more here: https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_boston.html#sklearn.datasets.load_boston

Impacted examples:

  • cfproto_housing.ipynb
  • ale_regression_boston.ipynb

The above link suggests some similar housing-related alternatives; a replacement sketch follows below.

Labels: Good first issue, Type: Docs
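
As one hedged option among those alternatives: scikit-learn's fetch_california_housing is a drop-in housing-regression substitute, and the linked page names it explicitly (the variable names below are illustrative).

```python
# Sketch of replacing load_boston with a non-problematic alternative;
# fetch_california_housing is one of the substitutes scikit-learn suggests.
from sklearn.datasets import fetch_california_housing

housing = fetch_california_housing()
X, y = housing.data, housing.target    # features and median house value
feature_names = housing.feature_names  # e.g. MedInc, HouseAge, ...
print(X.shape, y.shape)                # (20640, 8) (20640,)
```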

A collection of anomaly detection methods (i.i.d./point-based, graph, and time series), including active learning for anomaly detection/discovery, Bayesian rule mining, and anomaly description for diversity/explanation/interpretability. Also analyzes incorporating label feedback with ensemble and tree-based detectors, and includes adversarial attacks with Graph Convolutional Networks (a stand-in detector sketch follows this entry).

  • Updated Feb 10, 2022
  • Python
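
The tree-based ensemble detectors the collection analyzes are in the Isolation Forest family; as a hedged stand-in (not this repository's own API), the sketch below runs scikit-learn's IsolationForest on synthetic data.

```python
# Stand-in sketch of a tree-based ensemble anomaly detector, using
# scikit-learn's IsolationForest rather than this repository's own code.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(0)
inliers = rng.normal(loc=0.0, scale=1.0, size=(200, 2))   # dense cluster
outliers = rng.uniform(low=-6.0, high=6.0, size=(10, 2))  # scattered anomalies
X = np.vstack([inliers, outliers])

detector = IsolationForest(n_estimators=100, contamination=0.05, random_state=0)
labels = detector.fit_predict(X)    # +1 = inlier, -1 = anomaly
scores = detector.score_samples(X)  # lower scores = more anomalous
print((labels == -1).sum(), "points flagged as anomalies")
```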
