
adversarial-attacks

Here are 402 public repositories matching this topic...

jxmorris12 commented Sep 1, 2020

Output when I specify an attack without a model:

(torch) jxmorris12 12:50 PM > textattack attack --recipe bae
Traceback (most recent call last):
...
  File "/p/qdata/jm8wx/research/text_attacks/textattack/textattack/commands/attack/attack_args_helpers.py", line 343, in parse_model_from_args
    raise ValueError(f"Error: unsupported TextAttack model {args.model}")
ValueError: Error: un
AdvBox

AdvBox is a toolbox for generating adversarial examples that fool neural networks in PaddlePaddle, PyTorch, Caffe2, MxNet, Keras, and TensorFlow, and it can benchmark the robustness of machine learning models. AdvBox provides a command-line tool to generate adversarial examples with zero coding.

  • Updated Jun 8, 2021
  • Jupyter Notebook

A collection of anomaly detection methods (iid/point-based, graph, and time series), including active learning for anomaly detection/discovery, Bayesian rule mining, and descriptions for diversity/explanation/interpretability. Includes an analysis of incorporating label feedback with ensemble and tree-based detectors, as well as adversarial attacks with a Graph Convolutional Network.

  • Updated Jul 2, 2021
  • Python
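As a point of reference for what "iid/point-based" detection means, here is a minimal z-score detector, the simplest member of that family. The function and threshold are hypothetical illustrations, not taken from the repository itself.

```python
import statistics

def zscore_anomalies(values, threshold=3.0):
    """Flag indices whose distance from the mean exceeds `threshold`
    population standard deviations -- basic point-based detection."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    return [i for i, v in enumerate(values)
            if stdev > 0 and abs(v - mean) / stdev > threshold]

data = [10.1, 9.9, 10.0, 10.2, 9.8, 55.0]     # one obvious outlier at index 5
print(zscore_anomalies(data, threshold=2.0))  # → [5]
```

Real collections like the one above replace this statistic with learned detectors (isolation forests, ensembles, GCNs) and add label feedback, but the scoring-and-thresholding structure is the same.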
