Attack runner¶

runner¶

CLI tool for evaluating adversarial attack transferability.

Evaluates the effectiveness of attacks against a surrogate model and optionally measures transfer rates to victim models. Supports all built-in attacks from torchattack against torchvision.models and timm models.
Example CLI usage
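The example itself did not survive extraction; below is a hypothetical invocation of the runner module. The flag names are assumptions that mirror the `run_attack` parameters documented below, and the attack/model names are illustrative — check `python -m torchattack.evaluate.runner --help` for the actual interface.

```shell
# Hypothetical: craft FGSM on a ResNet-50 surrogate and measure
# transfer to two black-box victim models (flag names assumed).
python -m torchattack.evaluate.runner \
    --attack FGSM \
    --model-name resnet50 \
    --victim-model-names vgg11 densenet121
```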
run_attack(attack, attack_args=None, model_name='resnet50', victim_model_names=None, dataset_root='datasets/nips2017', max_samples=100, batch_size=4, save_adv_batch=-1)¶
Helper function to run evaluation of attacks.
Example
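The original example code is missing from this page; the sketch below is a minimal call of `run_attack` based on the signature above. It assumes torchattack is installed, the import path `torchattack.evaluate`, and that `eps` is a valid keyword argument for the chosen attack — adjust to your setup.

```python
from torchattack.evaluate import run_attack  # import path assumed

# Craft FGSM adversarial examples on a white-box ResNet-50 surrogate,
# then measure transfer success rates on two black-box victim models.
run_attack(
    attack='FGSM',
    attack_args={'eps': 8 / 255},  # assumed attack keyword argument
    model_name='resnet50',
    victim_model_names=['vgg11', 'densenet121'],
    dataset_root='datasets/nips2017',
    max_samples=100,
    batch_size=4,
)
```

Running this requires the NIPS 2017 adversarial-competition dataset under `datasets/nips2017`.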
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `attack` | `str \| Type[Attack]` | The name of the attack to run. | *required* |
| `attack_args` | `dict \| None` | A dict of keyword arguments to pass to the attack. Defaults to None. | `None` |
| `model_name` | `str` | The name of the white-box surrogate model to attack. Defaults to "resnet50". | `'resnet50'` |
| `victim_model_names` | `list[str] \| None` | The names of the black-box victim models to attack. Defaults to None. | `None` |
| `dataset_root` | `str` | Root directory of the NIPS2017 dataset. Defaults to "datasets/nips2017". | `'datasets/nips2017'` |
| `max_samples` | `int` | Max number of samples used for the evaluation. Defaults to 100. | `100` |
| `batch_size` | `int` | Batch size for the dataloader. Defaults to 4. | `4` |
| `save_adv_batch` | `int` | Index of the batch of adversarial examples to optionally save for visualization. Set to -1 to disable. Defaults to -1. | `-1` |
Source code in torchattack/evaluate/runner.py