chop.adversary
Adversary utility classes.
Contains classes for generating adversarial examples and evaluating models on adversarial examples.
Classes
    Adversary – Class for generating adversarial examples given a model and data.
class chop.adversary.Adversary(method)
    Class for generating adversarial examples given a model and data.
attack_dataset(loader, model, criterion, step=None, max_iter=20, use_best=False, initializer=None, callback=None, verbose=1, device=None, *optimizer_args, **optimizer_kwargs)
    Returns a generator of (losses, perturbations) over the batches of loader.
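Since attack_dataset returns a generator, attacks run lazily, one batch at a time. Below is a minimal pure-Python sketch of that contract, using hypothetical toy stand-ins (a list-of-tuples loader and a trivial attack_fn) in place of the real torch DataLoader, model, and optimization method:

```python
def attack_dataset_sketch(loader, attack_fn):
    """Yield (loss, perturbation) for each (data, target) batch,
    mirroring the generator contract of Adversary.attack_dataset."""
    for data, target in loader:
        yield attack_fn(data, target)

# Toy "loader": two (data, target) batches of plain numbers.
loader = [([1.0, 2.0], [0, 1]), ([3.0, 4.0], [1, 0])]

# Toy "attack": perturb every point by +0.5; report the total
# perturbation size as the stand-in "loss".
def attack_fn(data, target):
    delta = [0.5 for _ in data]
    loss = sum(abs(d) for d in delta)
    return loss, delta

# Consuming the generator produces one (loss, delta) pair per batch.
results = list(attack_dataset_sketch(loader, attack_fn))
```

In the real class, each yielded pair is a batch of per-sample losses and the corresponding perturbation tensor, so results can be aggregated or inspected batch by batch without holding the whole attacked dataset in memory.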
perturb(data, target, model, criterion, max_iter=20, use_best=False, initializer=None, callback=None, *optimizer_args, **optimizer_kwargs)
    Perturbs the batch of datapoints with true label target, using the specified optimization method.
    Parameters
        - data – torch.Tensor of shape (batch_size, *): batch of datapoints
        - target – torch.Tensor of shape (batch_size,): true labels for the batch
        - model – torch.nn.Module: model to attack
        - criterion – callable: loss function which the attack maximizes over the batch
        - max_iter – int: maximum number of iterations for the optimization method
        - use_best – bool: if True, return the best perturbation found so far; otherwise, return the last perturbation obtained
        - initializer – callable (optional): returns a starting point, typically a random generator on the constraint set; takes shape as its only argument
        - callback – callable (optional): called at each iteration of the optimization method
        - *optimizer_args – tuple: extra arguments for the optimization method
        - **optimizer_kwargs – dict: extra keyword arguments for the optimization method
    Returns
        - adversarial_loss – torch.Tensor of shape (batch_size,): vector of losses obtained on the batch
        - delta – torch.Tensor of shape (batch_size, *): perturbation found
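The use_best flag controls whether perturb returns the best iterate seen so far or simply the last one. The toy 1-D sketch below illustrates that loop with a hypothetical toy_perturb function: random sign steps stand in for the real optimization method, and the initializer/callback hooks follow the same calling pattern as the documented parameters:

```python
import random

def toy_perturb(x, loss_fn, max_iter=20, use_best=False,
                initializer=None, callback=None, step=0.25):
    """Toy 1-D stand-in for Adversary.perturb: take random sign steps
    to increase loss_fn, optionally keeping the best perturbation."""
    # Start from the initializer's point if given, else from zero.
    delta = initializer() if initializer is not None else 0.0
    best_delta, best_loss = delta, loss_fn(x + delta)
    for it in range(max_iter):
        delta += step * random.choice([-1.0, 1.0])  # crude "optimizer" step
        loss = loss_fn(x + delta)
        if loss > best_loss:  # an attack tries to maximize the loss
            best_loss, best_delta = loss, delta
        if callback is not None:
            callback(it, delta, loss)
    # use_best=True returns the best iterate; otherwise the last one.
    out = best_delta if use_best else delta
    return loss_fn(x + out), out

# Usage: attack a concave toy "loss" peaking at z = 3.
random.seed(0)
loss, delta = toy_perturb(1.0, lambda z: -(z - 3.0) ** 2, use_best=True)
```

With use_best=True the returned loss can never be worse than the last iterate's loss for the same random trajectory, which is the practical reason to enable it when the optimization method does not converge monotonically.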