Mohammad Khalooei edited this page Feb 18, 2022 · 3 revisions

Welcome to the Layer Sustainability Analysis wiki!

Overview

Sustainability and vulnerability have many definitions across domains. In our case, the focus is on vulnerabilities that fool deep learning models during feed-forward propagation. One main concern is therefore the analysis of forward vulnerability effects in deep neural networks in the adversarial domain. Analyzing these vulnerabilities helps us better understand how networks behave under input perturbations and, in turn, build more robust and sustainable models.


Table of Contents

  1. Requirements and Installation
  2. Getting Started

Requirements and Installation

📋 Requirements

  • PyTorch version >=1.6.0
  • Python version >=3.6

🔨 Installation

pip install layer-sustainability-analysis

Getting Started

⚠️ Precautions

  • The LSA framework can be applied to any neural network architecture without limitation.
  • Set random_seed = 313 to reproduce the same training procedure. Note that some operations on float tensors are non-deterministic on GPU [discuss].
  • Also set torch.backends.cudnn.deterministic = True to obtain the same adversarial examples with a fixed random seed.
  • LSA uses a hook to capture the representation of each layer of the neural network; thus, you can change its probes (checker positions). Activation functions such as ReLU and ELU are the default probes.
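The reproducibility precautions above can be sketched as a short setup snippet; the seed value 313 comes from the list above, and the rest is standard PyTorch seeding:

```python
import random

import numpy as np
import torch

# Fix all relevant seeds before building the model or crafting adversarial
# examples, as recommended in the precautions above.
random_seed = 313
random.seed(random_seed)
np.random.seed(random_seed)
torch.manual_seed(random_seed)

# Make cuDNN deterministic so the same adversarial examples are produced
# on repeated runs with the same seed.
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
```

Even with these settings, a few GPU float-tensor operations remain non-deterministic, as noted above.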

🚀 Demos

Given a selected_clean_sample and a selected_perturbed_sample, a comparison measure is used in LSA as follows:

from layer_sustainability_analysis import LayerSustainabilityAnalysis as LSA

lsa = LSA(pretrained_model=model)
lst_comparison_measures = lsa.representation_comparison(img_clean=selected_clean_sample,
                                                        img_perturbed=selected_perturbed_sample,
                                                        measure='relative-error')
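The probe mechanism behind LSA can be illustrated with a plain PyTorch forward hook. This is a minimal sketch, not the LSA API: the toy model, the probe_hook function, and the captured dictionary are all illustrative names, and the probe is attached to a ReLU layer to mirror LSA's default probe positions.

```python
import torch
import torch.nn as nn

# Toy model; the ReLU activation plays the role of a default probe position.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

captured = {}

def probe_hook(module, inputs, output):
    # Store a detached copy of the layer's representation.
    captured[type(module).__name__] = output.detach()

# Register the probe on the ReLU layer (index 1 of the Sequential).
handle = model[1].register_forward_hook(probe_hook)

x_clean = torch.randn(1, 4)
_ = model(x_clean)
handle.remove()

# captured["ReLU"] now holds the clean representation; repeating the
# forward pass with a perturbed input yields the pair over which a
# comparison measure (e.g. relative error) can be computed.
```

Running the same forward pass with a perturbed input and comparing the two captured tensors is, conceptually, what representation_comparison does layer by layer.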