Unofficial PyTorch reimplementation of the ICLR 2020 paper "Explanation by Progressive Exaggeration".
$ pip install -r requirements.txt
- Prepare the dataset for training (a rough sketch of the attribute preprocessing follows the notebook path below).
./notebooks/PreprocessData.ipynb
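The full preprocessing lives in the notebook above; the snippet below is only a minimal sketch of the attribute-handling part, assuming the standard CelebA `list_attr_celeba.txt` layout (image count on line 1, the 40 attribute names on line 2, then one row per image with {-1, 1} values). The paths and output file are illustrative placeholders, not the notebook's exact choices.

```python
import pandas as pd

attr_path = "data/CelebA/list_attr_celeba.txt"   # placeholder path

# Line 1: number of images, line 2: the 40 attribute names, rest: data rows.
with open(attr_path) as f:
    f.readline()                       # skip the image count
    attr_names = f.readline().split()  # attribute names, e.g. "Young"

attrs = pd.read_csv(
    attr_path, sep=r"\s+", skiprows=2, header=None,
    names=["image_id"] + attr_names, index_col="image_id",
)
attrs = attrs.replace(-1, 0)           # map {-1, 1} -> {0, 1} binary labels
attrs.to_csv("data/CelebA/attributes_binary.csv")  # illustrative output file
```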
- Train a classifier. Skip this step if you already have a pretrained classifier (a rough sketch of the two classifier variants follows this list).
Training logs of the classifier are saved at: ./$log_dir$/$name$.
Model checkpoints of the classifier are saved at: ./checkpoints/classifier/$name$ ($log_dir$ and $name$ are defined in the corresponding config file).
2.a. To train a multi-label classifier on all 40 attributes:
python train_classifier.py --config 'configs/celebA_DenseNet_Classifier.yaml'
2.b. To train a binary classifier on a single attribute:
python train_classifier.py --config 'configs/celebA_Young_Classifier.yaml'
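The exact model and training loop are defined by the config and train_classifier.py; the sketch below only illustrates the two variants, assuming a torchvision DenseNet backbone with a sigmoid/BCE head (the config names reference DenseNet, but the head sizes and input resolution here are illustrative).

```python
import torch
import torch.nn as nn
import torchvision

def build_classifier(num_attributes: int) -> nn.Module:
    """DenseNet backbone with one logit per attribute (illustrative, not the repo's exact code)."""
    model = torchvision.models.densenet121()
    model.classifier = nn.Linear(model.classifier.in_features, num_attributes)
    return model

# 2.a: multi-label over all 40 CelebA attributes; 2.b: a single attribute (e.g. Young).
model = build_classifier(num_attributes=40)      # or num_attributes=1 for the binary case
criterion = nn.BCEWithLogitsLoss()               # covers both the multi-label and binary setups

images = torch.randn(2, 3, 224, 224)             # input resolution is an assumption
labels = torch.randint(0, 2, (2, 40)).float()
loss = criterion(model(images), labels)
```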
- Process the output of the classifier and create the input for the Explanation model by discretizing the posterior probability (a minimal sketch of the discretization follows below).
The input data for the Explanation model is saved at:
$log_dir$/$name$/explainer_input/
./notebooks/ProcessClassifierOutput.ipynb
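The notebook above does the real work; the sketch below only shows the discretization idea: classifier posteriors in [0, 1] become integer bins that condition the Explanation model. The bin count of 10 is an illustrative choice, not necessarily what the configs use.

```python
import numpy as np

posteriors = np.array([0.03, 0.46, 0.51, 0.97])  # p(attribute | image) from the classifier
num_bins = 10                                     # illustrative bin count

# Map each probability to a bin index in {0, ..., num_bins - 1}.
bins = np.minimum((posteriors * num_bins).astype(int), num_bins - 1)
print(bins)  # [0 4 5 9] -> discrete condition labels for the Explanation model
```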
- Train the explainer model (a sketch of how the output path is composed from the config follows the command). The output is saved at:
$log_dir$/$name$
python train_explainer.py --config 'configs/celebA_Young_Explainer.yaml'
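All output locations above are of the form $log_dir$/$name$. Assuming the config keys are literally named log_dir and name, as the placeholders suggest, the paths can be recomputed like this:

```python
import os
import yaml

with open("configs/celebA_Young_Explainer.yaml") as f:
    cfg = yaml.safe_load(f)

# $log_dir$/$name$ : logs, checkpoints and explainer outputs for this config.
output_dir = os.path.join(cfg["log_dir"], cfg["name"])

# $log_dir$/$name$/explainer_input/ : discretized classifier output from step 3
# (under the corresponding config's log_dir and name).
explainer_input_dir = os.path.join(output_dir, "explainer_input")
print(output_dir, explainer_input_dir)
```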
- Explore the trained Explanation model and see qualitative results.
./notebooks/TestExplainer.ipynb
- Save results of the trained Explanation model for quantitative experiments.
python test_explainer.py --config 'configs/celebA_Young_Explainer.yaml'
- Use the saved results to perform the experiments shown in the paper.
./notebooks/Experiment_CelebA.ipynb