How to obtain the checkpoint in the paper #1
Thank you for your great job! The above error occurred when I reproduced your code. May I ask where I can obtain this checkpoint and the other checkpoints mentioned in your code (such as those in the following picture)?
Hi xixi, thanks for your interest in our work. We have released the pretrained models at https://drive.google.com/drive/folders/1gQjsiCS7DZ8H8KwsTL_I-jTQes2Ha7b9?usp=sharing. You may check them out; the model names are the same as in the code, so you only need to change the model path to your own.
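For example, loading one of the downloaded checkpoints would look roughly like the sketch below. The path and the torchvision ResNet are placeholders only, used so the snippet is self-contained; in practice you would use the model classes defined in this repo.

```python
import torch
import torchvision

# Stand-in model so the sketch runs on its own; substitute the model class
# actually used in the repo (the checkpoint names match the code).
model = torchvision.models.resnet18(num_classes=10)

# Placeholder path: point this at the checkpoint downloaded from the Drive folder.
ckpt_path = "path/to/downloaded_checkpoint.pth"
state = torch.load(ckpt_path, map_location="cpu")

# Some training scripts wrap the weights, e.g. {"state_dict": ...}; unwrap if so.
if isinstance(state, dict) and "state_dict" in state:
    state = state["state_dict"]

model.load_state_dict(state)
model.eval()
```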
Thank you very much for your reply. In addition, could you please tell me the memory and the number of GPUs used in this experiment?
Generally, we train the models on 4 to 8 2080ti/1080ti GPUs with 16 CPU cores. You may scale the number of GPUs down or up by adjusting the batch size (see trainCIFAR10.sh: --batch_size **, --gpus ", *,").
After comparing your paper with SCOOD, I have the following two questions:
Hi, xixi-Lu. 1. We use the in-distribution (ID) dataset (e.g., CIFAR-10) as the training set for the generative model. 2. We did not employ CCR@FPRn in our paper.
Hi, I want to calculate the CCR@FPRn metric (i.e., the correct classification rate at the point where the FPR reaches a value n) for your method, but I can't find where the ID classification accuracy is calculated in your code. Could you please point out where this part is?
Hi, I checked eq. 3 in [1]. Since CCR@FPRn is not directly defined for ours, which is a standalone OOD detector, we have to make some conversions. I think it is fine to calculate the numerator as the number of samples that (1) pass our detector and (2) are correctly classified. Since CCR typically reflects the performance of the classifier rather than of our OOD detector, we do not use it for evaluation in the paper. Note that, given the FPRn of our detector, whether one sample can pass or not is fixed depending on the input label. [1] A. R. Dhamija, M. Günther, and T. Boult, "Reducing network agnostophobia," in NeurIPS, 2018.
Yes, the calculation of the numerator you mentioned is right. But I can't understand why "whether one sample can pass or not is fixed depending on the input label". Given FPR = n, we obtain a corresponding threshold on the OOD score, and a sample is viewed as ID if its OOD score is larger than this threshold, so I think "whether one sample can pass or not" depends only on its OOD score, not on its label.
You are correct; sorry, my wording was a little confusing. I was only talking about our method: in MOODCat, given the class label and the image as input, there is a score.
Thank you for your reply. However, I still can't find the code for calculating the ID classification accuracy in "mcoodcat_on_scood.py", "./CIFAR10_masking/testmodel.py", and "./CIFAR100_masking/testmodel.py", which makes it difficult for me to test the CCR@FPRn metric with your method. Could you please tell me where this part is in your code so that I can complete the test?
How should I prepare the training datasets, such as CIFAR and ImageNet? How should they be split?
@LuFan31 Sorry, we did not consider that metric in the evaluation, so there is no code in this repo for CCR@FPRn. As ours is a standalone OOD detector, we only test the ID classification accuracy with the classifier itself, without MOODCat, so this test is not part of the evaluation code in this repo.
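If you want to compute it yourself, a rough sketch of CCR@FPRn under the conversion discussed above might look like the following. The arrays and the threshold rule are assumptions for illustration (they presume a higher score means "more in-distribution"); none of this is code from this repo.

```python
import numpy as np

def ccr_at_fpr(id_scores, id_correct, ood_scores, n=0.05):
    """CCR@FPR_n for a standalone OOD detector (illustrative sketch only).

    id_scores  : detector scores on ID test samples (higher = more ID)
    id_correct : boolean array, True where the classifier's prediction is correct
    ood_scores : detector scores on OOD test samples
    n          : target false positive rate
    """
    # FPR is the fraction of OOD samples scored above the threshold,
    # so the threshold is the (1 - n)-quantile of the OOD scores.
    threshold = np.quantile(ood_scores, 1.0 - n)
    # Numerator: ID samples that pass the detector AND are correctly classified.
    passed_and_correct = np.logical_and(id_scores >= threshold, id_correct)
    return passed_and_correct.sum() / len(id_scores)

# Placeholder data just to show the call signature.
rng = np.random.default_rng(0)
print(ccr_at_fpr(rng.normal(1.0, 1.0, 1000), rng.random(1000) > 0.1, rng.normal(0.0, 1.0, 1000)))
```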
@Buren-joker There is an official split for CIFAR-10 and CIFAR-100; we follow it via the torchvision dataloader (see the minimal sketch below). For testing, we follow SCOOD. I'm closing this issue; if you have any follow-ups, please continue in #2.
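For reference, the official split would be obtained from torchvision roughly like this (a minimal sketch; the transforms used in our training scripts differ):

```python
from torchvision import datasets, transforms

# The official CIFAR split comes directly from torchvision: train=True/False
# selects the standard 50k/10k train/test partition. ToTensor is a placeholder
# transform, not the augmentation pipeline used in the training scripts.
transform = transforms.ToTensor()
train_set = datasets.CIFAR10(root="./data", train=True, download=True, transform=transform)
test_set = datasets.CIFAR10(root="./data", train=False, download=True, transform=transform)

# CIFAR-100 is analogous:
# datasets.CIFAR100(root="./data", train=True, download=True, transform=transform)
```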