
Question about MedAL #2

Open
MST9998 opened this issue May 23, 2022 · 0 comments

Comments


MST9998 commented May 23, 2022

Hi, I am currently working on an active learning project, and I read your paper on combining active learning with medical imaging. It is really interesting, and the datasets you chose for the experiments are perfect!

I am therefore trying to reproduce the results from the paper MedAL: Accurate and Robust Deep Active Learning for Medical Image Analysis. I believe I used the same model (Inception V3), the same data (Messidor), the same data processing (the data augmentation from your O-MedAL paper), the same baseline (entropy-based method), and the same hyperparameters as you, but my results are not as stable as yours and are also much higher. The curves in your paper look really good, with performance increasing smoothly as the labeled data size grows. I am not sure whether I did something wrong or missed some details. Is this a normal phenomenon?

Here are my settings: Adam optimizer with lr=0.0002 and weight decay=0; batch size 16; retraining continues until the training accuracy reaches 1.0. Is there anything wrong with these settings?
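To be concrete about the stopping criterion, here is a minimal sketch of the retraining loop I described ("retrain until training accuracy reaches 1.0"). The helper names `train_one_epoch` and `eval_train_accuracy` are hypothetical stand-ins for the real Inception V3 training and evaluation steps (which I run with Adam in PyTorch); this is just to confirm I understood the criterion correctly:

```python
def train_until_memorized(train_one_epoch, eval_train_accuracy, max_epochs=200):
    """Retrain until training accuracy reaches 1.0, or give up after max_epochs.

    train_one_epoch:      callable that runs one epoch of training (hypothetical).
    eval_train_accuracy:  callable returning accuracy on the labeled training set.
    """
    for epoch in range(1, max_epochs + 1):
        train_one_epoch()
        acc = eval_train_accuracy()
        if acc >= 1.0:
            # Stop: the model fits the current labeled set perfectly.
            return epoch, acc
    return max_epochs, acc


# Toy usage with a fake model whose training accuracy rises 0.25 per epoch.
state = {"acc": 0.0}

def fake_epoch():
    state["acc"] = min(1.0, state["acc"] + 0.25)

epochs, final_acc = train_until_memorized(fake_epoch, lambda: state["acc"])
# With the toy model this stops at epoch 4 with accuracy 1.0.
```

Is this the same stopping rule you used between active-learning rounds?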

Thanks!
