This repository contains the code for implementing the cycle inference process and detecting out-of-distribution data using our Cycle Consistency-based Uncertainty Quantification method. For more details, refer to our paper.
Paper: Cycle Consistency-based Uncertainty Quantification of Neural Networks in Inverse Imaging Problems
UCLA
Submitted to Nature Computational Science
Uncertainty estimation is critical for numerous real-world applications of deep neural networks and draws growing attention from researchers.
Our method employs a unique approach to quantify uncertainty in deep neural networks for inverse problems, using cycle consistency. We construct forward-backward cycles with the physical forward model and the trained neural network, subsequently developing uncertainty estimators via regression analysis of these cycles. This technique offers insights into the relationship between the neural network's uncertainty, inference bias, and robustness. It excels in classifying corrupted and out-of-distribution data, demonstrating its effectiveness with popular image deblurring and super-resolution neural networks.
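The forward-backward cycle described above can be sketched in a few lines. This is a minimal toy illustration, not the repository's implementation: `forward_model` and `network` here are hypothetical stand-ins (2x2 average pooling and nearest-neighbor upsampling) for the physical forward model and the trained network, and the per-cycle RMSE values are the quantities later analyzed by regression.

```python
import numpy as np

def forward_model(x):
    # Hypothetical physical forward model: 2x2 average pooling (downsampling).
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def network(y):
    # Hypothetical stand-in for a trained super-resolution network:
    # nearest-neighbor upsampling back to the original grid.
    return np.repeat(np.repeat(y, 2, axis=0), 2, axis=1)

def cycle_inference(x0, n_cycles):
    """Run forward-backward cycles and record per-cycle RMSE w.r.t. the input."""
    x = x0
    rmse = []
    for _ in range(n_cycles):
        x = network(forward_model(x))  # one forward-backward cycle
        rmse.append(float(np.sqrt(np.mean((x - x0) ** 2))))
    return rmse

rng = np.random.default_rng(0)
errors = cycle_inference(rng.random((8, 8)), n_cycles=5)
```

With a real network in place of the toy upsampler, the shape of this error sequence over cycles is what carries the uncertainty information.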
First, clone our repo:
git clone https://github.com/Jackyy-Li/Cycle-Consistency-based-Uncertainty-Estimator.git
cd Cycle-Consistency-based-Uncertainty-Estimator
For an express installation, we include a requirements.txt file.
Run the following command to install all the dependencies needed:
pip install -r requirements.txt
- The corrupted input detection experiment uses the GoPro Dataset.
- The out-of-distribution detection experiment uses the Anime Names and Images dataset, the Flickr-Faces-HQ dataset, and microscopy datasets (not uploaded due to confidentiality; users can create their own training dataset by injecting noise into pristine images of various object classes).
The out-of-distribution detection task uses an average pooling algorithm as the physical forward model and the Real-ESRGAN network as the trained neural network.
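The average pooling forward model simply reduces each k-by-k block of the high-resolution image to one low-resolution pixel. A minimal sketch (the function name and pooling factor here are illustrative, not the repo's API):

```python
import numpy as np

def average_pool(x, k=4):
    # Average pooling as the physical forward model: each k-by-k block of the
    # high-resolution image is averaged into one low-resolution pixel.
    h, w = x.shape
    return x.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

hr = np.arange(64, dtype=float).reshape(8, 8)
lr = average_pool(hr, k=4)  # 8x8 -> 2x2
```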
The following guidance shows the process for the out-of-distribution experiment:
cd ESRGAN_CCUQ
Use inference_cycle.py to implement cycle inference process.
The following command executes the cycle inference process with cycle number = 5 on the 'input' directory and saves the output images and the .mat file needed to derive the uncertainty estimators to the 'output' directory, using GPU device 0:
python inference_cycle.py -i ".\input\demo" -o ".\output\demo" -cn 5 -g 0
To run on the CPU instead, use:
python inference_cycle.py -i ".\input\demo" -o ".\output\demo" -cn 5 --fp32
We use cycle number = 20 as the default. Quicker results can be obtained with a smaller cycle number.
The following command will then use uncertainty_quant_alt.py to derive the uncertainty estimators:
python uncertainty_quant_alt.py
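The regression analysis behind the estimators can be sketched as follows. This is a hedged toy example, not the script's actual code: the error values are made up, and fitting a line to the per-cycle reconstruction error, then using its slope and intercept as uncertainty estimators, is the general idea.

```python
import numpy as np

# Toy sketch: fit a line to the per-cycle reconstruction error saved by the
# cycle inference step, and use the fitted slope and intercept as uncertainty
# estimators (in-distribution inputs tend to give a flatter, lower curve).
cycle_idx = np.arange(1, 6)                            # cycle numbers 1..5
cycle_rmse = np.array([0.08, 0.10, 0.11, 0.13, 0.14])  # illustrative values

slope, intercept = np.polyfit(cycle_idx, cycle_rmse, deg=1)
```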
"OOD_detection.ipynb" contains the code to perform out-of-distribution detection on the result. The presentation shows the RMSE heatmap for the supervised ResNet-50 baseline model result. Please change the code (directory of your result paths)accordingly to generate your own result.
The corrupted input detection experiment uses motion blur kernels as the physical forward model and the DeepRFT model as the trained neural network. Before running the following scripts, please download the GoPro dataset from the link above and follow the procedures below:
cd DeepRFT_CCUQ
The blur kernels used in the experiment are included in the blur_kernel folder.
The following command blurs sharp images using randomly chosen motion blur kernels:
python batch_gen_blur_data.py
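Conceptually, applying a motion blur kernel is a 2-D convolution of the sharp image with the kernel. The sketch below is a self-contained toy version (a hand-rolled convolution with a horizontal 1x5 kernel), not the repo's batch_gen_blur_data.py, which picks kernels at random from the blur_kernel folder:

```python
import numpy as np

def motion_blur(img, kernel):
    # Direct 2-D convolution ('same' output size, zero padding).
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel[::-1, ::-1])
    return out

# 1x5 horizontal motion blur kernel, normalized to preserve brightness
kernel = np.ones((1, 5)) / 5.0
sharp = np.random.default_rng(1).random((6, 6))
blurred = motion_blur(sharp, kernel)
```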
Use batch_test_cycle.py to implement cycle inference process.
The following command executes the cycle inference process with cycle number = 5 on the 'input' directory and saves the output images and the .mat file needed to derive the uncertainty estimators to the 'output' directory. Before running it, please set the input and output directories via the input arguments accordingly:
python batch_test_cycle.py
Please change the input directory to your downloaded GoPro dataset accordingly. The default input directory includes only 10 sharp images for demo purposes.
The following command will then use uncertainty_quant_alt.py to derive the uncertainty estimators:
python uncertainty_quant_alt.py
"corrupt_input_detection.ipynb" contains the code to perform corrupted input detection on the result. The presentation shows the detection accuracy of the different noise level. Please change the code (directory of your result paths) accordingly to generate your own result.
If you find our code useful for your research, please cite our paper.
Huang, L., Li, J., Ding, X., Zhang, Y., Chen, H., & Ozcan, A. (2023). Cycle Consistency-based Uncertainty Quantification of Neural Networks in Inverse Imaging Problems. arXiv preprint arXiv:2305.12852. https://doi.org/10.48550/arXiv.2305.12852