@zsc Thank you for sharing your work! According to the paper, you trained with batch size 16 on 8 RTX 2080 Ti GPUs using the PyTorch framework. However, when I try to train your network, GPU memory usage is as high as 8.5 GB even with batch size 1. What could be the problem?