Not a priority, but batch transforms (parallelized on the GPU) would make training marginally faster. Currently, I let PyTorch's DataLoader apply transforms example by example, relying on its own optimizations such as loading the next batch while weights are being updated. Relevant issue here: pytorch/vision#157
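For reference, a minimal sketch of what batch-level GPU transforms could look like, versus per-example transforms inside the DataLoader. The dataset shapes, normalization constants, and the `gpu_batch_transform` helper are illustrative assumptions, not part of this repo's code:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical data: raw uint8 images of shape (N, C, H, W); names are illustrative.
images = torch.randint(0, 256, (1024, 3, 32, 32), dtype=torch.uint8)
labels = torch.randint(0, 10, (1024,))
loader = DataLoader(TensorDataset(images, labels), batch_size=128, shuffle=True)

device = "cuda" if torch.cuda.is_available() else "cpu"
mean = torch.tensor([0.485, 0.456, 0.406], device=device).view(1, 3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225], device=device).view(1, 3, 1, 1)

def gpu_batch_transform(batch):
    """Normalize and randomly flip an entire batch on the GPU in vectorized ops."""
    batch = batch.to(device, non_blocking=True).float() / 255.0
    batch = (batch - mean) / std
    # Flip a random subset of the batch horizontally in one vectorized operation.
    flip_mask = torch.rand(batch.size(0), device=device) < 0.5
    batch[flip_mask] = torch.flip(batch[flip_mask], dims=[-1])
    return batch

for raw_batch, target in loader:
    batch = gpu_batch_transform(raw_batch)
    # ... forward pass, loss, and optimizer step would go here ...
```

The trade-off is that the per-example path keeps transforms overlapped with the training step via the DataLoader's worker processes, while the batch path moves that work onto the GPU itself, which only pays off if the transforms are heavy enough to outweigh the lost overlap.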