Is it possible to optionally use Numpy? Cupy is a horrible mess when it comes to compatibility. #8

Open
n00mkrad opened this issue May 19, 2021 · 12 comments

Comments

@n00mkrad

It's impossible to run this code on any up-to-date Nvidia machine, as Cupy does not support CUDA 11.3.

Redistributing would also be near-impossible as Cupy only works for one specific CUDA version.

@lisiyao21
Owner

CuPy is used in softsplat. I have a substitution, but it will make the code slower. I can update it first so you can run the code successfully, and look for a faster way afterwards...
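
For context, softsplat forward-warps the feature maps along the optical flow with a CUDA kernel that CuPy compiles at runtime. A CuPy-free fallback can be written in plain PyTorch by splatting each source pixel to its warped location with scatter_add_. The sketch below only illustrates that idea (nearest-neighbour splatting with average normalization); it is not the actual AnimeInterpNoCupy code, and it will be slower than the fused kernel:

import torch

def forward_warp_average(src, flow):
    """Forward-warp src (N,C,H,W) by flow (N,2,H,W) using nearest-neighbour
    splatting with average normalization. Pure-PyTorch stand-in for the CuPy
    softsplat kernel; slower and only an approximation of its bilinear /
    softmax splatting."""
    n, c, h, w = src.shape
    device = src.device

    # target coordinates of every source pixel after applying the flow
    gy, gx = torch.meshgrid(torch.arange(h, device=device),
                            torch.arange(w, device=device), indexing='ij')
    tx = (gx[None] + flow[:, 0]).round().long()              # (N,H,W)
    ty = (gy[None] + flow[:, 1]).round().long()

    # only splat pixels that land inside the frame
    valid = ((tx >= 0) & (tx < w) & (ty >= 0) & (ty < h)).reshape(n, 1, -1).float()
    idx = (ty.clamp(0, h - 1) * w + tx.clamp(0, w - 1)).reshape(n, 1, -1)   # (N,1,H*W)

    out = torch.zeros(n, c, h * w, device=device)
    cnt = torch.zeros(n, 1, h * w, device=device)
    out.scatter_add_(2, idx.expand(n, c, -1), src.reshape(n, c, -1) * valid)
    cnt.scatter_add_(2, idx, valid)

    # average all contributions that fall on the same target pixel
    return (out / cnt.clamp(min=1e-8)).reshape(n, c, h, w)

Real softmax splatting additionally weights each contribution by an importance map before normalizing; the sketch above corresponds to the simplest "average" splatting mode.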

@n00mkrad
Author

That would be great, as I'm looking into integrating this into a GUI that should run on any recent CUDA version.

@lisiyao21
Owner

lisiyao21 commented May 19, 2021

Hi, I just updated a version that doesn't use cupy. To use it, please change "model = AnimeInterp" in the config file to "model = AnimeInterpNoCupy" and comment out the content related to AnimeInterp (the import and the __all__ entry) in models/__init__.py.

The substitution seems to affect the results slightly, but not by much.
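
For reference, the whole change on the config side is one line; the file name below is an assumption, edit whichever config file you pass to the test script:

# e.g. configs/config_test_w_sgm.py (file name assumed -- use your own test config)
# model = 'AnimeInterp'        # original; its softsplat module imports cupy
model = 'AnimeInterpNoCupy'    # CuPy-free substitute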

@n00mkrad
Author

Hi, I just updated a version that doesn't use cupy. To use it, please change "model = AnimeInterp" in the config file to "model = AnimeInterpNoCupy" and comment out the content related to AnimeInterp (the import and the __all__ entry) in models/__init__.py.

The substitution seems to affect the results slightly, but not by much.

Thanks for the dedication, but I still get errors as it's trying to import CuPy either way.

Traceback (most recent call last):
  File "D:\Software\Python38\lib\site-packages\cupy\__init__.py", line 16, in <module>
    from cupy import _core  # NOQA
  File "D:\Software\Python38\lib\site-packages\cupy\_core\__init__.py", line 1, in <module>
    from cupy._core import core  # NOQA
  File "cupy\_core\core.pyx", line 1, in init cupy._core.core
  File "D:\Software\Python38\lib\site-packages\cupy\cuda\__init__.py", line 8, in <module>
    from cupy.cuda import compiler  # NOQA
  File "D:\Software\Python38\lib\site-packages\cupy\cuda\compiler.py", line 11, in <module>
    from cupy.cuda import device
  File "cupy\cuda\device.pyx", line 1, in init cupy.cuda.device
ImportError: DLL load failed while importing runtime: Die angegebene Prozedur wurde nicht gefunden. [The specified procedure could not be found.]

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "test_anime_sequence_one_by_one.py", line 1, in <module>
    import models
  File "D:\Code\GitHub\AnimeInterp\models\__init__.py", line 1, in <module>
    from .AnimeInterp import AnimeInterp
  File "D:\Code\GitHub\AnimeInterp\models\AnimeInterp.py", line 9, in <module>
    from .softsplat import ModuleSoftsplat as ForwardWarp
  File "D:\Code\GitHub\AnimeInterp\models\softsplat.py", line 7, in <module>
    import cupy
  File "D:\Software\Python38\lib\site-packages\cupy\__init__.py", line 37, in <module>
    raise ImportError(_msg) from e
ImportError: CuPy is not correctly installed.

@routineLife1

models/__init__.py

# export only the CuPy-free model so that importing models does not pull in cupy
from .AnimeInterp_no_cupy import AnimeInterpNoCupy

__all__ = ['AnimeInterpNoCupy']
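
This works because the test script looks the model class up by the name given in the config, so models/__init__.py only needs to expose whichever class the config names. The relevant line (it also appears in the longer script further down) is:

# test_anime_sequence_one_by_one.py: the class named by config.model is
# resolved on the models package via getattr
model = getattr(models, config.model)(config.pwc_path).cuda()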

@n00mkrad
Author

utils/correlation.py still calls cupy.

@routineLife1

utils/correlation.py still calls cupy.

utils/__init__.py

# only import config here; utils.correlation imports cupy, so it is no longer
# imported at package load time
from utils import config

__all__ = ['config']
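
An alternative to deleting the line outright is to guard the import, so the package still works on machines with a functioning CuPy but degrades gracefully without one; a sketch, assuming correlation is the only CuPy-dependent module imported here:

# utils/__init__.py -- sketch; adjust to whatever your checkout actually imports
from utils import config

try:
    from utils import correlation   # imports cupy at module load time
except ImportError:
    correlation = None              # no working CuPy; the no-cupy model path does not need it

__all__ = ['config', 'correlation']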

@routineLife1

There is no obvious difference between using cupy and not using cupy.
[comparison images: cupy vs. no_cupy]

@n00mkrad
Author

n00mkrad commented May 19, 2021

CuPy no longer throws errors.

However, it seems like the paths are based on Linux?

Traceback (most recent call last):
  File "test_anime_sequence_one_by_one.py", line 216, in <module>
    psnrs, ssims, ies, psnr, ssim, psnr_roi, ssim_roi, psnrs_level, ssims_level, folder = validate(config)
  File "test_anime_sequence_one_by_one.py", line 95, in validate
    for validationIndex, validationData in enumerate(validationloader, 0):
  File "D:\Software\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 517, in __next__
    data = self._next_data()
  File "D:\Software\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 1199, in _next_data
    return self._process_data(data)
  File "D:\Software\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 1225, in _process_data
    data.reraise()
  File "D:\Software\Python38\lib\site-packages\torch\_utils.py", line 429, in reraise
    raise self.exc_type(msg)
PermissionError: Caught PermissionError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "D:\Software\Python38\lib\site-packages\torch\utils\data\_utils\worker.py", line 202, in _worker_loop
    data = fetcher.fetch(index)
  File "D:\Software\Python38\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "D:\Software\Python38\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "E:\AI\AnimeInterp\datas\AniTripletWithSGMFlowTest.py", line 198, in __getitem__
    image = _pil_loader(self.framesPath[index][frameIndex], cropArea=cropArea[frameIndex],  resizeDim=self.resizeSize, frameFlip=randomFrameFlip)
  File "E:\AI\AnimeInterp\datas\AniTripletWithSGMFlowTest.py", line 66, in _pil_loader
    with open(path, 'rb') as f:
PermissionError: [Errno 13] Permission denied: 'E:/AI/AnimeInterp/datasets\\test_2k_540p\\Disney_v4_0_000024_s2'

Seems like it does a double backslash when concatenating paths instead of using a forward slash.
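
As a side note on the path concatenation: Windows generally accepts mixed '/' and '\' separators, and a PermissionError from open(path, 'rb') often indicates the path refers to a directory rather than a file; still, building paths with os.path.join or pathlib keeps the separators consistent if you want to rule that out. A small illustration:

import os
from pathlib import PureWindowsPath

root = 'E:/AI/AnimeInterp/datasets'                 # root taken from the traceback above

mixed = root + '\\' + 'test_2k_540p'                # plain concatenation: 'E:/AI/.../datasets\test_2k_540p'
joined = os.path.join(root, 'test_2k_540p')         # OS separator appended, existing part left as-is
clean = str(PureWindowsPath(root, 'test_2k_540p'))  # normalized: 'E:\AI\AnimeInterp\datasets\test_2k_540p'
print(mixed, joined, clean, sep='\n')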

@routineLife1

routineLife1 commented May 19, 2021

I run this code on Windows. Maybe you can try removing the metric calculations like PSNR in test_anime_sequence_one_by_one.py and only save the visual results to store_path:

# Imports as in the repository's test_anime_sequence_one_by_one.py (module paths
# for Config and flow_to_color are assumed from the repo layout; adjust if needed)
import os
import sys
import argparse

import cv2
import torch
import torch.nn as nn
import torchvision.transforms as TF

import models
import datas
from utils.config import Config
from utils.vis_flow import flow_to_color


def save_flow_to_img(flow, des):
        f = flow[0].data.cpu().numpy().transpose([1, 2, 0])
        fcopy = f.copy()
        fcopy[:, :, 0] = f[:, :, 1]
        fcopy[:, :, 1] = f[:, :, 0]
        cf = flow_to_color(-fcopy)
        cv2.imwrite(des + '.jpg', cf)


def validate(config):   
    # preparing datasets & normalization
    normalize1 = TF.Normalize(config.mean, [1.0, 1.0, 1.0])
    normalize2 = TF.Normalize([0, 0, 0], config.std)
    trans = TF.Compose([TF.ToTensor(), normalize1, normalize2, ])

    revmean = [-x for x in config.mean]
    revstd = [1.0 / x for x in config.std]
    revnormalize1 = TF.Normalize([0.0, 0.0, 0.0], revstd)
    revnormalize2 = TF.Normalize(revmean, [1.0, 1.0, 1.0])
    revNormalize = TF.Compose([revnormalize1, revnormalize2])

    revtrans = TF.Compose([revnormalize1, revnormalize2, TF.ToPILImage()])

    testset = datas.AniTripletWithSGMFlowTest(config.testset_root, config.test_flow_root, trans, config.test_size, config.test_crop_size, train=False)
    sampler = torch.utils.data.SequentialSampler(testset)
    validationloader = torch.utils.data.DataLoader(testset, sampler=sampler, batch_size=1, shuffle=False, num_workers=1)
    to_img = TF.ToPILImage()
 
    print(testset)
    sys.stdout.flush()

    # prepare model
    model = getattr(models, config.model)(config.pwc_path).cuda()
    model = nn.DataParallel(model)
    retImg = []

    # load weights
    dict1 = torch.load(config.checkpoint)
    model.load_state_dict(dict1['model_state_dict'], strict=False)

    # prepare others
    store_path = config.store_path

    folders = []

    print('Everything prepared. Ready to test...')  
    sys.stdout.flush()

    #  start testing...
    with torch.no_grad():
        model.eval()
        ii = 0
        for validationIndex, validationData in enumerate(validationloader, 0):
            print('Testing {}/{}-th group...'.format(validationIndex, len(testset)))
            sys.stdout.flush()
            sample, flow,  index, folder = validationData

            frame0 = None
            frame1 = sample[0]
            frame3 = None
            frame2 = sample[-1]

            folders.append(folder[0][0])
            
            # initial SGM flow
            F12i, F21i  = flow

            F12i = F12i.float().cuda() 
            F21i = F21i.float().cuda()

            ITs = [sample[tt] for tt in range(1, 2)]
            I1 = frame1.cuda()
            I2 = frame2.cuda()
            
            if not os.path.exists(config.store_path + '/' + folder[0][0]):
                os.mkdir(config.store_path + '/' + folder[0][0])


            revtrans(I1.cpu()[0]).save(store_path + '/' + folder[0][0] + '/'  + index[0][0] + '.jpg')
            revtrans(I2.cpu()[0]).save(store_path + '/' + folder[-1][0] + '/' +  index[-1][0] + '.jpg')
            for tt in range(config.inter_frames):
                x = config.inter_frames
                t = 1.0/(x+1) * (tt + 1)
                
                outputs = model(I1, I2, F12i, F21i, t)

                It_warp = outputs[0]

                to_img(revNormalize(It_warp.cpu()[0]).clamp(0.0, 1.0)).save(store_path + '/' + folder[1][0] + '/' + index[1][0] + '.png')
                
                save_flow_to_img(outputs[1].cpu(), store_path + '/' + folder[1][0] + '/' + index[1][0] + '_F12')
                save_flow_to_img(outputs[2].cpu(), store_path + '/' + folder[1][0] + '/' + index[1][0] + '_F21')

if __name__ == "__main__":

    # loading configures
    parser = argparse.ArgumentParser()
    parser.add_argument('config')
    args = parser.parse_args()
    config = Config.from_file(args.config)

    if not os.path.exists(config.store_path):
        os.mkdir(config.store_path)

    validate(config)
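
As a usage note on the timestep loop in the snippet above: with config.inter_frames set to 3, the interpolated timestamps come out evenly spaced between the two input frames:

inter_frames = 3                                   # frames to synthesize between I1 and I2
ts = [1.0 / (inter_frames + 1) * (i + 1) for i in range(inter_frames)]
print(ts)                                          # [0.25, 0.5, 0.75]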

@n00mkrad
Author

I made it work; I had something else wrong on my end.

@routineLife1

routineLife1 commented May 19, 2021

It runs like this:
[screenshot]
