This repository has been archived by the owner on Oct 31, 2023. It is now read-only.

IndexError: index 4 is out of bounds for dimension 1 with size 3 #3

Open
YQX1996 opened this issue Nov 5, 2019 · 16 comments

Comments

@YQX1996 commented Nov 5, 2019

Hi,

Thanks for this amazing library. Can you please help me with this?

When I run the /FashionPlus/separate_vae/encode_features.py file, the following error occurs.

Traceback (most recent call last):
File "...FashionPlus/separate_vae/encode_features.py", line 63, in
label_encodings, num_labels = model.encode_features(Variable(data['input']))
File ".../FashionPlus/separate_vae/models/separate_clothing_encoder_models.py", line 201, in encode_features
zs_encoded[:, count_i*self.opt.nz: (count_i+1)*self.opt.nz] = self.Separate_encoder(real_B_encoded[:,label_i].unsqueeze(1))
IndexError: index 4 is out of bounds for dimension 1 with size 3

Thanks.

@wlhsiao commented Nov 5, 2019

Hi,

Is the index-out-of-bounds error happening on the tensor real_B_encoded? real_B_encoded should have shape (batch_size, 18, 256, 256). 18 is the total number of labels defined by the HumanParsing dataset, which is what our network is trained on (see the README page for the full list of labels). Let me know if the error comes from another tensor.

@YQX1996 commented Nov 15, 2019

Hi,
When I run the file, my real_B_encoded shape is (1, 3, 256, 256), and my segmentation only contains the clothing labels 4, 5, 6, 7, not 18 labels. Why is that?
Thanks.

@wlhsiao commented Nov 15, 2019

Hi,
The 18 labels include face, hair, legs, arms, shoes, etc., as well as the clothing-related labels (4, 5, 6, 7).
In our VAE encoder, the clothing-related labels 4, 5, 6, 7 are encoded separately, while the other labels are encoded together.
The tensor real_B_encoded is constructed by the one_hot_tensor function in separate_clothing_encoder_models.py, and the number of labels is passed in via the pre-specified option (opt.output_nc).

Did you run the script separate_vae/scripts/encode_shape_features_demo.sh to encode features? In that script, output_nc is set to 18. Without explicit specification, the default is output_nc=3.
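To illustrate why the default output_nc=3 triggers the IndexError, here is a minimal sketch of the one-hot construction described above. It is not the repository's exact one_hot_tensor implementation; the helper name one_hot below is mine.

```python
import torch

def one_hot(label_map, output_nc):
    # label_map: (batch, 1, H, W) tensor of integer segmentation labels
    b, _, h, w = label_map.size()
    onehot = torch.zeros(b, output_nc, h, w)
    return onehot.scatter_(1, label_map.long(), 1.0)

labels = torch.randint(0, 18, (1, 1, 256, 256))   # HumanParsing defines 18 labels
real_B_encoded = one_hot(labels, output_nc=18)    # shape (1, 18, 256, 256)
_ = real_B_encoded[:, 4]                          # clothing channel 4: works

too_small = one_hot(labels.clamp(max=2), output_nc=3)  # with the default output_nc=3
# too_small[:, 4]  # IndexError: index 4 is out of bounds for dimension 1 with size 3
```

With output_nc=3 the one-hot tensor only has 3 channels, so indexing channel 4 (the first clothing label) raises exactly the error in the traceback above.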

@YQX1996 commented Nov 20, 2019

Hi,
Thank you for your help, my previous problem has been solved. Now when I run the update_demo.py file, I get the following error.
Traceback (most recent call last):
File "/home/yangqiuxia/FashionPlus/classification/data_dict/shape_and_feature/update_demo.py", line 578, in
part_type_dict, type_part_dict, PART_NUM = set_dataset_parameters(argopt.classname)
File "/home/yangqiuxia/FashionPlus/classification/data_dict/shape_and_feature/update_demo.py", line 156, in set_dataset_parameters
raise NotImplementedError
NotImplementedError

After I commented out lines 155 and 156, the following error occurred.
Traceback (most recent call last):
File "/home/yangqiuxia/FashionPlus/classification/data_dict/shape_and_feature/update_demo.py", line 578, in
part_type_dict, type_part_dict, PART_NUM = set_dataset_parameters(argopt.classname)
TypeError: 'NoneType' object is not iterable

Can you tell me the solution?
Thanks.

@wlhsiao commented Nov 20, 2019

Hi,

What value did you set for the --classname option when you ran update_demo.py?
The raise NotImplementedError should not be commented out unless you have extended the code to support the corresponding functionality.
Since our encoder and decoder are trained with segmentation labels defined in the humanparsing dataset, our code only supports segmentation labels in that format.
If your segmentation labels are in another format, you can check the taxonomy of the humanparsing labels (in the README) and convert your labels into that format so that they are compatible with the code.
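As an illustration of that conversion, here is a hypothetical remapping sketch. The source label IDs, their meanings, and the mapping itself are made up for illustration; substitute the actual HumanParsing IDs from the README.

```python
import numpy as np

# Hypothetical mapping from your own label IDs to HumanParsing label IDs.
# The left-hand IDs are invented; the right-hand targets should come from
# the label list in the README.
MY_TO_HUMANPARSING = {
    0: 0,   # background             -> background
    1: 4,   # a clothing label       -> HumanParsing clothing label 4
    2: 5,   # another clothing label -> HumanParsing clothing label 5
    3: 6,   # another clothing label -> HumanParsing clothing label 6
    4: 7,   # another clothing label -> HumanParsing clothing label 7
}

def remap_labels(seg):
    """seg: (H, W) array of your label IDs -> (H, W) array of HumanParsing IDs."""
    out = np.zeros_like(seg)
    for src, dst in MY_TO_HUMANPARSING.items():
        out[seg == src] = dst
    return out
```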

@YQX1996 commented Nov 21, 2019

Hi,
Thank you for your guidance. I have solved the previous problem, but now there is a new problem.
MLP(
(model): Sequential(
(0): Linear(in_features=12, out_features=4, bias=True)
(1): ReLU()
(2): Linear(in_features=4, out_features=2, bias=True)
)
)
3
Traceback (most recent call last):
File "/home/yangqiuxia/FashionPlus/classification/data_dict/shape_and_feature/update_demo.py", line 627, in
part_idx, mode='shape_and_texture')
File "/home/yangqiuxia/FashionPlus/classification/data_dict/shape_and_feature/update_demo.py", line 126, in overwrite_feature
(partID+1) * (self.texture_feat_num + self.shape_feat_num)] = target_feature
ValueError: could not broadcast input array from shape (11) into shape (16)
Can you tell me the solution?
Thanks.

@wlhsiao commented Nov 21, 2019

Hi,

Have you tried directly running scripts/edit_and_visualize_demo.sh with your input image?
I think this error may come from a wrong feature dimension: the default dimension for the shape feature is 8 and for the texture feature is 3, summing to 11.
Our pre-trained models use texture features with dimension 8, so the sum with the shape feature dimension is 16, which is the dimension our model expects. Explicitly specifying texture_feat_num as 8 is therefore necessary.
The option values compatible with our model are all specified in the bash files in the scripts/ directory.
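For concreteness, here is a rough numpy sketch of the per-part feature layout this implies; the variable names are mine, not the repository's exact code.

```python
import numpy as np

shape_feat_num = 8            # shape code dimension
texture_feat_num = 8          # the pre-trained model expects 8 (the default option is 3)
feat_dim = shape_feat_num + texture_feat_num   # 16 entries reserved per part

outfit = np.zeros(4 * feat_dim)    # concatenated features for 4 clothing parts

partID = 1
new_feature = np.ones(feat_dim)    # a replacement feature must also be 16-dimensional
outfit[partID * feat_dim:(partID + 1) * feat_dim] = new_feature

# A feature built with the default texture_feat_num=3 has length 8 + 3 = 11, and
# assigning it into the 16-wide slot above raises:
# ValueError: could not broadcast input array from shape (11) into shape (16)
```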

@YQX1996 commented Nov 22, 2019

Hi,
When I directly run scripts/edit_and_visualize_demo.sh, I get the following error.
python: can't open file 'update_demo.py': [Errno 2] No such file or directory
I don't know what the problem is, so I ran update_demo.py directly. Now my shape feature dimension is 8 and my texture feature dimension is 8. The program runs without error, but I get "cannot load pretrained model".
Why?
Thanks.

@wlhsiao commented Nov 22, 2019

Hi,

Did you run the scripts/edit_and_visualize_demo.sh in classification/data_dict/shape_and_feature/ or in classification/data_dict/shape_and_feature/scripts/ ?
You may get the "no such file or directory" error by running the script in the wrong directory. It should be run in the classification/data_dict/shape_and_feature/ directory.
The pretrained model likely could not be loaded because you didn't specify a value for the --load_pretrain_clf option. The value to specify is also in the scripts/edit_and_visualize_demo.sh file.

You may either run the bash script from the correct directory, or manually follow the edit_and_visualize_demo.sh script to specify all arguments for update_demo.py.

@YQX1996 commented Dec 5, 2019

Hi,
Could you consider releasing the training code? I really need it.
Thanks.

@YQX1996 commented Dec 11, 2019

Hi,
I re-ran it with pictures from the humanparsing dataset, but the results are very bad. Would it be possible to also release the training code?

image

@wlhsiao commented Dec 18, 2019

Hi,
Training details are in our paper's supplementary file. Specifically, we adopted pix2pixHD's model architecture and training recipe to train our model, so our training code is the same as their train.py. The arguments we passed in to the script are:
```
python ./train.py \
    --dataroot ./datasets/humanparsing \
    --name humanparsing \
    --label_feat \
    --checkpoints_dir <PATH_TO_SAVE_MODEL> \
    --label_dir <PATH_TO_SEGMENTATION_MAPS> \
    --img_dir <PATH_TO_IMG> \
    --resize_or_crop pad_and_resize \
    --loadSize 256 \
    --fineSize 256 \
    --save_epoch_freq 100 \
    --label_nc 18 \
    --output_nc 8 \
    --color_mode Lab \
    --no_style_loss \
    --no_recon_loss \
    --gpu_ids 0,1,2,3,4,5,6,7 \
    --batchSize 8
```

@wlhsiao commented Dec 18, 2019

This output looks incorrect, which is likely due to a wrong input image format and/or a failure to load the pre-trained model weights. Did you get the correct output for the 3 demo examples?

@NachoBosch

I have the same problem with this:

Traceback (most recent call last):
File "train.py", line 105, in
loss, outputs = model(imgs, targets)
File "C:\Users\Nacho\AppData\Roaming\Python\Python35\site-packages\torch\nn\modules\module.py", line 489, in call
result = self.forward(*input, **kwargs)
File "C:\Users\Nacho\Code\deteccion-objetos-video-2\models.py", line 259, in forward
x, layer_loss = module[0](x, targets, img_dim)
File "C:\Users\Nacho\AppData\Roaming\Python\Python35\site-packages\torch\nn\modules\module.py", line 489, in call
result = self.forward(*input, **kwargs)
File "C:\Users\Nacho\Code\deteccion-objetos-video-2\models.py", line 188, in forward
ignore_thres=self.ignore_thres,
File "C:\Users\Nacho\Code\deteccion-objetos-video-2\utils\utils.py", line 315, in build_targets
tcls[b, best_n, gj, gi, target_labels] = 2
RuntimeError: index 15 is out of bounds for dim with size 2

@wlhsiao commented Aug 3, 2020

Hi, is it possible to share which Python script you ran that gave you this error? From the error message, it seems to come from a train.py script, but we don't have a file with that name.

@hanchaoyuan

Hello, have you ever met such a problem?
[screenshot: Snipaste_2021-09-24_22-03-02]
