
How do I actually test the pretrained model? #16

Closed
ghost opened this issue Dec 30, 2019 · 7 comments

Comments

@ghost

ghost commented Dec 30, 2019

Hi Chris,

This might be a relatively trivial question but for some reason I am not able to test any custom .ply file (as well as any file from scannet).

if I run,

python indoor.py --weights ./Mink16UNet34C_ScanNet.pth --conv1_kernel_size 3

I get the following error:

Traceback (most recent call last):
  File "indoor.py", line 106, in <module>
    'scene0635_00.ply', voxel_size=voxel_size)
  File "indoor.py", line 86, in generate_input_sparse_tensor
    coordinates, features = ME.utils.sparse_collate(coordinates_, featrues_)
  File "/home/x23/miniconda3/envs/sts2/lib/python3.6/site-packages/MinkowskiEngine-0.3.2-py3.6-linux-x86_64.egg/MinkowskiEngine/utils/collation.py", line 137, in sparse_collate
    bcoords[s:s + cn, :D] = coord
RuntimeError: expand(torch.IntTensor{[130652, 3, 3]}, size=[130652, 3]): the number of sizes provided (2) must be greater or equal to the number of dimensions in the tensor (3)

To work around the above issue, if I add return_index=True to ME.utils.sparse_quantize, I get this:

Traceback (most recent call last):
  File "indoor.py", line 109, in <module>
    soutput = model(sinput)
  File "/home/x23/miniconda3/envs/sts2/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/x23/workspace_pcs/SpatioTemporalSegmentation/models/res16unet.py", line 197, in forward
    out = self.conv0p1s1(x)
  File "/home/x23/miniconda3/envs/sts2/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/x23/miniconda3/envs/sts2/lib/python3.6/site-packages/MinkowskiEngine-0.3.2-py3.6-linux-x86_64.egg/MinkowskiEngine/MinkowskiConvolution.py", line 272, in forward
    out_coords_key, input.coords_man)
  File "/home/x23/miniconda3/envs/sts2/lib/python3.6/site-packages/MinkowskiEngine-0.3.2-py3.6-linux-x86_64.egg/MinkowskiEngine/MinkowskiConvolution.py", line 65, in forward
    f"Type mismatch input: {input_features.type()} != kernel: {kernel.type()}"
AssertionError: Type mismatch input: torch.cuda.DoubleTensor != kernel: torch.cuda.FloatTensor

What could be the possible cause?

@chrischoy
Owner

chrischoy commented Dec 30, 2019

Hi,

Thanks for reporting the issue. I've made some breaking changes in MinkowskiEngine and didn't update the other codebase. The error is due to the collation function returning the original input data type.

You could download the latest indoor.py from the MinkowskiEngine master, or try:

sinput = SparseTensor(feats=feats.float(), coords=coords) # float() is the only difference in this code
soutput = model(sinput)
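For context (a minimal sketch, not from the repo): the mismatch arises because Open3D point attributes come back as float64 ("double") arrays while the pretrained kernels are float32, and the dtype behavior can be reproduced in plain NumPy:

```python
import numpy as np

# np.array(pcd.colors) comes back as float64 ("double"), but the pretrained
# kernels are float32 -- mixing the two triggers the type-mismatch assertion.
feats = np.array([[0.5, 0.5, 0.5]])   # stand-in for np.array(pcd.colors)
assert feats.dtype == np.float64      # NumPy's default float dtype

feats = feats.astype(np.float32)      # the NumPy equivalent of feats.float()
assert feats.dtype == np.float32      # now matches the kernel dtype
```

Casting once, right before constructing the SparseTensor, is enough; the rest of the pipeline is unchanged.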

@ghost
Author

ghost commented Dec 30, 2019

It ran successfully now. Thanks!

PS: There is a typo in the README:

python -m lib.datasets.prepreocessing.scannet

preprocessing is misspelled!

@ghost ghost closed this as completed Dec 30, 2019
@ghost ghost reopened this Dec 30, 2019
@ghost
Author

ghost commented Dec 30, 2019

[EDIT]
Hi,

One more question regarding testing the pretrained model:

What preprocessing is required if I want to test my custom indoor .PLY file?

When I try a custom .PLY, I get the following error:

Traceback (most recent call last):
  File "indoor.py", line 107, in <module>
    'data/Mesh2.ply', voxel_size=voxel_size)
  File "indoor.py", line 83, in generate_input_sparse_tensor
    batch = [load_file(file_name, voxel_size)]
  File "indoor.py", line 78, in load_file
    return quantized_coords[inds], feats[inds], pcd
IndexError: index 0 is out of bounds for axis 0 with size 0

@chrischoy I figured out the problem: I am trying to get predictions for a plain PLY file, i.e. feats = np.array(pcd.colors) returns []. Is there any way to get predictions for such files?
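A defensive check along these lines (a hypothetical helper, not part of indoor.py) would surface the real problem before the IndexError:

```python
import numpy as np

def checked_features(colors):
    """Return an (N, 3) float32 feature matrix, or fail loudly when the PLY
    carries no colors (pcd.colors is empty for geometry-only files)."""
    feats = np.asarray(colors, dtype=np.float32)
    if feats.size == 0:
        raise ValueError("PLY has no color attributes; the pretrained "
                         "model expects RGB input features")
    return feats

# A colored point passes through unchanged.
assert checked_features([[1.0, 0.0, 0.0]]).shape == (1, 3)
```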

@chrischoy
Owner

The network is trained assuming that there are color input features. If you do not have color channels, you have to train the network from scratch without any features (or with a simple occupancy feature, a vector of ones, as in https://github.com/chrischoy/FCGF/blob/master/lib/data_loaders.py#L215).

@ghost
Author

ghost commented Dec 31, 2019

@chrischoy Alright! Thanks for the clarification.
I passed arrays of zeros and ones instead of colors just to check; the code executes, but the results are off.

@ghost ghost closed this as completed Jan 2, 2020
@ghost ghost reopened this Jan 4, 2020
@ghost
Author

ghost commented Jan 4, 2020

Hi @chrischoy

Apologies for bothering again.

I tried to understand how I can train the existing network from scratch without color (geometry only), but I couldn't get my head around it; maybe a lack of understanding on my side.

Can you point me to how exactly I should go about it? I was thinking of training the network and sharing the model here for other people to use.

@chrischoy
Owner

In the data loader, you can simply feed torch.ones(N, 3) instead of colors.

However, using 3 channels to carry no information is redundant, so feed torch.ones(N, 1) as the feature and make the network take 1 channel as input.
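A sketch of that occupancy feature, written with NumPy in place of torch (the helper name is hypothetical); the network's first convolution would also need in_channels=1 instead of 3:

```python
import numpy as np

def occupancy_features(points):
    """One constant "occupancy" feature per point: an (N, 1) float32 matrix
    of ones, used in place of RGB when the cloud has no colors."""
    return np.ones((points.shape[0], 1), dtype=np.float32)

pts = np.random.rand(100, 3)      # toy colorless point cloud
feats = occupancy_features(pts)   # the torch.ones(N, 1) equivalent
assert feats.shape == (100, 1)
assert feats.dtype == np.float32
```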

@ghost ghost closed this as completed May 27, 2020