Hello everyone,
I'm working on the Hessigheim dataset, where the training set alone contains 128,887,190 points. Will subdividing the training/validation set into smaller patches affect the quality of the learned model? The paper says: "To train our RandLA-Net in parallel, we sample a fixed number of points (∼ 10^5) from each point cloud as the input" - does that mean the gigantic training point cloud is sampled with just 10^5 points? That is not nearly enough.
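For context, the ~10^5 points are drawn per training step, not once for the whole dataset: every step picks a fresh subset, so over many steps and epochs the network effectively sees the entire cloud. Below is a minimal sketch of that idea using plain NumPy uniform sampling (the actual RandLA-Net code samples points around a random spatial center each step, which this simplified version does not reproduce; the function name and parameters are hypothetical):

```python
import numpy as np


def sample_training_subsets(points, labels, num_points=100_000,
                            steps_per_epoch=500, rng=None):
    """Yield fixed-size random subsets of a large point cloud.

    Each step draws a fresh random subset of ``num_points`` points,
    so repeated steps gradually cover the whole cloud even though
    each individual network input is only ~1e5 points.

    Simplified sketch: uniform index sampling instead of the
    center-based spatial sampling used in the RandLA-Net repo.
    """
    rng = rng or np.random.default_rng()
    n = len(points)
    for _ in range(steps_per_epoch):
        # Sample without replacement when the cloud is large enough.
        idx = rng.choice(n, size=num_points, replace=num_points > n)
        yield points[idx], labels[idx]
```

With 128M points and, say, 500 steps of 10^5 points per epoch, a single epoch already touches ~5 x 10^7 point samples, so a handful of epochs statistically covers the full training cloud.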