What is the bug or the crash?
For both binary and ASCII dataset files, the index (position/line) of the values corresponds to the node numbers in the mesh. In most cases nodes are numbered from 1 to maximumId, but this is not mandatory. There may be reasons to remove some nodes from a mesh without renumbering, in order to keep using existing datasets.
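For illustration, a new-format ASCII DAT file for a five-node mesh might look like this (card layout per the SMS ASCII dataset format; values are made up). If node 3 had been removed from the mesh without renumbering, the value at position 3 would simply be unused:

```
DATASET
OBJTYPE "mesh2d"
BEGSCL
ND 5
NC 3
NAME "Bed Elevation"
TS 0 0.0
1.0
2.0
0.0
4.0
5.0
ENDDS
```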
If I use the ASCII format this is no problem in most cases, because the dataset is validated by comparing the value after the ND card with the maximum node ID of the mesh (see mdal_ascii_dat.cpp, MDAL::DriverAsciiDat::loadNewFormat/loadOldFormat):

```cpp
size_t fileNodeCount = toSizeT( items[1] );
size_t meshIdCount = maximumId( mesh ) + 1;

if ( meshIdCount != fileNodeCount )
{
  MDAL::Log::error( MDAL_Status::Err_IncompatibleMesh, name(), "IDs in mesh and nodes in file are not equal" );
  return;
}
```
In the binary format (mdal_binary_dat.cpp, MDAL::DriverBinaryDat::load) this does not work, because the comparison is made between the number of values in the dataset (numdata) and the number of vertices in the mesh (vertexCount):

```cpp
case CT_NUMDATA: // Num data
  if ( read( in, reinterpret_cast< char * >( &numdata ), 4 ) )
    return exit_with_error( MDAL_Status::Err_UnknownFormat, "unable to read num data" );
  if ( numdata != static_cast< int >( vertexCount ) )
    return exit_with_error( MDAL_Status::Err_IncompatibleMesh, "invalid num data" );
  break;
```
I think both dataset formats should use the same criterion, and for more flexibility it should be the first one (the maximum node ID of the mesh).
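A minimal sketch of what the binary driver's check could look like with this criterion; I'm assuming a maximumId( mesh ) helper is usable here the same way it is in the ASCII driver:

```cpp
case CT_NUMDATA: // Num data
  if ( read( in, reinterpret_cast< char * >( &numdata ), 4 ) )
    return exit_with_error( MDAL_Status::Err_UnknownFormat, "unable to read num data" );
  // sketch: compare against the maximum node ID (as the ASCII driver does)
  // instead of vertexCount, so meshes with numbering gaps still load
  if ( numdata != static_cast< int >( maximumId( mesh ) + 1 ) )
    return exit_with_error( MDAL_Status::Err_IncompatibleMesh, "invalid num data" );
  break;
```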
This would solve the problem of gaps in node numbering. But there may be cases where the node with the maximum ID should be removed from the mesh. Loading datasets would then still fail, because the criterion meshIdCount != fileNodeCount would raise an error. So the only really mandatory check should be meshIdCount > fileNodeCount, i.e. the case where the dataset file cannot supply a value for every node ID in the mesh.
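In the ASCII driver, relaxing the check this way would be a small change, sketched here:

```cpp
size_t fileNodeCount = toSizeT( items[1] );
size_t meshIdCount = maximumId( mesh ) + 1;

// sketch: only fail when the file cannot supply a value for every
// node ID in the mesh; extra trailing values are tolerated
if ( meshIdCount > fileNodeCount )
{
  MDAL::Log::error( MDAL_Status::Err_IncompatibleMesh, name(), "mesh references node IDs beyond the values in the file" );
  return;
}
```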
A second barrier in the file format can be the number of elements, if it is stored in the dataset file. Flexibility would increase if this were ignored completely for vertex-based datasets.
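A sketch of how the element count could then be treated for vertex-based data; cardType and items are assumed names here, not necessarily the actual driver variables:

```cpp
else if ( cardType == "NC" )
{
  // sketch: for vertex-based datasets the element count is irrelevant,
  // so read it and skip any comparison against the mesh element count
  size_t fileElemCount = toSizeT( items[1] );
  ( void ) fileElemCount; // intentionally unused
}
```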
I'm aware that this behavior is intended to keep users from combining mismatching data. But a user would recognize a mismatch immediately when viewing the results, and the benefit of these checks seems smaller than the gained flexibility and the ability to create clipped TIN meshes (e.g. for water surface elevation).
Steps to reproduce the issue
Create a 2dm TIN test.2dm from a points vector layer.
Create a binary DAT dataset file binary.dat with the Mesh Calculator, using the expression "Bed Elevation".
Create an ASCII DAT dataset file ascii.dat with the Mesh Calculator, using the expression "Bed Elevation".
Copy test.2dm to clipped.2dm.
Edit clipped.2dm and remove the first node and the first element by deleting the two lines ND 1 ... and E3T 1 ... (see the excerpt after these steps).
You can load clipped.2dm as a mesh layer, but you cannot add the dataset files as a dataset group the way you can with test.2dm.
Remove the line starting with NC in ascii.dat. Now you can load it into both clipped.2dm and test.2dm.
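For illustration, the relevant part of clipped.2dm before editing might look like this (coordinates are made up); the two marked lines are the ones to delete:

```
MESH2DM
E3T 1 1 2 3 1      <- delete (first element)
E3T 2 2 4 3 1
ND 1 0.0 0.0 10.0  <- delete (first node)
ND 2 1.0 0.0 11.0
ND 3 0.0 1.0 12.0
ND 4 1.0 1.0 13.0
```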
Versions
3.22.8
Supported QGIS version
I'm running a supported QGIS version according to the roadmap.
New profile
I tried with a new QGIS profile
Additional context
No response