[WIP] Remove Observable Helper #4540
Conversation
The ObservableHelper::set_dimensions call already takes an offset into the data as a function argument (the lower_bound variable in ObservableHelper). The problem with SpinDensityNew seems to be in SpinDensityNew::registerObservableEstimator: there we register both SpinDensity/u and SpinDensity/d with the hdf5 file and set the dimensions, but the offset is passed in as 0 in both cases, even though data_ is twice the size to account for up and down. That's why it writes the up data twice: both u and d start from index 0 in the underlying data. It seems all of this is fixed by using oh.set_dimensions(ng, s * derived_parameters_.npoints) in that API. Maybe I'm wrong, but if you set the correct starting index through set_dimensions, you don't need to make OperatorEstBase::write virtual, since it is already taken care of.
You're right about that. But I think a better solution now would be to drop ObservableHelper altogether; it's adding zero value here.
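The offset bug described above can be sketched without any HDF5 code. The helper below (hypothetical; `write_block` is just a stand-in for registering and writing one dataset such as SpinDensity/u) copies `npoints` values from the flat data_ buffer starting at a given offset. Registering both species with offset 0 returns the up block twice; passing `s * npoints` as the offset, as the comment suggests for set_dimensions, selects the correct slice for each species:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Stand-in for writing one species' dataset: copy `npoints` values
// starting at `offset` out of the flat data_ buffer. With offset 0 for
// both species, both "datasets" receive the up block.
std::vector<double> write_block(const std::vector<double>& data,
                                std::size_t offset, std::size_t npoints)
{
  return std::vector<double>(data.begin() + offset,
                             data.begin() + offset + npoints);
}
```

With data_ holding {up..., down...} and npoints = 2, `write_block(data, 0 * 2, 2)` and `write_block(data, 1 * 2, 2)` yield the distinct up and down blocks, while passing 0 twice duplicates the first.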
std::vector<QMCT::FullPrecRealType> expanded_data(data_.size(), 0.0);
std::copy_n(data_.begin(), data_.size(), expanded_data.begin());
Can we do the cast during construction of expanded_data?
Suggested change:
std::vector<QMCT::FullPrecRealType> expanded_data(data_.size(), 0.0);
std::copy_n(data_.begin(), data_.size(), expanded_data.begin());
→
std::vector<QMCT::FullPrecRealType> expanded_data(data_.cbegin(), data_.cend());
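The suggestion works because std::vector's iterator-range constructor performs the element-wise conversion itself, so the zero-fill plus std::copy_n is redundant. A minimal sketch, assuming data_ is a vector<float> and FullPrecRealType is double:

```cpp
#include <cassert>
#include <vector>

// Constructing a vector<double> from float iterators converts each
// element float -> double during construction; no separate fill/copy
// step is needed.
std::vector<double> widen(const std::vector<float>& data)
{
  return std::vector<double>(data.cbegin(), data.cend());
}
```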
std::vector<hsize_t> my_dims;
hsize_t rank = dims.size() + 1;
my_dims.resize(rank, 1);
copy(dims.begin(), dims.end(), my_dims.begin() + 1);
Suggested change:
copy(dims.begin(), dims.end(), my_dims.begin() + 1);
→
std::copy(dims.begin(), dims.end(), my_dims.begin() + 1);
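For context, the snippet above builds the dataspace dims with one extra leading dimension, initialized to 1, in front of the caller-supplied dims. A self-contained sketch (using std::size_t as a stand-in for hsize_t):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Prepend a leading dimension of 1 to the supplied dims, mirroring the
// my_dims construction above; std::size_t stands in for hsize_t.
std::vector<std::size_t> with_leading_dim(const std::vector<std::size_t>& dims)
{
  std::vector<std::size_t> my_dims(dims.size() + 1, 1);
  std::copy(dims.begin(), dims.end(), my_dims.begin() + 1);
  return my_dims;
}
```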
Proposed changes
Make OperatorEstBase::write virtual and actually implement overrides for the estimators that do not simply write their data block to hdf5. An integration test is added. My only explanation here is that we had inadequate testing, and there are plenty more "corner" cases like this, totally uncovered, waiting to be found.
This is a draft because, for whatever reason, I have not put together code to successfully use hdfarchive to read a previously written dataset. Anyone who can point out my issue, please do so.
What type(s) of changes does this code introduce?
Delete the items that do not apply
Does this introduce a breaking change?
What systems has this change been tested on?
Mac laptop
Checklist
Update the following with a yes where the items apply. If you're unsure about any of them, don't hesitate to ask. This is
simply a reminder of what we are going to look for before merging your code.