Add PreferSameShardOnResizeWeigher. #316
Conversation
Force-pushed from ebe01f0 to f0c83be
Please rename to "shard": the weigher doesn't allow configuring the aggregate, so the more generic "PreferSameAggregate" doesn't tell the story correctly.
Force-pushed from e0d116c to 89d5b25
Force-pushed from 89d5b25 to 3ed14b8
Done. Tested on qa-de-1.
Otherwise, it looks good to me
Force-pushed from 3ed14b8 to 996f266
There's a test missing that actually tests the "same shard" part of the weigher. Please add an additional host to your tests that's in the same shard as one of the existing hosts. In every test, this host should get the same weight from the weigher as the other host in that shard.
I assume this test will then show a bug in your weigher: the weigher returns 0.0 through the first if whenever the instance is not on the currently-checked host.
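A minimal standalone sketch of the bug the reviewer describes (the function names and signatures are hypothetical, not Nova's actual weigher classes): an early return for every host other than the instance's current one means a second host in the same shard never receives the bonus weight.

```python
def weigh_host_buggy(host_name, host_shard_hosts, instance_host):
    # Buggy shape: the first "if" returns 0.0 for any host that is not
    # the instance's current host, so same-shard siblings never score.
    if host_name != instance_host:
        return 0.0
    return 1.0 if instance_host in host_shard_hosts else 0.0


def weigh_host_fixed(host_shard_hosts, instance_host):
    # Fixed shape: host_shard_hosts is the shard of the host currently
    # being weighed; any host whose shard contains the instance's
    # current host gets the bonus, so same-shard hosts weigh equally.
    return 1.0 if instance_host in host_shard_hosts else 0.0


shard = {"host-a", "host-b"}  # two hosts assumed to share one shard
print(weigh_host_buggy("host-b", shard, "host-a"))  # 0.0 -- the bug
print(weigh_host_fixed(shard, "host-a"))            # 1.0 -- equal weight
```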
                          group='filter_scheduler')
        request_spec = objects.RequestSpec(
            instance_uuid=uuids.fake_new_instance,
            scheduler_hints={'_nova_check_type': ['resize']})
Why does a new instance have _nova_check_type: ['resize'] set?
Just to show that even if a resize is triggered with a new instance, it won't work, since the instance is new and "source_host" is missing.
This is not clear from the code, as a new instance normally wouldn't have resize set. Please add a comment that explains why, so anybody reading the code later can understand it.
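A hedged sketch of the behaviour this test pins down (function and parameter names are illustrative, not Nova's API): even when the scheduler hint claims a resize, a brand-new instance has no source host, so a same-shard weigher has nothing to compare against and must stay neutral.

```python
def resize_same_shard_bonus(scheduler_hints, source_host, shard_hosts):
    """Return the bonus weight for one candidate host's shard."""
    is_resize = scheduler_hints.get("_nova_check_type") == ["resize"]
    if not is_resize or source_host is None:
        # New instance: the hint may say "resize", but with no source
        # host there is no shard to prefer -- the weigher is a no-op.
        return 0.0
    return 1.0 if source_host in shard_hosts else 0.0


# New instance: hint claims a resize, but there is no source host yet.
print(resize_same_shard_bonus({"_nova_check_type": ["resize"]},
                              None, {"host-a"}))  # 0.0
# Real resize: source host sits in the candidate's shard.
print(resize_same_shard_bonus({"_nova_check_type": ["resize"]},
                              "host-a", {"host-a"}))  # 1.0
```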
Force-pushed from 996f266 to db8fe1a
        host_shard_hosts = set(host for host in host_shard_aggr.hosts)
        if instance_host in host_shard_hosts:
Why do we need a set() here? Why do we need the generator instead of directly using host_shard_aggr.hosts? Wouldn't if instance_host in host_shard_aggr.hosts: do the same in just one line with the same readability?
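To illustrate the reviewer's point (using a plain list as a stand-in for host_shard_aggr.hosts, which is an assumption about its type): the in operator works directly on a list, so wrapping it in set(...) for a single membership probe adds work without changing the result. A set() only pays off when the same collection is probed many times.

```python
hosts = ["host-a", "host-b", "host-c"]  # stand-in for host_shard_aggr.hosts
instance_host = "host-b"

# One-line form the reviewer suggests: probe the list directly.
print(instance_host in hosts)       # True

# Original form: builds a throwaway set first, same answer.
print(instance_host in set(hosts))  # True
```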
nova/tests/unit/scheduler/weights/test_weights_resize_same_shard.py (outdated, resolved)
This filter enables selecting the same host-aggregate/shard/VC for instance resize, because migrating the volumes across other shards could take more time.
Force-pushed from db8fe1a to fd4c33b