Cold boot should handle scrimlet sled-agent restarts #4592
Comments
@rcgoodfellow, was #4857 sufficient to solve this, or do we also need the changes from #4822?
We've confirmed we're in good shape here on a …
This can now be closed; both scrimlet reboots and sled-agent restarts have been tested.

- Test 1: reboot without any ongoing orchestration activities
- Test 2: reboot with ongoing orchestration activities
- Test 3: reboot with in-progress VM-to-VM traffic and a guest OS image import
- Test 4: repeat test 3 on scrimlet1
I saw an issue after the scrimlet reboot: #5214. I haven't lined up all the timeline events, but the affected instances were all newly created after the reboot testing. It may be related to the scrimlet cold boot testing; regardless, this ticket can stay closed while we have more specific things to track down in #5214.
During an update of rack 2, we encountered the following.
As sled agents began to launch, there was a bug (introduced by yours truly) that prevented the agents from getting out of early bootstrap. A new field added to the early network config caused a deserialization error that prevented sled agents from fully starting up. To work around this error, we read the persistent early network config file kept by the bootstore in `/pool/int`, added the missing field, and serialized the file back to `/pool/int`. We then restarted `sled-agent`. This caused `sled-agent` to read the updated early network config, which it was now able to parse. We had also bumped the generation number of the config, which caused the bootstore protocol to propagate this new value to all the other `sled-agent`s.
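For context on the deserialization failure, here is a minimal sketch of the pattern, using a hypothetical struct, field name, and JSON encoding rather than the real omicron types: a field added without a serde default cannot be parsed from a config file written by an older release, while marking it `#[serde(default)]` lets the old file parse without the hand-edit described above.

```rust
use serde::{Deserialize, Serialize};

// Hypothetical stand-in for the early network config; not the real omicron type.
#[derive(Debug, Serialize, Deserialize)]
struct EarlyNetworkConfig {
    // Bumping this generation is what made the bootstore propagate the fixed config.
    generation: u64,
    // Newly added field. Without `#[serde(default)]`, deserializing a config
    // file written by the previous release (which lacks this key) fails, which
    // is the kind of error that kept sled-agent stuck in early bootstrap.
    #[serde(default)]
    new_field: Option<String>,
}

fn main() -> Result<(), serde_json::Error> {
    // A config as the older release would have written it: no `new_field` key.
    let old_on_disk = r#"{ "generation": 1 }"#;
    let cfg: EarlyNetworkConfig = serde_json::from_str(old_on_disk)?;
    println!("parsed old config: {cfg:?}");
    Ok(())
}
```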
At this point, things started to move forward again. Sled agents were transitioning from `bootstrap-agent` to `sled-agent`. However, we then hit another roadblock: the switches were not fully initialized. The `sled-agent` we restarted was a scrimlet `sled-agent`, so restarting it took down the switch zone and everything in it. When the switch zone came back up, it came up without any configuration. The `dendrite` service was not listening on the underlay, links had not been configured, addresses had not been configured, etc.

After looking through logs and various different states in the system, we decided to restart the same sled agent again. It got much further this time, with configured links and various other dpd state. However, the system was still not coming up. There was one node in the cluster that had synchronized with an upstream NTP server and had already launched Nexus (presumably in a brief period where the network was fully set up). Other nodes in the cluster had not made any real progress forward, because their NTP zones had not reached synchronization yet. After looking around more, we discovered this was due to missing NAT entries on the switches, along with some missing address entries.
It appears that there were NAT entries created before our scrimlet sled-agent restart, and the act of restarting that `sled-agent` took out the switch zone, clobbering these entries. I believe these entries were created by a different `sled-agent`, one with a boundary NTP zone that needed NAT. So when we restarted the scrimlet `sled-agent`, it had no idea it had missing NAT entries to repopulate. As for the missing address entries, these were uplink addresses. They were present in the `uplink` SMF service properties, but they had not been added to the ASIC via Dendrite as local addresses. Not sure how that happened.

The takeaway here is that we need to be able to handle scrimlet sled-agent restarts during cold boot and keep driving forward toward the system coming back online, not getting stuck in half-configured states.
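One way to read that takeaway is that switch state (NAT entries, uplink addresses, link config) should be something the scrimlet `sled-agent` continually reconciles toward, so a freshly restarted, empty switch zone is just a large diff rather than a dead end. Below is a rough sketch of that shape, using entirely hypothetical types and functions, not the actual sled-agent or Dendrite APIs:

```rust
use std::time::Duration;

// Hypothetical desired switch state; the real set lives in sled-agent/dendrite.
#[derive(Debug, Clone, PartialEq)]
struct SwitchState {
    nat_entries: Vec<String>,
    uplink_addrs: Vec<String>,
}

// Stand-ins for the real APIs; these are assumptions for illustration only.
fn desired_state() -> SwitchState {
    SwitchState { nat_entries: vec![], uplink_addrs: vec![] }
}
fn observed_state() -> Result<SwitchState, String> {
    Ok(SwitchState { nat_entries: vec![], uplink_addrs: vec![] })
}
fn apply(_want: &SwitchState) -> Result<(), String> {
    Ok(())
}

// Keep driving toward the desired state; a restarted switch zone that comes up
// empty is just a large diff, not a terminal condition.
fn reconcile_forever() {
    loop {
        match observed_state() {
            Ok(observed) => {
                let want = desired_state();
                if observed != want {
                    // Re-push whatever is missing (NAT entries, uplink
                    // addresses, link config, ...). Errors are retried on the
                    // next pass rather than wedging the system.
                    if let Err(e) = apply(&want) {
                        eprintln!("apply failed, will retry: {e}");
                    }
                }
            }
            Err(e) => eprintln!("could not read switch state, will retry: {e}"),
        }
        std::thread::sleep(Duration::from_secs(5));
    }
}

fn main() {
    reconcile_forever();
}
```

The important property is idempotence: every pass re-pushes whatever is missing, so a switch zone that comes back empty gets repopulated on the next pass instead of waiting on the `sled-agent` that originally created the state.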