restart customer Instances after sled reboot #3633
One thing that came up in my conversation with our customer onsite is the ability for them to specify the autoboot behavior. If we can expose that as a user-configurable option, we don't have to make the decision for them on whether to bring up an instance when a sled-agent comes back up. Ideally:
A thought related to this different scenario: perhaps we can also mark the instance stopped and reset the active_sled_id and active_propolis_id in CRDB? This is assuming that we wire in the mechanism to pick a sled when starting an instance that has a NULL value in these attributes. Or we'd always blank out the sled/propolis ids as part of the process of stopping instances, as discussed in #2315. To put things in perspective, I'd suggest that you substitute "customer instance" with "buildomat" and imagine how you'd want sled-agent to handle it in the scenarios we've discussed so far in this ticket (i.e. sled reboot, sled gone, instance staying in failed/starting/stopping status). 😅
Some drive-by commentary:
This is right AFAIK--the zones are gone, the instances aren't running, and there's no way to restore them to exactly the state they had when the sled rebooted. They can be cold-booted onto the same sled, but sled agent will need to be told to do this (it won't come back up and realize "oh hey I was running such-and-such instances here" and automatically restart them).
I strongly agree with this--this should be configurable, if not now then in the (relatively) near future. With the caveat that I know basically nothing about Buildomat's internals, I can easily see it being an example of the sort of system where you wouldn't necessarily want a VM to come back up automatically if its sled reboots:

1. The Buildomat scheduler creates a VM.
2. The scheduler sends the agent on the VM a set of commands.
3. The VM's sled reboots.
4. The scheduler decides the job is unresponsive and gives up on it.
5. If the agent is then just sitting there waiting for commands, you've got a zombie VM.

I can imagine enough workloads of this kind (i.e., ones where I don't want the VM to start unless I'm there to tell it to do something) that I feel pretty strongly that this behavior should be configurable.
FWIW there's now a sled agent API (
I think it's probably better not to think of this as "autoboot" per se, as we're asking the user to think of the whole rack (and eventually a fleet of racks) as "the computer". From that perspective, an individual sled rebooting is more like an individual disk or DIMM failing: that the sled "boots up" is an implementation detail. Rather, if we expose a per-instance property, I think it should be more explicitly about what to do after a fault which interrupts the instance. Something like "on_fault", which could have an initial choice of "restart" or "none" or something like that.

Additionally, I think it's important that we not consider customer instances as living on a particular sled. If they're restarted after a fault, they should go back through the regular instance placement process (in Nexus) that occurred when they were initially started, potentially ending up on a different sled.
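The proposed property could be sketched as a small Rust enum. To be clear, `OnFault` and its variants are assumptions drawn from the wording above ("restart" or "none"), not an actual Omicron type:

```rust
// Illustrative per-instance fault policy; names are assumptions, not
// Omicron's actual API.
#[derive(Debug, Clone, Copy, PartialEq)]
enum OnFault {
    // Go back through normal Nexus placement and cold-boot the instance,
    // possibly on a different sled.
    Restart,
    // Leave the instance stopped until the user starts it explicitly
    // (the "zombie VM" avoidance case described above for Buildomat-like
    // workloads).
    None,
}

// What the control plane should do when a fault (e.g. a sled reboot)
// interrupts a running instance.
fn action_after_fault(policy: OnFault) -> &'static str {
    match policy {
        OnFault::Restart => "re-place via Nexus and cold-boot",
        OnFault::None => "mark stopped; wait for an explicit start",
    }
}

fn main() {
    assert_eq!(
        action_after_fault(OnFault::Restart),
        "re-place via Nexus and cold-boot"
    );
    assert_eq!(
        action_after_fault(OnFault::None),
        "mark stopped; wait for an explicit start"
    );
}
```

Note that `Restart` deliberately says nothing about *which* sled: routing through placement keeps the "sleds are an implementation detail" framing intact.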
As an aside, buildomat uses a lot of AWS machines today, and we maintain a local catalogue of instances that we intend to create and have successfully been able to create. On instance boot we also register with the buildomat central API, which, if it occurs again later, we can use as a signal that something has gone terribly wrong with a particular instance. We're also able to detect through listing instances any which are surplus to requirements and clean them up. I think you pretty much have to do all this if you're managing infrastructure in an EC2-like cloud environment and intending not to leak things.
Implemented in #6503.
I haven't verified this but after talking with @smklein we believe that if a sled reboots, any customer Instances that were running on that system will no longer be running (not there, nor anywhere). But the API state will probably reflect that they are still running. It's not clear if there'd be any way to get them running again.
Part of the design here was that the `sled_agent_put()` call from the Sled Agent to Nexus would be an opportunity for Nexus to verify that the expected Instances were still running. In practice, this probably needs to trigger an RFD 373-style RPW that determines what's supposed to be on each sled, what is actually running on each sled, and fixes things appropriately. It might be cleanest to factor that into two RPWs.

There's a related issue here around sleds that have failed more permanently. I'd suggest we treat this as a different kind of thing and not try to automatically detect it using a heartbeat mechanism or something like that. That kind of automation can make things worse. For this case (which really should be rare), we could require that an operator mark the sled as "permanently gone -- remove it from the cluster", after which we mark its Instances failed.