When implementing a typical driver VM, we want the userspace UIO driver to behave like a microkit protection domain, in the sense that a microkit signal to the VMM should wake up the UIO driver's notified handler just as it would wake a microkit protection domain's notified handler. Currently we do this by mapping each microkit signal to the VMM onto a vIRQ injection that emulates an edge-triggered interrupt into the VM.
There are 3 problems:
Linux's uio_pdrv_genirq kernel module disables the interrupt upon receiving one. This means any attempt to "signal" the driver VM can be lost before the userspace driver re-enables the interrupt. I don't see how we can work around this issue other than writing our own custom kernel-side UIO module that doesn't disable interrupts upon receiving one.
Libvmm's VGIC driver does not fully emulate a proper edge-triggered interrupt. The current vIRQ injection path correctly does nothing if the vIRQ is already pending; the problem is that it tracks the pending state of the IRQ incorrectly. The VGIC driver tries to shadow the state of the list registers by updating the shadow during a vIRQ injection and upon a maintenance interrupt triggered by an EOI from the guest VM. It then uses this shadow both to report the state of the interrupt back to the guest and to decide whether to drop a vIRQ injection. But the shadow is not accurate at all times: the hardware moves the true state of a list register from pending to active, then back to invalid after the guest's EOI, and this is not captured. The shadow only ever goes from pending straight back to invalid. To accurately track the state of a list register we would need to actually read it, which we can only do via an seL4 syscall. This syscall does not exist yet, but we should consider adding it.
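To make the divergence concrete, here is a minimal sketch (all names are ours, not libvmm's) contrasting the hardware's list-register state machine with what the shadow can observe. The hardware transitions pending → active when the guest acknowledges the IRQ, and active → invalid on EOI; the shadow only sees the injection and the EOI maintenance interrupt, so between acknowledge and EOI it still reports "pending" while the hardware says "active".

```c
/* Simplified list-register state machine (ignoring pending+active). */
typedef enum { LR_INVALID, LR_PENDING, LR_ACTIVE } lr_state_t;

/* Hardware: guest acknowledge (IAR read) moves pending to active. */
lr_state_t hw_on_guest_ack(lr_state_t s)
{
    return (s == LR_PENDING) ? LR_ACTIVE : s;
}

/* Hardware: guest EOI moves active back to invalid. */
lr_state_t hw_on_guest_eoi(lr_state_t s)
{
    return (s == LR_ACTIVE) ? LR_INVALID : s;
}

/* Shadow: the VGIC only hears about the EOI maintenance interrupt, so
 * it jumps straight from pending to invalid, never observing active. */
lr_state_t shadow_on_eoi_maintenance(lr_state_t s)
{
    (void)s;
    return LR_INVALID;
}
```

In the window after the acknowledge and before the EOI, a new injection is judged against a stale "pending" shadow, which is exactly where the dropped-injection decision goes wrong.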
The VGIC also incorrectly enqueues multiple instances of the same interrupt. This means our VGIC driver must track one more condition to determine whether an IRQ is pending: it must additionally check whether that vIRQ has already been enqueued into the virq_queue, and drop a vIRQ injection request if it's already there.
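The extra check amounts to deduplicating on enqueue. A minimal sketch of the idea (the struct and function names are ours, not libvmm's actual virq_queue API):

```c
#include <stdbool.h>
#include <stddef.h>

#define VIRQ_QUEUE_CAP 64

/* Hypothetical overflow queue of vIRQs waiting for a free list register. */
struct virq_queue {
    int irqs[VIRQ_QUEUE_CAP];
    size_t len;
};

static bool virq_queue_contains(const struct virq_queue *q, int irq)
{
    for (size_t i = 0; i < q->len; i++) {
        if (q->irqs[i] == irq)
            return true;
    }
    return false;
}

/* Enqueue unless the same vIRQ is already queued. Returns false when the
 * injection is dropped (already queued, matching edge-triggered semantics)
 * or the queue is full. */
static bool virq_queue_enqueue_unique(struct virq_queue *q, int irq)
{
    if (virq_queue_contains(q, irq) || q->len == VIRQ_QUEUE_CAP)
        return false;
    q->irqs[q->len++] = irq;
    return true;
}
```

This gives the correct edge-triggered behaviour at the queue level (a second edge while one is already pending collapses into it), but it is a workaround for the enqueue bug rather than a fix.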