Switch back to Counter based SpinLockIRQ #534

Open · kitlith wants to merge 13 commits into master
5 changes: 4 additions & 1 deletion kernel/src/i386/interrupt_service_routines.rs
@@ -567,10 +567,12 @@ macro_rules! generate_trap_gate_handler {

use crate::i386::structures::gdt::SegmentSelector;
use crate::i386::interrupt_service_routines::INSIDE_INTERRUPT_COUNT;
use crate::sync::spin_lock_irq::{disable_interrupts, decrement_lock_count};
use core::sync::atomic::Ordering;

if $interrupt_context {
let _ = INSIDE_INTERRUPT_COUNT.fetch_add(1, Ordering::SeqCst);
disable_interrupts();
}

if let PrivilegeLevel::Ring0 = SegmentSelector(userspace_context.cs as u16).rpl() {
@@ -592,6 +594,7 @@

if $interrupt_context {
let _ = INSIDE_INTERRUPT_COUNT.fetch_sub(1, Ordering::SeqCst);
unsafe { decrement_lock_count(); }
Member commented:

unsafe needs to have a // Safety comment, like this
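
For illustration (the reviewer's suggested text isn't captured here), a hypothetical // Safety comment for this call site might read:

```rust
// Safety: this balances the disable_interrupts() call made at the top of this
// handler when $interrupt_context is true. We use decrement_lock_count()
// rather than enable_interrupts() because interrupts must stay disabled until
// the iret at the end of the handler, which restores the interrupt flag from
// the saved EFLAGS.
unsafe { decrement_lock_count(); }
```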

}

// if we're returning to userspace, check we haven't been killed
@@ -1134,5 +1137,5 @@ pub unsafe fn init() {
(*idt).load();
}

sti();
crate::sync::spin_lock_irq::enable_interrupts();
}
2 changes: 2 additions & 0 deletions kernel/src/i386/process_switch.rs
@@ -347,6 +347,8 @@ fn jump_to_entrypoint(ep: usize, userspace_stack_ptr: usize, arg1: usize, arg2:
const_assert_eq!((GdtIndex::UTlsRegion as u16) << 3 | 0b11, 0x3B);
const_assert_eq!((GdtIndex::UTlsElf as u16) << 3 | 0b11, 0x43);
const_assert_eq!((GdtIndex::UStack as u16) << 3 | 0b11, 0x4B);
// We're about to enable interrupts by doing an iret.
unsafe { crate::sync::spin_lock_irq::decrement_lock_count(); }
roblabla (Member) commented on Oct 31, 2019:
Same as above: needs a // Safety comment explaining why this operation is safe.

Also, I'm a bit confused about why this is necessary. Is this what was talked about on Discord, because of the interrupt_manager SpinLockIRQ? If so:

  1. It should definitely have a big fat comment explaining this in detail
  2. I think we can get rid of interrupt_manager, removing the need for this.

kitlith (Contributor, Author) commented on Oct 31, 2019:
> why is interrupt_manager necessary?

The guard for the queue is dropped before the context switch (if it weren't, that would probably cause deadlocks or force us to force_unlock, etc.), but the context switch must still run with interrupts disabled. So it's not removable as such.

Alternatives to what I did include wrapping context_switch in increments/decrements of the counter, and/or removing the guard object and using explicit increment/decrement calls. I think it's a bit better to associate the decrement with the iret, but *shrug*.

kitlith (Contributor, Author) commented on Oct 31, 2019:
Oh, here's another alternative: making context_switch and co less unsafe by having it manage interrupt enabling/disabling by itself instead of making everything else manage it.
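
Pulling the thread together, the kind of "big fat comment" requested above might read roughly like this for the current approach (a sketch of my reading of the discussion, not text from the PR):

```rust
// We are about to enter userspace with an iret, which restores EFLAGS from the
// stack and therefore re-enables interrupts at the hardware level. As discussed
// in the review thread, the counter is still non-zero at this point: the
// scheduler's interrupt_manager SpinLockIRQ is what keeps interrupts disabled
// after the queue guard has been dropped, across the context switch. Decrement
// the counter here so the software bookkeeping matches what the iret is about
// to do in hardware.
//
// Safety: interrupts stay physically disabled until the iret, and nothing on
// this path acquires another SpinLockIRQ after this call.
unsafe { crate::sync::spin_lock_irq::decrement_lock_count(); }
```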

unsafe {
asm!("
mov ax,0x33 // ds, es <- UData, Ring 3
3 changes: 3 additions & 0 deletions kernel/src/main.rs
@@ -250,6 +250,9 @@ pub extern "C" fn common_start(multiboot_info_addr: usize) -> ! {

info!("Allocating cpu_locals");
init_cpu_locals(1);
// Now that cpu_locals are present, ensure that SpinLockIRQs
// are aware that interrupts are disabled.
sync::spin_lock_irq::disable_interrupts();

info!("Enabling interrupts");
unsafe { i386::interrupt_service_routines::init(); }
1 change: 1 addition & 0 deletions kernel/src/process.rs
@@ -178,6 +178,7 @@ pub struct ThreadStruct {
///
/// This is used when signaling that this thread has exited.
state_event: ThreadStateEvent

}

/// A handle to a userspace-accessible resource.
5 changes: 4 additions & 1 deletion kernel/src/sync/spin_lock.rs
@@ -148,7 +148,10 @@ impl<T: ?Sized> SpinLock<T> {
/// a guard within Some.
pub fn try_lock(&self) -> Option<SpinLockGuard<T>> {
use core::sync::atomic::Ordering;
if crate::i386::interrupt_service_routines::INSIDE_INTERRUPT_COUNT.load(Ordering::SeqCst) != 0 {
use crate::cpu_locals::ARE_CPU_LOCALS_INITIALIZED_YET;
use crate::i386::interrupt_service_routines::INSIDE_INTERRUPT_COUNT;
use super::INTERRUPT_DISARM;
if !INTERRUPT_DISARM.load(Ordering::SeqCst) && ARE_CPU_LOCALS_INITIALIZED_YET.load(Ordering::SeqCst) && INSIDE_INTERRUPT_COUNT.load(Ordering::SeqCst) != 0 {
panic!("\
You have attempted to lock a spinlock in interrupt context. \
This is most likely a design flaw. \
151 changes: 84 additions & 67 deletions kernel/src/sync/spin_lock_irq.rs
@@ -9,8 +9,54 @@ use spin::{Mutex as SpinLock, MutexGuard as SpinLockGuard};
use core::fmt;
use core::mem::ManuallyDrop;
use core::ops::{Deref, DerefMut};
use core::sync::atomic::Ordering;
use core::sync::atomic::{AtomicU8, Ordering};
use super::INTERRUPT_DISARM;
use crate::cpu_locals::ARE_CPU_LOCALS_INITIALIZED_YET;

/// Interrupt disable counter.
///
/// # Description
///
/// Allows recursively disabling interrupts while keeping sane behavior.
/// Should only be manipulated through sync::enable_interrupts and
/// sync::disable_interrupts.
///
/// Used by the SpinLockIRQ to implement recursive irqsave logic.
#[thread_local]
static INTERRUPT_DISABLE_COUNTER: AtomicU8 = AtomicU8::new(0);

/// Decrement the interrupt disable counter, re-enabling interrupts if it reaches zero.
///
/// Look at documentation for INTERRUPT_DISABLE_COUNTER to know more.
pub fn enable_interrupts() {
if !INTERRUPT_DISARM.load(Ordering::SeqCst) && ARE_CPU_LOCALS_INITIALIZED_YET.load(Ordering::SeqCst) {
if INTERRUPT_DISABLE_COUNTER.fetch_sub(1, Ordering::SeqCst) == 1 {
unsafe { interrupts::sti() }
}
}
}

/// Decrement the interrupt disable counter without ever re-enabling interrupts.
///
/// Used to decrement the counter while keeping interrupts disabled, e.g. inside an interrupt handler.
/// Look at documentation for INTERRUPT_DISABLE_COUNTER to know more.
Member commented:
Safety comment explaining when it is safe to call this function.

pub unsafe fn decrement_lock_count() {
if !INTERRUPT_DISARM.load(Ordering::SeqCst) && ARE_CPU_LOCALS_INITIALIZED_YET.load(Ordering::SeqCst) {
let _ = INTERRUPT_DISABLE_COUNTER.fetch_sub(1, Ordering::SeqCst);
}
}

/// Increment the interrupt disable counter.
///
/// Look at documentation for INTERRUPT_DISABLE_COUNTER to know more.
pub fn disable_interrupts() {
if !INTERRUPT_DISARM.load(Ordering::SeqCst) && ARE_CPU_LOCALS_INITIALIZED_YET.load(Ordering::SeqCst) {
if INTERRUPT_DISABLE_COUNTER.fetch_add(1, Ordering::SeqCst) == 0 {
unsafe { interrupts::cli() }
}
}
}


/// Permanently disables the interrupts. Forever.
///
@@ -32,27 +78,12 @@ pub unsafe fn permanently_disable_interrupts() {
/// - `lock` behaves like a `spinlock_irqsave`. It returns a guard.
/// - Dropping the guard behaves like `spinlock_irqrestore`
///
/// This means that locking a spinlock disables interrupts until all spinlock
/// guards have been dropped.
/// This means that locking a spinlock disables interrupts until all spinlocks
/// have been dropped.
///
/// A note on reordering: reordering lock drops is prohibited and doing so will
/// result in UB.
//
// TODO: Find sane design for SpinLockIRQ safety
// BODY: Currently, SpinLockIRQ API is unsound. If the guards are dropped in
// BODY: the wrong order, it may cause IF to be reset too early.
// BODY:
// BODY: Ideally, we would need a way to prevent the guard variable to be
// BODY: reassigned. AKA: prevent moving. Note that this is different from what
// BODY: the Pin API solves. The Pin API is about locking a variable in one
// BODY: memory location, but its binding may still be moved and dropped.
// BODY: Unfortunately, Rust does not have a way to express that a value cannot
// BODY: be reassigned.
// BODY:
// BODY: Another possibility would be to switch to a callback API. This would
// BODY: solve the problem, but the scheduler would be unable to consume such
// BODY: locks. Maybe we could have an unsafe "scheduler_relock" function that
// BODY: may only be called from the scheduler?
/// Note that it is allowed to lock/unlock the locks in a different order. A single
/// per-CPU counter, shared by all SpinLockIRQs, is used to disable/enable interrupts.
/// See the INTERRUPT_DISABLE_COUNTER documentation for more information.
pub struct SpinLockIRQ<T: ?Sized> {
/// SpinLock we wrap.
internal: SpinLock<T>
@@ -74,45 +105,32 @@ impl<T> SpinLockIRQ<T> {

impl<T: ?Sized> SpinLockIRQ<T> {
/// Disables interrupts and locks the mutex.
pub fn lock(&self) -> SpinLockIRQGuard<T> {
if INTERRUPT_DISARM.load(Ordering::SeqCst) {
let internalguard = self.internal.lock();
SpinLockIRQGuard(ManuallyDrop::new(internalguard), false)
} else {
// Save current interrupt state.
let saved_intpt_flag = interrupts::are_enabled();

// Disable interruptions
unsafe { interrupts::cli(); }

let internalguard = self.internal.lock();
SpinLockIRQGuard(ManuallyDrop::new(internalguard), saved_intpt_flag)
}
pub fn lock(&self) -> SpinLockIRQGuard<'_, T> {
// Disable irqs
unsafe { disable_interrupts(); }
Member commented:
Safety comment


// TODO: Disable preemption.
// TODO: Spin acquire

// lock
let internalguard = self.internal.lock();
SpinLockIRQGuard(ManuallyDrop::new(internalguard))
}

/// Disables interrupts and locks the mutex.
pub fn try_lock(&self) -> Option<SpinLockIRQGuard<T>> {
if INTERRUPT_DISARM.load(Ordering::SeqCst) {
self.internal.try_lock()
.map(|v| SpinLockIRQGuard(ManuallyDrop::new(v), false))
} else {
// Save current interrupt state.
let saved_intpt_flag = interrupts::are_enabled();

// Disable interruptions
unsafe { interrupts::cli(); }

// Lock spinlock
let internalguard = self.internal.try_lock();

if let Some(internalguard) = internalguard {
// if lock is successful, return guard.
Some(SpinLockIRQGuard(ManuallyDrop::new(internalguard), saved_intpt_flag))
} else {
// Else, restore interrupt state
if saved_intpt_flag {
unsafe { interrupts::sti(); }
}
pub fn try_lock(&self) -> Option<SpinLockIRQGuard<'_, T>> {
// Disable irqs
unsafe { disable_interrupts(); }
Member commented:
Safety comment


// TODO: Disable preemption.
// TODO: Spin acquire

// lock
match self.internal.try_lock() {
Some(internalguard) => Some(SpinLockIRQGuard(ManuallyDrop::new(internalguard))),
None => {
// We couldn't lock. Restore irqs and return None
unsafe { enable_interrupts(); }
Member commented:
Safety comment

None
}
}
@@ -126,19 +144,20 @@ impl<T: ?Sized> SpinLockIRQ<T> {

impl<T: fmt::Debug> fmt::Debug for SpinLockIRQ<T> {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
if let Some(v) = self.try_lock() {
f.debug_struct("SpinLockIRQ")
.field("data", &v)
.finish()
} else {
write!(f, "SpinLockIRQ {{ <locked> }}")
match self.try_lock() {
Some(d) => {
write!(f, "SpinLockIRQ {{ data: ")?;
d.fmt(f)?;
write!(f, "}}")
},
None => write!(f, "SpinLockIRQ {{ <locked> }}")
}
}
}

/// The SpinLockIrq lock guard.
#[derive(Debug)]
pub struct SpinLockIRQGuard<'a, T: ?Sized>(ManuallyDrop<SpinLockGuard<'a, T>>, bool);
pub struct SpinLockIRQGuard<'a, T: ?Sized>(ManuallyDrop<SpinLockGuard<'a, T>>);

impl<'a, T: ?Sized + 'a> Drop for SpinLockIRQGuard<'a, T> {
fn drop(&mut self) {
@@ -147,9 +166,7 @@ impl<'a, T: ?Sized + 'a> Drop for SpinLockIRQGuard<'a, T> {
unsafe { ManuallyDrop::drop(&mut self.0); }

// Restore irq
if self.1 {
unsafe { interrupts::sti(); }
}
unsafe { enable_interrupts(); }
Member commented:
I know the old code didn't have one, but a safety comment here would be nice ^^.


// TODO: Enable preempt
}
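
As a closing illustration of the new documentation's claim that guards may now be dropped in any order, here is a minimal usage sketch (hypothetical code, assuming the pre-existing SpinLockIRQ::new constructor; not part of this PR):

```rust
use crate::sync::spin_lock_irq::SpinLockIRQ;

fn out_of_order_drop_demo() {
    // Assumes no other SpinLockIRQ is currently held, so the counter starts at 0.
    let lock_a = SpinLockIRQ::new(0u32);
    let lock_b = SpinLockIRQ::new(0u32);

    let a = lock_a.lock(); // counter 0 -> 1, interrupts get disabled
    let b = lock_b.lock(); // counter 1 -> 2
    drop(a);               // counter 2 -> 1, interrupts stay disabled
    // Under the old saved-flag scheme this drop order could re-enable
    // interrupts while `b` was still held; with the counter it cannot.
    drop(b);               // counter 1 -> 0, interrupts are re-enabled
}
```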