SEV and WFE are the two main instructions used for implementing spinlocks on the ARM architecture. Let's look briefly at these two instructions before looking into the actual spinlock implementation.
SEV causes an event to be signaled to all cores within a multiprocessor system. If SEV is implemented, WFE must also be implemented.
WFE: If the Event Register is not set, WFE suspends execution until one of the following events occurs:
- an IRQ interrupt, unless masked by the CPSR I-bit
- an FIQ interrupt, unless masked by the CPSR F-bit
- an Imprecise Data abort, unless masked by the CPSR A-bit
- a Debug Entry request, if Debug is enabled
- an Event signaled by another processor using the SEV instruction.
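The Event Register semantics matter here: if SEV has already set a core's Event Register, a subsequent WFE clears it and returns immediately instead of sleeping, which is what prevents a lost-wakeup race between checking the lock and suspending. As a rough illustration only, here is a software model of that behavior using POSIX threads (the names event_reg_t, wfe_model, and sev_model are made up; real WFE/SEV are CPU instructions, not library calls):

```c
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical software model of a core's Event Register. Real
 * WFE/SEV are ARM instructions; this only mimics their
 * set/clear/wake semantics with a mutex and condition variable. */
typedef struct {
    pthread_mutex_t m;
    pthread_cond_t  cv;
    bool event_set;              /* models the Event Register */
} event_reg_t;

/* WFE: if the Event Register is set, clear it and return at once;
 * otherwise block until some "core" signals an event. */
static void wfe_model(event_reg_t *e)
{
    pthread_mutex_lock(&e->m);
    while (!e->event_set)
        pthread_cond_wait(&e->cv, &e->m);
    e->event_set = false;        /* WFE consumes the pending event */
    pthread_mutex_unlock(&e->m);
}

/* SEV: set the Event Register and wake any core sleeping in WFE. */
static void sev_model(event_reg_t *e)
{
    pthread_mutex_lock(&e->m);
    e->event_set = true;
    pthread_cond_broadcast(&e->cv);
    pthread_mutex_unlock(&e->m);
}
```

Because SEV leaves the register set, a core that checks the lock, loses the race with a concurrent unlock's SEV, and only then executes WFE still returns immediately rather than sleeping forever.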
In case of spin_lock_irq( )/spin_lock_irqsave( ),
- as IRQs are disabled, the only way to resume after the WFE instruction has executed is for some other core to execute the SEV instruction.
In case of spin_lock( ),
- If IRQs were enabled before we called spin_lock( ), and we executed WFE and got suspended:
- Scenario 1: An interrupt occurred and was handled; we resume, but as the lock is still unreleased, we loop back and execute WFE again.
- Scenario 2: Some other core executed SEV while releasing some other lock (not ours); we resume; as our lock is still unreleased, we loop back and execute WFE again.
- Scenario 3: Some other core executed SEV while releasing this lock; we resume; as the lock is now free, we acquire it.
- If IRQs are disabled before calling spin_lock( ), then the situation is the same as with spin_lock_irqsave( ).
In case of spin_unlock( ),
- the lock is released and the SEV instruction is executed.
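The lock/unlock pair described above can be mirrored with portable C11 atomics. This is only a sketch of the same retry-loop shape (the names my_spinlock_t, my_spin_lock, and my_spin_unlock are made up for illustration); it busy-spins where the ARM code would execute WFE/SEV, which exist only as ARM instructions:

```c
#include <stdatomic.h>

/* Hypothetical portable analogue of arch_spinlock_t: 0 = free, 1 = held. */
typedef struct { atomic_int lock; } my_spinlock_t;

static void my_spin_lock(my_spinlock_t *l)
{
    for (;;) {
        int expected = 0;                      /* like "teq %0, #0": free? */
        if (atomic_compare_exchange_weak_explicit(
                &l->lock, &expected, 1,        /* like "strexeq": claim it */
                memory_order_acquire,          /* plays the role of smp_mb() */
                memory_order_relaxed))
            return;
        /* Here the ARM code would execute WFE and sleep until the
         * holder's unlock path issues SEV; a portable version can
         * only busy-spin (or yield). */
    }
}

static void my_spin_unlock(my_spinlock_t *l)
{
    /* Like "str %1, [%0]" with release ordering; the kernel then
     * issues a barrier plus SEV to wake cores sleeping in WFE. */
    atomic_store_explicit(&l->lock, 0, memory_order_release);
}
```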
Check out the following code snippets for actual implementation:
static inline void arch_spin_lock(arch_spinlock_t *lock)
{
	unsigned long tmp;

	__asm__ __volatile__(
"1:	ldrex	%0, [%1]\n"		/* exclusive-load the lock value */
"	teq	%0, #0\n"		/* is it free (== 0)? */
	WFE("ne")			/* not free: sleep until an event */
"	strexeq	%0, %2, [%1]\n"		/* free: exclusive-store 1 to claim it */
"	teqeq	%0, #0\n"		/* did the exclusive store succeed? */
"	bne	1b"			/* no: retry from the top */
	: "=&r" (tmp)
	: "r" (&lock->lock), "r" (1)
	: "cc");

	smp_mb();
}
static inline void arch_spin_unlock(arch_spinlock_t *lock)
{
	smp_mb();

	__asm__ __volatile__(
"	str	%1, [%0]\n"		/* store 0: release the lock */
	:
	: "r" (&lock->lock), "r" (0)
	: "cc");

	dsb_sev();			/* barrier, then SEV to wake waiters */
}
static inline void dsb_sev(void)
{
#if __LINUX_ARM_ARCH__ >= 7
	__asm__ __volatile__ (
		"dsb\n"				/* data synchronization barrier */
		SEV				/* signal event to all cores */
	);
#else
	__asm__ __volatile__ (
		"mcr p15, 0, %0, c7, c10, 4\n"	/* CP15 barrier on ARMv6 */
		SEV
		: : "r" (0)
	);
#endif
}
For more information, check arch/arm/include/asm/spinlock.h in the Linux kernel source. The above code snippets are from the 3.4 kernel.