Beware of os_unfair_lock
TL;DR: os_unfair_lock is difficult to use correctly from Swift directly. Use OSAllocatedUnfairLock or Mutex (from Swift 6) instead.
Like many iOS developers, I'm looking forward to Swift 6 and its grand claims of data race safety. In preparation, I've been reviewing some of the code in my personal projects to make sure I'm ready for the migration.
I've previously written about creating an Atomic type that's safe for use in Swift concurrency, where I wanted to lock access to some resource that may be accessed from multiple threads and provide synchronous access.
In short, it involved creating a Sendable wrapper around os_unfair_lock to mediate access to some data, through an Atomic<Value> type that I created.
I chose to use os_unfair_lock rather than NSLock as I'd heard good things from WWDC sessions about its low-level efficiency, and given that the API was simple enough to use, I considered this a good bet.
This is what my Lock type looked like (and maybe what yours looks like too!):
import os

// This is unsafe, don't use this.
public final class Lock: @unchecked Sendable {
    private var _mutex = os_unfair_lock()

    public init() {}

    public func lock() {
        os_unfair_lock_lock(&_mutex)
    }

    public func unlock() {
        os_unfair_lock_unlock(&_mutex)
    }

    public func `try`() -> Bool {
        os_unfair_lock_trylock(&_mutex)
    }
}
Pretty simple: it's a reference type that holds a lock which we can acquire and subsequently release when we are done. At a glance, this looks fine.
What's wrong?
- Memory exclusivity: Using inout/& from Swift can trigger property observers, potential temporary copies, and runtime checks for exclusivity, which makes it unsuitable for use with atomic operations or locks like os_unfair_lock.
- Memory address safety: This implementation, while @unchecked Sendable, is actually a safe usage of os_unfair_lock from a memory-address perspective, but it's very easy to misuse. Using os_unfair_lock directly from Swift is discouraged for this reason.
1. Memory exclusivity
Russ Bishop covers this topic very well so I'll just point you to his article, The Law.
Essentially, the & operator might not always perform just a simple memory address access. In our case it does, but that won't always be true. Because a lock is, by nature, referenced concurrently, we could violate exclusive access to memory simply by referencing &_mutex from multiple threads.
Swift reserves the right to crash if memory exclusivity is violated.
The solution here is to avoid inout behaviour entirely by wrapping the os_unfair_lock within an UnsafeMutablePointer. This ensures that contention happens only on the lock itself and won't cause any memory exclusivity violations.
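For illustration, here is a minimal sketch of that approach (the PointerLock name and exact shape are just for this example): the lock is heap-allocated once so it has a stable address, and the pointer itself is handed to the C functions, so no & is needed.

import os

final class PointerLock: @unchecked Sendable {
    // The lock lives in manually allocated memory, giving it a stable address.
    private let pointer: UnsafeMutablePointer<os_unfair_lock>

    init() {
        pointer = UnsafeMutablePointer<os_unfair_lock>.allocate(capacity: 1)
        pointer.initialize(to: os_unfair_lock())
    }

    deinit {
        pointer.deinitialize(count: 1)
        pointer.deallocate()
    }

    // The pointer is passed directly, avoiding inout/& entirely.
    func lock() {
        os_unfair_lock_lock(pointer)
    }

    func unlock() {
        os_unfair_lock_unlock(pointer)
    }
}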
2. Memory address safety
However, I recently came across a (relatively old) Swift Forums thread which pointed me in the direction of OSAllocatedUnfairLock. This is also part of the core Darwin os module. Overall, it provides the same functionality, with a slightly more ergonomic and modern API.
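As a quick taste of that API (the counter here is just an illustrative example), the lock can directly own the state it protects and hand it to you inside a closure:

import os

// The lock owns the state it protects, rather than sitting beside it.
let counter = OSAllocatedUnfairLock(initialState: 0)

counter.withLock { value in
    value += 1
}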
When reading through the (uncharacteristically substantive) documentation, this note caught my eye:
If you’ve existing Swift code that uses os_unfair_lock, change it to use OSAllocatedUnfairLock to ensure correct locking behavior.
The reason os_unfair_lock is unsafe to use from Swift is "because it’s a value type and, therefore, doesn’t have a stable memory address". That means when we pass the mutex to os_unfair_lock_lock or os_unfair_lock_unlock, it may lock or unlock the wrong object.
Swift value types don't have stable memory addresses because they are copied by value rather than by reference. This hairy example illustrates this issue:
struct MyStruct {
    var value: Int
}

func printMemoryAddress(of structInstance: inout MyStruct) {
    print("Address:", withUnsafePointer(to: &structInstance) { $0 })
}

var myStruct = MyStruct(value: 42)
printMemoryAddress(of: &myStruct) // Address: 0x000000016eea1808

var anotherStruct = myStruct
printMemoryAddress(of: &anotherStruct) // Address: 0x000000016eea1800
In this case, the struct was copied immediately when we assigned it to a new variable, so the struct (and all its fields) now have different memory addresses, as it's a completely new object.
That's why this fails, as it is treated as 2 separate locks:
var lock1 = os_unfair_lock()
var lock2 = lock1
os_unfair_lock_lock(&lock1)
os_unfair_lock_unlock(&lock2)
And this is fine, as it refers to the same lock:
let lock1 = OSAllocatedUnfairLock()
let lock2 = lock1
lock1.lock()
lock2.unlock()
OSAllocatedUnfairLock behaves like a reference type, despite being defined as a struct. This makes the lock safe to pass around (even across threads and suspension boundaries), and the reference to the same instance of the lock will be maintained.
My original Lock implementation was safe in this respect because the os_unfair_lock was wrapped in a reference type (class), which has a persistent memory address. The os_unfair_lock itself is stored at some offset from the class's base address. Importantly, this meant that the address of the lock wouldn't change.
OSAllocatedUnfairLock is available on Apple platforms only. However, Swift 6 is (very) shortly going to introduce a new Mutex type, which will wrap different locking primitives depending on your platform and has a near-identical API:
- macOS, iOS, watchOS, tvOS, visionOS: os_unfair_lock
- Linux: futex
- Windows: SRWLOCK
I'll be updating my Atomic<Value> to use this when Swift 6 is generally available, so it can work on all platforms!
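As a rough sketch of what that could look like, based on the Swift Evolution proposal for Mutex (so the details may shift before release), usage has the same shape as OSAllocatedUnfairLock's withLock:

import Synchronization // Swift 6's new standard library module

let total = Mutex(0)

// withLock gives exclusive, in-place access to the protected value.
total.withLock { value in
    value += 1
}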
Atomic
Given this new lock type and what we now know about inout, here's my updated Atomic<Value> type.
import os

public final class Atomic<Value: Sendable>: Sendable {
    private let lock: OSAllocatedUnfairLock<Value>

    public init(initialValue value: Value) {
        lock = .init(initialState: value)
    }

    public var value: Value {
        lock.withLock { $0 }
    }

    public func get<T: Sendable>(
        _ block: @Sendable (Value) throws -> T
    ) rethrows -> T {
        try lock.withLock { value in
            try block(value)
        }
    }

    @discardableResult
    public func modify<T: Sendable>(
        _ block: @Sendable (inout Value) throws -> T
    ) rethrows -> T {
        try lock.withLock { value in
            try block(&value)
        }
    }
}
The Atomic<Value> type is almost unnecessary now, as it provides largely the same API as the lock itself. However, it's nice to keep the separation of concerns here: I prefer the Atomic naming, and it allows me to change the underlying lock type whenever I like without breaking clients.
Note that modifications to the internal value are done via an inout Value closure, rather than a get/set pair. This allows for in-place modification of a given value, rather than calling the full getter and setter for every change to the value. You can read more about why this is a potential issue in this Swift Forums post.
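Here's a brief, hypothetical usage example of the type above (the events value and strings are made up for illustration), showing the in-place style:

let events = Atomic<[String]>(initialValue: [])

// In-place mutation: the array is appended to directly under the lock,
// rather than being copied out via a getter and written back via a setter.
events.modify { $0.append("launched") }

// Synchronous read access, with no async suspension point.
let count = events.get { $0.count } // 1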
Note: making "forward progress"
A final note I would make is to be careful using locks with Swift concurrency, regardless of whether they are OSAllocatedUnfairLocks or not.
Swift concurrency's model requires that we are able to make "forward progress" at any given time (see the WWDC talk, Swift concurrency: Behind the scenes).
This means it is not safe to hold a lock that waits for another Swift Task to complete, for example.
Doing so could result in a deadlock, as the cooperative thread pool may use the same "thread" for performing a different "Task".
The Task/thread that holds the lock should also be the one to release it, and it shouldn't suspend while the lock is being held.
That's why our Atomic<Value> type requires that the block of work being performed is not async.
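As a sketch of how I structure this (the cache and loadFromNetwork names are hypothetical, purely for illustration), the critical sections stay synchronous and brief, and any async work happens with no lock held:

let cache = Atomic<[String: Int]>(initialValue: [:])

// A stand-in for real async work, just so the example is self-contained.
func loadFromNetwork(_ key: String) async -> Int {
    key.count
}

func fetchValue(for key: String) async -> Int {
    // Fast path: a brief, synchronous read under the lock.
    if let cached = cache.get({ $0[key] }) {
        return cached
    }

    // Slow path: suspend with no lock held...
    let fresh = await loadFromNetwork(key)

    // ...then briefly take the lock again to store the result.
    cache.modify { $0[key] = fresh }
    return fresh
}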
Additionally, the locking types os_unfair_lock and OSAllocatedUnfairLock will actually enforce that you lock and unlock from the same thread, trapping if not.
Using an actor in many cases will be more efficient than a lock, but it depends on your use case.
For me, I use an Atomic<Value> when I specifically don't want an async suspension point in my code and I know the lock will only be held very briefly, such as when controlling simple read/write contention on a shared value.
You can read more about lock vs. actors here.
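For comparison, here's a minimal sketch of the actor alternative (a made-up Counter, purely for illustration); the state is protected without any explicit lock, but every external access becomes an await:

// An actor protects its state without an explicit lock...
actor Counter {
    private var value = 0

    func increment() {
        value += 1
    }

    func current() -> Int {
        value
    }
}

// ...but every access from outside the actor is async, adding suspension points:
// let counter = Counter()
// await counter.increment()
// let value = await counter.current()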
Further reading
- This article from SwiftRocks covers all kinds of thread-safety related tools in Swift, a very informative read
- Swift concurrency: Behind the scenes