Lessons moving from iOS delegates to Rust async
The majority of my async programming experience is on iOS, and let me tell you, life is good. You can easily dispatch work to background threads. You can bring work back to the main thread. You can mark your classes as delegates, and when you need to handle some event the OS will use a magic pre-existing thread pool to invoke your method, and you can do whatever you like. It works perfectly almost all the time, except when it doesn't: race conditions, or crashes caused by concurrency bugs. Life is good.
Rust is less tolerant about the crashing part. While I agree that crashing is bad in principle, avoiding it has significant ramifications for how you can write async code at all. Recently I've been finding out what the differences are. Obviously this means I'm more of a noob than an expert, but it also puts me in a good position to point out which parts are confusing and what the Rust solutions seem to be. (But I'm a noob, so take it all with a grain of salt.)
Delegates are hard
The Apple-provided Cocoa API is full of delegates. When you start writing your own classes that perform asynchronous work it’s natural to follow the same pattern. (Cocoa also uses completion callbacks/blocks extensively and we’ll get to those in a moment.) Delegates work really well in Swift or Objective-C.
This is easy to understand and works fine. It even translates to Rust pretty easily: instead of defining a `TaskDoerDelegate` protocol you define a `TaskDoerDelegate` trait and implement it for `MyViewController`. The key question here is: who actually owns the instance of `MyViewController`?

On iOS you would say: well, the OS really owns the view controller with its system code, and the `TaskDoer` has a weak reference to it, so it knows about it but it doesn't own it. While `TaskDoer` doesn't influence the lifetime of its delegate, since the reference is weak, it does have a mutable reference to it (using Rust terminology). When it calls `taskDone()`, the corresponding method runs inside a mutable version of `MyViewController` and can do whatever it wants to its state (incrementing `tasksDone`). In Swift you can very easily have multiple mutable references to the same object flying around, even if some of them are weak. And this is why crashes can happen.
Rust does not allow this at all. It does have `Weak` references, but these presuppose an `Arc`. If you are a `MyViewController`, you can't just generate an `Arc<Self>` out of nothing. You need to be inside an `Arc<MyViewController>` to begin with. You need access to that `Arc` so that you can give a clone, as a trait object, to the `TaskDoer` as its delegate. (At that point it might be downgraded to a `Weak`, but this detail is not important.)
Using the delegate pattern means you need to be able to pass around an `Arc` reference to yourself. This detail leaks into whatever owns the `MyViewController`: it becomes an `Arc<MyViewController>`, for reasons that are internal to that object. You then `impl TaskDoerDelegate for Arc<MyViewController>` instead of for `MyViewController` itself.

Everything about this is horrible. Delegates do not fit well with Rust ownership. As far as I can tell there is no fix for this except the messy `Arc` workaround. Rust treats direct delegates as a dangerous construct, and they're not allowed.
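To make the `Arc` workaround concrete, here is a minimal sketch of one way to wire it up. The names (`TaskDoer`, `TaskDoerDelegate`, `MyViewController`, `tasks_done`) mirror the iOS example above; note that this variant implements the trait for the plain struct and shares it as an `Arc<dyn ...>` trait object, which is a slight variation on implementing it for `Arc<MyViewController>` directly:

```rust
use std::sync::{Arc, Mutex, Weak};

trait TaskDoerDelegate: Send + Sync {
    fn task_done(&self);
}

struct TaskDoer {
    // Weak, so the doer doesn't keep its delegate alive.
    delegate: Weak<dyn TaskDoerDelegate>,
}

impl TaskDoer {
    fn do_task(&self) {
        // upgrade() yields a temporary Arc if the delegate still exists.
        if let Some(delegate) = self.delegate.upgrade() {
            delegate.task_done();
        }
    }
}

struct MyViewController {
    tasks_done: Mutex<u32>,
}

impl TaskDoerDelegate for MyViewController {
    fn task_done(&self) {
        *self.tasks_done.lock().unwrap() += 1;
    }
}

fn main() {
    // The controller has to live inside an Arc from the start.
    let controller = Arc::new(MyViewController { tasks_done: Mutex::new(0) });

    // Coerce a clone to a trait object, then downgrade to a Weak.
    let as_delegate: Arc<dyn TaskDoerDelegate> = controller.clone();
    let doer = TaskDoer { delegate: Arc::downgrade(&as_delegate) };

    doer.do_task();
    assert_eq!(*controller.tasks_done.lock().unwrap(), 1);
}
```

Note how the `Mutex` around `tasks_done` is forced on us too: the delegate method only gets `&self`, so any mutation has to go through interior mutability.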
Completion callbacks are hard

Here's another example of iOS programming where life is good. You call a function and supply a completion closure that will be executed when the action finishes. You can capture `self` (implicitly a mutable reference) in the closure and run some other method or change some state if you want.
If you're an experienced iOS developer you may be looking at this warily. A relatively common concurrency stuff-up in iOS is that `self` goes missing. If your receiving object gets deallocated at the wrong time, it's possible for the callback to run but crash when it tries to access memory related to `self`. The solution is to instead capture `weak self`, which makes `self` an optional. If you successfully convert it back to a strong reference, life is good, and you can safely do whatever you wanted.
Rust will allow a callback closure, but you need to guarantee that the lifetime of the closure is shorter than that of any references captured inside it. In practice this is difficult: Rust does allow closures to have limited lifetimes, but storing them gets syntactically messy. In general, particularly if you're using tokio, callback closures have to have a `'static` lifetime, because you don't know when the closure will actually be scheduled for execution relative to any other program flow. In other words, the closure needs to own whatever it needs, or it needs to own an `Arc` of whatever it needs.
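A small sketch of what "owning an `Arc` of whatever it needs" looks like. I'm using `std::thread::spawn` here because, like `tokio::spawn`, it demands a `'static` closure, but it runs without any external crates:

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    let config = Arc::new(String::from("shared config"));

    // The spawned closure must be 'static: it can't borrow `config`,
    // so we move a clone of the Arc into it instead.
    let for_closure = Arc::clone(&config);
    let handle = thread::spawn(move || {
        format!("callback saw: {}", for_closure)
    });

    let result = handle.join().unwrap();
    assert_eq!(result, "callback saw: shared config");

    // The original Arc is still usable out here.
    assert_eq!(config.len(), 13);
}
```

The clone-then-`move` dance replaces the borrow you'd instinctively reach for: the closure owns its own strong reference, so it's valid no matter when it runs.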
Back in iOS-land the callback was essentially a notification. Suppose you write a class, and a function inside that class, and inside that function you write a closure that's used as a callback. When the callback fires, you have the full capabilities of that class instance and its state at your disposal by capturing `self`.

In Rust that's just not true. You no longer have access to `self`, unless you did an `Arc` trick similar to what was mentioned earlier about delegates. Even then it's only an immutable reference, so any fields you need to mutate will have to be protected by an `RwLock`. Life isn't good.
The Rust solution, as far as I can tell, is to think carefully about your specific callback. When it runs, what will it actually do? What data does it need to do that job? There are two options.

- Move the required data into the closure, if possible.
- If the data must be shared, pass in clones of `Arc`s of the minimum required data.
Those `Arc` references probably won't include `self`. If your object is a struct, it makes more sense to pick out the one or two (or however many) fields that the callback requires and place those specific fields in `Arc`s. If you truly do have an object "state" struct that is indivisible, it could perhaps be an `Arc<Mutex<MyObjectState>>`, and you can pass a clone of that `Arc` into all the futures callbacks that need it. If you can organise your code so it doesn't depend on shared state at all for its callbacks, so much the better. Functionality that's shared or otherwise "too big for the closure" can be implemented as associated functions that don't take a `self` parameter and instead are passed all the data they need as arguments.
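Putting those pieces together, a minimal sketch (the `MyObjectState` name comes from the text; `record_done` is a hypothetical helper standing in for a no-`self` associated function):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// The indivisible "state" struct from the text.
struct MyObjectState {
    tasks_done: u32,
}

// Associated-function style: no `self`, all data passed in explicitly.
fn record_done(state: &Arc<Mutex<MyObjectState>>) {
    state.lock().unwrap().tasks_done += 1;
}

fn main() {
    let state = Arc::new(Mutex::new(MyObjectState { tasks_done: 0 }));

    // Each callback receives its own clone of the Arc instead of `self`.
    let for_callback = Arc::clone(&state);
    let handle = thread::spawn(move || record_done(&for_callback));
    handle.join().unwrap();

    assert_eq!(state.lock().unwrap().tasks_done, 1);
}
```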
Executors are not omnipresent
On iOS we have Grand Central Dispatch. When our app runs it already has a main queue and a global queue, and we can use the `DispatchQueue` global to move closures onto whatever queue we want, whenever we want. Life is good.
If you read the tokio docs, life is also good, because your `main()` function defines a future and this is passed directly to `tokio::run`. From there you can call `tokio::spawn` whenever you like, and all your async tasks and futures run on an executor that is provided from the beginning.
That is, there is no equivalent of the `DispatchQueue` global in Rust. Instead you must create a `Runtime`, possibly grab handles to its executor, and pass them to whatever needs to schedule futures on that thread pool. This is not a major hindrance, but it is important to be aware of. Any Rust code can call `thread::spawn` and get a closure running, but if you want to use your tokio runtime you're going to have to either already be in it, or have a reference to an executor.
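The "no ambient global, only handles you pass around" shape can be illustrated without tokio at all. This is a toy stand-in for a runtime, not tokio's API: a worker thread draining a task queue, where the channel sender plays the role of the executor handle you'd hand to anything that needs to schedule work:

```rust
use std::sync::mpsc;
use std::thread;

type Task = Box<dyn FnOnce() + Send>;

fn main() {
    // Build the "runtime" explicitly: a worker thread draining a queue.
    let (scheduler, queue) = mpsc::channel::<Task>();
    let worker = thread::spawn(move || {
        for task in queue {
            task();
        }
    });

    // `scheduler` is the handle you must pass to anything that wants
    // to schedule work; there is no ambient global to reach for.
    let handle = scheduler.clone();
    handle.send(Box::new(|| println!("task ran on the pool"))).unwrap();

    // Dropping every handle shuts the worker down.
    drop(scheduler);
    drop(handle);
    worker.join().unwrap();
}
```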
Also, be aware that the tokio thread pool is optimised for I/O, i.e. not large amounts of CPU usage. If you know you're going to do something processor-intensive, consider moving it off to a dedicated worker thread.
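A sketch of that offloading, using a plain `std::thread` as the worker (the summation is just a stand-in for any CPU-heavy computation):

```rust
use std::thread;

fn main() {
    // Push the CPU-heavy part onto a dedicated thread so an
    // I/O-oriented pool isn't blocked while it grinds away.
    let worker = thread::spawn(|| (1u64..=1_000_000).sum::<u64>());

    // ...the async executor keeps servicing I/O in the meantime...

    let total = worker.join().unwrap();
    assert_eq!(total, 500_000_500_000);
}
```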
Rust does async, but lifetimes make it more challenging. This isn't necessarily bad: it rules out certain kinds of crashes that I've certainly experienced in the past. However, I've found it difficult to get out of the rut of thinking in that very object-oriented, state-focused way. I wrote this post to explain my journey so far out of that trap.
As mentioned previously, I am still a noob, so if any of these observations are wrong please drop me an email and I’ll update the post with any improvements.