Auto-advance vs Time Guards for mocking async Rust code
In this post I’m going to briefly describe two strategies for mocking timers in Rust/tokio tests and demonstrate some situations where one works better than the other. I had hoped that by bringing these observations together I’d be able to say something wise about async time mocking in general, but sadly I don’t think I can. We don’t have a slam-dunk solution that is obviously the best and it remains unclear how we get there. If a solution does arise, it’s going to contend with the issues described here.
If you’ve arrived at this article hoping to understand how to mock time in your async code generally, I’d recommend starting with tokio::time::pause (see the section on that page about “Auto-advance”). For the remainder of this post I want to look at specific cases which may or may not apply to your own code.
- The Basic Problem
- The Hard Problem
- Comparing Auto-Advance and Time Guards
- Time Guards’ Superpower: Bounded I/O
- Unsolved Problem: Unbounded I/O
- Situations where Time Guards are Problematic
- Conclusion
The Basic Problem
- We have business logic that uses async timers like sleep() or interval()
- We want to test it without having to wait for that much time to actually elapse when running our tests
This test will take 5 seconds to run:
async fn get_value() -> i32 {
tokio::time::sleep(Duration::from_secs(5)).await;
4
}
#[tokio::test]
async fn test_delay() {
let val = get_value().await;
assert_eq!(val, 4);
}
But if we mock time by some means—in this case tokio’s auto-advance—then it executes ~instantly:
#[tokio::test]
async fn test_fast() {
tokio::time::pause(); // <-- new line
let val = get_value().await;
assert_eq!(val, 4);
}
There are various ways to implement this which I’m not going to get into here. This is just to set the scene about what we’re trying to accomplish.
The Hard Problem
- We might want to test for side effects that occur after a timer is triggered
- Or, after a timer expires, it might schedule another timer—if the new timer is now the earliest, it should be the one triggered next
That is: we want the work associated with a given timer trigger to finish before we update the clock and simulate the next timer. Here’s an example:
#[tokio::test(flavor = "current_thread", start_paused = true)]
async fn test_associated() {
// When the code following the delay runs, this will be updated to true
let complete = Arc::new(AtomicBool::new(false));
tokio::spawn({
let complete = complete.clone();
async move {
tokio::time::sleep(Duration::from_secs(1)).await; // 1
complete.store(true, Ordering::Relaxed); // 2
}
});
// Advance time past the trigger point by sleeping here...
tokio::time::sleep(Duration::from_secs(2)).await; // 3
// `complete` should have been updated
assert!(complete.load(Ordering::Relaxed)); // 4
}
If we ran this test at normal speed we would expect the marked lines to complete execution in the order: 1, 2, 3, 4. In the mocking scenario, when we’re executing as fast as possible, once we reach 3 it’s not sufficient to simply “wake up” the timer at 1. We need to wake up timer 1 and wait for the following line 2 to be executed before we continue to 4. Otherwise we get a different result from running at normal speed: the atomic isn’t updated yet.1
This is a consequence of how Rust’s async timers can easily end up embedded in futures alongside business logic. In another design you might register a timer and provide a synchronous onTimerTriggered() callback method. A mocking system could call this callback and have some confidence that by the time it returns, all associated work has completed. Here we can’t do that. All we can do is ask the executor to unpark the corresponding task, which it will do at some point in the future.
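For contrast, a rough sketch of what that callback style might look like is below; the TimerHandler trait and its method name are my own invention for illustration, not a real API.
// Illustrative only: a callback-style timer design. Because the callback is
// synchronous, the mocking system knows that all work associated with the
// trigger has finished as soon as the call returns.
trait TimerHandler {
    fn on_timer_triggered(&mut self);
}

fn fire_timer(handler: &mut dyn TimerHandler) {
    handler.on_timer_triggered();
    // At this point it is safe to advance the mocked clock to the next timer.
}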
There is a second more subtle issue in this example: the 1-second sleep future is not created synchronously. In all likelihood execution will reach 3 before 1 has run and registered the 1-second delay with the mocking system. A simplistic mock would say “well the only timer I know about right now is this 2-second one, so let’s simulate that”—and it would skip along to 4 and fail the test, all before the spawned task got to run. One way or another we need to let that 1-second delay register itself before our mock implementation proceeds to trigger any timers.
In the above code we’re again using tokio’s auto-advance (via the start_paused = true attribute) and it has taken care of both problems for us. It is very clever, but not invincible, as we shall see.
Comparing Auto-Advance and Time Guards
If you haven’t used it before you might be wondering how the tokio::time::pause() in the above examples is making this work. It looks kind of magical.
The answer is that time advancement is built into the executor. It polls any tasks that need polling; then, if it finds that every task is parked, it automatically advances time to the soonest pending timer and triggers it, in order to make something happen. It then lets all resulting execution proceed until it again reaches a quiescent state, at which point it continues with the next-soonest timer, and so on.
When contemplating this, remember that the async test code is itself one task on the executor. That’s why using a longer sleep in the test code ensures ordering relative to the shorter sleep inside the inner task.
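To illustrate the idea (this is a sketch of the concept, not tokio’s actual implementation), the core of the auto-advance rule amounts to something like the following, where MockClock and its fields are names I’ve made up.
use std::time::Instant;

// Rough sketch only: when every task is parked, jump the mock clock forward
// to the earliest pending deadline so that timer can fire with no real waiting.
struct MockClock {
    now: Instant,
    pending_deadlines: Vec<Instant>,
}

impl MockClock {
    fn on_all_tasks_parked(&mut self) -> Option<Instant> {
        // Find the soonest registered timer, if any.
        let earliest = self.pending_deadlines.iter().copied().min()?;
        // Advance time instantly to that point and treat its timer as fired.
        self.now = earliest;
        self.pending_deadlines.retain(|d| *d > earliest);
        Some(earliest)
    }
}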
But what if we didn’t have such a smart executor, and our test chose to advance time by N seconds manually? Then each individual triggered timer needs a way to feed back that its associated work has now finished—whatever that means for a given application.
One way to implement this is a drop guard. When we await on the timer and it’s triggered, it returns an opaque token—a “frozen time guard”—which must be dropped when handling has been completed. Dropping fires a oneshot channel back to the mocking machinery which knows it is safe to move on to the next timer.
I implemented this concept at work; it’s described in some detail on the company blog. (No source available, sorry!) This gives us a similar kind of information without relying on visibility into executor internals.
{
let _guard = ditto_time::delay_for(Duration::from_secs(10)).await.unwrap();
// handle timer event
// guard dropped; time can advance
}
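To make the mechanism a little more concrete, here is a minimal sketch of what such a guard could look like internally; the FrozenTimeGuard name and the oneshot wiring are assumptions of mine, not the actual ditto_time implementation.
use tokio::sync::oneshot;

// Hypothetical guard handed out when a mocked timer fires. The mock
// controller keeps the receiving half and refuses to advance time again
// until this guard is dropped.
struct FrozenTimeGuard {
    release: Option<oneshot::Sender<()>>,
}

impl Drop for FrozenTimeGuard {
    fn drop(&mut self) {
        // Tell the mocking machinery that the work associated with this
        // trigger has finished; ignore the error if it has already gone away.
        if let Some(tx) = self.release.take() {
            let _ = tx.send(());
        }
    }
}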
I always wanted something like tokio’s auto-advance but originally I wasn’t aware of it—for a long time it wasn’t documented properly so I ended up writing ditto_time instead. When I did discover tokio’s implementation later, it seemed cleaner so I set about porting over some of my existing tests from time guards to auto-advance. In many cases this went smoothly, but unfortunately some tests began to fail or flake and it took me a little while to work out why.
Time Guards’ Superpower: Bounded I/O
Let’s take the same test as above except with one small (contrived) tweak—in the inner async block we perform some async file I/O before updating the atomic variable.
#[tokio::test(flavor = "current_thread", start_paused = true)]
async fn test_associated_io() {
let complete = Arc::new(AtomicBool::new(false));
tokio::spawn({
let complete = complete.clone();
async move {
tokio::time::sleep(Duration::from_secs(1)).await; // 1
// Run an async task which operates outside our program
// This causes the test to fail - comment it out to pass
let _f = tokio::fs::File::open("/tmp/foo").await.unwrap();
complete.store(true, Ordering::Relaxed);
}
});
tokio::time::sleep(Duration::from_secs(2)).await;
assert!(complete.load(Ordering::Relaxed));
}
This modified test consistently fails under auto-advance. How come? Well, remember that if all tasks are in the parked state then auto-advance will skip on to the next timer. When we are waiting for an external I/O operation then none of the tasks in our program are doing anything. In other words, while we await the file-open, auto-advance thinks everything is done so it triggers the next timer (2 seconds) and the test fails.
It is hopefully clear that if you had a frozen time guard instead, stored on the stack across the entire async I/O operation, then time would remain fixed in place until the whole thing completed and you would get correct mocking behaviour.
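As a sketch, the inner task from the failing test could be written like this with the time-guard API shown earlier (the function name is mine and the details are illustrative):
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use std::time::Duration;

async fn tick_then_touch_file(complete: Arc<AtomicBool>) {
    // The guard lives on the stack for the whole block, so mocked time
    // stays frozen across the file I/O.
    let _guard = ditto_time::delay_for(Duration::from_secs(1)).await.unwrap();
    let _f = tokio::fs::File::open("/tmp/foo").await.unwrap();
    complete.store(true, Ordering::Relaxed);
    // `_guard` is dropped here; time is now free to advance to the next timer.
}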
In general, any I/O operations bounded by the triggered timer code are easily handled by time guards. This is helpful in many situations but it doesn’t work for everything.
Unsolved Problem: Unbounded I/O
Imagine you are doing a tokio::select across two operations: a read from a file descriptor, and a timer. This time the I/O is not bounded by the timer execution—it sits alongside it. This confounds both auto-advance and time guards.
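For concreteness, the shape being described is roughly the following; the reader type, buffer size and 10-second timeout are arbitrary choices of mine.
use std::time::Duration;
use tokio::io::{AsyncRead, AsyncReadExt};

// The read sits alongside the timer rather than inside the code that a
// timer triggers, so no guard (and no single timer) bounds it.
async fn read_or_timeout<R: AsyncRead + Unpin>(reader: &mut R) {
    let mut buf = [0u8; 1024];
    tokio::select! {
        res = reader.read(&mut buf) => {
            // Data (or EOF/an error) arrived before the timeout.
            let _n = res.unwrap();
        }
        _ = tokio::time::sleep(Duration::from_secs(10)) => {
            // Under mocked time this branch can win even when data is
            // already sitting in the file descriptor.
        }
    }
}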
In auto-advance you hit a similar problem to before where time can shoot ahead while the read is occurring.
With time guards, there is no guard in scope while the read is occurring, so time can freewheel forward. This means that a timer scheduled 10 seconds in the future might get to run before reading the data which is right there in the file descriptor, an operation that would normally complete right away. If that 10 second timer happened to be a read timeout, then you start to get weird test failures.
You might look at this and think, “okay, what if we instrumented all our I/O operations to suppress advancing time until they’ve completed?” This is an interesting idea but I believe it’s impossible to do in any automatic way. We don’t know whether a given read() is something that we expect to complete immediately, and therefore should wait for, or whether it’s something like a read from stdin or a network socket that could be in progress for a long period of time and should be ignored for mocking purposes.
What if we allowed the developer to suppress auto-advance manually across particular operations? This is ugly because now the production code needs to have extra junk in it just for the benefit of tests. There is an open issue requesting this feature in tokio nonetheless.
Situations where Time Guards are Problematic
If you’re considering using time guards you should be aware that they don’t play well with certain constructions that work fine in normal code. All of the situations I’m about to mention can be worked around, but you probably won’t be very happy about it.
Downstream channels
{
let _guard = ditto_time::delay_for(Duration::from_secs(10)).await.unwrap();
some_channel_tx.send(Event::TimerExpired).await;
// guard dropped
}
If the code triggers some downstream async operation, such as by sending a notification through a channel, then there’s no guarantee that the channel receiver will get to run before time moves on. This might be a problem.
Workarounds include adding some sort of completion feedback channel, or passing the frozen time guard itself into the downstream channel, possibly inside an Arc.
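As a sketch of the second workaround, the event can carry the guard with it; Event, the channel type and the unit FrozenTimeGuard stand-in here are all assumptions of mine rather than a real API.
use std::sync::Arc;
use tokio::sync::mpsc;

// Stand-in for the guard type from the earlier sketch.
struct FrozenTimeGuard;

// The event carries the guard, so time stays frozen until the receiver has
// handled (and dropped) the event.
enum Event {
    TimerExpired(Arc<FrozenTimeGuard>),
}

async fn notify(tx: mpsc::Sender<Event>, guard: FrozenTimeGuard) {
    // Time can only advance once the receiver drops this message.
    let _ = tx.send(Event::TimerExpired(Arc::new(guard))).await;
}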
Spawn then side effect
tokio::spawn(async move {
// intended to run “immediately”, but under time guards nothing guarantees it
complete.store(true, Ordering::Relaxed);
// other async stuff...
});
When using time guards, spawned tasks are invisible to the mocking machinery. If the spawned task has a preamble that must be done “immediately” then that preamble needs to be hoisted outside the spawn:
complete.store(true, Ordering::Relaxed);
tokio::spawn(async move {
// other async stuff...
});
Spawn then sleep
A common specific example of the spawn problem is an initial sleep. Until the spawned future is polled, the timer is not created and the mock controller remains unaware of it. It is important to move the creation of any sleep or interval outside the spawn.
let delay = sleep(Duration::from_secs(1));
tokio::spawn(async move {
let _guard = delay.await;
// handle trigger
});
Loop then sleep
It is tempting to write code which repeatedly creates delays in a loop.
loop {
let _guard = sleep(Duration::from_secs(1)).await;
// handle trigger
}
Sadly, there is a tiny gap between where the guard is dropped at the end of the loop and when the next sleep is created. This can be enough for mock time to fly off into the future between iterations. A workaround is to use an interval instead, which effectively registers the entire series of triggers in advance.
let mut i = interval(Duration::from_secs(1));
loop {
// Take care whether your `interval` implementation also fires immediately!
let _guard = i.next().await;
// handle trigger
}
Conclusion
In most scenarios, tokio’s auto-advance is the neatest solution for mocking time in async tests. Frozen time guards offer additional power when the code triggered by a timer performs bounded external I/O, such as file operations. Even then, many common I/O patterns remain difficult to combine with mocked time and it’s not obvious that there’s a way to solve that without including some sort of hints in the code to influence the mocking behaviour.
Remember, though, that none of this matters too much. We still have the regular technique of writing any critical code in a synchronous manner, with all timing functionality hoisted to the outside so that those parameters can be supplied manually in a test. If in doubt, writing your business logic this way is still a great idea. However, if we can smooth the rough edges in order to test increasing amounts of async code “as-is”, then that’s a very cool development.
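For instance, a minimal sketch of that hoisting style might look like the following (the names are mine): the decision logic takes elapsed time as a plain parameter, and only a thin async shell ever touches a real clock.
use std::time::Duration;

// Pure, synchronous decision logic: no clocks, no timers, trivially testable.
fn should_retry(elapsed_since_last_attempt: Duration) -> bool {
    elapsed_since_last_attempt >= Duration::from_secs(5)
}

#[test]
fn retries_only_after_five_seconds() {
    assert!(!should_retry(Duration::from_secs(1)));
    assert!(should_retry(Duration::from_secs(10)));
}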
1. Although we expect it, note that nothing in the original program guarantees that 2 executes before 4. A heavily-loaded machine can schedule its threads in strange ways and it’s possible, though unlikely, that the test could fail even without mocking. Generally when we mock, we opt in to the fantasy that this random rearrangement doesn’t happen.