In the last several years async-friendly languages and APIs have received a large amount of attention. One contentious point in the language design space is “colored functions”, the division of functions into async and non-async ones. The term was introduced by the now-famous 2015 article What Color is Your Function?, which uses color as a metaphor for the often painful mismatch between sync and async functions in JavaScript and other languages with explicitly async functions. Since 2015 many more languages have jumped on the async bandwagon, so many more programmers are now familiar with the metaphor. Given that some languages have managed to provide async IO without colored functions, such as Go, Zig, and in the future likely Java, the discussion around function colors is picking up once again, and it is now being raised in the context of Rust. Some people have even argued that the bad rap of colored functions doesn’t apply to Rust’s async because it’s not colored in the first place. An article from several days ago is titled “Rust’s async isn’t f#@king colored!”, and similar arguments have appeared on reddit. I’m not picking on any specific post; rather, I’d like to respond to that sort of argument in general.
In this article I will show that Rust async functions are colored, both by the original definition and in practice. This is not meant as a criticism of Rust async, though – I don’t see function colors as an insurmountable issue, but as a reflection of the fundamental difference between the async and sync models of the world. Languages that hide that difference do so by introducing compromises that might not be acceptable in a systems language like Rust or C++ – for example, by entirely forbidding the use of system threads, or by complicating the invocation of foreign or OS-level blocking calls. Colored functions are also present in at least C#, Python, Kotlin, and C++, so they’re not a quirk of JavaScript and Rust. And additional features of Rust async make it easier to connect async code with traditional blocking code, something that is simply not possible in JavaScript.
Colored functions
“What Color is Your Function?” starts off by describing an imaginary language that perversely defines two types of functions: red and blue. The language enforces a set of seemingly arbitrary rules regarding how the two are allowed to interact:
- Every function has a color.
- The way you call a function depends on its color.
- You can only call a red function from within another red function.
- Red functions are more painful to call.
- Some core library functions are red.
Without knowing the details, a reasonable person would agree that the described language is not particularly well designed. Of course, readers of this article in 2021 will not find it hard to recognize the analogy with async: red functions are async functions, and blue functions are just ordinary functions. For example, #2 and #4 refer to the fact that calling an async function requires either explicit callback chaining or `await`, whereas a sync function can just be called. #3 refers to the fact that `await` and callback resolution work only inside async functions, and JavaScript doesn’t provide a way to block the current non-async function until a promise (async value) is resolved. The article portrays async functions as a leaky abstraction that profoundly and negatively affects the language, starting with the above rules.
The rules of async make async code contagious, because using just one async function in one place requires all the callers up the stack to become async. This splits the ecosystem into async and non-async libraries, with little possibility of using them interchangeably. The article describes async functions as functions that operate on classic JavaScript callbacks, but further argues that async/await, which was novel at the time, doesn’t help with the issue. Although `await` constitutes a massive ergonomic improvement for calling async from async (#4), it does nothing to alleviate the split – you still cannot call async code from non-async code because `await` requires async.
Function colors in Rust async
How does all this apply to Rust? Many people believe that it applies only in part, or not at all. Several objections have been raised:
- Rust async functions are in essence ordinary functions that happen to return values that implement the `Future` trait. `async fn` is just syntactic sugar for defining such a function, but you can write one yourself using an ordinary `fn`, as long as your function returns a type that implements `Future` (see the sketch after this list). Since async functions, the argument goes, are just functions that return `Future<Output = T>` instead of `T`, they are not “special” in any way, any more than functions that return a `Result<T>` instead of `T` are special – so rule #1 (“every function has a color”) doesn’t apply.
- Unlike JavaScript, Rust async executors provide a `block_on()` primitive that invokes an async function from a non-async context and blocks until the result is available – so rule #3 (“you can only call a red function from within another red function”) doesn’t apply.
- Again unlike JavaScript, Rust async provides `spawn_blocking()`, which invokes a blocking sync function from an async context, temporarily suspending the current async function without blocking the rest of the async environment. This one doesn’t correspond to a rule from the original article because JavaScript doesn’t support blocking sync functions.
- Rule #5 (“some core library functions are red”) doesn’t apply because Rust’s stdlib is sync-only.
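The first objection is easiest to appreciate with a small sketch. The two definitions below are equivalent from the caller’s perspective; the function names are invented for illustration:

```rust
use std::future::Future;

// The sugared form:
async fn fetch_number() -> u32 {
    42
}

// The desugared form: an ordinary `fn` whose return type implements
// `Future<Output = u32>`. Callers treat the two identically.
fn fetch_number_manual() -> impl Future<Output = u32> {
    async { 42 }
}
```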
If these arguments are correct, the only color rule that remains is rule #4, “red functions are more painful to call”, and that part is almost completely alleviated by `await`. The original JavaScript problems where async functions “don’t compose in expressions because of the callbacks” or “have different error-handling” simply don’t exist with `await`, in either JavaScript or Rust. Taking these arguments at face value, it would seem that the whole function-color problem is made up, or at least wildly exaggerated from some exotic JavaScript problems that Rust async doesn’t inherit. Unfortunately, this is not the case.
First, the sync/async split is immediately apparent to anyone who looks at the ecosystem. The very existence of `async_std`, a crate whose explicit purpose is to provide an “async version of the Rust standard library”, shows that the regular standard library is not usable in an async context. If function colors weren’t present in Rust, the ordinary stdlib would be used in both sync and async code, as is the case in Go, where a distinction between “sync” and “async” is never made to begin with.
Then what of the above objections? Let’s go through them one by one and see how they hold up under scrutiny.
Aren’t Rust async functions just ordinary functions with a wacky return type?
While this is true in a technical sense, the same is also true in JavaScript and almost all languages with colored async (with the exception of Kotlin), in exactly the same way. JavaScript async functions are syntactic sugar for functions that create and return a `Promise`. Python’s async functions are regular callables that immediately return a coroutine object. That doesn’t change the fact that in all those languages the caller must handle the returned `Promise` (coroutine object in Python, `Future` in Rust) in ways that differ from handling normal values returned from functions. For example, you cannot pass an async function to `Iterator::filter()`, because `Iterator::filter()` expects a function that returns an actual `bool`, not an opaque value that just might produce a `bool` at some point in the future. No matter what you put in the body of your async function, it will never return `bool`, and extracting the `bool` requires executor magic that creates other problems, as we’ll see below. Regardless of whether it’s technically possible to call an async function from a sync context, the inability to retrieve its result is at the core of the function color distinction.
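For instance, with a made-up `is_even` predicate, the sync version slots straight into `Iterator::filter()`, while the async version does not compile there, because the closure returns a future rather than a `bool`:

```rust
fn is_even(n: &u32) -> bool {
    *n % 2 == 0
}

// Same body, but the actual return type is `impl Future<Output = bool>`.
async fn is_even_async(n: &u32) -> bool {
    *n % 2 == 0
}

fn main() {
    let v = vec![1u32, 2, 3, 4];

    // Fine: the predicate returns an actual `bool`.
    let evens: Vec<u32> = v.clone().into_iter().filter(is_even).collect();
    println!("{:?}", evens);

    // Does not compile: the closure returns an opaque future, not `bool`,
    // and there is no place inside `filter()` to `.await` it.
    // let evens: Vec<u32> = v.into_iter().filter(|n| is_even_async(n)).collect();
}
```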
Ok, but doesn’t the same apply to `Result`? Functions that need a `u32` aren’t particularly happy to receive a `Result<u32, SomeError>`. A generic function that accepts `u32`, such as `Iterator::min()`, has no idea what to do with `Result<u32, SomeError>`. And yet people don’t go around claiming that `Result` somehow “colors” their functions. I admit that this argument has merit – `Result` indeed introduces a semantic shift that is not always easy to bridge, including in the example we used above, `Iterator::filter()`. There is even a proposal to add 21 new iterator methods such as `try_filter()`, `try_min_by_key()`, `try_is_partitioned()`, and so on, in order to support doing IO in your filter function (and key function, etc.). Doing this completely generically might require Haskell-style monads or at least some form of higher-kinded types. All this indicates that supporting both `Result` and non-`Result` types in fully generic code is far from a trivial matter. But is that enough to justify the claim that `Result` and `Future` are equivalent in how they affect functions that must handle them? I would say it’s not, and here is why.
If the recipient of a `Result` doesn’t care about the error case, it can locally resolve the `Result` to the actual value by unwrapping it. If it doesn’t want to panic on error, it can choose to convert the error to a fallback value, or skip the processing of the value. While it can use the `?` operator to propagate the error to its caller, it is not obliged to do so. The recipient of a `Future` doesn’t have that option – it can either `.await` the future, in which case it must become async itself, or it must ask an executor to resolve the future, in which case it must have access to an executor and license to block. What it cannot do is get to the underlying value without interacting with the async environment.
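A small sketch of that asymmetry (the function names are hypothetical):

```rust
// The recipient of a Result can resolve it locally, without anyone's help:
fn takes_result(r: Result<u32, std::num::ParseIntError>) -> u32 {
    r.unwrap_or(0) // fall back to a default; `?` is available, but optional
}

// The recipient of a Future cannot. To get at the value it must either
// become async itself...
async fn takes_future(f: impl std::future::Future<Output = u32>) -> u32 {
    f.await
}

// ...or obtain an executor and a license to block, e.g. with
// futures::executor::block_on:
//
// fn takes_future_blocking(f: impl std::future::Future<Output = u32>) -> u32 {
//     futures::executor::block_on(f)
// }
```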
Verdict: Rule #1 mostly applies to Rust – async functions are special because they return values that require async context to retrieve the actual payload.
Doesn’t `block_on()` offer a convenient way to invoke an async function from a non-async context?
Yes, provided you are actually allowed to use it. Libraries are expected to work with the executor provided by the environment and don’t have an executor lying around which they can just call to resolve async code. The standard library, for example, is certainly not allowed to assume any particular executor, and there are currently no traits that abstract over third-party executors.
But even if you had access to an executor, there is a more fundamental problem with `block_on()`. Consider a sync function `fn foo()` that, during its run, needs to obtain the value from an async function `async fn bar()`. To do so, `foo()` does something like `let bar_result = block_on(bar())`. But that means `foo()` is no longer just a non-async function, it’s now a blocking non-async function. What does that mean? It means that `foo()` can block for arbitrarily long while waiting for `bar()` to complete. Async functions are not allowed to call functions like `foo()` for the same reason they’re not allowed to call `thread::sleep()` or `TcpStream::connect()` – calling a blocking function from async code halts the whole executor thread until the blocking function returns. If that happens in multiple threads, or with a single-threaded executor, it freezes the whole async system. This is not described in the original function color article because neither `block_on()` nor blocking functions exist in stock JavaScript. But the implications are clear: a function that uses `block_on()` is no longer blue, but it’s not red either – it’s of a new color, let’s call it purple.
If this looks like it’s changing the landscape, that’s because it is. And it gets worse. Consider another async function, `xyzzy()`, that needs to call `foo()`. If `foo()` were a blue/non-async function, `xyzzy()` would just call it and be done with it, the way it’d call `HashMap::get()` or `Option::take()` without thinking. But `foo()` is a purple function which blocks on `block_on(bar())`, and `xyzzy()` is not allowed to call it. The irony is that both `xyzzy()` and `bar()` are async, and if `xyzzy()` could just await `bar()` directly, everything would be fine. The fact that `xyzzy()` calls `bar()` through the non-async `foo()` is what creates the problem – `foo()`’s use of `block_on()` breaks the chain of suspensions required for `bar()` to communicate to `xyzzy()` that it needs to suspend until further notice. The ability to propagate suspension from the bottom-most awaitee all the way up to the executor is the actual reason why async must be contagious. By eliminating async from the signature of `foo()`, one also eliminates much of the advantage of `bar()` being async, along with the possibility of calling `foo()` from async code.
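Here is the whole `foo()`/`bar()`/`xyzzy()` scenario as a compilable sketch, using `futures::executor::block_on` as a stand-in for whatever executor `foo()` happens to have access to:

```rust
use futures::executor::block_on;

async fn bar() -> u32 {
    42 // imagine real async IO here
}

// Purple: not async, but blocks for as long as bar() takes to complete.
fn foo() -> u32 {
    let bar_result = block_on(bar());
    bar_result + 1
}

async fn xyzzy() -> u32 {
    // This compiles, but it stalls the executor thread: foo()'s block_on()
    // hides bar()'s suspension points from the async environment.
    foo()
    // If xyzzy() could await bar() directly, there would be no problem:
    //     bar().await + 1
}
```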
Verdict: rule #3 applies because `block_on()` changes a blue function into something that is neither red nor callable from red.
Doesn’t `spawn_blocking()` resolve the issue of awaiting blocking functions in async contexts?
`spawn_blocking()` is a neat bridge between sync and async code: it takes a sync function that might take a long time to execute, and instead of calling it, submits it to a thread pool for execution. It returns a `Future`, so you can `await spawn_blocking(|| some_blocking_call())` like you’d await a true async function, without the issues associated with `block_on()`. This is because the `Future` returned by `spawn_blocking()` is pending until the thread pool reports that it’s done executing the submitted sync function. In our extended color metaphor, `spawn_blocking()` is an adapter that converts a purple function into a red one. Its main intended use cases are CPU-bound functions that might take a long time to execute, as well as blocking functions that just don’t have a good async alternative. Examples of the latter are functions that work with the file system, which still lacks a true async counterpart, and legacy blocking code behind FFI (think ancient database drivers and the like).
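As a sketch of the intended use, here is a hypothetical helper that wraps blocking file-system access with tokio’s `spawn_blocking()`:

```rust
use tokio::task::spawn_blocking;

// Read a file without stalling the executor thread: the blocking
// std::fs::read() runs on the dedicated blocking thread pool, and the
// future returned by spawn_blocking() is awaited like any other.
async fn read_file(path: std::path::PathBuf) -> std::io::Result<Vec<u8>> {
    spawn_blocking(move || std::fs::read(path))
        .await
        .expect("blocking task panicked")
}
```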
Problems arise when code tries to avoid multiple function colors and uses `block_on()` or `spawn_blocking()` to hide the “color” of the implementation. For example, a library might be implemented using async code internally, but use `block_on()` to expose only a sync API. Someone might then use that library in an async context and wrap the sync calls in `spawn_blocking()`. What would be the consequences if that were done across the board? Recall that the important advantage of async is the ability to scale the number of concurrent agents (futures) without increasing the number of OS threads. As long as the agents are mostly IO-bound, you can have literally millions of them executing (most of them being suspended at any given time) on a single thread. But if an async function like the above `xyzzy()` uses `spawn_blocking()` to await a purple function like `foo()`, which itself uses `block_on()` to await an async function like `bar()`, then we have a problem: the number of `xyzzy()` instances that can run concurrently and make progress is now limited by the number of threads in the thread pool employed by `spawn_blocking()`. If you need to spawn a large number of tasks awaiting `xyzzy()` concurrently, most of them will need to wait for a slot in the thread pool to open up before their `foo()` calls even begin executing. And all this because `foo()` blocks on `bar()`, which is again ironic because `bar()`, being an async function, is designed to scale independently of the number of threads available to execute it.
The above is not just a matter of performance degradation; in the worst case `spawn_blocking(|| block_on(...))` can deadlock. Consider what happens if one async function behind `spawn_blocking(|| block_on(...))` needs data from another async function started the same way in order to proceed. It is possible that the other async function cannot make progress because it is waiting for a slot in the thread pool to even begin executing. And the slot won’t free up, because it is taken by the first async function, which also runs inside a `spawn_blocking()` invocation. The slot is never going to change owner, and a deadlock occurs. This can’t happen with async functions that are directly executed as async tasks, because those don’t require a slot in a fixed-size pool. They can all be in a suspended state waiting for something to happen to any of them, and resume execution at any moment. In an async system the number of OS threads deployed by the executor doesn’t limit the number of async functions that can work concurrently. (There are executors that use a single thread to drive all futures.)
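A sketch of the anti-pattern described above, assuming tokio for `spawn_blocking()` and the `futures` crate for `block_on()` (the function names are invented):

```rust
use futures::executor::block_on;
use tokio::task::spawn_blocking;

async fn inner_async() -> u32 {
    42 // imagine this awaiting IO, or data produced by another task
}

// A "sync" wrapper that hides the async implementation behind block_on()...
fn sync_wrapper() -> u32 {
    block_on(inner_async())
}

// ...wrapped back into async with spawn_blocking(). Every concurrent call
// now occupies a blocking-pool thread for its entire duration, and if
// inner_async() ever has to wait for data produced by a task started the
// same way, all the pool slots can end up held by waiters -- a deadlock.
async fn xyzzy() -> u32 {
    spawn_blocking(sync_wrapper).await.unwrap()
}
```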
Verdict: spawn_blocking()
is fine to use with CPU-bound or true blocking code, but it’s not a good idea to use it with block_on()
because the advantages of async are then lost and there is a possibility of deadlock.
But Rust’s stdlib is sync-only.
That’s technically true, but Rust’s stdlib is intentionally minimal. Important parts of functionality associated with Rust are delegated to external crates, with great success. Many of these external crates now require async, or even a specific executor like tokio. So while the standard library is async-free, you cannot ignore async while programming in Rust.
Verdict: technically true but not useful in a language with a minimalistic standard library.
Dealing with a two-colored world
Again, the above is not a criticism of Rust async, but merely of the claim that it’s not colored. Once we accept that it is, it becomes clear that, unlike JavaScript, Rust actually does provide the tools we need to deal with the mismatch. We can:
- Accept that sync and async are two separate worlds, and not try to hide it. In particular, don’t write “sync” interfaces that use `block_on()` to hide async ones, or the other way around with `spawn_blocking()`. If you absolutely must hide async interfaces behind sync ones, do so immediately at the entry point, document that you’re doing so, and provide a public interface to the underlying native call.
- Respecting the above, use `block_on()` and `spawn_blocking()` in application-level code at the boundaries between the two worlds.
- In more complex scenarios, create clear and documented boundaries between the two worlds and use channels to communicate between them (a sketch follows below). This technique is already used for both multi-threaded and async code, so it should come as no surprise to future maintainers. Ideally you’d use channels that provide both a sync and an async interface, but if those are not available, use async channels with `block_on()` on the sync side.
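As an illustration of the channel-based approach, here is a minimal sketch using tokio’s `mpsc` channel, whose `blocking_send()` gives the sync side an executor-free way to talk to the async side (the function names are made up):

```rust
use tokio::sync::mpsc;

// Async side: receives messages without ever blocking a thread.
async fn async_worker(mut rx: mpsc::Receiver<String>) {
    while let Some(msg) = rx.recv().await {
        println!("async side got: {msg}");
    }
}

// Sync side: blocking_send() may be called from an ordinary thread,
// with no executor in sight.
fn sync_producer(tx: mpsc::Sender<String>) {
    for i in 0..3 {
        tx.blocking_send(format!("message {i}")).unwrap();
    }
    // dropping tx here closes the channel and lets the worker exit
}

#[tokio::main]
async fn main() {
    let (tx, rx) = mpsc::channel(16);
    let worker = tokio::spawn(async_worker(rx));
    let producer = std::thread::spawn(move || sync_producer(tx));
    worker.await.unwrap();    // finishes once the channel is closed
    producer.join().unwrap(); // the producer thread is already done by now
}
```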