Can using async-await give you any performance benefits?


Solution 1

In this interview, Eric Lippert compared async-await with a cook making breakfast. It helped me a lot to understand the benefits of async-await. Search somewhere in the middle of the interview for 'async-await'.

Suppose a cook has to make breakfast. He has to toast some bread and boil some eggs, and maybe make some tea as well.

Method 1: Synchronous, performed by one thread. You start toasting the bread and wait until the bread is toasted. Remove the bread. Start boiling water, wait until the water boils, and insert your egg. Wait until the egg is ready and remove it. Start boiling water for the tea, wait until the water is boiled, and make the tea.

You'll see all the waits. While the thread is waiting, it could be doing other things.

Method 2: Async-await, still one thread. You start toasting the bread. While the bread is being toasted, you start boiling water for the eggs and also for the tea. Then you start waiting. When any of the three tasks finishes, you do the second part of that task, depending on which task finished first. So if the water for the eggs boils first, you cook the eggs, and then again wait for any of the tasks to finish.

In this description only one person (you) does all the work; only one thread is involved. The nice thing is that, because only one thread is doing the work, the code looks quite synchronous to the reader, and there is not much need to make your variables thread-safe.

It's easy to see that this way your breakfast will be ready in a shorter time (and your bread will still be warm!). In computer life these situations occur when your thread has to wait for another process to finish: writing a file to disk, getting information from a database or from the internet. Those are typically the kinds of functions where you'll see an async version alongside the synchronous one: Write and WriteAsync, Read and ReadAsync.
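The breakfast analogy can be sketched in C#. This is a minimal illustration, not the interview's own code: Task.Delay stands in for the toaster and the pans, and the method names are made up for the example.

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

class Breakfast
{
    // Hypothetical "appliances": each just waits, standing in for real I/O.
    static Task ToastBreadAsync()    => Task.Delay(300);
    static Task BoilEggsAsync()      => Task.Delay(500);
    static Task BoilTeaWaterAsync()  => Task.Delay(400);

    static async Task Main()
    {
        var sw = Stopwatch.StartNew();

        // Start all three "appliances"; none of them needs our thread while waiting.
        Task toast = ToastBreadAsync();
        Task eggs  = BoilEggsAsync();
        Task tea   = BoilTeaWaterAsync();

        // One cook (thread) awaits all three. Total time is roughly the longest
        // single task, not the sum of all three as in the synchronous version.
        await Task.WhenAll(toast, eggs, tea);

        Console.WriteLine($"Breakfast ready in ~{sw.ElapsedMilliseconds} ms");
    }
}
```

The synchronous Method 1 would take roughly the sum of the three delays; here the total is close to the longest one.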

Addition: after some remarks from other users elsewhere, and some testing, I found that in fact any thread can continue your work after the await. This other thread has the same 'context', and thus can act as if it were the original thread.

Method 3: Hire cooks to toast the bread and boil the eggs while you make the tea. Truly asynchronous: several threads. This is the most expensive option, because it involves creating separate threads. In the breakfast example it will probably not speed up the process very much, because for relatively large parts of the process you are not doing anything anyway. But if, for instance, you also need to slice tomatoes, it might be handy to let a cook (a separate thread) do that while you do the other work using async-await. Of course, one of the awaits you do is awaiting the cook to finish his slicing.
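A sketch of Method 3, again with hypothetical helpers: the CPU-bound slicing is handed to a "hired cook" (a thread-pool thread) via Task.Run, while the appliances are awaited as before.

```csharp
using System;
using System.Threading.Tasks;

class BreakfastWithCook
{
    // Hypothetical stand-ins: Task.Delay plays the appliances,
    // the busy loop plays real CPU-bound work.
    static Task ToastBreadAsync() => Task.Delay(300);
    static Task BoilEggsAsync()   => Task.Delay(500);

    static int SliceTomatoes()
    {
        long work = 0;
        for (int i = 0; i < 1_000_000; i++) work += i; // CPU-bound "slicing"
        return 8;                                      // eight slices, say
    }

    static async Task Main()
    {
        // The CPU-bound slicing goes to a hired cook (thread-pool thread)...
        Task<int> slicing = Task.Run(() => SliceTomatoes());

        // ...while the original cook awaits the appliances, as in Method 2.
        await Task.WhenAll(ToastBreadAsync(), BoilEggsAsync(), slicing);

        Console.WriteLine(await slicing); // 8
    }
}
```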

Another article that explains a lot is Async and Await, written by the ever so helpful Stephen Cleary.

Solution 2

Whenever I read about async-await, the use case example is always one where there's a UI that you don't want to freeze.

That's the most common use case for async. The other one is in server-side applications, where async can increase scalability of web servers.

Are there any examples of how one could use async-await to eke out performance benefits in an algorithm?

No.

You can use the Task Parallel Library if you want to do parallel processing. Parallel processing is the use of multiple threads, dividing up parts of an algorithm among multiple cores in a system. Parallel processing is one form of concurrency (doing multiple things at the same time).
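For example, a minimal Parallel.For sketch. This divides an index range among cores, which is parallelism, not asynchrony:

```csharp
using System;
using System.Threading.Tasks;

class ParallelDemo
{
    static void Main()
    {
        long[] squares = new long[1000];

        // Parallel.For splits the index range among thread-pool threads,
        // so multiple cores can compute different parts at the same time.
        Parallel.For(0, squares.Length, i => squares[i] = (long)i * i);

        Console.WriteLine(squares[999]); // 998001
    }
}
```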

Asynchronous code is completely different. The point of async code is to avoid using the current thread while the operation is in progress. Async code is generally I/O-bound or based on events (like a timer). Asynchronous code is another form of concurrency.

I have an async intro on my blog, as well as a post on how async doesn't use threads.

Note that the tasks used by the Task Parallel Library can be scheduled onto threads and will execute code. The tasks used by the Task-Based Asynchronous Pattern do not have code and do not "execute". Although both types of task are represented by the same type (Task), they are created and used completely differently; I describe these Delegate Tasks and Promise Tasks in more detail on my blog.
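A small sketch of that difference: Task.Run produces a Delegate Task (code that a thread-pool thread executes), while TaskCompletionSource produces a Promise Task (no code; it merely completes when something signals it). The values here are arbitrary.

```csharp
using System;
using System.Threading.Tasks;

class TaskKinds
{
    static async Task Main()
    {
        // Delegate Task: wraps code that a thread-pool thread executes.
        Task<int> delegateTask = Task.Run(() => 2 + 2);

        // Promise Task: has no code to run; it just completes when something
        // signals it. Here a TaskCompletionSource stands in for an I/O driver.
        var tcs = new TaskCompletionSource<int>();
        Task<int> promiseTask = tcs.Task;
        tcs.SetResult(42); // the "I/O" completes

        Console.WriteLine(await delegateTask); // 4
        Console.WriteLine(await promiseTask);  // 42
    }
}
```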

Solution 3

In short, and in the very general case: no, it usually will not. But the question deserves a few more words, because "performance" can be understood in many ways.

Async/await 'saves time' only when the job is I/O-bound. Applying it to CPU-bound jobs will introduce some performance hit. That's because if you have a computation that takes, say, 10 seconds on your CPU(s), then adding async/await - that is, task creation, scheduling and synchronization - will simply add some extra time X on top of those 10 seconds that you still need to burn on your CPU(s) to get the job done. Something close to the idea of Amdahl's law. Not exactly it, but quite close.
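To make that concrete, here is a trivial CPU-bound job (a sketch, not a benchmark). There is no I/O to overlap, so wrapping pieces of it in tasks would only add scheduling overhead on top of the same arithmetic:

```csharp
using System;

class CpuBound
{
    // Pure CPU work: every cycle is spent computing, nothing ever waits.
    static long SumSquares(int n)
    {
        long sum = 0;
        for (int i = 0; i < n; i++) sum += (long)i * i;
        return sum;
    }

    static void Main()
    {
        // Awaiting Task.Run per chunk here would add task creation, scheduling
        // and synchronization costs without removing any waiting time.
        Console.WriteLine(SumSquares(1000)); // 332833500
    }
}
```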

However, there are some 'but's.

First of all, the performance hit from introducing async/await is often not that large (especially if you are careful not to overdo it).

Second, since async/await lets you write I/O-interleaved code much more easily, you may notice new opportunities to remove waiting time on I/O in places where you'd have been too lazy ( :) ) to do so otherwise, or in places where it would make the code too hard to follow without the async/await syntax goodness. For example, splitting the code around network requests is a rather obvious thing to do, but you may notice that you can also upgrade some file I/O in those few places where you write CSV files or read configuration files, etc. Still, note that the gain here will not come from async/await itself - it will come from rewriting the code that handles file I/O. You could do that without async/await too.
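For instance, reading a configuration file asynchronously. This is a sketch assuming a .NET version that has File.ReadAllTextAsync; the path is whatever your app uses:

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

class FileDemo
{
    static async Task<string> LoadConfigAsync(string path)
    {
        // The read is handed to the OS; no thread sits blocked while the
        // disk works, which is where the "removed waiting time" comes from.
        return await File.ReadAllTextAsync(path);
    }

    static async Task Main()
    {
        // Demo only: write a temp file so the sketch is self-contained.
        string path = Path.GetTempFileName();
        File.WriteAllText(path, "setting=1");

        Console.WriteLine(await LoadConfigAsync(path)); // setting=1
    }
}
```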

Third, since some I/O operations become easier, you may notice that offloading CPU-intensive work to another service or machine is much easier, which can improve your perceived performance too (shorter 'wall-clock' time) - but the overall resource consumption will rise: another machine added, time spent on network operations, etc.

Fourth: UI. You really don't want to freeze it. It's very easy to wrap both I/O-bound and CPU-bound jobs in Tasks, async/await them, and keep the UI responsive - that's why you see it mentioned everywhere. However, while I/O-bound operations ideally should be asynchronous down to the very leaves, to remove as much idle waiting time as possible on all lengthy I/O, CPU-bound jobs do not need to be split or made asynchronous more than just one level down. Having a huge monolithic calculation job wrapped in a single task is enough to keep the UI unblocked. Of course, if you have many processors/cores, it's still worth parallelizing whatever is possible inside, but in contrast to I/O - split too much and you will be busy switching tasks instead of chewing through the calculations.
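The "one level down" idea can be sketched like this. LongCalculation is a stand-in for your own monolithic job, and the handler name is made up; in a real app it would be a button-click handler running on the UI thread:

```csharp
using System;
using System.Threading.Tasks;

class UiSketch
{
    // Stand-in for a long CPU-bound computation.
    static int LongCalculation()
    {
        long sum = 0;
        for (int i = 0; i < 10_000_000; i++) sum += i;
        return (int)(sum % 1000);
    }

    // Hypothetical click handler: one Task.Run around the whole job is
    // enough to keep the UI thread free while the calculation runs.
    static async Task OnCalculateClickedAsync()
    {
        int result = await Task.Run(() => LongCalculation());

        // Back on the original context: in a UI app you would now do
        // something like resultLabel.Text = result.ToString();
        Console.WriteLine(result);
    }

    static Task Main() => OnCalculateClickedAsync();
}
```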

Summarizing: if you have time-consuming I/O, async operations can save a lot of time, and it's hard to overdo asynchronizing I/O operations. If you have CPU-consuming operations, then adding anything will consume more CPU time and more memory in total, but the wall-clock time can improve thanks to splitting the job into smaller parts that can perhaps run on more cores at the same time. It's not hard to overdo that, so you need to be a little careful.

Solution 4

Most often you don't gain in direct performance (the task you are performing happening faster and/or in less memory) so much as in scalability: using fewer threads to perform the same number of simultaneous tasks means the number of simultaneous tasks you can handle is higher.

For the most part, therefore, you don't find a given operation improving in performance, but you can find that performance under heavy use has improved.

If an operation comprises parallel tasks that involve something truly asynchronous (multiple asynchronous I/O operations), then that scalability can benefit even a single operation. Because the degree of blocking in threads is reduced, this holds even if you have only one core: the machine divides its time only between those tasks that are not currently waiting.

This differs from parallel CPU-bound operations, which (whether done using tasks or otherwise) will generally only scale up to the number of cores available. (Hyper-threaded cores behave like two or more cores in some regards and not in others.)

Solution 5

An async method runs on the current synchronization context and uses time on the thread only when it is active. You can use Task.Run to move CPU-bound work to a background thread, but a background thread doesn't help with a process that's just waiting for results to become available.

When you have one CPU and multiple threads in your application, your CPU switches between threads to simulate parallel processing. With async/await, your async operation doesn't need thread time, so you give more time to the other threads of your application to do their work. For instance, your (non-UI) application can still make HTTP calls, and all you need to do is wait for the response. This is one of the cases where the benefit of using async/await is big.
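A sketch of that case with HttpClient; the URL is a placeholder, and in a real app you would reuse one HttpClient instance as shown:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class HttpDemo
{
    // Reuse a single HttpClient for the life of the application.
    static readonly HttpClient client = new HttpClient();

    static async Task Main()
    {
        // While the response is in flight, no thread in this process is
        // blocked: the await frees the thread for the application's other work.
        string body = await client.GetStringAsync("https://example.com/");

        Console.WriteLine(body.Length);
    }
}
```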

When you call async DoJobAsync(), don't forget to add .ConfigureAwait(false) to get better performance in non-UI apps that don't need to marshal back to the UI thread context.
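In code, with DoJobAsync as a stand-in for your own awaitable job:

```csharp
using System;
using System.Threading.Tasks;

class ConfigureAwaitDemo
{
    // Stand-in for any awaitable job (I/O, timer, etc.).
    static Task DoJobAsync() => Task.Delay(100);

    static async Task Main()
    {
        // ConfigureAwait(false): the continuation may run on any thread-pool
        // thread instead of marshaling back to the captured context.
        await DoJobAsync().ConfigureAwait(false);

        Console.WriteLine("done");
    }
}
```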

And that's not even mentioning the nice syntax, which helps a lot to keep your code clean.

MSDN

Author: user6048670
Updated on June 13, 2022

Comments

  • user6048670
    user6048670 almost 2 years

    Whenever I read about async-await, the use case example is always one where there's a UI that you don't want to freeze. Either all programming books/tutorials are the same or UI blocking is the only case of async-await that I should know about as a developer.

    Are there any examples of how one could use async-await to eke out performance benefits in an algorithm? Like let's take any of the classic programming interview questions:

    • Find the nearest common ancestor in a binary tree
    • Given a[0], a[1], ..., a[n-1] representing digits of a base-10 number, find the next highest number that uses the same digits
    • Find the median of two sorted arrays (i.e. the median number if you were to merge them)
    • Given an array of numbers 1, 2, ..., n with one number missing, find the missing number
    • Find the largest 2 numbers in an array

    Is there any way to do those using async-await with performance benefits? And if so, what if you only have 1 processor? Then doesn't your machine just divide its time between tasks rather than really doing them at the same time?

  • Alexei Levenkov
    Alexei Levenkov about 8 years
Notes: a thread waiting for I/O (or anything else) will not be scheduled for execution - so there is not much difference between async tasks and a blocking thread from a CPU-usage point of view (unless your app uses many threads, like ASP.Net). Also, there is no point calling ConfigureAwait(false) on a non-UI thread (or in a non-UI app) - there will be no synchronization context to return to on the original thread (and obviously ConfigureAwait(false) is flat out harmful for ASP.Net - but this is common knowledge so no need to specify explicitly).
  • hendryanw
    hendryanw about 8 years
    Hi Stephen,, do you have any writing which explains how does async increase scalability of web servers?
  • quetzalcoatl
    quetzalcoatl about 8 years
@Hendry Just take any webserver-targeted paper that discusses the advantages of asynchronous I/O over multithreading and blocking, and all the key points will be basically the same. I.e. check stackoverflow.com/questions/14795145/… and mentally replace "callbacks", "queues" and "events" with "tasks/continuations" and "schedulers".
  • hendryanw
    hendryanw about 8 years
thanks @quetzalcoatl - so am I right to assume that, because the thread is now fully utilized instead of waiting for an I/O-bound operation, scalability is improved by minimizing thread-pool growth due to thread-pool starvation? Basically no wasted resources.
  • quetzalcoatl
    quetzalcoatl about 8 years
@Hendry: If I understood you well - yes. Fewer threads can handle the same number of requests, because all the threads that would sleep on I/O now spend their time doing other chunks of work => more work doable with the same N threads. Thread-pool size is one thing, but handling threads is also usually expensive. Not only creation/destruction (which is somewhat mitigated by pooling) but also thread switching (putting a thread to forced sleep, waking another thread) is more expensive than making the thread jump from jobA to jobB, although some time is still lost on task scheduling/(de)queueing/etc.
  • Stephen Cleary
    Stephen Cleary about 8 years
    @Hendry: Right. You can say that async permits you to make maximum use of your thread pool.
  • Mitch3091
    Mitch3091 over 6 years
    Thank you very much for sharing this, it helped me a lot to understand this concept.