Can I use std::async without waiting for the future limitation?


Solution 1

You can move the future into a global object, so when the local future's destructor runs it doesn't have to wait for the asynchronous thread to complete.

std::vector<std::future<void>> pending_futures;

myResponseType processRequest(args...)
{
    //Do some processing and evaluate the address and the message...

    //Sending the e-mail async
    auto f = std::async(std::launch::async, sendMail, address, message);

    // transfer the future's shared state to a longer-lived future
    pending_futures.push_back(std::move(f));

    //returning the response ASAP to the client
    return myResponseType;

}

N.B. This is not safe if the asynchronous thread refers to any local variables in the processRequest function.
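Since pending_futures would otherwise keep growing (a point raised in the comments below), you can periodically erase the futures that are already ready. A minimal sketch; the helper name prune_ready_futures is mine, not part of the original answer:

#include <algorithm>
#include <chrono>
#include <future>
#include <vector>

// Call this from time to time (e.g. at the start of processRequest) to drop
// futures whose tasks have already finished; their destructors won't block.
void prune_ready_futures(std::vector<std::future<void>>& futures)
{
    futures.erase(
        std::remove_if(futures.begin(), futures.end(),
            [](std::future<void>& f) {
                // zero-timeout poll: "ready" means the destructor won't have to wait
                return f.wait_for(std::chrono::seconds(0)) == std::future_status::ready;
            }),
        futures.end());
}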

Regarding the note that "using std::async (at least on MSVC) is using an inner thread pool":

That's actually non-conforming: the standard explicitly says tasks run with std::launch::async must run as if on a new thread, so thread-local variables must not persist from one task to another. It usually doesn't matter though.
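A quick way to observe that requirement (a sketch I'm adding here, not part of the original answer): under a conforming implementation the thread_local counter below prints 1 for every task, because each std::launch::async task must behave as if it ran on a fresh thread. On an implementation that reuses pool threads and lets thread-locals persist, you may see the counter grow instead.

#include <future>
#include <iostream>

void task() {
    thread_local int calls = 0;   // per-thread state
    std::cout << ++calls << '\n'; // a conforming implementation prints 1 every time
}

int main() {
    for (int i = 0; i != 4; ++i)
        std::async(std::launch::async, task).get(); // .get() waits, so the tasks run one after another
}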

Solution 2

Why not just start a thread and detach it, if you don't care about joining?

std::thread{ sendMail, address, message }.detach();
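One caveat with detaching, raised in the comments below: there is no future to carry an exception back, and an exception escaping the thread function calls std::terminate. A sketch of handling that inside the detached thread (assuming sendMail may throw and <iostream> is available for logging):

std::thread{ [address, message] {
    try {
        sendMail(address, message);
    } catch (const std::exception& e) {
        // no future to transport the exception, so handle or log it here
        std::cerr << "sendMail failed: " << e.what() << '\n';
    }
} }.detach();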

std::async is bound to the lifetime of the std::future it returns, and there is no alternative to that.

Putting the std::future in a waiting queue read by another thread would require the same safety mechanisms as a pool receiving new tasks, such as a mutex around the container.

Your best option, then, is a thread pool consuming tasks pushed directly into a thread-safe queue. It also does not depend on a specific std::async implementation.

Below is a thread pool implementation that takes any callable with its arguments. The threads poll the queue; a better implementation would use condition variables (coliru):

#include <iostream>
#include <queue>
#include <memory>
#include <thread>
#include <mutex>
#include <functional>
#include <string>

struct ThreadPool {
    // Type-erased task interface so the queue can hold any callable.
    struct Task {
        virtual void Run() const = 0;
        virtual ~Task() {}
    };

    // Concrete task: stores the callable and its arguments, bound at enqueue time.
    template < typename task_, typename... args_ >
    struct RealTask : public Task {
        RealTask( task_&& task, args_&&... args ) : fun_( std::bind( std::forward<task_>(task), std::forward<args_>(args)... ) ) {}
        void Run() const override {
            fun_();
        }
    private:
        decltype( std::bind(std::declval<task_>(), std::declval<args_>()... ) ) fun_;
    };

    // Enqueue a task; the mutex protects the queue against the worker threads.
    template < typename task_, typename... args_ >
    void AddTask( task_&& task, args_&&... args ) {
        auto lock = std::unique_lock<std::mutex>{mtx_};
        using FinalTask = RealTask<task_, args_... >;
        q_.push( std::unique_ptr<Task>( new FinalTask( std::forward<task_>(task), std::forward<args_>(args)... ) ) );
    }

    ThreadPool() {
        for( auto & t : pool_ )
            t = std::thread( [this] {
                while ( true ) {
                    std::unique_ptr<Task> task;
                    {
                        auto lock = std::unique_lock<std::mutex>{mtx_};
                        if ( q_.empty() && stop_ )   // shutdown requested and queue drained
                            break;
                        if ( q_.empty() )            // busy-wait: drop the lock and poll again
                            continue;
                        task = std::move(q_.front());
                        q_.pop();
                    }
                    if (task)
                        task->Run();                 // run the task outside the lock
                }
            } );
    }
    ~ThreadPool() {
        {
            auto lock = std::unique_lock<std::mutex>{mtx_};
            stop_ = true;
        }
        // Workers finish the remaining tasks, then exit and are joined.
        for( auto & t : pool_ )
            t.join();
    }
private:
    std::queue<std::unique_ptr<Task>> q_;
    std::thread pool_[8];        // fixed-size pool of 8 worker threads
    std::mutex mtx_;
    volatile bool stop_ {};      // only read/written while holding mtx_
};

void foo( int a, int b ) {
    std::cout << a << "." << b;
}
void bar( std::string const & s) {
    std::cout << s;
}

int main() {
    ThreadPool pool;
    for( int i{}; i!=42; ++i ) {
        pool.AddTask( foo, 3, 14 );    
        pool.AddTask( bar, " - " );    
    }
}
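The polling loop above burns CPU while the queue is empty, as the comments below point out. Here is a rough sketch of the same pool built around a std::condition_variable instead; the class name CvThreadPool, the std::vector of workers, and the use of std::function<void()> in place of the hand-rolled Task hierarchy are my simplifications, not part of the original answer:

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct CvThreadPool {
    CvThreadPool(std::size_t n = 8) {
        for (std::size_t i = 0; i != n; ++i)
            workers_.emplace_back([this] {
                for (;;) {
                    std::function<void()> task;
                    {
                        std::unique_lock<std::mutex> lock{mtx_};
                        // sleep until there is work or shutdown was requested
                        cv_.wait(lock, [this] { return stop_ || !q_.empty(); });
                        if (q_.empty())      // stop_ is set and the queue is drained
                            return;
                        task = std::move(q_.front());
                        q_.pop();
                    }
                    task();                  // run outside the lock
                }
            });
    }

    template <typename F, typename... Args>
    void AddTask(F&& f, Args&&... args) {
        {
            std::lock_guard<std::mutex> lock{mtx_};
            q_.push(std::bind(std::forward<F>(f), std::forward<Args>(args)...));
        }
        cv_.notify_one();                    // wake exactly one sleeping worker
    }

    ~CvThreadPool() {
        {
            std::lock_guard<std::mutex> lock{mtx_};
            stop_ = true;
        }
        cv_.notify_all();                    // wake everyone so they can drain and exit
        for (auto& t : workers_) t.join();
    }

private:
    std::queue<std::function<void()>> q_;
    std::vector<std::thread> workers_;
    std::mutex mtx_;
    std::condition_variable cv_;
    bool stop_ = false;                      // guarded by mtx_
};

Usage mirrors the pool above, e.g. pool.AddTask( foo, 3, 14 ); each AddTask wakes one sleeping worker, and the destructor wakes them all so they can drain the queue and exit.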

Comments

  • Roee Gavirel almost 2 years

    High level
    I want to call some functions with no return value asynchronously, without waiting for them to finish. If I use std::async, the future object doesn't destruct until the task is over, which makes the call effectively synchronous in my case.

    Example

    void sendMail(const std::string& address, const std::string& message)
    {
        //sending the e-mail which takes some time...
    }
    
    myResponseType processRequest(args...)
    {
        //Do some processing and evaluate the address and the message...
    
        //Sending the e-mail async
        auto f = std::async(std::launch::async, sendMail, address, message);
    
        //returning the response ASAP to the client
        return myResponseType;
    
    } //<-- I'm stuck here until the async call finishes so that f can be destructed,
      // gaining no benefit from the async call.
    

    My questions are

    1. Is there a way to overcome this limitation?
    2. If (1) is no, should I implement a dedicated thread that will take those "zombie" futures and wait on them?
    3. If (1) and (2) are no, is there any other option than building my own thread pool?

    note:
    I'd rather not use the thread+detach option (suggested by @galop1n), since creating a new thread has an overhead I wish to avoid, while std::async (at least on MSVC) uses an internal thread pool.

    Thanks.

  • Roee Gavirel over 10 years
    std::async when compiled with MSVC uses an internal thread pool. Creating a thread myself each time has a performance overhead I wish to avoid.
  • Casey over 10 years
    The program has a data race: ThreadPool and ~ThreadPool access stop_ potentially simultaneously. volatile has no useful (portable) semantics for multithreading: it needs to be a std::atomic or ~ThreadPool needs to access it with mtx_ held. Your threads also busy-wait, it would be nice to block on a condition variable while the queue is empty.
  • galop1n over 10 years
    @Casey This is only a proof of concept for the task queue; I tried to keep it as simple as possible. Also, I have never given the C++11 condition variables a serious try (only native ones) and did not want to get something wrong; in fact, I'm using this sample to try them out right now. I added a note about using a condition variable in a real use case.
  • Maxim Egorushkin about 10 years
    Your thread in ThreadPool busy-spins, wasting CPU. Learn how to use condition variables.
  • Roee Gavirel about 10 years
    That is a bad approach; I'll end up with a constantly growing vector (pending_futures).
  • Jonathan Wakely about 10 years
    So go through the vector periodically and remove the ready futures. You could add that to the processRequest function, so every time you call it you see if there are any ready futures that can be removed from the vector. That's not complicated.
  • Roee Gavirel about 10 years
    But that requires, more or less, the same overhead as creating a thread pool to handle the async calls, which was my choice in the end.
  • Jonathan Wakely about 10 years
    Your question was how to avoid waiting in the future destructor, which I answered. If you want to create your own thread pool that's fine (although I doubt your thread pool is as efficient as the one in the Windows runtime) but that doesn't change what you originally asked.
  • Jens Munk about 10 years
    I am personally not very fond of Microsoft's non-conforming implementation of std::async. I investigated this for a colleague, and there is only an overhead the first time std::async is called. At that first call, all threads are initialized and put in a waiting state, at least with VS2010.
  • Amit over 8 years
    If std::async is called in a class method, you can assign the future it returns to a class member. This way, waiting for the future is deferred until the object is destructed.
  • Jonathan Wakely almost 8 years
    @starfury no you can't, that won't compile. f is an lvalue; you can't construct another future from it without turning it into an rvalue.
  • scrutari almost 8 years
    @Jonathan Wakely, you are right about the above example, but you could easily change it so that only an rvalue is used and there is no f variable. It would be even safer in that case.
  • Jonathan Wakely almost 8 years
    @starfury, which would make the example less clear and harder to read, so it would not be a good example. There is nothing unsafe about the example as written.
  • hanshenrik over 5 years
    @JensMunk I wouldn't want all C++ programs to start a bunch of useless threads just in case std::async is ever called, because the majority of programs out there don't use it.
  • Paulo Neves almost 5 years
    Another reason for using async over thread is that thread will not save the exception for you, so you need an extra mechanism to know that the thread failed and why.
  • Roee Gavirel about 3 years
    Thanks for the idea, but it won't work. It's true that static variables are only visible in the block they are defined in, but they are also singletons in the sense that two calls to the function use the same f, which makes the second call block on that line until the first call's future is ready.
  • Kitiara about 3 years
    @RoeeGavirel I just edited my answer. No more blocking with the new code. Have fun.
  • Roee Gavirel about 3 years
    I'm long past the days when I was programming in C/C++, but wouldn't it leave a dangling pointer and lead to a memory leak?
  • Kitiara about 3 years
    @RoeeGavirel unique_ptr is a smart pointer, which provides automatic memory management. Besides, I'm destroying the object with reset() as soon as I've created the unique_ptr, so there is no possible way to cause a memory leak.