Are there any concurrent containers in C++11?

Solution 1

According to Diego Dagum from Microsoft's Visual C++ Team:

A recurrent question (well, one of the many) is about STL containers and whether they are thread safe.

Taking Stephan’s words here, the reality is that they aren’t, not as a bug but as a feature: having every member function of every STL container acquire an internal lock would annihilate performance. For a general-purpose, highly reusable library, it wouldn’t actually provide correctness either: the correct level at which to place locks is determined by what the program is doing, and in that sense individual member functions don’t tend to be that level.

The Parallel Patterns Library (PPL) includes several containers that provide thread-safe access to their elements:

  • The concurrent_vector Class is a sequence container class that allows random access to any element. It enables concurrency-safe append, element access, iterator access and iterator traversal operations.
  • The concurrent_queue Class is a sequence container class that allows first-in, first-out access to its elements. It enables a limited set of concurrency-safe operations, such as push and try_pop, to name a few.
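For illustration, a minimal sketch of how these might be used (assuming MSVC with the PPL headers available):

#include <concurrent_queue.h>
#include <concurrent_vector.h>

int main()
{
    // Concurrency-safe append and element access.
    concurrency::concurrent_vector<int> v;
    v.push_back(1);
    v.push_back(2);

    // Concurrency-safe FIFO: push and try_pop.
    concurrency::concurrent_queue<int> q;
    q.push(42);

    int value;
    if (q.try_pop(value))
    {
        // value now holds 42
    }
}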

Some samples here.

Also interesting: http://www.justsoftwaresolutions.co.uk/threading/implementing-a-thread-safe-queue-using-condition-variables.html.
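The article above builds a thread-safe blocking queue from std::mutex and std::condition_variable. A minimal C++11 sketch of that idea (illustrative only, not the article's exact code):

#include <condition_variable>
#include <mutex>
#include <queue>
#include <utility>

template <typename T>
class blocking_queue {
public:
    void push(T value)
    {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_queue.push(std::move(value));
        }
        m_cond.notify_one();  // wake one waiting consumer
    }

    // Blocks until an element is available.
    T pop()
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_cond.wait(lock, [this] { return !m_queue.empty(); });
        T value = std::move(m_queue.front());
        m_queue.pop();
        return value;
    }

private:
    std::queue<T> m_queue;
    std::mutex m_mutex;
    std::condition_variable m_cond;
};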

Solution 2

C++11 does not provide concurrent containers by itself. However, there are library options. Besides the already mentioned PPL, don't forget the Intel TBB library.

It has concurrent queue, hash map, set and vector implementations. It is not only a thread-safe container library; it also comes with parallel versions of standard algorithms (for-loop, reduce, sort, ...).

Intel TBB website
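A minimal sketch with TBB's concurrent queue and a parallel loop (assuming the TBB headers and library are available; header paths may differ in newer oneTBB releases):

#include <tbb/concurrent_queue.h>
#include <tbb/parallel_for.h>

int main()
{
    tbb::concurrent_queue<int> queue;

    // The loop body may run on several threads; the queue handles
    // concurrent pushes safely.
    tbb::parallel_for(0, 100, [&](int i) {
        queue.push(i);
    });

    int value;
    while (queue.try_pop(value))
    {
        // consume value
    }
}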

Solution 3

I am surprised that nobody has mentioned moodycamel::ConcurrentQueue. We have been using it for quite some time and it performs very well. Its implementation is lock-free, which gives it excellent speed. Other reasons for using it (quoting from the official site):

There are not that many full-fledged lock-free queues for C++. Boost has one, but it's limited to objects with trivial assignment operators and trivial destructors, for example. Intel's TBB queue isn't lock-free, and requires trivial constructors too. There's many academic papers that implement lock-free queues in C++, but usable source code is hard to find, and tests even more so.

Some benchmarks and comparisons are available here, here and here.

Caveat: with multiple producers, the order of popped elements is not guaranteed to match the order of pushed elements (@IgorLevicki), so if you need this guarantee, look for some other option.
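A minimal usage sketch (assuming the project's single header concurrentqueue.h is on the include path):

#include "concurrentqueue.h"

int main()
{
    moodycamel::ConcurrentQueue<int> q;

    q.enqueue(25);            // lock-free enqueue

    int item;
    if (q.try_dequeue(item))  // non-blocking; returns false if the queue is empty
    {
        // item now holds 25
    }
}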

Solution 4

The containers' interfaces have simply not been designed with this objective in mind. For the interfaces they use, a lock visible to the client is really the only way to accomplish this while guaranteeing correctness and predictable behaviour. It would also be terribly inefficient because the number of lock acquisitions would be very high (relative to a good implementation).

Solution 1

Pass by value (where applicable).

Solution 2

Create a collection of simple bolt-on wrappers that you can use to pass containers while holding a scoped lock (consider it pseudo-C++):

template <typename TCollection>
class t_locked_collection {
public:
    // Pairs a reference to the collection with a scope lock that is
    // held for the lifetime of this wrapper.
    t_locked_collection(TCollection& inCollection, t_lock& lock)
        : collection(inCollection), d_lock(lock), d_nocopy() {
    }

    TCollection& collection;
    // your convenience stuff
private:
    t_scope_lock d_lock;  // acquired on construction, released on destruction
    t_nocopy d_nocopy;    // the wrapper must not be copied
};

The caller then pairs the lock with the collection, and you update your interfaces to pass this wrapper type where appropriate. It's just a poor man's class extension.

This locked container is one simple example, and there are a few other variants. This is the route I chose because it really allows you to use the level of granularity that is ideal for your program, even though it is not as transparent (syntactically) as locked methods. It is also relatively easy to adapt existing programs. At least it behaves in a predictable manner, unlike collections with internal locks.

Another variant would be:

template <typename TCollection>
class t_lockable_collection {
public:
// ...
private:
    TCollection d_collection;
    t_mutex d_mutex;
};

// example:
typedef t_lockable_collection<std::vector<int> > t_lockable_int_vector;

...where a type similar to t_locked_collection could be used to expose the underlying collection. Not to imply that this approach is foolproof, just fool-resistant.
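For illustration, a concrete sketch of the first variant using only standard library types (the names here are made up, not part of any library):

#include <mutex>
#include <vector>

template <typename TCollection>
class locked_collection {
public:
    // Acquires the mutex for the lifetime of this wrapper.
    locked_collection(TCollection& in_collection, std::mutex& mutex)
        : collection(in_collection), d_lock(mutex) {}

    // Non-copyable: the lock must have exactly one owner.
    locked_collection(const locked_collection&) = delete;
    locked_collection& operator=(const locked_collection&) = delete;

    TCollection& collection;

private:
    std::lock_guard<std::mutex> d_lock;
};

// Usage: the caller decides the locking granularity.
//
//     std::vector<int> data;
//     std::mutex data_mutex;
//     {
//         locked_collection<std::vector<int>> locked(data, data_mutex);
//         locked.collection.push_back(42);
//     }   // lock released here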

Solution 5

My version of a concurrent unordered map:

#include <functional>
#include <mutex>
#include <unordered_map>
#include <utility>
#include <vector>

namespace concurrency {

template<typename T,typename T1>
class unordered_bucket: private std::unordered_map<T,T1>
{
mutable std::recursive_mutex m_mutex;

public:
T1 &operator [](T a)
{
    std::lock_guard<std::recursive_mutex> l(m_mutex);
    return std::unordered_map<T,T1>::operator [](a);
}

size_t size() const noexcept {
    std::lock_guard<std::recursive_mutex> l(m_mutex);
    return  std::unordered_map<T,T1>::size();
}

std::vector<std::pair<T,T1>> toVector() const
{
    std::lock_guard<std::recursive_mutex> l(m_mutex);

    std::vector<std::pair<T,T1>> ret;
    for(const auto &p:*this)   // element type is std::pair<const T,T1>
    {
        ret.push_back(std::pair<T,T1>(p.first,p.second));
    }
    return ret;
}

bool find(const T &t) const
{
    std::lock_guard<std::recursive_mutex> l(m_mutex);
    if(this->std::unordered_map<T,T1>::find(t) == this->end())
        return false;  //not found
    return true;
}
void erase()
{
    std::lock_guard<std::recursive_mutex> l(m_mutex);
    this->std::unordered_map<T,T1>::erase(this->begin(),this->end());
}
void erase(const T &t)
{
    std::lock_guard<std::recursive_mutex> l(m_mutex);
    this->std::unordered_map<T,T1>::erase(t);
}
};

#define BUCKETCOUNT 10
template<typename T,typename T1>
class ConcurrentMap
{
std::vector<unordered_bucket<T,T1>> m_v;
public:
ConcurrentMap():m_v(BUCKETCOUNT){}   //using 10 buckets

T1 &operator [](T a)
{
    std::hash<T> h;
    return m_v[h(a)%BUCKETCOUNT][a];
}

size_t size() const noexcept {
    size_t cnt=0;

    for(const unordered_bucket<T,T1> &ub:m_v)
        cnt=cnt+ub.size();

    return  cnt;
}

std::vector<std::pair<T,T1>> toVector() const
{
    std::vector<std::pair<T,T1>> ret;
    for(const unordered_bucket<T,T1> &u:m_v)
    {
        const std::vector<std::pair<T,T1>> &data=u.toVector();
        ret.insert(ret.end(),data.begin(),data.end());
    }
    return ret;
}

bool find(const T &t) const
{
    for(const unordered_bucket<T,T1> &u:m_v)
        if(true == u.find(t))
            return true;
    return false;
}
void erase()
{
    for(unordered_bucket<T,T1> &u:m_v)
        u.erase();
}
void erase(const T &t)
{
    std::hash<T> h;
    unordered_bucket<T,T1> &ub = m_v[h(t)%BUCKETCOUNT];
    ub.erase(t);
}
};
}
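For illustration, usage of the class above might look like this (a hypothetical example; the keys and values are arbitrary):

#include <iostream>
#include <string>
#include <thread>

int main()
{
    concurrency::ConcurrentMap<std::string,int> counts;

    // Writers on different threads; the map serializes access per bucket.
    std::thread t1([&counts] { counts["apples"] = 3; });
    std::thread t2([&counts] { counts["oranges"] = 5; });
    t1.join();
    t2.join();

    std::cout << counts.size() << " entries\n";    // 2
    std::cout << counts.find("apples") << "\n";    // 1 (found)

    counts.erase("apples");
}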

Comments

  • fredoverflow:

    In particular, I am looking for a blocking queue. Is there such a thing in C++11? If not, what are my other options? I really don't want to go down to the thread level myself anymore. Way too error-prone.