How to give priority to privileged thread in mutex locking?


Solution 1

I can think of three methods using only threading primitives:

Triple mutex

Three mutexes would work here:

  • data mutex ('M')
  • next-to-access mutex ('N'), and
  • low-priority access mutex ('L')

Access patterns are:

  • Low-priority threads: lock L, lock N, lock M, unlock N, { do stuff }, unlock M, unlock L
  • High-priority thread: lock N, lock M, unlock N, { do stuff }, unlock M

That way the access to the data is protected, and the high-priority thread can get ahead of the low-priority threads in access to it.

Mutex, condition variable, atomic flag

The primitive way to do this is with a condition variable and an atomic:

  • Mutex M;
  • Condvar C;
  • atomic bool hpt_waiting;

Data access patterns:

  • Low-priority thread: lock M, while (hpt_waiting) wait C on M, { do stuff }, broadcast C, unlock M
  • High-priority thread: hpt_waiting := true, lock M, hpt_waiting := false, { do stuff }, broadcast C, unlock M

Mutex, condition variable, two non-atomic flags

Alternatively you can use two non-atomic bools with a condvar; in this technique the mutex/condvar protects the flags, and the data is protected not by a mutex but by a flag:

  • Mutex M;
  • Condvar C;
  • bool data_held, hpt_waiting;

Data access patterns:

  • Low-priority thread: lock M, while (hpt_waiting or data_held) wait C on M, data_held := true, unlock M, { do stuff }, lock M, data_held := false, broadcast C, unlock M
  • High-priority thread: lock M, hpt_waiting := true, while (data_held) wait C on M, data_held := true, unlock M, { do stuff }, lock M, data_held := false, hpt_waiting := false, broadcast C, unlock M

Solution 2

Put requesting threads on a 'priority queue'. The privileged thread can get first go at the data when it's free.

One way to do this would be with an array of ConcurrentQueues[privilegeLevel], a lock and some events.

Any thread that wants access to the data takes the lock. If the data is free (a boolean flag says so), it gets the data object and exits the lock. If the data is in use by another thread, the requesting thread pushes an event onto one of the concurrent queues, chosen by its privilege level, exits the lock and waits on the event.

When a thread wants to release its ownership of the data object, it takes the lock and iterates the array of ConcurrentQueues from the highest-privilege end down, looking for an event (i.e. queue count > 0). If it finds one, it signals it and exits the lock; if not, it sets the 'dataFree' boolean and exits the lock.

When a thread waiting on an event for access to the data is made ready, it may access the data object.

I think that should work. Please, other developers, check this design and see if you can think of any races etc.? I'm still suffering somewhat from 'hospitality overload' after a trip to CZ..

Edit - probably don't even need concurrent queues because of the explicit lock across them all. Any old queue would do.

Solution 3

#include <thread>
#include <mutex>
#include <condition_variable>
#include <cassert>

class priority_mutex {
  std::condition_variable cv_;
  std::mutex gate_;
  bool locked_;
  std::thread::id pr_tid_; // priority thread
public:
  priority_mutex() : locked_(false) {}
  ~priority_mutex() { assert(!locked_); }
  priority_mutex(const priority_mutex&) = delete;
  priority_mutex& operator=(const priority_mutex&) = delete;

  void lock(bool privileged = false) {
    const std::thread::id tid = std::this_thread::get_id();
    std::unique_lock<decltype(gate_)> lk(gate_);
    if (privileged)
      pr_tid_ = tid;
    cv_.wait(lk, [&]{
      return !locked_ && (pr_tid_ == std::thread::id() || pr_tid_ == tid);
    });
    locked_ = true;
  }

  void unlock() {
    std::lock_guard<decltype(gate_)> lk(gate_);
    if (pr_tid_ == std::this_thread::get_id())
      pr_tid_ = std::thread::id();
    locked_ = false;
    cv_.notify_all();
  }
};

NOTICE: This priority_mutex provides unfair thread scheduling. If the privileged thread acquires the lock frequently enough, the other (non-privileged) threads may hardly ever be scheduled (starvation).

Usage example:

#include <mutex>
priority_mutex mtx;

void privileged_thread()
{
  //...
  {
    mtx.lock(true);  // acquire 'priority lock'
    std::unique_lock<decltype(mtx)> lk(mtx, std::adopt_lock);
    // update shared state, etc.
  }
  //...
}

void normal_thread()
{
  //...
  {
    std::unique_lock<decltype(mtx)> lk(mtx);  // acquire 'normal lock'
    // do something
  }
  //...
}

Solution 4

pthreads has thread priorities:

pthread_setschedprio(mThreadId, wpri); // mThreadId is a pthread_t, passed by value

If multiple threads are sleeping while waiting on a lock, the scheduler (under a real-time policy such as SCHED_FIFO) will wake the highest-priority thread first.

Solution 5

On Linux, see man pthread_setschedparam and also man sched_setscheduler:

pthread_setschedparam(pthread_t thread, int policy, const struct sched_param *param);

See also this overview of thread priorities (.NET, for comparison): http://msdn.microsoft.com/en-us/library/system.threading.thread.priority.aspx#Y78
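For C++11 std::thread on Linux, the pthread calls above can be applied through native_handle(). A hedged sketch (make_realtime is an illustrative helper name, not a standard function; SCHED_FIFO priorities run 1..99 and normally require root or CAP_SYS_NICE, otherwise pthread_setschedparam returns EPERM):

```cpp
#include <pthread.h>
#include <sched.h>
#include <thread>

// Give a std::thread a real-time SCHED_FIFO priority via its pthread
// handle. Returns 0 on success, or an errno-style code (e.g. EPERM
// when run without the required privilege).
inline int make_realtime(std::thread& t, int prio) {
  sched_param sch{};
  sch.sched_priority = prio;  // keep within sched_get_priority_min/max(SCHED_FIFO)
  return pthread_setschedparam(t.native_handle(), SCHED_FIFO, &sch);
}
```

Note that thread priority alone does not guarantee the privileged thread wins the mutex (see the comments below); it only biases which waiter the scheduler wakes first.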

Author: d3k

Updated on January 26, 2021

Comments

  • d3k
    d3k over 3 years

First of all: I am a complete newbie in mutex/multithread programming, so sorry for any errors in advance...

    I have a program that runs multiple threads. The threads (usually one per cpu core) do a lot of calculation and "thinking" and then sometimes they decide to call a particular (shared) method that updates some statistics. The concurrency on statistics updates is managed through the use of a mutex:

    stats_mutex.lock();
    common_area->update_thread_stats( ... );
    stats_mutex.unlock();
    

Now to the problem. Of all those threads, there is one particular thread that needs almost real-time priority, because it's the only one that actually operates.

    With "almost realtime priority" I mean:

Let's suppose thread t0 is the "privileged one" and t1...t15 are the normal ones. What happens now is:

• Thread t1 acquires the lock.
• Threads t2, t3 and t0 call the lock() method and wait for it to succeed.
• Thread t1 calls unlock().
• One of the threads t2, t3, t0 (at random, as far as I know) succeeds in acquiring the lock, and the others continue to wait.

    What I need is:

• Thread t1 acquires the lock.
• Threads t2, t3 and t0 call the lock() method and wait for it to succeed.
• Thread t1 calls unlock().
• Thread t0 acquires the lock, since it's privileged.

    So, what's the best (possibly simplest) method to do this thing?

    What I was thinking is to have a bool variable called "privileged_needs_lock".

But I think I need another mutex to manage access to this variable... I don't know if this is the right way...

    Additional info:

    • my threads use C++11 (as of gcc 4.6.3)
    • code needs to run on both Linux and Windows (but tested only on Linux at the moment).
• performance of the locking mechanism is not an issue (my performance problems are in the internal thread calculations, and the thread number will always be low, one or two per CPU core at maximum)

    Any idea is appreciated. Thanks


The solution below works (three-mutex way):

    #include <thread>
    #include <iostream>
    #include <mutex>
    #include <unistd.h>
    
    std::mutex M;
    std::mutex N;
    std::mutex L;
    
    void lowpriolock(){
      L.lock();
      N.lock();
      M.lock();
      N.unlock();
    }
    
    void lowpriounlock(){
      M.unlock();
      L.unlock();
    }
    
    void highpriolock(){
      N.lock();
      M.lock();
      N.unlock();
    }
    
    void highpriounlock(){
      M.unlock();
    }
    
    void hpt(const char* s){
      using namespace std;
      //cout << "hpt trying to get lock here" << endl;
      highpriolock();
      cout << s << endl;
      sleep(2);
      highpriounlock();
    }
    
    void lpt(const char* s){
      using namespace std;
      //cout << "lpt trying to get lock here" << endl;
      lowpriolock();
      cout << s << endl;
      sleep(2);
      lowpriounlock();
    }
    
    int main(){
      std::thread t0(lpt,"low prio t0 working here");
      std::thread t1(lpt,"low prio t1 working here");
      std::thread t2(hpt,"high prio t2 working here");
      std::thread t3(lpt,"low prio t3 working here");
      std::thread t4(lpt,"low prio t4 working here");
      std::thread t5(lpt,"low prio t5 working here");
      std::thread t6(lpt,"low prio t6 working here");
      std::thread t7(lpt,"low prio t7 working here");
      //std::cout << "All threads created" << std::endl;
      t0.join();
      t1.join();
      t2.join();
      t3.join();
      t4.join();
      t5.join();
      t6.join();
      t7.join();
      return 0;
    }
    

I tried the solution below as suggested, but it does not work (compiled with "g++ -std=c++0x -o test test.cpp -lpthread"):

    #include <thread>
    #include <mutex>
    
    #include <unistd.h>   // usleep, sleep
    #include <cstdio>     // printf
    #include <pthread.h>
    
    std::mutex l;
    
    void waiter(){
      l.lock();
      printf("Here i am, waiter starts\n");
      sleep(2);
      printf("Here i am, waiter ends\n");
      l.unlock();
    }
    
    void privileged(int id){
      usleep(200000);
      l.lock();
      usleep(200000);
      printf("Here i am, privileged (%d)\n",id);
      l.unlock();  
    }
    
    void normal(int id){
      usleep(200000);
      l.lock();
      usleep(200000);
      printf("Here i am, normal (%d)\n",id);
      l.unlock();    
    }
    
    int main(){
      std::thread tw(waiter);
      std::thread t1(normal,1);
      std::thread t0(privileged,0);
      std::thread t2(normal,2);
    
      sched_param sch;
      int policy; 
    
      pthread_getschedparam(t0.native_handle(), &policy, &sch);
      sch.sched_priority = -19; // invalid: SCHED_FIFO priorities run 1..99 (and need root), so this call fails
      pthread_setschedparam(t0.native_handle(), SCHED_FIFO, &sch);
    
      pthread_getschedparam(t1.native_handle(), &policy, &sch);
      sch.sched_priority = 18;
      pthread_setschedparam(t1.native_handle(), SCHED_FIFO, &sch);
    
      pthread_getschedparam(t2.native_handle(), &policy, &sch);
      sch.sched_priority = 18;
      pthread_setschedparam(t2.native_handle(), SCHED_FIFO, &sch);
      
      tw.join();
      t1.join();
      t0.join();
      t2.join();
    
      return 0;  
    }
    
  • RedX
    RedX almost 12 years
    Can you please elaborate on how setting the scheduling priority will help the real-time thread go first on the shared access?
  • d3k
    d3k almost 12 years
    I like this solution, it's quite simple and elegant, and allows for future expansions (different prioritites, for example). Sorry for newbie question: what would you use for thread signalling? a condition variable? standard unix signals?
  • Martin James
    Martin James almost 12 years
    @d3k Well, on Windows, (which I am more familiar with), an AutoResetEvent would do, either as a data member of a thread class or actually allocated inside this 'PrivilegeQueue' class. You could even store 'spare' AREs in another queue to save continually making system calls to create them. On Linux, I guess a condvar would do, or a sema - anything that can be waited on by one thread and signaled by another.
  • Flexo
    Flexo almost 12 years
    @d3k - you're correct in thinking a condvar would be the way to do it.
  • Martin James
    Martin James almost 12 years
    @Flexo - is it expensive to create a condvar? Would it be a reasonable optimization to 'pool' left-over condvars in a queue for use by later threads as and when they turn up? Just curious in case I ever want to do this myself.
  • Flexo
    Flexo almost 12 years
    @MartinJames - I'd use just one condvar for the "low priority" clients to sleep on, you can ask for just one to be awoken like that.
  • Flexo
    Flexo almost 12 years
    That still leaves your "priority" thread with only a 50-50 chance of getting it every time the lock becomes available.
  • Martin James
    Martin James almost 12 years
    @Flexo - hmm.. maybe we could get away with one condvar for each privilege level - every thread at that level would then wait on the condvar for that level. That would fix the issue of continually creating condvars without any pool queue. I guess the only downside is that there is no FIFO behaviour within threads at the same level. Not much of a downside, probably:)
  • d3k
    d3k almost 12 years
    I'll try with a condvar. @Flexo: cool using a condvar only for "low priority" threads, makes things simpler.
  • Rafael Baptista
    Rafael Baptista almost 12 years
    No. Say you have 5 low priority threads waiting on mLock, and one low pri thread in doStuff. Along comes the hi pri thread and it locks mPriLock. Now it is waiting on the current thread in doStuff. That thread finishes and unlocks mPriLock - now the hi-pri thread's lock on mPriLock succeeds. The low pri threads are waiting on mLock that gets unlocked second.
  • d3k
    d3k almost 12 years
    I got your idea, but I didn't manage to create a thread from your class... my fault (lack of knowledge...):
  • d3k
    d3k almost 12 years
    main(){ ThreadPrioFun p; sem_t* s; std::thread t1(p.fun,1,s); std::thread t2(p.fun,2,s); std::thread t0(p.fun,0,s); std::thread t3(p.fun,3,s); t1.join(); t2.join(); t0.join(); t3.join();
  • d3k
    d3k almost 12 years
    gives: test2.cpp:60:26: error: no matching function for call to ‘std::thread::thread(<unresolved overloaded function type>, int, sem_t*&)’
  • Flexo
    Flexo almost 12 years
    There's no guarantee that the high priority thread can take the one lock before a low priority thread can take both. Imagine a single CPU machine where a low priority process has it and all the other processes are waiting. All threads are currently blocked except the running low-prio one. If the timeslices are coarse enough (there's no reason for them not to be) then the releasing of both locks by the current holder happens at once. The set of runnable threads is now all threads; whichever thread, high or low priority, gets the next chunk of CPU will get the lock.
  • Flexo
    Flexo almost 12 years
    Just because one thread is in a runnable state before another doesn't mean it will get run first. You guarantee that the high priority thread is the first one to become runnable, but not that it is the only one runnable next time a context swap happens or that it is the first to make progress.
  • d3k
    d3k almost 12 years
    Sorry, but let's say there's only one priority thread running. It's locking only mPriLock and it's doing Stuff. A low-prio thread arrives, locks mLock, and then stays inside mPriLock.lock() waiting for the lock to be free. Another hi-prio thread arrives, and it stays inside mPriLock.lock() too. So you have 2 threads both in mPriLock.lock(), so it's 50%-50%.
  • Rafael Baptista
    Rafael Baptista almost 12 years
    @d3k: that only happens if the hi-pri thread tries to run twice in a row with nothing happening in between. In that case, the low-pri thread will win 50% of the time. But I do see another problem, with the scheduler's granularity. E.g. a low-pri thread can wake and grab both locks before the hi-pri thread wakes and grabs its hi-pri lock. But that can be solved with a pthread_yield. It will force the low-pri threads to go back to the scheduler at least once before trying to lock the hi-pri lock.
  • Rafael Baptista
    Rafael Baptista almost 12 years
    @flexo: yes you're right. But you can pthread_yield() between both locks. Fixed.
  • Flexo
    Flexo almost 12 years
    yield never fixes anything. It's nothing more than an optimisation that might tweak probabilities but not enforce any rules, or change any overall characteristics.
  • Martin James
    Martin James almost 12 years
    @d3k - careful! Even high-privilege thread/s may have to wait because a low-privilege thread is using the object. I think you need, at least, one condvar per privilege level.
  • Claudiu Attila Balogh
    Claudiu Attila Balogh almost 12 years
    Member function pointers cannot be passed as function pointer parameters. Wrap it into a standalone function and pass to it the pointer to ThreadPrioFun, prio integer and the pointer to the semaphore and call the member function from within. Although the solution with three mutexes posted by ecatmur above seems much better.
  • d3k
    d3k almost 12 years
    Ok, managed to get it working by wrapping it into myfun(int, sem_t*, ThreadPriofun*), but as you wrote I prefer the other solution, maybe simpler to work with
  • Dmitry Poroh
    Dmitry Poroh almost 12 years
    @RedX What does the scheduling priority do, if not "go first after the synchronization point"?
  • d3k
    d3k almost 12 years
    Both ways (ecatmur's and Martin James's) will work. The 3-mutex way is simpler and much easier to implement; I've just done it and it works. The priority queue is much more flexible but needs more attention in coding, so I will implement it only if I need specific features.
  • d3k
    d3k almost 12 years
    Of the 3 ecatmur ways, I'd prefer to go with the third (more intuitive), but my implementation resulted in a frozen process (a deadlock, I suppose). I think it's my fault in implementing the condvar, but I'm not sure. Anyway, the first one worked like a breeze.
  • Dori
    Dori about 9 years
    Why not: High prio: lock(M)-->{do stuff}-->unlock(M) Low prio: lock(N)-->lock(M)-->{do stuff}-->unlock(M)-->unlock(N) ?
  • ecatmur
    ecatmur about 9 years
    @Dori that works if M is a fair (FIFO) mutex, but not otherwise; another low-priority task could lock N and lock M without giving the high-priority task access.
  • 김선달
    김선달 almost 3 years
    Shouldn't hpt_waiting be atomic_int instead of atomic_bool in case of multiple high-priority threads?
  • Gianks
    Gianks over 2 years
    Can you please quote where this is stated in POSIX or C++? It seems interesting but I can't find any reference to it. Thanks
  • Rafael Baptista
    Rafael Baptista over 2 years
    It is in the Posix standard for pthreads since 2008, documented in 2018 edition here: pubs.opengroup.org/onlinepubs/9699919799
  • pete
    pete over 2 years
    It seems that for algorithm #1, the C# lock syntax wouldn't allow this? When executing code inside a lock on L, L can't be unlocked. If we unlock L by exiting the brackets, M must have been unlocked too if it was locked while L was locked.
  • pete
    pete over 2 years
    @ecatmur I am still having a bit of trouble wrapping my head around why Dori's solution wouldn't work. Is it a race condition that seems very unlikely to occur? e.g. unlock(M), unlock(N), and another tasks's lock(N) and lock(M) all somehow execute before a waiting high-prio task who already called lock(M) is able to lock M?
  • pete
    pete over 2 years
    I had been imagining only 1 high-prio task. In the case there are multiple high-prio tasks already waiting on the lock, a low-prio task can compete with the high-prio tasks for the same lock and sometimes win. However, it would seem that the same flaw also exists in OP's solution.
  • Niko O
    Niko O about 2 years
    @pete The C# "lock" block is syntactic sugar around Monitor.Enter and Monitor.Exit (with a bit of checking). I suggest getting ILSpy and decompiling some simple example programs to see what's happening under the hood.