Impossible to make a cached thread pool with a size limit?


Solution 1

ThreadPoolExecutor has several key behaviors, and your problems can be explained by them.

When tasks are submitted,

  1. If the thread pool has not reached the core size, it creates new threads.
  2. If the core size has been reached and there are no idle threads, it queues tasks.
  3. If the core size has been reached, there are no idle threads, and the queue is full, it creates new threads (until it reaches the max size).
  4. If the max size has been reached, there are no idle threads, and the queue is full, the rejection policy kicks in.

In the first example, note that the SynchronousQueue essentially has a size of 0. Therefore, the moment you reach the max size (3), the rejection policy kicks in (#4).

In the second example, the queue of choice is a LinkedBlockingQueue which has an unlimited size. Therefore, you get stuck with behavior #2.

You cannot really tinker much with the cached type or the fixed type, as their behavior is almost completely determined.

If you want to have a bounded and dynamic thread pool, you need to use a positive core size and max size combined with a queue of a finite size. For example,

new ThreadPoolExecutor(10, // core size
    50, // max size
    10*60, // idle timeout
    TimeUnit.SECONDS,
    new ArrayBlockingQueue<Runnable>(20)); // queue with a size
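
As a rough illustration of how such a pool grows (the harness below and its task counts are my own sketch, not part of the original answer): submitting more tasks than the core size plus the queue capacity first fills the core threads, then the queue, and only then grows the pool toward the max.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedDynamicPoolDemo {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                10,                                    // core size
                50,                                    // max size
                10 * 60, TimeUnit.SECONDS,             // idle timeout
                new ArrayBlockingQueue<Runnable>(20)); // queue with a size

        // Tasks 1-10 start core threads, tasks 11-30 wait in the queue,
        // and tasks 31-40 push the pool size above the core toward the max.
        for (int i = 0; i < 40; i++) {
            pool.execute(() -> {
                try {
                    Thread.sleep(2000); // simulate slow work so the pool fills up
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            System.out.println("pool size = " + pool.getPoolSize()
                    + ", queued = " + pool.getQueue().size());
        }
        pool.shutdown();
        pool.awaitTermination(2, TimeUnit.MINUTES);
    }
}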

Addendum: this is a fairly old answer, and it appears that the JDK changed its behavior with respect to a core size of 0. Since JDK 1.6, if the core size is 0 and the pool has no threads, the ThreadPoolExecutor will add a thread to execute the task. Therefore, a core size of 0 is an exception to the rules above. Thanks to Steve for bringing that to my attention.
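
A quick way to observe that special case (my own sketch, not part of the original answer):

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ZeroCoreDemo {
    public static void main(String[] args) {
        // Core size 0 with a queue that accepts the task: on JDK 1.6+ the pool
        // still spins up one thread to run it, so this prints instead of hanging.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(0, 1, 60L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>());
        pool.execute(() -> System.out.println("ran on " + Thread.currentThread().getName()));
        pool.shutdown();
    }
}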

Solution 2

Unless I've missed something, the solution to the original question is simple. The following code implements the desired behavior as described by the original poster: it will spawn up to 5 threads to work on an unbounded queue, and idle threads will terminate after 60 seconds.

ThreadPoolExecutor tp = new ThreadPoolExecutor(5, 5, 60, TimeUnit.SECONDS,
        new LinkedBlockingQueue<Runnable>());
tp.allowCoreThreadTimeOut(true); // let the 5 core threads time out when idle
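
To see the timeout in action, here is a small self-contained harness (my own sketch, not part of the original answer): after a burst of tasks and 60+ seconds of inactivity, the pool size drops back to zero.

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CoreTimeoutDemo {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor tp = new ThreadPoolExecutor(5, 5, 60, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>());
        tp.allowCoreThreadTimeOut(true);

        // A burst of tasks: at most 5 distinct thread names will be printed,
        // because excess tasks wait in the unbounded queue.
        for (int i = 0; i < 20; i++) {
            tp.execute(() -> System.out.println(Thread.currentThread().getName()));
        }

        Thread.sleep(65_000); // wait past the 60-second idle timeout
        System.out.println("pool size: " + tp.getPoolSize()); // prints 0
        tp.shutdown();
    }
}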

Solution 3

I had the same issue. Since no other answer puts all the issues together, I'm adding mine:

It is now clearly stated in the docs: if you use an unbounded queue (such as a LinkedBlockingQueue), the max threads setting has no effect; only core threads are used.

so:

public class MyExecutor extends ThreadPoolExecutor {

    public MyExecutor() {
        // 4 core threads, an unbounded queue, and a 5-second idle timeout
        super(4, 4, 5, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
        allowCoreThreadTimeOut(true); // let idle core threads exit
    }

    public void setThreads(int n) {
        setMaximumPoolSize(Math.max(1, n)); // max must stay >= core
        setCorePoolSize(Math.max(1, n));
    }
}

This executor has:

  1. No effective concept of max threads, since we are using an unbounded queue. This is a good thing, because such a pool could otherwise create a massive number of non-core, extra threads if it followed its usual policy of growing toward the max once the queue is full.

  2. A queue with a maximum size of Integer.MAX_VALUE. submit() will throw RejectedExecutionException if the number of pending tasks exceeds Integer.MAX_VALUE, although in practice you would almost certainly run out of memory first.

  3. Up to 4 core threads. Core threads exit automatically if idle for 5 seconds, so these are strictly on-demand threads. The number can be varied using the setThreads() method (see the usage sketch after this list).

  4. Makes sure the number of core threads is never less than one, or else submit() would reject every task. Since the core size cannot exceed the max size, setThreads() sets the max threads as well, though the max thread setting is moot for an unbounded queue.
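
A hypothetical usage sketch (the task body and counts below are illustrative only, not from the original answer):

public class MyExecutorDemo {
    public static void main(String[] args) {
        MyExecutor executor = new MyExecutor();
        executor.setThreads(8); // grow from 4 to 8 core threads at runtime

        for (int i = 0; i < 100; i++) {
            executor.execute(() -> {
                // simulated work; excess tasks simply wait in the unbounded queue
                try {
                    Thread.sleep(100);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        executor.shutdown(); // idle threads exit after the 5-second timeout
    }
}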

Solution 4

In your first example, subsequent tasks are rejected because the AbortPolicy is the default RejectedExecutionHandler. The ThreadPoolExecutor contains the following policies, which you can change via the setRejectedExecutionHandler method:

CallerRunsPolicy
AbortPolicy
DiscardPolicy
DiscardOldestPolicy

It sounds like you want a cached thread pool with a CallerRunsPolicy.
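
For instance, a minimal sketch that reuses the question's sizes (0 core, 3 max, SynchronousQueue); the task body is illustrative only:

import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CallerRunsDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(0, 3, 60L, TimeUnit.SECONDS,
                new SynchronousQueue<Runnable>());

        // When all 3 workers are busy, the submitting thread runs the task itself
        // instead of getting a RejectedExecutionException.
        pool.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());

        for (int i = 0; i < 10; i++) {
            pool.execute(() -> System.out.println(Thread.currentThread().getName()));
        }
        pool.shutdown();
    }
}

With this handler, some of the printed names may be "main", showing that the caller thread ran the overflow tasks itself rather than having them rejected.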

Solution 5

None of the answers here fixed my problem, which had to do with creating a limited number of HTTP connections using Apache's HTTP client (3.x version). Since it took me some hours to figure out a good setup, I'll share:

private ExecutorService executor = new ThreadPoolExecutor(5, 10, 60L,
  TimeUnit.SECONDS, new SynchronousQueue<Runnable>(),
  Executors.defaultThreadFactory(), new ThreadPoolExecutor.CallerRunsPolicy());

This creates a ThreadPoolExecutor that starts with five threads and allows a maximum of ten simultaneously running threads, using CallerRunsPolicy for execution when the pool is saturated.


Comments

  • Matt Wonlaw
    Matt Wonlaw almost 2 years

    It seems to be impossible to make a cached thread pool with a limit to the number of threads that it can create.

    Here is how static Executors.newCachedThreadPool is implemented in the standard Java library:

    public static ExecutorService newCachedThreadPool() {
        return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                      60L, TimeUnit.SECONDS,
                                      new SynchronousQueue<Runnable>());
    }
    

    So, using that template to go on to create a fixed sized cached thread pool:

    new ThreadPoolExecutor(0, 3, 60L, TimeUnit.SECONDS, new SynchronousQueue<Runnable>());
    

    Now if you use this and submit 3 tasks, everything will be fine. Submitting any further tasks will result in rejected execution exceptions.

    Trying this:

    new ThreadPoolExecutor(0, 3, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
    

    Will result in all tasks executing sequentially. That is, the thread pool will never create more than one thread to handle your tasks.

    Is this a bug in the execute method of ThreadPoolExecutor? Or maybe it is intentional? Or is there some other way?

    Edit: I want something exactly like the cached thread pool (it creates threads on demand and then kills them after some timeout) but with a limit on the number of threads that it can create and the ability to continue to queue additional tasks once it has hit its thread limit. According to sjlee's response this is impossible. Looking at the execute() method of ThreadPoolExecutor it is indeed impossible. I would need to subclass ThreadPoolExecutor and override execute() somewhat like SwingWorker does, but what SwingWorker does in its execute() is a complete hack.

    • rsp
      rsp over 14 years
      What is your question? Isn't your 2nd code example the answer to your title?
    • Matt Wonlaw
      Matt Wonlaw over 14 years
      I want a thread pool that will add threads on demand as the number of tasks grow, but will never add more than some max number of threads. CachedThreadPool already does this, except it will add an unlimited number of threads and not stop at some pre-defined size. The size I define in the examples is 3. The second example adds 1 thread, but doesn't add two more as new tasks arrive while the other tasks have not yet completed.
    • ethan
      ethan almost 12 years
      Check this, it solves it, debuggingisfun.blogspot.com/2012/05/…
  • Matt Wonlaw
    Matt Wonlaw over 14 years
    Sure, I could use a fixed thread pool but that would leave n threads around for forever, or until I call shutdown. I want something exactly like the cached thread pool (it creates threads on demand and then kills them after some timeout) but with a limit on the number of threads that it can create.
  • jtahlborn
    jtahlborn over 12 years
    You are correct. That method was added in JDK 1.6, so not as many people know about it. Also, you can't have a "min" core pool size, which is unfortunate.
  • hsestupin
    hsestupin about 11 years
    You should write a few words about the allowCoreThreadTimeOut method to make this answer perfect. See the answer by @user1046052.
  • Jeff
    Jeff about 10 years
    Great answer! Just one point to add: Other rejection policies are also worth mentioning. See the answer of @brianegge
  • eric
    eric over 9 years
    I think you also need to set 'allowCoreThreadTimeOut' to 'true', otherwise, once the threads are created, you will keep them around forever: gist.github.com/ericdcobb/46b817b384f5ca9d5f5d
  • eric
    eric over 9 years
    oops I just missed that, sorry, your answer is perfect then!
  • Leonmax
    Leonmax over 9 years
    I think you mean a size of 0 (the default), so that no tasks are queued and the ExecutorService is truly forced to create a new thread every time.
  • veegee
    veegee about 8 years
    My only worry about this is (from the JDK 8 docs): "When a new task is submitted in method execute(Runnable), and fewer than corePoolSize threads are running, a new thread is created to handle the request, even if other worker threads are idle."
  • Matt Wonlaw
    Matt Wonlaw about 8 years
    Pretty sure this doesn't actually work. Last time I looked, doing the above actually only ever runs your work in one thread even though you spawn 5. Again, it's been a few years, but when I dove into the implementation of ThreadPoolExecutor, it only dispatched to new threads once your queue was full. Using an unbounded queue causes this to never happen. You can test by submitting work and logging the thread name, then sleeping. Every runnable will end up printing the same name / not be run on any other thread.
  • T-Gergely
    T-Gergely about 8 years
    This does work, Matt. You set the core size to 0, that's why you only had 1 thread. The trick here is to set the core size to the max size.
  • Zoltán
    Zoltán almost 8 years
    Shouldn't behaviour 2 say 'If the maxThread size has been reached and there is no idle threads, it queues tasks.'?
  • Zoltán
    Zoltán almost 8 years
    Could you elaborate on what the size of the queue implies? Does it mean that only 20 tasks can be queued before they are rejected?
  • sjlee
    sjlee almost 8 years
    @Zoltán I wrote this a while ago, so there is a chance some behavior might have changed since then (I didn't follow the latest activities too closely), but assuming these behaviors are unchanged, #2 is correct as stated, and that's perhaps the most important (and somewhat surprising) point of this. Once the core size is reached, TPE favors queueing over creating new threads. The queue size is literally the size of the queue that's passed to the TPE. If the queue becomes full but the pool hasn't reached the max size, it will create a new thread (not reject tasks). See #3. Hope that helps.
  • Chris Riddell
    Chris Riddell over 7 years
    @vegee is right - This doesn't actually work very well - ThreadPoolExecutor will only re-use threads when above corePoolSize. So when corePoolSize is equal to maxPoolSize, you'll only benefit from the thread caching when your pool is full (So if you intend to use this but usually stay under your max pool size, you might as well reduce the thread timeout to a low value; and be aware that there's no caching - always new threads)
  • Gray
    Gray over 6 years
    The problem with this solution is that if you increase the number of producers, you will increase the number of threads running the background tasks. In many cases that's not what you want.
  • Tvaroh
    Tvaroh over 6 years
    Notice that this thread pool will have at most 10 threads and will only grow beyond that, up to 50, once the queue is full.
  • Steve
    Steve over 5 years
    @sjlee So, could you explain this strange behavior in this ThreadExecutor program. It is not following the 4 cases you mentioned here.
  • Stefan Feuerhahn
    Stefan Feuerhahn over 4 years
    @Riddell is correct. It does not behave like a CachedThreadPool with a thread limit and an unbounded queue because it does not re-use threads. A ThreadPoolExecutor can offer two major benefits: 1) reduce the overhead of executing tasks by re-using threads and 2) limit the number of resources (e.g. threads) used. This solution does not offer the former.