Java Quartz scheduled Job - disallow concurrent execution of Job

Solution 1

Just use the @DisallowConcurrentExecution annotation on top of the Job class.
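Applied to the asker's MainJob, a minimal sketch could look like this (note that the annotation works per JobDetail, so two different JobDetails of the same class can still run concurrently):

```java
import org.quartz.DisallowConcurrentExecution;
import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

// With this annotation, Quartz will not fire a new execution of this
// JobDetail while a previous one is still running; the next execution
// is delayed until the running one finishes.
@DisallowConcurrentExecution
public class MainJob implements Job {

    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        // long-running work here
    }
}
```
No other change to the scheduling code is needed; Quartz detects the annotation when the JobDetail is built from the class.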

See this official example or this tutorial about concurrent job execution.

Solution 2

@DisallowConcurrentExecution can do the job, but bear in mind that it only prevents your class from being run twice on the same node.

Please see @ReneM's comment in Quartz 2.2 multi scheduler and @DisallowConcurrentExecution
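For the multi-node case, the annotation is only honoured across nodes when the schedulers share a JDBC job store configured as a cluster. A quartz.properties sketch (datasource details omitted, interval value illustrative):

```properties
org.quartz.scheduler.instanceId = AUTO
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 20000
```
With the default in-memory RAMJobStore, each node has its own scheduler state, so nothing stops two nodes from running the same job at once.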

Solution 3

You can implement StatefulJob (deprecated since Quartz 2.x) or annotate the job class with @DisallowConcurrentExecution

Author: thanili

Updated on July 05, 2022

Comments

  • thanili
    thanili almost 2 years

    I am using a Quartz Job for executing specific tasks.

    I am also scheduling its execution in my main application class, and what I am trying to accomplish is to not allow simultaneous instances of this job to be executed.

    So the scheduler should only execute the job if its previous instance has finished.

    Here is my Job class:

    public class MainJob implements Job {
    
    static Logger log = Logger.getLogger(MainJob.class.getName());
    
    @Override
    public void execute(JobExecutionContext arg0) throws JobExecutionException {
    
        GlobalConfig cfg = new GlobalConfig();
    
        ProcessDicomFiles processDicomFiles = new ProcessDicomFiles();  
        ProcessPdfReportFiles processPdf = new ProcessPdfReportFiles();
    
        try {
    
                log.info("1. ---- SCHEDULED JOB -- setStudiesReadyToProcess");
                processDicomFiles.setStudiesReadyToProcess();
    
                log.info("2. ---- SCHEDULED JOB --- distributeToStudies");
                processDicomFiles.distributeToStudies(cfg.getAssocDir());                
    
                ...
    
                //process any incoming PDF file
                log.info("11. ---- SCHEDULED JOB --- processPdfFolder");
                processPdf.processPdfFolder();
    
            } catch (Exception ex) {
                Logger.getLogger(FXMLDocumentController.class.getName()).log(Level.ERROR, null, ex);
            }
    
        log.info(">>>>>>>>>>> Scheduled Job has ended .... <<<<<<<<<<<<<<<<<<<<");
    
        }
    }
    

    So in my application's main class I am starting the scheduler:

    ...
    //start Scheduler
        try {             
            startScheduler();
        } catch (SchedulerException ex) {
            log.log(Level.INFO, null, ex);
        }
    ...
    
    public void startScheduler () throws SchedulerException {
    
            //Creating scheduler factory and scheduler
            factory = new StdSchedulerFactory();
            scheduler = factory.getScheduler();
    
            schedulerTimeWindow = config.getSchedulerTimeWindow();
    
            JobDetailImpl jobDetail = new JobDetailImpl();
            jobDetail.setName("First Job");
            jobDetail.setJobClass(MainJob.class);
    
            SimpleTriggerImpl simpleTrigger = new SimpleTriggerImpl();
            simpleTrigger.setStartTime(new Date(System.currentTimeMillis() + 1000));
            simpleTrigger.setRepeatCount(SimpleTrigger.REPEAT_INDEFINITELY);
            simpleTrigger.setRepeatInterval(schedulerTimeWindow);
            simpleTrigger.setName("FirstTrigger");
    
            //Start scheduler
            scheduler.start();
            scheduler.scheduleJob(jobDetail,simpleTrigger);
    
    }
    

    I would like to prevent the scheduler from starting a second MainJob instance if another one is still running ...

  • aProgger
    aProgger about 7 years
    StatefulJob is deprecated (don't know if it already was in 05.2015). DisallowConcurrentExecution and/or PersistJobDataAfterExecution is the way to go.
  • João Portela
    João Portela over 2 years
    Where did you get that from? Reading the linked comment, it seems that what he is saying is that DisallowConcurrentExecution will only work across nodes if you have clustering properly set up. Not that it doesn't work across nodes.
  • b15
    b15 over 2 years
    Still looking for a way that doesn't queue up the job, and instead just skips it.