Hi friends, I scheduled a job in the DS Management Console to run every hour, daily. Once in a while, due to heavy data volume, it takes more than an hour to execute, and then a conflict arises between the currently running job and the instance due to start in the next hour.
How can we resolve this issue?
The easiest way to do it? Use an enterprise scheduler that understands how to handle tasks like this.
Otherwise, to ensure there are no collisions, you could add code that checks whether another instance of the job is already running, or you could simply have the end of the job launch the next run.
The DS scheduler is very simple. It can’t do sophisticated scheduling.
We have a Script followed by a Conditional as the first items of a batch job.
Script:
$Variable1 = e.g. C:\TEMP\[JOBNAME].run
Conditional:
if condition with file_exists($Variable1) = 0
Within the Conditional, in the "Then" area:
First dataflow is always:
Row Generation -> Query -> XML Template (name of File = $Variable1)
Afterwards all regular steps of this jobs are executed.
The last item is once again a script with:
exec('cmd', 'del /Q ' || $Variable1, 8);
In the "Else" area of the Conditional we just add a script which does the following:
Writes a trace message that processing was skipped because the job is still active
Sends an e-mail that processing was skipped
–Edit–
By the way, this procedure only works if the job is not started multiple times within a window of 5-20 seconds.
But for jobs running every 5 or 10 minutes, every hour, every day, etc., this works fine for us.
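For illustration, here is a minimal Python sketch of the same lock-file idea. The path and function names are hypothetical stand-ins (the actual implementation above uses a DS Script, Conditional, and a dataflow writing an XML template to $Variable1):

```python
import os
import tempfile

# Hypothetical lock-file path standing in for $Variable1
# (the post uses something like C:\TEMP\[JOBNAME].run).
LOCK_FILE = os.path.join(tempfile.gettempdir(), "MyJob.run")

def run_job():
    """Run the job only if no lock file exists, mirroring the Conditional above."""
    if os.path.exists(LOCK_FILE):
        # "Else" branch: skip this run, trace and notify instead
        print("Processing skipped: previous instance still active")
        return False
    try:
        # "Then" branch, first step: create the lock file
        # (in DS this is the Row Generation -> Query -> XML Template dataflow)
        with open(LOCK_FILE, "w") as f:
            f.write("running")
        # ... the regular steps of the job would run here ...
        return True
    finally:
        # Last step: remove the lock, like exec('cmd', 'del /Q ' || $Variable1, 8);
        if os.path.exists(LOCK_FILE):
            os.remove(LOCK_FILE)
```

Note the same caveat applies as in the post: between the existence check and the file creation there is a small race window, so this is not safe against two starts within a few seconds.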
You can use the Windows Task Scheduler to schedule the jobs instead. All you need to do is generate a batch command from the Management Console and have the Windows Task Scheduler kick it off. The Windows Task Scheduler is aware that a task is still running when the next run is due.
Alternatively, you can create a metadata table which flags whether the job is running or not. The table could be structured with something like the job name, possibly the configuration name, and a column for the run flag.
All you need to do is use a script at the start of the job which looks up this table at runtime and, based on the value of the flag, either executes the entire job or just ends it without doing anything, using a conditional workflow.
If the job is not running, add an SQL statement to set the run flag to R (Running) and execute the entire job logic, and when the job completes successfully set it to F (Finished). You can use whatever values you like.
Hope this is of some help.
I generally use an entire set of job-control tables to manage the running of BODS jobs, which store an entry for each time a job executes, as well as metadata tables to store any required variable values at runtime.