What is the setting in the DSConfig file to increase the number of parallel threads? One of the posts I saw mentioned MAX_NO_OF_PARALLEL_PROCESSES; however, we could not find it in our config file. There is a setting called MAX_NO_OF_PROCESSES. Is that the same thing?
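For reference, the setting is a plain key/value line in DSConfig.txt. The fragment below is only an illustration: the section name ([AL_Engine]) and the value shown are assumptions based on a typical install, so check your own file and your version's documentation before changing anything.

```
[AL_Engine]
MAX_NO_OF_PROCESSES = 20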
My job contains only WFs. Each WF in turn makes a function call from a script to an Oracle SQL procedure.
There are 75 group WFs that run sequentially, each containing 10 WFs that run in parallel. For some reason the job hangs in one of the 75 group WFs, at random.
I don’t think there is any throttling of the number of Workflows that can be spawned. What might be happening is that you’re hitting the limit on the number of connections that can be made to the database.
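One way to rule that in or out (assuming the target is Oracle, as you described, and that you can query the V$ views) is to compare the configured connection limits against the current session count while the job is running:

```sql
-- Configured connection limits vs. how many sessions are open right now
SELECT name, value FROM v$parameter WHERE name IN ('processes', 'sessions');
SELECT COUNT(*) AS current_sessions FROM v$session;
```

If current_sessions is close to the sessions limit while the job runs, the connection-limit theory holds.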
Jim,
I do not think it has to do with a limit on the number of connections to the DB. I was running the same job with 50 parallel WFs earlier, on 4.0. After the upgrade to 4.2 we started having this issue. I then reduced the number of parallel WFs to 10, but we still run into the hang from time to time.
Another job that has 7 WFs runs fine.
We were investigating that this morning when we hit the hang. 8 of the 10 WFs had been spawned from the job and were stuck. There were corresponding DB sessions to both the data DB and the repo DB.
On the repo DB side, all sessions except one showed an INSERT into the AL_Statistics table; the remaining session showed an UPDATE to AL_Statistics. There were no blocking DB sessions, and nothing had been logged to AL_Statistics for the currently spawned WFs yet.
The DS job and the DB sessions just sit there. When we kill the DS job, all the DB sessions disappear.
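For anyone repeating this check, one way to confirm there are no blocking sessions on Oracle (10g or later, where V$SESSION has the BLOCKING_SESSION column) is:

```sql
-- Sessions that are currently blocked, and which session is blocking them
SELECT sid, serial#, blocking_session, event, seconds_in_wait
FROM v$session
WHERE blocking_session IS NOT NULL;
```

An empty result is consistent with what we saw: the sessions are idle or waiting, not blocked on a lock.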
If the sessions were showing an active INSERT, then I wouldn’t expect you to be able to see the inserted values, since they hadn’t been committed yet. At least, that’s what I think is happening.
To troubleshoot this further, I would run the Workflows in series and see if you still get the problem. If that works, then try setting a staggered delay in each Workflow:
WF1 = 0 second delay
WF2 = 2 second delay
WF3 = 4 second delay
etc, etc.
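As a sketch, the delay can be a script step at the top of each WF; sleep() in the DS scripting language takes milliseconds, so WF2's script would contain something like the line below (the placement at the start of each WF's first script is an assumption):

```
# First step in WF2's script: wait 2 seconds before doing anything else
sleep(2000);
```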