Job Fails When Dataflows are Run in Parallel

I have a job with a workflow that runs 7 parameterized instances of a single dataflow in parallel (e.g. df#1 processes customers 1-10, df#2 processes customers 11-20, and so on). The dataflow loads a single partitioned table in Oracle 11g using the API bulk loader.

The job runs successfully in DI 6.5. However, the job fails in Data Services (DS) XI (12.1.1.3) with an Oracle error:

OCI call for connection failed: <ORA-01034: ORACLE not available ORA-27101: shared memory realm does not exist Linux-x86_64 Error: 2: No such file or directory>

If I link the dataflows so that they run serially, the job does work in DS XI. So the problem appears to be specific to parallel execution under DS XI.

Any help is very much appreciated.

Dave


dwhitten (BOB member since 2006-02-02)

Oracle behaves strangely when it runs out of sessions or processes (each dedicated connection consumes one session and one process).

So the first thing I would do is check those two init.ora parameters and raise them.

alter system set processes=500 scope=spfile;
alter system set sessions=800 scope=spfile;

shutdown immediate;
startup;
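Before raising the limits, it may be worth confirming that the job really is hitting them. A hypothetical check (run as a DBA, during the parallel run) against Oracle's V$RESOURCE_LIMIT view shows current and high-water usage against the configured limits:

```sql
-- Compare current session/process usage against the configured limits.
-- MAX_UTILIZATION is the high-water mark since instance startup; if it
-- is at or near LIMIT_VALUE, the parallel dataflows are exhausting it.
select resource_name, current_utilization, max_utilization, limit_value
  from v$resource_limit
 where resource_name in ('processes', 'sessions');
```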

Second, I would consider using Oracle’s MTS (Multi-Threaded Server, now called shared server), its connection-sharing mechanism, to avoid consuming excessive amounts of memory on the Oracle server. Usually I keep two tnsnames.ora aliases: one with (SERVER=DEDICATED) used for the high-volume data connections (the datastores) and a shared-server alias for simple selects (the repo).
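As a sketch, the two aliases could look like this in tnsnames.ora (host, port, and service name are placeholders, not from the original post):

```
# Hypothetical tnsnames.ora entries: a dedicated-server alias for the
# high-volume datastore connections, and a shared-server alias for the
# repository's simple selects.
ORCL_BULK =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = orcl)
      (SERVER = DEDICATED)
    )
  )

ORCL_REPO =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = orcl)
      (SERVER = SHARED)
    )
  )
```

Shared server only helps if the instance is configured with dispatchers and SHARED_SERVERS set; otherwise the (SERVER = SHARED) request falls back to a dedicated connection.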


Werner Daehn :de: (BOB member since 2004-12-17)

Thanks. I have asked our DBAs to increase these parameters. However, I am curious why the workflow ran under 6.5 but fails under XI. Does XI handle parallel dataflow execution differently? If so, I could not see it when I compared the trace logs.


dwhitten (BOB member since 2006-02-02)

That’s a good question. Nothing comes to mind immediately.


Werner Daehn :de: (BOB member since 2004-12-17)