I was hoping someone could help me with the following:
We have created a job that contains Workflows, and within those Workflows we have Dataflows.
One of those Dataflows starts with a Staging table, followed by a few Query transforms, a Key_Generation transform, and a Target table.
One of the Queries has 20 columns that are mapped with lookup_ext functions to look up IDs.
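(For context: each of those columns is a lookup_ext against a reference table. If these lookups run uncached, my understanding is that every incoming row effectively fires a small query like the one below, once per lookup column; the table and column names here are made up.)

    -- hypothetical per-row statement for one of the 20 mapped columns
    SELECT ref_id                    -- the ID being looked up
      FROM ref_table                 -- made-up name for the shared lookup table
     WHERE ref_code = :input_value;  -- bound from the incoming row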
When I execute the job, it fails at some point while the mentioned Dataflow is running.
The error is:
OCI call for connection ‘xxxxx’ failed: ORA-12518: TNS:listener could not hand off client connection.
My first thought was a connection failure, but it happens at the same point every time we execute the job, even when we run the Dataflow on its own in a test job.
The state of the row counts at that point is READY, with a zero count.
The Target table is filled, but not as completely as expected; most of the IDs are still NULL.
Is it possible that the number of lookup_ext columns is too much for BODI?
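If it is not a BODI limit but the Oracle side running out of server processes (which, as far as I know, is what ORA-12518 usually points to), I guess a DBA could confirm it while the job runs with something like:

    SELECT resource_name, current_utilization, max_utilization, limit_value
      FROM v$resource_limit
     WHERE resource_name IN ('processes', 'sessions');
    -- if max_utilization hits limit_value for 'processes', the listener
    -- has nothing left to hand new connections to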
It is indeed the case that there are too many ID lookups in the query.
With one lookup it works fine. However, creating multiple Dataflows with the lookups spread out over them still gives me the same error.
I think there's a problem with the temp tablespaces and their size, or some other system resource(s).
While building up the temp table, it probably inserts all the rows first, and only once the result set is complete does DI commit to the database table.
Unchecking ‘Commit at the end of INSERT… SELECT’ under Transaction control in the target table options doesn't help either…
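If the temp-space theory holds, I assume a DBA could watch Oracle's temp segment usage during the run with something along these lines (column names from the standard V$TEMPSEG_USAGE view, as far as I remember):

    SELECT username, tablespace, segtype, SUM(blocks) AS blocks_used
      FROM v$tempseg_usage           -- a.k.a. v$sort_usage on older versions
     GROUP BY username, tablespace, segtype;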
Well, obviously I do have a proper connection, because with a few lookups it works, but with a lot it doesn't.
And all lookups are done on the same table…
Thanks, ganesh,
I've indeed been trying to break it up into parts, and that works, up to a certain point: the job stops after three DFs within one workflow.
In those three DFs I have only put in a few of the lookups…
What I wonder is: if you break them up into separate DFs, then the connections should be closed after each DF is done, and when the next DF starts it has to establish a new connection. But in your case it looks like the connections are not getting closed on the Oracle side (maybe).
So if you could catch a DBA (I know, I know, that is a lot more difficult), then you could probably see the series of connection information and the other processes that reach the Oracle server.
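For example, the DBA could run something along these lines while the job executes, to see how many sessions the DI engine is holding open (assuming the job server shows up under its own machine/program name):

    SELECT machine, program, status, COUNT(*) AS session_count
      FROM v$session
     GROUP BY machine, program, status
     ORDER BY session_count DESC;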
Well, catching the DBA was indeed, as suspected, difficult,
but we've got him…
He told us that there are indeed too many connections. That is, however, a bit strange: I created a temp job with some separate DFs (serially linked). One minute it worked (partially), the next minute it didn't.
Somehow it looks like the connections stay alive.
We're going to try changing the "sessions" parameter for now as a quick workaround, to see if that works (sketch below).
Besides that, we'll have to find out why the connections stay alive…
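For the record, I guess the "sessions" change the DBA will make is something like the following (as SYSDBA; the values are placeholders, not recommendations, and from what I understand PROCESSES is usually the parameter actually behind ORA-12518):

    -- both parameters are static: SCOPE=SPFILE means an instance restart is needed
    ALTER SYSTEM SET processes = 300 SCOPE = SPFILE;
    ALTER SYSTEM SET sessions  = 472 SCOPE = SPFILE;  -- often sized from PROCESSES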