We have been facing this issue since last month: our DS job has suddenly become very slow.
I have tried the following:
- used bulk load with a commit size of 10,000 rows;
- increased the array fetch size to 10,000;
- increased the monitor job log setting to 10,000.
But there is no improvement in execution time.
The same jobs were running quickly just a month ago. I have checked the data, and there does not seem to be any large increase in the number of rows being inserted.
One possible cause is the huge size of the AL_STATISTICS table (right now it has more than 100 million rows).
Sometimes the job takes a long time to print the data-completion message, even though the load into the table finished almost 5-10 minutes earlier.
Kindly suggest ways to improve the job's execution time.
If you don’t need the statistics, I would truncate the AL_STATISTICS table immediately, and then figure out why the automatic deletion is not working.
I would also go through the standard method of locating the bottleneck: add a Map_Operation transform with its output set to Discard at successive points in the dataflow, to see whether the time is being spent in the reader, the transformation, or the loader.
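The same divide-and-conquer idea can be sketched outside Data Services. This is a minimal illustration, not Data Services code: the stage functions are stand-ins, and the point is only the measurement pattern of timing the pipeline with later stages discarded.

```python
import time

def timed(stages, rows):
    """Run only the given stages over the rows and return elapsed seconds."""
    start = time.perf_counter()
    for row in rows:
        for stage in stages:
            row = stage(row)
    return time.perf_counter() - start

# Stand-in stages (assumptions, purely illustrative):
def reader_stage(row):    return row       # source read
def transform_stage(row): return row * 2   # transformation
def loader_stage(row):    return row       # target load

rows = range(100_000)
t_read = timed([reader_stage], rows)                                   # "discard" right after the reader
t_xform = timed([reader_stage, transform_stage], rows)                 # "discard" after the transform
t_full = timed([reader_stage, transform_stage, loader_stage], rows)    # full pipeline
# Compare t_read, (t_xform - t_read), and (t_full - t_xform):
# whichever increment dominates is the bottleneck stage.
```

In a real dataflow you get the same effect by moving the discarding Map_Operation progressively later and reading the elapsed time from the job monitor.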
Thanks for the advice. Can you please tell me whether the size of the AL_STATISTICS table has any effect on job speed or on BO reports?
Recently, a few of our WebI reports have also started taking longer.
Our database is Oracle 9i and is in the process of being upgraded to 11g.
Regarding your suggestion for finding bottlenecks, could you please explain in more detail how to use Map_Operation to find where the time is being spent?
More Details about our job:
The same dataflows complete very quickly when the load is incremental (around 1,000 rows), but when run as a complete reload (approx. 2 million rows) they take around 3,000 seconds, sometimes more.
Note:
At the same time, 8 separate complete-reload dataflows will be running, each populating around 0.4 to 2 million rows; individually, these dataflows take between 2,000 and 4,000 seconds.
I sincerely doubt that the size of the repository statistics table is the problem.
What is more likely the problem is your use of the Bulk Loader against Oracle. Enabling it turns the inserts into a Direct Path load: Oracle disables the indexes on the target table, loads the data, and then rebuilds the indexes. It is the rebuild that can be very, very slow on large tables with b-tree indexes.
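A toy model of that trade-off, which is emphatically not Oracle internals: a sorted list stands in for a b-tree index. The conventional path maintains the index per inserted row (work proportional to the delta), while the direct-path analogue rebuilds the index over every key in the table afterwards (work proportional to the full table size), which is why a small batch into a huge indexed table can be slower with bulk load end-to-end.

```python
import bisect

def conventional_load(index, new_keys):
    # Index maintained row by row: cost scales with len(new_keys).
    for k in new_keys:
        bisect.insort(index, k)
    return index

def direct_path_load(existing_keys, new_keys):
    # Index dropped, rows loaded, then the whole index is rebuilt:
    # cost scales with the total number of keys in the table.
    return sorted(existing_keys + new_keys)

existing = list(range(0, 100_000, 2))   # keys already in the target table
delta = list(range(1, 999, 2))          # a small incremental batch

via_maintenance = conventional_load(list(existing), delta)
via_rebuild = direct_path_load(existing, delta)
# Both paths end with the same index; only the cost profile differs.
```

For a genuinely full reload of millions of rows, the rebuild cost is paid either way, so the comparison to test in the actual job is bulk load on versus off for the complete-reload dataflows.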
Recently my BODS job has been running slowly: what used to take 1-2 hours now takes 12 hours. There has been no change in the table configuration, and no errors. The data source is OLFM and the target is HANA.
After running for 11 hours it failed and restarted.
(14.2) 12-06-17 13:41:55 (E) (11989:537925376) DBS-070401: Dataflow DF_Init_OLFM_XLA_XLA_AE_LINES|XLA_AE_LINES
ODBC data source error message for operation : <[SAP AG][LIBODBCHDB SO][HDBODBC] General error;129
transaction rolled back by an internal error: Allocation failed ; $size$=40000; $name$=libhdbrskernel.so; $type$=pool;
$inuse_count$=1726511; $allocated_size$=1226819927>.
(14.2) 12-06-17 13:59:40 (E) (11941:2113906464) RUN-050316: |Session Job_Init_XLA_XLA_AE_LINES
Job failed
(14.2) 12-06-17 13:59:40 (E) (11941:2113906464) RUN-050304: |Session Job_Init_XLA_XLA_AE_LINES
Function call <raise_exception ( Job failed ) > failed, due to error <50316>: .
(14.2) 12-06-17 13:59:40 (E) (11941:2113906464) RUN-050316: |Session Job_Init_XLA_XLA_AE_LINES
Job failed