There is no change between the two tables, but I'm really surprised by the execution time: 16 minutes!
For the load, I only use the default parameters.
Before this DF runs, I use a script to drop and recreate the target table.
Is there a way to make this DF run faster?
Thanks for your answers.
edit:
I've changed the table-creation script by adding the CACHE option, and I changed the load parameters to use the bulk-load API with TRUNCATE and a commit size of 10,000.
It's a bit faster now -> 8 min.
I hadn't mentioned it before: besides dropping and creating the table, I also create an index, and after the DF has run I analyze the table.
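For reference, the pre-load script and post-load analyze step described above might look roughly like this on the Oracle side (table, column, and index names here are hypothetical placeholders, not the actual objects):

```sql
-- Sketch of the drop/create/index steps run before the DF.
DROP TABLE target_table;

CREATE TABLE target_table (
  id    NUMBER,
  label VARCHAR2(100)
) CACHE;  -- CACHE hints Oracle to keep the table's blocks in the buffer cache

CREATE INDEX target_table_ix ON target_table (id);

-- After the DF has run, refresh optimizer statistics ("analyze the table"):
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'TARGET_TABLE');
```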
If you execute SELECT COUNT(*) FROM VIEW, how long does it take? If you aren't doing anything special in your data flow (Table -> Table), I don't know that enabling cache would be of much benefit. The bulk loader commit size has been shown to be about optimal at 5,000; 10,000 may be a bit high but probably isn't a huge negative for performance.
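To isolate the source side, you can run the count directly in a Sybase client (the view name below is a hypothetical placeholder):

```sql
-- If this query alone takes several minutes, the bottleneck is the
-- source view itself, not the Data Services load.
SELECT COUNT(*) FROM my_source_view;
```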
If you don't have a simple data flow, check what the optimized SQL looks like. It might be doing something you aren't expecting. Perhaps you could change or simplify something that would greatly improve the performance.
I would agree that the most likely bottleneck is the Sybase view. Look into tuning that view. I can move 700K records in less than a minute when moving from Oracle to Oracle.
Also make sure your monitor sample rate is set to 50,000 (instead of the default 1,000), as writing too many log messages can hurt performance.