The time is lost mainly in the data flows that insert a relatively large number of rows.
In general we do a truncate and full insert for each fact table, and some tables have several million rows. In version XIR30, for example, one data flow took 2-3 minutes; now it takes more than an hour!
Many years ago I had issues upgrading DI… the issue turned out to be the sampling rate. We bumped this up to the 60,000 maximum and that resolved the performance issues.
Do you have the pre- and post-upgrade DSConfig.txt files to compare differences in settings? (See the diff sketch below.)
Just a few thoughts, but Werner has been able to help me get through a lot of tough issues in the past.
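If you still have both files, a minimal way to spot changed settings is a unified diff. A sketch in Python; the two file paths are assumptions, adjust them to your installation:

    import difflib

    # Hypothetical paths to the saved pre- and post-upgrade configs.
    old = open(r"C:\old_install\DSConfig.txt").read().splitlines()
    new = open(r"E:\new_install\DSConfig.txt").read().splitlines()

    # Print only the lines that differ between the two configs.
    diff = difflib.unified_diff(old, new, fromfile="pre-upgrade",
                                tofile="post-upgrade", lineterm="")
    for line in diff:
        print(line)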
I’m running some tests, and I have found the possible reason for the degraded performance.
We turned off the antivirus (Trend Micro), and execution returned to the previous times.
In fact, the previous version was installed on drive C:, and the current one is on drive E:.
Tonight I’m executing the full process; I hope this solves the problem definitively.
When you execute or schedule a job, it’s the number that defaults to 1000: how often DI should update the status in the Web Admin. Change this to 60000. You can also right-click the job in Designer and edit it there, so that it defaults to that value on future runs/schedules.
But an upgrade has taken place, in which case there shouldn’t be a change in the Monitor Sample Rate. It should have carried over as-is if it had been set up previously, shouldn’t it?
Not necessarily. Mab had said that the DI installation location had moved from C: to E:, and that turning off the antivirus software had an impact. It could be that the AV software had an exclusion for the DI directory on C:; when it was moved to E:, there was no exclusion.
And all of the writes out to the monitor log files would cause the AV software to stay very busy.
Also, the monitor sample rate can be set in two places: in the job properties you set the default, and when executing or scheduling you set the value actually used. Maybe the old schedule had the sample rate set to a high value, and you have now recreated the schedule with the default?
Anyway, I am glad you found so quickly that it was caused by the virus scanner. I hadn’t thought about that, although we have had similar cases before.
We had a similar issue with our DI 11.5 environment at a previous employer. We knew what the issue was, but IT was being extremely pigheaded and refused to modify the antivirus settings on the server.
We recently changed AV software on our server (which has both the DW/SQL and BODI on the same box), and until we got all of the exclusions worked out, the performance impact was crazy.
Whenever the monitor log file is appended with another line, the virus scanner has to check whether the file is now a potential risk. Otherwise somebody could write a virus one byte at a time, and only the final byte would complete it.
The same argument applies to databases: whenever a block is changed, the scanner has to re-evaluate it.
Hence increasing the monitor sample rate means less frequent writes, and therefore less load.
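To put rough numbers on that, here is a minimal back-of-the-envelope sketch (in Python; the 6-million-row table size is only an illustrative assumption) of how many monitor log appends a full load triggers at the default and the maximum sample rate:

    import math

    def monitor_writes(rows: int, sample_rate: int) -> int:
        # DI appends a status line roughly every `sample_rate` rows,
        # so a full load produces about rows / sample_rate appends.
        return math.ceil(rows / sample_rate)

    rows = 6_000_000  # assumed fact table size, illustration only
    for rate in (1000, 60000):
        print(f"sample rate {rate:>6}: ~{monitor_writes(rows, rate)} appends, "
              f"each one a candidate for an AV rescan")

At the default of 1000 that is about 6000 appends per data flow, versus roughly 100 at 60000, which is why raising the rate takes so much pressure off an on-access scanner.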