We recently found that the screen refresh of “Batch Job Status” in the Management Console is getting slower. It now takes a few minutes to refresh.
I am not sure whether the slow response is due to the growing data volume in the AL_HISTORY table; there are currently over 107,000 records in that table.
I’d say yes and no. 1200 logs could take a while to read from the file system. Do you really need that many old logs? Could it be an error that they are not getting deleted?
Log files are not read until you click the link to view the file, so 1200 records in AL_HISTORY is not a big number. The Batch Job History page in the Management Console reads its data from the ALVW_HISTORY view; check the number of rows in each table used in this view.
Run a SELECT * FROM ALVW_HISTORY and check how long it takes to execute.
The statement
SELECT * FROM ALVW_HISTORY
gives results immediately (also ~1200 records).
The problem comes from a job that runs every 10 minutes.
Is there a way to purge the logs automatically for a particular job only?
Or is there a way to run a job without inserting into the log tables?
Aha… now I understand. Yes, I have been following your thread. I basically set up two Job Servers: one for high-frequency jobs (in my case, running every 3 minutes) and one for daily/weekly/monthly jobs.
We then used batch scripts to purge the files every day.
But are you talking about only one job that runs every 10 minutes?
In my view, the root cause of the problem is this high-frequency job. Our other jobs are weekly.
Today, in AL_HISTORY, I have 1117 records for this high-frequency job, and 130 records for all the other (weekly) jobs.
We have a 30-day log retention period.
When you purge the files with your script, do you remove the following 4 files for each instance of the high-frequency job?
trace*.txt
trace*.idx
monitor*.txt
error*.txt
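For what it's worth, a daily purge of exactly those four file patterns could be sketched like this, assuming a POSIX shell on the Job Server and that the files sit directly in its log directory; the directory path and the 30-day retention are placeholders you would adjust:

```shell
#!/bin/sh
# purge_ds_logs: delete job log files older than N days.
# The log directory layout is an assumption; point it at your
# Job Server's log directory (e.g. <LINK_DIR>/log/<job_server>).
purge_ds_logs() {
    log_dir="$1"
    retention_days="${2:-30}"
    # The four per-run files named above:
    for pattern in 'trace*.txt' 'trace*.idx' 'monitor*.txt' 'error*.txt'; do
        # -mtime +N matches files last modified more than N*24h ago
        find "$log_dir" -maxdepth 1 -type f -name "$pattern" \
            -mtime "+$retention_days" -exec rm -f {} +
    done
}
```

You could call `purge_ds_logs /path/to/log 30` from a daily cron job or scheduled task. Note that this only removes the files on disk; it does not delete the corresponding rows from AL_HISTORY in the repository.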