DB_RUNRECOVERY Error

Hi all,

One of my colleagues is having an issue running one of his jobs. As part of a large ERP replacement program, they are using Data Services to migrate data to SAP. Note that the data is not loaded directly at this point; it is written to IDocs or flat files.

The error he is experiencing when running the job is:

Initializing environment failed for pageable cache with error <-30975> and error message <DbEnv::open: DB_RUNRECOVERY: Fatal error, run database recovery>

The team is leveraging the SAP Best Practices for Data Migration toolkit and using the HCM module. The job runs fine through a number of previous dataflows and stops at the start of this one. The dataflow is an enrichment flow that adds extra data to the time-worked exports.

I notice that a lot of lookup files are generated in the PCache directory for this user while the job is running. I'm wondering if he is hitting some limit with BDBXml (I think that is what is used).

Any thoughts? I will probably raise a case with SAP, but I'm wondering if anyone here has any insight.

Thanks

Glenn

Update: Watching the PCache directory, there are 114 files sitting in it (lookups, equicache, etc.)


GlennL :australia: (BOB member since 2005-12-29)

The error is basically saying the pageable cache database has a problem. Out of disk space, a ulimit (Unix) set, …???
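
For a quick sanity check on the disk-space side, something along these lines will tell you how much is free where the pageable cache lives. The path below is just a placeholder — point it at the actual pageable cache directory from your job server configuration:

import shutil

# Free space on the drive that holds the pageable cache.
# PCACHE_DIR is a placeholder -- use your job server's actual pageable cache directory.
PCACHE_DIR = r"D:\PCache"

total, used, free = shutil.disk_usage(PCACHE_DIR)
print(f"Free space on pageable cache drive: {free / 1024**3:.1f} GB")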


Werner Daehn :de: (BOB member since 2004-12-17)

Thanks Werner,

I realised last night that I didn't outline the environment; I was in a rush to get the question out before my last meeting of the day.

The environment is 32-bit Windows Server 2003, so my understanding is that ulimit isn't relevant. Also, there is over 100 GB of hard disk space free on the drive being used for DS (D:) and 37 GB free on the system (C:) drive.

Version: Data Services 3.2 (12.2.1.3)


GlennL :australia: (BOB member since 2005-12-29)

A little update, as I have contacted tech support and am extremely frustrated. I still don't understand why, when we contact support directly, they continually point us to forum topics on SDN. It's almost like we are better off using those forums than talking to support. :hb: Sorry, it happens too often at my customers and they vent on me as well.

After testing and changing the flow in question to use In-Memory cache, it is looking more like there is a limit to the number of files that the pageable cache can have open in a job (there is 100 GB of free space on the drive where the cache sits). Switching the flow in question to In-Memory merely caused the next dataflow to error with the same issue.
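
For what it's worth, this is roughly how I've been watching the cache directory while the job runs — the path is a placeholder for our environment, so adjust it to wherever your job server's PCache sits:

import os
import time

# Rough monitor: print the file count and total size of the pageable cache
# directory every few seconds while the job runs (Ctrl+C to stop).
# PCACHE_DIR is a placeholder -- point it at your job server's PCache directory.
PCACHE_DIR = r"D:\PCache"

while True:
    entries = [e for e in os.scandir(PCACHE_DIR) if e.is_file()]
    total_mb = sum(e.stat().st_size for e in entries) / 1024**2
    print(f"{time.strftime('%H:%M:%S')}  files={len(entries)}  size={total_mb:.0f} MB")
    time.sleep(5)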

Tech support is currently asking me to change the PAGEABLE_CACHE_BUFFER_POOL_SIZE_IN_MB setting to 1536, which to me is similar to simply using In-Memory, or just postponing the point of failure. In addition, this is only one job running at the moment, and that dataflow is using 3660 records as a sample. Small numbers in anyone's language, but the final production runs will have millions of records.
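
I'm assuming that setting is one of the engine parameters in DSConfig.txt on the job server (please correct me if yours lives elsewhere). A quick way to see what cache-related entries are currently in there — the install path below is from our environment and will differ on yours:

from pathlib import Path

# Dump the cache-related lines from DSConfig.txt.
# The path is an assumption based on our install -- adjust it to your job server.
DSCONFIG = Path(r"D:\Program Files\Business Objects\BusinessObjects Data Services\bin\DSConfig.txt")

for line in DSCONFIG.read_text().splitlines():
    if "CACHE" in line.upper():
        print(line)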

Whilst I appreciate the workarounds, I need to be able to guarantee to the project that this is a fix and not merely a delay of the problem.

Sorry for the rant; it seems to be my second one in as many days on here. My client is currently going through a total refresh of their ERP stack and replacing it with SAP. It's a huge project with significant investment in BO and SAP products, so the pressure is on :smiley:

Cheers

Glenn


GlennL :australia: (BOB member since 2005-12-29)

I have the same problem…

Anyone else have this problem?

Thanks


pauljrg :belgium: (BOB member since 2011-09-13)

Could you post your setup, error message and BODS version please? Not all errors are exactly the same, and neither are the causes :wink:


Johannes Vink :netherlands: (BOB member since 2012-03-20)

And definitely let us know the versions you are on!


ganeshxp :us: (BOB member since 2008-07-17)

The problem was resolved by no longer running the source table in cache. It had over 10,000,000 rows.

Thanks anyway :wink:


pauljrg :belgium: (BOB member since 2011-09-13)