One of my colleagues is having an issue running one of his jobs. As part of a large ERP replacement program, the team is using Data Services to migrate data to SAP. Note that the data is not loaded directly into SAP at this stage; it is written out to IDocs or flat files.
The error he is experiencing when running the job is:
Initializing environment failed for pageable cache with error <-30975> and error message <DbEnv::open: DB_RUNRECOVERY: Fatal error, run database recovery>
The team is leveraging the SAP Best Practices for Data Migration toolkit and is working on the HCM module. The job runs fine through a number of earlier dataflows and stops right at the start of this one. The dataflow is an enrichment flow that adds extra data to the time worked exports.
I notice that there are a lot of lookup files generated in the PCache directory for this user when the job is running. I'm wondering if he is hitting some limit with BDB XML (I think that is what the pageable cache uses).
Any thoughts? I will probably raise a case with SAP, but I was wondering if anyone here has any insight.
Thanks
Glenn
Update: Watching the PCache directory, there are 114 files sitting in it (lookups, equicache, etc.) while the job runs.
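In case anyone wants to watch the same thing on their side, this is the rough Python script I have been using to keep an eye on the PCache directory while the job runs. The path below is only a guess at a default Windows install; point it at wherever your Job Server's pageable cache directory is actually configured (Server Manager will tell you).

import os
import time

# Point this at the Job Server's pageable cache directory -- the path below is
# just an assumption for a default Windows install, adjust it for your environment.
PCACHE_DIR = r"C:\Program Files\Business Objects\BusinessObjects Data Services\PCache"

while True:
    count = 0
    total_bytes = 0
    # Cache files can sit in per-job subfolders, so walk the whole tree.
    for root, _dirs, files in os.walk(PCACHE_DIR):
        for name in files:
            path = os.path.join(root, name)
            try:
                total_bytes += os.path.getsize(path)
                count += 1
            except OSError:
                pass  # a file may disappear between listing and stat
    print("{0}  {1} files, {2:.0f} MB".format(
        time.strftime("%H:%M:%S"), count, total_bytes / (1024.0 * 1024.0)))
    time.sleep(10)

That is how I arrived at the 114 file count above; it also shows the cache is nowhere near filling the disk when the error hits.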
A little update, as I have contacted tech support and am extremely frustrated. I still don't understand why, when we contact support directly, they continually point us to forum topics on SDN. It's almost as if we are better off using these forums than talking to support. Sorry, it happens too often at my customers' sites and they vent at me as well.
After testing and changing the settings for the flow in question to use In-Memory cache, it is looking more and more like there is a limit to the number of files the pageable cache can have open in a job (there is 100 GB of free space on the drive holding the cache, so it is not a disk space issue). Changing the flow in question to In-Memory merely caused the job to error in the next dataflow with the same issue.
Tech support is currently asking me to change the setting PAGEABLE_CACHE_BUFFER_POOL_SIZE_IN_MB to 1536, which to me is much the same as simply using In-Memory, or just postponing the point of failure. In addition, this is only one job running at the moment, and that dataflow is only using a sample of 3,660 records. Small numbers in anyone's language, but the final production runs will have millions of records.
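For reference, the change support is asking for goes into DSConfig.txt on the Job Server machine. I believe the parameter sits under the [AL_Engine] section, but check your own file before editing, and as far as I know the Job Server needs a restart to pick it up:

[AL_Engine]
PAGEABLE_CACHE_BUFFER_POOL_SIZE_IN_MB = 1536

As I said above, to me this just throws more memory at the cache rather than explaining why the job falls over with 114 cache files and a tiny sample.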
Whilst I appreciate the workarounds, I need to be able to guarantee to the project that this is a fix and not merely a delay of the problem.
Sorry for the rant; it seems to be my second one in as many days on here. My client is going through a total refresh of their ERP stack and replacing it with SAP. It's a huge project with significant investment in BO and SAP products, so the pressure is on.