I am having a very strange error with the GL Rapid Mart.
I am on DS 4.0 SP3 Patch 2 and BI 4.0 SP4.
The GL Rapid Mart has been configured, but when we run the ETL job it fails on a data flow with the error
“cannot open file <server/path/xyz.dat>, check its path and permissions”.
When we look at the path on the SAP server, the .dat file has in fact been created. The next ETL run gets past that point but fails on another data flow with the same error, for a different .dat file.
One more run of the ETL job gets past the previously failed data flow and fails on the next data flow with a similar error…
At this rate it looks like it will take around 30-odd ETL job runs to complete the initial load successfully.
Has anyone seen this before?
We have a support ticket open as well, but no response as of now.
The file can clearly be read, since the next run is able to read it.
Why does the second run not delete and recreate the file, or does it?
Can you open the file and see its contents?
I can think of two root causes: either timing or locks.
Timing: if DF1 creates file0 but wants to read file1, and in parallel DF2 creates file1, it would appear as if the file exists but cannot be read yet. Having said that, I have no idea how such a thing could happen.
Locks: if the writer still has a lock on the file, you cannot read it in Windows. Are you using a Windows file share? Is the SAP datastore set to execute_in_background? Again, such a problem should not actually be possible, as we start the ABAP and only begin reading once it has finished, but it has to be something…
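To make the lock theory concrete: on Windows, a file that is still held open for exclusive writing shows up in the directory listing but cannot be opened for reading. A minimal sketch of how a reader could detect and wait that out, assuming a made-up UNC path and purely illustrative timings:

```python
# Hypothetical illustration of the lock theory: if the writer (or the SMB
# client) still holds an exclusive handle on the .dat file, opening it for
# reading fails even though the file is visible in the directory listing.
import time

def wait_until_readable(path, retries=10, delay=5):
    """Retry opening the transport file until the writer releases its lock."""
    for attempt in range(retries):
        try:
            with open(path, "rb") as f:
                return f.read()        # lock released, file is readable
        except PermissionError:        # sharing violation: writer still holds it
            time.sleep(delay)
    raise RuntimeError(f"{path} still locked after {retries} attempts")

# Example call (placeholder path):
# wait_until_readable(r"\\sapserver\share\xyz.dat")
```

This is only a way to reason about the symptom, not something Data Services itself does.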
Does the second run delete and recreate the file? I assume it creates the file only if it does not already exist; since the first run created the file, the second run consumes it instead of recreating it… That is also the case with the 10 other .dat files which were created as part of previous runs.
After the job fails, we can open the file and see its contents…
Windows hosts the file share.
The SAP datastore is not set to execute in background, as suggested by the Rapid Mart's quick guide. Do you think we need to set the datastore to run in background?
By the way, SAP support just replied. They are asking us to switch to an FTP transfer method and also to write some kind of delay program… which we are not comfortable with when we are using a Rapid Mart.
That delay program might be a workaround, but first I would prove it really is a timing issue. If it turns out to be one, I would be very, very interested in understanding the root cause.
That execute_in_background might be worth a try as well.
The jobs are running at this point with the Execute in Background option set to Yes… The job has crossed more than 10 workflows where it failed previously… Fingers crossed…
I still don't understand how execute in background helps with our timing issue, though.
Both methods should work, and I would love to know why one of them does not.
Basically, without execute_in_background we call the ABAP program like a remote procedure call. We call it, it does its work for the next hours, and when it is finished the procedure returns to us. How can the procedure return prematurely without an error???
With execute_in_background we submit the ABAP program as a background job and check its status every five seconds: is it waiting, running, still running, or finished? Once the status is finished successfully, we read the data file it created.
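To illustrate the difference between the two call patterns described above, here is a minimal sketch; `submit_job`, `check_job_status` and the status strings are placeholders, not the actual Data Services internals:

```python
# Conceptual sketch of the two ABAP call patterns: synchronous RPC-style vs.
# submit-as-background-job with status polling every five seconds.
import time

def run_rpc_style(start_abap):
    # Without execute_in_background: a single blocking call that should only
    # return once the ABAP program has finished writing its .dat file.
    start_abap()
    return "read the .dat file now"

def run_background_style(submit_job, check_job_status):
    # With execute_in_background: submit the ABAP program as a background job,
    # then poll its status every five seconds until SAP reports it finished.
    job_id = submit_job()
    while True:
        status = check_job_status(job_id)   # e.g. waiting / running / finished
        if status == "finished":
            return "read the .dat file now"
        if status == "aborted":
            raise RuntimeError(f"background job {job_id} aborted")
        time.sleep(5)
```

In both cases the .dat file should only be read after the ABAP side is done, which is why a premature read with the RPC variant is so puzzling.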
With “Execute in Background” we are seeing the trace messages "Batch job <…> is submitted. Job number is "… as you explained.
The Rapid Mart job has made significant progress and should complete any time now… It is just that in our case the RPC calls did not work.
Logically both options should work, and I agree with you… but if batch mode works, I will keep my logical thinking aside, as that is how the Rapid Marts are built internally.
I would like to have at least one successful run, as we have been struggling with the Rapid Marts for the last couple of months.
We are still not able to run the GL Rapid Mart through even once… The new issue is that the GL Rapid Mart run now creates database logs of 100 GB and errors out.
I am not sure whether we need more than 100 GB of log space for the database. Also, the database for the GL Rapid Mart data mart is set to Auto Recovery mode (SQL Server)… in spite of that the logs keep getting created and the job fails.
Is there a recommendation for database log size while implementing a GL Rapid Mart? By the way, the data we are loading is very small, and the ECC data is only available since 2011.
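For what it is worth, a quick way to see which recovery model and how much log the target database is actually using; this is only a sketch, assuming SQL Server is reachable via pyodbc, and the server and database names below are placeholders:

```python
# Check the recovery model and transaction-log usage of the Rapid Mart target.
# Server name, driver and database name are assumptions for the example.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=GL_RAPIDMART;Trusted_Connection=yes;"
)
cur = conn.cursor()

# Recovery model in effect: SIMPLE truncates the log at each checkpoint,
# while FULL keeps everything until a log backup is taken.
cur.execute(
    "SELECT name, recovery_model_desc FROM sys.databases WHERE name = ?",
    "GL_RAPIDMART",
)
print(cur.fetchone())

# Current log size and percentage used for every database on the instance.
cur.execute("DBCC SQLPERF(LOGSPACE)")
for row in cur.fetchall():
    print(row)
```

If the recovery model turns out to be FULL without regular log backups, the log will keep growing for the duration of the initial load regardless of how small the source data is.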