SAP Note 1388857 (Error: “Cannot open file <>. Please check its path and permissions.” - Data Services XI 3.2 (12.2))
–> I have no problem executing other jobs and reading the files. I only have the issue when using the ABAP data flow.
It is not clear to me which user is trying to access the files. The SAP user? The Job Server? If the Job Server, I don’t understand why it has issues in this particular job when all others are fine.
Isabelle, the information you provided is very confusing.
On the one hand you state that you have no problems with files, just with the BAPI call. On the other hand, a BAPI call is not related to files.
Can we test step by step?
The first thing I would do is replicate the BAPI job but without the BAPI. You said you do not have problems with the files, but reading your text carefully, you never stated that the target file can be written. So read the Excel file and write a file at the same location as the BAPI output, just copying the data. If you are right, this dataflow will not have any issues.
If I am guessing right, the Job Server computer, running under the local Administrator account, does not have access to the target file: either because it is a network path - local Admins have full access on local files but no permissions on network shares - or because the path does not exist on the Job Server computer.
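To make that guess testable: here is a minimal Python sketch you could run on the Job Server machine, under the same account the service logs on with. The path below is a placeholder, not taken from this thread - substitute the loader's actual path.
[code]
import os

# Hypothetical target path for illustration -- use the loader's real path.
target_dir = r"\\fileserver\share\ds_output"

# Does the path even exist as seen from the Job Server machine?
print("exists:", os.path.isdir(target_dir))

# Can this account create and remove a file there?
probe = os.path.join(target_dir, "ds_probe.tmp")
try:
    with open(probe, "w") as f:
        f.write("probe")
    os.remove(probe)
    print("writable: yes")
except OSError as e:
    # A network share will typically fail here for the local
    # Administrator account, which would match the guess above.
    print("writable: no ->", e)
[/code]
If it prints "writable: no" when run as the service account but works under your own login, you have found the problem.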
I have created several dataflows NOT involving the use of BAPI functions in queries. In those dataflows, I don’t experience file access errors.
I just started trying to use the DS functions on SAP servers. I created a data flow as mentioned in my post, which contains:
An Excel source file
A query using a BAPI function for an ERP system
A flat target file
When I execute the job, I get errors that I cannot access the source and target files. I don’t think I have mentioned that I have issues with the BAPI call itself. What I have mentioned, or at least tried to, is that I have file access issues with the source and target files, which reside on the local machine where the job server is also running.
What I don’t understand is that when I run OTHER jobs which have source and target files in the same location, I don’t get file access errors.
I hope this clarifies what I was trying to explain.
I will investigate further to see why I suddenly get those access errors while I don’t get them with other data flows using files in the same locations :(.
What is the logon information (user) for the service?
What is the path of the source file according to the file reader? What I mean by that is: go to the dataflow and open the reader object, rather than editing the object from the object library. The object itself has a default path; the actual path used is defined in the reader.
Same thing for the loader.
Copy the exact error message you get.
Prove to me that the reader/loader path exists on the server running the jobserver, not the Designer computer.
Can you open the Excel file right now? Maybe somebody else has opened it, e.g. a still running/hung DS job?
Can you delete the target file? Somebody might hold a lock there. (A sketch covering the last few checks follows right after this list.)
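If it helps, the last few checks can be scripted. A rough Python sketch to run on the Job Server host - both paths are placeholders, so use the exact paths you find in the reader and loader objects:
[code]
import os

# Placeholder paths -- copy the exact paths from the reader and loader.
source = r"C:\data\input.xls"
target = r"C:\data\output.txt"

# Do the reader/loader directories exist on the Job Server host?
for path in (source, target):
    print(path, "-> directory exists:", os.path.isdir(os.path.dirname(path)))

# Can the Excel source be opened, or does a hung job still hold it?
try:
    with open(source, "rb"):
        print("source opens fine")
except OSError as e:
    print("source open failed:", e)

# Can the target be deleted, or does something hold a lock on it?
if os.path.exists(target):
    try:
        os.remove(target)
        print("target deleted - no lock")
    except OSError as e:
        print("target delete failed (lock?):", e)
[/code]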
I’m getting the same error FIL-080101, but when Data Services tries to reach its own error log index file!
The error and index files are both created, and the error happens almost as soon as the first dataflow is launched. I tried tweaking the fileopen_retry_time as another post suggested, but I don’t think it is considered at all. I don’t think it is a security setting either. Other jobs are successful from this same job server.
Error file content:
(12.2) 12-09-10 17:50:43 (E) (233980:0001) FIL-080101: Cannot open file /data2/dataservices/log/JS_QA_ETL434/workdata_fenix1000__ds_lcl_repo_etluser/error_12_09_2010_17_50_19_1__063d8802_3ffa_44a1_b00f_ec5c2dfc5efe.txt.idx in ‘rb’ mode. OS error message . OS error number <2>. al_engine reached ‘fileopen_retry_time’ limit so exiting
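For what it’s worth, the final clause of that message suggests the engine keeps retrying the open until a time limit expires. Conceptually it behaves something like the following - this is a hedged Python illustration of the retry behavior implied by the log, not Data Services source code:
[code]
import time

def open_with_retry(path, mode="rb", retry_time=60.0, interval=1.0):
    """Retry opening a file until it succeeds or retry_time elapses,
    roughly mirroring the 'fileopen_retry_time' behavior in the log."""
    deadline = time.monotonic() + retry_time
    while True:
        try:
            return open(path, mode)
        except OSError as e:
            if time.monotonic() >= deadline:
                # Mirrors: "al_engine reached 'fileopen_retry_time'
                # limit so exiting", with OS error number 2 (ENOENT).
                raise RuntimeError(
                    f"Cannot open file {path} in {mode!r} mode. "
                    f"OS error number <{e.errno}>.") from e
            time.sleep(interval)
[/code]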
The cause of the error you are getting is a little different: there is some issue writing to the job error log. This may happen if you have multiple DFs running in parallel.
Are you running multiple DFs in parallel?
What is getting logged in the error log? Are you getting lots of conversion warnings?
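To see how that kind of collision can arise, here is a deliberately simplified Python sketch of the race: a reader trying to open an index file that a parallel writer has not created yet, producing OS error number 2 just like the logs above. Nothing here is Data Services internals; it only illustrates the timing problem.
[code]
import multiprocessing as mp
import os, time

IDX = "error_log.txt.idx"

def writer():
    time.sleep(0.5)              # the writing process is delayed...
    with open(IDX, "w") as f:
        f.write("index data")

def reader():
    try:
        with open(IDX, "rb"):    # ...so the reader usually sees ENOENT
            print("open succeeded")
    except OSError as e:
        print("open failed, OS error number:", e.errno)  # typically 2

if __name__ == "__main__":
    w = mp.Process(target=writer)
    r = mp.Process(target=reader)
    w.start(); r.start()
    w.join(); r.join()
    os.remove(IDX)
[/code]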
I got the same error message. I have several parallel-running dataflows in the job. It was working fine for about 9 months, but since this morning the job is hanging after executing some dataflows. It never errored out, but this is what I see in the log file. Can anybody tell me why this started happening all of a sudden?
(11.7) 01-05-11 11:47:34 (E) (1748:000) FIL-080101: Cannot open file e:\apps\Business Objects\Data Integrator 11.7/log/ksoveiapp036_1/sedw__ttalexan/error_01_05_2011_11_21_26_1__42e64a18_e09c_4992_aecf_78db6738ee2b.txt.idx in ‘rb’ mode. OS error message is:No such file or directory OS error number is:2 al_engine reached ‘fileopen_retry_time’ limit so exiting
(11.7) 01-05-11 12:03:44 (E) (1748:000) FIL-080101: Cannot open file e:\apps\Business Objects\Data Integrator 11.7/log/ksoveiapp036_1/sedw__ttalexan/error_01_05_2011_11_21_26_1__42e64a18_e09c_4992_aecf_78db6738ee2b.txt.idx in ‘rb’ mode. OS error message is:No such file or directory OS error number is:2 al_engine reached ‘fileopen_retry_time’ limit so exiting
Sorry, I never got a notification that this question had a reply. The job has only one dataflow, but within it there are many lookups, which I bet can be running in parallel. There are no conversion errors being logged.
I have tried to troubleshoot this extensively but I can’t pinpoint the issue. If I take out some of the lookups the job might work, but it doesn’t look deterministic: I can remove some and it will work, then at a later time I remove the same ones and it won’t work…
438386 1 FIL-080101 2/8/2011 10:51:22 AM Cannot open file /bobjdi/dataservices/log/JSBOBJDEV2/michaela__michaela/error_02_08_2011_10_40_01_887__154ee9af_3754_4d93_afdd_80602eaa9300.txt.idx in ‘rb’ mode. OS error message . OS error number <2>. al_engine reached ‘fileopen_retry_time’
But after the 4th attempt, the error disappeared. As much as I would like to ignore this error, I am worried it will reappear after we go live.
If anyone has found a root cause to this error, please share!!!
Dear all:
I have submitted a ticket to SAP and received the response below:
[i]In reviewing the information you submitted, I noticed that your ulimit for memory is rather low - much lower than our recommendation.
Can you please modify that setting, restart the job service, and then see if that affects the issue you are experiencing?
These are our recommended settings for AIX for the DS user account:
User resource limit      Value        Comments
file (blocks)            4194302      At least 2 GB
data (kbytes)            unlimited
stack (kbytes)           512000       At least 500 MB
memory (kbytes)          2097151      At least 2 GB
nofiles (descriptors)    2000         At least 2000[/i]
I have applied the recommended ulimit settings on one of my job servers so I can compare the errors and see whether the above suggestion is the remedy.
Please feel free to try and see whether the ulimit setting helps fix the error.
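On the Unix side, you can inspect the limits the DS account actually gets with Python's resource module. A quick sketch to compare against the table above - note that getrlimit reports bytes (or plain counts), not blocks/kbytes, so convert before comparing, and run it as the DS user:
[code]
import resource

# Rough mapping from the recommended settings to rlimit constants.
# RLIMIT_RSS ("memory") is not exposed on every platform, hence getattr.
checks = {
    "file":    resource.RLIMIT_FSIZE,
    "data":    resource.RLIMIT_DATA,
    "stack":   resource.RLIMIT_STACK,
    "memory":  getattr(resource, "RLIMIT_RSS", None),
    "nofiles": resource.RLIMIT_NOFILE,
}

def show(value):
    return "unlimited" if value == resource.RLIM_INFINITY else value

for name, rlim in checks.items():
    if rlim is None:
        continue  # limit not exposed on this platform
    soft, hard = resource.getrlimit(rlim)
    print(f"{name:8} soft={show(soft)} hard={show(hard)}")
[/code]
Remember that changing the limits only takes effect for the Job Server after the service is restarted under the new settings.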
The bug SAP created for my scenario was addressed in 12.2.3.2, which finally solved my issue. The bug description is very, very vague - just something about AIX + order by = wrong SQL - but it did help me (even though I don’t have an order by).
I again apologize, as I didn’t get a notification about your post. The ADAPT number and description is:
ADAPT01535325
In some cases, a Data Services job that uses order by and the lookup and nvl functions in a query mapping crashes on AIX. This issue has been fixed.
I know the description is quite vague, but that is something the SAP engineering team decided.
You can find it under the Data Services Fix Pack 12.2.3.2 Release Notes.