BusinessObjects Board

R/3 df failed --database client cannot connect to named pipe

Greetings!!

Need some help here.

The R/3 dataflow in the Sales Rapid Mart job failed; below is the error it threw.

(12.1) 06-03-09 15:24:13 (2360:3108) FIL-080133: |SubDataflow DF_SalesOrderStage_SAP_1_3|Pipe Listener for IPCTarget1_R3_SalesOrdersFact1
                                                 The database client cannot connect to named pipe <\\.\pipe\5b21d774-15e6-4fe0-992a-b39eaf406ce6>

The al_engine processes on the job server are not growing much, and even after the error I can see that the al_engine processes are still hanging around.

We are using Data Services 3.2; the target is Oracle 10g.

SAP R/3 DS configuration
Using ABAP execution --> Generate and Execute
Execute in background --> yes
DT Method --> FTP

Thanks.


urk :us: (BOB member since 2005-11-29)

Did you ever get this working? I get the same error when running the SAP Sales Rapid Mart. My target database is Oracle 10g, and I suspect it is a timeout issue around the Oracle connection.


778899 :us: (BOB member since 2009-10-16)

I found the culprit here, but not necessarily the final fix.

I discovered that removing "Run as a separate process" from the lookup_ext function (the lookup from EXCH_RATE_GBL to TO_FACTOR_GBL in the query LookupRename in data flow DF_SalesOrderStage_SAP) makes the data flow run without sub data flows, and therefore without the timeout. "Run as a separate process" is meant as a performance tweak, but in my case turning it off did not cause a notable performance problem. I suspected that the timeout for sub data flows checking in with their parent data flow would be controlled by the setting DFRegistrationTimeoutInSeconds in DSConfig.txt, but increasing that setting beyond 10 minutes did not help.
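For anyone who wants to check that setting: it lives in DSConfig.txt on the job server machine. A rough sketch of the entry follows; I am not certain the section header is the same in every release, so treat the [AL_Engine] header as an assumption and search the file for the parameter name. The value is in seconds:

    [AL_Engine]
    ...
    DFRegistrationTimeoutInSeconds = 600

In my case raising it well beyond 600 (10 minutes) made no difference, so disabling "Run as a separate process" remained the actual workaround.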


778899 :us: (BOB member since 2009-10-16)

778899,

I am running into this very issue. Did you stick with the workaround you described or did you ever find a better solution?

Thanks.

KMS


kmspsu93 :us: (BOB member since 2006-04-06)

Any solution on this?


pmslic (BOB member since 2010-06-24)

There were solutions given in this topic, yes.

If you are not specific about your problem, then my answer will also be vague…


Johannes Vink :netherlands: (BOB member since 2012-03-20)

I believe they said they found the culprit but not the final fix.

We are also getting “Pipe Listener for IPCTarget1_Qry_Role-Mapping25
The database client cannot connect to named pipe <\\.\pipe\0da7df0b-e5a5-44ac-8192-5e8c2ce02e69>.”

I’ve been asked to look into it, but I am not familiar with Data Integrator.

I know they’ve removed “run as a separate process”.


pmslic (BOB member since 2010-06-24)

Ah now we are talking :wink:

With certain versions of BODS, "run as a separate process" caused more problems than it solved. In general it is a best practice NOT to use it unless there is a very clear reason for it (in my opinion, almost never).

The pipe error is vague. Normally it is related to a crash where BODS could no longer report what the cause of the crash was.

However, your error message says something about not being able to connect to a database.

How do you read from or write to the tables? What kind of connections/sources/targets are involved? Do you use transforms like hierarchy flattening, data validation, or data transfer?


Johannes Vink :netherlands: (BOB member since 2012-03-20)

My understanding/interpretation is that "cannot connect to named pipe" errors are due to timeouts.

Especially when using R/3 dataflows to extract from ECC, as in the Rapid Marts ETL, with an old-style transfer method such as Shared Directory: if an ECC extraction takes longer than 10 minutes, you will get "cannot connect to named pipe" whenever anything in the main dataflow that contains the R/3 dataflow has "run as separate process" = yes.
I believe this is because the additional al_engine processes for the separate processes are spawned at the start of the main dataflow and communicate with each other over named pipes, and those open named pipes time out after 10 minutes of inactivity…

With DS 4.1 and later you can switch to transfer method RFC which should avoid the problem altogether.

Wild guessing on my side: DS would either need to spawn those additional al_engine processes only after the R/3 dataflow (using an old-style transfer) has finished, or send some kind of keep-alive packet to every open named pipe after 9 minutes of inactivity.
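To make the timeout theory easier to test, here is a hypothetical diagnostic sketch (not anything that ships with DS) you could run on the Windows job server to see whether a pipe named in the trace log still has a listening al_engine behind it. It is plain Python 3 calling the Win32 WaitNamedPipe API; the GUID is simply the one quoted in the first post and will be different for every run:

    # Hypothetical diagnostic sketch, not part of Data Services.
    # Checks whether a named pipe from the trace log still has a listening server instance.
    import ctypes

    PIPE_NAME = r"\\.\pipe\5b21d774-15e6-4fe0-992a-b39eaf406ce6"  # pipe name copied from the error log
    TIMEOUT_MS = 5000

    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
    # WaitNamedPipeW returns nonzero while a server instance is available to connect to
    if kernel32.WaitNamedPipeW(PIPE_NAME, TIMEOUT_MS):
        print("A server instance is still listening on this pipe.")
    else:
        print("No listening instance (Win32 error %d); the al_engine on the other end "
              "has probably timed out or exited." % ctypes.get_last_error())

If the separate-process al_engine has already given up, you will see the second branch, which matches the "cannot connect to named pipe" message in the job log.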


micham (BOB member since 2014-03-27)

Hi, your advice is a mix of correct statements and some things I need to comment on…

The transfer method does not impact anything at all, regardless of how long the R/3 extract takes.

If a pipe error occurs due to "run as a separate process", then that option must be disabled.

I have never, and I really mean never, encountered errors due to long-running R/3 extracts (when running in background).


Johannes Vink :netherlands: (BOB member since 2012-03-20)

I am absolutely positive: I had these errors because the R/3 extraction ABAP took longer than 10 minutes to finish writing the flat file (.dat),
and that is why I mention the transfer method: in RFC mode it is pretty much impossible for sending a block of 5,000 records (or whatever your setting is) to take more than 10 minutes to prepare.
And yes, everything was always running in background mode.

It happened under 4.0 SP2 patch5 (if I remember correctly) with Rapid Marts and took me a while to figure out!


micham (BOB member since 2014-03-27)