BusinessObjects Board

Named pipe error occurred in push-down

In my DF, all of the processing can be pushed down to the database, and I can see all of the generated SQL statements. The job also compiles successfully. But when I run the job, I get the following error message:


(11.7) 06-27-08 19:15:02 (E) (24087:0001) DFC-250038: |Dataflow DF_Dm_Crm_Agg_Dsbd_Pred_S
Sub data flow <DF_Dm_Crm_Agg_Dsbd_Pred_S_2> terminated due to error <70300>.
(11.7) 06-27-08 19:15:02 (E) (24087:0010) FIL-080134: |Dataflow DF_Dm_Crm_Agg_Dsbd_Pred_S|Pipe Listener for DF_Dm_Crm_Agg_Dsbd_Pred_S_2
Named pipe error occurred:
(11.7) 06-27-08 19:15:03 (E) (22032:0001) RUN-050316: |Session JB_Dm_Crm_Agg_Dsbd_Pred_S
Exceptions occured during Dataflow execution.
(11.7) 06-27-08 19:15:03 (E) (22032:0001) RUN-050304: |Session JB_Dm_Crm_Agg_Dsbd_Pred_S
Function call <raise_exception_ext ( Exceptions occured during Dataflow execution. , -2 ) > failed, due to error <50316>:
.
(11.7) 06-27-08 19:15:03 (E) (22032:0001) RUN-050304: |Session JB_Dm_Crm_Agg_Dsbd_Pred_S
Function call <raise_exception_ext ( Exceptions occured during Dataflow execution. , -2 ) > failed, due to error <50316>:
.


What is the problem?


I think I have found the reason: the generated names for the push-down tables are too long (>30 characters).
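For context, a minimal illustration of that limit (the names below are hypothetical, not taken from the job): Oracle releases before 12.2 restrict identifiers to 30 bytes, so a generated push-down table name longer than that is rejected outright.

-- Illustration only: hypothetical names, not generated by the job.
-- Oracle (before 12.2) limits identifiers to 30 bytes.
CREATE TABLE DT_DM_CRM_AGG_DSBD_PRED_S_STAGE_01 (dummy_col NUMBER);
-- 34 characters: fails with ORA-00972: identifier is too long
CREATE TABLE DT_DM_CRM_AGG_PRED_STAGE_01 (dummy_col NUMBER);
-- 27 characters: succeeds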


kfyme (BOB member since 2007-07-10)

You have the “execute in separate process” flag set somewhere, and hence your dataflow is split into two sub-dataflows connected to each other via a pipe. One of the two sub-dataflows fails for whatever reason…


Werner Daehn :de: (BOB member since 2004-12-17)

Yes Werner, I understand this situation better now, as I have hit this error for a few other reasons as well. Thank you very much!


kfyme (BOB member since 2007-07-10)

Hello Werner, I am afraid that we need to continue the discussion on this issue…

Unfortunately, we met the ‘Named pipe error’ while using DT to enable push-down. I checked the dataflow: all of the table names in the DTs are less than 30 characters long. When I log on to Designer, click the generate-SQL-statements button in the DTs, save the DF, and then run the job again, the error disappears.

Mostly this error occurs in a new environment, and once the steps above have been taken, the error does not happen any more.

So, what might be the problem?

In fact, if I haven’t clicked the generate-SQL-statements button in one repository and I export the job and load it into a new repository, many DFs can hit this type of error (provided the dataflow contains several DTs). If I click the button before exporting, the error might not occur. I also found that it is hard to reproduce this error reliably…


kfyme (BOB member since 2007-07-10)

I have encountered the same problem in different dataflows lately. When a DF has a DT in it, it sometimes exits with a Named pipe error (FIL-080134). It only happens with DFs that have a Data Transfer in them!

It seems that when the number of rows exceeds a certain value, this error occurs.

Werner, do you have any suggestions?

(I will be on holiday, so no reply possible from my side…)


BBatenburg :netherlands: (BOB member since 2008-09-22)

Just a thought: have you explicitly defined the DT type, i.e. table or file, rather than using the automatic option? I know we had problems in 11.7 with the automatic option selected.


swiker :uk: (BOB member since 2009-02-20)

We no longer use data transfers here due to the unreliability. They mostly work but occasionally just fail with that same error message.

So my advice would be to split the dataflow and code things the way you used to have to before Data Transfers existed.

However, I’ve played around with these a bit to try to figure out where the problem may be, and my guess is it’s a bug when:

When you have a Data Transfer and the dataflow runs as pageable.
Or
When a datastore used in the Data Transfer uses an alias, i.e. DSOWNER, and you don’t use that when specifying a table name for the transfer.
Or
Something else :smiley:

Great idea but unreliable :frowning:


ScoobyDoo :uk: (BOB member since 2007-05-10)

I always turn the automatic option off and use only the table option.

I can confirm the DSOWNER issue! I once had the same problem, changed it to the hard-coded owner name and it worked (and then a week or so later it failed again…).

Werner, apparently this issue is real. Is this a known issue? It would be really sad if we can’t use the DT; it’s one of the best improvements since BODI 6.5.


BBatenburg :netherlands: (BOB member since 2008-09-22)

Basically, I have faced Named Pipe errors in my jobs for the following reasons:

  1. The dataflow has Degree of Parallelism set to 4 and also has one or more ‘Run as separate process’ options set in it.
  2. There are a couple of lookups set to ‘Run as separate process’ and the dataflow suddenly gets a huge amount of data.
  3. The target table is partitioned and ‘Enable partition’ is set.

DI does not support doing this much in a single dataflow, whereas other tools do. :frowning:
I had a dataflow like the one below:
Source -> Map -> Case -> Lookup case1 -> Lookup case2 -> Merge1 -> Case -> Lookup case1 -> Lookup case2 -> Merge2 -> Database function -> Target.

After successfully designing this dataflow, I executed the job. It went to the ready state and hung there. A few optimizations resulted in a Named Pipe error!

I had to cut the dataflow into two halves (one up to Merge1 and the other starting from the Case after Merge1) and introduce a staging table between them to act as a bridge. It worked!
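For anyone wanting to try the same workaround, a minimal sketch of the bridge table (the names and columns are made up for illustration; the real dataflow needs its own structure):

-- Hypothetical bridge/staging table between the two halves of the split dataflow.
CREATE TABLE STG_MERGE1_BRIDGE (
  CUST_ID    NUMBER,
  AGG_AMOUNT NUMBER
);
-- Dataflow 1 (Source .. Merge1) truncates and loads STG_MERGE1_BRIDGE.
-- Dataflow 2 reads STG_MERGE1_BRIDGE, applies the remaining lookups, and loads the target.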

I feel there should be a fix for the Named Pipe error.

Thanks.


rookie86 :india: (BOB member since 2009-09-11)

Do you guys have any resolution for “Named Pipe errors”?


Priyaselvame (BOB member since 2010-09-23)

I can relate to all the causes raised in this thread: I am experiencing the very same problems under the same conditions (run as separate process enabled, large data volumes, and one of the sub data flows just falling over for some reason).

Any input on future fixes would be most appreciated.


ErikR :new_zealand: (BOB member since 2007-01-10)

Specifically for long-running sub-data-flow issues (the run-as-separate-process type of thing)… check these settings in your job server’s DSCONFIG file:

[AL_Engine]

DFRegistrationTimeoutInSeconds=300
NamedPipeWaitTime=100


dnewton :us: (BOB member since 2004-01-30)

Pff… Same issue @ BODS 12.2.2.3

Data transfers are NOT working:

  • The manually set transfer type is ignored. Although the Transfer Type is set to Table, BODS executes it as automatic, which causes BODS to create the object in a random datastore that has Automatic Data Transfer enabled.
  • The manually given name is ignored. The automatically generated table names are longer than 30 characters, which isn’t possible in Oracle…

@Werner, please advise and help solve this issue. The Data Transfer has been unreliable for many versions now.

Errors:
ORA-00972: identifier is too long
Named pipe error occurred:


BBatenburg :netherlands: (BOB member since 2008-09-22)

Curious, we’re not seeing this in 12.2.2.3. Have you tried deleting the Data Transfer step from the dataflow, saving, then adding it back?


dnewton :us: (BOB member since 2004-01-30)

The “ORA-00972: identifier is too long” issue for the Data Transfer transform is fixed in 12.2.3.0.

I still have to check whether the following are issues in 12.2.3.0 or not:

  • The manually set transfer type is ignored. Although the Transfer Type is set to Table, BODS executes it as automatic, which causes BODS to create the object in a random datastore that has Automatic Data Transfer enabled.
  • The manually given name is ignored.

manoj_d (BOB member since 2009-01-02)

Hi all,
It is a new dataflow which is still in the development stage.

I checked the release notes:

"When a job contains a Data Transfer transform in a join query, the product
sometimes generated an Oracle alias with more than 30 characters, which
caused the job to fail with an Oracle error, “ORA-00972: identifier is too long”.
This issue has been resolved.
ADAPT01387906
"

This issue has indeed been resolved (thanks Manoj). It cannot be found in the SAP notes using the search, though…

The other issue is still open. Tomorrow I am going to try disabling ‘Use collected statistics’ and setting the dataflow to pageable cache. I suspect BODS overrules the manually set transfer type when this is enabled…

Will keep you updated.


BBatenburg :netherlands: (BOB member since 2008-09-22)

Here is a flow a developer made which triggers the problem.

Notes:

  • The second Data Transfer is disabled.
  • Please don’t mind the ‘naming conventions’… :roll_eyes:

The Data Transfer’s Transfer Type is Table. In the log, however, you see:
The Automatic Data_Transfer transform <STI_AGG_PREMIE_VERZ2_STI_AGG_P> has been resolved to a transfer object <DT__1595_7149_1_1(DSV_TRINICOM.DW_STI_TRINIC)>.

For some unclear reason, BODS handles the “Data Transfer” object as an automatic data transfer, which it is not. When I disable the first DT, the job never fails with the ‘named pipe error’ / ‘identifier too long’ messages.

Related topic:
https://bobj-board.org/t/152941

Werner’s reply to this issue:
“In the meantime development read through the thread and believes you found a bug. An ADAPT case was created.”
However, it still seems to exist in 12.2.2.3 as well.
dt.JPG
df.JPG
df_log.JPG


BBatenburg :netherlands: (BOB member since 2008-09-22)

In the attachment is the reply from SAP support (26-04-10) regarding this issue.

This answer implies that BODS adds a DT itself and that the error isn’t caused by the DT in the flow. I disagree with this conclusion; it doesn’t explain why the flow is successful when we disable the first DT.

What is your opinion on this?
sap_reply.JPG


BBatenburg :netherlands: (BOB member since 2008-09-22)

OK, you did file a support case for this. By any chance, do you remember the incident number? Since you have provided the ATL and other details in the case, I can take the ATL and see what internal DT DS creates, and how removing the user-added DT from the DF avoids this issue of ignoring the user-set transfer type (Table).


manoj_d (BOB member since 2009-01-02)

Hi Manoj,

I have a message number: 322722 / 2010. Created 19.04.2010 - 9.21.11 CET

Thanks!


BBatenburg :netherlands: (BOB member since 2008-09-22)