(6.5) 11-08-06 15:32:20 (E) (0348:4028) RUN-050604: |Session 222_FTR_KPI_JOB|Dataflow VALTEST_STG_SKU_DF
Cannot write pipe, due to error <The pipe has been ended. >.
(6.5) 11-08-06 15:32:20 (E) (0348:4028) RUN-050409: |Session 222_FTR_KPI_JOB
The job process could not communicate with the data flow <VALTEST_STG_SKU_DF> process. For details, see previously logged error
<50604>.
(6.5) 11-08-06 15:32:20 (E) (0348:4028) RUN-050604: |Session 222_FTR_KPI_JOB|Dataflow VALTEST_STG_SKU_DF
Cannot write pipe, due to error <The pipe has been ended. >.
Help? I don’t have a clue why this is going wrong…
edit: the dataflow VALTEST_STG_SKU_DF uses a TXT file as input and a TXT file as output, so I assume it can’t be a database problem, especially since other dataflows DO work…
edit: I’ve done some more tests, and this problem does NOT occur when the option “capture data” is switched on. Help?
I’m curious about this too. I have a couple of jobs that can run for days (looping) without problems and suddenly they stop with this error. Very strange.
I get this one intermittently as well. Sometimes the job completes successfully and I still get it. When I queried support about it, they said it was a known issue. When it has occurred after long periods (jobs that run for a couple of days) and the job fails, support has opined that it is a memory-related error. You might try asking support again and see if you get a different response.
Apparently it happens when I try to join two queries that both use a MAX(…) function.
Very strange. It also only happens in this one case, not in other dataflows. Bizarre.
We recently encountered similar errors, and our DBA confirmed that it’s an issue with DI.
I looked closely at the DF that causes this problem, and there is no join of MAX() queries in it. It only has DISTINCT and nested XML mappings. So why is this still happening?
The directory seems to be empty. I tried to browse the FTP server, but there is nothing there. Thanks for the effort, though. Does anyone know where to download this hotfix?
Please contact support; they will provide you with a link to the above FTP server and, if necessary, copy the file there again. (Files get deleted from there automatically after a while.)
Version 11.7.2.0 running on a dedicated server, accessing a combined Oracle 9i and 8 environment.
Since we converted to 11.7.2.0, we have been getting intermittent "Named pipe error occurred: " errors. They happen seemingly randomly across jobs (though there is one job in development that either receives this error or never completes). When a job that received this error runs again the next evening, it usually completes successfully.
The error doesn’t occur every night, but it does occur at least three nights a week (our jobs run five nights a week).
We think we have figured out what is causing this issue: if the "Perform Join in Parallel threads" option is selected, the error occurs from time to time. As a workaround, we have unchecked that box in all jobs, but that is causing speed issues.
Is that a known issue with any version of DI, and if it is, is there a version that has a patch?
I have the same kind of situation.
Our DI version is 11.7.2.1, and I have been getting the “broken pipe” error too often lately, but I cannot figure out which flow or process causes it, because the error always occurs at the very end, right before or instead of the smtp_to() function, when the job has almost finished and needs to send the report email.
So I don’t get this email very often, and when I check the log, everything seems to have completed up to that point.
There are many workflows and dataflows being processed, plus some conditionals and a while loop, but this error always occurs last of all (for instance, the job log has its last rows at 23:50:00, and then the error occurs at 23:50:01 instead of the email being sent with smtp_to()). Could this function somehow be the cause?
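One way to rule smtp_to() in or out would be to move the mail call into its own script object at the very end of the job and write to the trace log immediately before and after it; if the broken-pipe error lands between the two trace lines, the mail step is implicated. A rough sketch in DI script (the recipient address and wording are placeholders, and the five-argument smtp_to() signature with error-log/trace-log line counts is my assumption):

```
# Last script object in the job. print() writes to the trace log and
# substitutes expressions inside square brackets.
print('Before smtp_to at [sysdate()] [systime()]');

# Placeholder recipient; the last two arguments are the number of
# error-log and trace-log lines to attach to the mail.
smtp_to('admin@example.com',
        'Job [job_name()] finished',
        'All dataflows completed; sending report.',
        0, 0);

print('After smtp_to');
```

If "Before" appears in the trace log but "After" never does, the failure is inside the mail step rather than in a dataflow.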
The OS is actually Linux RedHat 4 and 4gb of RAM.
Should I install a hotfix to upgrade this version to a newer one? Would that fix the problem?
We’ve found the only workaround is to have the repository database on the same machine as the job server. Of course, if you’re running a Linux job server and a SQL Server database, you’re out of luck there.
Do you have any sort of connection timeout setting in MySQL on the server side? If so, increase the timeout value to at least the length of the longest-running dataflow in your job.
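For reference, the server-side timeouts can be inspected and raised from any MySQL client session; wait_timeout (and interactive_timeout) are the usual settings to check. A sketch with an example value only — pick something comfortably longer than your longest-running dataflow; note that SET GLOBAL needs the SUPER privilege, affects only new connections, and should go into my.cnf if you want it to survive a server restart:

```sql
-- Inspect the current server-side timeout settings (values in seconds).
SHOW VARIABLES LIKE '%timeout%';

-- Example: raise the idle-connection timeouts to 8 hours (28800 s).
SET GLOBAL wait_timeout = 28800;
SET GLOBAL interactive_timeout = 28800;
```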