I’m running a job in recovery mode. Note: none of my work flows have the ‘Recover as a unit’ flag set.
My job has 4 work flows.
My job falls over in the 3rd work flow. In this work flow there is 1 data flow, which loads data from a staging table into a target table (the job actually falls over while loading the data into the target table).
When restarting the job in ‘Recover from last failed execution’ mode, where will it restart from? Would it be the 3rd work flow itself, the data flow that sits within it, or the object within the data flow at which it fell over?
DI will start WF1, see that it already finished successfully, and hence skip it. The same happens with WF2.
WF3 did not finish successfully, so it is started. DF1 is called, but since it already completed it is skipped. DF2 was not successful, so it is the first object that actually executes.
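The walk is easiest to see as pseudocode. Here is a minimal sketch in plain Python, assuming a simple "completed objects from the last run" set — an illustration of the logic only, not actual DI internals or its repository API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    body: Callable[[], None]

def run_in_recovery(steps: list[Step], completed_last_run: set[str]) -> None:
    """Execute objects in design order, skipping those the previous
    (failed) run already finished successfully."""
    for step in steps:
        if step.name in completed_last_run:
            print(f"skip {step.name} (finished in last run)")
            continue
        print(f"run  {step.name}")
        step.body()

# The job from the question: WF1 and WF2 finished, WF3 failed in its
# second data flow, so only WF3's DF2 (and anything after it) runs.
steps = [
    Step("WF1", lambda: None),
    Step("WF2", lambda: None),
    Step("WF3/DF1", lambda: None),
    Step("WF3/DF2", lambda: None),
]
run_in_recovery(steps, completed_last_run={"WF1", "WF2", "WF3/DF1"})
```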
You can only recover from points where there is a defined starting point. A dataflow that reads a source table neither knows what its memory structures looked like when it failed, nor can it guarantee that the source database would provide the same rows in exactly the same order. So it has to start again, and your dataflow has to deal with that, e.g. via a Table Comparison transform. If, on the other hand, the dataflow reads from an R/3 dataflow and the transport file was created but the dataflow itself failed, then there is no need for DI to start the R/3 file creation again; it skips that and reads from the file again.
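In other words, the transport file is the only restart point such a dataflow has. A sketch of that decision — paths and function names below are made up for illustration:

```python
import os

TRANSPORT_FILE = "/sapds/transport/extract.dat"   # hypothetical path

def run_abap_and_transfer() -> None:
    """Stand-in for the R/3 dataflow: run the ABAP, FTP the result file."""
    ...

def load_target_from_file(path: str) -> None:
    """Stand-in for the regular dataflow reading the transport file."""
    ...

def run_dataflow_in_recovery() -> None:
    # The transport file from the failed run is a defined starting point:
    # if it is still there, the ABAP does not need to run again.
    if not os.path.exists(TRANSPORT_FILE):
        run_abap_and_transfer()
    # A plain table source has no such point: the database guarantees
    # neither content nor row order across runs, so the read starts over
    # and the dataflow (e.g. Table Comparison) must tolerate the rerun.
    load_target_from_file(TRANSPORT_FILE)
```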
It stores all the metadata in repository tables.
Wdaehn,
We are getting data from R/3, so our dataflows contain R/3 dataflows. Whenever our job fails after the R/3 part is done, midway through the dataflow, we can never recover from the point of failure. At the very least, recovery should happen at the dataflow level, right? Every time, it says it cannot open the .dat file it just FTPed from the R/3 application server, giving some PID error. Is there something we can do to correct this?
Ours is Data Services 12.1.0.0 on a Windows 2005 server, and the repo is a SQL Server 2005 instance on a Windows 2008 server.
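One generic thing that sometimes helps with "cannot open the file just FTPed" symptoms is verifying the file is complete and no longer being written before the dataflow touches it, e.g. by waiting for its size to stop changing. A sketch of that check — a workaround idea only, not a confirmed fix for the PID error, and the path is made up:

```python
import os
import time

def wait_until_stable(path: str, checks: int = 3, pause: float = 5.0,
                      max_waits: int = 60) -> bool:
    """Return True once the file's size stays unchanged for `checks`
    consecutive looks -- a crude signal that the FTP transfer finished."""
    last_size = -1
    stable = 0
    for _ in range(max_waits):
        if not os.path.exists(path):
            return False
        size = os.path.getsize(path)
        stable = stable + 1 if size == last_size else 0
        if stable >= checks:
            return True
        last_size = size
        time.sleep(pause)
    return False

if not wait_until_stable("/sapds/transport/extract.dat"):  # hypothetical path
    raise RuntimeError("transport file missing or still being written")
```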
The recovery mechanism does access this info, right? It should be somewhere in the repository tables. I think our Rapid Mart uses AL_ROLLFORWARD to determine whether an object has been refreshed or not. That could be your starting point, but I'm not sure.
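If you want to look at that table yourself, a quick peek from the repo database is enough. A sketch with pyodbc; server, database, and credentials are placeholders, and it selects * deliberately since I don't know AL_ROLLFORWARD's exact column layout on your version:

```python
import pyodbc

# Placeholders throughout -- point this at your own repository database.
conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=repohost;"
    "DATABASE=ds_repo;UID=ds_user;PWD=secret"
)
# SELECT * on purpose: the column layout of AL_ROLLFORWARD may vary by
# version, so inspect it before building anything on top of it.
for row in conn.cursor().execute("SELECT TOP 20 * FROM AL_ROLLFORWARD"):
    print(row)
conn.close()
```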
I just tried that myself. I read an SAP table via an R/3 dataflow; in the regular dataflow I mapped all columns 1:1 in a query, except the primary key column, which was mapped to a constant.
As a result, this job will fail at the second row with a primary key violation.
The first run with enable-recovery did start the ABAP.
The second run, with enable-recovery and recover-from-last-failed-execution, started with the transport file read; no ABAP was executed again. So it's working for me. (DI 12.2.0.0 development build)
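For anyone who wants to see why it is exactly the second row that breaks: with the key mapped to a constant, every row carries the same key value, so the first insert succeeds and the second violates the primary key. The same effect in plain sqlite3 — a stand-in for the target table, nothing DS-specific:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER PRIMARY KEY, val TEXT)")

rows = ["a", "b", "c"]                  # stand-ins for the 1:1-mapped columns
try:
    for val in rows:
        # the primary key is mapped to a constant, like in the test above
        conn.execute("INSERT INTO target (id, val) VALUES (?, ?)", (42, val))
except sqlite3.IntegrityError as exc:
    print("failed at the second row:", exc)  # UNIQUE constraint failed: target.id
```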