From which object will DI restart after a previous failure?

Hi

I’m wondering if anyone can shed any light.

I’m running a job in recovery mode. Note that none of my work flows have ‘Recover as a unit’ set.

My job has 4 work flows.

My job falls over in the 3rd work flow. In this work flow there is one data flow, which loads data from a staging table into a target table (the job actually falls over while loading the data into the target table).

When restarting the job in ‘Recover from last failed execution’ mode, where will it restart from? Would it be the 3rd work flow itself, the data flow that sits within it, or the object within the data flow at which it fell over?

Thanks in advance for any help.

Ansel


ansel (BOB member since 2008-08-06)

DI will start WF1, see that it already finished successfully, and therefore skip it. Same with WF2.
WF3 did not finish successfully, so it is started. DF1 is called but found to be already completed, so it is skipped. DF2 was not successful, hence it is the first object to actually execute.

You can only recover from points where there is a defined starting point. A dataflow that reads a source table neither knows what its memory structures looked like when it failed, nor can it guarantee the source database would deliver the same rows in the exact same order. So it has to start again, and your dataflow has to deal with that, e.g. via Table Comparison. If the dataflow reads from an R/3 dataflow and the transport file was already created but the dataflow itself failed, then there is no need for DI to start the R/3 file creation again; it skips that and reads from the file again.


Werner Daehn :de: (BOB member since 2004-12-17)
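
A minimal sketch in Python of the skip logic Werner describes, purely illustrative and not DI's actual engine code; the object names and status values are assumptions about what the repository records per object:

# Illustrative only: how 'recover from last failed execution' decides
# what to skip, based on the status the previous run left behind.
PREVIOUS_RUN = {
    "WF1": "SUCCEEDED",
    "WF2": "SUCCEEDED",
    "WF3": "FAILED",         # the work flow that fell over
    "WF3/DF1": "SUCCEEDED",
    "WF3/DF2": "FAILED",     # first object that really executes again
}

def recover(objects):
    for obj in objects:
        if PREVIOUS_RUN.get(obj) == "SUCCEEDED":
            print("skip", obj, "(already completed)")
        else:
            print("run ", obj, "(restarted from the beginning)")

recover(["WF1", "WF2", "WF3", "WF3/DF1", "WF3/DF2", "WF4"])

Because a failed data flow restarts from its first row, the flow itself must tolerate partially loaded targets, e.g. via the Table Comparison transform mentioned above.
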

Thank you very much for your reply. That helps me.


ansel (BOB member since 2008-08-06)

Werner,

Where does DI store the meta data info that tells it which WF failed and which did not?

Thanks,
CFSCG


cfscg (BOB member since 2007-06-19)

It stores all the metadata in repository tables.

Wdaehn,

We are getting data from R/3, so our dataflows contain R/3 dataflows. Whenever our job fails after the R/3 part is done, midway through the dataflow, we can never recover from the point of failure. At the least, recovery should happen at the dataflow level, right? Every time, it says it cannot open the .dat file it just FTPed from the R/3 application server, giving some PID error. Is there something we can do to correct this?

Ours is Data Services 12.1.0.0 on a Windows 2005 server, and the repo is a SQL Server 2005 instance on a Windows 2008 server.

Correct me if I am wrong.

Regards.


SantoshNirmala :india: (BOB member since 2006-03-15)

Which repository tables? I looked at all the tables/views in the repo and could not find any. Could you point out the table/view names?

thanks,
cfscg


cfscg (BOB member since 2007-06-19)

The recovery mechanism does access this info, right? It should be somewhere in the repository tables. I think our Rapid Mart uses AL_ROLLFORWARD to determine which objects have been refreshed or not. That could be your starting point, but I'm not sure.

Regards.


SantoshNirmala :india: (BOB member since 2006-03-15)

Thanks for your input. I will pursue this path and see where it gets me.

cfscg


cfscg (BOB member since 2007-06-19)

Correct, AL_ROLLFORWARD.


Werner Daehn :de: (BOB member since 2004-12-17)

Wdaehn,
Quote:

If the dataflow reads from an R/3 dataflow and the transport file was already created but the dataflow itself failed, then there is no need for DI to start the R/3 file creation again; it skips that and reads from the file again.

We are getting data from R/3, so our dataflows contain R/3 dataflows. Whenever our job fails after the R/3 part is done, midway through the dataflow, we can never recover from the point of failure. At the least, recovery should happen at the dataflow level, right? Every time, it says it cannot open the .dat file it just FTPed from the R/3 application server, giving some PID error. Is there something we can do to correct this?

Ours is Data Services 12.1.0.0 on a Windows 2005 server, and the repo is a SQL Server 2005 instance on a Windows 2008 server.

Correct me if I am wrong.

Regards.

We need your inputs.


SantoshNirmala :india: (BOB member since 2006-03-15)


Werner,

Where does DI store the meta data info that tells it which WF failed and which did not?

Thanks,
CFSCG


cfscg (BOB member since 2007-06-19)

AL_ROLLFORWARD repo table.

For a given runid (linked to AL_HISTORY) and a job_runseq, there is a list of all objects that were started, in what order, and their STATUS.


Werner Daehn :de: (BOB member since 2004-12-17)
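
A hedged sketch of inspecting this directly, assuming a SQL Server repository as in this thread; the table name AL_ROLLFORWARD comes from the posts above, but the connection details are placeholders and the JOB_RUNSEQ column name is inferred from Werner's description, so verify against your repo version's actual schema:

import pyodbc

# Placeholder connection string - server, database and credentials are
# hypothetical; point this at your own DI repository.
con = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=reposerver;DATABASE=di_repo;"
    "UID=repo_user;PWD=secret"
)
cur = con.cursor()
# One row per started object of a run, in execution order (the column
# name JOB_RUNSEQ is an assumption based on the post above).
cur.execute("SELECT * FROM AL_ROLLFORWARD ORDER BY JOB_RUNSEQ")
for row in cur.fetchall():
    print(row)  # includes the STATUS that recovery mode reads
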

I just tried that myself. I read an SAP table via an R/3 dataflow; in the regular dataflow I mapped all columns 1:1 in a query, except the primary key column, which was mapped to a constant.

As a result, this job will fail at the second row with a primary key violation.

The first run with enable-recovery did start the ABAP.
The second run, with enable-recovery and recover-from-last-failed-execution, started with the transport file reading; no ABAP was executed again. So it is working for me. (DI 12.2.0.0 development build)


Werner Daehn :de: (BOB member since 2004-12-17)
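
To see in isolation why Werner's test fails exactly at the second row, here is a tiny standalone illustration (sqlite3 is used purely for demonstration, not DI): mapping the primary key to a constant means every row carries the same key, so the second insert violates the primary key.

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE target (pk INTEGER PRIMARY KEY, payload TEXT)")

# Two source rows, all columns mapped 1:1 except pk, which is constant.
try:
    for payload in ("row one", "row two"):
        con.execute("INSERT INTO target VALUES (?, ?)", (42, payload))
except sqlite3.IntegrityError as err:
    print("second row fails:", err)  # UNIQUE constraint failed: target.pk
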

Thanks Werner. This is great stuff. I appreciate it.


cfscg (BOB member since 2007-06-19)