I have a source table and have to do SCD Type 2. I placed a Query transform, then a Table Comparison (one field as the input primary key and 3 fields as compare columns), then History Preserving (valid_from as the start date and valid_to as the end date), then Key Generation based on surr_key with an increment of 1, and then load to the target table. But for just 2 records it takes 4 hours for MLOAD to Teradata. THIS IS TOO BAD :).
I tried the option you suggested, but it still takes a very long time. Table Comparison in cached mode also will not help, since the data from our SAP source is huge, many GBs. If a read lock is what's happening, can you please suggest a way out?
If a read lock is happening, then one solution may be to not insert/update the final target in the same dataflow as the Table Comparison.
Just use a couple of Map Operations after the History Preserving transform to filter the updates and inserts into separate staging tables, and then apply the updates and inserts in subsequent dataflows.
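To make the split concrete, here is a minimal Python sketch (illustrative only, this is not Data Services code, and all names such as `opcode` and `split_rows` are hypothetical) of what the two Map Operations after History Preserving amount to: routing rows by their opcode into separate insert and update sets that later dataflows can apply independently.

```python
def split_rows(rows):
    """Route rows by opcode, mimicking two Map Operations placed
    after History Preserving: one passes only inserts, the other
    only updates, each loading its own staging table."""
    inserts = [r for r in rows if r["opcode"] == "I"]
    updates = [r for r in rows if r["opcode"] == "U"]
    return inserts, updates

# For a changed record, History Preserving emits an update that
# closes the old version and an insert that opens the new one.
rows = [
    {"opcode": "U", "key": 101, "valid_to": "2024-01-31"},    # close old version
    {"opcode": "I", "key": 101, "valid_from": "2024-02-01"},  # open new version
    {"opcode": "I", "key": 102, "valid_from": "2024-02-01"},  # brand-new record
]

inserts, updates = split_rows(rows)
print(len(inserts), len(updates))  # 2 1
```

Because the inserts and updates land in staging tables rather than the final target, the dataflow holding the Table Comparison never writes to the table it is reading, which is what avoids the read-lock contention.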
Also, you contradict yourself a little when you say you can't use the Cached comparison method in the TC because the SAP table is too big: the TC caches the target table being compared, not the source, so you CAN use the cached option as long as your Teradata table isn't massive.
You need to identify where your problem lies, though. Is it the reading of the two rows from the Teradata table during the compare (probably because of a lack of indexes), or a read lock caused by the TC?