I have source data that contains both duplicate and unique records. I want to load the duplicate records into one target table and the unique records into another target table.
How can I achieve this requirement?
Just thinking off the top of my head, I’d use the gen_row_num_by_group function to number the rows within each duplicate-key group, and then use a Case transform to split out the unique records (gen_row_num_by_group = 1) from the duplicates (gen_row_num_by_group <> 1). Note that gen_row_num_by_group resets its counter whenever the group value changes, so the input needs to be sorted on the key column first.
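As a rough sketch (the key column CUSTOMER_ID, the derived column DUP_SEQ, and the transform names are placeholders, not anything from the original post): sort the source on the duplicate key in a Query transform, add a derived column that numbers the rows within each group, then branch on that column in a Case transform.

    -- QRY_SORT (Query transform): ORDER BY on the key, plus a derived column
    DUP_SEQ = gen_row_num_by_group(SRC.CUSTOMER_ID)

    -- CASE_SPLIT (Case transform) labels:
    UNIQUE_RECS: QRY_SORT.DUP_SEQ = 1     (route to the unique target)
    DUP_RECS:    QRY_SORT.DUP_SEQ <> 1    (route to the duplicates target)

One thing to be aware of: with this split, the first occurrence of each duplicated key still lands in the unique target. If you want every row of a duplicated key routed to the duplicates table, split on a group count instead (count > 1), as described in the next reply.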
It all depends on how you classify/identify duplicate records. If you mean a duplicate key (e.g. customer number, product SKU, etc.), you can indeed use the gen_row_num_by_group function as mentioned above, or do a GROUP BY on the key, count the records per key, and join that count back onto the detail rows; then use a Case transform to split the data flow between count = 1 and count > 1 (see the sketch below). This can be achieved with standard Data Integrator and Platform functionality.
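A sketch of that variant, using the same made-up column and transform names as above: one Query transform aggregates on the key, a second Query transform joins the count back onto the detail rows, and the Case transform splits on the count.

    -- QRY_COUNT (Query transform): GROUP BY SRC.CUSTOMER_ID
    REC_COUNT = count(SRC.CUSTOMER_ID)

    -- QRY_JOIN (Query transform): join SRC to QRY_COUNT on CUSTOMER_ID

    -- CASE_SPLIT (Case transform) labels:
    UNIQUE_RECS: QRY_JOIN.REC_COUNT = 1
    DUP_RECS:    QRY_JOIN.REC_COUNT > 1

Unlike the row-number split, this sends all rows of a duplicated key to the duplicates target, not just the second and later occurrences.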
If you don’t have a clear duplicate key, e.g. multiple entries for the same customer but with unique customer numbers, then you can use the Match transform, but that does require you to be fully licensed for the Data Quality part of Data Services.