Under a certain set of circumstances, BODS tries to insert data into a SQL Server table, the insert fails, no error is returned, and the job completes successfully. Here is the scenario:
Dataflow selects data from a database.
The same dataflow "upserts" data in a different database. We use the options "Auto-correct load = Yes" and "Allow merge or upsert = Yes".
The merge SQL cannot be pushed down because two different databases are involved. Instead, data is selected into a buffer, and a database-language IF statement is generated on the fly: "if the target row exists then update, else insert".
The IF statement is successfully submitted to the database engine. The statement fails; in our example, because a check constraint is violated (we are trying to insert U into a column that allows only Y/N). Data Services does not recognize the error: it thinks the insert was successful, increments the number of rows processed, and goes on to the next row.
I ran a SQL trace, copied the statement (if exists then update… else insert…) right out of the trace, pasted it into Management Studio, and ran it. There it returns the constraint error correctly.
I reviewed target table editor options in the manual. I discussed with co-workers, nobody seems to have experienced this scenario. I am not using bulk insert.
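For anyone who wants to reproduce the failure in isolation in Management Studio, a sketch like the following works. The table, constraint, and column names are illustrative (modeled on the test table in the trace further down, not our production schema):

```sql
-- Minimal repro sketch (names are illustrative).
CREATE TABLE DBO.Z_DH_TEST (
    CLAIMID INT     NOT NULL PRIMARY KEY,
    YES_NO  CHAR(1) NOT NULL
        CONSTRAINT CK_Z_DH_TEST_YES_NO CHECK (YES_NO IN ('Y', 'N')),
    COLUMN2 CHAR(1) NULL
);

-- The per-row upsert Data Services generates, per the SQL trace:
IF EXISTS (SELECT 1 FROM DBO.Z_DH_TEST WHERE CLAIMID = 3)
    UPDATE DBO.Z_DH_TEST SET YES_NO = 'U', COLUMN2 = 'Y' WHERE CLAIMID = 3
ELSE
    INSERT INTO DBO.Z_DH_TEST (CLAIMID, YES_NO, COLUMN2) VALUES (3, 'U', 'Y');
-- Fails with Msg 547: the INSERT conflicts with CHECK constraint CK_Z_DH_TEST_YES_NO.
```

Run on its own in SSMS, this raises the constraint error immediately, which matches the trace-paste experiment; the question is why Data Services does not see the same error when several of these statements are stacked in one buffer.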
No, if there isn't anything in the error log then adding a try/catch won't do you any good. I was asking if there was a try/catch already in place. Oftentimes developers forget to re-raise an exception at a lower level of the job so that the exception "bubbles up" to the job level, where it can then be used to kill the job.
I discovered that if I simply change the sequence of rows being processed I get different results. If the first row to be processed violates the check constraint, the error is returned correctly and job fails. If the first row is good, the job finishes successfully, even though some subsequent rows will fail to insert.
And if I change the commit count to 1, the job also correctly returns the error and fails.
I further refined my test job to try to boil it down. This time my job is not selecting from a database at all, but using the BODS Row_Generation transform. I put a check constraint on column YES_NO to allow only Y or N. I can make it return the error code correctly by making row number 1 have YES_NO = U, with commit count = 1000. It pretends to finish successfully if the U is in a different row. Here is what the SQL trace looks like:
SQL submitted from loader <Z_DH_TEST> is:
IF EXISTS
(SELECT 1 FROM "DBO"."Z_DH_TEST"
WHERE "CLAIMID" = 1 )
UPDATE "DBO"."Z_DH_TEST" SET "YES_NO" = 'Y', "COLUMN2" = 'N' WHERE "CLAIMID" = 1
ELSE INSERT INTO "DBO"."Z_DH_TEST" ("CLAIMID", "YES_NO", "COLUMN2") VALUES (1, 'Y', 'N')
IF EXISTS
(SELECT 1 FROM "DBO"."Z_DH_TEST"
WHERE "CLAIMID" = 2 )
UPDATE "DBO"."Z_DH_TEST" SET "YES_NO" = 'N', "COLUMN2" = 'Y' WHERE "CLAIMID" = 2
ELSE INSERT INTO "DBO"."Z_DH_TEST" ("CLAIMID", "YES_NO", "COLUMN2") VALUES (2, 'N', 'Y')
IF EXISTS
(SELECT 1 FROM "DBO"."Z_DH_TEST"
WHERE "CLAIMID" = 3 )
UPDATE "DBO"."Z_DH_TEST" SET "YES_NO" = 'U', "COLUMN2" = 'Y' WHERE "CLAIMID" = 3
ELSE INSERT INTO "DBO"."Z_DH_TEST" ("CLAIMID", "YES_NO", "COLUMN2") VALUES (3, 'U', 'Y')
Data flow <z_test_constraint_error> is completed successfully.
Process to execute data flow <z_test_constraint_error> is completed.
Job <z_job_test_constraint_error> is completed successfully.
The essential element is that multiple upsert statements are stacked in the buffer. It appears the job correctly interprets the return from the first statement; if the first statement is successful, it assumes the remaining statements were also successful.
If I set commit count to 2, and the third row has YES_NO=U, the job correctly returns error message and fails.
I think this is a bug. SQL Server error handling is not quite as sophisticated as what you can find in Oracle PL/SQL. A constraint error is likely raised by SQL Server, but the next command still gets executed, and that clears the error out of the buffer.
It seems to be simply a matter of bad scripting on the part of Data Services: there isn't any check of @@ERROR after each statement. Per the SQL Server documentation, @@ERROR is cleared and reset by every statement that executes, so it only ever reflects the most recent one.
But is the fault on the part of SQL Server, which clears the @@ERROR value and happily moves along despite a constraint violation, or is Data Services 'lazy' in its checking of @@ERROR and thus misses the raised error altogether?
If it is the former, would Data Services even have a chance to catch the error amid the burst of SQL statements executed on the database?
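The @@ERROR semantics in question can be illustrated directly. This is a sketch, assuming the Z_DH_TEST table and its CHECK constraint from the trace above already exist:

```sql
-- @@ERROR only holds the result of the most recently executed statement,
-- so a single check at the end of a stacked batch misses earlier failures.
INSERT INTO DBO.Z_DH_TEST (CLAIMID, YES_NO, COLUMN2)
    VALUES (3, 'U', 'Y');            -- CHECK violated: @@ERROR = 547
INSERT INTO DBO.Z_DH_TEST (CLAIMID, YES_NO, COLUMN2)
    VALUES (4, 'N', 'Y');            -- succeeds: @@ERROR reset to 0
SELECT @@ERROR AS last_error;        -- 0; the earlier 547 is gone

-- By default a constraint violation terminates only the offending statement
-- and the rest of the batch keeps running. SET XACT_ABORT ON would abort the
-- whole batch instead, but we have no control over the SQL that DS generates.
```

So both readings are consistent with the trace: SQL Server does keep executing the batch after the violation, and any checker that does not test @@ERROR (or capture the error) after each individual statement will miss it.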
Has an ADAPT ticket been raised for this issue by the topic starter?
Thank you all for your comments. I now have a better understanding of the issue. An ADAPT ticket has not been raised; I don't have that capability, I can only pass the issue along to my client, which I have done. As a workaround, I plan to set commit count = 1 for small amounts of data, and force full pushdown for larger datasets (loading to an interim template table without constraints when necessary).
On the target table that you are inserting the rows into via BODS, under Options, have you enabled error handling and set it to output rows to a flat file?
If this is the case, BODS will write the rows that cannot be inserted to the .txt file and the job will carry on successfully. If this is not the case, ignore me.