gom
June 7, 2012, 2:41pm
1
Hi!
Is this a SQL Server settings issue? I don’t know.
(12.2) 06-06-12 16:46:32 (E) (4744:5540) SYS-170102: Unable to print the system exception – heap may be corrupted.
(12.2) 06-06-12 16:46:32 (W) (4744:5540) RUN-051007: |Data flow DF_MIGR_WRK_P_R_C_DOUBLONS|Loader BODIVView1
Warning: Due to a bulk copy error, rows <1> to <8000> were not copied.
(12.2) 06-06-12 16:46:42 (E) (4744:5540) RUN-050505: |Data flow DF_MIGR_WRK_P_R_C_DOUBLONS|Loader BODIVView1
Unknown error in transform .
I changed this server-role setting, but it had no effect!
EXEC master..sp_addsrvrolemember @loginame = N'user_dwh', @rolename = N'bulkadmin'
GO
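A quick way to confirm the membership actually took effect (a sketch, reusing the same user_dwh login as above):
-- Returns 1 if the login is a member of the bulkadmin server role
SELECT IS_SRVROLEMEMBER(N'bulkadmin', N'user_dwh') AS is_bulkadmin;
-- Or list every member of the role
EXEC master..sp_helpsrvrolemember @srvrolename = N'bulkadmin';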
My user has bulkadmin at the server level and db_owner (a database role) on the target database.
Any idea?
Gôm
gom
June 7, 2012, 2:52pm
2
Well … it seems to be a Data Services issue rather than a SQL Server one.
My SQL Server settings look good (per a thread on Microsoft Q&A | Microsoft Learn).
Any idea about a Data Services issue?
Gôm
gom
June 7, 2012, 4:47pm
3
With “Maximum rejects” set to zero … bulk loading works!
From the SAP BusinessObjects Data Services Performance Optimization Guide, “Maximum rejects option”:
The Maximum rejects parameter (on the Bulk Loader Options page) can also affect your SQL Server bulk-loading performance. When you set Maximum rejects to 0, SAP BusinessObjects Data Services stops at the first error it encounters and does not cache rows in the transaction (caching rows in a transaction allows the software to process each row even if an error occurs during the transaction commit process).
When you do not specify a value for Maximum rejects, the software ignores the rejected rows, logs warnings, and continues processing.
Do you know why?
Gôm
system
June 8, 2012, 10:02pm
4
Because you are probably trying to use too large a buffer. Lower the rows per commit from 8,000 (I think that’s what you’re using) to something like 1,000 to start out.
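For reference, SQL Server’s own BULK INSERT exposes roughly the same two knobs: BATCHSIZE (like rows per commit) and MAXERRORS (like Maximum rejects). A sketch with hypothetical table and file names; Data Services may use a different bulk API internally:
-- Hypothetical target table and data file, for illustration only
BULK INSERT dbo.WRK_P_R_C_DOUBLONS
FROM 'C:\staging\doublons.dat'
WITH (
    BATCHSIZE = 1000,  -- rows per commit; smaller batches reduce buffer/memory pressure
    MAXERRORS = 0      -- like Maximum rejects = 0: abort on the first bad row
);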
eganjp (BOB member since 2007-09-12)
gom
June 8, 2012, 10:10pm
5
Thanks, but no. We thought it could be that too, and we tried 1,000 and 2,000 rows per commit.
But we only changed “Maximum rejects” to zero, and it works.
Here is why 8,000 rows seems to be the right value: http://wiki.sdn.sap.com/wiki/display/EIM/The+impact+of+the+CommitSize+(SQL+Server)
Gôm
system
June 8, 2012, 10:18pm
6
All our production code uses a Maximum rejects value of zero. We set it that way simply because a single failed row requires us to fail the whole load. Based on what you posted, it sounds like a value of zero also has a performance benefit (no row caching).
eganjp (BOB member since 2007-09-12)
gom
June 8, 2012, 10:19pm
7
“no caching” is a performance benefit?!
Gôm
system
June 8, 2012, 11:20pm
8
That’s what it sounds like to me.
eganjp (BOB member since 2007-09-12)