SQL Server, Table Comparison and dirty reads

I am doing something wrong but cannot figure out what. HELP!

I want to reproduce the situation where SQL Server fails and you have to switch to read_uncommitted. So I created a table with an INT plus a VARCHAR(20) column, generated 1'000'000 rows with row_gen (the varchar set to a constant different from the previous run), and put Table Comparison into row-by-row mode. Or sorted. Either way, I never get an error; the dataflow goes through without a problem. Did they change something in SQL Server 2008? Is there a table property to be set? Is my table too small?
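For reference, this is roughly the setup in plain T-SQL (a sketch only; table and column names are made up, and the CTE just stands in for the row_gen transform):

```sql
-- Illustrative test table: INT key plus VARCHAR(20), as described above.
CREATE TABLE dbo.tc_test (
    id        INT         NOT NULL PRIMARY KEY,
    run_label VARCHAR(20) NOT NULL
);

-- 1'000'000 rows; change the constant between runs so Table Comparison
-- flags every row as changed.
;WITH n AS (
    SELECT TOP (1000000)
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS rn
    FROM sys.all_objects a CROSS JOIN sys.all_objects b
)
INSERT INTO dbo.tc_test (id, run_label)
SELECT rn, 'run_1' FROM n;
```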


Werner Daehn :de: (BOB member since 2004-12-17)

Did you remove any indexes on the table? That should force some blocking.

It may be that you have to either make the table larger, or throttle back the memory allocated to SQL Server so that it won’t try to do discrete row locks… ?
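The memory throttling could look something like this (a sketch; 512 MB is an arbitrary figure):

```sql
-- With less memory available, SQL Server escalates row locks to page/table
-- locks sooner, which makes collisions with the loader more likely.
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'max server memory (MB)', 512;
RECONFIGURE;
```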


dnewton :us: (BOB member since 2004-01-30)

A primary key index exists. The table is 20 MB - hmm, I will increase that.


Werner Daehn :de: (BOB member since 2004-12-17)

No, I cannot force an error. Multiple loaders, DOP, sp_indexoption - I tried them all.
When you actually want an error, there is none. :reallymad:
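The sp_indexoption calls were along these lines (a sketch; table and index names are the made-up ones from earlier, and on SQL 2008 ALTER INDEX ... SET (ALLOW_ROW_LOCKS = OFF, ALLOW_PAGE_LOCKS = OFF) is the non-deprecated equivalent):

```sql
-- Forbid row and page locks on the PK index so only table locks remain.
EXEC sp_indexoption 'dbo.tc_test.PK_tc_test', 'DisAllowRowLocks', TRUE;
EXEC sp_indexoption 'dbo.tc_test.PK_tc_test', 'DisAllowPageLocks', TRUE;
```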

Can anybody come up with a simple reproducible use case?


Werner Daehn :de: (BOB member since 2004-12-17)

What error are you trying to get?

Blocking doesn’t produce an error, so are you trying to get the error that can occur when you do an uncommitted read, and the underlying data changes?

“Could not continue scan with NOLOCK due to data movement.” I think it’s a 601 or 605 error.
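The classic way to try to provoke that error is two competing sessions, along these lines (a sketch reusing the made-up table from earlier; the error is timing-dependent and by no means guaranteed to fire):

```sql
-- Session 1 (reader): keep scanning under NOLOCK / read uncommitted.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
WHILE 1 = 1
    SELECT COUNT(*) FROM dbo.tc_test WITH (NOLOCK);

-- Session 2 (writer): churn the same pages so rows move while session 1 scans.
WHILE 1 = 1
BEGIN
    UPDATE dbo.tc_test SET run_label = REPLICATE('y', 20) WHERE id % 7 = 0; -- widen rows, force page splits
    UPDATE dbo.tc_test SET run_label = 'x' WHERE id % 7 = 0;                -- shrink them again
END;
```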

So are you saying that you’re changing the source data constantly while running your dataflow, and are reading from the source table and not getting this error?

What if you run in debug mode so you can slow down the ETL and ensure that things are getting changed?


dnewton :us: (BOB member since 2004-01-30)

My goal is very simple: I thought that the TC transform in row-by-row mode requires setting the dirty-read flag, otherwise you get an error - always, as long as the volumes are large enough. Now that I try it out myself, I can use whichever TC mode I like and never get an error…

The worst I get is a blocking lock, so the DF does not move at all anymore.
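When it is stuck, this is one way to confirm who is blocking whom (a sketch, SQL 2005 and later):

```sql
-- Requests currently waiting behind another session.
SELECT session_id, blocking_session_id, wait_type, wait_time, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;

-- or, old school:
EXEC sp_who2;
```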


Werner Daehn :de: (BOB member since 2004-12-17)

I used to get a lot of blocking (particularly under SQL 2000), but never an error. The only error you might get was a deadlock, and even that was very rare.

Under SQL 2005, most of the blocking has gone away. Perhaps it’s improved even more under 2008.

So let’s say you’re internally setting the TC’s reader to allow Dirty Reads. The reason you’re not going to get the dirty-read error on a target table is that, over the course of a typical ETL job, any row you might be reading under a dirty-read context isn’t going to be changed (written/edited) by some other process. So the underlying data never changes. Right?


dnewton :us: (BOB member since 2004-01-30)

By the way, setting the TC reader to dirty reads could add a small performance benefit, since SQL Server won’t bother trying to place read locks on the target table.
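A quick way to see that difference, as a sketch reusing the made-up table from earlier (run the two sessions side by side):

```sql
-- Session A: hold exclusive locks on a few rows and leave the transaction open.
BEGIN TRAN;
UPDATE dbo.tc_test SET run_label = 'locked' WHERE id <= 100;

-- Session B: the default read waits behind session A's exclusive locks ...
SELECT COUNT(*) FROM dbo.tc_test;
-- ... while the dirty read takes no shared locks and returns immediately.
SELECT COUNT(*) FROM dbo.tc_test WITH (NOLOCK);
```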


dnewton :us: (BOB member since 2004-01-30)

Side note - SQL 2008 R2 (out this May) has further improvements to its locking mechanism.


dnewton :us: (BOB member since 2004-01-30)

https://wiki.sdn.sap.com/wiki/display/BOBJ/SQL+Server+DeadLocks


Werner Daehn :de: (BOB member since 2004-12-17)