select
    [d].[name] [Database Name]
    ,databasepropertyex( [d].[name], 'COLLATION' ) [Database Collation]
    ,serverproperty( 'COLLATION' ) [Server Collation]
from [sys].[databases] [d]
where databasepropertyex( [d].[name], 'COLLATION' ) != serverproperty( 'COLLATION' );
select
    concat( quotename( [s].[name] ), N'.', quotename( [t].[name] ), N'.', quotename( [c].[name] ) ) [Column Name]
    ,[c].[collation_name] [Collation]
from [sys].[schemas] [s]
join [sys].[tables] [t]
    on [t].[schema_id] = [s].[schema_id]
join [sys].[columns] [c]
    on [c].[object_id] = [t].[object_id]
where [c].[collation_name] is not null
order by [Column Name];
Running out of disk space on a SQL Server host is something you should never allow to happen, but it can happen.
If you ever need to be the emergency firefighter putting out that fire, here is a way to “reserve” some drive space ahead of time. It is literally an 8GB “dummy” file that you can delete at any time without worry, since nothing uses the file. Just SHIFT+DELETE it so it doesn’t fill up the Recycle Bin.
fsutil.exe file createnew "Emergency Disk Space (safe to delete).$$$" 8589934592
This will create an 8GB temporary file in the current folder. Adjust the size to suit your future needs.
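The size argument is in bytes (8589934592 = 8 × 1024³). If you want a different reservation size, a quick bit of shell arithmetic can compute the byte count for you; here, a hypothetical 16GB reservation:

```shell
# Compute the byte count for a 16GB dummy file (GB here meaning GiB, 1024^3 bytes).
echo $((16 * 1024 * 1024 * 1024))
# → 17179869184
```

Pass the resulting number as the last argument to fsutil.exe.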
When you need to reclaim the disk space, simply select the file and shift+delete (no need to send it to the Recycle Bin).
Here’s the gist if you don’t feel like copy & pasting.
First, create the test table and populate it with some data. The data itself isn’t too important; we just want a table that will be updated while also being queried and deleted concurrently.
drop table if exists [dbo].[Queue];

create table [dbo].[Queue](
    QueueID int identity( 1, 1 ) primary key,
    SomeAction varchar(100) not null
)
go

insert into [dbo].[Queue]( [SomeAction] )
select top 1000
    ac.[name]
from [master].[sys].[all_columns] ac
where ac.[column_id] between 1 and 100
group by ac.[name]
order by newid();
Now open a second tab and paste this next query. This one will make random changes to the queue table every tenth of a second, while holding a transaction on the modified rows.
again:

begin tran;

with numbers as (
    select top 10
        ac.[column_id]
    from [master].[sys].[all_columns] ac
    where ac.[column_id] between 1 and 100
    group by ac.[column_id]
    order by newid()
)
update q with(updlock)
set q.[SomeAction] = convert( varchar(100), newid() )
from [dbo].[Queue] q
join numbers n
    on q.[QueueID] = n.[column_id];

waitfor delay '00:00:00.1';

commit tran;

goto again;
Now, open two more tabs and paste this SQL into each.
Try more tabs if you want to test the behavior.
set nocount on;

declare @queueid int;

waitfor delay '00:00:01';

while exists( select * from [dbo].[Queue] with(updlock, readpast) )
begin

    begin tran;

    select top 1
        @queueid = q.[QueueID]
    from [dbo].[Queue] q with(updlock, readpast);

    print 'processing queueid # ' + cast(@queueid as varchar);

    -- Simulate some slow processing..
    waitfor delay '00:00:00.1';

    delete q
    from [dbo].[Queue] q
    where q.[QueueID] = @queueid;

    commit tran;

end;
Execute all 3+ tabs at once to simulate the load an application might place on the Queue table, along with 2+ queue consumers.
If everything works well, each unique QueueID should show up only once across the results of all running consumer tabs.
How does this work? Here’s my simplified English explanation.
The with(updlock) query hint tells the SQL engine that, “Hey, place some locks on these rows that I’m going to modify inside this transaction! Don’t let anyone else update them!”
And the with(readpast) query hint tells the SQL engine that, “Hey, go ahead and just skip past any rows that are locked. Just show me the ones that are not locked.”
Someone with better wording and the proper terminology for these locks can give you a deeper understanding of what’s going on in each query.
I’m just hoping these queries (my simplified retelling) help you understand how to modify a queue while also consuming rows from the very same table, but without blocking and without deadlocking.
Any feedback? Questions?
Better practices I’ve glossed over? (Such as setting the isolation level..)
Run these commands on the Windows Server to open the ports for listening on a named instance of SQL Server.
Note: Remember to enable TCP in the SQL Server Configuration Manager.
netsh firewall set portopening protocol=TCP port=1433 name=SQLServerTCP mode=ENABLE scope=SUBNET profile=CURRENT
netsh firewall set portopening protocol=UDP port=1434 name=SQLServerUDP mode=ENABLE scope=SUBNET profile=CURRENT
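Note that the `netsh firewall` context is deprecated on Windows Server 2008 R2 and later. On newer systems, the equivalent inbound rules under the `netsh advfirewall` syntax would look roughly like this (the rule names here are arbitrary, and the lack of scope/profile restrictions means these rules are broader than the ones above):

```shell
netsh advfirewall firewall add rule name="SQLServerTCP" dir=in action=allow protocol=TCP localport=1433
netsh advfirewall firewall add rule name="SQLServerUDP" dir=in action=allow protocol=UDP localport=1434
```

Also keep in mind that TCP 1433 is the default-instance port; a named instance listens on a dynamic port by default unless you pin it to a static one in SQL Server Configuration Manager, while UDP 1434 is used by the SQL Server Browser service to resolve named instances.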
For more help, refer to SQL Server & Windows Firewall.
-- NOTE: This is an EXAMPLE to show it is possible
-- to achieve moving rows using an implicit transaction
-- from one table into another table.

-- Set up example tables, Foo & Bar.
drop table if exists [dbo].[Bar];
drop table if exists [dbo].[Foo];

create table [dbo].[Bar](
    [ID] [int] identity(1,1) not null primary key clustered,
    [MyKey] [int] null index ix_mykey,
    [MyColumn] [int] null index ix_mycol
)

create table [dbo].[Foo](
    [ID] [int] identity(1,1) not null primary key clustered,
    [MyKey] [int] null index ix_mykey,
    [MyColumn] [int] null index ix_mycol
)
GO

-- Create some dummy data.
insert into [dbo].[Foo]( [MyKey], [MyColumn] )
values (1, 11), (2, 22), (3, 33), (4, 44);
GO 10240

-- Verify the rows before.
select count(*) from [dbo].[Foo];
select count(*) from [dbo].[Bar];

-- Move rows from table Foo into table Bar using two styles.

-- Method 1: "Direct" move
delete from [dbo].[Foo]
output [deleted].[MyKey], [deleted].[MyColumn]
    into [dbo].[Bar]( [MyKey], [MyColumn] )
where [ID] % 3 = 1;

-- Method 2: "Subquery" move
insert [dbo].[Bar]( [MyKey], [MyColumn] )
select [MyKey], [MyColumn]
from (
    delete from [dbo].[Foo]
    output deleted.*
    where [ID] % 3 = 2
) [MovedRows]
where [MovedRows].[MyKey] = 2;
-- Note: The last where is not needed; it just shows it is possible
-- to re-filter incoming [from the deleted output] rows.

-- Verify the counts after.
-- Note: Any deleted rows filtered out via Method 2 are now gone.
select count(*) from [dbo].[Foo];
select count(*) from [dbo].[Bar];