List Databases with a different collation than the SQL Server instance

select
    [d].[name] [Database Name]
   ,databasepropertyex( [d].[name], 'COLLATION' ) [Database Collation]
   ,serverproperty( 'COLLATION' ) [Server Collation]
from [sys].[databases] [d]
where ( databasepropertyex( [d].[name], 'COLLATION' )) != serverproperty( 'COLLATION' );

List collation on all columns in all tables in the current database on SQL Server

select
    concat( quotename( [s].[name] ), N'.', quotename( [t].[name] ), N'.', quotename( [c].[name] ) ) [Column Name]
   ,[c].[collation_name] [Collation]
from [sys].[schemas] [s]
join [sys].[tables] [t] on [t].[schema_id] = [s].[schema_id]
join [sys].[columns] [c] on [c].[object_id] = [t].[object_id]
where [c].[collation_name] is not null
order by [Column Name];
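A variation of the same query that I find useful (my own addition, not part of the original snippet) is to list only the columns whose collation differs from the database default:

```sql
-- Sketch: only columns that deviate from the database default collation.
-- The cast() is needed because databasepropertyex() returns sql_variant.
select
    concat( quotename( [s].[name] ), N'.', quotename( [t].[name] ), N'.', quotename( [c].[name] ) ) [Column Name]
   ,[c].[collation_name] [Collation]
from [sys].[schemas] [s]
join [sys].[tables] [t] on [t].[schema_id] = [s].[schema_id]
join [sys].[columns] [c] on [c].[object_id] = [t].[object_id]
where [c].[collation_name] is not null
  and [c].[collation_name] <> cast( databasepropertyex( db_name(), 'COLLATION' ) as nvarchar(128) )
order by [Column Name];
```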


How to save some Emergency Drive Space for SQL Server

Running out of disk space on a SQL Server machine is something you should never allow to happen, but it happens.

If you need to be the emergency fireman putting out the fire, here is a way to “reserve” some drive space ahead of time. It is literally an 8 GB “dummy” file that you can delete at any time without worry – nothing is using the file.

fsutil.exe file createnew "Emergency Disk Space (safe to delete).$$$" 8589934592

This will create an 8 GB dummy file (8 × 1,073,741,824 bytes = 8,589,934,592) in the current folder. Adjust the byte count to suit your future needs; 16 GB, for example, would be 17179869184.

When you need to reclaim the disk space, simply select the file and shift+delete (no need to send it to the Recycle Bin).
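On a related note, here is a quick way (my addition, a sketch rather than part of the original tip) to check the free space on every volume that hosts a database file, straight from T-SQL, so you know when it is time to cash in the dummy file:

```sql
-- Sketch: free space per volume hosting any database file.
-- sys.dm_os_volume_stats is available from SQL Server 2008 R2 SP1 onward.
select distinct
    vs.[volume_mount_point]
   ,vs.[available_bytes] / 1048576 [Free MB]
   ,vs.[total_bytes] / 1048576 [Total MB]
from sys.master_files mf
cross apply sys.dm_os_volume_stats( mf.[database_id], mf.[file_id] ) vs;
```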

Here’s the gist if you don’t feel like copying and pasting.

List Cached Data Per Object in Memory

select
	(count(*) * 8) / 1024 as [MB Used],   -- each cached page is 8 KB
	object_schema_name( obj.[object_id] ) [Schema Name],
	obj.[name] [tablename],
	obj.[index_id]
from sys.dm_os_buffer_descriptors bd
join (
	select
		object_name( p.[object_id] ) [name],
		p.[object_id],
		p.[index_id],
		au.[allocation_unit_id]
	from sys.allocation_units au
	join sys.partitions p on ( au.[container_id] = p.[hobt_id] and ( au.[type] = 1 or au.[type] = 3 ) )
		or ( au.[container_id] = p.[partition_id] and au.[type] = 2 )
	) obj on bd.[allocation_unit_id] = obj.[allocation_unit_id]
where bd.[database_id] = db_id()
group by object_schema_name( obj.[object_id] ), [name], [index_id]
order by [MB Used] desc;

Jeff Moden’s Tally Table Function (Generate Large Numbers)

Testing Jeff’s excellent number-table generator with the examples below takes ~8 minutes on this 9-CPU, 16 GB RAM, 3.7 GHz virtual machine running SQL Server 2019 Developer Edition CU12. The server MAXDOP is set to 8 and the cost threshold for parallelism to 50; the database-scoped MAXDOP is also set to 8.

Disclaimer: This virtual machine is not optimized for performance. The 4 equally sized tempdb data files are purposely located on slow storage (only ~6200 IOPS) through the same storage controller in an attempt to exaggerate the effect of any “bad” parts of queries. There are also backup (and other various) jobs possibly running during these tests.

Jeff Moden’s script will of course run faster on better-optimized hardware.

Test Query 1: Select Into #temp version, 1 billion numbers.

drop table if exists #temp;
select [N] into #temp from [dbo].[fnTallyBig]( 1073741824 );

Result: Test query 1 took ~8 minutes, went parallel and used all 8 cores; hovered at 14% CPU usage – according to the Hyper-V Manager.

Test Query 2: Select @n=N version, 1 billion numbers.

declare @n bigint;
select @n=[N] from [dbo].[fnTallyBig]( 1073741824 );

Result: Test query 2 took ~6.8 minutes and also went parallel and used all 8 cores; hovered around 9% CPU usage – according to the Hyper-V Manager.

Testing conclusion: There are no known bad parts of Jeff Moden’s script. (Yay!)
(Also: The tempdb performance on this virtual machine is horrible! 😉)

This is the modified version of Jeff Moden’s “fnTally” script to create the number generating function. I’ve removed the @ZeroOrOne parameter in favor of always starting at zero.

If you read through the comments, there are alternate versions, including one that lets you specify the starting number!
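As a rough sketch of that idea (my own illustration, not one of the commented versions), you can also just offset the function’s output yourself, since it always starts at zero:

```sql
-- Hypothetical wrapper: generate @Start through @End inclusive
-- by shifting fnTallyBig's 0-based output.
declare @Start bigint = 100, @End bigint = 110;

select [N] + @Start [N]
from [dbo].[fnTallyBig]( @End - @Start );
```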

 Return a column of BIGINTs from 0 up to and including @MaxN, with a max value of 4,294,967,296.

--===== Syntax examples
 SELECT t.[N] FROM [dbo].[fnTallyBig](@MaxN) t;
 select t.[N] into #numbers from [dbo].[fnTallyBig](4294967296) t;

 @MaxN has an operational domain from 0 to 4,294,967,296. Silent truncation occurs for larger numbers.

 Please see the following notes for other important information.

Original script can be found at

 Jeff's Notes:
 1. This code works for SQL Server 2008 and up.
 2. Based on Itzik Ben-Gan's cascading CTE (cCTE) method for creating a "readless" Tally Table source of BIGINTs.
    Refer to the following URL for how it works.
 3. To start a sequence at 0, @ZeroOrOne must be 0. Any other value that's convertible to the BIT data-type will cause the sequence to start at 1.
 4. If @ZeroOrOne = 1 and @MaxN = 0, no rows will be returned.
 5. If @MaxN is negative or NULL, a "TOP" error will be returned.
 6. @MaxN must be a positive number from >= the value of @ZeroOrOne up to and including 4,294,967,296. If a larger number is used, the function will silently truncate after that max. If you actually need a sequence with that many or more values, you should consider using a different tool. 😉
 7. There will be a substantial reduction in performance if "N" is sorted in descending order. If a descending sort is required, use code similar to the following. Performance will decrease by about 27% but it's still very fast especially compared with just doing a simple descending sort on "N", which is about 20 times slower.
    If @ZeroOrOne is a 0, in this case, remove the "+1" from the code.

     SELECT @MaxN = 1000;
     SELECT DescendingN = @MaxN-N+1 
       FROM dbo.fnTally2(@MaxN);

 8. There is no performance penalty for sorting "N" in ascending order because the output is implicitly sorted by ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
 9. This will return 1-10,000,000 to a bit-bucket variable in about 986ms.
    This will return 0-10,000,000 to a bit-bucket variable in about 1091ms.
    This will return 1-4,294,967,296 to a bit-bucket variable in about 9:12(mi:ss).

 Revision History:
 Rev 00 - Unknown     - Jeff Moden 
        - Initial creation with error handling for @MaxN.
 Rev 01 - 09 Feb 2013 - Jeff Moden 
        - Modified to start at 0 or 1.
 Rev 02 - 16 May 2013 - Jeff Moden 
        - Removed error handling for @MaxN because of exceptional cases.
 Rev 03 - 07 Sep 2013 - Jeff Moden 
        - Change the max for @MaxN from 10 Billion to 10 Quadrillion to support an experiment. 
          This will also make it much more difficult for someone to actually get silent truncation in the future.
 Rev 04 - 04 Aug 2019 - Jeff Moden
        - Enhance performance by making the first CTE provide 256 values instead of 10, which limits the number of CrossJoins to just 2. Notice that this changes the maximum range of values to "just" 4,294,967,296, which is the entire range for INT and just happens to be an even power of 256. Because of the use of the VALUES clause, this code is "only" compatible with SQLServer 2008 and above.
        - Update old link from "SQLMag" to "ITPro". Same famous original article, just a different link because they changed the name of the company (twice, actually).
        - Update the flower box notes with the other changes.
CREATE FUNCTION [dbo].[fnTallyBig] ( @MaxN BIGINT )
RETURNS TABLE WITH SCHEMABINDING AS
RETURN WITH
  H2(N) AS ( SELECT 1 
               FROM (VALUES
                     (0),(0),(0),(0),(0),(0),(0),(0), (0),(0),(0),(0),(0),(0),(0),(0), (0),(0),(0),(0),(0),(0),(0),(0), (0),(0),(0),(0),(0),(0),(0),(0), (0),(0),(0),(0),(0),(0),(0),(0), (0),(0),(0),(0),(0),(0),(0),(0), (0),(0),(0),(0),(0),(0),(0),(0), (0),(0),(0),(0),(0),(0),(0),(0),
                     (0),(0),(0),(0),(0),(0),(0),(0), (0),(0),(0),(0),(0),(0),(0),(0), (0),(0),(0),(0),(0),(0),(0),(0), (0),(0),(0),(0),(0),(0),(0),(0), (0),(0),(0),(0),(0),(0),(0),(0), (0),(0),(0),(0),(0),(0),(0),(0), (0),(0),(0),(0),(0),(0),(0),(0), (0),(0),(0),(0),(0),(0),(0),(0),
                     (0),(0),(0),(0),(0),(0),(0),(0), (0),(0),(0),(0),(0),(0),(0),(0), (0),(0),(0),(0),(0),(0),(0),(0), (0),(0),(0),(0),(0),(0),(0),(0), (0),(0),(0),(0),(0),(0),(0),(0), (0),(0),(0),(0),(0),(0),(0),(0), (0),(0),(0),(0),(0),(0),(0),(0), (0),(0),(0),(0),(0),(0),(0),(0),
                     (0),(0),(0),(0),(0),(0),(0),(0), (0),(0),(0),(0),(0),(0),(0),(0), (0),(0),(0),(0),(0),(0),(0),(0), (0),(0),(0),(0),(0),(0),(0),(0), (0),(0),(0),(0),(0),(0),(0),(0), (0),(0),(0),(0),(0),(0),(0),(0), (0),(0),(0),(0),(0),(0),(0),(0), (0),(0),(0),(0),(0),(0),(0),(0)
                     ) V(N))           --16^2 or 256 rows
, H4(N) AS (SELECT 1 FROM H2 a, H2 b) --16^4 or 65,536 rows
, H8(N) AS (SELECT 1 FROM H4 a, H4 b) --16^8 or 4,294,967,296 rows
            SELECT N = 0 UNION ALL
            SELECT TOP(@MaxN)
                   N = ROW_NUMBER() OVER (ORDER BY N)
              FROM H8;