Categories
concurrency

How to update and query a Queue table with multiple consumers

First, create the test table and populate it with some data. The data itself isn’t too important; we just want a table that will be updated while concurrently being queried and deleted.

drop table if exists [dbo].[Queue];

create table [dbo].[Queue](
	QueueID int identity( 1, 1 ) primary key,
	SomeAction varchar(100) not null
)

go

insert into [dbo].[Queue]( [SomeAction] )
select top 1000
	ac.[name]
from [master].[sys].[all_columns] ac
where ac.[column_id] between 1 and 100
group by ac.[name]
order by newid();
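
As an optional sanity check before moving on, count what actually landed in the table (the group by above means it can be fewer than 1,000 rows, depending on how many distinct column names exist):

select count(*) as [QueueRows]
from [dbo].[Queue];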

Now open a second tab and paste this next query. This one will make random changes to the queue table every tenth of a second, while holding a transaction on the modified rows.

again:
	begin tran;
		with numbers as (
			select top 10
				ac.[column_id]
			from [master].[sys].[all_columns] ac
			where ac.[column_id] between 1 and 100
			group by ac.[column_id]
			order by newid()
		)
		update q with(updlock) set
			q.[SomeAction] = convert( varchar(100), newid() )
		from [dbo].[Queue] q
		join numbers n on q.[QueueID] = n.[column_id];

		-- Hold the transaction (and the row locks) briefly before committing.
		waitfor delay '00:00:00.1';
	commit tran;
goto again;

Now, open two more tabs and paste this SQL into each.
Try more tabs if you want to test the behavior.

set nocount on;
declare @queueid int;

waitfor delay '00:00:01';

while exists( select * from [dbo].[Queue] with (updlock, readpast) ) begin;

	begin tran;

		set @queueid = null; -- Reset, so a stale value can never be re-processed.

		select top 1 @queueid = q.[QueueID]
		from [dbo].[Queue] q with(updlock, readpast);

		if @queueid is null begin;
			-- Another consumer grabbed the last unlocked row between our checks.
			rollback tran;
			continue;
		end;

		print 'processing queueid # ' + cast( @queueid as varchar );

		-- Simulate some slow processing..
		waitfor delay '00:00:00.1';

		delete q
		from [dbo].[Queue] q
		where q.[QueueID] = @queueid;

	commit tran;
end;

Execute all 3+ tabs at once to simulate the load an application might place on the Queue table, along with 2+ queue consumers.

If everything works well, each unique QueueID should show up only once in the results across all running consumer tabs.


How does this work? Well.. here’s my simplified English explanation.

The with(updlock) query hint tells the SQL engine that, “Hey, place some locks on these rows that I’m going to modify inside this transaction! Don’t let anyone else update them!”

And the with(readpast) query hint tells the SQL engine that, “Hey, go ahead and just skip past any rows that are locked. Just show me the ones that are not locked.”

Someone with the proper terminology (of what these locks are actually called) could give you a more rigorous explanation of what’s going on in each query.

I’m just hoping that these queries (and my simplified re-telling) help you understand how to modify a queue while also consuming rows from the very same table, without blocking and without deadlocking.
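
As an aside, here’s a minimal sketch of a related pattern I didn’t cover above: SQL Server can also dequeue in one atomic statement with delete ... output, which removes a row and returns it in the same breath, leaving no gap between selecting a row and deleting it.

begin tran;

	delete top (1) q
	output [deleted].[QueueID], [deleted].[SomeAction]
	from [dbo].[Queue] q with( rowlock, readpast );

	-- ..do the actual processing here, then..

commit tran;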

Any feedback? Questions?
Better practices I’ve glossed over? (Such as setting the isolation level..)

Categories
MSSQL SQL SQL Script SQL Server

A Visual Explanation of Joins

https://blog.codinghorror.com/a-visual-explanation-of-sql-joins/

Categories
How To

How to [Smartly] add Braces in C#

If you ever need a SQL object name correctly quoted with the [ and ] braces, here’s a C# extension method to do it smartly.

		/// <summary>
		///     Add the left [ and right ] braces if they're not already on the string.
		///     <para>An empty or whitespace string throws <see cref="ArgumentEmptyException"/>.</para>
		/// </summary>
		/// <param name="self"></param>
		/// <returns></returns>
		/// <exception cref="ArgumentEmptyException"></exception>
		[DebuggerStepThrough]
		[NotNull]
		[Pure]
		public static String SmartBraces( [NotNull] this String? self ) {
			self = self.Trimmed();

			if ( String.IsNullOrEmpty( self ) ) {
				throw new ArgumentEmptyException( nameof( self ) );
			}

			// Strip any existing braces first, so we don't double-wrap.
			if ( self.StartsWith( "[", StringComparison.Ordinal ) && self.EndsWith( "]", StringComparison.Ordinal ) ) {
				self = self[ 1..^1 ].Trim();
			}

			if ( String.IsNullOrEmpty( self ) ) {
				throw new ArgumentEmptyException( nameof( self ) );
			}

			return $"[{self}]";
		}

		/// <summary>Trim the ToString() of the object; returning null if null, empty, or whitespace.</summary>
		/// <param name="self"></param>
		/// <returns></returns>
		[DebuggerStepThrough]
		[CanBeNull]
		[Pure]
		public static String? Trimmed<T>( [CanBeNull] this T self ) =>
			self switch {
				null => default( String? ),
				String s => s.Trim().NullIfEmpty(),
				var _ => self.ToString()?.Trim().NullIfEmpty()
			};


		/// <summary>Returns null if <paramref name="self" /> is <see cref="String.IsNullOrEmpty" />.</summary>
		/// <param name="self"></param>
		/// <returns></returns>
		[CanBeNull]
		[DebuggerStepThrough]
		[Pure]
		public static String? NullIfEmpty( [CanBeNull] this String? self ) => String.IsNullOrEmpty( self ) ? null : self;


//Note: All attributes can be removed.
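
As an aside, on the SQL side the built-in quotename() function covers the simple case, though it behaves differently from SmartBraces above: it escapes an embedded ] by doubling it instead of stripping existing braces first.

select quotename( N'MyTable' );   -- [MyTable]
select quotename( N'[MyTable]' ); -- [[MyTable]]] -- the existing braces get escaped, not removed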

Categories
Numbers SQL SQL Script Strings

SQL Server Function: How to format bytes into greater units like MB, GB, TB, etc

Here’s a link to my gist for a SQL script that formats a byte count as a string in a larger unit. The second parameter is the unit of the input value; the function converts the input to bytes and then formats it using the largest unit that fits.

Link: https://gist.github.com/Protiguous/c40ef32ee4fa0f4e93c9413d6cc3d6bd

Examples

select [dbo].[FormatBytes]( 13241322.567567, 'bytes' );
select [dbo].[FormatBytes]( 132413225.567567, 'bytes' );
select [dbo].[FormatBytes]( 1324132255.567567, 'bytes' );
select [dbo].[FormatBytes]( 13241322551.567567, 'bytes' );
select [dbo].[FormatBytes]( 13241322.567567, 'mb' );
select [dbo].[FormatBytes]( 132413225.567567, 'gb' );
select [dbo].[FormatBytes]( 1324132255.567567, 'tb' );
select [dbo].[FormatBytes]( 13241322551.567567, 'zb' );
select [dbo].[FormatBytes]( 13241322551.567567, 'yb' );
select [dbo].[FormatBytes]( 132413225512.567567, 'yb' );
select [dbo].[FormatBytes]( 132413225, 'bb' );
select [dbo].[FormatBytes]( 1324132253, 'bb' ); -- too big!
select [dbo].[FormatBytes]( 1324.13, 'byte' );
select [dbo].[FormatBytes]( 1324135, 'geopbyte' ); -- too big!

create or alter function [dbo].[FormatBytes] ( @bytes decimal(38, 0), @toUnit nvarchar(15) = N'bytes' )
returns sysname
with schemabinding
as
begin
	declare @prefix decimal(38, 0); -- Adjust the precision to your needs; the lower the precision, the higher the byte values can scale.

	set @toUnit = trim( @toUnit );

	if @toUnit is null return null;

	set @bytes = @bytes *
		case @toUnit collate SQL_Latin1_General_CP1_CI_AI
			when N'b' then 1
			when N'byte' then 1
			when N'bytes' then 1
			when N'kb' then 1024
			when N'kilobyte' then 1024
			when N'kilobytes' then 1024
			when N'mb' then 1048576
			when N'megabyte' then 1048576
			when N'megabytes' then 1048576
			when N'gb' then 1073741824
			when N'gigabyte' then 1073741824
			when N'gigabytes' then 1073741824
			when N'tb' then 1099511627776
			when N'terabyte' then 1099511627776
			when N'terabytes' then 1099511627776
			when N'pb' then 1125899906842624
			when N'petabyte' then 1125899906842624
			when N'petabytes' then 1125899906842624
			when N'eb' then 1152921504606846976
			when N'exabyte' then 1152921504606846976
			when N'exabytes' then 1152921504606846976
			when N'zb' then 1180591620717411303424
			when N'zettabyte' then 1180591620717411303424
			when N'zettabytes' then 1180591620717411303424
			when N'yb' then 1208925819614629174706176
			when N'yottabyte' then 1208925819614629174706176
			when N'yottabytes' then 1208925819614629174706176
			when N'bb' then 1237940039285380274899124224
			when N'brontobyte' then 1237940039285380274899124224
			when N'brontobytes' then 1237940039285380274899124224
			when N'geopbyte' then 1267650600228229401496703205376
			when N'geopbytes' then 1267650600228229401496703205376
		end;

	set @prefix =
		case
			when abs(@bytes) < 1024 then @bytes                                                             -- bytes
			when abs(@bytes) < 1048576 then (@bytes / 1024)                                                 -- kb
			when abs(@bytes) < 1073741824 then (@bytes / 1048576)                                           -- mb
			when abs(@bytes) < 1099511627776 then (@bytes / 1073741824)                                     -- gb
			when abs(@bytes) < 1125899906842624 then (@bytes / 1099511627776)                               -- tb
			when abs(@bytes) < 1152921504606846976 then (@bytes / 1125899906842624)                         -- pb
			when abs(@bytes) < 1180591620717411303424 then (@bytes / 1152921504606846976)                   -- eb
			when abs(@bytes) < 1208925819614629174706176 then (@bytes / 1180591620717411303424)             -- zb
			when abs(@bytes) < 1237940039285380274899124224 then (@bytes / 1208925819614629174706176)       -- yb
			when abs(@bytes) < 1267650600228229401496703205376 then (@bytes / 1237940039285380274899124224) -- bb
			else (@bytes / 1267650600228229401496703205376)                                                 -- geopbytes
		end;

	return convert( sysname, @prefix ) +
		case
			when abs(@bytes) < 1024 then N' Bytes'
			when abs(@bytes) < 1048576 then N' KB'
			when abs(@bytes) < 1073741824 then N' MB'
			when abs(@bytes) < 1099511627776 then N' GB'
			when abs(@bytes) < 1125899906842624 then N' TB'
			when abs(@bytes) < 1152921504606846976 then N' PB'
			when abs(@bytes) < 1180591620717411303424 then N' EB'
			when abs(@bytes) < 1208925819614629174706176 then N' ZB'
			when abs(@bytes) < 1237940039285380274899124224 then N' YB'
			when abs(@bytes) < 1267650600228229401496703205376 then N' BB'
			else N' geopbytes'
		end;
end;
Categories
Benchmarking Developers SQL Tuning

RE: How to run your CTE just once, and re-use the output

There’s an excellent article “How to run your CTE just once, and re-use the output” over at sqlsunday.com, but a few people on Reddit are wondering if inserting into a #temp table or @table variable would be any better [for performance].

As Brent Ozar is fond of saying (paraphrasing!), “If you don’t know, you’re wasting your time tuning!“.

For the test rig, I spun up a Hyper-V virtual machine (4 GHz, 6 CPUs, 8 GB RAM) running SQL Server 2019 Developer Edition, with the cost threshold for parallelism set to 50 and server maxdop set to 6. I attached a copy of the Stack Overflow March 2016 database on a 7200 RPM 1 TB hard drive, set the compatibility level to 150 and database maxdop to 6, created the index given on sqlsunday.com (which took 9 minutes), ran “set statistics io on;”, and then executed each query. Here are the results I see on this virtual test server.


Query 1, “CTE + Union All”
(8926714 rows affected)

Table 'Users'. Scan count 28, logical reads 322600.
Table 'Posts'. Scan count 30, logical reads 333030.

Total logical reads: 655,630. This is our baseline.


Query 1-B, “Query 1 + Suggested Index”

Running Query 1 suggests creating another index, in addition to the index already used from the blog post.

CREATE NONCLUSTERED INDEX ix_suggested_index
ON [dbo].[Users] ([Reputation])
INCLUDE ([DisplayName]);

This took 4 seconds to create. Let’s see if Query 1 runs any better with this new suggested index. (The suggested index is now used 4 times alongside the previous index.)

(8926714 rows affected)

Table 'Users'. Scan count 28, logical reads 99860.
Table 'Posts'. Scan count 30, logical reads 333030.

Logical reads on the Users table have dropped significantly, and the query returned results sooner.

Total logical reads: 432,890. (Better than Query 1.)


Query 2, “Query 1 – Expanded”
(8218141 rows affected)

Table 'Users'. Scan count 28, logical reads 323384.
Table 'Posts'. Scan count 28, logical reads 333030.

Already, we can see that the row counts are different: Query 1 returns 708,573 more rows than Query 2. Without spending more time digging into why, my guess would be that the query was incorrectly expanded [from the CTE to subqueries] in the post.

Total logical reads: 656,414. (Worse. And the row count is now lower.)


And again, now with the suggested index. (Same query as Query 2, just with index.)

(8218141 rows affected)

Table 'Users'. Scan count 28, logical reads 99924.
Table 'Posts'. Scan count 29, logical reads 332248.

Logical reads on the Users table have dropped significantly, and this query also returned results sooner.

Total logical reads: 432,172. (Slightly better than the baseline. But I do not trust the results as the row count is still lower.)


Query 3, “CTE + Cross Apply + Union All”
(8926714 rows affected)

Table 'Posts'. Scan count 10, logical reads 109998.
Table 'Users'. Scan count 7, logical reads 24996.

Warning: Null value is eliminated by an aggregate or other SET operation.

Same row count, fewer logical reads on both tables. Except now we have a null-warning!

I would prefer to rewrite all versions of the query to be properly sorted and then run a compare using WinMerge on the resulting text files.

Total logical reads: 134,994. (A lot better than the baseline!)


Query 4, “CTE + Cross Apply”
(8926714 rows affected)

Table 'Posts'. Scan count 10, logical reads 110212.
Table 'Users'. Scan count 7, logical reads 24996.

It did run a few seconds faster than Query 3 with about the same logical reads. The execution plan is also much cleaner. Remember: these tests are still being run with the SQL Server 2019 suggested index created after the original Query 1 on the Users table.

Total logical reads: 135,208. (A lot better than baseline!)
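
For reference, since Query 5 below is described as otherwise the same query as Query 4 (just routed through a #temp table), Query 4’s shape would be roughly this single CTE + cross apply. This is my reconstruction, not the exact text from the sqlsunday.com post:

WITH cte AS (
    SELECT u.DisplayName, u.Reputation,
           SUM(p.ViewCount) AS ViewCount,
           SUM(p.CommentCount) AS CommentCount,
           SUM(p.FavoriteCount) AS FavoriteCount
    FROM dbo.Users AS u
    LEFT JOIN dbo.Posts AS p ON p.OwnerUserId=u.Id AND p.PostTypeId IN (1, 2)
    GROUP BY u.DisplayName, u.Reputation
)
SELECT DisplayName, x.Metric, x.[Value]
FROM cte
CROSS APPLY (
    VALUES ('Reputation', cte.Reputation),
           ('Views',      cte.ViewCount),
           ('Comments',   cte.CommentCount),
           ('Favorited',  cte.FavoriteCount)
    ) AS x(Metric, [Value])
WHERE x.[Value]>0;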


I’ll update this post with more information about using #temp tables and @table variables when I have the time (and ambition). For now, I have to get back to “working” 😉.


…Later the next day…

Query 5, “Select into #temp + Cross Apply Values”
SELECT u.DisplayName, u.Reputation,
        SUM(p.ViewCount) AS ViewCount,
        SUM(p.CommentCount) AS CommentCount,
        SUM(p.FavoriteCount) AS FavoriteCount
into #temp
FROM dbo.Users AS u
LEFT JOIN dbo.Posts AS p ON p.OwnerUserId=u.Id AND p.PostTypeId IN (1, 2)
GROUP BY u.DisplayName, u.Reputation;

SELECT DisplayName, x.Metric, x.[Value]
FROM #temp cte
CROSS APPLY (
    VALUES ('Reputation', cte.Reputation),    --- 1
           ('Views',      cte.ViewCount),     --- 2
           ('Comments',   cte.CommentCount),  --- 3
           ('Favorited',  cte.FavoriteCount)  --- 4
    ) AS x(Metric, [Value])
WHERE x.[Value]>0;

And the results of selecting into a #temp table, and then selecting from that #temp table (otherwise the same query as Query 4).

(4569697 rows affected)

Table 'Posts'. Scan count 9, logical reads 110144.
Table 'Users'. Scan count 7, logical reads 24940.
Warning: Null value is eliminated by an aggregate or other SET operation.

(8926714 rows affected)

Table '#temp'. Scan count 7, logical reads 28206.

Total logical reads: 163,290. (Better than baseline, but more than Query 4. Plus it took ~20 seconds longer to return results.)


Query 6, “insert @table then cross apply”
declare @temp table(
	DisplayName nvarchar(40),
	Reputation int,
	ViewCount int,
	CommentCount int,
	FavoriteCount int
)

insert into @temp( DisplayName, Reputation, ViewCount, CommentCount, FavoriteCount )
SELECT u.DisplayName, u.Reputation,
        SUM(p.ViewCount) AS ViewCount,
        SUM(p.CommentCount) AS CommentCount,
        SUM(p.FavoriteCount) AS FavoriteCount
FROM dbo.Users AS u
LEFT JOIN dbo.Posts AS p ON p.OwnerUserId=u.Id AND p.PostTypeId IN (1, 2)
GROUP BY u.DisplayName, u.Reputation;

SELECT DisplayName, x.Metric, x.[Value]
FROM @temp cte
CROSS APPLY (
    VALUES ('Reputation', cte.Reputation),    --- 1
           ('Views',      cte.ViewCount),     --- 2
           ('Comments',   cte.CommentCount),  --- 3
           ('Favorited',  cte.FavoriteCount)  --- 4
    ) AS x(Metric, [Value])
WHERE x.[Value]>0;

And the results of this @table query.

(4569697 rows affected)

Table 'Posts'. Scan count 2, logical reads 109728.
Table 'Users'. Scan count 1, logical reads 24782.
Warning: Null value is eliminated by an aggregate or other SET operation.

(8926714 rows affected)

Table '#B63020B0'. Scan count 7, logical reads 28203.

Total logical reads: 162,713. (Slightly better than Query 5, but still 27,505 more than Query 4. That’s an extra 225 MB in writes+reads and a slower query!)


Conclusion

The article “How to run your CTE just once, and re-use the output” over at sqlsunday.com had the correct idea.

Using #temp tables when you need the same data in multiple separate queries is usually a good idea (better than repeatedly querying the same source tables), but if the same data needs to be referenced multiple times within a single query, go with Query 4’s “CTE + cross apply” method.

Why the @table variables sucked: as far as I’ve read, we don’t get up-to-date statistics when using @table variables, and that can cause less-than-ideal query plans. If I’m behind on my readings about this, please let me know with a link to some updated reading material!
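
(One mitigation I know of: adding OPTION (RECOMPILE) to the statement that reads the @table variable lets the optimizer see the actual row count at compile time. SQL Server 2019 also added table variable deferred compilation under compatibility level 150, which is meant to address exactly this; I haven’t re-tested these queries with that in mind.)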

Categories
script SQL The Arcane

ALL, ANY, and SOME: 3 SQL operators you do NOT want to use…

http://bradsruminations.blogspot.com/2009/08/all-any-and-some-three-stooges.html

-- 4 questions with BoxOf3Coins
with BoxOf3Coins(CoinValue) as (select  5 union all select 10 union all select 25)
	,EmptyBox as (select CoinValue from BoxOf3Coins where 1 = 2)
select 'Is twenty-five cents worth more than ANY of the coins in BoxOf3Coins?',case when 25 > any (select CoinValue from BoxOf3Coins) then 'Yes' else 'No' end;

with BoxOf3Coins(CoinValue) as (select  5 union all select 10 union all select 25)
	,EmptyBox as (select CoinValue from BoxOf3Coins where 1 = 2)
select 'Is twenty-five cents worth more than ALL (EACH) of the coins in BoxOf3Coins?',case when 25 > all (select CoinValue from BoxOf3Coins) then 'Yes' else 'No' end;

with BoxOf3Coins(CoinValue) as (select  5 union all select 10 union all select 25)
	,EmptyBox as (select CoinValue from BoxOf3Coins where 1 = 2)
select 'Is twenty-five cents worth the same as ANY of the coins in BoxOf3Coins?',case when 25 = any (select CoinValue from BoxOf3Coins) then 'Yes' else 'No' end;

with BoxOf3Coins(CoinValue) as (select  5 union all select 10 union all select 25)
	,EmptyBox as (select CoinValue from BoxOf3Coins where 1 = 2)
select 'Is twenty-five cents worth the same as ALL (EACH) of the coins in BoxOf3Coins?',case when 25 = all (select CoinValue from BoxOf3Coins) then 'Yes' else 'No' end;

-- Now answer the same question with the EmptyBox
with BoxOf3Coins(CoinValue) as (select  5 union all select 10 union all select 25)
	,EmptyBox as (select CoinValue from BoxOf3Coins where 1 = 2)
select 'Is twenty-five cents worth more than ANY of the coins in EmptyBox?',case when 25 > any (select CoinValue from EmptyBox) then 'Yes' else 'No' end;

with BoxOf3Coins(CoinValue) as (select  5 union all select 10 union all select 25)
	,EmptyBox as (select CoinValue from BoxOf3Coins where 1 = 2)
select 'Is twenty-five cents worth more than ALL (EACH) of the coins in EmptyBox?',case when 25 > all (select CoinValue from EmptyBox) then 'Yes' else 'No' end;

with BoxOf3Coins(CoinValue) as (select  5 union all select 10 union all select 25)
	,EmptyBox as (select CoinValue from BoxOf3Coins where 1 = 2)
select 'Is twenty-five cents worth the same as ANY of the coins in EmptyBox?',case when 25 = any (select CoinValue from EmptyBox) then 'Yes' else 'No' end;

with BoxOf3Coins(CoinValue) as (select  5 union all select 10 union all select 25)
	,EmptyBox as (select CoinValue from BoxOf3Coins where 1 = 2)
select 'Is twenty-five cents worth the same as ALL (EACH) of the coins in EmptyBox?',case when 25 = all (select CoinValue from EmptyBox) then 'Yes' else 'No' end;
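
For the record, running these gives Yes, No, Yes, No for BoxOf3Coins, and No, Yes, No, Yes for EmptyBox. That last set is the gotcha the linked article digs into: a comparison against ALL of an empty set is vacuously true, while a comparison against ANY of an empty set is false.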

Categories
script SQL Strings

Jeff Moden’s Script for Splitting CSV Strings up to 8000 Length

Pulled from https://www.sqlservercentral.com/articles/tally-oh-an-improved-sql-8k-%E2%80%9Ccsv-splitter%E2%80%9D-function, I think.

CREATE OR ALTER FUNCTION [dbo].[DelimitedSplit8K](
	@pString VARCHAR(8000),
	@pDelimiter CHAR(1)
) --WARNING!!! DO NOT USE MAX DATA-TYPES HERE!  IT WILL KILL PERFORMANCE!
RETURNS TABLE WITH SCHEMABINDING AS
RETURN
--===== "Inline" CTE Driven "Tally Table" produces values from 1 up to 10,000... enough to cover VARCHAR(8000)
  WITH E1(N) AS (
                 SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
                 SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
                 SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1
                ),                          --10E+1 or 10 rows
       E2(N) AS (SELECT 1 FROM E1 a, E1 b), --10E+2 or 100 rows
       E4(N) AS (SELECT 1 FROM E2 a, E2 b), --10E+4 or 10,000 rows max
 cteTally(N) AS (--==== This provides the "base" CTE and limits the number of rows right up front
                     -- for both a performance gain and prevention of accidental "overruns"
                 SELECT TOP (ISNULL(DATALENGTH(@pString),0)) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E4
                ),
cteStart(N1) AS (--==== This returns N+1 (starting position of each "element" just once for each delimiter)
                 SELECT 1 UNION ALL
                 SELECT t.N+1 FROM cteTally t WHERE SUBSTRING(@pString,t.N,1) = @pDelimiter
                ),
cteLen(N1,L1) AS(--==== Return start and length (for use in substring)
                 SELECT s.N1,
                        ISNULL(NULLIF(CHARINDEX(@pDelimiter,@pString,s.N1),0)-s.N1,8000)
                   FROM cteStart s
                )
--===== Do the actual split. The ISNULL/NULLIF combo handles the length for the final element when no delimiter is found.
 SELECT ItemNumber = ROW_NUMBER() OVER(ORDER BY l.N1),
        Item       = SUBSTRING(@pString, l.N1, l.L1)
   FROM cteLen l;
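
A quick usage example (the CSV string here is just a made-up sample):

select [ItemNumber], [Item]
from [dbo].[DelimitedSplit8K]( 'alpha,beta,gamma', ',' );

-- ItemNumber  Item
-- 1           alpha
-- 2           beta
-- 3           gamma
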
Categories
Date & Times script SQL

Improved SQL Function to Generate DateTimes

-- select * from [dbo].[GenDateTimes]( 'year', '1/1/0001', '12/12/9999' ) order by [Date];
-- select * from [dbo].[GenDateTimes]( 'month', '1/1/0001', '12/12/9999' ) order by [Date];
-- select * from [dbo].[GenDateTimes]( 'week', '1/1/0001', '12/12/9999' ) order by [Date];
-- select * from [dbo].[GenDateTimes]( 'day', '1/1/0001', '12/12/9999' ) order by [Date];
-- select * from [dbo].[GenDateTimes]( 'hour', '1/1/0001', '12/12/9999' ) order by [Date];
-- select * from [dbo].[GenDateTimes]( 'minute', '1/1/1999', '12/12/2999' ) order by [Date];
-- select * from [dbo].[GenDateTimes]( 'second', '1/1/1999', '12/12/1999' ) order by [Date];
-- select * from [dbo].[GenDateTimes]( 'millisecond', '1/1/1999', '1/12/1999' ) order by [Date];
create or alter function [dbo].[GenDateTimes](
	@Increment varchar(11),
	@Start datetime2(3),
	@End datetime2(3)
)
returns	@Range table( [Date] datetime2(3) )
as
begin;
	declare @units bigint = 
				case
					when @Increment = 'ms' THEN datediff_big( ms, @Start, @End )
					when @Increment = 'millisecond' THEN datediff_big( millisecond, @Start, @End )
					when @Increment = 's' THEN datediff_big( s, @Start, @End )
					when @Increment = 'second' THEN datediff_big( second, @Start, @End )
					when @Increment = 'mi' THEN datediff_big( mi, @Start, @End )
					when @Increment = 'minute' THEN datediff_big( minute, @Start, @End )
					when @Increment = 'hh' THEN datediff_big( hh, @Start, @End )
					when @Increment = 'hour' THEN datediff_big( hour, @Start, @End )
					when @Increment = 'd' THEN datediff( d, @Start, @End )
					when @Increment = 'day' THEN datediff( day, @Start, @End )
					when @Increment = 'wk' THEN datediff( wk, @Start, @End )
					when @Increment = 'week' THEN datediff( week, @Start, @End )
					when @Increment = 'mm' THEN datediff( mm, @Start, @End )
					when @Increment = 'month' THEN datediff( month, @Start, @End )
					when @Increment = 'y' THEN datediff( year, @Start, @End ) -- note: the bare datepart "y" means dayofyear in T-SQL, so spell out year here
					when @Increment = 'year' THEN datediff( year, @Start, @End )
				end;

	set @Increment = lower( @Increment );

	with
		[l0] as ( select 0 [c] union all select 1 ),
		[l1] as ( select 1 [c] from [l0] [a] cross join [l0] [b] ),
		[l2] as ( select 1 [c] from [l1] [a] cross join [l1] [b] ),
		[l3] as ( select 1 [c] from [l2] [a] cross join [l2] [b] ),
		[l4] as ( select 1 [c] from [l3] [a] cross join [l3] [b] ),
		[l5] as ( select 1 [c] from [l4] [a] cross join [l4] [b] ),
		[nums] as ( select row_number() over( order by ( select null ) ) [n] from [l5] ),
		[cte]( [Date] ) AS (
			select @Start [Date]
			union all
			select
				case
					when @Increment = 'ms' THEN dateadd( ms, [n], @Start )
					when @Increment = 'millisecond' THEN dateadd( millisecond, [n], @Start )
					when @Increment = 's' THEN dateadd( s, [n], @Start )
					when @Increment = 'second' THEN dateadd( second, [n], @Start )
					when @Increment = 'mi' THEN dateadd( mi, [n], @Start )
					when @Increment = 'minute' THEN dateadd( minute, [n], @Start )
					when @Increment = 'hh' THEN dateadd( hh, [n], @Start )
					when @Increment = 'hour' THEN dateadd( hour, [n], @Start )
					when @Increment = 'd' THEN dateadd( d, [n], @Start )
					when @Increment = 'day' THEN dateadd( day, [n], @Start )
					when @Increment = 'wk' THEN dateadd( wk, [n], @Start )
					when @Increment = 'week' THEN dateadd( week, [n], @Start )
					when @Increment = 'mm' THEN dateadd( mm, [n], @Start )
					when @Increment = 'month' THEN dateadd( month, [n], @Start )
					when @Increment = 'y' THEN dateadd( year, [n], @Start ) -- again, spell out year; the bare "y" datepart means dayofyear
					when @Increment = 'year' THEN dateadd( year, [n], @Start )
				end
			from [nums] [t]
			where [t].[n] <= @units
	)
	insert into @Range( [Date] )
	select [Date]
	from [cte];

	return;
end;
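
A sketch of typical date-table usage, counting rows per day including empty days. The [dbo].[SomeLog] table and its [LoggedAt] column are hypothetical stand-ins:

select d.[Date], count( l.[LoggedAt] ) as [Rows]
from [dbo].[GenDateTimes]( 'day', '2020-01-01', '2020-12-31' ) d
left join [dbo].[SomeLog] l on convert( date, l.[LoggedAt] ) = d.[Date]
group by d.[Date]
order by d.[Date];
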
Categories
Date & Times script SQL

SQL Function to Generate Date Table with Count

-- select * from [dbo].[GenDates2]( 'd', getdate()-16383, getdate()+16384 );
-- select * from [dbo].[GenDates2]( 'w', getdate()-16383, getdate()+16384 );
-- select * from [dbo].[GenDates2]( 'm', getdate()-16383, getdate()+16384 );
-- select * from [dbo].[GenDates2]( 'y', getdate()-16383, getdate()+16384 );
create or alter function [dbo].[GenDates2](
	@Increment char(1),
	@Start datetime2(3),
	@End datetime2(3)
)
returns	@Range table( [Date] datetime2(3), [Count] int )
as
begin;
	set @Increment = lower( @Increment );

	with cte( [Date], [Count] ) AS (
		select @Start [Date], 0 [Count]
		union all
		select
			case
				when @Increment = 'd' THEN dateadd(day, 1, [Date])
				when @Increment = 'w' THEN dateadd(week, 1, [Date])
				when @Increment = 'm' THEN dateadd(month, 1, [Date])
				when @Increment = 'y' THEN dateadd(year, 1, [Date])
			end,
		case
			when @Increment = 'd' THEN datediff( day, @Start, dateadd(day, 1, [Date]))
			when @Increment = 'w' THEN datediff(week, @Start, dateadd(week, 1, [Date]))
			when @Increment = 'm' THEN datediff(month, @Start, dateadd(month, 1, [Date]))
			when @Increment = 'y' THEN datediff(year, @Start, dateadd(year, 1, [Date]))
		end [Count]
		from [cte]
		where [Date] <= 
			case
				when @Increment = 'd' THEN dateadd(day, -1, @End)
				when @Increment = 'w' THEN dateadd(week, -1, @End)
				when @Increment = 'm' THEN dateadd(month, -1, @End)
				when @Increment = 'y' THEN dateadd(year, -1, @End)
			end
	)
	insert into @Range( [Date], [Count] )
	select [Date], [Count]
	from [cte]
	option (maxrecursion 32767);

	return;
end;
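
Note that each increment costs one level of recursion, so the option (maxrecursion 32767) line caps the output at 32,767 rows; that’s why the examples above stay within -16,383 to +16,384 days.
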
Categories
Date & Times script SQL

SQL Function to Generate Date Table

Update: Please try the Improved version of this script.

-- select * from [dbo].[GenDates]( 'd', getdate()-16383, getdate()+16384 );
-- select * from [dbo].[GenDates]( 'w', getdate()-16383, getdate()+16384 );
-- select * from [dbo].[GenDates]( 'm', getdate()-16383, getdate()+16384 );
-- select * from [dbo].[GenDates]( 'y', getdate()-16383, getdate()+16384 );
create or alter function [dbo].[GenDates](
	@Increment char(1),
	@Start datetime2(3),
	@End datetime2(3)
)
returns	@Range table( [Date] datetime2(3) )
as
begin;
	set @Increment = lower( @Increment );

	with [cte]( [Date] ) AS (
		select @Start
		union all
		select
			case
				when @Increment = 'd' THEN dateadd(day, 1, [Date])
				when @Increment = 'w' THEN dateadd(week, 1, [Date])
				when @Increment = 'm' THEN dateadd(month, 1, [Date])
				when @Increment = 'y' THEN dateadd(year, 1, [Date])
			end
		from [cte]
		where [Date] <= 
			case
				when @Increment = 'd' THEN dateadd(day, -1, @End)
				when @Increment = 'w' THEN dateadd(week, -1, @End)
				when @Increment = 'm' THEN dateadd(month, -1, @End)
				when @Increment = 'y' THEN dateadd(year, -1, @End)
			end
	)
	insert into @Range( [Date] )
	select [Date]
	from [cte]
	option (maxrecursion 32767);

	return;
end;