I’m Beggin’, Beggin’ you, to stop using VarChars as IDs

As I was troubleshooting a performance issue, I noticed an implicit conversion (SQL Server automatically converting data from one data type to another) happening in my join. The join was on a column that was named the same in both tables, but one was an INT (integer) and the other was a VARCHAR(50) (variable characters, up to 50 of them).

While the implicit conversion was happening transparently to our coders and users, it was hurting the performance of the query. I wanted to change the datatype from VARCHAR(50) to INT, not only to match the other table, but also because INTs are faster to join on than VARCHARs in the SQL engine.

My first step was to make sure there weren’t any values in the column that would have an issue converting to an INT. For this task, I am using TRY_CAST to make my life easier.

SELECT TRY_CAST(SuperHeroId AS INT) AS Result, SuperHeroId
FROM dbo.Lair
WHERE TRY_CAST(SuperHeroId AS INT) IS NULL
AND SuperHeroId IS NOT NULL

The TRY_CAST above is checking whether each value can be CAST as an INT. If it can’t, TRY_CAST returns NULL. My WHERE clause quickly identifies the values that are failing, which allows me to fix the data before I change the data type on the column.

Once my query doesn’t return any rows, I am ready to change my datatype, which will remove that implicit conversion and increase the performance of any queries using that join.
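Once it is clean, the change itself is a single ALTER statement. A minimal sketch, assuming the table and column from the example above, and assuming no indexes, constraints, or foreign keys reference the column (those would need to be dropped and recreated around the change):

ALTER TABLE dbo.Lair
ALTER COLUMN SuperHeroId INT NULL; /*state the nullability explicitly so it doesn't change by accident*/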

The song for this post is Beggin’ by Måneskin.

These are the ints that never die, we reseeded negative.

Recently we had a system go down because we ran out of integers. (Mental note: create an alert for when we are getting close to running out of integers.) If we upped the column to a BIGINT, we were going to have to drop all the indexes. The server doesn’t have the capacity for an operation of that size, and the estimate was 8 hours of downtime while we dropped the indexes, changed the column type, and added the indexes back. That was way too long, since it was early in the work day (had it been evening, it would have been fine).
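As a follow-up on that mental note: this is a sketch of the kind of alert query I have in mind, not the one we actually built. It uses sys.identity_columns to flag INT identity columns that have used more than 80 percent of the positive range (the threshold is arbitrary):

SELECT OBJECT_SCHEMA_NAME(object_id) AS SchemaName
     , OBJECT_NAME(object_id) AS TableName
     , name AS ColumnName
     , CAST(last_value AS BIGINT) AS LastIdentityValue
FROM sys.identity_columns
WHERE TYPE_NAME(system_type_id) = 'int' /*only INT identity columns run out at 2147483647*/
AND CAST(last_value AS BIGINT) > 2147483647 * 0.8; /*alert once 80% of the positive range is used*/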

There was also a concern about how many stored procedures were expecting an INT and would need to be modified for a BIGINT, along with any application code. That was a big undertaking, and we were in the middle of an emergency outage.

We talked about adding an additional table to take over but again, we were going to need to adjust a lot of things. During our discussion, I was reviewing this awesome blog post by Ed Pollack.

We decided that the fastest temporary solution was to reseed, starting with the smallest negative INT: -2147483648. Our table was already counting up with an increment of 1, so we picked the smallest number possible. That means we seed at -2147483648, the next value is -2147483647, and we count up toward 0. We will have to watch closely as we approach 0 so we don’t spill over into the positive numbers that have already been used.

This is only a temporary solution because of that spill-over risk. Our real solution is to do the work and change the column to a BIGINT.

Here is a sample of what our code looked like:

DBCC CHECKIDENT ('dbo.MyTable', RESEED, -2147483648);
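The watching part can be as simple as checking IDENT_CURRENT on a schedule; a minimal sketch (the -10000000 buffer is a made-up threshold for illustration, pick one that fits your insert rate):

IF IDENT_CURRENT('dbo.MyTable') > -10000000 /*hypothetical buffer before we hit 0*/
BEGIN
    RAISERROR('dbo.MyTable is approaching 0, time to finish the BIGINT change!', 16, 1);
END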

Because we chose this option, there was no need to drop indexes or fix code, but we are in the process of getting all of that ready for a production release.

Within seconds of reseeding, we were back online and working. It was absolutely a day I will remember.

The song for this post is Avicii’s The Nights.

I Made That Slow Query Run, Run, Run

I have been tuning queries lately, and one of the first things I noticed was correlated subqueries in the SELECT clause.

SELECT (SELECT H1.FormalName
			FROM HideOut H1
			WHERE H1.TypeDesc = 'Villain'
			AND H1.Storyline = S.Storyline) AS VillainHideOut
     , (SELECT H2.FormalName
			FROM HideOut H2
			WHERE H2.TypeDesc = 'Hero'
			AND H2.Storyline = S.Storyline) AS HeroHideOut
     , S.HeroName AS Hero
     , V.VillainName AS Villain
FROM dbo.SuperHero S
	INNER JOIN [dbo].[Villain] V
		ON S.HeroLink = V.HeroLink
WHERE V.IsActive = 1 AND S.IsActive = 1

Why is this query slow? If it returns 50 rows, each subquery in the SELECT clause runs once per row, and since there are two of them, that is 100 query executions. What if I returned 100,000 rows? That would be 200,000 executions. How could I do this differently?

SELECT H1.FormalName AS VillainHideOut
     , H2.FormalName AS HeroHideOut
     , S.HeroName AS Hero
     , V.VillainName AS Villain
FROM dbo.SuperHero S
	INNER JOIN [dbo].[Villain] V
		ON S.HeroLink = V.HeroLink
	LEFT JOIN HideOut H1 
		ON H1.Storyline = S.Storyline AND H1.TypeDesc = 'Villain'
	LEFT JOIN HideOut H2 
		ON H2.Storyline = S.Storyline AND H2.TypeDesc = 'Hero'
WHERE V.IsActive = 1 AND S.IsActive = 1

I moved the two correlated subqueries into JOIN clauses and then simplified them to plain joins against the tables. This means I read each table once for the whole result set, instead of once per row, which drops my query executions significantly. I could have left them as correlated subqueries in the JOIN clause and it still would have performed better than having them in the SELECT clause. (One thing to verify before this rewrite: a subquery in the SELECT clause throws an error if it returns more than one row, while a LEFT JOIN quietly returns a row per match, so this version assumes Storyline and TypeDesc uniquely identify a row in HideOut.)

In the query I was working with, I checked the statistics IO output and saw my logical reads on the HeroHideOut table drop from 24,103,280 logical reads down to 10,064 logical reads by making this one change.
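If you want to see those numbers for your own queries, STATISTICS IO is a session setting you flip on before running the query; the per-table logical reads show up on the Messages tab:

SET STATISTICS IO ON;
/*run the query being tuned here*/
SET STATISTICS IO OFF;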

Happy Tuning!

The song for this post is OneRepublic’s Run.

I can query multiple instances, I am King!

In the past, I have talked about CMS (Central Management Servers), but now I don’t have a CMS configured and I still want to query multiple instances at once. Local Server Groups are my friend.

In SSMS, I start by selecting View>>Registered Servers.

I then right click on “Local Server Groups” and select “New Server Group”.

Next, I right click on the group I just created, in this case “Production”, and select “New Server Registration”. I fill in my server name and the type of Authentication; in this case I am using SQL Server Authentication, so I also enter my login and password. I save my password too, which will help in the future. The Registered Server Name can be different from the real one. In the real world, my server names are weird, so the Registered Server Name is the easy-to-remember name or the nickname I use for the server (all of my servers have nicknames). The description will come up when I hover over the server name once I have it registered.

Then I repeat this process until I have registered all my servers for Production under the Production group.

Now comes the cool part. I right click on my Production Server Group and select “New Query”. Because I saved my password, it connects to all my Production instances in one window. By default, it creates a pink bar at the bottom showing how many instances connected and the name of the Server Group.

Now I can run all my queries at once, and the results will have the instance name prepended to each row. Word of warning: I never leave this connection open. I open it when I need it and close it again so I don’t accidentally run something against all my servers.
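When I do open it, even a trivial query like this one comes back as a single result set with a row per instance, which makes version checks across the environment painless:

SELECT @@SERVERNAME AS InstanceName
     , SERVERPROPERTY('ProductVersion') AS ProductVersion
     , SERVERPROPERTY('Edition') AS Edition;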

The song for this post, King, is by Florence + The Machine.

Hit me with them good vibes, CTfP is set nice. Everything is so fire, little bit of sunshine!

Cost Threshold for Parallelism (CTfP) is one of my favorite server-level settings in SQL Server. I remember the first time I heard this setting mentioned, by Grant Fritchey. I quickly hopped on my servers, found them all set at the default (5), and adjusted them to 50 for the non-SSRS servers and 30 for the SSRS ones. That was many years ago, but I had kept those numbers in my head because I didn’t know a better way.
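For reference, the setting itself is changed with sp_configure; a sketch using the 50 I picked back then (it is an advanced option, so that has to be on first):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;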

Peter Shore gave an awesome presentation on Waits to our user group last week and reminded me of how much this setting can impact tuning. He also pointed us to a fantastic blog by Jonathan Kehayias about how to know the correct setting for your CTfP.

Peter explained that as I ran Jonathan’s awesome query, I would start to see a point in the StatementSubTreeCost column that would help me identify the best CTfP for my environment.

My first thought after looking at this query was, “I am so glad Jonathan wrote it, because with that much XML, I wouldn’t know if it were safe to run without that trust.”

Today, I gave it a go. I kicked off the query and held my breath. Then I started to turn blue and realized this would probably take a minute. It took about 15 minutes, and I was happy I didn’t panic at the wrong disco. It runs in READ UNCOMMITTED, which keeps it from blocking anyone (thank you so much!), and I ran sp_whoisactive over and over to be safe.

This is Jonathan’s query, but I recommend you read his article too because there is so much good information in it.

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; 
WITH XMLNAMESPACES   
(DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')  
SELECT  
     query_plan AS CompleteQueryPlan, 
     n.value('(@StatementText)[1]', 'VARCHAR(4000)') AS StatementText, 
     n.value('(@StatementOptmLevel)[1]', 'VARCHAR(25)') AS StatementOptimizationLevel, 
     n.value('(@StatementSubTreeCost)[1]', 'VARCHAR(128)') AS StatementSubTreeCost, 
     n.query('.') AS ParallelSubTreeXML,  
     ecp.usecounts, 
     ecp.size_in_bytes 
FROM sys.dm_exec_cached_plans AS ecp 
CROSS APPLY sys.dm_exec_query_plan(plan_handle) AS eqp 
CROSS APPLY query_plan.nodes('/ShowPlanXML/BatchSequence/Batch/Statements/StmtSimple') AS qn(n) 
WHERE  n.query('.').exist('//RelOp[@PhysicalOp="Parallelism"]') = 1 

After running it, I got back 43 records. I felt that was low until I remembered that our CTfP is already set at 150, higher than the numbers I had kept in my head. After looking over the results, I felt that 150 was about right for this environment. I didn’t stop there, though.

Jonathan had mentioned how he uses this query to identify what needs to be tuned, and since tuning is my favorite, I started to play with the queries to get them running better.

Huge THANK YOU to the awesome SQL Server Community that is always willing to share and teach! I love being able to find what I need from people that I trust to make my job easier and I couldn’t do it without all of you!

Hugs and please stay safe!

The song for this post is Sunshine by OneRepublic.

I’m going on down to New Orleans and renaming some database things…

This week, I had a co-worker who was stuck. They no longer use SSMS and needed to rename a database. They asked if I had a script, so I wrote one. Here it is:

USE master /*Use the master database when renaming a database*/
GO

DECLARE @SQL VARCHAR(8000)

SELECT @SQL=COALESCE(@SQL,'')+'Kill '+CAST(spid AS VARCHAR(10))+ '; '
FROM sys.sysprocesses
WHERE DBID=DB_ID('DatabaseName')

EXEC(@SQL) /*This will kill all the connections to the database, which will allow it to be renamed*/

ALTER DATABASE DatabaseName /*This is the start of the rename*/
SET SINGLE_USER /*keep everyone out while we rename*/
WITH ROLLBACK IMMEDIATE;
GO
ALTER DATABASE DatabaseName MODIFY NAME = NewDatabaseName /*All the magic has been preparing for this moment, the rename*/
GO
ALTER DATABASE NewDatabaseName /*Make sure to use the new name*/
SET MULTI_USER; /*Back to letting others into the newly named database*/
GO

So useful, I had to save it.

The song for this post is Goin’ Down by the Monkees.

Now a story about the song. The Monkees’ TV Show came back on the air when I was little. I was immediately hooked and LOVED their music. I could relate a lot to Micky Dolenz because he was a prankster like me and my family. Goin’ Down was a song I didn’t pay much attention to because I could hardly understand what Micky was singing.

Recently, I read a story about a time Micky was doing a concert and there were deaf people in the audience. An ASL interpreter had done a wonderful job with all the songs. Just as Micky was getting ready to sing this one, he looked over at the interpreter and said, “Good luck”. After the first few lines, she gave up and just clapped along. He ended up standing next to her while he sang the rest of the song. How awesome is that?

The story made me want to listen to the song more and it has been one of my favorite fast moving songs the last few weeks.

What for do you yearn? Watch that Replication Burn!

Replication is not my favorite; it is kind of far from my favorite. No, further than that. A little further.

When it breaks, it can cause havoc, and it always seems to break at the worst time. Recently we noticed that our log file was massive (like 3 times the size of the database), and that was making many of the other processes painful. We didn’t know how long the log had gone without clearing, so we got to burn it all (kind of).

The first thing I did was tell replication that we were done with all the transactions that had been committed.

EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1;

CHECKPOINT;

Had I known when replication had broken, I could have narrowed it down and put more specific details in this command to clear out only what I needed to remove. Since I didn’t know, I told it to be done with all of it. The CHECKPOINT afterward is to encourage the log to clear out. At this point I took a log backup (I just kicked off my agent job).
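If you are not running the log backup through an agent job, it is one statement; a sketch with a hypothetical database name and path:

BACKUP LOG MyDatabaseName
TO DISK = N'X:\SQLBackups\MyDatabaseName_log.trn'; /*log backups are what let the inactive portion of the log be reused*/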

“Burn, Replication, Burn!”

Then I checked to see how my log was looking and whether there were virtual log files (VLFs) that could be reused.

DBCC LOGINFO('MyDatabaseName')

I was looking for status 0, meaning the VLF can be reused or overwritten. There were still a lot that hadn’t cleared out, so I repeated the process a few times, and finally it was awesome! I had to do some cleanup on my transaction log afterward, but it was soooo much more usable.
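A related check that points straight at the culprit: log_reuse_wait_desc in sys.databases reports REPLICATION while replication is what is keeping the log from clearing.

SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'MyDatabaseName';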

We also had to re-initialize replication, but it was totally worth it.

The song for this post is Burn Butcher Burn by Joseph Trapanese and Joey Batey.

Days pass by and my logs apply and I think that I’m ok until I find myself with data that is fading away…

Oh my goodness, are we back on the log-shipping journey? We are! After the last post, I found a bunch of things that I wanted to fix and one of them was alerting. How do I know if my logs are applying? How do I know if something bad is happening?

I don’t want emails telling me everything is ok, only emails when things are bad, and I want them to be helpful. Not only did I want an alert, I wanted an email with actual information that I can use to make decisions. Decisions like: can I just apply a few logs to get caught up, or did everything burn down and I need to pull a full backup plus all the logs to be up and running again?

This was a task for some super fancy alerts on my agent job.

First, I need to come up with the query of information I wanted to display in my email:

SELECT Max(CreatedDate) AS LastRestore, Origin AS [Database]
FROM RestoreFile
WHERE IsApplied = 1
GROUP BY Origin
HAVING DATEDIFF(hh,Max(CreatedDate),GETDATE()) > 3

This query gives me the time of the last restored file and which database it was restored to. It helps me identify which database is having problems and how much work it will take to fix it. So, cool query, but how do I get it to email me? This next part saves me so much time when I am digging into my agent job issues.

DECLARE @xml NVARCHAR(MAX)
DECLARE @body NVARCHAR(MAX)

SET @xml = CAST(( SELECT [LastRestore] AS 'td','',[Database] AS 'td'
FROM (SELECT Max(CreatedDate) AS LastRestore, Origin AS [Database]
FROM RestoreFile
WHERE IsApplied = 1
GROUP BY Origin
HAVING DATEDIFF(hh,Max(CreatedDate),GETDATE()) > 3) A 
FOR XML PATH('tr'), ELEMENTS ) AS NVARCHAR(MAX))

SET @body ='<html><body><H3>Last Applied Log</H3>
<table border = 1> 
<tr>
<th> Last Restore </th> <th> Database </th></tr>'    

SET @body = @body + @xml +'</table></body></html>'

IF (SELECT TOP 1 COUNT (DISTINCT(Origin))
FROM RestoreFile
WHERE IsApplied = 1 
GROUP BY Origin
HAVING DATEDIFF(hh,Max(CreatedDate),GETDATE()) > 3) > 0
BEGIN
EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'Default', -- replace with your SQL Database Mail Profile 
@body = @body,
@body_format ='HTML',
@recipients = 'andrea@WayneManor.com', -- replace with your email address
@subject = 'Transaction Logs Are Old' ;
END

A few things are happening here. First, I declare my @xml and email @body variables. Then I create the query that will return the formatted table for my email and shove the results into XML. Next, I start building the body of the email with a title for my table and column names (using HTML). Then I combine it with the XML.

The “IF” statement lets me send the email only when the query meets my conditions. This query is similar to the one I created above, but it doesn’t have to be. The count has to be greater than 0 for the email to send; otherwise it skips sending.

Finally, I get to the part that calls sp_send_dbmail. This lets me set the profile for the sender, add in my XML/HTML-built body, and tell the proc to use that HTML formatting. I add my email recipients and a subject for the email.

I started testing it and it was awesome…until one day I got an error from the process that reads my databases, because one of them wasn’t in Standby. I decided to add another step that also checks that all the databases are in Standby. It is almost exactly the same, with the exception of the query, which looks like this:

SELECT name as [Database]
FROM sys.databases
WHERE is_read_only <> 1 -- is_read_only equals 1 when the database is in standby
AND database_id > 5 -- database_id 5 is the database controlling the log shipping; lower IDs are system databases

I also have these emails forwarded to me as a text so I know right away that something is wrong. It is awesome and makes me worry so much less.

The song for this post is Imagine Dragons’ Wrecked.

Oh my my, yeah I’m loving extra tuning time. ‘Cause I’m a sucker for Auto Tuning life.

A few weeks ago, we were talking to a new employee about how much time we spend with Query Store and they asked, “Why aren’t you using the Auto Tuning?”

THE WHAT NOW?!!!!

This awesome, Enterprise-only feature has been a bit of trial and error for me.

Let’s start with turning it on, the only place I have found to turn it on is by using T-SQL:

ALTER DATABASE <DatabaseThatNeedsTuning> SET AUTOMATIC_TUNING ( FORCE_LAST_GOOD_PLAN = ON ); 
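To confirm it took effect, sys.database_automatic_tuning_options shows the desired and actual state for the current database:

SELECT name, desired_state_desc, actual_state_desc, reason_desc
FROM sys.database_automatic_tuning_options;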

Now, there are a couple of cool things that we can check behind the scenes to see what is driving the auto-tuning. Once Auto-Tuning is enabled, it will collect information that can be viewed by running this query:

SELECT *
FROM sys.dm_db_tuning_recommendations

The first column (name) is the query_id with “PR_” added to the front. I like to read through the columns: the “reason” the plan was chosen, the current “state” of the plan, when it was initiated, and when it was reverted. All of this is fun for me to dig through to see what better plans my system is finding.

I also really like the Microsoft example with the JSON:

SELECT reason, score,
      script = JSON_VALUE(details, '$.implementationDetails.script'),
      planForceDetails.*,
      estimated_gain = (regressedPlanExecutionCount + recommendedPlanExecutionCount)
                  * (regressedPlanCpuTimeAverage - recommendedPlanCpuTimeAverage)/1000000,
      error_prone = IIF(regressedPlanErrorCount > recommendedPlanErrorCount, 'YES','NO')
FROM sys.dm_db_tuning_recommendations
CROSS APPLY OPENJSON (Details, '$.planForceDetails')
    WITH (  [query_id] int '$.queryId',
            regressedPlanId int '$.regressedPlanId',
            recommendedPlanId int '$.recommendedPlanId',
            regressedPlanErrorCount int,
            recommendedPlanErrorCount int,
            regressedPlanExecutionCount int,
            regressedPlanCpuTimeAverage float,
            recommendedPlanExecutionCount int,
            recommendedPlanCpuTimeAverage float
          ) AS planForceDetails;

Now to the other stuff: it isn’t perfect. Sometimes I have to manually go in and pin plans that are better than what the system is finding. If I manually pin a plan, it will honor it and not unpin it or try to find a better plan for that query. It has helped me spend much less time on tuning, but since many of my servers are on Standard Edition, I am still using Query Store a lot.
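When I do pin one myself, it is the regular Query Store force-plan call; the IDs below are hypothetical and come from sys.query_store_plan:

EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 17;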

Happy Tuning!

The song for this post is Oh My My by Blue October.

And it was never a question, Query Store was crowing for repair. You gave it space and direction but you couldn’t keep it there…

Yes! Back to Query Store! I have had this problem for months where the Query Store on one of my databases grows by a gig each week! It completely fills up, goes into a read-only state (which sets off an alarm that I built to tell me when it switches to read-only), and the only way I could get it working again was to add space. I would add a gig and think, “Surely that will be enough to feed the hunger”. The next week, the alarm would go off again and I would feed it again!

I adjusted how often stats were collected, how frequently data was flushed, the max plans per query and anything else I could think to do, and still, it was hungry.

I had searched, read, and googled, and kept coming up with nothing. I finally found something about corruption in the Query Store. CORRUPTION? Could it be possible? It was worth a try; my Query Store was in need of a serious diet, and I still needed it to function.

The next time it went into read-only mode, I turned Query Store off (it has to be off to fix the corruption) and ran this:

EXEC sp_query_store_consistency_check;
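(For reference, the off and on around that are plain ALTER DATABASE calls; a sketch assuming a database named MyDatabase:)

ALTER DATABASE MyDatabase SET QUERY_STORE = OFF; /*has to be off while the corruption is fixed*/
/*run sp_query_store_consistency_check here, in the context of MyDatabase*/
ALTER DATABASE MyDatabase SET QUERY_STORE = ON;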

Guess what happened next?!!!! My Query Store had a full gig free! I left it alone for a few weeks, and today I was able to shrink it by 5 GB! It has been glorious to have it working and to not be worried about why it was growing out of control.

The song for this post is Toad the Wet Sprocket’s Crowing.

Now a personal note about Toad the Wet Sprocket. They are one of my favorite bands. Last night, as I was listening to “Crowing”, I looked up the lyrics to figure out one of the words and realized I had been singing along to the wrong words. I thought it was “crowing for her” when it is actually “crowing for repair”. That completely changed the meaning of the song for me and made me love it even more. It also made me realize I need to read Toad lyrics more often.

This also took my mind back to the time Ryan surprised me with tickets to see them at a reunion show in Vegas. After the show, fortune shone on me and I got to meet Glen Phillips, the lead singer. He was super kind and gracious and let us take a picture, and right after, I fan-girled out a lot and started crying while trying to tell him how much I appreciated his and the band’s music. Huge apology to all the people I have completely scared with a fan-girl episode. I promise I try not to; it’s just that sometimes I can’t put into words how important that moment is to me.