Days pass by and my logs apply and I think that I’m ok until I find myself with data that is fading away…

Oh my goodness, are we back on the log-shipping journey? We are! After the last post, I found a bunch of things that I wanted to fix and one of them was alerting. How do I know if my logs are applying? How do I know if something bad is happening?

I don’t want emails telling me everything is ok; I only want them when things are bad, and I want them to be helpful. Not only did I want an alert, I wanted an email with actual information I can use to make decisions. Decisions like: can I just apply a few logs to get caught up, or did everything burn down and I need to pull a full backup plus all the logs to be up and running again?

This was a task for some super fancy alerts on my agent job.

First, I needed to come up with a query for the information I wanted to display in my email:

SELECT Max(CreatedDate) AS LastRestore, Origin AS [Database]
FROM RestoreFile
WHERE IsApplied = 1
GROUP BY Origin
HAVING DATEDIFF(hh,Max(CreatedDate),GETDATE()) > 3

This query gives me the time of the last restored file and which database it was restored for, but only for databases where the most recent restore is more than three hours old. It helps me identify which database is having problems and how much work it will take to fix it. So, cool query, but how do I get it to email me? This next part saves me so much time when digging into my agent job issues.

DECLARE @xml NVARCHAR(MAX)
DECLARE @body NVARCHAR(MAX)

SET @xml = CAST(( SELECT [LastRestore] AS 'td', '', [Database] AS 'td'
                  FROM (SELECT MAX(CreatedDate) AS LastRestore, Origin AS [Database]
                        FROM RestoreFile
                        WHERE IsApplied = 1
                        GROUP BY Origin
                        HAVING DATEDIFF(hh, MAX(CreatedDate), GETDATE()) > 3) A
                  FOR XML PATH('tr'), ELEMENTS ) AS NVARCHAR(MAX))

SET @body ='<html><body><H3>Last Applied Log</H3>
<table border = 1> 
<tr>
<th> Last Restore </th> <th> Database </th></tr>'    

SET @body = @body + @xml +'</table></body></html>'

IF (SELECT TOP 1 COUNT (DISTINCT(Origin))
FROM RestoreFile
WHERE IsApplied = 1 
GROUP BY Origin
HAVING DATEDIFF(hh,Max(CreatedDate),GETDATE()) > 3) > 0
BEGIN
EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'Default', -- replace with your SQL Database Mail Profile 
@body = @body,
@body_format ='HTML',
@recipients = 'andrea@WayneManor.com', -- replace with your email address
@subject = 'Transaction Logs Are Old' ;
END

A few things are happening here. First, I declare my @xml and my email @body variables. Then I create the query that returns my formatted table and shove the results into XML. Next, I start building the body of the email with a title for my table and column names (using HTML). Then I combine it with the XML.

The "IF" statement allows me to send the email only if the conditions in the query are met. This query is similar to the one I created above, but it doesn't have to be. The count has to be greater than 0 to send the email; otherwise it will skip sending.

Finally, I get to the part that calls sp_send_dbmail. This allows me to set the profile for the sender, add in my XML/HTML-built body, and tell the proc to use HTML formatting. I add my email recipients and a subject for the email.

I started testing it and it was awesome…until one day I got an error from the process that reads my databases, because one of them wasn't in Standby. I decided to add another step that also checks that all the databases are in Standby. It is almost exactly the same, except for the query, which looks like this:

SELECT name as [Database]
FROM sys.databases
WHERE is_read_only <> 1 -- is_read_only should equal 1 if the database is in standby.
AND database_id > 5 -- Database_id 5 is my log-shipping control database; anything lower is a system database.
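
The email part of that step follows the same pattern as above, so a rough sketch of it looks something like this (same placeholder mail profile and recipient):

IF (SELECT COUNT(*)
    FROM sys.databases
    WHERE is_read_only <> 1 -- Standby databases should be read only.
    AND database_id > 5) > 0 -- Skip system databases and my log-shipping control database.
BEGIN
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'Default', -- replace with your SQL Database Mail Profile
        @recipients = 'andrea@WayneManor.com', -- replace with your email address
        @subject = 'A Database Is Not In Standby',
        @body = 'At least one log-shipped database is not in Standby. Check the restore job.';
END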

I also have these emails forwarded to me as a text so I know right away that something is wrong. It is awesome and makes me worry so much less.

The song for this post is Imagine Dragons’ Wrecked.

Oh my my, yeah I’m loving extra tuning time. ‘Cause I’m a sucker for Auto Tuning life.

A few weeks ago, we were talking to a new employee about how much time we spend with Query Store and they asked, “Why aren’t you using the Auto Tuning?”

THE WHAT NOW?!!!!

This awesome, Enterprise-only feature has been a bit of trial and error for me.

Let’s start with turning it on. The only way I have found to turn it on is with T-SQL:

ALTER DATABASE <DatabaseThatNeedsTuning> SET AUTOMATIC_TUNING ( FORCE_LAST_GOOD_PLAN = ON ); 
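
A quick way to confirm it took effect is to query sys.database_automatic_tuning_options from the database you just altered:

SELECT name, desired_state_desc, actual_state_desc, reason_desc
FROM sys.database_automatic_tuning_options;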

Now, there are a couple of cool things that we can check behind the scenes to see what is driving the auto-tuning. Once Auto-Tuning is enabled, it will collect information that can be viewed by running this query:

SELECT *
FROM sys.dm_db_tuning_recommendations

The first column (name) is the QueryID with "PR_" added to it. I like to read through the columns: the "reason" the plan was chosen, the current "state" of the plan, when it was initiated, and when it was reverted. All of this is fun for me to dig through to see which better plans my system is finding.

I also really like the Microsoft example with the JSON:

SELECT reason, score,
      script = JSON_VALUE(details, '$.implementationDetails.script'),
      planForceDetails.*,
      estimated_gain = (regressedPlanExecutionCount + recommendedPlanExecutionCount)
                  * (regressedPlanCpuTimeAverage - recommendedPlanCpuTimeAverage)/1000000,
      error_prone = IIF(regressedPlanErrorCount > recommendedPlanErrorCount, 'YES','NO')
FROM sys.dm_db_tuning_recommendations
CROSS APPLY OPENJSON (Details, '$.planForceDetails')
    WITH (  [query_id] int '$.queryId',
            regressedPlanId int '$.regressedPlanId',
            recommendedPlanId int '$.recommendedPlanId',
            regressedPlanErrorCount int,
            recommendedPlanErrorCount int,
            regressedPlanExecutionCount int,
            regressedPlanCpuTimeAverage float,
            recommendedPlanExecutionCount int,
            recommendedPlanCpuTimeAverage float
          ) AS planForceDetails;

Now to the other stuff. It isn’t perfect. Sometimes I have to manually go in and pin plans that are better than what the system is finding. If I manually pin a plan, it will honor it and not unpin or try to find a better plan for that query. It has helped me spend a bunch less time on tuning, but since many of my servers are on Standard Edition I am still using Query Store a lot.
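
When I do pin a plan manually, it is Query Store's plan forcing doing the work. A minimal sketch, with made-up IDs that you would look up in the Query Store reports or catalog views first:

-- Force the plan I like for a given query (42 and 417 are placeholder IDs).
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 417;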

Happy Tuning!

The song for this post is Oh My My by Blue October

And it was never a question, Query Store was crowing for repair. You gave it space and direction but you couldn’t keep it there…

Yes! Back to Query Store! I have had this problem for months where the Query Store on one of my databases grows by a gig each week! It completely fills up, goes into a read-only state (which sets off an alarm I built to tell me when it switches to read only), and the only way I could get it working again was to add space. I would add a gig and think, "Surely that will be enough to feed the hunger". The next week, the alarm would go off again and I would feed it again!

I adjusted how often stats were collected, how frequently data was flushed, the max plans per query and anything else I could think to do, and still, it was hungry.

I had searched, read, googled, and kept coming up with nothing. I finally found something on corruption in the Query Store. CORRUPTION? Could it be possible? It was worth a try; my Query Store was in need of a serious diet, and I still needed it to function.

The next time it went into read-only mode, I turned Query Store off (it has to be off to fix the corruption) and ran this:

sp_query_store_consistency_check
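
Putting the whole dance together, this is roughly what the fix looked like; a sketch run in the context of the affected database (YourDatabase is just a placeholder name):

-- Turn Query Store off so the consistency check can do its work.
ALTER DATABASE [YourDatabase] SET QUERY_STORE = OFF;

-- Run the (undocumented) consistency check while connected to the affected database.
EXEC sp_query_store_consistency_check;

-- Turn Query Store back on and put it in read/write mode.
ALTER DATABASE [YourDatabase] SET QUERY_STORE = ON;
ALTER DATABASE [YourDatabase] SET QUERY_STORE (OPERATION_MODE = READ_WRITE);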

Guess what happened next?!!!! My Query Store had a full gig free! I have left it alone for a few weeks, and today I was able to shrink it by 5 GB! It has been glorious to have it working without worrying about why it was growing out of control.

The song for this post is Toad the Wet Sprocket’s Crowing

Now a personal note about Toad the Wet Sprocket. They are one of my favorite bands. Last night as I was listening to "Crowing", I looked up the lyrics to figure out one of the words and realized I had been singing along to the wrong words. I thought it was "crowing for her" when it is actually "crowing for repair". That completely changed the meaning of the song for me and made me love that song even more. It also made me realize I need to read Toad lyrics more often.

This also took my mind to the time that Ryan surprised me with tickets to go see them at a reunion show in Vegas. After the show, fortune shone on me and I got to meet Glen Phillips, the lead singer. He was super kind and gracious and let us take a picture, and right after, I fan-girled out a lot and started crying while trying to tell him how much I appreciated his and the band's music. Huge apology to all the people I have completely scared with a fan-girl episode. I promise I try not to; just sometimes I can't put into words how important that moment is to me.

Give me a minute to restore my logs…

This is part 4 of my log-shipping journey. If you missed part 1, you can find it here; part 2 is here, and part 3 is here.

We have one more setup piece to finish before we do our restores. We need to tell my scripts which logfile is going to be the final one for now, so it can be restored with Standby and I can read my databases.

UPDATE RestoreFile
SET IsFinalLogFile = 1
FROM RestoreFile RF1
INNER JOIN (SELECT RF.Origin, MAX(RF.CreatedDate) AS CreatedDate
            FROM RestoreFile RF
            INNER JOIN (SELECT ForDatabase, MAX(CreatedDate) AS CreatedDate
                        FROM [dbo].[RestoreFile] RF
                        WHERE RF.FileType = 'B'
                        GROUP BY ForDatabase) BF ON RF.ForDatabase = BF.ForDatabase AND RF.CreatedDate > BF.CreatedDate
            WHERE FileType = 'L'
            GROUP BY RF.Origin) RF2 ON RF1.Origin = RF2.Origin AND RF1.CreatedDate = RF2.CreatedDate

There are a few things going on here. First, I need to make sure that my logfile is more recent than my most recently restored full backup. Because I am restoring more than one database with this script, I have to join on both ForDatabase and CreatedDate (making sure it is greater than, or in the logfile case equal to, the date and time I need). When I find the last logfile that is newer than the full backup's created date, I mark it with IsFinalLogFile = 1 (true).

Now we are ready to build our full database restores.

DECLARE @SQLToExecute nvarchar(max), @RestoreFileId int
DECLARE RestoreBackups CURSOR FAST_FORWARD
FOR
SELECT 
'RESTORE DATABASE ' + RF.ForDatabase + ' FROM DISK = N''' + RF.FileLocation + ''' WITH FILE =  1, MOVE N''' + MF.name + ''' TO N''D:\SQLData\' + RF.ForDatabase + '.mdf'', MOVE N''' + MF.name + '_log'' TO N''L:\SQLLogs\'+ RF.ForDatabase + '_log.ldf'', NORECOVERY,  NOUNLOAD,  REPLACE,  STATS = 5'
, Id

  FROM [dbo].[RestoreFile] RF
	INNER JOIN sys.databases d ON RF.ForDatabase = d.name
	INNER JOIN sys.master_files mf on d.database_id = mf.database_id and mf.type = 0
  WHERE RF.FileType = 'B' AND RF.IsApplied = 0
OPEN RestoreBackups
FETCH NEXT FROM RestoreBackups INTO @SQLToExecute, @RestoreFileId
WHILE (@@FETCH_STATUS = 0)
BEGIN
    EXEC sp_executeSQL @SQLToExecute
	
	UPDATE [dbo].[RestoreFile]  SET IsApplied = 1 WHERE Id = @RestoreFileId

    FETCH NEXT FROM RestoreBackups INTO @SQLToExecute, @RestoreFileId
END
CLOSE RestoreBackups
DEALLOCATE RestoreBackups
GO

I am using a cursor to run each restore one at a time. I make sure I am only picking up FileType = 'B' records so I know they are full backups, and only ones that haven't been applied yet. I build my restore string to include a MOVE, since the file locations on the new server may be different from what was used in the past. I update the applied status as I do each restore so I won't apply the same one twice; for this I use the RestoreFileId, which lets me get the specific record I want. This is the main reason I wanted a new table. If I had stayed with one table, there is a chance I would have had duplicate filenames, which would have meant duplicate chances to restore the same file. It made everything so much cleaner to have one stage table and one production table.

For the logfile restore, I followed a similar pattern:

DECLARE @SQLToExecute nvarchar(max), @RestoreFileId int
DECLARE RestoreBackups CURSOR FAST_FORWARD
FOR
SELECT 
'RESTORE LOG ' + RF.ForDatabase + ' FROM DISK = N''' + RF.FileLocation + ''' WITH NORECOVERY;'
, Id

  FROM [dbo].[RestoreFile] RF
	INNER JOIN sys.databases d ON RF.ForDatabase = d.name
	INNER JOIN sys.master_files mf on d.database_id = mf.database_id and mf.type = 0
	INNER JOIN	(SELECT ForDatabase, MAX(CreatedDate) AS CreatedDate
					FROM [dbo].[RestoreFile] RF
					WHERE RF.FileType = 'B' 
					GROUP BY ForDatabase) BF ON RF.ForDatabase = BF.ForDatabase AND RF.CreatedDate > BF.CreatedDate
  WHERE RF.FileType = 'L' AND RF.IsApplied = 0 AND RF.IsFinalLogFile = 0
  ORDER BY RF.ForDatabase, RF.CreatedDate
OPEN RestoreBackups
FETCH NEXT FROM RestoreBackups INTO @SQLToExecute, @RestoreFileId
WHILE (@@FETCH_STATUS = 0)
BEGIN

BEGIN TRY
    EXEC sp_executeSQL @SQLToExecute

		UPDATE [dbo].[RestoreFile]  SET IsApplied = 1 WHERE Id = @RestoreFileId
END TRY
BEGIN CATCH
    --SELECT   
        --ERROR_NUMBER() AS ErrorNumber,ERROR_MESSAGE() AS ErrorMessage
		UPDATE [dbo].[RestoreFile]  SET IsApplied = 0 WHERE Id = @RestoreFileId
END CATCH 

    FETCH NEXT FROM RestoreBackups INTO @SQLToExecute, @RestoreFileId
END
CLOSE RestoreBackups
DEALLOCATE RestoreBackups
GO

The pattern here is almost the same. I build my simple restore script, but only for FileType = 'L' (logfile) records, only if they haven't been applied, and only if they aren't the final logfile. I also check that the full backup has been restored and is earlier than the logfile. Then I execute the string of SQL that I built and update the record to applied. I ended up having issues with IsApplied being marked true even when the restore didn't apply correctly, so I added error handling to change it back when that happens. I may go back and add this to the full backup restore script as well.
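
If I do add it, the change is just wrapping the execute and the update in the same TRY/CATCH, something like this sketch of the loop body:

BEGIN TRY
    EXEC sp_executeSQL @SQLToExecute

    UPDATE [dbo].[RestoreFile] SET IsApplied = 1 WHERE Id = @RestoreFileId
END TRY
BEGIN CATCH
    -- If the restore fails, leave the record unapplied so it gets picked up again.
    UPDATE [dbo].[RestoreFile] SET IsApplied = 0 WHERE Id = @RestoreFileId
END CATCH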

Next, I run the script that will do my final logfile restore for each database:

DECLARE @SQLToExecute nvarchar(max), @RestoreFileId int
DECLARE RestoreBackups CURSOR FAST_FORWARD
FOR
SELECT 
'RESTORE LOG ' + RF.ForDatabase + ' FROM DISK = N''' + RF.FileLocation + ''' WITH STANDBY=''Z:\Standby\' + RF.ForDatabase + '_Standby.bak'';'
, Id

  FROM [dbo].[RestoreFile] RF
	INNER JOIN sys.databases d ON RF.ForDatabase = d.name
	INNER JOIN sys.master_files mf on d.database_id = mf.database_id and mf.type = 0
	INNER JOIN	(SELECT ForDatabase, MAX(CreatedDate) AS CreatedDate
					FROM [dbo].[RestoreFile] RF
					WHERE RF.FileType = 'B' 
					GROUP BY ForDatabase) BF ON RF.ForDatabase = BF.ForDatabase AND RF.CreatedDate > BF.CreatedDate
  WHERE RF.FileType = 'L' AND RF.IsApplied = 0 AND RF.IsFinalLogFile = 1
  ORDER BY RF.ForDatabase, RF.CreatedDate
OPEN RestoreBackups
FETCH NEXT FROM RestoreBackups INTO @SQLToExecute, @RestoreFileId
WHILE (@@FETCH_STATUS = 0)
BEGIN

BEGIN TRY
    EXEC sp_executeSQL @SQLToExecute

		UPDATE [dbo].[RestoreFile]  SET IsApplied = 1 WHERE Id = @RestoreFileId
END TRY
BEGIN CATCH
    --SELECT   
        --ERROR_NUMBER() AS ErrorNumber,ERROR_MESSAGE() AS ErrorMessage
		UPDATE [dbo].[RestoreFile]  SET IsApplied = 0 WHERE Id = @RestoreFileId
END CATCH 

    FETCH NEXT FROM RestoreBackups INTO @SQLToExecute, @RestoreFileId
END
CLOSE RestoreBackups
DEALLOCATE RestoreBackups
GO

The only differences between these last two are the "WITH STANDBY" instead of "WITH NORECOVERY" and the filter on IsFinalLogFile being set to true.

Last, but not least, I am cleaning old records out of my RestoreFile table so it doesn’t get large. I debated how much data to keep and for me, it makes sense to delete anything older than two days. I am doing a full restore daily and if I change that to weekly, I will keep this data longer.

  DELETE
  FROM [Maintenance].[dbo].[RestoreFile]
  WHERE CreatedDate < GETDATE()-2

This process allows me to stay flexible. If I don’t get a new full backup, this process will keep restoring logfiles until a new full backup file appears. If the backup chain is broken, this process will also break, but it will work as long as that chain is healthy and it gets all the files it needs.

Whew! We did it! That is my full poor girl log shipping process.

The song for this post is George Ezra’s Hold My Girl.

When I add a table new, that missing piece is found…

This is part 3 of my log-shipping journey. If you missed part 1, you can find it here, and if you missed part 2, you can find it here.

As I started working with my log-shipping job, I realized that I needed a second table: one to stage all my PowerShell-loaded filenames and one that I could use to build the restores. I also wanted to be able to truncate the Logshipping table to empty it before loading the filenames, so I won't miss any, without redoing work in my main table that had already been done. At the beginning of my SQL Agent job, I have a step that truncates the Logshipping table (there is a tiny sketch of that step below). Then I load the filenames with the PowerShell script, update the created dates, and load it all into my new RestoreFile table.
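
The truncate step itself is just a one-line T-SQL job step along these lines (using the LogshippingFile table from part 1):

TRUNCATE TABLE [dbo].[LogshippingFile];

And here is the new RestoreFile table: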

CREATE TABLE [dbo].[RestoreFile](
	[Id] [int] IDENTITY(1,1) NOT NULL,
	[FileName] [varchar](500) NULL,
	[FileLocation] [varchar](1000) NULL,
	[FileType] [char](1) NULL,
	[Origin] [varchar](50) NULL,
	[ForDatabase] [varchar](50) NULL,
	[CreatedDate] [datetime2](7) NULL,
	[IsApplied] [bit] NOT NULL,
	[IsFinalLogFile] [bit] NOT NULL,
 CONSTRAINT [PK_RestoreFile] PRIMARY KEY CLUSTERED 
(
	[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
GO

ALTER TABLE [dbo].[RestoreFile] ADD  CONSTRAINT [DF_RestoreFile_IsApplied]  DEFAULT ((0)) FOR [IsApplied]
GO

ALTER TABLE [dbo].[RestoreFile] ADD  CONSTRAINT [DF_RestoreFile_IsFinalLogFile]  DEFAULT ((0)) FOR [IsFinalLogFile]
GO

I removed the IsApplied column from the Logshipping table because I didn’t need it. The IsFinalLogFile becomes super important when we go to bring the database into Standby mode.

Next, I need to move the data from Logshipping into RestoreFile:

INSERT INTO RestoreFile(FileName, FileLocation, FileType, Origin, ForDatabase, CreatedDate)
SELECT A.FileName, A.FileLocation, A.FileType, A.Origin, A.ForDatabase, A.CreatedDate	
FROM [dbo].[LogshippingFile] A
LEFT JOIN [dbo].[RestoreFile] B ON A.FileName = B.FileName 
WHERE B.FileName IS NULL

I am comparing the filenames so that I don’t get duplicates. Thanks again to the date stamping in the names, I won’t get the same name twice, but I run this load frequently during the day and don’t want to bloat my table.

Now, we have all the pieces in place to do our restores! Get ready for some fun!

The song for this post is Vance Joy’s Missing Piece

Starting now, is the wrong date for insert…

This is part 2 of my log-shipping journey. If you missed part 1, you can find it here.

I collected all of the file names I need, but you will notice I left the dates off. When my files are moved from one domain to another, their created dates are changed. I needed the real dates for the correct restore order and to match the backups to the logs. If I were a Dark Knight PowerShell master, I am sure I would have figured out how to do it. Every time I started to get close, I would have a production issue or another distraction that needed my time. In the end, I landed in my happy place, so we are fixing the dates in the database!

How do I get the right date for a file when the created date is being changed after it has moved? I was super lucky that the date is being stored in the filename too! (Huge thank you to Ola for his awesome database maintenance solution.)

An example of my filename is this:

Batcentral$Alfred_Batman_LOG_20210610_224501.trn

This is how I dissect the filename to get the date and time from it:

UPDATE [DBAStuff].[dbo].[LogshippingFile]
SET CreatedDate = CAST(Substring(FileName, (LEN(FileName)-18), 8) + ' '
                     + Substring(FileName, (LEN(FileName)-9), 2) + ':'
                     + Substring(FileName, (LEN(FileName)-7), 2) + ':'
                     + Substring(FileName, (LEN(FileName)-5), 2) AS DATETIME)
WHERE CreatedDate IS NULL

My filenames are different lengths, which means the dates won't always start at the same position. Instead, I go to the end of the string and count backwards, because from the end the date parts are always in the same place. Then I add all the parts back together to get my datetime and update it into my table.
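
To sanity-check the math, here is a quick scratch query using the sample filename from above; the positions land right on the date and time pieces:

DECLARE @FileName varchar(500) = 'Batcentral$Alfred_Batman_LOG_20210610_224501.trn'

SELECT CAST(Substring(@FileName, (LEN(@FileName)-18), 8) + ' '
          + Substring(@FileName, (LEN(@FileName)-9), 2) + ':'
          + Substring(@FileName, (LEN(@FileName)-7), 2) + ':'
          + Substring(@FileName, (LEN(@FileName)-5), 2) AS DATETIME) AS CreatedDate
-- Returns 2021-06-10 22:45:01.000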

Are we done yet? Nope, there is more.

The song for this post is Toad the Wet Sprocket’s Starting Now

Load quick, got a list of files that need to be in a table…

Yes, I am trying to do a version of log-shipping that doesn’t require linking any servers and allows them to be in different domains. I also know I am crazy.

Now that we have that out of the way, let me show you some of the stuff I am doing! Loading a table from multiple file locations using PowerShell. It will read what is in the directories and load the filenames into a table for me. (Oh my goodness, I knew PowerShell was cool, but it took me so long to figure this out that I was super proud of how fast it loaded.)

First, the housekeeping. Create two tables in a DBA Database:

CREATE TABLE [dbo].[LogshippingFile](
	[FileName] [varchar](500) NULL,
	[FileLocation] [varchar](1000) NULL,
	[FileType] [char](1) NULL,
	[Origin] [varchar](50) NULL,
	[ForDatabase] [varchar](50) NULL,
	[CreatedDate] [datetime2](7) NULL,
	[Applied] [bit] NOT NULL
) ON [PRIMARY]
GO

ALTER TABLE [dbo].[LogshippingFile] ADD  CONSTRAINT [DF_LogshippingFiles_Applied]  DEFAULT ((0)) FOR [Applied]
CREATE TABLE [dbo].[Config](
	[Origin] [varchar](50) NULL,
	[FileLocation] [varchar](500) NULL,
	[Type] [char](1) NULL
) ON [PRIMARY]

In the Config table, I am going to put the common name for the database I am restoring so the rows will look like this:

Origin     FileLocation               Type
Batman     J:\FullBackups\Batman      B
Batman     J:\LogBackups\Batman       L
Superman   J:\FullBackups\Superman    B
Superman   J:\LogBackups\Superman     L
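
If it helps, this is what loading those rows looks like; the paths and names are the example values from the table above, so adjust them for your own folders:

INSERT INTO [dbo].[Config] (Origin, FileLocation, Type)
VALUES ('Batman',   'J:\FullBackups\Batman',   'B'),
       ('Batman',   'J:\LogBackups\Batman',    'L'),
       ('Superman', 'J:\FullBackups\Superman', 'B'),
       ('Superman', 'J:\LogBackups\Superman',  'L');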

Yay! Now we can get to the fun stuff. This is how I set up the PowerShell:



# Credentials for connection to SQL Server. 
$username = 'Batgirl'
$password = '!Pennyworth54'
#Connecting to the SQL Server and Query to get FileLocation and Origin from the Config table. 
$DS = Invoke-Sqlcmd -ServerInstance "BatCave01" -Database "DBAStuff" -Username $username -Password $password -Query "SELECT FileLocation, Origin FROM Config" -As DataSet

#Magic that puts my two columns into variables
$data = $DS.Tables[0].Rows
$data
ForEach ($Row in $data) {
$FileLocation = $Row.FileLocation 
$Origin = $Row.Origin



#Build my insert statement so Powershell knows what to expect. 
#gets all the filenames and loads them into the table. 
$sqlstatement=@'
INSERT INTO LogshippingFile
(
FileName,	
FileLocation, 
FileType,
Origin,
ForDatabase 
) 
VALUES (
	'{0}',
	'{1}',
	'{2}',
	'{3}',
	'{4}'

   
)
'@
Get-ChildItem -Recurse $FileLocation  | 
select Name,FullName, 
    @{
    Label='FileType'
    Expression={($FileLocation.Substring(10,1))}}, @{
    Label='Origin'
    Expression={($Origin)}}, @{
    Label='ForDatabase'
    Expression={'Gryphon_'+($Origin)}} |
   # @{Label="Length";Expression={$_.Length / 1024KB -as [int] }}  |
ForEach-Object {
          $SQL = $sqlstatement -f  $_.name,$_.FullName, $_.FileType, $_.Origin, $_.ForDatabase #,  $_.Length		
    Invoke-Sqlcmd -Username $username -Password $password -Query $SQL -ServerInstance "BatCave01" -Database "DBAStuff" -Querytimeout 0
    }
    }

Now that we have all this data in the table, what do we do with it? Guess you will have to tune in again to this same Bat channel.

The song for this post is Matt Nathanson’s Kiss Quick

Query times just keep fallin’!

So if I can’t modify or add indexes and I can’t change code, how do I get my query times to drop? Query Store to the rescue.

I love to tune queries. I feel so satisfied to see the times dropping on my server as I tune things. Recently, I have been tracking my Batch Requests per Second and my Instance Waits to see if I am making improvements when I tune. It has been awesome!

What I am going to show you today is how I dig into Query Store to find those misbehaving queries and make their performance better. First, I go into the database's properties to make sure Query Store is turned on.

There are a ton of best-practice posts on Query Store sizes and settings; we aren't talking about that in this post. This is just an example of what it can look like, with the most important part being that the Operation Mode (Actual) at the top says "Read Write".

If the query store was just barely turned on, give it at least 30 minutes to run before expecting it to show much. It may even take up to a day to get a good idea of what is happening.

My two favorite reports are Top Resource Consuming Queries and Regressed Queries.

I start by looking at the Top Resource Consuming Queries (I use the same process for Regressed Queries).
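
(If you would rather query than click, something like this rough sketch against the Query Store catalog views gets a similar list, ordered by total duration over the last day. I still mostly use the report.)

SELECT TOP (10)
       q.query_id,
       qt.query_sql_text,
       p.plan_id,
       SUM(rs.count_executions * rs.avg_duration) / 1000000.0 AS total_duration_seconds
FROM sys.query_store_query q
INNER JOIN sys.query_store_query_text qt ON q.query_text_id = qt.query_text_id
INNER JOIN sys.query_store_plan p ON q.query_id = p.query_id
INNER JOIN sys.query_store_runtime_stats rs ON p.plan_id = rs.plan_id
INNER JOIN sys.query_store_runtime_stats_interval rsi ON rs.runtime_stats_interval_id = rsi.runtime_stats_interval_id
WHERE rsi.start_time > DATEADD(day, -1, GETUTCDATE())
GROUP BY q.query_id, qt.query_sql_text, p.plan_id
ORDER BY total_duration_seconds DESC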

I will click on a bar in the chart and it will turn green. Then I will hit the “track query” button to the right of the “refresh” button just above the chart.

In this example, the query was more stable yesterday. I will look at all the plans available (I will even select "Configure" in the top right corner and change it to look at a full month instead of the last day to see if I can find a better plan). I will then click the plan I like best and select the "Force Plan" button. SQL Server will verify the plan number and ask if I am sure; I check that it is the right one and then select "yes". Next, I will refresh until I see my newly pinned plan show up. If my plan doesn't show up but a new plan does, I will compare the new plan with the one I wanted. If they have the same shape, I know that is what SQL Server is putting in place for my forced plan, and I will force the new one instead to keep it consistent.

Now I watch and see how the new forced plan behaves. If it goes horribly wrong, I will unforce it. If it is healthy and doing what I want, I move to the next query to see if I can help it.

Disclaimer: Query Store won't solve all the problems, but it can be a way to temporarily fix performance issues. I keep an eye on my pinned queries and make sure I watch for when changes hit the system. I also have alarms for when things start to run long so I can quickly diagnose performance issues. If indexes are changed or code is modified, it can affect forced plans.
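
A quick way I check on my pinned plans is a query like this sketch; it lists anything currently forced in the database so I can review it after index or code changes, along with the unforce command I reach for when a pin stops helping (the IDs are placeholders):

-- List every plan currently forced through Query Store in this database.
SELECT p.query_id, p.plan_id, p.last_force_failure_reason_desc
FROM sys.query_store_plan p
WHERE p.is_forced_plan = 1;

-- Unpin a plan that is no longer helping.
EXEC sp_query_store_unforce_plan @query_id = 42, @plan_id = 417;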

The song for this post is Fallin’ by Why Don’t We; it makes me want to dance every time I hear it.

Little Bit of Love, when your logs are rolling too much.

In January, the awesome Tim Radney (b|t) talked to the Utah user group about best practices. One that he mentioned was rolling over your error logs every day and keeping 35 logs (a month plus 3 reboots). I loved this idea and implemented it using what I had done here, adding it to an agent job.
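
The moving pieces are small: the agent job just cycles the error log once a day, and keeping 35 logs is a server-level setting. This is roughly what they look like; the registry write is the scripted version of the SSMS retention setting, so double-check it in your environment:

-- Roll the current error log over to a new file (run daily from an agent job).
EXEC sp_cycle_errorlog;

-- Keep 35 error logs before the oldest one is recycled.
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer',
     N'NumErrorLogs', REG_DWORD, 35;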

Then I realized we didn’t have any alerts for when our logs were rolling over too much. Way back in my career, it was something I would watch, because it could mean someone was trying to hack your system and cover their tracks by rolling your logs over a bunch. I fought so much with figuring out how to tell if my logs are rolling over that I had to save it for the future.

DROP TABLE IF EXISTS #EnumErrorLog;

CREATE TABLE #EnumErrorLog
(
    [Archive#] varchar(3) NOT NULL
        PRIMARY KEY CLUSTERED
    , [Date] datetime NOT NULL
    , [LogFileSizeByte] int NOT NULL
);

INSERT INTO #EnumErrorLog ([Archive#], [Date], [LogFileSizeByte])
EXEC sys.sp_enumerrorlogs;


SELECT CASE WHEN COUNT([Archive#]) >= 5 THEN 1 ELSE 0 END
FROM #EnumErrorLog
WHERE Date > DATEADD(hour, -3, GETDATE())

I create a temp table so I can execute the system stored procedure (sys.sp_enumerrorlogs) to pull the error log list into a table and select it back out. I run this alert check once an hour; if five or more logs have been created in the last 3 hours, it tells me something is rolling the logs over too much (1 means alert, 0 means do nothing). I am using a third-party tool right now, but I bet this could be set up with native SQL alerts or agent jobs.
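
If I ever drop the third-party tool, I imagine the native version looking a lot like the log-shipping alert from earlier: wrap the check in an IF and let Database Mail do the yelling. A rough sketch, reusing the temp table from above (profile and recipient are placeholders):

IF (SELECT CASE WHEN COUNT([Archive#]) >= 5 THEN 1 ELSE 0 END
    FROM #EnumErrorLog
    WHERE Date > DATEADD(hour, -3, GETDATE())) = 1
BEGIN
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'Default', -- replace with your SQL Database Mail Profile
        @recipients = 'andrea@WayneManor.com', -- replace with your email address
        @subject = 'Error Log Is Rolling Over Too Often',
        @body = 'Five or more error logs were created in the last three hours. Someone may be covering their tracks.';
END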

The song for this post is Little Bit of Love by JP Cooper, it makes me smile even on the toughest days. *hugs*