I got lucky with an Azure database restore…

Recently, the wrong table got dropped and we needed to bring it back. I had never done a restore of an Azure Managed Database before, so I learned something really fast.

First, I log into the Azure Portal (portal.azure.com).

I select “SQL servers”:

I pick the server that my managed database is on (it has a type of “SQL server”) and click on the name of the server (it is a link to where I need to go).

In the next screen under Settings, I select SQL databases:

I click on the name of the database that I need to restore. It will take me to another screen.

It gives me this list of options at the top (this is also where I would go to create a copy of a database for another environment like dev).

I select “Restore” and it takes me to another screen where it has most of the restore ready to go. Here is where I can pick a point-in-time recovery or the most recent possible. Since I knew about the time the delete had happened, I am going to restore to just a few minutes before the delete (the time will be in UTC). The most recent point would not have had what I needed because it took the user a little while to let me know what had happened.

I can name the new database, and I can add a date or time to the name so I know when it was created.

I set my storage to the cheapest options because this is just a restore that I am going to pull data from back into my current database. I won’t be keeping it long.

I select “Review + create” at the bottom and it starts restoring my database. Depending on the size it might take a while, but it was pretty quick for me.

I pull the data back into the table that I need and then go back to the SQL Databases screen and find my newly created database name. I click on the newly created database name and this time, instead of “Restore”, I select “Delete” and follow the prompts to remove it.
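
For the data pull itself, this is the general shape of what I mean (a rough sketch with made-up names; it assumes the restored copy can be referenced from the original database with a three-part name, which works on a managed instance but would need elastic query on a single Azure SQL Database):

-- Hypothetical names: ImportantTable was the dropped table, and
-- MyDatabase_Restored_20240115 is the point-in-time copy I restored.
-- SELECT ... INTO recreates the table from the copy in one shot;
-- use INSERT ... SELECT instead if you have already recreated the table.
SELECT *
INTO dbo.ImportantTable
FROM MyDatabase_Restored_20240115.dbo.ImportantTable;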

This was the easiest point in time restore I have EVER done. It was fast and I didn’t have to keep adding each log to roll forward.

The song for this post is Lucky by Elle King.

I can query multiple instances, I am King!

In the past, I have talked about CMS (Central Management Servers), but now I don’t have CMS configured and still want to query multiple instances at once. Local Server Groups are my friend.

In SSMS, I start by selecting View>>Registered Servers.

I then right click on “Local Server Groups” and select “New Server Group”.

Next I right click on the group I just created, in this case “Production”, and select “New Server Registration”. I fill in my servername and the type of authentication; in this case I am using SQL Server Authentication with my login and password, and I am saving my password, which will help in the future. The Registered Server Name can be different from the actual servername. In the real world, my servernames are weird, so the Registered Server Name is the easy-to-remember name or the nickname I use for the server (all of my servers have nicknames). The description will come up when I hover over the server name once I have it registered.

Then I repeat this process until I have registered all my servers for Production under the Production group.

Now comes the cool part. I right click on my Production Server Group and select “New Query”. Because I saved my password, it connects to all my Production instances in one window. By default, it creates a pink bar at the bottom showing how many instances connected and the name of the Server Group.

Now I can run all my queries at once and the results will have the instance name prepended to each row. Word of warning, I never leave this connection open. I open it when I need it and then close it again so I don’t accidentally run something against all my servers.
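
If you want a harmless query to try the first time you connect to a group, something like this works; SSMS adds the server name column to the results on its own:

-- Runs once per registered instance; handy for a quick version/edition inventory.
SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('Edition') AS Edition;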

The song for this post, King, is by Florence + the Machine.

Hit me with them good vibes, CTfP is set nice. Everything is so fire, little bit of sunshine!

Cost Threshold for Parallelism (CTfP) is one of my favorite server level settings in SQL Server. I remember the first time I heard this setting mentioned by Grant Fritchey. I quickly hopped on my servers, found them all set at the default (5), and adjusted them to 50 for the non-SSRS servers and 30 for the SSRS ones. That was many years ago, but I had kept those numbers in my head because I didn’t know a better way.
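
For anyone who has not changed it before, CTfP lives in sp_configure; this is the general idea (the 50 is just the number I used back then, not a recommendation for your environment):

-- Cost Threshold for Parallelism is an advanced, server-level option.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 50; -- example value only
RECONFIGURE;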

Peter Shore gave an awesome presentation on Waits to our user group last week and reminded me of how much this setting can impact tuning. He also pointed us to a fantastic blog by Jonathan Kehayias about how to know the correct setting for your CTfP.

Peter explained that as I ran Jonathan’s awesome query, I would start to see a point in the StatementSubTreeCost column to help me identify the best CTfP for my environment.

My first thought after looking at this query was, “I am so glad Jonathan wrote it, because with that much XML I wouldn’t know if it were safe to run without that trust.”

Today, I gave it a go. I kicked off the query and held my breath. Then I started to turn blue and realized this would probably take a minute. It took about 15 minutes, and I was happy I didn’t panic at the wrong disco. It runs under read uncommitted, which keeps it from blocking anything else (thank you so much!), and I ran sp_whoisactive over and over to be safe.

This is Jonathan’s query, but I recommend you read his article too because there was so much good information.

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; 
WITH XMLNAMESPACES   
(DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')  
SELECT  
     query_plan AS CompleteQueryPlan, 
     n.value('(@StatementText)[1]', 'VARCHAR(4000)') AS StatementText, 
     n.value('(@StatementOptmLevel)[1]', 'VARCHAR(25)') AS StatementOptimizationLevel, 
     n.value('(@StatementSubTreeCost)[1]', 'VARCHAR(128)') AS StatementSubTreeCost, 
     n.query('.') AS ParallelSubTreeXML,  
     ecp.usecounts, 
     ecp.size_in_bytes 
FROM sys.dm_exec_cached_plans AS ecp 
CROSS APPLY sys.dm_exec_query_plan(plan_handle) AS eqp 
CROSS APPLY query_plan.nodes('/ShowPlanXML/BatchSequence/Batch/Statements/StmtSimple') AS qn(n) 
WHERE  n.query('.').exist('//RelOp[@PhysicalOp="Parallelism"]') = 1 

After running it, I got back 43 records. I felt that was low until I remembered that our CTfP is already set at 150, much higher than the numbers I had been carrying around in my head. After looking over the results, I felt that 150 was about right for this environment. I didn’t stop there.

Jonathan had mentioned how he uses this query to identify what needs to be tuned, and since tuning is my favorite, I started to play with the queries to get them running better.

Huge THANK YOU to the awesome SQL Server Community that is always willing to share and teach! I love being able to find what I need from people that I trust to make my job easier and I couldn’t do it without all of you!

Hugs and please stay safe!

The song for this post is Sunshine by OneRepublic.

What for do you yearn? Watch that Replication Burn!

Replication is not my favorite, it is kind of far from my favorite. No further than that. Little further.

When it breaks, it can cause havoc and it always seems to break at the worst time. Recently we noticed that our logfile was massive (like 3 times the size of the database) and that was making many of the other processes painful. We didn’t know how long the log hadn’t been clearing so we got to burn it all (kind of).
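
A quick way to confirm that replication is what is holding the log hostage is to check log_reuse_wait_desc (the database name is a placeholder):

-- When replication is the culprit, log_reuse_wait_desc shows REPLICATION.
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'MyDatabaseName';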

The first thing I did was tell replication that we were done with all the transactions that had been committed.

-- Tell replication that every transaction currently in the log has already been distributed.
EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1;

CHECKPOINT;

Had I known when replication had broken, I could have narrowed it down and put more details in this command to only clear out exactly what I needed to remove. Since I didn’t know, I told it to just be done with all of it. I ran a checkpoint trying to get it to clear out. At this point I took a log backup (I just kicked off my agent job).

“Burn, Replication, Burn!”

Then I checked to see how my log was looking and wanted to see if there were files that could be used.

DBCC LOGINFO('MyDatabaseName')

I was looking for status 0 (meaning the logfile could be reused or overwritten). There were still a lot that hadn’t cleared out so I repeated the process a few times and finally it was awesome! I had to do some clean up on my transaction log, but it was soooo much more usable.
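
If the clean up you need is shrinking the log file back down after it ballooned, that part looks something like this (the logical file name and target size are placeholders, so check yours first):

USE MyDatabaseName;
GO
-- Find the logical name and current size of the log file.
SELECT name, type_desc, size * 8 / 1024 AS SizeMB
FROM sys.database_files;
GO
-- Shrink the log back to a reasonable size (target is in MB).
DBCC SHRINKFILE (N'MyDatabaseName_log', 8192);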

We also had to re-initialize replication, but it was totally worth it.

The song for this post is Burn Butcher Burn by Joseph Trapanese and Joey Batey

And it was never a question, Query Store was crowing for repair. You gave it space and direction but you couldn’t keep it there…

Yes! Back to Query Store! I have had this problem for months where one of my Query Store databases grows by a gig each week! It completely fills up, goes into a read-only state (which sets off an alarm that I built to tell me when it switches to read only), and the only way I could get it to work again was to add space. I would add a gig and think, “Surely that will be enough to feed the hunger”. The next week, the alarm would go off again and I would feed it again!

I adjusted how often stats were collected, how frequently data was flushed, the max plans per query and anything else I could think to do, and still, it was hungry.
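
All of those knobs are options on ALTER DATABASE ... SET QUERY_STORE; this is roughly what I was adjusting (the values are examples, not recommendations):

-- Example Query Store settings; every value here is illustrative only.
ALTER DATABASE [MyDatabase]
SET QUERY_STORE (
    MAX_STORAGE_SIZE_MB = 2048,        -- how big it can get before flipping to read only
    INTERVAL_LENGTH_MINUTES = 60,      -- how often runtime stats are aggregated
    DATA_FLUSH_INTERVAL_SECONDS = 900, -- how often data is flushed to disk
    MAX_PLANS_PER_QUERY = 200,         -- cap on plans kept per query
    CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 30)
);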

I had searched, read, googled, and kept coming up with nothing. I finally found something on corruption in the query store. CORRUPTION? Could it be possible? It was worth a try; my query store was in need of a serious diet and I still needed it to function.

The next time it went into read-only mode, I turned it off (it has to be off to fix the corruption) and ran this:

sp_query_store_consistency_check
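
For context, the whole dance went roughly like this (the database name is a placeholder, and since the consistency check procedure is undocumented, test it somewhere safe first):

-- Query Store must be off before running the consistency check.
ALTER DATABASE [MyDatabase] SET QUERY_STORE = OFF;
GO
USE [MyDatabase];
GO
EXEC sp_query_store_consistency_check;
GO
-- Turn it back on and put it into read/write mode.
ALTER DATABASE [MyDatabase] SET QUERY_STORE = ON;
ALTER DATABASE [MyDatabase] SET QUERY_STORE (OPERATION_MODE = READ_WRITE);
GO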

Guess what happened next?!!!! My query store had a full gig free! I have left it alone for a few weeks and today I was able to shrink it by 5 GB! It has been glorious to have it working and not be worried about why it was growing out of control.

The song for this post is Toad the Wet Sprocket’s Crowing

Now a personal note about Toad the Wet Sprocket. They are one of my favorite bands. Last night as I was listening to “Crowing”, I looked up the lyrics to figure out one of the words and realized I had been singing along to the wrong words. I thought it was “crowing for her” when it is actually “crowing for repair”. That completely changed the meaning of the song for me and made me love it even more. It also made me realize I need to read Toad lyrics more often.

This also took my mind to the time that Ryan surprised me with tickets to see them at a reunion show in Vegas. After the show, fortune shone on me and I got to meet Glen Phillips, the lead singer. He was super kind and gracious and let us take a picture, and right after, I fan-girled out a lot and started crying while trying to tell him how much I appreciated his and the band’s music. Huge apology to all the people I have completely scared with a fan-girl episode. I promise I try not to; sometimes I just can’t put into words how important that moment is to me.

Give me a minute to restore my logs…

This is part 4 of my log-shipping journey, if you missed part 1, you can find it here, part 2 is here, and part 3 is here.

We have one more setup piece to finish before we do our restores. We need to tell my scripts which logfile is going to be the final one for now, so it can restore with STANDBY and I can read my databases.

UPDATE RF1
SET IsFinalLogFile = 1
FROM RestoreFile RF1
INNER JOIN (SELECT RF.Origin, MAX(RF.CreatedDate) AS CreatedDate
            FROM RestoreFile RF
            INNER JOIN (SELECT ForDatabase, MAX(CreatedDate) AS CreatedDate
                        FROM [dbo].[RestoreFile]
                        WHERE FileType = 'B'
                        GROUP BY ForDatabase) BF ON RF.ForDatabase = BF.ForDatabase AND RF.CreatedDate > BF.CreatedDate
            WHERE RF.FileType = 'L'
            GROUP BY RF.Origin) RF2 ON RF1.Origin = RF2.Origin AND RF1.CreatedDate = RF2.CreatedDate

There are a few things going on here. First, I need to make sure that my logfile is more recent than my most recently restored full backup. Because I am restoring more than one database with this script, I have to join on both the ForDatabase and the CreatedDate (making sure it is greater than, or in the logfile case equal to, the date and time I need). When I find the last logfile that is newer than the full backup’s created date, I mark it with IsFinalLogFile = 1 (true).

Now we are ready to build our full database restores.

DECLARE @SQLToExecute nvarchar(max), @RestoreFileId int
DECLARE RestoreBackups CURSOR FAST_FORWARD
FOR
SELECT 
'RESTORE DATABASE ' + RF.ForDatabase + ' FROM DISK = N''' + RF.FileLocation + ''' WITH FILE =  1, MOVE N''' + MF.name + ''' TO N''D:\SQLData\' + RF.ForDatabase + '.mdf'', MOVE N''' + MF.name + '_log'' TO N''L:\SQLLogs\'+ RF.ForDatabase + '_log.ldf'', NORECOVERY,  NOUNLOAD,  REPLACE,  STATS = 5'
, Id

  FROM [dbo].[RestoreFile] RF
	INNER JOIN sys.databases d ON RF.ForDatabase = d.name
	INNER JOIN sys.master_files mf on d.database_id = mf.database_id and mf.type = 0
  WHERE RF.FileType = 'B' AND RF.IsApplied = 0
OPEN RestoreBackups
FETCH NEXT FROM RestoreBackups INTO @SQLToExecute, @RestoreFileId
WHILE (@@FETCH_STATUS = 0)
BEGIN
    EXEC sp_executeSQL @SQLToExecute
	
	UPDATE [dbo].[RestoreFile]  SET IsApplied = 1 WHERE Id = @RestoreFileId

    FETCH NEXT FROM RestoreBackups INTO @SQLToExecute, @RestoreFileId
END
CLOSE RestoreBackups
DEALLOCATE RestoreBackups
GO

I am using a cursor to run each restore one at a time. I make sure that I am only doing FileType = ‘B’ so I know they are full backups, and only ones that haven’t been applied yet. I am building my restore string to include a MOVE, since the file locations on the new server may be different from what was used in the past. I update the applied status as I do the restore so I won’t apply this one twice; for that I use the RestoreFileId, which lets me update the specific record that I want. This is the main reason I wanted a new table. If I had stayed with one table, there is a chance I would have had duplicate filenames, which would have meant duplicate chances to restore records. It just made everything so much cleaner to have one stage table and one production table.

For the logfile restore, I followed a similar pattern:

DECLARE @SQLToExecute nvarchar(max), @RestoreFileId int
DECLARE RestoreBackups CURSOR FAST_FORWARD
FOR
SELECT 
'RESTORE LOG ' + RF.ForDatabase + ' FROM DISK = N''' + RF.FileLocation + ''' WITH NORECOVERY;'
, Id

  FROM [dbo].[RestoreFile] RF
	INNER JOIN sys.databases d ON RF.ForDatabase = d.name
	INNER JOIN sys.master_files mf on d.database_id = mf.database_id and mf.type = 0
	INNER JOIN	(SELECT ForDatabase, MAX(CreatedDate) AS CreatedDate
					FROM [dbo].[RestoreFile] RF
					WHERE RF.FileType = 'B' 
					GROUP BY ForDatabase) BF ON RF.ForDatabase = BF.ForDatabase AND RF.CreatedDate > BF.CreatedDate
  WHERE RF.FileType = 'L' AND RF.IsApplied = 0 AND RF.IsFinalLogFile = 0
  ORDER BY RF.ForDatabase, RF.CreatedDate
OPEN RestoreBackups
FETCH NEXT FROM RestoreBackups INTO @SQLToExecute, @RestoreFileId
WHILE (@@FETCH_STATUS = 0)
BEGIN

BEGIN TRY
    EXEC sp_executeSQL @SQLToExecute

		UPDATE [dbo].[RestoreFile]  SET IsApplied = 1 WHERE Id = @RestoreFileId
END TRY
BEGIN CATCH
    --SELECT   
        --ERROR_NUMBER() AS ErrorNumber,ERROR_MESSAGE() AS ErrorMessage
		UPDATE [dbo].[RestoreFile]  SET IsApplied = 0 WHERE Id = @RestoreFileId
END CATCH 

    FETCH NEXT FROM RestoreBackups INTO @SQLToExecute, @RestoreFileId
END
CLOSE RestoreBackups
DEALLOCATE RestoreBackups
GO

The pattern here is almost the same. I build my simple restore script, but only for FileType = ‘L’ (logfile), only if it hasn’t been applied, and only if it isn’t the final logfile. I also check that the full backup has been restored and is earlier than the logfile. Then I execute the string of SQL that I built and update the record to applied. I ended up having issues with IsApplied being marked as true even when the restore didn’t apply correctly, so I added error handling to change it back when that happens. I may go back and add this to the full backup restore script.

Next, I run the script that will do my final logfile restore for each database:

DECLARE @SQLToExecute nvarchar(max), @RestoreFileId int
DECLARE RestoreBackups CURSOR FAST_FORWARD
FOR
SELECT 
'RESTORE LOG ' + RF.ForDatabase + ' FROM DISK = N''' + RF.FileLocation + ''' WITH STANDBY=''Z:\Standby\' + RF.ForDatabase + '_Standby.bak'';'
, Id

  FROM [dbo].[RestoreFile] RF
	INNER JOIN sys.databases d ON RF.ForDatabase = d.name
	INNER JOIN sys.master_files mf on d.database_id = mf.database_id and mf.type = 0
	INNER JOIN	(SELECT ForDatabase, MAX(CreatedDate) AS CreatedDate
					FROM [dbo].[RestoreFile] RF
					WHERE RF.FileType = 'B' 
					GROUP BY ForDatabase) BF ON RF.ForDatabase = BF.ForDatabase AND RF.CreatedDate > BF.CreatedDate
  WHERE RF.FileType = 'L' AND RF.IsApplied = 0 AND RF.IsFinalLogFile = 1
  ORDER BY RF.ForDatabase, RF.CreatedDate
OPEN RestoreBackups
FETCH NEXT FROM RestoreBackups INTO @SQLToExecute, @RestoreFileId
WHILE (@@FETCH_STATUS = 0)
BEGIN

BEGIN TRY
    EXEC sp_executeSQL @SQLToExecute

		UPDATE [dbo].[RestoreFile]  SET IsApplied = 1 WHERE Id = @RestoreFileId
END TRY
BEGIN CATCH
    --SELECT   
        --ERROR_NUMBER() AS ErrorNumber,ERROR_MESSAGE() AS ErrorMessage
		UPDATE [dbo].[RestoreFile]  SET IsApplied = 0 WHERE Id = @RestoreFileId
END CATCH 

    FETCH NEXT FROM RestoreBackups INTO @SQLToExecute, @RestoreFileId
END
CLOSE RestoreBackups
DEALLOCATE RestoreBackups
GO

The only differences between these last two scripts are the “WITH STANDBY” instead of “WITH NORECOVERY” and the IsFinalLogFile filter being set to true.

Last, but not least, I am cleaning old records out of my RestoreFile table so it doesn’t get large. I debated how much data to keep and for me, it makes sense to delete anything older than two days. I am doing a full restore daily and if I change that to weekly, I will keep this data longer.

  DELETE
  FROM [Maintenance].[dbo].[RestoreFile]
  WHERE CreatedDate < GETDATE()-2

This process allows me to stay flexible. If I don’t get a new full backup, this process will keep restoring logfiles until a new full backup file appears. If the backup chain is broken, this process will also break, but it will work as long as that chain is healthy and it gets all the files it needs.

Whew! We did it! That is my full poor girl log shipping process.

The song for this post is George Ezra’s Hold My Girl.

When I add a table new, that missing piece is found…

This is part 3 of my log-shipping journey, if you missed part 1, you can find it here and if you missed part 2, you can find it here.

As I started working with my log-shipping job, I realized that I needed a second table: one to stage all my PowerShell-loaded filenames and one that I could use to build the restores. I also wanted to be able to truncate the Logshipping table to empty it before loading the filenames so I won’t miss any, without redoing work in my main table that had already been done. At the beginning of my SQL Agent job, I have the step that truncates the Logshipping table. Then I load the filenames with the PowerShell script, update the created dates, and load it all into my new RestoreFile table.
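
The truncate step is just a one-liner against the staging table from part 2, shown here for completeness:

-- Step 1 of the agent job: empty the staging table before the PowerShell
-- script reloads the current list of backup and log file names.
TRUNCATE TABLE [dbo].[LogshippingFile];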

CREATE TABLE [dbo].[RestoreFile](
	[Id] [int] IDENTITY(1,1) NOT NULL,
	[FileName] [varchar](500) NULL,
	[FileLocation] [varchar](1000) NULL,
	[FileType] [char](1) NULL,
	[Origin] [varchar](50) NULL,
	[ForDatabase] [varchar](50) NULL,
	[CreatedDate] [datetime2](7) NULL,
	[IsApplied] [bit] NOT NULL,
	[IsFinalLogFile] [bit] NOT NULL,
 CONSTRAINT [PK_RestoreFile] PRIMARY KEY CLUSTERED 
(
	[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
GO

ALTER TABLE [dbo].[RestoreFile] ADD  CONSTRAINT [DF_RestoreFile_IsApplied]  DEFAULT ((0)) FOR [IsApplied]
GO

ALTER TABLE [dbo].[RestoreFile] ADD  CONSTRAINT [DF_RestoreFile_IsFinalLogFile]  DEFAULT ((0)) FOR [IsFinalLogFile]
GO

I removed the IsApplied column from the Logshipping table because I didn’t need it. The IsFinalLogFile becomes super important when we go to bring the database into Standby mode.

Next, I need to move the data from Logshipping into RestoreFile:

INSERT INTO RestoreFile(FileName, FileLocation, FileType, Origin, ForDatabase, CreatedDate)
SELECT A.FileName, A.FileLocation, A.FileType, A.Origin, A.ForDatabase, A.CreatedDate	
FROM [dbo].[LogshippingFile] A
LEFT JOIN [dbo].[RestoreFile] B ON A.FileName = B.FileName 
WHERE B.FileName IS NULL

I am comparing the filenames so that I don’t get duplicates. Thanks again to the date stamping in the names, I won’t get the same name twice, but I run this load frequently during the day and don’t want to bloat my table.

Now, we have all the pieces in place to do our restores! Get ready for some fun!

The song for this post is Vance Joy’s Missing Piece

Query times just keep fallin’!

So if I can’t modify or add indexes and I can’t change code, how do I get my query times to drop? Query Store to the rescue.

I love to tune queries. I feel so satisfied to see the times dropping on my server as I tune things. Recently, I have been tracking my Batch Requests per Second and my Instance Waits to see if I am making improvements when I tune. It has been awesome!

What I am going to show you today is how I dig into my query store to find those misbehaving queries and make their performance better. First I go into a database’s properties to make sure my query store is turned on:

There are a ton of best practice posts on query store sizes and settings; we aren’t talking about that in this post. This is just an example of what it can look like, with the most important part being that the Operation Mode (Actual) at the top says “Read Write”.
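
If you prefer T-SQL to the properties dialog, the same information is in sys.database_query_store_options (run it in the database you care about):

-- Confirm Query Store is on and in READ_WRITE mode for the current database.
SELECT actual_state_desc,
       desired_state_desc,
       current_storage_size_mb,
       max_storage_size_mb
FROM sys.database_query_store_options;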

If the query store was just barely turned on, give it at least 30 minutes to run before expecting it to show much. It may even take up to a day to get a good idea of what is happening.

My two favorite reports are Top Resource Consuming Queries and Regressed Queries.

I start by looking at the Top Resource Consuming Queries (I use the same process for Regressed Queries).

I will click on a bar in the chart and it will turn green. Then I will hit the “track query” button to the right of the “refresh” button just above the chart.

This query was more stable yesterday, so I will look at all the plans available (I will even select “Configure” in the top right corner and change the window to a full month instead of the last day to see if I can find a better plan). I then click the plan I like best and select the “Force Plan” button. SQL Server will verify the plan number and ask if I am sure; I check that it is the right one and then select “Yes”. Next I refresh until I see my newly pinned plan show up. If my plan doesn’t show up but a new plan does, I compare the new plan with the one I wanted. If they have the same shape, I know that is what SQL Server is putting in place for my forced plan, and I force the new one instead to keep it consistent.

Now I watch and see how the new forced plan behaves. If it goes horribly wrong, I will unforce it. If it is healthy and doing what I want, I move to the next query to see if I can help it.
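
The same force and unforce actions are available without the GUI, which is handy if you want to script this; the ids below are made up and come from the report or from sys.query_store_plan:

-- Force the plan I picked for a query (ids are hypothetical examples).
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;

-- And if it goes horribly wrong, unforce it again.
EXEC sp_query_store_unforce_plan @query_id = 42, @plan_id = 7;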

Disclaimer: Query Store won’t solve all the problems, but it can be a way to temporarily fix performance issues. I keep an eye on my pinned queries and make sure I watch for when changes hit the system. I also have alarms for when things start to run long so I can quickly diagnose performance issues. If indexes are changed or code is modified, it can affect forced plans.

The song for this post is Fallin’ by Why Don’t We; it makes me want to dance every time I hear it.

Little Bit of Love, when your logs are rolling too much.

In January, the awesome Tim Radney (b|t) talked to the Utah user group about best practices. One that he mentioned was rolling over your error logs every day and keeping 35 logs (a month plus 3 reboots). I loved this idea and implemented it using what I had done here and adding it to an agent job.
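
The agent job step itself is just sp_cycle_errorlog, and the 35-log retention is an instance setting; a sketch of both pieces is below (the retention change writes to the instance registry, so test it somewhere safe, or set it in SSMS by right-clicking SQL Server Logs under Management and choosing Configure):

-- Daily agent job step: close the current error log and start a new one.
EXEC sp_cycle_errorlog;

-- Keep 35 error logs before recycling (writes to the instance registry).
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer',
     N'NumErrorLogs', REG_DWORD, 35;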

Then I realized we didn’t have any alerts for whether our logs were rolling over too much. Way back in my career it was something I would watch, because a burst of rollovers can mean someone is trying to hack your system and cover their tracks by cycling the logs. I fought so much with figuring out how to tell if my logs were rolling over that I had to save it for the future.

DROP TABLE IF EXISTS #EnumErrorLog;

CREATE TABLE #EnumErrorLog
(
    [Archive#] varchar(3) NOT NULL
        PRIMARY KEY CLUSTERED
    , [Date] datetime NOT NULL
    , [LogFileSizeByte] int NOT NULL
);

-- sys.sp_enumerrorlogs returns one row per error log (the current log plus each archive).
INSERT INTO #EnumErrorLog ([Archive#], [Date], [LogFileSizeByte])
EXEC sys.sp_enumerrorlogs;


SELECT CASE WHEN COUNT([Archive#]) >= 5 THEN 1 ELSE 0 END AS AlertFlag
FROM #EnumErrorLog
WHERE [Date] > DATEADD(hour, -3, GETDATE());

I create a temp table so I can execute the system stored procedure, pull its output into the table, and select it back out. I run this alert check once an hour, and it looks back over the last three hours; if five or more logs have been created in that window, it returns 1 to tell my alerting to fire (1 means alert, 0 means do nothing). I am using a third party tool right now, but I bet this could be set up with native SQL alerts or agent jobs.

The song for this post is Little Bit of Love by JP Cooper, it makes me smile even on the toughest days. *hugs*

Sometimes I get so high, so low, where did all my good plans go?

Greetings! Today I was playing with query store and noticed that I had some failing forced plans. How do you find failing forced plans? I asked this question over and over and finally found an answer.

I started on my main database and ran this query to look at query store:

SELECT *
FROM sys.query_store_plan
WHERE is_forced_plan = 1 and force_failure_count > 0
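
The same DMV also has the reason each force is failing, which makes the list easier to triage (these columns exist on versions that support plan forcing):

-- Same check, with the reason each forced plan is failing.
SELECT plan_id,
       query_id,
       force_failure_count,
       last_force_failure_reason_desc
FROM sys.query_store_plan
WHERE is_forced_plan = 1 AND force_failure_count > 0;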

I had over 20 of them that were failing! Next, I had to figure out how to unforce the failing plans. Some of them were so old, they wouldn’t come up when I tried to look for them using the plan id in the GUI. I did more digging and found this:

EXEC sp_query_store_unforce_plan @query_id = <your query_id>, @plan_id = <your plan_id>;

It was incredibly satisfying to watch each of the rows in the first query disappear as I ran them through the second query. This is now on my list of things to check so I can have a clean and healthy query store.

This post’s song is High Low by The Unlikely Candidates