Starting now, is the wrong date for insert…

This is part 2 of my log-shipping journey; if you missed part 1, you can find it here.

I collected all of the file names I need, but you will notice I left the dates off. When my files are moved from one domain to another, their created dates are changed. I needed the real dates to get the correct restore order and to match the backups to the logs. If I were a Dark Knight Powershell master, I am sure I would have figured out how to do it there. Every time I started to get close, I would have a production issue or another distraction that needed my time. In the end I landed in my happy place, so we are fixing the dates in the database!

How do I get the right date for a file when the created date is being changed after the file has been moved? I was super lucky that the date is also stored in the filename! (Huge thank you to Ola for his awesome database maintenance solution.)

An example of my filename is this:

Batcentral$Alfred_Batman_LOG_20210610_224501.trn

This is how I dissect the filename to get the date and time from it:

  UPDATE [DBAStuff].[dbo].[LogshippingFile]
  SET CreatedDate = CAST(SUBSTRING(FileName, LEN(FileName) - 18, 8) + ' '
                       + SUBSTRING(FileName, LEN(FileName) - 9, 2) + ':'
                       + SUBSTRING(FileName, LEN(FileName) - 7, 2) + ':'
                       + SUBSTRING(FileName, LEN(FileName) - 5, 2) AS DATETIME)
  WHERE CreatedDate IS NULL

My filenames are different lengths, which means the dates won't always start at the same position. Instead, I go to the end of the string and count backwards, because the date and time portion is always formatted the same way. Then I put the parts back together to get my datetime and update it into my table.
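
If you want to sanity check the counting-backwards math, you can run the pieces against the example filename and see exactly what each SUBSTRING returns:

DECLARE @FileName varchar(500) = 'Batcentral$Alfred_Batman_LOG_20210610_224501.trn';

SELECT
    SUBSTRING(@FileName, LEN(@FileName) - 18, 8) AS DatePiece,  -- 20210610
    SUBSTRING(@FileName, LEN(@FileName) - 9, 2)  AS TimeHour,   -- 22
    SUBSTRING(@FileName, LEN(@FileName) - 7, 2)  AS TimeMinute, -- 45
    SUBSTRING(@FileName, LEN(@FileName) - 5, 2)  AS TimeSecond; -- 01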

Are we done yet? Nope, there is more.

The song for this post is Toad the Wet Sprocket's Starting Now

Load quick, got a list of files that need to be in a table…

Yes, I am trying to do a version of log-shipping that doesn’t require linking any servers and allows them to be in different domains. I also know I am crazy.

Now that we have that out of the way, let me show you some of the stuff I am doing! I am loading a table from multiple file locations using Powershell. It will go read what is in the directories and load the file names into a table for me. (Oh my goodness, I knew Powershell was cool, but it took me so long to figure this out that I was super proud of how fast it loaded.)

First, the housekeeping. Create two tables in a DBA Database:

CREATE TABLE [dbo].[LogshippingFile](
	[FileName] [varchar](500) NULL,
	[FileLocation] [varchar](1000) NULL,
	[FileType] [char](1) NULL,
	[Origin] [varchar](50) NULL,
	[ForDatabase] [varchar](50) NULL,
	[CreatedDate] [datetime2](7) NULL,
	[Applied] [bit] NOT NULL
) ON [PRIMARY]
GO

ALTER TABLE [dbo].[LogshippingFile] ADD  CONSTRAINT [DF_LogshippingFiles_Applied]  DEFAULT ((0)) FOR [Applied]
GO

CREATE TABLE [dbo].[Config](
	[Origin] [varchar](50) NULL,
	[FileLocation] [varchar](500) NULL,
	[Type] [char](1) NULL
) ON [PRIMARY]

In the Config table, I am going to put the common name for the database I am restoring so the rows will look like this:

Origin     FileLocation               Type
Batman     J:\FullBackups\Batman      B
Batman     J:\LogBackups\Batman       L
Superman   J:\FullBackups\Superman    B
Superman   J:\LogBackups\Superman     L
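
If it is easier to script than to type into the table, those rows are just a quick insert (using the example paths from above):

INSERT INTO [DBAStuff].[dbo].[Config] (Origin, FileLocation, [Type])
VALUES ('Batman',   'J:\FullBackups\Batman',   'B'),
       ('Batman',   'J:\LogBackups\Batman',    'L'),
       ('Superman', 'J:\FullBackups\Superman', 'B'),
       ('Superman', 'J:\LogBackups\Superman',  'L');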

Yay! Now we can get to the fun stuff. This is how I set up the Powershell:



# Credentials for connection to SQL Server. 
$username = 'Batgirl'
$password = '!Pennyworth54'
#Connecting to the SQL Server and Query to get FileLocation and Origin from the Config table. 
$DS = Invoke-Sqlcmd -ServerInstance "BatCave01" -Database "DBAStuff" -Username $username -Password $password -Query "SELECT FileLocation, Origin FROM Config" -As DataSet

# Put the rows that came back into a variable so I can loop over them.
$data = $DS.Tables[0].Rows
$data   # Echo the rows to the console so I can see what was returned.

ForEach ($Row in $data) {
    $FileLocation = $Row.FileLocation
    $Origin = $Row.Origin

    # Insert template; the {0}-{4} placeholders get filled in for each file found below.
    $sqlstatement = @'
INSERT INTO LogshippingFile
(
FileName,
FileLocation,
FileType,
Origin,
ForDatabase
)
VALUES (
'{0}',
'{1}',
'{2}',
'{3}',
'{4}'
)
'@
    # Go read every file in this location and load its details into the table.
    Get-ChildItem -Recurse $FileLocation |
        Select-Object Name, FullName,
            @{
                Label      = 'FileType'
                # Pulls the backup type letter (B or L) from a fixed position in the folder path; adjust the index for your path layout.
                Expression = { ($FileLocation.Substring(10,1)) }
            },
            @{
                Label      = 'Origin'
                Expression = { ($Origin) }
            },
            @{
                Label      = 'ForDatabase'
                Expression = { 'Gryphon_' + ($Origin) }
            } |
        # @{Label="Length";Expression={$_.Length / 1024KB -as [int] }}  |
        ForEach-Object {
            $SQL = $sqlstatement -f $_.Name, $_.FullName, $_.FileType, $_.Origin, $_.ForDatabase #,  $_.Length
            Invoke-Sqlcmd -Username $username -Password $password -Query $SQL -ServerInstance "BatCave01" -Database "DBAStuff" -QueryTimeout 0
        }
}

Now that we have all this data in the table, what do we do with it? Guess you will have to tune in again to this same Bat channel.

The song for this post is Matt Nathanson’s Kiss Quick

Query times just keep fallin’!

So if I can’t modify or add indexes and I can’t change code, how do I get my query times to drop? Query Store to the rescue.

I love to tune queries. I feel so satisfied to see the times dropping on my server as I tune things. Recently, I have been tracking my Batch Requests per Second and my Instance Waits to see if I am making improvements when I tune. It has been awesome!
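
If you want to peek at Batch Requests per Second without a monitoring tool, one rough way (just a sketch, not what my tool does) is to sample the cumulative counter in sys.dm_os_performance_counters twice and divide by the interval:

-- Sample the cumulative counter twice, 10 seconds apart, and work out the rate.
DECLARE @first bigint, @second bigint;

SELECT @first = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Batch Requests/sec' AND object_name LIKE '%SQL Statistics%';

WAITFOR DELAY '00:00:10';

SELECT @second = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Batch Requests/sec' AND object_name LIKE '%SQL Statistics%';

SELECT (@second - @first) / 10.0 AS BatchRequestsPerSecond;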

What I am going to show you today is how I dig into my query store to find those misbehaving queries and make their performance better. First I go into a database’s properties to make sure my query store is turned on:

There are a ton of best practice posts on query store sizes and settings; we aren't talking about that in this post. This is just an example of what it can look like, with the most important part being that Operation Mode (Actual) at the top says "Read Write".
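
If you would rather check (or turn it on) with T-SQL instead of clicking through the properties window, something like this does the same job (the database name is just a placeholder):

-- See what state Query Store is actually in for the current database
SELECT actual_state_desc, desired_state_desc, readonly_reason
FROM sys.database_query_store_options;

-- Turn it on in read/write mode
ALTER DATABASE [YourDatabase] SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE);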

If the query store was just barely turned on, give it at least 30 minutes to run before expecting it to show much. It may even take up to a day to get a good idea of what is happening.

My two favorite reports are Top Resource Consuming Queries and Regressed Queries.

I start by looking at the Top Resource Consuming Queries (I use the same process for Regressed Queries).

I will click on a bar in the chart and it will turn green. Then I will hit the “track query” button to the right of the “refresh” button just above the chart.

This query was more stable yesterday. I will look at all the plans available (I will even select "Configure" in the top right corner and change it to look at a full month instead of the last day to see if I can find a better plan). I will then click the plan I like best and select the "Force Plan" button. SQL Server will verify the plan number and ask if I am sure; I check that it is the right one and then select "yes". Next, I will refresh until I see my newly pinned plan show up. If my plan doesn't show up but a new plan does, I will compare the new plan with the one I wanted. If they have the same shape, I know that is what SQL Server is putting in place for my forced plan, and I will force the new one instead to keep it consistent.
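
The "Force Plan" button is just calling a system stored procedure behind the scenes, so if you prefer to do it in T-SQL, this is the equivalent (the IDs here are placeholders; the Query Store report shows you the real ones):

-- Force the plan I picked; query_id and plan_id come from the Query Store report
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;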

Now I watch and see how the new forced plan behaves. If it goes horribly wrong, I will unforce it. If it is healthy and doing what I want, I move to the next query to see if I can help it.

Disclaimer: Query Store won't solve all the problems, but it can be a way to temporarily fix performance issues. I keep an eye on my pinned queries and make sure I watch for when changes hit the system. I also have alarms for when things start to run long so I can quickly diagnose performance issues. If indexes are changed or code is modified, it can affect forced plans.

The song for this post is Fallin’ by Why Don’t We , it makes me want to dance every time I hear it.

Little Bit of Love, when your logs are rolling too much.

In January, the awesome Tim Radney (b|t) talked to the Utah user group about best practices. One that he mentioned was rolling over your error logs every day and keeping 35 logs (a month plus 3 reboots). I loved this idea and implemented it using what I had done here, adding it to an agent job.
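
For reference, both pieces are easy enough in T-SQL; I believe this is the standard way to bump the number of retained logs and to roll the log from a daily agent job step (double-check the registry path on your own instance before using it):

-- Keep 35 error logs instead of the default 6
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer',
     N'NumErrorLogs', REG_DWORD, 35;

-- Daily agent job step that rolls the error log over
EXEC sp_cycle_errorlog;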

Then I realized we didn't have any alerts for whether our logs were rolling too much. Way back in my career, this was something I would watch, because it could mean someone was trying to hack the system and cover their tracks by rolling the logs over a bunch. I fought so much with figuring out how to tell if my logs are rolling over that I had to save it for the future.

DROP TABLE IF EXISTS #EnumErrorLog;

CREATE TABLE #EnumErrorLog
(
    [Archive#] varchar(3) NOT NULL
        PRIMARY KEY CLUSTERED
    , [Date] datetime NOT NULL
    , [LogFileSizeByte] int NOT NULL
);

INSERT INTO #EnumErrorLog ([Archive#], [Date], [LogFileSizeByte])
EXEC sys.sp_enumerrorlogs;


SELECT CASE WHEN COUNT([Archive#]) >= 5 THEN 1 ELSE 0 END
FROM #EnumErrorLog
WHERE [Date] > DATEADD(HOUR, -3, GETDATE());

I create a temp table so I can execute the system stored procedure, pull the information into the table, and select it back out. I run this check once an hour; it looks back over the last three hours, and if the log has rolled over five or more times in that window it returns 1, otherwise 0 (1 means alert, 0 means do nothing). I am using a third party tool right now, but I bet this could be set up with native SQL alerts or agent jobs.
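
If you do want to go the native route, a rough sketch would be to put the same check in an agent job step and throw an error when it trips, so the job's failure notification does the alerting for you (the message text here is made up):

-- Fail the job step if the error log has rolled 5 or more times in the last 3 hours
IF (SELECT COUNT([Archive#])
    FROM #EnumErrorLog
    WHERE [Date] > DATEADD(HOUR, -3, GETDATE())) >= 5
BEGIN
    RAISERROR ('The SQL Server error log has rolled over 5 or more times in the last 3 hours.', 16, 1);
END;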

The song for this post is Little Bit of Love by JP Cooper, it makes me smile even on the toughest days. *hugs*

Sometimes I get so high, so low, where did all my good plans go?

Greetings! Today I was playing with query store and noticed that I had some failing forced plans. How do you find failing forced plans? I asked this question over and over and finally found an answer.

I started on my main database and ran this query to look at query store:

SELECT *
FROM sys.query_store_plan
WHERE is_forced_plan = 1 and force_failure_count > 0

I had over 20 of them that were failing! Next, I had to figure out how to unforce the failing plans. Some of them were so old, they wouldn’t come up when I tried to look for them using the plan id in the GUI. I did more digging and found this:

EXEC sp_query_store_unforce_plan @query_id = <your query_id>, @plan_id = <your plan_id>;

It was incredibly satisfying to watch each of the rows in the first query disappear as I ran them through the second query. This is now on my list of things to check so I can have a clean and healthy query store.
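
If you have a pile of them like I did, you can also let the first query build the unforce statements for you; this is just a convenience I am sketching here, not part of the original fix:

-- Generate an unforce statement for every failing forced plan
SELECT 'EXEC sp_query_store_unforce_plan @query_id = ' + CAST(query_id AS varchar(20))
     + ', @plan_id = ' + CAST(plan_id AS varchar(20)) + ';'
FROM sys.query_store_plan
WHERE is_forced_plan = 1 AND force_failure_count > 0;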

This post’s song is High Low by The Unlikely Candidates

I’m Going Under and This Time DBCC Can’t Save Me…

Over the weekend I was troubleshooting an issue and found that part of it was being caused by corruption on one of my databases.  I have seen DBCC CheckDB fix corruption a few times and thought, “No problem, I’ve got this.”

I ran:


ALTER DATABASE CorruptDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
BEGIN TRANSACTION;  -- wrap the repair in a transaction so it could be rolled back if needed
DBCC CHECKDB ('CorruptDB', REPAIR_REBUILD);
COMMIT;
ALTER DATABASE CorruptDB SET MULTI_USER;

I waited for it to finish because the rule is, “Let it finish”.

There were still errors. So I ran it again.

The same errors appeared, saying that some errors were waiting on others to be fixed first. But nothing had been fixed and my heart sank. I had a moment of panic as I furiously googled. I saw post after post telling me to restore the backup… Did I mention the small problem that the corruption had prevented the backups from completing for this database, without throwing an error? My backups were over two weeks old and rolling back that far was not an option.

I found a post by Paul Randal from 2006 talking about when DBCC CheckDB won’t solve the issue.  He mentioned that sometimes it is just an index or statistic that is corrupted.  This gave me hope and I started tracing my error.  I found it was on a Change Data Capture table that is being used in our replication.  I began pondering an idea.  I removed replication and ran the DBCC CheckDB.  It was clean! No corruption! I breathed and took a backup. Then I added replication back on and ran the DBCC CheckDB again.  It was clean again! Everything was healthy and fine!
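
For anyone else tracing corruption: the CHECKDB error messages include object and index IDs, and you can translate those into real names with something like this (the IDs below are placeholders, not the ones from my errors):

-- Turn the object ID and index ID from a CHECKDB error into names
SELECT OBJECT_SCHEMA_NAME(123456789) AS SchemaName,
       OBJECT_NAME(123456789)        AS ObjectName;

SELECT name AS IndexName
FROM sys.indexes
WHERE object_id = 123456789 AND index_id = 2;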

Thank you to everyone for your past posts that save my sanity and thank you for DBCC CheckDB!!!

 

The song for this post is Someone You Loved by Lewis Capaldi

If I need to rearrange my fragmentation, I will for you….

Oh my goodness, I have been buried and learning tons about indexing.  Something super cool that I learned was how to know if you need to adjust the fillfactors on your indexes.  My whole DBA career I have usually set it to 85 and forgotten it. I learned that I can check my CommandLog and see how often each index is being rebuilt.  I am currently using Ola's maintenance solution, and when my indexes are rebuilt for maintenance (nightly in this case), it is all logged to a table named CommandLog in the master database.  If an index is rebuilt every day, adjust the fillfactor down (say 75) so that it rebuilds less often.  If it almost never rebuilds, adjust the fillfactor up (say 90) so it gets rebuilt once in a while and you aren't wasting space in the index. Here is the query I am using to see how often an index has been rebuilt in the last 10 days.


SELECT Command, COUNT(StartTime) AS Rebuilt
FROM master.[dbo].[CommandLog]
WHERE CommandType LIKE '%Index%' AND StartTime > GETDATE()-10
GROUP BY Command
ORDER BY Rebuilt DESC

When I see ones that have a rebuilt number of 10, 9, or 8, I know those are rebuilding nearly every day.  I will adjust their fillfactors down so that they don't rebuild as often.  My sweet spot right now is once a week, but that doesn't work for every index (or environment).  This is where the art comes into the DBA world and I have to think about what is best for the system.  If I have already adjusted something down to 75 and it is still rebuilding frequently, I will adjust that number down more (65, then 55), and back up if it goes too low.  I also evaluate how much that index is used and how important it is to keep healthy. If it is hardly ever used, do I want to waste fillfactor space?
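
When I do decide to change one, the adjustment itself is just a rebuild with a new fillfactor (the table and index names here are made up):

-- Lower the fillfactor on an index that has been rebuilding every night
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
REBUILD WITH (FILLFACTOR = 75);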

Indexing is absolutely an art, but now I have a new brush for my kit by being able to query the CommandLog.

This post’s song is Particles by Nothing But Thieves

All of my friends say, “How much longer will it run?”

I have frequently needed to see how much longer a backup was going to run, or how much longer a restore was going to run.  This little bit of code is something I use to help me plan.  I replace "BACKUP" with "RESTORE" if I want to see how much longer to plan for a restore.  I also use it when I am tracking a rollback: I remove the WHERE clause and get a large list, and the one that has a percentage but no clear explanation is usually the one I want.  Sometimes the time remaining is a lie.  I have had it imply 4 hours when it took 13.  The comfort is that I can tell it is still working.


SELECT command, percent_complete,
'elapsed' = total_elapsed_time / 60000.0,          -- minutes elapsed so far
'remaining' = estimated_completion_time / 60000.0  -- estimated minutes remaining
FROM sys.dm_exec_requests
WHERE command like 'BACKUP%'
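
A small variation I sometimes sketch out (not part of the original query) also picks up restores and turns the estimate into a projected finish time:

SELECT command, percent_complete,
       total_elapsed_time / 60000.0        AS elapsed_minutes,
       estimated_completion_time / 60000.0 AS remaining_minutes,
       DATEADD(MILLISECOND, estimated_completion_time, GETDATE()) AS estimated_finish_time
FROM sys.dm_exec_requests
WHERE command LIKE 'BACKUP%'
   OR command LIKE 'RESTORE%'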

The song for this post is “Just Another Girl” by The Killers