Transactions follow me left and right but who did that over here?

I have been working my way through a fantastic training on SQL Internals and when I saw this trick, I had to write it down so I wouldn’t forget it.

Say a user comes to you: they dropped a table sometime yesterday, they don't remember when, and now they need it back. You could start the restore process and roll through logs until you see the drop, then restore to the hour before, or you could run this super cool query to get the time the table was dropped.

(Before I ran this, I set up a test database, created a table, filled it with stuff, took a full backup and a transaction log backup, dropped the table and then took another transaction log backup)

SELECT [Current LSN]
	,[Operation]
	,[Context]
	,[Transaction ID]
	,[Description]
	,[Begin Time]
	,[Transaction SID]
FROM fn_dblog(NULL, NULL)
INNER JOIN (SELECT [Transaction ID] AS tid
            FROM fn_dblog(NULL, NULL)
            WHERE [Transaction Name] LIKE 'DROPOBJ%') fd ON [Transaction ID] = fd.tid

See that Begin Time? We want to roll our logs forward to right before that transaction started. How cool is that?!!! The nearest possible point-in-time recovery, all because we read through the log to see when the drop occurred.
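If it helps to see how that Begin Time gets used, here is a rough sketch of the restore itself (the database name, file paths, and timestamp are all made up; plug in whatever the query above gives you):

-- Restore the last full backup without recovering, then roll the log forward
-- and stop just before the Begin Time of the DROPOBJ transaction.
RESTORE DATABASE TestDB
	FROM DISK = N'D:\Backups\TestDB_Full.bak'
	WITH NORECOVERY, REPLACE;

RESTORE LOG TestDB
	FROM DISK = N'D:\Backups\TestDB_Log1.trn'
	WITH STOPAT = '2022-07-19 14:31:00', RECOVERY;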

But this next part was the piece that blew my mind. What if I didn’t know who dropped the table, but wanted to talk to them so they didn’t do it again? I can add one more column to my query.

SELECT [Current LSN]
	,[Operation]
	,[Context]
	,[Transaction ID]
	,[Description]
	,[Begin Time]
	,[Transaction SID]
	,SUSER_SNAME([Transaction SID]) AS WhoDidIt
FROM fn_dblog(NULL, NULL)
INNER JOIN (SELECT [Transaction ID] AS tid
            FROM fn_dblog(NULL, NULL)
            WHERE [Transaction Name] LIKE 'DROPOBJ%') fd ON [Transaction ID] = fd.tid

I am passing that Transaction SID into the SUSER_SNAME built-in function.

Probably shouldn’t be surprised by that answer.

The song for this post is Left and Right by Charlie Puth.

You’re like the calmest failing job, I need you louder with a DPA alert…

I have an RDS instance where, when backups fail, we have no idea. We use DPA as one of our alerting systems, but with RDS, failing agent jobs are harder to find. I took the built-in RDS status stored procedure and adjusted it enough to send me an alert through DPA so I know when there are problems.

There are a couple of things I have already done: I set up my RDS instance to talk to DPA, granted my DPA user access to run what it needs in RDS, and set up a user in DPA that can email/call me when there is a problem. I used this post about DPA to help me get the basics done, and then I did a lot of trial and error on my code to get the alert working just right.

This is the basic status stored proc that is telling me if my backups are working or not:

exec msdb.dbo.rds_task_status @db_name='MyDatabaseName';

I tried a few things and finally realized I needed to drop it into a temp table so that I could filter it down to only get what I need:

CREATE TABLE #tempAlert 
(task_id	bigint
,task_type	varchar(200)
,database_name	varchar(200)
,[% complete] int
,[duration(mins)] int	
,lifecycle varchar(20)
,task_info	varchar(8000)
,last_updated	datetime2
,created_at	datetime2
,S3_object_arn	varchar(2000)
,overwrite_S3_backup_file	int
,KMS_master_key_arn	varchar(200)
,filepath	varchar(200)
,overwrite_file int);

INSERT INTO #tempAlert
exec msdb.dbo.rds_task_status @db_name='MyDatabaseName';

SELECT *
FROM #TempAlert
WHERE task_type = 'BACKUP_DB_DIFFERENTIAL' AND last_updated > GETDATE()-1 AND lifecycle <> 'SUCCESS';

DROP TABLE #TempAlert;

This code is specifically looking for any diff failures in the last day. If I want fulls, I switch task_type = ‘BACKUP_DB_DIFFERENTIAL’ to task_type = ‘BACKUP_DB’.

In DPA, I go to ALERTS >> Manage Alerts, select “Custom” for my Alert Category and “Custom SQL Alert – Multiple Numeric Return” for my Alert Type, then select “Create Alert”.

I gave it a descriptive name and changed my Execution Interval to once a day. I don’t want it firing all the time against my RDS instance and running up my bill, but I could run it more often if I wanted. I added some text to the “Notification Text” box that will be helpful to anyone who gets the alert. I selected the database instances I wanted this alert to execute against. In the “SQL Statement” box, I pasted the bit of code above that creates the temp table, runs the stored procedure and inserts the results into that temp table, then filters the results and finally drops the temp table.

Finally, in the Alert Level and Notifications section, I set the “Min” value of the “High” row to one, left the “Max” empty, and added my Notification Group. I tested it to make sure all the connections were working properly and then I saved it. “I am the backup, I am the failure and I am the alert”.

The song for this post is Matt Maeson – Mr. Rattlebone

Oh my my, yeah I’m loving extra tuning time. ‘Cause I’m a sucker for Auto Tuning life.

A few weeks ago, we were talking to a new employee about how much time we spend with Query Store and they asked, “Why aren’t you using the Auto Tuning?”

THE WHAT NOW?!!!!

This awesome, Enterprise-only feature has been a bit of trial and error for me.

Let’s start with turning it on. The only way I have found to turn it on is with T-SQL:

ALTER DATABASE <DatabaseThatNeedsTuning> SET AUTOMATIC_TUNING ( FORCE_LAST_GOOD_PLAN = ON ); 
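If you want to double-check that it took, you can peek at the automatic tuning options for the database (run this in the database you just altered; it should show FORCE_LAST_GOOD_PLAN with an actual state of ON):

SELECT name, desired_state_desc, actual_state_desc, reason_desc
FROM sys.database_automatic_tuning_options;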

Now, there are a couple of cool things that we can check behind the scenes to see what is driving the auto-tuning. Once Auto-Tuning is enabled, it will collect information that can be viewed by running this query:

SELECT *
FROM sys.dm_db_tuning_recommendations

The first column (name) is the query ID with “PR_” added to it. I like to read through the columns: the “reason” the plan was chosen, the current “state” of the plan, when it was initiated, and when it was reverted. All of this is fun for me to dig through to see which better plans my system is finding.

I also really like the Microsoft example with the JSON:

SELECT reason, score,
      script = JSON_VALUE(details, '$.implementationDetails.script'),
      planForceDetails.*,
      estimated_gain = (regressedPlanExecutionCount + recommendedPlanExecutionCount)
                  * (regressedPlanCpuTimeAverage - recommendedPlanCpuTimeAverage)/1000000,
      error_prone = IIF(regressedPlanErrorCount > recommendedPlanErrorCount, 'YES','NO')
FROM sys.dm_db_tuning_recommendations
CROSS APPLY OPENJSON (Details, '$.planForceDetails')
    WITH (  [query_id] int '$.queryId',
            regressedPlanId int '$.regressedPlanId',
            recommendedPlanId int '$.recommendedPlanId',
            regressedPlanErrorCount int,
            recommendedPlanErrorCount int,
            regressedPlanExecutionCount int,
            regressedPlanCpuTimeAverage float,
            recommendedPlanExecutionCount int,
            recommendedPlanCpuTimeAverage float
          ) AS planForceDetails;

Now to the other stuff. It isn’t perfect. Sometimes I have to manually go in and pin plans that are better than what the system is finding. If I manually pin a plan, it will honor it and not unpin or try to find a better plan for that query. It has helped me spend a bunch less time on tuning, but since many of my servers are on Standard Edition I am still using Query Store a lot.
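When I do pin one manually and I am not in the GUI, it is just the Query Store force-plan procedure (placeholders in the same style as above):

-- Force (pin) a specific plan for a query
EXEC sp_query_store_force_plan @query_id = <YourQueryId>, @plan_id = <YourPlanId>;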

Happy Tuning!

The song for this post is Oh My My by Blue October

And it was never a question, Query Store was crowing for repair. You gave it space and direction but you couldn’t keep it there…

Yes! Back to Query Store! I have had this problem for months where the Query Store on one of my databases grows by a gig each week! It completely fills up, goes into a read-only state (which sets off an alarm I built to tell me when it switches to read-only), and the only way I could get it working again was to add space. I would add a gig and think, “Surely that will be enough to feed the hunger”. The next week, the alarm would go off again and I would feed it again!

I adjusted how often stats were collected, how frequently data was flushed, the max plans per query and anything else I could think to do, and still, it was hungry.

I had searched, read, googled, and kept coming up with nothing. I finally found something on corruption in Query Store. CORRUPTION? Could it be possible? It was worth a try; my Query Store was in need of a serious diet and I still needed it to function.

The next time it went into read-only mode, I turned Query Store off (it has to be off to fix the corruption) and ran this:

EXEC sp_query_store_consistency_check;
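For context, the whole sequence looked roughly like this (the database name is made up, and sp_query_store_consistency_check is undocumented, so try it somewhere safe first):

USE HungryDB; -- made-up database name
GO
-- Query Store has to be off before running the consistency check
ALTER DATABASE HungryDB SET QUERY_STORE = OFF;
GO
EXEC sp_query_store_consistency_check;
GO
-- Turn Query Store back on in read/write mode
ALTER DATABASE HungryDB SET QUERY_STORE = ON;
ALTER DATABASE HungryDB SET QUERY_STORE (OPERATION_MODE = READ_WRITE);
GO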

Guess what happened next?!!!! My Query Store had a full gig free! I have left it alone for a few weeks, and today I was able to shrink it by 5 GB! It has been glorious to have it working and not be worried about why it was growing out of control.

The song for this post is Toad the Wet Sprocket’s Crowing

Now a personal note about Toad the Wet Sprocket. They are one of my favorite bands. Last night, as I was listening to “Crowing”, I looked up the lyrics to figure out one of the words and realized I had been singing along to the wrong words. I thought it was “crowing for her” when it is actually “crowing for repair”. That completely changed the meaning of the song for me and made me love it even more. It also made me realize I need to read Toad lyrics more often.

This also took my mind to the time that Ryan surprised me with tickets to see them at a reunion show in Vegas. After the show, fortune shone on me and I got to meet Glen Phillips, the lead singer. He was super kind and gracious and let us take a picture, and right after, I fan-girled out a lot and started crying while trying to tell him how much I appreciated his and the band’s music. Huge apology to all the people I have completely scared with a fan-girl episode. I promise I try not to; sometimes I just can’t put into words how important that moment is to me.

Give me a minute to restore my logs…

This is part 4 of my log-shipping journey, if you missed part 1, you can find it here, part 2 is here, and part 3 is here.

We have one more setup piece to finish before we do our restores. We need to tell my scripts which logfile is going to be the final one for now, so it can restore WITH STANDBY and I can read my databases.

UPDATE RF1
SET IsFinalLogFile = 1
FROM RestoreFile RF1
INNER JOIN (SELECT RF.Origin, MAX(RF.CreatedDate) AS CreatedDate
			FROM RestoreFile RF
			INNER JOIN (SELECT ForDatabase, MAX(CreatedDate) AS CreatedDate
						FROM [dbo].[RestoreFile]
						WHERE FileType = 'B'
						GROUP BY ForDatabase) BF ON RF.ForDatabase = BF.ForDatabase AND RF.CreatedDate > BF.CreatedDate
			WHERE RF.FileType = 'L'
			GROUP BY RF.Origin) RF2 ON RF1.Origin = RF2.Origin AND RF1.CreatedDate = RF2.CreatedDate

There are a few things going on here. First, I need to make sure that my logfile is more recent than my most recently restored full backup. Because I am restoring more than one database with this script, I have to join on both the ForDatabase and the CreatedDate (making sure it is greater than, or in the logfile case equal to, the date and time I need). When I find the last logfile that is newer than the full backup’s created date, I mark it with IsFinalLogFile = 1 (true).

Now we are ready to build our full database restores.

DECLARE @SQLToExecute nvarchar(max), @RestoreFileId int
DECLARE RestoreBackups CURSOR FAST_FORWARD
FOR
SELECT 
'RESTORE DATABASE ' + RF.ForDatabase + ' FROM DISK = N''' + RF.FileLocation + ''' WITH FILE =  1, MOVE N''' + MF.name + ''' TO N''D:\SQLData\' + RF.ForDatabase + '.mdf'', MOVE N''' + MF.name + '_log'' TO N''L:\SQLLogs\'+ RF.ForDatabase + '_log.ldf'', NORECOVERY,  NOUNLOAD,  REPLACE,  STATS = 5'
, Id

  FROM [dbo].[RestoreFile] RF
	INNER JOIN sys.databases d ON RF.ForDatabase = d.name
	INNER JOIN sys.master_files mf on d.database_id = mf.database_id and mf.type = 0
  WHERE RF.FileType = 'B' AND RF.IsApplied = 0
OPEN RestoreBackups
FETCH NEXT FROM RestoreBackups INTO @SQLToExecute, @RestoreFileId
WHILE (@@FETCH_STATUS = 0)
BEGIN
    EXEC sp_executeSQL @SQLToExecute
	
	UPDATE [dbo].[RestoreFile]  SET IsApplied = 1 WHERE Id = @RestoreFileId

    FETCH NEXT FROM RestoreBackups INTO @SQLToExecute, @RestoreFileId
END
CLOSE RestoreBackups
DEALLOCATE RestoreBackups
GO

I am using a cursor to run each restore one at a time. I make sure that I am only doing FileType = ‘B’, so I know they are full backups, and only ones that haven’t been applied yet. I am building my restore string to include a MOVE, since the location on the new server may be different from what was used in the past. I update the applied status as I do the restore so I won’t apply the same one twice; for this I am using the RestoreFileId, which lets me get at the specific record that I want. This is the main reason I wanted a new table. If I had stayed with one table, there is a chance I would have had duplicate filenames, which would have meant duplicate chances to restore records. It just made everything so much cleaner to make one a stage table and one a production table.

For the logfile restore, I followed a similar pattern:

DECLARE @SQLToExecute nvarchar(max), @RestoreFileId int
DECLARE RestoreBackups CURSOR FAST_FORWARD
FOR
SELECT 
'RESTORE LOG ' + RF.ForDatabase + ' FROM DISK = N''' + RF.FileLocation + ''' WITH NORECOVERY;'
, Id

  FROM [dbo].[RestoreFile] RF
	INNER JOIN sys.databases d ON RF.ForDatabase = d.name
	INNER JOIN sys.master_files mf on d.database_id = mf.database_id and mf.type = 0
	INNER JOIN	(SELECT ForDatabase, MAX(CreatedDate) AS CreatedDate
					FROM [dbo].[RestoreFile] RF
					WHERE RF.FileType = 'B' 
					GROUP BY ForDatabase) BF ON RF.ForDatabase = BF.ForDatabase AND RF.CreatedDate > BF.CreatedDate
  WHERE RF.FileType = 'L' AND RF.IsApplied = 0 AND RF.IsFinalLogFile = 0
  ORDER BY RF.ForDatabase, RF.CreatedDate
OPEN RestoreBackups
FETCH NEXT FROM RestoreBackups INTO @SQLToExecute, @RestoreFileId
WHILE (@@FETCH_STATUS = 0)
BEGIN

BEGIN TRY
    EXEC sp_executeSQL @SQLToExecute

		UPDATE [dbo].[RestoreFile]  SET IsApplied = 1 WHERE Id = @RestoreFileId
END TRY
BEGIN CATCH
    --SELECT   
        --ERROR_NUMBER() AS ErrorNumber,ERROR_MESSAGE() AS ErrorMessage
		UPDATE [dbo].[RestoreFile]  SET IsApplied = 0 WHERE Id = @RestoreFileId
END CATCH 

    FETCH NEXT FROM RestoreBackups INTO @SQLToExecute, @RestoreFileId
END
CLOSE RestoreBackups
DEALLOCATE RestoreBackups
GO

The pattern here is almost the same. I build my simple restore script, but only for FileType = ‘L’ (logfile), only if it hasn’t been applied, and only if it isn’t the final logfile. I also check to make sure the full backup has restored and is earlier than the logfile. Then I execute the string of SQL that I built and update the record to applied. I ended up having issues with IsApplied being marked as true even when the restore didn’t apply correctly, so I added error handling to change it back if that is the case. I may go back and add this to the full backup restore script.

Next, I run the script that will do my final logfile restore for each database:

DECLARE @SQLToExecute nvarchar(max), @RestoreFileId int
DECLARE RestoreBackups CURSOR FAST_FORWARD
FOR
SELECT 
'RESTORE LOG ' + RF.ForDatabase + ' FROM DISK = N''' + RF.FileLocation + ''' WITH STANDBY=''Z:\Standby\' + RF.ForDatabase + '_Standby.bak'';'
, Id

  FROM [dbo].[RestoreFile] RF
	INNER JOIN sys.databases d ON RF.ForDatabase = d.name
	INNER JOIN sys.master_files mf on d.database_id = mf.database_id and mf.type = 0
	INNER JOIN	(SELECT ForDatabase, MAX(CreatedDate) AS CreatedDate
					FROM [dbo].[RestoreFile] RF
					WHERE RF.FileType = 'B' 
					GROUP BY ForDatabase) BF ON RF.ForDatabase = BF.ForDatabase AND RF.CreatedDate > BF.CreatedDate
  WHERE RF.FileType = 'L' AND RF.IsApplied = 0 AND RF.IsFinalLogFile = 1
  ORDER BY RF.ForDatabase, RF.CreatedDate
OPEN RestoreBackups
FETCH NEXT FROM RestoreBackups INTO @SQLToExecute, @RestoreFileId
WHILE (@@FETCH_STATUS = 0)
BEGIN

BEGIN TRY
    EXEC sp_executeSQL @SQLToExecute

		UPDATE [dbo].[RestoreFile]  SET IsApplied = 1 WHERE Id = @RestoreFileId
END TRY
BEGIN CATCH
    --SELECT   
        --ERROR_NUMBER() AS ErrorNumber,ERROR_MESSAGE() AS ErrorMessage
		UPDATE [dbo].[RestoreFile]  SET IsApplied = 0 WHERE Id = @RestoreFileId
END CATCH 

    FETCH NEXT FROM RestoreBackups INTO @SQLToExecute, @RestoreFileId
END
CLOSE RestoreBackups
DEALLOCATE RestoreBackups
GO

The only differences between these last two are the “WITH STANDBY” instead of “WITH NORECOVERY” and the IsFinalLogFile being set to true.

Last, but not least, I am cleaning old records out of my RestoreFile table so it doesn’t get large. I debated how much data to keep and for me, it makes sense to delete anything older than two days. I am doing a full restore daily and if I change that to weekly, I will keep this data longer.

  DELETE
  FROM [Maintenance].[dbo].[RestoreFile]
  WHERE CreatedDate < GETDATE()-2

This process allows me to stay flexible. If I don’t get a new full backup, this process will keep restoring logfiles until a new full backup file appears. If the backup chain is broken, this process will also break, but it will work as long as that chain is healthy and it gets all the files it needs.

Whew! We did it! That is my full poor girl log shipping process.

The song for this post is George Ezra’s Hold My Girl.

Load quick, got a list of files that need to be in a table…

Yes, I am trying to do a version of log-shipping that doesn’t require linking any servers and allows them to be in different domains. I also know I am crazy.

Now that we have that out of the way, let me show you some of the stuff I am doing! Loading a table from multiple file locations using PowerShell. It reads what is in the directories and loads the file names into a table for me. (Oh my goodness, I knew PowerShell was cool, but it took me so long to figure this out that I was super proud of how fast it loaded.)

First, the housekeeping. Create two tables in a DBA Database:

CREATE TABLE [dbo].[LogshippingFile](
	[FileName] [varchar](500) NULL,
	[FileLocation] [varchar](1000) NULL,
	[FileType] [char](1) NULL,
	[Origin] [varchar](50) NULL,
	[ForDatabase] [varchar](50) NULL,
	[CreatedDate] [datetime2](7) NULL,
	[Applied] [bit] NOT NULL
) ON [PRIMARY]
GO

ALTER TABLE [dbo].[LogshippingFile] ADD  CONSTRAINT [DF_LogshippingFiles_Applied]  DEFAULT ((0)) FOR [Applied]
CREATE TABLE [dbo].[Config](
	[Origin] [varchar](50) NULL,
	[FileLocation] [varchar](500) NULL,
	[Type] [char](1) NULL
) ON [PRIMARY]

In the Config table, I am going to put the common name for the database I am restoring so the rows will look like this:

Origin     FileLocation               Type
Batman     J:\FullBackups\Batman      B
Batman     J:\LogBackups\Batman       L
Superman   J:\FullBackups\Superman    B
Superman   J:\LogBackups\Superman     L
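If it helps, loading those rows is just a plain insert into the Config table (values taken straight from the example above):

INSERT INTO [dbo].[Config] (Origin, FileLocation, [Type])
VALUES ('Batman',   'J:\FullBackups\Batman',   'B'),
       ('Batman',   'J:\LogBackups\Batman',    'L'),
       ('Superman', 'J:\FullBackups\Superman', 'B'),
       ('Superman', 'J:\LogBackups\Superman',  'L');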

Yay! Now we can get to the fun stuff. This is how I set up the PowerShell:



# Credentials for connection to SQL Server. 
$username = 'Batgirl'
$password = '!Pennyworth54'
#Connecting to the SQL Server and Query to get FileLocation and Origin from the Config table. 
$DS = Invoke-Sqlcmd -ServerInstance "BatCave01" -Database "DBAStuff" -Username $username -Password $password -Query "SELECT FileLocation, Origin FROM Config" -As DataSet

#Magic that puts my two columns into variables
$data = $DS.Tables[0].Rows
$data
ForEach ($Row in $data) {
$FileLocation = $Row.FileLocation 
$Origin = $Row.Origin



#Build my insert statement so Powershell knows what to expect. 
#gets all the filenames and loads them into the table. 
$sqlstatement=@'
INSERT INTO LogshippingFile
(
FileName,	
FileLocation, 
FileType,
Origin,
ForDatabase 
) 
VALUES (
	'{0}',
	'{1}',
	'{2}',
	'{3}',
	'{4}'

   
)
'@
Get-ChildItem -Recurse $FileLocation  | 
select Name,FullName, 
    @{
    Label='FileType'
    # Pulls the backup type letter out of the folder path; adjust the Substring index to match where that letter sits in your paths
    Expression={($FileLocation.Substring(10,1))}}, @{
    Label='Origin'
    Expression={($Origin)}}, @{
    Label='ForDatabase'
    Expression={'Gryphon_'+($Origin)}} |
   # @{Label="Length";Expression={$_.Length / 1024KB -as [int] }}  |
ForEach-Object {
          $SQL = $sqlstatement -f  $_.name,$_.FullName, $_.FileType, $_.Origin, $_.ForDatabase #,  $_.Length		
    Invoke-Sqlcmd -Username $username -Password $password -Query $SQL -ServerInstance "BatCave01" -Database "DBAStuff" -QueryTimeout 0
    }
    }

Now we have all this data in the table, what do we do with it? Guess you will have to tune in again to this same Bat channel.

The song for this post is Matt Nathanson’s Kiss Quick

Little Bit of Love, when your logs are rolling too much.

In January, the awesome Tim Radney (b|t) talked to the Utah user group about best practices. One that he mentioned was rolling over your error logs every day and keeping 35 logs (a month plus 3 reboots). I loved this idea and implemented it using what I had done here, adding it to an agent job.

Then I realized we didn’t have any alerts for whether our logs were rolling over too much. Way back in my career, this was something I would watch, because it could mean someone was trying to hack your system and cover their tracks by rolling your logs over a bunch. I fought so much with figuring out how to tell if my logs were rolling over that I had to save it for the future.

DROP TABLE IF EXISTS #EnumErrorLog;

CREATE TABLE #EnumErrorLog
(
    [Archive#] varchar(3) NOT NULL
        PRIMARY KEY CLUSTERED
    , [Date] datetime NOT NULL
    , [LogFileSizeByte] int NOT NULL
);

INSERT INTO #EnumErrorLog ([Archive#], [Date], [LogFileSizeByte])
EXEC sys.sp_enumerrorlogs;


SELECT CASE WHEN COUNT([Archive#]) >= 5 THEN 1 ELSE 0 END
FROM #EnumErrorLog
WHERE Date > DATEADD(hour, -3, GETDATE())

I create a temp table so I can execute the system stored proc, pull the information into the table, and select it back out. I run this alert check once an hour; if five or more logs have been created in the last three hours, it alerts me that something has rolled over too much (1 means alert, 0 means do nothing). I am using a third-party tool right now, but I bet this could be set up with native SQL alerts or agent jobs.
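If you wanted the agent job version instead of a third-party tool, a rough sketch of the job step might look like this (it assumes Database Mail is already configured, and the profile name and recipient are made up):

-- Email if the error log has rolled over 5 or more times in the last 3 hours
DECLARE @RolloverCount int;

DROP TABLE IF EXISTS #EnumErrorLog;

CREATE TABLE #EnumErrorLog
(
    [Archive#] varchar(3) NOT NULL PRIMARY KEY CLUSTERED
    , [Date] datetime NOT NULL
    , [LogFileSizeByte] int NOT NULL
);

INSERT INTO #EnumErrorLog ([Archive#], [Date], [LogFileSizeByte])
EXEC sys.sp_enumerrorlogs;

SELECT @RolloverCount = COUNT([Archive#])
FROM #EnumErrorLog
WHERE [Date] > DATEADD(hour, -3, GETDATE());

IF @RolloverCount >= 5
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'DBA Mail Profile',      -- made-up profile name
        @recipients = 'dba-team@example.com',    -- made-up recipient
        @subject = 'Error log is rolling over too much',
        @body = 'The SQL Server error log has rolled over 5 or more times in the last 3 hours.';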

The song for this post is Little Bit of Love by JP Cooper, it makes me smile even on the toughest days. *hugs*

Sometimes I get so high, so low, where did all my good plans go?

Greetings! Today I was playing with query store and noticed that I had some failing forced plans. How do you find failing forced plans? I asked this question over and over and finally found an answer.

I started on my main database and ran this query to look at query store:

SELECT *
FROM sys.query_store_plan
WHERE is_forced_plan = 1 and force_failure_count > 0

I had over 20 of them that were failing! Next, I had to figure out how to unforce the failing plans. Some of them were so old, they wouldn’t come up when I tried to look for them using the plan id in the GUI. I did more digging and found this:

EXEC sp_query_store_unforce_plan @query_id = <YourQueryId>, @plan_id = <YourPlanId>;

It was incredibly satisfying to watch each of the rows in the first query disappear as I ran them through the second query. This is now on my list of things to check so I can have a clean and healthy query store.
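Since both query_id and plan_id come back in that first query, you could also generate all of the unforce commands in one go (just a convenience sketch):

SELECT 'EXEC sp_query_store_unforce_plan @query_id = ' + CAST(query_id AS varchar(20))
     + ', @plan_id = ' + CAST(plan_id AS varchar(20)) + ';'
FROM sys.query_store_plan
WHERE is_forced_plan = 1 AND force_failure_count > 0;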

This post’s song is High Low by The Unlikely Candidates

I’m Going Under and This Time DBCC Can’t Save Me…

Over the weekend I was troubleshooting an issue and found that part of it was being caused by corruption on one of my databases.  I have seen DBCC CheckDB fix corruption a few times and thought, “No problem, I’ve got this.”

I ran:


ALTER DATABASE CorruptDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
BEGIN TRANSACTION;
DBCC CHECKDB ('CorruptDB', REPAIR_REBUILD);
ALTER DATABASE CorruptDB SET MULTI_USER;
COMMIT

I waited for it to finish because the rule is, “Let it finish”.

There were still errors. So I ran it again.

The same errors appeared, saying that some errors were waiting on others to be fixed first.  But nothing had been fixed, and my heart sank.  I had a moment of panic as I furiously googled.  I saw post after post telling me to restore the backup… Did I mention the small problem that the corruption had prevented the backups from completing for this database, but didn’t throw an error? My backups were over two weeks old and a rollback was not an option.

I found a post by Paul Randal from 2006 talking about when DBCC CheckDB won’t solve the issue.  He mentioned that sometimes it is just an index or statistic that is corrupted.  This gave me hope and I started tracing my error.  I found it was on a Change Data Capture table that is being used in our replication.  I began pondering an idea.  I removed replication and ran the DBCC CheckDB.  It was clean! No corruption! I breathed and took a backup. Then I added replication back on and ran the DBCC CheckDB again.  It was clean again! Everything was healthy and fine!

Thank you to everyone for your past posts that save my sanity and thank you for DBCC CheckDB!!!

 

The song for this post is Someone You Loved by Lewis Capaldi

If I need to rearrange my fragmentation, I will for you….

Oh my goodness, I have been buried and learning tons about indexing.  Something super cool that I learned was how to know if you need to adjust the fillfactors on your indexes.  My whole DBA career I have usually set it to 85 and forgotten it. I learned that I can check my CommandLog and see how often each index is being rebuilt.  I am currently using Ola (Hallengren’s maintenance solution), and when my indexes are rebuilt for maintenance (nightly in this case), it is all logged to a table named CommandLog in the master database.  If an index is rebuilt every day, adjust the fillfactor down (75) so that it will rebuild less often.  If it nearly never rebuilds, adjust the fillfactor up (90) so it gets rebuilt once in a while and you avoid wasting space in the index. Here is the query I am using to see how often an index has been rebuilt in the last 10 days.


SELECT Command, COUNT(StartTime) AS Rebuilt
FROM master.[dbo].[CommandLog]
WHERE CommandType LIKE '%Index%' AND StartTime > GETDATE()-10
GROUP BY Command
ORDER BY Rebuilt DESC

When I see ones that have a rebuilt number of 10, 9, or 8, I know those are rebuilding nearly every day.  I will adjust their fillfactors down so that I don’t rebuild as often.  My sweet spot right now is once a week, but that doesn’t work for every index (or environment).  This is where the art comes into the DBA world and I have to think about what is best for the system.  If I have already adjusted something down to 75 and it is still rebuilding frequently, I will adjust that number down more (65, then 55), and back up if I have gone too low.  I also evaluate how much that index is used and how important it is to keep healthy. If it is hardly ever used, do I want to waste fillfactor space?
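The actual adjustment is just a rebuild with a new fillfactor (the index and table names here are made up; use whichever one the CommandLog points you at):

-- Rebuild a frequently rebuilding index with a lower fillfactor
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
REBUILD WITH (FILLFACTOR = 75);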

Indexing is absolutely an art, but now I have a new brush for my kit by being able to query the CommandLog.

This post’s song is Particles by Nothing But Thieves