Give me a minute to restore my logs…

This is part 4 of my log-shipping journey. If you missed part 1, you can find it here; part 2 is here, and part 3 is here.

We have one more setup piece to finish before we do our restores. We need to tell my scripts which log file will be the final one for now, so it can be restored WITH STANDBY and I can read my databases.

UPDATE RF1
SET IsFinalLogFile = 1
FROM RestoreFile RF1
    INNER JOIN (SELECT RF.Origin, MAX(RF.CreatedDate) AS CreatedDate
                FROM RestoreFile RF
                    INNER JOIN (SELECT ForDatabase, MAX(CreatedDate) AS CreatedDate
                                FROM [dbo].[RestoreFile]
                                WHERE FileType = 'B'
                                GROUP BY ForDatabase) BF
                        ON RF.ForDatabase = BF.ForDatabase AND RF.CreatedDate > BF.CreatedDate
                WHERE RF.FileType = 'L'
                GROUP BY RF.Origin) RF2
        ON RF1.Origin = RF2.Origin AND RF1.CreatedDate = RF2.CreatedDate

There are a few things going on here. First, I need to make sure that my log file is more recent than my most recently restored full backup. Because I am restoring more than one database with this script, I have to join on both ForDatabase and CreatedDate (making sure the log file's created date is greater than the full backup's, and matching it exactly back to the row I want to flag). When I find the last log file that is newer than the full backup's created date, I mark it with IsFinalLogFile = 1 (true).
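If you want to double-check what got flagged, a quick query like this (against the same RestoreFile table) shows the file that will be restored WITH STANDBY for each database:

-- Which file is currently marked as the final log file for each database?
SELECT ForDatabase, FileLocation, CreatedDate
FROM dbo.RestoreFile
WHERE IsFinalLogFile = 1
ORDER BY ForDatabase, CreatedDate;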

Now we are ready to build our full database restores.

DECLARE @SQLToExecute nvarchar(max), @RestoreFileId int

-- One row per full backup that hasn't been applied yet: the RESTORE DATABASE statement plus the RestoreFile Id.
DECLARE RestoreBackups CURSOR FAST_FORWARD
FOR
SELECT 'RESTORE DATABASE ' + RF.ForDatabase + ' FROM DISK = N''' + RF.FileLocation
       + ''' WITH FILE = 1, MOVE N''' + MF.name + ''' TO N''D:\SQLData\' + RF.ForDatabase + '.mdf'''
       + ', MOVE N''' + MF.name + '_log'' TO N''L:\SQLLogs\' + RF.ForDatabase + '_log.ldf'''
       + ', NORECOVERY, NOUNLOAD, REPLACE, STATS = 5'
     , RF.Id
FROM [dbo].[RestoreFile] RF
    INNER JOIN sys.databases d ON RF.ForDatabase = d.name
    INNER JOIN sys.master_files MF ON d.database_id = MF.database_id AND MF.type = 0 -- data file row supplies the logical names for the MOVEs
WHERE RF.FileType = 'B' AND RF.IsApplied = 0

OPEN RestoreBackups
FETCH NEXT FROM RestoreBackups INTO @SQLToExecute, @RestoreFileId
WHILE (@@FETCH_STATUS = 0)
BEGIN
    EXEC sp_executesql @SQLToExecute

    -- Mark this backup as restored so it won't be applied twice.
    UPDATE [dbo].[RestoreFile] SET IsApplied = 1 WHERE Id = @RestoreFileId

    FETCH NEXT FROM RestoreBackups INTO @SQLToExecute, @RestoreFileId
END
CLOSE RestoreBackups
DEALLOCATE RestoreBackups
GO

I am using a cursor to run each restore one at a time. I filter on FileType = 'B' so I know they are full backups, and on IsApplied = 0 so I only pick up the ones that haven't been applied yet. I build my restore string to include a MOVE, since the file locations on the new server may be different from what was used in the past. I update the applied status as I do each restore so I won't apply the same backup twice; for this I use the RestoreFileId, which lets me target the specific record I want. This is the main reason I wanted a new table. If I had stayed with one table, there is a chance I would have had duplicate filenames, which would have meant duplicate chances to restore the same file. It made everything much cleaner to have one staging table and one production table.
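To make that more concrete, here is roughly what one of the generated strings looks like. The database name, logical file names, and backup file name are made up for illustration, and the statement is wrapped here for readability (the cursor builds it as one line):

RESTORE DATABASE Batman
FROM DISK = N'J:\FullBackups\Batman\Batcentral$Alfred_Batman_FULL_20210610_220000.bak'
WITH FILE = 1,
     MOVE N'Batman' TO N'D:\SQLData\Batman.mdf',
     MOVE N'Batman_log' TO N'L:\SQLLogs\Batman_log.ldf',
     NORECOVERY, NOUNLOAD, REPLACE, STATS = 5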

For the logfile restore, I followed a similar pattern:

DECLARE @SQLToExecute nvarchar(max), @RestoreFileId int

DECLARE RestoreBackups CURSOR FAST_FORWARD
FOR
SELECT 'RESTORE LOG ' + RF.ForDatabase + ' FROM DISK = N''' + RF.FileLocation + ''' WITH NORECOVERY;'
     , RF.Id
FROM [dbo].[RestoreFile] RF
    INNER JOIN sys.databases d ON RF.ForDatabase = d.name
    INNER JOIN sys.master_files mf ON d.database_id = mf.database_id AND mf.type = 0
    INNER JOIN (SELECT ForDatabase, MAX(CreatedDate) AS CreatedDate
                FROM [dbo].[RestoreFile]
                WHERE FileType = 'B'
                GROUP BY ForDatabase) BF
        ON RF.ForDatabase = BF.ForDatabase AND RF.CreatedDate > BF.CreatedDate
WHERE RF.FileType = 'L' AND RF.IsApplied = 0 AND RF.IsFinalLogFile = 0
ORDER BY RF.ForDatabase, RF.CreatedDate

OPEN RestoreBackups
FETCH NEXT FROM RestoreBackups INTO @SQLToExecute, @RestoreFileId
WHILE (@@FETCH_STATUS = 0)
BEGIN
    BEGIN TRY
        EXEC sp_executesql @SQLToExecute

        UPDATE [dbo].[RestoreFile] SET IsApplied = 1 WHERE Id = @RestoreFileId
    END TRY
    BEGIN CATCH
        --SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage
        -- If the restore failed, make sure the file stays marked as not applied so it gets retried.
        UPDATE [dbo].[RestoreFile] SET IsApplied = 0 WHERE Id = @RestoreFileId
    END CATCH

    FETCH NEXT FROM RestoreBackups INTO @SQLToExecute, @RestoreFileId
END
CLOSE RestoreBackups
DEALLOCATE RestoreBackups
GO

The pattern here is almost the same. I build my simple restore script, but only for FileType = 'L' (log file), only if it hasn't been applied, and only if it isn't the final log file. I also check that the full backup has been restored and is earlier than the log file. Then I execute the string of SQL that I built and update the record to applied. I ended up having issues with IsApplied being marked true even when the restore didn't apply correctly, so I added error handling to change it back when that happens. I may go back and add this to the full backup restore script as well.
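If I ever want to know why a log restore failed instead of just resetting the flag, the CATCH block could also capture the error text. This is only a sketch; the ErrorMessage column does not exist in my table and would need to be added first:

BEGIN CATCH
    -- Hypothetical: requires adding an ErrorMessage varchar column to RestoreFile.
    UPDATE [dbo].[RestoreFile]
    SET IsApplied = 0,
        ErrorMessage = ERROR_MESSAGE()
    WHERE Id = @RestoreFileId
END CATCH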

Next, I run the script that will do my final logfile restore for each database:

DECLARE @SQLToExecute nvarchar(max), @RestoreFileId int

DECLARE RestoreBackups CURSOR FAST_FORWARD
FOR
SELECT 'RESTORE LOG ' + RF.ForDatabase + ' FROM DISK = N''' + RF.FileLocation + ''' WITH STANDBY=''Z:\Standby\' + RF.ForDatabase + '_Standby.bak'';'
     , RF.Id
FROM [dbo].[RestoreFile] RF
    INNER JOIN sys.databases d ON RF.ForDatabase = d.name
    INNER JOIN sys.master_files mf ON d.database_id = mf.database_id AND mf.type = 0
    INNER JOIN (SELECT ForDatabase, MAX(CreatedDate) AS CreatedDate
                FROM [dbo].[RestoreFile]
                WHERE FileType = 'B'
                GROUP BY ForDatabase) BF
        ON RF.ForDatabase = BF.ForDatabase AND RF.CreatedDate > BF.CreatedDate
WHERE RF.FileType = 'L' AND RF.IsApplied = 0 AND RF.IsFinalLogFile = 1
ORDER BY RF.ForDatabase, RF.CreatedDate

OPEN RestoreBackups
FETCH NEXT FROM RestoreBackups INTO @SQLToExecute, @RestoreFileId
WHILE (@@FETCH_STATUS = 0)
BEGIN
    BEGIN TRY
        EXEC sp_executesql @SQLToExecute

        UPDATE [dbo].[RestoreFile] SET IsApplied = 1 WHERE Id = @RestoreFileId
    END TRY
    BEGIN CATCH
        --SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage
        UPDATE [dbo].[RestoreFile] SET IsApplied = 0 WHERE Id = @RestoreFileId
    END CATCH

    FETCH NEXT FROM RestoreBackups INTO @SQLToExecute, @RestoreFileId
END
CLOSE RestoreBackups
DEALLOCATE RestoreBackups
GO

The only differences between these last two scripts are the "WITH STANDBY" instead of "WITH NORECOVERY" and the filter on IsFinalLogFile = 1.
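For a made-up database, the statement this cursor generates would look roughly like this (wrapped for readability; the standby undo file lands in the Z:\Standby folder from the script above):

RESTORE LOG Batman
FROM DISK = N'J:\LogBackups\Batman\Batcentral$Alfred_Batman_LOG_20210610_224501.trn'
WITH STANDBY = 'Z:\Standby\Batman_Standby.bak';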

Last, but not least, I am cleaning old records out of my RestoreFile table so it doesn’t get large. I debated how much data to keep and for me, it makes sense to delete anything older than two days. I am doing a full restore daily and if I change that to weekly, I will keep this data longer.

  DELETE
  FROM [Maintenance].[dbo].[RestoreFile]
  WHERE CreatedDate < GETDATE()-2

This process allows me to stay flexible. If I don’t get a new full backup, this process will keep restoring logfiles until a new full backup file appears. If the backup chain is broken, this process will also break, but it will work as long as that chain is healthy and it gets all the files it needs.

Whew! We did it! That is my full poor girl log shipping process.

The song for this post is George Ezra’s Hold My Girl.

Starting now, is the wrong date for insert…

This is part 2 of my log-shipping journey. If you missed part 1, you can find it here.

I collected all of the file names I need, but you will notice I left the dates off. When my files are moved from one domain to another, their created dates are changed. I needed the real dates to get the correct restore order and to match the backups to the logs. If I were a Dark Knight Powershell master, I am sure I would have figured out how to do it there. Every time I started to get close, I would have a production issue or another distraction that needed my time. In the end I landed in my happy place, so we are fixing the dates in the database!

How do I get the right date for a file when the created date is being changed after it has moved? I was super lucky that the date is being stored in the filename too! (Huge thank you to Ola for his awesome database maintenance solution.)

An example of my filename is this:

Batcentral$Alfred_Batman_LOG_20210610_224501.trn

This is how I dissect the filename to get the date and time from it:

UPDATE [DBAStuff].[dbo].[LogshippingFile]
SET CreatedDate = CAST(SUBSTRING(FileName, LEN(FileName) - 18, 8) + ' '
                     + SUBSTRING(FileName, LEN(FileName) - 9, 2) + ':'
                     + SUBSTRING(FileName, LEN(FileName) - 7, 2) + ':'
                     + SUBSTRING(FileName, LEN(FileName) - 5, 2) AS DATETIME)
WHERE CreatedDate IS NULL

My filenames are different lengths, which means the dates won't always start at the same position. Instead, I go to the end of the string and count backwards, because the date and time at the end of the name are always consistent. Then I add all the parts back together to get my datetime and update it into my table.
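Using the example filename from above (48 characters long, so the date starts at position 30), the pieces line up like this:

DECLARE @FileName varchar(500) = 'Batcentral$Alfred_Batman_LOG_20210610_224501.trn';

SELECT CAST(SUBSTRING(@FileName, LEN(@FileName) - 18, 8)        -- '20210610'
     + ' ' + SUBSTRING(@FileName, LEN(@FileName) - 9, 2)        -- '22' (hour)
     + ':' + SUBSTRING(@FileName, LEN(@FileName) - 7, 2)        -- '45' (minute)
     + ':' + SUBSTRING(@FileName, LEN(@FileName) - 5, 2)        -- '01' (second)
     AS DATETIME) AS CreatedDate;                               -- 2021-06-10 22:45:01.000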

Are we done yet? Nope, there is more.

The song for this post is Toad the Wet Sprocket's Starting Now

Load quick, got a list of files that need to be in a table…

Yes, I am trying to do a version of log-shipping that doesn’t require linking any servers and allows them to be in different domains. I also know I am crazy.

Now that we have that out of the way, let me show you some of the stuff I am doing! Loading a table from multiple file locations using Powershell. It will go read what is in the directories and load the file names into a table for me (oh my goodness, I knew Powershell was cool, but it took me so long to figure this out that I was super proud of how fast it loaded).

First, the housekeeping. Create two tables in a DBA Database:

CREATE TABLE [dbo].[LogshippingFile](
	[FileName] [varchar](500) NULL,
	[FileLocation] [varchar](1000) NULL,
	[FileType] [char](1) NULL,
	[Origin] [varchar](50) NULL,
	[ForDatabase] [varchar](50) NULL,
	[CreatedDate] [datetime2](7) NULL,
	[Applied] [bit] NOT NULL
) ON [PRIMARY]
GO

ALTER TABLE [dbo].[LogshippingFile] ADD  CONSTRAINT [DF_LogshippingFiles_Applied]  DEFAULT ((0)) FOR [Applied]
CREATE TABLE [dbo].[Config](
	[Origin] [varchar](50) NULL,
	[FileLocation] [varchar](500) NULL,
	[Type] [char](1) NULL
) ON [PRIMARY]

In the Config table, I am going to put the common name for the database I am restoring so the rows will look like this:

Origin     FileLocation               Type
Batman     J:\FullBackups\Batman      B
Batman     J:\LogBackups\Batman       L
Superman   J:\FullBackups\Superman    B
Superman   J:\LogBackups\Superman     L
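Populating it is nothing fancy; the inserts look something like this (paths and origins are just my example values):

INSERT INTO dbo.Config (Origin, FileLocation, [Type])
VALUES ('Batman',   'J:\FullBackups\Batman',   'B'),
       ('Batman',   'J:\LogBackups\Batman',    'L'),
       ('Superman', 'J:\FullBackups\Superman', 'B'),
       ('Superman', 'J:\LogBackups\Superman',  'L');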

Yay! Now we can get to the fun stuff. This is how I set up the Powershell:



# Credentials for connection to SQL Server. 
$username = 'Batgirl'
$password = '!Pennyworth54'
#Connecting to the SQL Server and Query to get FileLocation and Origin from the Config table. 
$DS = Invoke-Sqlcmd -ServerInstance "BatCave01" -Database "DBAStuff" -Username $username -Password $password -Query "SELECT FileLocation, Origin FROM Config" -As DataSet

#Magic that puts my two columns into variables
$data = $DS.Tables[0].Rows
$data
ForEach ($Row in $data) {
$FileLocation = $Row.FileLocation 
$Origin = $Row.Origin



#Build my insert statement so Powershell knows what to expect. 
#gets all the filenames and loads them into the table. 
$sqlstatement=@'
INSERT INTO LogshippingFile
(
FileName,	
FileLocation, 
FileType,
Origin,
ForDatabase 
) 
VALUES (
	'{0}',
	'{1}',
	'{2}',
	'{3}',
	'{4}'

   
)
'@
Get-ChildItem -Recurse $FileLocation  | 
select Name,FullName, 
    @{
    Label='FileType'
    Expression={($FileLocation.Substring(10,1))}}, @{
    Label='Origin'
    Expression={($Origin)}}, @{
    Label='ForDatabase'
    Expression={'Gryphon_'+($Origin)}} |
   # @{Label="Length";Expression={$_.Length / 1024KB -as [int] }}  |
ForEach-Object {
          $SQL = $sqlstatement -f  $_.name,$_.FullName, $_.FileType, $_.Origin, $_.ForDatabase #,  $_.Length		
    Invoke-sqlcmd -Username $username -Password $password -Query $SQL -ServerInstance "BatCave01" -database "DBAStuff" -Querytimeout 0
    }
    }
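Once the script has run, a quick sanity check in T-SQL shows whether every origin and file type made it into the table (assuming the LogshippingFile table created above):

SELECT Origin, FileType, COUNT(*) AS FileCount
FROM dbo.LogshippingFile
GROUP BY Origin, FileType
ORDER BY Origin, FileType;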

Now we have all this data in the table, what do we do with it? Guess you will have to tune in again to this same Bat channel.

The song for this post is Matt Nathanson’s Kiss Quick

Query times just keep fallin’!

So if I can’t modify or add indexes and I can’t change code, how do I get my query times to drop? Query Store to the rescue.

I love to tune queries. I feel so satisfied to see the times dropping on my server as I tune things. Recently, I have been tracking my Batch Requests per Second and my Instance Waits to see if I am making improvements when I tune. It has been awesome!

What I am going to show you today is how I dig into my query store to find those misbehaving queries and make their performance better. First I go into a database’s properties to make sure my query store is turned on:

There are a ton of best practice posts on query store sizes and settings; we aren't talking about that in this post. This is just an example of what it can look like, with the most important part being that the Operation Mode (Actual) at the top says "Read Write".

If the query store was just barely turned on, give it at least 30 minutes to run before expecting it to show much. It may even take up to a day to get a good idea of what is happening.

My two favorite reports are Top Resource Consuming Queries and Regressed Queries.

I start by looking at the Top Resource Consuming Queries (I use the same process for Regressed Queries).

I will click on a bar in the chart and it will turn green. Then I will hit the “track query” button to the right of the “refresh” button just above the chart.

This query was more stable yesterday. I will look at all the plans available (I will even select "Configure" in the top right corner and change the window to look at a full month instead of the last day to see if I can find a better plan). I will then click the plan I like best and select the "Force Plan" button. SQL Server will verify the plan number and ask if I am sure; I check that it is the right one and then select "yes". Next I will refresh until I see my newly pinned plan show up. If my plan doesn't show up but a new plan does, I will compare the new plan with the one I wanted. If they have the same shape, I know that is what SQL Server is putting in place for my forced plan, and I will force the new one instead to keep it consistent.

Now I watch and see how the new forced plan behaves. If it goes horribly wrong, I will unforce it. If it is healthy and doing what I want, I move to the next query to see if I can help it.
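If you would rather script it than click through the reports, forcing and unforcing can also be done with the query store procedures. The query and plan ids below are placeholders; they come from the report or from sys.query_store_plan:

-- Force the plan I like (ids are placeholders)
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 1773;

-- Undo it if the forced plan misbehaves
EXEC sp_query_store_unforce_plan @query_id = 42, @plan_id = 1773;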

Disclaimer: Query store won’t solve all the problems, but it can be a way to temporarily fix performance issues. I keep an eye on my pinned queries and make sure I watch for when changes hit the system. I also have alarms for when things start to run long so I can quickly diagnose performance issues. If Indexes are changed or code is modified, it can affect forced plans.

The song for this post is Fallin' by Why Don't We; it makes me want to dance every time I hear it.

Sometimes I get so high, so low, where did all my good plans go?

Greetings! Today I was playing with query store and noticed that I had some failing forced plans. How do you find failing forced plans? I asked this question over and over and finally found an answer.

I started on my main database and ran this query to look at query store:

SELECT *
FROM sys.query_store_plan
WHERE is_forced_plan = 1 and force_failure_count > 0

I had over 20 of them that were failing! Next, I had to figure out how to unforce the failing plans. Some of them were so old, they wouldn’t come up when I tried to look for them using the plan id in the GUI. I did more digging and found this:

EXEC sp_query_store_unforce_plan @query_id = <your query_id>, @plan_id = <your plan_id>;

It was incredibly satisfying to watch each of the rows in the first query disappear as I ran them through the second query. This is now on my list of things to check so I can have a clean and healthy query store.
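If you want to see what the failing plans actually belong to (handy when the GUI can't find them anymore), the plan can be joined back to the query text, and the reason column tells you why the force failed. A sketch:

SELECT qt.query_sql_text,
       p.query_id,
       p.plan_id,
       p.force_failure_count,
       p.last_force_failure_reason_desc
FROM sys.query_store_plan p
    INNER JOIN sys.query_store_query q ON p.query_id = q.query_id
    INNER JOIN sys.query_store_query_text qt ON q.query_text_id = qt.query_text_id
WHERE p.is_forced_plan = 1 AND p.force_failure_count > 0;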

This post’s song is High Low by The Unlikely Candidates

I’m Going Under and This Time DBCC Can’t Save Me…

Over the weekend I was troubleshooting an issue and found that part of it was being caused by corruption on one of my databases.  I have seen DBCC CheckDB fix corruption a few times and thought, “No problem, I’ve got this.”

I ran:


ALTER DATABASE CorruptDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB ('CorruptDB', REPAIR_REBUILD);
ALTER DATABASE CorruptDB SET MULTI_USER;

I waited for it to finish because the rule is, “Let it finish”.

There were still errors. So I ran it again.

The same errors appeared, saying that some errors were waiting on others to be fixed first. But nothing had been fixed, and my heart sank. I had a moment of panic as I furiously googled. I saw post after post telling me to restore the backup…Did I mention the small problem that the corruption had prevented the backups from completing for this database, but didn't throw an error? My backups were over two weeks old, and rolling back was not an option.

I found a post by Paul Randal from 2006 talking about when DBCC CheckDB won’t solve the issue.  He mentioned that sometimes it is just an index or statistic that is corrupted.  This gave me hope and I started tracing my error.  I found it was on a Change Data Capture table that is being used in our replication.  I began pondering an idea.  I removed replication and ran the DBCC CheckDB.  It was clean! No corruption! I breathed and took a backup. Then I added replication back on and ran the DBCC CheckDB again.  It was clean again! Everything was healthy and fine!
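For anyone trying to trace a similar error: the CHECKDB output reports an object id, and that id can be turned into a name so you can see which table (in my case a Change Data Capture table) is actually the problem. Something like this, with the object id being a placeholder for whatever the error message gives you:

-- Run CHECKDB and keep only the error messages
DBCC CHECKDB ('CorruptDB') WITH NO_INFOMSGS, ALL_ERRORMSGS;

-- Replace 123456789 with the object id from the error message
SELECT OBJECT_NAME(123456789, DB_ID('CorruptDB')) AS ProblemObject;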

Thank you to everyone for your past posts that save my sanity and thank you for DBCC CheckDB!!!

 

The song for this post is Someone You Loved by Lewis Capaldi

All the Masking in the World Can Maybe Cover Your Dirty Laundry….

I have spent the last week learning about new features in SQL Server 2016 and one that I want to play with is Dynamic Data Masking (DDM).

What is data masking? It is a way to change or hide sensitive data. If I want to hide an email address that is Batgirl@DC.com, I could either change it to Batwoman@Heros.com using masking software, or I could use DDM to cover it like this: BXXXXX@XXXXXX.com. I can also determine how many letters I want to cover with the masking in DDM.
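As a quick taste of what that looks like before the deeper dive next month, masks are applied per column; the table and column names here are made up just to show the syntax, and the partial() function is what lets you choose how many characters stay visible:

-- Hypothetical table and column names, just to show the syntax
ALTER TABLE dbo.Heroes
    ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');  -- shows first letter, masks the rest

ALTER TABLE dbo.Heroes
    ALTER COLUMN SecretIdentity ADD MASKED WITH (FUNCTION = 'partial(1, "XXXXX", 0)');  -- keep first character only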

If you want to permanently mask it for security purposes and force it to never link back to your production data, SQL Server Dynamic Data Masking (DDM) is not for you. The built-in feature only applies a mask over the top; it doesn't actually change the data that is stored in the database. Think of SQL Server's version of data masking like a Halloween mask that sits on your face, as opposed to plastic surgery that will forever change the way you look.

SQL Server's DDM will mask data to the users that you set up to see the mask. This is helpful for reporting or for curious people who want to look at data they shouldn't be viewing. It will not hide the data from privileged users. It will not protect your data from someone taking a backup and restoring it somewhere else (if you want that, try Always Encrypted instead). As a side note, DDM and Always Encrypted won't work together on the same column.

Now let’s get ready to play with Dynamic Data Masking in SQL Server.  (Coming next month)

Today’s song is Dirty Laundry by Carrie Underwood.

Mayday this is an emergency, my linked server is using ODBC to connect to an AG…

Availability Groups (AG) and Linked servers can get really tricky. Especially if you are dealing with multi-subnet failover.  The only way we have figured out how to do this is with ODBC. Here is how.

First, we have to create an ODBC connection on our SQL Server. The single server in the picture below is the server we are going to create the ODBC connection and the linked server on. It will go over to the AG Listener.

[image: ag-pic]

First we are creating an ODBC Connection on our server that is going to link to the AG.

[image: odbc1]

Make sure to be in the System DSN section. Click Add

[image: odbc2]

We can pick either ODBC Driver 11 or 13.  This is a separate driver install that we can get here.  We want to install the driver and then we will see it as an option in the screen above.  Click Finish.

[image: odbc3]

Give it whatever name and description you want, but save the name for later. The server should be the name of the AG Listener. Click Next.

[image: odbc4]

Technically we don’t have to put in a login and password, but I like to test the login and password that I am going to use for the linked server. It won’t be saved here. Click Next.

[image: odbc5]

Here is the magic part: make sure to check multi-subnet failover. That is what will make the connection automatically fail over between the two nodes. Click Next, test the connection, and then click Finish. The ODBC connection is ready to be used by the linked server connection. Let's build that part now.

Go into SQL Server Management Studio and under Server Objects, right click on Linked Servers and select “New Linked Server”.

[image: odbc6]

The provider needs to be OLE DB Provider for ODBC Drivers. Remember the name we gave the ODBC connection?  We are going to use that here. Then go to the Security Tab.

[image: odbc7]

This is where we put in the login and it will be stored here.  I also make sure this user name and password is on both Nodes of the AG with the permissions that I need. Click Server Options

[image: odbc8]

The above is what I need, but I check that I am only giving access to what is needed and not more.  When we click ok, it will test our connection.  If everything works with no errors, we are ready to go.
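For reference, the same linked server can be scripted instead of clicking through the GUI. This is only a sketch; the linked server name, DSN name, and credentials below are made up for illustration, and would be whatever you chose above:

-- Linked server over the ODBC DSN (MSDASQL = OLE DB Provider for ODBC Drivers)
EXEC master.dbo.sp_addlinkedserver
     @server = N'BatListener',          -- name you want for the linked server
     @srvproduct = N'',
     @provider = N'MSDASQL',
     @datasrc = N'MyOdbcDsnName';       -- the System DSN created above

-- Store the login the linked server should use
EXEC master.dbo.sp_addlinkedsrvlogin
     @rmtsrvname = N'BatListener',
     @useself = N'False',
     @locallogin = NULL,
     @rmtuser = N'Batgirl',
     @rmtpassword = N'!Pennyworth54';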

Some of the problems we have noticed are with querying tables that have big datatypes like time(3-7), timestamp, and a few others. Casting or converting the datatypes doesn't help. If we pull the table into a view without the big datatype columns, we are able to query the view from another server, but never the base table. It has been a bit frustrating, but we are still hopeful that we can find a solution or that Microsoft will fix ODBC connections. If there is a better way to do this, please reach out to me. We have things we need to solve and could use some help.

The song for this post is Mayday by Cam.

Coming soon: Count down to PASS Summit 2016 with more pictures from PASS Summit 2015. Watch twitter and the Magic Mirrors page for more.

Takes as Long as it Takes SQL, Break on Me, I have a DAC…

The DAC, what is it? It is the Dedicated Administrator Connection. Basically, it is a way to connect to your SQL Server when all the other connections are tied up. But it takes a little bit of pre-planning so that you can use it when things go bad with your SQL Server. Let's enable it so you can test using it and know that it is there in the future.


EXEC sp_configure 'remote admin connections', 1;
GO
RECONFIGURE
GO

I was always afraid to use the DAC because I thought I had to use the command line, and I am terrible with cmd. But did you know you can connect to it through Management Studio? Now I can feel right at home when I troubleshoot a sick server. To connect to a server using the DAC, put "admin:" in front of the name of the server like this:

[image: DAC]

I use my elevated Windows login for this, but you can use whatever sysadmin account you have.

There are a few words of caution. You can only have one connection to the DAC on each server at a time, so don't try to connect Object Explorer to it. I have SQL Prompt and noticed that my server would send an alert on a severity 20 error because SQL Prompt was trying to connect at the same time and failing. Just be aware that you might get an error, but if everything is correct you can look at the connection at the bottom of the query window and see you are connected. If someone else is using the DAC, you won't be able to use the connection, so it is a good idea to always close it when you are done.
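If Management Studio itself can't get a connection, sqlcmd can also open the DAC with the -A switch (or the same admin: prefix on the server name). A quick sketch, using my example server name:

sqlcmd -S BatCave01 -E -A -Q "SELECT session_id, login_name, status FROM sys.dm_exec_sessions WHERE session_id = @@SPID"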

Kendra Little b|t has a fantastic post for Brent Ozar Unlimited about the DAC and most importantly to me, how to tell who is connected.  This is the code that she wrote to help identify who is connected:


SELECT
    CASE WHEN ses.session_id = @@SPID THEN 'It''s me! '
         ELSE '' END
    + COALESCE(ses.login_name, '???') AS WhosGotTheDAC,
    ses.session_id,
    ses.login_time,
    ses.status,
    ses.original_login_name
FROM sys.endpoints AS en
    JOIN sys.dm_exec_sessions ses ON en.endpoint_id = ses.endpoint_id
WHERE en.name = 'Dedicated Admin Connection'

Now when SQL Server Breaks on you, there is a tool that prevents us from shattering like glass.

Hey what’s that thing I can’t remember…determining RAM amounts

I am frequently trying to determine how much RAM I can give to SQL Server without starving the OS. Since my servers usually only have SQL Server on them and no other applications, I can give them everything except 2-4 GB for the OS. Depending on what that server is doing and whether it still looks hungry, I will give the OS more or SQL Server more. The question is, if I can't remote onto the server, how do I know how much RAM it has? After much searching and calculating, I have come up with a T-SQL query to help. It will tell you how much RAM is on the server your instance is running on.

SELECT physical_memory_kb / 1024.0 / 1024.0 AS PhysicalMemoryGB
FROM sys.dm_os_sys_info

 

It uses the system views, whose awesomeness I have barely begun to scratch the surface of, but I plan to learn more.
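If you also want to compare that number to what SQL Server is currently allowed to use, sys.configurations holds the max server memory setting (in MB). A quick way to see both side by side:

SELECT (SELECT physical_memory_kb / 1024.0 / 1024.0 FROM sys.dm_os_sys_info) AS PhysicalMemoryGB,
       (SELECT CAST(value_in_use AS bigint) / 1024.0
        FROM sys.configurations
        WHERE name = 'max server memory (MB)') AS MaxServerMemoryGB;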