Listen to the AG that’s tricking you…

 

Another AG (Availability Group) post? Yes, I learned something new and it must be cataloged. When you are failing AGs back and forth quickly and a major indexing job kicks off in the middle, a transaction can be forced to roll back. That rollback may take a REALLY long time, even if you were only on the node for 10 minutes and the large transaction had only been running for about 5 of them. When I failed back to my preferred primary node and the AG Dashboard didn’t go completely green, I got worried. Why in the world would it not go green? I had just failed over to the preferred secondary, applied a patch (see? I learned.), and was failing back. It had been green when I started, green when I failed over to the secondary, and now one of my biggest databases was not synchronizing on the primary… *sigh*

I panicked. In this situation I would normally pull the database out of the AG and then re-add it. I didn’t have that option because it is a HUGE database and I didn’t have the time or space to move it around. I knew a large transaction had kicked off (thank you, alert email that I created to warn me about such things) but thought that surely the rollback would have cleared quickly. That led me to look for rolling-back transactions.

I ran this on the alarming secondary node:

SELECT R.session_id, R.command ,R.status, R.percent_complete
FROM sys.dm_exec_requests R
WHERE R.command IN ('killed/rollback','rollback')

To my surprise, there were no results. Nothing was killed or rolling back; or was it? I ran the query again, but this time without the WHERE clause.

SELECT R.session_id, R.command ,R.status, R.percent_complete
FROM sys.dm_exec_requests R

I saw one command listed as “UNKNOWN TOKEN” with a percent_complete of about 5%, and that percentage was rising. I theorized that this was my rolling-back process and that when it finished, my AG would be healthy again. The system isn’t used overnight, we had started the maintenance in the late afternoon, and it was the secondary node in trouble, so I had time to test my theory. It was an agonizing 8 hours as I kept checking the percent_complete all evening. It finally completed and the AG went green.

My lesson learned: when my AG isn’t healthy and I have already resumed data movement, before I pull the database out of the AG, I need to check for processes that have a percent_complete on the secondary node, something like the query below. Being patient is really hard, but it is necessary with AGs.
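
A sketch of that check, using only sys.dm_exec_requests; I filter on progress rather than on the “UNKNOWN TOKEN” command text, since that text is just what I happened to see in my case:

-- Watch anything on the secondary that reports progress,
-- including the rollback hiding behind "UNKNOWN TOKEN".
SELECT R.session_id, R.command, R.status, R.percent_complete,
       R.estimated_completion_time / 60000 AS estimated_minutes_left -- rough estimate
FROM sys.dm_exec_requests R
WHERE R.percent_complete > 0
ORDER BY R.percent_complete DESC;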

The song that goes with this post is Listen to the Man.

It’s different for Availability Groups when service packs get broke…

Last week I did a few things wrong. The good news is that I learned from them and can now keep myself from repeating them.

So I have this AG. It is kind of important, hence the AG part, but after 5 PM I have two hours that it can be down, or so I thought. We recently added new functionality that requires it to be up all the time. I have been applying SQL Server 2014 SP2 to all of my 2014 servers, all of which are in AGs, and it has been super easy: less than 10 minutes of work and only a reboot of downtime. (I am a little old fashioned and always reboot after a service pack.) My mind said, “Sweet, you can get this done and no one will notice, and if you do both at the same time and delay the reboot on one of them, there will be no downtime.” That was my first mistake. My second one was starting a few minutes early; I was excited to have it done because it was my last round of service packs. My final mistake was not realizing that SSISDB is considered a system database and should NEVER be in an availability group. It may seem super awesome because you can keep your SSIS catalog completely in sync, and you could maybe do it if you plan to remove it before doing service packs or any kind of upgrade to the server, but as far as setting it and forgetting it goes, you are in trouble if you do it.

Here are a few of the fun errors that we saw.

“Script level upgrade for database ‘master’ failed because upgrade step ‘SSIS_hotfix_install.sql’ encountered error 942, state 4, severity 25. This is a serious error condition which might interfere with regular operation and the database will be taken offline. If the error happened during upgrade of the ‘master’ database, it will prevent the entire SQL Server instance from starting. Examine the previous errorlog entries for errors, take the appropriate corrective actions and re-start the database so that the script upgrade steps run to completion.”

“Cannot recover the master database. SQL Server is unable to run. Restore master from a full backup, repair it, or rebuild it. For more information about how to rebuild the master database, see SQL Server Books Online.”

Huge thank you to all the people who have blogged about Trace Flag 902. It allowed us to start SQL Server, find the errors, pull SSISDB out of the AG (see the sketch after the steps below), and get the service pack to finish running, and then everything was happy and great. Here is the list of steps from the Microsoft KB article:

Enable trace flag 902 on the instance of SQL Server. To do this, follow these steps:

  1. Open SQL Server Configuration Manager.
  2. In SQL Server Configuration Manager, click SQL Server Services.
  3. Double-click the SQL Server service.
  4. In the SQL Server Properties dialog box, click the Advanced tab.
  5. On the Advanced tab, locate the Startup Parameters item.
  6. Add ;-T902 to the end of the existing string value, and then click OK.
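
Once the instance is up under trace flag 902, getting SSISDB out of the Availability Group is a one-liner on the primary replica. A minimal sketch, assuming an AG named [MyAG] (yours will be different):

-- Run on the primary replica while the instance is started with -T902.
-- [MyAG] is a placeholder for your Availability Group name.
ALTER AVAILABILITY GROUP [MyAG] REMOVE DATABASE [SSISDB];

After the upgrade scripts finish, remember to take -T902 back out of the startup parameters and restart the service normally.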

Lessons Learned:

  1. Don’t start earlier than you say you are going to start.
  2. Don’t do both sides of an AG at the same time. Do the passive one first and make sure everything is up and working and the AG is healthy before you do the primary node. Verify that failover to the passive node works so that you have no downtime.
  3. Don’t get too comfortable with Service Packs and Cumulative Updates. They are still a big change even though they are pretty stable.
  4. Don’t have any system databases in the AG while applying patches, and know that SSISDB counts as a system database (a quick check query follows this list).
  5. An awesome team that can back you up and help you troubleshoot can make all the difference. It is amazing to have a boss who believes in you and encourages you to keep going even when you want to give up and go cry in the closet.
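
And here is the check query promised in lesson 4. It is a minimal sketch using the Availability Group catalog views; it just lists every database that belongs to an AG on the instance so anything surprising, like SSISDB, stands out before you patch:

-- List every database that is part of an Availability Group on this instance.
SELECT ag.name AS availability_group,
       adc.database_name
FROM sys.availability_groups AS ag
JOIN sys.availability_databases_cluster AS adc
    ON ag.group_id = adc.group_id
ORDER BY ag.name, adc.database_name;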

Also, for those of you following along at home who want to know what song goes with this post: It’s Different for Girls.

Happy Service Packing!

Nothing can stop me, nothing holds me back from changing recovery mode and getting development on track…

Greetings and other salutations,

Today I found out that part of the development environment was in the “Full” recovery model. This means that if someone isn’t taking log backups, the transaction logs get huge, and it also means that the backups were much bigger than they should have been. They don’t need point-in-time recovery in our development environment, so we decided to move the databases to the “Simple” recovery model. This could have been a big all-day job if I had gone through the GUI, but you know me, I found a way to script it out and thought I would share it. I am showing you how to do it on one server at a time:

Connect to your development server in the master database and run this query to see how many are in “Full Recovery”:

 SELECT name, recovery_model_desc
 FROM sys.databases
 WHERE recovery_model_desc = 'FULL'

When I ran it on one of my servers, there were 24 databases that needed to be adjusted. So I built this:

 SELECT 'ALTER DATABASE [' + name + '] SET RECOVERY SIMPLE ;'
 FROM sys.databases
 WHERE recovery_model_desc = 'Full'

Then I took the results from that query, copied them into a new query window, and ran them, and just like that, all of my development databases were in “Simple” recovery. I ran the first query one more time to make sure everything updated as expected.
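
If you would rather not copy and paste at all, the same idea works in one pass with dynamic SQL. This is only a sketch: the database_id filter that skips the system databases is my own addition, and the PRINT is there so you can review what is about to run:

 -- Build all of the ALTER statements and run them in one batch (development only!).
 DECLARE @sql nvarchar(max) = N'';

 SELECT @sql += N'ALTER DATABASE ' + QUOTENAME(name) + N' SET RECOVERY SIMPLE;'
             + CHAR(13) + CHAR(10)
 FROM sys.databases
 WHERE recovery_model_desc = 'FULL'
   AND database_id > 4; -- skip master, tempdb, model, and msdb

 PRINT @sql; -- review before it runs
 EXEC sys.sp_executesql @sql;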

It is a beautiful thing. I hope this helps you clean up development too!

I can make your logs clap…

The SQL Server error log has this nasty habit of getting big when I am not looking. There are only two ways to keep it at a normal size. One is to stop and start your SQL Server instance (reboot, restart, or stop and start), and the other is to run this handy little script:


EXEC sp_cycle_errorlog;

This will end the current log and start a new one. Why does this matter? The SQL Server error log holds information about your backups, failed logins, SQL errors, edition information, and other fun stuff. The bigger it is, the longer it will take SQL Server to load it into memory so that you can read it. Usually when you need to read it, you are already in trouble, so the slower it loads, the more stressed you will be.

What is a good size? I usually try to get it to roll over at around 10 MB. I use a monitoring tool, and when the large error log alert is triggered, I have it run sp_cycle_errorlog for me, so mine always stay a healthy size. You don’t need fancy tools to do this, though. If you know roughly how fast your logs grow, you can set up a SQL Agent job to run it on a schedule to keep your logs healthy, something like the sketch below.
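
Here is a rough sketch of such a job. The job name and the weekly Sunday-night schedule are made up, so adjust them to match how fast your own log grows:

USE msdb;
GO
-- Create a job that cycles the SQL Server error log once a week.
EXEC dbo.sp_add_job
    @job_name = N'Cycle SQL Error Log';

EXEC dbo.sp_add_jobstep
    @job_name = N'Cycle SQL Error Log',
    @step_name = N'Run sp_cycle_errorlog',
    @subsystem = N'TSQL',
    @command = N'EXEC sp_cycle_errorlog;',
    @database_name = N'master';

EXEC dbo.sp_add_jobschedule
    @job_name = N'Cycle SQL Error Log',
    @name = N'Weekly - Sunday night',
    @freq_type = 8,              -- weekly
    @freq_interval = 1,          -- Sunday
    @freq_recurrence_factor = 1, -- every week
    @active_start_time = 230000; -- 11:00 PM

-- Attach the job to the local server so the Agent will actually run it.
EXEC dbo.sp_add_jobserver
    @job_name = N'Cycle SQL Error Log';
GO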

How many logs should I keep? That is completely up to you, but since I keep my logs so small, I try to keep 15 of them. Why so many? I do it so I can go back and look at issues further back in time if needed. You can adjust the number you keep by right-clicking on SQL Server Logs in SSMS and selecting “Configure”.

(Screenshot: the Configure SQL Server Error Logs dialog.)
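
If you would rather script that retention setting than click through the dialog, the value the dialog changes lives in the registry, and xp_instance_regwrite can set it. It is undocumented, so treat this as a sketch and the dialog as the supported route:

USE master;
GO
-- Keep 15 error logs before the oldest one is recycled
-- (the same setting as the Configure dialog above).
EXEC xp_instance_regwrite
    N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'NumErrorLogs',
    REG_DWORD,
    15;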

Super cool, but what about the Agent error logs? There is a script for them as well!


USE msdb;
GO
EXEC sp_cycle_agent_errorlog;

See? Healthy and Happy Logs! Your Logs will be clapping with joy.

Hello Darkness my old friend, I can talk to you again because my Availability Group is quiet…

We have a lovely Availability Group that holds A LOT of data, broken into 42 partitions, and data usually moves between them daily. The index rebuilds on them were making our logs HUGE because the Availability Group was taking too long to catch up; we tried both synchronous and asynchronous mode and would see all kinds of errors. We were doing horrible things like auto-shrinking our transaction log after the indexing finished and ignoring alarms during the time the database was rebuilding. We had requested more and more space from our storage team, and sometimes the job wouldn’t even finish because it ran out of space. Our first idea was to split out the index rebuilds so that we could do one partition at a time.

It looks like this:


ALTER INDEX [IndexName]
ON [dbo].[TableName]
REBUILD PARTITION = 42;

By splitting this out, we were able to get the job to finish, but with tons of alarms, and log growth.

Then we had a thought: maybe the server is just spinning too fast, and we need to give the Availability Group time to catch up. So we added some simple waits in between each step.


WAITFOR DELAY '00:10';

The delay is in hours:minutes format, so this example waits 10 minutes before running the next step.
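
Putting the two ideas together, the maintenance job ends up looking roughly like the sketch below. The index and table names are placeholders, and 42 is simply our partition count:

-- Rebuild one partition at a time and pause between each one so the
-- Availability Group can catch up. Index and table names are placeholders.
DECLARE @partition int = 1;
DECLARE @sql nvarchar(max);

WHILE @partition <= 42
BEGIN
    SET @sql = N'ALTER INDEX [IndexName] ON [dbo].[TableName] REBUILD PARTITION = '
             + CAST(@partition AS nvarchar(10)) + N';';

    EXEC sys.sp_executesql @sql;

    WAITFOR DELAY '00:10'; -- give the Availability Group ten minutes to catch up

    SET @partition += 1;
END;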

 

It has now been a quiet week and we are looking forward to the Sound of Silence.

Takes as Long as it Takes SQL, Break on Me, I have a DAC…

The DAC, what is it? It is the Dedicated Administrator Connection. Basically, it is a way to connect to your SQL Server when all the other connections are tied up. But it takes a little bit of pre-planning so that you can use it when things go bad with your SQL Server. Let’s enable it so you can test using it and know that it is there in the future.


EXEC sp_configure 'remote admin connections', 1;
GO
RECONFIGURE
GO

I was always afraid to use the DAC because I thought I had to use the command line, and I am terrible with cmd. But did you know you can connect to it through Management Studio? Now I can feel right at home when I troubleshoot a sick server. To connect to a server using the DAC, put “admin:” in front of the server name, like this:

(Screenshot: connecting in SSMS with admin: in front of the server name.)

I use my elevated Windows login at this point, but you can use whatever sysadmin account you have.

There are a few words of caution. You can only have one DAC connection to each server at a time, so don’t try to connect Object Explorer to it. I have SQL Prompt and noticed that my server would send a severity 20 error alert because SQL Prompt was trying to connect at the same time and failing. Just be aware that you might get an error, but if everything is correct, you can look at the connection shown at the bottom of the query window and see that you are connected. If someone else is using the DAC, you won’t be able to connect, so it is a good idea to always close it when you are done.

Kendra Little b|t has a fantastic post for Brent Ozar Unlimited about the DAC and, most importantly to me, how to tell who is connected. This is the code that she wrote to help identify who is connected:


SELECT
    CASE
        WHEN ses.session_id = @@SPID THEN 'It''s me! '
        ELSE ''
    END + COALESCE(ses.login_name, '???') AS WhosGotTheDAC,
    ses.session_id,
    ses.login_time,
    ses.status,
    ses.original_login_name
FROM sys.endpoints AS en
JOIN sys.dm_exec_sessions AS ses
    ON en.endpoint_id = ses.endpoint_id
WHERE en.name = 'Dedicated Admin Connection';

Now when SQL Server Breaks on you, there is a tool that keeps you from shattering like glass.