Hello Darkness my old friend, I can talk to you again because my Availability Group is quiet…

We have a lovely Availability Group that holds A LOT of data broken into partitions.  We have 42 partitions, and data is usually moving between them daily.  The index rebuilds on them were making our logs HUGE because the Availability Group was taking too long to catch up; we tried both Synchronous and Asynchronous modes and saw all kinds of errors.  We were doing horrible things like auto-shrinking our transaction log after the indexing finished and ignoring alarms while the database was rebuilding.  We had requested more and more space from our storage team, and sometimes the job wouldn’t even finish because it ran out of space.  Our first idea was to split out the index rebuilds so that we could do one partition at a time.

It looks like this:


ALTER INDEX [IndexName]
ON [dbo].[TableName]
REBUILD PARTITION = 42;

By splitting this out, we were able to get the job to finish, but with tons of alarms and log growth.

Then we had a thought: maybe the server was just spinning too fast, and we needed to give the Availability Group time to catch up.  So we added some simple waits in between each step.


WAITFOR DELAY '00:10';

The delay here uses the hh:mm format, so this will wait for 10 minutes before running the next step.
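We actually run these as separate job steps, but if you wanted the whole thing in a single script, a minimal sketch could look like this ([IndexName], [TableName], and the partition count of 42 are placeholders; tune the delay to whatever lets your Availability Group keep up):

DECLARE @partition INT = 1;
DECLARE @sql NVARCHAR(MAX);

WHILE @partition <= 42
BEGIN
    -- Build the rebuild statement for the current partition.
    SET @sql = N'ALTER INDEX [IndexName] ON [dbo].[TableName] REBUILD PARTITION = '
             + CAST(@partition AS NVARCHAR(10)) + N';';
    EXEC sys.sp_executesql @sql;

    -- Give the Availability Group time to catch up before the next one.
    WAITFOR DELAY '00:10'; -- hh:mm, 10 minutes

    SET @partition += 1;
END;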


It has now been a quiet week and we are looking forward to the Sound of Silence.

Takes as Long as it Takes SQL, Break on Me, I have a DAC…

The DAC, what is it? It is the Dedicated Administrator Connection. Basically, it is a way to connect to your SQL Server when all the other connections are tied up. But it takes a little bit of pre-planning so that you can use it when things go bad with your SQL Server.  Let’s enable it so you can test using it and know that it is there in the future.


EXEC sp_configure 'remote admin connections', 1;
GO
RECONFIGURE
GO
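If you want to double-check that the change took, the setting is visible in sys.configurations; value_in_use should come back as 1:

SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'remote admin connections';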

I was always afraid to use the DAC because I thought I had to use the command line, and I am terrible with cmd. But did you know you can connect to it through Management Studio? Now I can feel right at home when I troubleshoot a sick server.  To connect to a server using the DAC, put “admin:” in front of the name of the server, like this:

admin:YourServerName

I use my elevated Windows login for this, but you can use whatever sysadmin account you have.

There are a few words of caution. You can only have one DAC connection to each server at a time, so don’t try to connect Object Explorer to it. I have SQL Prompt and noticed that my server would send an alert for a severity 20 error because SQL Prompt was trying to connect at the same time and failing. Just be aware that you might get an error, but if everything is correct you can look at the connection information at the bottom of the query window and see you are connected.  If someone else is using the DAC, you won’t be able to use the connection, so it is a good idea to always close it when you are done.

Kendra Little (b|t) has a fantastic post for Brent Ozar Unlimited about the DAC and, most importantly to me, how to tell who is connected.  This is the code that she wrote to help identify who is connected:


SELECT CASE WHEN ses.session_id = @@SPID THEN 'It''s me! '
            ELSE '' END
       + COALESCE(ses.login_name, '???') AS WhosGotTheDAC,
       ses.session_id,
       ses.login_time,
       ses.status,
       ses.original_login_name
FROM sys.endpoints AS en
JOIN sys.dm_exec_sessions AS ses
    ON en.endpoint_id = ses.endpoint_id
WHERE en.name = 'Dedicated Admin Connection';

Now when SQL Server breaks on you, there is a tool that prevents us from shattering like glass.

If Crazy = Genius, I’m rebuilding one partition at a time!

We have had an index job that has been failing for a while.  This is one of those things you really don’t want to clean up because no one is complaining, but you know you should.  I had heard that I could rebuild one partition at a time, but where to start?  Today, I worked my way through it, so here it is so that you can do it too.

First you need to find the biggest indexes; there is a good chance those are the ones that live on partitions. I am excluding primary keys.


SELECT i.[name] AS IndexName,
       SUM(ps.[used_page_count]) * 8 AS IndexSizeKB
FROM sys.dm_db_partition_stats AS ps
INNER JOIN sys.indexes AS i
    ON ps.[object_id] = i.[object_id]
   AND ps.[index_id] = i.[index_id]
WHERE i.name NOT LIKE 'PK%'
GROUP BY i.[name]
ORDER BY IndexSizeKB DESC;

The top ones are most likely the ones you want to focus on.  Next, we need to track down the partition function.


SELECT * FROM sys.partition_functions

Next we are going to figure out which partition we want to rebuild.


SELECT sprv.value AS [Value], sprv.boundary_id AS [ID]
FROM sys.partition_functions AS spf
INNER JOIN sys.partition_range_values AS sprv
    ON sprv.function_id = spf.function_id
WHERE spf.name = N'NameOfFunctionFromPartitionFunctions'
ORDER BY [ID] ASC;

Pick the number of the partition you want to rebuild.  Keep in mind that boundary values map to partition numbers differently depending on whether the function is RANGE LEFT or RANGE RIGHT, so the partition number can be offset by one from the boundary_id.
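If you want to double-check which partition a given value lands in, the $PARTITION function can help. A minimal sketch, reusing the function name from above with a made-up boundary value (the argument type has to match the partition function’s parameter):

-- Returns the partition number the value 20170101 would map to
-- for the hypothetical partition function named above.
SELECT $PARTITION.NameOfFunctionFromPartitionFunctions(20170101) AS PartitionNumber;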

Next use your index name from earlier.  You will also need the table name and the partition number.


ALTER INDEX [IndexName]
ON [dbo].[TableName]
REBUILD PARTITION = 3;

This will rebuild just the partition that you requested.
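If you want to confirm the rebuild did its job (or pick your next target), sys.dm_db_index_physical_stats breaks fragmentation out by partition. A quick sketch using the hypothetical table name from above:

-- Fragmentation per partition for every index on the table.
SELECT index_id, partition_number, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.TableName'), NULL, NULL, 'LIMITED')
ORDER BY partition_number;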

Strip it down and remove the bad query plan

Today I got to play with some really bad queries. But the queries weren’t necessarily bad; it was more that they had bad plans. I thought I had already blogged about it and tried to find my code. Alas, it wasn’t there, so let’s strip it down on how you would remove a bad query plan. I am leaving out the troubleshooting part of how to determine if it is a bad plan, because so much of it “depends”.
First you have to find the bad query plan.  Get a unique line from your query and paste it in the query below.

USE master;
GO

SELECT usecounts, cacheobjtype, objtype, [text], plan_handle
FROM sys.dm_exec_cached_plans
CROSS APPLY sys.dm_exec_sql_text(plan_handle)
WHERE usecounts > 1
AND [text] LIKE '%Unique part of query%' -- put the unique part of the query here
ORDER BY usecounts DESC;

Now, copy the plan handle and paste it over the plan handle that I have listed here:

SELECT * FROM sys.dm_exec_query_plan (0x060001004DE4D526F0BEA28F05000);

If you click on the query_plan link, you can see what the plan looks like.  After you have reviewed it and determined that the plan is bad, you can paste your plan handle over the one below to remove it from the proc cache.

DBCC FREEPROCCACHE (0x060001004DE4D526F0BEA28F05000)

There you have it. Best of luck with your bad plans.

For one good, naughty little girl found a diamond…Object Explorer Details

It’s Christmas time again, and time to listen to my FAVORITE Christmas song, Joel the Lump of Coal.
Just before Thanksgiving we had our SQLSaturday\Big Mountain Data event, and I spoke! This was my third time speaking at this event, and every year I regret speaking and feel like everyone would have been better off in another session; every year, that is, until this one! I loved my session, and I will actually be submitting it to PASS Summit this year. Keep your fingers crossed with me. It was on SQL Server Management Studio tips and tricks.

One of the tips that I was super surprised many people didn’t know is the Object Explorer Details window. It allows you to delete multiple objects at once, script out multiple objects at once, and just do some really cool stuff. How do I access this magic, you ask? In Management Studio, click View >> Object Explorer Details.


Now you can have a diamond that will help you too!

And you said you are unconsolable…clean up before you leave

Greetings,

I am approaching my last day at my current job. I love it here and will be really sad to leave, but I have an awesome opportunity to grow my knowledge and career with a company on my “want to work for” list.

There are a lot of things to take care of before I leave.  I have been updating documentation (with memes) so that it is useful and fun.  I am trying to wrap up all my tickets and outstanding items, and last night I woke up and realized I was the owner of some databases.  This is how I fixed it:

I launched a query window on my Central Management Server to save time, but you can run this on one server at a time if you want.  I used the syntax from sp_helpdb to find out what I wanted to query:

SELECT name,
       ISNULL(SUSER_SNAME(sid), '~~UNKNOWN~~') AS Owner,
       CONVERT(NVARCHAR(11), crdate) AS Created,
       dbid,
       cmptlevel
FROM master.dbo.sysdatabases
WHERE SUSER_SNAME(sid) = 'domain\MyUserName'

Some of the applications in my environment run under a special user, and I didn’t want to interfere with those; I just wanted to fix the ones owned by me.  Then I borrowed some code from Brent Ozar:

SELECT 'ALTER AUTHORIZATION ON DATABASE::[' + name + '] TO sa;'
FROM master.dbo.sysdatabases
WHERE SUSER_SNAME(sid) = 'domain\MyUserName'
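Running that generates one statement per database I own, ready to copy and run on the server; with a hypothetical database name, each row of output looks like this:

ALTER AUTHORIZATION ON DATABASE::[MyHypotheticalDatabase] TO sa;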

Here’s one I run on the CMS to find any SQL Agent jobs that I own across my enterprise; then I can run the update scripts it generates on the individual servers.


SELECT 'EXEC msdb.dbo.sp_update_job' + CHAR(13)
     + '    @job_name = ' + CHAR(39) + [name] + CHAR(39) + ',' + CHAR(13)
     + '    @owner_login_name = ' + CHAR(39) + 'sa' + CHAR(39) + ';'
FROM msdb.dbo.sysjobs
WHERE SUSER_SNAME(owner_sid) = 'domain\MyUserName'
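As with the database script, this generates one statement per job; with a made-up job name, the output looks like:

EXEC msdb.dbo.sp_update_job
    @job_name = 'My Hypothetical Job',
    @owner_login_name = 'sa';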

What you gonna do? Memory, I’m coming for you!

Today I needed to quickly check the min and max memory settings on 68 SQL Server instances. I didn’t have time to go through each one, and I know I will need to do this again in the future. Thank goodness I have my Central Management Server configured with all those servers. I was able to connect to my main CMS server and run this simple query that tells me every server’s min and max memory settings:

SELECT ServerName, [Max], [Min]
FROM
(
    SELECT @@SERVERNAME AS ServerName,
           LEFT(name, 3) AS Memory,
           value
    FROM sys.configurations
    WHERE name LIKE '%server memory%'
) AS SourceTable
PIVOT
(
    MAX(value)
    FOR Memory IN ([Max], [Min])
) AS PivotTable;

Then, I also used this code from my last post but added a server name to it so I could see what memory was available on each server.

SELECT @@SERVERNAME AS ServerName,
       physical_memory_kb * 9.5367431640625E-7 AS PhysicalMemoryGB
FROM sys.dm_os_sys_info

Hey what’s that thing I can’t remember…determining RAM amounts

I am frequently trying to determine how much RAM I can give to SQL Server without starving the OS.  Since my servers usually only have SQL Server on them and no other applications, I can give them everything except 2-4 GB for the OS. Depending on what that server is doing and whether it still looks hungry, I will give the OS more or SQL Server more.  The question is, if I can’t remote on to the server, how do I know how much RAM is on it? After much searching and calculating, I have come up with a T-SQL query to help. It will tell you how much RAM (in GB) is on your server.

SELECT physical_memory_kb * 9.5367431640625E-7 AS PhysicalMemoryGB -- 9.5367431640625E-7 = 1/1048576, converts KB to GB
FROM sys.dm_os_sys_info


It uses the system views, whose awesomeness I have barely begun to scratch the surface of, but I plan to learn more.
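If you want the subtraction done for you too, here is a minimal sketch that assumes you are leaving 4 GB for the OS (my usual starting point) and expresses the answer in MB, since that is the unit max server memory is set in:

-- Total RAM in MB minus 4096 MB (4 GB) reserved for the OS; adjust to taste.
SELECT (physical_memory_kb / 1024) - 4096 AS SuggestedMaxServerMemoryMB
FROM sys.dm_os_sys_info;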

We could be immortals, just not for long when using a duplicate delete!

Sometimes awesome things just happen.  Today Rob Farley (@Rob_Farley) was helping me with a previous post about my dates table and, just as a side note, he said, “Oh, let me show you something else cool.” It was really cool, and so I asked if I could add it to my blog since I never know when this problem will strike.

I have a table about superheroes and their cape colors.  I made a mistake and put Batman in there twice.  But since there is no primary key, how do I tell SQL Server which one to delete?  The table looks something like the sketch below.
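If you want to play along at home, a minimal setup sketch might look like this (the table and column names come from the delete statement below; the cape colors are my guesses):

-- Hypothetical setup; no primary key, so exact duplicates can sneak in.
CREATE TABLE dbo.HeroCapeColor
(
    HeroName  VARCHAR(50),
    CapeColor VARCHAR(20)
);

INSERT INTO dbo.HeroCapeColor (HeroName, CapeColor)
VALUES ('Batman', 'Black'),
       ('Superman', 'Red'),
       ('Batman', 'Black'); -- the accidental duplicate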

Since the rows are exactly the same, I can let SQL Server sort it out. This uses both a CTE (I hadn’t ever used one without some kind of join before today) and the OVER clause, which I am learning about.  So cool!

WITH SuperHeroDuplicates AS
(
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY HeroName, CapeColor
                              ORDER BY HeroName, CapeColor) AS rownum
    FROM dbo.HeroCapeColor
)
DELETE
FROM SuperHeroDuplicates
WHERE rownum > 1;
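If you are nervous about running a DELETE through a CTE (I was), you can swap the DELETE for a SELECT first to preview exactly which rows will go:

-- Preview: rows with rownum > 1 are the ones the DELETE would remove.
WITH SuperHeroDuplicates AS
(
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY HeroName, CapeColor
                              ORDER BY HeroName, CapeColor) AS rownum
    FROM dbo.HeroCapeColor
)
SELECT *
FROM SuperHeroDuplicates
WHERE rownum > 1;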

Thanks Rob!

—————UPDATE—————

Kenneth Fisher (@sqlstudent144) also wrote a blog post about another way you can accomplish this task if you only have a few of them to delete.

He has been a super big help to me.  He taught me how to display my code better in my blog so it is easier to read and copy. He also encourages me, builds my confidence and even included a link to one of my posts in an article. It made me feel special and like what I have to say matters to other people.  I love how he comments on my posts and gives me ideas on how to make them better. I feel so lucky to be a part of such a great community of people that are so thoughtful, selfless and giving.  Thanks Kenneth for being such a great example to me!

Help me, help me, my logs are over-sized!

I have a lot of servers and only home-grown monitoring.  I needed a solution that would tell me if a log file started to grow outside of its normal size.  I listened carefully at PASS Summit to how I should “right-size” my log files and came back with a plan to tackle all of them. Fast-forward a few months, and I am still having trouble even after sizing them to what I thought they should be. One of the problems is that my backup solution is “touchy” with log backups and awesome at getting the full ones. I built a report to watch for missing transaction log backups, which isn’t perfect yet.  I needed a more immediate solution, and after getting another Disk Space Email Alert, I knew what I had to do.  Here it is: my Large Log File Alert.

CREATE PROCEDURE [dbo].[usp_LargeLogFileAlert]
    @LogThreshold INT = 1024
AS
-- This procedure sends out an alert message when a log file has exceeded a set threshold.
-- It can be scheduled to run daily so you can act quickly to resolve.

IF EXISTS (SELECT * FROM tempdb..sysobjects
           WHERE id = OBJECT_ID(N'[tempdb]..[#LargeLogFileAlert]'))
    DROP TABLE #LargeLogFileAlert

CREATE TABLE #LargeLogFileAlert (
    DatabaseName VARCHAR(50) NOT NULL,
    LogMB INTEGER NOT NULL)

DECLARE @SizeMB INT
DECLARE @DatabaseName VARCHAR(50)
DECLARE @AlertMessage VARCHAR(500)
DECLARE @MailSubject VARCHAR(100)

/* Populate #LargeLogFileAlert with data */
INSERT INTO #LargeLogFileAlert
SELECT RTRIM(instance_name) AS DatabaseName, cntr_value / 1024 AS SizeMB
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Log File(s) Used Size (KB)'
    AND instance_name != '_Total'
    AND cntr_value / 1024 > @LogThreshold -- threshold in MB

DECLARE LogAlert CURSOR FAST_FORWARD FOR
SELECT DatabaseName, LogMB FROM #LargeLogFileAlert

OPEN LogAlert
FETCH NEXT FROM LogAlert INTO @DatabaseName, @SizeMB

WHILE (@@FETCH_STATUS = 0)
BEGIN
    SET @MailSubject = 'Log for ' + @DatabaseName + ' has exceeded threshold on ' + @@SERVERNAME
    SET @AlertMessage = @DatabaseName + ' is ' + CAST(@SizeMB AS VARCHAR) + ' MB. Please verify transaction log backups are working.'

    -- Send out email
    EXEC msdb..sp_send_dbmail
        @recipients = 'you@email.com', -- change to your email
        @subject = @MailSubject,
        @body = @AlertMessage

    FETCH NEXT FROM LogAlert INTO @DatabaseName, @SizeMB
END

CLOSE LogAlert
DEALLOCATE LogAlert
DROP TABLE #LargeLogFileAlert

GO

The threshold is adjustable by server. I picked the one I did because my servers have a lot of small databases.
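Calling it with a different threshold is just a matter of passing the size in MB, something like:

-- Alert on any log file over 2 GB (2048 MB); the default is 1024.
EXEC dbo.usp_LargeLogFileAlert @LogThreshold = 2048;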

Next I tossed it into a SQL Agent job and have it run once a day on each server. It is light, so I am not worried about it causing performance issues. It will email me for each database that has a log file larger than the limit I set, and it won’t email at all if there aren’t any. Once I get my logs sized properly, I can turn the alerts off. I used my CMS to deploy the stored procedure and SQL Agent job to all my servers and, yay, I know more about my environment.

This is my first attempt at joining the blog party, which is number 62, hosted this time by Robert Pearl (@pearlknows). Hopefully I did it right.