If Crazy = Genius, I’m rebuilding one partition at a time!

We have had an index job that has been failing for a while. This is one of those things you really don't want to clean up because no one is complaining, but you know you should. I had heard that I could rebuild one partition at a time, but where to start? Today I worked my way through it, and here it is so that you can do it too.

First, you need to find the biggest indexes; there is a good chance those are the ones that live on partitions. I am excluding primary keys from the results.


--Largest indexes first; used_page_count is in 8 KB pages, so * 8 gives KB
SELECT i.[name] AS IndexName
,SUM(ps.[used_page_count]) * 8 AS IndexSizeKB
FROM sys.dm_db_partition_stats AS ps
INNER JOIN sys.indexes AS i ON ps.[object_id] = i.[object_id]
AND ps.[index_id] = i.[index_id]
WHERE i.[name] NOT LIKE 'PK%' --skip primary keys
GROUP BY i.[name]
ORDER BY IndexSizeKB DESC
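
If you want to confirm which of those indexes are actually partitioned, a hedged variation of the same query adds the table name and a partition count, and keeps only indexes with more than one partition:

SELECT OBJECT_NAME(ps.[object_id]) AS TableName
,i.[name] AS IndexName
,COUNT(DISTINCT ps.[partition_number]) AS PartitionCount
,SUM(ps.[used_page_count]) * 8 AS IndexSizeKB
FROM sys.dm_db_partition_stats AS ps
INNER JOIN sys.indexes AS i ON ps.[object_id] = i.[object_id]
AND ps.[index_id] = i.[index_id]
GROUP BY ps.[object_id], i.[name]
HAVING COUNT(DISTINCT ps.[partition_number]) > 1
ORDER BY IndexSizeKB DESC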

The top ones are most likely the ones you want to focus on.  Next, we need to track down the partition function.


SELECT * FROM sys.partition_functions
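
If you are not sure which partition function goes with which index, a hedged lookup that joins sys.indexes to the partition scheme will connect them (partitioned indexes are the ones whose data space is a partition scheme):

SELECT OBJECT_NAME(i.[object_id]) AS TableName
,i.[name] AS IndexName
,ps.[name] AS PartitionScheme
,pf.[name] AS PartitionFunction
FROM sys.indexes AS i
INNER JOIN sys.partition_schemes AS ps ON i.[data_space_id] = ps.[data_space_id]
INNER JOIN sys.partition_functions AS pf ON ps.[function_id] = pf.[function_id]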

Next we are going to figure out which partition we want to rebuild.


SELECT sprv.value AS [Value], sprv.boundary_id AS [ID]
FROM sys.partition_functions AS spf
INNER JOIN sys.partition_range_values sprv ON sprv.function_id = spf.function_id
WHERE (spf.name = N'NameOfFunctionFromPartitionFunctions')
ORDER BY [ID] ASC

Pick the number of the partition you want to rebuild.
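
If you already know the value you care about and just want to map it to a partition number, the $PARTITION function will do that for you. This is a hedged example using the placeholder function name from above and a made-up value; swap in your own:

--Returns the partition number that the value '2015-01-01' would land in
SELECT $PARTITION.NameOfFunctionFromPartitionFunctions('2015-01-01') AS PartitionNumber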

Next use your index name from earlier.  You will also need the table name and the partition number.


ALTER INDEX [IndexName]
ON [dbo].[TableName]
REBUILD PARTITION = 3

This will rebuild just the partition that you requested.
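
One hedged note: if you are on SQL Server 2014 or later, you should also be able to rebuild that single partition online, using the same placeholder names as above:

ALTER INDEX [IndexName]
ON [dbo].[TableName]
REBUILD PARTITION = 3
WITH (ONLINE = ON) --single-partition online rebuild requires SQL Server 2014+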

Strip it down and remove the bad query plan

Today I got to play with some really bad queries. Actually, the queries weren't necessarily bad; it was more that they had bad plans. I thought I had already blogged about it and tried to find my code. Alas, it wasn't there, so let's strip it down to how you would remove a bad query plan. I am leaving out the troubleshooting part of how to determine whether it is a bad plan, because so much of that "depends".
First, you have to find the bad query plan. Grab a unique snippet from your query and paste it into the query below.

USE master;
GO

SELECT usecounts, cacheobjtype, objtype, text, plan_handle
FROM sys.dm_exec_cached_plans
CROSS APPLY sys.dm_exec_sql_text(plan_handle)
WHERE usecounts > 1
AND TEXT LIKE '%Unique part of query%' --put the unique part of the query here.
ORDER BY usecounts DESC;
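
If you would rather see the plan handle, the query text, and the graphical plan in one pass, a hedged variation (same unique search string) adds a second CROSS APPLY:

SELECT cp.usecounts, st.text, qp.query_plan, cp.plan_handle
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE st.text LIKE '%Unique part of query%' --put the unique part of the query here.
ORDER BY cp.usecounts DESC;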

Now, copy the plan handle and paste it over the plan handle that I have listed here:

SELECT * FROM sys.dm_exec_query_plan (0x060001004DE4D526F0BEA28F05000);

If you click on the query_plan link, you can see what the plan looks like. After you have reviewed it and determined that the plan is bad, you can paste your plan handle over the one below to remove it from the proc cache.

DBCC FREEPROCCACHE (0x060001004DE4D526F0BEA28F05000)

There you have it. Just be sure you pass the plan handle; running DBCC FREEPROCCACHE with no arguments clears the entire plan cache. Best of luck with your bad plans.

For one good, naughty little girl found a diamond…Object Explorer Details

It's Christmas time again, and time to listen to my FAVORITE Christmas song, Joel The Lump of Coal.
Just before Thanksgiving we had our SQLSaturday/Big Mountain Data event and I spoke! This was my third time speaking at this event, and every year I have regretted speaking and felt like everyone would have been better off in another session; every year, that is, until this one! I loved my session, and I will actually be submitting it to PASS Summit this year. Keep your fingers crossed with me. It was on SQL Server Management Studio Tips and Tricks.

One of the tips that I was super surprised many people didn't know about is Object Explorer Details. It allows you to delete multiple objects at once, script out multiple objects at once, and just do some really cool stuff. How do you access this magic, you ask? In Management Studio, click View >> Object Explorer Details (or press F7).

 

(Screenshot: Object Explorer Details in SSMS)

Now you can have a diamond that will help you too!

She keeps her management studio error to minimal, Southern Style…

It is painful when you have been working on a tough query and you start to get an out of memory error.
“An error occurred while executing batch. Error message is: Exception of type ‘System.OutOfMemoryException’ was thrown”

What in the world? But it was working before! Why is this happening? Is it happening for everyone? Do I need to reboot the SQL Server?

This is a SQL Server Management Studio error, not a problem on the SQL Server itself. Usually it means SSMS has run out of memory on your local machine, often because a query returned a very large result set to the grid. Save all your queries and close Management Studio. Once you open it again you "should" be fine, but famous last words, right? If that doesn't work, just reboot your machine. Nice clean memory, and you can get back to playing with your super tough query.

What you gonna do? Memory, I’m coming for you!

Today I needed to quickly check the min and max memory settings on 68 SQL Server instances. I didn't have time to go through each one, and I know I will need to do this again in the future. Thank goodness I have my Central Management Server configured with all those servers. I was able to connect to my main CMS server and run this simple query that tells me each server's min and max memory settings:

--Pivot the min and max server memory settings into one row per server
SELECT ServerName, [Max], [Min]
FROM
(SELECT @@SERVERNAME AS ServerName, LEFT([name],3) AS Memory, [value]
FROM sys.configurations
WHERE [name] LIKE '%server memory%') AS SourceTable
PIVOT
(
MAX([value])
FOR Memory IN ([Max], [Min])
) AS PivotTable;

Then, I also used this code from my last post, but added the server name to it so I could see how much physical memory is available on each server.

SELECT @@SERVERNAME AS ServerName
,physical_memory_kb * 9.5367431640625E-7 AS PhysicalMemoryGB --convert KB to GB
FROM sys.dm_os_sys_info
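
If you want the configured settings and the physical memory side by side in one row per server (again run across the CMS), a hedged combination, assuming SQL Server 2012 or later for physical_memory_kb, could look like this:

SELECT @@SERVERNAME AS ServerName
,MAX(CASE WHEN [name] = 'max server memory (MB)' THEN CONVERT(INT, [value]) END) AS MaxServerMemoryMB
,MAX(CASE WHEN [name] = 'min server memory (MB)' THEN CONVERT(INT, [value]) END) AS MinServerMemoryMB
,(SELECT physical_memory_kb / 1024 FROM sys.dm_os_sys_info) AS PhysicalMemoryMB
FROM sys.configurations
WHERE [name] LIKE '%server memory%';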

Help me, help me, my logs are over-sized!

I have a lot of servers and only home-grown monitoring. I needed a solution that would tell me if a log file started to grow outside of its normal size. I listened carefully at PASS Summit to the advice on how I should "right-size" my log files and came back with a plan to tackle all of them. Fast-forward a few months and I am still having trouble, even after sizing them to what I thought they should be. One of the problems is that my backup solution is "touchy" with log backups and awesome at getting the full ones. I built a report to watch for missing transaction log backups, which isn't perfect yet. I needed a more immediate solution, and after getting another Disk Space Email Alert, I knew what I had to do. Here it is, my Large Log File Alert.

CREATE PROCEDURE [dbo].[usp_LargeLogFileAlert]
 @LogThreshold int = 1024
AS
--This procedure sends out an alert message when a logfile has exceeded a set threshold.
--It can be scheduled to run daily so you can act quickly to resolve.

IF EXISTS (SELECT * FROM tempdb..sysobjects
WHERE id = object_id(N'[tempdb]..[#LargeLogFileAlert]'))
DROP TABLE #LargeLogFileAlert
CREATE TABLE #LargeLogFileAlert (
 DatabaseName VARCHAR(50) NOT NULL,
 LogMB INTEGER NOT NULL)

DECLARE @SizeMB INT
DECLARE @DatabaseName VARCHAR(50)
DECLARE @AlertMessage VARCHAR(500)
DECLARE @MailSubject VARCHAR(100)

/* Populate #LargeLogFileAlert with data */
INSERT INTO #LargeLogFileAlert
SELECT RTRIM(instance_name) AS DatabaseName, cntr_value/1024 AS SizeMB
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Log File(s) Used Size (KB)'
AND instance_name != '_Total' AND cntr_value/1024 > @LogThreshold --Threshold in MB

DECLARE LogAlert CURSOR FAST_FORWARD FOR
SELECT DatabaseName, LogMB FROM #LargeLogFileAlert 

OPEN LogAlert
FETCH NEXT FROM LogAlert INTO @DatabaseName, @SizeMB

WHILE (@@FETCH_STATUS = 0)
BEGIN
SET @MailSubject = 'Log for ' + @DatabaseName + ' has exceeded threshold on ' + @@SERVERNAME
SET @AlertMessage = @DatabaseName + ' is ' + cast(@SizeMB AS VARCHAR) + ' MB. Please verify transaction log backups are working.'
-- Send out email
EXEC msdb..sp_send_dbmail @recipients = 'you@email.com',--Change to your email
@subject = @MailSubject,
@body = @AlertMessage
FETCH NEXT FROM LogAlert INTO @DatabaseName, @SizeMB
END
CLOSE LogAlert
DEALLOCATE LogAlert
DROP TABLE #LargeLogFileAlert

GO

The threshold is adjustable per server. I picked the default I did because my servers have a lot of small databases.

Next I tossed it into a SQL Agent Job and have it run once a day on each server. It is light so I am not worried about it causing performance issues. It will email me for each database that has a log file larger than the limit I set. It won’t email at all if there aren’t any. Once I get my logs sized properly, I can turn them off. I used my CMS to deploy the stored procedure and SQL Agent job to all my servers and Yay, I know more about my environment.
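
The job step itself can be as simple as calling the procedure with whatever threshold makes sense for that server (a hedged example, assuming the procedure was created in the database the job step points at):

--Example SQL Agent job step: alert on any log file over 2 GB
EXEC dbo.usp_LargeLogFileAlert @LogThreshold = 2048;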

This is my first attempt at joining the blog party, which is number 62, hosted this time by Robert Pearl (@pearlknows). Hopefully I did it right.