Mayday this is an emergency, my linked server is using ODBC to connect to an AG…

Availability Groups (AG) and linked servers can get really tricky, especially if you are dealing with multi-subnet failover. The only way we have figured out how to make this work is with ODBC. Here is how.

First, we have to create an ODBC connection on our SQL Server. The single server in the picture below is the server we are going to create the ODBC connection and the linked server on. It will connect over to the AG Listener.

[Screenshot: ag-pic]

We start by creating an ODBC connection on the server that is going to link to the AG.

[Screenshot: odbc1]

Make sure you are on the System DSN tab. Click Add.

[Screenshot: odbc2]

We can pick either ODBC Driver 11 or 13 for SQL Server. This is a separate driver install, available from Microsoft's download site. Once the driver is installed, it will show up as an option in the screen above. Click Finish.

[Screenshot: odbc3]

Give it whatever name and description you want, but remember the name for later. The server should be the name of the AG Listener. Click Next.

[Screenshot: odbc4]

Technically we don’t have to put in a login and password, but I like to test the login and password that I am going to use for the linked server. It won’t be saved here. Click Next.

[Screenshot: odbc5]

Here is the magic part: make sure to check Multi-subnet failover. That is what makes the connection automatically fail over between the two nodes. Click Next, test the connection, and then click Finish. The ODBC connection is ready to be used by the linked server connection. Let's build that part now.

Go into SQL Server Management Studio and under Server Objects, right click on Linked Servers and select “New Linked Server”.

[Screenshot: odbc6]

The provider needs to be Microsoft OLE DB Provider for ODBC Drivers (MSDASQL). Remember the name we gave the ODBC connection? That goes in the Data source field here. Then go to the Security tab.

[Screenshot: odbc7]

This is where we put in the login, and it will be stored here. I also make sure this user name and password exists on both nodes of the AG with the permissions that I need. Click Server Options.

[Screenshot: odbc8]

The above is what I need, but I check that I am only giving access to what is needed and not more. When we click OK, it will test our connection. If everything works with no errors, we are ready to go.
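
If you would rather script the linked server than click through the GUI, here is a minimal T-SQL sketch. The linked server name, DSN name, and login below are placeholders; swap in the names you used above.

EXEC master.dbo.sp_addlinkedserver
    @server = N'AGLinkedServer',       -- name you will use in queries (placeholder)
    @srvproduct = N'',
    @provider = N'MSDASQL',            -- OLE DB Provider for ODBC Drivers
    @datasrc = N'AGListenerDSN';       -- the System DSN we created above (placeholder)

EXEC master.dbo.sp_addlinkedsrvlogin
    @rmtsrvname = N'AGLinkedServer',
    @useself = N'False',
    @locallogin = NULL,
    @rmtuser = N'LinkedLogin',         -- the login that exists on both nodes (placeholder)
    @rmtpassword = N'StrongPasswordHere';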

Some of the problems we have noticed are with querying tables that have big datatypes like time(3-7), timestamp, and a few others. Casting or converting the datatypes doesn't help. If we pull the table into a view without the big datatype columns, we are able to query the view from another server, but never the base table. It has been a bit frustrating, but we are still hopeful that we can find a solution or that Microsoft will fix ODBC connections. If there is a better way to do this, please reach out to me. We have things we need to solve and could use some help.
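
For reference, the view workaround looks something like this; the table, view, and column names are made up for illustration:

-- On the AG database: a view that leaves out the problem columns (a time(7) column here)
CREATE VIEW dbo.OrdersForLinkedServer
AS
SELECT OrderID, CustomerID, OrderDate   -- excludes the time(7)/timestamp columns
FROM dbo.Orders;
GO

-- From the linking server, query the view instead of the base table
SELECT OrderID, CustomerID, OrderDate
FROM [AGLinkedServer].[MyDatabase].[dbo].[OrdersForLinkedServer];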

The song for this post is Mayday by Cam.

Coming soon: Count down to PASS Summit 2016 with more pictures from PASS Summit 2015. Watch Twitter and the Magic Mirrors page for more.

I can make your logs clap…

The SQL error log has this nasty habit of getting big when I am not looking. There are only two ways to keep it at a normal size. One is to stop and start your SQL instance (reboot, restart, stop and start) and the other is to run this handy little script:


EXEC sp_cycle_errorlog;

This will end the current log and start a new one. Why does this matter? The SQL error log holds information about your backups, failed logins, SQL errors, edition information, and other fun stuff. The bigger it is, the longer it will take SQL Server to load it into memory so that you can read it. Usually when you need to read it, you are in trouble, so the slower it is, the more stressed you will be.

What is a good size?  I usually try to get it to roll over around 10 MB.  I use a monitoring tool and when the large error log alert is triggered, I have it run sp_cycle_errorlog for me so mine always stay a healthy size.  You don’t need fancy tools to do this though.  If you know about how fast your logs grow, you can set up a SQL Agent job to run it on a schedule to keep your logs healthy.
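
If you go the SQL Agent route, a minimal daily job might look like this sketch; the job name and schedule are just examples:

USE msdb;
GO
EXEC dbo.sp_add_job @job_name = N'Cycle SQL Error Log';
EXEC dbo.sp_add_jobstep
    @job_name = N'Cycle SQL Error Log',
    @step_name = N'Cycle error log',
    @subsystem = N'TSQL',
    @command = N'EXEC sp_cycle_errorlog;',
    @database_name = N'master';
EXEC dbo.sp_add_jobschedule
    @job_name = N'Cycle SQL Error Log',
    @name = N'Daily at midnight',
    @freq_type = 4,          -- daily
    @freq_interval = 1,      -- every day
    @active_start_time = 0;  -- 00:00:00
EXEC dbo.sp_add_jobserver @job_name = N'Cycle SQL Error Log', @server_name = N'(local)';
GO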

How many logs should I keep? This is completely up to you, but since I keep my logs so small, I keep 15 of them. Why so many? So I can go back and see issues further back if needed. You can adjust the number you keep by right clicking on SQL Server Logs in SSMS and selecting "Configure".

[Screenshot: Configure SQL Server Error Logs]
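
If you would rather script that retention setting, it lives in the registry; this sketch uses the undocumented xp_instance_regwrite to set it to 15:

USE master;
GO
EXEC xp_instance_regwrite
    N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'NumErrorLogs',
    REG_DWORD,
    15;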

Super cool, but what about the Agent error logs? There is a script for them as well!


USE msdb;
GO
EXEC sp_cycle_agent_errorlog;

See? Healthy and Happy Logs! Your Logs will be clapping with joy.

Takes as Long as it Takes SQL, Break on Me, I have a DAC…

The DAC, what is it? It is the Dedicated Administrator Connection. Basically, it is a way to connect to your SQL Server when all the other connections are tied up. But it takes a little bit of pre-planning so that you can use it when things go bad with your SQL Server. Let's enable it now so you can test using it and know that it is there in the future.


EXEC sp_configure 'remote admin connections', 1;
GO
RECONFIGURE
GO
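
To confirm the setting took effect, you can check sys.configurations:

SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'remote admin connections';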

I was always afraid to use the DAC because I thought I had to use the command line, and I am terrible with cmd. But did you know you can connect to it through Management Studio? Now I can feel right at home when I troubleshoot a sick server. To connect to a server using the DAC, put "admin:" in front of the name of the server like this:

[Screenshot: DAC]

I use my elevated Windows login for this, but you can use whatever sysadmin account you have.
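
And if you ever do end up at the command line anyway, sqlcmd can reach the DAC with its -A switch (the server name is a placeholder):

sqlcmd -S YourServerName -d master -A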

There are a few words of caution. You can only have one DAC connection to each server at a time, so don't try to connect Object Explorer to it. I have SQL Prompt and noticed that my server would raise a Severity 20 error because SQL Prompt was trying to connect at the same time and failing. Just be aware that you might get an error, but if everything is correct you can look at the connection shown at the bottom of the query window and see you are connected. If someone else is using the DAC, you won't be able to use the connection, so it is a good idea to always close it when you are done.

Kendra Little b|t has a fantastic post for Brent Ozar Unlimited about the DAC and most importantly to me, how to tell who is connected.  This is the code that she wrote to help identify who is connected:


SELECT
    CASE WHEN ses.session_id = @@SPID THEN 'It''s me! '
         ELSE '' END
        + COALESCE(ses.login_name, '???') AS WhosGotTheDAC,
    ses.session_id,
    ses.login_time,
    ses.status,
    ses.original_login_name
FROM sys.endpoints AS en
JOIN sys.dm_exec_sessions AS ses
    ON en.endpoint_id = ses.endpoint_id
WHERE en.name = 'Dedicated Admin Connection';

Now when SQL Server Breaks on you, there is a tool that prevents us from shattering like glass.

If Crazy = Genius, I’m rebuilding one partition at a time!

We have had an index job that has been failing for a while.  This is one of those things you really don’t want to clean up because no one is complaining, but you know you should.  I had heard that I could rebuild one partition at a time, but where to start?  Today, I worked my way through it, so here it is so that you can do it too.

First you need to find the biggest indexes; there is a good chance those are the ones that live on partitions. I am excluding primary keys with the NOT LIKE 'PK%' filter.


SELECT i.[name] AS IndexName
,SUM(ps.[used_page_count]) * 8 AS IndexSizeKB
FROM sys.dm_db_partition_stats AS ps
INNER JOIN sys.indexes AS i ON ps.[object_id] = i.[object_id]
AND ps.[index_id] = i.[index_id]
WHERE i.name NOT LIKE 'PK%'
GROUP BY i.[name]
ORDER BY IndexSizeKB DESC

The top ones are most likely the ones you want to focus on.  Next, we need to track down the partition function.


SELECT * FROM sys.partition_functions

Next we are going to figure out which partition we want to rebuild.


SELECT sprv.value AS [Value], sprv.boundary_id AS [ID]
FROM sys.partition_functions AS spf
INNER JOIN sys.partition_range_values sprv ON sprv.function_id = spf.function_id
WHERE (spf.name = N'NameOfFunctionFromPartitionFunctions')
ORDER BY [ID] ASC

Pick the number of the partition you want to rebuild. Keep in mind that boundary_id identifies the boundary between partitions; depending on whether the function is RANGE LEFT or RANGE RIGHT, the partition holding a given value may be boundary_id or boundary_id + 1.
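
If you already know a value that lives in the partition you care about, you can skip the boundary math and ask SQL Server directly with the $PARTITION function (the function name and date here are placeholders):

SELECT $PARTITION.NameOfFunctionFromPartitionFunctions('2016-06-01') AS PartitionNumber;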

Next use your index name from earlier.  You will also need the table name and the partition number.


ALTER INDEX [IndexName]
ON [dbo].[TableName]
REBUILD PARTITION = 3

This will rebuild just the partition that you requested.
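
One more note: on Enterprise Edition with SQL Server 2014 or later, you can rebuild a single partition online so it doesn't block readers and writers:

ALTER INDEX [IndexName]
ON [dbo].[TableName]
REBUILD PARTITION = 3
WITH (ONLINE = ON);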

Strip it down and remove the bad query plan

Today I got to play with some really bad queries. The queries weren't necessarily bad; it was more that they had bad plans. I thought I had already blogged about it and tried to find my code. Alas, it wasn't there, so let's strip it down on how you would remove a bad query plan. I am leaving out the troubleshooting part of how to determine whether it is a bad plan, because so much of it "depends".

First you have to find the bad query plan. Get a unique line from your query and paste it into the query below.

USE master;
GO

SELECT usecounts, cacheobjtype, objtype, text, plan_handle
FROM sys.dm_exec_cached_plans
CROSS APPLY sys.dm_exec_sql_text(plan_handle)
WHERE usecounts > 1
AND TEXT LIKE '%Unique part of query%' --put the unique part of the query here.
ORDER BY usecounts DESC;

Now, copy the plan handle and paste it over the plan handle that I have listed here:

SELECT * FROM sys.dm_exec_query_plan (0x060001004DE4D526F0BEA28F05000);

If you click on the query_plan link, you can see what the plan looks like.  After you have reviewed it and determined the plan is bad then you can paste your plan handle over the one below to remove it from the proc cache.

DBCC FREEPROCCACHE (0x060001004DE4D526F0BEA28F05000)

There you have it. Best of luck with your bad plans.

For one good, naughty little girl found a diamond…Object Explorer Details

It's Christmas time again and time to listen to my FAVORITE Christmas song, Joel the Lump of Coal.

Just before Thanksgiving we had our SQLSaturday/Big Mountain Data event, and I spoke! This is my third time speaking at this event, and every year I have regretted speaking and felt like everyone would have been better off in another session. Every year, that is, until this one! I loved my session and I will actually be submitting it to PASS Summit this year. Keep your fingers crossed with me. It was on SQL Server Management Studio tips and tricks.

One of the tips that I was super surprised many people didn't know is the Object Explorer Details window. It allows you to delete multiple objects at once, script out multiple objects at once, and just do some really cool stuff. How do you access this magic, you ask? In Management Studio, click View >> Object Explorer Details (or press F7).

[Screenshot: ObjectExplorerDetails]

Now you can have a diamond that will help you too!

She keeps her management studio error to minimal, Southern Style…

It is painful when you have been working on a tough query and you start to get an out of memory error:
“An error occurred while executing batch. Error message is: Exception of type ‘System.OutOfMemoryException’ was thrown”

What in the world? But it was working before! Why is this happening? Is it happening for everyone? Do I need to reboot the SQL Server?

This is a SQL Server Management Studio error, not a SQL Server one. SSMS is a 32-bit application, so it can run out of its own memory on your machine even when the server has plenty; returning huge result sets to the grid is a common way to fill it up. Save all your queries and close Management Studio. Once you open it again you "should" be fine, but famous last words, right? If that doesn't work, just reboot. Nice clean memory, and you can get back to playing with your super tough query.

What you gonna do? Memory, I’m coming for you!

Today I needed to quickly check the min and max memory settings on 68 SQL Server instances. I didn't have time to go through each one, and I know I will need to do this again in the future. Thank goodness I have my Central Management Server configured with all those servers. I was able to connect to my main CMS server and run this simple query that tells me every server's min and max memory settings:

SELECT ServerName, [Max], [Min]
FROM (
    SELECT @@SERVERNAME AS ServerName, LEFT(name, 3) AS Memory, value
    FROM sys.configurations
    WHERE name LIKE '%server memory%'
) AS SourceTable
PIVOT (
    MAX(value)
    FOR Memory IN ([Max], [Min])
) AS PivotTable;

Then, I also used this code from my last post but added a server name to it so I could see what memory was available on each server.

SELECT @@SERVERNAME AS ServerName,
       physical_memory_kb * 9.5367431640625E-7 AS PhysicalMemoryGB -- KB to GB (1/1048576)
FROM sys.dm_os_sys_info;

Help me, help me, my logs are over-sized!

I have a lot of servers and only home-grown monitoring. I needed a solution that would tell me if a log file started to grow outside of its normal size. I listened carefully at PASS Summit to how I should "right-size" my log files and came back with a plan to tackle all of them. Fast-forward a few months and I am still having trouble, even after sizing them to what I thought they should be. One of the problems is that my backup solution is "touchy" with log backups and awesome at getting the full ones. I built a report to watch for missing transaction log backups, which isn't perfect yet. I needed a more immediate solution, and after getting another disk space email alert, I knew what I had to do. Here it is, my Large Log File Alert.

CREATE PROCEDURE [dbo].[usp_LargeLogFileAlert]
 @LogThreshold int = 1024
AS
--This procedure sends out an alert message when a logfile has exceeded a set threshold.
--It can be scheduled to run daily so you can act quickly to resolve.

IF EXISTS (SELECT * FROM tempdb..sysobjects
WHERE id = object_id(N'[tempdb]..[#LargeLogFileAlert]'))
DROP TABLE #LargeLogFileAlert
CREATE TABLE #LargeLogFileAlert (
 DatabaseName VARCHAR(50) NOT NULL,
 LogMB INTEGER NOT NULL)

DECLARE @SizeMB INT
DECLARE @DatabaseName VARCHAR(50)
DECLARE @AlertMessage VARCHAR(500)
DECLARE @MailSubject VARCHAR(100)

/* Populate #LargeLogFileAlert with data */
INSERT INTO #LargeLogFileAlert
SELECT RTRIM(instance_name) AS DatabaseName, cntr_value/1024 AS SizeMB
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Log File(s) Used Size (KB)'
AND instance_name != '_Total' AND cntr_value/1024 > @LogThreshold --Threshold in MB

DECLARE LogAlert CURSOR FAST_FORWARD FOR
SELECT DatabaseName, LogMB FROM #LargeLogFileAlert 

OPEN LogAlert
FETCH NEXT FROM LogAlert INTO @DatabaseName, @SizeMB

WHILE (@@FETCH_STATUS = 0)
BEGIN
SET @MailSubject = 'Log for ' + @DatabaseName + ' has exceeded threshold on ' + @@SERVERNAME
SET @AlertMessage = @DatabaseName + ' is ' + cast(@SizeMB AS VARCHAR) + ' MB. Please verify transaction log backups are working.'
-- Send out email
EXEC msdb..sp_send_dbmail @recipients = 'you@email.com',--Change to your email
@subject = @MailSubject,
@body = @AlertMessage
FETCH NEXT FROM LogAlert INTO @DatabaseName, @SizeMB
END
CLOSE LogAlert
DEALLOCATE LogAlert
DROP TABLE #LargeLogFileAlert

GO

The threshold is adjustable per server and defaults to 1024 MB. I picked that number because my servers have a lot of small databases.
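
Calling it with a different threshold is as simple as:

EXEC dbo.usp_LargeLogFileAlert @LogThreshold = 2048; -- alert on log files over 2 GB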

Next I tossed it into a SQL Agent Job and have it run once a day on each server. It is light so I am not worried about it causing performance issues. It will email me for each database that has a log file larger than the limit I set. It won’t email at all if there aren’t any. Once I get my logs sized properly, I can turn them off. I used my CMS to deploy the stored procedure and SQL Agent job to all my servers and Yay, I know more about my environment.

This is my first attempt at joining the blog party, T-SQL Tuesday #62, hosted this time by Robert Pearl (@pearlknows). Hopefully I did it right.