And you said you are unconsolable…clean up before you leave

Greetings,

I am approaching my last day at my current job. I love it here and will be really sad to leave, but I have an awesome opportunity to grow my knowledge and career with a company on my “want to work for” list.

There are a lot of things to take care of before I leave. I have been updating documentation (with memes) so that it is useful and fun. I am trying to wrap up all my tickets and outstanding items, and last night I woke up and realized I was the owner of some databases. This is how I fixed it:

I launched a query window on my Central Management Server to save time, but you can run this on one server at a time if you want.  I used the syntax from sp_helpdb to find out what I wanted to query:

SELECT name,
       ISNULL(SUSER_SNAME(sid), '~~UNKNOWN~~') AS Owner,
       CONVERT(NVARCHAR(11), crdate) AS CreateDate,
       dbid,
       cmptlevel
FROM master.dbo.sysdatabases
WHERE SUSER_SNAME(sid) = 'domain\MyUserName'

Some of the applications in my environment run under a special user, and I didn’t want to interfere with those; I just wanted to fix the ones owned by my account. Then I borrowed some code from Brent Ozar:

SELECT 'ALTER AUTHORIZATION ON DATABASE::[' + name + '] TO sa;'
FROM master.dbo.sysdatabases
WHERE SUSER_SNAME(sid) = 'domain\MyUserName'
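Side note: sysdatabases sticks around only for backward compatibility these days. If you would rather use the newer catalog view, here is a sketch of the same generator against sys.databases (available since SQL Server 2005), which exposes owner_sid directly:

-- Same idea, newer catalog view
SELECT 'ALTER AUTHORIZATION ON DATABASE::[' + name + '] TO sa;'
FROM sys.databases
WHERE SUSER_SNAME(owner_sid) = 'domain\MyUserName'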

Here’s one I run on the CMS to find any SQL Agent jobs that I own across my enterprise; then I can run the update scripts it generates on the individual servers.


SELECT 'EXEC msdb.dbo.sp_update_job' + CHAR(13) +
       '    @job_name = ' + CHAR(39) + [name] + CHAR(39) + ',' + CHAR(13) +
       '    @owner_login_name = ' + CHAR(39) + 'sa' + CHAR(39) + ';'
FROM msdb.dbo.sysjobs
WHERE SUSER_SNAME(owner_sid) = 'domain\MyUserName'
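After I run the generated statements on each server, a quick sanity check (just the same filter again) confirms nothing slipped through:

-- No rows returned means no jobs are owned by my account anymore
SELECT name, SUSER_SNAME(owner_sid) AS JobOwner
FROM msdb.dbo.sysjobs
WHERE SUSER_SNAME(owner_sid) = 'domain\MyUserName'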

She keeps her Management Studio errors to a minimum, Southern Style…

It is painful when you have been working on a tough query and you start to get an out-of-memory error:
“An error occurred while executing batch. Error message is: Exception of type ‘System.OutOfMemoryException’ was thrown”

What in the world? But it was working before! Why is this happening? Is it happening for everyone? Do I need to reboot the SQL Server?

This is a SQL Server Management Studio error, not a SQL Server error. SSMS is a 32-bit application, so it usually means you have exhausted the memory available to SSMS on your own machine, not on the server. Save all your queries and close Management Studio. Once you open it again you “should” be fine, but famous last words, right? If that doesn’t work, just reboot your machine. Nice clean memory, and you can get back to playing with your super tough query.

What you gonna do? Memory, I’m coming for you!

Today I needed to quickly check the min and max memory settings on 68 SQL Server instances. I didn’t have time to go through each one, and I know I will need to do this again in the future. Thank goodness I have my Central Management Server configured with all those servers. I was able to connect to my main CMS server and run this simple query that tells me every server’s min and max memory settings:

SELECT ServerName, [Max], [Min]
FROM
(
    SELECT @@SERVERNAME AS ServerName,
           LEFT(name, 3) AS Memory,  -- 'max' or 'min'
           value
    FROM sys.configurations
    WHERE name LIKE '%server memory%'
) AS SourceTable
PIVOT
(
    MAX(value) FOR Memory IN ([Max], [Min])
) AS PivotTable;

Then I also used this code from my last post, but added the server name to it so I could see how much memory was available on each server.

SELECT @@SERVERNAME AS ServerName,
       physical_memory_kb * 9.5367431640625E-7 AS PhysicalMemoryGB  -- KB to GB (1/1048576)
FROM sys.dm_os_sys_info
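If you would rather see it all in one row per server, here is a rough sketch that folds the configured values and the physical RAM together (it assumes SQL Server 2012+, where the column is named physical_memory_kb):

SELECT @@SERVERNAME AS ServerName,
       MAX(CASE WHEN name = 'max server memory (MB)' THEN CONVERT(BIGINT, value) END) AS MaxMB,
       MAX(CASE WHEN name = 'min server memory (MB)' THEN CONVERT(BIGINT, value) END) AS MinMB,
       (SELECT physical_memory_kb / 1024 FROM sys.dm_os_sys_info) AS PhysicalMB
FROM sys.configurations
WHERE name LIKE '%server memory%';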

Hey what’s that thing I can’t remember…determining RAM amounts

I am frequently trying to determine how much RAM I can give SQL Server without starving the OS. Since my servers usually run only SQL Server and no other applications, I can give it everything except 2-4 GB for the OS. Depending on what the server is doing, and whether it still looks hungry, I will give the OS or SQL Server more. The question is: if I can’t remote onto the server, how do I know how much RAM it has? After much searching and calculating, I have come up with a T-SQL query to help. It will tell you how much RAM is on your instance’s server.

SELECT physical_memory_kb * 9.5367431640625E-7 AS PhysicalMemoryGB  -- KB to GB (1/1048576)
FROM sys.dm_os_sys_info
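One caveat: the column was renamed between versions. On SQL Server 2008/2008 R2 it is physical_memory_in_bytes, so on older instances an equivalent query looks like this:

SELECT physical_memory_in_bytes / 1073741824.0 AS PhysicalMemoryGB  -- bytes to GB
FROM sys.dm_os_sys_info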


It uses the system views, whose awesomeness I have barely begun to scratch the surface of, but I plan to learn more.

Ima make a deal with the SA so the SA don’t bite no more


T-SQL Tuesday #63 – How do you manage security?

Yay, so this is my first official blog party post. I was late with my post last month because I hadn’t figured out how it worked until I saw everyone else posting and wondered what was going on.

One of my goals for this year is to increase security across my 50 SQL Servers. The SA login is a dangerous one and I don’t want it out there on my servers; the trouble is that it is hard to exterminate. Here is my new plan.

First, I am going to script out my SA login and password using this script.

Next, I am going to rename the SA login to something else and give it a much stronger password. This information will go into my password safe so that I have it just in case I need it.
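A minimal sketch of that step (the new name and password here are placeholders, not what I will really use):

ALTER LOGIN sa WITH NAME = [NotReallySA];  -- rename the real SA
ALTER LOGIN [NotReallySA] WITH PASSWORD = N'SomeMuchStrongerP@ssphrase!';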

Then I am going to create a dummy SA. I have many vendor databases that claim they need SA, but they really don’t, and I don’t want them to have high privileges. I am going to work on lowering those privileges to give only what they need (this will be a slow, server-by-server process to get the permissions right). The reason I am keeping a login named “SA” is that some vendors hard-code that user name.
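Once the real SA is renamed, the old name is free for the decoy. A hypothetical sketch; it gets a strong password and nothing beyond the public role:

CREATE LOGIN [SA] WITH PASSWORD = N'AnotherStrongP@ssphrase!',
    CHECK_POLICY = ON;  -- decoy only: member of public, no server roles granted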

As I was talking to people in the #SLCSQL user group last night, someone suggested that we could also monitor that dummy SA login to watch for attacks. It is a great idea, and I plan to include monitoring on the new login.
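I haven’t settled on the mechanism yet, but failed attempts against the decoy already land in the SQL Server error log (failed-login auditing is on by default), so even something as simple as this sketch, scheduled and mailed out, could be a start:

-- Scan the current error log for failed logins against the decoy SA
EXEC master.dbo.xp_readerrorlog 0, 1, N'Login failed', N'SA';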

Lars Rasmussen suggested I have a login on each server that will always be there to handle running jobs and other database needs. I plan to include this too, so that even without a proper SA login I will still have one that can handle all my fun stuff.

By doing all of this, my SA won’t bite me anymore.

T-SQL Tuesday is a blog party started by Adam Machanic (b|t) just over five years ago.  This month it is hosted by Kenneth Fisher (b|t).

We could be immortals, just not for long when using a duplicate delete!

Sometimes awesome things just happen. Today Rob Farley (@Rob_Farley) was helping me with a previous post about my dates table, and just as a side note he said, “Oh, let me show you something else cool.” It was really cool, so I asked if I could add it to my blog, since I never know when this problem will strike.

I have a table about superheroes and their cape colors. I made a mistake and put Batman in there twice. But since there is no primary key, how do I tell SQL Server which one to delete?

Hero Table
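The table itself lives in that screenshot, so here is a hypothetical stand-in if you want to play along (column names taken from the query below):

CREATE TABLE dbo.HeroCapeColor (HeroName VARCHAR(50), CapeColor VARCHAR(20));
INSERT INTO dbo.HeroCapeColor (HeroName, CapeColor)
VALUES ('Batman', 'Black'), ('Batman', 'Black'), ('Superman', 'Red');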

Since they are exactly the same, I can let SQL Server sort it out. This uses both a CTE (I had never used one without some kind of join before today) and ROW_NUMBER() with OVER, which I am learning about. So cool!

WITH SuperHeroDuplicates
AS
(
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY HeroName, CapeColor
                              ORDER BY HeroName, CapeColor) AS rownum
    FROM dbo.HeroCapeColor
)
DELETE
FROM SuperHeroDuplicates
WHERE rownum > 1;

Thanks Rob!

—————UPDATE—————

Kenneth Fisher (@sqlstudent144) also wrote a blog post about another way you can accomplish this task if you only have a few of them to delete.
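His post isn’t reproduced here, but in that spirit, a minimal sketch of the few-rows case. Since the duplicates are identical, it doesn’t matter which one DELETE TOP removes:

DELETE TOP (1)
FROM dbo.HeroCapeColor
WHERE HeroName = 'Batman' AND CapeColor = 'Black';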

He has been a super big help to me. He taught me how to display my code better in my blog so it is easier to read and copy. He also encourages me, builds my confidence, and even included a link to one of my posts in an article. It made me feel special, like what I have to say matters to other people. I love how he comments on my posts and gives me ideas on how to make them better. I feel so lucky to be part of such a great community of people who are so thoughtful, selfless, and giving. Thanks, Kenneth, for being such a great example to me!

Help me, help me, my logs are over-sized!

I have a lot of servers and only home-grown monitoring. I needed a solution that would tell me if a log file started to grow outside of its normal size. I listened carefully at PASS Summit to advice on how I should “right-size” my log files and came back with a plan to tackle all of them. Fast-forward a few months, and I am still having trouble even after sizing them to what I thought they should be. One of the problems is that my backup solution is “touchy” with log backups, though awesome at getting the full ones. I built a report to watch for missing transaction log backups, but it isn’t perfect yet. I needed a more immediate solution, and after getting yet another Disk Space Email Alert, I knew what I had to do. Here it is: my Large Log File Alert.

CREATE PROCEDURE [dbo].[usp_LargeLogFileAlert]
 @LogThreshold INT = 1024
AS
--This procedure sends out an alert message when a log file has exceeded a set threshold.
--It can be scheduled to run daily so you can act quickly to resolve.
SET NOCOUNT ON;

IF EXISTS (SELECT * FROM tempdb..sysobjects
           WHERE id = OBJECT_ID(N'[tempdb]..[#LargeLogFileAlert]'))
    DROP TABLE #LargeLogFileAlert

CREATE TABLE #LargeLogFileAlert (
 DatabaseName VARCHAR(50) NOT NULL,
 LogMB INTEGER NOT NULL)

DECLARE @SizeMB INT
DECLARE @DatabaseName VARCHAR(50)
DECLARE @AlertMessage VARCHAR(500)
DECLARE @MailSubject VARCHAR(100)

/* Populate #LargeLogFileAlert with data */
INSERT INTO #LargeLogFileAlert
SELECT RTRIM(instance_name) AS DatabaseName, cntr_value/1024 AS SizeMB
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Log File(s) Used Size (KB)'
AND instance_name != '_Total' AND cntr_value/1024 > @LogThreshold --Threshold in MB

DECLARE LogAlert CURSOR FAST_FORWARD FOR
SELECT DatabaseName, LogMB FROM #LargeLogFileAlert 

OPEN LogAlert
FETCH NEXT FROM LogAlert INTO @DatabaseName, @SizeMB

WHILE (@@FETCH_STATUS = 0)
BEGIN
    SET @MailSubject = 'Log for ' + @DatabaseName + ' has exceeded threshold on ' + @@SERVERNAME
    SET @AlertMessage = @DatabaseName + ' is ' + CAST(@SizeMB AS VARCHAR) + ' MB. Please verify transaction log backups are working.'
    -- Send out email
    EXEC msdb..sp_send_dbmail @recipients = 'you@email.com', --Change to your email
        @subject = @MailSubject,
        @body = @AlertMessage
    FETCH NEXT FROM LogAlert INTO @DatabaseName, @SizeMB
END
CLOSE LogAlert
DEALLOCATE LogAlert
DROP TABLE #LargeLogFileAlert

GO

The threshold is adjustable per server; I defaulted it to 1024 MB because my servers have a lot of small databases.

Next I tossed it into a SQL Agent job and have it run once a day on each server. It is lightweight, so I am not worried about it causing performance issues. It will email me once for each database whose log file is larger than the limit I set, and it won’t email at all if there aren’t any. Once I get my logs sized properly, I can turn the jobs off. I used my CMS to deploy the stored procedure and SQL Agent job to all my servers, and yay, I know more about my environment.
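For anyone who wants to script the job too, here is a hedged sketch of the daily schedule (the job name, run time, and database are examples, not gospel):

USE msdb;
GO
EXEC dbo.sp_add_job @job_name = N'Large Log File Alert';
EXEC dbo.sp_add_jobstep @job_name = N'Large Log File Alert',
    @step_name = N'Run usp_LargeLogFileAlert',
    @subsystem = N'TSQL',
    @database_name = N'master',  -- wherever you deployed the proc
    @command = N'EXEC dbo.usp_LargeLogFileAlert @LogThreshold = 1024;';
EXEC dbo.sp_add_jobschedule @job_name = N'Large Log File Alert',
    @name = N'Daily 6 AM',
    @freq_type = 4,        -- daily
    @freq_interval = 1,
    @active_start_time = 060000;
EXEC dbo.sp_add_jobserver @job_name = N'Large Log File Alert', @server_name = N'(local)';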

This is my first attempt at joining the blog party, T-SQL Tuesday #62, hosted this time by Robert Pearl (@pearlknows). Hopefully I did it right!