If I need to rearrange my fragmentation, I will for you….

Oh my goodness, I have been buried and learning tons about indexing. Something super cool that I learned was how to tell if you need to adjust the fillfactors on your indexes. My whole DBA career I have usually set it to 85 and forgotten it. I learned that I can check my CommandLog table and see how often each index is being rebuilt. I am currently using Ola Hallengren's maintenance solution, and when my indexes are rebuilt for maintenance (nightly in this case), it is all logged to a table named CommandLog in the master database. If an index is rebuilt every day, adjust the fillfactor down (say 75) so that it rebuilds less often. If it almost never rebuilds, adjust the fillfactor up (say 90) so it gets rebuilt once in a while and you aren't wasting space in the index. Here is the query I am using to see how often an index has been rebuilt in the last 10 days.

-- Count how many times each index command has run in the last 10 days
SELECT Command, COUNT(StartTime) AS Rebuilt
FROM master.[dbo].[CommandLog]
WHERE CommandType LIKE '%Index%' AND StartTime > GETDATE()-10
GROUP BY Command
ORDER BY Rebuilt DESC;

When I see ones that have a rebuilt number of 10, 9, or 8, I know those are rebuilding nearly every day. I will adjust their fillfactors down so that I don't rebuild as often. My sweet spot right now is once a week, but that doesn't work for every index (or environment). This is where the art part comes into the DBA world and I have to think about what is best for the system. If I have already adjusted something down to 75 and it is still rebuilding frequently, I will adjust that number down more (65, then 55), and back up if it drops too low. I also evaluate how much that index is used and how important it is to keep healthy. If it is hardly ever used, do I want to waste fillfactor space on it?
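For reference, here is the kind of statement I run to make the adjustment; the index and table names are made up for illustration:

-- Rebuild with a lower fillfactor so each page keeps 25% free space
-- for new rows, which should mean fewer nightly rebuilds.
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
REBUILD WITH (FILLFACTOR = 75);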

Indexing is absolutely an art, but now I have a new brush for my kit by being able to query the CommandLog.

This post’s song is Particles by Nothing But Thieves

All of my friends say, “How much longer will it run?”

I have frequently needed to see how much longer a backup was going to run, or how much longer a restore was going to run. This little bit of code is something I use to help me plan. I replace "BACKUP" with "RESTORE" if I want to see how much longer to plan for a restore. I also use it when I am tracking a rollback: I remove the WHERE clause and get a large list, and the row that has a percentage but no clear explanation is usually the one I want. Sometimes the time remaining is a lie. I have had it imply 4 hours when it took 13. The comfort is that I can tell it is still working.

-- Both time columns come back in milliseconds, so divide by 60000.0 for minutes
SELECT command, percent_complete,
 total_elapsed_time / 60000.0 AS elapsed_minutes,
 estimated_completion_time / 60000.0 AS remaining_minutes
FROM sys.dm_exec_requests
WHERE command LIKE 'BACKUP%';

The song for this post is “Just Another Girl” by The Killers

SELECT * Is Bittersweet…I’m not trying to hurt you, I just love to query….


In preparation for my Groupby.org presentation on Friday, I wanted to do a post about why SELECT * is bittersweet.

Let’s start with the bitter first:

  1. If I have a wide table (lots of columns, or columns with large data types) and I select them all, I am reading a lot more data than I probably need, which uses more CPU. It will also take up space in my buffer cache and can knock other things out. I am making this sound extra scary to discourage anyone from doing it against a whole table or even a lot of rows.
  2. Indexing mostly goes out the window with SELECT * because no nonclustered index covers every column, so SQL will usually scan the whole table. If I create an index using every column to make it work (and please don't do this, it stores the table twice), someone will come along one day and add a column to my table and it will do a full table scan again. There will be a domino effect in my system leading to the end of the world, or at least poor performance.
  3. Using SELECT * in a Stored Procedure or View can seem simple and easy. I automatically have all the columns I need. What happens when I add a column to the table that lives underneath? It can change the behavior or even break the Stored Procedure or View and anything that relies on them. Some of the issues may not show up at first; for example, it may sort differently, or I may get duplicate data. Then I have to go back through everything trying to figure out what changed and where to fix it. Huge headache ahead on this one (see the sketch after this list).
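If you want to see the view surprise for yourself, here is a quick sketch (all the object names are made up). The view keeps returning its original column list until someone refreshes it, which is exactly the kind of thing that is hard to track down later:

CREATE TABLE dbo.Hero (HeroName VARCHAR(100), City VARCHAR(100));
GO
CREATE VIEW dbo.HeroView AS SELECT * FROM dbo.Hero;
GO
ALTER TABLE dbo.Hero ADD Sidekick VARCHAR(100);
GO
-- The view still returns only HeroName and City; its column list
-- was locked in when the view was created.
SELECT * FROM dbo.HeroView;
GO
-- Refreshing the view's metadata finally picks up Sidekick.
EXEC sp_refreshview 'dbo.HeroView';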

We have seen the bitter, now let me show you the sweet:

  1. When I have a small table and can’t remember the names of all the columns, SELECT * is a quick way for me to see what the columns are named.
  2. When I need to see what the data looks like in the table, I can use SELECT TOP 5 * and get an idea of the data. By limiting the rows I bring back, I don't cause as many of those scary reads, which means less CPU, and I don't fill up my buffer cache with stuff I don't need.
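For example, with a made-up table name:

-- TOP keeps the read small, so I get a peek at the data without the big scan
SELECT TOP (5) * FROM dbo.MyWideTable;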

SELECT * can be a friend if it is used carefully and knowingly.

*Update* Here is the GroupBy Presentation I did on T-SQL for Beginners if you would like to watch it.

The song for this post is Bittersweet by Panic! At the Disco

Dynamic Data Masking keeps playing…keep your hands off my data!

As promised, I have been playing with Dynamic Data Masking and here are some things I have learned. I downloaded the WideWorldImporters sample database so I would have a place to play, and it already includes some masked columns.

This query will show us what has already been masked:

SELECT mc.name AS column_name, t.name AS table_name, mc.is_masked, mc.masking_function
FROM sys.masked_columns AS mc
JOIN sys.tables AS t
 ON mc.[object_id] = t.[object_id]
WHERE is_masked = 1;

Here we can see the column and the table that is being masked and what masking function is being used.


This is a great time to talk about the different masking functions and what they do.  The four types in 2016 are Default, Email, Random and Custom String.

Default – For numeric and binary types it will show a “0”, for dates it will show 01/01/1900, and for strings it will show xxxx’s (more or less depending on the size of the field).

Email – It will expose the first letter of the email address and always end with “.com” no matter what the real suffix is (.net, .edu, etc.). For example, Batgirl@DC.com would now be bXXX@XXXX.com.

Random – For numeric types, it replaces the value with a number randomly generated within a range you set. Kind of like the game “Pick a number between 1 and 10,” but for SQL.

Custom String – Lets you get creative with how much you show or cover and what you use to cover (not stuck with just xxxx’s).
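The demo table below covers the custom string, email, and default functions; just so random() gets a turn too, here is a hypothetical column definition using it (the table and column names are made up):

-- random() only works on numeric types; each query shows a random value from 1 to 100
CREATE TABLE dbo.MaskDemo (SecretAge INT MASKED WITH (FUNCTION = 'random(1, 100)') NULL);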

Now for fun, let’s create a table that will be masked.

CREATE TABLE SuperHero (
 SuperHeroId INT IDENTITY(1,1) PRIMARY KEY
,HeroName VARCHAR(100)
,RealName VARCHAR(100) MASKED WITH (FUNCTION = 'partial(1,"XXXXXXX",0)') NULL
,HeroEmail VARCHAR(100) MASKED WITH (FUNCTION = 'email()') NULL
,PhoneNumber VARCHAR(10) MASKED WITH (FUNCTION = 'default()') NULL);

Let’s add some data that we will want to mask:

INSERT SuperHero (HeroName, RealName, HeroEmail, PhoneNumber) VALUES
('Batman', 'Bruce Wayne', 'batsy@heros.com', '5558675309' ),
('Superman', 'Clark Kent', 'manofsteel@heros.com','5558675308' ),
('Spiderman', 'Peter Parker', 'spidey@heros.com','5558675307' );

SELECT * FROM SuperHero;

and finally we create a low-privileged user who will only see the masked version of the data:

CREATE USER CommonPeople WITHOUT LOGIN; -- a user with no extra rights
GRANT SELECT ON SuperHero TO CommonPeople;

Now the test to see if CommonPeople have access to see all of our Superhero secrets:

EXECUTE AS USER = 'CommonPeople';
SELECT * FROM SuperHero;
REVERT; -- don't forget to switch back to your own permissions

Try it out and see for yourself how it looks. Now you have experienced Dynamic Data Masking 101 in SQL Server 2016!

The song for this post is Good Charlotte – Keep Your Hands Off My Girl

I won’t be late for this, late for that because I have Time Zone Info….

One of the new items in SQL Server 2016 is the super awesome time_zone_info table.  When I heard about it, I started to think about all the cool things that it could help me do.  First, let’s look at the table.

SELECT *
FROM [sys].[time_zone_info];


Yes, it is 132 rows of magic! Now that we have this super cool table, how do we use it? Let’s pretend that my data is time-stamped in US Mountain Standard Time, but I want to display it in Western Australia Standard Time.  I would do it like this:

SELECT GETDATE() AS GetDate_Time,
 GETDATE() AT TIME ZONE 'US Mountain Standard Time' AS Mountain_Time,
 GETDATE() AT TIME ZONE 'US Mountain Standard Time'
   AT TIME ZONE 'W. Australia Standard Time' AS W_Aus_Time;

I am including the GetDate column so you can see that GetDate is using my time zone, but I have to tell it what time zone it is before I can convert it to another one.


Caution: If I put in the Hawaiian time zone instead of Mountain time on the GetDate, SQL won’t correct me, it will just do the math like a good little system and assume I know what time zone I am using to start.
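Here is a quick way to see that caution in action; both columns convert the same GETDATE() to Western Australia time, but claiming the wrong starting zone quietly shifts the answer:

-- Same local time, two different claimed starting zones, two different results
SELECT GETDATE() AT TIME ZONE 'US Mountain Standard Time'
         AT TIME ZONE 'W. Australia Standard Time' AS From_Mountain,
       GETDATE() AT TIME ZONE 'Hawaiian Standard Time'
         AT TIME ZONE 'W. Australia Standard Time' AS From_Hawaii;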

I am really loving the new features in SQL 2016! I hope you are enjoying them too!

This post’s song is Cleopatra by The Lumineers

What can I say except “You’re Welcome for the AG voting script”

We recently had an issue where the network between our geo-cluster nodes would go down and both Availability Group instances thought they were supposed to take charge. When the network came back up, both of them still thought they were in charge. You can imagine that with an AG, you can’t have two instances that think they are in charge without problems. This brought up the question of how voting was configured between the two of them. This script helped a bunch:

SELECT member_name, member_state_desc, number_of_quorum_votes
 FROM sys.dm_hadr_cluster_members;

We found that the File Share Witness wasn’t working properly by checking the member state. In a simple AG, a good practice is to have each instance plus a File Share Witness, which keeps either side from accidentally taking over on its own. You’re Welcome.

The song for this post:  You’re Welcome From Moana