In January, the awesome Tim Radney (b|t) talked to the Utah user group about best practices. One that he mentioned was rolling over your error logs every day and keeping 35 logs (a month plus 3 reboots). I loved this idea and implemented it using what I had done here, adding it to an agent job.
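If you want to set up the daily roll yourself, here is a rough sketch of what that agent job step could look like. The registry write is the same "NumErrorLogs" setting SSMS changes under "Configure SQL Server Error Logs"; adjust to taste.

```sql
-- Keep 35 error logs (run once); this writes the same value
-- that SSMS sets under "Configure SQL Server Error Logs".
EXEC master.dbo.xp_instance_regwrite
    N'HKEY_LOCAL_MACHINE'
  , N'Software\Microsoft\MSSQLServer\MSSQLServer'
  , N'NumErrorLogs'
  , REG_DWORD
  , 35;

-- Then schedule this in a nightly agent job so the log rolls every day.
EXEC sys.sp_cycle_errorlog;
```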
Then I realized we didn’t have any alerts for our logs rolling too often. Way back in my career, it was something I would watch, because it could mean someone was trying to hack your system and cover their tracks by rolling your logs over a bunch. I fought so much with figuring out how to tell whether my logs were rolling over that I had to save it for the future.
DROP TABLE IF EXISTS #EnumErrorLog;

CREATE TABLE #EnumErrorLog (
    [Archive#] varchar(3) NOT NULL PRIMARY KEY CLUSTERED
  , [Date] datetime NOT NULL
  , [LogFileSizeByte] int NOT NULL
);

INSERT INTO #EnumErrorLog ([Archive#], [Date], [LogFileSizeByte])
EXEC sys.sp_enumerrorlogs;

SELECT CASE WHEN COUNT([Archive#]) >= 5 THEN 1 ELSE 0 END
FROM #EnumErrorLog
WHERE [Date] > DATEADD(hour, -3, GETDATE());
I create a temp table so I can execute a system stored procedure, pull its output into the table, and select it back out. I run this alert check once an hour; if five or more error logs have been created in the last 3 hours, it returns 1 (alert) instead of 0 (do nothing). I am using a third-party tool right now, but I bet this could be set up with native SQL alerts or agent jobs.
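As a starting point for the native route, here is one way the same check could be wrapped in an hourly agent job step that emails you instead of returning 1. This assumes Database Mail is already configured; the profile name and recipient address below are placeholders.

```sql
-- Sketch of the same check as a SQL Agent job step, assuming
-- Database Mail is set up; profile and recipient are placeholders.
DROP TABLE IF EXISTS #EnumErrorLog;

CREATE TABLE #EnumErrorLog (
    [Archive#] varchar(3) NOT NULL PRIMARY KEY CLUSTERED
  , [Date] datetime NOT NULL
  , [LogFileSizeByte] int NOT NULL
);

INSERT INTO #EnumErrorLog ([Archive#], [Date], [LogFileSizeByte])
EXEC sys.sp_enumerrorlogs;

IF (SELECT COUNT([Archive#])
    FROM #EnumErrorLog
    WHERE [Date] > DATEADD(hour, -3, GETDATE())) >= 5
BEGIN
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = N'DBA Mail'        -- placeholder profile
      , @recipients   = N'dba@example.com' -- placeholder address
      , @subject      = N'Error log is rolling too often'
      , @body         = N'5 or more error logs were created in the last 3 hours.';
END;
```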
The song for this post is Little Bit of Love by JP Cooper; it makes me smile even on the toughest days. *hugs*