Posted by: sqlswimmer | January 31, 2017

Pesky Percent File Growth

As DBAs we all know setting your file growth to grow by percent is not optimal.  It can cause all kinds of issues, which rear their ugly heads as performance problems (see these articles by Brent Ozar and Tim Ford).  So, when I have to support a third party application that automatically adds data files using percent instead of fixed size, it really irritates me.  I got tired of seeing these new files show up on my daily exceptions report, so I decided to do something about it.  This post explains what I did.

I have a home-grown process that goes out and collects all kinds of information about my servers on a daily basis.  Once that process is complete it sends reports, via email, so I can get a quick look at things when I first arrive at work in the morning.  One of those reports is my file exception report.  It reports things like excessive data/log file growth, data/log files that are almost full, data/log files that use percent file growth, etc.  The first time a file showed up on my exceptions report with percent file growth, I decided I needed to be notified before the report landed in my inbox, so I created a server-level trigger that fires on the ALTER DATABASE command.  This trigger captures all the relevant information and sends me an email.  Here’s the code I used for my trigger:

CREATE TRIGGER [ddl_trig_alterdatabase]
ON ALL SERVER
FOR ALTER_DATABASE
AS
   DECLARE @Subject nvarchar(255)
      , @Body nvarchar(MAX)
   SELECT @Subject = N'A database was altered on ' + @@SERVERNAME
      , @Body = EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]','nvarchar(max)')
   EXEC msdb.dbo.sp_send_dbmail
      @recipients = 'myemail@emaildomain.com', -- varchar(max)
      @subject = @Subject, -- nvarchar(255)
      @body = @Body
GO
ENABLE TRIGGER [ddl_trig_alterdatabase] ON ALL SERVER
GO


This worked great: I found out before my report showed up and could address the issue when it happened.  Unfortunately, I discovered that one of the applications was making this change in the middle of the night.  I certainly didn’t want to have to wake up in the middle of the night to address this issue, since it really isn’t a “production down” type of problem (and let’s face it, no DBA wants to be woken in the middle of the night for anything, let alone something that is not production down).

I decided I needed to do something other than just send an email notification; I needed to take corrective action when it occurred.  So I wrote a little stored procedure that takes the ALTER DATABASE statement as a parameter, parses it and takes the appropriate corrective action.

Simple enough, right?  Now I just need to add the call to my newly created stored procedure in my server level trigger and we are good to go.  But wait, you can’t ALTER a database within an ALTER DATABASE statement (don’t believe me? Use this as a learning exercise to see what happens when you try).  So what could I do?  There are several things you could do, but I chose to create a table that holds the newly created ALTER DATABASE statement and insert the record there.  Then I created a SQL Agent job that runs once every hour, reads that table, executes any entries it finds, and deletes them after they execute successfully (sketched below).
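Here’s a minimal sketch of that table and the hourly job step.  Only the dbo.DBAAlterDatabase name and its SQLText column are fixed (the stored procedure below inserts into them); the Id and InsertedDate columns and the job-step logic are just one way to do it, not necessarily how yours should look:

CREATE TABLE dbo.DBAAlterDatabase
(
   Id INT IDENTITY(1,1) NOT NULL PRIMARY KEY
   , SQLText NVARCHAR(MAX) NOT NULL
   , InsertedDate DATETIME NOT NULL DEFAULT (GETDATE()) -- illustrative audit column
);
GO

-- Hourly SQL Agent job step: execute each queued statement,
-- deleting it only after it runs successfully.
DECLARE @Id INT
   , @SQLText NVARCHAR(MAX);

WHILE EXISTS (SELECT 1 FROM dbo.DBAAlterDatabase)
BEGIN
   SELECT TOP (1) @Id = Id, @SQLText = SQLText
   FROM dbo.DBAAlterDatabase
   ORDER BY Id;

   BEGIN TRY
      EXEC sp_executesql @SQLText;
      DELETE FROM dbo.DBAAlterDatabase WHERE Id = @Id;
   END TRY
   BEGIN CATCH
      -- Leave the row in place for investigation and fail the job step
      THROW;
   END CATCH
END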

Here’s the code for my stored procedure:

CREATE PROCEDURE [dbo].[ChangePercentGrowthMaxSizeUnlimited]
@SQLText nvarchar(max)
AS
SET NOCOUNT ON

/* We start with something like this
ALTER DATABASE [DatabaseName]
ADD FILE (NAME = N'DataLogFileName'
         ,FILENAME = N'X:\DataLogFileName.ndf'
         , SIZE = 20
         , FILEGROWTH = 5%
         , MAXSIZE = UNLIMITED)
*/

/* We want to produce something like this
ALTER DATABASE [DatabaseName]
MODIFY FILE ( NAME = N'DataLogFileName'
            , MAXSIZE = 102400KB
            , FILEGROWTH = 10240KB )
*/

-- Local vars
DECLARE @AddFileText VARCHAR(8) = 'ADD FILE'
   , @ContainsAddFileText BIT = 0
   , @AddFileStartPosition BIGINT
   , @FileGrowthPercentText VARCHAR(16) = 'FILEGROWTH = %!%' -- the ! escapes the literal %
   , @ContainsFileGrowthPercent BIT = 0
   , @MaxSizeUnlimitedText VARCHAR(20) = 'MAXSIZE = UNLIMITED'
   , @ContainsMaxSizeUnlimitedText BIT = 0
   , @StartPosition INT
   , @EndPosition INT
   , @Length INT
   , @DatabaseName VARCHAR(128)
   , @FileName VARCHAR(128)
   , @AlterDatabaseLength INT = LEN('ALTER DATABASE ') -- LEN ignores the trailing space, so this is 14
   , @AlterDatabaseSQL NVARCHAR(MAX)

   -- Is it an ADD FILE operation?
   SELECT @AddFileStartPosition = PATINDEX('%' + @AddFileText + '%', @SQLText)

   IF @AddFileStartPosition > 0
   BEGIN
      -- It's an ADD FILE operation
      SET @ContainsAddFileText = 1

      IF @SQLText LIKE '%' + @FileGrowthPercentText + '%' ESCAPE '!'
      BEGIN
         -- It's adding a file using percent file growth
         SET @ContainsFileGrowthPercent = 1

         -- Is it setting MAXSIZE to UNLIMITED?
         IF PATINDEX('%' + @MaxSizeUnlimitedText + '%', @SQLText) > 0
         BEGIN
            SET @ContainsMaxSizeUnlimitedText = 1
         END

         -- Now we need to parse the ADD FILE expression and build a MODIFY FILE operation from the parts
         -- Get database name (everything between 'ALTER DATABASE ' and 'ADD FILE')
         SELECT @DatabaseName = LTRIM(RTRIM(SUBSTRING(@SQLText, @AlterDatabaseLength + 2, @AddFileStartPosition - 1 - @AlterDatabaseLength - 2)))

         -- Get filename
         -- Start by finding the start of the logical file name; [^FILE] keeps us from matching FILENAME
         SELECT @StartPosition = CHARINDEX('''', @SQLText, PATINDEX('%[^FILE]NAME %''%', @SQLText)) + 1
         SELECT @Length = CHARINDEX('''', @SQLText, @StartPosition) - @StartPosition
         SELECT @FileName = SUBSTRING(@SQLText, @StartPosition, @Length)

         -- Now create the ALTER DATABASE operation
         SELECT @AlterDatabaseSQL = N'ALTER DATABASE ' + @DatabaseName + N' MODIFY FILE ( NAME = N''' + @FileName + N''', '
         IF @ContainsFileGrowthPercent = 1
            SELECT @AlterDatabaseSQL = @AlterDatabaseSQL + N'FILEGROWTH = 10240KB'

         IF @ContainsMaxSizeUnlimitedText = 1 AND @ContainsFileGrowthPercent = 1
            SELECT @AlterDatabaseSQL = @AlterDatabaseSQL + N', MAXSIZE = 102400KB'
         ELSE
            IF @ContainsMaxSizeUnlimitedText = 1 AND @ContainsFileGrowthPercent = 0
               SELECT @AlterDatabaseSQL = @AlterDatabaseSQL + N'MAXSIZE = 102400KB'

         SELECT @AlterDatabaseSQL = @AlterDatabaseSQL + N' )'

         INSERT dbo.DBAAlterDatabase
         (SQLText)
         VALUES
         (@AlterDatabaseSQL)
      END
   END
   ELSE
   BEGIN
      -- It's not an ADD FILE operation
      PRINT 'It''s not an ADD FILE operation, no work to do.'
   END

RETURN 0


Here’s the code for my modified server level trigger:

CREATE TRIGGER [ddl_trig_alterdatabase]
ON ALL SERVER
FOR ALTER_DATABASE
AS
   DECLARE @Subject nvarchar(255)
      , @Body nvarchar(MAX)
   SELECT @Subject = N'A database was altered on ' + @@SERVERNAME
      , @Body = EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]','nvarchar(max)')
   EXEC msdb.dbo.sp_send_dbmail
      @recipients = 'myemail@emaildomain.com', -- varchar(max)
      @subject = @Subject, -- nvarchar(255)
      @body = @Body
   EXEC dbo.ChangePercentGrowthMaxSizeUnlimited @SQLText = @Body
GO
ENABLE TRIGGER [ddl_trig_alterdatabase] ON ALL SERVER
GO
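If you want to see the whole chain work end to end, you can mimic what the application does against a scratch database.  This is just a smoke test; the database name, logical file name and path below are all made up:

ALTER DATABASE [ScratchDB]
ADD FILE (NAME = N'ScratchDB_Data2'
         , FILENAME = N'X:\ScratchDB_Data2.ndf'
         , SIZE = 20
         , FILEGROWTH = 5%
         , MAXSIZE = UNLIMITED)
GO

-- The trigger should have queued a corrective MODIFY FILE statement;
-- check the holding table (run this in the database that holds it)
SELECT SQLText FROM dbo.DBAAlterDatabase
GO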

Works like a charm!  But wait, you might notice that I’m making more than a few assumptions in my stored procedure, and you would be correct.  I feel like I need to add a disclaimer to this post, the same way they add disclaimers to pharmaceutical commercials.

Here are my assumptions: 

  1. Only one file is being created at a time. 
  2. We always want to change our file growth to 10MB. 
  3. We don’t want a max file size of unlimited and we always want to set our max file size to 100MB. 
  4. There will be no errors.
  5. I am in NO way responsible if this code breaks something on your server.

Explanation of assumptions:

  1. What fun would it be if I did all the hard work for you?  This can easily be adapted to work with multiple files being created at the same time.  You can do it, I have faith in you.
  2. For my particular instance, I know the model database settings for this server and I hard-coded them, because that’s what I wanted.  You could easily adapt the code to use your model database settings or any other value for that matter (HINT: think sys.sysfiles; see the sketch after this list).
  3. See explanation of assumption 2 above.
  4. I removed all my standard stored procedure framework code (which includes error checking) for brevity.  You should ALWAYS have error checking in your stored procedures!
  5. You should NEVER assume code is not malicious in nature and add it to production without a thorough understanding of what it’s doing.  Shame on you if you did.
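For example, instead of the hard-coded 10240KB and 102400KB, you could look up model’s data file settings at runtime.  This is a sketch using sys.master_files (the modern cousin of the sys.sysfiles hint above); growth and max_size are in 8 KB pages when is_percent_growth = 0:

DECLARE @ModelGrowthKB INT
   , @ModelMaxSizeKB INT

SELECT @ModelGrowthKB = growth * 8 -- pages to KB
   , @ModelMaxSizeKB = CASE WHEN max_size = -1 THEN NULL -- -1 means unlimited
                            ELSE max_size * 8 END
FROM sys.master_files
WHERE database_id = DB_ID('model')
   AND type_desc = 'ROWS'
   AND is_percent_growth = 0 -- if model itself uses percent growth, fall back to a fixed default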


Posted by: sqlswimmer | January 31, 2017

My First SQL Saturday – As A Speaker!

I am so honored and excited to have been selected to speak at SQL Saturday Richmond on March 18, 2017.

I will be presenting my What is Power BI? session.  There have been a ton of changes to Power BI since I last presented this, so I’m off to start updating my slide deck. 

If you are close to Richmond, VA on March 18, 2017 and want to learn more about what Power BI is, please register for this event and stop by and see me.  I’d love to have you as would the SQL Saturday Richmond team.

Posted by: sqlswimmer | January 23, 2017

How to Exclude Entire Sections in VS Schema Compare

Right now I am so excited, and a little embarrassed.  After using Visual Studio (VS) for database projects for the last 7 years, I am just now finding out about this feature?!  What feature is that, you ask?  Let me tell you…

When using VS for database projects I typically use my environment specific Publish xml file to deploy changes to my local database when experimenting with code changes.  However, every once in a while I will have to use the New Schema Compare tool from the Tools | SQL Server menu when I have a “one-off” database that I need to synch to my database project.

Quite frequently, because these are one-off databases, there will be a ton of junk items that are in the database but not in my database project, or lots of objects in my project that aren’t in my database.  Either way, I want to ignore those changes.  In the past, I’ve always manually unchecked each item, which is tedious when you have more than two items to uncheck.

[Image: SQLSchemaCompare2]

I always thought there should be a better way than having to manually uncheck each object, but never thought more about it or how to do it.

Well, today I figured it out.  All you have to do is right-click on the section and, ta-da, you can Include or Exclude all objects depending on their existing state.  So in the image below, I right-click on Delete and select Exclude

[Image: SQLSchemaCompare3]

and I’m left with the following:

[Image: SQLSchemaCompare4]

In just one click instead of 14 (if I wanted to exclude all the table drops).

Like I said, I have no idea why it took me so long to figure this out, but surely I can’t be the only person who didn’t know this, so now you do too.

Posted by: sqlswimmer | October 12, 2016

So Long, Farewell, I Hate to Say Goodbye

The Sound of Music is one of my all time favorite movies.  It reminds me of some great times with family during the holidays when I was a kid.  The adults would all be in the living room talking about “boring adult stuff” and all us kids would be piled in the family room on the floor with the fire place roaring and singing along at the top of our lungs to every single song. 

Those are some great memories, and those memories are what make us “us,” and that is why this post is very hard for me to write.  My mother has been officially diagnosed with Alzheimer’s and it has been devastating for my family.

Most of you know I do lots of volunteer work for PASS.  It’s one of my PASSions, I love working with folks in our #SQLFamily and the folks at PASS HQ.  However, with my mother’s diagnosis, something has to give and unfortunately my volunteer work with PASS is one of the things that will be affected.  I will be scaling back my volunteer efforts with PASS. 

Currently I serve on the Program Committee as a Program Manager and am a PASS chapter leader, among other various volunteer things throughout the year.  I have been on the Program Committee for the last seven years, spending the last two as a Program Manager.  We have come a long way from where we were when I started seven years ago, but there’s always room for improvement.  I really wish I could be part of the team that keeps making it better, but my family needs my attention now, so I will be stepping down from the Program Committee.  I’ve made some good friends along the way and am thankful for the opportunity I have had to give back to the community by being part of this great team.  I will continue to serve as the Chapter Leader in Greensboro and various other opportunities when they come up, however those opportunities will be in a limited capacity now.

I have some great memories of my PASS family, which is why I really hate to say goodbye, so I’m going to think about raindrops on roses and whiskers on kittens, bright copper kettles and warm woolen mittens… then maybe I won’t feel so bad.

Posted by: sqlswimmer | September 1, 2016

SQL Saturday Charlotte is Coming!

That’s right, SQL Saturday Charlotte is coming September 17, 2016.  Next to the annual PASS Summit, this is my favorite SQL Event!  This is the fifth year that the Charlotte BI Group (CBIG) has put on this great event.  They’ve expanded the event this year to include two Pre-Cons on Friday, September 16, 2016, one from BI and data analytics queen Jen Underwood and the other from performance guru Adam Machanic.  While the Pre-Cons aren’t free like the event on Saturday, they certainly are a bargain for an entire day of training with well known experts in their respective fields.

If you haven’t already registered, I’d suggest you do it now before they go to a wait list.  This event is always jam packed full of great sessions from great speakers, both local and national.

Oh yeah, I will be there.  I’m helping out again this year with registration, so be sure to say hello if you see me.  I am always happy to see my #SQLFamily.

Posted by: sqlswimmer | June 30, 2016

SQL Server 2016, Database Mail and .Net 3.5 Framework

There were so many cheers when Microsoft announced that the .Net 3.5 Framework was no longer a prerequisite for installing SQL Server starting with SQL Server 2016.  Folks were so excited, until they started testing certain pieces of the product, specifically Database Mail.  That’s right, if you want to use Database Mail in SQL Server 2016, you have to install the .Net 3.5 Framework.

If you are installing SQL Server 2016 on Windows Server 2012 R2, this might be a problem for two reasons.  First, the .Net 3.5 Framework is no longer included by default when spinning up a new install.  Second, you will get no errors when testing Database Mail in SQL Server 2016.  Mail will be queued, but will not be sent and there will be no errors in the Database Mail log, the SQL Agent log or even the SQL Server error log.

So if you’ve gone through all the usual steps to troubleshoot Database Mail (or these steps) in SQL Server 2016 to no avail, be sure to double check that the .Net 3.5 Framework has been installed.  If not, you will need to install it, then apply ALL the patches for it.
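If you want to confirm you’re hitting this, the Database Mail views and procedures in msdb show the symptom clearly: messages pile up as unsent and the queue never drains, yet nothing reports an error.  A quick check looks something like this:

-- Messages stuck in 'unsent' with no failure recorded is the telltale sign
SELECT mailitem_id, send_request_date, sent_status
FROM msdb.dbo.sysmail_allitems
ORDER BY send_request_date DESC

-- The mail queue keeps growing because the external mail process never runs
EXEC msdb.dbo.sysmail_help_queue_sp @queue_type = 'mail'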

Posted by: sqlswimmer | June 23, 2016

Struggling With Due Diligence

It’s been a while since my last post, there are various reasons for my absence:  work, illness, having our house rewired, etc., but one of the biggest reasons is my struggle with how a few folks in the SQL community treat the PASS organization and those that volunteer their time to PASS.

As many of you know, I do a lot of volunteer work for PASS.  I do this because it’s an organization that I truly believe in.  It was created by the community for the community.  It’s a place where data professionals can exchange knowledge freely, no strings attached.  To my knowledge there is no other community in the IT world quite like it, we even have our own hashtag on twitter, #sqlfamily.  But lately I’ve been struggling with how a few community members have reacted to policies/procedures/contracts.  Essentially starting a fire, pouring gas on it and walking away.

I’ve been involved with the Program Committee (they are the folks that select the content for the annual Summit) since 2010.  I took over my local PASS chapter when the existing chapter leader stepped down.  I help out with local/regional SQL Saturdays when my schedule allows.  I moderate 24 Hours of PASS when my schedule allows.  I volunteer while on site during the Summit as an Ambassador.  I’ve served on the NomCom (2012).  You get the idea, I am a true believer and not just in lip service, so when someone “attacks” an organization that I truly believe in, I get more than a little irritated.

One thing I have learned throughout the years of being a DBA is that you need to be able to prove a problem is NOT yours by exploring all the other areas that could be causing it.  You have to look at it from all angles, not just the DBA angle.

We’ve all been there.  Customer calls to say the application is slow and a trouble ticket is automatically created and assigned to the DBA team because the application uses a database.  This is somewhat akin to saying the issue with a car’s performance is the gas – all cars use gas so it must be the gas.  It’s tiresome and frustrating, but we go through the motions to prove the issue is not ours.  In the financial world, it’s called due diligence.

This kind of due diligence has proven useful in other areas of my life, both personal and professional.  I would ask that those in the community please do their own due diligence BEFORE posting a blog, sending a tweet or starting the good old-fashioned rumor mill.  Lately several community members seem to have forgotten the kind of influence they carry with the rest of the community and have not done that due diligence first.

You will notice that I did not name any names.  That would really defeat the purpose of this post.  I don’t want to start a fire with this post then allow gasoline to be poured on it with all the comments (not that there would be tons, because I don’t carry a lot of influence in our community – not a slight, just stating a fact) and then walk away.

I want you ALL to think about what you post, tweet or say BEFORE you do it.

Posted by: sqlswimmer | February 15, 2016

Did you know?

Did you know that the call for speakers for PASS Summit 2016 opened on February 3, 2016?

Did you know that the call for speakers for PASS Summit 2016 ends on March 2, 2016?

Did you know that there’s a Speaker Resource Page?

Did you know that you can have your abstract reviewed confidentially BEFORE you submit it for the Summit?

Did you know that your abstract is being reviewed by members of the SQL community?

Did you know that this is the second year the Program team has offered this service?

Did you know that only 32 people took advantage of this service last year?

Did you know that even experienced speakers use this service?

Did you know that you only have until February 26, 2016, to submit an abstract for confidential coaching?

Did you know that you have nothing to lose?

So what are you waiting for?  You’ve been thinking about submitting for a while now but were unsure if your abstract was good enough.  Well, now you have no excuse, use the confidential coaching service and find out.

Posted by: sqlswimmer | January 25, 2016

What is Power BI?

One of the “benefits” of being a chapter leader is that sometimes it means doing a presentation yourself when you can’t get a speaker.  I fell into this exact scenario for February’s meeting of Triad SQL.  I was trying to figure out what to present when the planets aligned: I read the #EntryLevel post in this month’s PASS Connector News, and my boss asked me about Power BI.  He wanted to know more about it and whether it was something we could use.

I decided to put a presentation together to answer those questions.  This post is basically the flattening out of my PowerPoint presentation.

The What/Who/Why/Flavors of Power BI

What is Power BI?

When I Googled (yes, I used that as a verb!) “What is Power BI”, this is what I got: “Power BI is an amazing business analytics service that enables anyone to visualize and analyze data.”  This sounds cool, but isn’t all that helpful.  After further research, I found this definition, courtesy of powerbi.microsoft.com:

Power BI is a cloud-based business analytics service that enables anyone to visualize and analyze data with greater speed, efficiency, and understanding. It connects users to a broad range of data through easy-to-use dashboards, interactive reports, and compelling visualizations that bring data to life.

Why use Power BI?

There are lots of reasons to use Power BI other than “it’s so cool.”  For instance, Power BI makes it easy to see, in one glance, all the information needed to make decisions.  It also allows you to monitor the most important information about your business.  Power BI makes collaboration easy, and when I say easy I mean EZ!  You can also create customized dashboards tailored to those C-Suite folks, or make a completely different dashboard based on the same data for those that actually do the work.

Who can use Power BI?

Anyone who has a work or school email address can use Power BI.  Sorry, no personal email addresses.  Also no government (.gov) or military addresses (.mil).

Flavors of Power BI

There are two flavors of Power BI, Free and Pro.  You can do everything with Pro that you can do with Free, plus a few other things.  Here’s a little comparison of the two; there are more differences, but these are the big ones.

Free

  Data refresh frequency: Daily
  Data capacity limit: 1GB/user
  Streaming rate: 10K rows/hour
  Data sources: limited to content packs for services and importing files

Pro

  Data refresh frequency: Hourly
  Data capacity limit: 10GB/user
  Streaming rate: 10M rows/hour
  Data sources: the free ones plus direct query datasets and on-premises data
  Collaboration with content packs

As of January 21, 2016, the Pro flavor goes for $9.99 USD per month per user.

Also, there is the previous version/flavor of Power BI referred to as Power BI for Office 365, which will be deprecated on March 31, 2016, so I am not including this version/flavor in this post.

The How of Power BI

The building blocks of Power BI are Dashboards, Reports & Datasets.

Dashboards

Dashboards are made of tiles, each containing a single visualization created from the data of one or more underlying Datasets.  When I first read this all I heard was “blah blah blah Datasets”.  What this means is simply this: it’s a collection of reports that are all displayed together for a specific reason.  It could be that you want all your sales guys to see different views of how they are doing compared to budget/forecast, or it could be that you want to give your C-Suite people a quick overview of how the company is doing as a whole.  You can tailor these dashboards to whatever suits your purpose.  The only reference to a limit on the number of dashboards I could find was on the Office 365 site, where it was listed as 100 per user or group.  I’m thinking the old adage “just because you can doesn’t mean you should” applies here, though.

Reports

A report is one or more pages of visualizations.  Reports can be created from scratch within Power BI or Power BI Desktop.  They are very easy to create: you simply click on the type of visualization you want to display, then select the data to be used.  One caveat I will mention here is to be sure your data is formatted so that it can be easily consumed by Power BI.  See this link for tips and tricks on how to build a “proper dataset” for Power BI.  Just as with Dashboards, there is a limit on the maximum number of reports: 100 per user or group.

Datasets

A Dataset is something that you import or connect to.  It contains the actual data you want to translate into visualizations.  Right now the file types you can import into Power BI are limited to Excel, comma separated values (.csv) and Power BI Desktop files (.pbix).  As for connecting to data sources, you can choose from many of the content packs available via the Power BI site, like Google Analytics, Bing, MailChimp, Salesforce and GitHub, just to name a few, or you can connect to a database.  As with anything that sounds too good to be true, you are limited in the databases you can connect to: right now the list is Azure SQL Database, Azure SQL Data Warehouse and SQL Server Analysis Services (tabular model only).  There is a 250MB limit on the size of a dataset that you can import into Power BI, and a limit of 100 Datasets.

References

https://powerbi.microsoft.com/en-us/documentation/powerbi-service-get-started/

https://powerbi.microsoft.com/en-us/documentation/powerbi-videos/

https://powerbi.microsoft.com/en-us/documentation/powerbi-webinars/

https://technet.microsoft.com/library/mt282164.aspx

That’s it.  I hope this post provided a little bit of insight into Power BI and whether it’s something that can be useful to you and/or your company.  Check out the following links if you want a deeper dive into Power BI.

Power BI Blog

Melissa Coates Blog

Reza Rad Blog

Chris Webb Blog

Posted by: sqlswimmer | January 22, 2016

What Do You Want?

It’s that time of year: planning for PASS Summit 2016.  We’ve already put out the Call for Volunteers, which closes today, January 22, 2016, so get those applications in and be part of the team that helps determine content for the Summit.  Don’t want to volunteer but still want to help determine content for the Summit?  Then take the survey to tell us what you want to see.  It’s a quick survey, less than five minutes, and you only have until Wednesday, January 27, 2016 to tell us what you want.  What are you waiting for?  Get to it!  You may even win a USB of the session recordings.

What would you like to see at Summit 2016?
