Posted by: sqlswimmer | November 17, 2017

New Session: Data Types Do Matter

I started speaking on a semi-regular basis this year after many years of trepidation.  You can read about my first speaking experience here.  It’s three years later and I have discovered that I love speaking.  I always wanted to be a teacher when I was younger, but it didn’t work out that way.  I have taught swim lessons off and on for over 20 years and coached a swim team, both things I am very passionate about, so speaking about SQL Server seemed like the next step.

I am mainly a BI developer now, using the Microsoft stack, but my roots are in DBA land.  There is one thing I have run across time and again in my BI development work, and it surprises me every time: people constantly using the wrong data type for their data.  So I decided people are either lazy or they just aren’t educated on the pitfalls of using the incorrect data type.  Hence my new session was born: Data Types Do Matter.
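To give a taste of the kind of pitfall the session covers, here is a tiny, contrived T-SQL sketch (the table and column names are made up purely for illustration).  Storing a date in a string column forces a conversion on every comparison, which keeps SQL Server from using an index on that column efficiently:

    -- Contrived example: the "bad" table stores a date in a string column.
    CREATE TABLE dbo.OrdersBad  (OrderID INT PRIMARY KEY, OrderDateText VARCHAR(30));
    CREATE TABLE dbo.OrdersGood (OrderID INT PRIMARY KEY, OrderDate DATE);

    -- The string column has to be converted row by row before it can be compared,
    -- so an index on OrderDateText is useless for this predicate.
    SELECT OrderID
    FROM   dbo.OrdersBad
    WHERE  CONVERT(DATE, OrderDateText) >= '20170101';

    -- With the correct data type, the same predicate is a straightforward seek.
    SELECT OrderID
    FROM   dbo.OrdersGood
    WHERE  OrderDate >= '20170101';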

I’m test-driving the session at the January 30, 2018, Triad SQL User Group meeting.  Come on out and say “Hi” if you’re free.  I’d love to see you and get your feedback on this brand new session.

Posted by: sqlswimmer | October 5, 2017

girls + data is coming to SQL Saturday Charlotte

I am so excited to be part of the organizing team for SQL Saturday Charlotte again this year.  The CBIG folks are always a pleasure to work with, and I am honored to be able to work with them.  I am proud to say that this year we are raising the bar in community outreach, specifically for young women who are interested in technology.

We have added a session targeted at young women in grades 6 – 8 who are interested in data analytics.  We are partnering with girls + data to bring a session to SQL Saturday Charlotte.  The cost is only $20 for a three-hour session, and the session is offered once in the morning and once in the afternoon, but hurry, seats are limited.

You can register for the sessions from the SQL Saturday Charlotte website or you can register using the links below.

Data Stories – Morning Session

Data Stories – Afternoon Session

See you on October 14, 2017!

Posted by: sqlswimmer | October 5, 2017

Visual Studio 2017 Irritants From the SSIS Developer View

I was among the many Integration Services (SSIS)/Database (DB) developers out there who cheered when Microsoft finally announced support for SSIS in Visual Studio 2017.  I mean really, VS2017 was released back on March 7, 2017, but if you wanted to develop any SSIS packages in VS2017, you had to wait for the release of SSDT on August 23, 2017.  That’s more than 5 months.  All the other kids got to play with the shiny new toy right away, but we had to wait.  I know there are a ton of reasons for this, but it still sucks.

That being said, I was very happy.  I could finally do ALL my development in the same tool: DB, SSAS, SSRS & SSIS.  But after only 30 days of using it, I have discovered a few things that just irritate me.  I always thought of myself as open to change, but maybe I’m not as open to change as I thought I was.

1 – ISO Is No Longer An Option

That’s right, you can’t just download an ISO and stick it on your network; you essentially have to create your own offline installation layout if you need to do an offline install.  As the title of this post says, this is an “Irritant”, not a deal breaker, but definitely an irritant.
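For reference, the way you “create your own” is to have the VS2017 bootstrapper download a layout folder for you.  A rough sketch of the command (the exe name depends on your edition, and the path and language here are just examples):

    vs_professional.exe --layout D:\VS2017Offline --lang en-US

You then point the offline machines at the installer inside that folder.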

2 – Start Page – It’s Baaack!

There is NO WAY to turn this annoying page off once you close a solution.  You can turn the Start Page off when you first open VS2017 (Tools | Options | Environment | Startup), but as soon as you close a solution, it shows up – ARGH!  See this thread; I’m not the only one annoyed by this behavior.  I’m a very visual person, and seeing a list of projects with really long paths doesn’t help me.  I need my Source Control Explorer to make sure I’m opening the right solution in the right branch.

[Screenshot: VS2017 Tools | Options | Environment | Startup]

3 – Work Items Open in Browser By Default

One of the cool things about using VS as your IDE is that it has this great add-in for source control, especially if you use VSTS; it’s almost a seamless integration (and everyone IS using some kind of source control, right?!).  You can open your list of User Stories based on iteration.  Once you have your list of User Stories, all you have to do is double-click the one you want and it opens right there in your VS environment.  Well, the default behavior for where your work items open has changed.  Your work items no longer open in VS; they open in a web browser.  You can change this default behavior back to opening in VS (Tools | Options | Work Items | General), but this behavior will be deprecated in the next major version of the product.  There is nothing more irritating than having to switch back and forth between applications when doing development work.  Pro Tip:  If you want folks to use Source Control, make it easy and all in ONE location!!!

[Screenshot: VS2017 Tools | Options | Work Items | General]

[Screenshot: VS2017 warning that opening work items in Visual Studio is deprecated]

4 – With Each New Update, Something Breaks

Okay, this is not new behavior in VS2017; unfortunately, it’s continuing behavior from previous versions of VS. <sigh>  I was really hoping that this behavior would stop, but sadly it still persists.  For example, since SSMS is now based on the VS Shell, if I update VS, it usually horks up (yes, that is a very technical term) the settings in my SSMS.

5 – Closing Solutions Takes FOREVER!

Holy cow, in previous versions of VS, all I had to do was click File | Close Solution and in a split second my solution would close.  Now, it takes forever.  I have a solution with two projects in it, and when I click File | Close Solution it takes anywhere from 15-30 seconds to close (I feel really bad for those who have more than 2 projects in their solution).  Not only that, it says it’s unloading 3 projects, not the 2 that I actually have!  WTH?  According to this thread, the problem has been fixed, but I can confirm that it most definitely has NOT been fixed as of version 15.3.5.  Maybe they only fixed it for non-SSDT project types?

[Screenshot: VS2017 reporting “Unloading 3 projects” while closing the solution]

Okay, my rant is over, nothing to see here, move along.

Posted by: sqlswimmer | August 30, 2017

Speaking at SQL Saturday Spartanburg (#681)

I am so happy to announce (and honored to be selected) that I will be speaking at SQL Saturday Spartanburg (#681) on September 23, 2017.  I will be presenting my What is Power BI? session at 8:30am.  I know it’s early, but I’d love to see you if you are in the area.

I’ve only been to Spartanburg once and that was a few years ago.  I attended the US Masters South Carolina LCM State Championships, where I walked away with a second place medal in my age group for overall meet performance.  It was an exhausting two days, but so worth it.  I’m hoping my SQL Saturday trip will be just as successful.

Hope to see you there!

Posted by: sqlswimmer | August 22, 2017

SQL Saturday Charlotte (#683)

I am so excited to be helping out with SQL Saturday Charlotte again this year.  This is the 6th year that Charlotte BI Group (CBIG) has put this event on and the 3rd year that I’ve been asked to be part of the organizing team. 

We have a great lineup of speakers and sessions, including two great pre-cons from Leila Etaati and our very own Melissa Coates.

We have worked really hard to put this event together and hope that you will take the time to come out and get your SQL learnin’ on!

I hope to see you Saturday, October 14, 2017.

Posted by: sqlswimmer | July 28, 2017

Speaking at SQL Saturday Denver (#673)

Wow, I can hardly contain my excitement.  I was selected to speak at SQL Saturday Denver on September 16, 2017.  I’ll be presenting my Intro to Integration Services session.  I’m excited for so many reasons, but firstly, this will be my first trip back “home” in seven years.  We planned this trip back in February so that we could see Muse at Red Rocks.  I lived in Denver for 15 years and never once saw a show at Red Rocks, so I was super excited when I found out they were playing there.  Secondly, I get to talk about Integration Services, which is my favorite piece of SQL Server.  It’s an introductory session aimed at folks who want to learn more about what it is and how it can be used.

So, if you’re in the Denver area on Saturday, September 16, 2017, stop by and say “Hi”.  I’d love to see you and chat for a little while.

#SQLSatDenver here we come!

[Photo: downtown Denver, Colorado]

Posted by: sqlswimmer | June 30, 2017

Don’t Forget the Keys

I was recently given the nod to upgrade my monitoring server from SQL Server 2012 (SP3) to SQL Server 2016.  This came none too soon, as SQL Server 2012 (SP3) goes out of mainstream support on July 11, 2017.

We decided to go with a brand new box since the existing one was on Windows Server 2012 (not R2) and had been having issues lately.  So my SysAdmin guy stood up a shiny new Windows Server 2016 box for me.  This will be the first in our domain.  I get to be the guinea pig – WooHoo!

I got SQL Server 2016 installed on the new box without issue.  This box is used as my monitoring server and my personal sandbox, so it has Reporting Services (SSRS) installed on it as well as the database engine.  Since we are using a brand new box, there was no need to shut the old one off before turning this one on, which is nice.  I can migrate things when I have time.

My first order of business was to migrate my SSRS databases to the new box and get it configured.  Now, it’s been a while since I’ve migrated an instance of SSRS, so of course I forgot something.  Otherwise you would not be reading this post.

I remembered to back up BOTH databases and the encryption key.  Once I had the databases restored on the new server, I started the Reporting Services Configuration Manager so I could restore the encryption key and not lose all my credentials and other security-sensitive information (BTW, here’s a great reference for migrating SSRS).  After I restored my encryption key, I wanted to generate a new key, but the Backup button was not enabled on the Encryption Key tab.  I didn’t think much of it; I just restarted SSRS and figured it would be available after restarting.  Of course it wasn’t.  I tried navigating to the URL for the SSRS Web Portal (which replaces Report Manager) and got an error saying Reporting Services was not configured correctly.  Interesting.  It took me about 15 minutes to realize/remember the last step in restoring the encryption key.  Do you remember what it is?  Without Googling it?

Okay, I’ll tell you since you’ve made it this far in the post.  In my defense, I was not the one who installed/configured SSRS on the old server, so I was not aware that whoever did had configured the database for a scale-out deployment, even though it was not actually being used as a scale-out deployment <sigh>.  When you have a database that has been configured for scale-out deployment, you have to clean up the entries in the Keys table in the ReportServer database (the link above has a note about this very thing about three quarters of the way down).
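For what it’s worth, the cleanup itself is tiny.  Here is a minimal sketch of what I mean, where OLDSERVER stands in for the old machine’s name (take a backup of the ReportServer database first, and double-check the column names on your version before running anything like this):

    USE ReportServer;

    -- See which machines the instance thinks are part of the scale-out deployment.
    SELECT *
    FROM   dbo.Keys;

    -- Remove the entry for the server that should no longer be in the deployment.
    -- 'OLDSERVER' is a placeholder, not a real machine name.
    DELETE FROM dbo.Keys
    WHERE  MachineName = 'OLDSERVER';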

Lesson learned:  Always look at the existing configuration very closely.  I totally didn’t notice (and hadn’t noticed for 3+ years) that the database had been configured for scale-out deployment.  Shame on me!

My shiny new SSRS 2016 instance is up and running now.

Microsoft announced a new program last year to help you understand the skills that a Data Scientist needs in their daily life.  It consists of nine courses and a final project; you can get all the details about it on the Microsoft Academy site.  I started working on this at the end of 2016 when things were slow at work and at home.  I completed seven of the courses before things started to pick up at work and home.  I’ve been midway through the eighth course for almost four months now, having to go back to the beginning of the course a few times due to being pulled away.

The program has been very informative so far, providing courses on statistics and probability, Machine Learning, Power BI, R (and Python) and general data science concepts.  I’m hoping things will slow down a bit so I can complete the program by the end of Summer.

If you are at all curious about what a Data Scientist does, I highly recommend this program.  The great thing about this program is that you can take all the courses for free.  That’s right, I said free, gratis, no dough required.  However, if you do opt for the free route, you don’t earn that beautiful certificate that you can share with others; you just get the satisfaction of completing the courses and broadening your horizons.  Either way, it’s a good way to get started in the field of Data Science.

Posted by: sqlswimmer | June 7, 2017

Import Export Wizard Mapping Files

Recently I had to copy data from our AS400 to a SQL Server database.  Normally I would whip up my fancy SQL Server Integration Services (SSIS) project and get it hammered out pretty quickly.  Unfortunately, there were over 4,000 tables that I needed to pull data from; no way in HELL was I going to manually create 4,000+ packages.  Now most of my BIML friends would say, “I could BIML that for you in 2 hours,” and if my BIML weren’t so rusty, I probably could have too.  I didn’t have to do any fancy transformations on the data; I just had to copy it.  So I decided to take the “easy way out” and use the Import Export Wizard in SQL Server Management Studio.  Shoot, all I would have to do is a few clicks and be done with it, right?  Boy was I wrong.

This post talks about the issue I ran into with SSIS Mapping Files.

We currently run DB2 on an IBM iSeries AS400 for our ERP system.  I was tasked with copying data from the AS400 to a SQL Server database for some consultants to use.  The C-Suite didn’t want to give the consultants access to our AS400, so this was the workaround that was put forth and accepted (and no, no one asked me before I was “voluntold” for the task).  Since this would essentially be a “one-time” thing, I chose to use the Import Export Wizard, but I would save the package just in case they wanted this process repeated.

I fired up the Import Export Wizard and selected my source, the IBM DB2 for i IBMDA400 OLE DB Provider.  Now, before you can select this data source you must install the IBM DB2 drivers.  You can find out more about them here; unfortunately, you have to have a maintenance contract and an account with IBM before you can download them <sigh>.  It’s a straightforward install once you have the installation package.

EZ peazy, lemon squeezy.

I selected my destination, SQL Server Native Client 11.0, of course!


On a roll now, should only be another 30 seconds and I can put this project to bed.  Well, we all know that’s not what happened; otherwise you wouldn’t be reading this.

When I clicked on the Edit Mappings button in the wizard to make sure all the data types had been mapped successfully, I got “<unknown type>” for every single column.  WTH?!  This doesn’t happen when I’m working in Visual Studio with my SSIS projects.  After some frantic googling, I found a couple of very good articles on the Mapping Files for the Import Export Wizard.

Data Type Mapping

Import Export Wizard can show numbers..

I took the advice of the articles and made copies of my Mapping Files before I modified them.  I made my modifications to include the column types and their respective mappings, courtesy of the Data Type Mapping article, and saved my changes.  I made sure the Import Export Wizard was closed, then started it again.  This isn’t so hard, no big deal, they’ll all populate correctly now… WHAT?!  Still <unknown type> for all columns!  Now this had become a matter of solving it using this method; I will NOT resort to brushing up on my BIML.

After many attempts I finally figured out what the issue was.  There were actually two: first, the order in which the Import Export Wizard searches through the Mapping Files; second, the SourceType within the Mapping File.

According to the Import Export Wizard, my source provider was IBMDA400 and it couldn’t find any mapping file for it, even though it had found the Mapping File for my destination.

For the first issue, a little background on how the Import Export Wizard works.  When you select a source and destination, the wizard has to know how to map the data types from source to destination so you don’t end up with gobbledygook in your destination.  So it searches through all the files in the following directories, depending on your architecture (I’m using SQL Server 2016, hence the 130 in the path):

C:\Program Files\Microsoft SQL Server\130\DTS\MappingFiles

C:\Program Files (x86)\Microsoft SQL Server\130\DTS\MappingFiles

The key word there is ALL the files in the directory, so if you just copy your original files to the same folder (with the famous “ – copy” so courteously appended by Windows Explorer), it will search through both your original AND the modified copy of the respective Mapping File.  In my case it was finding the source/destination match in the original Mapping File and completely ignoring my modified copy <sigh>.  Easy enough to fix: I moved my original “copies” to a completely different folder.

For the second issue, the SourceType within the Mapping File.  Now, I will admit that I had been working on this for about 5 hours straight and had become so obsessed with making this work with the Import Export Wizard that I stopped paying attention to detail.  I want to see how long it takes you to find the issue.  This is the file that I thought should work:

[Screenshot: the mapping file I thought should work]

This is the file that actually works:

[Screenshot: the mapping file that actually works]

Did you find it?  How long did it take you?  It took me about an hour to figure this out.

In case you still haven’t found the issue, the answer is: the SourceType in the first file is IBMDADB2*, while the second file uses IBMDA*.  Since our source provider is IBMDA400, the first file (IBMDADB2*) produces no match on the source.  As soon as we change the SourceType to IBMDA* (the * is used as a wildcard), we get a match and it works.  Three little letters, that’s all it took for me to waste half a day.

Now, what I ended up doing instead of modifying the original mapping file was creating a copy of it, renaming it to something meaningful to me (but still following the naming convention of the Mapping Files), changing the SourceType value to IBMDA*, and adding all the data types that were missing.  This way there will be no conflict if I ever need to use the original mapping file with the IBMDADB2 SourceType.
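If it helps to picture it, the top of my copied mapping file ends up looking roughly like this.  I’m sketching the structure from memory, so treat the element names as approximate; the SourceType attribute is the part that actually mattered here:

    <?xml version="1.0" encoding="utf-8"?>
    <!-- Rough sketch of the mapping file header.  IBMDADB2* never matches the
         IBMDA400 provider; IBMDA* does, because the * acts as a wildcard. -->
    <dtm:DataTypeMappings
        xmlns:dtm="http://www.microsoft.com/SqlServer/Dts/DataTypeMapping.xsd"
        SourceType="IBMDA*"
        MinVersion="*"
        MaxVersion="*"
        DestinationType="SQLNCLI*">

      <!-- ...one dtm:DataTypeMapping element per DB2 data type, mapping it to
           the corresponding SQL Server data type... -->

    </dtm:DataTypeMappings>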

I hope this helps someone else.  There are tons of posts out there about data type mapping, but none of them tell you to pay special attention to the issues I had.  Granted, my issues were self-created, but they were issues nonetheless.

Posted by: sqlswimmer | May 25, 2017

Speaking at SQL Saturday Atlanta (#652)

I am so excited and honored to have been selected to speak at SQL Saturday Atlanta (#652) this year.  This is a huge event where I’ve been a volunteer and attendee in the past, but this will be my first time as a speaker.

I will be presenting my session What is Power BI?  I’ve presented this session a couple of times in the past but will be updating it to contain information regarding the changes that go into effect June 1, 2017.

If you are close to Atlanta on July 15, 2017, please stop by and say “Hello”.  I’d love to see you.

#SQLSatATL
