T-SQL Tuesday #104 – Code You Would Hate To Live Without

This month’s T-SQL Tuesday is hosted by Bert Wagner and he asks us to write about code we’ve written that we would hate to live without.

First off, “hate” is a pretty strong word so let’s go with “code you couldn’t bear to live without”. The first bit of code I couldn’t live without is reviled in some circles. More often it’s misunderstood and lamented, or…well, I’ll just show it to you.

Yes, you read that right. It’s an SSMS Snippet that generates a cursor. I use cursors so often that I decided to create a snippet so I don’t have to rewrite them or copy/paste from a template file all the time.

Yes, really. Don’t @ me, come at me bro, whatever it is the kids are saying these days. I am dependent upon cursors every day and would be lost without them.

Wow Andy, you must be pretty bad at your job if you’re running cursors all the time! Don’t you know that’s terrible for performance? What’s up with that?

If we’ve met, either in-person or virtually on the SQL Community Slack, you probably know that I manage an instance hosting mumblemumble thousand databases. I don’t mean “a couple” thousand databases; we’re looking at Michael Swart’s 10 Percent Rule in the rearview mirror. I regularly have to look for a particular value across the same table in a few dozen/hundred/thousand databases, or pull a report across as many databases, or run the same data change for a big list of databases. Most often, I’ll be given a list of databases or be asked “run this for all the databases that meet these criteria.” And the only way to do that easily is via a cursor because I have to first collect the list of databases from another table. There are 3rd party tools I could use but doing the setup to run against an arbitrary list of databases is tedious, error-prone, and I haven’t quite worked out a way to improve it yet.
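The pattern my snippet generates is roughly this (a sketch, not the exact snippet — the table, column, and filter names here are hypothetical):

```sql
-- Hypothetical example: run the same query against an arbitrary list of databases.
DECLARE @DatabaseName sysname,
        @sql nvarchar(max);

DECLARE DBList CURSOR LOCAL FAST_FORWARD FOR
    SELECT name
    FROM sys.databases
    WHERE name LIKE N'Customer%'   -- or: SELECT DatabaseName FROM dbo.TargetDatabases
    ORDER BY name;

OPEN DBList;
FETCH NEXT FROM DBList INTO @DatabaseName;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- USE inside dynamic SQL only changes context for that batch
    SET @sql = N'USE ' + QUOTENAME(@DatabaseName) + N';
SELECT DB_NAME() AS DatabaseName, COUNT(*) AS RowsFound
FROM dbo.SomeTable
WHERE SomeColumn = @SearchValue;';

    EXEC sys.sp_executesql @sql, N'@SearchValue int', @SearchValue = 42;

    FETCH NEXT FROM DBList INTO @DatabaseName;
END

CLOSE DBList;
DEALLOCATE DBList;
```

The LOCAL FAST_FORWARD options keep the cursor itself as cheap as possible; the cursor is only iterating a few dozen or thousand database names, never a big result set.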

Processing a table or result set RBAR is a performance concern. But to crank through a long list of databases and execute the same query against each, it’s the only way to go, as far as I know. sp_msforeachdb doesn’t cut it for my purposes because I don’t want to hit every database on my instance.

My second piece of code is more of a technique or design pattern. In a stored procedure or large script with dynamic SQL, I’ll often create two variables (they’re parameters, in the case of stored procedures – choose a sensible default!) called @Debug and @ImSure. They’re just bit types but I use them to control the output of debugging information and code execution.

By doing this, I don’t have to comment/uncomment sections of code all the time just to see what dynamic SQL I’m generating. I also have a failsafe which prevents changes from being executed until I’ve made sure that everything is solid.
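In skeleton form, the pattern looks something like this (a sketch — the procedure and statement are placeholders, but @Debug and @ImSure work as described):

```sql
-- Sketch of the @Debug/@ImSure pattern. Defaults chosen for safety:
-- debug output on, execution off, until the caller says otherwise.
CREATE OR ALTER PROCEDURE dbo.DoRiskyThing
    @Debug  bit = 1,   -- 1: print the dynamic SQL so I can inspect it
    @ImSure bit = 0    -- 0: refuse to execute anything that changes data
AS
BEGIN
    DECLARE @sql nvarchar(max);

    SET @sql = N'UPDATE dbo.SomeTable SET SomeColumn = 1 WHERE OtherColumn = 2;';

    IF @Debug = 1
        PRINT @sql;                      -- show what would run

    IF @ImSure = 1
        EXEC sys.sp_executesql @sql;     -- only executes when explicitly armed
    ELSE
        PRINT N'@ImSure = 0; no changes executed.';
END
```

A first run with the defaults shows me the generated SQL; once it looks right, I call it again with @ImSure = 1 to actually execute.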

Those are probably the two pieces of code that I can share which I couldn’t be without. Honorable mentions which I didn’t write but find indispensable:

  • QUOTENAME – pretty basic T-SQL function but with all that dynamic SQL I’m writing, I need it to keep my SQL clean and safe.
  • dbatools – I’ve written about this PowerShell module quite a bit here but suffice to say for doing bulk administrative tasks, collecting metadata about the environment for analysis, and moving databases or entire instances around, it’s a lifesaver.
  • Brent Ozar Unlimited’s First Responder Kit – I run sp_blitzcache & sp_blitzindex daily looking for places we can tweak our code or indexes to make things run better.
  • Adam Machanic’s sp_whoisactive – Gives me a great lightweight snapshot of what’s going on right now.
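To illustrate the QUOTENAME point, here’s what it does with a few (made-up) identifier values — note how it escapes embedded delimiters, which is exactly what keeps dynamic SQL safe:

```sql
-- QUOTENAME wraps an identifier in delimiters (square brackets by default)
-- and escapes any embedded closing delimiter.
SELECT QUOTENAME(N'MyDatabase');        -- [MyDatabase]
SELECT QUOTENAME(N'Bad]Name');          -- [Bad]]Name]  (closing bracket doubled)
SELECT QUOTENAME(N'O''Brien', N'''');   -- 'O''Brien'   (single-quote delimiter)
```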

Thanks for joining me on this T-SQL Tuesday!


Volunteer for PASS!

This week, I had the opportunity to be the moderator for Joseph Barth’s (b|t) 24 Hours of PASS Summit Preview session about Azure Data Factory V2. It was fun, easy, and I encourage you to sign up to do the same!

Throughout the year, PASS hosts a number of online learning events; 24 Hours of PASS and virtual chapter webinars are the most common and most visible. And in each session, the presenter needs a little help managing questions and watching the clock so they can focus on delivering their great content. It’s pretty easy. You just:

  • Sign in about half an hour ahead of the session start time
  • Make sure your audio is working right
  • Chat with the presenter(s) about the timing, whether they want to address audience questions during the presentation or at the end, when they want time alerts, etc.
  • When the session starts, read the PASS lead-in script that’s provided and introduce the speaker
  • Watch for questions and let the speaker know when you’ve hit the agreed-upon checkpoints
  • Read audience questions to the speaker
  • Wrap-up: thank the speaker and audience, read the wrap-up script, and (where applicable) invite the audience to stick around for the next session

So how do you sign up for such a sweet gig? Just set up your PASS profile to indicate that you’re interested in volunteering. When an opportunity comes up, you’ll be contacted by PASS HQ and asked if you’re available for the event.

In the case of 24 Hours of PASS, I was asked to pick a few time slots where I was available but not told who the speaker was in each (which is fine by me – the result is that I attended a session I normally wouldn’t have, and learned some new stuff!). My slot was confirmed and I learned that Joseph would be my speaker. Great! I met him at Summit last year and he founded a user group that I’m familiar with, so we had something to chat about before his session started.

The clock struck 01:00 UTC, I read my script, Joseph did his presentation, and we wrapped up. It went really well and I had fun with it.

So, dear reader, here’s what you’re going to do:

  1. Go to your PASS profile’s myVolunteering section
  2. Check at least two boxes
    • “I would like to sign up to become a PASS volunteer”
    • Any one of the Areas of Interest
  3. When you receive the email from PASS HQ or local coordinators asking for volunteers for an upcoming event, you say “yes!”
  4. Help out with the event
  5. Meet new folks in the SQL Server community
  6. Learn something new

Communities like ours work best when everyone chips in a little bit, whether it’s speaking, moderating online events, working with a local user group, or helping to put on a SQL Saturday. It’s a great way to meet other people in the community and give back to a group that gives us all so much, both personally and professionally.

Becoming a Production DBA – A Family Decision

I really enjoy my job. I became a full-time production DBA about 14 months ago and it has been an overwhelmingly positive move. I work for a good company and with a terrific group of people. Many days, I have to force myself to leave the office because I was so engrossed in a task and just didn’t want to set it aside.

But there’s something that not everyone might consider before taking on this job. If you have a partner, children, or both, taking a job as a production DBA is really a family decision.

Being on-call is potentially disruptive to your family schedule. And sleep schedules! My on-call rotation is two weeks on, two weeks off. In those two weeks, I have:

  • The usual alerts that can come in anytime day or night, the emergency fixes when someone deletes something that shouldn’t be deleted, etc.
  • A software release which requires that I get up at 3:45 AM once per rotation
  • Monthly server patching at 2 AM, if it happens during my rotation

Many years ago, I had a job where I carried on-call responsibilities and it was rough. Lots of nights and weekends. Then I got a decade-long break. Before I took my current job, I discussed the on-call requirements with my spouse. I didn’t want to subject her to that again without making sure that she was OK with it. She is a very light sleeper, so any chirp from the phone is likely to wake her up (by contrast, I once put my phone three inches from my head and slept through multiple personal email alerts).

This job has the potential to impact the whole family, in both small ways and large. Chris Sommer (b|t) said one day in the SQL Community Slack that being a production DBA is kind of a blue-collar job. Shift work, etc. He makes a good point. I’ve adapted to the schedule and it’s not bad…for me.

But I’m not alone in the house and yes, everyone has had to adjust. Sleep has been lost. If an alert comes in overnight, my spouse wakes up too. We’ve scheduled family activities around the on-call schedule. Carried the work laptop all over creation “just in case.” Left the beach to handle urgent tickets. Skipped weekend morning outings. Stayed up late, got up early, missed dinner, or paused a movie to baby-sit a critical job or troubleshoot system issues.

It’s worth it though. After taking on the new role, my job security increased. My career security has increased. My work is more challenging, more interesting, and I have more autonomy than ever before. I look forward to going to work every day. I’m getting more involved in the SQL Server community. On average I’m getting home earlier than I used to, so I’m spending more time with the kids on weekdays. It hurts waking up at 3:45 AM once a month but I’m there to greet them when they get home from school.

Life is full of tradeoffs and compromises, and taking a job with on-call responsibilities involves a lot of those tradeoffs. Overall, it’s been a net win for me. Would I prefer to not have to deal with overnights and weekends? Who wouldn’t? But the positive changes that this job has meant for my career, my family, and myself make it worthwhile.

Triggers vs. Default Constraints: Choose Wisely

Triggers can be really useful in your database but you have to be careful with them. We often use them to record history to a separate table (at least, until we can implement temporal tables) or keep track of when a record was last updated. It’s usually implemented via an AFTER UPDATE trigger.

Several times a day, I’m alerted by our monitoring suite that two processes were deadlocked. This is a (mostly) normal event, but one that we’d like to avoid if we can. More than a few times, I observed this pattern:

  • Insert into table
  • Update a field in the row that was just inserted.

In my case, the LastModified update is implemented as an AFTER INSERT, UPDATE trigger, which means that as I insert the record into the table, I turn around and immediately update it. We can see this in both the execution plan and the IO stats provided by SET STATISTICS IO ON.

Example

Here’s the setup. Just a pair of basic tables; one looks like the other, except that one uses a default constraint and the other an AFTER INSERT trigger. We’ll set up the tables and then insert one row into each.

Aside: Yes, I realize that I didn’t name my default constraints. In real life, I totally would and do.
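A reconstruction of that setup (a sketch — the table names triggertest and defaulttest come from the IO stats below, but the column names and trigger body are my own illustration):

```sql
-- Two near-identical tables: one maintains LastModified via a trigger,
-- the other via a default constraint.
CREATE TABLE dbo.triggertest (
    Id           int identity(1,1) PRIMARY KEY,
    Name         varchar(50),
    LastModified datetime
);
GO
CREATE TRIGGER dbo.tr_triggertest ON dbo.triggertest
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE t
    SET LastModified = GETDATE()
    FROM dbo.triggertest AS t
    JOIN inserted AS i ON t.Id = i.Id;   -- re-touches the row we just inserted
END
GO
CREATE TABLE dbo.defaulttest (
    Id           int identity(1,1) PRIMARY KEY,
    Name         varchar(50),
    LastModified datetime DEFAULT (GETDATE())   -- unnamed here; name it in real life
);
GO
INSERT INTO dbo.triggertest (Name) VALUES ('Andy');
INSERT INTO dbo.defaulttest (Name) VALUES ('Andy');
```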

And here’s the data on each.

Great, the data matches what we expect! But how much work did SQL Server do? Let’s check out the execution plan and IO stats. First for the trigger:

SQL Server parse and compile time: 
    CPU time = 0 ms, elapsed time = 0 ms.
Table 'triggertest'. Scan count 0, logical reads 1, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server parse and compile time: 
    CPU time = 13 ms, elapsed time = 13 ms.
Table 'triggertest'. Scan count 1, logical reads 1, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
    CPU time = 0 ms,  elapsed time = 7 ms.

SQL Server Execution Times:
    CPU time = 15 ms,  elapsed time = 104 ms.

And now with the default constraint:

SQL Server parse and compile time: 
    CPU time = 0 ms, elapsed time = 0 ms.
Table 'defaulttest'. Scan count 0, logical reads 1, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
    CPU time = 0 ms,  elapsed time = 6 ms.

We want to spend our SQL Server licensing dollars wisely, so why ask it to do unnecessary work? More CPU time, more IO, and our client is waiting longer for the process to complete (I don’t like making anyone wait).

There’s a second “gotcha” with the AFTER INSERT method that applies to only some use cases. Let’s say you’re loading some historical data into the table, and that data includes the LastModified date. Or maybe your application has a very specific date that should be inserted into the table for this field.
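A load like that might look like this (a sketch using the triggertest/defaulttest tables from the example, with made-up historical values):

```sql
-- Historical rows that carry their own LastModified values.
INSERT INTO dbo.triggertest (Name, LastModified) VALUES ('History', '2010-01-01');
INSERT INTO dbo.defaulttest (Name, LastModified) VALUES ('History', '2010-01-01');
```

On defaulttest, the explicit value wins because the default only fires when the column is omitted. On triggertest, the AFTER INSERT trigger fires regardless.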

And here’s what the tables look like after the inserts.

That AFTER INSERT trigger just clobbered real data!

To alleviate this, I’m migrating these tables to use a default constraint of GETDATE() on the LastModified fields, and changing the trigger to AFTER UPDATE (in the cases where I do need to write to a history table upon insert, I’m removing the LastModified portion from the AFTER INSERT trigger).
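Sketched out, that migration looks something like this (constraint and trigger names are hypothetical):

```sql
-- Default handles inserts; the trigger now fires only on updates.
ALTER TABLE dbo.triggertest
    ADD CONSTRAINT DF_triggertest_LastModified DEFAULT (GETDATE()) FOR LastModified;
GO
CREATE OR ALTER TRIGGER dbo.tr_triggertest ON dbo.triggertest
AFTER UPDATE   -- no longer AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE t
    SET LastModified = GETDATE()
    FROM dbo.triggertest AS t
    JOIN inserted AS i ON t.Id = i.Id;
END
```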

The result is the same – LastModified is set to the insertion date when first creating the record, and updated when updating the record. But the inserts are less work for SQL Server, and I won’t clobber my data.

Quick Table Info Export with PowerShell

This week I had a user come to me asking about how fields were defined on a few tables he was using in writing some reports. Long story short, he’s been tasked with writing some new reports and updating existing ones, but he doesn’t have visibility to the database itself so he’s left with the “ok, let’s try this” approach and then reading error messages to debug when things go sideways. Very tedious.

I asked him for a list of the tables he was most interested in, and while he worked on that I set to work on (of course) a quick & dirty PowerShell script to collect the data he needed – field names, types, and whether they’re nullable.

With the help of dbatools and Doug Finke’s ImportExcel module, the resulting script to retrieve the requisite data and export it to an Excel workbook with one sheet per table is very short. Each table is on its own worksheet, with the header row bold and frozen so it stays in view while scrolling.
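The script is along these lines (a sketch, not the original — instance, database, and file names are hypothetical; it assumes dbatools and ImportExcel are installed):

```powershell
# Read the list of tables the user provided, one name per line.
$tables = Get-Content -Path .\tables.txt

# Fetch SMO objects for just those tables.
$smoTables = Get-DbaDbTable -SqlInstance MyServer -Database MyDatabase -Table $tables

foreach ($t in $smoTables) {
    # Extract name, type, length & nullability, then write one worksheet per table.
    $t.Columns |
        Select-Object Name,
            @{Name = 'DataType';  Expression = { $_.DataType.ToString() } },
            @{Name = 'MaxLength'; Expression = { $_.Properties['Length'].Value } },
            Nullable |
        Export-Excel -Path .\TableInfo.xlsx -WorksheetName $t.Name -BoldTopRow -FreezeTopRow
}
```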

Line by line:

  • Fetch the list of tables from a file provided to me
  • Gather a collection of SMO objects for the tables from the server
  • Loop over each table and extract the relevant data about each
  • Write the table’s information out to a worksheet named for the table in an Excel workbook

The only tricky part was fetching the field lengths for the nvarchar fields. I had to go spelunking into each field’s Properties property to get that. The result:

Export-TableInfo Results

But I can tell this isn’t disco enough for you, so I’m gonna put it right here.
– Justin Hammer, Iron Man 2

You can do this in a one-liner too. It’s not as pretty, but it’s quite doable and the resulting Excel file is identical.
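Compressed into a single (hypothetical) pipeline, it might look like:

```powershell
# Same idea, one line: fetch the tables, dump each one's columns to its own worksheet.
Get-DbaDbTable -SqlInstance MyServer -Database MyDatabase -Table (Get-Content .\tables.txt) | ForEach-Object { $_.Columns | Select-Object Name, DataType, Nullable | Export-Excel -Path .\TableInfo.xlsx -WorksheetName $_.Name -BoldTopRow -FreezeTopRow }
```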

End to end, I turned this around in about 20 minutes and the requestor’s life got easier in a big hurry. Best of all, when he comes back with another batch of tables for his next set of reports, I can turn the request around in less than five minutes.

Processing SQL Saturday Raffle Tickets with PowerShell

Every year, I spend the Sunday after SQL Saturday Rochester scanning & processing raffle tickets for our wonderful sponsors. Here’s how the system works:

  • Attendees get tickets (one ticket per sponsor) with their name, the sponsor’s name, and a QR code on them
  • Each QR code represents a URI, unique to the combination of event, attendee, and sponsor
  • Attendees drop their tickets in a box to enter the sponsor’s raffle prize drawing
  • When the URI from the QR code is accessed, it registers in the SQL Saturday system
  • Organizers run a report for each sponsor that includes the contact info of all attendees who dropped off a raffle ticket, then email the report to the sponsor

It works pretty well, but the hangup is that most QR scanners will open your web browser (or prompt you to open it) to the URL on each scan. For 150+ tickets, this takes a long time. Every year, I lament “oh, how I wish I could just scan these, collect the URLs into a nicely formatted file, and script this whole thing”.

Finally, this year, I found a way to do it with my iPhone, MacBook Pro & PowerShell. Here’s what I did:

  1. Get Beep for iOS.
  2. Scan the tickets. This app is really fast, it may scan before you even realize it. I just stacked them up, pointed the phone at the pile, and as the app beeped (to tell me it had scanned successfully), I tossed the ticket to the side.
  3. When done, tap the file box icon in the upper-right corner
  4. Tap the Share icon
  5. Save the file out to a CSV on iCloud (you can email it if you like, but iCloud is a little easier for me)
  6. On the Mac, open up Terminal and navigate to /Users/YOURNAME/Library/Mobile Documents/com~apple~CloudDocs
  7. Fire up PowerShell (I installed it via HomeBrew with brew install powershell and start it by running pwsh).
  8. Run the following one-liner:
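A one-liner of the sort described (a sketch — the column names are placeholders of my own choosing, since Beep’s CSV has no header row):

```powershell
# Import the headerless CSV, dedupe the URIs, and request each one.
Import-Csv -Path .\scans.csv -Header 'ScanDate','URI' |
    Select-Object -ExpandProperty URI -Unique |
    ForEach-Object { Invoke-WebRequest -Uri $_ | Out-Null }
```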

This bit of PowerShell:

  • Imports the CSV file and forces column names (as the file doesn’t include them) of my choosing
  • Extracts the unique URIs from the data
  • Loops through all the URIs and invokes a web request to each one of them

It’s the same process I’ve used in the past, just much faster because I’m not pausing after each scan to load a URI in my web browser.

With nearly 300 raffle and attendance tickets scanned, this zipped through all of them in less than 90 seconds. Best of all, I could start it and walk away to do something else. Doing it this way made my SQL Saturday “closeout” process a little less stressful.

Appearance: SQL Data Partners Podcast

A couple weeks ago Carlos L. Cachon (b|t) put out a call on Twitter looking for SQL Saturday organizers to join him on the SQL Data Partners Podcast. When I signed on to record, I learned that Chris Hyde (b|t) and Eugene Meidinger (b|t) were joining us. I’ve met and spoken with all three previously, so it was easy talking to everyone and I thought the conversation flowed well.

Check out SQL Data Partners Podcast Episode 126: SQLSaturday Edition.

T-SQL Tuesday #99 – Dealer’s Choice / sqlibrium

This month’s T-SQL Tuesday from Aaron Bertrand gives us a choice:

And I’ve got one, maybe two posts in progress on the first topic. Alas, neither is ready yet.

Thanks to Eugene Meidinger (b|t) for nudging me in the direction of posting this. Ever since this month’s T-SQL Tuesday was announced, work and non-work has been a complete whirlwind and I have failed miserably at reaching sqlibrium.

I’ll expand on this in later posts and link back. Really.

PASS Summit 2017 Wrap-Up – The People!

I need a new social media profile picture.

That’s one of the most important (non-technical) conclusions I drew from my week at PASS Summit 2017. It seemed like everywhere I went, I heard “I didn’t recognize you without the hat!” The picture I use on Slack, Twitter and Instagram is the same one I use here on my About Me page. This photo was taken in 2014 at the West Bend, WI Cache Bash and it’s one of the few photos of myself that I actually like (harsh shadows aside). I nearly did bring The Hat with me, but decided against it as it’s big, heavy, and not really an “indoor” hat.

But I digress. The point is, a lot of the people who I met at Summit only previously knew me with The Hat. But that also means that I was meeting a lot of people in person for the first time. And meeting new people is my second-favorite thing to do at PASS events – my favorite being catching up with everyone I already know.

Once I was registered for Summit, I told myself that I was going to make every after-hours event I could and meet everyone I could all over Summit & the associated events. I even signed up for the Summit Buddy program to meet people while helping them navigate their first Summit.

Monday

I didn’t even get through the hotel lobby before spotting my first SQLFamily. En route from the front desk to the elevator, I crossed paths with Chrissy LeMaire (b|t), Rob Sewell (b|t), Constantine Kokkinos (b|t), and (I think – memory’s fuzzy here) Sander Stad (b|t)! I immediately introduced myself (with The Hat, no introductions would have been needed). We chatted for a moment but they had dinner plans they needed to get to, and I needed a bit of a rest from my trek before heading out to the networking dinner at Yard House.

I arrived at the Networking Dinner hosted by Lisa Bohm (though I didn’t meet her until Tuesday evening) and all the tables were full. I was early for my seating but after a short wait I was able to get a seat at a table with Jeremy Marx (t), whom I’d spoken with on Slack but we didn’t realize it for a few minutes (again, The Hat). A few moments later, we were joined by George Anderson (t), and then finally Kiril Kravstov (b|t) – another dbatools contributor!

Monday was over and I’d already met a half-dozen people. Incredible. I also managed to track down my friend Bill Schultz (t). We worked together several years ago, and now, despite living only an hour away from one another, we only see each other at Summit.

Tuesday

I had an early start as I needed to meet up with Chrissy to give her the badge ribbons I finally found buried in my backpack (not the first time I’ve lost things in its various pockets). Along the way, I bumped into another dbatools team member, Shawn Melton (b|t), who was awarded MVP status by Microsoft the following day.

A bunch of us hung out in the precon classroom and helped with setup, but as I was neither registered for the precon nor running it, I had to take off. I needed breakfast anyway. At the Daily Grille, I spotted Mike Fal (b|t), Rie Irish (b|t) and Monica Rathbun (b|t) at another booth and when I was finished with my meal, I stopped over to say hi. I worked with Mike when we were both at previous jobs, but I’d only spoken with Rie and Monica via Twitter previously.

I spent much of Tuesday in meetings for User Group leaders and SQL Saturday organizers, but that just meant more new people to meet! On the lunch break, I walked down to Beecher’s with William Assaf (b|t) and Adrian Aucoin (b). Later in the day was the First Timers Orientation and Speed Networking. I attempted to arrange a meeting of my first-timers group just before that event but was only able to find half of them, Kathy & Jasper. The event is set up as a way to get you talking to new people, but unfortunately when you have a couple hundred people all in one room in pairs, all trying to have the same conversation, it gets very loud, and a number of people left early. By chance, I found myself sitting next to James Livingston (t), a fellow Rochesterian!

After Orientation was the Welcome Reception. They had a live band! But wow was it loud. I hung out with Kiril and we managed to chat with a number of people including Luis Gonzales (t), Lisa, and Allen White (b|t) before making our way toward the exit with Chrissy and Rob to head over to Tap House for the dbatools team gathering. At Tap House, I met Amanda Crisp (b|t), whom I’d spoken with a few times on Twitter. I don’t recall what it was that put us on each other’s radar, but it was good to finally meet!

Wednesday

As is tradition, Wednesday started with #SQLRun at 6 AM. 2017 was a bad year for me with regard to running, but the cool weather and good company like Nick Harshberger (t), Allen, Jen McCown (t), and James (who I ended up running with the whole time) made it lots of fun. James and I clocked about 3 1/2 miles at a relaxed pace, though I had to take a bit of a breather coming up out of Pike’s Market (the ascent from Alaskan Way is tough).

After the run, I got cleaned up and braced myself for the madness that is Day One. On my way up the escalator to breakfast, I spotted someone I’ve wanted to meet for a while at one of the coffee shops and decided it was now or never. Summit is so large that if there’s someone you want to meet for the first time or someone you already know and want to catch up with, you have to do it the first time you’re anywhere near them. You may not get a second chance.

So, I detoured from my path to breakfast, apologized for interrupting his breakfast, and introduced myself to Brent Ozar (b|t). We chatted for about 5 minutes, he gave me a suggestion for mitigating some SQL Server performance issues I was dealing with at work (which I immediately texted to my colleague back at the office), and then he gave me a few Query Bucks and was gracious enough to pose for a selfie. Terrific start to the day.

After breakfast I meandered to the ballroom for the Keynote and found myself a prime seat right behind the blogger table. Closest to me was Kevin Kline (b|t), and we got to catch up for a few minutes before he had to get ready to liveblog the keynote. While we were talking Gail Shaw (b|t) arrived and I got to meet her as well!

Post-keynote, I found my way to the exhibitor hall. After checking in with a few folks, I bumped into Justin Whaley (b|t), who I discovered was working on some PowerShell functions for Red Gate tools just before Summit. We chatted a bit and decided to catch up later on to discuss his work.

One of the best things that happens at Summit is the chance encounters. As I started down the buffet table to get lunch, I looked up and discovered Deborah Melkin (b|t) across the table from me! Deborah spoke at SQL Saturday Albany last summer and had I been able to attend the event, her session was on my must-see list.

Wednesday evening was the big night for events. As an avid listener of the SQL Data Partners Podcast, I signed up for Carlos & Steve’s SQL Trail Mix event as soon as I heard about it. In their post-Summit podcast, I learned that they had a very limited number of tickets available for this event, so I’m glad I didn’t wait. Right away I saw Kathi Kellenberger (b|t) and Sheila Acker (t) (trivia: Sheila’s one of the first people I met & talked to for more than 5 seconds at my first Summit in 2012). Later, I’d find out that there were several people at this event whom I’d run into later in the week or I wanted to meet up with, but didn’t see.

As I left, I chatted with Carlos & Steve for a bit about Scouting; all three of us are currently or previously have been involved with Cub Scouts and/or Boy Scouts over the years (something I’ve noticed across SQL Family for a while).

Up next was Pike Brewing Company and the Sentry One party. The place was packed and I immediately found myself catching up with Kirsten Benzel (t) (we never got a chance to geek out about our watches though 😦 ), Argenis Fernandez (b|t) (whom I saw briefly at SQL Trail Mix) and Monica, plus I got to meet a few more folks milling around the bar. This was also the event where Lou talked me into trying out the 5X Stout Float, a custom concoction presented by one of the bartenders. I was skeptical at first, but wow. I’m going to have to try this one at home sometime.

Thursday

Thursday tends to be my “easy” day at Summit. It’s the “down” day between “gotta meet everyone I can ASAP” on Wednesday and “gotta catch everyone to say goodbye” on Friday. The big daytime event was the PowerShell panel hosted by Chrissy & Rob. We put out the call on Twitter and Slack to get as many dbatools contributors in the room so that we could get a group photo. By my count we had thirteen! At least one other team member had been at Summit but due to other obligations, he wasn’t able to make it for the photo. We’ll just ‘shop him in later, right? I finally got to meet John Hohengarten (t) and Jess Pomfret (t) there too (we snuck in a photobomb on Chrissy & Nic Cain (b|t)). I’ve spoken with John a lot on Slack, and Jess is another person with whom I’ve crossed paths on Twitter but never gotten to meet.

Thursday evening there were a few more sponsor parties but I was already signed up for Game Night hosted by Kevin Hill (b|t) at the convention center. Three years ago, I attended a small game night hosted by a sponsor in a shop a few blocks from the convention center, but it’s now a semi-official event with PASS backing – PASS even has a collection of tabletop games they bring to Summit for us! I don’t play a lot of board games (beyond the classics) at home but I’m looking to branch out, so I was really looking forward to this event. And it didn’t disappoint! It’s a small, quiet, laid-back gathering so you can chat with people while figuring out how to play the game you’ve picked out. Some welcome downtime.

But between the sessions and the games, we had some open time and needed to find food. Fortunately, Charlie Brown (t) was in the same boat but he’d heard about a good place nearby to get something quick. Charlie & I had talked a bit on Slack in the weeks leading up to Summit but hadn’t met yet, so we got to close that loop. As we waited in line for our food, Kathy and Bert Wagner (b|t) arrived unexpectedly so we all ate together, then several of us walked back to the convention center.

I teamed up with Kathy, Karin (another first-timer) and Swan Web (t) to learn/play Pandemic. Only Karin had played before so the game moved a little slower than it would with four experienced players, but we had a good time learning how it worked. A 100% collaborative game (as opposed to competitive) was a new experience for me and I ended up buying it to play at home! As things wound down I chatted with a couple of people including Kevin and Matt Cushing (b|t), once again being reminded that I was missing The Hat.

I thought this would be an early night, but making my way through the Sheraton I saw Justin hanging out in the lounge and stopped to catch up about his PowerShell functions for Red Gate tools. What might have been a 20-minute conversation ended up being several hours as we talked about anything and everything.

Friday

Friday’s kind of a sad day because it means having to say goodbye to everyone. I caught a couple of sessions but also spent a bit of time hanging out in the Community Zone. I also made sure to stop by the last few sponsors I needed to talk to, entered a few more of their drawings, and actually won one of them!

Earlier in the week I saw #SQLFamily badge ribbons and I was determined to find out where those came from. Turns out they were brought by ArcticDBA (b|t) and he wanted a dbatools ribbon, so we managed to finally meet up just before lunch and make an exchange.

Closing down the week, I made sure to attend Carlos L. Cachon’s session on baselining, something that I’m not doing a great job of right now. I discussed the highlight (for me) of that session in an earlier post. After the session, having outed myself as a member of the dbatools team, someone approached me with a question about installing the module, as he was having difficulty with one workstation. Unfortunately he’d already tried everything I could think of, so I suggested that he get onto Slack and ask the folks there. Shortly after Summit, he was there and got a solution to the problem.

The official festivities over, I grabbed my luggage and made my way to Tap House. Kevin mentioned that he was getting a bunch of people together there for dinner and drinks and while I had made plans to go to Crab Pot, I had time to pop in for a drink on the way. We quickly took over the billiards room in the back and by this point in the week, it was almost all familiar faces. I chatted with Shane O’Neill (b|t) for a while and he commented that he’d been to New York to visit family somewhat recently. We got to talking about it and I learned that on that trip, he was actually in my town. Incredible! Hopefully on a return visit we can meet up and maybe even schedule our local user group meeting so he can attend.

On to Crab Pot! I’ve heard about this dinner over the years, hosted by Tim Mitchell (b|t), and decided that since I had to do something besides sit in the airport terminal for five hours, I’d go. I don’t know how many people were there but it had to be at least 50 and it was very busy.

Recap

One thing that really struck me about the week was how little time I spent on social media looking for things to do. Instead, I was talking to people and finding or even making those things happen. I dialed my Slack and Twitter notifications way back, keeping just enough to make sure that I didn’t miss anything critical there. As it turned out, my inattention to Twitter resulted in me missing an informal talk at the Microsoft booth about the new SQL Operations Studio, but oh well. On the other hand, I still got the notifications from people I was talking to.

At my past two Summits, I found myself completely drained and exhausted by the time Friday came around. Surprisingly, it didn’t happen this time around despite feeling like I did a lot more. I think I just paced myself better. Or maybe I’m becoming less introverted and talking to people is energizing me more.

I heard “I was hoping to meet you here” a few times outside the folks I’d pre-arranged seeing and that was a completely unexpected, but really awesome, experience.

Things I Did Well

  • Get up the gumption to introduce myself to new people
  • Stay off social media/my phone except where necessary
  • Find lunch tables with the fewest empty seats and join in the conversation, even/especially with strangers
  • Renewed connections with people I knew from past events
  • Not get completely exhausted

Things I Need to Work On

  • Step up the selfie game (including getting the courage to ask people for selfies)
  • Talk to more people at the booths in the exhibitor hall
  • Coordinate with my group if I join the Summit Buddies program
  • Get to a couple more sessions. I bought the session recordings so I can catch up, but sessions are still a good place to meet people

Now, to find that new social media profile photo…

How to Help with dbatools Comment-based Help

I wrote a post over on the dbatools website about how to get involved with improving comment-based help.

Working on the CBH is a great way to get started with the dbatools project, even (especially) if you’re not a PowerShell expert or MVP-level DBA. Getting everything clean and consistent in the CBH is an important step on the road to 1.0. Along the way, you’ll pick up on how dbatools is put together, discover functions that you can use in your day-to-day work, and get a feel for PowerShell best practices. You will learn from this experience!

Join us!