PSPowerHour v1.0 Wrap-Up

The first edition of the PSPowerHour is in the books and it looks like it was a big success. This one was dbatools-heavy but I chalk that up to the dbatools community having lots of free time because we’ve automated so many of our tasks 🙂

Overall Impressions

I signed in about half an hour ahead of the webcast and was the first one there. Shortly thereafter, I was joined by Michael Lombardi (t), then Jess Pomfret (b|t) and Chrissy LeMaire (b|t). After ironing out a few glitches, we got everyone in the right place and kicked off the broadcast. Everything ran very smoothly, especially considering the number of people involved – Michael and Warren F. (b|t) did a terrific job of orchestrating everything.

While watching and listening to Chrissy, Doug, Andrew & Jess give their demos, I ran through my own in my head a couple of times, adding and rearranging a few things as I observed how they were doing theirs. The big dilemma for me was whether to run the camera or exclusively screen share (I ended up going with screen share only). Having not rehearsed my demo enough in the weeks leading up to the event, I still wasn’t sure where to dip into more detail and where to dial things back, and seeing what others were doing helped quite a bit. Having familiar faces & voices ahead of me in the queue put my nerves to rest.

I wasn’t able to watch the sessions after mine in their entirety due to family commitments. Joshua’s Burnt Toast module looks like it’ll be fun to experiment with and add some nice functionality to scripts (I got to see about half of his demo), and I’m really looking forward to catching a replay of Daniel’s demo of PowerShell on the Raspberry Pi – I didn’t realize that it had been ported already!

My Demo

I demoed Invoke-DbaSqlQuery and why one should use it over Invoke-SqlCmd – primarily for protection from SQL injection. Things didn’t go exactly the way I’d practiced; I ran short of time despite feeling like I rushed things and cut back on some of what I had planned to say. The cuts were partly thanks to the lead-ins from Chrissy, Andrew, and Jess – because they did such a good job introducing dbatools, I was able to skip over it. But I was able to throw in a teaser for Matt Cushing’s (b|t) demo at the next PSPowerHour.
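The core of the demo boiled down to something like this – a simplified sketch rather than my exact demo script; the instance, database, table, and value are made up, and it assumes dbatools’ -SqlParameters hashtable parameter:

# Concatenating user input straight into the query string invites SQL injection
$lastName = "Smith'; DROP TABLE dbo.Customers; --"
Invoke-SqlCmd -ServerInstance 'SQL01' -Database 'Demo' -Query "SELECT * FROM dbo.Customers WHERE LastName = '$lastName';"

# Passing the value as a parameter keeps it as data, so the injection attempt is harmless
Invoke-DbaSqlQuery -SqlInstance 'SQL01' -Database 'Demo' -Query 'SELECT * FROM dbo.Customers WHERE LastName = @LastName;' -SqlParameters @{ LastName = $lastName }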

Running the demos inside a VM and screen-sharing just that VM made things easier for me as opposed to flipping between apps. My scripts will be available on GitHub along with the other presenters’ once the pull request is approved.

I achieved my goals:

  1. I did it
  2. I successfully demonstrated a SQL injection problem and explained why it’s so bad
  3. I demonstrated how to make database queries from PowerShell both more reliable and safer
  4. I learned about some new stuff that I desperately want to experiment with

Next time around, I definitely need to rehearse more and get my timing down better but overall, I’m happy.

Check it out!

Is Your DR Plan Complete?

Kevin Hill (b|t) posted a thought-provoking item on his blog last week about Disaster Recovery plans. While I am in the 10% who perform DR tests for basic functionality on a regular basis, there’s a lot more to being prepared for disaster than just making sure you can get the databases back online.

You really need a full-company business continuity plan (BCP), of which your DR plan is an integral part. Here come the Boy Scouts chanting “Be Prepared!”

When disaster strikes:

  • How will you communicate it to your customers, including regular status updates?
  • How will you communicate within the company?
  • Do you have your systems prioritized so that you know what order things have to be brought online? Which systems can lag by a day or two while you get the most critical things online?
  • Do you have contingency plans for all of the disaster conditions that could impact your business or failure modes of your systems?

Let’s say you’re prepared to fail over from your primary datacenter to a DR datacenter when a catastrophe hits the primary. You’ve got that all worked out and you rehearse it monthly or quarterly. You can bring critical databases and websites online within the required time period and the steps are well-documented. That’s a great start!

You probably do this periodic test on the 2nd Tuesday of each quarter, from the comfort of your desk at work, under “normal” conditions.

  • What if your main office is unavailable due to fire, flood, or weather conditions? Can you remotely access any of your datacenters or cloud infrastructure without first connecting to the building that just got wiped off the planet by a tornado? Can you weather a wide-scale blackout?

  • Are you expecting everyone to work from home (or wherever they may be/may find convenient), or do you have a fixed location to use as a command center? Do you have a contingency plan if that “command center” is inaccessible due to unsafe travel conditions or the same problems that plague your main office?

  • Have you tested executing your DR/BCP out of those alternate locations?

  • What if you can access the office (either VPN or physically), but the connection to your offsite datacenter(s) is severed?

  • Maybe you’ve got everyone set up to work “remotely”. Are they able to work at 100% or even 50% capacity if you lose the office, or are they dependent upon a VPN endpoint in the office? How many routes to the datacenter(s) do you have? Are all the necessary tools available on laptops for remote work, or are you reliant upon a jumpbox? Is that jumpbox accessible in a true disaster scenario?

  • A modern laptop that’s only running an RDP client (aka a smart terminal) can run quite a while on a full charge and is pretty responsive even when tethered to your phone’s LTE connection or a MiFi device. Are you keeping all those batteries fully charged (confession: my laptop is at about 40% as it sits in its bag right now, and I don’t carry a battery pack for my phone all the time) so you can work a few hours while waiting for the lights to come back on at home?

As a DBA, I’m responsible for ensuring that we can get the necessary databases online with reasonably recent data (meeting our SLAs) and accepting connections from users. But that presumes that I can gain access to the DR site. It also presumes that communication channels are documented and followed, so that my team isn’t being asked for status updates every 3 minutes and can instead work the problem.

There are a lot of moving parts that have to work together for your database DR plan to execute successfully, and many of them are outside the DBA’s realm or even the IT department’s. Testing your database recovery plan is terrific – but unless you’ve prepared and tested an end-to-end plan that encompasses everything the company needs to do to continue operating, how can you be sure that you’ll even be in a position to execute the database DR plan?

A Day in the Life (2/?) – August 14, 2018

This is my second installment in a series responding to Steve Jones’s (b|t) #SQLCareer challenge. I decided to jot down most of what I did through the day, filling a page and a half in a Field Notes notebook with timestamps and short reminders of what happened. For more, check out the #SQLCareer hashtag on Twitter.

Background

I’m one of two DBAs in my company, and my colleague is (still) on holiday on the opposite side of the planet so I’m juggling everything – on-call, regular operations, consults with developers, you name it. In production, we manage several thousand databases which sit behind about as many websites.

I chose to record this day because it’s a huge departure from the usual routine. In addition to our bi-weekly software release, we had a quarterly event. I recommend reading the first installment to get a handle on some of the tasks & terms I might throw around here. Let’s check it out.

My Day

03:45 – Alarm goes off. It’s early, way too early, compounded by the fact that I got to bed late-ish last night. We had unexpected dinner guests yesterday, but they were friends we hadn’t seen in three years and they were in town for one day only, so we weren’t going to pass up the opportunity.

04:10 – Hop in the car to get to the office.

04:12 – Nearly hit a deer bounding across the road before I even get out of the neighborhood.

04:50 – Arrive at the office, grab an RxBar (thanks to Drew (b|t) for tipping me off to them) and start getting set up for the deploy. This one’s pretty easy; I only have three changes I’m responsible for:

  • Push a small data change out to all the databases
  • Enable Read Committed Snapshot Isolation on one database
  • Put a clustered primary key on one table in the above database

05:00 – Red Gate Multi-Script is all set up with the database list and I hit the Go button.

05:05 – Multi-Script is done!

05:09 – Enable RCSI & create the clustered PK in that one database.

05:50 – Kick off a data change across a couple dozen databases (via cursor this time, not Multi-Script).

06:05 – Kick off another data change across a couple dozen databases (via cursor).

06:30 – Notice that my installed copy of Brent Ozar Unlimited’s First Responder Kit is out of date by a good 6 months. Refresh it in production with Install-DbaFirstResponderKit – but I’ve also got a half-dozen test instances to update. Fortunately, they’re all registered with a Central Management Server, so dbatools makes it even easier.

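# Push the latest First Responder Kit into master on every instance registered with the CMS (plus the CMS itself)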
Get-DbaRegisteredServer -IncludeSelf -ServerInstance MYCMS | Install-DbaFirstResponderKit -Database master

06:48 – Into the queue. There’s a ticket or two that came in late Monday so I get to work on those.

07:15 – Breakfast has arrived! The company buys us a breakfast pizza and amazing donuts when we have software releases.

07:55 – Get to work trying to sort out an issue with a trigger on a critical table. My colleague and I have been ping-ponging this with our lead QA tester for a few weeks and I really want to get it finished as the trigger has been causing deadlocks.

08:45 – 09:50 – Bounce between the queue and the trigger a few times.

09:00 – Get word that one of the data changes I made earlier in the morning appears wrong. It turns out I did exactly what I was asked to do, but the original requestor transposed a couple digits when submitting the request. Fortunately, the changed records are easily identified and the request was otherwise well-documented, so I’m able to reverse my changes without restoring anything from a backup (my SOP is to make a backup immediately before any data change that isn’t trivial or logged in a history table).

09:20 – Pause to gawk at the crazy weather we’re getting. Not too bad in the city, but down in the Finger Lakes they’re getting rain measured in inches per hour and the flash flooding is intense.

09:50 – Break off to secure a spot in the common area for the quarterly presentation.

10:06 – Leave the presentation to address some blocking issues, and bring my laptop back with me so I can take care of others from there.

11:30 – The presentation is almost over and my ability to concentrate on the material is fading fast. I’ve been awake for 8 hours already after a short night’s sleep.

12:00 – Event wraps up and I take my chair back to my desk. Get registered for PASS Summit.

12:30 – Grab lunch. During the warmer months, the company brings in a local food truck for lunch on the day we have this quarterly event, but they don’t tell us what it is until the day of. Today’s truck is Tom Wahl’s, and they’re dishing up Garbage Plates, a local delicacy. I get my once-every-five-years plate and dig in.

Lunchzilla

12:43 – Get a note that the reports documenting one of my data changes from 6 hours ago aren’t correct. Either the requirements are unclear to me or my brain is completely fuzzed at this point due to the schedule. I decide to shelve it until Wednesday as I’ll only make things worse at this point. I know the data’s good, I just need to get it documented.

13:00 – Kick off a database copy and subsequent data change across a couple hundred databases (via cursor). We do this at least a dozen times a year to get things set up for our partners and internal users.

13:15 – Poke around at a few databases in search of remaining heap tables and start pondering when I can get that fixed.

13:53 – Roll up my sleeves and get back to working out why Minion CheckDB works fine on my test instances, but throws security exceptions in production. Sean (t) has been working with me for quite a while on this and I can’t thank him enough for it. We’ve narrowed it down to something related to how the commands executed via xp_cmdshell are authenticating to the instance. In this iteration, I’m throwing logging statements (writing to a table) around every call to xp_cmdshell in hopes that I can pinpoint where the error is happening and exactly why.
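The instrumentation itself is nothing fancy – conceptually it’s along these lines (a sketch only; the log table and the command are stand-ins, since Minion CheckDB builds the real command internally):

-- Scratch table to capture when each shell call starts and finishes
CREATE TABLE dbo.CheckDbDebugLog (LogTime datetime2 NOT NULL DEFAULT SYSDATETIME(), Message nvarchar(max));

DECLARE @Cmd nvarchar(4000) = N'whoami';  -- stand-in for the command the job actually runs

INSERT INTO dbo.CheckDbDebugLog (Message) VALUES (N'Before xp_cmdshell: ' + @Cmd);
EXEC master.dbo.xp_cmdshell @Cmd;
INSERT INTO dbo.CheckDbDebugLog (Message) VALUES (N'After xp_cmdshell: ' + @Cmd);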

15:00 – Ordinarily on a release day I’m out of the office by 14:00 but there’s no one else to hold down the fort and I’m making progress on this CheckDB thing. I want to make some good progress and document it before I leave. In the past hour I’ve inserted all my logging into the process and gotten a test run with results logged! Still failing but I’ve got enough information to reproduce the issue outside the confines of xp_cmdshell and I’ve got some really good leads. I document my findings and ship them off to Sean.

15:15 – Just as I’m packing up my bags, a new ticket comes into the queue. It reads “if you have a chance, please run this today” but the deadline is Wednesday 9 AM. I talk to the submitter and tell him I can do it today if necessary, but would prefer Wednesday when my brain’s back online. He agrees to Wednesday.

15:20 – Head home.

16:05 – Arrive at home, say hi to the family. My wife tells me to go upstairs and take a nap. I feel guilty about it, but this is one of those difficult days as a production DBA, she’s very understanding, and I decide to compromise. I lie down in bed and watch most of The Dark Knight instead, since it just got added to Netflix (I did try to nod off, but it wasn’t happening).

18:00 – Back downstairs for dinner. As I’m still feeling the effects of lunchzilla, I skip the pasta and just have a couple meatballs.

19:00 – Pop into Slack real quick and notice that Friedrich Weinmann (b|t) has a new release of PSFramework. I tripped over a few issues with the logging functions last week and it turned out he was already working on them, so I was awaiting this release. I’ll have to update the module and check it out sometime Wednesday.

19:45 – Get the kids set for wind-down time then bed, and start writing this.

22:00 – After proofreading this, I realize that my phone has been uncharacteristically quiet tonight. Personal emails, texts, work alerts – I’ve received very, very little since 16:30. It’s unnerving, to be honest. I’m considering logging into work just to make sure everything is OK – although I can see the websites are online, so my instance is up and running at least.

Thoughts

Today was quite different from a normal day or even a normal release day. Due to the odd schedule and sleep cycle, I feel like I had a lot more trouble focusing than on what I would consider a “tough” day, to the point where I’d call myself “scatterbrained.” For a while my colleague & I segmented our days pretty well – before 10 AM and after 2 PM was considered “work the queue” time, and the middle of the day was reserved for larger projects and emergencies. I think I need to get back to this model once she returns.

I didn’t accomplish everything I should have today, including a test run or two of my demo for PowerHour, which I was going to do with Matt Cushing (b|t). I’ll have to set something up with him later in the week. Wednesday is going to be very much a “write out a full task list and step through it, driving each item to 100% completion before moving on” kind of day.

I Will See You in Seattle!

A few weeks ago, I teased good news.

One person hypothesized that I’m joining Microsoft (it seems to be the thing to do lately) and another jumped to the conclusion that I must be pregnant. Both creative responses, but not quite correct.

I’ll be at PASS Summit 2018!

So much to do!

  • Pick some sessions
  • Make my checklist of #sqlfamily I need to see
  • Find a way to pack lighter (I think the iPad will stay home this time)
  • Up my selfie game
  • Get back into shape for #sqlrun
  • Print up some more dbatools ribbons
  • Figure out the social media photo situation (see above, “Up my selfie game”)

If you’re attending Summit, let’s meet up! I’ll be on Twitter, Slack, and Instagram (@alevyinroc across the board) all week and roaming the convention center & various evening events so ping me there to find out where I am.

Because Summit starts on Election Day here in the USA, be sure to either get to your local polling place that morning or follow your state’s process to request & submit an absentee ballot. Every election is more important than the one before it (that’s the most political I’ll get on this blog, I swear).

Speaking: PowerHour, August 21st 2018

It’s official! I will be speaking at the inaugural PowerHour online lightning demo event on Tuesday, August 21st at 2200 UTC. I’ll be demoing Better, Safer SQL Queries from PowerShell.

If you’re working with SQL Server from PowerShell – whether as a DBA, an analyst, or anyone else running queries – you’ve probably used Invoke-SqlCmd. But depending on how you’re building your queries, this can be error-prone or a huge security exposure! With the help of the dbatools module, I’ll show you how to write and run these queries better and safer – and make them easier to work into your scripts to boot.

I’m excited to be a part of this – it’s been far too long since I’ve done a presentation. Please join us on the YouTube channel/stream next Tuesday!

A Day in the Life (1/?) – August 7, 2018

This is my first installment in (I hope) a series responding to Steve Jones’s (b|t) #SQLCareer challenge. I decided to jot down most of what I did through the day, filling a page and a half in a Field Notes notebook with timestamps and short reminders of what happened. For more, check out the #SQLCareer hashtag on Twitter.

Background

I’m one of two DBAs in my company, and my colleague is on holiday on the opposite side of the planet (literally) for a couple weeks so I’m juggling everything – on-call, regular operations, consults with developers, you name it. In production, we manage several thousand databases which sit behind about as many websites.

My Day

00:48 – Get a handful of SentryOne alerts telling me that one of my test servers has rebooted. That’s unexpected. Will have to check that out in the morning. (Narrator: He forgot to do that in the morning)

05:34 – Wake up to the daily alert that the feed to one of our partners was successful. I really need to ask the developers to put me on the “only alert if failed” list. Go back to sleep.

06:00 – Alarm goes off, followed by the daily “daily integrity check job ran over 4 hours” email from SentryOne. Yes. I know. I’m working on a replacement for the job to make it run faster.

07:35 – Another report notification email. This one’s also my cue to pack up and get on the road.

08:30 – Arrive at work after a quick stop at Wegmans and plot my day. Decide to track my time today for the #SQLCareer challenge. All other plans are quickly abandoned as requests come in.

09:00 – Internal customer IMs me. I missed one step in the data movement/update script I ran for his team yesterday. Write an additional script to truncate that table across two dozen databases.

09:30 – En route to a meeting, I’m informed that one of our big sites isn’t loading. This hasn’t happened in a long time, but I’m pretty sure I know what it is. Fire up SentryOne while skimming my email – yep, I missed an alert for a blocking query that I should have killed a while ago. Usually these things self-correct within 2 minutes but this is a big database and we aren’t so lucky. Kill that spid (it’s just a SELECT query) and everything clears up.

09:35 – Arrive at the weekly standup meeting fashionably late. Receive a flurry of questions via Slack from a developer about how to fix a problem with a small stored procedure he’s writing. Try to sort it out, but can’t do much with just my phone while I’m trying to concentrate on the conversation in the room.

10:30 – Meeting’s over; return to my desk to review/debug the stored procedure. It’s pretty basic – it just needs to insert a record into a tracking table and then return the ID (an IDENTITY) of the new record. I rewrote it to use the OUTPUT clause rather than an INSERT followed by SELECT SCOPE_IDENTITY(). Then, while reviewing my changes with him, we came to the conclusion that he could do the INSERT right from the application code and skip the stored procedure altogether.
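For reference, the difference looks roughly like this (the table and column names are invented for illustration):

CREATE TABLE dbo.RequestTracking (RequestId int IDENTITY PRIMARY KEY, RequestedBy nvarchar(50) NOT NULL);
DECLARE @RequestedBy nvarchar(50) = N'someuser';

-- Original approach: insert, then ask for the identity value in a second statement
INSERT INTO dbo.RequestTracking (RequestedBy) VALUES (@RequestedBy);
SELECT SCOPE_IDENTITY() AS RequestId;

-- Rewrite: the INSERT hands back the new ID itself
INSERT INTO dbo.RequestTracking (RequestedBy)
OUTPUT inserted.RequestId
VALUES (@RequestedBy);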

11:10 – Come to the realization that the usage of this new table may present a hotspot for the application, and with that database currently using the READ COMMITTED isolation level, we may see some slowdowns due to blocking. Note to self: switch this database to READ COMMITTED SNAPSHOT in the next maintenance window. Switch the test environment over to this isolation level now so that it gets exercised before we go live. Also discover that a “scratch” database that only my colleague and I ever use is about 75% empty. We can reclaim double-digit gigabytes, so yes, I shrink the database.
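The switch itself is a one-liner; the catch is that it needs to be the only active connection in the database when it runs, which is why production waits for a maintenance window (the database name here is a placeholder):

-- WITH ROLLBACK IMMEDIATE boots any other sessions so the option can take effect
ALTER DATABASE MyAppDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;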

11:20 – Look for more lost space in the system and find some transaction logs that got blown out of proportion due to one-off operations. Clean that up and get a few more gigabytes back.

12:00 – Into my ticket queue! This is how we get most of our “tactical” work – data moves, updates spanning multiple databases, report requests, drop databases, etc.

12:30 – Resume the conversation with my developer and remind him that we aren’t going to use READ UNCOMMITTED for his new functionality.

12:45 – Talk to someone from SentryOne about a question I posted to the support forums. Conversation continues for about 90 minutes off and on as he researches how this feature works and how I can make better sense of what it’s telling me.

13:00 – Finally pause to eat. It’s the last of the brisket I smoked on Sunday. OK, not great. I just got a smoker a month ago and I’m still learning the ropes. I’m approaching it slowly, just like the cooking.

13:30 – With one of our system admins, debug the PowerShell script I wrote to pull some error logs out of tables and into flat files for ingestion by a log analysis tool. Turns out my “rolling 24 hours” of log retention didn’t clean up a couple files from last week and it’s still pulling those in over and over again.

13:35 – Back to the queue!

14:45 – Email someone about a potential issue with a service we depend upon to maintain our SLAs. I may have to find a creative way to validate that this service is working properly so that I can start trusting it again.

15:20 – Back to the error logs issue. My script is working in production. It works when I run it manually in test. When I run it via Task Scheduler, it does nothing. Looks like permissions on a directory. I bang my head against the wall for 25 minutes but it’s only a test server – I can get back to it in the morning.

15:45 – Back to the queue! Maybe I can knock out at least the first pass of this report that was requested yesterday.

16:30 – Time to go home, got a couple chores & errands to take care of.

20:15 – Settle down on the porch to sift through some personal email. Turns out I’m first on standby for the inaugural PowerHour so I better start rehearsing my talk. Start writing this blog post.

21:57 – PowerHour update. I’m definitely talking on August 21st.

23:31 – Enough is enough. Stop fretting over this post and just schedule it already!

Thoughts

This was a fun exercise and I thank Steve for proposing it. It made me more conscious of what I do throughout the day, without becoming obsessive over time-tracking. I notice that I didn’t get to do much if any “strategic” work (larger projects, researching system improvements, building automation to simplify future work) and instead did a bunch of tactical, reactive work. My next installment will probably be August 14th, as that’s our bi-weekly release day and is always a change from the usual routine. Hopefully in the course of doing these periodically, I’ll learn more about myself and how I’m spending my time at the office, finding ways to become more productive.

PowerHour – Online PowerShell Lightning Talks!

Earlier this week, the PowerHour was announced. What is it? It’s kind of like a virtual user group. One hour, 6(ish) lightning demos (10 minutes or less), centered on PowerShell. All community-sourced and driven – anyone can submit a proposal for a demo and if accepted, you’ll be slotted into an available spot.

They’ve already set up a YouTube Channel so you can either watch live or catch up later on, and the whole deal is being organized and managed through GitHub. Got something you want to show off? Log an issue using the template!

I’ve submitted my first (but hopefully not last) proposal. Yep, it involves dbatools because that’s my jam.

It’ll be fun for speakers and attendees alike! You can even use your demo(s) for user group meetings or SQL Saturdays – anywhere lightning talk/demo spots are available. Several SQL Server community folks have tossed proposals in and with so many DBAs getting hooked on PowerShell, it’s a great way for these two communities to come together.

T-SQL Tuesday #104 – Code You Would Hate To Live Without

This month’s T-SQL Tuesday is hosted by Bert Wagner and he asks us to write about code we’ve written that we would hate to live without.

First off, “hate” is a pretty strong word so let’s go with “code you couldn’t bear to live without”. The first bit of code I couldn’t live without is reviled in some circles. More often it’s misunderstood and lamented, or…well, I’ll just show it to you.
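The snippet expands to a skeleton along these lines – a representative sketch rather than the snippet verbatim; the names and the placeholder query are whatever the task at hand calls for:

DECLARE @Id int;  -- one variable per column being fetched

DECLARE MyCursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT Id FROM dbo.SomeTable;  -- placeholder query, replaced for each task

OPEN MyCursor;
FETCH NEXT FROM MyCursor INTO @Id;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- per-row work goes here
    FETCH NEXT FROM MyCursor INTO @Id;
END;

CLOSE MyCursor;
DEALLOCATE MyCursor;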

Yes, you read that right. It’s an SSMS Snippet that generates a cursor. I use cursors so often that I decided to create a snippet so I don’t have to rewrite them or copy/paste from a template file all the time.

Yes, really. Don’t @ me, come at me bro, whatever it is the kids are saying these days. I am dependent upon cursors every day and would be lost without them.

Wow Andy, you must be pretty bad at your job if you’re running cursors all the time! Don’t you know that’s terrible for performance? What’s up with that?

If we’ve met, either in person or virtually on the SQL Community Slack, you probably know that I manage an instance hosting mumblemumble thousand databases. I don’t mean “a couple” thousand databases; we’re looking at Michael Swart’s 10 Percent Rule in the rearview mirror. I regularly have to look for a particular value in the same table across a few dozen/hundred/thousand databases, or pull a report across as many databases, or run the same data change against a big list of databases. Most often, I’ll be given a list of databases or be asked to “run this for all the databases that meet these criteria.” And the only way to do that easily is via a cursor, because I first have to collect the list of databases from another table. There are 3rd-party tools I could use, but the setup to run against an arbitrary list of databases is tedious and error-prone, and I haven’t quite worked out a way to improve it yet.

Processing a table or result set RBAR (row by agonizing row) is a performance concern. But for cranking through a long list of databases and executing the same query against each, it’s the only way to go as far as I know. sp_msforeachdb doesn’t cut it for my purposes because I don’t want to hit every database on my instance.
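Plugging dynamic SQL into that cursor skeleton, a typical data change request ends up looking something like this (a sketch – the list table and the per-database statement are stand-ins for whatever the request calls for):

DECLARE @DatabaseName sysname, @Sql nvarchar(max);

DECLARE DatabaseCursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT DatabaseName FROM dbo.RequestedDatabases;  -- the list I was handed

OPEN DatabaseCursor;
FETCH NEXT FROM DatabaseCursor INTO @DatabaseName;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- QUOTENAME keeps unusual database names from breaking (or hijacking) the dynamic SQL
    SET @Sql = N'UPDATE ' + QUOTENAME(@DatabaseName) + N'.dbo.Settings SET Enabled = 1 WHERE SettingName = N''SomeFlag'';';
    EXEC sys.sp_executesql @Sql;

    FETCH NEXT FROM DatabaseCursor INTO @DatabaseName;
END;

CLOSE DatabaseCursor;
DEALLOCATE DatabaseCursor;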

My second piece of code is more of a technique or design pattern. In a stored procedure or large script with dynamic SQL, I’ll often create two variables (they’re parameters, in the case of stored procedures – choose a sensible default!) called @Debug and @ImSure. They’re just bit types, but I use them to control the output of debugging information and whether the code actually executes.
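A stripped-down illustration of the pattern (the real scripts are larger, but the shape is the same):

DECLARE @Debug bit = 1,   -- print the generated SQL so I can eyeball it
        @ImSure bit = 0;  -- failsafe: nothing executes until this is flipped to 1

DECLARE @Sql nvarchar(max) = N'UPDATE dbo.Settings SET Enabled = 1 WHERE SettingName = N''SomeFlag'';';

IF @Debug = 1
    PRINT @Sql;

IF @ImSure = 1
    EXEC sys.sp_executesql @Sql;
ELSE
    PRINT 'Not executed: set @ImSure = 1 when everything checks out.';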

By doing this, I don’t have to comment/uncomment sections of code all the time just to see what dynamic SQL I’m generating. I also have a failsafe which prevents changes from being executed until I’ve made sure that everything is solid.

Those are probably the two pieces of code that I can share which I couldn’t be without. Honorable mentions which I didn’t write but find indispensable:

  • QUOTENAME – pretty basic T-SQL function but with all that dynamic SQL I’m writing, I need it to keep my SQL clean and safe.
  • dbatools – I’ve written about this PowerShell module quite a bit here but suffice to say for doing bulk administrative tasks, collecting metadata about the environment for analysis, and moving databases or entire instances around, it’s a lifesaver.
  • Brent Ozar Unlimited’s First Responder Kit – I run sp_blitzcache & sp_blitzindex daily looking for places we can tweak our code or indexes to make things run better.
  • Adam Machanic’s sp_whoisactive – Gives me a great lightweight snapshot of what’s going on right now.

Thanks for joining me on this T-SQL Tuesday!

Volunteer for PASS!

This week, I had the opportunity to be the moderator for Joseph Barth’s (b|t) 24 Hours of PASS Summit Preview session about Azure Data Factory V2. It was fun, easy, and I encourage you to sign up to do the same!

Throughout the year, PASS hosts a number of online learning events, with 24 Hours of PASS and virtual chapter webinars being the most common and visible. In each session, the presenter needs a little help managing questions and watching the clock so they can focus on delivering their great content. It’s pretty easy. You just:

  • Sign in about half an hour ahead of the session start time
  • Make sure your audio is working right
  • Chat with the presenter(s) about the timing, whether they want to address audience questions during the presentation or at the end, when they want time alerts, etc.
  • When the session starts, read the PASS lead-in script that’s provided and introduce the speaker
  • Watch for questions and let the speaker know when you’ve hit the agreed-upon checkpoints
  • Read audience questions to the speaker
  • Wrap-up: thank the speaker and audience, read the wrap-up script, and (where applicable) invite the audience to stick around for the next session

So how do you sign up for such a sweet gig? Just set up your PASS profile to indicate that you’re interested in volunteering. When an opportunity comes up, you’ll be contacted by PASS HQ and asked if you’re available for the event.

In the case of 24 Hours of PASS, I was asked to pick a few time slots where I was available but not told who the speaker was in each (which is fine by me – the result is that I attended a session I normally wouldn’t have, and learned some new stuff!). My slot was confirmed and I learned that Joseph would be my speaker. Great! I met him at Summit last year and he founded a user group that I’m familiar with, so we had something to chat about before his session started.

The clock struck 01:00 UTC, I read my script, Joseph did his presentation, and we wrapped up. It went really well and I had fun with it.

So, dear reader, here’s what you’re going to do:

  1. Go to your PASS profile’s myVolunteering section
  2. Check at least two boxes
    • “I would like to sign up to become a PASS volunteer”
    • Any one of the Areas of Interest
  3. When you receive the email from PASS HQ or local coordinators asking for volunteers for an upcoming event, you say “yes!”
  4. Help out with the event
  5. Meet new folks in the SQL Server community
  6. Learn something new

Communities like ours work best when everyone chips in a little bit, whether it’s speaking, moderating online events, working with a local user group, or helping to put on a SQL Saturday. It’s a great way to meet other people in the community and give back to a group that gives us all so much, both personally and professionally.

Becoming a Production DBA – A Family Decision

I really enjoy my job. I became a full-time production DBA about 14 months ago and it has been an overwhelmingly positive move. I work for a good company and with a terrific group of people. Many days, I have to force myself to leave the office because I’m so engrossed in a task and just don’t want to set it aside.

But there’s something that not everyone might consider before taking on this job. If you have a partner, children, or both, taking a job as a production DBA is really a family decision.

Being on-call is potentially disruptive to your family schedule. And sleep schedules! My on-call rotation is two weeks on, two weeks off. In those two weeks, I have:

  • The usual alerts that can come in anytime day or night, the emergency fixes when someone deletes something that shouldn’t be deleted, etc.
  • A software release which requires that I get up at 3:45 AM once per rotation
  • Monthly server patching at 2 AM, if it happens during my rotation

Many years ago, I had a job where I carried on-call responsibilities and it was rough. Lots of nights and weekends. Then I got a decade-long break. Before accepting my current job, I discussed the on-call requirements with my spouse. I didn’t want to subject her to that again without making sure that she was OK with it. She is a very light sleeper, so any chirp from the phone is likely to wake her up (by contrast, I once put my phone three inches from my head and slept through multiple personal email alerts).

This job has the potential to impact the whole family, in both small ways and large. Chris Sommer (b|t) said one day in the SQL Community Slack that being a production DBA is kind of a blue-collar job. Shift work, etc. He makes a good point. I’ve adapted to the schedule and it’s not bad…for me.

But I’m not alone in the house and yes, everyone has had to adjust. Sleep has been lost. If an alert comes in overnight, my spouse wakes up too. We’ve scheduled family activities around the on-call schedule. Carried the work laptop all over creation “just in case.” Left the beach to handle urgent tickets. Skipped weekend morning outings. Stayed up late, got up early, missed dinner, or paused a movie to baby-sit a critical job or troubleshoot system issues.

It’s worth it though. After taking on the new role, my job security increased. My career security has increased. My work is more challenging, more interesting, and I have more autonomy than ever before. I look forward to going to work every day. I’m getting more involved in the SQL Server community. On average I’m getting home earlier than I used to, so I’m spending more time with the kids on weekdays. It hurts waking up at 3:45 AM once a month but I’m there to greet them when they get home from school.

Life is full of tradeoffs and compromises, and taking a job with on-call responsibilities involves a lot of those tradeoffs. Overall, it’s been a net win for me. Would I prefer to not have to deal with overnights and weekends? Who wouldn’t? But the positive changes that this job has meant for my career, my family, and myself make it worthwhile.