Please join me at my new home FLX SQL
I’ll be keeping things here so other links don’t break, but I will no longer be updating or posting new content here.
This month’s T-SQL Tuesday is hosted by Jeff Mlakar and he asks us to write about a project that went horribly wrong. My story isn’t really worthy of the name “death march” but it was a pretty rough project.
The project started sometime in mid-2003. I was working as a web developer (Classic ASP) for an insurance company and they wanted to modernize the underwriting process with a web-based “workflow” application.
A platform was selected, the team was picked, and we set about customizing the devil out of it (it was more of a “framework” than turnkey application) and integrating with our existing systems. Of course, being an insurance company everything ultimately had to land in the mainframe.
The tech team was pretty large – upwards of a dozen and a half programmers across various disciplines. Four of us were kind of off on our own building the ASP front end and COM-based middleware – myself and Ned on the ASP side, Tony & Phil (all names changed) on the other side of the cube walls working on COM objects. The four of us worked really well together, with Phil & I each taking the lead in our respective disciplines. Our PM was great; we got along well and she knew the right questions to ask me to nudge me in the right direction.
As Phil & I dug into the platform API, we realized we were going to be implementing more features with our own code than planned – the features we’d expected to find built into the API weren’t actually there yet.
We brought in a few consultants from a large company you’ve certainly heard of to help us work out a UI design. After 5 weeks, we presented our design proposals. Every – I mean every – proposal put forth by the consultants and me was shot down. The response was “those are very nice, but we want what we described to you 2 months ago, so build that.” I was flabbergasted. I asked the consultants if they’d ever seen anything like that and they admitted that they hadn’t. We had just spent five weeks and a bunch of money to produce nothing.
February was approaching and while Phil, Tony, Ned and myself were firing on all cylinders, others weren’t doing as well. We were supposed to launch in March, but it was becoming apparent that we’d miss that date. The release date slipped to mid-April.
The four of us spent the last half of February and all of March tightening up our code. We had some performance concerns but hadn’t figured out where they were coming from yet. Acceptance testing hadn’t started yet; the project was expected to be “nearly complete” before that’d start.
Acceptance testing started in late March. Ned & I knew our code inside and out, and I could suss out pretty easily if a reported issue was there or in the COM objects so I could throw those bugs over the wall to Tony & Phil as necessary.
I returned from lunch on the last Friday of March and Ned said to me “I’m leaving at the end of the day.” Um…OK, thanks Ned, I am too. Then he clarified. See, Ned was a contractor and he was ending his contract at the end of the day – and starting a new gig in Miami on Monday! We were unable to bring in a replacement, so I was flying solo for the rest of the project.
I spent April fixing bugs as fast as I could and re-implementing features that had sounded good on paper but, once seen in action, weren’t acceptable. Mid-April came around and the project release date was pushed back by another month.
I went to the PM and expressed my concerns about the release date changes. I reminded her that I was getting married Memorial Day weekend and I’d be unavailable for 2 weeks due to that and the honeymoon. She assured me that this was the last reschedule, and even if that turned out to not be true, I wouldn’t be asked to change my plans.
Early in May came the final blow for me. We had an all-project meeting with several people from the highest levels in the company in attendance. The missed deadlines, the incredible strain placed not only on the development team but on their families, the importance of the project to the company – all of it was brought up. People laid some very personal feelings out. The response from one of the top-level people was cold, harsh, uncaring, impersonal, tone-deaf, and completely disheartening.
The project went live 10 days before I left for my wedding and honeymoon.
No project implementation is perfect out of the gate. Right before we went live, Phil figured out what was causing our performance issues and over the next month or two, more or less re-implemented that portion of the vendor’s API himself. We pushed out an updated version as quickly as we could; users had been complaining about performance from day one. We got a lackluster “yeah, I guess it’s better now” but no real acknowledgement of the improvement Phil had made so quickly. It was too late; the perception was that the system was slow, and that wasn’t going to change, no matter how fast we made it.
The project was started in the summer or early Fall of 2003 and went live in the last third of May 2004. By the end of the first quarter of 2005, the company had decided to go in a different direction and the system was on track to be mothballed; no new business was being scanned into it, and once the last piece of business that was in it was processed, everything was shut down.
The system spent more time under development than it did in use.
This is my third installment in a series responding to Steve Jones’s (b|t) #SQLCareer challenge. I decided to jot down most of what I did through the day, filling a page and a half in a Field Notes notebook with timestamps and short reminders of what happened. For more, check out the #SQLCareer hashtag on Twitter.
I chose to record this day because I was working from home as my car was in the shop and I thought I might get some bigger things done without the distractions of being in the office. But as Mike Tyson famously said, everyone has a plan until they get punched in the mouth.
06:00 – Alarm goes off but I’m already half-awake.
07:00 – Drive my son to school. This year he’s on a much earlier schedule than last year, and while I can’t walk him to school due to the distance and he can take the bus, driving him gives us some time to talk one-to-one and it gets me out the door and into the office earlier to boot. Earlier to work means I leave earlier, giving me more time to spend with my family in the late afternoon/evening.
07:30 – Return home, make a better breakfast for myself than Honey Nut Cheerios. Today it’s scrambled eggs with guacamole. If you haven’t tried it, you’re missing out.
08:00 – Set up camp in our “spare bedroom” which is trying to be a home office now.
08:10 – Log into the VPN and plug in the dongle for the wireless mouse. Windows spends far too long spinning on installing the driver and for the time being I use the built-in trackpad.
08:15 – Finish RDPing into my desktop and discover that SQL Server Management Studio (SSMS) has been restarted due to an update of some kind 45 minutes prior. Spend a bit of time recovering unsaved files and then saving them elsewhere.
08:20 – Give up on the mouse working, tell Windows to stop trying to install the drivers.
08:23 – For reasons I can’t explain, mouse starts working. Decide to take advantage of the interruption to my usual SSMS workflow and install the latest version 17.9. I haven’t seen reports of it blowing anything up in the week or so since it was released.
08:30 – Are we ready to work? I think so. Hop into the queue and take my morning cruise through email and SentryOne to review everything that ran overnight.
08:35 – The VPN seems to be really sensitive to other users of my home WiFi (my wife works from home regularly), so I use this as an excuse to hook up the 5-port switch that arrived from Amazon a couple days prior. We’ve got an ethernet drop in the room but it’s occupied by the VoIP box hooked up to our printer/scanner/fax machine. Yeah, a fax machine. She needs it for her job. But we’re out of electrical outlets in this corner of the room. Rummage around and find an unused power strip in the closet. It’s got 4 always-on outlets plus 4 on a timer, and due to the size of the wall-warts for the VoIP box and switch, I need to use both sides. Spend more time than I care to admit figuring out how to program the timer (whose brilliant idea was this thing?). After achieving victory over both electricity and Ethernet, I disconnected from WiFi, reconnected the VPN, and got a more stable connection.
08:54 – Pull some data together for a product owner to document the conditions that caused one of their tools to trigger a half-dozen alerts from SentryOne. Turns out that if you attempt to back up the one database to the same filename in four separate processes simultaneously, three of them will get blocked!
09:03 – Get food for my daughter. I forgot to mention earlier that she was staying home from school due to illness.
09:10 – Back to the queue.
09:30 – Call into our weekly meeting for an upgrade project. Struggle to hear anything due to abysmal acoustics in the meeting room. They can hear every click of my mechanical keyboard over the phone.
10:00 – Pull some reports (Excel files) for the business side of the house. We need a couple years’ worth of data and our web-based reporting system can’t handle that volume, so I have a PowerShell script that breaks it up into chunks and runs the stored procedure directly. I spent a bunch of time arguing with the script because I was trying to “improve” it from the previous iteration instead of just using what has worked in the past.
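For the curious, the chunking approach looks roughly like this. This is a sketch only – the instance, database, procedure, parameter names, and date range are all placeholders, and I’m writing CSV here rather than Excel to keep the example dependency-free. It uses dbatools’ Invoke-DbaQuery (formerly Invoke-DbaSqlQuery; parameter names may vary by module version).

```powershell
# Sketch only: server, database, procedure, and parameter names are placeholders.
Import-Module dbatools

$rangeStart = [datetime]'2016-08-01'
$rangeEnd   = [datetime]'2018-08-01'

# Walk the two-year range one month at a time so no single call
# has to return the full data set.
$chunkStart = $rangeStart
while ($chunkStart -lt $rangeEnd) {
    $chunkEnd = $chunkStart.AddMonths(1)

    $queryParams = @{
        SqlInstance  = 'REPORTSERVER'
        Database     = 'Reporting'
        Query        = 'EXEC dbo.GetReportData @StartDate, @EndDate;'
        SqlParameter = @{ StartDate = $chunkStart; EndDate = $chunkEnd }
    }
    # Each monthly chunk lands in its own file.
    Invoke-DbaQuery @queryParams |
        Export-Csv -Path ("Report_{0:yyyyMM}.csv" -f $chunkStart) -NoTypeInformation

    $chunkStart = $chunkEnd
}
```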
12:00 – Break off to sit in on the PASS Professional Development Virtual Group presentation “Talk Tech To Me – Improving Your Technical Presentation Skills” by Alexander Arvidsson (b|t).
13:15 – Make lunch.
13:35 – Back to the reports I was working on in the morning. Ends up being a half-dozen Excel files, each in the neighborhood of 100MB, and approaching the limits of Excel’s capacity.
14:00 – Pick up my car from the shop.
14:30 – Resume work on another report. I had completed 90% of this one the previous day at work, but one calculation was twisting my brain. As written, the requirements are a little fuzzy and I keep flipping between two possible interpretations of them. I decide to commit to one interpretation and get my head around how to code for it.
16:00 – Check where we are with MinionWare’s CheckDB. I rolled it out about 6 weeks ago and it’s been working mostly OK, but I’m still working out some issues specific to my environment. Discuss with one of our sysadmins when we’ll install some firmware updates for our servers in the coming weeks.
16:10 – Learn that I picked the wrong interpretation of the requirements for the afternoon report and I need to flip it around. Spend quite a while working that out and validating. Whiteboard it out around a doodle of my daughter’s (can’t erase that!).
17:21 – Realize that I need to get the non-production copy of Minion CheckDB in sync with production. Rather than move individual objects (I’ve been working with Sean for about 2 months on debugging some issues and I don’t have a 100% standard version), I do a backup & restore of the database to the test server. Since production is SQL Server 2008 R2 and test is 2016, I update the compatibility level and rebuild indexes, then decide to compress a couple tables as a test before doing the same in production just to keep things from getting too huge.
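The backup-and-restore refresh can be sketched in a few lines of dbatools – all of the names below are placeholders, and Set-DbaDbCompatibility (which, as I understand it, brings the database up to the instance’s level when no target is specified) stands in for the manual compatibility-level change:

```powershell
# Placeholders throughout: instance names, database name, and backup path.
Import-Module dbatools

# Back up production and restore it over the test copy.
Backup-DbaDatabase -SqlInstance PROD2008R2 -Database MinionDB -Path '\\backups\refresh'
Restore-DbaDatabase -SqlInstance TEST2016 -Path '\\backups\refresh' -WithReplace

# Bring the restored database up to the 2016 server's compatibility level.
Set-DbaDbCompatibility -SqlInstance TEST2016 -Database MinionDB
```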
17:38 – Receive a text from my neighbor about a snake that’s standing between him and his grill in the backyard. It’s a long story. It should be noted that there are only three species of venomous snakes that call New York home, and none of them live in our neighborhood.
18:00 – Done for the day. Time to get the kids fed while my wife has the cat at the vet getting an unplanned checkup.
While I was working straight through the day, I didn’t really feel like I accomplished much. Usually on a remote work day I can start early and finish a little late, and still end up spending more time “at home” because I’ve eliminated the commute, but thanks to my hardware distractions to start the day, that wasn’t happening.
I finished off two sizable report requests and took care of a few pieces of administrivia, but that’s about it. Quantity vs. quality, I guess? Getting Minion CheckDB updated in my test environment seems to have put an end to the alerts it was triggering, which doesn’t impact others but it’s good for my quality of life.
While we do have a “home office” space, it’s not set up in a way that’s comfortable for me to work. The desk is all wrong, I only have the built-in display on the laptop, it needs a ceiling fan to help circulate air better, and the whiteboard is in an inconvenient place. If I were to be working regularly from home, this day’s experience pretty much seals what I already was feeling – that I need to build out a good workspace in the basement that fits me for both ergonomics and working style. That’s not a small or inexpensive undertaking, so I don’t see it happening in the near future. We have a lot of cleaning, planning and building to do to make that a reality, and the only thing I know for sure is the desk I’ll be getting.
The first edition of the PSPowerHour is in the books and it looks like it was a big success. This one was dbatools-heavy but I chalk that up to the dbatools community having lots of free time because we’ve automated so many of our tasks 🙂
I signed in about half an hour ahead of the webcast and was the first one there. Shortly thereafter, I was joined by Michael Lombardi (t), then Jess Pomfret (b|t) and Chrissy LeMaire (b|t). After ironing out a few glitches, we got everyone in the right place and kicked off the broadcast. Everything ran very smoothly, especially considering the number of people involved – Michael and Warren F. (b|t) did a terrific job of orchestrating everything.
While watching and listening to Chrissy, Doug, Andrew & Jess give their demos, I ran through my own in my head a couple times, adding and rearranging a few things as I observed how they were doing theirs. The big dilemma for me was whether or not to run the camera or exclusively screen share (I ended up going with the screen share only). Having not rehearsed my demo enough in the weeks leading up to the event, I was still not sure where to dip into more detail or dial things back and seeing what others were doing helped quite a bit. Having familiar faces & voices ahead of me in the queue put my nerves to rest.
I wasn’t able to watch the sessions after mine in their entirety due to family commitments. Joshua’s Burnt Toast module looks like it’ll be fun to experiment with and add some nice functionality to scripts (I got to see about half of his demo), and I’m really looking forward to catching a replay of Daniel’s demo of PowerShell on the Raspberry Pi – I didn’t realize that it had been ported already!
My demo covered Invoke-DbaSqlQuery and why one should use it over Invoke-Sqlcmd – primarily for protection from SQL injection. Things didn’t go exactly the way I’d practiced; I ran short of time despite feeling like I rushed things and cut back on some of what I had planned to say. The latter was in part because of the lead-ins from Chrissy, Andrew, and Jess. Because they did such a good job introducing dbatools, I was able to skip over it. But I was able to throw in a teaser for Matt Cushing’s (b|t) demo at the next PSPowerHour.
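The core of the injection argument boils down to parameterization. A minimal illustration (the table, column, and server names are made up, and the cmdlet has since been renamed Invoke-DbaQuery):

```powershell
# Hypothetical example: table, column, and instance names are for illustration only.
Import-Module dbatools

$name = "O'Brien"   # user-supplied input; note the embedded quote

# Risky: string concatenation breaks on the quote and invites SQL injection.
# Invoke-Sqlcmd -Query "SELECT * FROM dbo.Customer WHERE LastName = '$name'"

# Safer: let the client driver parameterize the value.
Invoke-DbaQuery -SqlInstance MYSERVER -Database CRM `
    -Query 'SELECT * FROM dbo.Customer WHERE LastName = @LastName' `
    -SqlParameter @{ LastName = $name }
```

With the parameterized form, the embedded quote is just data – it never gets interpreted as part of the SQL statement.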
Running the demos inside a VM and screen-sharing just that VM made things easier for me as opposed to flipping between apps. My scripts will be available on GitHub along with the other presenters’ once the pull request is approved.
I achieved my goals.
Next time around, I definitely need to rehearse more and get my timing down better but overall, I’m happy.
Kevin Hill (b|t) posted a thought-provoking item on his blog last week about Disaster Recovery Plans. While I am in the 10% who perform DR tests for basic functionality on a regular basis, there’s a lot more to being prepared for disaster than just making sure you can get the databases back online.
You really need to have a full-company business continuity plan (BCP), which your DR plan is an integral portion of. Here come the Boy Scouts chanting “Be Prepared!”
When disaster strikes:
Let’s say you’re prepared to fail over from your primary datacenter to a DR datacenter when a catastrophe hits the primary. You’ve got that all worked out and you rehearse it monthly or quarterly. You can bring critical databases and websites online within the required time period and the steps are well-documented. That’s a great start!
You probably do this periodic test on the 2nd Tuesday of each quarter, from the comfort of your desk at work, under “normal” conditions.
Are you expecting everyone to work from home (or wherever they may be/may find convenient), or do you have a fixed location to use as a command center? Do you have a contingency plan if that “command center” is inaccessible due to unsafe travel conditions or the same problems that plague your main office?
Have you tested executing your DR/BCP out of those alternate locations?
What if you can access the office (either VPN or physically), but the connection to your offsite datacenter(s) is severed?
Maybe you’ve got everyone set up to work “remotely”. Are they able to work at 100% or even 50% capacity if you lose the office, or are they dependent upon a VPN endpoint in the office? How many routes to the datacenter(s) do you have? Are all the necessary tools available on laptops for remote work, or are you reliant upon a jumpbox? Is that jumpbox accessible in a true disaster scenario?
A modern laptop that’s only running an RDP client (aka smart terminal) can run quite a while on a full charge and is pretty responsive even when tethered to your phone’s LTE connection or a MiFi device. Are you keeping all those batteries fully charged (confession: my laptop is at about 40% as it sits in its bag right now, and I don’t carry a battery pack for my phone all the time) so you can work a few hours while waiting for the lights to come back on at home?
As a DBA, I’m responsible for ensuring that we can get the necessary databases online with reasonably recent data (meeting our SLAs) and accepting connections for users. But that presumes that I can gain access to the DR site. It also presumes that communication channels are documented and followed such that my team isn’t being asked for status updates every 3 minutes, instead of being allowed to work the problem.
There are a lot of moving parts that have to be working together for your database DR plan to execute successfully, and many of them are outside the DBA’s realm or even the IT department. Testing your database recovery plan is terrific – but unless you’ve prepared and tested an end-to-end plan that encompasses everything the company needs to do to continue operating, how can you be sure that you’ll even be in a position to execute the database DR plan?
This is my second installment in a series responding to Steve Jones’s (b|t) #SQLCareer challenge. I decided to jot down most of what I did through the day, filling a page and a half in a Field Notes notebook with timestamps and short reminders of what happened. For more, check out the #SQLCareer hashtag on Twitter.
I’m one of two DBAs in my company, and my colleague is (still) on holiday on the opposite side of the planet so I’m juggling everything – on-call, regular operations, consults with developers, you name it. In production, we manage several thousand databases which sit behind about as many websites.
I chose to record this day because it’s a huge departure from the usual routine. In addition to our bi-monthly software release, we had a quarterly event. Let’s check it out. I recommend reading the first installment to get a handle on some of the tasks & terms I might throw around here.
03:45 – Alarm goes off. It’s early, way too early, and compounded by the fact that I got to bed late-ish last night. We had unexpected dinner guests yesterday but it’s friends who we haven’t seen in three years and they were in town for one day only, so we weren’t going to pass up the opportunity.
04:10 – Hop in the car to get to the office.
04:12 – Nearly hit a deer bounding across the road before I even get out of the neighborhood.
04:50 – Arrive at the office, grab an RxBar (thanks to Drew (b|t) for tipping me off to them) and start getting set up for the deploy. This one’s pretty easy, I only have three changes I’m responsible for:
05:00 – Red Gate Multi-Script is all set up with the database list and I hit the Go button.
05:05 – Multi-Script is done!
05:09 – Enable RCSI & create the clustered PK in that one database.
05:50 – Kick off a data change across a couple dozen databases (via cursor this time, not Multi-Script).
06:05 – Kick off another data change across a couple dozen databases (via cursor).
06:30 – Notice that my installed copy of Brent Ozar Unlimited’s First Responder Kit is out of date by a good 6 months. Refresh it in production with Install-DbaFirstResponderKit, but I’ve got a half-dozen test instances too. Fortunately, they’re all registered with a Central Management Server, so dbatools makes it even easier:
Get-DbaRegisteredServer -IncludeSelf -ServerInstance MYCMS | Install-DbaFirstResponderKit -Database master
06:48 – Into the queue. There’s a ticket or two that came in late Monday so I get to work on those.
07:15 – Breakfast has arrived! The company buys us a breakfast pizza and amazing donuts when we have software releases.
07:55 – Get to work trying to sort out an issue with a trigger on a critical table. My colleague and I have been ping-ponging this with our lead QA tester for a few weeks and I really want to get it finished as the trigger has been causing deadlocks.
08:45 – 09:50 – Bounce between the queue and the trigger a few times.
09:00 – Get word that one of the data changes I made earlier in the morning appears wrong. It turns out I did exactly what I was asked to do, but the original requestor transposed a couple digits when submitting the request. Fortunately, the changed records are easily identified and the request was otherwise well-documented, so I’m able to reverse my changes without restoring anything from a backup (my SOP is to make a backup immediately before any data change that isn’t trivial or logged in a history table).
09:20 – Pause to gawk at the crazy weather we’re getting. Not too bad in the city, but down in the Finger Lakes they’re getting rain measured in inches per hour and the flash flooding is intense.
09:50 – Break off to secure a spot in the common area for the quarterly presentation.
10:06 – Leave the presentation to address some blocking issues, and bring my laptop back with me so I can take care of others from there.
11:30 – The presentation is almost over and my ability to concentrate on the material is fading fast. I’ve been awake for 8 hours already after a short night’s sleep.
12:00 – Event wraps up and I take my chair back to my desk. Get registered for PASS Summit.
12:30 – Grab lunch. During the warmer months, the company brings a local food truck for lunch on the day we have this quarterly event but they don’t tell us what it is until the day of. Today’s truck – Tom Wahl’s and they’re dishing up Garbage Plates, a local delicacy. I get my once-every-five-years plate and dig in.
12:43 – Get a note that the reports documenting one of my data changes from 6 hours ago aren’t correct. Either the requirements are unclear to me or my brain is completely fuzzed at this point due to the schedule. I decide to shelve it until Wednesday as I’ll only make things worse at this point. I know the data’s good, I just need to get it documented.
13:00 – Kick off a database copy and subsequent data change across a couple hundred databases (via cursor). We do this at least a dozen times a year to get things set up for our partners and internal users.
13:15 – Poke around at a few databases in search of remaining heap tables and start pondering when I can get that fixed.
13:53 – Roll up my sleeves and get back to working out why Minion CheckDB works fine on my test instances, but throws security exceptions in production. Sean (t) has been working with me for quite a while on this and I can’t thank him enough for it. We’ve narrowed it down to something related to how the commands executed via xp_cmdshell are authenticating to the instance. In this iteration, I’m throwing logging statements (writing to a table) around every call to xp_cmdshell in hopes that I can pinpoint where the error is happening and exactly why.
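The instrumentation pattern is simple bracketing – a log write immediately before and after each call. A rough sketch (the log table and the command being run are stand-ins, not the actual Minion CheckDB internals):

```sql
-- Stand-in names throughout; this illustrates the logging pattern only.
CREATE TABLE dbo.CmdShellLog (
    LogID    int IDENTITY(1,1) PRIMARY KEY,
    LoggedAt datetime2 NOT NULL DEFAULT SYSDATETIME(),
    Step     nvarchar(200) NOT NULL
);
GO

INSERT INTO dbo.CmdShellLog (Step)
VALUES (N'Before xp_cmdshell: dir of backup share');

DECLARE @rc int;
EXEC @rc = master.dbo.xp_cmdshell N'dir \\backups\checkdb';

-- Capture the return code so a failure points at this specific call.
INSERT INTO dbo.CmdShellLog (Step)
VALUES (N'After xp_cmdshell, return code ' + CAST(@rc AS nvarchar(10)));
```

With timestamps on every row, the last “Before” entry without a matching “After” pinpoints the failing call.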
15:00 – Ordinarily on a release day I’m out of the office by 14:00, but there’s no one else to hold down the fort and I’m making progress on this CheckDB thing. I want to make some good progress and document it before I leave. In the past hour I’ve inserted all my logging into the process and gotten a test run with results logged! Still failing, but I’ve got enough information to reproduce the issue outside the confines of xp_cmdshell and I’ve got some really good leads. I document my findings and ship them off to Sean.
15:15 – Just as I’m packing up my bags, a new ticket comes into the queue. It reads “if you have a chance, please run this today” but the deadline is Wednesday 9 AM. I talk to the submitter and tell him I can do it today if necessary, but would prefer Wednesday when my brain’s back online. He agrees to Wednesday.
15:20 – Head home.
16:05 – Arrive at home, say hi to the family. My wife tells me to go upstairs and take a nap. I feel guilty about it, but this is one of those difficult days as a production DBA, she’s very understanding, and I decide to compromise. I lay down in bed and watch most of The Dark Knight instead since it just got put on Netflix (I did try to nod off but it wasn’t happening).
18:00 – Back downstairs for dinner. As I’m still feeling the effects of lunchzilla, I skip the pasta and just have a couple meatballs.
19:00 – Pop into Slack real quick and notice that Friedrich Weinmann (b|t) has a new release of PSFramework. I tripped over a few issues with the logging functions last week and it turned out he was already working on them, so I was awaiting this release. I’ll have to update the module and check it out sometime Wednesday.
19:45 – Get the kids set for wind-down time then bed, and start writing this.
22:00 – After proofreading this, I realize that my phone has been uncharacteristically quiet tonight. Personal emails, texts, work alerts – I’ve received very, very little since 16:30. It’s unnerving, to be honest. I’m considering logging into work just to make sure everything is OK, although I can see the websites are online, so my instance is up and running at least.
Today was quite different from a normal day or even a normal release day. Due to the odd schedule and sleep cycle, I feel like I had a lot more trouble focusing than on what I would normally consider a “tough” day, to the point where I’d call myself “scatterbrained” today. For a while my colleague & I segmented our days pretty well – before 10 AM and after 2 PM was considered “work the queue” time, and the middle of the day was reserved for larger projects and emergencies. I think I need to get back to this model once she returns.
I didn’t accomplish everything I should have today, including a test run or two of my demo for PowerHour which I was going to do with Matt Cushing (b|t). I’ll have to set something up with him later in the week. Wednesday is going to be very much a “write out a full task list and step through, driving each one to 100% completion before moving on” kind of day.
A few weeks ago, I teased good news.
just got some good news. can't wait to share it—
Andy Levy (@ALevyInROC) July 20, 2018
One person hypothesized that I’m joining Microsoft (it seems to be the thing to do lately) and another jumped to the conclusion that I must be pregnant. Both creative responses, but not quite correct.
I’ll be at PASS Summit 2018!
So much to do!
If you’re attending Summit, let’s meet up! I’ll be on Twitter, Slack, and Instagram (@alevyinroc across the board) all week and roaming the convention center & various evening events so ping me there to find out where I am.
Because Summit starts on Election Day here in the USA, be sure to either get to your local polling place that morning or follow your state’s process to request & submit an absentee ballot. Every election is more important than the one before it (that’s the most political I’ll get on this blog, I swear).
If you’re working with SQL Server from PowerShell, either as a DBA, analyst, or anyone else running queries, you’ve probably used Invoke-SqlCmd. But depending on how you’re building your queries, this can be error-prone or a huge security exposure! With the help of the dbatools module, I’ll show you how to write and run these queries better and safer – and make them easier to work into your scripts to boot.
I’m excited to be a part of this – it’s been far too long since I’ve done a presentation. Please join us on the YouTube channel/stream next Tuesday!
This is my first installment in (I hope) a series responding to Steve Jones’s (b|t) #SQLCareer challenge. I decided to jot down most of what I did through the day, filling a page and a half in a Field Notes notebook with timestamps and short reminders of what happened. For more, check out the #SQLCareer hashtag on Twitter.
I’m one of two DBAs in my company, and my colleague is on holiday on the opposite side of the planet (literally) for a couple weeks so I’m juggling everything – on-call, regular operations, consults with developers, you name it. In production, we manage several thousand databases which sit behind about as many websites.
00:48 – Get a handful of SentryOne alerts telling me that one of my test servers has rebooted. That’s unexpected. Will have to check that out in the morning. (Narrator: He forgot to do that in the morning)
05:34 – Wake up to the daily alert that the feed to one of our partners was successful. I really need to ask the developers to put me on the “only alert if failed” list. Go back to sleep.
06:00 – Alarm goes off, followed by the daily “daily integrity check job ran over 4 hours” email from SentryOne. Yes. I know. I’m working on a replacement for the job to make it run faster.
07:35 – Another report notification email. This one’s also my cue to pack up and get on the road.
08:30 – Arrive at work after a quick stop at Wegmans and plot my day. Decide to track my time today for the #SQLCareer challenge. All other plans are quickly abandoned as requests come in.
09:00 – Internal customer IMs me. I missed one step in the data movement/update script I ran for his team yesterday. Write an additional script to truncate that table across two dozen databases.
09:30 – En route to a meeting, I’m informed that one of our big sites isn’t loading. This hasn’t happened in a long time, but I’m pretty sure I know what it is. Fire up SentryOne while skimming my email – yep, I missed an alert for a blocking query that I should have killed a while ago. Usually these things self-correct within 2 minutes, but this is a big database and we aren’t so lucky. Kill that spid (it’s just a SELECT query) and everything clears up.
09:35 – Arrive at the weekly standup meeting fashionably late. Receive a flurry of questions via Slack from a developer about how to fix a problem with a small stored procedure he’s writing. Try to sort it out, but can’t do much with just my phone while I’m trying to concentrate on the conversation in the room.
10:30 – Meeting’s over, return to my desk to review/debug the stored procedure. It’s pretty basic: it just needs to insert a record into a tracking table and then return the ID (an IDENTITY) of the new record. I rewrote it to use the OUTPUT clause rather than calling SELECT SCOPE_IDENTITY() immediately after the INSERT, then while reviewing my changes with him we came to the conclusion that he can use an INSERT right from the application code and skip the stored procedure altogether.
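A sketch of the two patterns, using a hypothetical tracking table (the real schema isn't shown in the post):

```sql
-- Placeholder table for illustration only.
CREATE TABLE dbo.RequestTracking (
    RequestId   int IDENTITY(1,1) PRIMARY KEY,
    RequestType varchar(50) NOT NULL,
    CreatedAt   datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
);
GO

-- Original pattern: INSERT, then fetch the new IDENTITY in a second statement.
INSERT INTO dbo.RequestTracking (RequestType) VALUES ('import');
SELECT SCOPE_IDENTITY() AS RequestId;

-- Rewritten pattern: OUTPUT returns the new ID as part of the INSERT itself.
INSERT INTO dbo.RequestTracking (RequestType)
OUTPUT inserted.RequestId
VALUES ('import');
```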
11:10 – Come to the realization that the usage of this new table may present a hotspot for the application, and with that database currently using the READ COMMITTED isolation level, we may see some slowdowns due to blocking. Note to self: switch this database to READ COMMITTED SNAPSHOT during the next maintenance window. Switch the test environment over to this isolation level now so that it gets exercised before we go live. Also discover that a “scratch” database that only my colleague and I ever use is about 75% empty. We can reclaim double-digit gigabytes, so yes, I shrank the database.
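The isolation-level switch is a single database-level setting; a sketch with a placeholder database name (the WITH ROLLBACK IMMEDIATE option is one reason this waits for a maintenance window, since the change needs no other active transactions):

```sql
-- Database name is a placeholder.
ALTER DATABASE [MyAppDb]
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;  -- roll back open transactions rather than wait forever

-- Verify the setting took effect.
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = N'MyAppDb';
```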
11:20 – Look for more lost space in the system and find some transaction logs that got blown out of proportion due to one-off operations. Clean that up and get a few more gigabytes back.
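Reclaiming space from a log that ballooned during a one-off operation is typically a DBCC SHRINKFILE call against the log's logical file name; placeholder names, since the actual databases aren't identified:

```sql
-- Placeholder database and logical log file name.
USE MyAppDb;

-- Shrink the transaction log back down to a 1 GB target.
DBCC SHRINKFILE (MyAppDb_log, 1024);  -- target size in MB
```

Routine log shrinking is generally discouraged; it fits here only because a one-off operation blew the files out well past their normal working size.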
12:00 – Into my ticket queue! This is how we get most of our “tactical” work – data moves, updates spanning multiple databases, report requests, drop databases, etc.
12:30 – Resume the conversation with my developer and remind him that we aren’t going to use READ UNCOMMITTED for his new functionality.
12:45 – Talk to someone from SentryOne about a question I posted to the support forums. Conversation continues for about 90 minutes off and on as he researches how this feature works and how I can make better sense of what it’s telling me.
13:00 – Finally pause to eat. It’s the last of the brisket I smoked on Sunday. OK, not great. I just got a smoker a month ago and I’m still learning the ropes. I’m approaching it slowly, just like the cooking.
13:30 – With one of our system admins, debug the PowerShell script I wrote to pull some error logs out of tables and into flat files for ingestion by a log analysis tool. Turns out my “rolling 24 hours” of log retention didn’t clean up a couple files from last week and it’s still pulling those in over and over again.
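The rolling-retention logic described here can be as simple as a LastWriteTime filter; a sketch assuming a placeholder directory and file pattern, not the author's actual script:

```powershell
# Hypothetical export directory and file pattern.
$logDir = 'D:\ErrorLogExports'
$cutoff = (Get-Date).AddHours(-24)

# Delete exported log files older than the 24-hour window so the
# analysis tool doesn't re-ingest stale files over and over.
Get-ChildItem -Path $logDir -Filter '*.log' |
    Where-Object { $_.LastWriteTime -lt $cutoff } |
    Remove-Item
```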
13:35 – Back to the queue!
14:45 – Email someone about a potential issue with a service we depend upon to maintain our SLAs. I may have to find a creative way to validate that this service is working properly so that I can start trusting it again.
15:20 – Back to the error logs issue. My script is working in production. It works when I run it manually in test. When I run it via Task Scheduler, it does nothing. Looks like permissions on a directory. I bang my head against the wall for 25 minutes but it’s only a test server – I can get back to it in the morning.
15:45 – Back to the queue! Maybe I can knock out at least the first pass of this report that was requested yesterday.
16:30 – Time to go home, got a couple chores & errands to take care of.
20:15 – Settle down on the porch to sift through some personal email. Turns out I’m first on standby for the inaugural PowerHour so I better start rehearsing my talk. Start writing this blog post.
21:57 – PowerHour update. I’m definitely talking on August 21st.
23:31 – Enough is enough. Stop fretting over this post and just schedule it already!
This was a fun exercise and I thank Steve for proposing it. It made me more conscious of what I do throughout the day, without becoming obsessive over time-tracking. I notice that I didn’t get to do much, if any, “strategic” work (larger projects, researching system improvements, building automation to simplify future work) and instead did a bunch of tactical, reactive work. My next installment will probably be August 14th, as that’s our bi-weekly release day and is always a change from the usual routine. Hopefully, in the course of doing these periodically, I’ll learn more about myself and how I’m spending my time at the office, finding ways to become more productive.
Earlier this week, the PowerHour was announced. What is it? It’s kind of like a virtual user group. One hour, 6(ish) lightning demos (10 minutes or less), centered on PowerShell. All community-sourced and driven – anyone can submit a proposal for a demo and if accepted, you’ll be slotted into an available spot.
They’ve already set up a YouTube Channel so you can either watch live or catch up later on, and the whole deal is being organized and managed through GitHub. Got something you want to show off? Log an issue using the template!
It’ll be fun for speakers and attendees alike! You can even use your demo(s) for user group meetings or SQL Saturdays – anywhere lightning talk/demo spots are available. Several SQL Server community folks have tossed proposals in and with so many DBAs getting hooked on PowerShell, it’s a great way for these two communities to come together.