The Tech & Gear of PASS Summit 2017

Continuing my series of posts about my PASS Summit 2017 experience. This one covers the gadgets and gear I brought and the software I used, the gadgets I saw around the convention center, and a little about the hardware and software that was demoed.

Personal

Gadgets

I only brought three gadgets, plus their support items:

  • iPhone 8
  • iPad Air 2
  • Apple Watch Series 3
  • 4-port Anker wall charger
  • Anker 15,000 mAh battery pack
  • 2x Lightning cables (for the iPhone & iPad), 1x Micro-USB cable (to charge the battery pack), 1x Apple Watch charging cable

For the amount I used the iPad, I wish I had left it home; I only used it to watch a couple episodes of Stranger Things on the plane. The iPhone astounded me with its battery life. After charging overnight, it still had 30% left at 4:30 PM, even with heavy usage. Even better, it charged off the Anker battery pack fast – I was back up to 90% or better in an hour or less, much faster than I’ve experienced with other devices. This let me top up the battery during the final session/event each afternoon and roam the city for the evening, comfortable that I had enough juice to last until I returned to the hotel.

Software

Throughout the week, I used Day One to jot down important things – people I met, conversations I had, thoughts that came to mind, photos that I didn’t want to lose to the depths of my photo library, and so on. I could have used paper and pen, but these were things I didn’t want to lose to my terrible handwriting. The other benefit of Day One is that it records metadata about each entry – location, the current weather, how many steps I’d logged to that point in the day, even tags for categorization. Plus, it’s secured by Touch ID. All told, I recorded 38 notes from the time I got to the airport on Monday to the time I left Seattle on Friday (although the first one, in which I mused about the TSA, is not fit for publication).

Because I’m skeptical of free open WiFi, especially at such a large gathering, I bought a one-week plan for Encrypt.me for protection.

Slack was used to drive Q&A in several sessions and pre-cons throughout the week – Brent Ozar & Erik Darling used it for their pre-con, the dbatools crew used it for theirs, and it was used for the PowerShell panel discussion as well. There was general chatter on Slack too, but I think a lot more was going on on Twitter.

I set up an IFTTT recipe to capture #PASSSummit tweets to a Google Drive spreadsheet, and it collected more than 10,000 tweets during the week; someday I’ll go back through them to see what I missed (I set one up for #SQLFamily too, but haven’t reviewed that one yet) and make the full dataset available as a download. Twitter seems to be better and more manageable for notifications than Slack.
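
When that day comes, a few lines of PowerShell should be enough to slice through the export. Here’s a minimal sketch, assuming the sheet has been exported to CSV; the file name and column names (CreatedAt, UserName, Text, LinkToTweet) are guesses at what the IFTTT recipe writes, not verified.

# Hypothetical sketch: filter a CSV export of the captured tweets.
# File name and column names are assumptions about the IFTTT output.
$tweets = Import-Csv -Path .\PASSSummit2017-tweets.csv

# Keep the tweets mentioning dbatools (for example), oldest first
$tweets |
    Where-Object { $_.Text -match 'dbatools' } |
    Sort-Object CreatedAt |
    Select-Object CreatedAt, UserName, Text, LinkToTweet |
    Format-Table -AutoSize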

Late on Thursday, I spotted a tweet recommending Microsoft’s Office Lens app, but I failed to note who wrote it (I had to search for it just now).

Intrigued, I downloaded the app and gave it a test run in a couple of sessions. My response:

If you ever find yourself needing to take a photo of a whiteboard, projector screen, or document, get this app. Apple may have introduced document scanning in iOS 11, but this is several levels above that, and it has earned a permanent spot on my phone. It automatically straightens/de-skews images and makes them very readable, then OCRs them. It even works for business cards, and it integrates with a number of apps/services already on your phone (OneNote, Photos, Mail, etc.). Here’s an example:

Photo of a slide, right off the camera.
The same slide, after being processed by Lens

The one place Lens falls short (in my experience thus far) is with color images, at least in Whiteboard mode. But if the content is text and line art, it’s quite useful.

Other

Despite my terrible handwriting, I still like taking notes at events like Summit (or even in meetings at the office) with pen and paper, as I find that writing helps cement the ideas in my mind. My weapons of choice are the Uniball Jetstream 2 pen (it seems they’re no longer producing this one, or maybe I misremembered the model; the Jetstream RT is hopefully similar) and the Staples Sustainable Earth 9 1/2″ x 6″ spiral-bound notebook. The notebook has a couple of pockets for stashing stuff, and the covers are rigid enough to protect the pages and let me write without putting the notebook on a table.

My Eddie Bauer sling backpack got over-stuffed in a hurry. Too much swag plus my water bottle and other daily carry stuff. I need to find a replacement for it but don’t want to give up the convenience/comfort of the single-shoulder sling style. On the bright side, its obnoxious orange color makes me easy to spot from across the convention center.

Around the convention center

I didn’t see a lot of people walking around with iPads or Android tablets. Maybe when the iPad Pro & Apple Pencil become more widespread we’ll see people taking notes on them instead of paper. I did see a number of Microsoft Surface computers amongst attendees, and a few laptops. Lugging a full laptop around all week sounds like a drag (not to mention the battery anxiety) but if I had a well-spec’d Surface and large enough backpack, I might consider taking it.

The WSCC WiFi seemed shaky on Tuesday, but settled down and worked well for the remainder of the week. This seems to be the pattern at Summit, in my experience.

One thing was common to almost every session I attended, as well as the Tuesday meetings: the projectors blinked on and off for no apparent reason. It wasn’t any one presenter’s computer, nor was it any one room. It was bizarre, but after a while I think we all got used to it.

New stuff demoed

In Wednesday’s keynote, Microsoft ran several Power BI (and Power BI-adjacent) demos, but I didn’t find them particularly captivating – they were very shiny, very brief, and didn’t get into the technical work that made them possible. The HPE ProLiant DL380 Gen10 was shown off, boasting high performance thanks to persistent memory. This is a technical audience – give us some more depth, please.

The item that I found most interesting spent about five seconds on screen – a desktop app that looked like someone had stuffed SQL Server Management Studio into Visual Studio Code, followed by a quick slide revealing the name SQL Server Operations Studio, along with a note that it’s a cross-platform GUI for managing SQL Server. Ever since SQL Server for Linux/macOS was announced, I’ve wanted this, and they skimmed over it in five seconds! Apparently there was a demo session at the Microsoft booth in the Exhibitor Hall later, but it was only advertised via Twitter; I didn’t hear about it until Thursday.


dbatools at PASS Summit 2017

I registered for Summit about a month before getting actively involved in the dbatools project, so when I saw the team was running a pre-con and I was going to meet them, I was pretty excited. It was amazing getting to meet and hang out with Chrissy, Rob, CK, Shane, Jess, John, Shawn, Aaron, Ben, Kiril, Shane, and Drew (sorry if I forgot anyone!), even if it was only for a moment.

But I’ll have another post about the people of Summit. This one’s about dbatools being talked about all over Summit and my experience with that as a member of the team. I’m certain there’s a heavy amount of confirmation bias here, but dbatools seems to have caught fire in the SQL Server community. And with good reason!

I was able to hand out about 300 of the dbatools fan ribbons I brought with me; half went to pre-con attendees, and the rest were handed out at random on the conference center floor. While I sat at the PowerShell table at the Birds of a Feather lunches, people would join us and say “hey, I’ve heard about this dbatools thing but haven’t had a chance to learn it yet.” Others would see my ribbon and ask for one, as they’d heard about the project or even used it themselves.

Rob Sewell talked about it at the SentryOne booth. I heard on Twitter and around the conference center that dbatools was getting mentioned in a number of speakers’ sessions, even ones that didn’t advertise it in their abstracts. There was a panel discussion about PowerShell in general, spearheaded by key dbatools team members, and of course dbatools came up there too. But the star of that session was Ken Van Hyning, aka SQL Tools Guy (t), talking about the roots and evolution of many of the tools we use and where he sees them going. He also told us how we can influence the direction of the current tools and make pitches for new ones. Key takeaways:

  • Cross-platform, open-source where possible seems to be the way of the future
  • There’s a lot of work to be done to migrate the infrastructure and tooling around the tools to get the existing ones there (I think this is why we’re seeing new tooling come out instead of direct ports)
  • The squeaky wheel gets the love, so make your voice heard on Microsoft Connect and Twitter!
We even managed to get a group photo with the dbatools team members who were in the building!

After all the “I can’t believe this is happening!” moments through the week, the final session on Friday was the icing on the cake. I was in Carlos L Chacon’s session Measuring Performance Through Baselines and dbatools popped up on one of his slides.

dbatools on Carlos’s slide

Later, Carlos demonstrated a couple of functions, Get-DbaAgentAlert and Get-DbaUptime. The latter sounded familiar, so I jumped on GitHub and checked the history to confirm. Yep, it’s one of the functions I’d done some (non-comment-based-help) work on. Which means that code I wrote was executed in a PASS Summit presentation! Yes, it’s a small thing, and I’m probably the only person who knew it was happening, but it happened. Which is pretty awesome.
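
For anyone who hasn’t seen those functions, here’s roughly what calling them looks like – a minimal sketch with a placeholder instance name, using current parameter names (which may differ slightly from what Carlos showed).

# Sketch only: SQL01 is a placeholder instance name.
Import-Module dbatools

# List the SQL Agent alerts defined on the instance
Get-DbaAgentAlert -SqlInstance SQL01

# Report how long the instance (and the host) have been up
Get-DbaUptime -SqlInstance SQL01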

Sleepless for Seattle

PASS Summit 2017 is only a week away and to say I’m excited about it would be an understatement. This will be my third trip to the epic gathering of SQL Server and Microsoft data platform professionals and each time, it gets better and better.

Not only is this a time for learning and networking, it’s a giant #sqlfamily reunion. The list of people I’m excited to see is long, both people I’ve known for a while and new friends I’ve only spoken with online.

How to find me:

As a “Summit Buddy” this year, I’ll be helping four Summit first-timers navigate the week. We’ve already been in contact via email and we’ll be meeting for the first time at the First-Timer Orientation & Speed Networking event late Tuesday afternoon. We’ll check in a few times through the week, probably over breakfast or lunch and hopefully see each other in the Community Zone and sessions as well. I’m hopeful that they’ll enjoy Summit as much as I do.

I’m still working out my session schedule. So many great sessions to choose from! My pre-conference and after-hours schedules are shaping up nicely though. For the first time ever, I’m attending as a User Group co-leader and SQL Saturday Organizer, so I’ll be in meetings for those on Tuesday.

Events to find me at outside the normal Summit hours:

  • Monday, 7:00 PM – Networking dinner
  • Tuesday, 4:45 PM – 6:00 PM – First-Timer Orientation & Speed Networking
  • Tuesday, 6:00 PM – 7:30 PM – Welcome Reception
  • Tuesday, 8:00 PM – dbatools team gathering
  • Wednesday, 4:30 PM – 7:00 PM – SQL Trail Mix
  • Thursday, 7:00 PM – 10:30 PM – Games Night
  • One Summit tradition I’m undecided about right now is the SQL Run. It’s no longer an official event, but people still do it. I’ve got a sore leg and if I can’t get it fixed, I’ll pass on the running. Seattle is a nice place to run, especially by the waterfront – but it’s hilly.

As with every Summit, the schedule is jam-packed and it’s going to be exhausting. I can’t wait.

My First Migration with dbatools

I’ve been a proponent of dbatools for close to a year now and even contributed to the project, but surprisingly haven’t been a heavy user of it. Mostly due to a lack of opportunity. I’m aware of many of the functions by virtue of working on the built-in documentation and following the project and presentations about it.

So when the need arose to move a development/test instance of SQL Server from a VM onto a physical server, I knew exactly what I wanted to do. I was warned that the contents of this instance had been moved once before and it resulted in over a week of work and a bunch of trouble. I can’t speculate on why this was as I wasn’t there to see it, but I wasn’t going to let that happen on my watch. So, with equal parts hubris and stubbornness (and a dash of naïveté), I dove in. We have the technology. We will migrate this thing.

The advertising for Start-DbaMigration makes it look so easy. Source, destination, your method of moving the data, and you’re done. Right? Well, sure – in a small, controlled sandbox. This one was neither. About 150 databases. Two dozen Agent jobs. User account cleanup. Different drive letters and sizes. And when it was all over, the server name, instance name, and IP of the new box had to match the old one so that we didn’t disrupt production or the developers.
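
For context, the basic shape of the command is something like the sketch below – a hedged example, not the script I actually ran, and the parameter controlling the backup share has gone by different names across dbatools versions (-NetworkShare in older releases, -SharedPath in newer ones).

# Minimal sketch of a backup/restore-based migration; OLDSQL, NEWSQL, and the share are placeholders.
Import-Module dbatools

Start-DbaMigration -Source OLDSQL -Destination NEWSQL -BackupRestore -NetworkShare '\\fileserver\SQLMigration'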

Of course we’re going to rehearse this. But with the destination being a physical machine, I didn’t have the luxury of rolling back a snapshot each time, or restarting from a golden image. And I couldn’t do everything because it wasn’t an isolated environment – I couldn’t test all the Agent jobs (don’t want emails going out in error) and couldn’t reconfigure the IP or server name. Which meant that my script had to clean up any artifacts from previous runs before doing the migration. Each time.

I also wanted to bring the new instance up in a controlled fashion, as opposed to just moving everything and letting it go, so that I could check things out before letting them break. On top of that, I had to work in checkpoints so the network/server admin could do his pieces. This meant that after the migration, everything on the old server had to be stopped and the Agent jobs on the new one disabled (but with a record of what was enabled/disabled on the source, so I could replicate it), as sketched below.
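
Here’s a rough sketch of how that record-and-disable step could look, leaning on the SMO job objects that Get-DbaAgentJob returns; the instance name and CSV path are placeholders, and this illustrates the approach rather than reproducing my actual script.

# Sketch: snapshot which Agent jobs are enabled, then disable them all on the new instance.
Import-Module dbatools

$jobs = Get-DbaAgentJob -SqlInstance NEWSQL

# Record the original state so it can be replicated later
$jobs | Select-Object Name, IsEnabled |
    Export-Csv -Path .\NEWSQL-job-state.csv -NoTypeInformation

# Disable everything via the underlying SMO job objects
foreach ($job in ($jobs | Where-Object IsEnabled)) {
    $job.IsEnabled = $false
    $job.Alter()
}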

I rehearsed what I could about a half-dozen times. Each time through took about 4 hours (having multiple tests helps build confidence in your elapsed time estimates), primarily because of the amount of data that had to be moved (about 700GB). Each time, I found another tweak needed. Maybe not entirely necessary, but I was out to prove something. I didn’t want this migration to be “good enough, a little rough around the edges” – this had to work right, right away.

This is truly standing on the shoulders of giants. Without the thousands of person-hours put in by Chrissy and the rest of the team, a short script that does this mountain of work simply would not be possible. It’s not just having the huge amount of code to build on – it’s the suite of tests they run with every pull request that tells me I can trust it’ll work right.

Looking back on it, there’s definitely a few things I’d change in this script, and more dbatools functions I could have used. But after successfully testing a couple times, I didn’t want to break what was working.

When the migration was complete, I did a brief checkout and then gave my server admin the green light. He flipped names & IPs around, and then I ran Repair-DbaServerName, which I had discovered only a few days earlier. I had expected to do that step manually, but since I’d never done it before, I trusted the dbatools crew and their test suite more than I trusted myself. When that was complete, I had a grand total of three issues (that I could find):

  • Database owners weren’t set appropriately. I was able to resolve this via Set-DbaDatabaseOwner easily enough (see the sketch after this list).
  • Outgoing dbmail didn’t work. Turns out the SMTP relay on the new server wasn’t started. Easy fix.
  • I had a Linked Server on my production instance which was unable to communicate with the new test server. This took the longest to figure out. We checked everything – SQL Server Configuration Manager, the network itself – and then my colleague suggested testing something outside SQL Server: mapping a drive from production to test. That test succeeded, which pointed us at the SQL Server connection specifically. The root cause: I had two firewall rules on the new server that blocked connections from all but servers on the local subnet, and the production server isn’t on the local subnet.
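
For reference, the two dbatools calls mentioned above look roughly like this – a sketch with placeholder names, and note that newer releases know these commands as Repair-DbaInstanceName and Set-DbaDbOwner.

# Sketch only: NEWSQL and sa are placeholders.
# Fix up the server name metadata after the rename/IP swap
Repair-DbaServerName -SqlInstance NEWSQL

# Set database owners back to a known login
Set-DbaDatabaseOwner -SqlInstance NEWSQL -TargetLogin sa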

None of these were total showstoppers. I had workarounds (or quick solutions) for them, and as this is a test instance, we could live with minor inconvenience for a day or two. One or two final tests and I was satisfied that everything was working properly, so I went ahead and enabled my Agent jobs. Some of them still have incorrect owners, but I can fix that later – they were wrong on the source instance too.

I consider this migration a huge success. We had 95% functionality by 9 AM. By 3 PM, the last real problems were resolved (and only that late because a series of meetings kept me away from my desk). Most importantly, it was achieved with minimal downtime for the development and QA teams. I’m now one week post-migration and everything is still running smoothly on the new instance.

dbatools Badge Ribbons at PASS Summit

One of the (many) fun things to do at PASS Summit is to check out the ribbons people have attached to their badges. Some are witty or goofy, others informational, others technical, and still more that let you express how you identify with a community within the community.

To celebrate dbatools and the awesome team & community around it, two limited-edition ribbons will be distributed by me and a handful of other folks all week at Summit. Check ’em out:

Be on the lookout for these badges and talk to us about dbatools! What you like, what you’d like to see changed, new feature ideas, questions about how to use functions, anything at all. Even if you’ve never used dbatools, we love talking about it and showing people the awesome things they can do with it so please, introduce yourself!

T-SQL Tuesday #94 – Automating Configuration Comparison

This month’s T-SQL Tuesday is hosted by Rob Sewell and he’s posed the following question:

What are you going to automate today with PowerShell?

I’m cheating a little bit in that this is something I did a couple of weeks ago, but it was immensely helpful. I’d been working on building out a new instance to migrate our test databases onto, but the developers had an urgent need to do some testing in isolation, so they “borrowed” that new instance. We had an additional requirement, though – the configuration needed to match production as closely as possible, more closely than our current test instance does. Of course, I reached for PowerShell and dbatools.

I started with Get-DbaSpConfigure to retrieve the settings available from sp_configure, as these were the most important for my comparison. I ran this against production as well as each of my test instances and saved the results of each to a variable. Because accessing my production instance requires either jumping through hoops or using SQL Authentication, I passed -SqlCredential (Get-Credential -Message "Prod" -UserName MySQLLogin) so I’d be prompted for that password instead of using Windows Authentication.
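
Put together, the collection step looked something like this sketch (the instance names are placeholders):

# Capture sp_configure settings from production and both test instances.
Import-Module dbatools

$prodConfig  = Get-DbaSpConfigure -SqlInstance PRODSQL -SqlCredential (Get-Credential -Message "Prod" -UserName MySQLLogin)
$test1Config = Get-DbaSpConfigure -SqlInstance TESTSQL1
$test2Config = Get-DbaSpConfigure -SqlInstance TESTSQL2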

With my configurations saved for reference, I can look at one of the objects returned to see which properties need to be compared:

ServerName            : TEST1
ConfigName            : AdHocDistributedQueriesEnabled
DisplayName           : Ad Hoc Distributed Queries
Description           : Enable or disable Ad Hoc Distributed Queries
IsAdvanced            : True
IsDynamic             : True
MinValue              : 0
MaxValue              : 1
ConfiguredValue       : 0
RunningValue          : 0
DefaultValue          : 0
IsRunningDefaultValue : True

Looks like I want to be checking ConfigName and RunningValue. ConfigName is the same name you’d pass to sp_configure. PowerShell comes with a handy cmdlet, Compare-Object, which (you guessed it!) compares two objects and reports the differences.

My first, naive attempt – comparing the two collections directly – turned up nothing. Hmm… that’s no good. I know there are differences between test and production; for one, production has about 24 times the amount of RAM test has. I took to the SQL Community Slack for help and was reminded that Compare-Object doesn’t do a “deep” comparison on PSCustomObjects by default, so you have to specify which property (or properties) you want compared – in this case, RunningValue. So, passing both ConfigName and RunningValue into Compare-Object (the former so I’d know what was being compared), then sorting the output, I was able to readily see the differences.
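
The comparison ends up looking something like this, using the variables captured earlier:

# Compare only the properties that matter; ConfigName is included so the output is readable.
Compare-Object -ReferenceObject $prodConfig -DifferenceObject $test1Config -Property ConfigName, RunningValue |
    Sort-Object ConfigName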

In the output, the left-pointing arrow (<=) in the SideIndicator column marks the value from the reference object, and the right-pointing arrow (=>) marks the value from the difference object (which instance is the “reference” isn’t terribly important here, as long as you remember which is which). So MaxDOP and MaxServerMemory are both higher in production – which is expected.

If we really want to get crazy, we can even make this a one-liner. But I don’t recommend it.
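
Something along these lines – a reconstruction rather than the original, with placeholder instance names:

# Not recommended, as noted above - everything inline in a single pipeline.
Compare-Object -ReferenceObject (Get-DbaSpConfigure -SqlInstance PRODSQL -SqlCredential (Get-Credential)) -DifferenceObject (Get-DbaSpConfigure -SqlInstance TESTSQL1) -Property ConfigName, RunningValue | Sort-Object ConfigName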

Running this against my second test instance as well let me quickly deliver the news to the developers: the instances were configured as closely as possible, with any differences limited to the hardware/environments they were running in, which is not something we were in a position to address.

Stashing Data for dbatools

While working on an enhancement to dbatools, I needed to stash a local copy of a file downloaded from the internet somewhere I could reasonably expect it to be safe from accidental deletion.

  • The user’s home directory? Maybe, but it’ll be clutter; the user might see it appear and fear they’ve got malware, and it would likely be deleted in a “cleanup” effort.

  • Create my own directory somewhere on the file system? See above.

  • A temp directory fetched from $env:temp, $env:tmp, or [System.IO.Path]::GetTempPath()? It wouldn’t be hidden, but by definition it’s prone to getting purged – not great for potential medium-term storage.

  • Let the user specify a location at runtime? I don’t know about you, but I’d forget it about five minutes later, and I want the parameters for this to be simple.

No good solutions there. Fortunately, the dbatools team has it covered. The module has a system for storing its own configuration settings and data/files, with a few settings pre-set for you. You can see the full list with Get-DbaConfig.

In this case, the setting I’m looking for is called Path.DbatoolsData. Accessing it is easy. Get-DbaConfigValue -Name "Path.DbatoolsData" gets me the value of that setting – C:\Users\andy\AppData\Roaming\PowerShell\dbatools in this case.

Combine this with Join-Path and I’ve got quick access to that file I tucked away for later. Join-Path -Path (Get-DbaConfigValue -Name "Path.DbatoolsData") -ChildPath "MyFile.zip" returns C:\Users\andy\AppData\Roaming\PowerShell\dbatools\MyFile.zip.

You can create your own configuration settings & values via Set-DbaConfig, but be warned: these do not persist across sessions. If you want a value to persist, you’ll need to write it out to a file, then read it back in from that file in the new session.
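
A minimal sketch of one way to do that persistence, with a made-up setting name; the parameter names for Set-DbaConfig are from memory (newer releases call it Set-DbatoolsConfig), so treat this as illustrative.

# Illustrative only: 'myproject.stashfile' is a hypothetical setting name.
Set-DbaConfig -Name 'myproject.stashfile' -Value 'MyFile.zip'

# Write the value out before the session ends...
$exportPath = Join-Path (Get-DbaConfigValue -Name 'Path.DbatoolsData') 'myproject-config.xml'
Get-DbaConfigValue -Name 'myproject.stashfile' | Export-Clixml -Path $exportPath

# ...and pull it back in from a new session
Set-DbaConfig -Name 'myproject.stashfile' -Value (Import-Clixml -Path $exportPath)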

Getting Started with GitHub for dbatools

I’ve recently started contributing to the dbatools project, and it’s all done through GitHub. Prior to this, I’d never used git and GitHub for anything more than offsite storage of my own small repositories (I’ve used Subversion for over a decade), and I never totally understood how it worked in a large collaborative project until this came along.

I’m putting this together here for my own reference, and in the hope of writing it up in a way that helps things “click” for people who need that extra nudge to get into “aha!” territory. A number of the examples I’ve seen elsewhere mix the command-line and GUI clients, but the more I use git GUIs, the less I like them for the basic workflow. You only need to know a handful of commands to be productive, and for that, the command line beats the GUI in my opinion.

So, here we go. My GitHub workflow for working on dbatools, with as much command-line work as possible. This walk-through assumes basic familiarity with source control concepts.

  1. If you don’t already have one, get yourself a GitHub account. While you’re at it, please set up two-factor authentication.
  2. Install a git client. If you install GitHub Desktop, it’ll come with the command-line client. I think GitKraken does as well. If you use macOS or Linux, you should already have the command-line client.
  3. Go to the dbatools main repository and click the Fork button on the upper-right corner.
  4. Now it’s time to get a copy of the repository onto your computer. Hop over to your profile on GitHub and get into your fork of the dbatools repository. Click the Clone button and copy the URL.

    Now open up your command line interface of choice and point it at the directory where your local copy is going to reside and run the following (using the URL you just copied):
    git clone https://github.com/YOURNAME/dbatools.git
    This will create a directory named dbatools in the current directory and pull the entire repository down into it.

    Congratulations! You’re ready to start coding. Almost.
  5. In order to keep up with the very rapid pace of the main project, you’re going to need a way to keep pulling in the changes that happen upstream from your fork. When I started working in GitHub, this was one of the most confusing things to me, so here’s the secret: git remote. I found this page that explains in a generic way what needs to be done. In English, you configure your local copy of the repository so that it knows about the next repository beyond what you cloned from, so that you can pull updates from there. For dbatools, run the following commands:
    git remote add upstream https://github.com/sqlcollaborative/dbatools.git
    git fetch origin
    git fetch upstream
    git merge upstream/master
    git push origin
    What’s this doing?

    • Set up an alias (a remote) in your local repository called upstream that points at the main dbatools repository.
    • Fetch all changes from origin (your fork on GitHub)
    • Fetch all changes from upstream
    • Merge all changes from the master branch of upstream into your local repository
    • Push everything back up to your fork on GitHub (but at this point, there’s nothing to push)
      Keep those handy; you’ll use them a lot (see the “Maintaining your repository/fork” section below). Now you can check what remotes you have set up for your repository with git remote -v and verify that you have an upstream that points to the main repository.
  6. Git projects (including dbatools) make very heavy use of branches and merging. In this context, branches are a lightweight way of keeping your changes separate from one another. You can code against one branch, commit your changes, then switch to another branch to work on another set of changes altogether without disrupting the first set. In the dbatools main repository, the master branch is considered the release version. All development work is done using the development branch as a starting point. So, it makes sense to set up your fork and local repository the same way. We’ll create our own development branch with git branch development.
  7. Creating a branch doesn’t mean that you’re automatically working in it. Switching to a branch is done with git checkout (if you’re accustomed to Subversion, this new usage of checkout may seem odd). Running git checkout development switches into the new branch. Ready to code? Just about.
  8. You’re working in development now but it’s strongly recommended that you create a new branch for each new logical set of changes as it’ll make issuing Pull Requests easier and more manageable (PRs are merged into the main development branch). You want to create this branch from development, so now that you’re in that branch, you’re going to branch again. This time we’ll shortcut with git checkout -b Fix-Updates. This both creates the branch and checks it out with a single command.
  9. OK, now you can get your code on. The dbatools maintainers prefer that each change set touch only one file, or a small number of related files, to make merging into the main project easier. What are you waiting for? Get in there and code!
  10. You’ve got some great code written and you’re ready to commit. First, let’s look at what’s changed with git status

    git shows that one file has been changed, but it can’t be committed yet. For that, you first have to add it (another difference from Subversion: the file is tracked, but you have to add or “stage” it for this commit), and git even tells you how – git add functions/Update-dbatools.ps1. Once that file is added, re-check your status and you’ll see that the file is taken care of.
  11. Now that everything is staged and ready to go, it’s time to commit. Do not be afraid to make lots of small commits to your repository as you work so that you can fall back to an earlier version if something goes wrong. Make sure you’re including a useful message along with your commit so that people (yourself included) know what’s going on six months from now. You commit with (conveniently enough), git commit.
    git commit -m "This is my awesome commit message"
  12. Great! You’ve committed your changes to the local repository; now how do you get them back up to GitHub? By pushing them to origin (your GitHub fork). Run git push and you’ll be informed that you can’t do that quite yet.

    Because origin doesn’t know about this branch yet, git will tell you exactly what to run instead – typically git push --set-upstream origin Fix-Updates. Copy & paste the command it suggests, and your changes will be pushed up to GitHub.

    Note that this second attempt is only needed because origin was unaware of the branch. Subsequent pushes to this branch can be done with just git push.
  13. We’re almost there. Jump back to your web browser and refresh your repository. You’ll see that your new branch is front and center. To get your changes in front of the dbatools maintainers, you need to issue a Pull Request via that green button on the far right.

    By default, the master branch of the upstream repository is used as the basis for comparison; you need to change this by selecting development from the drop-down.

    Then fill out the form as completely as possible and click Create Pull Request.

Congratulations! You’ve just submitted your first change to the dbatools project for review. You’ll probably get some comments on your first PR. And your tenth. And your hundredth. And that’s okay! They’re constructive comments meant to help you and make your code better – it’s not an indictment of your programming skills or DBA knowledge or experience. Your contribution is definitely appreciated. The dbatools team wants to put out the best code possible and collaboration is the best way to do that. Everyone is working toward the same goal and it’s a learning experience through and through.

Anyway… there may be some conversation on your PR about suggested changes – things to remove, things to add, style, etc. Please don’t give up and walk away, but don’t just blindly do whatever is suggested either. If you have good reasons behind your decisions, present them. The team is there to guide you and shepherd the project, keeping the quality high, so it may take a couple of resubmissions before your code is ready for prime time. What’s really cool with GitHub is that if you make further changes to your branch, the Pull Request is updated automatically when you push that updated branch back up (this is why it’s important to create a new branch for each change that will become a PR).

And then, when that’s finished and Chrissy accepts your PR and you get that “Merged” email with an emoji (I think Chrissy always puts an emoji in them), you can sit back and smile.

Workflow Snapshot

That’s a lot of steps. Here’s the short-short version:

  1. Fork sqlcollaborative/dbatools
  2. Clone to your computer
  3. Set upstream
  4. Create a local development branch
  5. Merge upstream/development into local development
  6. Create & check out feature branch Fix-Updates
  7. Code
  8. Commit
  9. Push
  10. Issue Pull Request

Steps 1-4 you’ll only do once; everything else is the work cycle that you’ll get accustomed to quickly.
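
In command form, that cycle looks roughly like this (using the example file from earlier; your own branch and file names will differ):

git clone https://github.com/YOURNAME/dbatools.git
cd dbatools
git remote add upstream https://github.com/sqlcollaborative/dbatools.git
git branch development
git checkout development
git fetch upstream
git merge upstream/development
git checkout -b Fix-Updates
# ...write some code...
git add functions/Update-dbatools.ps1
git commit -m "A useful commit message"
git push --set-upstream origin Fix-Updates
# ...then open the Pull Request on GitHub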

Where am I working?

If you’re working on multiple changes over time, or even if you’ve worked on a series of changes (completing one before moving on to the next), you’ll find yourself with a number of local branches and it’s easy to lose track of where you are. git branch will tell you what branches exist, and highlight in green the one that you’re working in.

Remember that you always want to check out development before creating a new branch.

Maintaining your repository/fork

As you work on dbatools more, you’re going to have to manage your branches and keep up with the Joneses…I mean upstream. The good news is that thanks to the work you did earlier in setting up an upstream repository, the latter is pretty easy.

Keeping up with development

To keep up with upstream‘s development branch, switch into your development branch, then pull things down into it and merge. You should do this pretty often – at a minimum, any time you start a new branch (remember, you’re branching off development every time you start new work, so you want the freshest code possible).

git fetch upstream
git merge upstream/development

This will pull the latest changes from upstream into your development branch. Then you’ll want to push that back up to GitHub, the same way you pushed your Fix-Updates branch.

You should also merge in from upstream/master occasionally. Switch to your master branch with git checkout master and do the same as above:

git fetch upstream
git merge upstream/master

You’ll also want to maintain your origin/master branch the same way; just use origin instead of upstream in the example above.
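
Putting the whole refresh routine in one place, it looks something like this sketch:

# Refresh development from the main project, then push it back to your fork
git checkout development
git fetch upstream
git merge upstream/development
git push origin development

# Do the same for master
git checkout master
git merge upstream/master
git push origin master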

Conclusion

I hope that this has been easy to follow and gets you started down the road of contributing to dbatools or another Open Source project on GitHub. git looks intimidating from the sheer number of commands it has and the crazy things you can do with it, but for a normal, simple workflow there’s only a handful of commands you need and in many cases if you get a command slightly wrong or miss a step, it’ll help you out. The most important thing is to read the contribution guidelines before jumping into the deep end, and if you have any questions please don’t hesitate to ask in the #dbatools channel on the SQL Community Slack.

T-SQL Tuesday #92: Lessons Learned the Hard Way

This month’s T-SQL Tuesday is hosted by Raul Gonzalez and he’s asked everyone to share things we might be a bit embarrassed about:

For this month, I want you peers to write about those important lessons that you learned the hard way, for instance something you did and put your systems down or maybe something you didn’t do and took your systems down. It can be also a bad decision you or someone else took back in the day and you’re still paying for it…

  • In the stress/performance testing portion of an upgrade of a critical system, we were short on disk space. So, rather than standing up a separate set of VMs for the performance testing (we needed to be able to get back to functional testing quickly), we decided to just take VM snapshots of all the servers. Testing was delayed a day or two – but we didn’t switch off the snapshots. Then we started testing, and performance was terrific… for about five minutes. Then everything came to a screeching halt. Panicked, we thought we were going to need a pile of new hardware until the VMware admin realized that our disks were getting hammered and we still had those active snapshots.
    Lesson learned: If you take VM-level snapshots of your database server and let them “soak” for an extended period, you’re gonna have a bad time. Unless you need to capture the host OS or instance configuration itself, use a database snapshot instead of a VM-level snapshot.

  • A couple of times, I’ve had under-performing VMs running SQL Server. As I hadn’t been involved in the configuration, I thought everything had been provisioned properly. Turns out…not so much. Memory reservations, storage configuration, power profiles, all set up for suboptimal performance.
    Lesson learned: Ask your VMware admin whether they’ve perused the best practices guide, and review things yourself before going down the rabbit hole of SQL Server configuration & query tuning. If the underlying systems aren’t configured well, you’ll spin your wheels for a long time.

  • In doing a configuration review of a rather large (production) instance, I noted that at least one configuration option was still set to the default value – Cost Threshold for Parallelism was stuck at 5. Running sp_BlitzCache, I found that I had quite a few simple queries going parallel and huge CXPACKET waits. CXPACKET isn’t bad per se, but if you’ve got a low-cost query that’s going parallel and waiting on threads where it could be running faster overall single-threaded (verified this was the case for several of the top offenders), increasing the cost threshold can help. I did some checking, verified that it was a configuration change I could make on the fly, and set the value to 50.
    And then everything. Slowed. Down.
    When I made this configuration change on the test instance, it wasn’t much of a problem. But that was a much smaller instance with much less traffic. What I failed to fully comprehend was the impact of this operation: changing this setting (and a number of others I wasn’t aware of) blows out the plan cache. In the case of this instance, that was about 26 GB of plan cache. Not only was performance impacted while the plan cache refilled, we also took a hit while all the old plans were being evicted from cache.
    Lesson learned: Even if something seemed OK in test, “low impact” changes can have a much larger impact on production unless test mirrors production in every way. Plan these changes accordingly.

We learn the most from our mistakes. We can learn almost as much from the mistakes of others. Learn from mine.

Spell-checking dbatools with Visual Studio Code

Earlier this week I was working on adding a new feature to Update-DbaTools and, while looking at another cmdlet to check syntax and conventions, I noticed an ugly typo in some of its help. 100% perfect prose isn’t necessary in the comment-based help for PowerShell cmdlets, but seeing misspellings and such kind of bugs me. Fortunately, this is something I can help fix, since the module is on GitHub.

First I needed to find a spell-checker that works with Visual Studio Code to help me spot misspellings. This was slightly trickier than expected, as I use macOS at home and at least one of the first plugins I found was Windows-only. I finally settled on Code Spellchecker.

But as you can see from the marketplace page there, by default this plugin doesn’t know PowerShell. In my user settings file settings.json, I added PowerShell to the cSpell.enabledLanguageIds section so it’s always recognized:

"cSpell.enabledLanguageIds": [
        "c",
        "cpp",
        "csharp",
        "go",
        "javascript",
        "javascriptreact",
        "json",
        "latex",
        "markdown",
        "php",
        "plaintext",
        "powershell",
        "python",
        "text",
        "typescript",
        "typescriptreact",
        "yml",
        "powershell"
    ],

And with that, VSCode was giving me green squiggles under lots of words – both misspelled and not. Code Spellchecker doesn’t understand PowerShell in its default setup; it doesn’t have a dictionary for it. Just to get things started, I added a cSpell.userWords section to my settings.json and the squiggles started disappearing. The list I’m working with so far is posted as a gist on GitHub:
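
For a flavor of what that section looks like (the entries below are illustrative, not the actual list from the gist), it’s just an array of words:

"cSpell.userWords": [
        "dbatools",
        "cmdlet",
        "smo",
        "sqlcollaborative",
        "sqlserver"
    ],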

I’ll keep this updated as I encounter more strings that need to be recognized, whether they’re PowerShell tokens or specific to the dbatools project. In addition to actual PowerShell syntax, I’m dropping in strings that are commonly found throughout the module. Eventually I suppose I should put a proper dictionary file or two together, but this works as a quick & dirty way to get going with spellcheck & language cleanup for the module.