PSPowerHour v1.0 Wrap-Up

The first edition of the PSPowerHour is in the books and it looks like it was a big success. This one was dbatools-heavy but I chalk that up to the dbatools community having lots of free time because we’ve automated so many of our tasks 🙂

Overall Impressions

I signed in about half an hour ahead of the webcast and was the first one there. Shortly thereafter, I was joined by Michael Lombardi (t), then Jess Pomfret (b|t) and Chrissy LeMaire (b|t). After ironing out a few glitches, we got everyone in the right place and kicked off the broadcast. Everything ran very smoothly, especially considering the number of people involved – Michael and Warren F. (b|t) did a terrific job of orchestrating everything.

While watching and listening to Chrissy, Doug, Andrew & Jess give their demos, I ran through my own in my head a couple of times, adding and rearranging a few things as I observed how they were doing theirs. The big dilemma for me was whether to run the camera or share only my screen (I ended up going with the screen share only). Having not rehearsed my demo enough in the weeks leading up to the event, I was still not sure where to dip into more detail and where to dial things back, and seeing what others were doing helped quite a bit. Having familiar faces & voices ahead of me in the queue put my nerves to rest.

I wasn’t able to watch the sessions after mine in their entirety due to family commitments. Joshua’s Burnt Toast module looks like it’ll be fun to experiment with and add some nice functionality to scripts (I got to see about half of his demo), and I’m really looking forward to catching a replay of Daniel’s demo of PowerShell on the Raspberry Pi – I didn’t realize that it had been ported already!

My Demo

I demoed Invoke-DbaSqlQuery and why one should use it over Invoke-SqlCmd – primarily for protection from SQL injection. Things didn’t go exactly the way I’d practiced; I ran short of time despite feeling like I rushed things and cut back on some of what I had planned to say. The latter was in part because of the lead-ins from Chrissy, Andrew, and Jess. Because they did such a good job introducing dbatools, I was able to skip over it. But I was able to throw in a teaser for Matt Cushing’s (b|t) demo at the next PSPowerHour.
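
For anyone who missed the stream, here’s a minimal sketch of the contrast I was demonstrating. The server, database, and table names are made up, and the -SqlParameters hashtable parameter name is my assumption for the dbatools side, so check Get-Help Invoke-DbaSqlQuery on your version before borrowing this.

# Unsafe: user-supplied input is interpolated straight into the query text
$Name = "Robert'); DROP TABLE Students; --"
Invoke-SqlCmd -ServerInstance MYSERVER -Database DemoDB -Query "SELECT * FROM dbo.Students WHERE Name = '$Name';"

# Safer: pass the value as a parameter so the engine treats it as data, not code
Invoke-DbaSqlQuery -SqlInstance MYSERVER -Database DemoDB -Query "SELECT * FROM dbo.Students WHERE Name = @Name;" -SqlParameters @{ Name = $Name }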

Running the demos inside a VM and screen-sharing just that VM made things easier for me as opposed to flipping between apps. My scripts will be available on GitHub along with the other presenters’ once the pull request is approved.

I achieved my goals:

  1. I did it
  2. I successfully demonstrated a SQL injection problem and explained why it’s so bad
  3. I demonstrated how to make database queries from PowerShell both more reliable and safer
  4. I learned about some new stuff that I desperately want to experiment with.

Next time around, I definitely need to rehearse more and get my timing down better, but overall I’m happy.

Check it out!

Speaking: PowerHour, August 21st 2018

It’s official! I will be speaking at the inaugural PowerHour online lightning demo event on Tuesday, August 21st at 2200 UTC. I’ll be demoing Better, Safer SQL Queries from PowerShell.

If you’re working with SQL Server from PowerShell, whether as a DBA, an analyst, or anyone else running queries, you’ve probably used Invoke-SqlCmd. But depending on how you’re building your queries, this can be error-prone or a huge security exposure! With the help of the dbatools module, I’ll show you how to write and run these queries better and safer – and make them easier to work into your scripts to boot.

I’m excited to be a part of this – it’s been far too long since I’ve done a presentation. Please join us on the YouTube channel/stream next Tuesday!

PowerHour – Online PowerShell Lightning Talks!

Earlier this week, the PowerHour was announced. What is it? It’s kind of like a virtual user group. One hour, 6(ish) lightning demos (10 minutes or less), centered on PowerShell. All community-sourced and driven – anyone can submit a proposal for a demo and if accepted, you’ll be slotted into an available spot.

They’ve already set up a YouTube Channel so you can either watch live or catch up later on, and the whole deal is being organized and managed through GitHub. Got something you want to show off? Log an issue using the template!

I’ve submitted my first (but hopefully not last) proposal. Yep, it involves dbatools because that’s my jam.

It’ll be fun for speakers and attendees alike! You can even use your demo(s) for user group meetings or SQL Saturdays – anywhere lightning talk/demo spots are available. Several SQL Server community folks have tossed proposals in and with so many DBAs getting hooked on PowerShell, it’s a great way for these two communities to come together.

Quick Table Info Export with PowerShell

This week I had a user come to me asking about how fields were defined on a few tables he was using in writing some reports. Long story short, he’s been tasked with writing some new reports and updating existing ones, but he doesn’t have visibility to the database itself so he’s left with the “ok, let’s try this” approach and then reading error messages to debug when things go sideways. Very tedious.

I asked him for a list of the tables he was most interested in, and while he worked on that I set to work on (of course) a quick & dirty PowerShell script to collect the data he needed – field names, types, and whether they’re nullable.

With the help of dbatools and Doug Finke’s ImportExcel module, the resulting script to retrieve the requisite data and export it to an Excel workbook with one sheet per table is very short. Each table is on its own worksheet, with the header row bold and frozen so it stays in view while scrolling.

Import-Module dbatools, ImportExcel;
$DBServer = 'MYSERVER';
$TableList = Get-Content TableList.txt;
$Tables = Get-DbaTable -SqlInstance $DBServer -Database Satellites -Table $TableList;
foreach ($Table in $Tables) {
    $Columns = $Table.Columns | Select-Object Name, DataType, Nullable, @{name='Length';expression={$_.Properties["Length"].Value}};
    $Columns | Export-Excel -Path C:\Users\andy\Documents\TableInfo.xlsx -WorkSheetname $Table.Name -FreezeTopRow -BoldTopRow;
}

Line by line:

  • Fetch the list of tables from a file provided to me
  • Gather a collection of SMO objects for the tables from the server
  • Loop over each table, extracting the relevant data about each
  • Write the table’s information out to a worksheet named for the table in an Excel workbook

The only tricky part was fetching the field lengths for the nvarchar fields. I had to go spelunking into each field’s Properties property to get that. The result:

Export-TableInfo Results

But I can tell this isn’t disco enough for you, so I’m gonna put it right here.
– Justin Hammer, Iron Man 2

You can do this in a one-liner too. It’s not as pretty, but it’s quite doable and the resulting Excel file is identical.

Get-Content TableList.txt | ForEach-Object {Get-DbaTable -SqlInstance MYSERVER -Database Satellites -Table $PSItem | ForEach-Object {$PSItem.Columns | Select-Object Name,DataType,Nullable,@{name='Length';expression={$_.Properties["Length"].Value}} | Export-Excel -Path C:\Users\andy\Documents\TableInfo2.xlsx -WorkSheetname $PSItem.Name -FreezeTopRow -BoldTopRow}}

End to end, I turned this around in about 20 minutes and the requestor’s life got easier in a big hurry. Best of all, when he comes back with another batch of tables for his next set of reports, I can turn the request around in less than five minutes.

Processing SQL Saturday Raffle Tickets with PowerShell

Every year, I spend the Sunday after SQL Saturday Rochester scanning & processing raffle tickets for our wonderful sponsors. Here’s how the system works:

  • Attendees get tickets (one ticket per sponsor) with their name, the sponsor’s name, and a QR code on them
  • Each QR code represents a URI unique to the combination of event, attendee, and sponsor
  • Attendees drop their tickets in a box to enter the sponsor’s raffle prize drawing
  • When the URI from the QR code is accessed, it registers in the SQL Saturday system
  • Organizers run a report for each sponsor that includes the contact info of all attendees who dropped off a raffle ticket, then email the report to the sponsor

It works pretty well, but the hangup is that most QR scanners will open your web browser (or prompt you to open it) to the URL on each scan. For 150+ tickets, this takes a long time. Every year, I lament “oh, how I wish I could just scan these, collect the URLs into a nicely formatted file, and script this whole thing”.

Finally, this year, I found a way to do it with my iPhone, MacBook Pro & PowerShell. Here’s what I did:

  1. Get Beep for iOS.
  2. Scan the tickets. This app is really fast; it may scan before you even realize it. I just stacked them up, pointed the phone at the pile, and as the app beeped (to tell me it had scanned successfully), I tossed the ticket to the side.
  3. When done, tap the file box icon in the upper-right corner
  4. Tap the Share icon
  5. Save the file out to a CSV on iCloud (you can email it if you like, but iCloud is a little easier for me)
  6. On the Mac, open up Terminal and navigate to /Users/YOURNAME/Library/Mobile Documents/com~apple~CloudDocs
  7. Fire up PowerShell (I installed it via Homebrew with brew install powershell and started it by running pwsh).
  8. Run the following one-liner:
    Import-Csv './FILENAME.csv' -Header URI,Type,Date | Select-Object -Unique -Property URI | ForEach-Object { Invoke-WebRequest $PSItem.URI }

This bit of PowerShell:

  • Imports the CSV file, forcing column names of my choosing (the file doesn’t include them)
  • Extracts the unique URIs from the data
  • Loops through all the URIs and invokes a web request to each one of them

It’s the same process I’ve used in the past, just much faster because I’m not pausing after each scan to load a URI in my web browser.

With nearly 300 raffle and attendance tickets scanned, this zipped through all of them in less than 90 seconds. Best of all, I could start it and walk away to do something else. Doing it this way made my SQL Saturday “closeout” process a little less stressful.

My First Migration with dbatools

I’ve been a proponent of dbatools for close to a year now and even contributed to the project, but surprisingly haven’t been a heavy user of it. Mostly due to a lack of opportunity. I’m aware of many of the functions by virtue of working on the built-in documentation and following the project and presentations about it.

So when the need arose to move a development/test instance of SQL Server from a VM onto a physical server, I knew exactly what I wanted to do. I was warned that the contents of this instance had been moved once before and it resulted in over a week of work and a bunch of trouble. I can’t speculate on why this was as I wasn’t there to see it, but I wasn’t going to let that happen on my watch. So, with equal parts hubris and stubbornness (and a dash of naïveté), I dove in. We have the technology. We will migrate this thing.

The advertising for Start-DbaMigration makes it look so easy. Source, destination, your method of moving the data, and you’re done. Right? Well, sure – in a small, controlled sandbox. This one was neither. About 150 databases. Two dozen Agent jobs. User account cleanup. Different drive letters and sizes. And when it was all over, the server name, instance name, and IP of the new box had to match the old one so that we didn’t disrupt production or the developers.

Of course we’re going to rehearse this. But with the destination being a physical machine, I didn’t have the luxury of rolling back a snapshot each time, or restarting from a golden image. And I couldn’t do everything because it wasn’t an isolated environment – I couldn’t test all the Agent jobs (don’t want emails going out in error) and couldn’t reconfigure the IP or server name. Which meant that my script had to clean up any artifacts from previous runs before doing the migration. Each time.

I also wanted to bring the new instance up in a controlled fashion as opposed to just moving everything and letting it go, so that I could check things out before letting them break. I also had to work in checkpoints so the network/server admin could do his pieces. Which meant that after the migration, everything on the old server had to be stopped, and Agent jobs on the new one disabled (but with a record of what was enabled/disabled on the source, so I could replicate it).

I rehearsed what I could about a half-dozen times. Each time through took about 4 hours (having multiple tests helps build confidence in your elapsed time estimates), primarily because of the amount of data that had to be moved (about 700GB). Each time, I found another tweak needed. Maybe not entirely necessary, but I was out to prove something. I didn’t want this migration to be “good enough, a little rough around the edges” – this had to work right, right away.

Import-Module dbatools;
Start-Transcript;
$VerbosePreference = "Continue";
$WhatIfPreference = $false;
$SrcSQLInstance = 'SOURCE\INSTNAME';
$DestSQLInstance = 'DEST\INSTNAME';
# Clean up artifacts from previous rehearsal runs on the destination
Get-DbaAgentJob -SqlInstance $DestSQLInstance | ForEach-Object {Remove-DbaAgentJob -SqlInstance $PSItem.SqlInstance -Job $PSItem};
Get-DbaAgentSchedule -SqlInstance $DestSQLInstance | ForEach-Object {Remove-DbaAgentSchedule -SqlInstance $DestSQLInstance -Schedule $PSItem};
Get-DbaDatabase -SqlInstance $DestSQLInstance | Remove-DbaDatabase;
Copy-DbaLogin -Source $SrcSQLInstance -Destination $DestSQLInstance -ExcludeLogin 'DOMAIN\User1','DOMAIN\User2','User3';
# Record each Agent job's enabled/disabled state on SOURCE so it can be replicated later
Get-DbaAgentJob -SqlInstance $SrcSQLInstance | Select-Object -Property SqlInstance, Name, IsEnabled | Export-Csv -Path "jobstate_$(Get-Date -Format 'yyyyMMddHHmmss').csv" -NoTypeInformation;
Start-DbaMigration -Source $SrcSQLInstance -Destination $DestSQLInstance -DisableJobsOnDestination -BackupRestore -NetworkShare \\DEST\SQLMigration -WithReplace -Force -NoLogins -DisableJobsOnSource;
$DestServices = @('sqlagent$INSTNAME','mssqlfdlauncher$INSTNAME','mssql$INSTNAME');
foreach ($Service in $DestServices) {
    Get-Service -ComputerName $DestSQLInstance.Split('\')[0] -Name $Service | Stop-Service;
    # Shut down the corresponding services on SOURCE and keep them that way
    Get-Service -ComputerName $SrcSQLInstance.Split('\')[0] -Name $Service | Stop-Service;
}
# Bring services up on the destination in reverse order (engine first)
[array]::Reverse($DestServices);
foreach ($Service in $DestServices) {
    Get-Service -ComputerName $DestSQLInstance.Split('\')[0] -Name $Service | Start-Service;
}
$VerbosePreference = "SilentlyContinue";
Stop-Transcript;
Remove-Module dbatools;

This is truly standing on the shoulders of giants. Without the thousands of person-hours put in by Chrissy and the rest of the team, a short script like this to do a mountain of work simply is not possible. It’s not just having the huge amount of code to build on – it’s the suite of tests they run with every pull request that tells me that I can trust it’ll work right.

Looking back on it, there are definitely a few things I’d change in this script, and more dbatools functions I could have used. But after successfully testing a couple of times, I didn’t want to break what was working.

When the migration was complete, I did a brief checkout and then gave my server admin the green light. He flipped names & IPs around, and then I ran Repair-DbaServerName, which I had just discovered a few days earlier. I was expecting to do it manually, but I trust the dbatools crew and their test suite more than I trust myself on this one, as I’ve never done this before. When that was complete, I had a grand total of three issues (that I could find):

  • Database owners weren’t set appropriately. I was able to resolve this via Set-DbaDatabaseOwner easily enough (see the sketch after this list).
  • Outgoing dbmail didn’t work. Turns out the SMTP relay on the new server wasn’t started. Easy fix.
  • I had a Linked Server on my production instance which was unable to communicate to the new test server. This took me the longest to figure out. We checked everything – SQL Server Configuration Manager, the network itself, and then finally my colleague suggested testing something outside SQL Server – mapping a drive from production to test. This last test succeeded, which pointed us at the SQL Server connection specifically. The root cause: I had two firewall rules on the new server that blocked connections from all but servers on the local subnet. The production server isn’t on the local subnet.
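
On the database owner issue, here’s a minimal sketch of the kind of dbatools one-liner that cleans it up. The target login of sa is purely illustrative; substitute whatever owner is appropriate in your environment.

# Find databases whose owner isn't the desired login and reset them
Get-DbaDatabase -SqlInstance DEST\INSTNAME | Where-Object { $_.Owner -ne 'sa' } | ForEach-Object { Set-DbaDatabaseOwner -SqlInstance $_.SqlInstance -Database $_.Name -TargetLogin 'sa' }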

None of these were total showstoppers. I had workarounds (or quick solutions) for them, and as this is a test instance, we could live with a minor inconvenience for a day or two. One or two final tests, and I was satisfied that everything was working properly, so I went ahead and enabled my Agent jobs. Some of them still have incorrect owners, but I can fix that later – they were wrong on the source instance too.

I consider this migration a huge success. We had 95% functionality by 9am. By 3pm, the last real problems were resolved (and only that late due to a series of meetings keeping me away from my desk). Most importantly, it was achieved with minimal downtime for the development and QA teams. I’m now one week post-migration and everything is still running smoothly on the new instance.

T-SQL Tuesday #94 – Automating Configuration Comparison

This month’s T-SQL Tuesday is hosted by Rob Sewell and he’s posed the following question:

What are you going to automate today with PowerShell?

I’m cheating a little bit in that this is something I did a couple of weeks ago, but it was immensely helpful. I’d been working on building out a new instance to migrate our test databases onto, but the developers had an urgent need to do some testing in isolation, so they “borrowed” that new instance. But we had an additional requirement – the configuration needed to match production as closely as possible, more closely than our current test instance does. Of course, I reached for PowerShell and dbatools.

I started with Get-DbaSpConfigure to retrieve the settings available from sp_configure as these were the most important to my comparison. I ran this against production as well as each of my test instances and saved the results of each to a variable. Because accessing my production instance requires either jumping through hoops or using SQL Authentication, I passed -SqlCredential (get-credential -Message "Prod" -UserName MySQLLogin) so I’d be prompted for that password instead of using Windows Authentication.

My configurations saved for reference, I can now look at one of the objects returned to see which properties need to be compared:

ServerName            : TEST1
ConfigName            : AdHocDistributedQueriesEnabled
DisplayName           : Ad Hoc Distributed Queries
Description           : Enable or disable Ad Hoc Distributed Queries
IsAdvanced            : True
IsDynamic             : True
MinValue              : 0
MaxValue              : 1
ConfiguredValue       : 0
RunningValue          : 0
DefaultValue          : 0
IsRunningDefaultValue : True

Looks like I want to be checking out ConfigName and RunningValue. ConfigName is the same name that you’d pass to sp_configure. PowerShell comes with a handy cmdlet, Compare-Object, which (you guessed it!) compares two objects and reports the differences.

C:\...\_Projects\Server Config>compare-object $Test1 $Prod
C:\...\_Projects\Server Config>

Hmm…that’s no good. I know there are differences between test and production – for one, production has about 24 times the amount of RAM test has. I took to the SQL Community Slack for help, and was reminded that Compare-Object by default doesn’t do a “deep” comparison on PSCustomObjects, so you have to specify which property(ies) you want compared. In this case, RunningValue. So, passing both ConfigName and RunningValue into Compare-Object (the former so that I’d know what was being compared), then sorting the output, I was able to readily see the differences.

$Prod = Get-DbaSpConfigure -SqlInstance PROD -SqlCredential (Get-Credential -Message "Prod" -UserName MySQLLogin);
$Test1 = Get-DbaSpConfigure -SqlInstance Test1;
Compare-Object -ReferenceObject $Test1 -DifferenceObject $Prod -Property ConfigName,RunningValue | Sort-Object ConfigName;
ConfigName             RunningValue SideIndicator
----------             ------------ -------------
MaxDegreeOfParallelism            6 =>
MaxDegreeOfParallelism            4 <=
MaxServerMemory              563200 =>
MaxServerMemory               25600 <=

The value corresponding to the left-pointing arrow is what came from the reference object, and the right-pointing arrow is the value from the difference object (which instance is the “reference” in this case isn’t terribly important, as long as you remember which is which). So MaxDOP and MaxServerMemory are both higher in production – which is expected.

If we really want to get crazy, we can even make this a one-liner. But I don’t recommend it.

Compare-Object -ReferenceObject (Get-DbaSpConfigure -SqlInstance Test1) -DifferenceObject (Get-DbaSpConfigure -SqlInstance PROD -SqlCredential (Get-Credential -Message "Prod" -UserName MySQLLogin)) -Property ConfigName,RunningValue | Sort-Object ConfigName;

Running this against my second test instance as well let me quickly deliver the news to the developers that the instances were configured as closely as possible, with any differences limited to the hardware and environments they were in, which is not something we were in a position to address.

Spell-checking dbatools with Visual Studio Code

Earlier this week I was working on adding a new feature to Update-DbaTools and, while looking at another cmdlet to check syntax/conventions, I noticed an ugly typo in some of the help for it. 100% perfect prose isn’t necessary in the comment-based help for PowerShell cmdlets, but seeing misspellings and such kind of bugs me. Fortunately, this is something I can help fix since the module is on GitHub.

First I needed to find a spell-checker that works with Visual Studio Code to help me spot misspellings. This was slightly trickier than expected, as I use macOS at home and at least one of the first plugins I found was Windows-only. I finally settled on Code Spellchecker.

But as you can see from the marketplace page there, by default this plugin doesn’t know PowerShell. In my user settings file settings.json, I added PowerShell to the cSpell.enabledLanguageIds section so it’s always recognized:

"cSpell.enabledLanguageIds": [
        "c",
        "cpp",
        "csharp",
        "go",
        "javascript",
        "javascriptreact",
        "json",
        "latex",
        "markdown",
        "php",
        "plaintext",
        "powershell",
        "python",
        "text",
        "typescript",
        "typescriptreact",
        "yml",
        "powershell"
    ],

And with that, VSCode was giving me green squiggles under lots of words – both misspelled and not. Code Spellchecker doesn’t understand PowerShell in its default setup; it doesn’t have a dictionary for it. Just to get things started, I added a cSpell.userWords section to my settings.json and the squiggles started disappearing. The list I’m working with so far is posted as a gist on GitHub:

"cSpell.enableCompoundWords":true,
"cSpell.userWords": [
"MSSQL",
"Chrissy",
"gmail",
"cmdlet",
"Maire",
"clemaire",
"dbatools",
"computername",
"interop",
"sqlcredential",
"wildcard",
"failover",
"subnet",
"securestring",
"scriptblock",
"verifyonly",
"notin",
"notlike",
"notcontains",
"sqlcollaborative",
"pscustomobject",
"pscmdlet",
"shouldprocess",
"cmdletbinding",
"msdb",
"remoting",
"HKLM",
"machinekey",
"winrm",
"scred",
"dcred",
"netnerds",
"sqlserver",
"sqlcluster",
"filestreams",
"filegroups",
"recurse",
"percentcomplete",
"tempdb",
"SSIS",
"SSISDB",
"Progressbar",
"nvarchar",
"datatable",
"datarow",
"bulkcopy",
"bigint",
"smallint",
"tinyint",
"datetime",
"guid",
"uniqueidentifier",
"wildcard",
"wildcards",
"multiprotocol",
"subnet",
"subnets",
"Updateable",
"OLTP",
"Abshire",
"servername",
"adhoc",
"SSAS",
"endregion",
"parameterclass",
"paramserver",
"notmatch",
"sqlinstance",
"cmdlets",
"ldap",
"contoso",
"ctrlb"
]

I’ll keep this updated as I encounter more strings that need to be recognized, whether they’re PowerShell tokens or specific to the dbatools project. In addition to actual PowerShell syntax in there, I’m dropping in strings that are commonly found throughout the module. Eventually I suppose I should get a proper dictionary file or two together, but this works well for a quick & dirty way to get going with a spellcheck & language cleanup for the module.

An Unexpected Side-Effect of Invoke-WebRequest

Recently I was working on a bit of PowerShell to download the awesome First Responder Kit from Brent Ozar Unlimited. The canonical URL for the FRK is http://firstresponderkit.org/ but that’s a redirect to the GitHub repository where all the magic happens. I thought to myself:

Self! Rather than take a chance on that GitHub URL changing, use the “main” URL and Invoke-WebRequest will take care of the redirect for you.

So off to the PowerShell prompt I went and ran Invoke-WebRequest -Uri http://firstresponderkit.org/ to start looking at the object returned so I could see what I needed to parse out to find my way to the true download URL.

Then Firefox (my default browser) opened, and I was staring at https://github.com/BrentOzarULTD/SQL-Server-First-Responder-Kit/tree/master.


I was expecting an HTTP 30X redirect status code which, based upon previous experience, Invoke-WebRequest would honor. Instead, I got a 200 OK which is the web server saying “yep, here’s your stuff, HAND!”

Invoke-WebRequest -Uri http://firstresponderkit.org | Select-Object -ExpandProperty Headers


Key              Value
---              -----
x-amz-id-2       {QtTLMVw5QobGd/xlueEIY44Ech2va1ZKALhaMrY9f/yI0fBHvAoA6KwGUa5jTQxPF5fF85tuYws=}
x-amz-request-id {86A4E2A10548CA53}
Date             {Sat, 03 Jun 2017 16:14:47 GMT}
ETag             {"4ff7c8b410c399d5b18e2ab05bbfce22"}
Server           {AmazonS3}

Hmmm…nope, nothing there. OK, in a past life I did some non-redirect redirects through page contents. Let’s look at the content of the page itself (if any):


    Invoke-WebRequest -Uri http://firstresponderkit.org | Select-Object -ExpandProperty Content
    
    <!DOCTYPE HTML>
    <html lang="en-US">
    <head>
        <meta charset="UTF-8">
        <meta http-equiv="refresh" content="1;url=https://github.com/BrentOzarULTD/SQL-Server-First-Responder-Kit/tree/master">
        <script type="text/javascript">
            window.location.href = "https://github.com/BrentOzarULTD/SQL-Server-First-Responder-Kit/tree/master"
        </script>
        <title>Page Redirection</title>
    </head>
    <body>
        If you are not redirected automatically, <a href="https://github.com/BrentOzarULTD/SQL-Server-First-Responder-Kit/tree/master">head over here.</a>
    </body>
    </html>

Now we’ve got something. The web page itself has both a meta tag-based refresh/redirect and a JavaScript redirect, and that JavaScript redirect is being executed! How do we prevent the browser from opening and send the script to the right place?

Answer: the -UseBasicParsing switch for Invoke-WebRequest. From the docs:

Indicates that the cmdlet uses the response object for HTML content without Document Object Model (DOM) parsing.

This parameter is required when Internet Explorer is not installed on the computers, such as on a Server Core installation of a Windows Server operating system.

Note that this doesn’t eliminate all parsing of the content, and it’s not required to get parsing done on systems without Internet Explorer – everything I’ve written in this post was run in PowerShell on macOS, where Internet Explorer definitely doesn’t exist. But what it will do is prevent the parsing/execution of the JavaScript that’s embedded in the web page, which is what caused the browser to open in this case.
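
Just to confirm the behavior, here’s the same request with the switch added. No browser launches, and the response object still carries everything needed; this is only a quick check, nothing beyond what’s already shown above.

# With -UseBasicParsing, the embedded JavaScript is not executed, so no browser window opens
$Response = Invoke-WebRequest -Uri http://firstresponderkit.org/ -UseBasicParsing
$Response.StatusCode   # still 200
$Response.Content      # the same meta refresh/JavaScript redirect page shown earlier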

Looking closer at the output of Invoke-WebRequest, there’s a Links collection that looks pretty good.


    (Invoke-WebRequest -Uri http://firstresponderkit.org).Links |Format-List

    outerHTML : <a href="https://github.com/BrentOzarULTD/SQL-Server-First-Responder-Kit/tree/master">head over here.</a>
    tagName   : A
    href      : https://github.com/BrentOzarULTD/SQL-Server-First-Responder-Kit/tree/master

So now I can dig a little deeper and send my script to the URL that Brent & Co. want me to go to, and continue my search for the one true First Responder Kit download link by crawling subsequent pages.

Invoke-WebRequest -UseBasicParsing -uri $((Invoke-WebRequest -Uri http://firstresponderkit.org).Links[0].href)

Adding Application Name to Invoke-SqlCmd2

In a previous post, I expressed some frustration over Invoke-SqlCmd not setting an Application Name for its connections, leaving us with the generic .NET SqlClient Library when looking at active sessions in sp_who2 and sp_whoisactive (and any other monitoring tool). Unfortunately, I can’t really do anything about Invoke-SqlCmd aside from posting a suggestion on Connect or the Client Tools Trello board, but Invoke-SqlCmd2 has the same issue and that’s on GitHub. So, here we go!

In its current form, if a SqlConnection object isn’t passed into Invoke-SqlCmd2, the cmdlet does the following:

if ($Credential)
{
    $ConnectionString = "Server={0};Database={1};User ID={2};Password=`"{3}`";Trusted_Connection=False;Connect Timeout={4};Encrypt={5}" -f $SQLInstance,$Database,$Credential.UserName,$Credential.GetNetworkCredential().Password,$ConnectionTimeout,$Encrypt
}
else
{
    $ConnectionString = "Server={0};Database={1};Integrated Security=True;Connect Timeout={2};Encrypt={3}" -f $SQLInstance,$Database,$ConnectionTimeout,$Encrypt
}
$conn = New-Object System.Data.SqlClient.SQLConnection
$conn.ConnectionString = $ConnectionString
Write-Debug "ConnectionString $ConnectionString"

I decided to change this around so that it no longer uses string formatting, but instead a SqlConnectionStringBuilder. I had a couple reasons for this:

  • It will eliminate redundant code. There are several common elements in each of the ConnectionStrings above. If more complex logic is needed, there are potentially more copies of this ConnectionString kicking around.
  • It’s prone to copy/paste and other editing errors. If there’s a change that affects both versions of the ConnectionString and the developer just copies the line from one branch of the if statement to the other, code will be lost or invalid values will be substituted because of positioning.

With this in mind, I factored the common elements out to build the base of the ConnectionString, then added the remaining elements conditionally based on the cmdlet inputs.

$CSBuilder = New-Object -TypeName System.Data.SqlClient.SqlConnectionStringBuilder
$CSBuilder["Server"] = $SQLInstance
$CSBuilder["Database"] = $Database
$CSBuilder["Connection Timeout"] = $ConnectionTimeout
if ($Encrypt) {
    $CSBuilder["Encrypt"] = $true
}
if ($Credential) {
    $CSBuilder["Trusted_Connection"] = $false
    $CSBuilder["User ID"] = $Credential.UserName
    $CSBuilder["Password"] = $Credential.GetNetworkCredential().Password
} else {
    $CSBuilder["Integrated Security"] = $true
}

Before going any further in adding support for inserting Application Name into the ConnectionString, I had to add a parameter to the cmdlet itself.

[Parameter( Position=11, Mandatory=$false )]
[Alias( 'Application', 'AppName' )]
[String]
$ApplicationName

With that complete, I can now add it into the SqlConnectionStringBuilder.

if ($ApplicationName) {
    $CSBuilder["Application Name"] = $ApplicationName
} else {
    $ScriptName = (Get-PSCallStack)[-1].Command.ToString()
    if ($ScriptName -ne "") {
        $CSBuilder["Application Name"] = $ScriptName
    }
}

Because ApplicationName is an optional parameter, I had to account for cases where it’s not specified by the caller. What the code above is doing is looking at the entire call stack and going back up to the very top to get the name of the script file that was run and ultimately called Invoke-SqlCmd2. Finally, I extract the ConnectionString from the SqlConnectionStringBuilder and assign it to the SqlConnection‘s ConnectionString property.

$ConnectionString = $CSBuilder.ToString()
$conn.ConnectionString = $ConnectionString
Write-Debug "ConnectionString $ConnectionString"

My code changes complete, I reviewed my work and sent my first real pull request off to Warren (B|T) as PR #7 for the module.