Make Your Application’s Name Heard

Odds are, you’ve got more than one application or script accessing your database or SQL Server instance at any given time. You’re probably stacking them on a small number of servers in an attempt to cut down on licensing costs and resource usage. All those PowerShell scripts on the central job server run under a single service account, and you’ve got a lazy vendor who set up both the website and back-end application server to run under the same account, maybe even on a single app/web server.

Then the call comes in. “The database is slow.” Before firing up your monitoring suite, you take a quick pass with sp_who2 or sp_whoisactive and you’re greeted with a dozen sessions, all sporting a program_name of .Net SqlClient Data Provider. Terrific! Which session is coming from which application and causing all the trouble? Not so easy to figure out.
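You don’t even need sp_whoisactive to see this. Every session reports a program_name, and a quick query against sys.dm_exec_sessions shows what each connection claims to be:

SELECT session_id, login_name, host_name, program_name
FROM sys.dm_exec_sessions
WHERE is_user_process = 1
ORDER BY program_name;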

Fortunately, the .NET SqlClient (and ODBC drivers as well) has a built-in solution. Your application’s connection string has quite a few parameters available for providing configuration and information, and one that often gets overlooked is Application Name. This one does exactly what it says on the tin – it lets you specify a name that will be displayed to anyone looking for it in SQL Server, including sp_whoisactive. Anyplace you have the ability to write a connection string, you can use this. It costs you nothing!

  • If you’re using System.Data.SqlClient.SqlConnectionStringBuilder, it’s just another item in the properties collection.
  • If you’re constructing your connection string as a regular string, just add Application Name=Andy's Awesome App; to the end of your current connection string.

# Build the connection string with SqlConnectionStringBuilder;
# Application Name is just another property.
$DBConnection = New-Object System.Data.SqlClient.SqlConnection;
$DBCSBuilder = New-Object System.Data.SqlClient.SqlConnectionStringBuilder;
$DBCSBuilder['Data Source'] = ".\sql2016";
$DBCSBuilder['Initial Catalog'] = "WideWorldImporters";
$DBCSBuilder['Application Name'] = "Andy's Awesome Application";
$DBCSBuilder['Integrated Security'] = "true";
$DBConnection.ConnectionString = $DBCSBuilder.ToString();

#alternative method: append Application Name to a hand-built connection string
#$DBConnection.ConnectionString = "Data Source=.\sql2016;Initial Catalog=WideWorldImporters;Integrated Security=true;Application Name=Andy's Awesome Application;"

$DBConnection.Open();
$QueryCmd = $DBConnection.CreateCommand();
# The WAITFOR keeps the session alive long enough to spot it in sp_whoisactive
$QueryCmd.CommandText = "select c.CityName from application.cities c join application.stateprovinces s on c.StateProvinceID = s.stateprovinceid join application.countries ct on ct.CountryID = s.CountryID where ct.CountryName = 'United States' and s.StateProvinceName = 'New York'; waitfor delay '00:00:15';";
$SqlAdapter = New-Object System.Data.SqlClient.SqlDataAdapter;
$QueryCmd.Connection = $DBConnection;
$SqlAdapter.SelectCommand = $QueryCmd;
$DataSet = New-Object System.Data.DataSet;
# Fill() returns the number of rows loaded; discard it so only the data is output
[void]$SqlAdapter.Fill($DataSet);
$DataSet.Tables[0];
$DBConnection.Close();


For comparison, here’s the same query run through Invoke-Sqlcmd from the SqlServer module:

Import-Module SqlServer;
Invoke-Sqlcmd -ServerInstance .\sql2016 -Database WideWorldImporters -Query "select c.CityName from application.cities c join application.stateprovinces s on c.StateProvinceID = s.stateprovinceid join application.countries ct on ct.CountryID = s.CountryID where ct.CountryName = 'United States' and s.StateProvinceName = 'New York'; waitfor delay '00:00:15';"
Remove-Module SqlServer;

Here’s the difference between using SqlClient and Invoke-Sqlcmd in PowerShell, as seen from the SQL Server side:

[Image: invoke-sqlcmd-app-name]

While the .NET Framework lets you specify this fairly easily, as we move up in abstraction layers these options become buried or entirely unavailable. This is the case with the PowerShell Invoke-Sqlcmd cmdlet. One of the many benefits of Invoke-SqlCmd2 is that you can pass in your own SqlConnection object; if you don’t, it constructs its own connection string, but it doesn’t expose Application Name as an available parameter there.
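In the meantime, because Invoke-SqlCmd2 will accept a connection you’ve already built, you can sneak Application Name in that way. A minimal sketch, assuming Invoke-SqlCmd2 is already loaded and that your copy exposes a -SQLConnection parameter (check yours, as versions vary):

$DBCSBuilder = New-Object System.Data.SqlClient.SqlConnectionStringBuilder;
$DBCSBuilder['Data Source'] = ".\sql2016";
$DBCSBuilder['Initial Catalog'] = "WideWorldImporters";
$DBCSBuilder['Integrated Security'] = "true";
$DBCSBuilder['Application Name'] = "Andy's Awesome Application";
$DBConnection = New-Object System.Data.SqlClient.SqlConnection($DBCSBuilder.ToString());
# -SQLConnection is an assumption; confirm the parameter name in your copy of Invoke-SqlCmd2
Invoke-SqlCmd2 -SQLConnection $DBConnection -Query "select @@servername as ServerName;";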

In an upcoming post, I’ll show how I added Application Name support to Invoke-SqlCmd2.

SQL Saturday Returns to Rochester!

The Rochester, NY chapter of PASS is holding our 6th annual SQL Saturday on April 29th, 2017! As always, RIT is hosting our event on campus.

SQL Saturday is a free day of training centered on the Microsoft Data Platform. Volunteer speakers come from all over the country (and sometimes beyond) to share their knowledge with attendees. There are sessions for professionals of all skill levels, whether you’re just starting to learn about databases or you’re a seasoned veteran, plus valuable professional development guidance.

Why should you attend?

  • Free training from renowned experts
  • Network with other professionals in the field
  • Check out new products and services from our sponsors

Do you work for a company that offers products or services that would be of interest to developers, data professionals or system administrators? Please consider sponsoring our event! We offer several sponsorship plans and if you don’t see something that quite works for you, let us know and we’ll discuss a custom sponsorship plan.

Why sponsor?

  • Get face time with data professionals in our local community
  • Learn about the topics that are front of mind for developers and DBAs
  • SQL Saturday attendees and presenters are people who are taking time out of their weekend to grow their professional skills and networks. They are leaders and decision makers. They are the people you want in your organization, advising their management teams about infrastructure, architecture, and software purchasing decisions.

Our call for speakers is open through March 7th, 2017. Don’t let a lack of speaking experience stop you! You’ve got lots of time to rehearse, and many SQL Saturday speakers gave their very first presentation at SQL Saturday Rochester.

Follow #SQLSatROC on Twitter and join us in April!

Don’t Count on Me

This post is in support of Tim Ford’s (b|t) #iwanttohelp challenge. It’s also written because this has burned me twice in the past 3 months, and by blogging about it, hopefully it’ll stick in my mind.

Setup

I’ve recently been doing a bunch of work with stored procedures, trying to improve performance that’s been suffering due to suboptimal queries. Some of this tuning has resulted in creating temporary tables. After making my changes and running the procedures in SSMS, everything looked good – data’s correct, performance metrics are all improved. Everyone wins!

Then I checked the web app. At first, it appeared to work OK. But on reloading the page with different parameters, I got no data, or the data from the previous parameters, or other data that was completely out of left field. Not good!

After a bit of head-scratching, I popped over to the SQL Server Slack and asked for ideas about why I’d be getting different results depending on how the procedure was called. After kicking a few ideas around, someone asked if the procedure included SET NOCOUNT ON. It didn’t, so I added it and my problems were solved!

Explanation

So what happened here? When you execute a query against SQL Server, both your data and some additional information are sent back to the client. This additional information travels on a separate channel, which .NET exposes via the SqlConnection.InfoMessage event (classic ADO has a similar InfoMessage event). When you run queries in SSMS, you see this information in the Messages tab of the results pane, most often as X row(s) affected.
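If you’re curious, you can watch this channel from PowerShell by subscribing to the event. A minimal sketch (server and database names are placeholders), where a PRINT statement comes back through InfoMessage alongside a normal result set:

$conn = New-Object System.Data.SqlClient.SqlConnection("Data Source=.\sql2016;Initial Catalog=tempdb;Integrated Security=true;");
# Fires whenever SQL Server sends an informational message (PRINT output, low-severity errors, etc.)
$conn.add_InfoMessage({ param($sender, $eventArgs) Write-Host "InfoMessage: $($eventArgs.Message)" });
$conn.Open();
$cmd = $conn.CreateCommand();
$cmd.CommandText = "print 'loading temp table...'; select 1 as n;";
$reader = $cmd.ExecuteReader();
while ($reader.Read()) { $reader["n"] }
$reader.Close();
$conn.Close();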

That’s where my new stored procedures were causing problems. The original procedures returned only one message, corresponding to the number of records returned by the single query in each procedure. But now that I’m loading temp tables, I’m getting multiple messages back – at a minimum, a count of the records affected when loading the temp table plus a count of the records returned to the calling application.

I’m not sure exactly what my application was doing with this, but as soon as multiple messages were passed back through InfoMessage, it got very confused and started doing unexpected things with the query results. I suspect that it saw multiple events and attempted to use the data associated with the first one – of which there was none, because that first message came from the insert into my temp table.

By starting the stored procedure with SET NOCOUNT ON, those rowcount messages are suppressed and the extra data isn’t transmitted to the client. It’s also said that this can improve performance (these days it’s more about reducing network traffic), but my primary interest in using it is to keep client applications that I can’t change from blowing up.

Something I find very interesting is that SSMS ships with two different templates for stored procedures: one includes SET NOCOUNT ON, while the other doesn’t.

Example

Here are three simple stored procedures to demonstrate the effect of this setting.

CREATE OR ALTER PROCEDURE dbo.GetCounties
AS
print 'GetCounties';
select s.name, c.countyname
from states s join counties c on s.StateId = c.StateId;
go

CREATE OR ALTER PROCEDURE dbo.GetCounties2
AS
create table #StatesCounties (
    StateName nvarchar(100)
    ,CountyName nvarchar(100)
);
print 'GetCounties2';
insert into #StatesCounties
select s.name as StateName, c.countyname
from states s join counties c on s.StateId = c.StateId;
select StateName, CountyName from #StatesCounties;
go

CREATE OR ALTER PROCEDURE dbo.GetCounties3
AS
SET NOCOUNT ON;
create table #StatesCounties (
    StateName nvarchar(100)
    ,CountyName nvarchar(100)
);
print 'GetCounties3';
insert into #StatesCounties
select s.name as StateName, c.countyname
from states s join counties c on s.StateId = c.StateId;
select StateName, CountyName from #StatesCounties;
go

And the result of running each, from the SSMS Messages tab.

GetCounties

(122 row(s) affected)
GetCounties2

(122 row(s) affected)

(122 row(s) affected)
GetCounties3

Note how the first reports the number of rows returned, while the second reports both the number of rows inserted into the temp table and the number returned from the query. In the last example, no rowcount messages are returned at all. In all three cases, the print messages come through because they’re explicitly output by my code.

Summary

  • Unless you have a very specific need to get this alternate data stream in your calling application, use SET NOCOUNT ON in your stored procedures
  • The next time you’re working in a stored procedure, add it if it’s not already there
  • Add it to the template you use for creating new stored procedures

T-SQL Tuesday #83: Why Leave Well Enough Alone?

It’s 2016. So why are we still dealing with T-SQL code and design patterns that were designed seven versions ago?

In the 15 years I have been using databases professionally, we’re still dealing with:

  • People’s names split into first name, last name, and middle initial fields. Ignoring that this falls afoul of several of the myths programmers believe about names, the first name column was defined as CHAR(10) in a standard installation. How many characters are in the name Christopher (hint: I had to take off a shoe to count them all)?
  • Other arbitrarily short column sizes which cause problems as the system scales out in usage. For example, an event ID field that’s 8 characters: 2 letters and a 6-digit number which is used as a sequence. Guess what happens when you hit the millionth event in that sequence.
  • Processes originally developed as transactions (for good reasons), but not designed in such a way that they scale to today’s demands.
  • NOLOCK hints everywhere. They’re even showing up in newly-developed code for this application.
  • Cursors used anytime a set of records has to be updated with a small bit of conditional logic built in, where a set-based operation with appropriate CASE expressions would work much better (see the sketch after this list).
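To illustrate that last point, here’s a minimal sketch of the set-based pattern – the table and column names are hypothetical, not from the system described here:

-- One set-based UPDATE replaces a cursor that walks rows applying if/else logic
UPDATE dbo.Orders
SET ShippingPriority = CASE
        WHEN OrderTotal >= 1000 THEN 1  -- big orders ship first
        WHEN OrderTotal >= 100 THEN 2
        ELSE 3
    END
WHERE ShippedDate IS NULL;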

The primary system I deal with on a daily basis was originally developed as a DOS application and several of the above examples are drawn from it. Looking at the core tables and columns, it’s easy to identify those that began life in those early days – they all have 8-character names. Time moved on and the system grew and evolved. DOS to Windows. Windows to the web. But the database, and the practices and patterns used in the database, haven’t come along for the ride.

Data schema conversions can be hard and disruptive – you need to update your application, your stored procedures, and provide customers/users with a clean migration path. Code changes require testing. Complexity and cost grow every time you introduce changes. I get that.

But by not keeping up with the advancements of the platform your data resides on and ignoring the evolution of how to work with your data, you do your customers, users, partners, colleagues and yourself a disservice.

How do you improve this? I’m certainly not advocating for scrapping everything and rewriting all of your code. Complete rewrites are generally a bad idea.

What I am saying, however, is:

  • You need to be constantly watching the state of the platforms your software runs on. If you drop support for a particular version (say, dropping SQL Server 2005 support as Microsoft no longer supports it), start evaluating the 2008+ features that are now open to you.
  • Drop support for old versions of SQL Server. Don’t let the past shackle your future.
  • Get outside opinions from trusted sources. Whether it be from your local user group, a short consulting engagement, or bringing in new people. But most importantly, when you seek advice, make use of it. Don’t ask for advice and then ignore it.
  • Don’t accept the status quo. Anytime you’re working in a piece of code, review the whole thing. Can this section be cleaned up? Is it even needed anymore? Has the system scaled in usage/data volume that it needs to be re-thought entirely? Have you learned something new from your favorite SQL Server blog or a SQL Saturday event that you can apply to it?

That last point is where everyone responsible for an application or database can make the most impact. To co-opt Baden-Powell’s last message to the Boy Scouts of the world: leave the code a little better than you found it. If you do this every time you touch a component of your database, you’ll make enough incremental updates that these 15-year-old problems will go away and stay away.

Name Your Defaults So SQL Server Doesn’t

Something in SQL Server that isn’t always obvious to beginners is that when you create a default value for a column on a table, SQL Server creates a constraint (much like a primary or foreign key). All constraints must have a name, and if one isn’t specified SQL Server will generate one for you. For example:

CREATE TABLE [dbo].[point_types] (
    [typeid] [int] NOT NULL DEFAULT (NEXT VALUE FOR [pointtypeid])
    ,[typename] [nvarchar](30) NOT NULL DEFAULT 'Unspecified'
    ,CONSTRAINT [PK_PointType] PRIMARY KEY CLUSTERED ([typeid] ASC)
);
GO

We’ve got a simple table here and both fields have a default value set (the primary key’s value is generated from a sequence object, pointtypeid). Time goes on, and a change in policy comes up which requires that I change the default value of typename to Unknown. To do this, I have to drop the constraint and re-create it. To find the name of the constraint, I can either ask sp_help, or run this query:

SELECT all_columns.name AS column_name
    ,default_constraints.name AS constraint_name
    ,default_constraints.definition
FROM sys.all_columns
INNER JOIN sys.tables
    ON all_columns.object_id = tables.object_id
INNER JOIN sys.schemas
    ON tables.schema_id = schemas.schema_id
INNER JOIN sys.default_constraints
    ON all_columns.default_object_id = default_constraints.object_id
WHERE schemas.name = 'dbo'
    AND tables.name = 'point_types';
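The sp_help route, for the record, is a one-liner, though you’ll need to pick the constraint name out of the much larger report it returns:

EXEC sp_help 'dbo.point_types';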

I’ve got my constraint name now, so I can drop it and re-create it.

[Image: NameDefaults01]

ALTER TABLE [dbo].[point_types]
DROP CONSTRAINT DF__point_typ__typen__21B6055D;
GO

ALTER TABLE [dbo].[point_types] ADD DEFAULT('Unknown')
FOR [typename];
GO

And if I re-run the above query, I can see that the constraint’s name is different.

[Image: NameDefaults02]

This means that everywhere I need to change this constraint (development, test, and production), I’ll need to figure out the constraint name in that particular database and drop it before re-creating it. This makes a deployment script a bit messier, as it needs more code to find those constraint names:

DECLARE @constraintname sysname;

SELECT @constraintname = default_constraints.name
FROM sys.all_columns
INNER JOIN sys.tables
    ON all_columns.object_id = tables.object_id
INNER JOIN sys.schemas
    ON tables.schema_id = schemas.schema_id
INNER JOIN sys.default_constraints
    ON all_columns.default_object_id = default_constraints.object_id
WHERE schemas.name = 'dbo'
    AND tables.name = 'point_types'
    -- both columns have default constraints, so filter to the one being changed
    AND all_columns.name = 'typename';

DECLARE @sql NVARCHAR(200) = N'ALTER TABLE [dbo].[point_types] DROP CONSTRAINT ' + QUOTENAME(@constraintname) + N';';

PRINT @sql;

[Image: NameDefaults03]

EXECUTE sp_executesql @sql;

ALTER TABLE [dbo].[point_types] ADD DEFAULT('Unknown')
FOR [typename];
GO

But this doesn’t really solve my problem; it just works around it. It’s still messy and fragile. If I need to do other operations on the default constraint, I have to go through the same exercise to find its name.

Fortunately, SQL Server lets us name default constraints just like any other constraint, and by doing so we avoid this trouble. By setting my own name for the constraint, I know what it’ll be in every database, without having to query system tables. The name can be set in both the CREATE TABLE statement and an independent ALTER TABLE.

CREATE TABLE [dbo].[point_types] (
    [typeid] [int] NOT NULL DEFAULT (NEXT VALUE FOR [pointtypeid])
    ,[typename] [nvarchar](30) NOT NULL CONSTRAINT [DF_PT_TypeName] DEFAULT 'Unspecified'
    ,CONSTRAINT [PK_PointType] PRIMARY KEY CLUSTERED ([typeid] ASC)
);
GO

ALTER TABLE [dbo].[point_types]
DROP CONSTRAINT [DF_PT_TypeName];
GO

ALTER TABLE [dbo].[point_types] ADD CONSTRAINT [DF_PT_TypeName] DEFAULT('Unknown')
FOR [typename];
GO

I can also combine these in the next deployment that requires a change to the default constraint, dropping the system-generated name and establishing my own static name to make things simpler in the future.
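That combined step might look something like the following sketch, which reuses the dynamic lookup from above to drop whatever the generated name happens to be, then re-creates the constraint with a fixed name:

DECLARE @constraintname sysname;

SELECT @constraintname = default_constraints.name
FROM sys.default_constraints
INNER JOIN sys.all_columns
    ON all_columns.default_object_id = default_constraints.object_id
INNER JOIN sys.tables
    ON all_columns.object_id = tables.object_id
WHERE tables.name = 'point_types'
    AND all_columns.name = 'typename';

IF @constraintname IS NOT NULL
BEGIN
    -- drop the system-named constraint, whatever it's called in this database
    DECLARE @sql NVARCHAR(200) = N'ALTER TABLE [dbo].[point_types] DROP CONSTRAINT ' + QUOTENAME(@constraintname) + N';';
    EXECUTE sp_executesql @sql;
END;

ALTER TABLE [dbo].[point_types] ADD CONSTRAINT [DF_PT_TypeName] DEFAULT ('Unknown') FOR [typename];
GO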

Is explicitly naming default (or any other) constraints necessary? No, but doing so helps your database document itself, and it makes future deployment/promotion scripts simpler and less prone to breakage. SQL Server needs a name for the constraint regardless; it’s worth specifying it yourself.

SQL New Blogger Challenge November 2015 Edition – Week 3 Digest

This week’s #sqlnewblogger posts!

  • @eleightondick – [T-SQL Tuesday] Data modeling: The trouble with prefixes | The Data Files
  • @tomsql – Adventures With TomSQL, aka Tom Staab
  • @EdDebug – Automatically name primary key constraints in SSDT | the.agilesql.club
  • @rabryst – Born SQL: Temporal Tables – Under the Covers with the Transaction Log
  • @YatesSQL – Community Involvement–Why Wait? | The SQL Professor
  • @cjsommer – Identity Column Increment Value (EVEN/ODD) | cjsommer.com
  • @DBA_ANDY – Nebraska SQL from @DBA_ANDY: CHECKDB – The database could not be exclusively locked to perform the operation
  • @ALevyInROC – Selectively Locking Down Data – Gracefully – The Rest is Just Code
  • @eleightondick – SQLNewBlogger, Week 3 | The Data Files
  • @tomsql – Being Our Collective Best
  • @SQLMickey – T-SQL Tuesday #72 Summary – Data Modeling Gone Wrong | Mickey’s T-SQL Ponderings

Selectively Locking Down Data – Gracefully

I have a situation where I need to retrieve the data in an encrypted column, but I don’t want to give all my users access to the symmetric key used to encrypt that column. The data is of the sort where it’s important for the application to produce the required output, but if a user runs the stored procedure to see what the application is getting from it, it’s not critical that they see this one field.

The catch is that if the stored procedure is written with the assumption that the caller has permission to access the encryption key or its certificate, they’ll get an error. After a bit of research and pondering, I came up with two options:

  1. Create the stored procedure with EXECUTE AS OWNER (the owner in this case is dbo). This would let all users see the encrypted data; not an ideal solution.
  2. Use SQL Server’s TRY/CATCH construct to gracefully handle the error thrown when the user attempts to open the key, but doesn’t have permission to do so.

Let’s check out option 2. This example is simplified from my actual scenario to demonstrate the idea.


declare @BankId varchar(6) = '123456';

-- Stage the non-sensitive columns, with a placeholder for the encrypted account number
SELECT cast('' as varchar(50)) AS AccountNum
    ,AccountName
    ,AccountOwner
INTO #AccountData
FROM dbo.Accounts
WHERE OriginatingBank = @BankId
    AND AccountType = 'Checking';

DECLARE @AcctNo VARCHAR(30);

BEGIN TRY
    -- This is the step that fails if the caller lacks rights to the key or certificate
    OPEN SYMMETRIC KEY MyKey DECRYPTION BY CERTIFICATE My_Cert;

    SELECT @AcctNo = CONVERT(VARCHAR(30), decryptbykey(AccountNum))
    FROM dbo.Accounts
    WHERE OriginatingBank = @BankId
        AND AccountType = 'Checking';

    CLOSE SYMMETRIC KEY MyKey;
END TRY
BEGIN CATCH
    SET @AcctNo = 'Access Restricted';
END CATCH

UPDATE #AccountData SET AccountNum = @AcctNo;

SELECT * FROM #AccountData;

DROP TABLE #AccountData;

TRY/CATCH in T-SQL works similarly to how it does in languages like C# or PowerShell. It allows you to attempt an operation and take care of any error conditions fairly easily.

In this case, I’m attempting to open the encryption key. But if the user doesn’t have permission to do so, it doesn’t terminate the stored procedure with an error. Instead, it jumps to the CATCH block, where I’ve defined an alternate way of handling the situation. Here, if the user doesn’t have the appropriate permissions, they’ll just get “Access Restricted” for the account number, and access to that sensitive data is a little more tightly controlled – while still letting users access the data they do need.
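For completeness, the permission side looks something like this sketch ([AccountingAppRole] is a hypothetical role name): the principals who should see real account numbers get access to the key and certificate, and everyone else lands in the CATCH block.

-- Opening a key with DECRYPTION BY CERTIFICATE requires VIEW DEFINITION
-- on the symmetric key and CONTROL on the certificate
GRANT VIEW DEFINITION ON SYMMETRIC KEY::MyKey TO [AccountingAppRole];
GRANT CONTROL ON CERTIFICATE::My_Cert TO [AccountingAppRole];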

SQL New Blogger Challenge November 2015 Edition – Week 2 Digest

This week’s #sqlnewblogger posts!

  • @arrowdrive – Anders On SQL: T-SQL Tuesday #72: Data modelling gone extremely wrong
  • @rabryst – Time After Time – An Introduction to Temporal Tables in SQL Server 2016 using a DeLorean
  • @EdDebug – Deploy SSDT INSERTS in Batches | the.agilesql.club
  • @ALevyInROC – Don’t Trust the Wizard – The Rest is Just Code
  • @DBA_ANDY – Nebraska SQL from @DBA_ANDY: T-SQL Tuesday #72 – Implicit Conversion Problems
  • @eleightondick – SQL New Blogger Challenge: Week 1 recap | The Data Files
  • @eleightondick – SQL New Blogger Challenge: Week 2 ideas | The Data Files
  • @BeginTry – SQL Server 2012 Upgrade: The RPC Server is Unavailable | It's All Just Electrons

Don’t Trust the Wizard

[Image: The one wizard you can trust]

If you need to move data from one table into a new table, or even tables in a database into another database, the Import/Export Wizard in SQL Server Management Studio looks pretty tempting. Set up a source & destination, click a few buttons, kick back with a cup of tea and watch the progress bars, right?

It turns out that the wizard just isn’t as smart as it may seem. If you’re not careful, you won’t get what you’re expecting. Let’s check it out.

We’ll start by creating a real simple table in a database, containing an identity primary key and a computed column.

Create table sample.dbo.SourceTable (
RowId int identity(1,1) not null primary key,
Num1 int not null,
Num2 int not null,
Total as (Num1+Num2)
);

Let’s populate it with a few rows of data, then update some of that data to make sure the computed column is working. Remember, this is just to demonstrate the idea.

insert into sample.dbo.SourceTable (Num1, Num2) values (1,2);
go 100
select top 5 * from sample.dbo.SourceTable order by RowId;
update sample.dbo.SourceTable set Num1 = Num1 * RowId where RowId <= 3;
select top 5 * from sample.dbo.SourceTable order by RowId;

[Image: wizard_Screenshot 1]

Great! We’ve got data and the computed column is working, so let’s copy it over to a new table in another database. We’re just going to click Next, Next, Next through the wizard this time around.

[Images: wizard_Screenshot 2 through wizard_Screenshot 5]
Success! Our table has been copied and the data’s all there.

select top 5 * from Sample2.dbo.SourceTable order by RowId;

[Image: wizard_Screenshot 6]

Let’s do some work on our new table and check out the results.

select top 5 * from Sample2.dbo.SourceTable order by RowId;
update Sample2.dbo.SourceTable set Num2 = Num2 * RowId where RowId < 3;
select top 5 * from Sample2.dbo.SourceTable order by RowId;

[Image: wizard_Screenshot 7]

Whoa! That’s not right. That Total column is supposed to be Num1 + Num2, and last time I checked, 2 + 4 was not 4. Let’s keep going and try adding a new record the same way it was done earlier.

insert into Sample2.dbo.SourceTable (Num1, Num2) values (100,200);

Cannot insert the value NULL into column 'RowId', table 'Sample2.dbo.SourceTable'; column does not allow nulls. INSERT fails.

Huh. Now that’s really odd, isn’t it? RowId is supposed to be an identity – we shouldn’t have to populate it. What is going on here? Let’s script out the table.

USE [Sample2]
GO

/****** Object: Table [dbo].[SourceTable] Script Date: 2015-11-10 22:36:23 ******/
SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE TABLE [dbo].[SourceTable](
[RowId] [int] NOT NULL,
[Num1] [int] NOT NULL,
[Num2] [int] NOT NULL,
[Total] [int] NULL
) ON [PRIMARY]

GO

This is all kinds of wrong! What happened to the primary key? Or the computed column? Well, it turns out that the wizard isn’t that smart, and if you just take all the default values, you’re going to get burned. Let’s go back to the wizard and click that Edit Mappings button in the Select Source Tables and Views screen.
[Image: wizard_Screenshot 8]

Well… that looks like what we got above, and it’s not what we wanted. If we click Edit SQL, this is confirmed – the table being created is not defined the same way the source table is.
[Image: wizard_Screenshot 9]
Fortunately, we can edit the SQL here and make it match the source table definition, then finish the wizard.
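For reference, here’s a sketch of the corrected DDL to paste into that dialog – it’s just the source table’s definition pointed at the new database:

-- Restores the identity, primary key, and computed column the wizard dropped
CREATE TABLE [dbo].[SourceTable] (
    [RowId] int identity(1,1) NOT NULL PRIMARY KEY,
    [Num1] int NOT NULL,
    [Num2] int NOT NULL,
    [Total] AS ([Num1]+[Num2])
);

You’ll also want to check Enable identity insert under Edit Mappings so the existing RowId values survive the copy, and make sure the wizard isn’t trying to insert into the computed Total column (set its destination to <ignore> if it is).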
[Image: wizard_Screenshot 10]
OK, data’s copied – what do we have?

select top 5 * from Sample3.dbo.SourceTable order by RowId;
update Sample3.dbo.SourceTable set Num2 = Num2 * RowId where RowId < 3;
select top 5 * from Sample3.dbo.SourceTable order by RowId;
insert into Sample3.dbo.SourceTable (Num1, Num2) values (100,200);
select * from sample3.dbo.SourceTable where rowid >= 100 order by RowId;

[Image: wizard_Screenshot 11]

Everything’s there, and it’s working the way it’s supposed to. Lesson learned: don’t blindly trust the defaults, especially the ones in a wizard. Double-check everything, and then verify that your schema works the way you expect it to before doing any business with it.

SQL New Blogger Challenge, November Edition, Week 1 Digest

Ed Leighton-Dick has renewed his New Blogger Challenge this month. Here are all (I think) the posts for this week after Ed posted his announcement. If I’ve missed any, please let me know and I’ll update.

  • @arrowdrive – Anders On SQL: First Timer Summit impressions
  • @EdDebug – Deploy SSDT INSERTS in Batches | the.agilesql.club
  • @EdDebug – Looking at SSDT upgrade scripts | the.agilesql.club
  • @DBA_ANDY – Nebraska SQL from @DBA_ANDY: PASS Summit 2015 Recap
  • @eleightondick – PASS Summit 2015 Highlights | The Data Files
  • @OliverAsmus – PASS Summit 2015: My Experience | OliverAsmus.com
  • @EdDebug – ScriptDom Visualizer | the.agilesql.club
  • @eleightondick – SQL New Blogger Challenge: Looking back… and a new challenge! | The Data Files
  • @Clem1029 – Tearing down the wall | SQLDEV@Clemsplace
  • @ALevyInROC – Why Ask Why? – The Rest is Just Code
  • @rabryst – The SQL Server Family